Categories:
Overview of DML Commands (Data Loading)

PUT

Uploads (i.e. stages) one or more data files from a local directory/folder on a client machine to one of the following Snowflake (i.e. internal) locations:

  • The current user’s stage.
  • The specified table’s stage.
  • A named internal stage.

Note

PUT does not support uploading files to an external stage (e.g. S3 bucket) or named external stage.

Once files are uploaded, the data in the files can be loaded into a table using the COPY INTO table command.
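
For example, a minimal end-to-end sketch, assuming a hypothetical table named mytable and a local file /tmp/load/mydata.csv (with no FROM clause, COPY INTO loads from the table's stage by default):

PUT file:///tmp/load/mydata.csv @%mytable;
COPY INTO mytable;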

See also:
GET, LIST, REMOVE

Syntax

PUT <local_source> <internal_location> [ PARALLEL = <integer> ]
                                       [ AUTO_COMPRESS = TRUE | FALSE ]
                                       [ SOURCE_COMPRESSION = <constant> ]

Where:

-- <internal_location> is either:

   { @~[/<path>] |
     @[<namespace>.]%<table_name>[/<path>] |
     @[<namespace>.]<internal_stage_name>[/<path>] }

Parameters

Required:

local_source

A URI specifier for the data file(s) on the client machine to upload. Uses the form:

file://<path_to_file>/<filename>

Where:

  • path_to_file specifies the directory path to the file(s) to upload. If the files are located in the root directory (or sub-directory) on the client machine:
    • In a Linux or Mac OS environment, you must include the initial forward slash in the path, e.g. file:///tmp/load.
    • In a Windows environment, you must include the drive in the path, e.g. file://C:\tmp\load.
  • filename specifies the name of the file(s) to upload. Wildcard characters (*, ?) are supported to enable uploading multiple files in a directory.

The URI string can be enclosed in single quotes, which allows special characters, including spaces, in directory and file names.
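
For example, a hypothetical quoted URI with a space in the directory name and a wildcard in the filename:

PUT 'file:///tmp/my data/orders_*.csv' @~;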

internal_location

A URI specifier for the location in Snowflake where to upload the files:

@~[/<path>] Uploads files to the stage for the current user.
@[<namespace>.]%<table_name>[/<path>] Uploads files to the stage for the specified table.
@[<namespace>.]<internal_stage_name>[/<path>] Uploads files to the specified internal named stage.
  • namespace is the database and/or schema in which the table or internal stage resides. It is optional if a database and schema are currently in use within the user session; otherwise, it is required.
  • The optional path parameter restricts the set of files being uploaded to the files under the folder prefix. If path is specified, but no file is explicitly named in the path, all data files in the path are uploaded.

The URI string can be enclosed in single quotes, which allows special characters, including spaces, in location names.
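
For example, the three location forms, assuming a hypothetical database mydb, schema public, table mytable, and named internal stage my_int_stage:

PUT file:///tmp/load/mydata.csv @~/staged;
PUT file:///tmp/load/mydata.csv @mydb.public.%mytable;
PUT file:///tmp/load/mydata.csv @mydb.public.my_int_stage/weekly;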

Optional:

PARALLEL = integer

Specifies the number of threads to use for uploading files. Integer value from 1 (no parallelism) to 99 (maximum parallelism).

PUT operations separate batches of data files by size. Small files (less than 16 MB, compressed or uncompressed) are staged in parallel as individual files, whereas larger files are automatically split into chunks, staged concurrently, and reassembled in the target stage. A single thread can upload multiple chunks.

Default value is 4.
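
For example, a sketch that raises the thread count for a large upload (hypothetical file and stage names):

PUT file:///tmp/load/large_dataset.csv @my_int_stage PARALLEL = 10;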

AUTO_COMPRESS = TRUE | FALSE

When TRUE, files are automatically compressed using gzip, if they are not already compressed. This option does not support other compression types. To use a different compression type, compress the file separately before executing this command. Then, identify the compression type using the SOURCE_COMPRESSION option.

Default value is TRUE.
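
For example, a sketch of the workflow described above, assuming a file compressed beforehand with bzip2 (hypothetical file name):

PUT file:///tmp/load/mydata.csv.bz2 @~ AUTO_COMPRESS = FALSE SOURCE_COMPRESSION = BZ2;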

SOURCE_COMPRESSION = constant

String constant that specifies the method of compression used on compressed files that are being staged. Snowflake uses this option to detect how the data files were compressed so that they can be uncompressed and the data extracted for loading; it does not use this option to compress the file.

Valid values are:

AUTO

Compression type detected automatically.

Note

Snowflake does not yet automatically detect files compressed using Brotli or Zstandard. For these files, you must explicitly specify BROTLI or ZSTD instead of AUTO.

GZIP
Files treated as gzip-compressed.
BZ2
Files treated as bzip2-compressed.
BROTLI
Files treated as Brotli-compressed.
ZSTD
Files treated as Zstandard-compressed. Note that Snowflake supports Zstandard v0.8 and higher.
DEFLATE
Files treated as deflate-compressed (with zlib header, RFC1950).
RAW_DEFLATE
Files treated as raw deflate compressed (without header, RFC1951).
NONE
Files treated as not compressed.

Default value is AUTO.
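
For example, per the note above, a hypothetical Brotli-compressed file must be identified explicitly:

PUT file:///tmp/load/mydata.csv.br @~ SOURCE_COMPRESSION = BROTLI;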

Note

Loading files that were compressed with other utilities (e.g. lzip, lzma, lzop, and xz) is not currently supported.

Usage Notes

  • The PUT command cannot be executed from the Worksheet in the Snowflake web interface; instead, use the SnowSQL client (or an equivalent client) to upload data files.

  • File-globbing patterns (i.e. wildcards) are accepted.

  • The PUT command does not create or rename files; the specified internal_location serves as a path prefix for the uploaded files, not as a new file name.

  • Uploaded files are automatically encrypted with 128-bit or 256-bit keys. The CLIENT_ENCRYPTION_KEY_SIZE account parameter specifies the size of the key used to encrypt the files.

  • The PUT command ignores any duplicate files you attempt to upload to the same stage. In this context, a duplicate file is a file that has the same name as a staged file and has not been modified since it was staged. To overwrite an existing staged file, you must modify the file you are staging, which generates a new checksum.

  • If referencing a named stage that includes one or more spaces in the name, enclose the stage reference in single quotes. For example: '@"my stage"' (see the final example below).

  • To improve performance when uploading large files, increase the PARALLEL parameter value (see the final example below).

  • For security reasons, the PUT command times out after a set period of time when transferring data files to internal stages. If the command times out, compress your data files using a third-party tool before staging the files using the PUT command. Specify the compression type for the files using the SOURCE_COMPRESSION parameter.

    You can compress files of any size, although we specifically recommend splitting large sets of data into files roughly 10-100 MB in size (compressed) to take advantage of parallel operations later, when loading the staged data into tables using the COPY INTO table command.

  • The ODBC Driver does not currently support the PUT command.

Examples

Upload a file named mydata.csv in the /data directory (in a Linux or Mac OS environment) to the stage for the current user, with automatic data compression enabled:

PUT file:///data/mydata.csv @~ AUTO_COMPRESS=TRUE;

------------+---------------+-------------+-------------+--------------------+--------------------+----------+------------+---------+
 source     |    target     | source_size | target_size | source_compression | target_compression |  status  | encryption | message |
------------+---------------+-------------+-------------+--------------------+--------------------+----------+------------+---------+
 mydata.csv | mydata.csv.gz | 174661267   | 38593931    | NONE               | GZIP               | UPLOADED | ENCRYPTED  |         |
------------+---------------+-------------+-------------+--------------------+--------------------+----------+------------+---------+

Upload a file named orders_001.csv in the /data directory (in a Linux or Mac OS environment) to the stage for the orderstiny_ext table, with automatic data compression disabled:

PUT file:///data/orders_001.csv @%orderstiny_ext AUTO_COMPRESS=FALSE;

----------------+----------------+-------------+-------------+--------------------+--------------------+----------+------------+---------+
     source     |     target     | source_size | target_size | source_compression | target_compression |  status  | encryption | message |
----------------+----------------+-------------+-------------+--------------------+--------------------+----------+------------+---------+
 orders_001.csv | orders_001.csv | 1126625     | 1126625     | NONE               | NONE               | UPLOADED | ENCRYPTED  |         |
----------------+----------------+-------------+-------------+--------------------+--------------------+----------+------------+---------+

Same example as above, but using wildcard characters in the filename to upload multiple files to the stage for orderstiny_ext:

PUT file:///data/orders_10*.csv @%orderstiny_ext AUTO_COMPRESS=FALSE;

----------------+----------------+-------------+-------------+--------------------+--------------------+----------+------------+---------+
     source     |     target     | source_size | target_size | source_compression | target_compression |  status  | encryption | message |
----------------+----------------+-------------+-------------+--------------------+--------------------+----------+------------+---------+
 orders_001.csv | orders_001.csv | 1126625     | 1126625     | NONE               | NONE               | UPLOADED | ENCRYPTED  |         |
 orders_101.csv | orders_101.csv | 5214        | 5214        | NONE               | NONE               | UPLOADED | ENCRYPTED  |         |
----------------+----------------+-------------+-------------+--------------------+--------------------+----------+------------+---------+

Upload a file named mydata.csv in the /data directory (in a Linux or Mac OS environment) to an internal stage named my_int_stage:

PUT file:///data/mydata.csv @my_int_stage;

------------+---------------+-------------+-------------+--------------------+--------------------+----------+------------+---------+
 source     |    target     | source_size | target_size | source_compression | target_compression |  status  | encryption | message |
------------+---------------+-------------+-------------+--------------------+--------------------+----------+------------+---------+
 mydata.csv | mydata.csv.gz | 174661267   | 38593931    | NONE               | GZIP               | UPLOADED | ENCRYPTED  |         |
------------+---------------+-------------+-------------+--------------------+--------------------+----------+------------+---------+
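
Upload a file named mydata.csv (in a Linux or Mac OS environment) to a named stage whose name includes a space, increasing the number of upload threads; a sketch assuming a hypothetical stage named "my stage":

PUT file:///data/mydata.csv '@"my stage"' PARALLEL = 10;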