DML Commands - Data Loading

PUT

Uploads (i.e. stages) data files from a local directory/folder on a client machine to one of the following Snowflake stages:

  • Named internal stage.

  • Internal stage for a specified table.

  • Internal stage for the current user.

Once files are staged, the data in the files can be loaded into a table using the COPY INTO <table> command.
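
For example, a typical two-step load might look like the following (a minimal sketch; the stage name my_int_stage and table name mytable are hypothetical, and the staged file gains a .gz extension because automatic compression is enabled by default):

PUT file:///tmp/data/mydata.csv @my_int_stage;

COPY INTO mytable FROM @my_int_stage/mydata.csv.gz;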


PUT does not support uploading files to named external stages or external locations (i.e. S3 buckets or Azure containers). To upload files to external stages/locations, use the utilities provided by S3/Azure.


Syntax

PUT file://<path_to_file>/<filename> internalStage
    [ PARALLEL = <integer> ]
    [ AUTO_COMPRESS = TRUE | FALSE ]
    [ SOURCE_COMPRESSION = AUTO_DETECT | GZIP | BZ2 | BROTLI | ZSTD | DEFLATE | RAW_DEFLATE | NONE ]


internalStage ::=
    @[<namespace>.]<int_stage_name>[/<path>]
  | @[<namespace>.]%<table_name>[/<path>]
  | @~[/<path>]

Required Parameters


file://<path_to_file>/<filename>

Specifies the URI for the data file(s) on the client machine, where:

  • path_to_file is the local directory path to the file(s) to upload. If the files are located in the root directory (or a sub-directory) on the client machine:

    Linux/macOS: You must include the initial forward slash in the path (e.g. file:///tmp/load).

    Windows: You must include the drive and backslash in the path (e.g. file://C:\temp\load).

  • filename is the name of the file(s) to upload. Wildcard characters (*, ?) are supported to enable uploading multiple files in a directory.

The URI can be enclosed in single quotes, which allows special characters, including spaces, in directory and file names; however, the drive and path separator is a forward slash (/) for all supported operating systems (e.g. 'file://C:/temp/load data' for a path in Windows containing a directory named load data).
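
For example, to upload a file from a Windows directory whose name contains a space, enclose the URI in single quotes and use forward slashes (the stage name my_int_stage is hypothetical):

PUT 'file://C:/temp/load data/mydata.csv' @my_int_stage;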


internalStage

Specifies the location in Snowflake where to upload the files:

@[<namespace>.]<int_stage_name>[/<path>]

    Files are uploaded to the specified named internal stage.

@[<namespace>.]%<table_name>[/<path>]

    Files are uploaded to the stage for the specified table.

@~[/<path>]

    Files are uploaded to the stage for the current user.


  • namespace is the database and/or schema in which the named internal stage or table resides. It is optional if a database and schema are currently in use within the session; otherwise, it is required.

  • The optional path restricts the set of files being uploaded to the files under the folder prefix. If path is specified, but no file is explicitly named in the path, all data files in the path are uploaded.

The string can be enclosed in single quotes, which allows special characters, including spaces, in location names (e.g. '@"my stage"' for a stage named "my stage").
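
For example, to upload a file to a stage whose name contains a space, enclose the stage reference in single quotes (the file path and stage name here are illustrative):

PUT file:///tmp/data/mydata.csv '@"my stage"';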

Optional Parameters

PARALLEL = integer

Specifies the number of threads to use for uploading files. The upload process separates batches of data files by size:

  • Small files (< 16 MB compressed or uncompressed) are staged in parallel as individual files.

  • Larger files are automatically split into chunks, staged concurrently and reassembled in the target stage. A single thread can upload multiple chunks.

Increasing the number of threads can improve performance when uploading large files.

Supported values: Any integer value from 1 (no parallelism) to 99 (use 99 threads for uploading files).

Default: 4
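
For example, to upload a large file using 8 threads instead of the default 4 (the file path and stage name are hypothetical):

PUT file:///tmp/data/large_orders.csv @my_int_stage PARALLEL = 8;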


AUTO_COMPRESS = TRUE | FALSE

Specifies whether Snowflake uses gzip to compress files during upload:

  • TRUE: Files are compressed (if they are not already compressed).

  • FALSE: Files are not compressed (i.e. files are uploaded as-is).

This option does not support other compression types. To use a different compression type, compress the file separately before executing the PUT command. Then, identify the compression type using the SOURCE_COMPRESSION option.

Default: TRUE


SOURCE_COMPRESSION = AUTO_DETECT | GZIP | BZ2 | BROTLI | ZSTD | DEFLATE | RAW_DEFLATE | NONE

Specifies the method of compression used on already-compressed files that are being staged:

Supported Values:

  • AUTO_DETECT: Compression algorithm detected automatically, except for Brotli-compressed files, which cannot currently be detected automatically. If loading Brotli-compressed files, explicitly use BROTLI instead of AUTO_DETECT.

  • GZIP

  • BZ2

  • BROTLI: Must be used if loading Brotli-compressed files.

  • ZSTD: Zstandard v0.8 (and higher) supported.

  • DEFLATE: Deflate-compressed files (with zlib header, RFC1950).

  • RAW_DEFLATE: Raw Deflate-compressed files (without header, RFC1951).

  • NONE: Files for loading data have not been compressed.

Default: AUTO_DETECT


Snowflake uses this option to detect how the data files were compressed so that they can be uncompressed and the data extracted for loading; it does not use this option to compress the files.

Uploading files that were compressed with other utilities (e.g. lzip, lzma, lzop, and xz) is not currently supported.
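
For example, to stage a file that was compressed with bzip2 before executing the command, identify the compression type explicitly (the file path and stage name are hypothetical):

PUT file:///tmp/data/mydata.csv.bz2 @my_int_stage SOURCE_COMPRESSION = BZ2;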

Usage Notes

  • The command cannot be executed from the Worksheets page in the Snowflake web interface; instead, use the SnowSQL client to upload data files, or check the documentation for the specific Snowflake client to verify support for this command.

  • Version 2.17.5 and higher of the ODBC driver supports this command.

    Currently, this support is not available to Snowflake accounts hosted on the Microsoft Azure cloud platform. Use SnowSQL (CLI Client) or the JDBC Driver instead.

  • File-globbing patterns (i.e. wildcards) are supported.

  • The command does not create or rename files; the prefix (if any) in the specified internal stage is for existing files, not new files.

  • Uploaded files are automatically encrypted with 128-bit or 256-bit keys. The CLIENT_ENCRYPTION_KEY_SIZE account parameter specifies the size of the key used to encrypt the files.

  • The command ignores any duplicate files you attempt to upload to the same stage. In this context, a duplicate file is an unmodified file with the same name as an already-staged file.

    To overwrite an already-staged file, you must modify the file you are uploading so that it is different from the staged file, which results in a new checksum for the newly-staged file.


For security reasons, the command times out after a set period of time. This can occur when loading large, uncompressed data files. To avoid any timeout issues, we recommend compressing large data files using one of the supported compression types before uploading the files. Then, specify the compression type for the files using the SOURCE_COMPRESSION option.

You can also consider increasing the value of the PARALLEL option, which can help with performance when uploading large data files.

Furthermore, to take advantage of parallel operations when loading data into tables (using the COPY INTO <table> command), we recommend using data files ranging in size from roughly 10 to 100 MB compressed. If your data files are larger, consider using a 3rd-party tool to split them into smaller files before compressing and uploading them.


Examples

Upload a file named mydata.csv in the /tmp/data directory (in a Linux or macOS environment) to an internal stage named my_int_stage:

PUT file:///tmp/data/mydata.csv @my_int_stage;

Upload a file named orders_001.csv in the /tmp/data directory (in a Linux or macOS environment) to the stage for the orderstiny_ext table, with automatic data compression disabled:

PUT file:///tmp/data/orders_001.csv @%orderstiny_ext AUTO_COMPRESS=FALSE;

Same example as above, but using wildcard characters in the filename to upload multiple files:

PUT file:///tmp/data/orders_*01.csv @%orderstiny_ext AUTO_COMPRESS=FALSE;

Upload a file named mydata.csv in the C:\temp\data directory (in a Windows environment) to the stage for the current user, with automatic data compression enabled:

PUT file://C:\temp\data\mydata.csv @~ AUTO_COMPRESS=TRUE;