Categories:

DML Commands - Data Unloading

COPY INTO <location>

Unloads data from a table (or query) into one or more files in one of the following locations:

  • Named internal stage (or table/user stage). The files can then be downloaded from the stage/location using the GET command.

  • Named external stage that references an external location (AWS S3, Google Cloud Storage, or Microsoft Azure).

  • External location (AWS S3, Google Cloud Storage, or Microsoft Azure).

See also:

COPY INTO <table>

In this Topic:

Syntax

COPY INTO { internalStage | externalStage | externalLocation }
     FROM { [<namespace>.]<table_name> | ( <query> ) }
[ FILE_FORMAT = ( { FORMAT_NAME = '[<namespace>.]<file_format_name>' |
                    TYPE = { CSV | JSON | PARQUET } [ formatTypeOptions ] } ) ]
[ copyOptions ]
[ VALIDATION_MODE = RETURN_ROWS ]
[ HEADER ]

Where:

internalStage

internalStage ::=
    @[<namespace>.]<int_stage_name>[/<path>]
  | @[<namespace>.]%<table_name>[/<path>]
  | @~[/<path>]

externalStage

externalStage ::=
  @[<namespace>.]<ext_stage_name>[/<path>]

externalLocation (for AWS S3 bucket)

externalLocation ::=
  's3://<bucket>[/<path>]'
  [ { STORAGE_INTEGRATION = <integration_name> } | { CREDENTIALS = ( {  { AWS_KEY_ID = '<string>' AWS_SECRET_KEY = '<string>' [ AWS_TOKEN = '<string>' ] } | AWS_ROLE = '<string>'  } ) } ]
  [ ENCRYPTION = ( [ TYPE = AWS_CSE ] [ MASTER_KEY = '<string>' ] |
                   [ TYPE = AWS_SSE_S3 ] |
                   [ TYPE = AWS_SSE_KMS [ KMS_KEY_ID = '<string>' ] ] |
                   [ TYPE = NONE ] ) ]

externalLocation (for Google Cloud Storage bucket)

externalLocation ::=
  'gcs://<bucket>[/<path>]'
  [ STORAGE_INTEGRATION = <integration_name> ]

externalLocation (for Microsoft Azure container)

externalLocation ::=
  'azure://<account>.blob.core.windows.net/<container>[/<path>]'
  [ CREDENTIALS = ( [ AZURE_SAS_TOKEN = <string> ] ) ]
  [ ENCRYPTION = ( [ TYPE = { AZURE_CSE | NONE } ] [ MASTER_KEY = '<string>' ] ) ]

formatTypeOptions (type-specific)

formatTypeOptions ::=
-- If FILE_FORMAT = ( TYPE = CSV ... )
     COMPRESSION = AUTO | GZIP | BZ2 | BROTLI | ZSTD | DEFLATE | RAW_DEFLATE | NONE
     RECORD_DELIMITER = '<character>' | NONE
     FIELD_DELIMITER = '<character>' | NONE
     FILE_EXTENSION = '<string>'
     ESCAPE = '<character>' | NONE
     ESCAPE_UNENCLOSED_FIELD = '<character>' | NONE
     DATE_FORMAT = '<string>' | AUTO
     TIME_FORMAT = '<string>' | AUTO
     TIMESTAMP_FORMAT = '<string>' | AUTO
     BINARY_FORMAT = HEX | BASE64 | UTF8
     FIELD_OPTIONALLY_ENCLOSED_BY = '<character>' | NONE
     NULL_IF = ( '<string1>' [ , '<string2>' , ... ] )
     EMPTY_FIELD_AS_NULL = TRUE | FALSE
-- If FILE_FORMAT = ( TYPE = JSON ... )
     COMPRESSION = AUTO | GZIP | BZ2 | BROTLI | ZSTD | DEFLATE | RAW_DEFLATE | NONE
     FILE_EXTENSION = '<string>'
-- If FILE_FORMAT = ( TYPE = PARQUET ... )
     COMPRESSION = AUTO | SNAPPY | NONE
     SNAPPY_COMPRESSION = TRUE | FALSE

copyOptions

copyOptions ::=
     OVERWRITE = TRUE | FALSE
     SINGLE = TRUE | FALSE
     MAX_FILE_SIZE = <num>

Required Parameters

INTO ...

Specifies the internal or external location where the data files are unloaded:

@[namespace.]int_stage_name[/path]

Files are unloaded to the specified named internal stage.

@[namespace.]ext_stage_name[/path]

Files are unloaded to the specified named external stage.

@[namespace.]%table_name[/path]

Files are unloaded to the stage for the specified table.

@~[/path]

Files are unloaded to the stage for the current user.

's3://bucket[/path]'

Files are unloaded to the specified external location (S3 bucket). Additional parameters may be required. For details, see Additional Cloud Provider Parameters (in this topic).

'gcs://bucket[/path]'

Files are unloaded to the specified external location (Google Cloud Storage bucket). Additional parameters may be required. For details, see Additional Cloud Provider Parameters (in this topic).

'azure://account.blob.core.windows.net/container[/path]'

Files are unloaded to the specified external location (Azure container). Additional parameters may be required. For details, see Additional Cloud Provider Parameters (in this topic).

Where:

  • namespace is the database and/or schema in which the internal or external stage resides, in the form of database_name.schema_name or schema_name. It is optional if a database and schema are currently in use within the user session; otherwise, it is required.

  • The optional path parameter specifies a folder and filename prefix for the file(s) containing unloaded data. If a filename prefix is not included in path, the filenames for the generated data files are prefixed with data_.

    Relative path modifiers such as /./ and /../ are interpreted literally, because “paths” are literal prefixes for a name. For example:

    -- S3 bucket
    COPY INTO 's3://mybucket/./../a.csv' FROM mytable;
    
    -- Azure container
    COPY INTO 'azure://myaccount.blob.core.windows.net/mycontainer/./../a.csv' FROM mytable;
    

    In these COPY statements, Snowflake creates a file that is literally named ./../a.csv in the storage location.

Note

The URI string for an external location (i.e. AWS S3, Google Cloud Storage, or Microsoft Azure) must be enclosed in single quotes; however, you can enclose any string in single quotes, which allows special characters, including spaces, to be used in location and file names. For example:

-- Stages
COPY INTO '@mystage/path 1/file 1.csv' FROM mytable;
COPY INTO '@%mytable/path 1/file 1.csv' FROM mytable;
COPY INTO '@~/path 1/file 1.csv' FROM mytable;

-- S3 bucket
COPY INTO 's3://mybucket 1/prefix 1/file 1.csv' FROM mytable;

-- Azure container
COPY INTO 'azure://myaccount.blob.core.windows.net/mycontainer/encrypted_files/file 1.csv' FROM mytable;

FROM ...

Specifies the source of the data to be unloaded, which can either be a table or a query:

[namespace.]table_name

Specifies the name of the table from which data is unloaded.

Namespace optionally specifies the database and/or schema in which the table resides, in the form of database_name.schema_name or schema_name. It is optional if a database and schema are currently in use within the user session; otherwise, it is required.

( query )

SELECT statement that returns data to be unloaded into files. You can limit the number of rows returned by specifying a LIMIT / FETCH clause in the query.
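
For illustration, a minimal sketch of unloading a limited query result (the stage and table names here are placeholders):

COPY INTO @my_stage/result/data_
  FROM (SELECT * FROM mytable LIMIT 100)
  FILE_FORMAT = (TYPE = CSV);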

Additional Cloud Provider Parameters

STORAGE_INTEGRATION = integration_name or CREDENTIALS = ( cloud_specific_credentials )

For use in ad hoc COPY statements (statements that do not reference a named external stage). Specifies the security credentials for connecting to the cloud provider and accessing the private/protected storage container where the unloaded files are staged.

Required only for unloading into an external private/protected cloud storage location; not required for public buckets/containers.

AWS S3

STORAGE_INTEGRATION = integration_name

Specifies the name of the storage integration used to delegate authentication responsibility for external cloud storage to a Snowflake identity and access management (IAM) entity. For more details, see CREATE STORAGE INTEGRATION.

Note

We highly recommend the use of storage integrations. This option avoids the need to supply cloud storage credentials using the CREDENTIALS parameter when creating stages or loading data.
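
As an illustration of this recommendation, a minimal sketch, assuming a storage integration named myint and placeholder stage, bucket, and table names; because the stage stores the integration, the COPY statement itself carries no credentials:

CREATE STAGE my_unload_stage
  URL = 's3://mybucket/unload/'
  STORAGE_INTEGRATION = myint;

COPY INTO @my_unload_stage/data_
  FROM mytable
  FILE_FORMAT = (TYPE = CSV);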

CREDENTIALS = ( AWS_KEY_ID = 'string' AWS_SECRET_KEY = 'string' [ AWS_TOKEN = 'string' ] ) or CREDENTIALS = ( AWS_ROLE = 'string' )

Specifies the security credentials for connecting to AWS and accessing the private/protected S3 bucket where the unloaded files are staged. For more information, see Configuring Secure Access to AWS S3.

The credentials you specify depend on whether you associated the Snowflake access permissions for the bucket with an AWS IAM (Identity & Access Management) user or role:

  • IAM user: Temporary IAM credentials are required. Temporary (aka “scoped”) credentials are generated by AWS Security Token Service (STS) and consist of three components:

    • AWS_KEY_ID

    • AWS_SECRET_KEY

    • AWS_TOKEN

    All three are required to access a private/protected bucket. After a designated period of time, temporary credentials expire and can no longer be used. You must then generate a new set of valid temporary credentials.

    Important

    COPY commands contain complex syntax and sensitive information, such as credentials. In addition, they are executed frequently and are often stored in scripts or worksheets, which could lead to sensitive information being inadvertently exposed. The COPY command allows permanent (aka “long-term”) credentials to be used; however, for security reasons, do not use permanent credentials in COPY commands. Instead, use temporary credentials.

    If you must use permanent credentials, use external stages, for which credentials are entered once and securely stored, minimizing the potential for exposure.

  • IAM role: Omit the security credentials and access keys and, instead, identify the role using AWS_ROLE and specify the AWS role ARN (Amazon Resource Name).

    Note

    Using an IAM role for credentials requires a named external stage. Accessing an S3 bucket URL directly in a COPY statement is not supported.

Google Cloud Storage

STORAGE_INTEGRATION = integration_name

Specifies the name of the storage integration used to delegate authentication responsibility for external cloud storage to a Snowflake identity and access management (IAM) entity. For more details, see CREATE STORAGE INTEGRATION.

Microsoft Azure

CREDENTIALS = ( AZURE_SAS_TOKEN = string )

Specifies the shared access signature (SAS) token for connecting to Azure and accessing the private/protected container where the files containing unloaded data are staged. Credentials are generated by Azure.

ENCRYPTION = ( cloud_specific_encryption )

For use in ad hoc COPY statements (statements that do not reference a named external stage). Required only if you want the unloaded files to be encrypted; not required if the files are unloaded unencrypted. Specifies the encryption settings used to encrypt the files unloaded to the storage location.

AWS S3

ENCRYPTION = ( [ TYPE = AWS_CSE ] [ MASTER_KEY = '<string>' ] | [ TYPE = AWS_SSE_S3 ] | [ TYPE = AWS_SSE_KMS [ KMS_KEY_ID = '<string>' ] ] | [ TYPE = NONE ] )

Specifies the encryption settings used to encrypt files unloaded to the S3 bucket:

TYPE = ...

Specifies the encryption type used. Possible values are:

  • AWS_CSE: Client-side encryption (requires a MASTER_KEY value). Currently, the client-side master key you provide can only be a symmetric key. Note that when a MASTER_KEY value is provided, Snowflake assumes TYPE = AWS_CSE, so TYPE can be omitted.

  • AWS_SSE_S3: Server-side encryption that requires no additional encryption settings.

  • AWS_SSE_KMS: Server-side encryption that accepts an optional KMS_KEY_ID value.

For more information about the encryption types, see the AWS documentation for client-side encryption or server-side encryption.

MASTER_KEY = 'string' (applies only to AWS_CSE encryption)

Specifies the client-side master key used to encrypt the files unloaded into the bucket. The master key must be a 128-bit or 256-bit key in Base64-encoded form.

KMS_KEY_ID = 'string' (applies only to AWS_SSE_KMS encryption)

Optionally specifies the ID for the AWS KMS-managed key that is used to encrypt files unloaded into the bucket. If no KMS_KEY_ID value is provided, your default KMS key ID is used to encrypt files on unload.
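
For example, minimal sketches of both S3 encryption variants (the bucket, table, integration, and key values are placeholders):

-- Client-side encryption with a Base64-encoded symmetric master key
COPY INTO 's3://mybucket/unload/'
  FROM mytable
  STORAGE_INTEGRATION = myint
  ENCRYPTION = (TYPE = 'AWS_CSE' MASTER_KEY = '<base64_master_key>')
  FILE_FORMAT = (TYPE = CSV);

-- Server-side encryption with an AWS KMS-managed key
COPY INTO 's3://mybucket/unload/'
  FROM mytable
  STORAGE_INTEGRATION = myint
  ENCRYPTION = (TYPE = 'AWS_SSE_KMS' KMS_KEY_ID = '<kms_key_id>')
  FILE_FORMAT = (TYPE = CSV);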

Microsoft Azure

ENCRYPTION = ( [ TYPE = AZURE_CSE | NONE ] [ MASTER_KEY = 'string' ] )

Specifies the settings used to encrypt the files containing the unloaded data.

TYPE = ...

Specifies the encryption type used. Possible values are:

  • AZURE_CSE: Client-side encryption; requires a MASTER_KEY value.

  • NONE: No encryption (default).

MASTER_KEY = 'string' (applies only to AZURE_CSE encryption)

Specifies the client-side master key that Snowflake uses to encrypt the files containing the unloaded data. The master key must be a 128-bit or 256-bit key in Base64-encoded form.
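
For example, a minimal sketch of unloading client-side encrypted files to Azure (the account, container, table, SAS token, and key values are placeholders):

COPY INTO 'azure://myaccount.blob.core.windows.net/mycontainer/unload/'
  FROM mytable
  CREDENTIALS = (AZURE_SAS_TOKEN = 'xxxx')
  ENCRYPTION = (TYPE = 'AZURE_CSE' MASTER_KEY = '<base64_master_key>')
  FILE_FORMAT = (TYPE = CSV);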

Optional Parameters

FILE_FORMAT = ( FORMAT_NAME = 'file_format_name' ) or FILE_FORMAT = ( TYPE = CSV | JSON | PARQUET [ ... ] )

Specifies the format of the data files containing unloaded data:

FORMAT_NAME = 'file_format_name'

Specifies an existing named file format to use for unloading data from the table. The named file format determines the format type (CSV, JSON, PARQUET), as well as any other format options, for the data files. For more information, see CREATE FILE FORMAT.
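
For example, a minimal sketch that creates a named file format and then references it when unloading (the format, stage, and table names are placeholders):

CREATE OR REPLACE FILE FORMAT my_unload_csv_format
  TYPE = CSV
  FIELD_DELIMITER = '|'
  COMPRESSION = GZIP;

COPY INTO @my_stage/result/data_
  FROM mytable
  FILE_FORMAT = (FORMAT_NAME = 'my_unload_csv_format');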

TYPE = CSV | JSON | PARQUET [ ... ]

Specifies the type of files unloaded from the table.

If a format type is specified, additional format-specific options can be specified. For more details, see Format Type Options (in this topic).

Note

  • JSON can only be used to unload data from columns of type VARIANT (i.e. columns containing JSON data).

  • Currently, nested data in VARIANT columns cannot be unloaded successfully in Parquet format.
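
For example, a minimal sketch of unloading a VARIANT column as JSON, assuming src is a VARIANT column (the stage, table, and column names are placeholders):

COPY INTO @my_stage/json/data_
  FROM (SELECT src FROM my_json_table)
  FILE_FORMAT = (TYPE = JSON);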

copyOptions

Specifies one or more copy options for the unloaded data. For more details, see Copy Options (in this topic).

HEADER = TRUE | FALSE

Specifies whether to include the table column headings in the output files.

  • Set this option to TRUE to include the table column headings in the output files.

    Note that if the COPY operation unloads the data to multiple files, the column headings are included in every file.

    When unloading data in Parquet format, the table column names are retained in the output files.

  • Set this option to FALSE to specify the following behavior:

    CSV

    Do not include table column headings in the output files.

    Parquet

    Include generic column headings (e.g. col1, col2, etc.) in the output files.

Default: FALSE
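
For example, a minimal sketch that writes the table column headings into the unloaded CSV files (the stage and table names are placeholders):

COPY INTO @my_stage/result/data_
  FROM mytable
  FILE_FORMAT = (TYPE = CSV)
  HEADER = TRUE;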

Format Type Options (formatTypeOptions)

Depending on the file format type specified (FILE_FORMAT = ( TYPE = ... )), you can include one or more of the following format-specific options (separated by blank spaces, commas, or new lines):

TYPE = CSV

COMPRESSION = AUTO | GZIP | BZ2 | BROTLI | ZSTD | DEFLATE | RAW_DEFLATE | NONE

String (constant) that specifies the compression algorithm to use for the unloaded data files.

Supported Values | Notes
-----------------+------------------------------------------------------------------------------
AUTO             | Unloaded files are automatically compressed using the default, which is gzip.
GZIP             |
BZ2              |
BROTLI           | Must be specified if unloading Brotli-compressed files.
ZSTD             | Zstandard v0.8 (and higher) is supported.
DEFLATE          | Unloaded files are compressed using Deflate (with zlib header, RFC1950).
RAW_DEFLATE      | Unloaded files are compressed using Raw Deflate (without header, RFC1951).
NONE             | Unloaded files are not compressed.

Default: AUTO

RECORD_DELIMITER = 'character' | NONE

Character that separates records in an unloaded file. Accepts common escape sequences, octal values (prefixed by \\), or hex values (prefixed by 0x). Also accepts a value of NONE.

Default: New line character. Note that “new line” is logical such that \r\n will be understood as a new line for files on a Windows platform.

FIELD_DELIMITER = 'character' | NONE

Character that separates fields in an unloaded file. Accepts common escape sequences, octal values, or hex values. Also accepts a value of NONE.

Default: comma (,)

FILE_EXTENSION = 'string' | NONE

String that specifies the extension for files unloaded to a stage. Accepts any extension. The user is responsible for specifying a valid file extension that can be read by the desired software or service.

Note

If the SINGLE copy option is TRUE, then the COPY command unloads a file without a file extension by default. To specify a file extension, provide a file name and extension in the internal_location or external_location path. For example:

copy into @stage/data.csv ...

Default: null, meaning the file extension is determined by the format type, e.g. .csv[compression], where compression is the extension added by the compression method, if COMPRESSION is set.

DATE_FORMAT = 'string' | AUTO

String that defines the format of date values in the unloaded data files. If a value is not specified or is set to AUTO, the value for the DATE_OUTPUT_FORMAT parameter is used.

Default: AUTO

TIME_FORMAT = 'string' | AUTO

String that defines the format of time values in the unloaded data files. If a value is not specified or is set to AUTO, the value for the TIME_OUTPUT_FORMAT parameter is used.

Default: AUTO

TIMESTAMP_FORMAT = 'string' | AUTO

String that defines the format of timestamp values in the unloaded data files. If a value is not specified or is set to AUTO, the value for the TIMESTAMP_OUTPUT_FORMAT parameter is used.

Default: AUTO

BINARY_FORMAT = HEX | BASE64 | UTF8

String (constant) that defines the encoding format for binary output. The option can be used when unloading data from binary columns in a table.

Default: HEX

ESCAPE = 'character' | NONE

Character used as the escape character for any field values. Accepts common escape sequences, octal values, or hex values. Also accepts a value of NONE.

If set, overrides the escape character set for ESCAPE_UNENCLOSED_FIELD.

Default: NONE

ESCAPE_UNENCLOSED_FIELD = 'character' | NONE

Single character string used as the escape character for unenclosed field values only. Accepts common escape sequences, octal values, or hex values. Also accepts a value of NONE.

However, if ESCAPE is set, the value set for that option overrides this option.

Default: backslash (\\)

FIELD_OPTIONALLY_ENCLOSED_BY = 'character' | NONE

Character used to enclose strings. Value can be NONE, single quote character ('), or double quote character ("). To use the single quote character, use the octal or hex representation (0x27) or the double single-quoted escape ('').

When a field contains this character, escape it using the same character. For example, if the value is the double quote character and a field contains the string A "B" C, escape the double quotes as follows:

A ""B"" C

Default: NONE
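
For example, a minimal sketch that encloses every string field in double quotes on unload (the stage and table names are placeholders):

COPY INTO @my_stage/result/data_
  FROM mytable
  FILE_FORMAT = (TYPE = CSV FIELD_OPTIONALLY_ENCLOSED_BY = '"');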

NULL_IF = ( 'string1' [ , 'string2' ... ] )

String used to convert from SQL NULL. Snowflake converts SQL NULL values to the first value in the list.

Default: \\N (i.e. NULL, assuming ESCAPE_UNENCLOSED_FIELD=\\)

TYPE = JSON

COMPRESSION = AUTO | GZIP | BZ2 | BROTLI | ZSTD | DEFLATE | RAW_DEFLATE | NONE

String (constant). Compresses the data file using the specified compression algorithm.

Supported Values | Notes
-----------------+------------------------------------------------------------------------------
AUTO             | Unloaded files are automatically compressed using the default, which is gzip.
GZIP             |
BZ2              |
BROTLI           |
ZSTD             |
DEFLATE          | Unloaded files are compressed using Deflate (with zlib header, RFC1950).
RAW_DEFLATE      | Unloaded files are compressed using Raw Deflate (without header, RFC1951).
NONE             | Unloaded files are not compressed.

Default: AUTO

FILE_EXTENSION = 'string' | NONE

String that specifies the extension for files unloaded to a stage. Accepts any extension. The user is responsible for specifying a valid file extension that can be read by the desired software or service.

Default: null, meaning the file extension is determined by the format type (e.g. .csv[compression]), where compression is the extension added by the compression method, if COMPRESSION is set.

TYPE = PARQUET

COMPRESSION = AUTO | SNAPPY | NONE

String (constant). Compresses the data file using the specified compression algorithm.

Supported Values | Notes
-----------------+------------------------------------------------------------------------------
AUTO             | Unloaded files are compressed using the default compression algorithm for Parquet, which is Snappy.
SNAPPY           | May be specified when unloading Snappy-compressed files.
NONE             | Unloaded files are not compressed.

Default: AUTO

SNAPPY_COMPRESSION = TRUE | FALSE

Boolean that specifies whether the unloaded file(s) are compressed using the SNAPPY algorithm.

Default: TRUE
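
For example, a minimal sketch of unloading to Snappy-compressed Parquet files that retain the table column names (the stage and table names are placeholders):

COPY INTO @my_stage/parquet/data_
  FROM mytable
  FILE_FORMAT = (TYPE = PARQUET COMPRESSION = SNAPPY)
  HEADER = TRUE;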

Copy Options (copyOptions)

You can specify one or more of the following copy options (separated by blank spaces, commas, or new lines):

Options

OVERWRITE = TRUE | FALSE

Boolean that specifies whether the COPY command overwrites existing files, if any, in the location where files are stored. The option does not remove existing files that it does not overwrite.

Default: FALSE
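
For example, a minimal sketch that replaces previously unloaded files that have the same names (the stage, table, and format names are placeholders):

COPY INTO @my_stage/result/data_
  FROM mytable
  FILE_FORMAT = (FORMAT_NAME = 'myformat')
  OVERWRITE = TRUE;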

SINGLE = TRUE | FALSE

Boolean that specifies whether to generate a single file or multiple files. If FALSE, a filename prefix must be included in path.

Default: FALSE

Important

If SINGLE = TRUE, then COPY ignores the FILE_EXTENSION file format option and outputs a file simply named data. To specify a file extension, provide a file name and extension in the internal or external location path. For example:

COPY INTO @mystage/data.csv ...

In addition, if the COMPRESSION file format option is also explicitly set to one of the supported compression algorithms (e.g. GZIP), then the specified internal or external location path must end in a filename with the corresponding file extension (e.g. gz) so that the file can be uncompressed using the appropriate tool. For example:

COPY INTO @mystage/data.gz ...

COPY INTO @mystage/data.csv.gz ...

MAX_FILE_SIZE = num

Number (> 0) that specifies the upper size limit (in bytes) of each file to be generated in parallel per thread. Note that the actual file size and number of files unloaded are determined by the total amount of data and number of nodes available for parallel processing.

Snowflake utilizes parallel execution to optimize performance. The number of threads cannot be modified.

Default: 16000000 (16 MB)

Note

  • The COPY command unloads one set of table rows at a time. If you set a very small MAX_FILE_SIZE value, the amount of data in a set of rows could exceed the specified size.

  • The AWS file limit for a single PUT operation is 5 GB.
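
For example, a minimal sketch that targets files of roughly 50 MB each (the stage, table, and format names are placeholders; actual file sizes depend on the data and the degree of parallelism):

COPY INTO @my_stage/result/data_
  FROM mytable
  FILE_FORMAT = (FORMAT_NAME = 'myformat')
  MAX_FILE_SIZE = 50000000;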

Usage Notes

  • STORAGE_INTEGRATION or CREDENTIALS only applies if you are unloading directly into a private/protected storage location (i.e. AWS S3, Google Cloud Storage, or Microsoft Azure). If you are unloading into a public bucket, secure access is not required, and if you are unloading into a named external stage, the stage provides all the credential information required for accessing the bucket.

  • If referencing a file format in the current namespace, you can omit the single quotes around the format identifier.

  • JSON can be specified for TYPE only when unloading data from VARIANT columns in tables.

  • When unloading to files of type PARQUET:

    • VARIANT columns are converted into simple JSON strings in the output file.

    • Unloading TIMESTAMP_TZ or TIMESTAMP_LTZ data produces an error.

Examples

Unloading Data from a Table to Files in a Table Stage

Unload data from the orderstiny table into the table’s stage using a folder/filename prefix (result/data_), a named file format (myformat), and gzip compression:

COPY INTO @%orderstiny/result/data_
  FROM orderstiny FILE_FORMAT = (FORMAT_NAME ='myformat' COMPRESSION='GZIP');

---------------+-------------+--------------+
 rows_unloaded | input_bytes | output_bytes |
---------------+-------------+--------------+
 73            | 8339        | 3322         |
---------------+-------------+--------------+

Unloading Data from a Query to Files in a Named Internal Stage

Unload the result of a query into a named internal stage (my_stage) using a folder/filename prefix (result/data_), a named file format (myformat), and gzip compression:

COPY INTO @my_stage/result/data_ FROM (SELECT * FROM orderstiny)
   file_format=(format_name='myformat' compression='gzip');

---------------+-------------+--------------+
 rows_unloaded | input_bytes | output_bytes |
---------------+-------------+--------------+
 73            | 8339        | 3322         |
---------------+-------------+--------------+

Note that the above example is functionally equivalent to the first example, except the file containing the unloaded data is stored in the stage location for my_stage rather than the table location for orderstiny.

Unloading Data from a Table Directly to Files in an External Location

Unload all data in a table into a storage location using a named my_csv_format file format:

AWS S3 bucket

Access the referenced S3 bucket using a storage integration named myint:

COPY INTO s3://mybucket/unload/
  FROM mytable
  STORAGE_INTEGRATION = myint
  FILE_FORMAT = (FORMAT_NAME = my_csv_format);

Access the referenced S3 bucket using supplied credentials:

COPY INTO s3://mybucket/unload/
  FROM mytable
  CREDENTIALS = (AWS_KEY_ID='xxxx' AWS_SECRET_KEY='xxxxx' AWS_TOKEN='xxxxxx')
  FILE_FORMAT = (FORMAT_NAME = my_csv_format);

Google Cloud Storage bucket

Access the referenced GCS bucket using a storage integration named myint:

COPY INTO gcs://mybucket/unload/
  FROM mytable
  STORAGE_INTEGRATION = myint
  FILE_FORMAT = (FORMAT_NAME = my_csv_format);

Microsoft Azure container

COPY INTO azure://myaccount.blob.core.windows.net/mycontainer/unload/
  FROM mytable
  CREDENTIALS=(AZURE_SAS_TOKEN='xxxx')
  FILE_FORMAT = (FORMAT_NAME = my_csv_format);

Retaining NULL/Empty Field Data in Unloaded Files

Retain SQL NULL and empty fields in unloaded files:

-- View the table column values

select * from HOME_SALES;

+------------+-------+-------+-------------+--------+-----------+
| CITY       | STATE | ZIP   | TYPE        | PRICE  | SALE_DATE |
|------------+-------+-------+-------------+--------+-----------|
| Lexington  | MA    | 95815 | Residential | 268880 | 3/28/17   |
| Belmont    | MA    | 95815 | Residential |        | 2/21/17   |
| Winchester | MA    | NULL  | Residential |        | 1/31/17   |
+------------+-------+-------+-------------+--------+-----------+

-- Unload the table data into the current user's personal stage. The file format options retain both the NULL value and the empty values in the output file

copy into @~ from HOME_SALES
file_format=(type=csv null_if = ('NULL', 'null')
empty_field_as_null=false);

-- Contents of the output file

Lexington,MA,95815,Residential,268880,3/28/17
Belmont,MA,95815,Residential,,2/21/17
Winchester,MA,NULL,Residential,,1/31/17

Unloading Data to a Single File

Unload all rows to a single data file using the SINGLE copy option:

copy into @~ from HOME_SALES
single = true;

Validating Data to be Unloaded (from a Query)

Execute COPY in validation mode to return the result of a query and view the data that will be unloaded from the orderstiny table if COPY is executed in normal mode:

COPY INTO @my_stage
FROM (SELECT * FROM orderstiny LIMIT 5)
VALIDATION_MODE='RETURN_ROWS';

----+--------+----+-----------+------------+----------+-----------------+----+---------------------------------------------------------------------------+
 C1 |   C2   | C3 |    C4     |     C5     |    C6    |       C7        | C8 |                                    C9                                     |
----+--------+----+-----------+------------+----------+-----------------+----+---------------------------------------------------------------------------+
 1  | 36901  | O  | 173665.47 | 1996-01-02 | 5-LOW    | Clerk#000000951 | 0  | nstructions sleep furiously among                                         |
 2  | 78002  | O  | 46929.18  | 1996-12-01 | 1-URGENT | Clerk#000000880 | 0  |  foxes. pending accounts at the pending\, silent asymptot                 |
 3  | 123314 | F  | 193846.25 | 1993-10-14 | 5-LOW    | Clerk#000000955 | 0  | sly final accounts boost. carefully regular ideas cajole carefully. depos |
 4  | 136777 | O  | 32151.78  | 1995-10-11 | 5-LOW    | Clerk#000000124 | 0  | sits. slyly regular warthogs cajole. regular\, regular theodolites acro   |
 5  | 44485  | F  | 144659.20 | 1994-07-30 | 5-LOW    | Clerk#000000925 | 0  | quickly. bold deposits sleep slyly. packages use slyly                    |
----+--------+----+-----------+------------+----------+-----------------+----+---------------------------------------------------------------------------+