Understanding Billing for Snowpipe Usage

With Snowpipe’s serverless compute model, users can initiate a load of any size without managing a virtual warehouse. Instead, Snowflake provides and manages the compute resources, automatically growing or shrinking capacity based on the current Snowpipe load. Accounts are charged based on their actual compute resource usage, in contrast to customer-managed virtual warehouses, which consume credits while active and may sit idle or be overutilized.

Snowflake tracks the resource consumption of loads for all pipes in an account, with per-second/per-core granularity, as Snowpipe actively queues and processes data files. Per-core refers to the physical CPU cores in a compute server.

Note

Using a multi-threaded client application enables submitting data files in parallel, which initiates additional servers and loads the data in less time. However, the total compute time required is identical to that of a single-threaded client application, just spread across more internal Snowpipe servers.
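This equivalence can be sketched with illustrative numbers (hypothetical figures, not measured values):

```sql
-- Hypothetical illustration: 4 servers working 100 core-seconds each
-- accrue the same billed compute as 1 server working 400 core-seconds
select 1 * 400 as single_threaded_core_seconds,  -- 400
       4 * 100 as multi_threaded_core_seconds;   -- 400
```

Parallelism reduces wall-clock load time, not the billed core-seconds.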

The utilization recorded is then translated into familiar Snowflake credits, which are listed on the bill for your account.

Note

Snowpipe is currently a preview feature; however, accounts that try the service are still billed based on usage.

Viewing the Data Load History for Your Account

Users with the ACCOUNTADMIN role can use the Snowflake web interface or SQL to view the credits billed to their Snowflake account within a specified date range.

To view the credits billed for Snowpipe data loading for your account:

Web Interface: Click Account » Billing & Usage. Snowpipe utilization is shown as a special Snowflake warehouse called Snowflake SNOWPIPE.
SQL: Query the PIPE_USAGE_HISTORY Information Schema table function.
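For example, the following query (a sketch that assumes the session's current database) returns the Snowpipe usage history for the previous 7 days:

```sql
-- Credits consumed by all Snowpipe loads over the last 7 days
select *
from table(information_schema.pipe_usage_history(
  date_range_start => dateadd('day', -7, current_timestamp()),
  date_range_end   => current_timestamp()));
```

To narrow the results to a single pipe, pass the optional PIPE_NAME argument.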

Snowpipe Billing Example

The following example illustrates the Snowpipe billing model using a simple use case: loading the catalog_sales table data from the TPC-DS benchmark data set.

  • Data files: Around 3,000 gzip-compressed CSV files, 1.4 TB total, stored in an AWS S3 bucket
  • Snowflake credit usage: 11
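Based on these figures, the load cost works out to roughly 8 credits per TB (a rough average for this particular load; actual rates vary by workload):

```sql
-- Approximate unit costs derived from the example figures above
select 11 / 1.4    as credits_per_tb,    -- ~7.86
       11 / 3000.0 as credits_per_file;  -- ~0.0037
```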

The pipe definition for the load operation is a simple COPY statement:

create pipe tpcds_10tb_catalog_sales_pipe as
  copy into snowpipe_db.public.catalog_sales
  from @snowpipe_db.public.tpcds_10tb_stg/catalog_sales/;

Note that individual load scenarios have different compute resource requirements, resulting in higher or lower credit charges. In general, decrypting data files or performing COPY transformations (particularly transformations of semi-structured data formats such as JSON) requires more compute resources and results in higher credit usage.