Understanding Billing for Snowpipe Usage¶
With Snowpipe’s serverless compute model, users can initiate loads of any size without managing a virtual warehouse. Instead, Snowflake provides and manages the compute resources, automatically scaling capacity up or down based on the current Snowpipe load. Accounts are charged based on their actual compute resource usage, in contrast with customer-managed virtual warehouses, which consume credits whenever they are running and may sit idle or be overutilized.
Snowflake tracks the resource consumption of loads for all pipes in an account, with per-second/per-core granularity, as Snowpipe actively queues and processes data files. Per-core refers to the physical CPU cores in a compute server.
Using a multi-threaded client application enables submitting data files in parallel, which engages additional servers and loads the data in less time. However, the total compute time required is essentially the same as with a single-threaded client application; it is simply spread across more internal Snowpipe servers.
The recorded utilization is then converted into familiar Snowflake credits, which appear on the bill for your account.
In addition to resource consumption, an overhead to manage files in the internal load queue is included in the utilization costs charged for Snowpipe. This overhead increases in relation to the number of files queued for loading. Snowpipe charges 0.06 credits per 1000 files queued.
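As a rough illustration of the file-management overhead only (applying the 0.06-credits-per-1,000-files rate above to a hypothetical batch of 3,000 files; this excludes the compute cost of the load itself):

```sql
-- Hypothetical estimate of queue-management overhead:
-- 0.06 credits per 1,000 queued files, for a batch of 3,000 files.
select 3000 / 1000 * 0.06 as estimated_overhead_credits;  -- 0.18 credits
```

Because the overhead scales with file count, staging fewer, larger files reduces this component of the cost.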
Decisions regarding data file size and staging frequency affect both the cost and the performance of Snowpipe. For recommended best practices, see Continuous Data Loads (i.e. Snowpipe) and File Size.
Snowpipe is currently a preview feature; however, accounts that try the service are still billed based on usage.
Viewing the Data Load History for Your Account¶
Users with the ACCOUNTADMIN role can use the Snowflake web interface or SQL to view the credits billed to the account within a specified date range.
To view the credits billed for Snowpipe data loading for your account:
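One way to retrieve this in SQL is with the PIPE_USAGE_HISTORY table function in the Snowflake Information Schema. The 14-day window and the column selection below are illustrative:

```sql
-- Credits consumed by Snowpipe over the past 14 days, per pipe.
-- Run as a user with the ACCOUNTADMIN role (or with the appropriate
-- privileges on the pipes).
select pipe_name,
       sum(credits_used)   as credits_used,
       sum(bytes_inserted) as bytes_inserted,
       sum(files_inserted) as files_inserted
  from table(information_schema.pipe_usage_history(
         date_range_start => dateadd('day', -14, current_date()),
         date_range_end   => current_date()))
 group by pipe_name;
```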
Snowpipe Billing Example¶
The following example illustrates the Snowpipe billing model using a simple use case: loading the catalog_sales table data from the TPC-DS benchmark data set.
- Data files: Approximately 3,000 gzip-compressed CSV files, 1.4 TB total, stored in an external (e.g. AWS S3 or Microsoft Azure) stage.
- Snowflake credit usage: 11
The pipe definition for the load operation is a simple COPY statement:
create pipe tpcds_10tb_catalog_sales_pipe as
  copy into snowpipe_db.public.catalog_sales
  from @snowpipe_db.public.tpcds_10tb_stg/catalog_sales/;
Note that individual load scenarios have different compute resource requirements, resulting in higher or lower credit charges. In general, decrypting data files or performing COPY transformations – particularly transformations from semi-structured data formats such as JSON – have higher compute requirements and result in higher credit usage.
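For example, a pipe that casts fields out of raw JSON during the load (the table, column, and stage names below are hypothetical) performs per-row transformation work on top of the file ingestion itself, and can therefore consume more credits than a plain CSV copy:

```sql
-- Hypothetical pipe: extracting and casting fields from semi-structured
-- JSON during the load adds per-row transformation work, which increases
-- compute requirements relative to a straight copy.
create pipe snowpipe_db.public.orders_json_pipe as
  copy into snowpipe_db.public.orders (order_id, customer, amount)
  from (select $1:order_id::number,
               $1:customer::varchar,
               $1:amount::number(10,2)
          from @snowpipe_db.public.orders_stg)
  file_format = (type = 'json');
```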