Understanding Snowflake Table Structures

All data in Snowflake is stored in database tables, logically structured as collections of columns and rows. To best utilize Snowflake tables, particularly large tables, it is helpful to have an understanding of the physical structure behind the logical structure.

This topic describes micro-partitions and data clustering, two of the principal concepts utilized in Snowflake physical table structures. It also provides guidance for explicitly defining clustering keys for very large tables (in the multi-terabyte range) to help optimize table maintenance and query performance.

Sections in this topic:

  • Micro-partitions
  • Natural Data Clustering in Tables
  • Monitoring Clustering Information in Tables
  • Clustering Keys
  • Table Reclustering
  • Clustering Tasks for Tables

Micro-partitions

Traditional data warehouses rely on static partitioning of large tables to achieve acceptable performance and enable better scaling. In these systems, a partition is a unit of management that is manipulated independently using specialized DDL and syntax; however, static partitioning has a number of well-known limitations, such as user maintenance/overhead and data skew, which can result in disproportionately-sized partitions.

Snowflake has implemented a powerful and unique form of partitioning, called micro-partitioning, that delivers all the advantages of static partitioning without the known limitations, as well as provides additional significant benefits.

What are Micro-partitions?

All data in Snowflake tables is automatically divided into micro-partitions, which are contiguous units of storage. Each micro-partition contains between 50 MB and 500 MB of uncompressed data (actual size in Snowflake is smaller because data is always stored compressed). Groups of rows in tables are mapped into individual micro-partitions, organized in a columnar fashion. This size and structure allows for extremely granular pruning of very large tables, which can consist of millions, or even hundreds of millions, of micro-partitions.

Snowflake automatically gathers metadata about all rows stored in a micro-partition, including:

  • The range of values for each of the columns in the micro-partition.
  • The number of distinct values.
  • Additional properties important for both optimization and efficient query processing.

Note

Micro-partitioning is automatically performed on all Snowflake tables. Tables are transparently partitioned using the ordering that occurs naturally when the data is inserted/loaded.

Benefits of Micro-partitioning

The benefits of Snowflake’s approach to partitioning table data include:

  • In contrast to traditional static partitioning, Snowflake micro-partitions are derived automatically; they don’t need to be explicitly defined up-front or maintained by users.
  • As the name suggests, micro-partitions are small in size (50 to 500 MB, before compression), which enables extremely efficient DML and fine-grained pruning for faster queries.
  • Micro-partitions can overlap in their range of values, which, combined with their uniformly small size, helps prevent skew.
  • Columns are stored independently within micro-partitions (i.e. true columnar storage). This enables efficient scanning of individual columns; only the columns referenced by a query are scanned.
  • Columns are also compressed individually within micro-partitions. Snowflake automatically determines the most efficient compression algorithm for the columns in each micro-partition.

Impact of Micro-partitions

DML

All DML operations (e.g. DELETE, UPDATE, MERGE) take advantage of the underlying micro-partition metadata to facilitate and simplify table maintenance. For example, some operations, such as deleting all rows from a table, are metadata-only operations. In addition, maintenance of micro-partitions for DML occurs in the background, requiring no user intervention.
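
As an illustration, deleting every row from a hypothetical table t1 (name assumed for this sketch) requires no filter and can be handled as a metadata-only operation:

DELETE FROM t1;  -- no predicate: all rows removed via micro-partition metadata, no data rewritten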

Query Pruning

The micro-partition metadata collected transparently by Snowflake enables precise pruning of columns in micro-partitions at query run-time, including columns containing semi-structured data. A query that specifies a filter predicate on a range of values that accesses 10% of the values in the domain should ideally only scan 10% of the micro-partitions.

For example, assume a large table contains one year of historical data with date and hour columns. Assuming uniform distribution of the data, a query targeting a particular hour would ideally scan 1/8760th of the micro-partitions comprising the table and, within those micro-partitions, only the data for the referenced columns; because Snowflake scans partitions by column, an entire partition is not read when a query filters on only one column. In other words, the closer the ratio of scanned micro-partitions and columnar data is to the ratio of the data actually selected, the more efficient the pruning performed on the table.

For time-series data, this level of pruning enables potentially sub-second response times for queries within ranges (i.e. “slices”) as fine-grained as one hour or even less.
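
As a sketch of how this looks in practice, assume a hypothetical events table with event_date and event_hour columns (names assumed for illustration). A query such as the following can be pruned to only those micro-partitions whose metadata indicates they may contain the requested date and hour, and within them only the referenced columns are scanned:

SELECT COUNT(*)
  FROM events
 WHERE event_date = '2016-08-01'  -- pruned using per-partition range metadata for event_date
   AND event_hour = 14;           -- further pruned using metadata for event_hour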

Natural Data Clustering in Tables

Data stored in data warehouses is typically sorted/ordered along natural dimensions (e.g. date and/or geographic regions). This “clustering” is a key factor in query performance, because table data that is not sorted, or is only partially sorted, can degrade query performance, particularly on very large tables.

Snowflake automatically sorts data as it is inserted/loaded into a table. Data with the same values is stored, as much as possible, in the same micro-partition. Snowflake then leverages the sorting information it transparently collects for each table to avoid scanning micro-partitions during queries, significantly accelerating the performance of queries that reference these columns.

The following diagram illustrates a table in Snowflake that is sorted by date:

Logical and physical table structures with natural sorting

The table consists of 24 rows stored across 4 micro-partitions, with the rows divided equally among the micro-partitions. Within each micro-partition, the data is sorted and stored by column, which enables Snowflake to first prune queries by micro-partition and then by column within the remaining micro-partitions.

Note that this diagram is intended only as a small-scale conceptual representation of the natural data clustering that Snowflake utilizes in micro-partitions.

Monitoring Clustering Information in Tables

Snowflake maintains clustering metrics for the micro-partitions in a table, including:

  • The total number of micro-partitions that comprise the table.
  • The number of micro-partitions that overlap with each other (in a specified subset of table columns).
  • The overlap depth of the micro-partitions (in a specified subset of table columns).

To view/monitor the clustering metrics for a table, Snowflake provides the following system functions:

  • SYSTEM$CLUSTERING_RATIO
  • SYSTEM$CLUSTERING_DEPTH

For more details about how these metrics are used by these functions, see Clustering Ratio Example in this topic.
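
For example, assuming a table named t1 with a candidate clustering column c1 (names assumed for this sketch), the metrics can be retrieved as follows:

SELECT SYSTEM$CLUSTERING_RATIO('t1', '(c1)');
SELECT SYSTEM$CLUSTERING_DEPTH('t1', '(c1)');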

What is Clustering Ratio?

The clustering ratio for a table is a number between 0 and 100 that indicates whether the clustering state of the table has improved or deteriorated due to changes to the data in the table. The higher the ratio, the more optimally clustered the table is, with a value of 100 indicating that the table is fully clustered.

Clustering ratios can be used for a variety of purposes, including:

  • Monitoring the clustering “health” of a large table, particularly over time as DML is performed on the table.
  • Determining whether a large table would benefit from explicitly-defined clustering keys.

Note

  • Clustering ratios of 100 are theoretically possible, but are not required to achieve optimal query performance.
  • A table that consists of a single micro-partition or no micro-partitions (i.e. an empty table) always has a clustering ratio of 100.
  • The minimum value for a clustering ratio is 0; negative ratios are always rounded to 0. A negative ratio can occur if the number of overlapping micro-partitions is high relative to the total number of micro-partitions for the table.

Interpreting Clustering Ratios

The clustering ratio for a table is not an absolute or precise measure of whether the table is well-clustered. It is a relative value intended as a guideline for optimizing data storage within a specific table. In other words, clustering ratios for tables should not be compared with each other because every table and data clustering scenario is different. If one table has a higher ratio than a second table, this does not necessarily indicate that the first table is better clustered than the second table.

Ultimately, query performance is the best indicator of how well-clustered a table is:

  • If queries on a table are performing as needed or expected, the table is likely well-clustered and subsequent reclustering may not change the ratio or improve performance.
  • If the clustering ratio for a table drops over time and query performance degrades correspondingly, the table is likely no longer optimally clustered and would benefit from reclustering.

Clustering Ratio Example

The following diagram provides an example of a small table, consisting of 5 micro-partitions, and illustrates how overlap affects clustering ratio:

Example of clustering ratio

As this diagram shows:

  • If the ranges of values in all micro-partitions overlap with each other (i.e. the total number of micro-partitions is 5 and the overlap depth is 5), the clustering ratio is low (e.g. around 30).
  • As the number of overlapping micro-partitions decreases and/or the overlap depth decreases, the clustering ratio improves.
  • When there is no overlap in the ranges of values across all micro-partitions, the micro-partitions are considered to be in a constant state (i.e. they cannot be improved by reclustering) and the table has a clustering ratio of 100. In this case, the table is considered to be fully clustered.

Clustering Keys

For the most part, Snowflake’s natural clustering produces well-clustered data in tables; however, over time, particularly as DML occurs, the data in some table rows may not naturally cluster on desired dimensions. To improve the natural clustering of the underlying micro-partitions, you could sort rows on important columns and re-insert them into the table; however, for very large tables (as defined by the size of the data in the table, not the number of rows), this manual operation might be expensive and cumbersome.

Clustering keys enable you to explicitly cluster the data in a table according to one or more columns in the table.

Important

Clustering keys make filtering in Snowflake more efficient; however, not all tables necessarily benefit from clustering keys. To see performance improvements from clustering keys, a table has to be large enough to reside on more than one micro-partition, and the clustering keys have to provide sufficient filtering to select a subset of these micro-partitions.

In general, tables in the multi-terabyte (TB) size range will see the most benefit from clustering keys, particularly if you perform significant amounts of DML on these tables.

What are Clustering Keys?

Clustering keys are a subset of columns in a table that are explicitly designated for grouping the column data together in micro-partitions. Clustering keys can be defined when creating a table (using the CREATE TABLE command) or afterward (using the ALTER TABLE command). Clustering keys can also be altered or dropped at any time.

Some general indicators that can be used to determine whether to define clustering keys for a very large table include:

  • Queries on the table are running slower than expected or have noticeably degraded over time.
  • The clustering ratio for the table is very low and/or the clustering depth is very high.

Note

If you define or modify the clustering keys for an existing table, the rows in the table are not reorganized until the table is reclustered using the ALTER TABLE command. For more information, see Table Reclustering in this topic.

Benefits of Defining Clustering Keys for Large Tables

Using explicitly-defined columns to cluster data in large tables offers several benefits, including:

  • Grouping similar rows together improves scan efficiency in queries by skipping large amounts of data that does not match filtering predicates.
  • Grouping similar rows together usually enables better column compression than in tables with no clustering keys. This is especially true when other columns are strongly correlated with the clustering keys.
  • Once defined, clustering keys require little or no maintenance.

Strategies for Selecting Clustering Keys

Selecting the right clustering keys can dramatically impact query performance. Analysis of your workload will usually yield some ideal clustering key candidates. For example:

  • If you typically run queries on one column, such as a date column, you can use the column as the clustering key for the table.
  • If you typically query a table by two dimensions, such as application_id and user_id, clustering on those columns can help to improve the query performance for the table.

The number of distinct values (i.e. cardinality) in a clustering key is a critical aspect of selecting a clustering key. It is important to choose a clustering key that has:

  • A large enough number of distinct values to enable effective pruning on the table.
  • A small enough number of distinct values to allow Snowflake to effectively group rows in the same micro-partitions.

A very low cardinality column (e.g. a column containing gender values) would yield only minimal pruning. In contrast, a very high cardinality column (e.g. a column containing timestamp or UUID values) can make clustering expensive to maintain.

Tip

If you are defining multiple clustering keys for a table, the order in which the keys are specified in the CLUSTER BY clause is important. As a general rule, Snowflake recommends ordering the keys from lowest cardinality to highest cardinality.
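
For example, for a hypothetical events table that is typically filtered by region and event date (table and column names assumed for this sketch), the lower-cardinality region column would be listed before the higher-cardinality event_date column:

CREATE OR REPLACE TABLE events (
  region     STRING,
  event_date DATE,
  user_id    NUMBER,
  payload    STRING
)
CLUSTER BY (region, event_date);  -- lowest cardinality (region) first, higher cardinality (event_date) second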

Table Reclustering

Snowflake supports using the ALTER TABLE command with a RECLUSTER clause to manually recluster a table with clustering keys at any time. The command organizes the records for the table based on the clustering keys, so that related records are relocated to the same micro-partition. This DML operation deletes all records to be moved and re-inserts them, grouped on the clustering keys. As with any DML operation, the table is locked while the reclustering is performed.

Tip

As a general rule of thumb and best practice, Snowflake recommends reclustering after performing significant DML on a table with clustering keys. You can use the clustering ratio or clustering depth to measure whether clustering on the table has degraded over time.
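
For example, a simple maintenance flow (assuming a table t1 that has defined clustering keys) might check the clustering ratio after a large batch of DML and recluster only if the ratio has dropped:

SELECT SYSTEM$CLUSTERING_RATIO('t1');  -- uses the table's defined clustering keys
ALTER TABLE t1 RECLUSTER;              -- reorganize rows on the clustering keys if the ratio has degraded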

Performance and Storage Impact of Reclustering

During reclustering, Snowflake groups records that have the same clustering keys in the same micro-partition. This grouping operation can impact the performance of the virtual warehouse used to perform the reclustering.

Tip

Due to the grouping/sorting that Snowflake performs during reclustering, we recommend using a dedicated warehouse of sufficient size to perform reclustering.

There is also a storage cost for using clustering keys. Each time data is reclustered, the rows have to be physically grouped based on the clustering keys, which results in Snowflake generating new micro-partitions for the data. Adding even a small number of rows to a table can cause all micro-partitions that contain those values to be recreated.

This process can create significant data turnover because the original micro-partitions are marked as deleted, but retained in the system to enable Time Travel and Fail-safe. The original micro-partitions are purged only after both the Time Travel retention period and the subsequent Fail-safe retention period have passed, i.e. minimum of 8 days and up to 97 days for extended Time Travel (if you are using Snowflake Enterprise Edition). This can result in increased storage costs. For more information, see Snowflake Time Travel & Fail-safe.

Reclustering Example

Building on the clustering diagram earlier in this topic, this example illustrates how reclustering a table can help reduce scanning of micro-partitions to improve query performance:

Logical table structures after reclustering

As this diagram shows:

  • The table is naturally clustered by date.
  • Before reclustering, the query (in the diagram) requires scanning micro-partitions 1, 2, and 3.
  • After reclustering (on date and id), the same query would need to scan only micro-partitions 1.1 and 2.1.

In addition, after reclustering:

  • Micro-partition 1.1 has reached a constant state (i.e. it cannot be improved by reclustering) and is therefore excluded when computing depth and overlap for future maintenance. In a well-clustered large table, most micro-partitions will fall into this category.
  • Original micro-partitions (1 to 4) are marked as deleted, but not purged from the system; they are retained to enable Time Travel and Fail-safe on the table.

Note

This example illustrates the impact of reclustering on an extremely small scale. For a large table (i.e. consisting of millions of micro-partitions), reclustering can have a significant impact on scanning and, therefore, query performance.

Clustering Tasks for Tables

Calculating the Clustering Ratio for a Table

Use the system function, SYSTEM$CLUSTERING_RATIO, to calculate the clustering ratio for a given table. This function can be run on any column(s) in any table, regardless of whether the table has explicit clustering keys:

  • If a table has explicit clustering keys, the function doesn’t require any input arguments other than the name of the table.
  • If a table doesn’t have explicit clustering keys (or a table has clustering keys, but you want to calculate the ratio on other columns in the table), the function takes the desired column(s) as an additional input argument.

The function also supports specifying a predicate (i.e. WHERE clause) to filter the range of values in the column(s) used to calculate the ratio.

The following simple example illustrates calculating the clustering ratio for a table named t1 using a single column, named c1, in the table. No filtering is performed on the column:

SELECT SYSTEM$CLUSTERING_RATIO('t1', '(c1)');

+---------------------------------------+
| SYSTEM$CLUSTERING_RATIO('T1', '(C1)') |
|---------------------------------------|
|                                  87.6 |
+---------------------------------------+

For more examples, see SYSTEM$CLUSTERING_RATIO.

Defining Clustering Keys for a Table

Clustering keys can be defined when a table is created by appending a CLUSTER BY clause to CREATE TABLE:

CREATE TABLE <name> ... CLUSTER BY ( <col1> [ , <col2> ... ] )

Where each colN is a column in the table. The columns can be of any data type except VARIANT, OBJECT, or ARRAY.

For example:

CREATE OR REPLACE TABLE t1 (c1 DATE, c2 STRING, c3 NUMBER) CLUSTER BY (c1, c2);

SHOW TABLES LIKE 't1';

+---------------------------------+------+---------------+-------------+-------+---------+------------+------+-------+--------+----------------+
| created_on                      | name | database_name | schema_name | kind  | comment | cluster_by | rows | bytes | owner  | retention_time |
|---------------------------------+------+---------------+-------------+-------+---------+------------+------+-------+--------+----------------|
| Mon, 08 Aug 2016 15:22:07 -0700 | T1   | TESTDB        | TESTSCHEMA  | TABLE |         | (c1,c2)    |    0 |     0 | PUBLIC | 1              |
+---------------------------------+------+---------------+-------------+-------+---------+------------+------+-------+--------+----------------+

Important

  • If you define two or more clustering keys for a table, the order of the keys has an impact on how the data is clustered in micro-partitions. In general, Snowflake recommends ordering the keys from lowest to highest cardinality. For more details, see Strategies for Selecting Clustering Keys in this topic.
  • Clustering keys are copied when a table is created using CREATE TABLE ... CLONE.
  • Clustering keys are not propagated when a table is created using CREATE TABLE ... LIKE.

Changing the Clustering Keys for a Table

At any time, you can add clustering keys to an existing table or change the existing clustering keys for a table using ALTER TABLE:

ALTER TABLE <name> CLUSTER BY ( <col1> [ , <col2> ... ] )

For example:

ALTER TABLE t1 CLUSTER BY (c1, c3);

SHOW TABLES LIKE 't1';

+---------------------------------+------+---------------+-------------+-------+---------+------------+------+-------+--------+----------------+
| created_on                      | name | database_name | schema_name | kind  | comment | cluster_by | rows | bytes | owner  | retention_time |
|---------------------------------+------+---------------+-------------+-------+---------+------------+------+-------+--------+----------------|
| Mon, 08 Aug 2016 15:22:07 -0700 | T1   | TESTDB        | TESTSCHEMA  | TABLE |         | (c1,c3)    |    0 |     0 | PUBLIC | 1              |
+---------------------------------+------+---------------+-------------+-------+---------+------------+------+-------+--------+----------------+

Note

Changing the clustering keys for a table automatically takes effect for all rows inserted or modified after the change; however, it does not affect existing records. To reorganize existing records using the new keys, you must recluster the table using ALTER TABLE ... RECLUSTER.

Dropping the Clustering Keys for a Table

At any time, you can drop the clustering keys for a table using ALTER TABLE:

ALTER TABLE <name> DROP CLUSTERING KEY

For example:

ALTER TABLE t1 DROP CLUSTERING KEY;

SHOW TABLES LIKE 't1';

+---------------------------------+------+---------------+-------------+-------+---------+------------+------+-------+--------+----------------+
| created_on                      | name | database_name | schema_name | kind  | comment | cluster_by | rows | bytes | owner  | retention_time |
|---------------------------------+------+---------------+-------------+-------+---------+------------+------+-------+--------+----------------|
| Mon, 08 Aug 2016 15:22:07 -0700 | T1   | TESTDB        | TESTSCHEMA  | TABLE |         |            |    0 |     0 | PUBLIC | 1              |
+---------------------------------+------+---------------+-------------+-------+---------+------------+------+-------+--------+----------------+

Reclustering a Table with Clustering Keys

Use ALTER TABLE with a RECLUSTER clause to manually recluster a table for which clustering keys have been defined:

ALTER TABLE <name> RECLUSTER [ MAX_SIZE = <budget_in_bytes> ] [ WHERE <condition> ]

Where:

MAX_SIZE = budget_in_bytes
Specifies the upper limit on the amount of data (in bytes) in the table to recluster. Reclustering may stop before the limit is reached if there are no more micro-partitions to recluster. If MAX_SIZE is not specified, no upper limit is enforced and Snowflake automatically selects a size that results in the most effective reclustering.
WHERE condition
Specifies a condition or range on which to recluster data in the table.

Note

Reclustering can only be performed on tables that have clustering keys defined. In addition, reclustering requires a virtual warehouse, and impacts the data storage for the table, because the operation reorders and reinserts rows in the table, which results in new micro-partitions being created and the previous micro-partitions moving into Time Travel retention.

For example:

  • To recluster table t1 with no upper limit on the size of the data to recluster:

    ALTER TABLE t1 RECLUSTER;
    
  • To recluster up to 2 GB of the data in micro-partitions that have been selected for reclustering in table t1 (this typically corresponds to rows on which DML has been performed):

    ALTER TABLE t1 RECLUSTER MAX_SIZE=2000000000;
    
  • To recluster up to 100 MB of the data that was inserted into table t1 in the first week of 2016:

    ALTER TABLE t1 RECLUSTER MAX_SIZE=100000000 WHERE CREATE_DATE BETWEEN ('2016-01-01') AND ('2016-01-07');