Understanding Snowflake Table Structures

All data in Snowflake is stored in database tables, logically structured as collections of columns and rows. To best utilize Snowflake tables, particularly large tables, it is helpful to have an understanding of the physical structure behind the logical structure.

This topic describes micro-partitions and data clustering, two of the principal concepts utilized in Snowflake physical table structures. It also provides guidance for explicitly defining clustering keys for very large tables (in the multi-terabyte range) to help optimize table maintenance and query performance.

Traditional data warehouses rely on static partitioning of large tables to achieve acceptable performance and enable better scaling. In these systems, a partition is a unit of management that is manipulated independently using specialized DDL and syntax; however, static partitioning has a number of well-known limitations, such as maintenance overhead and data skew, which can result in disproportionately sized partitions.

Snowflake has implemented a powerful and unique form of partitioning, called micro-partitioning, that delivers all the advantages of static partitioning without the known limitations, as well as providing additional significant benefits.

What are Micro-partitions?

All data in Snowflake tables is automatically divided into micro-partitions, which are contiguous units of storage. Each micro-partition contains between 50 MB and 500 MB of uncompressed data (the actual size in Snowflake is smaller because data is always stored compressed). Groups of rows in tables are mapped into individual micro-partitions, organized in a columnar fashion. This size and structure allows for extremely granular pruning of very large tables, which can comprise millions, or even hundreds of millions, of micro-partitions.

Snowflake tracks metadata about all rows stored in a micro-partition, including:

  • The range of values for each of the columns in the micro-partition.
  • The number of distinct values.
  • Additional properties used for both optimization and efficient query processing.


Micro-partitioning is automatically performed on all Snowflake tables. Tables are transparently partitioned using the ordering that occurs when the data is inserted/loaded.

Benefits of Micro-partitioning

The benefits of Snowflake’s approach to partitioning table data include:

  • In contrast to traditional static partitioning, Snowflake micro-partitions are derived automatically; they don’t need to be explicitly defined up-front or maintained by users.
  • As the name suggests, micro-partitions are small in size (50 to 500 MB, before compression), which enables extremely efficient DML and fine-grained pruning for faster queries.
  • Micro-partitions can overlap in their range of values, which, combined with their uniformly small size, helps prevent skew.
  • Columns are stored independently within micro-partitions (i.e. columnar storage). This enables efficient scanning of individual columns; only the columns referenced by a query are scanned.
  • Columns are also compressed individually within micro-partitions. Snowflake automatically determines the most efficient compression algorithm for the columns in each micro-partition.

Impact of Micro-partitions

DML

All DML operations (e.g. DELETE, UPDATE, MERGE) take advantage of the underlying micro-partition metadata to facilitate and simplify table maintenance. For example, some operations, such as deleting all rows from a table, are metadata-only operations.

Query Pruning

The micro-partition metadata maintained by Snowflake enables precise pruning of columns in micro-partitions at query run-time, including columns containing semi-structured data. In other words, a query that specifies a filter predicate on a range of values that accesses 10% of the values in the range should ideally only scan 10% of the micro-partitions.

For example, assume a large table contains one year of historical data with date and hour columns. Assuming uniform distribution of the data, a query targeting a particular hour would ideally scan 1/8760th of the micro-partitions comprising the table and then only scan the portion of the micro-partitions that contain the data for the hour column; Snowflake uses columnar scanning of partitions so that an entire partition is not scanned if a query only filters by one column.

In other words, the closer the ratio of scanned micro-partitions and columnar data is to the ratio of the data actually selected, the more efficient the pruning performed on the table.

For time-series data, this level of pruning enables potentially sub-second response times for queries within ranges (i.e. “slices”) as fine-grained as one hour or even less.
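As a concrete sketch of such a pruned query (the table and column names here are hypothetical):

SELECT COUNT(*)
FROM sales_history
WHERE sale_date = '2019-06-15'
  AND sale_hour = 9;

Snowflake consults only the range metadata for sale_date and sale_hour when deciding which micro-partitions to scan; micro-partitions whose value ranges exclude these values are skipped entirely.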

Data Clustering

Typically, data stored in tables in data warehouses is sorted/ordered along natural dimensions (e.g. date and/or geographic regions). This “clustering” is a key factor in query performance: table data that is not sorted, or is only partially sorted, can degrade query performance, particularly on very large tables.

As data is loaded into a Snowflake table, Snowflake co-locates column data with the same values in the same micro-partition, if possible. Snowflake then leverages the metadata it transparently maintains for each table to avoid scanning micro-partitions during queries, significantly accelerating the performance of queries that reference these columns.

The following diagram illustrates a Snowflake table, t1, with 4 columns sorted by date:

Logical and physical table structures with natural sorting

The table consists of 24 rows stored across 4 micro-partitions, with the rows divided equally between each micro-partition. Within each micro-partition, the data is sorted and stored by column, which enables Snowflake to perform the following actions for queries on the table:

  1. First prune micro-partitions that are not needed for the query.
  2. Then prune by column within the remaining micro-partitions.

Note that this diagram is intended only as a small-scale conceptual representation of the natural data clustering that Snowflake utilizes in micro-partitions.

Monitoring Clustering Information in Tables

Snowflake maintains clustering metadata for the micro-partitions in a table, including:

  • The total number of micro-partitions that comprise the table.
  • The number of micro-partitions containing values that overlap with each other (in a specified subset of table columns).
  • The depth of the overlapping micro-partitions.

To view/monitor the clustering metadata for a table, Snowflake provides the following system functions:

  • SYSTEM$CLUSTERING_DEPTH
  • SYSTEM$CLUSTERING_INFORMATION
  • SYSTEM$CLUSTERING_RATIO

For more details about how the clustering metadata is used by these functions, see Clustering Ratio Example in this topic.
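For example, assuming a table named t1 and a candidate clustering column c1, the depth and overlap metadata can be inspected with:

SELECT SYSTEM$CLUSTERING_DEPTH('t1', '(c1)');
SELECT SYSTEM$CLUSTERING_INFORMATION('t1', '(c1)');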

What is Clustering Ratio and Depth?

The clustering ratio for a table is a number (between 0 and 100) that indicates whether the clustering state of the table has improved or deteriorated due to changes to the data in the table. The higher the ratio, the more optimally clustered the table is, with a value of 100 indicating that the table is fully clustered.

Clustering depth measures the average depth (greater than 1) of the overlapping micro-partitions for specified columns in a table. The smaller the average depth, the better clustered the table is with regards to the specified columns.

Clustering ratio and depth can be used for a variety of purposes, including:

  • Monitoring the clustering “health” of a large table, particularly over time as DML is performed on the table.
  • Determining whether a large table would benefit from explicitly-defined clustering keys.


Note:

  • Clustering ratios of 100 are theoretically possible, but are not required to achieve optimal query performance.
  • A table that consists of a single micro-partition or no micro-partitions (i.e. an empty table) always has a clustering ratio of 100.
  • The minimum value for a clustering ratio is 0; negative ratios are always rounded to 0. A negative ratio can occur if the number of overlapping micro-partitions is high relative to the total number of micro-partitions for the table.

Interpreting Clustering Ratios

The clustering ratio for a table is not an absolute or precise measure of whether the table is well-clustered. It is a relative value intended as a guideline for optimizing data storage within a specific table. Clustering ratios should not be compared between tables because every table and data clustering scenario is different. In other words, if a table has a higher ratio than another table, it does not necessarily indicate that the first table is better clustered than the second table.

Ultimately, query performance is the best indicator of how well-clustered a table is:

  • If queries on a table are performing as needed or expected, the table is likely well-clustered and subsequent reclustering may not change the ratio or improve performance.
  • If query performance degrades over time and there is a corresponding lowering in the clustering ratio for the table, the table is likely no longer optimally clustered and would benefit from reclustering.

Clustering Ratio Example

The following diagram provides an example of a small table, consisting of 5 micro-partitions with values ranging from A to Z, and illustrates how overlap affects clustering ratio:

Example of clustering ratio

As this diagram shows:

  1. At the beginning, the ranges of values in all the micro-partitions overlap and the clustering ratio is low (30.1).
  2. As the number of overlapping micro-partitions and the overlap depth decrease, the clustering ratio improves (71.4 and 81.9).
  3. When there is no overlap in the range of values across all micro-partitions, the micro-partitions are considered to be in a constant state (i.e. they cannot be improved by reclustering) and the table has a clustering ratio of 100. In this case, the table is considered to be fully clustered.

Clustering Keys

For the most part, Snowflake’s natural clustering produces well-clustered data in tables; however, over time, particularly as DML occurs, the data in some table rows may not naturally cluster on desired dimensions. To improve the natural clustering of the underlying micro-partitions, you could sort rows on important columns and re-insert them into the table; however, for very large tables (as defined by the size of the data in the table, not the number of rows), this manual operation might be expensive and cumbersome.

Clustering keys enable explicitly clustering data in a table according to one or more columns/expressions in the table.


Clustering keys make filtering in Snowflake more efficient; however, clustering keys should not be defined for every table. The size of a table, as well as the query performance for the table, should dictate whether to define clustering keys for the table. In particular, to see performance improvements from clustering keys, a table has to be large enough to reside on many micro-partitions, and the clustering keys have to provide sufficient filtering to select a subset of these micro-partitions.

In general, tables in the multi-terabyte (TB) size range will see the most benefit from clustering keys, particularly if you perform regular/continual DML on these tables.

What are Clustering Keys?

Clustering keys are a subset of columns or expressions on a table that are explicitly designated to co-locate the data in the same micro-partitions. Clustering keys can be defined when creating a table (using the CREATE TABLE command) or afterward (using the ALTER TABLE command). Clustering keys can also be altered or dropped at any time.

Some general indicators that can help determine whether to define clustering keys for a very large table include:

  • Queries on the table are running slower than expected or have noticeably degraded over time.
  • The clustering ratio for the table is very low and the clustering depth is very large.


If you define clustering keys for an existing table (or modify the existing clustering keys for a table), the rows in the table are not reorganized until the table is reclustered. For more information, see Manual Reclustering (in this topic) or Automatic Clustering.

Benefits of Defining Clustering Keys (for Very Large Tables)

Using clustering keys to co-locate similar rows in the same micro-partitions offers several benefits for very large tables, including:

  • Improved scan efficiency in queries by skipping large amounts of data that do not match filtering predicates.
  • Better column compression than in tables with no clustering keys. This is especially true when other columns are strongly correlated with the clustering keys.
  • After the keys have been defined, limited manual maintenance (or no maintenance if Automatic Clustering is enabled for the table).

Strategies for Selecting Clustering Keys

Selecting the right clustering keys can dramatically impact query performance. Analysis of your workload will usually yield some ideal clustering key candidates. For example:

  • If you typically run queries on one column, such as a date column, you can use the column as the clustering key for the table.
  • If you typically query a table by two dimensions, such as application_id and user_id columns, clustering on those columns can help to improve the query performance for the table.
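For instance, clustering on those two dimensions might be defined as follows (the table name is illustrative):

ALTER TABLE app_events CLUSTER BY (application_id, user_id);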

The number of distinct values (i.e. cardinality) in a clustering key is a critical aspect of selecting a clustering key. It is important to choose a clustering key that has:

  • A large enough number of distinct values to enable effective pruning on the table.
  • A small enough number of distinct values to allow Snowflake to effectively group rows in the same micro-partitions.

A column with very low cardinality (e.g. a column containing gender values) would only yield minimal pruning. In contrast, a column with very high cardinality (e.g. a column containing timestamp or UUID values) typically is not a good candidate to use as a clustering key directly.


Note:

  • If you are defining multiple clustering keys for a table, the order in which the keys are specified in the CLUSTER BY clause is important. As a general rule, Snowflake recommends ordering the keys from lowest cardinality to highest cardinality.

  • If you want to use a column with very high cardinality as a clustering key, we recommend defining the key as an expression on the column, rather than on the column directly, to reduce the number of distinct values.

    For example, a fact table has a TIMESTAMP column c_timestamp containing many discrete values (many more than the number of micro-partitions in the table). A clustering key could be defined on the column by casting the values to dates instead of timestamps (e.g. to_date(c_timestamp)). This would reduce the cardinality to the total number of days, which produces much better pruning results.
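A minimal sketch of this approach (the table and column names are illustrative):

CREATE OR REPLACE TABLE fact_events (c_timestamp TIMESTAMP, c_data STRING) CLUSTER BY (TO_DATE(c_timestamp));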

Manual Reclustering

You can use the ALTER TABLE command with a RECLUSTER clause to manually recluster a table with clustering keys at any time. The command uses the clustering keys for the table to organize the table column data, so that related records are relocated to the same micro-partition. This DML operation deletes all records to be moved and re-inserts them, grouped on the clustering keys. As with any standard Snowflake DML operation, this operation locks the table for the duration of the operation.


As a general rule of thumb and best practice, we recommend reclustering after performing significant DML on a table with clustering keys. You can use the clustering ratio or clustering depth to measure whether clustering on the table has degraded due to DML.

Also, after a period of significant DML activity on a table, reclustering may need to be performed more than once to achieve the desired results for the table.

For this reason, as well as other benefits, we recommend using Automatic Clustering instead of manual reclustering.
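When Automatic Clustering is enabled for a table, it can be suspended and resumed per table using ALTER TABLE; for example, for a hypothetical table t1:

ALTER TABLE t1 SUSPEND RECLUSTER;
ALTER TABLE t1 RESUME RECLUSTER;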

Performance and Storage Impact of Reclustering

During reclustering, Snowflake groups records that have the same clustering keys into the same micro-partition. This operation can impact the performance of the virtual warehouse used to perform the reclustering.

There is also a storage cost for reclustering. Each time data is reclustered, the rows are physically grouped based on the clustering keys, which results in Snowflake generating new micro-partitions for the data. Adding even a small number of rows to a table can cause all micro-partitions that contain those values to be recreated.

This process can create significant data turnover because the original micro-partitions are marked as deleted, but retained in the system to enable Time Travel and Fail-safe. The original micro-partitions are purged only after both the Time Travel retention period and the subsequent Fail-safe period have passed (i.e. minimum of 8 days and up to 97 days for extended Time Travel, if you are using Snowflake Enterprise Edition). This can result in increased storage costs. For more information, see Snowflake Time Travel & Fail-safe.


Due to the grouping/sorting that Snowflake performs during reclustering, if you chose to perform manual reclustering, we recommend using a separate, dedicated warehouse and ensuring that the warehouse is of sufficient size.

Reclustering Example

Building on the clustering diagram presented earlier in this topic, this example illustrates how reclustering a table can help reduce scanning of micro-partitions to improve query performance:

Logical table structures after reclustering
  • To start, table t1 is naturally clustered by date across micro-partitions 1-4.
  • The query (in the diagram) requires scanning micro-partitions 1, 2, and 3.
  • date and id are defined as clustering keys and the table is reclustered, creating new micro-partitions (5-8).
  • After reclustering, the same query needs to scan only micro-partitions 5 and 6.

In addition, after reclustering:

  • Micro-partition 5 has reached a constant state (i.e. it cannot be improved by reclustering) and is therefore excluded when computing depth and overlap for future maintenance. In a well-clustered large table, most micro-partitions will fall into this category.
  • The original micro-partitions (1-4) are marked as deleted, but not purged from the system; they are retained for Time Travel and Fail-safe.


This example illustrates the impact of reclustering on an extremely small scale. For a large table (i.e. consisting of millions of micro-partitions), reclustering can have a significant impact on scanning and, therefore, query performance.

Clustering Tasks for Tables

Calculating the Clustering Ratio for a Table

Use the system function, SYSTEM$CLUSTERING_RATIO, to calculate the clustering ratio for a given table. This function can be run on any columns in any table, regardless of whether the table has explicit clustering keys:

  • If a table has explicit clustering keys, the function doesn’t require any input arguments other than the name of the table.
  • If a table doesn’t have explicit clustering keys (or a table has clustering keys, but you want to calculate the ratio on other columns in the table), the function takes the desired column(s) as an additional input argument.

The function also supports specifying a predicate (i.e. WHERE clause) to filter the range of values in the column(s) used to calculate the ratio.

The following simple example illustrates calculating the clustering ratio for a table named t1 using a single column, named c1, in the table. No filtering is performed on the column:

SELECT SYSTEM$CLUSTERING_RATIO('t1', '(c1)');

| SYSTEM$CLUSTERING_RATIO('T1', '(C1)') |
|                                  87.6 |
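The predicate form restricts the calculation to a slice of the data; for example, assuming the same table t1 and column c1:

SELECT SYSTEM$CLUSTERING_RATIO('t1', '(c1)', 'c1 >= 10');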

For more examples, see SYSTEM$CLUSTERING_RATIO.

Defining Clustering Keys for a Table

Clustering keys can be defined when a table is created by appending a CLUSTER BY clause to CREATE TABLE:

CREATE TABLE <name> ... CLUSTER BY ( <expr1> [ , <expr2> ... ] )

Where each clustering key exprN is an expression on the table, which can be of any data type, except VARIANT, OBJECT, or ARRAY. Clustering keys can be:

  • Base columns.
  • Expressions on base columns.
  • Expressions on paths in variant columns.

For example:

-- cluster by base columns
CREATE OR REPLACE TABLE t1 (c1 DATE, c2 STRING, c3 NUMBER) CLUSTER BY (c1, c2);

| created_on                      | name | database_name | schema_name | kind  | comment | cluster_by | rows | bytes | owner  | retention_time |
| Mon, 08 Aug 2016 15:22:07 -0700 | T1   | TESTDB        | TESTSCHEMA  | TABLE |         | (c1,c2)    |    0 |     0 | PUBLIC | 1              |

-- cluster by expressions
CREATE OR REPLACE TABLE t2 (c1 timestamp, c2 STRING, c3 NUMBER) CLUSTER BY (TO_DATE(C1), substring(c2, 0, 10));


|           created_on            | name | database_name | schema_name | kind  | comment |                cluster_by                 | rows | bytes |  owner   | retention_time |
| Fri, 07 Apr 2017 09:46:18 -0700 | T2   | TESTDB        | TESTSCHEMA  | TABLE |         | LINEAR(TO_DATE(C1), SUBSTRING(C2, 0, 10)) | 0    | 0     | PUBLIC   | 1              |

-- cluster by paths in variant columns
CREATE OR REPLACE TABLE T3 (t timestamp, v variant) cluster by (v:"Data":id::number);


|          created_on             | name | database_name | schema_name | kind  | comment |                cluster_by                 | rows | bytes | owner  | retention_time |
| Fri, 23 Jun 2017 00:00:00 -0700 | T3   | TESTDB        | TESTSCHEMA  | TABLE |         | LINEAR(TO_NUMBER(GET_PATH(V, 'Data.id'))) | 0    | 0     | PUBLIC | 1              |


Note:

  • If you define two or more clustering keys for a table, the order of the keys has an impact on how the data is clustered in micro-partitions.

    For more details, see Strategies for Selecting Clustering Keys in this topic.

  • Clustering keys are copied when a table is created using CREATE TABLE … CLONE.

  • Clustering keys are not propagated when a table is created using CREATE TABLE … LIKE.

  • When specifying clustering keys on variant columns, a path together with the target type must be provided. Clustering keys directly on top of variant columns are not supported.

Changing the Clustering Keys for a Table

At any time, you can add clustering keys to an existing table or change the existing clustering keys for a table using ALTER TABLE:

ALTER TABLE <name> CLUSTER BY ( <expr1> [ , <expr2> ... ] )

For example:

-- cluster by base columns
ALTER TABLE t1 CLUSTER BY (c1, c3);

| created_on                      | name | database_name | schema_name | kind  | comment | cluster_by | rows | bytes | owner  | retention_time |
| Mon, 08 Aug 2016 15:22:07 -0700 | T1   | TESTDB        | TESTSCHEMA  | TABLE |         | (c1,c3)    |    0 |     0 | PUBLIC | 1              |

-- cluster by expressions
ALTER TABLE t2 CLUSTER BY (SUBSTRING(c2, 5, 15), TO_DATE(c1));
|           created_on            | name | database_name | schema_name | kind  | comment |                cluster_by                 | rows | bytes | owner  | retention_time |
| Fri, 07 Apr 2017 09:46:18 -0700 | T2   | TESTDB        | TESTSCHEMA  | TABLE |         | LINEAR(SUBSTRING(C2, 5, 15), TO_DATE(C1)) | 1    | 1024  | PUBLIC | 1              |

-- cluster by paths in variant columns
ALTER TABLE T3 CLUSTER BY (v:"Data":name::string, v:"Data":id::number);

|           created_on            | name | database_name | schema_name | kind  | comment |                                  cluster_by                                  | rows | bytes | owner  | retention_time |
| Fri, 09 Jun 2017 11:29:49 -0700 | T3   | TESTDB        | TESTSCHEMA  | TABLE |         | LINEAR(TO_CHAR(GET_PATH(V, 'Data.name')), TO_NUMBER(GET_PATH(V, 'Data.id'))) | 0    | 0     | PUBLIC | 1              |


Note:

  • When adding clustering keys to a table already populated with data, not all expressions can be specified as clustering keys. You can check whether a specific function is supported using:

    show functions like 'function_name';

    The SHOW FUNCTIONS output includes a column, valid_for_clustering, at the end of the output. This column displays whether the function can be used in a clustering key for a populated table.

  • Changing the clustering keys for a table does not affect existing records. To reorganize existing records using the new keys, the table needs to be reclustered using ALTER TABLE … RECLUSTER.

Dropping the Clustering Keys for a Table

At any time, you can drop the clustering keys for a table using ALTER TABLE:

ALTER TABLE <name> DROP CLUSTERING KEY

For example:

ALTER TABLE t1 DROP CLUSTERING KEY;

| created_on                      | name | database_name | schema_name | kind  | comment | cluster_by | rows | bytes | owner  | retention_time |
| Mon, 08 Aug 2016 15:22:07 -0700 | T1   | TESTDB        | TESTSCHEMA  | TABLE |         |            |    0 |     0 | PUBLIC | 1              |

Manually Reclustering a Table

Use ALTER TABLE with a RECLUSTER clause to manually recluster a table for which clustering keys have been defined. You can use a WHERE clause to specify a condition or range on which to recluster data in the table.

For example:

  • To recluster table t1:

    ALTER TABLE t1 RECLUSTER;

    This example automatically picks the best budget to use to recluster the table, based on the resources available in the warehouse used to execute this command. The larger the warehouse, the more budget is allocated to the recluster command, which results in more effective reclustering.

  • To recluster data that was inserted into table t1 in the first week of 2016:

    ALTER TABLE t1 RECLUSTER WHERE CREATE_DATE BETWEEN ('2016-01-01') AND ('2016-01-07');


Reclustering can only be performed on tables that have clustering keys defined.

In addition, manual reclustering requires a virtual warehouse, and impacts the data storage for the table, because the operation reorders and reinserts rows in the table, which results in new micro-partitions being created and the previous micro-partitions moving into Time Travel retention.

For this reason, as well as other benefits, we recommend using Automatic Clustering instead of manual reclustering.