Account Usage

Snowflake provides data dictionary metadata, as well as historical usage data, for your account via a shared database named SNOWFLAKE.

What is the SNOWFLAKE Database?

SNOWFLAKE is a system-defined, read-only database provided by Snowflake. The database is automatically imported into each account from a share named ACCOUNT_USAGE. In this way, Snowflake utilizes Data Sharing to provide metadata and other usage metrics for the objects in your account.

Account Usage Schemas

The SNOWFLAKE database contains two schemas:

ACCOUNT_USAGE:

Set of views that display object metadata and usage metrics for your account.

READER_ACCOUNT_USAGE:

Set of views that display object metadata and usage metrics across all the reader accounts you’ve created as a Data Sharing provider. If you have not created any reader accounts, these views are empty.

For the most part, these views are a subset of the views in ACCOUNT_USAGE, with each view containing an additional ACCOUNT column for filtering results by reader account.

The views in both schemas can be queried like any other views in Snowflake.
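
For example (a minimal sketch: the first query simply selects from an ACCOUNT_USAGE view; the second relies on the ACCOUNT column described above and returns rows only if you have created reader accounts):

-- Object metadata: list the databases recorded for the account
select * from snowflake.account_usage.databases;

-- Usage metrics per reader account, grouped by the ACCOUNT column
select account,
       sum(credits_used) as total_credits_used
from snowflake.reader_account_usage.warehouse_metering_history
group by 1
order by 2 desc;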

Note

By default, only account administrators (users with the ACCOUNTADMIN role) can access the SNOWFLAKE database and schemas and query the views; however, privileges on these objects can be granted to other roles in your account so that additional users can access them.
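
For example, an account administrator could grant access by granting the imported privileges on the shared database to another role (a sketch; SYSADMIN is just an illustrative role name):

use role accountadmin;

-- Allow users with the SYSADMIN role to query the SNOWFLAKE database
grant imported privileges on database snowflake to role sysadmin;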

Historical Data Retention

Certain account usage views provide historical usage metrics. For these views, the retention period is 1 year; data older than 1 year is not returned.

For more details about these views, see the list of views (in this topic).
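
For example, because WAREHOUSE_METERING_HISTORY retains a full year of data, credit usage can be rolled up by month across the entire retention window (a sketch based on the same view used in the Warehouse Credit Usage examples later in this topic; no date filter is needed because the view only returns the retained year of history):

select date_trunc(month, start_time) as usage_month,
       warehouse_name,
       sum(credits_used) as total_credits_used
from snowflake.account_usage.warehouse_metering_history
group by 1,2
order by 1,2;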

Data Latency

The account usage views have some natural latency, due to the process of extracting the usage data from Snowflake’s internal metadata store. However, the views have been designed to minimize the impact of this latency, based on the type of data displayed in the views:

  • For Runtime views, which display data that changes frequently/continually, the latency is limited to approximately 1 minute.
  • For Object and Analytic views, which display data that does not tend to change as frequently, the latency is a maximum of 1 hour.

For more details about each view, see the list of views (in this topic).

Tip

If the latency for a given account usage view is a concern, consider using the corresponding view/table function in the Information Schema. Information Schema views/table functions do not have any latency, but they have shorter retention periods (for historical data) than the account usage views.
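
For example, recent query activity can be retrieved without latency through the QUERY_HISTORY table function in the Information Schema (a sketch; run it with a database in your account in use so that information_schema resolves, and note that the RESULT_LIMIT argument only caps the number of rows returned, while the function covers a much shorter window of history than the account usage view):

-- Low-latency counterpart to the account usage QUERY_HISTORY view
select query_text, start_time, execution_time
from table(information_schema.query_history(result_limit => 100))
order by start_time desc;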

Account Usage Views

The SNOWFLAKE database includes the following views; the table below lists the schema (or schemas) in which each view is available:

View | Type | ACCOUNT_USAGE or READER_ACCOUNT_USAGE | Latency [1] | Notes
COLUMNS | Object | ACCOUNT_USAGE | 1 hour |
DATABASES | Object | ACCOUNT_USAGE | 1 hour |
DATABASE_STORAGE_USAGE_HISTORY | Analytic | ACCOUNT_USAGE | 1 hour | Historical data retained for 1 year.
FILE_FORMATS | Object | ACCOUNT_USAGE | 1 hour |
FUNCTIONS | Object | ACCOUNT_USAGE | 1 hour |
LOAD_HISTORY | Analytic | ACCOUNT_USAGE | 1 hour | Historical data retained for 1 year.
LOGIN_HISTORY | Runtime | Both | 1 minute | Historical data retained for 1 year.
PIPE_USAGE_HISTORY | Analytic | ACCOUNT_USAGE | 1 hour | Historical data retained for 1 year.
QUERY_HISTORY | Runtime | ACCOUNT_USAGE | 1 minute | Historical data retained for 1 year.
REFERENTIAL_CONSTRAINTS | Object | ACCOUNT_USAGE | 1 hour |
RESOURCE_MONITORS | Object | READER_ACCOUNT_USAGE | 1 hour |
SCHEMATA | Object | ACCOUNT_USAGE | 1 hour |
SEQUENCES | Object | ACCOUNT_USAGE | 1 hour |
STAGES | Object | ACCOUNT_USAGE | 1 hour |
STAGE_STORAGE_USAGE_HISTORY | Analytic | ACCOUNT_USAGE | 1 hour | Historical data retained for 1 year.
STORAGE_USAGE | Analytic | Both | 1 hour | Combined usage across all database tables and internal stages. Historical data retained for 1 year.
TABLES | Object | ACCOUNT_USAGE | 1 hour |
TABLE_CONSTRAINTS | Object | ACCOUNT_USAGE | 1 hour |
TABLE_STORAGE_METRICS | Object | ACCOUNT_USAGE | 1 hour |
VIEWS | Object | ACCOUNT_USAGE | 1 hour |
WAREHOUSE_METERING_HISTORY | Analytic | Both | 1 hour | Historical data retained for 1 year.

[1] Latency times are approximate; in many instances, the actual latency may be less.

Examples

This section provides examples of some typical/useful queries using the views in ACCOUNT_USAGE.

Note

These examples assume that an appropriate role is in use and that the SNOWFLAKE database and ACCOUNT_USAGE schema are in use for the current session. If not, execute the following commands before running the queries in the examples:

use role <role_name>;

use schema snowflake.account_usage;

User Login Metrics

Average number of seconds between failed login attempts by user (month-to-date):

select user_name,
       count(*) as failed_logins,
       avg(seconds_between_login_attempts) as average_seconds_between_login_attempts
from (
      select user_name,
             timediff(seconds, event_timestamp, lead(event_timestamp)
                 over(partition by user_name order by event_timestamp)) as seconds_between_login_attempts
      from login_history
      where event_timestamp > date_trunc(month, current_date)
      and is_success = 'NO'
     )
group by 1
order by 3;

Failed logins by user (month-to-date):

select user_name,
       sum(iff(is_success = 'NO', 1, 0)) as failed_logins,
       count(*) as logins,
       sum(iff(is_success = 'NO', 1, 0)) / nullif(count(*), 0) as login_failure_rate
from login_history
where event_timestamp > date_trunc(month, current_date)
group by 1
order by 4 desc;

Failed logins by user and connecting client (month-to-date):

select reported_client_type,
       user_name,
       sum(iff(is_success = 'NO', 1, 0)) as failed_logins,
       count(*) as logins,
       sum(iff(is_success = 'NO', 1, 0)) / nullif(count(*), 0) as login_failure_rate
from login_history
where event_timestamp > date_trunc(month, current_date)
group by 1,2
order by 5 desc;

Warehouse Credit Usage

Credits used by each warehouse in your account (month-to-date):

select warehouse_name,
       sum(credits_used) as total_credits_used
from warehouse_metering_history
where start_time >= date_trunc(month, current_date)
group by 1
order by 2 desc;

Credits used over time by each warehouse in your account (month-to-date):

select start_time::date as usage_date,
       warehouse_name,
       sum(credits_used) as total_credits_used
from warehouse_metering_history
where start_time >= date_trunc(month, current_date)
group by 1,2
order by 2,1;

Data Storage Usage

Billable terabytes stored in your account over time:

select date_trunc(month, usage_date) as usage_month
  , avg(storage_bytes + stage_bytes + failsafe_bytes) / power(1024, 4) as billable_tb
from storage_usage
group by 1
order by 1;

User Query Totals and Execution Times

Total jobs executed in your account (month-to-date):

select count(*) as number_of_jobs
from query_history
where start_time >= date_trunc(month, current_date);

Total jobs executed by each warehouse in your account (month-to-date):

select warehouse_name,
       count(*) as number_of_jobs
from query_history
where start_time >= date_trunc(month, current_date)
group by 1
order by 2 desc;

Average query execution time by user (month-to-date):

select user_name,
       avg(execution_time) as average_execution_time
from query_history
where start_time >= date_trunc(month, current_date)
group by 1
order by 2 desc;

Average query execution time by query type and warehouse size (month-to-date):

select query_type,
       warehouse_size,
       avg(execution_time) as average_execution_time
from query_history
where start_time >= date_trunc(month, current_date)
group by 1,2
order by 3 desc;