Returns the Space-Saving summary at the end of aggregation. (For more information about the Space-Saving summary, see Estimating Frequent Values.)
The function APPROX_TOP_K discards its internal, intermediate state
when it returns the final frequency estimates. However, in certain advanced
use cases, such as estimating frequent values incrementally during bulk loading,
you might want to keep the intermediate state, in which case you would use
APPROX_TOP_K_ACCUMULATE instead of APPROX_TOP_K.
In contrast to APPROX_TOP_K, APPROX_TOP_K_ACCUMULATE does not return frequency estimates for items. Instead, it returns the algorithm state itself. The intermediate state can later be:
Combined (i.e. merged) with intermediate states from separate but related batches of data.
Processed by other functions that operate directly on the intermediate state, for example, APPROX_TOP_K_ESTIMATE. (For an example, see the Examples section below.)
Exported to external tools.
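The division of labor among these functions can be illustrated with a toy Space-Saving summary in Python. This is a hedged sketch only, not Snowflake's actual algorithm or API; the functions accumulate, combine, and estimate are illustrative stand-ins for APPROX_TOP_K_ACCUMULATE, APPROX_TOP_K_COMBINE, and APPROX_TOP_K_ESTIMATE.

```python
from collections import Counter

def _space_saving_update(state, item, max_counters):
    # One Space-Saving step: count the item if it is already tracked or a
    # counter slot is free; otherwise evict the smallest counter and let
    # the new item inherit (and increment) its count.
    if item in state or len(state) < max_counters:
        state[item] = state.get(item, 0) + 1
    else:
        victim = min(state, key=state.get)
        state[item] = state.pop(victim) + 1

def accumulate(values, max_counters):
    # Stand-in for APPROX_TOP_K_ACCUMULATE: return the summary state
    # itself rather than a final answer.
    state = {}
    for v in values:
        _space_saving_update(state, v, max_counters)
    return state

def combine(states, max_counters):
    # Stand-in for APPROX_TOP_K_COMBINE: merge states by summing counts,
    # keeping only the largest max_counters entries.
    merged = Counter()
    for s in states:
        merged.update(s)
    return dict(merged.most_common(max_counters))

def estimate(state, k):
    # Stand-in for APPROX_TOP_K_ESTIMATE: the k most frequent values.
    return sorted(state.items(), key=lambda kv: -kv[1])[:k]

# Mirrors the worked example below: one batch with values 1-8 and a
# second batch with values 5-12, summarized separately and then merged.
s1 = accumulate(range(1, 9), 50)
s2 = accumulate(range(5, 13), 50)
top = estimate(combine([s1, s2], 50), 4)
print(top)  # values 5, 6, 7, 8, each with a count of 2
```

Because each batch is reduced to a small state before merging, the batches can be summarized independently (for example, on different loads) and combined later.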
APPROX_TOP_K_ACCUMULATE( <expr> , <counters> )
expr: The expression (e.g. column name) for which you want to find the most common values.
counters: The maximum number of distinct values that can be tracked at a time during the estimation process. For example, if counters is set to 100000, then the algorithm tracks 100,000 distinct values, attempting to keep the 100,000 most frequent values.
The maximum number of counters is 100000.
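To see why the number of counters matters, here is a toy Python sketch of the Space-Saving behavior (an illustration under stated assumptions, not Snowflake's implementation): when counters is at least the number of distinct values, the counts are exact; when it is smaller, evicted values pass their counts to newcomers and the results become approximate.

```python
def space_saving(values, max_counters):
    # Toy Space-Saving summary: track at most max_counters distinct
    # values; an untracked value evicts the smallest counter and
    # inherits its count (the source of overestimation).
    state = {}
    for v in values:
        if v in state or len(state) < max_counters:
            state[v] = state.get(v, 0) + 1
        else:
            victim = min(state, key=state.get)
            state[v] = state.pop(victim) + 1
    return state

# 21 distinct values: one heavy hitter plus 20 rare values.
data = ['a'] * 100 + [f'rare{i}' for i in range(20)]

exact = space_saving(data, 50)   # 50 counters >= 21 distinct values
approx = space_saving(data, 5)   # only 5 counters for 21 distinct values

print(exact['a'])   # 100: exact, because nothing was ever evicted
print(approx['a'])  # 100: the heavy hitter survives eviction...
print(max(approx[k] for k in approx if k != 'a'))  # ...but surviving rare values are overcounted (> 1)
```

This is why the example below deliberately uses more counters than distinct values: it makes the output deterministic and exact.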
This example shows how to use the three related functions APPROX_TOP_K_ACCUMULATE, APPROX_TOP_K_COMBINE, and APPROX_TOP_K_ESTIMATE.
This example uses more counters than distinct data values in order to get consistent results. In real-world applications, the number of distinct values is usually larger than the number of counters, so you will get approximations that can vary.
This example generates one table with 8 rows that have the values 1 through 8, and a second table with 8 rows that have the values 5 through 12. Thus the most frequent values in the union of the two tables are 5 through 8, each of which has a count of 2.
Create a simple table and data:

-- Create a sequence to use to generate values for the table.
CREATE OR REPLACE SEQUENCE seq91;
CREATE OR REPLACE TABLE sequence_demo (c1 INTEGER DEFAULT seq91.nextval, dummy SMALLINT);
INSERT INTO sequence_demo (dummy) VALUES (0);
-- Double the number of rows a few times, until there are 8 rows:
INSERT INTO sequence_demo (dummy) SELECT dummy FROM sequence_demo;
INSERT INTO sequence_demo (dummy) SELECT dummy FROM sequence_demo;
INSERT INTO sequence_demo (dummy) SELECT dummy FROM sequence_demo;
Create a table that contains the “state” that represents the current approximate Top K information for the table named sequence_demo:

CREATE OR REPLACE TABLE resultstate1 AS (
  SELECT approx_top_k_accumulate(c1, 50) AS rs1
    FROM sequence_demo);
Now create a second table and add data. (In a more realistic situation, the user could have loaded more data into the first table and divided the data into non-overlapping sets based on the time that the data was loaded.)

CREATE OR REPLACE TABLE test_table2 (c1 INTEGER);
-- Insert data.
INSERT INTO test_table2 (c1) SELECT c1 + 4 FROM sequence_demo;
Get the “state” information for just the new data:

CREATE OR REPLACE TABLE resultstate2 AS (
  SELECT approx_top_k_accumulate(c1, 50) AS rs1
    FROM test_table2);
Combine the “state” information for the two batches of rows:

CREATE OR REPLACE TABLE combined_resultstate (c1) AS
  SELECT approx_top_k_combine(rs1) AS apc1
    FROM (SELECT rs1 FROM resultstate1
          UNION ALL
          SELECT rs1 FROM resultstate2);
Get the approximate Top K values of the combined set of rows:

SELECT approx_top_k_estimate(c1, 4) FROM combined_resultstate;
Output:

+------------------------------+
| APPROX_TOP_K_ESTIMATE(C1, 4) |
|------------------------------|
| [                            |
|   [                          |
|     5,                       |
|     2                        |
|   ],                         |
|   [                          |
|     6,                       |
|     2                        |
|   ],                         |
|   [                          |
|     7,                       |
|     2                        |
|   ],                         |
|   [                          |
|     8,                       |
|     2                        |
|   ]                          |
| ]                            |
+------------------------------+