kwarner

Reputation: 57

How to iterate over all columns in a table using Snowflake Javascript Stored Procedure or Function?

I have a table in Snowflake with 100+ columns and I am trying to get a count of all distinct values in each column and ultimately concatenate all of the counts for each column into one table. If I were to do it on just one column it would be something like:

SELECT "AGE", COUNT(*) AS "Frequency"
FROM 
    db.schema.tablename
WHERE 
    "SURVEYDATE" < '2019-07-29'
GROUP BY
    "AGE";

I know this would be fairly trivial to do in Python (perhaps I should just do it in PySpark; I'm open to recommendations), but for ease of use for my team and better performance on 300 million rows, I would like to use Snowflake's JavaScript procedural language to do something like:

create or replace procedure column_counts(TABLE_NAME varchar)
returns array
language javascript
as
$$
var columns = []; // get the list of column names, e.g. from INFORMATION_SCHEMA
var results_array = [];

for (var i = 0; i < columns.length; i++) {
    // This returns a table of all distinct values in that column and their counts
    var col_count = snowflake.createStatement({
        sqlText: 'SELECT "' + columns[i] + '", COUNT(*) AS "Frequency" FROM ' + TABLE_NAME +
                 ' WHERE "SURVEYDATE" < \'2019-07-29\' GROUP BY "' + columns[i] + '"'
    }).execute();
    // I then want an array like [column_name[0...i], distinct_value[0....n], frequency]
    results_array.push([columns[i], col_count]);
}
return results_array;
$$
;
CALL column_counts('db.schema.tablename');

I am still pretty new to Snowflake's procedural language and to Snowflake as a whole, so I am definitely open to recommendations on how to do this best, in a way that is repeatable for the new tables that come in each month.

Upvotes: 2

Views: 7116

Answers (1)

Lukasz Szozda

Reputation: 175726

It is possible without any procedural code, for instance by using JSON:

WITH cte AS ( -- here goes the table/query/view
  SELECT TOP 100 OBJECT_CONSTRUCT(*) AS json_payload
  FROM SNOWFLAKE_SAMPLE_DATA.TPCH_SF1.ORDERS
)
SELECT f.KEY,
       COUNT(DISTINCT f."VALUE") AS frequency,
       LISTAGG(DISTINCT f."VALUE", ',') AS distinct_values  -- debug
FROM cte
, LATERAL FLATTEN (input => json_payload) f
-- WHERE f.KEY IN ('column_name1', 'column_name2', ...) -- only specific columns
GROUP BY f.KEY;

Output:

+-----------------+-----------+------------------------------------------------+
|       KEY       | FREQUENCY |                DISTINCT_VALUES                 |
+-----------------+-----------+------------------------------------------------+
| O_ORDERPRIORITY |         5 | 2-HIGH,1-URGENT,5-LOW,4-NOT SPECIFIED,3-MEDIUM |
| O_ORDERSTATUS   |         3 | P,O,F                                          |
| O_SHIPPRIORITY  |         1 | 0                                              |
| ...             |       ... | ....                                           |
+-----------------+-----------+------------------------------------------------+

How it works:

  1. Generate a JSON object for each row using OBJECT_CONSTRUCT(*)

  2. Flatten the JSON into key/value pairs (illustrated in the sketch below)

  3. Group by key and apply the desired aggregation function: COUNT / COUNT(DISTINCT) / LISTAGG / MIN / MAX / ...
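
To see the intermediate key/value pairs that steps 1 and 2 produce, you can flatten a single row. A minimal sketch against the same sample table (illustrative only; the keys and values you get depend on the data):

SELECT f.KEY, f."VALUE"
FROM (
  SELECT OBJECT_CONSTRUCT(*) AS json_payload
  FROM SNOWFLAKE_SAMPLE_DATA.TPCH_SF1.ORDERS
  LIMIT 1
) t
, LATERAL FLATTEN (input => t.json_payload) f;

Each result row is one (column name, value) pair taken from that single source row.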


A version that provides the distribution per column/value:

WITH cte AS (
  SELECT TOP 100 OBJECT_CONSTRUCT(*) AS json_payload
  FROM SNOWFLAKE_SAMPLE_DATA.TPCH_SF1.ORDERS
)
SELECT f.KEY, f."VALUE", COUNT(*) AS frequency
FROM cte
, LATERAL FLATTEN (input => json_payload) f
-- WHERE f.KEY IN ('column_name1', 'column_name2', ...) -- only specific columns
GROUP BY f.KEY, f."VALUE"
ORDER BY f.KEY, f."VALUE";
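
Since no column names appear in the query, reusing it for a new table each month only means changing the FROM clause. A minimal sketch of one way to parameterize that, assuming the target table name is supplied through a session variable (db.schema.monthly_table below is just a placeholder), is IDENTIFIER():

SET target_table = 'db.schema.monthly_table';

WITH cte AS (
  SELECT OBJECT_CONSTRUCT(*) AS json_payload
  FROM IDENTIFIER($target_table)
)
SELECT f.KEY, f."VALUE", COUNT(*) AS frequency
FROM cte
, LATERAL FLATTEN (input => json_payload) f
GROUP BY f.KEY, f."VALUE"
ORDER BY f.KEY, f."VALUE";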

Upvotes: 3
