Pranasas

Reputation: 596

How to count different values into different rows in SQL efficiently?

Problem:

Say there is a simple (yet big) table foods

id   name 
--   -----------  
01   ginger beer
02   white wine
03   red wine
04   ginger wine

I'd like to count how many entries match specific hardcoded patterns, say contain the word 'ginger' (LIKE '%ginger%') or 'wine' (LIKE '%wine%'), or whatever else, and write these counts into rows alongside comments. The result I'm looking for is the following

comment           total 
---------------   -----  
contains ginger   2
for wine lovers   3

Solution 1 (good format but inefficient):

It is possible to use UNION ALL and construct the following

SELECT * FROM
(
  (
    SELECT
      'contains ginger' AS comment,
      sum((name LIKE '%ginger%')::INT) AS total
    FROM foods
  )
  UNION ALL
  (
    SELECT
      'for wine lovers' AS comment,
      sum((name LIKE '%wine%')::INT) AS total
    FROM foods
  )
)

Apparently it works much like executing multiple separate queries and stitching the results together afterwards, so it is very inefficient.

Solution 2 (efficient but bad format):

The following is several times faster than the previous solution

SELECT
  sum((name LIKE '%ginger%')::INT) AS contains_ginger,
  sum((name LIKE '%wine%')::INT) AS for_wine_lovers
FROM foods

And the result is

contains_ginger   for_wine_lovers 
---------------   ---------------  
2                 3

So it is definitely possible to get the same information much faster, but in the wrong format...
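Both forms are easy to sanity-check on a tiny dataset. Below is a minimal sketch using SQLite via Python's sqlite3, with the sample foods table built in memory (in Redshift you need the `::INT` cast, but in SQLite a LIKE expression already evaluates to 0/1, so it can be summed directly):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE foods (id INTEGER, name TEXT);
    INSERT INTO foods VALUES
        (1, 'ginger beer'), (2, 'white wine'),
        (3, 'red wine'),    (4, 'ginger wine');
""")

# Solution 2: one scan over foods, wide result (conditional aggregation).
wide = conn.execute("""
    SELECT SUM(name LIKE '%ginger%') AS contains_ginger,
           SUM(name LIKE '%wine%')   AS for_wine_lovers
    FROM foods
""").fetchone()
print(wide)  # (2, 3)

# Solution 1: UNION ALL, tall result -- same numbers, but two scans.
tall = conn.execute("""
    SELECT 'contains ginger' AS comment, SUM(name LIKE '%ginger%') AS total FROM foods
    UNION ALL
    SELECT 'for wine lovers', SUM(name LIKE '%wine%') FROM foods
""").fetchall()
print(sorted(tall))  # [('contains ginger', 2), ('for wine lovers', 3)]
```

Both queries return the same counts; the difference is only in shape (one row of many columns vs. many rows of one count each) and in how many passes over the table are needed.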

Discussion:

What is the best overall approach? What should I do to get the result I want efficiently and in the preferred format? Or is it really impossible?

By the way, I am writing this for Redshift (based on PostgreSQL).

Thanks.

Upvotes: 9

Views: 2774

Answers (12)

Haleemur Ali

Reputation: 28303

option 1: manually reshape

CREATE TEMPORARY TABLE wide AS (
  SELECT
    sum((name LIKE '%ginger%')::INT) AS contains_ginger,
    sum((name LIKE '%wine%')::INT) AS for_wine_lovers
    ...
  FROM foods
);

SELECT
  'contains ginger', contains_ginger FROM wide

UNION ALL
SELECT 
  'for wine lovers', for_wine_lovers FROM wide

UNION ALL
...;

option 2: create a categories table & use a join

-- not sure if redshift supports values, hence I'm using the union all to build the table
WITH categories (category_label, food_part) AS (
    SELECT 'contains ginger', 'ginger'
    union all
    SELECT 'for wine lovers', 'wine'
    ...
)
SELECT
categories.category_label, COUNT(foods.id) -- not COUNT(*), so unmatched categories count as 0
FROM categories
LEFT JOIN foods ON foods.name LIKE ('%' || categories.food_part || '%')
GROUP BY 1

Since you consider your solution 2 fast enough, option 1 should work for you.

Option 2 should also be fairly efficient, and it is much easier to write and extend; as an added bonus, this query will let you know if no foods exist in a given category.
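Option 2 can be checked end to end on a small sample. A sketch using SQLite via Python (the `||` concatenation and LEFT JOIN behave the same way; 'bread basket' is an invented category added only to show the zero-count case):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE foods (id INTEGER, name TEXT);
    INSERT INTO foods VALUES
        (1, 'ginger beer'), (2, 'white wine'),
        (3, 'red wine'),    (4, 'ginger wine');
""")

rows = conn.execute("""
    WITH categories (category_label, food_part) AS (
        SELECT 'contains ginger', 'ginger'
        UNION ALL SELECT 'for wine lovers', 'wine'
        UNION ALL SELECT 'bread basket', 'bread'   -- matches nothing
    )
    -- COUNT(foods.id), not COUNT(*): the LEFT JOIN still emits one row
    -- for an unmatched category, but its foods.id is NULL and isn't counted
    SELECT categories.category_label, COUNT(foods.id)
    FROM categories
    LEFT JOIN foods ON foods.name LIKE ('%' || categories.food_part || '%')
    GROUP BY 1
""").fetchall()
print(sorted(rows))
# [('bread basket', 0), ('contains ginger', 2), ('for wine lovers', 3)]
```

The join approach scales to new patterns by adding rows to the categories CTE rather than editing the query body.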

Option 3: Reshape & redistribute your data to better match the grouping keys.

You could also pre-process your dataset if query execution time is very important. How much this helps depends on your data volume and distribution. Do you only have a few hard-coded categories, or will they be searched dynamically from some sort of interface?

For example:

If the dataset were reshaped like this:

content   id 
--------  --
ginger    01
ginger    04
beer      01
white     02
wine      02 
wine      04
wine      03

Then you could shard & distribute on content, and each instance could execute that part of the aggregation in parallel.

Here an equivalent query might look like this:

WITH content_count AS (
  SELECT content, COUNT(*) total
  FROM reshaped_food_table 
  GROUP BY 1
)
SELECT
    CASE content 
      WHEN 'ginger' THEN 'contains ginger'
      WHEN 'wine' THEN 'for wine lovers'
      ELSE 'other' 
    END category
  , total
FROM content_count
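The pre-aggregation over the reshaped table can be sketched in SQLite via Python (a hypothetical reshaped_food_table built from the sample data; column names as in the example above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE reshaped_food_table (content TEXT, id INTEGER);
    INSERT INTO reshaped_food_table VALUES
        ('ginger', 1), ('ginger', 4), ('beer', 1), ('white', 2),
        ('wine', 2), ('wine', 4), ('wine', 3);
""")

# Aggregate once per content word, then map words to category labels.
rows = conn.execute("""
    WITH content_count AS (
        SELECT content, COUNT(*) AS total
        FROM reshaped_food_table
        GROUP BY 1
    )
    SELECT CASE content
             WHEN 'ginger' THEN 'contains ginger'
             WHEN 'wine'   THEN 'for wine lovers'
             ELSE 'other'
           END AS category,
           total
    FROM content_count
""").fetchall()
print(sorted(rows))
# [('contains ginger', 2), ('for wine lovers', 3), ('other', 1), ('other', 1)]
```

Note that 'beer' and 'white' fall into the 'other' bucket; in a real query they could be filtered out or grouped together.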

Upvotes: 1

I Love You

Reputation: 258

Try SQL like this:

SELECT count(1) AS total, 'contains ginger' AS result
FROM foods WHERE name LIKE '%ginger%' 
UNION ALL
SELECT count(1), 'for wine lovers' 
FROM foods WHERE name LIKE '%wine%'

Upvotes: 0

AlexYes

Reputation: 4208

From your sample it seems your product names contain at most 2 words. It may be more efficient to split on spaces and compare the individual chunks for equality rather than using LIKE, and then manually reshape as described in the other answers:

WITH counts as (
    SELECT 
      sum(('ginger' in (split_part(name,' ',1),split_part(name,' ',2)))::INT) AS contains_ginger,
      sum(('wine' in (split_part(name,' ',1),split_part(name,' ',2)))::INT) AS for_wine_lovers
    FROM foods
)
-- manually reshape

Upvotes: 0

Joe Harris

Reputation: 14035

Here you go. šŸ‘

The WHERE filter reduces the rows going into the GROUP BY aggregation. It's not necessary for smaller data but will help if the table is in the billions of rows. Add additional patterns to the REGEXP filter and the CASE statement.

SELECT CASE WHEN name LIKE '%ginger%' THEN 'contains ginger' 
            WHEN name LIKE '%wine%'   THEN 'for wine lovers'
       ELSE NULL END "comment"
      ,COUNT(*) total
FROM foods
WHERE REGEXP_INSTR(name, 'ginger|wine') > 0
GROUP BY 1
;
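The same pre-filter idea can be sketched in SQLite via Python (SQLite has no REGEXP_INSTR by default, so this assumed sketch substitutes two LIKE conditions in the WHERE clause):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE foods (id INTEGER, name TEXT);
    INSERT INTO foods VALUES
        (1, 'ginger beer'), (2, 'white wine'),
        (3, 'red wine'),    (4, 'ginger wine');
""")

# WHERE discards non-matching rows before the GROUP BY;
# the CASE then labels each surviving row with its category.
rows = conn.execute("""
    SELECT CASE WHEN name LIKE '%ginger%' THEN 'contains ginger'
                WHEN name LIKE '%wine%'   THEN 'for wine lovers'
           END AS comment,
           COUNT(*) AS total
    FROM foods
    WHERE name LIKE '%ginger%' OR name LIKE '%wine%'
    GROUP BY 1
""").fetchall()
print(sorted(rows))  # [('contains ginger', 2), ('for wine lovers', 2)]
```

One caveat: a CASE expression assigns each row to exactly one bucket, so 'ginger wine' is counted only under 'contains ginger'. That gives 2 for wine rather than the overlapping count of 3 the question asks for, which matters if a row can belong to several categories.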

Upvotes: 0

MtwStark

Reputation: 4058

I think the best option is to split the ingredient list into parts and then count them.

"Pass0".."Pass3" and "numbers" are just a tally table producing the numbers 1..256, to emulate unnest.

"comments" is a simple table you should have somewhere, mapping ingredients to their comments.

Use your table "foods" instead of mine ;)

Let's take a look:

with
Pass0 as (select '1' as C union all select '1'), --2 rows
Pass1 as (select '1' as C from Pass0 as A, Pass0 as B),--4 rows
Pass2 as (select '1' as C from Pass1 as A, Pass1 as B),--16 rows
Pass3 as (select '1' as C from Pass2 as A, Pass2 as B),--256 rows
numbers as (
    select ROW_NUMBER() OVER(ORDER BY C) AS N FROM Pass3
),    
comments as (
    select 'ginger' ingredient, 'contains ginger' comment union all 
    select 'wine', 'for wine lovers' union all 
    select 'ale', 'a warm kind of beer' union all 
    select 'beer', 'your old friend'
),
foods as (
    select 01 id, 'ginger beer' name union all 
    select 02   ,'white wine' union all 
    select 03   ,'red wine' union all 
    select 04   ,'ginger wine' union all 
    select 05   ,'ginger ale' union all 
    select 06   ,'pale ale' union all 
    select 07   ,'ginger beer'
),
ingredients as (
    select ingredient, COUNT(*) n
    from foods d
    CROSS JOIN LATERAL(
        select SPLIT_PART(d.name, ' ', n.n) ingredient
        from numbers n
        where SPLIT_PART(d.name, ' ', n.n)<>''
    ) ingredients
    group by ingredient
)
select i.*, coalesce(c.comment, 'no comment..') comment
from ingredients i
left join comments c on c.ingredient = i.ingredient

ingredient  n   comment
ale         2   a warm kind of beer
beer        2   your old friend
ginger      4   contains ginger
pale        1   no comment..
red         1   no comment..
white       1   no comment..
wine        3   for wine lovers

Upvotes: 0

Erwin Brandstetter

Reputation: 657727

Redshift is rather limited in comparison to modern Postgres.
No unnest(), no array_agg(), no ARRAY constructor, no VALUES expression, no LATERAL joins, no tablefunc module: all the tools that would make this nice and simple. At least we have CTEs ...

This should work and be fast and relatively simple to expand:

WITH ct AS (
   SELECT a.arr
        , count(name ~ arr[1] OR NULL) AS ct1
        , count(name ~ arr[2] OR NULL) AS ct2
        , count(name ~ arr[3] OR NULL) AS ct3
     -- , ... more
   FROM   foods
   CROSS  JOIN (SELECT '{ginger, wine, bread}'::text[]) AS a(arr)
   )
SELECT arr[1] AS comment, ct1 AS total FROM ct
UNION ALL SELECT arr[2], ct2 FROM ct
UNION ALL SELECT arr[3], ct3 FROM ct
--  ... more

I use the POSIX operator ~ instead of LIKE, just because it's shorter and needs no added % placeholders. Performance is about the same for this simple form in Postgres; I'm not sure about Redshift.

count(boolean_expression OR NULL) should be a bit faster than sum(boolean_expression::int).

Indexes won't be able to improve performance of this single sequential scan over the whole table.
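The `count(expression OR NULL)` trick happens to work in SQLite too, which makes the equivalence with `sum(...)` easy to check from Python (a sketch: `x OR NULL` evaluates to NULL when x is false, and COUNT skips NULLs):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE foods (id INTEGER, name TEXT);
    INSERT INTO foods VALUES
        (1, 'ginger beer'), (2, 'white wine'),
        (3, 'red wine'),    (4, 'ginger wine');
""")

# count(expr OR NULL) counts only the rows where expr is true,
# which matches sum(expr) when expr evaluates to 0/1.
row = conn.execute("""
    SELECT count((name LIKE '%ginger%') OR NULL) AS ct_ginger,
           sum(name LIKE '%ginger%')             AS sum_ginger,
           count((name LIKE '%wine%') OR NULL)   AS ct_wine
    FROM foods
""").fetchone()
print(row)  # (2, 2, 3)
```

The two aggregates agree; whether count-of-OR-NULL is actually faster than sum-of-cast on Redshift would need measuring there.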

Upvotes: 1

Pete

Reputation: 1867

Try this for size:

Declare @searchTerms table (term varchar(100), comment varchar(100))
insert into @searchTerms values
('ginger','contains ginger')
,('wine','for wine lovers')
-- Add any others here

select t.comment, isnull(count(f.id),0) [total]
from @searchTerms t
left join foods f on (f.name like '%'+t.term+'%')
group by t.comment
order by 1

I'm not sure what the equivalent table-variable syntax is for PostgreSQL; this example is for MS SQL Server, but I'm sure you get the idea.

UPDATE: According to the online converter at SQLines the syntax is effectively the same

Upvotes: 1

pstef

Reputation: 91

I don't know about Redshift, but in Postgres I'd start with something like this:

WITH foods (id, name) AS (VALUES 
  (1, 'ginger beer'), (2, 'white wine'), (3, 'red wine'), (4, 'ginger wine'))
SELECT hardcoded.comment, count(*)
FROM (VALUES ('%ginger%', 'contains ginger'), ('%wine%', 'for wine lovers'))
  AS hardcoded (pattern, comment)
JOIN foods ON foods.name LIKE hardcoded.pattern
GROUP BY hardcoded.comment;

ā”Œā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”¬ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”
ā”‚     comment     ā”‚ count ā”‚
ā”œā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”¼ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”¤
ā”‚ contains ginger ā”‚     2 ā”‚
ā”‚ for wine lovers ā”‚     3 ā”‚
ā””ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”“ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”˜
(2 rows)

and if it's OK then I'd go on to create appropriate indexes on foods.name. That might include indexes on name and reverse(name); or perhaps (name gist_trgm_ops), but I don't expect Redshift to provide pg_trgm.

Upvotes: 1

Valli

Reputation: 1450

Both queries use the LIKE operator. Alternatively, we can use POSITION to find the location of the hardcoded words in the name; if a hardcoded word is present in the name, a number greater than 0 is returned.

SELECT 
       unnest(array['ginger', 'wine']) AS comments,
       unnest(array[ginger, wine]) AS count
FROM(
     (SELECT sum(contains_ginger) ginger , sum(contains_wine) wine
        FROM
             (SELECT CASE WHEN Position('ginger' in name)>0 
                          THEN 1 
                           END contains_ginger,
                     CASE WHEN Position('wine' in name) > 0 
                          THEN 1
                           END contains_wine
                 FROM foods) t) t1
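The POSITION idea can be checked quickly from Python with SQLite, whose instr() behaves the same way (1-based index of the substring, 0 when absent); this is an assumed illustration, not Redshift syntax:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE foods (id INTEGER, name TEXT);
    INSERT INTO foods VALUES
        (1, 'ginger beer'), (2, 'white wine'),
        (3, 'red wine'),    (4, 'ginger wine');
""")

# instr(name, 'ginger') > 0 plays the role of POSITION('ginger' IN name) > 0
row = conn.execute("""
    SELECT sum(CASE WHEN instr(name, 'ginger') > 0 THEN 1 ELSE 0 END) AS ginger,
           sum(CASE WHEN instr(name, 'wine')   > 0 THEN 1 ELSE 0 END) AS wine
    FROM foods
""").fetchone()
print(row)  # (2, 3)
```

One difference worth knowing: LIKE is case-insensitive for ASCII in some engines while substring-position functions are case-sensitive, so the two approaches can disagree on mixed-case data.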

Upvotes: 2

Stefan Taseski

Reputation: 232

Have you considered using cursors?

Here is an example I wrote for SQL Server.

You just need some table containing all the values you want to search for in the foods table (in the example below I called it SearchWordTable, with a column named SearchWord).

CREATE TABLE #TemporaryTable 
(
    KeyWord nvarchar(50),
    ResultCount int
);

DECLARE @searchWord nvarchar(50)
DECLARE @count INT

DECLARE statistic_cursor CURSOR FOR   
SELECT SearchWord
FROM SearchWordTable

OPEN statistic_cursor  
FETCH NEXT FROM statistic_cursor INTO @searchWord  

WHILE @@FETCH_STATUS = 0  
BEGIN  
    SELECT @count = COUNT(1) FROM foods
    WHERE name LIKE '%'+@searchWord+'%'

    INSERT INTO #TemporaryTable (KeyWord, ResultCount) VALUES (@searchWord, @count)

    FETCH NEXT FROM statistic_cursor INTO @searchWord  
END  

CLOSE statistic_cursor  
DEALLOCATE statistic_cursor

SELECT * FROM #TemporaryTable

DROP TABLE #TemporaryTable

Upvotes: 0

Ankit Bajpai

Reputation: 13519

Try this -

SELECT 'contains ginger' AS comment
      , Count(*) AS total
FROM foods
WHERE name LIKE '%ginger%'
UNION ALL
SELECT 'for wine lovers'
      , count(*)
FROM foods
WHERE name LIKE '%wine%'

Upvotes: 0

Peter Abolins

Reputation: 1539

A little bit of searching suggests that you could use your second approach for efficiency, place the result into a CTE, and then unnest() it, as per: unpivot and PostgreSQL

Upvotes: 0
