Reputation: 3325
Suppose I have a table like this:
 link_ids | length
----------+--------
 {1,4}    | {1,2}
 {2,5}    | {0,1}
How can I find the min length for each link_ids?
So the final output looks something like:
 link_ids | length
----------+--------
 {1,4}    | 1
 {2,5}    | 0
Upvotes: 4
Views: 15585
Reputation: 657052
Assuming a table like:
CREATE TABLE tbl (
  link_ids int[] PRIMARY KEY  -- which is odd for a PK
, length   int[]
, CHECK (length <> '{}'::int[] IS TRUE)  -- rules out null and empty in length
);
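To play along with the queries below, the table can be filled with the sample data from the question:
INSERT INTO tbl VALUES
  ('{1,4}', '{1,2}')
, ('{2,5}', '{0,1}');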
Query for Postgres 9.3 or later:
SELECT link_ids, min(len) AS min_length
FROM tbl t, unnest(t.length) len -- implicit LATERAL join
GROUP BY 1;
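With that sample data, the result should look like this (row order is not guaranteed without ORDER BY):
 link_ids | min_length
----------+------------
 {1,4}    |          1
 {2,5}    |          0
(2 rows)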
Or create a tiny function (Postgres 8.4+):
CREATE OR REPLACE FUNCTION arr_min(anyarray)
RETURNS anyelement LANGUAGE sql IMMUTABLE PARALLEL SAFE AS
'SELECT min(i) FROM unnest($1) i';
Only add PARALLEL SAFE in Postgres 9.6 or later. Then:
SELECT link_ids, arr_min(length) AS min_length FROM tbl;
The function can be inlined and is fast.
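A quick sanity check with one of the arrays from the question:
SELECT arr_min('{0,1}'::int[]);  -- returns 0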
Or, for integer arrays of trivial length, use the additional module intarray and its built-in sort() function (Postgres 8.3+):
SELECT link_ids, (sort(length))[1] AS min_length FROM tbl;
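The intarray module is not installed by default; in Postgres 9.1 or later it can be added once per database with:
CREATE EXTENSION IF NOT EXISTS intarray;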
Upvotes: 11
Reputation: 4268
For the min of an array:
SELECT min(x) FROM unnest(array_name) AS x;
Replace min with max to get the max.
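For example, with one of the arrays from the question as a literal:
SELECT min(x) FROM unnest('{0,1}'::int[]) AS x;  -- 0
SELECT max(x) FROM unnest('{0,1}'::int[]) AS x;  -- 1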
Upvotes: 2
Reputation: 51529
A small addition to Erwin's answer: sometimes a subquery with unnest can be even cheaper than a lateral join.
I used the table definition from Erwin's answer and filled it:
t=# insert into t select '{1}'::int[]||g,'{1}'::int[]||g from generate_series(1,9999,1) g;
INSERT 0 9999
t=# select * from t order by ctid desc limit 1;
link_ids | length
----------+----------
{1,9999} | {1,9999}
(1 row)
Then analyze the LATERAL join:
t=# explain analyze select link_ids,max(r) from t, unnest(length) r where link_ids = '{1,9999}' group by 1;
                                                       QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------
 GroupAggregate  (cost=0.29..4.81 rows=1 width=33) (actual time=0.030..0.030 rows=1 loops=1)
   ->  Nested Loop  (cost=0.29..4.30 rows=100 width=33) (actual time=0.025..0.027 rows=2 loops=1)
         ->  Index Scan using t_pkey on t  (cost=0.29..2.30 rows=1 width=58) (actual time=0.015..0.016 rows=1 loops=1)
               Index Cond: (link_ids = '{1,9999}'::integer[])
         ->  Function Scan on unnest r  (cost=0.00..1.00 rows=100 width=4) (actual time=0.007..0.007 rows=2 loops=1)
 Total runtime: 0.059 ms
(6 rows)
and try the subquery:
t=# explain analyze select link_ids, (select max(r) from unnest(length) r) from t where link_ids = '{1,9999}';
                                                       QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------
 Index Scan using t_pkey on t  (cost=0.29..3.56 rows=1 width=58) (actual time=0.030..0.031 rows=1 loops=1)
   Index Cond: (link_ids = '{1,9999}'::integer[])
   SubPlan 1
     ->  Aggregate  (cost=1.25..1.26 rows=1 width=4) (actual time=0.011..0.011 rows=1 loops=1)
           ->  Function Scan on unnest r  (cost=0.00..1.00 rows=100 width=4) (actual time=0.008..0.008 rows=2 loops=1)
 Total runtime: 0.060 ms
(6 rows)
And lastly make sure the result is the same:
t=# select link_ids, (select max(r) from unnest(length) r)
from t
where link_ids = '{1,9999}';
link_ids | max
----------+------
{1,9999} | 9999
(1 row)
t=# select link_ids,max(r)
from t, unnest(length) r
where link_ids = '{1,9999}'
group by 1;
link_ids | max
----------+------
{1,9999} | 9999
(1 row)
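For the question itself (min instead of max), the same subquery pattern against Erwin's tbl would look like this; I did not benchmark this exact variant:
SELECT link_ids
     , (SELECT min(len) FROM unnest(length) len) AS min_length
FROM   tbl;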
Upvotes: 2
Reputation: 29458
Assuming that the table name is t and each value of link_ids is unique:
select link_ids, min(len)
from (select link_ids, unnest(length) as len from t) as t
group by link_ids;
 link_ids | min
----------+-----
 {2,5}    |   0
 {1,4}    |   1
Upvotes: 3
Reputation: 16487
(I'm gonna assume link_ids can have duplicates, and since there is no id column we're gonna improvise.)
WITH r AS
  (SELECT row_number() OVER () AS id,
          link_ids,
          length
   FROM Table1)
SELECT DISTINCT ON (id) link_ids,
       unnest(length) AS min_length
FROM r
ORDER BY id, min_length;  -- order by the unnested value, not the array, so DISTINCT ON keeps the min per row
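Assuming Table1 holds the sample data from the question, this should return, in some order:
 link_ids | min_length
----------+------------
 {1,4}    |          1
 {2,5}    |          0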
Upvotes: 0