Reputation: 2045
I have the following table MyTable:
id │ value_two │ value_three │ value_four
────┼───────────┼─────────────┼────────────
1 │ a │ A │ AA
2 │ a │ A2 │ AA2
3 │ b │ A3 │ AA3
4 │ a │ A4 │ AA4
5 │ b │ A5 │ AA5
I want to query an array of objects { value_three, value_four } grouped by value_two. value_two should be present on its own in the result. The result should look like this:
value_two │ value_four
───────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
a │ [{"value_three":"A","value_four":"AA"}, {"value_three":"A2","value_four":"AA2"}, {"value_three":"A4","value_four":"AA4"}]
b │ [{"value_three":"A3","value_four":"AA3"}, {"value_three":"A5","value_four":"AA5"}]
It does not matter whether it uses json_agg() or array_agg().
However, the best I can do is:
with MyCTE as ( select value_two, value_three, value_four from MyTable )
select value_two, json_agg(row_to_json(MyCTE)) value_four
from MyCTE
group by value_two;
Which returns:
value_two │ value_four
───────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
a │ [{"value_two":"a","value_three":"A","value_four":"AA"}, {"value_two":"a","value_three":"A2","value_four":"AA2"}, {"value_two":"a","value_three":"A4","value_four":"AA4"}]
b │ [{"value_two":"b","value_three":"A3","value_four":"AA3"}, {"value_two":"b","value_three":"A5","value_four":"AA5"}]
This has an extra value_two key in each object, which I would like to get rid of. Which SQL (Postgres) query should I use?
Upvotes: 47
Views: 60289
Reputation: 656391
Convert the whole row to a jsonb object and eliminate a single key (pg 9.5+) or an array of keys (pg 10+) with the - operator before aggregating:
SELECT val2, jsonb_agg(to_jsonb(t.*) - '{id, val2}'::text[]) AS js_34
FROM tbl t
GROUP BY val2;
The explicit cast in '{id, val2}'::text[] is necessary to disambiguate from the overloaded variant of the - operator that takes a single key as text.
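For illustration, a minimal sketch of both operator variants, assuming the simplified names tbl(id, val2, val3, val4) used in this answer:

SELECT to_jsonb(t.*) - 'id'                  AS minus_one_key       -- jsonb - text   : removes a single key
     , to_jsonb(t.*) - '{id, val2}'::text[]  AS minus_several_keys  -- jsonb - text[] : removes all listed keys
FROM   tbl t;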
Alternatively, use jsonb_build_object() or json_build_object():
SELECT val2, jsonb_agg(jsonb_build_object('val3', val3, 'val4', val4)) AS js_34
FROM tbl
GROUP BY val2;
Builds a JSON object out of a variadic argument list. By convention, the argument list consists of alternating keys and values.
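As a quick illustration of the alternating key/value list (a standalone sketch, not tied to the table above):

SELECT json_build_object ('value_three', 'A', 'value_four', 'AA') AS js   -- json keeps the given key order
     , jsonb_build_object('value_three', 'A', 'value_four', 'AA') AS jsb; -- jsonb may store keys in a different order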
to_jsonb() (or to_json()) with a ROW expression does the trick. (Or row_to_json() with optional line feeds):
SELECT val2, jsonb_agg(to_jsonb((val3, val4))) AS js_34
FROM tbl
GROUP BY val2;
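For illustration, the anonymous ROW expression carries no column names, so generic keys show up (a quick check against the same tbl):

SELECT to_jsonb((val3, val4)) FROM tbl LIMIT 1;   -- e.g. {"f1": "A", "f2": "AA"}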
But you lose original column names. A cast to a registered row type avoids that. (The row type of a temporary table serves for ad hoc queries, too.)
CREATE TYPE foo AS (val3 text, val4 text); -- once in the same session
SELECT val2, jsonb_agg((val3, val4)::foo) AS js_34
FROM tbl
GROUP BY val2;
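A quick check that the registered type brings the key names back (assuming the type foo created above):

SELECT to_jsonb(('A', 'AA')::foo);   -- {"val3": "A", "val4": "AA"}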
Or use a subselect instead of the ROW expression. More verbose, but without a type cast:
SELECT val2, jsonb_agg(to_jsonb((SELECT t FROM (SELECT val3, val4) t))) AS js_34
FROM tbl
GROUP BY val2;
(The optional second parameter of row_to_json() only adds insignificant line feeds to the JSON document.)
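A small sketch of that line-feed option, assuming the same tbl:

SELECT row_to_json(t, true)              -- true adds line feeds between top-level elements
FROM  (SELECT val3, val4 FROM tbl) t;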
More in Craig's related answer.
Upvotes: 94
Reputation: 4824
Use jsonb_agg and to_jsonb:
SELECT
value_two,
jsonb_agg(to_jsonb (t.*) - '{id,value_two}'::text[]) AS data
FROM
mytable t
GROUP BY
1
ORDER BY
1;
Based on the manual reference:
jsonb - text[] → jsonb
Deletes all matching keys or array elements from the left operand.
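A self-contained sketch of that operator on a literal value:

SELECT '{"id": 1, "value_two": "a", "value_three": "A", "value_four": "AA"}'::jsonb
       - '{id,value_two}'::text[] AS trimmed;
-- leaves only value_three and value_four (jsonb may reorder the remaining keys)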
Upvotes: 0
Reputation: 4824
to_json() with array_agg() and a composite type
begin;
create table mytable(
    id bigint, value_two text, value_three text, value_four text);
insert into mytable(id, value_two, value_three, value_four)
values
    (1, 'a', 'A',  'AA'),
    (2, 'a', 'A2', 'AA2'),
    (3, 'b', 'A3', 'AA3'),
    (4, 'a', 'A4', 'AA4'),
    (5, 'b', 'A5', 'AA5');
commit;
create type mytable_type as (value_three text, value_four text);
select value_two,
       to_json(array_agg(row(value_three, value_four)::mytable_type))
from mytable
group by 1;
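Note that without an ORDER BY inside the aggregate, the element order within each array is not guaranteed. A sketch, assuming ordering by value_three is wanted:

select value_two,
       to_json(array_agg(row(value_three, value_four)::mytable_type
                         order by value_three))
from mytable
group by 1;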
Upvotes: 0