Denis Fadeev

Reputation: 188

JSON object to Postgres stored procedure argument

There is a single-page application which sends HTTP POST requests with a JSON payload:

{"product": { "name": "product A", "quantity": 100 }}

There is a Postgres database which has tables and stored procedures:

create table product (
  product_id serial primary key,
  name text,
  quantity numeric,
  description text
);

create function insert_product (p product) returns product as $$
  -- This function accepts a product row type as an argument
  insert into product (name, quantity, description)
    values (p.name, p.quantity, p.description) returning *;
$$ language sql;
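
For reference, calling the function by hand from SQL works fine with a row constructor cast to the row type (a sketch; fields follow the declared column order):

-- product_id and description left NULL
select insert_product(row(null, 'product A', 100, null)::product);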

Is there a solution in any language that would sit on a server, handle HTTP requests, call stored procedures and automatically convert JSON objects to proper Postgres row types?

In pseudo-Express.js:

app.post('/product', (req, res) =>
  db.query('select insert_product($1)', [convertToPostgresPlease(req.body.product)]))
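
Presumably, if the driver passed the row in Postgres's composite text format, the conversion could happen on the SQL side; a sketch of what the parameterized call might reduce to:

-- Composite text format: empty fields are NULL, quoting allows spaces;
-- field order follows the table definition
select insert_product('(,"product A",100,)'::product);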

What I don't consider solutions:

I know stored procedures are often frowned upon, but for small projects I honestly think they're great. SQL is an amazing DSL for working with data and Postgres is advanced enough to handle any data-related task.

In any case, what is the simplest way to connect a JSON HTTP request to a proper SQL RDBMS?

Found solutions (almost):

Upvotes: 4

Views: 13173

Answers (2)

grantwparks

Reputation: 1153

Check out PostgREST. BTW, I don't know why anyone would frown on stored procs. The correct way to interact with the DB is through views and functions/procs. Having SQL in application code is something that just happened over the last 15 years, really due simply to convenience and a loss of SQL skills. It's harder for most people to do set operations than procedural processing.
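
A minimal sketch of the PostgREST route, assuming its documented RPC behavior (a function in the exposed schema becomes a POST /rpc/<name> endpoint, and the JSON body's keys are matched to the named parameters); this variant of the function is illustrative, not from the question:

-- POST /rpc/insert_product with body {"name": "product A", "quantity": 100}
-- calls this function; PostgREST maps the body keys to the named parameters.
create function insert_product(name text, quantity numeric) returns product as $$
  insert into product (name, quantity) values (name, quantity) returning *;
$$ language sql;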

Upvotes: 1

Ezequiel Tolnay

Reputation: 4582

As Abelisto mentioned in the comments, you can convert a JSON/JSONB parameter to a specific table row within a database function, using json_populate_record/jsonb_populate_record. Another alternative is to read the JSON value directly with the -> and ->> operators. The disadvantage of either approach is that it takes a fair amount of coding to maintain for each table.
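
A minimal sketch of both approaches against the product table from the question:

-- jsonb_populate_record maps matching JSON keys onto the row type;
-- keys absent from the JSON become NULL fields
SELECT insert_product(
  jsonb_populate_record(null::product, '{"name": "product A", "quantity": 100}'::jsonb)
);

-- The operator alternative: -> yields jsonb, ->> yields text, so casts are explicit
INSERT INTO product (name, quantity)
SELECT j->>'name', (j->>'quantity')::numeric
FROM (VALUES ('{"name": "product A", "quantity": 100}'::jsonb)) AS v(j);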

You may also be able to benefit from RESTful interfaces (e.g. https://github.com/QBisConsult/psql-api).

Another option for a heavily JSON-based solution is to simplify operations for the bulk of small tables that wouldn't grow beyond a few hundred records each. There would be a performance toll, but with so few rows it would likely be negligible.

The following exemplifies the power of the JSON datatype in PostgreSQL and the GIN indexes which support JSON operators. You can still use normal tables and specialised functions for data that requires maximum performance.

The example:

CREATE TABLE public.jtables (
  table_id serial NOT NULL PRIMARY KEY,
  table_name text NOT NULL UNIQUE,
  fields jsonb
);

INSERT INTO public.jtables VALUES (default, 'product', '{"id": "number", "name": "string", "quantity": "number", "description": "string"}'::jsonb);

CREATE TABLE public.jdata (
  table_id int NOT NULL REFERENCES jtables,
  data jsonb NOT NULL
);

CREATE UNIQUE INDEX ON public.jdata USING BTREE (table_id, (data->>'id'));
CREATE INDEX ON public.jdata USING GIN (data);

You could create functions to manipulate data in a generic JSON way, e.g.:

CREATE FUNCTION public.jdata_insert(_table text, _data jsonb) RETURNS text AS
$BODY$
  INSERT INTO public.jdata 
  SELECT table_id, $2
  FROM public.jtables
  WHERE table_name = $1
  RETURNING (data)->>'id';
$BODY$ LANGUAGE sql;

CREATE FUNCTION public.jdata_update(_table text, _id text, _data jsonb) RETURNS text AS
$BODY$
  UPDATE public.jdata d SET data = jsonb_strip_nulls(d.data || $3)
  FROM public.jtables t
  WHERE d.table_id = t.table_id AND t.table_name = $1 AND (d.data->>'id') = $2
  RETURNING (d.data)->>'id';
$BODY$ LANGUAGE sql;

Rows can then be inserted using these generic functions:

SELECT public.jdata_insert('product', '{"id": 1, "name": "Product 1", "quantity": 10, "description": "no description"}'::jsonb);
SELECT public.jdata_insert('product', '{"id": 2, "name": "Product 2", "quantity": 5}'::jsonb);
SELECT public.jdata_update('product', '1', '{"description": "test product"}'::jsonb);
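
Note that because jdata_update merges with || and then applies jsonb_strip_nulls, passing a null value is how a key gets removed:

-- Removes the description key added above
SELECT public.jdata_update('product', '1', '{"description": null}'::jsonb);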

And their data can be queried in a variety of ways which make use of the existing indexes:

SELECT * FROM public.jdata WHERE table_id = 1 AND (data->>'id') = '1';
SELECT * FROM public.jdata WHERE table_id = 1 AND data @> '{"quantity": 5}'::jsonb;
SELECT * FROM public.jdata WHERE table_id = 1 AND data ? 'description';

Views can make queries easier:

CREATE VIEW public.vjdata AS
SELECT d.table_id, t.table_name, (d.data->>'id') AS id, d.data
FROM jtables t
JOIN jdata d USING (table_id);

CREATE OR REPLACE FUNCTION public.vjdata_upsert() RETURNS trigger AS $$
BEGIN
  IF TG_OP = 'INSERT' THEN
    PERFORM public.jdata_insert(NEW.table_name, NEW.data);
  ELSE
    PERFORM public.jdata_update(NEW.table_name, NEW.id, NEW.data);
  END IF;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER vjdata_upsert_trigger INSTEAD OF INSERT OR UPDATE
  ON public.vjdata FOR EACH ROW EXECUTE PROCEDURE public.vjdata_upsert();

UPDATE public.vjdata SET
  data = data || jsonb_build_object('quantity', (data->>'quantity')::int + 2)
WHERE table_name = 'product' AND id = '2';

SELECT * FROM public.vjdata WHERE table_name = 'product' AND id = '2';

Upvotes: 5
