adrusi

Reputation: 845

Postgres prepared statements with varying fields

I'm looking for ways to abstract database access to Postgres. In my examples I will use a hypothetical Twitter clone in Node.js, but in the end it's a question about how Postgres handles prepared statements, so the language and library don't really matter.

Suppose I want to be able to access a list of all tweets from a user by username:

name: "tweets by username"
text: "SELECT (SELECT * FROM tweets WHERE tweets.user_id = users.user_id) FROM users WHERE users.username = $1"
values: [username]

That works fine, but it seems inefficient, in both practical and code-quality terms, to have to write a separate function to get tweets by email rather than by username:

name: "tweets by email"
text: "SELECT (SELECT * FROM tweets WHERE tweets.user_id = users.user_id) FROM users WHERE users.email = $1"
values: [email]

Is it possible to include a field as a parameter to the prepared statement?

name: "tweets by user"
text: "SELECT (SELECT * FROM tweets WHERE tweets.user_id = users.user_id) FROM users WHERE users.$1 = $2"
values: [field, value]

While it's true that this might be a bit less efficient in the corner case of accessing tweets by user_id, that's a trade-off I'm willing to make to improve code quality, and it would hopefully improve overall efficiency by reducing the number of query templates from 3+ to 1.
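As far as I can tell, a parameter can only stand in for a value, never for an identifier such as a column name, so the last form seems to be rejected at parse time. A minimal plain-SQL illustration (outside the Node.js driver; the statement name and 'someuser' are just placeholders):

-- A parameter standing for a value prepares and runs fine:
PREPARE tweets_by_username AS
  SELECT tweets.*
  FROM tweets
  JOIN users ON tweets.user_id = users.user_id
  WHERE users.username = $1;

EXECUTE tweets_by_username('someuser');

-- A parameter standing for a column name is a syntax error:
-- PREPARE tweets_by_user AS
--   SELECT tweets.*
--   FROM tweets
--   JOIN users ON tweets.user_id = users.user_id
--   WHERE users.$1 = $2;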

Upvotes: 3

Views: 1590

Answers (2)

dbenhur

Reputation: 20408

@Clodoaldo's answer is correct in that it provides the capability you want and should return the right results. Unfortunately, it produces a rather slow execution plan.

I set up an experimental database with tweets and users, populated with 10K users each having 100 tweets (1M tweet records). I indexed the PKs u.id and t.id, the FK t.user_id, and the predicate fields u.name and u.email.

create table t(id serial PRIMARY KEY, data integer, user_id bigint);
create index t1 on t(user_id);
create table u(id serial PRIMARY KEY, name text, email text);
create index u1 on u(name);
create index u2 on u(email);
insert into u(name,email) select i::text, i::text from generate_series(1,10000) i;
insert into t(data,user_id) select i, (i/100)::bigint from generate_series(1,1000000) i;
analyze t;
analyze u;

A simple query using one field as predicate is very fast:

prepare qn as select t.* from t join u on t.user_id = u.id where u.name = $1;

explain analyze execute qn('1111');
 Nested Loop  (cost=0.00..19.81 rows=1 width=16) (actual time=0.030..0.057 rows=100 loops=1)
   ->  Index Scan using u1 on u  (cost=0.00..8.46 rows=1 width=4) (actual time=0.020..0.020 rows=1 loops=1)
         Index Cond: (name = $1)
   ->  Index Scan using t1 on t  (cost=0.00..10.10 rows=100 width=16) (actual time=0.007..0.023 rows=100 loops=1)
         Index Cond: (t.user_id = u.id)
 Total runtime: 0.093 ms

A query using CASE in the WHERE clause, as @Clodoaldo proposed, takes almost 30 seconds: the planner can't commit to either the name or the email index when the predicate column is only decided at run time, so it applies the CASE as a filter instead:

prepare qen as select t.* from t join u on t.user_id = u.id
  where case $2 when 'e' then u.email = $1 when 'n' then u.name = $1 end;

explain analyze execute qen('1111','n');
 Merge Join  (cost=25.61..38402.69 rows=500000 width=16) (actual time=27.771..26345.439 rows=100 loops=1)
   Merge Cond: (t.user_id = u.id)
   ->  Index Scan using t1 on t  (cost=0.00..30457.35 rows=1000000 width=16) (actual time=0.023..17.741 rows=111200 loops=1)
   ->  Index Scan using u_pkey on u  (cost=0.00..42257.36 rows=500000 width=4) (actual time=0.325..26317.384 rows=1 loops=1)
         Filter: CASE $2 WHEN 'e'::text THEN (u.email = $1) WHEN 'n'::text THEN (u.name = $1) ELSE NULL::boolean END
 Total runtime: 26345.535 ms

Observing that plan, I thought that using a UNION subselect, then filtering its results to get the id appropriate to the parameterized predicate choice, would allow the planner to use a specific index for each predicate. It turns out I was right:

prepare qen2 as 
select t.*
from t 
join (
 SELECT id from 
  (
  SELECT 'n' as fld, id from u where u.name = $1
  UNION ALL
  SELECT 'e' as fld, id from u where u.email = $1
  ) poly
 where poly.fld = $2
) uu
on t.user_id = uu.id;

explain analyze execute qen2('1111','n');
 Nested Loop  (cost=0.00..28.31 rows=100 width=16) (actual time=0.058..0.120 rows=100 loops=1)
   ->  Subquery Scan poly  (cost=0.00..16.96 rows=1 width=4) (actual time=0.041..0.073 rows=1 loops=1)
         Filter: (poly.fld = $2)
         ->  Append  (cost=0.00..16.94 rows=2 width=4) (actual time=0.038..0.070 rows=2 loops=1)
               ->  Subquery Scan "*SELECT* 1"  (cost=0.00..8.47 rows=1 width=4) (actual time=0.038..0.038 rows=1 loops=1)
                     ->  Index Scan using u1 on u  (cost=0.00..8.46 rows=1 width=4) (actual time=0.038..0.038 rows=1 loops=1)
                           Index Cond: (name = $1)
               ->  Subquery Scan "*SELECT* 2"  (cost=0.00..8.47 rows=1 width=4) (actual time=0.031..0.032 rows=1 loops=1)
                     ->  Index Scan using u2 on u  (cost=0.00..8.46 rows=1 width=4) (actual time=0.030..0.031 rows=1 loops=1)
                           Index Cond: (email = $1)
   ->  Index Scan using t1 on t  (cost=0.00..10.10 rows=100 width=16) (actual time=0.015..0.028 rows=100 loops=1)
         Index Cond: (t.user_id = poly.id)
 Total runtime: 0.170 ms
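Mapped back onto the tweets/users schema from the question, the same shape might look something like this (a sketch using the question's column names; the statement name and example values are placeholders, and I haven't benchmarked it against that schema):

prepare tweets_by_user as
select t.*
from tweets t
join (
 select id from
  (
  select 'username' as fld, user_id as id from users where username = $1
  union all
  select 'email' as fld, user_id as id from users where email = $1
  ) poly
 where poly.fld = $2
) uu
on t.user_id = uu.id;

-- e.g. execute tweets_by_user('someuser', 'username');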

Upvotes: 3

Clodoaldo Neto

Reputation: 125254

SELECT t.* 
FROM tweets t 
INNER JOIN users u ON t.user_id = u.user_id
WHERE CASE $2
    WHEN 'username' THEN u.username = $1
    WHEN 'email' THEN u.email = $1
    ELSE u.user_id::text = $1  -- cast so $1 resolves to a single (text) type across all branches
    END
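Prepared and executed directly, the two parameters map to the value and the field selector, something like this (a sketch; the statement name and example values are placeholders):

PREPARE tweets_by_user_case AS
SELECT t.*
FROM tweets t
INNER JOIN users u ON t.user_id = u.user_id
WHERE CASE $2
    WHEN 'username' THEN u.username = $1
    WHEN 'email' THEN u.email = $1
    ELSE u.user_id::text = $1
    END;

EXECUTE tweets_by_user_case('someuser', 'username');
EXECUTE tweets_by_user_case('someuser@example.com', 'email');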

Upvotes: 0
