Reputation: 21
I've got a PostgreSQL query that returns 120 rows {integer, boolean, integer, varchar(255), varchar(255), bigint, text} in about 70 ms when run directly in the database via psql.
Run from Python/Django via django.db.connection.cursor().execute(), the same query takes 10 s on the same machine.
I've tried aggregating all the rows into an array, and into a single string (18k characters; returning only the first 500 characters takes the same time), so that only one row is returned, but with no gain.
Any ideas why there is such a dramatic slowdown between running the query from Python and running it directly in the database?
EDIT
I had to increase work_mem to get the function to run quickly in psql. Other functions/queries don't show the same pattern; there the difference between psql and Python is only a few milliseconds.
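For reference, this is roughly how I raised it for the psql session (the 64MB figure is a placeholder, not necessarily the value I used):
set work_mem = '64MB';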
EDIT
Cutting work_mem down to 1MB gives similar numbers in psql and the Django shell. Could it be that Django is not honouring the work_mem I set?
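One way to check is to run the same statement from psql and from a Django cursor and compare what each session reports:
show work_mem;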
EDIT
Ugh. The problem was that work_mem set in psql is only valid for that session, not globally; if I set the memory in the function, the call is fast. I suppose setting it in the configuration file would apply globally.
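A sketch of the options (my_function and django_user are placeholder names, 64MB a placeholder value):
alter function my_function() set work_mem = '64MB';  -- pinned to the function itself
alter role django_user set work_mem = '64MB';        -- pinned to the Django user's sessions
-- or in postgresql.conf, followed by a reload: work_mem = 64MB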
Upvotes: 2
Views: 2042
Reputation: 66263
If the timing between "in situ" queries and psql queries differs that much, then the first and usual suspect is this: if the framework uses prepared statements, then you have to check the timing in psql using prepared statements too. For example:
prepare foo as select * from sometable where intcolumn = $1;
execute foo(42);
If the timing of the execute is in the same ballpark as your in-situ query, then you can explain and explain analyse the execute line.
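For example, sticking with the prepared statement from above:
explain analyse execute foo(42);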
If the timing is not in the same ballpark, you have to look for something else.
Upvotes: 1