Michael Soulier

Reputation: 821

updating a large string efficiently in postgresql

I am caching a large JSON response for performance reasons, and debating whether to use external files or a text column in Postgres. The problem with the latter is that when I must completely rebuild the cache, the SQL UPDATE statement becomes quite large.

I have looked at the COPY command, and I could delete the old row, write out a CSV file, and COPY in the replacement row, but that feels kludgy to me.

Is there an efficient way to update a text column with a large amount of text without asking the SQL parser to parse a ridiculously long SQL statement?

Thanks.

Upvotes: 0

Views: 731

Answers (1)

Craig Ringer

Reputation: 324501

Use a bind parameter from a driver that supports server-side parameter binding ("prepared statements"). The parameter value is passed as a simple chunk of data with a length word on the wire. This also makes error messages, log messages, etc. more readable.

PgJDBC does proper server-side parameter binding, as does libpq of course. Many drivers actually do parameter binding client-side, so they still generate a large statement.
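As a rough illustration, an update through PgJDBC might look like the sketch below. The connection details, table, and column names are made up for the example; the point is that only the short statement text is parsed, while the JSON payload travels separately as a bind parameter.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class CacheRefresh {
        public static void main(String[] args) throws Exception {
            String largeJson = buildCachePayload();  // the big JSON blob to store

            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/mydb", "app", "secret")) {
                // The statement text stays tiny; the payload is shipped in the
                // Bind message, so the SQL parser never sees it.
                String sql = "UPDATE response_cache SET body = ? WHERE cache_key = ?";
                try (PreparedStatement ps = conn.prepareStatement(sql)) {
                    ps.setString(1, largeJson);
                    ps.setString(2, "expensive-report");
                    ps.executeUpdate();
                }
            }
        }

        private static String buildCachePayload() {
            // Stand-in for however the cached JSON response is produced.
            return "{\"example\": true}";
        }
    }

With this approach the statement the server parses stays a few dozen bytes long no matter how big the cached JSON grows.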

You can determine whether a driver is doing client-side binds by running a prepared statement with log_statement = 'all' enabled. If the logged statement looks like SELECT $1 it's using server-side binding; if it looks like SELECT 'some_value' it's using client-side binding.
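If you want to drive that check from code, a minimal sketch is below. It assumes a superuser connection (log_statement can only be changed by a superuser) and uses hypothetical connection details.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class BindCheck {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/mydb", "postgres", "secret")) {
                try (Statement st = conn.createStatement()) {
                    // Log every statement this session sends (superuser only).
                    st.execute("SET log_statement = 'all'");
                }
                try (PreparedStatement ps = conn.prepareStatement("SELECT ?")) {
                    ps.setString(1, "probe-value");
                    try (ResultSet rs = ps.executeQuery()) {
                        rs.next();
                    }
                }
                // Then inspect the server log: a $1 placeholder means the value
                // was bound server-side; a quoted literal means the driver
                // interpolated it into the SQL text client-side.
            }
        }
    }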

That said, the overhead of parsing a string literal is pretty low. I suspect that if you just do whatever is easiest, the real-world impact will be less than you expect. Fix the problem if it actually shows signs of being a problem.

Note that individual values are still limited to 1GB.

Upvotes: 1
