AnApprentice

Reputation: 111070

PostgreSQL ERROR: canceling statement due to conflict with recovery

I'm getting the following error when running a query against a PostgreSQL database in standby mode. The query works fine when it covers one month of data, but querying more than one month produces the error.

ERROR: canceling statement due to conflict with recovery
Detail: User query might have needed to see row versions that must be removed

Any suggestions on how to resolve this? Thanks.

Upvotes: 289

Views: 351172

Answers (8)

Max Malysh

Reputation: 31635

No need to touch hot_standby_feedback. As others have mentioned, setting it to on can bloat the master. Imagine opening a transaction on a slave and not closing it.

Instead, set max_standby_archive_delay and max_standby_streaming_delay to sane values:

# /etc/postgresql/10/main/postgresql.conf on a slave
max_standby_archive_delay = 900s
max_standby_streaming_delay = 900s

This way, queries on slaves that run for less than 900 seconds won't be cancelled. If your workload requires longer queries, just set these options to a higher value.
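Both settings can be reloaded without restarting the server. A minimal sketch of applying and verifying them on the replica:

SELECT pg_reload_conf();          -- re-read postgresql.conf
SHOW max_standby_streaming_delay; -- confirm the new values took effect
SHOW max_standby_archive_delay;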

The postgres docs discuss this at some length. Key advice from there is:

if the standby server is meant for executing long-running queries, then a high or even infinite delay value [in max_standby_archive_delay and max_standby_streaming_delay] may be preferable

and

Users should be clear that tables that are regularly and heavily updated on the primary server will quickly cause cancellation of longer running queries on the standby. In such cases the setting of a finite value for max_standby_archive_delay or max_standby_streaming_delay can be considered similar to setting statement_timeout.

You can also consider setting vacuum_defer_cleanup_age (on the primary) in combination with the max standby delays. As the docs say:

Another option is to increase vacuum_defer_cleanup_age on the primary server, so that dead rows will not be cleaned up as quickly as they normally would be. This will allow more time for queries to execute before they are canceled on the standby, without having to set a high max_standby_streaming_delay
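For example (a sketch only; note the unit is transactions, not time, and the right value depends on your primary's transaction rate):

# postgresql.conf on the primary
# keep dead row versions around for roughly 10000 extra transactions
vacuum_defer_cleanup_age = 10000

The trade-off is some extra bloat on the primary, since vacuum works less aggressively.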

Upvotes: 250

bob

Reputation: 51

Likewise, here's a second caveat to add to @Artif3x's elaboration of @max-malysh's excellent answer.

With any delayed application of transactions from the master, the follower(s) will have an older, stale view of the data. So while it makes sense to buy time for queries on the follower by setting max_standby_archive_delay and max_standby_streaming_delay, keep both of these caveats in mind:

  • the value of the follower as a standby / backup diminishes
  • any other queries running on the follower may return stale data.

If the value of the follower for backup ends up being too much in conflict with hosting queries, one solution would be multiple followers, each optimized for one or the other.

Also, note that several queries in a row can keep delaying the application of WAL entries. So when choosing the new values, consider not just the time for a single query, but a moving window that starts whenever a conflicting query starts and ends when the WAL entry is finally applied.

Upvotes: 4

Artif3x

Reputation: 4661

I'm going to add some updated info and references to @max-malysh's excellent answer above.

In short, anything that happens on the master needs to be replicated on the slave. Postgres uses WAL records for this: after every logged action on the master, a record is sent to the slave, which replays it so the two are back in sync. There are several scenarios in which the slave can conflict with what's coming in from the master in a WAL action; in most of them, a transaction running on the slave conflicts with a change the WAL action wants to make. In that case, you have two options:

  1. Delay the application of the WAL action for a bit, allowing the slave to finish its conflicting transaction, then apply the action.
  2. Cancel the conflicting query on the slave.

We're concerned with #1, and two values:

  • max_standby_archive_delay - this is the delay used after a long disconnection between the master and slave, when the data is being read from a WAL archive, which is not current data.
  • max_standby_streaming_delay - delay used for cancelling queries when WAL entries are received via streaming replication.

Generally, if your server is meant for high-availability replication, you want to keep these numbers short. The default of 30000 (milliseconds, if no unit is given) is sufficient for this. If, however, you want to set up something like an archive, reporting, or read replica that may run very long queries, you'll want to set these higher to avoid cancelled queries. The recommended 900s setting above seems like a good starting point. I disagree with the official docs that an infinite value (-1) is a good idea; that could mask buggy code and cause lots of issues.

The one caveat about long-running queries and setting these values higher is that other queries running on the slave in parallel with the long-running one which is causing the WAL action to be delayed will see old data until the long query has completed. Developers will need to understand this and serialize queries which shouldn't run simultaneously.
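To see how often the standby is actually cancelling queries, and why, you can check the per-database conflict counters on the standby. The error in the question shows up as a snapshot conflict:

-- run on the standby
SELECT datname, confl_snapshot, confl_lock, confl_bufferpin
FROM pg_stat_database_conflicts;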

For the full explanation of how max_standby_archive_delay and max_standby_streaming_delay work and why, see the Hot Standby section of the PostgreSQL documentation.

Upvotes: 25

Tushar.k

Reputation: 333

It might be too late for an answer, but we faced the same kind of issue in production. Earlier we had only one RDS instance, and as the number of users on the app side increased, we decided to add a read replica. The read replica worked properly on staging, but once we moved to production we started getting the same error.

We solved it by enabling the hot_standby_feedback property in the Postgres parameters. We referred to the following link:

https://aws.amazon.com/blogs/database/best-practices-for-amazon-rds-postgresql-replication/
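On RDS you can't edit postgresql.conf directly; the setting is changed through the replica's DB parameter group instead. A rough sketch with the AWS CLI (my-replica-params is a placeholder group name; hot_standby_feedback is a dynamic parameter, so it applies without a reboot):

# my-replica-params is hypothetical; substitute your replica's parameter group
aws rds modify-db-parameter-group \
    --db-parameter-group-name my-replica-params \
    --parameters "ParameterName=hot_standby_feedback,ParameterValue=1,ApplyMethod=immediate"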

I hope it helps.

Upvotes: 5

Gilles Quénot

Reputation: 185760

As stated here about hot_standby_feedback = on:

Well, the disadvantage of it is that the standby can bloat the master, which might be surprising to some people, too

And here:

With what setting of max_standby_streaming_delay? I would rather default that to -1 than default hot_standby_feedback on. That way what you do on the standby only affects the standby


So I added

max_standby_streaming_delay = -1

No more pg_dump errors for us, and no master bloat :)

For AWS RDS instance, check http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.PostgreSQL.CommonDBATasks.html

Upvotes: 63

David Jaspers

Reputation: 251

The error occurs because table data on the hot standby is modified by replay while a long-running query is executing. A solution (PostgreSQL 9.1+) that guarantees the data is not modified is to suspend replication for the duration of the query and resume it afterwards:

select pg_xlog_replay_pause();  -- suspend WAL replay on the standby
select * from foo;              -- your long-running query
select pg_xlog_replay_resume(); -- resume replay
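On PostgreSQL 10 and later these functions were renamed (xlog became wal), so the equivalent is:

select pg_wal_replay_pause();   -- suspend WAL replay (10+)
select * from foo;              -- your query
select pg_wal_replay_resume();  -- resume replay

Note that while replay is paused, the standby falls behind the primary, so keep the pause as short as possible.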

Upvotes: 20

eradman

Reputation: 2125

There's no need to start idle transactions on the master. In PostgreSQL 9.1 and later, the most direct way to solve this problem is by setting

hot_standby_feedback = on

This will make the master aware of long-running queries. From the docs:

The first option is to set the parameter hot_standby_feedback, which prevents VACUUM from removing recently-dead rows and so cleanup conflicts do not occur.

Why isn't this the default? This parameter was added after the initial implementation and it's the only way that a standby can affect a master.
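With feedback enabled, the primary tracks the oldest transaction each standby still needs. You can observe this on the primary (9.4+) via the backend_xmin column:

-- run on the primary
SELECT application_name, client_addr, backend_xmin
FROM pg_stat_replication;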

Upvotes: 97

Tometzky

Reputation: 23920

Running queries on a hot-standby server is somewhat tricky: they can fail, because rows they need might be updated or deleted on the primary while they run. Since the primary does not know that a query has started on the secondary, it thinks it can clean up (vacuum) old versions of its rows. The secondary then has to replay this cleanup, and must forcibly cancel all queries that might use those rows.

Longer queries will be canceled more often.

You can work around this by starting a repeatable read transaction on the primary that runs a dummy query and then sits idle while the real query runs on the secondary. Its open snapshot prevents vacuuming of old row versions on the primary.
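A minimal sketch of that workaround:

-- session on the primary: open a snapshot and leave it idle
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SELECT 1;  -- dummy query; the snapshot now pins old row versions

-- ... meanwhile, run the real query on the secondary in another session ...

COMMIT;    -- release the snapshot once the secondary's query finishes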

More on this subject and other workarounds are explained in the Hot Standby — Handling Query Conflicts section of the documentation.

Upvotes: 147
