Reputation: 1458
I am using pg_rewind as follows on the slave:
/usr/pgsql-11/bin/pg_rewind -D <data_dir_path> --source-server="port=5432 user=myuser host=<ip>"
The command completes successfully with:
source and target cluster are on the same timeline
no rewind required
After that, I created recovery.conf on the new slave as follows:
standby_mode = 'on'
primary_conninfo = 'host=<master_ip> port=5432 user=<uname> password=<password> sslmode=require sslcompression=0'
trigger_file = '/tmp/MasterNow'
After that, I start PostgreSQL on the slave and check its status. I get the following messages:
]# systemctl status postgresql-11
● postgresql-11.service - PostgreSQL 11 database server
Loaded: loaded (/usr/lib/systemd/system/postgresql-11.service; enabled; vendor preset: disabled)
Active: activating (start) since Thu 2019-05-02 10:36:11 UTC; 33min ago
Docs: https://www.postgresql.org/docs/11/static/
Process: 26444 ExecStartPre=/usr/pgsql-11/bin/postgresql-11-check-db-dir ${PGDATA} (code=exited, status=0/SUCCESS)
Main PID: 26450 (postmaster)
CGroup: /system.slice/postgresql-11.service
├─26450 /usr/pgsql-11/bin/postmaster -D /var/lib/pgsql/11/data/
└─26458 postgres: startup recovering 000000060000000000000008
May 02 11:09:13 my.localhost postmaster[26450]: 2019-05-02 11:09:13 UTC LOG: record length 1485139969 at 0/8005CB0 too long
May 02 11:09:18 my.localhost postmaster[26450]: 2019-05-02 11:09:18 UTC LOG: record length 1485139969 at 0/8005CB0 too long
May 02 11:09:23 my.localhost postmaster[26450]: 2019-05-02 11:09:23 UTC LOG: record length 1485139969 at 0/8005CB0 too long
May 02 11:09:28 my.localhost postmaster[26450]: 2019-05-02 11:09:28 UTC LOG: record length 1485139969 at 0/8005CB0 too long
May 02 11:09:33 my.localhost postmaster[26450]: 2019-05-02 11:09:33 UTC LOG: record length 1485139969 at 0/8005CB0 too long
May 02 11:09:38 my.localhost postmaster[26450]: 2019-05-02 11:09:38 UTC LOG: record length 1485139969 at 0/8005CB0 too long
May 02 11:09:43 my.localhost postmaster[26450]: 2019-05-02 11:09:43 UTC LOG: record length 1485139969 at 0/8005CB0 too long
May 02 11:09:48 my.localhost postmaster[26450]: 2019-05-02 11:09:48 UTC LOG: record length 1485139969 at 0/8005CB0 too long
May 02 11:09:53 my.localhost postmaster[26450]: 2019-05-02 11:09:53 UTC LOG: record length 1485139969 at 0/8005CB0 too long
May 02 11:09:58 my.localhost postmaster[26450]: 2019-05-02 11:09:58 UTC LOG: record length 1485139969 at 0/8005CB0 too long
Hint: Some lines were ellipsized, use -l to show in full.
On the master, the pg_wal directory looks as follows:
root@{/var/lib/pgsql/11/data/pg_wal}#ls -R
.:
000000010000000000000003 000000020000000000000006 000000040000000000000006 000000050000000000000008 archive_status
000000010000000000000004 00000002.history 00000004.history 00000005.history
000000020000000000000004 000000030000000000000006 000000050000000000000006 000000060000000000000008
000000020000000000000005 00000003.history 000000050000000000000007 00000006.history
./archive_status:
000000050000000000000006.done 000000050000000000000007.done
PostgreSQL logs from the slave:
May 3 06:08:58 postgres[9226]: [39-1] 2019-05-03 06:08:58 UTC LOG: entering standby mode
May 3 06:08:58 postgres[9226]: [40-1] 2019-05-03 06:08:58 UTC LOG: invalid resource manager ID 80 at 0/8005C78
May 3 06:08:58 postgres[9226]: [41-1] 2019-05-03 06:08:58 UTC DEBUG: switched WAL source from archive to stream after failure
May 3 06:08:58 postgres[9227]: [35-1] 2019-05-03 06:08:58 UTC DEBUG: find_in_dynamic_libpath: trying "/usr/pgsql-11/lib/libpqwalreceiver"
May 3 06:08:58 postgres[9227]: [36-1] 2019-05-03 06:08:58 UTC DEBUG: find_in_dynamic_libpath: trying "/usr/pgsql-11/lib/libpqwalreceiver.so"
May 3 06:08:58 postgres[9227]: [37-1] 2019-05-03 06:08:58 UTC LOG: started streaming WAL from primary at 0/8000000 on timeline 6
May 3 06:08:58 postgres[9227]: [38-1] 2019-05-03 06:08:58 UTC DEBUG: sendtime 2019-05-03 06:08:58.348488+00 receipttime 2019-05-03 06:08:58.350018+00 replication apply delay (N/A) transfer latency 1 ms
May 3 06:08:58 postgres[9227]: [39-1] 2019-05-03 06:08:58 UTC DEBUG: sending write 0/8020000 flush 0/0 apply 0/0
May 3 06:08:58 postgres[9227]: [40-1] 2019-05-03 06:08:58 UTC DEBUG: sending write 0/8020000 flush 0/8020000 apply 0/0
May 3 06:08:58 postgres[9226]: [42-1] 2019-05-03 06:08:58 UTC LOG: invalid resource manager ID 80 at 0/8005C78
May 3 06:08:58 postgres[9227]: [41-1] 2019-05-03 06:08:58 UTC DEBUG: sendtime 2019-05-03 06:08:58.349865+00 receipttime 2019-05-03 06:08:58.35253+00 replication apply delay 0 ms transfer latency 2 ms
May 3 06:08:58 postgres[9227]: [42-1] 2019-05-03 06:08:58 UTC DEBUG: sending write 0/8040000 flush 0/8020000 apply 0/0
May 3 06:08:58 postgres[9227]: [43-1] 2019-05-03 06:08:58 UTC DEBUG: sending write 0/8040000 flush 0/8040000 apply 0/0
May 3 06:08:58 postgres[9227]: [44-1] 2019-05-03 06:08:58 UTC DEBUG: sending write 0/8040000 flush 0/8040000 apply 0/0
May 3 06:08:58 postgres[9227]: [45-1] 2019-05-03 06:08:58 UTC FATAL: terminating walreceiver process due to administrator command
May 3 06:08:58 postgres[9227]: [46-1] 2019-05-03 06:08:58 UTC DEBUG: shmem_exit(1): 1 before_shmem_exit callbacks to make
May 3 06:08:58 postgres[9227]: [47-1] 2019-05-03 06:08:58 UTC DEBUG: shmem_exit(1): 5 on_shmem_exit callbacks to make
May 3 06:08:58 postgres[9227]: [48-1] 2019-05-03 06:08:58 UTC DEBUG: proc_exit(1): 2 callbacks to make
May 3 06:08:58 postgres[9227]: [49-1] 2019-05-03 06:08:58 UTC DEBUG: exit(1)
May 3 06:08:58 postgres[9227]: [50-1] 2019-05-03 06:08:58 UTC DEBUG: shmem_exit(-1): 0 before_shmem_exit callbacks to make
May 3 06:08:58 postgres[9227]: [51-1] 2019-05-03 06:08:58 UTC DEBUG: shmem_exit(-1): 0 on_shmem_exit callbacks to make
May 3 06:08:58 postgres[9227]: [52-1] 2019-05-03 06:08:58 UTC DEBUG: proc_exit(-1): 0 callbacks to make
May 3 06:08:58 postgres[9218]: [35-1] 2019-05-03 06:08:58 UTC DEBUG: reaping dead processes
Upvotes: 0
Views: 2075
Reputation: 246473
I'd say that the standby is trying to recover along the wrong timeline (5, I guess), does not follow the new primary to the latest timeline, and keeps hitting an invalid record in the WAL file.
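To confirm which timeline each cluster's last checkpoint is actually on, something like the following should work (a sketch; the data directory path is taken from the question, adjust as needed). Run it against the standby's data directory and the primary's:

/usr/pgsql-11/bin/pg_controldata /var/lib/pgsql/11/data | grep TimeLineID

If the standby reports an older timeline than the primary, that matches the symptom above.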
I'd add the following to recovery.conf:
recovery_target_timeline = 'latest'
For a more detailed analysis, look into the PostgreSQL log file.
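For reference, a sketch of the complete recovery.conf with that line added (placeholders kept exactly as in the question):

standby_mode = 'on'
primary_conninfo = 'host=<master_ip> port=5432 user=<uname> password=<password> sslmode=require sslcompression=0'
trigger_file = '/tmp/MasterNow'
recovery_target_timeline = 'latest'

Since recovery.conf is only read at server start, restart the standby after editing it so the setting takes effect.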
Upvotes: 1