I have set up a 3-pod PostgreSQL cluster in a test Kubernetes cluster.
When I kill the pod running the master, one of the replicas becomes the new master, Kubernetes creates a replacement pod, that pod connects to the new master, and replication is restored.
However, the third pod connects to the new master and cannot find the WAL files it needs there:
2024-03-27 08:41:40.004 GMT [3473] FATAL: could not receive data from WAL stream: ERROR: requested WAL segment 000000040000000000000013 has already been removed
2024-03-27 08:41:40.004 GMT [28] LOG: waiting for WAL to become available at 0/13000630
2024-03-27 08:41:45.006 GMT [3482] LOG: started streaming WAL from primary at 0/13000000 on timeline 4
2024-03-27 08:41:45.006 GMT [3482] FATAL: could not receive data from WAL stream: ERROR: requested WAL segment 000000040000000000000013 has already been removed
2024-03-27 08:41:45.007 GMT [28] LOG: waiting for WAL to become available at 0/13000630
2024-03-27 08:41:50.008 GMT [3483] LOG: started streaming WAL from primary at 0/13000000 on timeline 4
2024-03-27 08:41:50.008 GMT [3483] FATAL: could not receive data from WAL stream: ERROR: requested WAL segment 000000040000000000000013 has already been removed
2024-03-27 08:41:50.009 GMT [28] LOG: waiting for WAL to become available at 0/13000630
I'm not sure whether this is a bug or not. What is the best way to change this behavior so that, after the master crashes, the lagging replica can catch up and resume replication instead of failing like this?
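For context, the error means the new primary recycled segment 000000040000000000000013 before the old replica reconnected: by default a primary keeps no WAL around for disconnected standbys, and physical replication slots are not carried over to a promoted replica on failover. Below is a minimal sketch of two common mitigations, assuming PostgreSQL 13 or later and plain streaming replication with no operator already managing slots; the slot name `replica_1` and the `1GB` value are illustrative placeholders, not values from this cluster:

```sql
-- Option 1: keep a fixed amount of recent WAL on every potential primary,
-- so a replica that falls behind during failover can still catch up.
-- Size it to cover your expected failover/reconnect window.
ALTER SYSTEM SET wal_keep_size = '1GB';
SELECT pg_reload_conf();

-- Option 2: a physical replication slot makes the primary retain WAL
-- until the subscribed replica has consumed it. Slots must be recreated
-- on the new primary after promotion; the name must match the replica's
-- primary_slot_name setting.
SELECT pg_create_physical_replication_slot('replica_1');
```

A slot guarantees the replica never loses its place but can fill the primary's disk if the replica stays down, so `max_slot_wal_keep_size` is worth setting alongside it; WAL archiving with a `restore_command` on the replicas is another route. Once a needed segment is already gone, as in the log above, the lagging replica has to be re-seeded, e.g. with `pg_basebackup`.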