Reputation: 1
I’m running KeyDB v6.3.2 in a Kubernetes StatefulSet, and after an extended uptime (39+ days) the pod received SIGTERM and restarted, even though there was no manual shutdown or scaling event.
Here is the relevant log before termination:
1:signal-handler (1740559193) Received SIGTERM scheduling shutdown...
1:22:S 26 Feb 2025 08:39:53.711 # User requested shutdown...
1:22:S 26 Feb 2025 08:39:53.711 * Saving the final RDB snapshot before exiting.
1:22:S 26 Feb 2025 08:39:54.027 * DB saved on disk
1:22:S 26 Feb 2025 08:39:54.027 * Removing the pid file.
1:22:S 26 Feb 2025 08:39:54.029 # KeyDB is now ready to exit, bye bye...
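For reference, the epoch in the signal-handler line converts to the same moment recorded as finishedAt in the pod status below (GNU date):

date -u -d @1740559193
# Wed Feb 26 08:39:53 UTC 2025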
The pod's lastState shows:
lastState:
  terminated:
    containerID: containerd://8cb24568ab14bd0579427dd04505ad2fea2a0ea685ef63e4c64579b8f5f92888
    exitCode: 0
    finishedAt: "2025-02-26T08:39:54Z"
    reason: Completed
    startedAt: "2025-01-18T07:30:35Z"
restartCount: 2
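(The status above was pulled with something like the following; the pod name keydb-0 is a placeholder for the actual StatefulSet pod.)

kubectl get pod keydb-0 -n <namespace> -o jsonpath='{.status.containerStatuses[0].lastState}'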
The pod had been running for over a month before this restart.
What I’ve checked so far:
StatefulSet details:
Liveness Probe:
livenessProbe:
  exec:
    command:
      - sh
      - -c
      - /health/ping_liveness_local.sh 5
  failureThreshold: 5
  initialDelaySeconds: 20
  periodSeconds: 5
  timeoutSeconds: 6
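For reference, the liveness script can be exercised by hand like this (pod and container names are placeholders for my actual ones); a passing run now of course doesn't prove it never failed 5 times in a row before the restart:

kubectl exec -n <namespace> keydb-0 -c keydb -- sh -c '/health/ping_liveness_local.sh 5; echo "exit=$?"'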
Readiness Probe:
readinessProbe:
  exec:
    command:
      - sh
      - -c
      - /health/ping_readiness_local.sh 1
  failureThreshold: 5
  initialDelaySeconds: 20
  periodSeconds: 5
  timeoutSeconds: 2
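Other places the origin of the kill might still show up (pod name and namespace are placeholders; Kubernetes events are only retained for about an hour by default, so these may come up empty this long after the restart):

kubectl get events -n <namespace> --field-selector involvedObject.name=keydb-0 --sort-by=.lastTimestamp
# and on the node that hosted the pod:
journalctl -u kubelet --since "2025-02-26 08:30" --until "2025-02-26 08:45" | grep -iE 'keydb|unhealthy|killing|evict'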
Are there any KeyDB-specific conditions under which it could request its own shutdown? And are there any debugging steps to track down the origin of the SIGTERM? I've tried setting loglevel to debug/verbose, but that produced no additional information around the shutdown.
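One idea for catching the next occurrence, assuming I can get a privileged shell on the node and bpftrace is available there, is to log every SIGTERM together with the process that sent it:

# run on the node hosting the pod; prints receiver and sender of each SIGTERM
bpftrace -e 'tracepoint:signal:signal_generate /args->sig == 15/
{
  time("%H:%M:%S ");
  printf("SIGTERM -> pid %d (%s), sent by pid %d (%s)\n",
         args->pid, str(args->comm), pid, comm);
}'

If the sender turns out to be containerd-shim or the kubelet, that would point back at Kubernetes (probe failure, eviction, node drain) rather than at KeyDB itself.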
Upvotes: 0
Views: 10