Reputation: 2863
I was running a cluster on Kubernetes 1.0, and I had a few containers I wanted to run periodically as sidecar containers in a pod, usually for things like pushing or pulling backups. I did this by building a pod with the container that had the data I wanted to back up plus a sidecar container that backed it up. The sidecar ran a basic bash script that would execute the backup command, sleep for however long I wanted to wait between backups (say 15 minutes), and finally exit with a 0 status code.
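As an illustration only (the backup command and interval here are placeholders, not my actual script), the sidecar amounted to something like:

```bash
#!/bin/bash
# Run one backup, wait out the interval, then exit cleanly so the
# restart policy launches the container again.
/opt/backup/push-backup.sh   # placeholder for the real backup command
sleep 900                    # wait 15 minutes before exiting
exit 0                       # clean exit; kubelet restarts the container
```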
In 1.0, this worked like a charm. My backup containers were simple and not tied to running as daemons; each could be executed almost as a standalone command and work as expected, while the restart policy kept bringing the container back up, effectively running it in a loop.
After upgrading to 1.1, I noticed these pods all kept getting put into a CrashLoopBackOff state, which meant that their restarts got delayed. This would have been fine for the sidecar container, but the container producing data was also unavailable during this time, which surprised me.
Is there some way I can signal that a pod being regularly restarted is not a crash loop, but is happening by design? Or is the only way to solve this to turn the sidecar container into a daemon that never exits?
Upvotes: 2
Views: 3434
Reputation: 18200
Is there some way I can signal that a pod being regularly restarted is not a crash loop, but is happening by design?
Not that I know of.
Or is the only way to solve this to turn the sidecar container into a daemon that never exits?
This would be my suggested solution.
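As a rough sketch of that approach, reusing the placeholder backup command from the question, the script would simply loop instead of exiting, so the container never terminates and never triggers CrashLoopBackOff:

```bash
#!/bin/bash
# Sidecar rewritten as a daemon: run the backup on an interval forever
# instead of exiting after each pass.
while true; do
  /opt/backup/push-backup.sh   # placeholder for the real backup command
  sleep 900                    # wait 15 minutes between backups
done
```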
Upvotes: 2