J Young

Reputation: 755

Job with multiple containers never succeeds

I'm running Kubernetes in a GKE cluster and need to run a DB migration script on every deploy. For staging this is easy: we have a permanent, separate MySQL service with its own volume. For production, however, we use Google Cloud SQL, which means the job has two containers: one for the migration itself, and the other for the Cloud SQL Proxy.

Because of this second container, the job always shows as 1 active when running kubectl describe jobs/migration, and I'm at a complete loss. I have tried re-ordering the containers to see if it checks one of them by default, but that made no difference, and I cannot see a way to either a) kill a container or b) check the status of just one container inside the Job.

Any ideas?

Upvotes: 4

Views: 7316

Answers (5)

ghitesh

Reputation: 170

Starting with Kubernetes 1.29, the native sidecar containers KEP is in beta: https://kubernetes.io/blog/2023/08/25/native-sidecar-containers/#what-are-sidecar-containers-in-1-28

What you can do is make your sidecar an initContainer, with the additional attribute restartPolicy: Always on that container.

Copied from the above link:

apiVersion: v1
kind: Pod
spec:
  initContainers:
  - name: secret-fetch
    image: secret-fetch:1.0
  - name: network-proxy
    image: network-proxy:1.0
    restartPolicy: Always
  containers:
  ...

What this does is start the network-proxy container as an init container that keeps running alongside the main containers; the pod/job will terminate once the main container(s) finish.
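
Adapted to the migration use case in the question, the Job could look roughly like the sketch below. This is a minimal sketch assuming Kubernetes 1.29+ with native sidecars; the image names, the migrate command, and the PROJECT:REGION:INSTANCE connection name are placeholders, and the proxy flags follow the v2 Cloud SQL Auth Proxy:

apiVersion: batch/v1
kind: Job
metadata:
  name: migration
spec:
  template:
    spec:
      restartPolicy: Never
      initContainers:
      # Native sidecar: restartPolicy: Always keeps the proxy running
      # alongside the main container without blocking Job completion.
      - name: cloud-sql-proxy
        image: gcr.io/cloud-sql-connectors/cloud-sql-proxy:2.11.0
        args: ["--port=3306", "PROJECT:REGION:INSTANCE"]
        restartPolicy: Always
      containers:
      # The Job completes once this container exits successfully.
      - name: migrate
        image: my-app:latest
        command: ["./migrate", "up"]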

Upvotes: 0

Chris Stryczynski

Reputation: 33891

The reason is that the cloud-sql-proxy container/process never terminates.

One possible workaround is to move the cloud-sql-proxy into its own Deployment and put a Service in front of it. Your Job is then no longer responsible for running the long-running cloud-sql-proxy, so it can terminate / complete.
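
A rough sketch of that split (the labels, image, and PROJECT:REGION:INSTANCE connection name are placeholders, not a verified config):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: cloud-sql-proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cloud-sql-proxy
  template:
    metadata:
      labels:
        app: cloud-sql-proxy
    spec:
      containers:
      - name: cloud-sql-proxy
        image: gcr.io/cloud-sql-connectors/cloud-sql-proxy:2.11.0
        # Listen on all interfaces so other pods can reach the proxy
        # through the Service (by default it binds to 127.0.0.1).
        args: ["--address=0.0.0.0", "--port=3306", "PROJECT:REGION:INSTANCE"]
        ports:
        - containerPort: 3306
---
apiVersion: v1
kind: Service
metadata:
  name: cloud-sql-proxy
spec:
  selector:
    app: cloud-sql-proxy
  ports:
  - port: 3306
    targetPort: 3306

One trade-off to note: unlike a per-pod sidecar listening on localhost, a proxy behind a Service is reachable from other pods in the cluster, so you may want a NetworkPolicy or similar restriction in front of it.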

Upvotes: 1

Oleksii Donoha

Reputation: 41

I know it's a year too late, but best practice would be to run a single cloudsql proxy service for all of the app's needs, and then configure DB access in the app's image to use this service as the DB hostname.

This way you will not need to put a cloudsql proxy container into every pod that uses the DB.
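
For example, assuming the shared proxy sits behind a Service named cloud-sql-proxy (a hypothetical name), the migration container in the Job would just point its DB host at that Service. The env var names below are illustrative and depend on how your app reads its config:

      containers:
      - name: migrate
        image: my-app:latest
        command: ["./migrate", "up"]
        env:
        # Cluster DNS resolves the Service name to the shared proxy.
        - name: DB_HOST
          value: "cloud-sql-proxy"
        - name: DB_PORT
          value: "3306"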

Upvotes: 3

iamnat

Reputation: 4166

You haven't posted enough details about your specific problem. But I'm taking a guess based on experience.

TL;DR: Move your containers into separate jobs if they are independent.

--

Kubernetes Jobs keep restarting their pods until the job succeeds. A Kubernetes Job will succeed only if every container within the pod succeeds.

This means that your containers should return in a restart-proof (idempotent) way. Once a container has run successfully, it should still return success if it runs again. Otherwise, say container1 succeeds and container2 fails: the Job restarts, and this time container1 fails (because its work has already been done). Hence, the Job keeps restarting.
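
To illustrate the TL;DR, two independent steps would become two single-container Jobs, each of which can succeed or retry on its own (a minimal sketch; the names, images, and commands are placeholders):

apiVersion: batch/v1
kind: Job
metadata:
  name: migrate-schema
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: migrate-schema
        image: my-app:latest
        command: ["./migrate", "up"]
---
apiVersion: batch/v1
kind: Job
metadata:
  name: seed-data
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: seed-data
        image: my-app:latest
        command: ["./seed"]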

Upvotes: 0

pagid

Reputation: 13867

Each Pod can be configured with an init container, which seems to be a good fit for your issue. So instead of having a Pod with two containers which have to run permanently, you could rather define an init container to do your migration upfront. E.g. like this:

apiVersion: v1
kind: Pod
metadata:
  name: init-container
  annotations:
    pod.beta.kubernetes.io/init-containers: '[
        {
            "name": "migrate",
            "image": "application:version",
            "command": ["migrate up"],
        }
    ]'
spec:
  containers:
  - name: application
    image: application:version
    ports:
    - containerPort: 80
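
The annotation syntax above dates from when init containers were still in beta; on current Kubernetes versions the same Pod is expressed directly in the spec (a sketch using the same placeholder image and command):

apiVersion: v1
kind: Pod
metadata:
  name: init-container
spec:
  initContainers:
  # Runs to completion before the main container starts.
  - name: migrate
    image: application:version
    command: ["migrate", "up"]
  containers:
  - name: application
    image: application:version
    ports:
    - containerPort: 80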

Upvotes: 0
