cheslijones

Reputation: 9194

Skaffold 1.4.0: "Skipping deploy due to sync error: copying files:"

Using Skaffold 1.4.0 with minikube.

I just recently started receiving this error while working on a Django API. Any time I save after making a change, I get:

WARN[0234] Skipping deploy due to sync error: copying files: Running [kubectl --context minikube exec api-deployment-6946878554-n7lc2 --namespace default -c api -i -- tar xmf - -C / --no-same-owner]
 - stdout: 
 - stderr: error: unable to upgrade connection: container not found ("api")
: exit status 1 

Not sure what has changed to cause this. I have to CTRL + C to shut down Skaffold and restart it to get the changes reflected.

This is my skaffold.yaml:

apiVersion: skaffold/v1beta15
kind: Config
build:
  local:
    push: false
  artifacts:
    - image: postgres
      context: postgres
      docker:
        dockerfile: Dockerfile.dev
      sync:
        manual:
          - src: "***/*.sql"
            dest: .
    - image: testappacr.azurecr.io/test-app-api
      context: api
      docker:
        dockerfile: Dockerfile.dev
      sync:
        manual:
          - src: "***/*.py"
            dest: .
deploy:
  kubectl:
    manifests:
      - manifests/dev-ingress.yaml 
      - manifests/postgres.yaml
      - manifests/api.yaml

Also the api.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      component: api
  template:
    metadata:
      labels:
        component: api
    spec:
      containers:
        - name: api
          image: testappacr.azurecr.io/test-app-api
          ports:
            - containerPort: 5000
          env:
            - name: PGUSER
              valueFrom:
                secretKeyRef:
                  name: test-app-secrets
                  key: PGUSER
            - name: PGHOST
              value: postgres-cluster-ip-service
            - name: PGPORT
              value: "1423"
            - name: PGDATABASE
              valueFrom:
                secretKeyRef:
                  name: test-app-secrets
                  key: PGDATABASE
            - name: PGPASSWORD
              valueFrom:
                secretKeyRef:
                  name: test-app-secrets
                  key: PGPASSWORD
            - name: SECRET_KEY
              valueFrom:
                secretKeyRef:
                  name: test-app-secrets
                  key: SECRET_KEY
            - name: DEBUG
              valueFrom:
                secretKeyRef:
                  name: test-app-secrets
                  key: DEBUG
          livenessProbe:
            tcpSocket:
              port: 5000
            initialDelaySeconds: 2
            periodSeconds: 2
          readinessProbe:
            tcpSocket:
              port: 5000
            initialDelaySeconds: 2
            periodSeconds: 2
          volumeMounts:
          - mountPath: "/mnt/test-app"
            name: file-storage
      volumes:
        - name: file-storage
          persistentVolumeClaim:
            claimName: file-storage
---
apiVersion: v1
kind: Service
metadata:
  name: api-cluster-ip-service
spec:
  type: ClusterIP
  selector:
    component: api
  ports:
    - port: 5000
      targetPort: 5000

Any suggestions about what might be going on here?

Upvotes: 3

Views: 1518

Answers (2)

Venryx

Reputation: 18019

For others completely new to Kubernetes, note that this error can occur simply because your server script/process exited before the Skaffold sync was attempted. Kubernetes interprets your process ending as the container having finished (or failed) and restarts it, so Skaffold can no longer exec into the old container to apply the file sync.

More info here: How can I keep a container running on Kubernetes?

So the solution is to keep your process alive, e.g. by having it run a sleep-and-log loop, even if it has no real work left to do (for example, a test script that would otherwise just log and exit).

NodeJS example:

console.log("Test server-script started!");

// Loop forever to keep the process alive, so Kubernetes doesn't treat the
// container as finished and Skaffold's sync can still exec into it.
// (Top-level await requires an ES module: a .mjs file or "type": "module".)
while (true) {
    console.log("Keep-alive loop. Time:", Date.now());
    await new Promise(resolve => setTimeout(resolve, 1000));
}
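
If you'd rather not modify the script itself, the same idea can live in the pod spec instead. A minimal sketch (the shell one-liner and the node server.js entrypoint are assumptions; adapt them to your own image):

# Hypothetical Deployment snippet: run the script, then sleep forever so the
# container stays up and Skaffold can still exec into it.
containers:
  - name: api
    image: testappacr.azurecr.io/test-app-api
    command: ["sh", "-c", "node server.js; sleep infinity"]

Note that sleep infinity needs a sleep implementation that supports it (GNU coreutils does; some minimal images may not).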

Upvotes: 0

cheslijones

Reputation: 9194

Figured out that the issue was caused by the readinessProbe and livenessProbe in my api.yaml:

          livenessProbe:
            tcpSocket:
              port: 5000
            initialDelaySeconds: 2
            periodSeconds: 2
          readinessProbe:
            tcpSocket:
              port: 5000
            initialDelaySeconds: 2
            periodSeconds: 2

After removing them, I no longer get this error.

However, the reason I had them there in the first place was that Skaffold would sometimes boot the database after the API, causing the API to fail. So that is the trade-off in my case: without the probes, the DB occasionally comes up after the API and the API fails; with them, I more frequently hit the sync error this question is about.
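
A possible middle ground, which I haven't verified against this exact setup (the timing values below are guesses), would be to keep the probes but relax them, so the kubelet doesn't restart the container so aggressively while Skaffold is syncing:

          livenessProbe:
            tcpSocket:
              port: 5000
            initialDelaySeconds: 15  # give the API time to come up after the DB
            periodSeconds: 10
            failureThreshold: 3      # tolerate brief blips instead of restarting immediately
          readinessProbe:
            tcpSocket:
              port: 5000
            initialDelaySeconds: 5
            periodSeconds: 5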

Upvotes: 2
