Rutnet

Reputation: 1673

Kubernetes Crash Loop Error, container won't run and can't see logs

I have a working Docker image which I am now trying to use on Kubernetes, but when I apply the deployment it never runs. It gets stuck in a crash loop error and I have no way of working out what the logs say because the container exits so quickly. I've included my deployment YAML file in case there is something obviously wrong.

Any help is appreciated.

apiVersion: v1
kind: Service
metadata:
  name: newapp
  labels:
    app: newapp
spec:
  ports:
    - port: 80
  selector:
    app: newapp
    tier: frontend
  type: LoadBalancer
---
apiVersion: apps/v1beta2 # for versions before 1.8.0 use apps/v1beta1
kind: Deployment
metadata:
  name: newapp
  labels:
    app: newapp
spec:
  selector:
    matchLabels:
      app: newapp
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: newapp
        tier: frontend
    spec:
      containers:
      - image: customwebimage
        name: newapp
        envFrom:
          - configMapRef:
              name: newapp-config
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: test123
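For reference, the ConfigMap and image pull Secret referenced in the manifest can be checked for existence before debugging further (names taken from the YAML above; a minimal sketch, not a fix):

kubectl get configmap newapp-config
kubectl get secret test123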

Upvotes: 0

Views: 714

Answers (1)

Rawkode

Reputation: 22592

You can view the logs from the previous (crashed) container by adding the -p flag:

kubectl logs -p pod-name
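Since the Pod name is generated by the Deployment, you can look it up first using the labels from the manifest in the question (a minimal sketch, assuming the app=newapp / tier=frontend labels are unchanged):

kubectl get pods -l app=newapp,tier=frontend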

I'd delete the Deployment's Pod and try this with a new Pod, which will run 5 times before entering CrashLoopBackOff.
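A sketch of that, reusing the label selector above (the Deployment's ReplicaSet recreates the Pod as soon as the old one is deleted):

# delete the current Pod; a fresh replacement is created automatically
kubectl delete pod -l app=newapp,tier=frontend

# watch the new Pod start and grab its logs before it enters CrashLoopBackOff
kubectl get pods -l app=newapp,tier=frontend -w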

If the error isn't happening during container runtime, then you can describe the Pod to check for scheduling / instantiation errors:

kubectl describe pod pod-name
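Given that the Deployment pulls a private image via the test123 secret and mounts env vars from the newapp-config ConfigMap, the Events section of the describe output is where problems like ImagePullBackOff or a missing ConfigMap would show up. The cluster events can also be listed directly, newest last (a sketch using standard kubectl sorting):

kubectl get events --sort-by=.metadata.creationTimestamp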

Upvotes: 1
