Gowmi

Reputation: 611

EFS persistent volume claim failed

I'm trying to deploy an nginx app with volume mounts on an EFS file system. After deploying the manifest YAML file, my pod is in a pending state, even though I verified that the PV/PVC are already bound. Here are the logs.

Here are the PVC and Deployment YAML files:

apiVersion: apps/v1
kind: Deployment
metadata: 
  name: nginx
spec: 
  replicas: 1
  selector: 
    matchLabels: 
      app: nginx
  template: 
    metadata: 
      labels: 
        app: nginx
    spec:
      containers: 
        - image: "nginx:latest"
          name: nginx
          ports: 
            - containerPort: 80
              name: nginx
         
          volumeMounts: 
            - mountPath: "/etc/localtime"
              name: nginx-localtime
            - mountPath: "/var/log/nginx/"
              name: nginx-log
            - mountPath: "/var/log/cache/"
              name: nginx-cache
         
      volumes: 
        - name: nginx-localtime
          persistentVolumeClaim: 
            claimName: nginx-localtime
        - name: nginx-log
          persistentVolumeClaim: 
            claimName: nginx-log
        - name: nginx-cache
          persistentVolumeClaim: 
            claimName: nginx-cache
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nginx-localtime
spec:
  storageClassName: efs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Mi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nginx-log
spec:
  storageClassName: efs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Mi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nginx-cache
spec:
  storageClassName: efs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Mi

[root@ip-10-1-2-3 nginx]# kubectl get pvc
NAME              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nginx-cache       Bound    pvc-d35df958-1288-4028-bf38-d880ec09824f   10Mi       RWX            efs            49s
nginx-localtime   Bound    pvc-ec5b15c0-a9d1-468a-989c-48a18332bbbb   10Mi       RWX            efs            49s
nginx-log         Bound    pvc-c84f1a46-ceba-4180-a2ce-95e19d3d9614   10Mi       RWX            efs            49s

[root@ip-10-1-2-3 nginx]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                     STORAGECLASS   REASON   AGE
pvc-c84f1a46-ceba-4180-a2ce-95e19d3d9614   10Mi       RWX            Delete           Bound    default/nginx-log         efs                     54s
pvc-d35df958-1288-4028-bf38-d880ec09824f   10Mi       RWX            Delete           Bound    default/nginx-cache       efs                     53s
pvc-ec5b15c0-a9d1-468a-989c-48a18332bbbb   10Mi       RWX            Delete           Bound    default/nginx-localtime   efs                     54s

Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  36s (x2 over 36s)  default-scheduler  0/2 nodes are available: 2 persistentvolumeclaim "nginx-localtime" not found.
  Normal   Scheduled         34s                default-scheduler  Successfully assigned default/nginx-5c9777db9b-mjc75 to ip-10-1-2-3.eu-central-1.compute.internal
  Normal   Pulled            31s                kubelet            Successfully pulled image "nginx:latest" in 1.726030163s
  Normal   Pulled            29s                kubelet            Successfully pulled image "nginx:latest" in 1.898012878s
  Normal   Pulling           15s (x3 over 33s)  kubelet            Pulling image "nginx:latest"
  Normal   Created           13s (x3 over 31s)  kubelet            Created container nginx
  Normal   Started           13s (x3 over 31s)  kubelet            Started container nginx
  Normal   Pulled            13s                kubelet            Successfully pulled image "nginx:latest" in 1.580634471s
  Warning  BackOff           12s (x3 over 27s)  kubelet            Back-off restarting failed container
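
From the events above, scheduling eventually succeeded and the container itself is crash-looping, so the container logs may point at the actual failure. A sketch of the usual next diagnostic steps (the pod name is taken from the events above and will differ on each run):

```shell
# Logs from the crashing container (pod name from the events above)
kubectl logs nginx-5c9777db9b-mjc75

# If the container has already restarted, show the previous attempt's logs
kubectl logs nginx-5c9777db9b-mjc75 --previous

# Full status and event detail for the pod
kubectl describe pod nginx-5c9777db9b-mjc75
```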

I'm not able to understand why my nginx pod is still not coming up after the PVCs are bound. Could someone please help me with this?

Upvotes: 0

Views: 755

Answers (1)

Rahul Agrawal

Reputation: 633

PV and PVC have a one-to-one mapping; only one PVC can be bound to one PV. First check that you have 3 PVs for the 3 PVCs you are trying to bind:

kubectl get pv

Also note that once you have deleted a PVC, its corresponding PV does not become available for binding again, as one might otherwise expect. In my experience you have to create a new PV and PVC. You can also specify the name of the PV to bind to in your PVC YAML, like below.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: <your-pvc-name>
spec:
  volumeName: <your-pv-name>
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 10Gi
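
For completeness, the PV side of such a static binding might look like the sketch below. This assumes the AWS EFS CSI driver; the PV name and the EFS filesystem ID in `volumeHandle` are placeholders you would replace with your own values:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: <your-pv-name>
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-12345678   # placeholder: your EFS filesystem ID
```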

Upvotes: 1
