DarVar

Reputation: 18124

Deployment and PVCs

I have the following PersistentVolumeClaim:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 256Mi
  storageClassName: fask

and Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
        volumeMounts:
        - name: data
          mountPath: "/var/www/html"
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: nginx-pvc

If I run the Deployment with a single replica, my PV gets dynamically provisioned by the vSphere StorageClass.

However, if I run more than one replica, the volume fails to attach for the second pod:

AttachVolume.Attach failed for volume "pvc-8facf319-6b1a-11e8-935b-00505680b1b8" : Failed to add disk 'scsi0:1'.
Unable to mount volumes for pod "nginx-deployment-7886f48dcd-lzms8_default(b0e38764-6b1a-11e8-935b-00505680b1b8)": timeout expired waiting for volumes to attach or mount for pod "default"/"nginx-deployment-7886f48dcd-lzms8". list of unmounted volumes=[data]. list of unattached volumes=[data default-token-5q7kr]

Upvotes: 1

Views: 9649

Answers (3)

Alex

Reputation: 1011

A ReadWriteOnce PVC requires that all pods using it run on the same node. You need to add a nodeSelector to the Deployment.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      nodeSelector:
        kubernetes.io/hostname: nl-test-02
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
        volumeMounts:
        - name: data
          mountPath: "/var/www/html"
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: nginx-pvc

Upvotes: 0

afe

Reputation: 545

I know I'm very late to the party, but I don't agree with the accepted answer. It depends on what you are trying to achieve (as with most things in coding life).

StatefulSets with volumeClaimTemplates are useful when you need completely independent replicas that communicate with each other through some application-level mechanism while still existing as separate identities. I'm thinking of distributed databases such as Cassandra: different database nodes, one pod each, each with its own persistent storage, one PV each. The gossip mechanism in Cassandra keeps the data in sync across the volumes.

In my opinion this is an avoidable complication if you are using Kubernetes mainly for microservices and replicated applications. StatefulSets are a pain in the neck when you need to do rolling updates or upgrade your Kubernetes version, because they are not as easy to scale.

Deployments mount a single persistent volume no matter the number of replicas: 10 pods of the same Deployment will try to mount the same volume for both read and write operations. What you were struggling with is that most volume providers do not allow a volume to be mounted by several nodes at once. That is exactly what you were experiencing.

If, as your template suggests, you only need to expose a redundant website that shares the same sources across multiple pods in order to achieve rolling updates without downtime, you can go with a Deployment and a persistentVolumeClaim (not volumeClaimTemplates): you can mount the same volume on several pods of a Deployment, you only need to make sure all pods are scheduled to the same node. Pod affinity will do this job for you, as shown in the sketch below.
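
A minimal sketch of that approach, reusing the nginx-pvc claim and image from the question; the affinity term itself and the self-selecting app=nginx label are my assumptions, not part of the original question:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: nginx
            # All pods carrying app=nginx must land on the same node,
            # so the ReadWriteOnce volume only ever attaches to one node.
            topologyKey: kubernetes.io/hostname
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
        volumeMounts:
        - name: data
          mountPath: "/var/www/html"
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: nginx-pvc

The first replica can schedule on any node; every later replica is forced onto the node already running an app=nginx pod, so the volume never has to attach to a second node.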

Upvotes: 3

Const

Reputation: 6643

However, if I run more than one replica, the volume fails to attach for the second pod

You should then probably use a StatefulSet with volumeClaimTemplates instead of a Deployment with a single PersistentVolumeClaim.

In your case, every replica of the Deployment uses the same PersistentVolumeClaim (which is ReadWriteOnce and can't be attached a second time), while with volumeClaimTemplates a separate PVC is provisioned for each replica.
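
A minimal sketch of what that could look like, reusing the image, size, and storage class name from the question; the serviceName (a headless Service assumed to exist) and the StatefulSet name are illustrative:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nginx-statefulset
spec:
  serviceName: nginx
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
        volumeMounts:
        - name: data
          mountPath: "/var/www/html"
  # One PVC per replica is created from this template
  # (data-nginx-statefulset-0, data-nginx-statefulset-1, ...).
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 256Mi
      storageClassName: fask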

Upvotes: 9
