Reputation: 18820
I'm having trouble creating a persistent volume that I can use from different pods (one writing, another reading).
I tried to use gcePersistentDisk directly in the pod spec, like in the example on the k8s page (plus readOnly):
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: gcr.io/google_containers/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
      readOnly: true
  volumes:
  - name: test-volume
    gcePersistentDisk:
      pdName: my-data-disk
      fsType: ext4
      readOnly: true
Then the second pod spec is exactly the same except for the readOnly flags... but I got a NoDiskConflict error.
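(For clarity, that means the writer pod's volume section was just the same stanza with readOnly dropped:)

volumes:
- name: test-volume
  gcePersistentDisk:
    pdName: my-data-disk
    fsType: ext4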
The second approach is to use a PersistentVolume and a PersistentVolumeClaim, like this:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-standard
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  gcePersistentDisk:
    fsType: ext4
    pdName: data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-standard-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
But now I get an error telling me:
MountVolume.MountDevice failed for volume "kubernetes.io/gce-pd/xxx" (spec.Name: "yyy") pod "6ae34476-6197-11e7-9da5-42010a840186" (UID: "6ae34476-6197-11e7-9da5-42010a840186") with: mount failed: exit status 32
Mounting command: mount
Mounting arguments: /dev/disk/by-id/google-gke-cluster-xxx /var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/gke-cluster-xxx [ro]
Output: mount: wrong fs type, bad option, bad superblock on /dev/sdb, missing codepage or helper program, or other error
In some cases useful info is found in syslog - try dmesg | tail or so.
Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "default"/"my-deployment". list of unattached/unmounted volumes=[data]
So what is the correct way of using a GCE disk with multiple pods?
PS: Kubernetes 1.6.6
Upvotes: 5
Views: 8072
Reputation: 675
I guess you should be able to do this by using a PV and a PVC. Let's take an example where you have one PV and one PVC. I am not sure about multi-worker-node architectures, since I am a minikube user; have a look here: https://github.com/kubernetes/kubernetes/issues/60903
Now you have a PV and a PVC, both with status Bound. Next you use the claim in the pod/deployment resource definitions, referencing the same claim name (claimName) in both pods. This will attach the volume to both pods.
---
# PV definition
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv0
spec:
  storageClassName: standard
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 5Gi
  hostPath:
    path: /path_on_node
    type: DirectoryOrCreate
---
# PVC definition
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim0
spec:
  storageClassName: standard
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  volumeName: pv0
---
# pod1
    volumeMounts:
    - mountPath: /container_path
      name: vol0
      subPath: sub_path_on_pv   # e.g. pod1, so on-disk data is written at /path_on_node/pod1
  volumes:
  - name: vol0
    persistentVolumeClaim:
      claimName: claim0
---
# pod2
    volumeMounts:
    - mountPath: /container_path
      name: vol2
      subPath: sub_path_on_pv   # e.g. pod2, so on-disk data is written at /path_on_node/pod2
  volumes:
  - name: vol2
    persistentVolumeClaim:
      claimName: claim0
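Putting those fragments together, a complete pod manifest would look roughly like this (the busybox image and sleep command are just placeholders to keep the pod running):

apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  containers:
  - name: app
    image: busybox              # placeholder image
    command: ["sleep", "3600"]  # placeholder command to keep the pod alive
    volumeMounts:
    - mountPath: /container_path
      name: vol0
      subPath: pod1             # data lands at /path_on_node/pod1
  volumes:
  - name: vol0
    persistentVolumeClaim:
      claimName: claim0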
Upvotes: 0
Reputation: 8288
Instead of ReadWriteMany, can you use ReadOnlyMany?
Access modes:
ReadWriteOnce – the volume can be mounted as read-write by a single node
ReadOnlyMany – the volume can be mounted read-only by many nodes
ReadWriteMany – the volume can be mounted as read-write by many nodes
GCE persistent disks do not support ReadWriteMany.
Here is the list of providers and supported access modes:
https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes
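A sketch of what that could look like with the names from your question (assuming both pods only need to read; a ReadOnlyMany volume is mounted read-only everywhere):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-standard
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadOnlyMany
  gcePersistentDisk:
    pdName: data
    fsType: ext4
    readOnly: true
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-standard-claim
spec:
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 1Gi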
Upvotes: 2
Reputation: 8827
According to https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes, GCE disks do not support ReadWriteMany. I am not sure if this explains the issue, but I would advise you to try another compatible volume type.
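For example, NFS is listed there as supporting ReadWriteMany; a minimal PV sketch (server and path are placeholders):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-nfs
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: nfs-server.example.com   # placeholder NFS server
    path: /exports/data              # placeholder export path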
Upvotes: 5