Reputation: 597
I am trying to mount a GCE persistent disk in a Kubernetes pod via the Deployment object YAML. I am observing that as long as the node (on which the pod resides) is in the same zone as the persistent disk (say us-central1-a), the mount succeeds. However, if they are in different zones (say the node in us-central1-a and the disk in us-central1-b), then the mount times out.
Is this behavior expected? I could not find anything in the documentation that confirms it.
http://kubernetes.io/docs/user-guide/volumes/#gcePersistentDisk
We are using multi-zone clusters, which makes it cumbersome to attach the right disk.
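For reference, a minimal sketch of the kind of volume definition this involves in the Deployment's pod template; names such as my-app and my-data-disk are placeholders, not taken from the actual setup:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                 # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: app
        image: gcr.io/my-project/my-app:latest   # placeholder image
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        gcePersistentDisk:
          pdName: my-data-disk   # pre-created GCE PD, e.g. in us-central1-a
          fsType: ext4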
Upvotes: 4
Views: 1453
Reputation: 239
You need to schedule your pods in the same zone as the PD. To do that, use nodeSelector or nodeAffinity with requiredDuringSchedulingIgnoredDuringExecution, as in the sketch below.
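A sketch of the nodeAffinity form in the pod spec, assuming the disk is in us-central1-a (adjust the zone value to match your PD; on newer clusters the label is topology.kubernetes.io/zone):

spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: failure-domain.beta.kubernetes.io/zone
            operator: In
            values:
            - us-central1-a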
If you are using dynamically provisioned volumes, provisioning needs to be topology-aware. This is accomplished by setting volumeBindingMode on the StorageClass to WaitForFirstConsumer (see the Kubernetes documentation on volume binding mode).
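A minimal StorageClass sketch; the name and the pd-ssd disk type are illustrative assumptions, not from the original answer:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: pd-ssd-topology-aware    # illustrative name
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd                   # illustrative disk type
# Delay volume binding until a consuming pod is scheduled, so the PD is
# provisioned in (or matched to) the zone of the node the pod lands on.
volumeBindingMode: WaitForFirstConsumer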
Upvotes: 0
Reputation: 391
You can use this nodeSelector:
nodeSelector:
  failure-domain.beta.kubernetes.io/zone: us-central1-b
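In a Deployment this belongs in the pod template, roughly as follows (sketch):

spec:
  template:
    spec:
      nodeSelector:
        failure-domain.beta.kubernetes.io/zone: us-central1-b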
Upvotes: 3
Reputation: 5662
GCE Persistent Disks are zonal resources, so a pod can only mount a PD that is in the same zone as the node it is scheduled on.
Upvotes: 1