Shilpa

Reputation: 101

Kubernetes provisioning PVC from GCE Persistent disk volume shows error

I am using a GCE cluster with 2 nodes, which I set up using kubeadm. Now I want to set up a persistent volume for PostgreSQL to be deployed. I created a PVC and PV with a storageClass, and also created a 10 GB disk named postgres in the same project. I am attaching the scripts for the PVC, PV, and Deployment below. I am also using a service account that has access to the disks.
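
For reference, I created the disk with a command along these lines (the zone and type here are the ones my PV and StorageClass below point at):

# create the 10 GB GCE persistent disk named "postgres"
gcloud compute disks create postgres --size=10GB --type=pd-standard --zone=us-central1-a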

1.Deployment.yml

apiVersion: apps/v1
kind: Deployment 
metadata:
  name: kyc-postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - image: "postgres:9.6.2"
        name: postgres
        ports:
        - containerPort: 5432
          name: postgres
        volumeMounts:
        - name: postgres-storage
          mountPath: /var/lib/postgresql/db-data
      volumes:
      - name: postgres-storage
        persistentVolumeClaim:
          claimName: kyc-postgres-pvc

2.PersistentVolumeClaim.yml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: kyc-postgres-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: standard

3.PersistentVolume.yml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: kyc-postgres-pv
  annotations:
    kubernetes.io/createdby: gce-pd-dynamic-provisioner
    pv.kubernetes.io/bound-by-controller: "yes"
    pv.kubernetes.io/provisioned-by: kubernetes.io/gce-pd
  finalizers:
  - kubernetes.io/pv-protection
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 5Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: kyc-postgres-pvc
    namespace: default
  gcePersistentDisk:
    fsType: NTFS
    pdName: postgres
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: failure-domain.beta.kubernetes.io/zone
          operator: In
          values:
          - us-central1-a
        - key: failure-domain.beta.kubernetes.io/region
          operator: In
          values:
          - us-central1-a
  persistentVolumeReclaimPolicy: Delete
  storageClassName: standard
  volumeMode: Filesystem
status:
  phase: Bound

4.StorageClass.yml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  zone: us-central1-a

Now when I create these volumes and the deployment, the pod does not start properly. I get the following error when I try to create the deployment:

Failed to get GCE GCECloudProvider with error <nil>

Also, I am attaching the output of kubectl get sc:

NAME       PROVISIONER            RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
standard   kubernetes.io/gce-pd   Delete          Immediate           false                  10m

Can someone help me with this? Thanks in advance for your time. If I have missed anything, or over- or under-emphasised a specific point, let me know in the comments.

Upvotes: 0

Views: 1076

Answers (2)

Arghya Sadhu

Reputation: 44579

Using the GCECloudProvider in Kubernetes outside of Google Kubernetes Engine has the following prerequisites:

  1. The VM needs to run with a service account that has the right to provision disks. Info on how to run a VM with a service account can be found here.

  2. The kubelet needs to run with the argument --cloud-provider=gce. For this, the KUBELET_KUBECONFIG_ARGS in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf has to be edited. The kubelet can then be restarted with sudo systemctl restart kubelet

  3. The Kubernetes cloud-config file needs to be configured. The file can be found at /etc/kubernetes/cloud-config and the following content is enough to get the cloud provider to work:

    [Global]
    
    project-id = "<google-project-id>"
    
  4. Kubeadm needs to have GCE configured as its cloud provider. However, the nodeName has to be changed. Edit the config file and upload it to the cluster via kubeadm config upload from-file (see the sketch below this list):

    cloudProvider: gce
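
Taken together, here is a minimal sketch of the kubeadm side for newer kubeadm versions, where the legacy cloudProvider field was replaced by per-component extraArgs (the cloud-config path is the one from step 3; adjust to your setup):

apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  extraArgs:
    # enable the in-tree GCE cloud provider on the API server
    cloud-provider: gce
    cloud-config: /etc/kubernetes/cloud-config
controllerManager:
  extraArgs:
    # the controller-manager performs the actual disk attach/detach
    cloud-provider: gce
    cloud-config: /etc/kubernetes/cloud-config

On the node side this pairs with step 2, i.e. the --cloud-provider=gce flag added to the kubelet arguments in the systemd drop-in.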
    

Upvotes: 1

Jonas

Reputation: 128837

Your PersistentVolumeClaim does not specify a storageClassName, so I suppose you may want to use the default StorageClass. When using a default StorageClass, you don't need to create a PersistentVolume resource; one will be provisioned dynamically from Google Cloud Platform. (Or is there a specific reason you don't want to use the default StorageClass?)
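
If you go that route, something like this should be enough (a sketch, assuming your cluster actually has a default StorageClass; kubectl get sc marks it with (default)):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: kyc-postgres-pvc
spec:
  # no storageClassName: the default StorageClass is used, and a matching
  # PersistentVolume plus the underlying disk are provisioned dynamically
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi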

Upvotes: 1
