Prog_G

Reputation: 1615

CrunchyData Postgres operator Pod always in pending state

I am trying to set up a Postgres cluster using the CrunchyData Postgres operator. I am facing an issue where the backrest-shared-repo pod is always in the Pending state.

NAME                      STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
postgres-ha-1             Pending                                                     12m
postgres-ha-1-pgbr-repo   Pending                                                     12m

While debugging I found that the PersistentVolumeClaims are also in the Pending state. The events of the PVC are below:

no persistent volumes available for this claim and no storage class is set

PVC.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: "2022-07-08T10:28:48Z"
  finalizers:
  - kubernetes.io/pvc-protection
  labels:
    pg-cluster: postgres-ha-1
    vendor: crunchydata
  name: postgres-ha-1-pgbr-repo
  namespace: pgo
  resourceVersion: "1786569"
  uid: 6f80d516-320c-490e-ad6a-83400ea998a4
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 3G
  volumeMode: Filesystem
status:
  phase: Pending

Below is the storage configuration in postgres-operator.yml:

backrest_storage: "hostpathstorage"
backup_storage: "hostpathstorage"
primary_storage: "hostpathstorage"
replica_storage: "hostpathstorage"
pgadmin_storage: "hostpathstorage"
wal_storage: ""
storage1_name: "default"
storage1_access_mode: "ReadWriteOnce"
storage1_size: "1G"
storage1_type: "dynamic"
storage2_name: "hostpathstorage"
storage2_access_mode: "ReadWriteMany"
storage2_size: "3G"
storage2_type: "create"
storage3_name: "nfsstorage"
storage3_access_mode: "ReadWriteMany"
storage3_size: "1G"
storage3_type: "create"
storage3_supplemental_groups: "65534"
storage4_name: "nfsstoragered"
storage4_access_mode: "ReadWriteMany"
storage4_size: "1G"
storage4_match_labels: "crunchyzone=red"
storage4_type: "create"
storage4_supplemental_groups: "65534"
storage5_name: "storageos"
storage5_access_mode: "ReadWriteOnce"
storage5_size: "5Gi"
storage5_type: "dynamic"
storage5_class: "fast"
storage6_name: "primarysite"
storage6_access_mode: "ReadWriteOnce"
storage6_size: "4G"
storage6_type: "dynamic"
storage6_class: "primarysite"
storage7_name: "alternatesite"
storage7_access_mode: "ReadWriteOnce"
storage7_size: "4G"
storage7_type: "dynamic"
storage7_class: "alternatesite"
storage8_name: "gce"
storage8_access_mode: "ReadWriteOnce"
storage8_size: "300M"
storage8_type: "dynamic"
storage8_class: "standard"
storage9_name: "rook"
storage9_access_mode: "ReadWriteOnce"
storage9_size: "1Gi"
storage9_type: "dynamic"
storage9_class: "rook-ceph-block"

Can anyone help me solve this issue?

Upvotes: 1

Views: 660

Answers (1)

zer0

Reputation: 2919

You need to create a PersistentVolume in the cluster before you can use it with a PersistentVolumeClaim. The error simply means there are no PVs that can be matched with your PVC, and no StorageClass is set on the claim for dynamic provisioning.
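You can confirm this from the cluster itself; a quick check, assuming the claim lives in the `pgo` namespace as shown in your PVC manifest:

    # List existing PersistentVolumes; empty output means there is nothing for the claim to bind to
    kubectl get pv

    # Inspect the pending claim and its events
    kubectl describe pvc postgres-ha-1-pgbr-repo -n pgo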

Here's the official guide on how to create PersistentVolumes. Just ensure that the specifications you set on the PersistentVolume match the PersistentVolumeClaim, otherwise the claim will not be bound.

You can use a hostPath type PV, which simply uses a directory on your worker node to store the data. That is enough to verify the setup works functionally. Later you can move towards a more central solution with a proper networked volume store (details in the docs here).
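A minimal sketch of such a PV that would satisfy the `postgres-ha-1-pgbr-repo` claim from your question; the name and the path `/mnt/data/pgbr-repo` are assumptions (pick whatever suits your node), while the capacity and access mode are taken from your PVC spec:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: postgres-ha-1-pgbr-repo-pv   # hypothetical name
      labels:
        vendor: crunchydata
    spec:
      capacity:
        storage: 3G                      # must cover the PVC request (3G)
      accessModes:
      - ReadWriteMany                    # must match the PVC's access mode
      persistentVolumeReclaimPolicy: Retain
      hostPath:
        path: /mnt/data/pgbr-repo        # assumed directory on the worker node
        type: DirectoryOrCreate

Apply it with `kubectl apply -f pv.yaml`; once a matching PV exists, the PVC should go from Pending to Bound and the pod should then be scheduled.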

Upvotes: 2
