I am trying to install Elasticsearch on Kubernetes using bitnami/elasticsearch. I use the following commands:
helm repo add bitnami https://charts.bitnami.com/bitnami
kubectl apply -f ./es-pv.yaml
helm install elasticsearch --set name=elasticsearch,master.replicas=3,data.persistence.size=6Gi,data.replicas=2,coordinating.replicas=1 bitnami/elasticsearch -n elasticsearch
This is what I get, when I check pods:
# kubectl get pods -n elasticsearch
NAME                                READY   STATUS     RESTARTS   AGE
elasticsearch-coordinating-only-0   0/1     Init:0/1   0          18m
elasticsearch-data-0                0/1     Running    6          18m
elasticsearch-data-1                0/1     Init:0/1   0          18m
elasticsearch-master-0              0/1     Init:0/1   0          18m
elasticsearch-master-1              0/1     Running    6          18m
elasticsearch-master-2              0/1     Init:0/1   0          18m
When I run kubectl describe pod on the elasticsearch-data and elasticsearch-master pods, they all show the same message:
0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.
es-pv.yaml describing PersistentVolumes:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elastic-master-pv
  labels:
    type: local
spec:
  storageClassName: ''
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    namespace: elasticsearch
    name: data-elasticsearch-master-0
  hostPath:
    path: "/usr/share/elasticsearch"
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node_name_1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elastic-master-pv-1
  labels:
    type: local
spec:
  storageClassName: ''
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    namespace: elasticsearch
    name: data-elasticsearch-master-1
  hostPath:
    path: "/usr/share/elasticsearch"
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node_name_0
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elastic-master-pv-2
  labels:
    type: local
spec:
  storageClassName: ''
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    namespace: elasticsearch
    name: data-elasticsearch-master-2
  hostPath:
    path: "/usr/share/elasticsearch"
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node_name_1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elastic-data-pv
  labels:
    type: local
spec:
  storageClassName: ''
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    namespace: elasticsearch
    name: data-elasticsearch-data-0
  hostPath:
    path: "/usr/share/elasticsearch"
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node_name_0
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elastic-data-pv-1
  labels:
    type: local
spec:
  storageClassName: ''
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    namespace: elasticsearch
    name: data-elasticsearch-data-1
  hostPath:
    path: "/usr/share/elasticsearch"
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node_name_1
root@shy-fog-vs:~/elasticsearch# cat es-values.yaml
resources:
  requests:
    cpu: "200m"
    memory: "512M"
  limits:
    cpu: "1000m"
    memory: "512M"
volumeClaimTemplate:
  storageClassName: local-storage
  accessModes:
    - "ReadWriteOnce"
  resources:
    requests:
      storage: 10Gi
minimumMasterNodes: 1
clusterHealthCheckParams: "wait_for_status=yellow&timeout=2s"
readinessProbe:
  failureThreshold: 3
  initialDelaySeconds: 200
  periodSeconds: 10
  successThreshold: 3
  timeoutSeconds: 5
The PersistentVolumes and PersistentVolumeClaims seem to be alright:
# kubectl get pv
NAME                  CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                       STORAGECLASS   REASON   AGE
airflow-dags-pv       2Gi        RWX            Retain           Bound    airflow/airflow-dags-pvc                    manual                  112d
airflow-logs-pv       2Gi        RWX            Retain           Bound    airflow/airflow-logs-pvc                    manual                  112d
airflow-pv            2Gi        RWX            Retain           Bound    airflow/airflow-pvc                         manual                  112d
elastic-data-pv       10Gi       RWO            Retain           Bound    elasticsearch/data-elasticsearch-data-0                             15m
elastic-data-pv-1     10Gi       RWO            Retain           Bound    elasticsearch/data-elasticsearch-data-1                             15m
elastic-master-pv     10Gi       RWO            Retain           Bound    elasticsearch/data-elasticsearch-master-0                           15m
elastic-master-pv-1   10Gi       RWO            Retain           Bound    elasticsearch/data-elasticsearch-master-1                           15m
elastic-master-pv-2   10Gi       RWO            Retain           Bound    elasticsearch/data-elasticsearch-master-2                           15m
# kubectl get pvc -n elasticsearch
NAME                          STATUS   VOLUME                CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-elasticsearch-data-0     Bound    elastic-data-pv       10Gi       RWO                           16m
data-elasticsearch-data-1     Bound    elastic-data-pv-1     10Gi       RWO                           16m
data-elasticsearch-master-0   Bound    elastic-master-pv     10Gi       RWO                           16m
data-elasticsearch-master-1   Bound    elastic-master-pv-1   10Gi       RWO                           16m
data-elasticsearch-master-2   Bound    elastic-master-pv-2   10Gi       RWO                           16m
Short answer: everything is fine
Longer answer (and why you got that error):
This is what I get, when I check pods:
# kubectl get pods -n elasticsearch
NAME                                READY   STATUS     RESTARTS   AGE
elasticsearch-coordinating-only-0   0/1     Init:0/1   0          18m
elasticsearch-data-0                0/1     Running    6          18m
elasticsearch-data-1                0/1     Init:0/1   0          18m
elasticsearch-master-0              0/1     Init:0/1   0          18m
elasticsearch-master-1              0/1     Running    6          18m
elasticsearch-master-2              0/1     Init:0/1   0          18m
This actually indicates the volumes mounted and the pods have started (note that the second master pod is Running, while the other two are still in the Init stage).
When I try kubectl describe pod for elasticsearch-data and elasticsearch-master pods, they all have the same message:
0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.
This is actually expected the first time you start the chart. Kubernetes has detected that you don't have the volumes yet and goes off to provision them for you. During that time the pods can't start, as those disks haven't been provisioned (and therefore the PersistentVolumeClaims have not been bound -- hence the error).
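If you want to watch the claims flip from Pending to Bound while this is happening, something like the following should work (a sketch, assuming the same elasticsearch namespace as above; it needs a live cluster):

# Follow the PVCs until they all report Bound (Ctrl-C to stop watching)
kubectl get pvc -n elasticsearch -w

# Recent namespace events, oldest first -- handy for spotting repeated
# "pod has unbound immediate PersistentVolumeClaims" scheduling retries
kubectl get events -n elasticsearch --sort-by=.lastTimestamp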
You should also be able to see from the events section of the kubectl describe output how recently that message appeared and how frequently it has appeared. It should read something like this:
Events:
  Type    Reason   Age                  From      Message
  ----    ------   ----                 ----      -------
  Normal  Pulling  51m (x112 over 10h)  kubelet   Pulling image "broken-image:latest"
So here, the "broken-image" image has been pulled 112 times over the past 10 hours, and that message last appeared 51 minutes ago.
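To pull just that events table out of a long describe, a quick filter like this works (the pod name is illustrative; substitute one of yours):

# Show only the Events section of the describe output for one master pod
kubectl describe pod elasticsearch-master-0 -n elasticsearch | grep -A 15 '^Events:'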
Once the disks have been provisioned and the PersistentVolumeClaims have been bound (the disks have been allocated to your claims), your pods can start. You can also confirm this from your other referenced snippet:
# kubectl get pv
NAME                  CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                       STORAGECLASS   REASON   AGE
airflow-dags-pv       2Gi        RWX            Retain           Bound    airflow/airflow-dags-pvc                    manual                  112d
airflow-logs-pv       2Gi        RWX            Retain           Bound    airflow/airflow-logs-pvc                    manual                  112d
airflow-pv            2Gi        RWX            Retain           Bound    airflow/airflow-pvc                         manual                  112d
elastic-data-pv       10Gi       RWO            Retain           Bound    elasticsearch/data-elasticsearch-data-0                             15m
elastic-data-pv-1     10Gi       RWO            Retain           Bound    elasticsearch/data-elasticsearch-data-1                             15m
elastic-master-pv     10Gi       RWO            Retain           Bound    elasticsearch/data-elasticsearch-master-0                           15m
elastic-master-pv-1   10Gi       RWO            Retain           Bound    elasticsearch/data-elasticsearch-master-1                           15m
elastic-master-pv-2   10Gi       RWO            Retain           Bound    elasticsearch/data-elasticsearch-master-2                           15m
You can see from this that each PV (PersistentVolume) has been bound to its claim, and that is why your pods have started.
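Once everything is Bound, you can also block until the pods actually go Ready, e.g. with kubectl wait (the app=elasticsearch label selector below is an assumption about the chart's labels -- verify yours first):

# Check which labels the chart actually applied to its pods
kubectl get pods -n elasticsearch --show-labels

# Wait up to 5 minutes for all matching pods to pass their readiness probes
# (app=elasticsearch is a guess at the chart's label; adjust to what you saw above)
kubectl wait pod -l app=elasticsearch -n elasticsearch --for=condition=Ready --timeout=300s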