Reputation: 477
I have two namespaces, 'runsdata' and 'monitoring'. The heketi pod and the glusterfs DaemonSet pods are both in the 'runsdata' namespace. Now I want to deploy the Prometheus monitoring stack in the 'monitoring' namespace. Since I need storage for the Prometheus data, I created a PVC (in the 'monitoring' namespace) and a PV, and in the PVC yaml I declare the storage class so that the corresponding volume is created to back Prometheus. But after the PVC bound to the PV and I applied prometheus-server.yaml, I get this error:
Warning FailedMount 18m (x3 over 43m) kubelet, 172.16.5.151 Unable to attach or mount volumes: unmounted volumes=[prometheus-data-volume], unattached volumes=[prometheus-rules-volume prometheus-token-vcrr2 prometheus-data-volume prometheus-conf-volume]: timed out waiting for the condition
Warning FailedMount 13m (x5 over 50m) kubelet, 172.16.5.151 Unable to attach or mount volumes: unmounted volumes=[prometheus-data-volume], unattached volumes=[prometheus-token-vcrr2 prometheus-data-volume prometheus-conf-volume prometheus-rules-volume]: timed out waiting for the condition
Warning FailedMount 3m58s (x35 over 59m) kubelet, 172.16.5.151 MountVolume.NewMounter initialization failed for volume "data-prometheus-pv" : endpoints "heketi-storage-endpoints" not found
It's not difficult to see from the log above that the volume cannot be mounted because the 'heketi-storage-endpoints' Endpoints object cannot be found: it exists only in the 'runsdata' namespace. How can I solve this problem?
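A quick way to confirm this (assuming the same cluster and kubeconfig as the transcripts below): the Endpoints object the PV refers to exists only in 'runsdata', not in 'monitoring' where the Prometheus pod runs, so the kubelet's glusterfs mounter cannot resolve it.

# endpoints are namespaced; this one exists only in runsdata
kubectl get endpoints heketi-storage-endpoints -n runsdata
# the same lookup in the pod's namespace fails, matching the FailedMount event
kubectl get endpoints heketi-storage-endpoints -n monitoring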
Other info:
1. pv and pvc:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-prometheus-pv
  labels:
    pv: data-prometheus-pv
    release: stable
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: runsdata-static-class
  glusterfs:
    endpoints: "heketi-storage-endpoints"
    path: "runsdata-glusterfs-static-class"
    readOnly: true
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-prometheus-claim
  namespace: monitoring
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
  storageClassName: runsdata-static-class
  selector:
    matchLabels:
      pv: data-prometheus-pv
      release: stable
[root@localhost online-prometheus]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                              STORAGECLASS            REASON   AGE
data-config-pv                             1Gi        RWX            Retain           Bound    runsdata/data-config-claim         runsdata-static-class            5d22h
data-mongo-pv                              1Gi        RWX            Retain           Bound    runsdata/data-mongo-claim          runsdata-static-class            4d4h
data-prometheus-pv                         2Gi        RWX            Recycle          Bound    monitoring/data-prometheus-claim   runsdata-static-class            151m
data-static-pv                             1Gi        RWX            Retain           Bound    runsdata/data-static-claim         runsdata-static-class            7d15h
pvc-02f5ce74-db7c-40ba-b0e1-ac3bf3ba1b37   3Gi        RWX            Delete           Bound    runsdata/data-test-claim           runsdata-static-class            3d5h
pvc-085ec0f1-6429-4612-9f71-309b94a94463   1Gi        RWX            Delete           Bound    runsdata/data-file-claim           runsdata-static-class            3d17h
[root@localhost online-prometheus]# kubectl get pvc -n monitoring
NAME                    STATUS   VOLUME               CAPACITY   ACCESS MODES   STORAGECLASS            AGE
data-prometheus-claim   Bound    data-prometheus-pv   2Gi        RWX            runsdata-static-class   151m
[root@localhost online-prometheus]#
[root@localhost online-prometheus]# kubectl get pods -n runsdata | egrep "heketi|gluster"
glusterfs-5btbl          1/1   Running   1   11d
glusterfs-7gmbh          1/1   Running   3   11d
glusterfs-rmx7k          1/1   Running   7   11d
heketi-78ccdb6fd-97tkv   1/1   Running   2   10d
[root@localhost online-prometheus]#
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: runsdata-static-class
provisioner: kubernetes.io/glusterfs
allowVolumeExpansion: true
reclaimPolicy: Delete
parameters:
  resturl: "http://10.10.11.181:8080"
  volumetype: "replicate:3"
  restauthenabled: "true"
  restuser: "admin"
  restuserkey: "runsdata-gf-admin"
  #secretNamespace: "runsdata"
  #secretName: "heketi-secret"
Upvotes: 0
Views: 1133
Reputation: 477
The solution is to create an Endpoints object and a Service in the current namespace ('monitoring'). Then we can reference them in the PV yaml, like below:
[root@localhost gluster]# cat glusterfs-endpoints.yaml
---
kind: Endpoints
apiVersion: v1
metadata:
  name: glusterfs-cluster
  namespace: monitoring
subsets:
  - addresses:
      - ip: 172.16.5.150
      - ip: 172.16.5.151
      - ip: 172.16.5.152
    ports:
      - port: 1
        protocol: TCP
[root@localhost gluster]# cat glusterfs-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: glusterfs-cluster
  namespace: monitoring
spec:
  ports:
    - port: 1
[root@localhost gluster]#
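For completeness, a sketch of what the PV looks like after this change (capacity, path and the other fields stay as in the original PV above; only the endpoints name differs, now pointing at the namespace-local 'glusterfs-cluster' object):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-prometheus-pv
  labels:
    pv: data-prometheus-pv
    release: stable
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: runsdata-static-class
  glusterfs:
    # the endpoints name is resolved in the pod's namespace,
    # so it must match the Endpoints/Service created above in 'monitoring'
    endpoints: "glusterfs-cluster"
    path: "runsdata-glusterfs-static-class"
    readOnly: true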
Upvotes: 1