Reputation: 85
I have installed Elasticsearch/Kibana/Logstash using the official Helm charts with a customized values.yaml on a K3s cluster. If I run kubectl get nodes, I get the list of cluster nodes correctly. However, when I run kubectl get pods -o wide, I see that all the pods are assigned to only one of the nodes and the remaining nodes are not utilized.
I have tried scaling up with kubectl scale --replicas=2 statefulset elasticsearch-master, but the scheduler attempts to place the new pod on the same node, which violates the pod anti-affinity rule, so it never starts.
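For context, this is how I confirmed the anti-affinity rejection; the pod name below assumes the default StatefulSet naming (elasticsearch-master-N), so adjust it to your release:

# Inspect the pending pod; the Events section at the bottom
# explains why each node was rejected by the scheduler.
$ kubectl describe pod elasticsearch-master-1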
Kibana Stack Monitoring always reports only 1 node, and storage is limited to the first node's ephemeral disk.
Do I need to label the unused cluster nodes explicitly before Elasticsearch can start using them?
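One diagnostic I ran to compare the busy node against the idle ones (just a sketch, no chart-specific assumptions):

# List every node with its full label set, to spot labels that
# exist only on the node receiving all the pods.
$ kubectl get nodes --show-labels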
Upvotes: 0
Views: 920
Reputation: 85
I found the error. The mistake was giving a label to the other nodes in the cluster; I should have left the nodes unlabeled entirely.
I shouldn't have run:
$ kubectl label node ip-X-X-X-X.ec2.internal node-role.kubernetes.io/worker=worker
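For anyone who made the same mistake: kubectl removes a label when you append a dash to the key, so the command above can be undone like this (same placeholder node name):

# The trailing '-' after the label key deletes that label from the node.
$ kubectl label node ip-X-X-X-X.ec2.internal node-role.kubernetes.io/worker-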
Upvotes: 1