Reputation: 367
I have a Kubernetes cluster running on multiple local (bare-metal/physical) machines. I want to deploy Kafka on the cluster, but I can't figure out how to use Strimzi with my configuration.
I tried to follow the tutorial on the quickstart page: https://strimzi.io/docs/quickstart/master/
My ZooKeeper pods got stuck in Pending at step 2.4, "Creating a cluster":
Events:
  Type     Reason            Age        From               Message
  ----     ------            ----       ----               -------
  Warning  FailedScheduling  <unknown>  default-scheduler  pod has unbound immediate PersistentVolumeClaims
  Warning  FailedScheduling  <unknown>  default-scheduler  pod has unbound immediate PersistentVolumeClaims
I usually use hostPath for my volumes; I don't know what's going on here...
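For reference, this is the kind of hostPath PersistentVolume I normally create (the name, size, and path here are just examples):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-hostpath-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/data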
EDIT: I tried to create a StorageClass using Arghya Sadhu's commands, but the problem is still there.
The description of my PVC:
kubectl describe -n my-kafka-project persistentvolumeclaim/data-my-cluster-zookeeper-0
Name:          data-my-cluster-zookeeper-0
Namespace:     my-kafka-project
StorageClass:  local-storage
Status:        Pending
Volume:
Labels:        app.kubernetes.io/instance=my-cluster
               app.kubernetes.io/managed-by=strimzi-cluster-operator
               app.kubernetes.io/name=strimzi
               strimzi.io/cluster=my-cluster
               strimzi.io/kind=Kafka
               strimzi.io/name=my-cluster-zookeeper
Annotations:   strimzi.io/delete-claim: false
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Mounted By:    my-cluster-zookeeper-0
Events:
  Type    Reason                Age                 From                         Message
  ----    ------                ----                ----                         -------
  Normal  WaitForFirstConsumer  72s (x66 over 16m)  persistentvolume-controller  waiting for first consumer to be created before binding
And my pod:
kubectl describe -n my-kafka-project pod/my-cluster-zookeeper-0
Name:           my-cluster-zookeeper-0
Namespace:      my-kafka-project
Priority:       0
Node:           <none>
Labels:         app.kubernetes.io/instance=my-cluster
                app.kubernetes.io/managed-by=strimzi-cluster-operator
                app.kubernetes.io/name=strimzi
                controller-revision-hash=my-cluster-zookeeper-7f698cf9b5
                statefulset.kubernetes.io/pod-name=my-cluster-zookeeper-0
                strimzi.io/cluster=my-cluster
                strimzi.io/kind=Kafka
                strimzi.io/name=my-cluster-zookeeper
Annotations:    strimzi.io/cluster-ca-cert-generation: 0
                strimzi.io/generation: 0
Status:         Pending
IP:
IPs:            <none>
Controlled By:  StatefulSet/my-cluster-zookeeper
Containers:
  zookeeper:
    Image:      strimzi/kafka:0.15.0-kafka-2.3.1
    Port:       <none>
    Host Port:  <none>
    Command:
      /opt/kafka/zookeeper_run.sh
    Liveness:   exec [/opt/kafka/zookeeper_healthcheck.sh] delay=15s timeout=5s period=10s #success=1 #failure=3
    Readiness:  exec [/opt/kafka/zookeeper_healthcheck.sh] delay=15s timeout=5s period=10s #success=1 #failure=3
    Environment:
      ZOOKEEPER_NODE_COUNT:          1
      ZOOKEEPER_METRICS_ENABLED:     false
      STRIMZI_KAFKA_GC_LOG_ENABLED:  false
      KAFKA_HEAP_OPTS:               -Xms128M
      ZOOKEEPER_CONFIGURATION:       autopurge.purgeInterval=1
                                     tickTime=2000
                                     initLimit=5
                                     syncLimit=2
    Mounts:
      /opt/kafka/custom-config/ from zookeeper-metrics-and-logging (rw)
      /var/lib/zookeeper from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from my-cluster-zookeeper-token-hgk2b (ro)
  tls-sidecar:
    Image:       strimzi/kafka:0.15.0-kafka-2.3.1
    Ports:       2888/TCP, 3888/TCP, 2181/TCP
    Host Ports:  0/TCP, 0/TCP, 0/TCP
    Command:
      /opt/stunnel/zookeeper_stunnel_run.sh
    Liveness:    exec [/opt/stunnel/stunnel_healthcheck.sh 2181] delay=15s timeout=5s period=10s #success=1 #failure=3
    Readiness:   exec [/opt/stunnel/stunnel_healthcheck.sh 2181] delay=15s timeout=5s period=10s #success=1 #failure=3
    Environment:
      ZOOKEEPER_NODE_COUNT:   1
      TLS_SIDECAR_LOG_LEVEL:  notice
    Mounts:
      /etc/tls-sidecar/cluster-ca-certs/ from cluster-ca-certs (rw)
      /etc/tls-sidecar/zookeeper-nodes/ from zookeeper-nodes (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from my-cluster-zookeeper-token-hgk2b (ro)
Conditions:
  Type          Status
  PodScheduled  False
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  data-my-cluster-zookeeper-0
    ReadOnly:   false
  zookeeper-metrics-and-logging:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      my-cluster-zookeeper-config
    Optional:  false
  zookeeper-nodes:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  my-cluster-zookeeper-nodes
    Optional:    false
  cluster-ca-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  my-cluster-cluster-ca-cert
    Optional:    false
  my-cluster-zookeeper-token-hgk2b:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  my-cluster-zookeeper-token-hgk2b
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age        From               Message
  ----     ------            ----       ----               -------
  Warning  FailedScheduling  <unknown>  default-scheduler  0/1 nodes are available: 1 node(s) didn't find available persistent volumes to bind.
  Warning  FailedScheduling  <unknown>  default-scheduler  0/1 nodes are available: 1 node(s) didn't find available persistent volumes to bind.
Upvotes: 0
Views: 2665
Reputation: 11
I had the same problem running on bare metal, and I tried the storage class method mentioned by @arghya-sadhu, but it still didn't work. I found out that the storage class Strimzi needs here is the specific local-storage type, as mentioned here. Also, each replica needs its own storage class, and each persistent volume must point at a different directory.
For example, the snippets below create 3 replicas each for ZooKeeper and Kafka.
You need to replace "node2" with the name of the node you are assigning the data to. You can list the nodes in your cluster with:
kubectl get nodes
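which prints something like this (the node names and versions below are just an example):
NAME    STATUS   ROLES    AGE   VERSION
node1   Ready    master   20d   v1.19.2
node2   Ready    <none>   20d   v1.19.2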
Then you need to create a directory for each storage class, because they can't share the same directory or you'll get an error:
ssh root@node2
mkdir /mnt/pv0 /mnt/pv1 /mnt/pv2
After that, you can apply the following manifest to create the storage classes and persistent volumes.
"storage-class-and-pv.yaml"
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: class-0
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: class-1
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: class-2
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-0
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: class-0
  local:
    path: /mnt/pv0
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node2
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-1
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: class-0
  local:
    path: /mnt/pv0
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node2
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-2
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: class-1
  local:
    path: /mnt/pv1
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node2
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-3
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: class-1
  local:
    path: /mnt/pv1
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node2
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-4
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: class-2
  local:
    path: /mnt/pv2
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node2
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-5
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: class-2
  local:
    path: /mnt/pv2
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node2
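Once applied, check that all six volumes show up as Available before deploying the cluster (assuming the manifest is saved under the file name above):
kubectl apply -f storage-class-and-pv.yaml
kubectl get pv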
Once those are created, you can deploy your Kafka and ZooKeeper cluster. Just make sure to override the storage classes per broker, as shown below.
"sample-kafka-cluster.yaml"
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-sample-cluster
spec:
  kafka:
    version: 3.2.0
    replicas: 3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
      - name: tls
        port: 9093
        type: internal
        tls: true
    config:
      offsets.topic.replication.factor: 1
      transaction.state.log.replication.factor: 1
      transaction.state.log.min.isr: 1
      default.replication.factor: 1
      min.insync.replicas: 1
      inter.broker.protocol.version: "3.2"
    storage:
      type: jbod
      volumes:
        - id: 0
          type: persistent-claim
          size: 1Gi
          deleteClaim: true
          overrides:
            - broker: 0
              class: class-0
            - broker: 1
              class: class-1
            - broker: 2
              class: class-2
  zookeeper:
    replicas: 3
    storage:
      type: persistent-claim
      size: 1Gi
      deleteClaim: true
      overrides:
        - broker: 0
          class: class-0
        - broker: 1
          class: class-1
        - broker: 2
          class: class-2
  entityOperator:
    topicOperator: {}
    userOperator: {}
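Then apply it and watch the pods come up (assuming the Strimzi cluster operator is already running and watching your namespace, e.g. my-kafka-project from the quickstart):
kubectl apply -f sample-kafka-cluster.yaml -n my-kafka-project
kubectl get pods -n my-kafka-project -w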
Upvotes: 1
Reputation: 2400
In my case I was creating Kafka in a different namespace, my-cluster-kafka, but the Strimzi operator was in the kafka namespace. So I just created it in the same namespace. For test purposes I use ephemeral storage.
Here is the kafka.yaml:
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 1
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
      - name: tls
        port: 9093
        type: internal
        tls: true
        authentication:
          type: tls
      - name: external
        port: 9094
        type: nodeport
        tls: false
    storage:
      type: ephemeral
    config:
      offsets.topic.replication.factor: 1
      transaction.state.log.replication.factor: 1
      transaction.state.log.min.isr: 1
  zookeeper:
    replicas: 1
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}
    userOperator: {}
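Then I applied it in the same namespace the operator is watching (kafka in my case):
kubectl apply -f kafka.yaml -n kafka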
Upvotes: 0
Reputation: 10065
Yeah, it sounds to me like something is missing on the Kubernetes side at the infrastructure level. You should provide PersistentVolumes that can be statically bound to the PVCs or, as Arghya already mentioned, provide StorageClasses for dynamic provisioning.
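For example, a minimal statically provisioned PersistentVolume for the ZooKeeper claim above could look like the sketch below (the name and path are placeholders; the storageClassName, capacity, and access mode must satisfy what the PVC requests):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-pv-0            # placeholder name
spec:
  capacity:
    storage: 100Gi                # must be at least the size the PVC requests
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage
  hostPath:
    path: /mnt/zookeeper-data     # placeholder directory on the node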
Upvotes: 0
Reputation: 44549
You need to have a PersistentVolume fulfilling the constraints of the PersistentVolumeClaim.
Use local storage, with a local storage class:
$ kubectl apply -f - <<EOF
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
EOF
You need to configure a default StorageClass in your cluster so that the PersistentVolumeClaim can take its storage from there.
$ kubectl patch storageclass local-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
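You can verify that the class is now marked as default:
$ kubectl get storageclass
The local-storage entry should show (default) next to its name.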
Upvotes: 1