zilcuanu

Reputation: 3715

K8s mongodb container cannot use the EBS volume mount

I have the below Pod definition:

apiVersion: v1
kind: Pod
metadata:
  name: mongodb
spec:
  volumes:
  - name: mongodb-data
    awsElasticBlockStore:
      volumeID: vol-0c0d9800c22f8c563
      fsType: ext4
  containers:
  - image: mongo
    name: mongodb
    volumeMounts:
    - name: mongodb-data
      mountPath: /data/db
    ports:
    - containerPort: 27017
      protocol: TCP

I have created the volume in AWS and tried to mount it into the container, but the container is not starting:

kubectl get po
NAME      READY   STATUS              RESTARTS   AGE
mongodb   0/1     ContainerCreating   0          6m57s

When I created the volume in the same Availability Zone as a node and the pod was scheduled on that node, the volume was mounted successfully. If the pod is not scheduled on that node, the mount fails. How can I make sure that the volume can be accessed by all the nodes?

Upvotes: 1

Views: 303

Answers (1)

Wytrzymały Wiktor

Reputation: 13878

According to the documentation:

There are some restrictions when using an awsElasticBlockStore volume:

  • the nodes on which Pods are running must be AWS EC2 instances
  • those instances need to be in the same region and availability-zone as the EBS volume
  • EBS only supports a single EC2 instance mounting a volume

Make sure all of the above are met. If your nodes are in different zones, then you might need to create additional EBS volumes, one per zone, for example:

aws ec2 create-volume --availability-zone=eu-west-1a --size=10 --volume-type=gp2
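Note that creating a volume in a given zone is not enough on its own; the Pod must also be scheduled onto a node in that same zone. One way to co-locate them is a `nodeSelector` on the well-known zone label (a minimal sketch based on your Pod spec; the zone `eu-west-1a` is a placeholder, substitute the zone your volume lives in):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mongodb
spec:
  nodeSelector:
    # Pin the Pod to the zone where the EBS volume was created.
    topology.kubernetes.io/zone: eu-west-1a
  volumes:
  - name: mongodb-data
    awsElasticBlockStore:
      volumeID: vol-0c0d9800c22f8c563   # must exist in eu-west-1a
      fsType: ext4
  containers:
  - image: mongo
    name: mongodb
    volumeMounts:
    - name: mongodb-data
      mountPath: /data/db
    ports:
    - containerPort: 27017
      protocol: TCP
```

On older clusters the zone label may be `failure-domain.beta.kubernetes.io/zone` instead; check with `kubectl get nodes --show-labels`.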

Please let me know if that helped.
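Alternatively, you could switch to dynamic provisioning so Kubernetes creates the EBS volume in whichever zone the Pod lands in. A minimal sketch, assuming the AWS EBS CSI driver is installed in the cluster (the `ebs.csi.aws.com` provisioner name comes from that driver):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-gp2
provisioner: ebs.csi.aws.com
parameters:
  type: gp2
# Delay volume binding until a Pod is scheduled, so the volume
# is provisioned in the same zone as the chosen node.
volumeBindingMode: WaitForFirstConsumer
```

The Pod would then mount a PersistentVolumeClaim referencing this StorageClass instead of a raw `awsElasticBlockStore` volume.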

Upvotes: 1
