I'm attempting to follow the instructions at https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/registry to add a private docker registry to Kubernetes, but the pod created by the rc isn't able to mount the persistent volume claim.
First I'm creating a volume on EBS like so:
aws ec2 create-volume --region us-west-1 --availability-zone us-west-1a --size 32 --volume-type gp2
(us-west-1a is also the availability zone that all of my kube minions are running in.)
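As an aside, the volume ID can be captured straight from that command using the CLI's standard --query and --output options, which is what gets plugged in as volumeID in the manifests below:

$ VOLUME_ID=$(aws ec2 create-volume --region us-west-1 --availability-zone us-west-1a \
    --size 32 --volume-type gp2 --query VolumeId --output text)
$ echo $VOLUME_ID    # e.g. vol-XXXXXXXX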
Then I create a persistent volume like so:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: kube-system-kube-registry-pv
  labels:
    kubernetes.io/cluster-service: "true"
spec:
  capacity:
    storage: 30Gi
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:
    volumeID: vol-XXXXXXXX
    fsType: ext4
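I'm loading that manifest and checking that the volume registers roughly like this (the file name is just whatever I happened to save the manifest as):

$ kubectl create -f registry-pv.yaml
$ kubectl get pv kube-system-kube-registry-pv    # STATUS should show Available, then Bound once claimed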
And a claim on the persistent volume like so:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: kube-registry-pvc
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi
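Before creating the controller, the claim's status can be checked to confirm it actually binds to the PersistentVolume; if it stays Pending the problem would be claim matching rather than mounting (again, the file name here is just mine):

$ kubectl create -f registry-pvc.yaml
$ kubectl get pvc kube-registry-pvc --namespace=kube-system    # STATUS should be Bound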
The replication controller is specified like so:
apiVersion: v1
kind: ReplicationController
metadata:
  name: kube-registry-v0
  namespace: kube-system
  labels:
    k8s-app: kube-registry
    version: v0
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 1
  selector:
    k8s-app: kube-registry
    version: v0
  template:
    metadata:
      labels:
        k8s-app: kube-registry
        version: v0
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - name: registry
        image: registry:2
        resources:
          limits:
            cpu: 100m
            memory: 100Mi
        env:
        - name: REGISTRY_HTTP_ADDR
          value: :5000
        - name: REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY
          value: /var/lib/registry
        volumeMounts:
        - name: image-store
          mountPath: /var/lib/registry
        ports:
        - containerPort: 5000
          name: registry
          protocol: TCP
      volumes:
      - name: image-store
        persistentVolumeClaim:
          claimName: kube-registry-pvc
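For completeness, I'm creating the controller and finding the resulting pod like so (the file name is just mine):

$ kubectl create -f registry-rc.yaml
$ kubectl get pods --namespace=kube-system -l k8s-app=kube-registry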
When I create the rc, it successfully starts a pod, but the pod is unable to mount the volume:
$ kubectl describe po kube-registry --namespace=kube-system
...
Events:
FirstSeen LastSeen Count From SubobjectPath Reason Message
───────── ──────── ───── ──── ───────────── ────── ───────
1m 1m 1 {scheduler } Scheduled Successfully assigned kube-registry-v0-3jobf to XXXXXXXXXXXXXXX.us-west-1.compute.internal
22s 22s 1 {kubelet XXXXXXXXXXXXXXX.us-west-1.compute.internal} FailedMount Unable to mount volumes for pod "kube-registry-v0-3jobf_kube-system": Timeout waiting for volume state
22s 22s 1 {kubelet XXXXXXXXXXXXXXX.us-west-1.compute.internal} FailedSync Error syncing pod, skipping: Timeout waiting for volume state
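Since the kubelet error above is only a timeout, one thing that can be checked from the AWS side is whether the EBS volume is stuck attached (or attaching) to a node; something along these lines, where the --query expression is just one way to slice the output:

$ aws ec2 describe-volumes --region us-west-1 --volume-ids vol-XXXXXXXX \
    --query "Volumes[0].[State,Attachments]"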
I'm able to successfully mount EBS volumes if I don't use persistent volumes and persistent volume claims. The following works without error, for example:
apiVersion: v1
kind: Pod
metadata:
  name: test-ebs
spec:
  containers:
  - image: gcr.io/google_containers/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-ebs
      name: test-volume
  volumes:
  - name: test-volume
    awsElasticBlockStore:
      volumeID: vol-XXXXXXXX
      fsType: ext4
My two questions are: what is preventing the pod from mounting the volume when it goes through a PersistentVolume and PersistentVolumeClaim (while mounting the EBS volume directly works), and where can I find more detailed logs to debug this?
Upvotes: 3
Views: 3964
I think I was likely running into https://github.com/kubernetes/kubernetes/issues/15073. (With a fresh EBS volume I first get a different failure, and then, after that pod has been killed, re-creating the rc gives me the failure I mentioned in my question.)
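If the underlying problem is the volume remaining attached to the node that ran the killed pod (which is what the re-create failure suggests), one manual cleanup that can be tried from the AWS side before re-creating the rc is to detach it explicitly; this is generic EBS housekeeping rather than a fix taken from the linked issue:

$ aws ec2 detach-volume --region us-west-1 --volume-id vol-XXXXXXXX
$ aws ec2 describe-volumes --region us-west-1 --volume-ids vol-XXXXXXXX --query "Volumes[0].State"
  # wait for the state to report "available" before recreating the rc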
Also, for anyone else wondering where to look for logs: /var/log/syslog and /var/log/containers/XXX on the kubelet node were where I ended up having to look.
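When digging through those, grepping for the volume ID (or the pod name) narrows things down quickly, e.g.:

$ grep -i vol-XXXXXXXX /var/log/syslog
$ ls /var/log/containers/ | grep kube-registry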
Upvotes: 2