Reputation:
We are trying to configure a Kubernetes replication controller on an AWS instance with an AWS Elastic Block Store (EBS) volume. Here is the key part of our controller YAML file:
volumeMounts:
  - mountPath: "/opt/phabricator/repo"
    name: ebsvol
volumes:
  - name: ebsvol
    awsElasticBlockStore:
      volumeID: aws://us-west-2a/vol-*****
      fsType: ext4
Our RC can start the pod and works fine without mounting an AWS EBS volume, but with the EBS volume mount it gives us:
Fri, 11 Sep 2015 11:29:14 +0000 Fri, 11 Sep 2015 11:29:34 +0000 3 {kubelet 172.31.24.103} failedMount Unable to mount volumes for pod "phabricator-controller-zvg7z_default": error listing AWS instances: NoCredentialProviders: no valid providers in chain
Fri, 11 Sep 2015 11:29:14 +0000 Fri, 11 Sep 2015 11:29:34 +0000 3 {kubelet 172.31.24.103} failedSync Error syncing pod, skipping: error listing AWS instances: NoCredentialProviders: no valid providers in chain
We have a credentials file with the appropriate credentials in the .aws directory, but it's not working. Are we missing something? Is it a configuration issue?
Kubectl version: 1.0.4 and 1.0.5 (tried with both)
Upvotes: 5
Views: 1301
Reputation: 306
We opened an issue to discuss this: https://github.com/kubernetes/kubernetes/issues/13858
The recommended way to go here is to use IAM instance profiles. kube-up does configure this for you, and if you're not using kube-up I recommend looking at it to emulate what it does!
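As a rough sketch, the IAM role behind the node's instance profile would need EC2 permissions along these lines (this exact action list is an assumption for illustration; the policy that kube-up generates is the authoritative reference):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeInstances",
        "ec2:DescribeVolumes",
        "ec2:AttachVolume",
        "ec2:DetachVolume"
      ],
      "Resource": "*"
    }
  ]
}
```

With that role attached to the node as an instance profile, the kubelet picks up credentials automatically from the EC2 instance metadata, so no credentials file is needed on the node at all.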
Although we did recently merge in support for reading a .aws credentials file, I don't believe it has been back-ported into any release, and it isn't the approach I (personally) recommend.
It sounds like you're not using kube-up; you may find it easier if you can use that (and I'd love to know if there's some reason you can't or don't want to use kube-up, as I personally am working on an alternative that I hope will meet everyone's needs!)
I'd also love to know if IAM instance profiles aren't suitable for you for some reason.
Upvotes: 4