Reputation: 9953
Kubernetes docs say using AWS ECR is supported, but it's not working for me. My nodes have an EC2 instance role associated with all the correct permissions, but

kubectl run debug1 -i --tty --restart=Never --image=672129611065.dkr.ecr.us-west-2.amazonaws.com/debug:v2

fails with:

failed to "StartContainer" for "debug1" with ErrImagePull: "Authentication is required."
Details
The instances all have a role associated and that role has this policy attached:
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": [
      "ecr:GetAuthorizationToken",
      "ecr:BatchCheckLayerAvailability",
      "ecr:GetDownloadUrlForLayer",
      "ecr:GetRepositoryPolicy",
      "ecr:DescribeRepositories",
      "ecr:ListImages",
      "ecr:BatchGetImage"
    ],
    "Resource": "*"
  }]
}
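To rule out the IAM side, a quick check from one of the nodes (a sketch, assuming the aws CLI is installed there) is to ask ECR for an authorization token directly, using the instance role:

```shell
# Run on a node. If the instance role works, this prints the token's
# expiry time; if it errors, the kubelet can't authenticate to ECR either.
aws ecr get-authorization-token --region us-west-2 \
  --query 'authorizationData[0].expiresAt' --output text
```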
And the kubelet logs look like:
Apr 18 19:02:12 ip-10-0-170-46 kubelet[948]: I0418 19:02:12.004611 948 provider.go:91] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Apr 18 19:02:12 ip-10-0-170-46 kubelet[948]: E0418 19:02:12.112142 948 pod_workers.go:138] Error syncing pod b21c2ba6-0593-11e6-9ec1-065c82331f7b, skipping: failed to "StartContainer" for "debug1" with ErrImagePull: "Authentication is required."
Apr 18 19:02:27 ip-10-0-170-46 kubelet[948]: E0418 19:02:27.006329 948 pod_workers.go:138] Error syncing pod b21c2ba6-0593-11e6-9ec1-065c82331f7b, skipping: failed to "StartContainer" for "debug1" with ImagePullBackOff: "Back-off pulling image \"672129611065.dkr.ecr.us-west-2.amazonaws.com/debug:v2\""
Upvotes: 0
Views: 3440
Reputation: 1215
From those logs, I suspect the kubelet isn't picking up the instance role credentials for ECR. Make sure you're running the kubelet with the
--cloud-provider=aws
arg.
Upvotes: 2
Reputation: 717
I think you also need an image pull secret configured for ECR images. You can refer to the links below for details.
http://kubernetes.io/docs/user-guide/images/#specifying-imagepullsecrets-on-a-pod
http://docs.aws.amazon.com/AmazonECR/latest/userguide/ECR_GetStarted.html
https://github.com/kubernetes/kubernetes/issues/499
1) Retrieve the docker login command that you can use to authenticate your Docker client to your registry:
aws ecr get-login --region us-east-1
2) Run the docker login command that was returned in the previous step.
3) The Docker login credentials are saved to /root/.dockercfg.
4) Encode the Docker config file:
echo $(cat /root/.dockercfg) | base64 -w 0
5) Copy and paste the result into a secret YAML based on the old format:
apiVersion: v1
kind: Secret
metadata:
  name: aws-key
type: kubernetes.io/dockercfg
data:
  .dockercfg: <YOUR_BASE64_JSON_HERE>
6) Use this aws-key secret to access the image:
apiVersion: v1
kind: Pod
metadata:
  name: foo
  namespace: awesomeapps
spec:
  containers:
    - name: foo
      image: janedoe/awesomeapp:v1
  imagePullSecrets:
    - name: aws-key
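As a worked example of steps 4-5, here is the encoding with a placeholder file (the real /root/.dockercfg holds your registry credentials; the empty `{}` below is just a stand-in):

```shell
# Encode a dummy .dockercfg; you would paste the output into the
# Secret's .dockercfg field. (GNU base64; -w 0 disables line wrapping.)
printf '{}' > /tmp/dockercfg
base64 -w 0 /tmp/dockercfg    # prints: e30=
```

Alternatively, you may be able to skip the hand-encoding entirely with `kubectl create secret docker-registry`, which builds an equivalent pull secret from the registry URL, the username `AWS`, and the token produced in step 1.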
Upvotes: 1
Reputation: 9953
Generally, if you change permissions on an InstanceProfile they take effect immediately. However, there must be some kind of setup phase for the kubelet that requires the permissions to already be set. I completely bounced my CloudFormation stack so that the instances booted with the new permissions active, and that did the trick. I can now use ECR images without issue.
Upvotes: 0