alx.lzt

Reputation: 486

Pulling a Docker image from a private GitHub package using AWS EKS with AWS Fargate

I wish to deploy a dockerized service with Kubernetes on AWS. To do so, I'm using the recently released AWS EKS with AWS Fargate feature. The service's Docker image is stored in a private package on GitHub.

To deploy my service, I'm using a Kubernetes manifest file containing a secret, a deployment, and a service.

When deploying locally with kubectl on Minikube, the deployment pods successfully pull the image from the private GitHub package. I successfully reproduced the process for accessing a private Docker Hub registry.

I then configured kubectl to connect to my EKS cluster. When applying the manifest file, I'm getting an ImagePullBackOff status for the deployment pods when pulling from GitHub Packages, while it works fine when pulling from Docker Hub. The differences in the manifest file are as follows:

Generating the secret for GitHub Packages:

kubectl create secret docker-registry mySecret --dry-run=true --docker-server=https://docker.pkg.github.com --docker-username=myGithubUsername --docker-password=myGithubAccessToken -o yaml

Generating the secret for Docker Hub:

kubectl create secret docker-registry mySecret --dry-run=true --docker-server=https://index.docker.io/v1/ --docker-username=myDockerhubUsername --docker-password=myDockerhubPassword -o yaml
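
For reference, both commands output a Secret of type kubernetes.io/dockerconfigjson, which I then include in the manifest. It looks roughly like this (the .dockerconfigjson value is the base64-encoded registry credentials, shown here as a placeholder):

apiVersion: v1
kind: Secret
metadata:
  name: mySecret
data:
  .dockerconfigjson: <base64-encoded auth config>
type: kubernetes.io/dockerconfigjson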

The deployment spec references it as follows:

spec:
  containers:
    # when pulling from github packages 
    - image: docker.pkg.github.com/myGithubUsername/myRepo/myPackage:tag
    # when pulling from dockerhub 
    - image: myDockerhubUsername/repository:tag
    ...
  imagePullSecrets:
    - name: mySecret
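
For context, that snippet sits inside the deployment's pod template; the surrounding deployment looks roughly like this (names here are abbreviated placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myService
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myService
  template:
    metadata:
      labels:
        app: myService
    spec:
      containers:
        - name: myService
          image: docker.pkg.github.com/myGithubUsername/myRepo/myPackage:tag
      imagePullSecrets:
        - name: mySecret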

To make this work specifically with GitHub Packages, I also tried AWS Secrets Manager.

I created a secret "mySecret" as follows:

{
  "username" : "myGithubUsername",
  "password" : "myGithubAccessToken"
}
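
A secret like this can be created with the AWS CLI, for example (name and region matching the ones above):

aws secretsmanager create-secret \
  --name mySecret \
  --region eu-west-1 \
  --secret-string '{"username":"myGithubUsername","password":"myGithubAccessToken"}'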

I then created a policy to access this secret:

{
  "Version": "2012-10-17",
  "Statement": [
    {
        "Effect": "Allow",
        "Action": [
            "secretsmanager:GetSecretValue"
        ],
        "Resource": [
            "arn:aws:secretsmanager:eu-west-1:myAWSAccountId:secret:mySecret"
        ]
    }
  ]
}

I then attached the policy to both the Cluster IAM Role of my EKS cluster and the Pod Execution Role referenced in its Fargate profile "fp-default". I'm only working in the default Kubernetes namespace. My secret and cluster are both in the eu-west-1 region.
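
For reference, attaching the policy roughly amounts to the following (the role and policy names here are placeholders for my actual ones):

aws iam create-policy \
  --policy-name mySecretAccessPolicy \
  --policy-document file://secret-access-policy.json

aws iam attach-role-policy \
  --role-name myEksClusterRole \
  --policy-arn arn:aws:iam::myAWSAccountId:policy/mySecretAccessPolicy

aws iam attach-role-policy \
  --role-name myFargatePodExecutionRole \
  --policy-arn arn:aws:iam::myAWSAccountId:policy/mySecretAccessPolicy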

Still, I'm getting the ImagePullBackOff status when deploying.
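
The failure shows up in the events of the failing pods, which I check with (the pod name is a placeholder):

kubectl get pods
kubectl describe pod <failing-pod-name>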

I'm having a hard time finding anything tackling this issue using AWS Fargate with AWS EKS, and would love some insight on this :)

Edit: The question was edited to state more clearly that the issue is mainly related to using GitHub Packages as the registry provider.

Upvotes: 6

Views: 5767

Answers (3)

alx.lzt

Reputation: 486

After digging a little and exchanging with some of the AWS technical staff:

The problem seems to be that EKS Fargate uses Containerd under the hood.

Containerd and Docker pull images differently. Containerd is currently tracking several issues with registry providers that only correctly support the Docker registry API and not the OCI HTTP API V2, GitHub being one of them.

As mentioned by the director of product at GitHub, the issue will be addressed in a few weeks to a couple of months.
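
As a quick way to confirm the runtime, the (virtual) Fargate nodes report it in the CONTAINER-RUNTIME column of:

kubectl get nodes -o wide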

Upvotes: 3

Barry Tam

Reputation: 37

I think you need to create your secret in the same namespace as your deployment. I was able to get this to work by creating a secret

kubectl create secret docker-registry mySecret --dry-run=true --docker-server=https://docker.pkg.github.com --docker-username=myGithubUsername --docker-password=myGithubAccessToken --namespace=my-api -o yaml

In my deployment.yaml, I referenced it like this:

    spec:
      containers:
        - name: my-api
          image: docker.pkg.github.com/doc-api:0.0.0
          ports:
          - containerPort: 5000
          resources: {}
      imagePullSecrets:
        - name: mySecret
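
Since secrets are namespaced, the deployment has to be applied in that same namespace (my-api here) for the imagePullSecrets reference to resolve. A quick sanity check looks roughly like this:

kubectl get secret mySecret --namespace=my-api
kubectl apply -f deployment.yaml --namespace=my-api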

Upvotes: 3

Abdennour TOUMI

Reputation: 93243

If there is no hope with the AWS docs, you can do the following:

  • Run a DaemonSet, which by design creates one pod per node.
  • These pods mount a volume of type hostPath (/var/run/docker.sock).
  • These pods have the Docker client and run the following command:

    docker login docker.pkg.github.com -u username -p passWord

  • Now you are logged in inside the container, but that is not reflected on the node. You then need to mount another hostPath volume (~/.docker/config.json). The challenge is knowing the home directory of the Fargate nodes. In the example below I use /root (see the volumes section), but it could be something else, e.g. /home/ec2-user... something to check.

  • This is what it looks like:

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: docker-init
    spec:
      selector:
        matchLabels:
          app: worker
      template:
        metadata:
          name: worker
          labels:
            app: worker
        spec:
          initContainers:
          # logs in to the registry through the node's Docker daemon;
          # the resulting config.json lands on the host via the hostPath mount
          - name: login-private-registries
            image: docker
            command: ['sh', '-c', 'docker login docker.pkg.github.com -u username -p passWord']
            volumeMounts:
              - name: dockersock
                mountPath: "/var/run/docker.sock"
              - name: dockerconfig
                mountPath: "/root/.docker"
          # a pod needs at least one regular container; this one just idles
          containers:
          - name: pause
            image: busybox
            command: ['sh', '-c', 'sleep 3600']
          volumes:
          - name: dockersock
            hostPath:
              path: /var/run/docker.sock
          - name: dockerconfig
            hostPath:
              path: /root/.docker
    
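
Then apply it and check that a pod is scheduled on each node (the file name is just an example):

    kubectl apply -f docker-init-daemonset.yaml
    kubectl get pods -l app=worker -o wide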

Upvotes: 1
