Reputation: 321
I looked at this article: GitHub Deploy Keys
From what I have read, it treats the client as a stationary machine that always has an SSH setup available to clone the repo. In ECS, however, the client machine changes constantly. Do I have to set up SSH each time, on each container?
My question comes from an AWS point of view: is there some kind of "role" that can be assigned so that whenever we deploy a service, it has read access to a private GitHub repo?
Upvotes: 4
Views: 1464
Reputation: 574
You can store the private key content in AWS Secrets Manager, then fetch it with the AWS CLI like:
aws secretsmanager get-secret-value --secret-id arn:aws:secretsmanager:us-west-2:foocompany:secret:secret-name-X1cUDA --query SecretString --output text > ~/.ssh/id_rsa
chmod 400 ~/.ssh/id_rsa
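The extraction step is just pulling the SecretString field out of the CLI's JSON response; a minimal Python 3 sketch of that step (the response here is a mocked, abridged shape — no AWS call is made):

```python
import json

def extract_secret_string(response_json: str) -> str:
    """Return the SecretString field from a get-secret-value JSON response."""
    return json.loads(response_json)["SecretString"]

# Mocked, abridged response; a real get-secret-value response carries more fields.
sample = json.dumps({
    "ARN": "arn:aws:secretsmanager:us-west-2:foocompany:secret:secret-name-X1cUDA",
    "SecretString": "-----BEGIN OPENSSH PRIVATE KEY-----",
})

print(extract_secret_string(sample))
```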
You may also need to set up a resource policy on the secret, similar to:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "secretsmanager:GetSecretValue",
      "Principal": {"AWS": "arn:aws:iam::foocompany:user/user"},
      "Resource": "arn:aws:secretsmanager:us-west-2:foocompany:secret:secret-name-X1cUDA"
    }
  ]
}
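On the ECS side, the "role" the question asks about is the task role: you can attach an identity policy like the following to the task's IAM role, so every container launched from that task definition is allowed to read the secret (the ARN is a placeholder matching the example above):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "secretsmanager:GetSecretValue",
      "Resource": "arn:aws:secretsmanager:us-west-2:foocompany:secret:secret-name-X1cUDA"
    }
  ]
}
```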
Upvotes: 2
Reputation: 1323055
I assume that the code running inside the container needs to make calls to the GitHub repo.
That means the container must start with a bind mount of a .ssh/id_rsa / .ssh/id_rsa.pub key pair, allowing the container to authenticate itself to GitHub as a collaborator.
SSH is not the only way to access a private repo: mounting a PAT (Personal Access Token) would allow the container to use an HTTPS URL instead.
But in both cases, the container needs to mount the files required for proper authentication in order to access the remote private repo.
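For the HTTPS/PAT route, the remote URL simply embeds the token in the password slot; a small sketch of building such a URL (token, owner, and repo are made-up placeholders — GitHub largely ignores the username when a token is supplied):

```python
def https_remote_with_pat(token: str, owner: str, repo: str) -> str:
    # GitHub accepts a token as the password in an HTTPS clone URL;
    # the username portion is essentially arbitrary when a token is used.
    return f"https://token:{token}@github.com/{owner}/{repo}.git"

# Placeholder values, for illustration only.
print(https_remote_with_pat("ghp_exampleToken123", "foocompany", "private-repo"))
```

The resulting URL can be passed straight to git clone; in practice you would keep the token out of shell history and stored config (e.g. via a git credential helper) rather than hard-coding it.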
Upvotes: 1