Reputation: 7309
Is it possible to run Docker-in-Docker on AWS Batch?
I have tried the approach of mounting the docker socket via the container properties:
container_properties = <<CONTAINER_PROPERTIES
{
    "command": ["docker", "run", "my container"],
    "image": "docker/compose",
    "jobRoleArn": "my-role",
    "memory": 2000,
    "vcpus": 1,
    "privileged": true,
    "mountPoints": [
        {
            "sourceVolume": "/var/run/docker.sock",
            "containerPath": "/var/run/docker.sock",
            "readOnly": false
        }
    ]
}
CONTAINER_PROPERTIES
However, running this Batch job in a SPOT compute environment with the default configuration yields a job that immediately transitions to FAILED with the status reason:
Unknown volume '/var/run/docker.sock'.
Upvotes: 3
Views: 1690
Reputation: 7309
The solution is that both volumes and mountPoints must be defined. For example, the following container properties work:
{
    "command": ["docker", "run", "<my container>"],
    "image": "docker/compose",
    "jobRoleArn": "my-role",
    "memory": 2000,
    "vcpus": 1,
    "privileged": false,
    "volumes": [
        {
            "host": {
                "sourcePath": "/var/run/docker.sock"
            },
            "name": "dockersock"
        }
    ],
    "mountPoints": [
        {
            "sourceVolume": "dockersock",
            "containerPath": "/var/run/docker.sock",
            "readOnly": false
        }
    ]
}
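Since the question configures the job definition through Terraform, the same fix can be expressed in the container_properties heredoc. A minimal sketch, assuming a Terraform-managed job definition (the resource name, job name, and role ARN are placeholders):

```hcl
resource "aws_batch_job_definition" "dind" {
  name = "dind" # hypothetical name
  type = "container"

  container_properties = <<CONTAINER_PROPERTIES
{
    "command": ["docker", "run", "<my container>"],
    "image": "docker/compose",
    "jobRoleArn": "my-role",
    "memory": 2000,
    "vcpus": 1,
    "privileged": false,
    "volumes": [
        {
            "host": { "sourcePath": "/var/run/docker.sock" },
            "name": "dockersock"
        }
    ],
    "mountPoints": [
        {
            "sourceVolume": "dockersock",
            "containerPath": "/var/run/docker.sock",
            "readOnly": false
        }
    ]
}
CONTAINER_PROPERTIES
}
```

The key detail is that mountPoints.sourceVolume refers to the volume by its name ("dockersock"), not by its host path.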
Access to your private ECR images works fine from the inner Docker; however, the ECR authentication of the outer Docker does not carry over, so you need to re-authenticate with
aws ecr get-login-password \
--region <region> \
| docker login \
--username AWS \
--password-stdin <aws_account_id>.dkr.ecr.<region>.amazonaws.com
before you run your privately hosted docker container.
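Put together, a job entrypoint might look like the following sketch. The region, account ID, and image name are hypothetical placeholders; it assumes the AWS CLI v2 is available in the docker/compose image's environment and that the socket is mounted as above:

```
#!/bin/sh
set -e

# Hypothetical placeholders -- substitute your own values.
REGION="eu-west-1"
ACCOUNT_ID="123456789012"
REGISTRY="${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com"

# Re-authenticate: the outer Docker's ECR credentials do not carry over.
aws ecr get-login-password --region "$REGION" \
  | docker login --username AWS --password-stdin "$REGISTRY"

# This talks to the host daemon through the mounted /var/run/docker.sock.
docker run --rm "$REGISTRY/my-image:latest"
```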
It turns out privileged is not even required, which is nice.
Upvotes: 6