Reputation: 2781
Env
Problem
We have three Docker containers in our app.
The DB and app server start fine, but once the db-seed container exits (as it should) after running its migration scripts, the rest of the containers die (STOP).
This strange behaviour happens only in AWS ECS, never in my local Docker setup.
Moreover, killing any one of the containers stops the other containers in AWS ECS.
Our docker-compose file:
version: '2'
services:
  db:
    image: db-image
    hostname: db
    cpu_shares: 50
    mem_limit: 3758096384
    volumes:
      - /data/db:/data/db
    ports:
      - "27017:27017"
  db-seed:
    image: db-seed
    cpu_shares: 10
    mem_limit: 504288000
    links:
      - db
  web:
    image: server-image
    cpu_shares: 50
    mem_limit: 3758096384
    ports:
      - "8080:8080"
    links:
      - db
Is this an issue in AWS ECS or a feature (all or none)?
Upvotes: 3
Views: 2570
Reputation: 8871
You're missing the essential parameter in your task definition. Unfortunately I'm not aware of a way to set this parameter via docker-compose, but that's what's causing the behavior you're seeing in the resultant ECS task. From the documentation:
If the essential parameter of a container is marked as true, and that container fails or stops for any reason, all other containers that are part of the task are stopped. If the essential parameter of a container is marked as false, then its failure does not affect the rest of the containers in a task. If this parameter is omitted, a container is assumed to be essential.
Note that the parameter defaults to true when it is omitted. Given that all of your containers are missing the parameter, it is expected behavior for them all to stop when one of them exits.
More information can be found in the ECS Task Definition Parameters documentation: http://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html#container_definition_environment
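For reference, here is a minimal sketch of what the relevant part of a registered task definition could look like with db-seed marked as non-essential. The container names and images mirror your compose services; the family name and the memory values (converted from your byte limits to MiB) are illustrative assumptions, not your actual configuration:

{
  "family": "my-app",
  "containerDefinitions": [
    {
      "name": "db",
      "image": "db-image",
      "essential": true,
      "memory": 3584,
      "portMappings": [{ "containerPort": 27017, "hostPort": 27017 }]
    },
    {
      "name": "db-seed",
      "image": "db-seed",
      "essential": false,
      "memory": 480,
      "links": ["db"]
    },
    {
      "name": "web",
      "image": "server-image",
      "essential": true,
      "memory": 3584,
      "links": ["db"],
      "portMappings": [{ "containerPort": 8080, "hostPort": 8080 }]
    }
  ]
}

With essential set to false on db-seed, that container can exit after running its migrations without taking db and web down with it, while a failure of db or web still stops the whole task.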
I know this isn't an exact answer, but I hope it helps in determining how to solve your issue.
Upvotes: 4