Reputation: 2546
I'm deploying an app with a start-up script that generates cache data if it does not exist; if the cache already exists, that step is skipped and the main app runs. This is all controlled by ENTRYPOINT ["/opt/entrypoint.sh"], a custom script that decides which path to take based on the scenario.
The problem I'm having is that AWS ECS marks the container unhealthy and kills it, even though it is running the entrypoint.sh specified in the Dockerfile. What is "unhealthy" about it? How can I keep the cache generation going before starting the main app in the container? This is a one-time process that happens the first time the image is pulled and run as a local container.
Upvotes: 2
Views: 2170
Reputation: 2546
My org and I ultimately solved this by keeping the Docker container as thin as possible and using AWS snapshots and volumes to manage the external payload, rather than trying to pull the data down to the local Docker container on first boot. This required some minor refactoring but gave us what we needed to move forward. For the record, Docker itself worked fine; the problem was the AWS ECS health check and the inability to pause other services while this one spent an extended time booting up.
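As a sketch of what that looks like in practice, the payload can live on a volume attached to the container instance and be mounted into the container through the task definition, so the data is already in place when the task launches. The volume name and paths below are hypothetical:

```json
"containerDefinitions": [
  {
    "name": "app",
    "mountPoints": [
      { "sourceVolume": "payload", "containerPath": "/opt/app/cache", "readOnly": true }
    ]
  }
],
"volumes": [
  { "name": "payload", "host": { "sourcePath": "/mnt/payload" } }
]
```

With this layout the container starts quickly because it never has to generate or download the payload itself.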
Upvotes: 0
Reputation: 484
It seems like your health check policy is marking the container as unhealthy even while it is still starting up.
To fix this you have to adjust the health checks. That can be done in several places (Target Group, Task Definition). I suggest doing it in the Task Definition, because there the health check is tied to your container's behavior. Here's the documentation for the health check fields in a task definition.
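A minimal sketch of that healthCheck block in the container definition, assuming the app exposes an HTTP health endpoint on port 8080 (the endpoint and timings are assumptions, not from the question). startPeriod is the important part here: failed checks during that grace period don't count against the container:

```json
"healthCheck": {
  "command": ["CMD-SHELL", "curl -f http://localhost:8080/health || exit 1"],
  "interval": 30,
  "timeout": 5,
  "retries": 3,
  "startPeriod": 300
}
```

Note that startPeriod is capped at 300 seconds, so a cache build that runs longer than that still needs another approach, such as moving the work out of the container's first boot.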
Attention! In my experience you can't remove the health check configuration once you have added it to the task definition. In my case it made sense to keep the health checks on the ELB side (so I defined them in the target group), and I had to delete the task definition and create it again to get rid of the task-level health check configuration.
Upvotes: 1