Reputation: 476
I'm trying to get a working Docker environment on AWS using the ecs-cli command line.
I have a working local Docker environment using Dockerfiles, docker-compose.yml, a .env file, and an entrypoint.sh script. The containers are an Apache webserver running PHP with a bunch of extensions, and a MySQL database.
Skeleton file structure is like this:
./db <-- mounted by db container for persistence
./docker
./docker/database
./docker/database/Dockerfile
./docker/database/dump.sql
./docker/webserver
./docker/webserver/apache-config.conf
./docker/webserver/Dockerfile
./docker/webserver/entrypoint.sh
./docker-compose.yml
./web <-- mounted by web server, contains all public web code
Here are the two Dockerfiles:
./docker/database/Dockerfile
FROM mysql:5.6
ADD dump.sql /docker-entrypoint-initdb.d
./docker/webserver/Dockerfile
FROM php:5.6-apache
RUN apt-get update
RUN curl -sL https://deb.nodesource.com/setup_8.x | bash -
RUN apt-get install -y zlib1g-dev nodejs gdal-bin
RUN npm install -g topojson
RUN docker-php-ext-install mysql mysqli pdo pdo_mysql zip
RUN pecl install dbase
RUN docker-php-ext-enable dbase
COPY apache-config.conf /etc/apache2/sites-enabled/000-default.conf
RUN a2enmod rewrite headers
RUN service apache2 restart
COPY entrypoint.sh /entrypoint.sh
RUN chmod 0755 /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh", "apache2-foreground"]
entrypoint.sh creates some directories in the web directory for apache to write into:
./docker/webserver/entrypoint.sh
#!/bin/sh
mkdir /var/www/html/maps
chown www-data /var/www/html/maps
chgrp www-data /var/www/html/maps
exec "$@"
Here's the docker-compose.yml
version: '2'
services:
  webserver:
    image: ACCOUNT_NUMBER.dkr.ecr.eu-west-1.amazonaws.com/project/project-webserver
    ports:
      - "8080:80"
    volumes:
      - ./web:${APACHE_DOC_ROOT}
    links:
      - db
    environment:
      - HTTP_ROOT=http://${DOCKER_HOST_IP}:${DOCKER_HOST_PORT}/
      - PHP_TMP_DIR=${PHP_TMP_DIR}
      - APACHE_LOG_DIR=${APACHE_LOG_DIR}
      - APACHE_DOC_ROOT=${APACHE_DOC_ROOT}/
      - SERVER_ADMIN_EMAIL=${SERVER_ADMIN_EMAIL}
      - MYSQL_USER=${MYSQL_USER}
      - MYSQL_PASSWORD=${MYSQL_PASSWORD}
      - MYSQL_DATABASE=${MYSQL_DATABASE}
    env_file: .env
  db:
    user: "1000:50"
    image: ACCOUNT_NUMBER.dkr.ecr.eu-west-1.amazonaws.com/project/project-database
    ports:
      - "4406:3306"
    volumes:
      - ./db:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
      - MYSQL_USER=${MYSQL_USER}
      - MYSQL_PASSWORD=${MYSQL_PASSWORD}
      - MYSQL_DATABASE=${MYSQL_DATABASE}
    env_file: .env
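For reference, the ${...} values above come from the .env file next to docker-compose.yml — something along these lines (placeholder values, obviously, not the real ones):
DOCKER_HOST_IP=192.168.99.100
DOCKER_HOST_PORT=8080
APACHE_DOC_ROOT=/var/www/html
APACHE_LOG_DIR=/var/log/apache2
PHP_TMP_DIR=/tmp
SERVER_ADMIN_EMAIL=admin@example.com
MYSQL_ROOT_PASSWORD=changeme-root
MYSQL_USER=project_user
MYSQL_PASSWORD=changeme
MYSQL_DATABASE=project_db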
To build the images referenced there I:
Created an AWS IAM user with admin permissions, set its keys in ~/.aws/credentials under a profile name, then set up the local environment with export AWS_PROFILE=my-project-profile
Then built the images locally as follows:
docker/webserver $ docker build -t ACCOUNT_NUMBER.dkr.ecr.eu-west-1.amazonaws.com/project/project-webserver .
docker/database $ docker build -t ACCOUNT_NUMBER.dkr.ecr.eu-west-1.amazonaws.com/project/project-database .
Got Docker logged into ECR (by running the docker login command it echoes to stdout):
$ aws ecr get-login --no-include-email
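A handy shortcut is to let the shell execute the login command it emits directly, e.g.:
$ $(aws ecr get-login --no-include-email)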
Created the repos:
$ aws ecr create-repository --repository-name project/project-webserver
$ aws ecr create-repository --repository-name project/project-database
Pushed the images:
$ docker push ACCOUNT_NUMBER.dkr.ecr.eu-west-1.amazonaws.com/project/project-webserver
$ docker push ACCOUNT_NUMBER.dkr.ecr.eu-west-1.amazonaws.com/project/project-database
Checked they are there:
$ aws ecr describe-images --repository-name project/project-webserver
$ aws ecr describe-images --repository-name project/project-database
All looks fine.
Created an EC2 key-pair in the same region
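That step can also be scripted; the key name "project" here is just the name I pass to --keypair in the next step:
$ aws ec2 create-key-pair --key-name project --query 'KeyMaterial' --output text > project.pem
$ chmod 400 project.pem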
$ ecs-cli configure --region eu-west-1 --cluster project
$ cat ~/.ecs/config
Tried running them on ECS:
$ ecs-cli up --keypair project --capability-iam --size 1 --instance-type t2.micro --force
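To double-check that the instance actually registered with the cluster, something like this should list it (using the cluster name configured above):
$ aws ecs list-container-instances --cluster project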
But if I open port 22 in the security group of the resulting EC2 instance and SSH in, I can see the agent container running, but no others:
[ec2-user@ip-10-0-0-122 ~]$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d011f1402c26 amazon/amazon-ecs-agent:latest "/agent" 8 minutes ago Up 8 minutes ecs-agent
I don't see anything bad in the agent's logs:
[ec2-user@ip-10-0-1-102 ~]$ docker logs ecs-agent
2017-09-22T13:32:55Z [INFO] Loading configuration
2017-09-22T13:32:55Z [INFO] Loading state! module="statemanager"
2017-09-22T13:32:55Z [INFO] Event stream ContainerChange start listening...
2017-09-22T13:32:55Z [INFO] Registering Instance with ECS
2017-09-22T13:32:55Z [INFO] Registered! module="api client"
2017-09-22T13:32:55Z [INFO] Registration completed successfully. I am running as 'arn:aws:ecs:eu-west-1:248221388880:container-instance/ba24ead4-21a5-4bc7-ba9f-4d3ba0f29c6b' in cluster 'gastrak'
2017-09-22T13:32:55Z [INFO] Saving state! module="statemanager"
2017-09-22T13:32:55Z [INFO] Beginning Polling for updates
2017-09-22T13:32:55Z [INFO] Event stream DeregisterContainerInstance start listening...
2017-09-22T13:32:55Z [INFO] Initializing stats engine
2017-09-22T13:32:55Z [INFO] NO_PROXY set:169.254.169.254,169.254.170.2,/var/run/docker.sock
2017-09-22T13:33:05Z [INFO] Saving state! module="statemanager"
2017-09-22T13:44:50Z [INFO] Connection closed for a valid reason: websocket: close 1000 (normal): ConnectionExpired: Reconnect to continue
I guess I need to figure out why those containers aren't initialising, but where do I look and, better still, what do I need to do next to get this to work?
Upvotes: 0
Views: 745
Reputation: 476
In case anyone else runs adrift here, the missing incantations were:
$ ecs-cli compose create
which builds an ECS task definition from your compose file (assuming it is compatible...)
and
$ ecs-cli compose run
which will build and run the containers on the remote EC2 machine.
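For the record, both subcommands accept --file and --project-name if your compose file or project name differ from the defaults, and ecs-cli compose up rolls the two steps into one (it registers the task definition if needed and starts a task on the cluster), e.g.:
$ ecs-cli compose --file docker-compose.yml --project-name project up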
SSH'ing to the remote machine and running "docker ps -a" should show the containers running, or "docker logs [container_name]" will show what went wrong...
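You can also sanity-check from your local machine without SSH'ing in, for example:
$ aws ecs list-task-definitions   # confirms the task definition was registered
$ ecs-cli ps                      # lists containers/tasks running in the cluster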
Upvotes: 0