Aidan Ewen

Reputation: 13328

How do I run a Node.js container on AWS ECS without it exiting?

I'm struggling to keep my Node.js container running on ECS. It runs fine when I run it locally with Docker Compose, but on ECS it runs for 2-3 minutes, handles a few connections (2-3 health checks from the load balancer), then shuts down, and I can't work out why.

My Dockerfile -

FROM node:6.10

RUN npm install -g nodemon \
    && npm install forever-monitor \
    winston \
    express-winston

RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app

COPY package.json /usr/src/app/
RUN npm install

COPY . /usr/src/app

EXPOSE 3000

CMD [ "npm", "start" ]

Then in my package.json -

{
  ...
  "main": "forever.js",
  "dependencies": {
    "mongodb": "~2.0",
    "abbajs": ">=0.1.4",
    "express": ">=4.15.2"
  }
  ...
}
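
start.js itself is just an Express app listening on port 3000 — roughly this shape (the route name is illustrative, not my exact file):

// Rough sketch of start.js — the app.listen() call is what should keep
// the Node process alive. The /health route and port are illustrative.
var express = require('express');
var app = express();

app.get('/health', function (req, res) {
  res.sendStatus(200); // what the load balancer health check hits
});

app.listen(3000, function () {
  console.log('listening on 3000');
});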

In my docker-compose.yml I run with nodemon -

node:
  ...
  command: nodemon

In my CloudWatch logs I can see everything start -

14:20:24  npm info lifecycle [email protected]~start: [email protected]

Then I see the health check requests (all returning HTTP 200), then a bit later it all wraps up -

14:23:00  npm info lifecycle [email protected]~poststart: [email protected]
14:23:00  npm info ok

I've tried wrapping my start.js script in forever-monitor, but that doesn't seem to make any difference.
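
The wrapper just restarts start.js whenever it exits — a minimal sketch (the restart limit and timings here are illustrative, not my exact settings):

// forever.js — minimal sketch of a forever-monitor wrapper.
var forever = require('forever-monitor');

var child = new (forever.Monitor)('start.js', {
  max: 10,           // give up after 10 restarts
  minUptime: 2000,   // child must stay up 2s to count as "started"
  spinSleepTime: 1000
});

child.on('exit', function () {
  console.log('start.js exited after too many restarts');
});

child.start();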

UPDATE

My ECS task definition -

{
  "requiresAttributes": [
    {
      "value": null,
      "name": "com.amazonaws.ecs.capability.ecr-auth",
      "targetId": null,
      "targetType": null
    },
    {
      "value": null,
      "name": "com.amazonaws.ecs.capability.logging-driver.awslogs",
      "targetId": null,
      "targetType": null
    },
    {
      "value": null,
      "name": "com.amazonaws.ecs.capability.docker-remote-api.1.19",
      "targetId": null,
      "targetType": null
    }
  ],
  "taskDefinitionArn": "arn:aws:ecs:us-east-1:562155596068:task-definition/node:12",
  "networkMode": "bridge",
  "status": "ACTIVE",
  "revision": 12,
  "taskRoleArn": null,
  "containerDefinitions": [
    {
      "volumesFrom": [],
      "memory": 128,
      "extraHosts": null,
      "dnsServers": null,
      "disableNetworking": null,
      "dnsSearchDomains": null,
      "portMappings": [
        {
          "hostPort": 0,
          "containerPort": 3000,
          "protocol": "tcp"
        }
      ],
      "hostname": null,
      "essential": true,
      "entryPoint": null,
      "mountPoints": [],
      "name": "node",
      "ulimits": null,
      "dockerSecurityOptions": null,
      "environment": [
        {
          "name": "awslogs-group",
          "value": "node_logs"
        },
        {
          "name": "awslogs-region",
          "value": "us-east-1"
        },
        {
          "name": "NODE_ENV",
          "value": "production"
        }
      ],
      "links": null,
      "workingDirectory": null,
      "readonlyRootFilesystem": null,
      "image": "562155596068.dkr.ecr.us-east-1.amazonaws.com/node:06b5a3700df163c8563865c2f23947c2685edd7b",
      "command": null,
      "user": null,
      "dockerLabels": null,
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "node_logs",
          "awslogs-region": "us-east-1"
        }
      },
      "cpu": 1,
      "privileged": null,
      "memoryReservation": null
    }
  ],
  "placementConstraints": [],
  "volumes": [],
  "family": "node"
}

Tasks are all stopped with the status "Task failed ELB health checks in (target-group ....". Health checks pass 2 or 3 times before they start failing, and there's no record of anything other than an HTTP 200 in the logs.

Upvotes: 0

Views: 1878

Answers (1)

Aidan Ewen

Reputation: 13328

I was using an old version of the MongoDB driver (~2.0) and keeping connections open to more than one database. When I upgraded the driver, the issue went away.

  "dependencies": {
    "mongodb": ">=2.2"
  }

I can only assume that there was a bug in the driver.
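
For anyone in the same situation, this is roughly how the two databases share a single connection now — a sketch against the 2.x callback API (with a 3.x driver the callback hands you a MongoClient and you call client.db(...) instead); the URL and database names are placeholders:

// Sketch only — mongodb driver 2.2+, placeholder URL and database names.
var MongoClient = require('mongodb').MongoClient;

MongoClient.connect('mongodb://mongo:27017/primarydb', function (err, db) {
  if (err) throw err;

  // db.db() returns a sibling Db that reuses the same connection pool,
  // so both databases are served over one set of sockets.
  var secondaryDb = db.db('secondarydb');

  // ... hand db and secondaryDb to the rest of the app ...
});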

Upvotes: 1
