Aerodynamika

Reputation: 8433

Can I use docker secrets within AWS ECS infrastructure?

I'm struggling with the best way to pass secret config data to my Node.js app.

The problem is that I need it to run on my local machine, within the CI environment for testing and staging, and on production (AWS).

I was thinking of using Docker secrets as described here: https://medium.com/better-programming/how-to-handle-docker-secrets-in-node-js-3aa04d5bf46e

The problem is that this only works if you run Docker as a service (via Swarm), which I could do locally, but not on AWS ECS and not in CI. (Or am I missing something there?)

Then I could also use AWS Secrets Manager, but how would I get to the secrets in my CI environment and in the local environment? Or if I don't have internet access?

Isn't there a way to make a separate file or something that I could use in every environment, no matter whether it's my local one running via docker run, the CI one, or the AWS ECS one?

Upvotes: 1

Views: 1637

Answers (2)

ydaetskcoR

Reputation: 56997

The most common way of passing environment-specific configuration to an application running in a Docker container is to use environment variables, as proposed by the third factor of the Twelve-Factor App methodology.

With this approach, your application reads all configuration, including secrets, from environment variables.

If you are running locally and outside of a Docker container, you can set these environment variables manually, run a script that exports them to your shell, or use a dotenv-style helper for your language that automatically loads variables from an environment file and exposes them as environment variables to your application. You can then fetch them with process.env.FOO, os.environ["FOO"], ENV['FOO'], or however your application's language accesses environment variables.
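As a minimal sketch, such an export script could look like the following (the file name secrets.sh and the variable names are placeholders of my own, not anything from the question):

# secrets.sh -- source this into the current shell before starting the app:
#   . ./secrets.sh && node server.js
# FOO and DATABASE_PASSWORD are placeholder names
export FOO="bar"
export DATABASE_PASSWORD="hunter2"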

When running in a Docker container locally, you can avoid packaging your .env file into the image and instead inject the variables from the environment file with the --env-file argument to docker run, or inject them individually by hand with --env.
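For example, assuming an image called myapp:

# inject every variable listed in the local .env file
docker run --env-file .env myapp

# or inject variables one by one
docker run --env FOO=bar --env DATABASE_PASSWORD=hunter2 myapp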

When these are only accessed locally, you just need to make sure you don't store any secrets in source control, so add your .env file or equivalent to your .gitignore.
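For example:

# keep the local environment file out of source control
echo ".env" >> .gitignore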

When it comes to running in CI, you will need your CI system to store these secret variables securely and inject them at runtime. In GitLab CI, for example, you would create the variables in the project's CI/CD settings; these are stored encrypted in the database and are then injected transparently, in plain text, into the job's container at runtime as environment variables.

For deployment to ECS you can store non-secret configuration directly as environment variables in the task definition. This leaves the values readable by anyone with read-only access to your AWS account, which is probably not what you want for secrets. Instead, you can create these in SSM Parameter Store or Secrets Manager and then refer to them in the secrets parameter of your task definition.

The AWS documentation includes this small example of a task definition that gets a log-driver secret from Secrets Manager and a container secret from SSM Parameter Store:

{
  "requiresCompatibilities": [
    "EC2"
  ],
  "networkMode": "awsvpc",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "httpd",
      "memory": 128,
      "essential": true,
      "portMappings": [
        {
          "containerPort": 80,
          "protocol": "tcp"
        }
      ],
      "logConfiguration": {
        "logDriver": "splunk",
        "options": {
          "splunk-url": "https://sample.splunk.com:8080"
        },
        "secretOptions": [
          {
            "name": "splunk-token",
            "valueFrom": "arn:aws:secretsmanager:us-east-1:awsExampleAccountID:secret:awsExampleParameter"
          }
        ]
      },
      "secrets": [
        {
          "name": "DATABASE_PASSWORD",
          "valueFrom": "arn:aws:ssm:us-east-1:awsExampleAccountID:parameter/awsExampleParameter"
        }
      ]
    }
  ],
  "executionRoleArn": "arn:aws:iam::awsExampleAccountID:role/awsExampleRoleName"
}
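As a hedged sketch, the parameter and secret referenced above could be created with the AWS CLI like this (the names mirror the example ARNs, and the values are placeholders). Note that the task's execution role also needs permission to read them: ssm:GetParameters for the parameter, secretsmanager:GetSecretValue for the secret, plus kms:Decrypt if a customer-managed KMS key is used.

# create the SSM parameter referenced in "secrets"
aws ssm put-parameter \
  --name awsExampleParameter \
  --type SecureString \
  --value "hunter2"

# create the Secrets Manager secret referenced in "secretOptions"
aws secretsmanager create-secret \
  --name awsExampleParameter \
  --secret-string "my-splunk-token"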

Upvotes: 1

Adiii

Reputation: 60134

Isn't there a way to make a separate file or something that I could use in every environment, no matter whether it's my local one running via docker run, the CI one, or the AWS ECS one? Or if I don't have internet access?

Targeting multiple environments with a constraint like "no internet access" is a case where you can hardly rely on an AWS service, such as SSM Parameter Store, for keeping secrets.

What I can suggest is to use a dotenv file, which is environment-independent; you only need to populate it from a different source in each environment, for example:

  • Pull it from S3 when running on staging or production in AWS
  • Bind-mount a local .env when working on a dev machine, to handle the no-internet condition
  • Pull it from S3 or generate a dynamic .env for CI

To make sure each environment consumes the proper .env file, you can add logic to the Docker entrypoint:

#!/bin/sh

if [ "${NODE_ENV}" = "production" ]; then
   # in production we are in AWS and we can pull the dotenv file from S3
   aws s3 cp s3://mybucket/production/.env /app/.env
elif [ "${NODE_ENV}" = "staging" ]; then
   # in staging we also assume that we are in AWS and can pull the dotenv file from S3
   aws s3 cp s3://mybucket/staging/.env /app/.env
elif [ "${NODE_ENV}" = "ci" ]; then
   # generate a dynamic .env or pull it from S3
   aws s3 cp s3://mybucket/ci/.env /app/.env
else
   echo "running against local env, please bind-mount a dotenv file, e.g. docker run -it -v $PWD/.env:/app/.env ..."
fi
echo "Starting node application"
exec node "$@"

Enable encryption on S3, and make sure only the production environment is able to pull the production .env file; a stricter policy leads to a more secure mechanism.
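As a sketch, the upload side could request server-side encryption explicitly (bucket and path taken from the entrypoint above):

# upload the production env file with server-side KMS encryption
aws s3 cp .env s3://mybucket/production/.env --sse aws:kms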

For a local setup you can try:

docker run -it --rm -e NODE_ENV="local" -v $PWD/.env:/app/.env myapp

Upvotes: 1
