Using Vault with a docker-compose file

Currently I am using a docker-compose file to set up my dev/prod environments, and I am using environment variables to store secrets, database credentials, etc. After some searching, I found out that Vault can be used to secure the credentials. I tried a couple of basic examples with Vault, but I still have no idea how to use it with a docker-compose file. Can someone point me to the correct way? If Vault is not a good solution with docker-compose, what mechanisms could I use to secure credentials rather than storing them in the environment as plain text?

Upvotes: 38

Views: 57929

Answers (4)

simarmannsingh

Reputation: 171

I tried many solutions, but ultimately ended up creating my own version, which works.

version: '3.8'

services:
  vault:
    image: hashicorp/vault:latest
    container_name: vault
    restart: unless-stopped
    environment:
      VAULT_ADDR: "https://127.0.0.1:8200"
      VAULT_API_ADDR: "https://127.0.0.1:8200"
      VAULT_LOCAL_CONFIG: |
        {
          "listener": [
            {
              "tcp": {
                "address": "0.0.0.0:8200",
                "tls_disable": 0,
                "tls_cert_file": "/vault/config/certs/vault-cert.pem",
                "tls_key_file": "/vault/config/certs/vault-key.pem"
              }
            }
          ],
          "storage": {
            "file": {
              "path": "/vault/data"
            }
          },
          "default_lease_ttl": "168h",
          "max_lease_ttl": "720h",
          "ui": true
        }
    ports:
      - "8200:8200"
    volumes:
      - "<CUSTOM_USER_DIRECTORY>/data:/vault/data"
      - "<CUSTOM_USER_DIRECTORY>/certs:/vault/config/certs"
    cap_add:
      - IPC_LOCK
    command: "vault server -config vault/config/local.json"

Notice that I used <CUSTOM_USER_DIRECTORY>, which can be any directory on your system. I mapped these directories because I wanted to back up the data with a backup service that takes regular differential snapshots of the directory.

Also, in my case, I had to assign the correct permissions to the directory mentioned above:

sudo chown -R 100:100 <CUSTOM_USER_DIRECTORY>
sudo chmod -R 770 <CUSTOM_USER_DIRECTORY>

Also notice that I'm not running a dev environment; it is using TLS. For that, I am using the mkcert tool to generate the TLS certificate inside the certs directory. It can create certificates even for localhost. Here's the command I used:

mkcert 127.0.0.1

It created a certificate valid for 3 years. Good enough for me.
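
One thing to watch: mkcert names its output files after the host (something like 127.0.0.1.pem and 127.0.0.1-key.pem), so they won't match the filenames referenced in the listener config above until you rename them, roughly like this (paths use the same placeholder as above):

# rename the mkcert output to match tls_cert_file / tls_key_file in the config
mv 127.0.0.1.pem <CUSTOM_USER_DIRECTORY>/certs/vault-cert.pem
mv 127.0.0.1-key.pem <CUSTOM_USER_DIRECTORY>/certs/vault-key.pem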

Now, if you created the certificate after you assigned the permissions, you'll have to make sure the certificates also have the correct permissions.

The only issue with this approach is that CLI access no longer works out of the box; it throws an error saying "tls: failed to verify certificate: x509: certificate signed by unknown authority".
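
If you do need CLI access against this self-signed setup, one workaround (not something I use myself, just a sketch) is to point the Vault CLI at the mkcert root CA via VAULT_CACERT:

# mkcert -CAROOT prints the directory containing rootCA.pem
export VAULT_ADDR="https://127.0.0.1:8200"
export VAULT_CACERT="$(mkcert -CAROOT)/rootCA.pem"
vault status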

In my case, though, I'm using nginx as the load balancer and have mapped a domain to https://127.0.0.1:8200 (notice the https here as well).

Finally, access via the domain works and it is TLS-encrypted. On the first visit it also shows the initialization UI screen. Works perfectly well.
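
If you'd rather initialise and unseal from the command line instead of the UI, a sketch that sidesteps the self-signed-certificate check (container_name vault comes from the compose file above):

# run inside the container; -tls-skip-verify avoids the unknown-authority error
docker exec -it vault vault operator init -tls-skip-verify
docker exec -it vault vault operator unseal -tls-skip-verify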

Upvotes: 2

Roman Rhrn Nesterov

Reputation: 3673

Single file version

version: '3.6'

services:
  vault:
    image: vault:1.13.3
    healthcheck:
      # healthy only once the vault is initialised and unsealed (vault status exits 0 when unsealed)
      test: ["CMD", "vault", "status", "-address=http://127.0.0.1:8200"]
      retries: 5
    restart: always
    ports:
      - 8200:8200
    environment:
      # the listener below disables TLS, so the client address is plain http
      VAULT_ADDR: 'http://0.0.0.0:8200'
      VAULT_LOCAL_CONFIG: '{"listener": [{"tcp":{"address": "0.0.0.0:8200","tls_disable":"1"}}], "ui": true, "storage": [{"file": {"path":"/vault/data"}}]}'
    cap_add:
      - IPC_LOCK
    volumes:
      - ./vault/config:/vault/config
      - ./vault/data:/vault/data
    command: vault server -config vault/config/local.json
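
A quick way to check that the container picked up the inline config (a sketch; as far as I know the image writes VAULT_LOCAL_CONFIG to local.json in /vault/config, which is the file the command above points at):

docker-compose up -d vault
# TLS is disabled on this listener, so plain HTTP works; exits non-zero while the vault is sealed
docker-compose exec vault vault status -address=http://127.0.0.1:8200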

Upvotes: 0

Jean-Roch B.

Reputation: 554

I have a slightly different version (mainly added some environment variables):

docker-compose.yml

version: '3'

services:

    vault:
      image: vault:latest
      volumes:
        - ./vault/config:/vault/config
        - ./vault/policies:/vault/policies
        - ./vault/data:/vault/data
      ports:
        - 8200:8200
      environment:
        - VAULT_ADDR=http://0.0.0.0:8200
        - VAULT_API_ADDR=http://0.0.0.0:8200
        - VAULT_ADDRESS=http://0.0.0.0:8200
      cap_add:
        - IPC_LOCK
      command: vault server -config=/vault/config/vault.json

vault.json:

{
  "listener": {
    "tcp": {
      "address": "0.0.0.0:8200",
      "tls_disable": "true"
    }
  },
  "backend": {
    "file": {
      "path": "/vault/data"
    }
  },
  "default_lease_ttl": "168h",
  "max_lease_ttl": "0h",
  "api_addr": "http://0.0.0.0:8200"
}

If I want to test the vault from outside the containers, I call (for example): http://localhost:8200/v1/sys/seal-status

If I want to test from inside another container on the same compose network, I call (for example): http://vault:8200/v1/sys/seal-status
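
For example, with curl (the second form assumes the caller is a container on the same compose network, where the service name vault resolves):

# from the host
curl http://localhost:8200/v1/sys/seal-status
# from another container on the same compose network
curl http://vault:8200/v1/sys/seal-status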

I implemented it with laradock.

Upvotes: 29

StampyCode

Reputation: 8118

This is my current docker-compose config for using Vault in dev, but I use dedicated servers (not Docker) in production.

# docker_compose.yml
version: '2'
services:
    myvault:
        image: vault
        container_name: myvault
        ports:
          - "127.0.0.1:8200:8200"
        volumes:
          - ./file:/vault/file:rw
          - ./config:/vault/config:rw
        cap_add:
          - IPC_LOCK
        entrypoint: vault server -config=/vault/config/vault.json

The volume mounts ensure the Vault config and file-backend data are preserved if you have to rebuild the container.

To use the 'file' backend and keep this setup portable for Docker/Git, you will also need to create a directory called config and put the following file into it, named vault.json:

# config/vault.json
{
  "backend": {"file": {"path": "/vault/file"}},
  "listener": {"tcp": {"address": "0.0.0.0:8200", "tls_disable": 1}},
  "default_lease_ttl": "168h",
  "max_lease_ttl": "0h"
}
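
To bring this up for the first time, something like the following should work (a sketch; myvault is the container name from the compose file above, and -address is needed because the listener has TLS disabled while the CLI defaults to https):

docker-compose up -d
# first run only: initialise, and keep the unseal keys and root token somewhere safe
docker exec -it myvault vault operator init -address=http://127.0.0.1:8200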

Notes:
Although the ROOT_TOKEN is static in this configuration (it will not change between container builds), any generated VAULT_TOKEN issued for an AppRole will be invalidated every time the vault has to be unsealed.

I have found that Vault sometimes becomes sealed when the container is restarted.
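
When that happens, a sketch of unsealing it again from the host (repeat the unseal command until the key threshold is met):

# check seal status, then unseal with one of the keys from "vault operator init"
docker exec -it myvault vault status -address=http://127.0.0.1:8200
docker exec -it myvault vault operator unseal -address=http://127.0.0.1:8200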

Upvotes: 35
