Julian

Reputation: 4055

How to map docker volume to EC2 host file system

We have a bunch of microservices, each running in its own Docker container on a shared AWS EC2 instance. Given the excellent results obtained when running locally, we are now trying to use Chronicle queues as a way to communicate between our microservices in AWS.

MS1 receives an API request, does some internal processing, and emits an event to the CH2 Chronicle queue. MS2 listens to the CH2 Chronicle queue; when an event arrives there, it picks it up, does some internal processing, and emits an event to the CH3 Chronicle queue.

API --> MS1 --> CH2 --> MS2 --> CH3 --> ...

Inside each container, /tmp/my_app_data is the application root folder, with a subfolder for each Chronicle queue that the microservice interacts with. For example, in the MS1 container we have /tmp/my_app_data/ch2, and inside the MS2 container we have /tmp/my_app_data/ch2 and /tmp/my_app_data/ch3.
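
For reference, this is roughly how the two sides touch the same queue directory. A minimal sketch using the stock Chronicle Queue API; the class name and payload are made up for illustration:

    import net.openhft.chronicle.queue.ChronicleQueue;
    import net.openhft.chronicle.queue.ExcerptAppender;
    import net.openhft.chronicle.queue.ExcerptTailer;

    public class Ch2Sketch {
        public static void main(String[] args) {
            // MS1 side: append one event to the ch2 queue directory
            try (ChronicleQueue queue = ChronicleQueue.singleBuilder("/tmp/my_app_data/ch2").build()) {
                ExcerptAppender appender = queue.acquireAppender();
                appender.writeText("order-created"); // placeholder payload
            }

            // MS2 side, normally in its own container: tail the same directory
            try (ChronicleQueue queue = ChronicleQueue.singleBuilder("/tmp/my_app_data/ch2").build()) {
                ExcerptTailer tailer = queue.createTailer();
                String event = tailer.readText(); // null when nothing is available
                System.out.println("MS2 received: " + event);
            }
        }
    }

Chronicle Queue memory-maps the files under these directories, which is why each container needs read-write access to the same host-backed path.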

All these folders are mapped to the EC2 host machine under a similar structure:

/tmp/my_app_data
    |_ch2
    |_ch3
    |_...

Now, when trying to run our system, we encountered all kinds of issues accessing data written by one microservice and intended for the next one in the workflow. In the above example, the furthest we could get was to have MS2 read data from ch2 that was sent there by MS1, but we still could not mark the data as processed, which requires MS2 to write into files under the ch2 folder.

I cannot list the full set of permutations we have tried, up to and including hacking file permissions inside the containers and on the EC2 host; it seems I am missing some basic setup specific to Docker/AWS.

This is our Docker configuration from the MS1 Dockerfile:

 RUN mkdir -p /tmp/my_app_data && chown nobody:nobody /tmp/my_app_data
 RUN mkdir -p /tmp/my_app_data/ch2 && chown nobody:nobody /tmp/my_app_data/ch2
 USER nobody
 VOLUME ["/tmp/my_app_data"]

Similarly, this is the Docker configuration from the MS2 Dockerfile:

 RUN mkdir -p /tmp/my_app_data && chown nobody:nobody /tmp/my_app_data
 RUN mkdir -p /tmp/my_app_data/ch2 && chown nobody:nobody /tmp/my_app_data/ch2
 RUN mkdir -p /tmp/my_app_data/ch3 && chown nobody:nobody /tmp/my_app_data/ch3
 USER nobody
 VOLUME ["/tmp/my_app_data"]

And on the AWS side, as part of both the MS1 and MS2 task definitions, we have:

    "mountPoints": [
      {
        "readOnly": null,
        "containerPath": "/tmp/my_app_data",
        "sourceVolume": "chronicle"
      }
    ],
    ....
    "volumes": [
      {
        "fsxWindowsFileServerVolumeConfiguration": null,
        "efsVolumeConfiguration": null,
        "name": "chronicle",
        "host": {
          "sourcePath": "/tmp/my_app_data"
         },
        "dockerVolumeConfiguration": null
      }
    ]
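
(In these task definitions, sourceVolume refers back to the volume named chronicle, so both MS1 and MS2 bind-mount the same host directory, /tmp/my_app_data, at the same path inside their containers.)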

So here is my question: what am I doing wrong and how can I fix it? Ideally this should work when running as user nobody, but since this is a POC I would be thankful to get it running any way at all, including as root. The purpose of this POC is to confirm whether we get the same good results with Chronicle in the cloud as we did when running locally.

Thank you in advance for your inputs.

Upvotes: 0

Views: 830

Answers (1)

Dmitry Pisklov

Reputation: 1206

How to set up Chronicle Queue to work with Docker is documented in the Chronicle FAQ. You need to ensure that:

  • containers share the IPC namespace (run with --ipc="host"; for ECS, see the sketch below)

  • queues are mounted on bind-mounted folders from the host (i.e. -v /host/dir/1/:/container/dir)
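
If the tasks run on ECS, as the task definitions above suggest, the --ipc="host" part maps to the task-level ipcMode setting. A sketch of the relevant task-definition fragment, assuming the EC2 launch type (ipcMode is not supported on Fargate; the family value is illustrative):

    {
      "family": "ms1",
      "ipcMode": "host",
      "volumes": [
        {
          "name": "chronicle",
          "host": { "sourcePath": "/tmp/my_app_data" }
        }
      ]
    }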

Upvotes: 0
