Stephane

Reputation: 12760

Docker dependencies design for containers configuration and startup

I containerize a web application which has container dependencies.

The containers are listed here in dependency order, each one depending on the previous.

  1. There is a mysql container which compiles mysql, installs it and configures it.

  2. There is a learnintouch container which installs files and seeds custom product data into the mysql container.

  3. There is a learnintouch.com container which installs files and seeds custom website data into the mysql container.

The data seeding is part of the application installation and needs to be done only once in the application lifetime.

The data seeding is quite long, very long in fact.

It would be nice to have the application created AND started by a docker-compose.yml file sitting in the learnintouch.com directory.

At first, I was hoping to have only three containers in a dependency chain, each container waiting for its dependency to complete its data seeding before running itself, with the last one finally starting the application.

I now see this will be difficult to achieve, if not impossible. It is already tricky to have docker-compose wait for a service to start up, and it's even harder to check that a data seeding has completed.
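
For what it's worth, waiting for plain startup can be expressed with a healthcheck plus a `depends_on` condition (supported by the Compose specification and docker-compose, though classic v3 files dropped `condition`). A sketch, assuming placeholder build paths; note it only gates on MySQL answering pings, not on seeding having finished:

```yaml
services:
  mysql:
    build: ./mysql            # placeholder path
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 5s
      timeout: 3s
      retries: 12
  learnintouch:
    build: ./learnintouch     # placeholder path
    depends_on:
      mysql:
        condition: service_healthy
```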

I reckon one way is to have two branches in the container dependency tree, one for the application installation doing the data seeding and another one only starting the application.

Is that a common practice?

Upvotes: 0

Views: 90

Answers (1)

BMitch

Reputation: 264036

The best practice is to remove the dependency so containers can start in any order. If one container starts before another that it depends on, it should return a graceful error when the dependency is used, and begin working as soon as the dependency comes online. This allows the various microservices to be independently upgraded, replaced, or migrated without restarting your entire infrastructure.
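
A minimal sketch of that retry-until-online behavior, as a hypothetical `wait_for` helper a container entrypoint could use (the `mysql` hostname in the usage comment is an assumption from the compose setup):

```shell
#!/bin/sh
# wait_for: hypothetical helper, not part of Docker or compose.
# Retries a command once per second until it succeeds or the timeout expires.
wait_for() {
  timeout="$1"; shift
  until "$@"; do
    timeout=$((timeout - 1))
    if [ "$timeout" -le 0 ]; then
      return 1    # dependency never came online
    fi
    sleep 1
  done
}

# Example use in an entrypoint, before starting the app:
# wait_for 30 mysqladmin ping -h mysql --silent
```

The same pattern can live inside the application itself (retrying the database connection), which is more robust than shell polling.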

When that's not possible, realize that docker-compose is great at what it does, but what it does is limited in scope, so you'll need to extend it with your own scripting. You may end up with several compose files: the first would be run to seed the data, and once it completes and returns, the script would continue on to launch your other containers.
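
A sketch of such a wrapper script; the file names, and the `seed` service name, are assumptions:

```shell
#!/bin/sh
set -e
# Run the one-shot seeding stack. --abort-on-container-exit makes `up`
# return when the seed container finishes, and --exit-code-from propagates
# its exit status, so `set -e` stops the script if seeding fails.
docker-compose -f docker-compose.seed.yml up \
  --abort-on-container-exit --exit-code-from seed
docker-compose -f docker-compose.seed.yml down

# Seeding succeeded; start the long-running application stack.
docker-compose -f docker-compose.app.yml up -d
```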

Lastly, large data sets like this are best managed as an external volume, which you can create, update, and back up independently of your other containers.
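
An external volume is created once outside of compose (`docker volume create learnintouch-mysql-data`, name illustrative) and then referenced in the compose file; compose will refuse to start if it doesn't exist rather than silently recreating it:

```yaml
services:
  mysql:
    build: ./mysql            # placeholder path
    volumes:
      - mysql-data:/var/lib/mysql
volumes:
  mysql-data:
    external: true
    name: learnintouch-mysql-data
```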

Upvotes: 1
