Patrik Mihalčin

Reputation: 3971

Docker-compose, deploy WAR to Tomcat, run Oracle service

Given I have the following scenario:

I want to deploy a web application to Tomcat, which connects to an Oracle database and displays data to the user.

What I will need is the following:

The solution could be:

version: '2'
services:
  web-tomcat:
    image: tomcat:jre8
    depends_on:
      - db-oracle
  db-oracle:
    image: wnameless/oracle-xe-11g

But multiple problems have to be solved:

  1. waiting for the Oracle service to be ready - solved with a waiting wrapper, as advised in the Docker docs: wait-for-it
  2. waiting for the users to be created - solved by a waiting loop that tries to connect to the Oracle DB as the user being created (see the sketch after this list)
  3. waiting for the db schema to be prepared by the Liquibase fatdb deployment, and only then deploying the WAR to Tomcat
  4. not ending up in the situation where I spin up a separate container to do the fatdb deployment and it then just stays in the Exited (0) state, since the fatdb deployment is a one-off task, not a service
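
For point 2, a minimal sketch of such a waiting loop, assuming sqlplus is available in the container and that APP_USER, APP_PASSWORD and DB_HOST are supplied as environment variables (all of these names are placeholders):

#!/bin/sh
# Keep trying to connect as the application user until the account exists.
until echo "exit" | sqlplus -L "$APP_USER/$APP_PASSWORD@//$DB_HOST:1521/XE" > /dev/null 2>&1; do
  echo "Waiting for Oracle user $APP_USER to be created..."
  sleep 5
done
echo "Oracle is up and $APP_USER can connect."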

I ended up creating an entrypoint.sh script in order to have the DB ready before the WAR deployment:

#!/bin/sh
java -jar fatdb.jar update   # run the Liquibase fatdb migration first
catalina.sh run              # then start Tomcat in the foreground

and overriding the entrypoint in the Tomcat image.
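
A minimal sketch of how that override could look in docker-compose.yml, assuming entrypoint.sh sits next to the compose file and is mounted into the container, and that fatdb.jar is already available inside the image (the paths are assumptions):

version: '2'
services:
  web-tomcat:
    image: tomcat:jre8
    depends_on:
      - db-oracle
    volumes:
      - ./entrypoint.sh:/entrypoint.sh
    entrypoint: /entrypoint.sh
  db-oracle:
    image: wnameless/oracle-xe-11g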

Is there a better solution for preparing the database schema before the WAR deployment in docker-compose?

Upvotes: 3

Views: 1627

Answers (1)

Rob Lockwood-Blake

Reputation: 5056

I don't know if it's "better" than what you currently have, but I always approach this by thinking that I have two deployments to make my application complete. I have:

  1. A database deployment (that includes updating the schema to the latest version using something like Liquibase)
  2. An application deployment (that includes deploying the latest version of my WAR archive or however it's packaged)

Naturally the 2nd deployment depends on the 1st deployment being complete and successful.

So how do I model that with docker-compose? I'm all for clarity, so I create two files:

  1. docker-compose.database
  2. docker-compose.application

I then have a two step deployment process:

Step 1:

docker-compose -p application-name -f docker-compose.database up -d database
docker-compose -p application-name -f docker-compose.database run --rm database-migration

First we start up the database using docker-compose up. Then we run our database migration.
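
A hedged sketch of what such a docker-compose.database file could contain to back those two commands; the images and the wait-and-migrate.sh command are assumptions for illustration, not part of the original setup:

version: '2'
services:
  database:
    image: wnameless/oracle-xe-11g
    ports:
      - "1521:1521"
  database-migration:
    image: my-migration-image        # hypothetical image containing fatdb.jar and the wait script
    depends_on:
      - database
    command: ["/wait-and-migrate.sh"]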

As you point out in your question, there will be a delay before the database is actually "ready" after docker-compose up returns, so the database migration logic is wrapped in a database-checking function that only runs the migration once we're happy the database is available. You'll notice that I call my database migration container with the --rm option, meaning that the container is automatically removed once it has completed execution. There is no need to keep this container hanging around once it has done its job.
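
A minimal sketch of that checking-plus-migration wrapper, assuming the migration container has sqlplus and fatdb.jar available and that DB_USER / DB_PASSWORD are placeholders for the real credentials:

#!/bin/sh
# Hypothetical /wait-and-migrate.sh: block until the database accepts connections,
# then run the schema migration exactly once and exit.
until echo "exit" | sqlplus -L "$DB_USER/$DB_PASSWORD@//database:1521/XE" > /dev/null 2>&1; do
  echo "Database not ready yet, retrying in 5s..."
  sleep 5
done
java -jar fatdb.jar update   # the container exits afterwards and --rm removes it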

It's also important here to use the -p option that docker-compose offers. This specifies the project name for the deployment and ensures that all containers are created on the same Docker network, meaning that inter-container communication by service name is not a problem.

Step 2:

docker-compose -p application-name -f docker-compose.database -f docker-compose.application up -d application

Docker Compose allows you to specify multiple compose files on the command line and that is what I do here. In my docker-compose.application file I'll have something like this:

version: '2'
services:
  application:
    image: my-tomcat-image
    depends_on:
      - database

The database service is defined in the docker-compose.database file and I know that it is already running (because deployment step 1 completed successfully). Therefore I can immediately start the Tomcat service without having to wait for anything. This helps, as I only have the "wait for the database to be ready" logic in one place, i.e. the database migration. The Tomcat service itself expects the database to be there and, if it is not, fails fast.

Wrapper around the steps

The obvious disadvantage of this approach is that there is suddenly more than one command needed to run your deployment. That is a challenge that always raises its head whenever you're trying to do any form of infrastructure orchestration like this, where there are dependencies between services.

Again you can keep things simple by perhaps writing a simple shell script to wrap up the docker-compose commands. I like to be able to unit test any wrapper scripts, so I tend to use Ruby for something like that. You could then imagine something like:

ruby deploy.rb

The deploy.rb script, of course, wraps up your multiple docker-compose commands.
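
For illustration, a minimal shell wrapper over the two steps could look like the following (deploy.sh and the project name are placeholders; the answer's own preference is a unit-testable Ruby script):

#!/bin/sh
set -e   # stop on the first failing step

PROJECT=application-name

# Step 1: start the database and run the one-off migration container
docker-compose -p "$PROJECT" -f docker-compose.database up -d database
docker-compose -p "$PROJECT" -f docker-compose.database run --rm database-migration

# Step 2: deploy the application on top of the running database
docker-compose -p "$PROJECT" -f docker-compose.database -f docker-compose.application up -d application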

Other good approaches are things like using Jenkins or any other CI/CD pipeline tools to do this for you (appreciating that you may want to do this on your local machine).

You can also start to look at tools like Terraform, which provide a "standardised" approach to doing things like this, but that may be overkill for what you need when a simple wrapper script can get you up and running.

Upvotes: 1
