Conar Welsh

Reputation: 41

Storing submodules for micro services, but still using forks

I am stumped here. A lot of this is already in place; it's just the wrapping that I cannot figure out.

We have a micro-service architecture, with many separate repositories. We are using Docker, and Docker Compose for building and running the development environment, which works beautifully.

The question I have is how to package up the main collection of repositories. So if I have a folder structure like:

\
    service1
        .git
        Dockerfile
    service2 
        .git
        Dockerfile
    service3
        .git
        Dockerfile
    docker-compose.yml
    README.md

...Where service1, service2, service3 are each their own git repository.

My first thought was to use git submodules, which would work, however we enforce policies to require developers to fork repositories instead of working off the main repository due to continuous integration constraints and code reviews. I was not overly excited about using git submodules at all, even before I thought of this caveat, so an alternative solution would be much preferred.

At the moment I can only think to write a script that stores a list of repositories; queries whether the logged-in developer has a fork of each, creating one if not; pulls each fork into the master folder; and then boots docker-compose. This seems like a horrible solution though, enough so that I may just have to write docs telling devs how to do this process manually...
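
The scripted workflow described above could be sketched out before committing to it. Here is a minimal sketch in Python; the repository URLs and the "devuser" fork account are hypothetical, and the function only computes the git commands rather than running them or talking to any hosting API:

```python
# Sketch: given a list of upstream repos and a developer's account name,
# emit the commands that clone each fork and wire the upstream repository
# in as a second remote. All names below are made up for illustration.

def sync_commands(upstream_repos, dev_user):
    """Return the shell commands to clone each developer fork and add
    the upstream repository as an extra remote."""
    commands = []
    for repo in upstream_repos:
        # e.g. "git@github.com:acme/service1.git" -> "service1"
        name = repo.rstrip("/").split("/")[-1].removesuffix(".git")
        fork = f"git@github.com:{dev_user}/{name}.git"
        commands.append(f"git clone {fork} {name}")
        commands.append(f"git -C {name} remote add upstream {repo}")
    return commands

for cmd in sync_commands(
    ["git@github.com:acme/service1.git", "git@github.com:acme/service2.git"],
    "devuser",
):
    print(cmd)
```

A real version would additionally check for (and create) missing forks via the hosting provider's API, then run docker-compose up.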

Thoughts??

Thanks for your time :)

Upvotes: 4

Views: 1448

Answers (3)

user4447859


I had to solve a similar problem at work (although we don't use Docker; instead we have a Vagrantfile which provisions a VM for each of our microservices).

Create a develop repository containing your docker-compose.yml. Each developer should clone the develop repository, as well as his or her forks of the microservices, to a common directory (e.g. ~/workspace):

/Users/conar/workspace
  /develop
    /docker-compose.yml
  /service1
    /Dockerfile
    /app.js
  /service2
    /Dockerfile
    /app.js
  /service3
    /Dockerfile
    /app.js

Your docker-compose.yml might look something like this:

service1:
  build: ../service1
  volumes:
    - ../service1:/app
service2:
  build: ../service2
  volumes:
    - ../service2:/app
service3:
  build: ../service3
  volumes:
    - ../service3:/app

Now each service's app.js (and all other code) is available under /app.

Upvotes: 0

Alex Kurkin

Reputation: 1069

I've had a similar problem with this workflow (with the exception that I didn't use forks). Ultimately my requirements came down to one thing:

  • As a developer, I want to run one command, project bootstrap, to bootstrap my environment

I would suggest doing a few things to optimize the workflow, which worked incredibly well for me:

  • store the list of services in a JSON file, where each service has "url", "fork_url", "name", and the other attributes docker-compose needs to handle that service. It seems like every member of your team will have either a fork URL or the base repo URL

  • build a command-line app (there are a number of options depending on your language of choice: for Ruby there's the Thor gem, for Go the Cobra package, etc.). It usually takes just a few hours to build a simple command-line app with its first couple of commands, but this type of automation will save you hours on a day-by-day basis, and you will have a foundation to extend the app as your needs grow and the concept proves viable and useful. You will also have a unified interface for provisioning your environment across all team members, which then becomes the team's responsibility to maintain

  • build a project bootstrap command to provision your environment. For example, it will:
    • go over the list of services from your JSON file and, for each one:
      • create a fork if one does not exist
      • clone the repo
      • add another remote to the repo representing the fork (apart from origin)
    • generate docker-compose.yml from a template in a specified directory
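
As a sketch of that bootstrap logic in Python (the services.json contents, repository URLs, and "devuser" account below are made up for illustration, and the function only computes a plan rather than touching git):

```python
import json

# Hypothetical services.json as described above: each entry carries the
# upstream "url", an optional "fork_url", and a "name".
SERVICES_JSON = """
[
  {"name": "service1", "url": "git@github.com:acme/service1.git",
   "fork_url": "git@github.com:devuser/service1.git"},
  {"name": "service2", "url": "git@github.com:acme/service2.git"}
]
"""

def clone_plan(services_json):
    """For each service, prefer the developer's fork for the clone and
    keep the upstream repo as an extra remote, mirroring the steps above."""
    plan = []
    for svc in json.loads(services_json):
        clone_url = svc.get("fork_url") or svc["url"]
        remotes = []
        if "fork_url" in svc:
            remotes.append(
                f"git -C {svc['name']} remote add upstream {svc['url']}"
            )
        plan.append({
            "name": svc["name"],
            "clone": f"git clone {clone_url} {svc['name']}",
            "remotes": remotes,
        })
    return plan

for step in clone_plan(SERVICES_JSON):
    print(step["clone"])
```

The real command would execute this plan (creating missing forks via the hosting API first) and then render docker-compose.yml from a template.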

Upvotes: 3

dnephin

Reputation: 28090

It may depend on your workflow. Are you likely to be making changes to multiple services at once?

If you usually only change one or two services at a time, you can set up your docker-compose.yml to use only image fields (instead of build), and to not use any host volumes. With this setup it doesn't matter where the code is checked out, because the docker-compose.yml just uses images already available to the Docker daemon.
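
An image-only docker-compose.yml might look like this (the image names are hypothetical, and this uses the same compose v1 syntax as the other answers):

```yaml
service1:
  image: myregistry/service1:latest
service2:
  image: myregistry/service2:latest
service3:
  image: myregistry/service3:latest
```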

This would require you to have another docker-compose.yml file (or a bash script, Makefile, etc.) in each repo to build the image for that service.

If you need host volumes for interactive development, use a docker-compose.override.yml to add them as necessary (though it would be up to the developer to point it at the right path).
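
For example, a developer-local override (the host path and mount point are hypothetical and depend on where each developer checked the code out):

```yaml
# docker-compose.override.yml: adds a host volume on top of the
# image-only docker-compose.yml for interactive development
service1:
  volumes:
    - ../service1:/app
```

Compose merges this file with docker-compose.yml automatically, so developers who don't need live code mounts can simply omit it.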

Upvotes: 1
