iamnat

Reputation: 4166

Setting up a development environment with docker

I'm facing some issues in setting up docker for a development environment for my team. So far:

  1. I used a base image to start a container

    docker run -t -i ubuntu:latest "/bin/bash"
    
  2. I installed all the compile and build tools in it

  3. I committed that image and pushed that to our local docker server

    docker commit e239ab... 192.168.10.100:5000/team-dev:beta
    

So far so good. Now, acting as a team member:

  1. I pull the dev environment image on my computer

    docker pull 192.168.10.100:5000/team-dev:beta
    
  2. I start a container:

    docker run -t -i 5cca4... "/bin/bash"
    

At this point, I'm thinking of my container as a sort of remote machine that I can SSH into and work in.

I try to do a git clone from inside the container, but this fails because of a publickey problem. I copy the id_rsa* files into the container manually, and the clone works. Then I try to edit some source files, but my vim configuration, bash configuration, everything is thrown off because this is a fresh OS environment. What does work really well is my entire dependency-versioned build environment.


These are the possible solutions I'm thinking of to help me work around this.

  1. After pulling the basic image, use a Dockerfile to add all my environment settings from the host to the container.

    Cons: Every time my host environment (bash/vim/git) changes, I need to update the Dockerfile and rebuild the image.

  2. Use a volume from the host to the container. Git clone and edit files on the host; run the build scripts and compilations from inside the container (roughly as sketched after this list).

    Cons: Content from a data volume cannot be used to update the image if required. I don't know if this is something I should care about anyway.
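
Roughly, I imagine option 2 would look something like this (the host path is just an example):

    # a sketch: clone and edit the source on the host, compile inside the container
    # (~/projects/myapp is just an example path; mounting ~/.ssh read-only would also
    #  fix the publickey issue if I ever need to clone from inside)
    docker run -t -i \
        -v ~/projects/myapp:/src \
        -v ~/.ssh:/root/.ssh:ro \
        192.168.10.100:5000/team-dev:beta "/bin/bash"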

Or am I approaching this the wrong way?

Upvotes: 20

Views: 11939

Answers (5)

Aurel

Reputation: 3740

As a fairly new Docker user, I will try to explain how I use it. I mostly use it for two things: isolating services and containerizing complex environments.

1. Isolate services

Abstract

Think of it as the Separation of Concerns principle.

Why? For reusability and scalability (and for easier debugging and maintenance).
For example, for the dev environment of a PHP Laravel website, I would run a couple of containers:

  • Mysql
  • Apache-PHP
  • Redis
  • ..

Each of these containers (services) would be linked to each other so they can work together. For example:

Apache <== Mysql (3306 port).

The Apache container can now open a TCP connection to the Mysql container through the exposed 3306 port.

Similar projects can rely on the same Docker image but run in different containers. And every tool required by an app should be containerized, for the sake of team work.

Source code management

I never put source code directly in a container. I prefer to mount it into the container as a volume (the docker run -v option).

When I want to run a command that alters my source code, like a build, running tests, or npm updates, I do it either from the host or from the container, depending on how much configuration the tool needs to run.
The more complicated or app-specific it is, the more I lean towards doing it in the container.
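
When a tool is app-specific enough to belong in the container, one convenient way to run it against an already-running container is docker exec; a minimal sketch, where the container name and the commands are only illustrative:

# run a build/test command inside the running "myapp" container
$ docker exec -it myapp bash -c "cd /var/www && composer install && vendor/bin/phpunit"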

Running containers

Following the example above, I'll run a myapp container and a mysql container.

$ cd myapp
# mysql container: detached, port 80 published on host port 82, ssh on 22002,
# port 3306 exposed only to containers linked to it
$ docker run -d \
   -p 82:80 \
   -p 22002:22 \
   --expose 3306 \
   --name mysql \
   author/mysqlimage
# myapp container: detached, port 80 published on host port 81, ssh on 22001,
# current host directory mounted at /var/www, linked to mysql under the alias "db"
$ docker run -d \
   -p 81:80 \
   -p 22001:22 \
   -v $(pwd):/var/www \
   --link mysql:db \
   --name myapp \
   author/apacheimage

The myapp container can now talk to the mysql container. To test it, run $ telnet db 3306 from inside myapp; it should connect.
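
Besides telnet, the legacy --link also injects the linked container's address into the myapp container's environment (the variable names follow Docker's link convention for the db alias), which gives another quick check from inside myapp:

$ env | grep DB_PORT
# e.g. DB_PORT_3306_TCP_ADDR and DB_PORT_3306_TCP_PORT point at the mysql container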

Running containers using Fig

As you can see, those docker command lines quickly become a nightmare, so I use another great tool, Fig, which lets me end up with this clear YAML file located at my project's root:

web:
    image: lighta971/laravel-apache
    links:
        - db
    ports: 
        - "81:80"
        - "22001:22"
    volumes:
        - .:/app
db:
    image: lighta971/phpmyadmin-apache
    ports:
        - "82:80"
        - "22002:22"
    expose:
        - "3306"

And then $ cd myapp && fig up gives the same result as the commands above :)
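
The same file drives the rest of my day-to-day workflow too, for instance (these subcommands later carried over into Docker Compose under the same names):

$ fig ps      # list the project's containers
$ fig logs    # show their aggregated output
$ fig stop    # stop everything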

2. Containerize complex environments

I'm also using Docker for Android development. A basic Android/Cordova setup is big (gigabytes of downloads) and takes time to set up.
This is why I put all the components into a single "Swiss army knife" image:

  • Android SDK
  • Android NDK
  • Apache Ant
  • Android tools
  • Java
  • ...

It results in an image with everything I need to set up my Cordova environment:

# --privileged: special permission needed for USB access
# /dev/bus/usb mount: expose the host's USB devices to the container
# $(pwd):/app mount: make the current folder visible inside the container
$ docker run -i \
 --privileged \
 -v /dev/bus/usb:/dev/bus/usb \
 -v $(pwd):/app \
 -p 8001:8000 \
 lighta971/cordova-env

Which I aliased as cdv:

$ alias cdv='docker run -i --privileged -v /dev/bus/usb:/dev/bus/usb -v $(pwd):/app -p 8001:8000 lighta971/cordova-env'

Now I can transparently use all the programs inside the container as if they were installed on my system. For instance:

$ cdv adb devices            # List usb devices
$ cdv cordova build android  # Build an app
$ cdv cordova run android    # Run an app on a device
$ cdv npm update             # Update an app dependencies

All these commands work against the current directory thanks to the $(pwd):/app volume mount; because the alias is defined with single quotes, $(pwd) is expanded each time cdv is run, not when the alias is defined.

Dockerfile

All that said, there are other things to know, like:

  • understanding the build process to make efficient images
  • dealing with persistent data
  • keeping image packages updated (see the sketch below)
  • etc.
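
For the last point, one simple habit is to refresh the base image and rebuild without the layer cache so package installs run again; a sketch, assuming an Ubuntu base for the image above:

$ docker pull ubuntu:14.04
$ docker build --no-cache -t lighta971/cordova-env .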

Hope that was clear to you :)

Upvotes: 37

Walterwhites

Reputation: 1477

I created an image for using Cordova in a Docker container, with Java (JDK), the SDK manager, Gradle and Maven, the Angular CLI, the Android licenses already accepted, etc.: https://hub.docker.com/r/walterwhites/docker-cordova/builds/

Build the image:

sudo docker build -t walterwhites/cordova .

Run the container:

docker run -ti --rm --name=cordova -v /Users/username/workspace/gusto-coffee-mobile-app:/workspace walterwhites/cordova

The -t option runs the container with a terminal, -i runs it in interactive mode, --rm removes the container when you exit it, and -v creates a volume shared between your host (local machine) and the container.

Note, to run an Android app with Cordova:

add android platform:

cordova platform add android

build:

cordova build android
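
Both steps can also be run non-interactively from the host, reusing the same volume; a sketch, assuming the project is meant to be built from the mounted /workspace directory:

docker run --rm \
  -v /Users/username/workspace/gusto-coffee-mobile-app:/workspace \
  walterwhites/cordova \
  bash -c "cd /workspace && cordova platform add android && cordova build android"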

Upvotes: 0

iamnat

Reputation: 4166

This is the workflow I finally settled on.

Key points in setting up a dev environment:

  • Writing code must happen outside the container
  • Version control must live outside the container
  • All compilation and execution should happen inside the container
  • Containers should be stateless, so that swapping in a newly built container only costs a recompile and a re-run

As suggested by lighta, there are two options:

  1. Each service has a docker
  2. All services inside one docker

I preferred the latter approach because I'm working on multiple projects, and having m × n containers feels worse than having m containers.

Setting up the environment:

  • Started off with phusion_baseimage
  • Installed mysql, phpmyadmin, nodejs, grunt and its buddies, apache, django
  • Set up init services and logging (a breeze with runit and svlogd, which come with the base image) to start apache, mysql, etc. on startup, so that a docker start restarts these services
  • Expose the required ports (80, and generally 8000 for running a test server)
  • Now, I define my mount points in the following way (pulled together in the sketch after this list)
    • ~/host/projectDocker/src > /root/project
    • ~/host/projectDocker/dbData/mysql_data > /var/lib/mysql
    • ~/host/projectDocker/apache_conf > /etc/apache/sites-enabled/
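
Pulled together, a dev's run command looks roughly like this (the container name and published ports are illustrative; the image is the one from the question):

docker run -d \
    -p 80:80 \
    -p 8000:8000 \
    -v ~/host/projectDocker/src:/root/project \
    -v ~/host/projectDocker/dbData/mysql_data:/var/lib/mysql \
    -v ~/host/projectDocker/apache_conf:/etc/apache/sites-enabled \
    --name team-dev \
    192.168.10.100:5000/team-dev:beta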

This has worked out really well so far. Whenever we have some specific libraries or dependencies to install (especially for Haskell, say), we just build a new image and ask all the devs to pull it and rebuild their containers. Voila!

All hail docker.

Upvotes: 5

Maleck13

Reputation: 1739

This is how I do it for my nodejs app.

First, have a Dockerfile as part of the source; it looks like this:

FROM node:0.10-onbuild
EXPOSE 8080
COPY . /src

This is used to build an image:

sudo docker build -t myapp .

Once it's built, I develop my code and use the following to see the changes in my container:

sudo docker run --rm -it -p 8080:8080 --name="myapp" -v '/path/to/app/src:/usr/src/app' myapp:latest

Notice the -v: here I am mounting my local dir into the container, so I don't need to rebuild the image every time I make a code change.

The code is managed on GitHub, so the flow of development is the same: make a branch, work locally, and once ready merge back to master.

If my app relies on another service, say RabbitMQ, I add it as a separate docker container and point my app at it through a config file.
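
A minimal sketch of that, assuming the official rabbitmq image and the same run command as above:

sudo docker run -d --name rabbitmq rabbitmq:3
sudo docker run --rm -it -p 8080:8080 --name="myapp" \
    --link rabbitmq:rabbitmq \
    -v '/path/to/app/src:/usr/src/app' myapp:latest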

Hope this is helpful

Upvotes: 1

rcheuk

Reputation: 1140

I've tried a lot of different dev environment setups for working in docker.

The one my team stuck with was a pre-defined directory where the code would sit, mapped into the docker container as a directory. The necessary environment variables were put in the Dockerfile, and a bunch of bash scripts were written to start up the containers with the necessary paths mapped in. We stored the Dockerfile in Git, and every time a new dependency was added we had to update the Dockerfile and possibly rebuild the image (this was mostly due to how we handled dependency management; it wasn't ideal, and I don't think it's necessary, but it will depend on your technology stack).

Our scripts were per technology, because of how many different technologies we used. All our containers would map in a technology-specific folder that stored all the configurations. For example, the /opt/ directory became the main directory where our code would sit; when the docker run command was run, it would map the local /opt/ directory to the /opt/ directory in the docker container.

This generally worked.

Setting up a local development environment was its own challenge, though. I started off with a Windows machine that would pull from git. I had that mapped into an Ubuntu VM that was running docker.

In the Ubuntu VM, I had bash scripts that would start all the needed docker containers (one of them is sketched below):

./start-apache.sh
./start-mysql.sh
./start-mongodb.sh ... and so on
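
Each of those was essentially a thin wrapper around docker run; a sketch of what start-apache.sh might have looked like (image name and ports are hypothetical):

#!/bin/bash
# start-apache.sh (sketch): replace any old container and map the shared /opt tree in
docker rm -f apache 2>/dev/null
docker run -d --name apache \
    -p 80:80 \
    -v /opt:/opt \
    our-registry/dev-apache:latest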

This eventually stopped working when I found out that, with Windows as the host, I had problems creating the symlinks our project depended on. So I switched to using git in the Ubuntu VM, and then started everything up in Ubuntu, using the same bash scripts.

The downside of this was that I was basically coding in a VM and could not use my preferred IDE in Windows. It wasn't terrible, but working in a VM isn't ideal, IMO.

This setup left much to be desired. It took weeks to get it into a somewhat maintainable state for our small dev team. The workflow process could have been improved, but there was neither the time nor the know-how...

I'd be interested in hearing of anyone else's workflow they developed for working with docker.

Upvotes: 8
