Reputation: 3007
I'm learning Ansible with the idea of using it locally on my laptop, and I'm currently working with the "includes". I'd like to prepare a playbook that imports some other task files (installing base packages, setting up git, vim and docker) and execute it. All the code is hosted on GitLab, and I'm using their free CI service to test the plays.
The CI job runs until it reaches the task that checks whether the docker service is running. At that point, the play fails with the following message:
fatal: [localhost]: FAILED! => {
"changed": true,
"cmd": ["service", "docker", "restart"],
"delta": "0:00:00.176233",
"end": "2017-09-28 18:49:56.194752",
"failed": true,
"msg": "non-zero return code",
"rc": 1,
"start": "2017-09-28 18:49:56.018519",
"stderr": "",
"stderr_lines": [],
"stdout": "",
"stdout_lines": []
}
I've created a docker container (docker run --rm -ti debian) on my laptop and executed the play locally, and it fails at the same spot. However, if the container is created with the --privileged flag, I can start the service by hand and then re-execute the play; this time, it finishes successfully.
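For reference, this is how the privileged throwaway container was started (same image as in the non-privileged test above):

# --privileged gives the container enough access to run a Docker daemon inside it
docker run --rm -ti --privileged debian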
So, my questions are:

- How can I start the docker service using Ansible?
- Why does the container need to run in privileged mode? That is what allowed me to start the service by hand.
- Are the GitLab-CI shared runners on the public instance run with the privileged flag enabled?

PS: At the moment, I'm not exactly looking into best practices; I'm just trying to get this running to have something to play with.
Environment:
The gitlab-ci.yml file:
image: debian:latest

variables:
  HOST_INVENTORY: "./hosts"
  INCLUDES_DIR: "./gists/includes"

before_script:
  - apt-get update -qq
  - apt-get install -y curl gcc g++ openssh-server openssh-client python python-dev python-setuptools libffi-dev libssl-dev
  - easy_install pip
  - pip install ansible

stages:
  - syntax_check
  - install

check:
  stage: syntax_check
  script:
    - ansible-playbook -i $HOST_INVENTORY $INCLUDES_DIR/play.yml --limit 'local' --syntax-check;

run_includes:
  stage: install
  script:
    - ANSIBLE_PIPELINING=True ansible-playbook -i $HOST_INVENTORY $INCLUDES_DIR/play.yml --limit 'local';
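The same syntax check can also be run locally before pushing; this just expands the CI variables by hand:

ansible-playbook -i ./hosts ./gists/includes/play.yml --limit 'local' --syntax-check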
The play.yml is quite simple, only importing some task files:
- hosts: local
  become: yes
  become_method: sudo
  pre_tasks:
    - import_tasks: tasks/pre.yml # update package manager cache
  tasks:
    - import_tasks: tasks/common.yml
    - import_tasks: tasks/docker.yml
And the docker tasks:
- name: Dependencies
  package:
    name: "{{ item }}"
    state: present
  with_items:
    - apt-transport-https
    - ca-certificates
    - curl
    - gnupg2
    - software-properties-common

- name: Docker module dependencies
  pip:
    name: docker-py

- name: Add Docker key
  shell: curl -fsSL https://download.docker.com/linux/$(. /etc/os-release; echo "$ID")/gpg | apt-key add -

- name: Add Docker repo
  shell: echo "deb [arch=amd64] https://download.docker.com/linux/$(. /etc/os-release; echo $ID) $(lsb_release -cs) stable" > /etc/apt/sources.list.d/docker.list

- name: Install Docker
  apt:
    pkg: docker-ce
    state: present
    update_cache: yes

- name: Ensure Docker group is present
  group:
    name: docker
    state: present

- name: Add current user to the Docker group
  user:
    name: "{{ ansible_user }}"
    groups:
      - docker
    append: yes

- name: Ensure service is enabled
  command: service docker restart

- name: Pull images from Docker hub
  docker_image:
    name: "{{ item }}"
  with_items:
    - debian
    - cern/cc7-base
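For what it's worth, the restart task above could also be written with Ansible's service module; I would expect it to hit the same problem inside an unprivileged container, but it is the more idiomatic form:

# idiomatic alternative to "command: service docker restart"
- name: Ensure the Docker service is started and enabled
  service:
    name: docker
    state: started
    enabled: yes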
Upvotes: 2
Views: 4284
Reputation: 3007
Another way is to use GitLab's dind service, which lets us test the playbooks inside a container using the shared runners that are already provided.
Two things are needed here:

- A Dockerfile to build a base image with all the requirements to run Ansible.
- A gitlab-ci file to run the jobs inside a container.

As an example, the Dockerfile could look like this:
FROM debian:latest

ARG USERNAME

RUN apt-get update && \
    apt-get install -y curl \
        git \
        gcc \
        g++ \
        libffi-dev \
        libssl-dev \
        openssh-client \
        openssh-server \
        python \
        python-dev \
        python-setuptools \
        sudo

RUN easy_install pip && \
    pip install ansible \
        jmespath

RUN useradd -ms /bin/bash $USERNAME

USER $USERNAME
WORKDIR /home/$USERNAME
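Before wiring it into CI, the image can be built and smoke-tested locally (the tag and username here are arbitrary examples):

docker build -t dummy-image --build-arg USERNAME=dummy .
docker run --rm -ti dummy-image ansible-playbook --version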
And then, in the gitlab-ci.yml file, we can have something like:
image: docker:latest

services:
  - docker:dind

variables:
  DOCKER_IMAGE: "registry.gitlab.com/<YOUR_USER>/<YOUR_REPO>/dummy-image"
  CONTAINER_NAME: "test"
  IMAGE_USERNAME: "dummy"

stages:
  - build
  - check

before_script:
  - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN registry.gitlab.com

after_script:
  - docker rm -f $(docker ps -aq)
  - docker rmi $DOCKER_IMAGE
  - docker logout registry.gitlab.com

build_image:
  stage: build
  script:
    - docker build --pull -t $DOCKER_IMAGE --build-arg USERNAME=$IMAGE_USERNAME .
    - docker push $DOCKER_IMAGE
  when: manual

check_version:
  stage: check
  script:
    - docker run -di --name $CONTAINER_NAME $DOCKER_IMAGE
    - docker exec $CONTAINER_NAME /bin/bash -c "ansible-playbook --version"
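A possible follow-up job could go beyond a version check and actually syntax-check the play inside the container. This is only a sketch; it assumes the repository layout from the question (a hosts inventory and gists/includes/play.yml) and an arbitrary destination path:

check_playbook:
  stage: check
  script:
    - docker run -di --name $CONTAINER_NAME $DOCKER_IMAGE
    # copy the CI checkout into the running container (destination path is an assumption)
    - docker cp . $CONTAINER_NAME:/home/$IMAGE_USERNAME/repo
    - docker exec $CONTAINER_NAME /bin/bash -c "cd repo && ansible-playbook -i hosts gists/includes/play.yml --limit 'local' --syntax-check"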
This also has the added benefit that all the prerequisites for Ansible are set up in the build stage, so they don't need to be repeated every time the CI pipeline runs (unlike with the original gitlab-ci.yml file in the question).
Upvotes: 0
Reputation: 146520
So I created a sample project on gitlab.com which runs a few commands on the runner:
$ echo PWD is $PWD
PWD is /builds/tarunlalwani/testci
$ id
uid=0(root) gid=0(root) groups=0(root)
$ ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.1 20260 2984 ? Ss 19:41 0:00 /bin/bash
root 7 0.0 0.1 20420 2652 ? S 19:41 0:00 /bin/bash
root 10 0.0 0.0 17500 1996 ? R 19:41 0:00 ps aux
$ which systemctl
/bin/systemctl
$ systemctl status docker
Failed to get D-Bus connection: Unknown error -1
ERROR: Job failed: exit code 1
As you can see, the job runs inside a container, so your Ansible script is not going to work: there is no docker service to start or stop. Moreover, since you are inside a Docker container, restarting the Docker daemon would also mean killing the very container you are running in.
You need to set up an Ubuntu VM and then set up a GitLab Runner with a shell executor on it. The shell executor means that your commands run directly on that VM, which will have systemctl and a real docker service.
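A sketch of registering such a runner on the VM (the URL and registration token are placeholders you would take from the project's CI/CD settings; gitlab-runner must already be installed on the VM):

# register a shell-executor runner against gitlab.com (token is a placeholder)
sudo gitlab-runner register \
  --non-interactive \
  --url https://gitlab.com/ \
  --registration-token <PROJECT_TOKEN> \
  --executor shell \
  --description "ubuntu-vm-shell"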
Upvotes: 3