Zehouani Alae

Reputation: 101

Buildah vs Kaniko

I'm using Argo Workflows to automate our CI/CD pipelines. To build images and push them to our private registry, we are faced with a choice between buildah and kaniko, but I can't put my finger on the main differences between the two: pros and cons, and also how these tools handle parallel builds and cache management. Can anyone clarify these points, or even suggest another tool that can do the job in a simpler way? Some clarification on the subject would be really helpful. Thanks in advance.

Upvotes: 8

Views: 8421

Answers (2)

Davide Madrisan

Reputation: 2300

kaniko is very simple to set up and has some magic that lets it work with no special requirements in Kubernetes :)

I also tried buildah, but I was unable to configure it and found it too complex to set up in a Kubernetes environment.

You can use an internal Docker registry as the cache backend for kaniko, but local storage can be configured instead (I haven't tried that yet). Just use the latest version of kaniko (v1.7.0), which fixes an important bug in the management of cached layers.
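As a sketch of the two caching options mentioned above (the registry name is illustrative, and the `--cache-dir` line assumes you pre-populate the directory with kaniko's warmer image):

```shell
# Layer cache stored in a registry repository (the approach used below):
/kaniko/executor \
    --context "$CI_PROJECT_DIR" \
    --dockerfile Dockerfile \
    --destination registry.example.com/app:latest \
    --cache=true \
    --cache-repo registry.example.com/app/cache

# Alternative: base images cached in a local directory, pre-populated
# with gcr.io/kaniko-project/warmer:
#   /kaniko/executor ... --cache=true --cache-dir /cache
```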

These are some functions (declared in the file ci/libkaniko.sh) that I use in my GitLab CI pipelines, executed by a GitLab Kubernetes runner. They should hopefully clarify the setup and usage of kaniko.

function kaniko_config
{
    # Strip the newlines that base64 inserts when wrapping long output,
    # so the auth token stays on a single line in config.json.
    local docker_auth="$(echo -n "$CI_REGISTRY_USER:$CI_REGISTRY_PASSWORD" | base64 | tr -d '\n')"

    mkdir -p "$DOCKER_CONFIG"
    [ -e "$DOCKER_CONFIG/config.json" ] || \
        cat <<JSON > "$DOCKER_CONFIG/config.json"
{
    "auths": {
        "$CI_REGISTRY": {
            "auth": "$docker_auth"
        }
    }
}
JSON
}

# Usage example (.gitlab-ci.yml)
#
# build php:
#   extends: .build
#   variables:
#     DOCKER_CONFIG: "$CI_PROJECT_DIR/php/.docker"
#     DOCKER_IMAGE_PHP_DEVEL_BRANCH: &php-devel-image "${CI_REGISTRY_IMAGE}/php:${CI_COMMIT_REF_SLUG}-build"
#   script:
#     - kaniko_build
#       --destination $DOCKER_IMAGE_PHP_DEVEL_BRANCH
#       --dockerfile $CI_PROJECT_DIR/docker/images/php/Dockerfile
#       --target devel

function kaniko_build
{
    kaniko_config
    echo "Kaniko cache enabled ($CI_REGISTRY_IMAGE/cache)"
    /kaniko/executor \
        --build-arg http_proxy="${HTTP_PROXY}" \
        --build-arg https_proxy="${HTTPS_PROXY}" \
        --build-arg no_proxy="${NO_PROXY}" \
        --cache --cache-repo $CI_REGISTRY_IMAGE/cache \
        --context "$CI_PROJECT_DIR" \
        --digest-file=/dev/termination-log \
        --label "ci.job.id=${CI_JOB_ID}" \
        --label "ci.pipeline.id=${CI_PIPELINE_ID}" \
        --verbosity info \
        "$@"

    # Use if/then so the function does not return a nonzero status
    # when the termination log is absent.
    if [ -r /dev/termination-log ]; then
        echo "Manifest digest: $(cat /dev/termination-log)"
    fi
}

With these functions a new image can be built with:

stages:
  - build

build app:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:v1.7.0-debug
    entrypoint: [""]
  variables:
    DOCKER_CONFIG: "$CI_PROJECT_DIR/app/.docker"
    DOCKER_IMAGE: "${CI_REGISTRY_IMAGE}/myimage:${CI_COMMIT_REF_SLUG}"
    GIT_SUBMODULE_STRATEGY: recursive
  before_script:
    - source ci/libkaniko.sh
  script:
    - kaniko_build
      --destination $DOCKER_IMAGE
      --digest-file $CI_PROJECT_DIR/docker-content-digest-app
      --dockerfile $CI_PROJECT_DIR/docker/Dockerfile
  artifacts:
    paths:
      - docker-content-digest-app
  tags:
    - k8s-runner

Note that you have to use the debug variant of the kaniko executor image, because only this image tag provides a shell (and other BusyBox-based binaries).

Upvotes: 6

rhatdan

Reputation: 452

buildah requires either a privileged container with more than one UID, or a container running with the CAP_SETUID and CAP_SETGID capabilities, in order to build container images. It does not hack around these requirements at the file-system level the way kaniko does; it runs full containers when building.

Using --isolation chroot will make it a little easier to get buildah to work within Kubernetes.
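For illustration only (the image names, registry, context path, and storage driver below are assumptions, not part of the answer), a minimal Kubernetes Pod that grants buildah the capabilities mentioned above and uses chroot isolation might look like this:

```yaml
# Illustrative sketch: image tag, registry name, and paths are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: buildah-build
spec:
  containers:
    - name: buildah
      image: quay.io/buildah/stable    # upstream buildah image
      command: ["buildah"]
      args:
        - bud                          # build from a Dockerfile
        - --isolation=chroot           # chroot isolation, as suggested above
        - --storage-driver=vfs         # avoids needing overlay/fuse in-cluster
        - -t
        - registry.example.com/myimage:latest
        - /workspace                   # build context (assumed volume mount)
      securityContext:
        capabilities:
          add: ["SETUID", "SETGID"]    # CAP_SETUID / CAP_SETGID from the answer
  restartPolicy: Never
```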

Upvotes: 9
