Vassily

Reputation: 5428

How should I rewrite docker-compose.yaml for Kubernetes?

I am very new to K8s and have never used it, but I have familiarized myself with the concept of nodes/pods. I know that minikube is the local k8s engine for debugging etc., and that I should interact with any k8s engine via the kubectl tool. Now my questions are:

  1. Does launching the same configuration on my local minikube instance and a production AWS/etc. instance guarantee that the result will be identical?

  2. How do I set up continuous deployment for my project? I currently have CI configured that pushes images of tested code to Docker Hub with the :latest tag, but I want them to be deployed automatically in Rolling Update mode, without interrupting uptime.

  3. It would be great to get correct configurations, along with the steps I should perform to make them work on any cluster. I don't want to keep docker-compose's notation and use kompose; I want to do it properly in the k8s context.

My current docker-compose.yml is (the django and react images are already available on Docker Hub):

version: "3.5"
services:
  nginx:
    build:
      context: .
      dockerfile: Dockerfile.nginx
    restart: always
    command: bash -c "service nginx start && tail -f /dev/null"
    ports:
      - 80:80
      - 443:443
    volumes:
      - /mnt/wts_new_data_volume/static:/data/django/static
      - /mnt/wts_new_data_volume/media:/data/django/media
      - ./certs:/etc/letsencrypt/
      - ./misc/ssl/server.crt:/etc/ssl/certs/server.crt
      - ./misc/ssl/server.key:/etc/ssl/private/server.key
      - ./misc/conf/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./misc/conf/passports.htaccess:/etc/passports.htaccess:ro
    depends_on:
      - react
  redis:
    restart: always
    image: redis:latest
    privileged: true
    command: redis-server
  celery:
    build:
      context: backend
    command: bash -c "celery -A project worker -B -l info"
    env_file:
      - ./misc/.env
    depends_on:
      - redis
  django:
    build:
      context: backend
    command: bash -c "/code/manage.py collectstatic --no-input && echo donecollectstatic && /code/manage.py migrate && bash /code/run/daphne.sh"
    volumes:
      - /mnt/wts_new_data_volume/static:/data/django/static
      - /mnt/wts_new_data_volume/media:/data/django/media
    env_file:
      - ./misc/.env
    depends_on:
      - redis
  react:
    build:
      context: frontend
    depends_on:
      - django

Upvotes: 4

Views: 333

Answers (1)

Rico

Reputation: 61671

The short answer is yes, you can replicate what you have with docker-compose in K8s.

  1. It depends on your infrastructure. For example, if you have an external LoadBalancer in your AWS deployment, it won't behave the same locally.

  2. You can do rolling updates (this typically works with stateless services). You can also take advantage of a GitOps type of approach.

  3. The docker-compose notation is different from K8s, so yes, you'll have to translate it into Kubernetes objects: Pods, Deployments, Secrets, ConfigMaps, Volumes, etc. For the most part the basic objects will work on any cluster, but there will always be some objects tied to the physical characteristics of your cluster (e.g. storage volumes, load balancers). The Kubernetes docs are very comprehensive and super helpful.
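To illustrate point 1: a Service of `type: LoadBalancer` provisions an external load balancer (an ELB/NLB) on AWS, but minikube has no cloud load balancer, so the same manifest behaves differently locally. A minimal sketch, assuming the nginx pods carry an `app: nginx` label (the names here are illustrative):

```yaml
# Hypothetical Service for the nginx pods. On AWS the cloud controller
# provisions an external load balancer; on minikube the EXTERNAL-IP
# stays <pending> unless you run `minikube tunnel`, or you can switch
# the type to NodePort for local testing.
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
```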
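For point 2, RollingUpdate is the default strategy for a Deployment; you only need to tune the surge/unavailability knobs to guarantee uptime. A sketch for the django service, assuming the image name and a `django-env` Secret (e.g. created from misc/.env with `kubectl create secret generic django-env --from-env-file=misc/.env`) — both are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: django
spec:
  replicas: 2
  selector:
    matchLabels:
      app: django
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never drop below full capacity
      maxSurge: 1         # bring up one extra pod at a time
  template:
    metadata:
      labels:
        app: django
    spec:
      containers:
        - name: django
          image: yourrepo/django:latest  # placeholder image name
          imagePullPolicy: Always
          envFrom:
            - secretRef:
                name: django-env  # assumed Secret holding misc/.env
```

One caveat with the :latest tag: a rollout only triggers when the pod spec changes, so pushing a new :latest image does nothing by itself. Your CD step would run something like `kubectl rollout restart deployment/django`, or (the more common GitOps approach) push uniquely tagged images and update the manifest.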
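As a concrete example of point 3, the redis service from the question could translate roughly to a Deployment plus a ClusterIP Service, which the celery and django pods then reach by the DNS name `redis`. A sketch (the `privileged: true` from the compose file is dropped, since redis doesn't need it):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: redis:latest
          ports:
            - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  selector:
    app: redis
  ports:
    - port: 6379
```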

Upvotes: 3
