djt

Reputation: 7535

Volume mounting in Jenkins on Kubernetes

I'm trying to setup Jenkins to run in a container on Kubernetes, but I'm having trouble persisting the volume for the Jenkins home directory.

Here's my deployment.yml file. The image is based on jenkins/jenkins

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jenkins-deployment
  labels:
    app: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
      - name: jenkins
        image: 1234567.dkr.ecr.us-east-1.amazonaws.com/mycompany/jenkins
        imagePullPolicy: "Always"
        ports:
        - containerPort: 8080
        volumeMounts:
          - name: jenkins-home
            mountPath: /var/jenkins_home
      volumes:
        - name: jenkins-home
          emptyDir: {}

However, if I then push a new container image to my repository and update the pods using the commands below, Jenkins comes back online but asks me to start from scratch (enter the admin password; none of my Jenkins jobs are there, no plugins, etc.)

kubectl apply -f kubernetes (the directory where my manifests are stored)

kubectl set image deployment/jenkins-deployment jenkins=1234567.dkr.ecr.us-east-1.amazonaws.com/mycompany/jenkins:$VERSION

Am I misunderstanding how this volume mount is meant to work?


As an aside, I also have backup and restore scripts which backup the Jenkins home directory to s3, and download it again, but that's somewhat outside the scope of this issue.

Upvotes: 5

Views: 11968

Answers (2)

jaxxstorm

Reputation: 13251

You have specified the volume type emptyDir. This essentially mounts an empty directory on the kube node that runs your pod, and its lifetime is tied to the pod: every time you update your deployment, the old pod is replaced (and may land on a different node), and the new pod starts with a fresh, empty directory. That's why your data isn't persisting across restarts.
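A quick way to see this in action, using the labels from the manifest above: each rollout replaces the pod, and the replacement starts with a fresh emptyDir:

kubectl get pods -l app=jenkins -o wide   # note the pod name and the NODE column
kubectl set image deployment/jenkins-deployment jenkins=1234567.dkr.ecr.us-east-1.amazonaws.com/mycompany/jenkins:$VERSION
kubectl get pods -l app=jenkins -o wide   # a new pod has replaced the old one; its emptyDir starts empty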

I see you're pulling your image from an ECR repository, so I'm assuming you're running k8s in AWS.

You'll need a StorageClass configured for AWS. If you've provisioned k8s using something like kops, this will already be set up. You can confirm by running kubectl get storageclass; the provisioner should be the AWS EBS one:

NAME            PROVISIONER
gp2 (default)   kubernetes.io/aws-ebs
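If kubectl get storageclass comes back empty, you can create one yourself. A minimal sketch for the in-tree AWS EBS provisioner (the name gp2 and the default-class annotation here just mirror what kops sets up):

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp2
  annotations:
    # marks this class as the cluster default
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2  # EBS general-purpose SSD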

Then, you need to create a PersistentVolumeClaim:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp2 # must match your storageclass from above
  resources:
    requests:
      storage: 30Gi
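After applying it, you can check that the claim binds to a dynamically provisioned EBS volume (the filename here is just an example):

kubectl apply -f jenkins-pvc.yml
kubectl get pvc jenkins-data   # STATUS should read Bound once the volume is provisioned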

You can now reference the PV claim in your deployment:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jenkins-deployment
  labels:
    app: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
      - name: jenkins
        image: 1234567.dkr.ecr.us-east-1.amazonaws.com/mycompany/jenkins
        imagePullPolicy: "Always"
        ports:
        - containerPort: 8080
        volumeMounts:
          - name: jenkins-home
            mountPath: /var/jenkins_home
      volumes:
        - name: jenkins-home
          persistentVolumeClaim:
            claimName: jenkins-data # must match the claim name from above
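
To verify the mount after rolling this out, exec into the pod and check what's mounted at the Jenkins home (substitute your actual pod name):

kubectl get pods -l app=jenkins
kubectl exec -it <jenkins-pod-name> -- df -h /var/jenkins_home   # should show the EBS device, not the node's root disk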

Upvotes: 1

kofucii

Reputation: 7653

You should use PersistentVolumes along with a StatefulSet instead of a Deployment resource if you want your data to survive redeployments and restarts of your pod.
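
A minimal sketch of that approach, reusing the image and mount path from the question (the names here are placeholders): a StatefulSet with volumeClaimTemplates, so a PVC is created per replica and survives pod replacement:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: jenkins
spec:
  serviceName: jenkins              # requires a headless Service with this name
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
      - name: jenkins
        image: 1234567.dkr.ecr.us-east-1.amazonaws.com/mycompany/jenkins
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: jenkins-home
          mountPath: /var/jenkins_home
  volumeClaimTemplates:             # one PVC per replica, kept across pod restarts
  - metadata:
      name: jenkins-home
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: gp2         # assumes the EBS-backed class from the other answer
      resources:
        requests:
          storage: 30Gi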

Upvotes: 2
