Drake

Reputation: 443

How to mount a ConfigMap into a PersistentVolumeClaim?

So, I'm trying to teach myself Kubernetes by example, and I've hit a scenario I can't seem to find a good solution to. The container requires a config file at a path like /config/system.xml, but the application also auto-creates embedded databases and other files in /config. My initial thought was to have a PersistentVolumeClaim for /config and simply mount the ConfigMap into the PVC. The problem I'm having is that system.xml IS mounted and has the correct content, but it's owned by root:root, and the container logs indicate that /config/system.xml is on a "read-only" filesystem.

I've tried a few things to get this working:

  1. a securityContext to try to give system.xml the appropriate non-root user/group (rough sketch below)
  2. using the subPath field for system.xml (shown in deployment.yml below)
  3. an initContainer to wget the config files from GitHub (this kinda works, but seems like a hack; also sketched below)
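
Here's roughly what the securityContext attempt from item 1 looked like; the 1000/1000 UID/GID values are just placeholders for whatever non-root user/group the container should run as:

# only the relevant part of the Deployment's pod template is shown
spec:
  template:
    spec:
      securityContext:
        runAsUser: 1000    # placeholder non-root UID
        runAsGroup: 1000   # placeholder non-root GID
        fsGroup: 1000      # group ownership applied to supported volumes at mount time
      containers:
        - name: jellyfin
          image: lscr.io/linuxserver/jellyfin:10.9.9ubu2204-ls25
          # volumeMounts as in deployment.yml below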

Does anyone have any suggestions for handling this scenario?
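
For completeness, the initContainer hack from item 3 was shaped roughly like this; the busybox image tag and the GitHub raw URL are just placeholders:

# initContainer added to the same pod template; it downloads system.xml onto the
# PVC-backed /config volume before the main container starts, so the file ends up
# as a normal writable file instead of a read-only configMap mount
spec:
  template:
    spec:
      initContainers:
        - name: fetch-config
          image: busybox:1.36   # placeholder; any image with wget works
          command:
            - sh
            - -c
            - 'wget -O /config/system.xml https://raw.githubusercontent.com/<user>/<repo>/main/system.xml'
          volumeMounts:
            - name: jf-config        # same PVC volume the jellyfin container mounts
              mountPath: /config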


deployment.yml (showing the subPath approach):

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jellyfin
  labels:
    app: jellyfin
spec:
  selector:
    matchLabels:
      app: jellyfin
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: jellyfin
    spec:
      restartPolicy: Always
      containers:
        - name: jellyfin
          image: lscr.io/linuxserver/jellyfin:10.9.9ubu2204-ls25
          ports:
            - containerPort: 8096
          volumeMounts:
          # NFS mounts omitted
          - name: jf-config
            mountPath: /config
          - name: jf-config-system
            mountPath: /config/system.xml
            subPath: system.xml
      volumes:
      # NFS mounts omitted
      - name: jf-config
        persistentVolumeClaim:
          claimName: jf-config-data
      - name: jf-config-system
        configMap:
          name: jf-baseconfig-configmap

pvc.yml:

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jf-config-data
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: longhorn
  resources:
    requests:
      storage: 50Gi

kustomization.yml:

---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: dev

configMapGenerator:
- name: jf-baseconfig-configmap
  files:
  - system.xml

generatorOptions:
  labels:
    app: jellyfin

resources:
- ../../base

namePrefix: dev-

patches:
- path: svc_IP.yml    # not shown; works fine
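
For context, the configMapGenerator entry produces something roughly like the following (the hash suffix is illustrative, and the actual system.xml contents are omitted); kustomize also rewrites the configMap name referenced in the Deployment to match the generated name:

apiVersion: v1
kind: ConfigMap
metadata:
  name: dev-jf-baseconfig-configmap-<hash>   # namePrefix + generator name + content hash
  namespace: dev
  labels:
    app: jellyfin
data:
  system.xml: |
    <!-- contents of the local system.xml file -->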

There are some other files/folders omitted for the sake of space.


Background:

My Kubernetes (k3s) cluster is running on VMs within Proxmox, and I'm making use of KubeVIP, MetalLB, and Longhorn (basically following this and this).

Upvotes: 0

Views: 92

Answers (0)
