Reputation: 4669
I'm running a Kubernetes cluster on AWS using kops. I've mounted an EBS volume onto a container and it is visible from my application, but it's read-only because my application does not run as root. How can I mount a PersistentVolumeClaim as a user other than root? The VolumeMount does not seem to have any options to control the user, group, or file permissions of the mounted path.
Here is my Deployment yaml file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: notebook-1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: notebook-1
    spec:
      volumes:
        - name: notebook-1
          persistentVolumeClaim:
            claimName: notebook-1
      containers:
        - name: notebook-1
          image: jupyter/base-notebook
          ports:
            - containerPort: 8888
          volumeMounts:
            - mountPath: "/home/jovyan/work"
              name: notebook-1
Upvotes: 157
Views: 368243
Reputation: 4520
If you are using the hostPath volume type, the approaches above run into file permission issues: hostPath does not respect the fsGroup, runAsGroup, or runAsUser settings when mounted. The Kubernetes securityContext, including fsGroup, does not change the ownership or permissions of files on hostPath volumes, because such volumes directly mount directories from the host node's filesystem, and Kubernetes does not modify the host filesystem's ownership or permissions when doing so.
I had to manually set the user ID in the Dockerfile and use the same ID inside Minikube.
I wrote up the details in https://zepworks.com/posts/persist-pod-logs-on-minikube/ since it took me a long time to figure out.
Here is the yaml file:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: logs-data
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 1Gi
  hostPath:
    path: /data/logs-data # Host path for Minikube node
  storageClassName: standard
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: logs-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  volumeName: logs-data
---
apiVersion: batch/v1
kind: Job
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  template:
    spec:
      securityContext:
        fsGroup: 1000 # Group ID from atlas docker image nonroot user
        runAsGroup: 1000 # Group ID from atlas docker image nonroot user
        runAsUser: 1000 # User ID from atlas docker image nonroot user
      containers:
        - name: main
          image: path_to_image
          command:
            - sleep
            - infinity
          volumeMounts:
            - name: logs-data
              mountPath: /var/logs-data
      restartPolicy: Never
      volumes:
        - name: logs-data
          persistentVolumeClaim:
            claimName: logs-data
Upvotes: 0
Reputation: 21
Here is a solution to this problem:
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app.kubernetes.io/name: MyApp
spec:
  initContainers:
    - name: volume-permissions
      image: busybox
      command: ["/bin/sh"]
      args: ["-c", "chmod -R 755 /data && chown 1000:1000 /data"]
      volumeMounts:
        - name: my-volume
          mountPath: /data
  containers:
    - name: my-container
      image: nginx
      volumeMounts:
        - name: my-volume
          mountPath: /data
  volumes:
    - name: my-volume
      hostPath:
        path: /data
Upvotes: 2
Reputation: 9094
I ended up with an initContainer with the same volumeMount as the main container to set proper permissions, in my case for a custom Grafana image.
This is necessary when a container in a pod runs as a user other than root and needs write permissions on a mounted volume.
initContainers:
  - name: take-data-dir-ownership
    image: alpine:3
    # Give the `grafana` user (id 472) permissions on the mounted volume
    # https://github.com/grafana/grafana-docker/blob/master/Dockerfile
    command:
      - chown
      - -R
      - 472:472
      - /var/lib/grafana
    volumeMounts:
      - name: data
        mountPath: /var/lib/grafana
Update: Note that it might suffice to run chown without the -R (recursive) flag, since ownership generally persists within the volume itself, regardless of pod restarts. This is desirable when there are large numbers of files in the volume, as recursively processing all of them takes time (depending on the resources limits set for the initContainer).
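For illustration, a sketch of the non-recursive variant against the same Grafana volume (same names as above):
initContainers:
  - name: take-data-dir-ownership
    image: alpine:3
    # Non-recursive chown: sufficient once the files in the volume already
    # have the right owner, since ownership persists across pod restarts
    command:
      - chown
      - 472:472
      - /var/lib/grafana
    volumeMounts:
      - name: data
        mountPath: /var/lib/grafana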
Update 2: In Kubernetes v1.23 the securityContext.fsGroup and securityContext.fsGroupChangePolicy features went GA/stable. See the other answer for more information. The related changelog item describes this as:
The feature to configure volume permission and ownership change policy for Pods moved to GA in 1.23. This allows users to skip recursive permission changes on mount and speeds up the pod start up time.
Upvotes: 102
Reputation: 87
There is now also fsGroupChangePolicy, which you can specify in the pod's securityContext; see the Kubernetes documentation on configuring a security context for details.
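A minimal sketch of how this can look (the fsGroup value is illustrative):
securityContext:
  fsGroup: 1000
  # Only recursively change ownership when the volume root doesn't already
  # match the expected owner; speeds up mounting volumes with many files
  fsGroupChangePolicy: "OnRootMismatch"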
Upvotes: 2
Reputation: 1649
For people using a ConfigMap as a file inside the pod
I am loading data from a ConfigMap as a file inside the pod's container. Here are my manifests:
#./script-cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: script-cm
  labels:
    app: script
data:
  data-script: |
    #!/bin/bash
    set -e
    echo "some script commands"
#./deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: script
  namespace: default
  labels:
    app: script
spec:
  selector:
    matchLabels:
      app: script
  replicas: 1
  template:
    metadata:
      labels:
        app: script
    spec:
      restartPolicy: Always
      containers:
        - name: script-container
          image: ubuntu:20.04
          resources: {}
          volumeMounts:
            - name: script-bind # must match the volume name below
              mountPath: /docker-entrypoint-initdb.d/data.sh
              subPath: data.sh
      volumes:
        - name: script-bind
          configMap:
            name: script-cm
            items:
              - key: data-script
                path: data.sh
                mode: 0777
As you can see, I am following the k8s docs to bind a ConfigMap into the pod; mode: 0777 allowed me to give execution permissions on that specific file. You can get a better idea by running the following kubectl explain command:
kubectl explain deployment.spec.template.spec.volumes.configMap.items.mode
Make sure to use the right permissions instead of 0777, since it's not recommended, especially for sensitive data!
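Note that the ConfigMap volume source also has a defaultMode field, which applies to all projected keys at once; a sketch of the equivalent volume definition:
volumes:
  - name: script-bind
    configMap:
      name: script-cm
      # applies to every key projected from the ConfigMap
      defaultMode: 0755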
Upvotes: 2
Reputation: 5852
In my case, I used scratch as the base image and set the user to 65543, and I needed write permission to a directory. I did this by using an emptyDir volume:
spec:
  containers:
    - ...
      volumeMounts:
        - mountPath: /tmp
          name: tmp
          # readOnly: true
  volumes:
    - name: tmp
      emptyDir: {}
Upvotes: 1
Reputation: 1222
Run the initContainers as the root user by setting runAsUser: 0:
initContainers:
  - name: change-ownership-container
    image: busybox
    command: ["/bin/chown", "-R", "1000:1000", "/home/jovyan/work"]
    securityContext:
      runAsUser: 0
      privileged: true
    volumeMounts:
      - name: notebook-data
        mountPath: /home/jovyan/work
So the whole YAML file looks like this:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: jupyter
  labels:
    release: jupyter
spec:
  replicas:
  updateStrategy:
    type: RollingUpdate
  serviceName: jupyter-headless
  podManagementPolicy: Parallel
  selector:
    matchLabels:
      release: jupyter
  template:
    metadata:
      labels:
        release: jupyter
      annotations:
    spec:
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
      containers:
        - name: jupyter
          image: "jupyter/base-notebook:ubuntu-20.04"
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 8888
              protocol: TCP
            - name: blockmanager
              containerPort: 7777
              protocol: TCP
            - name: driver
              containerPort: 2222
              protocol: TCP
          volumeMounts:
            - name: notebook-data
              mountPath: /home/jovyan/work
          resources:
            limits:
              cpu: 200m
              memory: 300Mi
            requests:
              cpu: 100m
              memory: 200Mi
      initContainers:
        - name: change-ownership-container
          image: busybox
          command: ["/bin/chown", "-R", "1000:1000", "/home/jovyan/work"]
          securityContext:
            runAsUser: 0
            privileged: true
          volumeMounts:
            - name: notebook-data
              mountPath: /home/jovyan/work
      volumes:
        - name: notebook-data
          persistentVolumeClaim:
            claimName: jupyter-pvc
Upvotes: 6
Reputation: 1094
A few iterations later, I ended up using:
{{- $root := . }}
...
initContainers:
  - name: volume-mount-hack
    image: busybox
    command: ["sh", "-c", "find /data -user root -exec chown 33:33 {} \\;"]
    volumeMounts:
    {{- range $key,$val := .Values.persistence.mounts }}
      - name: data
        mountPath: /data/{{ $key }}
        subPath: {{ $root.Values.projectKey }}/{{ $key }}
    {{- end }}
It's much cleaner and more configurable than the other solutions. Moreover, it is way faster: the find command only changes ownership of files and directories that actually belong to the root user.
When you are mounting volumes with a large number of files, this can have a significant impact on your container boot/load times (seconds or even minutes!).
Try comparing the execution time of
chown www-data:www-data ./ -R
and
find /data -user root -exec chown 33:33 {} \;
you may be surprised!
Upvotes: 0
Reputation: 71
Please refer to this issue: https://github.com/kubernetes/kubernetes/issues/2630
If it is an emptyDir, the securityContext in the spec can be used:
spec:
  securityContext:
    runAsUser: 1000
    fsGroup: 1000
  containers: ...
If the volume is a hostPath, an initContainer can be used to chown paths in the volume:
initContainers:
  - name: example-c
    image: busybox:latest
    command: ["sh", "-c", "mkdir -p /vol-path && chown -R 1000:1000 /vol-path"]
    resources:
      limits:
        cpu: "1"
        memory: 1Gi
    volumeMounts:
      - name: vol-example
        mountPath: /vol-path
Upvotes: 7
Reputation: 30113
To change the file system permissions, run an initContainer before the actual container starts.
Here is an example for an Elasticsearch pod:
initContainers:
  - command:
      - sh
      - -c
      # With `sh -c`, only the first argument is executed as the script,
      # so the commands are joined into a single string here; note that
      # `sysctl -w` typically requires a privileged container
      - |
        chown -R 1000:1000 /usr/share/elasticsearch/data
        sysctl -w vm.max_map_count=262144
        chgrp 1000 /usr/share/elasticsearch/data
    image: busybox:1.29.2
    imagePullPolicy: IfNotPresent
    name: set-dir-owner
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts: # Volume mount path
      - mountPath: /usr/share/elasticsearch/data
        name: elasticsearch-data
To change the user and group in the container:
spec:
  containers:
    - securityContext:
        privileged: true
        runAsUser: 1000
Upvotes: 1
Reputation: 1329
This is one of the challenges with Kubernetes Deployments/StatefulSets when you have to run a process inside a container as a non-root user: when you mount a volume to a pod, it always gets mounted with root:root permissions.
So the non-root user must have access to the folder where it wants to read and write data.
Please follow the steps below for that.
Add the lines below to the pod spec context of the Deployment/StatefulSet:
spec:
  securityContext:
    runAsUser: 1099
    runAsGroup: 1099
    fsGroup: 1099
runAsUser specifies that for any containers in the Pod, all processes run with user ID 1099.
runAsGroup specifies the primary group ID of 1099 for all processes within any containers of the Pod. If this field is omitted, the primary group ID of the containers will be root (0). When runAsGroup is specified, any files created will also be owned by user 1099 and group 1099.
fsGroup specifies that any attached volume will be owned by group ID 1099, and any files created under it will have nonrootgroup:nonrootgroup permissions.
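For illustration, a minimal sketch of a pod using these settings (the pod name, image, and command are placeholders, not from the original answer):
apiVersion: v1
kind: Pod
metadata:
  name: nonroot-demo # hypothetical name
spec:
  securityContext:
    runAsUser: 1099
    runAsGroup: 1099
    fsGroup: 1099
  containers:
    - name: app
      image: busybox
      # `id` should report uid=1099 gid=1099, and the touch should succeed
      # because fsGroup makes the volume group-writable for GID 1099
      command: ["sh", "-c", "id && touch /data/test && ls -ln /data && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      emptyDir: {}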
Upvotes: 43
Reputation: 207
For k8s version 1.10+, runAsGroup has been added; it's similar to fsGroup but works differently.
The implementation can be tracked here: https://github.com/kubernetes/features/issues/213
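A rough sketch of the distinction (the GID value is illustrative):
securityContext:
  runAsGroup: 2000 # primary GID of the processes inside the containers
  fsGroup: 2000 # GID applied to mounted volumes (and files created on them)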
Upvotes: 10
Reputation: 12399
The Pod Security Context supports setting an fsGroup, which allows you to set the group ID that owns the volume, and thus who can write to it. The example in the docs:
apiVersion: v1
kind: Pod
metadata:
  name: hello-world
spec:
  containers:
    # specification of the pod's containers
    # ...
  securityContext:
    fsGroup: 1234
More info on this is in the Kubernetes documentation on pod security contexts.
Upvotes: 134