Reputation: 921
I am using k8s version 1.11 with CephFS as storage.
I am trying to mount a directory created on CephFS into a pod. To achieve this, I have written the following volume and volume mount config in the deployment configuration.
Volume
{
  "name": "cephfs-0",
  "cephfs": {
    "monitors": [
      "10.0.1.165:6789",
      "10.0.1.103:6789",
      "10.0.1.222:6789"
    ],
    "user": "cfs",
    "secretRef": {
      "name": "ceph-secret"
    },
    "readOnly": false,
    "path": "/cfs/data/conf"
  }
}
volumeMounts
{
  "mountPath": "/opt/myapplication/conf",
  "name": "cephfs-0",
  "readOnly": false
}
The mount is working properly: I can see the Ceph directory /cfs/data/conf mounted at /opt/myapplication/conf. But the following is my issue.
The Docker image already contains configuration files at /opt/myapplication/conf. When the deployment mounts the Ceph volume, all the files at /opt/myapplication/conf disappear. I know this is the behavior of the mount operation, but is there any way to persist the files that already exist in the container onto the volume being mounted, so that other pods mounting the same volume can access the configuration files? In other words, the files that are already inside the pod at /opt/myapplication/conf should become accessible on CephFS at /cfs/data/conf.
Is it possible?
I went through the Docker documentation, and it mentions:
"Populate a volume using a container: If you start a container which creates a new volume, as above, and the container has files or directories in the directory to be mounted (such as /app/ above), the directory's contents are copied into the volume. The container then mounts and uses the volume, and other containers which use the volume also have access to the pre-populated content."
This matches with my requirement but how to achieve it with k8s volumes?
Upvotes: 36
Views: 59247
Reputation: 54257
Unfortunately Kubernetes' volume system differs from Docker's, so this is not possible directly.
However, in the case of a single file foo.conf you can use a mountPath ending in that file name combined with a subPath containing the file name, like this:

volumeMounts:
- name: cephfs-0
  mountPath: /opt/myapplication/conf/foo.conf
  subPath: foo.conf

Repeat that for each file. But if you have a lot of them, or if their names can vary, then you have to handle this at runtime or use templating tools. Usually that means mounting the volume somewhere else and setting up symlinks before your main process starts.
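The symlink approach could be sketched roughly as below. The mount point /mnt/conf, the function name, and the application binary path are assumptions for illustration, not part of the answer:

```shell
#!/bin/sh
# Sketch of the "mount elsewhere + symlink" idea: the CephFS volume is
# mounted at a side location (e.g. /mnt/conf) instead of the app's conf
# directory, and every file on it is linked into place before the main
# process starts.
link_confs() {
  src_dir="$1"; dst_dir="$2"
  for f in "$src_dir"/*; do
    if [ -e "$f" ]; then
      ln -sf "$f" "$dst_dir/$(basename "$f")"
    fi
  done
}
# In a real entrypoint one would then exec the application, e.g.:
# link_confs /mnt/conf /opt/myapplication/conf
# exec /opt/myapplication/bin/myapp "$@"
```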
Upvotes: 53
Reputation: 91
Very easy! You can use an init container here. The init container uses the same image as your application and shares the CephFS PVC with the main application container, but mounts the volume at a different path so that the image's files at /opt/myapplication/conf stay visible; it then copies them onto the volume. Now when you deploy your application, the main container mounts the pre-populated volume at the correct location, i.e. /opt/myapplication/conf, and the config files are there.
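A minimal sketch of the init-container approach, reusing the CephFS volume from the question; the image name, container names, and the init container's staging mount path /mnt/conf are assumptions:

```yaml
# Sketch only: image and staging path are hypothetical.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapplication
spec:
  selector:
    matchLabels:
      app: myapplication
  template:
    metadata:
      labels:
        app: myapplication
    spec:
      initContainers:
      - name: seed-conf
        image: myapplication:latest            # same image as the main container
        # Mount the volume elsewhere so the image's files are still visible,
        # then copy them onto the volume.
        command: ["sh", "-c", "cp -r /opt/myapplication/conf/. /mnt/conf/"]
        volumeMounts:
        - name: cephfs-0
          mountPath: /mnt/conf
      containers:
      - name: myapplication
        image: myapplication:latest
        volumeMounts:
        - name: cephfs-0
          mountPath: /opt/myapplication/conf   # now pre-populated
      volumes:
      - name: cephfs-0
        cephfs:
          monitors:
          - 10.0.1.165:6789
          - 10.0.1.103:6789
          - 10.0.1.222:6789
          user: cfs
          secretRef:
            name: ceph-secret
          path: /cfs/data/conf
```

Note that the plain `cp -r` re-copies on every restart; guarding against overwriting newer files on the volume is left to the entrypoint or copy flags.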
Upvotes: 9
Reputation: 31
I was able to fix this by having my ENTRYPOINT be a bash script that moves (mv) the config files I wanted mounted to their correct location. It seems the "device or resource is busy" errors were happening because the files were not mounted yet.
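This move-at-startup fix could look like the sketch below. The staging directory name (conf.default) is an assumption: the image would bake its defaults there, outside the mount point, and the entrypoint moves them in after the volume is mounted:

```shell
#!/bin/sh
# Sketch of the entrypoint fix: default config files are baked into a
# staging directory in the image, and moved onto the mounted directory
# at startup, skipping files that already exist on the volume.
seed_conf() {
  staging="$1"; mounted="$2"
  for f in "$staging"/*; do
    if [ -e "$f" ] && [ ! -e "$mounted/$(basename "$f")" ]; then
      mv "$f" "$mounted/"
    fi
  done
}
# seed_conf /opt/myapplication/conf.default /opt/myapplication/conf
# exec /opt/myapplication/bin/myapp "$@"
```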
Upvotes: 3
Reputation: 518
I also encountered this very niche issue of not being able to mount a folder to a specific path with the content from my built image; it ends up empty.
However, my workaround is to use ENTRYPOINT in the Dockerfile, referring to a shell script that runs the commands to initialize the DB or otherwise populate the mounted target folder.
So it seems that the entrypoint runs after the volume has been mounted by Kubernetes.
I did try to symlink the path in the entrypoint script, but that didn't work out.
Upvotes: 0