Muhammad ahmad

Reputation: 73

Share Folder Between Kubernetes Nodes

I have 3 Kubernetes nodes: one is the master and the others are workers. I deployed a Laravel application to the master node and created a volume and a storage class pointing to that folder. These are my YAML files for the volume and the persistent volume claim:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
    name: qsinav-pv-www-claim
spec:
    storageClassName: manual
    accessModes:
        - ReadWriteOnce
    resources:
        requests:
            storage: 5Gi
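For context, this is roughly how a Deployment would mount that claim (the deployment name, labels, container name, and image below are assumptions for illustration, not taken from the question):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: qsinav-web            # assumed name, not from the question
spec:
  replicas: 2
  selector:
    matchLabels:
      app: qsinav-web
  template:
    metadata:
      labels:
        app: qsinav-web
    spec:
      containers:
        - name: laravel        # assumed container name
          image: php:8-apache  # assumed image
          volumeMounts:
            - name: www
              mountPath: /var/www/html
      volumes:
        - name: www
          persistentVolumeClaim:
            claimName: qsinav-pv-www-claim  # the PVC defined above
```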

Storage class:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: manual
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd

apiVersion: v1
kind: PersistentVolume
metadata:
  name: qsinav-pv-www
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    namespace: default
    name: qsinav-pv-www-claim
  hostPath:
    path: "/var/www/html/Test/qSinav-starter"

The problem is that the pods on each node mount the folder from their own node. Since I am running a web application with load balancing across these nodes, if I log in on node 1 and the next request goes to node 2, I am redirected to the login page because my session does not exist there. So I need to share one folder from the master node with all worker nodes. I don't know what I should do to achieve this, so please help me solve it. Thanks in advance.

Upvotes: 0

Views: 991

Answers (2)

Hamza AZIZ

Reputation: 2937

Don't use the hostPath type: that volume type is only suitable for a single-node cluster, because if the pod is assigned to another node, it can't reach the data it needs.

Use the local type instead.

A local volume remembers which node it was provisioned on, so a restarting pod will always be scheduled back to the node that holds its data and find the storage in the state it left it before the restart.

PS 1: If a node dies, the data in both hostPath and local persistent volumes on that node is lost.

PS 2: Neither the local nor the hostPath type works with dynamic provisioning.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: qsinav-pv-www
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    namespace: default
    name: qsinav-pv-www-claim
  local:
    path: "/var/www/html/Test/qSinav-starter"
  # local volumes require nodeAffinity, so the scheduler knows
  # which node holds the data
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - <node-name>  # replace with the name of the node that has the data
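For local volumes, the matching StorageClass should also use the no-provisioner placeholder with delayed binding, so the claim binds only once a pod is scheduled onto the right node. A minimal sketch (the class name `manual` matches the PV above; the `gce-pd` provisioner from the question does not apply to local disks):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: manual
provisioner: kubernetes.io/no-provisioner  # local volumes are provisioned manually
volumeBindingMode: WaitForFirstConsumer    # delay binding until a pod is scheduled
```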

Upvotes: 0

rock'n rolla

Reputation: 2229

Yeah, that's expected and is clearly mentioned in the docs for the hostPath volume:

Pods with identical configuration (such as created from a PodTemplate) may behave differently on different nodes due to different files on the nodes

You need to use something like NFS, which can be shared between nodes. Your PV definition would end up looking something like this (change the IP and path as per your setup):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: qsinav-pv-www
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    namespace: default
    name: qsinav-pv-www-claim
  nfs: 
    path: /tmp 
    server: 172.17.0.2
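One caveat, as a hedged suggestion rather than a required change: `ReadWriteOnce` only allows the volume to be mounted by a single node, and the whole point here is sharing across nodes. NFS supports `ReadWriteMany`, so both the PV above and the matching PVC would need this field changed:

```yaml
# In both the PersistentVolume and the PersistentVolumeClaim,
# request a multi-node access mode; NFS supports ReadWriteMany.
accessModes:
  - ReadWriteMany
```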

Upvotes: 1
