Reputation: 8626
I have Kubernetes installed on my Ubuntu machine. I'm trying to set up an NFS volume and mount it into a container, following this document: http://kubernetes.io/v1.1/examples/nfs/
NFS service and pod configurations:
kind: Service
apiVersion: v1
metadata:
  name: nfs-server
spec:
  ports:
    - port: 2049
  selector:
    role: nfs-server
---
apiVersion: v1
kind: Pod
metadata:
  name: nfs-server
  labels:
    role: nfs-server
spec:
  containers:
    - name: nfs-server
      image: jsafrane/nfs-data
      ports:
        - name: nfs
          containerPort: 2049
      securityContext:
        privileged: true
Pod configuration to mount the NFS volume:
apiVersion: v1
kind: Pod
metadata:
  name: nfs-web
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - name: web
          containerPort: 80
      volumeMounts:
        # name must match the volume name below
        - name: nfs
          mountPath: "/usr/share/nginx/html"
  volumes:
    - name: nfs
      nfs:
        # FIXME: use the right hostname
        server: 192.168.3.201
        path: "/"
When I run kubectl describe pod nfs-web I get the following output, which says it was unable to mount the NFS volume. What could be the reason for that?
Name: nfs-web
Namespace: default
Image(s): nginx
Node: 192.168.1.114/192.168.1.114
Start Time: Sun, 06 Dec 2015 08:31:06 +0530
Labels: <none>
Status: Pending
Reason:
Message:
IP:
Replication Controllers: <none>
Containers:
web:
Container ID:
Image: nginx
Image ID:
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment Variables:
Conditions:
Type Status
Ready False
Volumes:
nfs:
Type: NFS (an NFS mount that lasts the lifetime of a pod)
Server: 192.168.3.201
Path: /
ReadOnly: false
default-token-nh698:
Type: Secret (a secret that should populate this volume)
SecretName: default-token-nh698
Events:
FirstSeen LastSeen Count From SubobjectPath Reason Message
───────── ──────── ───── ──── ───────────── ────── ───────
36s 36s 1 {scheduler } Scheduled Successfully assigned nfs-web to 192.168.1.114
36s 2s 5 {kubelet 192.168.1.114} FailedMount Unable to mount volumes for pod "nfs-web_default": exit status 32
36s 2s 5 {kubelet 192.168.1.114} FailedSync Error syncing pod, skipping: exit status 32
Upvotes: 33
Views: 97783
Reputation: 61
You need to execute the following on each master and node (nfs-utils is the package that provides mount.nfs on RPM-based distributions such as CentOS/RHEL):
sudo yum install nfs-utils -y
Upvotes: 1
Reputation: 535
In my case, the issue was that the folder referenced by the volume's hostPath had not been created on the local disk. Once the folder was created on the worker node, the issue was resolved. The events looked like this:
Warning FailedMount 3m18s kubelet Unable to attach or mount volumes: unmounted volumes=[temp-volume], unattached volumes=[nfsvol-vre-data temp1-volume consumer1-serviceaccount-token-sdfsdf nfsvol]: timed out waiting for the condition
Warning FailedMount 71s (x10 over 5m20s) kubelet MountVolume.SetUp failed for volume "temp-volume" : hostPath type check failed: /tmp/folder is not a directory
Warning FailedMount 63s kubelet Unable to attach or mount volumes: unmounted volumes=[temp-volume], unattached volumes=[nfsvol nfsvol-vre-data temp1-volume consumer1-serviceaccount-token-sdfsdf]: timed out waiting for the condition
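As an alternative to creating the folder by hand, the hostPath type field can tell the kubelet to create it. A minimal sketch using the volume name and path from the events above, assuming nothing else already exists at that path:

volumes:
  - name: temp-volume
    hostPath:
      path: /tmp/folder
      # DirectoryOrCreate makes the kubelet create an empty directory on the node if it is missing
      type: DirectoryOrCreate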
Upvotes: 0
Reputation: 533
In my case, the issue was that I hadn't declared the client host in the NFS server's /etc/exports file. After adding an entry there for my host, the volume worked correctly.
If you modify the file in any way, you need to restart the service too:
sudo systemctl restart nfs-kernel-server
An example of an entry in the /etc/exports file:
/var/nfs/home 192.111.222.333(rw,sync,no_subtree_check)
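To double-check the result, the export table can also be re-read and listed with standard exportfs commands (not specific to this setup):

sudo exportfs -ra    # re-read /etc/exports and apply any changes
sudo exportfs -v     # list the active exports with their options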
Upvotes: 1
Reputation: 2858
Had the same issue with an NFS server that only allowed root mounts. Fixed by either:
a. allowing non-root users to mount NFS (on the server),
or
b. adding the following to the PersistentVolume (see the sketch below):
mountOptions:
  - nfsvers=4.1
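For context, a minimal PersistentVolume sketch with that option in place; the name, capacity, and access mode are illustrative placeholders, while the server and path mirror the question's NFS volume:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv              # placeholder name
spec:
  capacity:
    storage: 1Gi            # placeholder size
  accessModes:
    - ReadWriteMany
  mountOptions:
    - nfsvers=4.1           # the mount option from option (b) above
  nfs:
    server: 192.168.3.201
    path: "/"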
Upvotes: 2
Reputation: 1019
I had the same problem, and I solved it by installing nfs-common on every Kubernetes node.
apt-get install -y nfs-common
My nodes had been installed without nfs-common. Kubernetes asks each node to mount the NFS export into a specific directory so it is available to the pod. Because mount.nfs was not found, the mounting process failed.
Good luck!
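One way to confirm this is the cause before installing anything is to check on the worker node whether the NFS mount helper exists and to try the mount by hand; the server and export path below mirror the question, so adjust them for your setup:

# on the worker node the pod was scheduled to
ls -l /sbin/mount.nfs                              # missing if nfs-common is not installed
sudo mkdir -p /mnt/nfstest                         # temporary test mount point
sudo mount -t nfs 192.168.3.201:/ /mnt/nfstest     # same server/path as the pod's nfs volume
sudo umount /mnt/nfstest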
Upvotes: 57
Reputation: 121
It looks like volumes.nfs.server=192.168.3.201 is incorrectly configured on your client. It should be set to the ClusterIP address of your nfs-server Service.
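For example, look up the Service's ClusterIP:

kubectl get service nfs-server

and set the server field in the pod's nfs volume to the address it reports (the 10.x value below is only illustrative):

volumes:
  - name: nfs
    nfs:
      server: 10.0.123.45   # illustrative ClusterIP; use the address from kubectl get service
      path: "/"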
Upvotes: 1