Reputation: 3949
In this repository https://github.com/mappedinn/kubernetes-nfs-volume-on-gke I am trying to share a volume through an NFS service on GKE. The NFS file sharing works when a hard-coded IP address is used.
But, in my opinion, it would be better to use a DNS name instead of a hard-coded IP address.
Below is the declaration of the NFS Service used for sharing a volume on Google Cloud Platform:
apiVersion: v1
kind: Service
metadata:
  name: nfs-server
spec:
  ports:
    - name: nfs
      port: 2049
    - name: mountd
      port: 20048
    - name: rpcbind
      port: 111
  selector:
    role: nfs-server
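For context, the Service's selector expects pods labeled role: nfs-server; a minimal sketch of a matching Deployment could look like the following (the apiVersion and image are placeholders and may differ from the ones in the repository):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-server
spec:
  replicas: 1
  selector:
    matchLabels:
      role: nfs-server
  template:
    metadata:
      labels:
        role: nfs-server   # must match the Service selector above
    spec:
      containers:
        - name: nfs-server
          image: gcr.io/google_containers/volume-nfs:0.8   # placeholder image
          ports:
            - name: nfs
              containerPort: 2049
            - name: mountd
              containerPort: 20048
            - name: rpcbind
              containerPort: 111
          securityContext:
            privileged: true   # NFS servers typically need privileged mode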
Below is the definition of the PersistentVolume with a hard-coded IP address:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: wp01-pv-data
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.247.248.43 # with a hard-coded IP, it works
    path: "/"
Below is the definition of the PersistentVolume with a DNS name:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: wp01-pv-data
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: nfs-server.default.svc.cluster.local # with the DNS name, it does not work
    path: "/"
I am using this https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/ to get the DNS name of the Service. Is there anything I have missed?
Thanks
Upvotes: 3
Views: 897
Reputation: 3949
I solved the problem by simply upgrading my GKE cluster from version 1.7.11-gke.1 to 1.8.6-gke.0:
kubectl version
# Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.6", GitCommit:"6260bb08c46c31eea6cb538b34a9ceb3e406689c", GitTreeState:"clean", BuildDate:"2017-12-21T06:34:11Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
# Server Version: version.Info{Major:"1", Minor:"8+", GitVersion:"v1.8.6-gke.0", GitCommit:"ee9a97661f14ee0b1ca31d6edd30480c89347c79", GitTreeState:"clean", BuildDate:"2018-01-05T03:36:42Z", GoVersion:"go1.8.3b4", Compiler:"gc", Platform:"linux/amd64"}
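For reference, the upgrade itself can be done with gcloud (the cluster name and zone below are placeholders):
# upgrade the master first, then the nodes
gcloud container clusters upgrade my-cluster --zone us-central1-a --master --cluster-version 1.8.6-gke.0
gcloud container clusters upgrade my-cluster --zone us-central1-a --cluster-version 1.8.6-gke.0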
Here is the final version of the YAML files:
apiVersion: v1
kind: Service
metadata:
  name: nfs-server
spec:
  # clusterIP: 10.3.240.20
  ports:
    - name: nfs
      port: 2049
    - name: mountd
      port: 20048
    - name: rpcbind
      port: 111
  selector:
    role: nfs-server
  # type: "LoadBalancer"
and
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 1Mi
  accessModes:
    - ReadWriteMany
  nfs:
    # FIXED: use the internal DNS name
    server: nfs-server.default.svc.cluster.local
    path: "/"
Upvotes: 0
Reputation: 22884
The problem is in DNS resolution on the node itself. Mounting the NFS share into the pod is a job of the kubelet, which runs on the node, so DNS resolution happens according to the /etc/resolv.conf on the node itself as well. What could suffice is adding nameserver <your_kubedns_service_ip> to the node's /etc/resolv.conf, but it can become somewhat of a chicken-and-egg problem in some corner cases.
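The kube-dns Service IP can be looked up as follows (assuming the standard kube-system deployment); that is the address the node's /etc/resolv.conf would need:
kubectl get svc kube-dns -n kube-system -o jsonpath='{.spec.clusterIP}'
# e.g. 10.3.240.10, to be added on the node as:
#   nameserver 10.3.240.10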
Upvotes: 1