Peter Penzov

Reputation: 1626

Remove nodeSelectorTerms param

I use this manifest configuration to deploy a registry into a 3-node Kubernetes cluster:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv1
  namespace: registry-space
spec:
  capacity:
    storage: 5Gi # specify your own size
  volumeMode: Filesystem
  persistentVolumeReclaimPolicy: Retain
  local:
    path: /opt/registry # can be any path
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - kubernetes2
  accessModes:
    - ReadWriteMany # only 1 node will read/write on the path.
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv1-claim
  namespace: registry-space
spec: # should match specs added in the PersistentVolume
  accessModes:
    - ReadWriteMany
  volumeMode: Filesystem
  resources:
    requests:
      storage: 5Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: private-repository-k8s
  namespace: registry-space
  labels:
    app: private-repository-k8s
spec:
  replicas: 1
  selector:
    matchLabels:
      app: private-repository-k8s
  template:
    metadata:
      labels:
        app: private-repository-k8s
    spec:
      volumes:
       - name: certs-vol
         hostPath:
          path: /opt/certs
          type: Directory
       - name: task-pv-storage
         persistentVolumeClaim:
           claimName: pv1-claim # specify the PVC that you've created. PVC and Deployment must be in same namespace.
      containers:
        - image: registry:2
          name: private-repository-k8s
          imagePullPolicy: IfNotPresent
          env:
          - name: REGISTRY_HTTP_TLS_CERTIFICATE
            value: "/opt/certs/registry.crt"
          - name: REGISTRY_HTTP_TLS_KEY
            value: "/opt/certs/registry.key"
          ports:
            - containerPort: 5000
          volumeMounts:
          - name: certs-vol
            mountPath: /opt/certs
          - name: task-pv-storage
            mountPath: /opt/registry

I manually created directories on every node under /opt/certs and /opt/registry.
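For reference, this is roughly all I ran on each node to create them:

# run once on every node in the cluster
sudo mkdir -p /opt/certs /opt/registry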

But when I try to deploy the manifest without the hardcoded nodeSelectorTerms from the control plane, I get an error; the pod stays Pending:

kubernetes@kubernetes1:/opt/registry$ kubectl get pods --all-namespaces
NAMESPACE        NAME                                       READY   STATUS    RESTARTS      AGE
kube-system      calico-kube-controllers-58dbc876ff-fsjd5   1/1     Running   1 (74m ago)   84m
kube-system      calico-node-5brzt                          1/1     Running   1 (73m ago)   84m
kube-system      calico-node-nph9n                          1/1     Running   1 (76m ago)   84m
kube-system      calico-node-pcd74                          1/1     Running   1 (74m ago)   84m
kube-system      calico-node-ph2ht                          1/1     Running   1 (76m ago)   84m
kube-system      coredns-565d847f94-7pswp                   1/1     Running   1 (74m ago)   105m
kube-system      coredns-565d847f94-tlrfr                   1/1     Running   1 (74m ago)   105m
kube-system      etcd-kubernetes1                           1/1     Running   2 (74m ago)   105m
kube-system      kube-apiserver-kubernetes1                 1/1     Running   2 (74m ago)   105m
kube-system      kube-controller-manager-kubernetes1        1/1     Running   2 (74m ago)   105m
kube-system      kube-proxy-4slm4                           1/1     Running   1 (76m ago)   86m
kube-system      kube-proxy-4tnx2                           1/1     Running   2 (74m ago)   105m
kube-system      kube-proxy-9dgsj                           1/1     Running   1 (73m ago)   85m
kube-system      kube-proxy-cgr44                           1/1     Running   1 (76m ago)   86m
kube-system      kube-scheduler-kubernetes1                 1/1     Running   2 (74m ago)   105m
registry-space   private-repository-k8s-6d5d954b4f-xkmj5    0/1     Pending   0             4m55s
kubernetes@kubernetes1:/opt/registry$

Do you know how I can let Kubernetes decide where to deploy the pod?

Upvotes: 1

Views: 300

Answers (1)

jmvcollaborator

Reputation: 2475

Let's try the following (disregard the paths you currently have and use the ones in the example; you can change them later). We can adapt it to your needs once dynamic provisioning is working. At the very bottom there is a MySQL image as an example; use busybox instead, or leave it as it is to get a better understanding:

  1. NFS Server install. Create NFS Share on File Server (Usually master node)

 #Include prerequisites
    sudo apt update -y # Run updates prior to installing
    sudo apt install nfs-kernel-server # Install NFS Server
    sudo systemctl enable nfs-server # Set nfs-server to load on startups
    sudo systemctl status nfs-server # Check its status
     
    # check server status
    root@worker03:/home/brucelee# sudo systemctl status nfs-server
    ● nfs-server.service - NFS server and services
         Loaded: loaded (/lib/systemd/system/nfs-server.service; enabled; vendor preset: enabled)
         Active: active (exited) since Fri 2021-08-13 04:25:50 UTC; 18s ago
        Process: 2731 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=0/SUCCESS)
        Process: 2732 ExecStart=/usr/sbin/rpc.nfsd $RPCNFSDARGS (code=exited, status=0/SUCCESS)
       Main PID: 2732 (code=exited, status=0/SUCCESS)
     
    Aug 13 04:25:49 linux03 systemd[1]: Starting NFS server and services...
    Aug 13 04:25:50 linux03 systemd[1]: Finished NFS server and services.
     
    # Prepare an empty folder
    sudo su # enter root
    nfsShare=/nfs-share
    mkdir $nfsShare # create folder if it doesn't exist
    chown nobody: $nfsShare
    chmod -R 777 $nfsShare # not recommended for production
     
    # Edit the nfs server share configs
    vim /etc/exports
    # add these lines
    /nfs-share x.x.x.x/24(rw,sync,no_subtree_check,no_root_squash,no_all_squash,insecure)
     
    # Export directory and make it available
    sudo exportfs -rav
     
    # Verify nfs shares
    sudo exportfs -v
     
    # Enable ingress for subnet
    sudo ufw allow from x.x.x.x/24 to any port nfs
     
    # Check firewall status - inactive firewall is fine for testing
    root@worker03:/home/brucelee# sudo ufw status
    Status: inactive

  2. NFS Client install (Worker nodes)
# Install prerequisites
sudo apt update -y
sudo apt install nfs-common
 
# Mount the nfs share
remoteShare=server.ip.here:/nfs-share
localMount=/mnt/testmount
sudo mkdir -p $localMount
sudo mount $remoteShare $localMount
 
# Unmount
sudo umount $localMount
  3. Dynamic provisioning and Storage class defaulted
# Pull the source code
workingDirectory=~/nfs-dynamic-provisioner
mkdir $workingDirectory && cd $workingDirectory
git clone https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner
cd nfs-subdir-external-provisioner/deploy
 
# Deploying the service accounts, accepting defaults
k create -f rbac.yaml # 'k' is just an alias for kubectl
 
# Editing storage class
vim class.yaml
 
##############################################
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-ssd # set this value
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner # or choose another name; must match the deployment's env PROVISIONER_NAME
parameters:
  archiveOnDelete: "true" # value of true means retaining data upon pod terminations
allowVolumeExpansion: true # this attribute doesn't exist by default
##############################################
 
# Deploying storage class
k create -f class.yaml
 
# Sample output
stoic@masternode:~/nfs-dynamic-provisioner/nfs-subdir-external-provisioner/deploy$ k get storageclasses.storage.k8s.io
NAME                   PROVISIONER                                     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
managed-nfs-ssd        k8s-sigs.io/nfs-subdir-external-provisioner     Delete          Immediate           false                  33s
nfs-class              kubernetes.io/nfs                               Retain          Immediate           true                   193d
nfs-client (default)   cluster.local/nfs-subdir-external-provisioner   Delete          Immediate           true                   12d
 
# Example of patching an applied object
kubectl patch storageclass managed-nfs-ssd -p '{"allowVolumeExpansion":true}'
kubectl patch storageclass managed-nfs-ssd -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}' # Set storage class as default
 
# Editing deployment of dynamic nfs provisioning service pod
vim deployment.yaml
 
##############################################
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: X.X.X.X # change this value
            - name: NFS_PATH
              value: /nfs-share # change this value
      volumes:
        - name: nfs-client-root
          nfs:
            server: X.X.X.X # change this value (same as NFS_SERVER above)
            path: /nfs-share # change this value
##############################################
 
# Creating nfs provisioning service pod
k create -f deployment.yaml
 
# Troubleshooting: example where the deployment was stuck waiting for the objects created by rbac.yaml
stoic@masternode: $ k describe deployments.apps nfs-client-provisioner
Name:               nfs-client-provisioner
Namespace:          default
CreationTimestamp:  Sat, 14 Aug 2021 00:09:24 +0000
Labels:             app=nfs-client-provisioner
Annotations:        deployment.kubernetes.io/revision: 1
Selector:           app=nfs-client-provisioner
Replicas:           1 desired | 0 updated | 0 total | 0 available | 1 unavailable
StrategyType:       Recreate
MinReadySeconds:    0
Pod Template:
  Labels:           app=nfs-client-provisioner
  Service Account:  nfs-client-provisioner
  Containers:
   nfs-client-provisioner:
    Image:      k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
    Port:       <none>
    Host Port:  <none>
    Environment:
      PROVISIONER_NAME:  k8s-sigs.io/nfs-subdir-external-provisioner
      NFS_SERVER:        X.X.X.X
      NFS_PATH:          /nfs-share
    Mounts:
      /persistentvolumes from nfs-client-root (rw)
  Volumes:
   nfs-client-root:
    Type:      NFS (an NFS mount that lasts the lifetime of a pod)
    Server:    X.X.X.X
    Path:      /nfs-share
    ReadOnly:  false
Conditions:
  Type             Status  Reason
  ----             ------  ------
  Progressing      True    NewReplicaSetCreated
  Available        False   MinimumReplicasUnavailable
  ReplicaFailure   True    FailedCreate
OldReplicaSets:    <none>
NewReplicaSet:     nfs-client-provisioner-7768c6dfb4 (0/1 replicas created)
Events:
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  3m47s  deployment-controller  Scaled up replica set nfs-client-provisioner-7768c6dfb4 to 1
 
# Get the default nfs storage class
echo $(kubectl get sc -o=jsonpath='{range .items[?(@.metadata.annotations.storageclass\.kubernetes\.io/is-default-class=="true")]}{@.metadata.name}{"\n"}{end}')
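A quick sanity check before creating any claims (assuming the provisioner was deployed to the default namespace, as in the deployment above):

# the label comes from the provisioner deployment (app=nfs-client-provisioner)
kubectl get pods -n default -l app=nfs-client-provisioner
kubectl logs -n default -l app=nfs-client-provisioner --tail=20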
 
  4. PersistentVolumeClaim (notice the storageClassName: it is the class defined in the previous step)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:  
  name: my-persistentvolume-claim
  namespace: default
spec:
  storageClassName: managed-nfs-ssd # the class created in the previous step
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
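To apply it and check that it binds (pvc.yaml is just whatever file you saved the claim in):

kubectl apply -f pvc.yaml
kubectl get pvc my-persistentvolume-claim -n default
# STATUS should change to Bound once the provisioner creates the backing PV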
  5. PersistentVolume

It is created dynamically! Confirm it is there with the correct values by running this command:

kubectl get pv -A

  6. Deployment

In your deployment you need two things: volumeMounts (for each container) and volumes (for all containers). Notice that volumeMounts->name=data and volumes->name=data, because they must match. And claimName is my-persistentvolume-claim, which is the same as your PVC. A rough sketch adapted to your registry follows the snippet.

 ...
 spec:
      containers:
      - name: mysql
        image: mysql:8.0.30
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: my-persistentvolume-claim
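Once that works, adapting it to your registry is mostly a matter of swapping the claim. A rough, untested sketch (the claim name registry-claim is just an example; keep your certs-vol hostPath as it is):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: registry-claim # example name
  namespace: registry-space
spec:
  storageClassName: managed-nfs-ssd # the dynamic class created above
  accessModes:
    - ReadWriteMany # NFS supports RWX, so this can stay as in your manifest
  resources:
    requests:
      storage: 5Gi
---
# then, in your registry Deployment, only the volume definition changes
# (the volumeMounts stay the same):
      volumes:
        - name: task-pv-storage
          persistentVolumeClaim:
            claimName: registry-claim

Because the PV is provisioned on NFS, there is no local path and no nodeAffinity anymore, so the scheduler is free to place the pod on any node, which is what you asked for.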

Upvotes: 1
