YanivCode

Reputation: 144

Kubernetes and mongo, PV, PVC

Hi, just a newbie question. I've managed (I think?) to implement a PV and PVC for MongoDB. I'm using a local PV, not one on the cloud. Is there a way to keep the data when Kubernetes runs on my PC, even after a container restart?

I'm not sure I got this right, but what I need is to persist the Mongo data across restarts. What is the best way to do this? (No MongoDB Atlas.)

UPDATE: I managed to make the tickets service's database work great, but for two other services it just won't work! I've updated the YAML files below so you can see the current state. The auth-mongo setup is just the same as tickets-mongo, so why won't it work?

The tickets-mongo deployment YAML file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: tickets-mongo-depl

spec:
  replicas: 1
  selector:
    matchLabels:
      app: tickets-mongo
  template:
    metadata:
      labels:
        app: tickets-mongo
    spec:
      containers:
        - name: tickets-mongo
          image: mongo
          args: ["--dbpath", "data/auth"]
          livenessProbe:
            exec:
              command:
                - mongo
                - --disableImplicitSessions
                - --eval
                - "db.adminCommand('ping')"
            initialDelaySeconds: 30
            periodSeconds: 10
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 6
          volumeMounts:
            - mountPath: /data/auth
              name: tickets-data
      volumes:
        - name: tickets-data
          persistentVolumeClaim:
            claimName: tickets-pvc

---
apiVersion: v1
kind: Service
metadata:
  name: tickets-mongo-srv
spec:
  selector:
    app: tickets-mongo
  ports:
    - name: db
      protocol: TCP
      port: 27017
      targetPort: 27017

The auth-mongo-depl.yaml file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-mongo-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth-mongo
  template:
    metadata:
      labels:
        app: auth-mongo
    spec:
      containers:
        - name: auth-mongo
          image: mongo
          args: ["--dbpath", "data/db"]
          livenessProbe:
            exec:
              command:
                - mongo
                - --disableImplicitSessions
                - --eval
                - "db.adminCommand('ping')"
            initialDelaySeconds: 30
            periodSeconds: 10
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 6
          volumeMounts:
            - mountPath: /data/db
              name: auth-data
      volumes:
        - name: auth-data
          persistentVolumeClaim:
            claimName: auth-pvc

---
apiVersion: v1
kind: Service
metadata:
  name: auth-mongo-srv
spec:
  selector:
    app: auth-mongo
  ports:
    - name: db
      protocol: TCP
      port: 27017
      targetPort: 27017



The current PVs (kubectl get pv):

NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                 STORAGECLASS   REASON   AGE
pv-auth      1Gi        RWO            Retain           Bound    default/auth-pvc      auth                    78m
pv-orders    1Gi        RWO            Retain           Bound    default/orders-pvc    orders                  78m
pv-tickets   1Gi        RWO            Retain           Bound    default/tickets-pvc   tickets                 78m

I'm using Mongo containers for the tickets, orders, and auth services. Just adding some info to make it clear; here are the running pods:

NAME                                     READY   STATUS    RESTARTS   AGE
auth-depl-66c5d54988-ffhwc               1/1     Running   0          36m
auth-mongo-depl-594b98fcc5-k9hj8         1/1     Running   0          36m
client-depl-787cf6c7c6-xxks9             1/1     Running   0          36m
expiration-depl-864d846445-b95sh         1/1     Running   0          36m
expiration-redis-depl-64bd9fdb95-sg7fc   1/1     Running   0          36m
nats-depl-7d6c7dc46-m6mcg                1/1     Running   0          36m
orders-depl-5478cf4dfd-zmngj             1/1     Running   0          36m
orders-mongo-depl-5f974847d7-bz9s4       1/1     Running   0          36m
payments-depl-78f85d94fd-4zs55           1/1     Running   0          36m
payments-mongo-depl-5d5c47494b-7zjrl     1/1     Running   0          36m
tickets-depl-84d59fd47c-cs4k5            1/1     Running   0          36m
tickets-mongo-depl-66798d9874-cfbqb      1/1     Running   0          36m

An example of one of the PVs:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-tickets
  labels:
    type: local
spec:
  storageClassName: tickets
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/tmp"
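
The matching claim isn't shown here; for reference, the tickets-pvc that the tickets Deployment mounts would look something like the sketch below. Treat it as an assumption about the actual file; the values just follow the PV listing above (1Gi, ReadWriteOnce, storageClassName: tickets).

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: tickets-pvc
spec:
  # must match the PV's storageClassName so the claim binds to pv-tickets
  storageClassName: tickets
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi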

Upvotes: 0

Views: 1097

Answers (1)

YanivCode

Reputation: 144

All I had to do was change the hostPath path in each PV. Using the same path for every PV makes the app fail.

pv1:

hostPath:
  path: "/path/x1"

pv2:

hostPath:
  path: "/path/x2"

Like so; just don't give two PVs the same path.
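
For example, a sketch of two of the PVs from the listing above with distinct directories. The paths here are placeholders (anything works as long as each PV gets its own directory), and type: DirectoryOrCreate is an optional extra that makes the kubelet create the folder if it doesn't exist yet:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-tickets
  labels:
    type: local
spec:
  storageClassName: tickets
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    # example directory, unique to this PV
    path: "/tmp/tickets"
    type: DirectoryOrCreate
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-auth
  labels:
    type: local
spec:
  storageClassName: auth
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    # different directory from pv-tickets
    path: "/tmp/auth"
    type: DirectoryOrCreate

Keep in mind that /tmp may be cleared on a host reboot, so for data you want to keep long-term a directory outside /tmp is safer.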

Upvotes: 1
