DavideP

Reputation: 72

Shared folder with Azure Files on a Kubernetes pod doesn't work

I have an issue with my deployment when I try to share a folder through a Kubernetes volume backed by Azure File Storage. If I deploy my image without sharing the folder (/integrations), the app starts: as shown in the image below, Lens reports the pod as up and running.

If I map the folder to a volume instead, the pod gets stuck in an error state (CrashLoopBackOff).

Here is my YAML deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: sandbox-pizzly
  name: sandbox-pizzly-widget
  labels:
    app: sandbox-pizzly-widget
    product: sandbox-pizzly
    app.kubernetes.io/name: "sandbox-pizzly-widget"
    app.kubernetes.io/version: "latest"
    app.kubernetes.io/managed-by: "xxxx"
    app.kubernetes.io/component: "sandbox-pizzly-widget"
    app.kubernetes.io/part-of: "sandbox-pizzly"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sandbox-pizzly-widget
  template:
    metadata:
      labels:
        app: sandbox-pizzly-widget
    spec:
      containers:
        - name: sandbox-pizzly-widget
          image: davidep931/pizzly-proxy:latest
          ports:
            - containerPort: 8080
          env:
            - name: NODE_ENV
              value: "production"
            - name: DASHBOARD_USERNAME
              value: "admin"
            - name: DASHBOARD_PASSWORD
              value: "admin"
            - name: SECRET_KEY
              value: "devSecretKey"
            - name: PUBLISHABLE_KEY
              value: "devPubKey"
            - name: PROXY_USES_SECRET_KEY_ONLY
              value: "FALSE"
            - name: COOKIE_SECRET
              value: "devCookieSecret"
            - name: AUTH_CALLBACK_URL
              value: "https://pizzly.mydomain/auth/callback"
            - name: DB_HOST
              value: "10.x.x.x"
            - name: DB_PORT
              value: "5432"
            - name: DB_DATABASE
              value: "postgresdb"
            - name: DB_USER
              value: "username"
            - name: DB_PASSWORD
              value: "password"
            - name: PORT
              value: "8080"
          volumeMounts:
            - mountPath: "/home/node/app/integrations"
              name: pizzlystorage
          resources:
            requests:
              memory: "100Mi"
              cpu: "50m"
            limits:
              cpu: "75m"
              memory: "200Mi"
      volumes:
        - name: pizzlystorage
          persistentVolumeClaim:
            claimName: sandbox-pizzly-pv-claim
---
apiVersion: v1
kind: Service
metadata:
  namespace: sandbox-pizzly
  name: sandbox-pizzly-widget
spec:
  ports:
    - port: 8080
      targetPort: 8080
  selector:
    app: sandbox-pizzly-widget
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: sandbox-pizzly-pv-volume
  labels:
    type: local
    app: products
spec:
  storageClassName: azurefile
  capacity:
    storage: 1Gi
  azureFile:
    secretName: azure-secret
    shareName: sandbox-pizzly-pv
    readOnly: false
    secretNamespace: sandbox-pizzly
  accessModes:
    - ReadWriteMany
  claimRef:
    namespace: sandbox-pizzly
    name: sandbox-pizzly-pv-claim
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  namespace: sandbox-pizzly
  name: sandbox-pizzly-pv-claim
  labels:
    app: products
spec:
  storageClassName: azurefile
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azurefilestorage
provisioner: kubernetes.io/azure-file
parameters:
  storageAccount: persistentsapizzly
reclaimPolicy: Retain
---
apiVersion: v1
kind: Secret
metadata:
  name: azure-secret
  namespace: sandbox-pizzly
type: Opaque
data:
  azurestorageaccountname: xxxxxxxxxxxxxxxxxxxxx
  azurestorageaccountkey: xxxxxxxxxxxxxxxxxxxxxxxxxxx

If, in the few seconds before the pod crashes, I exec into it, go to the integrations folder, and run touch test.txt, I do find that file in the Azure File share.
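For reference, the check looks roughly like this (the pod name below is a placeholder; the real one comes from kubectl get pods -n sandbox-pizzly):

# Pod name is illustrative
kubectl exec -n sandbox-pizzly sandbox-pizzly-widget-xxxxxxxxxx-xxxxx -- \
  touch /home/node/app/integrations/test.txt
# test.txt then appears in the sandbox-pizzly-pv Azure File share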

Here is what I see a few seconds before the shell auto-closes due to the CrashLoopBackOff.

Here is the Dockerfile:

FROM node:14-slim

WORKDIR /app

# Copy in dependencies for building
COPY *.json ./
COPY yarn.lock ./
# COPY config ./config
COPY integrations ./integrations/
COPY src ./src/
COPY tests ./tests/
COPY views ./views/

RUN yarn install


# Actual image to run from.
FROM node:14-slim

# Make sure we have ca certs for TLS
RUN apt-get update && apt-get install -y \
    curl \
    wget \
    gnupg2 ca-certificates libnss3  \
    git

# Make a directory for the node user. Not running Pizzly as root.
RUN mkdir /home/node/app && chown -R node:node /home/node/app
WORKDIR /home/node/app

USER node

# Startup script
COPY --chown=node:node ./startup.sh ./startup.sh
RUN chmod +x ./startup.sh
# COPY from first container
COPY --chown=node:node --from=0 /app/package.json ./package.json
COPY --chown=node:node --from=0 /app/dist/ .
COPY --chown=node:node --from=0 /app/views ./views
COPY --chown=node:node --from=0 /app/node_modules ./node_modules

# Run the startup script
CMD ./startup.sh

Here is the startup.sh script:

#!/bin/sh

# Docker Startup script

# Apply migration
./node_modules/.bin/knex --cwd ./src/lib/database/config migrate:latest

# Start App
node ./src/index.js

Do you have any idea what I'm missing or doing wrong?

Thank you, Dave.

Upvotes: 0

Views: 492

Answers (1)

Charles Xu

Reputation: 31414

Well, there are two things you need to know when you mount an Azure file share onto an existing folder of a pod as a volume:

  1. it will cover (hide) the files that already exist in that folder
  2. the mounted path will be owned by the root user

So the above means that if your application depends on the existing files to start, this will cause the problem. And if your application runs as a non-root user, for example a user named app, that may also cause a problem. Here I guess the problem is caused by the first limitation.
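If the app does need the files baked into the image under /home/node/app/integrations, one common workaround is to seed the share from an init container before the app starts. Below is a minimal sketch against the manifests from the question, assuming the image really ships its integrations at that path; the init container name and the /mnt/seed staging path are illustrative:

apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      initContainers:
        # Runs to completion before the app container starts; copies the
        # integrations bundled in the image into the (initially empty) share.
        - name: seed-integrations            # illustrative name
          image: davidep931/pizzly-proxy:latest
          command: ["sh", "-c", "cp -a /home/node/app/integrations/. /mnt/seed/"]
          volumeMounts:
            - mountPath: "/mnt/seed"         # staging path, not the app path
              name: pizzlystorage
      containers:
        - name: sandbox-pizzly-widget
          # ...image, env, ports and resources as in the question...
          volumeMounts:
            - mountPath: "/home/node/app/integrations"
              name: pizzlystorage
      volumes:
        - name: pizzlystorage
          persistentVolumeClaim:
            claimName: sandbox-pizzly-pv-claim

For the second limitation, an azureFile PersistentVolume accepts CIFS mount options, so the share can be mounted with the app user's uid/gid instead of root; in the official node images the node user is uid/gid 1000. This also makes the share writable by the non-root init container above, so the two sketches work together:

spec:
  azureFile:
    secretName: azure-secret
    shareName: sandbox-pizzly-pv
    readOnly: false
    secretNamespace: sandbox-pizzly
  mountOptions:
    - uid=1000        # the "node" user in node:14-slim
    - gid=1000
    - dir_mode=0755
    - file_mode=0644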

Upvotes: 1
