Petchav

Read-only file system issue with Nextcloud Helm chart with Longhorn persistent volume claim

I have a Raspberry Pi cluster running k3s with a Rancher interface, and I'm trying to install the official Nextcloud Helm chart.

I've successfully configured Longhorn on my cluster to provide data persistence for PostgreSQL. In my setup, the PostgreSQL section correctly creates a PersistentVolumeClaim, and everything works fine.

However, when I attempt to enable persistence for Nextcloud data by setting persistence.enabled: true, my nextcloud-nginx Pod fails to come up and produces the following logs:

/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: can not modify /etc/nginx/conf.d/default.conf (read-only file system?)
/docker-entrypoint.sh: Sourcing /docker-entrypoint.d/15-local-resolvers.envsh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2024/08/31 10:30:47 [warn] 1#1: duplicate extension "js", content type: "text/javascript", previous content type: "application/javascript" in /etc/nginx/conf.d/default.conf:50
nginx: [warn] duplicate extension "js", content type: "text/javascript", previous content type: "application/javascript" in /etc/nginx/conf.d/default.conf:50
2024/08/31 10:30:47 [notice] 1#1: using the "epoll" event method
2024/08/31 10:30:47 [notice] 1#1: nginx/1.27.1
2024/08/31 10:30:47 [notice] 1#1: built by gcc 13.2.1 20240309 (Alpine 13.2.1_git20240309) 
2024/08/31 10:30:47 [notice] 1#1: OS: Linux 6.8.0-1010-raspi
2024/08/31 10:30:47 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2024/08/31 10:30:47 [notice] 1#1: start worker processes
2024/08/31 10:30:47 [notice] 1#1: start worker process 20
2024/08/31 10:30:47 [notice] 1#1: start worker process 21
2024/08/31 10:30:47 [notice] 1#1: start worker process 22
2024/08/31 10:30:47 [notice] 1#1: start worker process 23
10.42.0.1 - - [31/Aug/2024:10:31:01 +0000] "GET /status.php HTTP/1.1" 500 5 "-" "kube-probe/1.30" "-"
10.42.0.1 - - [31/Aug/2024:10:31:01 +0000] "GET /status.php HTTP/1.1" 500 5 "-" "kube-probe/1.30" "-"
10.42.0.1 - - [31/Aug/2024:10:31:11 +0000] "GET /status.php HTTP/1.1" 500 5 "-" "kube-probe/1.30" "-"
10.42.0.1 - - [31/Aug/2024:10:31:11 +0000] "GET /status.php HTTP/1.1" 500 5 "-" "kube-probe/1.30" "-"
10.42.0.1 - - [31/Aug/2024:10:31:21 +0000] "GET /status.php HTTP/1.1" 500 5 "-" "kube-probe/1.30" "-"
10.42.0.1 - - [31/Aug/2024:10:31:21 +0000] "GET /status.php HTTP/1.1" 500 5 "-" "kube-probe/1.30" "-"
10.42.0.1 - - [31/Aug/2024:10:31:21 +0000] "GET /status.php HTTP/1.1" 500 5 "-" "kube-probe/1.30" "-"
2024/08/31 10:31:21 [notice] 1#1: signal 3 (SIGQUIT) received, shutting down
2024/08/31 10:31:21 [notice] 22#22: gracefully shutting down
2024/08/31 10:31:21 [notice] 22#22: exiting
2024/08/31 10:31:21 [notice] 20#20: gracefully shutting down
2024/08/31 10:31:21 [notice] 20#20: exiting
2024/08/31 10:31:21 [notice] 21#21: gracefully shutting down
2024/08/31 10:31:21 [notice] 21#21: exiting
2024/08/31 10:31:21 [notice] 20#20: exit
2024/08/31 10:31:21 [notice] 22#22: exit
2024/08/31 10:31:21 [notice] 21#21: exit
2024/08/31 10:31:21 [notice] 23#23: gracefully shutting down
2024/08/31 10:31:21 [notice] 23#23: exiting
2024/08/31 10:31:21 [notice] 23#23: exit
2024/08/31 10:31:21 [notice] 1#1: signal 17 (SIGCHLD) received from 22
2024/08/31 10:31:21 [notice] 1#1: worker process 22 exited with code 0
2024/08/31 10:31:21 [notice] 1#1: signal 29 (SIGIO) received
2024/08/31 10:31:21 [notice] 1#1: signal 17 (SIGCHLD) received from 21
2024/08/31 10:31:21 [notice] 1#1: worker process 21 exited with code 0
2024/08/31 10:31:21 [notice] 1#1: signal 29 (SIGIO) received
2024/08/31 10:31:21 [notice] 1#1: signal 17 (SIGCHLD) received from 23
2024/08/31 10:31:21 [notice] 1#1: worker process 23 exited with code 0
2024/08/31 10:31:21 [notice] 1#1: signal 29 (SIGIO) received
2024/08/31 10:31:21 [notice] 1#1: signal 17 (SIGCHLD) received from 20
2024/08/31 10:31:21 [notice] 1#1: worker process 20 exited with code 0
2024/08/31 10:31:21 [notice] 1#1: exit

I can't figure out what the issue is because when I use the exact same configuration with persistence.enabled: false, the nextcloud-nginx Pod works perfectly.

If anyone could help me understand how k3s handles persistent data, I would greatly appreciate it.
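For what it's worth, here is roughly how I've been inspecting the volumes. This is just my diagnostic sketch; the namespace `nextcloud` and the label selector are assumptions on my part and may not match your setup:

```shell
# Check that the Nextcloud PVC is created and bound to a Longhorn volume
# (namespace "nextcloud" is assumed)
kubectl get pvc -n nextcloud

# Inspect the pod events and volume mounts for clues about the mount failure
# (the label selector is my assumption based on common chart conventions)
kubectl describe pod -n nextcloud -l app.kubernetes.io/name=nextcloud

# From inside the pod, check whether the filesystem is actually mounted read-only
kubectl exec -n nextcloud deploy/nextcloud -- mount
```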

I've attached my values.yaml configuration file in case there's something wrong with my setup.

affinity: {}
cronjob:
  enabled: false
  lifecycle: {}
  resources: {}
  securityContext: {}
deploymentAnnotations: {}
deploymentLabels: {}
dnsConfig: {}
externalDatabase:
  database: nextcloud
  enabled: true
  existingSecret:
    enabled: true
    usernameKey: db-user
    passwordKey: db-password
  host: 'nextcloud-postgresql'
  user: null
  password: null
  type: postgresql
fullnameOverride: ''
hpa:
  cputhreshold: 60
  enabled: false
  maxPods: 10
  minPods: 1
image:
  flavor: fpm
  pullPolicy: IfNotPresent
  repository: nextcloud
  tag: null
ingress:
  annotations:
    cert-manager.io/issuer: rancher
    nginx.ingress.kubernetes.io/cors-allow-headers: X-Forwarded-For
    nginx.ingress.kubernetes.io/enable-cors: 'true'
    nginx.ingress.kubernetes.io/server-snippet: |-
      server_tokens off;
      proxy_hide_header X-Powered-By;
      rewrite ^/.well-known/webfinger /index.php/.well-known/webfinger last;
      rewrite ^/.well-known/nodeinfo /index.php/.well-known/nodeinfo last;
      rewrite ^/.well-known/host-meta /public.php?service=host-meta last;
      rewrite ^/.well-known/host-meta.json /public.php?service=host-meta-json;
      location = /.well-known/carddav {
        return 301 $scheme://$host/remote.php/dav;
      }
      location = /.well-known/caldav {
        return 301 $scheme://$host/remote.php/dav;
      }
      location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
      }
      location ~ ^/(?:build|tests|config|lib|3rdparty|templates|data)/ {
        deny all;
      }
      location ~ ^/(?:autotest|occ|issue|indie|db_|console) {
        deny all;
      }
  enabled: true
  labels: {}
  path: /
  pathType: Prefix
internalDatabase:
  enabled: false
  name: nextcloud
lifecycle: {}
livenessProbe:
  enabled: true
  failureThreshold: 3
  initialDelaySeconds: 10
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 5
mariadb:
  architecture: standalone
  auth:
    database: nextcloud
    existingSecret: ''
    password: changeme
    username: nextcloud
  enabled: false
  global:
    defaultStorageClass: ''
  primary:
    persistence:
      accessMode: ReadWriteOnce
      enabled: false
      existingClaim: ''
      size: 8Gi
      storageClass: ''
metrics:
  enabled: false
  https: false
  image:
    pullPolicy: IfNotPresent
    repository: xperimental/nextcloud-exporter
    tag: 0.6.2
  info:
    apps: false
  podSecurityContext: {}
  replicaCount: 1
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
  server: ''
  service:
    annotations:
      prometheus.io/port: '9205'
      prometheus.io/scrape: 'true'
    labels: {}
    type: ClusterIP
  serviceMonitor:
    enabled: false
    interval: 30s
    jobLabel: ''
    labels: {}
    namespace: ''
    namespaceSelector: null
    scrapeTimeout: ''
  timeout: 5s
  tlsSkipVerify: false
  token: ''
nameOverride: ''
nextcloud:
  configs:
    proxy.config.php: |-
      <?php
      $CONFIG = array (
        'trusted_proxies' => array(
          0 => '127.0.0.1',
          1 => '10.0.0.0/8',
        ),
        'maintenance_window_start' => 1,
      );
  containerPort: 80
  datadir: /var/www/html/data
  defaultConfigs:
    .htaccess: true
    apache-pretty-urls.config.php: true
    apcu.config.php: true
    apps.config.php: true
    autoconfig.php: true
    redis.config.php: true
    reverse-proxy.config.php: true
    s3.config.php: true
    smtp.config.php: true
    swift.config.php: true
    upgrade-disable-web.config.php: true
  existingSecret:
    enabled: false
    passwordKey: nextcloud-password
    smtpHostKey: smtp-host
    smtpPasswordKey: smtp-password
    smtpUsernameKey: smtp-username
    tokenKey: ''
    usernameKey: nextcloud-username
  extraEnv: null
  extraInitContainers: []
  extraSidecarContainers: []
  extraVolumeMounts: null
  extraVolumes: null
  hooks:
    before-starting: null
    post-installation: null
    post-upgrade: null
    pre-installation: null
    pre-upgrade: null
  host: nextcloud.mydomain.com
  mail:
    domain: domain.com
    enabled: false
    fromAddress: user
    smtp:
      authtype: LOGIN
      host: domain.com
      name: user
      password: pass
      port: 465
      secure: ssl
  mariaDbInitContainer:
    securityContext: {}
  objectStore:
    s3:
      accessKey: ''
      autoCreate: false
      bucket: ''
      enabled: false
      existingSecret: ''
      host: ''
      legacyAuth: false
      port: '443'
      prefix: ''
      region: eu-west-1
      secretKey: ''
      secretKeys:
        accessKey: ''
        bucket: ''
        host: ''
        secretKey: ''
        sse_c_key: ''
      sse_c_key: ''
      ssl: true
      storageClass: STANDARD
      usePathStyle: false
    swift:
      autoCreate: false
      container: ''
      enabled: false
      project:
        domain: Default
        name: ''
      region: ''
      service: swift
      url: ''
      user:
        domain: Default
        name: ''
        password: ''
  password: nextcloud-changeme
  persistence:
    subPath: null
  phpConfigs: {}
  podSecurityContext: {}
  postgreSqlInitContainer:
    securityContext: {}
  securityContext: {}
  strategy:
    type: Recreate
  trustedDomains:
    - nextcloud.mydomain.com
  update: 0
  username: admin
nginx:
  config:
    custom: null
    default: true
  containerPort: 80
  enabled: true
  extraEnv: []
  image:
    pullPolicy: IfNotPresent
    repository: nginx
    tag: alpine
  resources: {}
  securityContext: {}
nodeSelector: {}
persistence:
  accessMode: ReadWriteOnce
  annotations: {}
  enabled: true
  storageClass: longhorn
  nextcloudData:
    accessMode: ReadWriteOnce
    annotations: {}
    enabled: false
    size: 8Gi
    subPath: null
  size: 20Gi
phpClientHttpsFix:
  enabled: false
  protocol: https
podAnnotations: {}
postgresql:
  enabled: true
  global:
    postgresql:
      auth:
        database: nextcloud
        username: nextcloud
        password: null
        existingSecret: "nextcloud-postgres"
        secretKeys:
          adminPasswordKey: "admin-password"
          userPasswordKey: "user-password"
          replicationPasswordKey: "replication-password"
  primary:
    persistence:
      enabled: true
      storageClass: longhorn
      accessMode: ReadWriteOnce
      size: 3Gi
rbac:
  enabled: false
  serviceaccount:
    annotations: {}
    create: true
    name: nextcloud-serviceaccount
readinessProbe:
  enabled: true
  failureThreshold: 3
  initialDelaySeconds: 10
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 5
redis:
  auth:
    enabled: true
    existingSecret: ''
    existingSecretPasswordKey: ''
    password: changeme
  enabled: false
  global:
    storageClass: ''
  master:
    persistence:
      enabled: true
  replica:
    persistence:
      enabled: true
replicaCount: 1
resources: {}
securityContext: {}
service:
  annotations: {}
  loadBalancerIP: ''
  nodePort: nil
  port: 8080
  type: ClusterIP
startupProbe:
  enabled: false
  failureThreshold: 30
  initialDelaySeconds: 30
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 5
tolerations: []
global:
  cattle:
    systemProjectId: p-pdqk6

I tried several workarounds with extraInitContainers: and securityContext:, such as:

extraInitContainers: 
  - name: init-permissions
    image: busybox
    command: ['sh', '-c', 'chmod -R 775 /var/www/html/data']
    volumeMounts:
      - name: nextcloud-data
        mountPath: /var/www/html/data
securityContext:
  runAsUser: 82
  runAsGroup: 33
  runAsNonRoot: true
  readOnlyRootFilesystem: true

But it's still too abstract for me, and I don't understand the problem well enough. I don't know if I'm on the right track with this, or where exactly these parameters should be placed in the tree structure of the configuration file.

Thank you in advance for your help
