Reputation: 2911
I'm new to Kubernetes.
Currently I'm trying to deploy a Laravel app on Kubernetes. I have set up one deployment YAML containing two containers (nginx and php-fpm) and a shared volume.
Here's the full yaml:
apiVersion: v1
kind: Service
metadata:
  name: operation-service
  labels:
    app: operation-service
spec:
  type: NodePort
  selector:
    app: operation
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      name: http
    - port: 443
      targetPort: 443
      protocol: TCP
      name: https
    - port: 9000
      targetPort: 9000
      protocol: TCP
      name: fastcgi
---
# Create a pod containing the PHP-FPM application (my-php-app)
# and nginx, each mounting the `shared-files` volume to their
# respective /var/www/ directories.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: operation
spec:
  selector:
    matchLabels:
      app: operation
  replicas: 1
  template:
    metadata:
      labels:
        app: operation
    spec:
      volumes:
        # Create the shared files volume to be used in both pods
        - name: shared-files
          emptyDir: {}
        # Secret containing the TLS certificate and key
        - name: secret-volume
          secret:
            secretName: nginxsecret
        # Add the ConfigMap we declared for the conf.d in nginx
        - name: configmap-volume
          configMap:
            name: nginxconfigmap
      containers:
        # Our PHP-FPM application
        - image: asia.gcr.io/operations
          name: app
          volumeMounts:
            - name: shared-files
              mountPath: /var/www
          ports:
            - containerPort: 9000
          lifecycle:
            postStart:
              exec:
                command: ["/bin/sh", "-c", "cp -r /app/. /var/www"]
        - image: nginx:latest
          name: nginx
          ports:
            - containerPort: 443
            - containerPort: 80
          volumeMounts:
            - name: shared-files
              mountPath: /var/www
            - mountPath: /etc/nginx/ssl
              name: secret-volume
            - mountPath: /etc/nginx/conf.d
              name: configmap-volume
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/from-to-www-redirect: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: 100m
spec:
  rules:
    - host: testing.com
      http:
        paths:
          - path: /
            backend:
              serviceName: operation-service
              servicePort: 443
Here's my working nginx conf:
server {
    listen 80;
    listen [::]:80;

    # For https
    listen 443 ssl;
    listen [::]:443 ssl ipv6only=on;
    ssl_certificate /etc/nginx/ssl/tls.crt;
    ssl_certificate_key /etc/nginx/ssl/tls.key;

    server_name testing.com;
    root /var/www/public;
    index index.php index.html index.htm;

    location / {
        try_files $uri $uri/ /index.php$is_args$args;
    }

    location ~ \.php$ {
        try_files $uri /index.php =404;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_buffers 16 16k;
        fastcgi_buffer_size 32k;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        # fixes timeouts
        fastcgi_read_timeout 600;
        include fastcgi_params;
    }

    location ~ /\.ht {
        deny all;
    }

    location /.well-known/acme-challenge/ {
        root /var/www/letsencrypt/;
        log_not_found off;
    }

    error_log /var/log/nginx/laravel_error.log;
    access_log /var/log/nginx/laravel_access.log;
}
After deploying, my app won't load on the web. It turns out the nginx log is returning:
[error] 19#19: *64 FastCGI sent in stderr: "PHP message: PHP Fatal error: Uncaught ErrorException: file_put_contents(/app/storage/framework/views/ba2564046cc89e436fb993df6f661f314e4d2efb.php): failed to open stream: Permission denied in /var/www/vendor/laravel/framework/src/Illuminate/Filesystem/Filesystem.php:185
I know how to set up the volume correctly in local Docker; how do I set the shared volume permissions in Kubernetes correctly?
Upvotes: 3
Views: 5113
Reputation: 2911
For anyone who is looking for an answer: I managed to set up Kubernetes for our production server with php-fpm and nginx.
It requires two images: one contains php-fpm and our code, the other is an nginx image with our conf in it.
We also had to set up a shared volume that both containers can access. What I was missing was the postStart command to run chmod and php artisan optimize, to make sure the cache was cleared.
For future reference, run kubectl logs <pod-name> and kubectl describe pod <pod-name> to easily debug and see what happens in each pod.
Here's the final working config; I hope it helps someone in the future.
apiVersion: v1
kind: Service
metadata:
  name: operation-service
  labels:
    app: operation-service
spec:
  type: NodePort
  selector:
    app: operation
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      name: http
---
# Create a pod containing the PHP-FPM application (my-php-app)
# and nginx, each mounting the `shared-files` volume to their
# respective /var/www/ directories.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: operation
spec:
  selector:
    matchLabels:
      app: operation
  replicas: {{ .Values.replicaCount }}
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 50%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: operation
    spec:
      volumes:
        # Create the shared files volume to be used in both pods
        - name: shared-files
          emptyDir: {}
      securityContext:
        fsGroup: 82
      containers:
        # Our PHP-FPM application
        - image: asia.gcr.io/3/operations:{{ .Values.version }}
          name: app
          envFrom:
            - configMapRef:
                name: prod
            - secretRef:
                name: prod
          volumeMounts:
            - name: shared-files
              mountPath: /var/www
          # Important! After this container has started, the PHP files
          # in our Docker image aren't in the shared volume. We need to
          # get them into the shared volume. If we tried to write directly
          # to this volume from our Docker image the files wouldn't appear
          # in the nginx container.
          #
          # So, after the container has started, copy the PHP files from this
          # container's local filesystem (/app -- added via the Docker image)
          # to the shared volume, which is mounted at /var/www.
          ports:
            - containerPort: 9000
              name: fastcgi
          lifecycle:
            postStart:
              exec:
                command:
                  - "/bin/sh"
                  - "-c"
                  - >
                    cp -r /app/. /var/www &&
                    cd /var/www &&
                    php artisan optimize &&
                    php artisan migrate --force &&
                    chgrp -R www-data /var/www/* &&
                    chmod -R 775 /var/www/*
        # Our nginx container, which uses the configuration declared above,
        # along with the files shared with the PHP-FPM app.
        - image: asia.gcr.io/3/nginx:1.0
          name: nginx
          ports:
            - containerPort: 80
          volumeMounts:
            - name: shared-files
              mountPath: /var/www
# We don't need this anymore as we're not using fastcgi straightaway
# ---
# apiVersion: v1
# kind: ConfigMap
# metadata:
#   name: ingress-cm
# data:
#   SCRIPT_FILENAME: "/var/www/public/index.php$is_args$args"
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: operation-ingress
  labels:
    app: operation-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/from-to-www-redirect: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
    - hosts:
        - myservice.com.au
      secretName: kubernetes-tls
  rules:
    - host: myservice.com.au
      http:
        paths:
          - backend:
              serviceName: operation-service
              servicePort: 80
Upvotes: 2
Reputation: 30188
As you can read in the log, it clearly says permission denied, which means nginx doesn't have permission to access the files. You may have to change the permissions of the directory or files so that nginx can access them.
You can either change the permissions during the Docker build, or run a pre-hook or a command that runs with the image at deployment time.
Something like:
sudo chmod -R 775 /var/www/vendor
or
sudo chmod -R 755 /var/www/
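If you'd rather fix the permissions at build time, as mentioned above, a minimal Dockerfile sketch might look like the following (the base image, /app path, and www-data user are illustrative assumptions, matching the official php-fpm images):

```dockerfile
# Hypothetical sketch: bake the ownership and mode into the image so the
# files copied into the shared volume already carry the right group.
FROM php:7.4-fpm
# Copy the application code into the image (path is an assumption)
COPY . /app
# Give the www-data group access; nginx's worker runs as www-data in
# the official images, so group-readable files are enough.
RUN chown -R www-data:www-data /app \
 && chmod -R 775 /app
```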
I was trying to set up WordPress the same way, with an nginx container alongside php-fpm, and faced the same issue.
You can find all the example files here: https://github.com/harsh4870/Kubernetes-wordpress-php-fpm-nginx
Upvotes: 0
Reputation: 4614
You can use an init container as described here to change the permissions of mounted directories, or you can set an fsGroup to change the group ID that owns the volume, as described here.
In your case I think it will be easier to set permissions by modifying your "copy" command:
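As a sketch of those two options (the fix-permissions container name and busybox image are illustrative; the shared-files volume matches your deployment):

```yaml
# Option 1: pod-level fsGroup. Kubernetes sets this GID as the group of
# the volume when it is mounted (82 is www-data in Alpine-based images,
# 33 in Debian-based ones).
spec:
  securityContext:
    fsGroup: 82

# Option 2: an init container that fixes permissions before the app
# containers start.
spec:
  initContainers:
    - name: fix-permissions
      image: busybox
      command: ["sh", "-c", "chmod -R 775 /var/www"]
      volumeMounts:
        - name: shared-files
          mountPath: /var/www
```

Note that with an emptyDir volume the init container runs before your postStart copy, so it can only fix the empty directory itself; fsGroup or fixing permissions in the copy command is more reliable here.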
command: ["/bin/sh", "-c", "cp -r /app/. /var/www"]
adding a chmod command with appropriate parameters, e.g.:
command: ["/bin/sh", "-c", "cp -r /app/. /var/www && chmod -R a+r /var/www"]
Upvotes: 1