Reputation: 70
I'm new to GKE and Kubernetes and just trying to get a simple project up and running. Here's what I'm trying to accomplish in GKE, in a single cluster, single node pool, and single namespace:
an nginx deployment behind a LoadBalancer service, accepting HTTP traffic on port 80 and passing it on port 8000 to
a front-end deployment (Python Django) behind a ClusterIP service, accepting traffic on port 8000.
The front-end is already successfully communicating with a StatefulSet running a Postgres database. The front-end was successfully serving HTTP (gunicorn) before I switched its service from LoadBalancer to ClusterIP.
I don't know how to properly set up the Nginx configuration to pass traffic to the ClusterIP service for the front-end deployment. What I have is not working.
Any advice/suggestions would be appreciated. Here are the setup files:
nginx - /etc/nginx/conf.d/nginx.conf
upstream front-end {
    server front-end:8000;
}

server {
    listen 80;
    client_max_body_size 2M;

    location / {
        proxy_pass http://front-end;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }

    location /static/ {
        alias /usr/src/app/static/;
    }
}
nginx deployment/service
---
apiVersion: v1
kind: Service
metadata:
  name: "web-nginx"
  labels:
    app: "nginx"
spec:
  type: "LoadBalancer"
  ports:
  - port: 80
    name: "web"
  selector:
    app: "nginx"
---
apiVersion: "apps/v1"
kind: "Deployment"
metadata:
  name: "nginx"
  namespace: "default"
  labels:
    app: "nginx"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: "nginx"
  template:
    metadata:
      labels:
        app: "nginx"
    spec:
      containers:
      - name: "my-nginx"
image: "us.gcr.io/my_repo/my_nginx_image" # this is nginx:alpine + my staicfiles & nginx.conf
ports:
- containerPort: 80
args:
- /bin/sh
- -c
- while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g "daemon off;"
front-end deployment/service
---
apiVersion: v1
kind: Service
metadata:
  name: "front-end"
  labels:
    app: "front-end"
spec:
  type: "ClusterIP"
  ports:
  - port: 8000
    name: "django"
    targetPort: 8000
  selector:
    app: "front-end"
---
apiVersion: "apps/v1"
kind: "Deployment"
metadata:
  name: "front-end"
  namespace: "default"
  labels:
    app: "front-end"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: "front-end"
  template:
    metadata:
      labels:
        app: "front-end"
    spec:
      containers:
      - name: "myApp"
        image: "us.gcr.io/my_repo/myApp"
        ports:
        - containerPort: 8000
        args:
        - /bin/sh
        - -c
        - python manage.py migrate && gunicorn smokkr.wsgi:application --bind 0.0.0.0:8000
---
Upvotes: 2
Views: 752
Reputation: 2063
Kubernetes Ingress is the way to go about this. GKE uses a Google Cloud load balancer behind the scenes to provision your Kubernetes Ingress resource; when you create an Ingress object, the GKE ingress controller creates a Google Cloud HTTP(S) load balancer and configures it according to the information in the Ingress and its associated Services.
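As a rough sketch, a GKE Ingress pointing at the front-end Service from the question could look like the following (service name and port are taken from the question; on newer clusters the networking.k8s.io/v1 API is required instead, and the GKE ingress controller expects the backing Service to be NodePort unless container-native load balancing with NEGs is enabled):

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: front-end-ingress              # hypothetical name
  annotations:
    kubernetes.io/ingress.class: "gce" # GKE's built-in ingress controller
spec:
  backend:
    serviceName: front-end             # the ClusterIP service from the question
    servicePort: 8000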
This way you also get access to some Google-specific custom resource types, such as ManagedCertificate and static IP addresses, which can be associated with the Ingress in Kubernetes to achieve load balancing between services, or between clients and services.
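For illustration, a ManagedCertificate and a reserved static IP might be attached roughly like this (the certificate name, domain, and address name are hypothetical, and the ManagedCertificate apiVersion varies with GKE version):

apiVersion: networking.gke.io/v1beta2
kind: ManagedCertificate
metadata:
  name: front-end-cert            # hypothetical name
spec:
  domains:
  - app.example.com               # hypothetical domain
---
# referenced from the Ingress metadata via annotations:
metadata:
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "front-end-ip"  # a reserved global address (hypothetical)
    networking.gke.io/managed-certificates: "front-end-cert"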
Follow the documentation here to understand how to set up HTTP(S) load balancing on GKE using a Kubernetes Ingress: https://cloud.google.com/kubernetes-engine/docs/concepts/ingress
This tutorial is really helpful too -
https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer
Upvotes: 2
Reputation: 30160
It would be better to use an Ingress to forward traffic to a Service in Kubernetes.
You can find more documentation here: https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes
And in the official Kubernetes docs: https://kubernetes.io/docs/concepts/services-networking/ingress/
Simply deploy the NGINX ingress controller and apply the Ingress rule; behind the scenes it deploys nginx and converts the YAML rule into an nginx config.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        backend:
          serviceName: test
          servicePort: 80
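Adapted to the setup in the question, the rule would point at the front-end ClusterIP service on port 8000, something like this sketch (it assumes the NGINX ingress controller is installed in the cluster; the Ingress name is hypothetical):

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: front-end-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"  # route through the nginx ingress controller rather than GKE's
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: front-end          # the ClusterIP service from the question
          servicePort: 8000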
Upvotes: 2