Reputation: 2040
I would like to do two things with MicroK8s:
My end goal is to create a single-node Kubernetes cluster that sits on the Ubuntu host, then use ingress to route different domains to their respective pods inside the cluster.
I've been attempting to do this with Microk8s for the past couple of days but can't wrap my head around it.
The best I've gotten so far is using MetalLB to create a load balancer, but this required me to use a free IP address available on my local network rather than the host machine's IP address.
I've also enabled the default-http-backend and attempted to export and edit these config files, with no success.
As an example, this will work on Minikube once the ingress addon is enabled. This example serves the base Nginx server image on port 80 at the cluster IP:
# ingress-service.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    # - host: nginx.ioo
    - http:
        paths:
          - path: /
            backend:
              serviceName: nginx-cluster-ip-service
              servicePort: 80
# nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: nginx
  template:
    metadata:
      labels:
        component: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
# nginx-cluster-ip-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-cluster-ip-service
spec:
  type: ClusterIP
  selector:
    component: nginx
  ports:
    - port: 80
      targetPort: 80
Upvotes: 34
Views: 42380
Reputation: 169
To install ingress-nginx so that it works with ingressClass=nginx, use:
# https://kubernetes.github.io/ingress-nginx/deploy/
helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace
Upvotes: -1
Reputation: 1823
When using MicroK8s 1.21+, this is what an Ingress configuration should look like after running microk8s enable ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontend-ingress
spec:
  rules:
    - host: staging.resplendentdata.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend-service
                port:
                  number: 80
  ingressClassName: public
Upvotes: 1
Reputation: 6881
Changing ingress.class from nginx to public as proposed here, and setting up a DNS entry (using my external provider's console) from * to my public IP (not a hostname), were the two conditions sufficient to replicate an OpenShift-style route (aka "name-based virtual hosting") under microk8s installed on metal.
Load balancing between all pod replicas works fine despite no MetalLB being installed (as seen from the output of gcr.io/google-samples/hello-app). Even HTTPS worked out of the box, thanks to the self-signed certs auto-generated by the ingress controller.
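As an illustration of what such name-based virtual hosting can look like (the hostnames and Service names below are placeholders, not taken from the original setup), a host-based Ingress under MicroK8s might be:
# Hypothetical sketch: name-based virtual hosting with the MicroK8s ingress addon.
# Hostnames and Service names are placeholders; adjust them to your own setup.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: virtual-hosts
spec:
  ingressClassName: public
  rules:
    - host: app1.example.com      # first domain -> first Service
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app1-service
                port:
                  number: 80
    - host: app2.example.com      # second domain -> second Service
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app2-service
                port:
                  number: 80
Each host rule routes requests for that domain to its own backend Service, which is essentially what the question is asking for.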
Upvotes: 0
Reputation: 1005
Update the annotation to be kubernetes.io/ingress.class: public.
For MicroK8s v1.21, running microk8s enable ingress will create a DaemonSet called nginx-ingress-microk8s-controller in the ingress namespace.
If you inspect that, there is a flag to set the ingress class:
- args:
    ... omitted ...
    - --ingress-class=public
    ... omitted ...
Therefore, in order to work with most examples online, you need to either:
- remove the --ingress-class=public argument so it defaults to nginx, or
- change kubernetes.io/ingress.class: nginx to kubernetes.io/ingress.class: public in your manifests (a minimal sketch follows below).
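For example, the second option applied to a minimal manifest could look like the sketch below; the Service name and port are placeholders:
# Minimal sketch: Ingress annotated for the MicroK8s "public" ingress class.
# The Service name and port are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: public
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-service
                port:
                  number: 80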
Upvotes: 29
Reputation: 322
If you need to expose a service publicly with HTTPS and authentication, that may become rather involved, as you need to configure a) an ingress, b) a TLS certificate service, e.g. using Let's Encrypt, c) an authentication proxy, and d) user authorization in your app.
If your K8s cluster is running on a server with no public IP, that brings an additional complication, as you need to penetrate NAT.
https://github.com/gwrun/tutorials/tree/main/k8s/pod demonstrates how to securely expose a k8s service running on a microk8s cluster with no public IP as a publicly accessible HTTPS endpoint with OAuth authentication and authorization, using the Kubernetes Dashboard as a sample service.
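As a rough illustration only (not the approach taken in the linked tutorial), an Ingress combining TLS via cert-manager with an external auth proxy through ingress-nginx annotations might look like this; the issuer name, hostnames, secret name, and Service name are all placeholders:
# Illustrative sketch: TLS (via cert-manager) plus external authentication
# (via ingress-nginx auth annotations). All names and URLs are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: secure-app
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod   # assumes cert-manager is installed
    nginx.ingress.kubernetes.io/auth-url: "https://oauth.example.com/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://oauth.example.com/oauth2/start?rd=$scheme://$host$request_uri"
spec:
  ingressClassName: public
  tls:
    - hosts:
        - app.example.com
      secretName: app-example-com-tls
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-service
                port:
                  number: 80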
Upvotes: 0
Reputation: 369
The statement "The best I've gotten so far is using MetalLB to create a load balancer" is wrong. You must use the ingress layer for host-based traffic routing.
In a bare metal environment you need to configure MetalLB to allow incoming connections from the host to k8s.
First we need a test:
curl -H "Host: nginx.ioo" http://HOST_IP
What is the result?
If you get a network error, then you need MetalLB:
microk8s.enable metallb:$(curl ipinfo.io/ip)-$(curl ipinfo.io/ip)
Run the test again.
If you still get a network error, then something else is wrong; check host connectivity.
If you get error 404 (sometimes 503), then you need an ingress rule:
# ingress-service.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: nginx.ioo
      http:
        paths:
          - path: /
            backend:
              serviceName: nginx-cluster-ip-service
              servicePort: 80
Run the test one last time; it should work now.
You can then use ingress to route different domains to their respective pods inside the cluster.
Upvotes: 5
Reputation: 96
When using a LoadBalancer (e.g. MetalLB), there is an important step missing from almost all docs:
The ingress-controller needs to be exposed to the metallb LoadBalancer.
kubectl expose deploy nginx-deployment --port 80 --type LoadBalancer
This can be done with a YAML manifest as well (a sketch is shown below), but it's easier to use the CLI.
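For reference, a roughly equivalent Service manifest might look like the following sketch; the selector labels are an assumption based on the deployment from the question:
# Rough YAML equivalent of the kubectl expose command above.
# The selector labels are assumed from the question's deployment; adjust as needed.
apiVersion: v1
kind: Service
metadata:
  name: nginx-deployment
spec:
  type: LoadBalancer
  selector:
    component: nginx
  ports:
    - port: 80
      targetPort: 80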
After days of googling, I finally came across this tutorial video that opened my eyes:
https://www.youtube.com/watch?v=xYiYIjlAgHY
Upvotes: 1
Reputation: 11446
If I understood you correctly, there are a few approaches you might be looking at.
One would be MetalLB which you already mentioned.
MetalLB provides a network load-balancer implementation for Kubernetes clusters that do not run on a supported cloud provider, effectively allowing the usage of LoadBalancer Services within any cluster.
You can read about the detailed implementation in A pure software solution: MetalLB.
Another way would be over a NodePort Service.
This approach has a few other limitations one ought to be aware of (a minimal NodePort sketch follows this list):
- Source IP address
Services of type NodePort perform source address translation by default. This means the source IP of an HTTP request is always the IP address of the Kubernetes node that received the request, from the perspective of NGINX.
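As a minimal sketch of that approach (the Service name, selector, and node port below are placeholders):
# Hypothetical NodePort Service: exposes port 80 of the matching Pods
# on a high port (here 30080) of every Kubernetes node.
apiVersion: v1
kind: Service
metadata:
  name: example-nodeport
spec:
  type: NodePort
  selector:
    component: nginx
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080   # must fall within the default 30000-32767 range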
You can also use the host network:
In a setup where there is no external load balancer available but using NodePorts is not an option, one can configure ingress-nginx Pods to use the network of the host they run on instead of a dedicated network namespace. The benefit of this approach is that the NGINX Ingress controller can bind ports 80 and 443 directly to the Kubernetes nodes' network interfaces, without the extra network translation imposed by NodePort Services.
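The relevant part of the controller's Pod spec would look something like the fragment below; this is only an illustration of the hostNetwork setting, not the exact MicroK8s or ingress-nginx manifest:
# Illustrative fragment only: running an ingress controller Pod on the host network
# so it can bind ports 80/443 directly on the node.
spec:
  template:
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet   # keep in-cluster DNS resolution working
      containers:
        - name: nginx-ingress-controller
          ports:
            - containerPort: 80
            - containerPort: 443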
You also have to remember that if you edit the configuration inside the Pod, it will be gone if the Pod is restarted or crashes.
I hope this helps you to determine which way to go with your idea.
Upvotes: 16