Reputation: 73
I am trying to create a Kubernetes cluster containing 3 nodes:
a master node, where I installed and configured kubeadm and kubelet and deployed my system (a web application developed with Laravel),
and worker nodes, which joined the master without any problem.
I deployed my system to PHP-FPM pods and created services and a Horizontal Pod Autoscaler.
This is my service:

```
NAME   TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE   SELECTOR
PHP    LoadBalancer   10.108.218.232   <pending>     9000:30026/TCP   15h   app=php
```
These are my pods:

```
NAME                         READY   STATUS    RESTARTS   AGE   IP            NODE                NOMINATED NODE   READINESS GATES
qsinavphp-5b67996888-9clxp   1/1     Running   0          40m   10.244.0.4    taishan             <none>           <none>
qsinavphp-5b67996888-fnv7c   1/1     Running   0          43m   10.244.0.12   kubernetes-master   <none>           <none>
qsinavphp-5b67996888-gbtdw   1/1     Running   0          40m   10.244.0.3    taishan             <none>           <none>
qsinavphp-5b67996888-l6ghh   1/1     Running   0          33m   10.244.0.2    taishan             <none>           <none>
qsinavphp-5b67996888-ndbc8   1/1     Running   0          43m   10.244.0.11   kubernetes-master   <none>           <none>
qsinavphp-5b67996888-qgdbc   1/1     Running   0          43m   10.244.0.10   kubernetes-master   <none>           <none>
qsinavphp-5b67996888-t97qm   1/1     Running   0          43m   10.244.0.13   kubernetes-master   <none>           <none>
qsinavphp-5b67996888-wgrzb   1/1     Running   0          43m   10.244.0.14   kubernetes-master   <none>           <none>
```
The worker node is taishan, and the master is kubernetes-master. This is my nginx config, which sends requests to the PHP service:
```
server {
    listen 80;
    listen 443 ssl;
    server_name k8s.example.com;
    root /var/www/html/Test/project-starter/public;
    ssl_certificate "/var/www/cert/example.cer";
    ssl_certificate_key "/var/www/cert/example.key";
    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-XSS-Protection "1; mode=block";
    add_header X-Content-Type-Options "nosniff";
    index index.php;
    charset utf-8;
    # if ($scheme = http) {
    #     return 301 https://$server_name$request_uri;
    # }
    ssl_protocols TLSv1.2;
    ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES25>
    ssl_prefer_server_ciphers on;
    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }
    location = /favicon.ico { access_log off; log_not_found off; }
    location = /robots.txt  { access_log off; log_not_found off; }
    error_page 404 /index.php;
    location ~ [^/]\.php(/|$) {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_index index.php;
        fastcgi_pass 10.108.218.232:9000;
        include fastcgi_params;
        fastcgi_param PATH_INFO $fastcgi_path_info;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
    location ~ /\.(?!well-known).* {
        deny all;
    }
}
```
The problem is that I have 3 pods on the worker node and 5 pods on the master node, but no requests go to the worker's pods; all requests go to the master. Both of my nodes are in Ready status:
```
NAME                STATUS   ROLES                  AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
kubernetes-master   Ready    control-plane,master   15h   v1.20.4   10.14.0.58    <none>        Ubuntu 20.04.1 LTS   5.4.0-70-generic   docker://19.3.8
taishan             Ready    <none>                 79m   v1.20.5   10.14.2.66    <none>        Ubuntu 20.04.1 LTS   5.4.0-42-generic   docker://19.3.8
```
This is my `kubectl describe service php` result:
```
Name:                     php
Namespace:                default
Labels:                   tier=backend
Annotations:              <none>
Selector:                 app=php
Type:                     LoadBalancer
IP Families:              <none>
IP:                       10.108.218.232
IPs:                      10.108.218.232
Port:                     <unset>  9000/TCP
TargetPort:               9000/TCP
NodePort:                 <unset>  30026/TCP
Endpoints:                10.244.0.10:9000,10.244.0.11:9000,10.244.0.12:9000 + 7 more...
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type    Reason  Age  From                Message
  ----    ------  ---  ----                -------
  Normal  Type    48m  service-controller  ClusterIP -> LoadBalancer
```
This is the YAML file I am using to create the deployment:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: php
  name: qsinavphp
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: php
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: php
    spec:
      containers:
        - name: taishan-php-fpm
          image: starking8b/taishanphp:last
          imagePullPolicy: Never
          ports:
            - containerPort: 9000
          volumeMounts:
            - name: qsinav-nginx-config-volume
              mountPath: /usr/local/etc/php-fpm.d/www.conf
              subPath: www.conf
            - name: qsinav-nginx-config-volume
              mountPath: /usr/local/etc/php/conf.d/docker-php-memlimit.ini
              subPath: php-memory
            - name: qsinav-php-config-volume
              mountPath: /usr/local/etc/php/php.ini-production
              subPath: php.ini
            - name: qsinav-php-config-volume
              mountPath: /usr/local/etc/php/php.ini-development
              subPath: php.ini
            - name: qsinav-php-config-volume
              mountPath: /usr/local/etc/php-fpm.conf
              subPath: php-fpm.conf
            - name: qsinav-www-storage
              mountPath: /var/www/html/Test/qSinav-starter
          resources:
            limits:
              cpu: 4048m
            requests:
              cpu: 4048m
      restartPolicy: Always
      serviceAccountName: ""
      volumes:
        - name: qsinav-www-storage
          persistentVolumeClaim:
            claimName: qsinav-pv-www-claim
        - name: qsinav-nginx-config-volume
          configMap:
            name: qsinav-nginx-config
        - name: qsinav-php-config-volume
          configMap:
            name: qsinav-php-config
```
And this is my service YAML file:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: php
  labels:
    tier: backend
spec:
  selector:
    app: php
  ports:
    - protocol: TCP
      port: 9000
  type: LoadBalancer
```
I am not sure where my error is, so please help me solve this problem.
Upvotes: 1
Views: 676
Reputation: 73
Actually the problem was with the Flannel network: it was not able to make connections between the nodes. I solved it by installing the Weave Net plugin instead, which is working fine now, by applying this command:

```
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
```
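A quick way to confirm that pod-to-pod traffic actually crosses nodes after switching the CNI plugin is to ping a pod on one node from a pod on the other. A sketch (node names taken from the question; assumes the `busybox` image can be pulled):

```shell
# Pin one test pod to each node.
kubectl run net-test-a --image=busybox --restart=Never \
  --overrides='{"spec":{"nodeName":"kubernetes-master"}}' -- sleep 3600
kubectl run net-test-b --image=busybox --restart=Never \
  --overrides='{"spec":{"nodeName":"taishan"}}' -- sleep 3600

# Ping pod B from pod A; if this fails, the CNI plugin is the problem.
B_IP=$(kubectl get pod net-test-b -o jsonpath='{.status.podIP}')
kubectl exec net-test-a -- ping -c 3 "$B_IP"

# Clean up.
kubectl delete pod net-test-a net-test-b
```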
Upvotes: 1
Reputation: 2972
Here I have added the steps starting from a basic bare-metal Kubernetes installation.
### Creating SSH keys
From master node
`ssh-keygen`
Copy the content of `~/.ssh/id_rsa.pub`.
Log in to the other servers and paste the copied content into `~/.ssh/authorized_keys`.
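The manual copy above can usually be done in one step with `ssh-copy-id`. A sketch; `user@worker-1` and `user@worker-2` are placeholders for your actual logins and hosts:

```shell
# Generate a key pair on the master (default location ~/.ssh/id_rsa).
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa

# Append the public key to ~/.ssh/authorized_keys on each server.
ssh-copy-id user@worker-1
ssh-copy-id user@worker-2
```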
Follow these steps on all servers, both master and worker.
`sudo apt-get install python`
`sudo apt install python3-pip`
Adding Ansible
`sudo apt-add-repository ppa:ansible/ansible`
`sudo apt update`
`sudo apt-get install ansible -y`
[Reference](https://www.techrepublic.com/article/how-to-install-ansible-on-ubuntu-server-18-04/)
### Install Kubernetes
`sudo apt-get update`
`sudo apt-get install docker.io`
`sudo systemctl enable docker`
`curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -`
`sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"`
`sudo apt-get install kubeadm kubelet kubectl`
`sudo apt-mark hold kubeadm kubelet kubectl`
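After installing the packages, the cluster itself is bootstrapped with `kubeadm init` on the master and `kubeadm join` on each worker. A sketch, assuming Flannel's default pod CIDR (adjust the flag for your CNI plugin):

```shell
# On the master; 10.244.0.0/16 is Flannel's default pod network CIDR.
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# kubeadm init prints a join command for the workers, roughly of the form:
# sudo kubeadm join <master-ip>:6443 --token <token> \
#     --discovery-token-ca-cert-hash sha256:<hash>
```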
For more details please [refer](https://phoenixnap.com/kb/install-kubernetes-on-ubuntu)
### Installing Kubespray
`git clone https://github.com/kubernetes-incubator/kubespray.git`
`cd kubespray`
`sudo pip3 install -r requirements.txt`
`cp -rfp inventory/sample inventory/mycluster`
`declare -a IPS=(10.10.1.3 10.10.1.4 10.10.1.5)`
Please put your IP addresses here, separated by spaces.
`CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}`
`ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml`
For non-root user access:
`ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml --extra-vars "ansible_sudo_pass=password"`
This will take around 15 minutes to run successfully. If `root` user SSH is not working properly, this will fail; please check the key-sharing step again.
[10 Simple steps](https://dzone.com/articles/kubespray-10-simple-steps-for-installing-a-product)
[Add a node to existing cluster](https://www.serverlab.ca/tutorials/containers/kubernetes/how-to-add-workers-to-kubernetes-clusters/)
[kubelet debug](https://stackoverflow.com/questions/56463783/how-to-start-kubelet-service)
### Possible Errors
`kubectl get nodes`
> The connection to the server localhost:8080 was refused - did you specify the right host or port?
Perform the following as a normal (non-root) user:
`mkdir -p $HOME/.kube`
`sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config`
`sudo chown $(id -u):$(id -g) $HOME/.kube/config`
If you are in worker node, you will have to use `scp` to get `/etc/kubernetes/admin.conf` from master node. Master node may have this problem, if so please do these steps locally using normal user.
[Refer](https://www.edureka.co/community/18633/error-saying-connection-server-localhost-refused-specify)
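On a worker (or any admin machine), the same kubeconfig setup looks roughly like this; `user@master` is a placeholder for your actual login and master host:

```shell
mkdir -p $HOME/.kube
# Copy the admin kubeconfig from the master node.
scp user@master:/etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes   # should now reach the API server
```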
## Installing MetalLB
```
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/metallb.yaml

# On first install only
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
```
[Official Installation guide](https://metallb.universe.tf/installation/)
### Configuring L2 config
```
cat << EOF | kubectl create -f -
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.19-192.168.1.29 # Preferred IP range.
EOF
```
Verify installation success using `kubectl describe configmap config -n metallb-system`.
This will install two components: the controller and the speaker.
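The components and the resulting address assignment can be checked with (a sketch; `php` is the Service name from the question):

```shell
# Expect one controller pod plus one speaker pod per node, all Running.
kubectl get pods -n metallb-system

# A LoadBalancer Service should now get an EXTERNAL-IP from the configured
# pool instead of staying <pending>.
kubectl get svc php
```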
Upvotes: 0