Umesh

Reputation: 71

Load distribution not working in Kubernetes; requests are going to a single pod

I'm using Minikube for Kubernetes deployment, and my operating system is Ubuntu 18.04.

I have a Deployment of my application with 3 replicas, so 3 pods get deployed. When I increase the number of HTTP requests, all of them are forwarded to a single pod, while no logs are recorded in the other two replicas.

Any ideas on how to get the load distributed across all the pods?

I have the deployment.yaml and the respective service.yaml below.

    deployment.yaml

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: master
      labels:
        app: master-container
    spec:
      selector:
        matchLabels:
          app: master-container
      replicas: 3
      template:
        metadata:
          labels:
            app: master-container
        spec:
          volumes:
          - name: logs-dir
            hostPath:
              path: /data/logs/
              type: DirectoryOrCreate
          containers:
          - name: master-container
            image: image:1
            volumeMounts:
            - mountPath: /data/workspace/logs/
              name: logs-dir

    service.yaml

    apiVersion: v1
    kind: Service
    metadata:
      name: master-container
      labels:
        app: master-container
    spec:
      selector:
        app: master-container
      ports:
      - port: 2000
        protocol: TCP
        targetPort: 2000
        name: can-port
        nodePort: 32000
      - port: 2002
        protocol: TCP
        targetPort: 2002
        name: can-ajp-port
        nodePort: 32002
      - port: 2003
        protocol: TCP
        targetPort: 2003
        name: cas-port
        nodePort: 32003
      - port: 2005
        protocol: TCP
        targetPort: 2005
        name: cas-ajp-port
        nodePort: 32005
      - port: 31900
        protocol: TCP
        targetPort: 31900
        name: cas-master-port
        nodePort: 31900
      type: NodePort

Below is the output of the command

    kubectl describe svc canmastercontainer

We can see that the service is up and all the ports are accessible. The only issue I'm facing is that load distribution is not happening across the pods.

[screenshot: kubectl describe svc output]

I have tried setting the type to NodePort, LoadBalancer, and ClusterIP; none of the three worked out. All the HTTP requests are getting forwarded to a single pod.

Any solutions would be appreciated, thank you.

Upvotes: 2

Views: 1805

Answers (2)

PjoterS

Reputation: 14102

Every node in a Kubernetes cluster runs a kube-proxy. kube-proxy is responsible for implementing a form of virtual IP for Services of type other than ExternalName.

kube-proxy can work in a few modes: userspace, iptables, and IPVS.

However, apart from the built-in Kubernetes methods, you can also use 3rd-party software like the NGINX Ingress Controller.
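To see which mode your cluster's kube-proxy is actually running in, you can read its ConfigMap. A minimal sketch, assuming a standard kubeadm/Minikube kube-proxy ConfigMap layout; `get_proxy_mode` is a hypothetical helper:

```shell
# Extract the "mode:" value from a kube-proxy config read on stdin.
get_proxy_mode() {
  sed -n 's/^ *mode: *"\{0,1\}\([a-zA-Z]*\)"\{0,1\}.*/\1/p' | head -n 1
}

# Live usage (requires a cluster; not run here):
#   kubectl -n kube-system get cm kube-proxy \
#     -o jsonpath='{.data.config\.conf}' | get_proxy_mode
```

An empty value means kube-proxy falls back to its default mode (iptables on Linux).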

Test scenario

NGINX Ingress, bare-metal setup on a GCP Ubuntu VM.

Hello world YAML

apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-1
spec:
  replicas: 1
  selector:
    matchLabels:
      key: application-1
  template:
    metadata:
      labels:
        key: application-1
    spec:
      containers:
      - name: hello1
        image: gcr.io/google-samples/hello-app:1.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080

Service YAML:

apiVersion: v1
kind: Service
metadata:
  name: service-one
spec:
  selector:
    key: application-1
  ports:
    - port: 80
      targetPort: 8080

Ingress YAML:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"      
spec:
  rules:
  - http:
      paths:
      - path: /one
        backend:
          serviceName: service-one
          servicePort: 80
      - path: /two
        backend:
          serviceName: service-two
          servicePort: 80

Ingress Description:

Rules:
  Host        Path  Backends
  ----        ----  --------
  *           
              /one   service-one:80 (192.168.243.80:8080,192.168.243.84:8080,192.168.243.85:8080)
              /two   service-two:80 (192.168.243.81:8080,192.168.243.82:8080,192.168.243.83:8080)
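
The endpoint lists shown in that description can also be pulled programmatically. A small sketch for counting the endpoints registered behind a service; `count_endpoints` is a hypothetical helper, and the jsonpath assumes a standard Endpoints object:

```shell
# Count endpoint IPs read as a space-separated list on stdin.
count_endpoints() {
  tr ' ' '\n' | grep -c .
}

# Live usage (requires a cluster; not run here):
#   kubectl get endpoints service-one \
#     -o jsonpath='{.subsets[*].addresses[*].ip}' | count_endpoints
```

If this prints fewer endpoints than you have replicas, the Service selector is not matching all the pods.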
          

Tests

$ curl 34.89.244.41:31337/one
Hello, world!
Version: 1.0.0
Hostname: deployment-1-77ddb77d56-4x4pd
$ curl 34.89.244.41:31337/one
Hello, world!
Version: 1.0.0
Hostname: deployment-1-77ddb77d56-vq72x
$ curl 34.89.244.41:31337/one
Hello, world!
Version: 1.0.0
Hostname: deployment-1-77ddb77d56-v5826
$ curl 34.89.244.41:31337/one
Hello, world!
Version: 1.0.0
Hostname: deployment-2-fb984955c-xk5h9
$ curl 34.89.244.41:31337/two
Hello, world!
Version: 2.0.0
Hostname: deployment-2-fb984955c-lw74g
$ curl 34.89.244.41:31337/two
Hello, world!
Version: 2.0.0
Hostname: deployment-2-fb984955c-xk5h9
$ curl 34.89.244.41:31337/two
Hello, world!
Version: 2.0.0
Hostname: deployment-2-fb984955c-lw74g
$ curl 34.89.244.41:31337/two
Hello, world!
Version: 2.0.0
Hostname: deployment-2-fb984955c-8pfls
$ curl 34.89.244.41:31337/two
Hello, world!
Version: 2.0.0
Hostname: deployment-2-fb984955c-8pfls

As you can see, the requests were distributed across different pods.
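To quantify the spread, you can count the distinct pod names across a batch of requests. A sketch; `count_pods` is a hypothetical helper that parses the `Hostname:` lines from hello-app's responses:

```shell
# Read hello-app responses on stdin; print the number of distinct pods.
count_pods() {
  grep '^Hostname:' | sort -u | wc -l
}

# Live usage (requires the ingress above; not run here):
#   for i in $(seq 1 10); do curl -s 34.89.244.41:31337/one; done | count_pods
```

A result of 1 after many requests would indicate that traffic is not being balanced.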

If this doesn't work, please provide the additional details requested under your question.

Upvotes: 1

Hemanth H L

Reputation: 121

If you are using port forwarding, then the load will not be distributed across all pods (endpoints).

Please refer to this link for more information.
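`kubectl port-forward` (including `port-forward svc/...`) opens a tunnel to exactly one pod, so every request lands on the same replica no matter how many endpoints the Service has. A quick way to confirm the pinning, assuming responses that echo a `Hostname:` line as in the hello-app example above; `is_pinned` is a hypothetical helper:

```shell
# Succeed (exit 0) if every "Hostname:" line on stdin names the same pod.
is_pinned() {
  [ "$(grep '^Hostname:' | sort -u | wc -l)" -le 1 ]
}

# Live usage (requires a forwarded port; not run here):
#   for i in $(seq 1 5); do curl -s localhost:2000/; done \
#     | is_pinned && echo "all requests hit one pod"
```

To exercise the Service's load balancing instead, send traffic to the NodePort or ClusterIP rather than through the forwarded port.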

Upvotes: 1
