Sach K

Reputation: 743

Kubernetes - Service always hitting the same Pod Container

I have a local Kubernetes installation based on Docker Desktop. I have a Kubernetes Service of type ClusterIP set up in front of 3 Pods. When I look at the container logs, I notice that the same Pod is always hit.

Is this the default behaviour of ClusterIP? If so, how will the other Pods ever be used, and what is the point of having them behind a ClusterIP Service?

The other option is to use a LoadBalancer type; however, I want the Service to be accessible only from within the cluster.

Is there a way to make the LoadBalancer internal?
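(For reference, on managed clouds an internal LoadBalancer is usually requested through a provider-specific annotation on the Service rather than a separate type. A minimal sketch, assuming AKS; the annotation name is Azure-specific, and Docker Desktop has no cloud controller, so locally a LoadBalancer Service is simply published on localhost:)

# Sketch only: annotation is Azure-specific; GKE and EKS use their own
# annotations for internal load balancers.
apiVersion: v1
kind: Service
metadata:
  name: organisationservice-service
  namespace: dropshippingplatform
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  selector:
    app: organisationservice-pod
  ports:
    - protocol: TCP
      port: 81
      targetPort: 80
  type: LoadBalancer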

If anyone can advise, that would be much appreciated.

UPDATE:

I have tried using a LoadBalancer type, and the same Pod is still being hit every time.

Here is my config:

apiVersion: v1
kind: Namespace
metadata:
  name: dropshippingplatform
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: organisationservice-deployment
  namespace: dropshippingplatform
spec:
  selector:
    matchLabels:
      app: organisationservice-pod
  replicas: 3
  template:
    metadata:
      labels:
        app: organisationservice-pod
    spec:
      containers:
      - name: organisationservice-container
        image: organisationservice:v1.0.1
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: organisationservice-service
  namespace: dropshippingplatform
spec:
  selector:
    app: organisationservice-pod
  ports:
    - protocol: TCP
      port: 81
      targetPort: 80
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apigateway-deployment
  namespace: dropshippingplatform
spec:
  selector:
    matchLabels:
      app: apigateway-pod
  template:
    metadata:
      labels:
        app: apigateway-pod
    spec:
      containers:
      - name: apigateway-container
        image: apigateway:v1.0.1
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: apigateway-service
  namespace: dropshippingplatform
spec:
  selector:
    app: apigateway-pod
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: LoadBalancer

Here is my Ocelot configuration:

{
  "Routes": [
    {
      "DownstreamPathTemplate": "/api/organisations",
      "DownstreamScheme": "http",
      "ServiceName": "organisationservice-service",
      "ServiceNamespace": "dropshippingplatform",
      "UpstreamPathTemplate": "/APIGateway/Organisations",
      "UpstreamHttpMethod": [ "Get" ],
      "Key": "Login"
    },
    {
      "DownstreamPathTemplate": "/weatherforecast",
      "DownstreamScheme": "http",
      "ServiceName": "organisationservice-service",
      "ServiceNamespace": "dropshippingplatform",
      "UpstreamPathTemplate": "/APIGateway/WeatherForecast",
      "UpstreamHttpMethod": [ "Get" ],
      "Key": "WeatherForecast"
    }
  ],
  "Aggregates": [
    {
      "RouteKeys": [
        "Login",
        "WeatherForecast"
      ],
      "UpstreamPathTemplate": "/APIGateway/Organisations/Login"
    },
    {
      "RouteKeys": [
        "Login",
        "WeatherForecast"
      ],
      "UpstreamPathTemplate": "/APIGateway/Organisations/TestAggregator",
      "Aggregator": "TestAggregator"
    }
  ],
  "GlobalConfiguration": {
    "ServiceDiscoveryProvider": {
      "Namespace": "default",
      "Type": "KubernetesServiceDiscoveryProvider"
    }
  }
}

To isolate the issue, I created a LoadBalancer in front of the Kubernetes Service in question and called the Service directly from the client. The same Pod is hit every time, which tells me it's to do with Kubernetes and not the Ocelot API Gateway.

Here is the output of kubectl describe svc:

Name:              organisationservice-service
Namespace:         dropshippingplatform
Labels:            <none>
Annotations:       <none>
Selector:          app=organisationservice-pod
Type:              ClusterIP
IP:                X.X.X.119
Port:              <unset>  81/TCP
TargetPort:        80/TCP
Endpoints:         X.X.X.163:80,X.X.X.165:80,X.X.X.166:80
Session Affinity:  None
Events:            <none>
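Worth noting: with Session Affinity: None, kube-proxy picks a backend per TCP connection, not per HTTP request, so a client that keeps connections alive (as most HTTP clients do by default) will keep landing on the same Pod even though the Service has three endpoints. A quick way to check per-connection balancing is a throwaway Pod that opens a fresh connection for each request; a sketch, where the curl-test name and the /weatherforecast path are illustrative:

# One-off test Pod: each curl invocation opens a fresh TCP connection,
# so kube-proxy should spread the requests across the three endpoints.
apiVersion: v1
kind: Pod
metadata:
  name: curl-test
  namespace: dropshippingplatform
spec:
  restartPolicy: Never
  containers:
  - name: curl
    image: curlimages/curl
    command: ["sh", "-c", "for i in $(seq 1 10); do curl -s http://organisationservice-service:81/weatherforecast; echo; done"]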

Upvotes: 1

Views: 1355

Answers (1)

Sach K

Reputation: 743

I solved it. It turned out that the Ocelot API Gateway was the issue. I added this to the Ocelot configuration:

"LoadBalancerOptions": {
        "Type": "RoundRobin"
      },

And now it's distributing the traffic evenly.
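For anyone else hitting this: as far as I can tell, when no load balancer is specified, Ocelot defaults to NoLoadBalancer, which always takes the first endpoint returned by service discovery, hence the single Pod getting all the traffic. The options sit inside each route; a sketch using the first route from the question (LeastConnection is the other common built-in type):

{
  "DownstreamPathTemplate": "/api/organisations",
  "DownstreamScheme": "http",
  "ServiceName": "organisationservice-service",
  "ServiceNamespace": "dropshippingplatform",
  "UpstreamPathTemplate": "/APIGateway/Organisations",
  "UpstreamHttpMethod": [ "Get" ],
  "Key": "Login",
  "LoadBalancerOptions": {
    "Type": "RoundRobin"
  }
}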

Upvotes: 1
