Ferskfisk

Reputation: 13

How do I access Kubernetes pods through a single IP?

I have a set of pods running based on the following fleet:

apiVersion: "agones.dev/v1"
kind: Fleet
metadata:
  name: bungee
spec:
  replicas: 2
  template:
    metadata:
      labels:
        run: bungee
    spec:
      ports:
        - name: default
          containerPort: 25565
          protocol: TCP
      template:
        spec:
          containers:
            - name: bungee
              image: a/b:test

I can access these pods outside the cluster with <node-IP>:<port>, where the port is random per pod and assigned by Agones.

My goal is to be able to connect to these pods through a single IP, meaning I have to add some sort of load balancer. I tried the following Service of type LoadBalancer, but I can't connect to any of the pods through it.

apiVersion: v1
kind: Service
metadata:
  name: bungee-svc
spec:
  type: LoadBalancer
  loadBalancerIP: XXX.XX.XX.XXX
  ports:
    - port: 25565
      protocol: TCP
  selector:
    run: bungee
  externalTrafficPolicy: Local

Is a service like this the wrong approach here, and if so what should I use instead? If it is correct, why is it not working?

Edit: The External IP field says pending when I check the service status. I am running Kubernetes on bare metal.

Edit 2: Attempting to use NodePort as suggested, I see the service has not been given an external IP address. Trying to connect to <node-IP>:<nodePort> does not work. Could it be a problem related to the selector?
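(One way to check whether the selector is the problem, assuming the service and label names above, is to compare the pods carrying the label with the endpoints the service actually picked up:

kubectl get pods -l run=bungee -o wide
kubectl get endpoints bungee-svc

If the endpoints list is empty, the service's selector does not match any pod labels.)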

Upvotes: 0

Views: 389

Answers (1)

SYN

Reputation: 5032

LoadBalancer Services could have worked in clusters that integrate with the API of the cloud provider hosting your Kubernetes nodes (via the cloud-controller-manager component). Since that is not your case, you're looking for a NodePort Service.

Something like:

apiVersion: v1
kind: Service
metadata:
  name: bungee-svc
spec:
  type: NodePort
  ports:
    - port: 25565
      protocol: TCP
  selector:
    run: bungee
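To create it (assuming the manifest above is saved as bungee-svc.yaml):

kubectl apply -f bungee-svc.yaml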

Having created that service, you can check its description, or its yaml/json representation:

# kubectl describe svc xxx
Type:                     NodePort
IP:                       10.233.24.89 <- ip within SDN
Port:                     tcp-8080  8080/TCP <- ports within SDN
TargetPort:               8080/TCP <- port on your container
NodePort:                 tcp-8080  31655/TCP <- port exposed on your nodes
Endpoints:                10.233.108.232:8080 <- pod:port ...
Session Affinity:         None

Now, I know port 31655 was allocated to my NodePort Service. NodePort values are unique across your cluster; they are picked from a configurable range (30000-32767 by default).
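If you want to grab the allocated port directly, the json/yaml representation mentioned above works well, for instance (using the bungee-svc name from your manifest):

kubectl get svc bungee-svc -o jsonpath='{.spec.ports[0].nodePort}'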

I can connect to my service by accessing any Kubernetes node's IP, on the port that was allocated to my NodePort Service.

curl http://k8s-worker1.example.com:31655/
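Since your bungee port (25565) is plain TCP rather than HTTP, a raw TCP check would also do, assuming the same node and allocated port as above:

nc -vz k8s-worker1.example.com 31655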

As a side note: a LoadBalancer Service builds on top of a NodePort Service. While the external IP will never show up on your bare-metal cluster, your Service was still allocated its own node port, just like any NodePort Service. That port is what would receive traffic from whichever load balancer had been configured on your cluster's behalf, on the cloud infrastructure it integrates with.

And ... I have to say I'm not familiar with Agones. You mention that you "can access these pods outside the cluster with <node-IP>:<port>, where the port is random per pod and assigned by Agones". Are you sure those ports are allocated on a per-pod basis and bound to a given node? Or could it be that they're already exposed through a NodePort Service? Give it another look: have you tried connecting to that port on other nodes of your cluster?
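I'm not certain how Agones allocates those ports, but assuming its CRDs are installed, checking the GameServer objects should show the address and port each one was given:

kubectl get gameservers
# the output normally lists ADDRESS, PORT and NODE columns per GameServer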

Upvotes: 1
