PatPanda

Reputation: 5066

Kubernetes - HTTP communication between two different pods in one same namespace

Small question regarding Kubernetes: how can one pod talk to another pod (two pods in total) when both are in the same namespace, hopefully without a very complex solution?

Setup: I have one pod A, called juliette-waiting-for-romeo-tocall, which exposes a REST endpoint /romeo-please-call-me.

Then, I have a second pod B, called romeo-wants-to-call-juliette, whose code does nothing but try to call Juliette on the endpoint /romeo-please-call-me.

The pod juliette-waiting-for-romeo-tocall is not a complex pod exposed to the internet. This pod does not want any public internet calls, any calls from other clusters, or any calls from pods in a different namespace. Juliette just wants her Romeo, who should be in the same namespace, to call her.

Nonetheless, Romeo and Juliette are not yet engaged: they are not within the same pod, they don't use the sidecar pattern, they do not share any volume, etc. They really are just two different pods in the same namespace so far.

What would be the easiest solution for Romeo to call Juliette, please?

When I do kubectl get all, I do see:

NAME                                             READY   STATUS    RESTARTS   AGE
pod/juliette-waiting-for-romeo-tocall-7cc8b84cc4-xw9n7   1/1     Running   0          2d
pod/romeo-wants-to-call-juliette-7c765f6987-rsqp5         1/1     Running   0          1d

service/juliette-waiting-for-romeo-tocall        ClusterIP      111.111.111.111   <none>          80/TCP            2d
service/romeo-wants-to-call-juliette         ClusterIP      222.222.22.22     <none>          80/TCP            1d

So far, in Romeo's code, I have tried a curl against the ClusterIP address + port (in this fake example 111.111.111.111:80/romeo-please-call-me), but I am not getting anything back.

What is the easiest solution, please?

Thank you

Upvotes: 3

Views: 10552

Answers (1)

VonC

Reputation: 1328152

More generally, pod-to-pod communication is documented in "Cluster Networking":

Every Pod gets its own IP address.

This means you do not need to explicitly create links between Pods and you almost never need to deal with mapping container ports to host ports.

This creates a clean, backwards-compatible model where Pods can be treated much like VMs or physical hosts from the perspectives of port allocation, naming, service discovery, load balancing, application configuration, and migration.

Kubernetes imposes the following fundamental requirements on any networking implementation (barring any intentional network segmentation policies):

  • pods on a node can communicate with all pods on all nodes without NAT
  • agents on a node (e.g. system daemons, kubelet) can communicate with all pods on that node
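On the "intentional network segmentation policies" point: since Juliette only wants calls from pods in her own namespace, a NetworkPolicy can express that once basic connectivity works. This is only a sketch, not something from the question; it assumes your CNI plugin actually enforces NetworkPolicy (e.g. Calico or Cilium), and the `app: juliette` label is a placeholder for whatever labels Juliette's pod template really carries:

```shell
# Restrict ingress to Juliette's pods to peers in the same namespace.
# An empty podSelector under "from" matches all pods in the policy's
# own namespace, and nothing outside it.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: juliette-same-namespace-only
spec:
  podSelector:
    matchLabels:
      app: juliette    # placeholder; match your pod template's labels
  ingress:
    - from:
        - podSelector: {}
EOF
```

Note that on a CNI plugin without NetworkPolicy support, this object is accepted by the API server but silently not enforced.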

Kubernetes IP addresses exist at the Pod scope - containers within a Pod share their network namespaces - including their IP address and MAC address.

This means that containers within a Pod can all reach each other's ports on localhost.
This also means that containers within a Pod must coordinate port usage, but this is no different from processes in a VM. This is called the "IP-per-pod" model.

But that depends on which CNI (Container Network Interface) plugin is used in your Kubernetes cluster.

The Kubernetes Networking design document mentions for pod-to-pod:

Because every pod gets a "real" (not machine-private) IP address, pods can communicate without proxies or translations.
The pod can use well-known port numbers and can avoid the use of higher-level service discovery systems like DNS-SD, Consul, or Etcd.

When any container calls ioctl(SIOCGIFADDR) (get the address of an interface), it sees the same IP address that any peer container would see traffic coming from — each pod has its own IP address that other pods can know.

By making IP addresses and ports the same both inside and outside the pods, we create a NAT-less, flat address space.
Running "ip addr show" should work as expected.
This would enable all existing naming/discovery mechanisms to work out of the box, including self-registration mechanisms and applications that distribute IP addresses. We should be optimizing for inter-pod network communication. Within a pod, containers are more likely to use communication through volumes (e.g., tmpfs) or IPC.
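Applied to your setup, the simplest path is usually not the ClusterIP but the Service's DNS name: cluster DNS (kube-dns/CoreDNS) registers every Service, so from Romeo's pod the bare Service name resolves as long as both pods are in the same namespace. A sketch of the two URL forms (the namespace `default` below is an assumption; substitute your own):

```shell
# Build the in-cluster URLs for Juliette's Service. Within the same
# namespace the short name is enough; the fully qualified form works
# from any namespace.
SERVICE=juliette-waiting-for-romeo-tocall
NAMESPACE=default   # assumption: adjust to your actual namespace
SHORT_URL="http://${SERVICE}/romeo-please-call-me"
FQDN_URL="http://${SERVICE}.${NAMESPACE}.svc.cluster.local/romeo-please-call-me"
echo "$SHORT_URL"
echo "$FQDN_URL"

# From inside Romeo's pod, either of these should reach Juliette,
# since her Service listens on port 80 (the default for http://):
#   curl "$SHORT_URL"
#   curl "$FQDN_URL"
```

If the short name resolves but the curl still hangs, the problem is below DNS (Service selector, endpoints, or kube-proxy), which is what the checklist below walks through.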


You can follow the "Debug Services" guide to troubleshoot this communication issue:

  • Does the Service exist?

    kubectl get svc hostnames
    
  • Does the Service work by DNS name?

    nslookup hostnames
    
  • Does any Service work by DNS name?

    nslookup kubernetes.default
    
  • Does the Service work by IP?
    Assuming you have confirmed that DNS works, the next thing to test is whether your Service works by its IP address.
    From a Pod in your cluster, access the Service's IP (from kubectl get above).

    for i in $(seq 1 3); do 
        wget -qO- 10.0.1.175:80
    done
    
  • Is the Service defined correctly?

    kubectl get service hostnames -o json
    
  • Does the Service have any Endpoints?

    kubectl get pods -l app=hostnames
    kubectl get endpoints hostnames
    
  • Are the Pods working?
    From within a Pod:

    for ep in 10.244.0.5:9376 10.244.0.6:9376 10.244.0.7:9376; do
        wget -qO- $ep
    done
    
  • Is the kube-proxy working?

    ps auxw | grep kube-proxy
    iptables-save | grep hostnames
    
  • Is kube-proxy proxying?

    curl 10.0.1.175:80
    
  • Edge case: A Pod fails to reach itself via the Service IP

This might sound unlikely, but it does happen and it is supposed to work.

This can happen when the network is not properly configured for "hairpin" traffic, usually when kube-proxy is running in iptables mode and Pods are connected with a bridge network.
The kubelet exposes a hairpin-mode flag that allows endpoints of a Service to load-balance back to themselves if they try to access their own Service VIP.
The hairpin-mode flag must be set to either hairpin-veth or promiscuous-bridge.
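To run any of these checks from inside the cluster without modifying either pod, a throwaway debug pod is handy. This is only a sketch; it assumes a `busybox` image is pullable in your cluster:

```shell
# Start an interactive pod that is deleted automatically on exit
kubectl run -it --rm debug --image=busybox --restart=Never -- sh

# Inside that shell, test DNS resolution and the HTTP endpoint:
#   nslookup juliette-waiting-for-romeo-tocall
#   wget -qO- http://juliette-waiting-for-romeo-tocall/romeo-please-call-me
```

If the wget from this pod succeeds, the Service side is fine and the issue is in Romeo's own code or image (for example, a missing curl/wget binary or a wrong URL).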

Upvotes: 2
