stvnwrgs

Reputation: 137

Routing traffic from GCP VM to VPC Native Cloud DNS GKE Cluster

I'm trying to achieve the following scenario:

My VM should be able to connect to the Pods through a ClusterIP Service. The new beta feature (documented at the link below) looks like it should make this possible, but maybe I'm doing something wrong or have misunderstood something...


The full documentation is here: https://cloud.google.com/kubernetes-engine/docs/how-to/cloud-dns?hl=de#vpc_scope_dns

DNS is working: I get a service IP, and a route to the default network is available. I can connect via the pod IP, but the service IP does not seem to be routable from outside the cluster. I know that a ClusterIP is normally not reachable from outside the cluster, but then I don't understand why this feature exists, and it doesn't match the diagram in the docs, since the feature appears to provide cross-cluster/VM communication via services.

apiVersion: v1
kind: Service
metadata:
  name: my-cip-service
  namespace: default
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    run: load-balancer-example
  sessionAffinity: None
  type: ClusterIP
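
For reference, here is roughly how I'm checking DNS and connectivity from the VM (the custom cluster DNS domain required for VPC scope is a placeholder here; mine differs):

# Placeholder for the custom cluster DNS domain configured for VPC scope
CLUSTER_DOMAIN=cluster-domain.example

# DNS resolution works and returns the ClusterIP
dig +short my-cip-service.default.svc.${CLUSTER_DOMAIN}

# ...but connecting to that IP from the VM times out
curl -v --max-time 5 http://my-cip-service.default.svc.${CLUSTER_DOMAIN}

# Connecting to a pod IP directly does work
curl -v --max-time 5 http://POD_IP:8080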

Am I misunderstanding the feature, or am I missing some configuration?

Traffic check: [screenshot]

Upvotes: 0

Views: 206

Answers (1)

Gari Singh

Reputation: 12053

This only works with headless Kubernetes services, so you'll need to modify your service spec to include clusterIP: None:

apiVersion: v1
kind: Service
metadata:
  name: my-cip-service
  namespace: default
spec:
  clusterIP: None
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    run: load-balancer-example
  sessionAffinity: None
  type: ClusterIP
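
With clusterIP: None, the service DNS name resolves directly to the pod IP(s), and pod IPs are routable from the VM in a VPC-native cluster. A rough check from the VM (cluster-domain.example is a placeholder for your custom cluster DNS domain) could look like:

# The headless service name now resolves to pod IP(s) instead of a virtual ClusterIP
dig +short my-cip-service.default.svc.cluster-domain.example

# There is no kube-proxy port mapping for a headless service, so connect
# to the pod's targetPort (8080), not the service port (80)
curl -v http://my-cip-service.default.svc.cluster-domain.example:8080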

Upvotes: 1
