Pengbo Wu

Reputation: 87

How can I access a NodePort service in GKE?

This is my Service YAML. After I create the Service on GKE, I don't know how to access it; I can't find an external IP for it. What is the standard way to access this Service? Do I need to create an Ingress?

apiVersion: v1
kind: Service
metadata:
  namespace: dev
  name: ui-svc
  labels:
    targetEnv: dev
    app: ui-svc
spec:
  selector:
    app: ui
    targetEnv: dev
  ports:
    - name: ui
      port: 8080
      targetPort: 8080
      nodePort: 30080
  type: NodePort


Upvotes: 2

Views: 2672

Answers (1)

mario

Reputation: 11098

Unless you are using a private cluster (where nodes don't have public IP addresses), you can access your NodePort service via any node's public IP address.

What you see in the Endpoints column of the Services & Ingress section is the internal cluster IP address of your NodePort service, which is not reachable from outside the cluster.

To find the public IP addresses of your GKE nodes, go to Compute Engine > VM instances:

[screenshot: Compute Engine > VM instances list]
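Alternatively, you can list the same external IPs from the command line. A minimal sketch, assuming the gcloud CLI is installed and authenticated against the right project:

```shell
# List all Compute Engine instances (which includes your GKE nodes)
# together with the external IP of their first network interface.
gcloud compute instances list \
  --format="table(name, networkInterfaces[0].accessConfigs[0].natIP)"
```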

You will see a list of all your Compute Engine VMs, which includes your GKE nodes. Note the address in the External IP column. Use it together with the port number from your NodePort service details: simply click the Service's name, "ui-svc", to see them. At the very bottom of the page you should see a Ports section, which may look as follows:

[screenshot: Service details, Ports section]
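You can also read the node port with kubectl instead of the console. A sketch, assuming kubectl is configured for your cluster:

```shell
# The PORT(S) column shows port:nodePort, e.g. 8080:30080/TCP
# for the Service in the question.
kubectl get svc ui-svc -n dev

# Or extract just the nodePort of the first port entry:
kubectl get svc ui-svc -n dev -o jsonpath='{.spec.ports[0].nodePort}'
```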

So in my case I should use <any_node's_public_ip_address>:31251.

One more important thing: don't forget to allow traffic to this port in the firewall, as by default it is blocked. You need to explicitly allow traffic to your nodes, e.g. on port 31251, to be able to reach it from the public internet. Simply go to VPC Network > Firewall and set the appropriate rule:
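The same rule can be created with gcloud. A sketch; the rule name is made up, and the network tag is a placeholder you should replace with your node pool's actual tag (visible on the node VMs in Compute Engine):

```shell
# Allow inbound TCP traffic to the NodePort from anywhere,
# scoped to instances carrying the given network tag.
gcloud compute firewall-rules create allow-nodeport-31251 \
  --network default \
  --direction INGRESS \
  --allow tcp:31251 \
  --source-ranges 0.0.0.0/0 \
  --target-tags <your-gke-node-tag>
```

Scoping by `--target-tags` keeps the port closed on unrelated VMs in the same network; opening it to `0.0.0.0/0` is only sensible for testing.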

[screenshot: firewall rule creation form]

UPDATE:

If you created an Autopilot cluster, by default it is a public one, which means its nodes have public IP addresses:

[screenshot: Autopilot cluster creation, network access options]

If during cluster creation you selected the second option, "Private cluster", your nodes won't have public IPs by design, and you won't be able to access your NodePort service on any public IP. The only remaining option in that scenario is to expose your workload via a LoadBalancer Service or an Ingress, which creates a single public IP endpoint through which your workload can be reached externally.

However, if you chose the default option, "Public cluster", you can use your nodes' public IPs to access your NodePort service in the very same way as on a Standard (non-Autopilot) cluster.

Of course, in Autopilot mode you won't see your nodes as Compute Engine VMs in the GCP console, but you can still get their public IPs by running:

kubectl get nodes -o wide

They will be shown in the EXTERNAL-IP column.
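Putting it together, you can grab a node's external IP and hit the NodePort directly. A sketch, assuming at least one node reports an ExternalIP address and using port 30080 from the question's Service:

```shell
# Extract the first node's ExternalIP via a jsonpath filter,
# then request the NodePort on it.
NODE_IP=$(kubectl get nodes \
  -o jsonpath='{.items[0].status.addresses[?(@.type=="ExternalIP")].address}')
curl "http://${NODE_IP}:30080"
```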

To connect to your cluster, go to "Kubernetes Engine" > "Clusters", click the three dots to the right of the cluster name, click "Connect", then click "RUN IN CLOUD SHELL".

Since you don't know which network tags (if any) have been assigned to your GKE Autopilot nodes, as you don't manage them and they are not shown in your GCP console, you can't target specific network tags when defining a firewall rule to allow access to your NodePort service's port, e.g. 30543. Instead, choose the option "All instances in the network":

[screenshot: firewall rule targeting "All instances in the network"]
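With gcloud, simply omitting `--target-tags` achieves the same thing. A sketch; the rule name is made up, and 30543 is the example port from above:

```shell
# Without --target-tags (or --target-service-accounts) the rule
# applies to all instances in the network, which is the only option
# when you can't see the Autopilot nodes' tags.
gcloud compute firewall-rules create allow-autopilot-nodeport \
  --network default \
  --direction INGRESS \
  --allow tcp:30543 \
  --source-ranges 0.0.0.0/0
```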

Upvotes: 9
