potatopotato

Reputation: 1174

GCP create GKE service with static private IP, stuck in Pending state

I've been creating a new microservice roughly every 2 months for the past year, following the same process every time.

Reserving a private IP:

gcloud compute addresses create my-internal-lb \
                                --region europe-west3 \
                                --addresses 10.223.0.192 \
                                --subnet <subnet_name>

and putting it in the Kubernetes service:

apiVersion: v1
kind: Service
metadata:
  name: <app_name_lb>
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
    networking.gke.io/internal-load-balancer-allow-global-access: "true"
  labels:
    app: <app_name>
    env: <env>
spec:
  type: LoadBalancer
  selector:
    app: <app_name>
    env: <env>
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
  loadBalancerIP: 10.223.0.192
  externalTrafficPolicy: Local

but now my new service is stuck in the Pending state:

$ kubectl get services
NAME                             TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)                         AGE
<app_name_lb>                    LoadBalancer   10.190.93.199   <pending>      80:32767/TCP                    22m

and GCP shows the IP as RESERVED, not IN_USE, so that shouldn't be the problem. Does anyone have any idea why this happens?

$ gcloud compute addresses list | grep "<app_name_lb>"
<app_name_lb>      10.223.0.192     INTERNAL  GCE_ENDPOINT            europe-west3  <subnet_name>     RESERVED

I'll add that I've done it this way multiple times before, and the other applications work just fine:

<other_app_lb>          LoadBalancer   10.190.86.56    10.223.0.209   80:<32k port>/TCP                    37d

I'll add that I've had this issue for a few days now, coming back to it repeatedly. I've tested on multiple subnets/zones and with different IPs, and I still get the same Pending state. Any help will be appreciated.


I think I might have hit the quota limit on IN_USE addresses; checking that now.
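Checking with something like this (region from above; the exact output columns may vary by gcloud version):

gcloud compute regions describe europe-west3 \
    --flatten="quotas[]" \
    --format="table(quotas.metric, quotas.usage, quotas.limit)"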

Upvotes: 1

Views: 1616

Answers (3)

alrashid villanueva

Reputation: 66

I hope this is still helpful.

I see you created the internal IP via gcloud with this command:

gcloud compute addresses create my-internal-lb \
                                --region europe-west3 \
                                --addresses 10.223.0.192 \
                                --subnet <subnet_name>

However, you are missing a flag:

--purpose SHARED_LOADBALANCER_VIP
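With that flag, the reservation command from the question would presumably become:

gcloud compute addresses create my-internal-lb \
                                --region europe-west3 \
                                --addresses 10.223.0.192 \
                                --subnet <subnet_name> \
                                --purpose SHARED_LOADBALANCER_VIP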

This is needed for the internal load balancer to get the static internal IP assigned to it. Also, if your cluster is in a Shared VPC service project but uses a Shared VPC network in a host project, you would then use:

gcloud compute addresses create IP_ADDR_NAME \
    --project SERVICE_PROJECT_ID \
    --subnet projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNET \
    --address=IP_ADDRESS \
    --region REGION \
    --purpose SHARED_LOADBALANCER_VIP

Upvotes: 0

potatopotato

Reputation: 1174

It turned out to be quotas: IN_USE addresses, backend-services, and firewall-rules were all approaching their limits (only backend-services was actually at its limit). Asking Google to increase them and waiting a day fixed the issue.
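For anyone checking the same thing, the addresses currently counted against the IN_USE quota in the region can be listed with something like:

gcloud compute addresses list \
    --regions europe-west3 \
    --filter="status=IN_USE"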

Upvotes: 1

jabbson

Reputation: 4911

What do you see for kubectl describe svc <app_name_lb>? There should be an error.
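For example (the exact event text will vary):

kubectl describe svc <app_name_lb>
# check the Events: section at the bottom; the service controller
# reports the load balancer sync error there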

Please note that if you are using version 1.17+, the annotation is now networking.gke.io/load-balancer-type: "Internal" instead of cloud.google.com/load-balancer-type: "Internal".
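On those versions, the metadata from the question would then presumably look like this (only the annotation key changes):

apiVersion: v1
kind: Service
metadata:
  name: <app_name_lb>
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
    networking.gke.io/internal-load-balancer-allow-global-access: "true"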

Please also note the following, as per Restrictions for internal TCP/UDP load balancers:

For clusters running Kubernetes 1.7.X or later, while the clusterIP remains unchanged, internal TCP/UDP load balancers cannot use reserved IP addresses. The spec.loadBalancerIP field can still be defined using an unused IP address to assign a specific internal IP.
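To see how the reservation itself looks from the gcloud side (name and region from the question), you can inspect the address directly:

gcloud compute addresses describe my-internal-lb \
    --region europe-west3
# the status and purpose fields show whether the address is
# RESERVED/IN_USE and what it was reserved for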

Upvotes: 3
