Digital Impermanence

Reputation: 417

Istio istio-ingressgateway throwing "no cluster match for URL '/'"

I have Istio installed on docker-desktop. In general it works fine. I'm attempting to set up an http-based match on a very simple virtual service, but I'm only getting 404s. Here are the technical details.

My endpoint image is HashiCorp's http-echo, which uses the net/http library to create a trivial HTTP server that returns a message you supply. It works just fine and couldn't be more trivial.

Here is my pod and service configuration:

kind: Pod
apiVersion: v1
metadata:
  name: a
  labels:
    app: a
    version: v1
spec:
  containers:
  - name: a
    image: hashicorp/http-echo
    args:
    - "-text='this is service a: v1'"
    - "-listen=:6789"
---
kind: Service
apiVersion: v1
metadata:
  name: a-service
spec:
  selector:
    app: a
    version: v1
  ports:
  # Default port used by the image
  - port: 6789
    targetPort: 6789
    name: http-echo

And here is the echo server responding when I curl it from another pod in the same namespace:

/ # curl 10.1.0.29:6789
'this is service a: v1'

And here's the pod running in the docker-desktop cluster:

NAME       READY   STATUS    RESTARTS   AGE     IP          NODE             NOMINATED NODE   READINESS GATES
a          2/2     Running   0          45h     10.1.0.29   docker-desktop   <none>           <none>

And here is the service that selects the pod:

NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE     SELECTOR
a-service    ClusterIP   10.101.113.9   <none>        6789/TCP   45h     app=a,version=v1

Here is my istio-ingressgateway port configuration from my Helm values. I list it because it's the only part of the installation I've changed, and the change itself is utterly trivial: I just added a single new port block, which appears to work since the gateway is indeed listening on it:

gateways:
  istio-ingressgateway:
    name: istio-ingressgateway
    labels:
      app: istio-ingressgateway
      istio: ingressgateway
    ports:
    - port: 15021
      targetPort: 15021
      name: status-port
      protocol: TCP
    - port: 80
      targetPort: 8080
      name: http2
      protocol: TCP
    - port: 443
      targetPort: 8443
      name: https
      protocol: TCP
    - port: 6789
      targetPort: 6789
      name: http-echo
      protocol: TCP

And here is kubectl get svc on the istio-ingressgateway, just to show that I do indeed have an external IP and things look normal:

NAME                   TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                                                     AGE     SELECTOR
istio-ingressgateway   LoadBalancer   10.109.63.15    localhost     15021:30095/TCP,80:32454/TCP,443:31644/TCP,6789:30209/TCP   2d16h   app=istio-ingressgateway,istio=ingressgateway
istiod                 ClusterIP      10.96.155.154   <none>        15010/TCP,15012/TCP,443/TCP,15014/TCP                       2d16h   app=istiod,istio=pilot

Here's my virtualservice:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: a-service
  namespace: default
spec:
  hosts:
  - 'a-service.default.svc.cluster.local'
  gateways:
  - gateway
  http:
  - match:
    - port: 6789
    route:
    - destination:
        host: 'a-service.default.svc.cluster.local'
        port:
          number: 6789

Here's my gateway:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gateway
  namespace: default
spec:
  selector:
    app: istio-ingressgateway
  servers:
  - port:
      number: 6789
      name: http-echo
      protocol: http
    hosts:
    - 'a-service.default.svc.cluster.local'

And then finally, here's a debug log from the istio-ingressgateway showing that despite all these seemingly correct pod, service, gateway, virtualservice, and ingressgateway configs, the gateway is only returning 404s:

2021-09-27T15:34:41.001773Z debug   envoy connection    [C367] closing data_to_write=143 type=2
2021-09-27T15:34:41.001779Z debug   envoy connection    [C367] setting delayed close timer with timeout 1000 ms
2021-09-27T15:34:41.001786Z debug   envoy pool  [C7] response complete
2021-09-27T15:34:41.001791Z debug   envoy pool  [C7] destroying stream: 0 remaining
2021-09-27T15:34:41.001925Z debug   envoy connection    [C367] write flush complete
2021-09-27T15:34:41.002215Z debug   envoy connection    [C367] remote early close
2021-09-27T15:34:41.002279Z debug   envoy connection    [C367] closing socket: 0
2021-09-27T15:34:41.002348Z debug   envoy conn_handler  [C367] adding to cleanup list
2021-09-27T15:34:41.179213Z debug   envoy conn_handler  [C368] new connection from 192.168.65.3:62904
2021-09-27T15:34:41.179594Z debug   envoy http  [C368] new stream
2021-09-27T15:34:41.179690Z debug   envoy http  [C368][S14851390862777765658] request headers complete (end_stream=true):
':authority', '0:6789'
':path', '/'
':method', 'GET'
'user-agent', 'curl/7.64.1'
'accept', '*/*'
'version', 'TESTING'

2021-09-27T15:34:41.179708Z debug   envoy http  [C368][S14851390862777765658] request end stream
2021-09-27T15:34:41.179828Z debug   envoy router    [C368][S14851390862777765658] no cluster match for URL '/'
2021-09-27T15:34:41.179903Z debug   envoy http  [C368][S14851390862777765658] Sending local reply with details route_not_found
2021-09-27T15:34:41.179949Z debug   envoy http  [C368][S14851390862777765658] encoding headers via codec (end_stream=true):
':status', '404'
'date', 'Mon, 27 Sep 2021 15:34:41 GMT'
'server', 'istio-envoy'

Here's istioctl proxy-status:

NAME                                                   CDS        LDS        EDS        RDS        ISTIOD                     VERSION
a.default                                              SYNCED     SYNCED     SYNCED     SYNCED     istiod-b9c8c9487-clkkt     1.11.3
istio-ingressgateway-5797689568-x47ck.istio-system     SYNCED     SYNCED     SYNCED     SYNCED     istiod-b9c8c9487-clkkt     1.11.3

And here's istioctl pc cluster $ingressgateway:

SERVICE FQDN                                            PORT      SUBSET     DIRECTION     TYPE           DESTINATION RULE
BlackHoleCluster                                        -         -          -             STATIC
a-service.default.svc.cluster.local                     6789      -          outbound      EDS
agent                                                   -         -          -             STATIC
istio-ingressgateway.istio-system.svc.cluster.local     80        -          outbound      EDS
istio-ingressgateway.istio-system.svc.cluster.local     443       -          outbound      EDS
istio-ingressgateway.istio-system.svc.cluster.local     6789      -          outbound      EDS
istio-ingressgateway.istio-system.svc.cluster.local     15021     -          outbound      EDS
istiod.istio-system.svc.cluster.local                   443       -          outbound      EDS
istiod.istio-system.svc.cluster.local                   15010     -          outbound      EDS
istiod.istio-system.svc.cluster.local                   15012     -          outbound      EDS
istiod.istio-system.svc.cluster.local                   15014     -          outbound      EDS
kube-dns.kube-system.svc.cluster.local                  53        -          outbound      EDS
kube-dns.kube-system.svc.cluster.local                  9153      -          outbound      EDS
kubernetes.default.svc.cluster.local                    443       -          outbound      EDS
prometheus_stats                                        -         -          -             STATIC
sds-grpc                                                -         -          -             STATIC
xds-grpc                                                -         -          -             STATIC
zipkin                                                  -         -          -             STRICT_DNS

And here's istioctl pc listeners on the same ingress:

ADDRESS PORT  MATCH DESTINATION
0.0.0.0 6789  ALL   Route: http.6789
0.0.0.0 15021 ALL   Inline Route: /healthz/ready*
0.0.0.0 15090 ALL   Inline Route: /stats/prometheus*

And finally, here's istioctl pc routes:

NOTE: This output only contains routes loaded via RDS.
NAME          DOMAINS                                 MATCH                  VIRTUAL SERVICE
http.6789     a-service.default.svc.cluster.local     /*                     a-service.default
              *                                       /stats/prometheus*
              *                                       /healthz/ready*

I've tried numerous different configurations, from changing selectors, to making sure port names match, to trying different ports. If I change my virtualservice from http to tcp (sketched below), the port match works great. But because my ultimate goal is more advanced header-based matching, I need to be matching on http. Any insight would be greatly appreciated!
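
For reference, the tcp variant I mention above looks roughly like this (I believe the gateway server's protocol also has to be TCP for it to bind, but the point is that the port match alone routes fine):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: a-service
  namespace: default
spec:
  hosts:
  - 'a-service.default.svc.cluster.local'
  gateways:
  - gateway
  tcp:
  - match:
    # tcp matches only L4 attributes such as the destination port,
    # so no host- or header-based routing is possible here
    - port: 6789
    route:
    - destination:
        host: 'a-service.default.svc.cluster.local'
        port:
          number: 6789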

Upvotes: 1

Views: 1611

Answers (1)

Digital Impermanence

Reputation: 417

It turned out the problem was that I had specified my service's FQDN in the hosts directive of both my gateway and my virtualservice. Specifying a service as a hosts entry is almost certainly never correct. You can "work around" it by adding a host header to curl, i.e. curl ... -H 'Host: kubernetes.docker.internal' ..., but the correct solution is simply to use proper host entries, i.e. - mysite.mycompany.com etc. hosts in this context works like vhosts in Apache: it's an fqdn the client requests, which the mesh and cluster use to decide where to send the request. The host field under the virtualservice's destination, however, is the service itself, which is a bit convoluted and is what threw me.
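
For illustration, here's roughly what the corrected pair looks like; mysite.mycompany.com is just a placeholder for whatever fqdn you actually route on:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gateway
  namespace: default
spec:
  selector:
    app: istio-ingressgateway
  servers:
  - port:
      number: 6789
      name: http-echo
      protocol: HTTP
    hosts:
    # an external hostname the gateway answers for, not the Service's cluster FQDN
    - 'mysite.mycompany.com'
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: a-service
  namespace: default
spec:
  hosts:
  # must match a host exposed by the gateway above
  - 'mysite.mycompany.com'
  gateways:
  - gateway
  http:
  - match:
    - port: 6789
    route:
    - destination:
        # the destination host, by contrast, is still the Service
        host: 'a-service.default.svc.cluster.local'
        port:
          number: 6789

With that in place, a request like curl -H 'Host: mysite.mycompany.com' localhost:6789 should be matched by the gateway and routed to the echo server.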

Upvotes: 2
