Vincent

Reputation: 21

Istio with GKE on GCP: cannot route a TCP stream on ports other than 80 and 443

I am using Istio with GKE on GCP, and deploying applications over classic HTTPS through Istio works fine.

Now I am trying to deploy Kafka on a development cluster, and to route a TCP stream to Kafka from the Istio Gateway on a specific port, but it is not working.

I have checked the GCP firewall rules, but for the moment I don't think the blocking point is there: packets are captured by GCP, none of them are blocked, and all ports are open.

I have been working on this problem for 6 hours, so if someone has already run into it I would be very grateful!

My Kubernetes version is 1.17.14-gke.400 and Istio version is 1.4.10-gke.5.

Below is a fully simplified example using the tcp-echo test from Istio:

File tcp-echo-deployment.yaml :

apiVersion: apps/v1
kind: Deployment
metadata:
  name: tcp-echo
  namespace: production
spec:
  replicas: 1
  selector:
    matchLabels:
      name: tcp-echo
  template:
    metadata:
      labels:
        name: tcp-echo
    spec:
      containers:
      - name: tcp-echo
        image: istio/tcp-echo-server:1.1
        imagePullPolicy: IfNotPresent
        args: [ "9000", "hello" ]
        ports:
        - containerPort: 9000

File tcp-echo-service.yaml :

apiVersion: v1
kind: Service
metadata:
  name: tcp-echo
  namespace: production
  labels:
    name: tcp-echo
spec:
  ports:
  - name: tcp
    port: 9000
  selector:
    name: tcp-echo

File tcp-echo-gateway.yaml :

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: tcp-echo-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 10000
      name: tcp
      protocol: TCP
    hosts:
    - "*"

File tcp-echo-virtual-service.yaml :

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: tcp-echo
  namespace: istio-system
spec:
  hosts:
  - "*"
  gateways:
  - tcp-echo-gateway.istio-system.svc.cluster.local
  tcp:
  - match:
    - port: 10000
    route:
    - destination:
        host: tcp-echo.production.svc
        port:
          number: 9000

Test:

$ kubectl apply -f tcp-echo-deployment.yaml
$ kubectl apply -f tcp-echo-service.yaml
$ kubectl apply -f tcp-echo-gateway.yaml
$ kubectl apply -f tcp-echo-virtual-service.yaml

Test 1: inside the cluster (working):

$ kubectl -n production run -i --tty busybox --image=busybox --rm=true --restart=Never -- sh
# telnet tcp-echo:9000
  vinz
  hello vinz

Test 2: outside the cluster (not working; the IP address is masked):

$ telnet 1.2.3.4 10000
  Trying 1.2.3.4...

Now I get a timeout, but a few hours ago I got connection refused.

Has anyone already had this problem?

Thank you very much!

Upvotes: 0

Views: 215

Answers (1)

Vincent

Reputation: 21

OK, I've got it!

It was simply the `istio-ingressgateway` Service, which defines and routes the ports opened on the ingress gateway: the Gateway resource alone does not expose a new port, so port 10000 also has to be added to that Service.

Maybe it will help someone!
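For reference, this is roughly what the change looks like. This is a sketch, not the full Service: the existing port entries and the `tcp-echo` port name are illustrative assumptions, and only the last entry is the actual addition.

```yaml
# Excerpt of the istio-ingressgateway Service (istio-system namespace).
# The existing entries shown here are assumptions; your Service will
# already contain its own list of ports.
apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  namespace: istio-system
spec:
  type: LoadBalancer
  selector:
    istio: ingressgateway
  ports:
  # ...existing entries kept as-is, for example:
  - name: http2
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
  # added entry so the load balancer forwards port 10000
  # to the gateway pods, where the Gateway resource listens
  - name: tcp-echo
    port: 10000
    targetPort: 10000
```

In practice the extra port entry can be added in place with `kubectl -n istio-system edit svc istio-ingressgateway`.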

Upvotes: 2
