qkhanhpro

Reputation: 5220

Kubernetes internal socket.io connection

I am following this architecture diagram from the Kubernetes documentation.


However, I cannot seem to connect to the socket.io server from within the cluster using the service name.

Current situation:

From Pod B:

From the outside world / Internet:

My current service configuration

spec:
  ports:
    - name: http
      protocol: TCP
      port: 8000
      targetPort: 3000
  selector:
    app: orders
  clusterIP: 172.20.115.234
  type: ClusterIP
  sessionAffinity: None
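Given this Service, an in-cluster client has to dial the Service port (8000), not the container's targetPort (3000); kube-proxy handles the mapping. A minimal sketch of the address to use, assuming the Service runs in the `default` namespace (the question does not state the namespace):

```python
# Sketch: the cluster-internal address a client should dial for this Service.
# The "default" namespace is an assumption; adjust if the Service lives elsewhere.
def service_fqdn(name: str, namespace: str, port: int) -> str:
    """Cluster-internal DNS name for a Service, with its Service port."""
    return f"{name}.{namespace}.svc.cluster.local:{port}"

# Use 8000 (the Service port), not 3000 (the pod's targetPort):
print(service_fqdn("orders", "default", 8000))
# orders.default.svc.cluster.local:8000
```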

My Ingress Helm chart

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ template "app.name" $ }}-backend
  annotations:
    kubernetes.io/ingress.class: traefik
    ingress.kubernetes.io/auth-type: forward
    ingress.kubernetes.io/auth-url: "http://auth.default.svc.cluster.local:8000/api/v1/oauth2/auth"
    ingress.kubernetes.io/auth-response-headers: authorization
  labels:
    {{- include "api-gw.labels" $ | indent 4 }}
spec:
  rules:
    - host: {{ .Values.deploy.host | quote }}
      http:
        paths:
          - path: /socket/events
            backend:
              serviceName: orders
              servicePort: 8000
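One thing worth checking against this rule: a socket.io client defaults to the path `/socket.io`, so it must be explicitly configured with the same path the Ingress routes. A sketch of the HTTP polling handshake URL such a client would request (the host is a placeholder, and `EIO=3` assumes a Socket.IO 2.x client):

```python
# Sketch: the Engine.IO polling handshake URL a Socket.IO client issues when
# configured with a custom path. Host is hypothetical; the path mirrors the
# Ingress rule above. EIO=3 corresponds to Socket.IO 2.x (an assumption).
def handshake_url(host: str, path: str) -> str:
    return f"http://{host}{path}/?EIO=3&transport=polling"

print(handshake_url("example.com", "/socket/events"))
# http://example.com/socket/events/?EIO=3&transport=polling
```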

My Service Helm chart

apiVersion: v1
kind: Service
metadata:
  name: {{ template "app.name" . }}
spec:
  {{ if not $isDebug -}}
  selector:
    app: {{ template "app.name" . }}
  {{ end -}}
  type: NodePort
  ports:
  - name: http
    port: {{ template "app.svc.port" . }}
    targetPort: {{ template "app.port" . }}
    nodePort: {{ .Values.service.exposedPort }}
    protocol: TCP



# Helpers..
# {{/* vim: set filetype=mustache: */}}
# {{- define "app.name" -}}
#     {{ default "default" .Chart.Name }}
# {{- end -}}

# {{- define "app.port" -}}
# 3000
# {{- end -}}

# {{- define "app.svc.port" -}}
# 8000
# {{- end -}}

Upvotes: 2

Views: 1066

Answers (1)

Daniel Karapishchenko

Reputation: 343

The Service's DNS name must resolve inside your container before you can use it to reach the Service's virtual IP (VIP). Without cluster DNS, pods only discover Services through environment variables that Kubernetes injects when a container starts, and those variables are only present in containers created after the Service existed.

In your case, if Pod B was started before the orders Service was created, it has no environment variables with the Service's VIP and port, and no other way to look the Service up by name.
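The naming convention for these injected variables can be sketched as follows (the `orders` name comes from the question's Service; the convention itself is documented by Kubernetes):

```python
# Sketch of Kubernetes' Service env-var naming: the Service name is
# upper-cased, dashes become underscores, and _SERVICE_HOST / _SERVICE_PORT
# suffixes are appended.
def service_env_vars(service_name: str) -> tuple[str, str]:
    base = service_name.upper().replace("-", "_")
    return (f"{base}_SERVICE_HOST", f"{base}_SERVICE_PORT")

print(service_env_vars("orders"))
# ('ORDERS_SERVICE_HOST', 'ORDERS_SERVICE_PORT')
```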

Here is the k8s documentation related to your problem.

To solve this, you can set up a DNS service, which Kubernetes offers as a cluster add-on. Just follow the documentation.
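Once the DNS addon is running, even the short name `orders` resolves, because the pod's resolver expands it using the search domains kube-dns/CoreDNS writes into `/etc/resolv.conf`. A sketch of that expansion, assuming the typical search list for the `default` namespace (the question never states the namespace):

```python
# Sketch: how a pod's resolver expands the short name "orders" under cluster
# DNS. The search list below is the usual one for the "default" namespace
# (an assumption).
SEARCH_DOMAINS = ["default.svc.cluster.local", "svc.cluster.local", "cluster.local"]

def candidate_names(short_name: str) -> list[str]:
    """Names the resolver tries, in order, for a name with no trailing dot."""
    return [f"{short_name}.{d}" for d in SEARCH_DOMAINS]

print(candidate_names("orders")[0])
# orders.default.svc.cluster.local
```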

Upvotes: 2
