Reputation: 71
This is my problem. I have a set of YAML deployments for Kubernetes that works fine. I am now putting those YAMLs into a Helm chart, but when I deploy the chart I get a 502 error from Nginx:
[error] 41#41: *1176907 connect() failed (111: Connection refused) while connecting to upstream, client: 79.144.175.25, server: envtest.westeurope.cloudapp.azure.com, request: "POST /es/api/api/v1/terminals/login/123456789/monitor HTTP/1.1", upstream: "http://10.0.63.136:80/v1/terminals/login/123456789/monitor", host: "envtest.westeurope.cloudapp.azure.com"
I am at beginner level, so I am quite confused about the problem. I have compared my original files with the files rendered by Helm and I don't see the difference. Since my original YAMLs work, I will just paste my Helm files here:
Deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "example.fullname" . }}
  namespace: {{ .Values.namespace }}
  labels:
    {{- include "example.labels" . | nindent 4 }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "example.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      labels:
        {{- include "example.selectorLabels" . | nindent 8 }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
          {{- with .Values.env }}
          env:
            {{- toYaml . | nindent 12 }}
          {{- end }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
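For reference, in a chart scaffolded with helm create, the example.selectorLabels helper used above lives in templates/_helpers.tpl and looks roughly like this (a sketch of the standard scaffold; my actual helper may differ slightly):

{{/* Sketch of the standard helm create selector-labels helper (assumed, not copied from my chart) */}}
{{- define "example.selectorLabels" -}}
app.kubernetes.io/name: {{ include "example.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}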
Service.yaml
apiVersion: v1
kind: Service
metadata:
  name: {{ include "example.fullname" . }}
  namespace: {{ .Values.namespace }}
  labels:
    {{- include "example.labels" . | nindent 4 }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: http
      protocol: TCP
      name: http
  selector:
    microservice: {{ .Values.microservice }}
    {{- include "example.selectorLabels" . | nindent 4 }}
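For completeness, here is a minimal sketch of the values.yaml entries these two templates reference (key names are taken from the templates above; the values are placeholders, not my real configuration):

# values.yaml sketch (placeholder values)
namespace: example-tpvs          # the namespace the ExternalName Service below points at
replicaCount: 1
microservice: example            # placeholder
image:
  repository: example-registry/example   # placeholder
  tag: ""
  pullPolicy: IfNotPresent
autoscaling:
  enabled: false
service:
  type: ClusterIP
  port: 80
podAnnotations: {}
resources: {}
env: []
nodeSelector: {}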
I have a secret.yaml too:
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Values.secretname }}
  namespace: {{ .Values.namespace }}
type: Opaque
data:
  redis-connection-string: {{ .Values.redisconnectionstring | b64enc }}
  event-hub-connection-string: {{ .Values.eventhubconnectionstring | b64enc }}
  blob-storage-connection-string: {{ .Values.blobstorageconnectionstring | b64enc }}
  #(ingesters) sql
  sql-user-for-admin-user: {{ .Values.sqluserforadminuser | b64enc }}
  sql-user-for-admin-password: {{ .Values.sqluserforadminpassword | b64enc }}
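Since the Deployment takes its environment from .Values.env, the secret can be wired into the container through values.yaml roughly like this (a sketch; the env var name and secret name are placeholders):

# values.yaml sketch: consuming the Secret via .Values.env (placeholder names)
env:
  - name: REDIS_CONNECTION_STRING      # placeholder env var name
    valueFrom:
      secretKeyRef:
        name: example-secrets          # must match .Values.secretname
        key: redis-connection-string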
On the other hand, I still keep the Ingress and the ExternalName Service outside the Helm chart, as plain YAMLs (they are part of the original set of YAMLs that works fine).
External Service
kind: Service
apiVersion: v1
metadata:
  name: example
  namespace: example-ingress
spec:
  type: ExternalName
  externalName: example.example-tpvs.svc.cluster.local
  ports:
    - port: 80
Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-logic
  namespace: example-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/issuer: letsencrypt
    nginx.ingress.kubernetes.io/rewrite-target: /$1
  labels:
    version: "0.1"
spec:
  rules:
    - host: envtest.westeurope.cloudapp.azure.com
      http:
        paths:
          - path: /es/example/api/(.*)
            backend:
              serviceName: example
              servicePort: 80
So, when I install the Helm chart, I know the pod is up, and I can see in the pod logs that some background jobs are running, so I am almost sure the problem is in the Service of the Helm chart, because the ExternalName Service and the Ingress work when I deploy the original YAMLs.
I hope someone can help me. Thanks!
Upvotes: 0
Views: 254
Reputation: 71
OK, the problem is that I forgot the selectors and matchLabels in the Service and the Deployment.
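In other words, the Service's spec.selector has to match the labels on the pod template, and the Deployment's spec.selector.matchLabels has to match them as well. Rendered, the working relationship looks roughly like this (placeholder values, not my real labels):

# Deployment (rendered sketch)
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: example
      app.kubernetes.io/instance: my-release
  template:
    metadata:
      labels:
        app.kubernetes.io/name: example
        app.kubernetes.io/instance: my-release
        microservice: example        # required on the pods if the Service selects on it
---
# Service (rendered sketch)
spec:
  selector:
    app.kubernetes.io/name: example
    app.kubernetes.io/instance: my-release
    microservice: example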
Upvotes: 0