Reputation: 58
UPDATE: Apologies for perhaps causing controversy, but it turns out there was another cronjob running that was also calling a function that grabbed those apiKeys from the DB. I wasn't sure of this until I separated out the part where it was grabbing them from the environment variables ;_;. So basically this whole post is wrong, and one container was not grabbing env variables from another container. I am so ashamed I wanted to delete this question, but I'm not sure if that's a good idea or not?
A Kubernetes pod running two copies of basically the same NodeJS application seems to be taking environment variables from the other container. When I log the variable it shows the correct value, but when the app makes a request, the logs show two different values.
These variables come from two different secrets. I have checked inside each container that they do indeed have different env variables, but for some reason, when NodeJS makes requests out to a third-party API, it uses both values. Yes, the variables do have the same name in both containers.
The log entries in the screenshot below show the Authorization header of an outgoing HTTP request; this header is taken from an environment variable. Technically it should always stay the same, but for some reason it sometimes uses the other container's value as well.
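For reference, this is roughly how I checked the variables inside each container (API_KEY is a placeholder here; substitute the real variable name):

```shell
# Print the variable as each container actually sees it
kubectl exec mimercado-deployment-77fb65575-tpbsp -c mimercado-a -- printenv API_KEY
kubectl exec mimercado-deployment-77fb65575-tpbsp -c mimercado-b -- printenv API_KEY
```

Each command printed a different value, as expected, which is why the mixed values in the request logs confused me.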
Here is the pod in YAML:
apiVersion: v1
kind: Pod
metadata:
  annotations:
    cni.projectcalico.org/podIP: <REDACTED>/32
    cni.projectcalico.org/podIPs: <REDACTED>/32
    kubectl.kubernetes.io/restartedAt: '2021-01-20T15:29:12Z'
  labels:
    app: mimercado-api
    pod-template-hash: 77fb65575
  name: mimercado-deployment-77fb65575-tpbsp
  namespace: default
spec:
  containers:
  - envFrom:
    - secretRef:
        name: secrets-mimercado-a
    image: hsduiii/mindi-mimercado:82aae456ee6b637cfefe50c323c2c5b98d2c88f2
    imagePullPolicy: Always
    name: mimercado-a
    ports:
    - containerPort: 8080
    volumeMounts:
    - mountPath: /srv/mindi-mimercado/logfiles
      name: mindi-mimercado-a-logdir
  - envFrom:
    - secretRef:
        name: secrets-mimercado-b
    image: hsduiii/mindi-mimercado:82aae456ee6b637cfefe50c323c2c5b98d2c88f2
    imagePullPolicy: Always
    name: mimercado-b
    ports:
    - containerPort: 8085
    volumeMounts:
    - mountPath: /srv/mindi-mimercado/logfiles
      name: mindi-mimercado-b-logdir
  imagePullSecrets:
  - name: regcred
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  serviceAccountName: default
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - hostPath:
      path: /microk8s-files/logs/mindi-mimercado/mindi-mimercado-a/82aae456ee6b637cfefe50c323c2c5b98d2c88f2
      type: DirectoryOrCreate
    name: mindi-mimercado-a-logdir
  - hostPath:
      path: /microk8s-files/logs/mindi-mimercado/mindi-mimercado-b/82aae456ee6b637cfefe50c323c2c5b98d2c88f2
      type: DirectoryOrCreate
    name: mindi-mimercado-b-logdir
Upvotes: 1
Views: 1111
Reputation: 4732
There are still a lot of unknowns regarding your overall config, but in case it helps, here are the potential issues that I see.
First, check the logs of each container separately to see which one is actually handling your requests:
kubectl logs -f mimercado-deployment-77fb65575-tpbsp -c mimercado-a
kubectl logs -f mimercado-deployment-77fb65575-tpbsp -c mimercado-b
Send some requests like you did in your screenshot. If your requests appear to be distributed to both containers, it means that something is misconfigured in your service or ingress.
You might have old resources still around with slightly different configurations, or your service's label selector may be matching more than just your pod. Check that only this pod, only one service, and only one ingress are present. Also check that you don't have other deployments/pods/services with labels that might be overlapping with your pod.
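A quick way to check for overlap (assuming your service selects on the app=mimercado-api label; the service name below is just an example) is to list everything matching that label and the endpoints the service actually routes to:

```shell
# Every pod your service's selector could match, across all namespaces
kubectl get pods -l app=mimercado-api --all-namespaces

# The pod IPs actually behind the service
# (replace mimercado-service with your real service name)
kubectl get endpoints mimercado-service

# Leftover resources that might still be selecting your pods
kubectl get deployments,replicasets,services -o wide
```

If the endpoints list shows more pod IPs than you expect, the selector is catching stale or unrelated pods.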
You are using envFrom, which loads all the entries from your secret into the environment. Check that you don't have both entries in one of your secrets. You can also switch to the env form, to be safe:
env:
- name: MY_SECRET
  valueFrom:
    secretKeyRef:
      name: secrets-mimercado-a
      key: my-secret-key
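To verify what each secret actually contains, you can list its keys and decode a value (the API_KEY key name here is illustrative; use whatever key your app reads):

```shell
# List the keys (and base64-encoded values) stored in each secret
kubectl get secret secrets-mimercado-a -o jsonpath='{.data}'
kubectl get secret secrets-mimercado-b -o jsonpath='{.data}'

# Decode one value to compare it with what the app logs
kubectl get secret secrets-mimercado-a -o jsonpath='{.data.API_KEY}' | base64 -d
```

If both values show up under a single secret, envFrom would happily inject both of them.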
containerPort only tells Kubernetes which port your container is using, but not which port your node app should bind to. It shouldn't be possible for both containers to bind to the same port of the pod, but if you are running a deployment rather than a single pod, some pods of your deployment might have different containers bound to a specific port.
Upvotes: 1