Reputation: 62804
I created a new AKS cluster and deployed a simple nginx pod. All works well. Then I added a secret injected through the environment, and the ReplicaSet fails to create pods with the following error:
mark@L-R910LPKW:~/chip/toolbox/k8s [test ≡ +0 ~2 -0 !]$ k describe rs toolbox-78544646dd | tail -1
Warning FailedCreate 26s replicaset-controller Error creating: Internal error occurred: failed calling webhook "pods.env-injector.admission.spv.no": failed to call webhook: an error on the server ("{\"response\":{\"uid\":\"2e772ecb-e618-42f8-9273-a43a5b17ac52\",\"allowed\":false,\"status\":{\"metadata\":{},\"status\":\"Failure\",\"message\":\"failed to get auto cmd, error: GET https://app541deploycr.azurecr.io/oauth2/token?scope=repository%3Achip%2Ftoolbox%3Apull\\u0026service=app541deploycr.azurecr.io: UNAUTHORIZED: authentication required, visit https://aka.ms/acr/authorization for more information.\\ncannot fetch image descriptor\\ngithub.com/SparebankenVest/azure-key-vault-to-kubernetes/pkg/docker/registry.getImageConfig\\n\\t/go/src/github.com/SparebankenVest/azure-key-vault-to-kubernetes/pkg/docker/registry/registry.go:144\\ngithub.com/SparebankenVest/azure-key-vault-to-kubernetes/pkg/docker/registry.(*Registry).GetImageConfig\\n\\t/go/src/github.com/SparebankenVest/azure-key-vault-to-kubernetes/pkg/docker/registry/registry.go:103\\nmain.getContainerCmd\\n\\t/go/src/github.com/SparebankenVest/azure-key-vault-to-kubernetes/cmd/azure-keyvault-secrets-webhook/registry.go:39\\nmain.podWebHook.mutateContainers\\n\\t/go/src/github.com/SparebankenVest/azure-key-vault-to-kubernetes/cmd/azure-keyvault-secrets-webhook/pod.go:143\\nmain.podWebHook.mutatePodSpec\\n\\t/go/src/github.com/SparebankenVest/azure-key-vault-to-kubernetes/cmd/azure-keyvault-secrets-webhook/pod.go:299\\nmain.vaultSecretsMutator\\n\\t/go/src/github.com/SparebankenVest/azure-key-vault-to-kubernetes/cmd/azure-keyvault-secrets-webhook/main.go:163\\ngithub.com/slok/kubewebhook/pkg/webhook/mutating.MutatorFunc.Mutate\\n\\t/go/pkg/mod/github.com/slok/[email protected]/pkg/webhook/mutating/mutator.go:25\\ngithub.com/slok/kubewebhook/pkg/webhook/mutating.mutationWebhook.mutatingAdmissionReview\\n\\t/go/pkg/mod/github.com/slok/[email protected]/pkg/webhook/mutating/webhook.go:128\\ngithub.com/slok/kubewebhook/pkg/webhook/mutating.mutationWebhook.Review\\n\\t/go/pkg/mod/github.com/slok/[email protected]/pkg/webhook/mutating/webhook.go:120\\ngithub.com/slok/kubewebhook/pkg/webhook/internal/instrumenting.(*Webhook).Review\\n\\t/go/pkg/mod/github.com/slok/[email protected]/pkg/webhook/internal/") has prevented the request from succeeding
mark@L-R910LPKW:~/chip/toolbox/k8s [test ≡ +0 ~2 -0 !]$
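The failing call is visible in the stack trace: the webhook tries to fetch a pull token from ACR so it can read the image config. The same 401 can be reproduced from anywhere with curl (no credentials attached, so UNAUTHORIZED is expected; this only illustrates the request the webhook makes):

curl -i "https://app541deploycr.azurecr.io/oauth2/token?scope=repository:chip/toolbox:pull&service=app541deploycr.azurecr.io"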
This has all the markers of the issue described here: https://akv2k8s.io/installation/with-aad-pod-identity. But trying to fix it as described does not work:
mark@L-R910LPKW:~/chip/toolbox/k8s [test ≡ +0 ~2 -0 !]$ helm -n akv2k8s upgrade akv2k8s akv2k8s/akv2k8s --set addAzurePodIdentityException=true
Error: UPGRADE FAILED: [resource mapping not found for name: "akv2k8s-controller-exception" namespace: "akv2k8s" from "": no matches for kind "AzurePodIdentityException" in version "aadpodidentity.k8s.io/v1"
ensure CRDs are installed first, resource mapping not found for name: "akv2k8s-env-injector-exception" namespace: "" from "": no matches for kind "AzurePodIdentityException" in version "aadpodidentity.k8s.io/v1"
ensure CRDs are installed first]
mark@L-R910LPKW:~/chip/toolbox/k8s [test ≡ +0 ~2 -0 !]$
So it does not work either way.
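If I read the Helm error correctly, the chart tries to create AzurePodIdentityException resources, and the CRD for those is shipped by AAD Pod Identity, which is not installed on this cluster. Assuming the CRD name from the aad-pod-identity project, this should confirm it is missing (I expect a NotFound here):

kubectl get crd azurepodidentityexceptions.aadpodidentity.k8s.io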
The AKS cluster is deployed with our Terraform code; the cluster version is 1.25.4.
akv2k8s
resource "helm_release" "akv2k8s" {
name = "akv2k8s"
chart = "akv2k8s"
version = "2.3.2"
create_namespace = true
namespace = "akv2k8s"
repository = "http://charts.spvapi.no"
}
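Note that the kubelet itself pulls the image without trouble (the pod ran fine before the secret was added), so I assume the usual AcrPull wiring between the cluster and the registry is in place. For reference, this is the standard way it is granted (names are ours; the command is shown only for context):

az aks update --name <cluster-name> --resource-group <resource-group> --attach-acr app541deploycr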
Our app manifests
mark@L-R910LPKW:~/chip/toolbox/k8s [test ≡ +0 ~2 -0 !]$ helm get manifest toolbox
---
# Source: chip-toolbox/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: toolbox
  name: toolbox
  namespace: chip
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: toolbox
---
# Source: chip-toolbox/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: toolbox
  name: toolbox
  namespace: chip
spec:
  replicas: 1
  selector:
    matchLabels:
      app: toolbox
  template:
    metadata:
      labels:
        app: toolbox
    spec:
      containers:
      - name: toolbox
        image: app541deploycr.azurecr.io/chip/toolbox:1.0.23062.13
        env:
        - name: DUMMY_SECRET
          value: dummy@azurekeyvault
---
# Source: chip-toolbox/templates/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  labels:
    app: toolbox
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  name: toolbox
  namespace: chip
spec:
  ingressClassName: nginx-internal
  rules:
  - host: chip-can.np.dayforcehcm.com
    http:
      paths:
      - path: /toolbox(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: toolbox
            port:
              number: 80
  tls:
  - hosts:
    - chip-can.np.dayforcehcm.com
---
# Source: chip-toolbox/templates/akv.yaml
apiVersion: spv.no/v1
kind: AzureKeyVaultSecret
metadata:
  name: secret
  namespace: chip
spec:
  vault:
    name: c541chip
    object:
      name: dummy
      type: secret
mark@L-R910LPKW:~/chip/toolbox/k8s [test ≡ +0 ~2 -0 !]$
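For completeness: per the akv2k8s docs, the env-injector only mutates pods in namespaces labeled azure-key-vault-env-injection: enabled, and since the webhook clearly fires for this pod, the label should already be on the namespace. It can be verified with:

kubectl get namespace chip --show-labels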
The toolbox image is just nginx:alpine-slim with a few networking tools in it.
I can provide the AKS configuration and any logs on demand; I just do not know what is useful up front.
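For example, the env-injector webhook logs can be pulled like this (the pod name is whatever the chart created in the akv2k8s namespace):

kubectl -n akv2k8s get pods
kubectl -n akv2k8s logs <env-injector-pod-name>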
Additional context
The Terraform code we use to deploy the Helm charts used to deploy the AAD Pod Identity Helm chart in the past, but that chart was removed from the code and was never applied to the new cluster. So it is a mystery to me why this happens in the first place.
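To rule out leftovers from that old chart, I would expect both of these to come back empty (assuming aadpodidentity.k8s.io is the API group that chart installed):

kubectl get crds | grep aadpodidentity
kubectl get pods -A | grep aad-pod-identity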
I have also opened a bug report here: https://github.com/SparebankenVest/azure-key-vault-to-kubernetes/issues/495
Upvotes: 1
Views: 1512