user826323

Reputation: 2338

Error from Azure AKS integration with Key Vault

I am getting the following error for AKS integration with Key Vault.

Error: UPGRADE FAILED: rendered manifests contain a resource that already exists. Unable to continue with update: SecretProviderClass "azure-keyvault-integration" in namespace "project1" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "project1-dev"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "project1"

helm.go:84: [debug] SecretProviderClass "azure-keyvault-integration" in namespace "project1" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "project1-dev"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "project1"

The error above is thrown by our script for the helm upgrade command, which is as follows.

    ENV=$1
    NAMESPACE=$2
    HELM_RELEASE=$3
    
    output=$(helm upgrade --install \
      -f ./values/${ENV}/values.${ENV}.yaml \
      -f ./values/${ENV}/values.${ENV}.autoscaling.yml \
      -f secrets://values/${ENV}/secrets.${ENV}.enc.yaml \
      ${HELM_RELEASE} charts/project1 \
      --create-namespace --namespace ${NAMESPACE} \
      --dry-run --debug 2>&1)

This is secret-provider-class.yaml.

    {{- if .Values.keyvault.enabled }}
    apiVersion: secrets-store.csi.x-k8s.io/v1
    kind: SecretProviderClass
    metadata:
      name: azure-keyvault-integration # needs to be unique per namespace
    spec:
      provider: azure
      parameters:
        usePodIdentity: "false"
        userAssignedIdentityID: <userAssignedIdentityID>
        keyvaultName: <keyvaultName>
        useVMManagedIdentity: <useVMManagedIdentity>
        tenantId: <tenantId>
        subscriptionId: <subscriptionId>
        objects:  |
          array:
            - |
              objectName: OBJNAME1-VSC
              objectType: secret
              objectVersion: "4475eb5e1790e"
            - |
              objectName: OBJNAME2-VSC
              objectType: secret
              objectVersion: "d94449a99e9"
    
      secretObjects:
        - data:
          - key: API1_KEY
            objectName: OBJNAME1-VSC
          - key: API2_KEY
            objectName: OBJNAME2-VSC
          type: Opaque
          secretName: service1-secrets
    {{- end}}
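
One way to honor the "needs to be unique per namespace" comment without colliding with a resource owned by another release is to derive the name from the release itself. A sketch, assuming nothing else in the chart references the fixed name (the `-keyvault-integration` suffix here is hypothetical):

    {{- if .Values.keyvault.enabled }}
    apiVersion: secrets-store.csi.x-k8s.io/v1
    kind: SecretProviderClass
    metadata:
      # include the release name so each Helm release owns its own
      # SecretProviderClass instead of fighting over a shared one
      name: {{ .Release.Name }}-keyvault-integration
    {{- end }}

Any pod volume that refers to the class via its secretProviderClass field would need the same templated name.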

I checked the YAML file associated with the service1 service in Azure. I see this:

     labels:
        app.kubernetes.io/instance: project1-cloud-dev
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/name: service1
        app.kubernetes.io/version: 1.16.0
        helm.sh/chart: service1-0.1.0
      annotations:
        deployment.kubernetes.io/revision: '5'
        meta.helm.sh/release-name: project1-cloud-dev
        meta.helm.sh/release-namespace: project1-cloud
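
Note that the ownership metadata Helm complains about lives on the SecretProviderClass itself, not on the service1 Deployment shown above. It can be inspected directly; a sketch, assuming the resource and namespace names from the error message:

    # Show the labels and annotations Helm validates during upgrade
    kubectl get secretproviderclass azure-keyvault-integration -n project1 \
      -o jsonpath='{.metadata.labels}{"\n"}{.metadata.annotations}{"\n"}'
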

I have encrypted values in the secrets.${ENV}.enc.yaml file, but I am trying to remove those encrypted values from that file and switch to this integration instead.

Any help would be appreciated.

Upvotes: 3

Views: 144

Answers (1)

Dave Kelly

Reputation: 1

The existing resource has the annotation meta.helm.sh/release-name: project1-cloud-dev, whereas your error message expects meta.helm.sh/release-name: project1-dev.

Helm won't overwrite the existing resource by this name in this namespace if it's governed by a different release. The release you're attempting to use is project1-dev.

If you want to update it (and everything else in this helm chart) under the old release name, edit your helm upgrade command to reflect the old release name.

If your plan is to overwrite it in this new release, you can delete the existing resource with kubectl delete <resource-kind> <resource-name> -n <namespace>. Then your helm command should work.
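
Alternatively, instead of deleting, the existing resource can be adopted by the new release: Helm 3.2+ takes ownership of a pre-existing resource if it already carries the expected metadata. A sketch, assuming the names taken from the error message:

    # Hypothetical names, copied from the error message
    KIND=secretproviderclass
    NAME=azure-keyvault-integration
    NS=project1
    RELEASE=project1-dev

    # Add the ownership metadata Helm validates, so the next
    # helm upgrade --install adopts the resource into the release
    kubectl label "$KIND" "$NAME" -n "$NS" \
      app.kubernetes.io/managed-by=Helm --overwrite
    kubectl annotate "$KIND" "$NAME" -n "$NS" \
      meta.helm.sh/release-name="$RELEASE" \
      meta.helm.sh/release-namespace="$NS" --overwrite

This preserves the existing resource (and any secrets already synced from it) while letting the upgrade proceed.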

If you need both releases, change namespaces.

edit: corrected formatting for kubectl command

Upvotes: 0
