harry123

Reputation: 910

helm not creating the resources

I have tried to run Helm for the first time. I have deployment.yaml, service.yaml and ingress.yaml files, along with values.yaml and Chart.yaml.

deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: abc
  namespace: xyz
  labels:
    app: abc
    app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
  replicas: 3
  template:
    spec:
      containers:
        - name: abc
          image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
          ports:
            - containerPort: 8080

service.yaml

apiVersion: v1
kind: Service
metadata:
  name: abc
  labels:
    app.kubernetes.io/managed-by: {{ .Release.Service }}
  namespace: xyz
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: {{ .Values.service.sslCert }}
spec:
  ports:
    - name: https
      protocol: TCP
      port: 443
      targetPort: 8080
    - name: http
      protocol: TCP
      port: 80
      targetPort: 8080
  type: ClusterIP
  selector:
    app: abc

ingress.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "haproxy-ingress"
  namespace: xyz
  labels:
    app.kubernetes.io/managed-by: {{ .Release.Service }}
  annotations:
    kubernetes.io/ingress.class: alb

From what I can see, I have not missed setting app.kubernetes.io/managed-by, but I still keep getting this error:

rendered manifests contain a resource that already exists. Unable to
continue with install: Service "abc" in namespace "xyz" exists and
cannot be imported into the current release: invalid ownership
metadata; label validation error: missing key
"app.kubernetes.io/managed-by": must be set to "Helm"; annotation
validation error: missing key "meta.helm.sh/release-name": must be set
to "abc"; annotation validation error: missing key
"meta.helm.sh/release-namespace": must be set to "default"

The chart renders correctly locally.

helm list --all --all-namespaces returns nothing. Please help.

Upvotes: 49

Views: 168540

Answers (9)

Thiago

Reputation: 91

To fix missing metadata errors on helm install or helm upgrade, you can:

  1. Use "helm template" to list all the manifests your "helm install" will apply;
  2. Annotate and label those resources with the metadata Helm needs in order to adopt them; then
  3. Run "helm install" (or upgrade).

Example:

## Listing manifests
manifests="$(helm template release-example chart-example/chart)"

## Prepare one "<kind> <name> -n <namespace>" line per rendered resource
to_handle=$(echo "${manifests}" | yq -N '. | .kind + " " + .metadata.name + " -n " + .metadata.namespace')

## Annotate and label each resource (|| true keeps going for resources that do not
## exist yet or are already annotated and labeled; no --overwrite, so existing values
## are never clobbered)
while IFS= read -r resource; do
    kubectl annotate ${resource} meta.helm.sh/release-name=release-example || true
    kubectl annotate ${resource} meta.helm.sh/release-namespace=namespace-example || true
    kubectl label ${resource} app.kubernetes.io/managed-by=Helm || true
done <<< "${to_handle}"

## Run helm install
helm install release-example chart-example/chart

Upvotes: 0

texasdave

Reputation: 756

Hope this helps: I had to delete two secrets that I had created manually but that were also being created by my helm command:

 helm upgrade \
    --install \
    --username '$oauthtoken' \
    --password "${NGC_API_KEY}" \
    -n ${NAMESPACE} \
    nv-ingest \
    --set imagePullSecret.create=true \
    --set imagePullSecret.password="${NGC_API_KEY}" \
    --set ngcSecret.create=true \
    --set ngcSecret.password="${NGC_API_KEY}" \
    --set image.repository="nvcr.io/ohlfw0olaadg/ea-participants/nv-ingest" \
    --set image.tag="24.08" \
    https://helm.ngc.nvidia.com/ohlfw0olaadg/ea-participants/charts/nv-ingest-0.3.5.tgz

This command creates two secrets, but I had already created them manually, and that's when I got the error.

Check your secrets:

kubectl get secrets -n <namespace>

Delete the conflicting ones:

kubectl delete secrets -n <namespace> <secret-name>

Then try your helm command again.

Upvotes: 0

Rotem jackoby

Reputation: 22198

The error below is quite common:

 label validation error: missing key "app.kubernetes.io/managed-by":
 must be set to "Helm"; annotation validation error: missing key
 "meta.helm.sh/release-name": must be set to ..

So I'll provide a slightly longer explanation and some context on the topic.


What happened?

It seems that you tried to create resources that already exist and were created outside of Helm (probably with kubectl).

Why does Helm throw the error?

Helm doesn't allow a resource to be owned by more than one release.

It is the responsibility of the chart creator to ensure that the chart produces unique resources only.
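
To see what ownership metadata the existing resource currently carries, you can first inspect its labels and annotations (a quick diagnostic sketch using the names from the question):

kubectl get service abc -n xyz --show-labels
kubectl get service abc -n xyz -o jsonpath='{.metadata.annotations}'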

How can you solve this?

Option 1 - Follow the error message and add the meta.helm.sh annotations:

As described in this PR: Adopt resources into release with correct instance and managed-by labels

Helm will no longer error when attempting to create a resource that already exists in the target cluster if the existing resource has the correct meta.helm.sh/release-name and meta.helm.sh/release-namespace annotations, and matches the label selector app.kubernetes.io/managed-by=Helm.
This facilitates zero-downtime migrations to Helm 3 for managing existing deployments, and allows Helm to "adopt" existing resources that it previously created.
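
For the Service from the question, adopting it would look roughly like this (a sketch using the values the error message asks for; adjust the release name and namespace to whatever you actually pass to helm install):

# Values taken from the error message: release "abc", release namespace "default"
kubectl -n xyz label service abc app.kubernetes.io/managed-by=Helm
kubectl -n xyz annotate service abc meta.helm.sh/release-name=abc
kubectl -n xyz annotate service abc meta.helm.sh/release-namespace=default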

(*) I think that the meta.helm.sh scope is a less common approach today.

Option 2 - Add the app.kubernetes.io/instance label:

As can be seen in charts from different providers (Bitnami, the NGINX ingress controller, and ExternalDNS, for example), the common approach is the combination of the two labels:

app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}

(*) Notice: there are some CD tools, like Argo CD, that automatically set the app.kubernetes.io/instance label and use it to determine which resources form the app.
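
As a rough illustration, the metadata block of the question's service.yaml would then look something like this (the rest of the manifest stays unchanged):

metadata:
  name: abc
  namespace: xyz
  labels:
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}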

Option 3 - Delete old resources.

This might apply in your specific case, where the old resources may no longer be needed.


For those who need some context

What are those labels?

Shared labels and annotations share a common prefix: app.kubernetes.io.
Labels without a prefix are private to users. The shared prefix ensures that shared labels do not interfere with custom user labels.

In order to take full advantage of using these labels, they should be applied on every resource object.

The app.kubernetes.io/managed-by label is used to describe the tool being used to manage the operation of an application - for example: helm.

Read more in the Recommended Labels section.

Are they added by helm?

No.
First of all, as mentioned before, those labels are not specific to Helm and Helm itself never requires that a particular label be present.

On the other hand, the Helm docs recommend using the following Standard Labels. app.kubernetes.io/managed-by is one of them and should be set to {{ .Release.Service }} in order to find all resources managed by Helm.

So it is the role of the chart maintainer to add those labels.
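
With that label in place you can, for example, list everything Helm manages (a quick illustration; note that kubectl get all only covers the common resource kinds):

kubectl get all --all-namespaces -l app.kubernetes.io/managed-by=Helm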

What is the best way to add them?

Many Helm chart providers add them to the _helpers.tpl file and have all resources include it:

labels: {{ include "my-chart.labels" . | nindent 4 }}
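
A minimal sketch of what such a helper could look like in _helpers.tpl (the helper name my-chart.labels is just the example used above):

{{/* Common labels shared by every resource in the chart */}}
{{- define "my-chart.labels" -}}
app.kubernetes.io/name: {{ .Chart.Name }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}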

Upvotes: 45

Baga

Reputation: 1442

For us, we also had to delete the ServiceAccount linked to the deployment to fix the issue.

$ kubectl delete service -n <namespace> <service-name>
$ kubectl delete deployment -n <namespace> <deployment-name>
$ kubectl delete ingress -n <namespace> <ingress-name>
$ kubectl delete serviceaccount -n <namespace> <serviceaccount-name>

Upvotes: 1

deadlydog

Reputation: 24434

We use GitOps via Flux, and I was getting the same rendered manifests contain a resource that already exists error. For me, the problem was that I had accidentally defined a resource with the same name in two different files, so it was trying to create it twice. I removed the duplicate resource definition from one of the files to fix it.

Upvotes: 0

Raghavendra V

Reputation: 31

I was getting this error because I was trying to upgrade the Helm chart with the wrong release name, so it conflicted with the existing resources in the same namespace.

I was running this command with the wrong release name:

helm upgrade --install --namespace <namespace> wrong-releasename <chart-folder>

and got similar errors:

Error: rendered manifests contain a resource that already exists. Unable to continue with install: ConfigMap \"cmname\" in namespace \"namespace\" exists and cannot be imported into the current release

invalid ownership metadata; label validation error: missing key \"app.kubernetes.io/managed-by\": must be set to \"Helm\"; annotation validation error: missing key \"meta.helm.sh/release-name\": must be set to \"wrong-releasename\"; annotation validation error: missing key \"meta.helm.sh/release-namespace\": must be set to \"namespace\"

I checked the existing Helm releases in the same namespace and used the listed release name to upgrade my Helm chart:

helm ls -n <namespace>
helm upgrade --install --namespace <namespace> releasename <chart-folder>

Upvotes: 2

systemBuilder

Reputation: 47

Here's a faster and more thorough way to get rid of Argo CD so it can be reinstalled:

helm list -A   # see argocd in namespace argocd
helm uninstall argocd -n argocd
kubectl delete namespace argocd

The last line gets rid of all the secrets and other resources not cleaned up by uninstalling the Helm chart. It was needed in my environment; otherwise I got the same sorts of errors about duplicate resources that you were seeing.

Upvotes: 1

Kamol Hasan

Reputation: 13546

You already have some resources, e.g. the Service abc in the given namespace xyz, that you're now trying to install via a Helm chart.

Delete those and install them via helm install.

$ kubectl delete service -n <namespace> <service-name>
$ kubectl delete deployment -n <namespace> <deployment-name>
$ kubectl delete ingress -n <namespace> <ingress-name>

Once you have these resources deployed via Helm, you will be able to run helm upgrade to change their properties.

Remove the "app.kubernetes.io/managed-by" label from your YAML files; this will be added by Helm.
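
With the names from the question, that sequence would be roughly as follows (a sketch; substitute your actual release name and chart folder):

$ kubectl delete service -n xyz abc
$ kubectl delete deployment -n xyz abc
$ kubectl delete ingress -n xyz haproxy-ingress
$ helm install -n xyz <release-name> <chart-folder>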

Upvotes: 44

Deb

Reputation: 663

The trick here is to chase the error message. For example, in the case below the error message points at something wrong with the Service 'abc' in namespace 'xyz':

Unable to
continue with install: Service "abc" in namespace "xyz" exists and
cannot be imported into the current release: invalid ownership
metadata; label validation error: missing key
"app.kubernetes.io/managed-by": must be set to "Helm"; annotation
validation error: missing key "meta.helm.sh/release-name": must be set
to "abc"; annotation validation error: missing key
"meta.helm.sh/release-namespace": must be set to "default"

Simply delete that Service from the mentioned namespace with the command below:

kubectl -n xyz delete svc abc

Then try the installation/deployment again. It may happen that a similar issue appears, but for a different resource, as shown in the example below:

Release "nok-sec-sip-tls-crd" does not exist. Installing it now.
Error: rendered manifests contain a resource that already exists. Unable to continue with install: Role "nok-sec-sip-tls-crd-role" in namespace "debu" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "nok-sec-sip-tls-crd": current value is "nok-sec-sip"

Again, use kubectl to delete the resource mentioned in the error message. For example, in the case above the offending resource should be deleted with the command below:

kubectl delete role nok-sec-sip-tls-crd-role -n debu

Upvotes: 9
