pixel

Reputation: 26441

Invalid spec selector after upgrading helm template

I've upgraded the Helm templates (by hand).

Fragment of the previous deployment.yaml:

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: {{ template "measurement-collector.fullname" . }}
  labels:
    app: {{ template "measurement-collector.name" . }}
    chart: {{ template "measurement-collector.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ template "measurement-collector.name" . }}
      release: {{ .Release.Name }}

New one:

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: {{ include "measurement-collector.fullname" . }}
  labels:
    app.kubernetes.io/name: {{ include "measurement-collector.name" . }}
    helm.sh/chart: {{ include "measurement-collector.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ include "measurement-collector.name" . }}
      app.kubernetes.io/instance: {{ .Release.Name }}

New service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: {{ include "measurement-collector.fullname" . }}
  labels:
    app.kubernetes.io/name: {{ include "measurement-collector.name" . }}
    helm.sh/chart: {{ include "measurement-collector.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: {{ include "measurement-collector.name" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}

Then after running:

helm upgrade -i measurement-collector chart/measurement-collector --namespace prod --wait

I get:

Error: UPGRADE FAILED: Deployment.apps "measurement-collector" is invalid: spec.selector: Invalid value: 
v1.LabelSelector{MatchLabels:map[string]string{"app.kubernetes.io/name":"measurement-collector", "app.kubernetes.io/instance":"measurement-collector"},         
MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable

Upvotes: 32

Views: 42111

Answers (3)

TigerBear

Reputation: 2824

If you change the selector labels, you will need to delete the existing Deployment first before deploying again:

kubectl delete deploy <deployment-name>
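
Applied to the question above, the full sequence would look something like this (assuming the release and Deployment names from the original helm upgrade command; note the delete removes the running Pods, so expect a brief outage):

kubectl -n prod delete deployment measurement-collector
helm upgrade -i measurement-collector chart/measurement-collector --namespace prod --wait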

Upvotes: 71

PatS

Reputation: 11464

The other answers are correct, but I didn't understand them at first because I was dealing with a Helm chart. In my case, I had changed the Helm chart name and got a similar error.

The change I made in my helm/Chart.yaml was

apiVersion: v2
name: my-app  ==> was changed to ==> my-new-app
description: A Helm chart for Kubernetes

I didn't realize that the chart name is used to name other things in the templates. In particular, the Deployment was changed as well. For example:

# Source: my-app/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: my-app       <<< THIS became my-new-app
      app.kubernetes.io/instance: my-app   <<< THIS became my-new-app

So the error was actually related to something that I didn't directly edit, which is why (to me) the error message was confusing.

By changing the chart name, I changed the selector, and that is not allowed when running helm upgrade. I also realize the underlying reason helm upgrade fails is that Kubernetes doesn't support changing selectors.
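
One way to catch this before upgrading is to render the chart locally and inspect the selector it would produce (a quick check, assuming Helm 3's helm template syntax and the helm/ chart directory mentioned above):

# Render the chart locally and look at the generated selectors
helm template my-new-app ./helm | grep -A 3 'matchLabels'

If the rendered matchLabels differ from what is currently deployed, the upgrade will fail with the "field is immutable" error.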

Upvotes: 1

mipo256

Reputation: 3150

Though @TigerBear's answer is correct, I think it needs to be explained in a bit more detail. This problem is caused by a simple reason: the immutability of selectors. You cannot update selectors for (I am not sure this is the complete list, feel free to correct me):

  1. ReplicaSets
  2. Deployments
  3. DaemonSets

In other words, if, for example, you have a Deployment with the label 'my-app: ABC' in its selector, then change that selector label to 'my-app: XYZ' and simply apply the changes, e.g. like this:

kubectl apply -f deployment-file-name.yml

it will not work; you have to recreate the Deployment.
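
For that example, recreating could look like this (it deletes the running Pods, so expect downtime):

kubectl delete -f deployment-file-name.yml
kubectl apply -f deployment-file-name.yml

Alternatively, kubectl replace --force -f deployment-file-name.yml performs the delete and recreate in one step.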

See the related GitHub Kubernetes issue; there is also a short note about this in the Deployment documentation.

Upvotes: 14
