chr0nk

Reputation: 27

Kubernetes Daemon Set not working due to missing spec.selector

I'm currently following this guide: https://dzone.com/articles/export-kubernetes-logs-to-azure-log-analytics-with to install fluentbit on my system and send logs to a LogAnalytics workspace.

The guide is based on a previous version of Kubernetes. I'm currently running Kubernetes 1.18, and most of the code needed small tweaks to work with this version (the secretKeyRef name must consist of lowercase letters only, the apiVersion needed to be apps/v1 instead of extensions/v1beta1, etc.).

Now I want to deploy a DaemonSet based on the guide; running the code from the guide gives me the following output:

error: error validating "fluent-bit-ds.yaml": error validating data: [ValidationError(DaemonSet.spec.selector): unknown field "k8s-app" in io.k8s.apimachinery.pkg.apis.meta.v1.LabelSelector, ValidationError(DaemonSet.spec.template.spec): unknown field "selector" in io.k8s.api.core.v1.PodSpec]; if you choose to ignore these errors, turn validation off with --validate=false

I've tried adding the k8s-app label myself and also tried adding a spec.selector, but that didn't work.

apiVersion: apps/v1
kind: DaemonSet
metadata: 
  name: fluent-bit
  namespace: logging
  labels: 
    k8s-app: fluent-bit-logging
    version: v1
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    k8s-app: fluent-bit-logging
  template:
    metadata:
      labels:
        k8s-app: fluent-bit-logging
        version: v1
        kubernetes.io/cluster-service: "true"
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "2020"
        prometheus.io/path: /api/v1/metrics/prometheus
    spec:
      selector:
        matchLabels:
          name: fluent-bit-logging
      containers:
      - name: fluent-bit
        image: fluent/fluent-bit:1.5.0
        imagePullPolicy: Always
        ports:
          - containerPort: 2020
        env:
        - name: FLUENT_AZURE_WORKSPACE_ID
          valueFrom:
            secretKeyRef:
              name: log-analytics
              key: WorkSpaceID 
        - name: FLUENT_AZURE_WORKSPACE_KEY
          valueFrom:
            secretKeyRef:
              name: log-analytics 
              key: WorkSpaceKey
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true 
        - name: fluent-bit-config
          mountPath: /fluent-bit/etc/
      terminationGracePeriodSeconds: 10
      volumes:
      - name: varlog
        hostPath: 
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: fluent-bit-config
        configMap:
          name: fluent-bit-config
      serviceAccountName: fluent-bit
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      - operator: "Exists"
        effect: "NoExecute"
      - operator: "Exists"
        effect: "NoSchedule"

Can anyone spot what I'm missing? I'm not very proficient in Kubernetes yet, so there might be a simple solution to what I'm overlooking.

Thanks in advance!

Upvotes: 0

Views: 1760

Answers (1)

Kamol Hasan

Reputation: 13556

In a DaemonSet, you must specify a pod selector (.spec.selector) that matches the pod template's labels (.spec.template.metadata.labels). Ref

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
spec:
  selector:  # <-------- this one
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels: # <------ matches this one
        name: fluentd-elasticsearch
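The matching rule above can be sketched in Python (a hedged illustration of selector semantics, not Kubernetes code): matchLabels is effectively a subset check, so every key/value pair in the selector must appear among the pod template's labels, while extra pod labels are fine.

```python
def match_labels(selector: dict, pod_labels: dict) -> bool:
    """Return True if every key/value pair in the selector
    is also present in the pod labels (subset check)."""
    return all(pod_labels.get(k) == v for k, v in selector.items())

# Labels taken from the manifest in the question.
pod_labels = {
    "k8s-app": "fluent-bit-logging",
    "version": "v1",
    "kubernetes.io/cluster-service": "true",
}

# The selector only needs to match a subset of the pod labels.
print(match_labels({"k8s-app": "fluent-bit-logging"}, pod_labels))  # True

# A selector keyed on a label the pods don't carry never matches,
# which is why selector and template labels must agree.
print(match_labels({"name": "fluent-bit-logging"}, pod_labels))  # False
```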

Fixes:

  • spec.template.spec.selector isn't a valid field; remove it.
  • spec.selector consists of two fields, matchLabels and matchExpressions.
apiVersion: apps/v1
kind: DaemonSet
metadata: 
  name: fluent-bit
  namespace: logging
  labels: 
    k8s-app: fluent-bit-logging
    version: v1
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    matchLabels: # <------- added 
      k8s-app: fluent-bit-logging
  template:
    metadata:
      labels:
        k8s-app: fluent-bit-logging
        version: v1
        kubernetes.io/cluster-service: "true"
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "2020"
        prometheus.io/path: /api/v1/metrics/prometheus
    spec:
      # selector:        <--- removed
      #   matchLabels:
      #     name: fluent-bit-logging
      containers:
      - name: fluent-bit
        image: fluent/fluent-bit:1.5.0
        imagePullPolicy: Always
        ports:
          - containerPort: 2020
        env:
        - name: FLUENT_AZURE_WORKSPACE_ID
          valueFrom:
            secretKeyRef:
              name: log-analytics
              key: WorkSpaceID 
        - name: FLUENT_AZURE_WORKSPACE_KEY
          valueFrom:
            secretKeyRef:
              name: log-analytics 
              key: WorkSpaceKey
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true 
        - name: fluent-bit-config
          mountPath: /fluent-bit/etc/
      terminationGracePeriodSeconds: 10
      volumes:
      - name: varlog
        hostPath: 
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: fluent-bit-config
        configMap:
          name: fluent-bit-config
      serviceAccountName: fluent-bit
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      - operator: "Exists"
        effect: "NoExecute"
      - operator: "Exists"
        effect: "NoSchedule"

Upvotes: 2
