jencoston

Reputation: 1362

How to install Helm 3 Chart on Air Gapped System

I am trying to install a Helm chart on an air-gapped system. The chart pulls an image from a private Docker registry and deploys to the Kubernetes cluster (both are also on the air-gapped system). The chart passes linting, but when I try to install it, I keep seeing this error:

$ helm install my-mongodb . --debug
install.go:172: [debug] Original chart version: ""
install.go:189: [debug] CHART PATH: /opt/helm-charts/my-mongodb

Error: unable to build kubernetes objects from release manifest: the server could not find the requested resource
helm.go:94: [debug] the server could not find the requested resource
unable to build kubernetes objects from release manifest
helm.sh/helm/v3/pkg/action.(*Install).Run
    /home/circleci/helm.sh/helm/pkg/action/install.go:257
main.runInstall
    /home/circleci/helm.sh/helm/cmd/helm/install.go:242
main.newInstallCmd.func2
    /home/circleci/helm.sh/helm/cmd/helm/install.go:120
github.com/spf13/cobra.(*Command).execute
    /go/pkg/mod/github.com/spf13/[email protected]/command.go:842
github.com/spf13/cobra.(*Command).ExecuteC
    /go/pkg/mod/github.com/spf13/[email protected]/command.go:950
github.com/spf13/cobra.(*Command).Execute
    /go/pkg/mod/github.com/spf13/[email protected]/command.go:887
main.main
    /home/circleci/helm.sh/helm/cmd/helm/helm.go:93
runtime.main
    /usr/local/go/src/runtime/proc.go:203
runtime.goexit
    /usr/local/go/src/runtime/asm_amd64.s:1373

Helm is installed:

$ helm version
version.BuildInfo{Version:"v3.3.4", GitCommit:"a61ce5633af99708171414353ed49547cf05013d", GitTreeState:"clean", GoVersion:"go1.14.9"}

So is Kubernetes:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"269f928217957e7126dc87e6adfa82242bfe5b1e", GitTreeState:"clean", BuildDate:"2017-06-06T13:19:52Z", GoVersion:"go1.7.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"269f928217957e7126dc87e6adfa82242bfe5b1e", GitTreeState:"clean", BuildDate:"2017-06-06T13:19:52Z", GoVersion:"go1.7.6", Compiler:"gc", Platform:"linux/amd64"}

$ kubectl config get-clusters
NAME
dev-cluster

$ kubectl get node -n kube-system
NAME          STATUS    AGE
kube-node01   Ready     14d
kube-node02   Ready     14d
kube01        Ready     14d

Here is my Helm Chart.yaml:

apiVersion: v2
name: my-mongodb
description: A Helm chart for My MongoDB Kubernetes
type: application
version: 0.1.0
appVersion: 5.0.0-1
annotations:
  category: Database

And my values.yaml:

replicaCount: 1

image:
  repository: docker01.dev.local:5000/my-mongodb
  pullPolicy: Always
  pullSecrets: 
     - name: regcred  
     
serviceAccount:
  create: true
  annotations: {}
  name: ""

podAnnotations: {}

podSecurityContext: {}

securityContext: {}

service:
  type: ClusterIP
  port: 80

ingress:
  enabled: false
  annotations: {}
  hosts:
    - host: chart-example.local
      paths: []
  tls: []

resources: {}

autoscaling:
  enabled: false
  minReplicas: 1
  maxReplicas: 100
  targetCPUUtilizationPercentage: 80

nodeSelector: {}

tolerations: []

affinity: {}

My Docker image runs successfully in a container, so I'm fairly confident the image isn't the issue. What am I doing wrong?

EDIT: I am running the helm install command from /opt/helm-charts/my-mongodb, which is the directory containing the Chart.yaml file. The file structure is:

my-mongodb
 - Chart.yaml
 - values.schema.json
 - values.yaml
 - charts
 - templates
   - deployment.yaml
   - _helpers.tpl
   - hpa.yaml
   - ingress.yaml
   - NOTES.txt
   - serviceaccount.yaml
   - service.yaml
   - tests
     - test-connection.yaml

This chart was created with the command helm create my-mongodb, after which I edited the Chart.yaml, values.yaml, and deployment.yaml files.

Here is the deployment.yaml file as well:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "my-mongodb.fullname" . }}
  labels:
    {{- include "my-mongodb.labels" . | nindent 4 }}
spec:
{{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
{{- end }}
  selector:
    matchLabels:
      {{- include "my-mongodb.selectorLabels" . | nindent 6 }}
  template:
    metadata:
    {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
    {{- end }}
      labels:
        {{- include "my-mongodb.selectorLabels" . | nindent 8 }}
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      serviceAccountName: {{ include "my-mongodb.serviceAccountName" . }}
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      containers:
        - name: {{ .Chart.Name }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
            - name: mongoDB
              containerPort: 27017
              protocol: TCP
            - name: mongoDBShardSvr
              containerPort: 27018
              protocol: TCP
            - name: mongoDBConfigSvr
              containerPort: 27019
              protocol: TCP                                         
          livenessProbe:
            httpGet:
              path: /
              port: http
          readinessProbe:
            httpGet:
              path: /
              port: http
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}

Upvotes: 2

Views: 3555

Answers (1)

Eduardo Baitello

Reputation: 11376

It is probably a version compatibility issue. Helm v3.3.x is not compatible with Kubernetes v1.5.2.

As per Supported Version Skew:

When a new version of Helm is released, it is compiled against a particular minor version of Kubernetes. For example, Helm 3.0.0 interacts with Kubernetes using the Kubernetes 1.16.2 client, so it is compatible with Kubernetes 1.16.

|---------------------|--------------------------------------|
|      Helm Version   |     Supported Kubernetes Versions    |
|---------------------|--------------------------------------|
|         3.4.x       |           1.19.x - 1.16.x            |
|         3.3.x       |           1.18.x - 1.15.x            |
|         3.2.x       |           1.18.x - 1.15.x            |
|---------------------|--------------------------------------|
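You can verify the skew directly on the air-gapped host by comparing the client and server versions, and by listing which API groups the server actually serves (a sketch; the exact output depends on your cluster):

```shell
# Print the Helm client version (v3.3.4 in the question)
helm version --short

# Print the kubectl client and the API server versions (v1.5.2 in the question)
kubectl version --short

# List the API group/versions the server serves.
# On a v1.5 cluster, apps/v1 will NOT appear in this list,
# which is why the server "could not find the requested resource".
kubectl api-versions
```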

More than that, you are using apiVersion: apps/v1 for your Deployment. That API version was introduced in Kubernetes v1.9.0, so a v1.5.2 API server does not recognize it; that is exactly what the "the server could not find the requested resource" error means.
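For reference, on a pre-1.9 cluster a Deployment has to use one of the older API groups; on v1.5 that is extensions/v1beta1. This is just a sketch of the legacy header, not a recommendation: upgrading the cluster (and using a Helm version within the supported skew) is the proper fix.

```yaml
# Legacy Deployment header accepted by a Kubernetes v1.5 API server.
# apps/v1 only exists on v1.9+ servers.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-mongodb
```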

Upvotes: 2
