gKits

Reputation: 113

How to install a Helm chart with the Go Helm-SDK from a pod running within the cluster?

Problem

I have a Go project running in a Container on a Pod inside a Kubernetes cluster. I also have a local Helm Chart directory in that Container and I want to install it onto the cluster using the Helm-SDK.

I have used the Helm-SDK before to install Helm charts, but in that case the code was running outside of the cluster and I was able to use the kubeconfig file.
The Kubernetes client-go package offers the InClusterConfig option for exactly this situation: when the code runs on a pod inside the cluster, a Clientset can be created without a kubeconfig file (see the sketch below).
Unfortunately, after reading through the Helm documentation and going through some pieces of the code, I wasn't able to find a similar in-cluster option like the one in the Kubernetes client.

Does anybody have an idea how I could do this, or if it's even possible?
It should be, since the Helm-SDK runs the Kubernetes client under the hood.
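For comparison, this is the in-cluster pattern that client-go supports out of the box (a minimal sketch; the pod listing at the end is just an example call):

package main

import (
    "context"
    "log"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
)

func main() {
    // InClusterConfig reads the service account token and CA certificate
    // that Kubernetes mounts into every pod, so no kubeconfig file is needed.
    restConfig, err := rest.InClusterConfig()
    if err != nil {
        log.Fatal(err)
    }

    clientset, err := kubernetes.NewForConfig(restConfig)
    if err != nil {
        log.Fatal(err)
    }

    // Example call: list the pods in the "default" namespace.
    pods, err := clientset.CoreV1().Pods("default").List(context.Background(), metav1.ListOptions{})
    if err != nil {
        log.Fatal(err)
    }
    log.Printf("found %d pods", len(pods.Items))
}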


This is the usual way I installed a Helm chart via the SDK, by just passing the path to my kubeconfig:

func InstallChart(dir, release, namespace string) error {
    // Load the chart from a local directory.
    chart, err := loader.LoadDir(dir)
    if err != nil {
        return err
    }

    actionConfig := new(action.Configuration)
    if err := actionConfig.Init(
        // kube.GetConfig builds a RESTClientGetter from a kubeconfig file.
        kube.GetConfig(
            "/path/to/kubeconfig",
            "", // kube context; empty means the current context
            namespace,
        ),
        namespace,
        os.Getenv("HELM_DRIVER"),
        log.Printf,
    ); err != nil {
        return err
    }

    client := action.NewInstall(actionConfig)
    client.ReleaseName = release // Run fails without a release name
    client.Namespace = namespace

    if _, err := client.Run(chart, nil); err != nil {
        return err
    }

    return nil
}
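For reference, the second argument to client.Run is a values map that overrides the chart's defaults; a minimal sketch (the replicaCount key is hypothetical and assumes the chart's values.yaml defines it):

// Hypothetical override; "replicaCount" must exist in the chart's values.yaml.
vals := map[string]interface{}{
    "replicaCount": 2,
}
if _, err := client.Run(chart, vals); err != nil {
    return err
}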


When I looked through the docs, I saw that the kube.Client looks like this:

type Client struct {
    Factory Factory
    Log     func(string, ...interface{})
    // Namespace allows to bypass the kubeconfig file for the choice of the namespace
    Namespace string

    kubeClient *kubernetes.Clientset
}

and I was thinking that I might be able to manually create my own client from that struct and set the kubeClient field to the clientset I get from the Kubernetes client with

restConfig, _ := rest.InClusterConfig()
clientset, _ := kubernetes.NewForConfig(restConfig)

I tried doing that, but ran into problems with the Factory inside the kube.Client (and since the lowercase kubeClient field is unexported, it can't be set from outside the kube package anyway).
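As it turns out (and as the answer below makes use of), no kube.Client has to be built by hand at all: actionConfig.Init accepts anything that implements genericclioptions.RESTClientGetter. This is the signature from helm.sh/helm/v3/pkg/action:

func (cfg *Configuration) Init(
    getter genericclioptions.RESTClientGetter,
    namespace string,
    helmDriver string,
    log DebugLog,
) error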


Answer

Thanks to the solution from usman-malik, I was able to come up with this working mock-up.

#main.go

package main

import (
    "log"
    "os"

    "helm.sh/helm/v3/pkg/action"
    "helm.sh/helm/v3/pkg/chart/loader"
    "k8s.io/cli-runtime/pkg/genericclioptions"
)

func main() {
    err := InstallChart("./chart", "release", "default")
    log.Println(err)
}

func InstallChart(dir, release, namespace string) error {
    // loader.Load handles both chart directories and packaged .tgz archives.
    chart, err := loader.Load(dir)
    if err != nil {
        return err
    }

    actionConfig := new(action.Configuration)
    if err := actionConfig.Init(
        // ConfigFlags implements RESTClientGetter. With no kubeconfig
        // configured, it falls back to the in-cluster service account
        // credentials mounted into the pod.
        &genericclioptions.ConfigFlags{
            Namespace: &namespace,
        },
        namespace,
        os.Getenv("HELM_DRIVER"),
        log.Printf,
    ); err != nil {
        return err
    }

    client := action.NewInstall(actionConfig)
    client.ReleaseName = release
    client.Namespace = namespace

    // chart.Values holds the chart's own defaults from values.yaml.
    if _, err := client.Run(chart, chart.Values); err != nil {
        return err
    }

    return nil
}
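If the install should block until the chart's resources are ready, action.Install exposes a few more knobs that can be set right after NewInstall; a minimal sketch (the values are illustrative, and time would need to be imported):

client.CreateNamespace = true    // create the target namespace if it doesn't exist
client.Wait = true               // block until all resources report ready
client.Timeout = 5 * time.Minute // give up waiting after five minutes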

Then I needed a Dockerfile to build the image, making sure to also copy over the chart directory.

#Dockerfile

FROM golang:alpine AS builder

WORKDIR /build

# Cache dependency downloads separately from the source code.
COPY go.mod go.sum ./
RUN go mod download

COPY . .

# Static, stripped build so the binary can run in a scratch image.
ENV CGO_ENABLED=0 GOOS=linux GOARCH=amd64
RUN go build -ldflags="-s -w" -o app .

FROM scratch

# Copy the binary and the local chart directory into the final image.
COPY --from=builder ["/build/app", "/"]
COPY --from=builder ["/build/chart/", "/chart"]

ENTRYPOINT ["/app"]

After building the image with docker build -t helmtest . and loading it into minikube with minikube image load helmtest, I also needed to bind the correct permissions to the default service account (don't use the default SA unless it's just a test like here; see the sketch after the manifest).

#role.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: helm-role
  namespace: default
rules:
  # select your needed api groups
- apiGroups: ["", "apps", "batch", "networking.k8s.io", "extensions"]
  # select your needed resources
  # don't forget the according apiGroup above
  resources: ["deployments", "pods", "replicasets", "services", "ingresses", "configmaps", "persistentvolumeclaims", "persistentvolumes", "secrets"]
  # select your needed permissions (keep to a minimum)
  verbs: ["create", "get", "list", "delete", "watch", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: helm-role-binding
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: helm-role
subjects:
  - kind: ServiceAccount
    name: default
    namespace: default

Then finally, after applying the RoleBinding with kubectl apply -f role.yaml and creating a pod from the built image with kubectl run test --image=helmtest --image-pull-policy=Never --restart=Never, the Go app runs once and installs the locally stored Helm chart.
Once again, thanks to usman-malik.

Upvotes: 5

Views: 2347
