Arkon

Reputation: 2954

Run helm3 client from in-cluster

Helm 3 does not provide any way to create an action.Configuration structure if the code is running from within the cluster.

I have tried building my own generic flags:

// Load the in-cluster config from the pod's ServiceAccount.
config, err := rest.InClusterConfig()
if err != nil {
    panic(err)
}
insecure := false

// Helper to take the address of a string value.
stringptr := func(s string) *string { return &s }

genericConfigFlag := &genericclioptions.ConfigFlags{
    Timeout:          stringptr("0"),
    Insecure:         &insecure,
    APIServer:        stringptr(config.Host),
    CAFile:           stringptr(config.CAFile),
    BearerToken:      stringptr(config.BearerToken),
    ImpersonateGroup: &[]string{},
    Namespace:        stringptr(namespace),
}

actionConfig := &action.Configuration{
    RESTClientGetter: genericConfigFlag,
    KubeClient:       kube.New(genericConfigFlag),
    Log:              log.Infof,
}

Unfortunately, this results in a SIGSEGV later when running action.NewList(actionConfig).Run().

Is this the right way to define an action configuration for Helm 3 from within a Kubernetes cluster?

Upvotes: 3

Views: 1355

Answers (2)

Darshan Karia

Reputation: 11

This is what I did, and it works fine for me (using Helm 3.2.0-level SDK code):

Imports

import (
    "log"
    "os"

    "helm.sh/helm/v3/pkg/action"

    "k8s.io/cli-runtime/pkg/genericclioptions"
    "k8s.io/client-go/rest"
)

ActionConfig

func getActionConfig(namespace string) (*action.Configuration, error) {
    actionConfig := new(action.Configuration)
    // Create the rest config instance with the ServiceAccount values loaded into it
    config, err := rest.InClusterConfig()
    if err != nil {
        return nil, err
    }
    // Create the ConfigFlags struct instance with initialized values from the ServiceAccount
    kubeConfig := genericclioptions.NewConfigFlags(false)
    kubeConfig.APIServer = &config.Host
    kubeConfig.BearerToken = &config.BearerToken
    kubeConfig.CAFile = &config.CAFile
    kubeConfig.Namespace = &namespace
    if err := actionConfig.Init(kubeConfig, namespace, os.Getenv("HELM_DRIVER"), log.Printf); err != nil {
        return nil, err
    }
    return actionConfig, nil
}

Usage

actionConfig, err := getActionConfig(namespace)
if err != nil {
    log.Fatalln(err)
}
listAction := action.NewList(actionConfig)
releases, err := listAction.Run()
if err != nil {
    log.Println(err)
}
for _, release := range releases {
    log.Println("Release: " + release.Name + " Status: " + release.Info.Status.String())
}

It is not much different from what you originally did, except for the initialization of the actionConfig: actionConfig.Init also sets up the release storage backend (the Releases field), which is left nil in your snippet and is the likely cause of the SIGSEGV. It could also be that a newer version fixed some issues. Let me know if this works for you.

Upvotes: 1

Matt

Reputation: 8132

To run Helm 3 in-cluster you need to modify its source code. Here is the relevant function:

func (c *Configuration) KubernetesClientSet() (kubernetes.Interface, error) {
    conf, err := c.RESTClientGetter.ToRESTConfig()
    if err != nil {
        return nil, errors.Wrap(err, "unable to generate config for kubernetes client")
    }

    return kubernetes.NewForConfig(conf)
}

Change the line conf, err := c.RESTClientGetter.ToRESTConfig() to conf, err := rest.InClusterConfig() and recompile.
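
A minimal sketch of what the patched function would look like, assuming you also import "k8s.io/client-go/rest":

func (c *Configuration) KubernetesClientSet() (kubernetes.Interface, error) {
    // Use the pod's ServiceAccount configuration instead of the one
    // derived from the RESTClientGetter.
    conf, err := rest.InClusterConfig()
    if err != nil {
        return nil, errors.Wrap(err, "unable to generate config for kubernetes client")
    }

    return kubernetes.NewForConfig(conf)
}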

You can also modify the code so that the resulting binary is universal and can run out of cluster as well as in-cluster.
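
For example, a minimal sketch of such a fallback, using the standard client-go packages (the helper name getRESTConfig is hypothetical):

import (
    "k8s.io/client-go/rest"
    "k8s.io/client-go/tools/clientcmd"
)

// getRESTConfig prefers the in-cluster ServiceAccount configuration and
// falls back to the usual kubeconfig loading rules (KUBECONFIG, ~/.kube/config).
func getRESTConfig() (*rest.Config, error) {
    if config, err := rest.InClusterConfig(); err == nil {
        return config, nil
    }
    return clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
        clientcmd.NewDefaultClientConfigLoadingRules(),
        &clientcmd.ConfigOverrides{},
    ).ClientConfig()
}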

Let me know if it's helpful and if it solves your problem.

Upvotes: 0
