bZhang

Reputation: 307

How to run kubectl exec on scripts that never end

I have an mssql pod, and I need sql_exporter to export its metrics. I was able to set the whole thing up manually just fine:

  1. download the binary
  2. install the package
  3. run ./sql_exporter on the pod so it starts listening on a port for metrics

I tried to automate this using kubectl exec -it ... and was able to do steps 1 and 2. When I try to do step 3 with kubectl exec -it "$mssql_pod_name" -- bash -c ./sql_exporter, the command just hangs, which I understand, since the server is going to keep listening forever, but this blocks the rest of my installation scripts.

I0722 21:26:54.299112     435 main.go:52] Starting SQL exporter (version=0.5, branch=master, revision=fc5ed07ee38c5b90bab285392c43edfe32d271c5) (go=go1.11.3, user=root@f24ba5099571, date=20190114-09:24:06)
I0722 21:26:54.299534     435 config.go:18] Loading configuration from sql_exporter.yml
I0722 21:26:54.300102     435 config.go:131] Loaded collector "mssql_standard" from mssql_standard.collector.yml
I0722 21:26:54.300207     435 main.go:67] Listening on :9399
<nothing else, never ends>

Any tips on silencing this and letting it run in the background (I cannot Ctrl-C, as that would stop it listening on the port)? Or is there a better way to automate the plugin install upon pod deployment? Thank you

Upvotes: 0

Views: 898

Answers (1)

DazWilkin

Reputation: 40296

To answer your question:

This answer should help you. You should (!?) be able to use ./sql_exporter & to run the process in the background (when not using --stdin --tty). If that doesn't work, you can try nohup as described by the same answer.
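A minimal sketch of what that could look like, assuming the binary already sits in the container's working directory and $mssql_pod_name is set as in your script (note there is no --stdin/--tty here):

    # Start the exporter detached from the exec session so that kubectl exec
    # returns and the rest of the installation script can continue.
    # nohup keeps the process alive after the shell exits; its output goes to
    # a log file instead of the terminal.
    kubectl exec "$mssql_pod_name" -- \
      bash -c 'nohup ./sql_exporter > sql_exporter.log 2>&1 &'

If the exporter still dies when the session closes, detaching stdin as well (adding < /dev/null inside the quoted command) is the usual next thing to try.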

To recommend a better approach:

Using kubectl exec is not a good way to program a Kubernetes cluster.

kubectl exec is best used for debugging rather than deploying solutions to a cluster.

I assume someone has created a Kubernetes Deployment (or similar) for Microsoft SQL Server. You now want to complement that Deployment with the exporter.

You have options:

  1. Augment the existing Deployment and add sql_exporter as a sidecar (another container) in the Pod that includes the Microsoft SQL Server container. The exporter accesses the SQL Server via localhost. This is a common pattern when deploying functionality that complements an application (e.g. logging, monitoring); see the sketch after this list.
  2. Create a new Deployment (or similar) for sql_exporter and run it as a standalone Service. Configure it to scrape one|more Microsoft SQL Server instances.
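For option 1, a rough sketch of the relevant fragment of such a Deployment is below. The image names are placeholders/assumptions, and the exporter image is assumed to bundle sql_exporter.yml and mssql_standard.collector.yml in its working directory:

    # Fragment of the existing Deployment; only spec.template.spec is shown.
    spec:
      template:
        spec:
          containers:
            - name: mssql                                       # existing SQL Server container
              image: mcr.microsoft.com/mssql/server:2019-latest # assumption: whatever image the Deployment already uses
              ports:
                - containerPort: 1433
            - name: sql-exporter                                # the exporter added as a sidecar
              image: example/sql_exporter:0.5                   # placeholder image name
              ports:
                - containerPort: 9399                           # matches "Listening on :9399" above

Because both containers share the Pod's network namespace, the exporter's sql_exporter.yml can point its connection at localhost:1433, and Prometheus scrapes the Pod on port 9399.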

Both these approaches:

  • take more work but they're "more Kubernetes" solutions and provide better repeatability|auditability etc.
  • require that you create a container for sql_exporter (although I assume the exporter's authors already provide this).
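If no published image fits, the container can be a very thin wrapper around the binary you already downloaded. A hypothetical Dockerfile sketch (file names taken from the logs in the question; the base image is an arbitrary choice):

    # Package the exporter binary and its configuration into a small image.
    FROM debian:bookworm-slim
    WORKDIR /opt/sql_exporter
    # Binary and config files obtained in steps 1 and 2 of the question.
    COPY sql_exporter sql_exporter.yml mssql_standard.collector.yml ./
    EXPOSE 9399
    ENTRYPOINT ["./sql_exporter"]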

Upvotes: 1
