Reputation: 309
I want to watch changes to Pods continuously using the client-go Kubernetes SDK. I am using the code below to watch for changes:
func (c *Client) watchPods(namespace string, restartLimit int) {
    fmt.Println("Watch Kubernetes Pods")
    watcher, err := c.Clientset.CoreV1().Pods(namespace).Watch(context.Background(),
        metav1.ListOptions{
            FieldSelector: "",
        })
    if err != nil {
        fmt.Printf("error creating pod watcher: %v\n", err)
        return
    }
    for event := range watcher.ResultChan() {
        pod, ok := event.Object.(*corev1.Pod)
        if !ok || !checkValidPod(pod) {
            continue
        }
        owner := getOwnerReference(pod)
        _ = owner // used elsewhere in the real code; kept so the snippet compiles
        // note: `cs` instead of `c`, which would shadow the method receiver
        for _, cs := range pod.Status.ContainerStatuses {
            // RestartCount is a plain int32, so no reflection is needed
            if cs.RestartCount >= int32(restartLimit) {
                if cs.State.Waiting != nil && cs.State.Waiting.Reason == "CrashLoopBackOff" {
                    doSomething()
                }
                if cs.State.Terminated != nil {
                    doSomethingElse()
                }
            }
        }
    }
}
The code watches changes to the Pods, but it exits after some time. I want it to run continuously. I would also like to know how much load this puts on the API server, and what the best way is to run a control loop that looks for changes.
Upvotes: 2
Views: 1411
Reputation: 487
In Watch, a long-poll connection is established with the API server. Upon establishing the connection, the API server sends an initial batch of events and then streams any subsequent changes. The connection is dropped after a server-side timeout occurs, which is why your loop exits.
I would suggest using an Informer instead of setting up a watch, as it is much more optimized and easier to set up. When creating an informer, you register handler functions that are invoked when pods are created, updated, and deleted. As with a watch, you can target specific pods using a label selector. You can also create shared informers, which are shared across multiple controllers in the cluster; this reduces the load on the API server.
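A minimal sketch of such an informer, assuming you already have a `kubernetes.Interface` clientset (the handler bodies, the `default` resync period of 30 seconds, and the namespace wiring are illustrative choices):

```go
import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

// startPodInformer watches Pods in one namespace via a shared informer.
// The informer keeps a local cache and re-establishes its own watch, so
// you do not need a restart loop of your own.
func startPodInformer(clientset kubernetes.Interface, namespace string, stopCh <-chan struct{}) {
	factory := informers.NewSharedInformerFactoryWithOptions(
		clientset,
		30*time.Second, // resync period: handlers are periodically re-fired from the local cache
		informers.WithNamespace(namespace),
	)
	podInformer := factory.Core().V1().Pods().Informer()
	podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			if pod, ok := obj.(*corev1.Pod); ok {
				fmt.Printf("pod added: %s\n", pod.Name)
			}
		},
		UpdateFunc: func(oldObj, newObj interface{}) {
			if pod, ok := newObj.(*corev1.Pod); ok {
				fmt.Printf("pod updated: %s\n", pod.Name)
			}
		},
		DeleteFunc: func(obj interface{}) {
			if pod, ok := obj.(*corev1.Pod); ok {
				fmt.Printf("pod deleted: %s\n", pod.Name)
			}
		},
	})
	factory.Start(stopCh)            // runs the informer in a background goroutine
	factory.WaitForCacheSync(stopCh) // block until the initial list is cached
}
```

Your CrashLoopBackOff/restart-count check would then live inside `UpdateFunc`, operating on the pod's container statuses just as in your watch loop.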
Below are a few links to get you started:
Upvotes: 1