Reputation: 11
I am new to Go, and I have the following requirement:
I need to create a Go app that will be deployed on a k8s cluster and will watch for changes to the images of Deployments.
func main() {
	clientset, err := createKubernetesClient()
	if err != nil {
		log.Fatalf("Error creating Kubernetes client: %v", err)
	}
	gitlabClient, err := createGitLabClient()
	if err != nil {
		log.Fatalf("Failed to create GitLab client: %v", err)
	}
	updateGitLabFileAndFetchProject(clientset, gitlabClient)

	factory := informers.NewSharedInformerFactoryWithOptions(clientset, 0)
	informer := factory.Apps().V1().Deployments().Informer()

	stopper := make(chan struct{})
	defer close(stopper)
	defer runtime.HandleCrash()

	informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		UpdateFunc: func(oldObj, newObj interface{}) {
			onUpdate(clientset, gitlabClient, oldObj, newObj)
		},
		AddFunc: func(obj interface{}) {
			onAdd(clientset, gitlabClient, obj)
		},
	})

	go informer.Run(stopper)
	fmt.Println("Starting stopper")
	if !cache.WaitForCacheSync(stopper, informer.HasSynced) {
		runtime.HandleError(fmt.Errorf("timed out waiting for caches to sync"))
		return
	}
	<-stopper
}
func onUpdate(clientset *kubernetes.Clientset, gitlabClient *gitlab.Client, oldObj, newObj interface{}) {
	oldDepl := oldObj.(*v1.Deployment)
	newDepl := newObj.(*v1.Deployment)

	updateMutex.Lock()
	defer updateMutex.Unlock()

	// Compare containers by name and report any whose image changed.
	for _, oldContainer := range oldDepl.Spec.Template.Spec.Containers {
		for _, newContainer := range newDepl.Spec.Template.Spec.Containers {
			if oldContainer.Name == newContainer.Name && oldContainer.Image != newContainer.Image {
				fmt.Printf("OLD DEPLOYMENT %s IN NS %s UPDATED FROM IMAGE %s to NEW DEPLOYMENT %s IN NS %s TO IMAGE %s\n",
					oldDepl.Name, oldDepl.Namespace, oldContainer.Image,
					newDepl.Name, newDepl.Namespace, newContainer.Image)
				updateGitLabFileForSingleDeployment(clientset, gitlabClient, newDepl.Name, newDepl.Namespace, newContainer.Image)
			}
		}
	}
}
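(Not part of my original code, but for illustration: the nested loop above is quadratic in the number of containers. A hypothetical stdlib-only helper, with names I made up, could build name-to-image maps and diff them instead:)

```go
package main

import "fmt"

// imageDiff records a container whose image changed between two pod specs.
// All names here (imageDiff, diffImages) are illustrative, not from the post.
type imageDiff struct {
	Container string
	OldImage  string
	NewImage  string
}

// diffImages compares container images by container name and returns the
// containers whose image changed. Inputs are container-name -> image maps.
func diffImages(oldImages, newImages map[string]string) []imageDiff {
	var diffs []imageDiff
	for name, oldImg := range oldImages {
		if newImg, ok := newImages[name]; ok && newImg != oldImg {
			diffs = append(diffs, imageDiff{Container: name, OldImage: oldImg, NewImage: newImg})
		}
	}
	return diffs
}

func main() {
	oldImages := map[string]string{"app": "registry.example.com/app:v1"}
	newImages := map[string]string{"app": "registry.example.com/app:v2"}
	for _, d := range diffImages(oldImages, newImages) {
		fmt.Printf("%s: %s -> %s\n", d.Container, d.OldImage, d.NewImage)
	}
}
```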
With this, I was able to successfully detect image changes of deployments on a k3s cluster. However, when I ran the exact same code on the production cluster, the informer did not seem to recognize any image change (even when there was one); all I see in the logs is: Starting stopper.
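(One way to make a silently failing informer visible, assuming client-go v0.19 or later where `SharedIndexInformer` gained `SetWatchErrorHandler`: install a watch error handler before calling `Run`. If the prod service account lacks list/watch permission, the failure then shows up in your own logs instead of being swallowed. Sketch only, slotted in before `go informer.Run(stopper)`:)

```go
// Sketch: surface list/watch failures from the informer's reflector.
// Must be called before informer.Run; requires client-go >= v0.19.
if err := informer.SetWatchErrorHandler(func(r *cache.Reflector, err error) {
	log.Printf("informer watch error: %v", err)
}); err != nil {
	log.Fatalf("failed to set watch error handler: %v", err)
}
go informer.Run(stopper)
```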
I was able to get deployments with this curl from within the container where the application is deployed: curl -H "Authorization: Bearer $TOKEN" https://kubernetes.default.svc/apis/apps/v1/namespaces//deployments/ --insecure
So I think the ClusterRole, ClusterRoleBinding and ServiceAccount (which have the same config as on my local cluster) are configured correctly.
I can't figure out why the same code works on my local cluster but does not work on the prod cluster, even though the manifests are the same.
Thanks for the help.
Upvotes: 1
Views: 395
Reputation: 550
Since the app works when you test it locally, there may be a missing role or permission in the production cluster's RBAC configuration for the service account being used; this can depend on the platform you are running on. Note that your curl only proves the account can list deployments, while an informer also needs watch permission.
Furthermore, you can use commands like kubectl describe pod <pod> and kubectl logs <pod> to debug and get additional information about the error you are encountering.
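For reference, a minimal ClusterRole granting what an informer on Deployments needs would look roughly like this (the metadata name is a placeholder, not taken from your manifests):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: deployment-watcher   # placeholder name
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "watch"]
```

You can also check what the service account is actually allowed to do on the prod cluster with kubectl auth can-i watch deployments --as=system:serviceaccount:<namespace>:<serviceaccount>.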
Upvotes: 0