Reputation: 147
We have a full cluster running in production, and it suddenly stopped working with the following error:
The Deployment "authapi" is invalid: metadata.finalizers[0]: Invalid value: "foregroundDeletion": name is neither a standard
finalizer name nor is it fully qualified
My current client and server versions are:
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.3", GitCommit:"2c2fe6e8278a5db2d15a013987b53968c743f2a1", GitTreeState:"clean", BuildDate:"2017-08-03T07:00:21Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.6", GitCommit:"7fa1c1756d8bc963f1a389f4a6937dc71f08ada2", GitTreeState:"clean", BuildDate:"2017-06-16T18:21:54Z", GoVersion:"go1.7.6", Compiler:"gc", Platform:"linux/amd64"}
We also cannot deploy anything new. The following message appears when kubectl tries to deploy:
W1127 15:28:32.999978 42625 factory_object_mapping.go:423] Failed to download OpenAPI (the server could not find the requested
resource), falling back to swagger The Deployment "authapi" is invalid: metadata.finalizers[0]: Invalid value: "foregroundDeletion": name is neither a standard finalizer name nor is it fully qualified /home/builduser/myagent/_work/_temp/kubectlTask/1511796511792/kubectl failed with return code: 1
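For context, the foregroundDeletion finalizer is not part of the YAML below; it is set on the live object in the cluster. A quick way to inspect it (assuming the deployment lives in the production namespace used in the fix further down):

# Show any finalizers currently set on the live Deployment object
kubectl get deployment authapi --namespace=production -o 'jsonpath={.metadata.finalizers}'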
The YAML definition is shown below:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: authapi
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: authapi
    spec:
      containers:
        - name: authapi
          image: edgecontainerregistry.azurecr.io/portal.authapi:latest
          imagePullPolicy: Always
          resources:
            requests:
              cpu: 100m
          ports:
            - containerPort: 5006
          env:
            - name: ASPNETCORE_ENVIRONMENT
              valueFrom:
                configMapKeyRef:
                  name: aspnetcore-config
                  key: aspnetcore.env
      imagePullSecrets:
        - name: edgesecret
---
kind: Service
apiVersion: v1
metadata:
  name: authapi
spec:
  ports:
    - protocol: TCP
      port: 5006
      targetPort: 5006
  selector:
    app: authapi
  type: ClusterIP
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: authapi
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta1
    kind: Deployment
    name: authapi
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50
Any help on this?
Upvotes: 1
Views: 1974
Reputation: 81
I had this issue on a cluster running version 1.6.0 while trying to upgrade a service running in the cluster, and I could not upgrade the Kubernetes cluster itself (to get the bug fix) right away.
I checked the pods in the deployment and noticed that one was stuck in the "Terminating" state.
I described the pod to find the Kubernetes node it was running on, logged into that node, and found this error in the kubelet logs:
kuberuntime_manager.go:858] getPodContainerStatuses for pod "_default(781d0645-23d3-11e8-bcca-00505690014f)" failed: rpc error: code = 2 desc = unable to inspect docker image
After restarting Docker and the kubelet on that node, the stuck pod was gone and I could update the service without issues.
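Roughly the commands involved, as a sketch; the pod name and namespace are placeholders, and the restart commands assume Docker and the kubelet run as systemd services on the node:

# Find the pod stuck in Terminating and the node it is scheduled on
kubectl get pods --namespace=<namespace>
kubectl describe pod <stuck-pod> --namespace=<namespace> | grep -i 'node:'

# On that node, restart the container runtime and the kubelet
sudo systemctl restart docker
sudo systemctl restart kubelet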
Upvotes: 0
Reputation: 18111
This is a bug, fixed in 1.6.7+
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.6.md/#v167
Fix Invalid value: "foregroundDeletion" error when attempting to delete a resource. (#46500, @tnozicka)
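One quick way to check the server version of a cluster, to see whether it is older than v1.6.7:

# The Server Version line shows the cluster version
kubectl version | grep 'Server Version'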
Upvotes: 2
Reputation: 147
I solved the issue by running the following command:
kubectl delete deployments/authapi --namespace=production --force
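After the force delete, the deployment can be re-created from the original manifest; assuming it is saved as authapi.yaml:

# Re-create the Deployment (and the other objects in the file) after the force delete
kubectl apply -f authapi.yaml --namespace=production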
Upvotes: 0