Reputation: 9
I just started learning Kubernetes. Please help me out.
My assumption is that when a Deployment object creates a ReplicaSet based on its selector, any existing running pod whose labels match the selector used in the Deployment should be picked up and counted toward the replicas, but in my case below it is not. Please correct me if I am wrong; otherwise, please help me figure out where I am going wrong.
pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    type: frontend
spec:
  containers:
  - name: nginx-container
    image: nginx
deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  template:
    metadata:
      name: web
      labels:
        type: frontend
    spec:
      containers:
      - name: nginx-container
        image: nginx
  selector:
    matchLabels:
      type: frontend
  replicas: 2
I created the pod first, and then I am trying the Deployment with the same pod template.
kubectl get pods
NAME READY STATUS RESTARTS AGE
web 1/1 Running 0 22m
web-deployment-89d6bf94f-5dqxj 1/1 Running 0 13m
web-deployment-89d6bf94f-xrngx 1/1 Running 0 21m
Upvotes: 0
Views: 82
Reputation: 5583
If you look at the description of one of the pods created by the Deployment (for example, kubectl describe pod web-deployment-89d6bf94f-5dqxj), you will find that it has an additional label, pod-template-hash, which was added by the Deployment controller to its underlying ReplicaSet. Here is a little paragraph in the docs about how and why this label is set:
This label ensures that child ReplicaSets of a Deployment do not overlap. It is generated by hashing the PodTemplate of the ReplicaSet and using the resulting hash as the label value that is added to the ReplicaSet selector, Pod template labels, and in any existing Pods that the ReplicaSet might have.
So, basically, a pod that was created outside of the Deployment is not part of this Deployment's ReplicaSet.
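You can see this directly. Here is a minimal sketch; the hash value 89d6bf94f is taken from the pod names in your output, and the listings are abbreviated and illustrative:

# Standalone pod has only type=frontend; the Deployment's pods also carry pod-template-hash.
kubectl get pods --show-labels
NAME                             READY   STATUS    RESTARTS   AGE   LABELS
web                              1/1     Running   0          22m   type=frontend
web-deployment-89d6bf94f-5dqxj   1/1     Running   0          13m   pod-template-hash=89d6bf94f,type=frontend
web-deployment-89d6bf94f-xrngx   1/1     Running   0          21m   pod-template-hash=89d6bf94f,type=frontend

# The ReplicaSet's selector includes pod-template-hash, so the standalone "web" pod never matches it.
kubectl get rs web-deployment-89d6bf94f -o yaml
...
  selector:
    matchLabels:
      pod-template-hash: 89d6bf94f
      type: frontend
...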
Upvotes: 2
Reputation: 389
Why are you trying to override a pod specification via a Deployment?
Why not just use the Deployment?
If you want to edit a pod specification, why not edit it directly instead of trying to override it with a Deployment?
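For instance, a minimal sketch of that approach, assuming you simply want the Deployment to own all the nginx replicas:

# Delete the standalone pod and let the Deployment manage everything.
kubectl delete pod web

# If you want three nginx pods, scale the Deployment rather than running a separate pod alongside it.
kubectl scale deployment web-deployment --replicas=3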
Upvotes: 0