overexchange

Reputation: 1

Why do we need Pod replica set in Kubernetes?

An existing Pod (P) is running 3 containers for an API.

To scale Pod P horizontally:

Is it possible to add one (or n) more containers to an existing Pod (running 3 containers)?

or

Is the Pod ReplicaSet concept supposed to be applied in this scenario (to scale horizontally)?

Upvotes: 2

Views: 981

Answers (3)

coderanger

Reputation: 54211

No, you don't use multi-container Pods for scaling. Pods with multiple containers are for cases where you need multiple daemons running together (on the same hardware) for a single "instance". That's pretty rare for new users, so you almost certainly want 3 replicas of a Pod with one container.
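As a sketch of that setup (the names and image here are illustrative placeholders, not from the question), a Deployment running 3 replicas of a single-container Pod might look like:

```
# Illustrative Deployment: 3 Pods, each with one container.
# Name "api" and image "example/api:1.0" are hypothetical.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 3            # horizontal scale: 3 Pods, one container each
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: example/api:1.0   # placeholder image
          ports:
            - containerPort: 8080
```

Changing `replicas` (manually or via an autoscaler) adds or removes whole Pods; the Pod template itself stays a single container.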

Upvotes: 4

Harsh Manvar

Reputation: 30113

An HPA won't scale the containers inside a Pod.

Horizontal scaling means increasing the number of Pod replicas. Each Pod can still have three containers inside it; as user traffic rises or falls, we increase or decrease the number of Pods horizontally.

Is it possible to add one (or n) more containers to an existing Pod (running 3 containers)?

Yes, you can do it, but it isn't best practice: if one container in the Pod goes down or restarts, the whole Pod stops serving traffic once its readiness/liveness probes fail.

Is the Pod ReplicaSet concept supposed to be applied in this scenario (to scale horizontally)?

Yes, ideally that is the best option for scaling the Pods horizontally and handling high traffic. Based on the load, you might have to increase the replica count to anywhere from 100 to 500, so don't worry about the number of replicas.

Multiple containers in a single Pod are useful when the containers are interdependent or you want to run multiple daemon processes together.

For scaling, increasing the replicas horizontally is the best solution.
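A minimal sketch of that approach, assuming a Deployment named `api` already exists (the name and thresholds here are illustrative), is a HorizontalPodAutoscaler that adds or removes Pod replicas with load:

```
# Illustrative HPA: scales the number of Pods (not containers) on CPU load.
# Target name "api" and the utilization threshold are placeholders.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api
  minReplicas: 3
  maxReplicas: 500
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```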

Upvotes: 0

Mark Bramnik

Reputation: 42461

Like other people said, you don't really scale out by increasing the number of containers in the Pod.

Here are some reasons; the list is certainly incomplete, but I believe it's enough to give you an idea:

  • From the K8s standpoint, a Pod is "atomic": the smallest unit it can manipulate. In particular, it always runs the whole Pod on the same node; you can't run part of a Pod on one machine and another part on a different machine. That means all the containers inside a Pod compete for the same CPU, memory, and other resources of that machine. When you scale out, it should at least be possible to use other machines. Of course, Kubernetes can run the same Pod twice on the same machine, but it schedules a Pod only onto a machine that has the capacity for it, and all of this happens automatically.

  • Adding a new container to an existing Pod means recreating the Pod, because the container images are part of the Pod's definition. Again, ideally you want to scale out without any manual intervention; everything should happen automatically, so you certainly don't want to rewrite the Pod definition every time load changes.

  • I assume the containers running in the Pod do something "important" in business terms, so Kubernetes should know that they're up and running properly. Kubernetes has a concept of "probes": a readiness probe to make sure the Pod is ready to serve, and a liveness probe to make sure the Pod is still healthy. If you have one container, or at least one "business-oriented" container, in the Pod, which is what the k8s architecture assumes, you can implement the probe by exposing an endpoint in the application that k8s queries from time to time. If you have many containers running in the same Pod, what exactly would you probe? All the containers are equally important, so it becomes a tedious task.
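As a sketch of the probe idea above (the container name, image, and endpoint paths are hypothetical), a single-container Pod with both probes might look like:

```
# Illustrative probes on a single-container Pod.
# The /healthz and /ready endpoints are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: api
spec:
  containers:
    - name: api
      image: example/api:1.0   # placeholder image
      livenessProbe:           # is the process still healthy?
        httpGet:
          path: /healthz
          port: 8080
      readinessProbe:          # is it ready to receive traffic?
        httpGet:
          path: /ready
          port: 8080
```

With one business container per Pod, each probe has an unambiguous target; with many containers, you would have to define and reason about probes for each one.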

This list can go on and on, but the bottom line is: Pods are the unit of deployment in K8s's eyes. You can do various things with Pods, but you can't work at the level of individual containers. Scaling, including autoscaling, is one of the core K8s features, and it can be pretty advanced, so adhere to the K8s standards to benefit from it.

Upvotes: 1
