Reputation: 149
My development setup uses a microservices deployment in a Docker environment on an Amazon EC2 instance. Now, we are trying to implement Kubernetes in that EC2 instance to autoscale and load-balance the microservices deployed in Docker.
I saw one method of installing kubectl and minikube and setting up Kubernetes with a node and pods (I think this method is only for testing locally). Another source describes a lengthy process involving two to three EC2 instances, with one as the master and the others as workers with the same setup.
Can I have a Kubernetes setup on a master node alone and implement autoscaling and load balancing on that one node? Can I keep everything in the single EC2 instance where my microservices are?
Upvotes: 1
Views: 365
Reputation: 1948
Can I have Kubernetes setup on a master node alone and implement autoscaling and load balancing on that one node?
Yes, absolutely.
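As a minimal sketch of how that could look on a single instance with minikube (the deployment name `web` and the nginx image are placeholders, not from your setup):

```shell
# Start a single-node cluster and enable the metrics-server addon,
# which the Horizontal Pod Autoscaler needs for CPU metrics.
minikube start
minikube addons enable metrics-server

# Deploy a workload and expose it; the Service load-balances across pods.
kubectl create deployment web --image=nginx
kubectl expose deployment web --type=NodePort --port=80

# Autoscale between 2 and 5 replicas based on CPU utilization.
kubectl autoscale deployment web --min=2 --max=5 --cpu-percent=80
kubectl get hpa
```

Note that on a single node the "load balancing" only spreads traffic across pods on that same machine, not across machines.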
Can keep everything in a single EC2 instance where my microservices are?
Yes, you can. However, it is not a "best practice" if we are speaking about production.
My development setup uses a microservices deployment in a Docker environment on an Amazon EC2 instance. Now, we are trying to implement Kubernetes in that EC2 instance to autoscale and load-balance the microservices deployed in Docker.
I see the following issues with this approach.
I have 30 microservices running on docker.
It looks like that means at least 30 containers, which will most probably translate to around 30 pods. Please keep in mind that, in addition to the pods requested by the user, k8s runs around a dozen pods of its own in the kube-system namespace (DNS, kube-proxy, default-backend, etc.).
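You can see those system pods for yourself on any running cluster:

```shell
# List the pods Kubernetes runs for itself, alongside your workloads.
kubectl get pods -n kube-system
```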
Additionally, you have mentioned autoscaling, so there could be quite a lot of pods on that single node.
Limits.
K8s recommends a maximum number of 110 pods per node.
Up to this number, Kubernetes has been tested to work reliably on common node types.
Most managed k8s services even impose hard limits on the number of pods per node: for example, GKE defaults to 110 pods per node, and EKS caps the pod count per node based on the instance type.
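You can check what your own node will accept; this is a quick query against a live cluster:

```shell
# Show the pod capacity each node advertises (typically 110 by default).
kubectl get nodes -o jsonpath='{.items[*].status.capacity.pods}'
```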
Availability.
If you have only a few nodes, then the impact of a failing node is bigger than if you have many nodes. For example, if you have only two nodes and one of them fails, about half of your pods disappear. And if you have only one node (with etcd, the cluster configuration database, on it), a failure takes down the whole cluster. :)
So you aren't getting the benefit of rescheduling the workloads of failed nodes to other nodes.
If you have high-availability requirements, you might require a certain minimum number of nodes in your cluster.
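You can simulate a node failure on a multi-node cluster to see this rescheduling in action (`<node-name>` is a placeholder for one of your nodes):

```shell
# Evict all pods from a node, as if it had failed.
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data

# Watch the evicted pods come back up on the remaining nodes.
# On a single-node cluster there is nowhere for them to go.
kubectl get pods -o wide
```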
To sum it up: a single-node cluster is fine for development and testing, but it is not recommended for production.
Another source has a giant process of having two to three EC2 instances, having one as master and other as slave with the same setup.
There are a few ways to install k8s. I think you might want to check the kops tool. Additionally, there is a very comprehensive walkthrough called kubernetes-the-hard-way by Kelsey Hightower.
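To give a rough idea of what a kops setup looks like (the S3 bucket and cluster names below are placeholders, and you would adjust the zone and node count for your environment):

```shell
# kops keeps cluster state in an S3 bucket you create beforehand.
export KOPS_STATE_STORE=s3://my-kops-state-bucket

# Define a cluster with one master and two worker nodes in one AZ.
kops create cluster --name=k8s.example.com \
  --zones=us-east-1a --node-count=2

# Actually create the AWS resources.
kops update cluster k8s.example.com --yes
```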
Last but not least, there is a good article on the topic, and there are some similar discussions on StackOverflow.
I hope this info sheds some light on the problem and answers your questions :)
Upvotes: 1