Chella Kumaran

Reputation: 149

Kubernetes for Docker microservices on an AWS EC2 instance

My development setup uses a microservices deployment in a Docker environment on an Amazon EC2 instance. Now, we are trying to implement Kubernetes in that EC2 instance to autoscale and load-balance the microservices deployed in Docker.

I saw one method of installing kubectl and minikube and setting up Kubernetes with a node and pods (I think this method is only for testing locally). Another source describes a much longer process involving two to three EC2 instances, with one as the master and the others as workers running the same setup.

Can I have a Kubernetes setup on a master node alone and implement autoscaling and load balancing on that one node? Can I keep everything in a single EC2 instance where my microservices are?

Upvotes: 1

Views: 365

Answers (1)

Nick

Reputation: 1948

Can I have a Kubernetes setup on a master node alone and implement autoscaling and load balancing on that one node?

Yes, absolutely.
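
A note on scheduling: a kubeadm-style control-plane node is tainted NoSchedule by default, so your workloads either need that taint removed or a matching toleration. Below is a minimal sketch (the Deployment name, image and ports are placeholders, not anything from your setup) of a Deployment that tolerates the control-plane taint so it can run on a single-node cluster:

    # Hypothetical Deployment that is allowed to run on the single
    # (control-plane) node by tolerating the default kubeadm taint.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-microservice
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: my-microservice
      template:
        metadata:
          labels:
            app: my-microservice
        spec:
          tolerations:
            # older clusters use the key node-role.kubernetes.io/master instead
            - key: node-role.kubernetes.io/control-plane
              operator: Exists
              effect: NoSchedule
          containers:
            - name: my-microservice
              image: my-registry/my-microservice:latest   # placeholder image
              resources:
                requests:
                  cpu: 100m      # CPU requests are needed later for the HPA
                  memory: 128Mi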

Can I keep everything in a single EC2 instance where my microservices are?

Yes, you can. However, it is not a best practice if we are speaking about production.
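
For the load-balancing part on a single self-managed EC2 instance: there is no cloud load balancer integration out of the box, so the usual approach is a Service of type NodePort (or ClusterIP behind your own reverse proxy), letting kube-proxy spread traffic across the pod replicas. A minimal sketch, reusing the placeholder names from above:

    # Hypothetical Service exposing the Deployment above on a node port;
    # kube-proxy load-balances connections across the matching pods.
    apiVersion: v1
    kind: Service
    metadata:
      name: my-microservice
    spec:
      type: NodePort
      selector:
        app: my-microservice
      ports:
        - port: 80          # port inside the cluster
          targetPort: 8080  # container port (placeholder)
          nodePort: 30080   # reachable on the EC2 instance's IP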

My development setup uses a microservices deployment in a Docker environment on an Amazon EC2 instance. Now, we are trying to implement Kubernetes in that EC2 instance to autoscale and load-balance the microservices deployed in Docker.

I see the following issues with this approach.

  1. You are going to put more load on the node, as Kubernetes (k8s) itself adds some overhead.
  2. As @DavidMaze said, k8s is designed for massive loads in a multi-node environment, and you are trying to use it just to get autoscaling for pods.

I have 30 microservices running on Docker.

That is at least 30 containers, which will most probably translate to around 30 pods. Please keep in mind that, in addition to the pods requested by the user, k8s runs a dozen pods in the kube-system namespace (DNS, kube-proxy, default-backend, etc.).

Additionally, you have mentioned autoscaling, so there could be quite a lot of pods on that single node.
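
For reference, pod autoscaling is usually done with a HorizontalPodAutoscaler. A minimal sketch, assuming metrics-server is installed in the cluster and the target Deployment declares CPU requests (names are the same placeholders as above):

    # Hypothetical HorizontalPodAutoscaler; requires metrics-server and
    # CPU requests on the target Deployment to produce utilization metrics.
    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: my-microservice
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: my-microservice
      minReplicas: 1
      maxReplicas: 5
      metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 70   # scale out above ~70% CPU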

Limits.

Kubernetes recommends a maximum of 110 pods per node.

Up to this number, Kubernetes has been tested to work reliably on common node types.

Most managed k8s services even impose hard limits on the number of pods per node:

  • On Amazon Elastic Kubernetes Service (EKS), the maximum number of pods per node depends on the node type and ranges from 4 to 737.
  • On Google Kubernetes Engine (GKE), the limit is 100 pods per node, regardless of the type of node.
  • On Azure Kubernetes Service (AKS), the default limit is 30 pods per node but it can be increased up to 250.
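
On a self-managed node, the 110-pod default is a kubelet setting rather than a hard platform limit; it can be adjusted (at your own risk) via the maxPods field of the kubelet configuration. A sketch of the relevant fragment, assuming a kubeadm-provisioned node where the file typically lives at /var/lib/kubelet/config.yaml:

    # Fragment of a KubeletConfiguration; maxPods defaults to 110.
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    maxPods: 110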

Availability.

If you have only a few nodes, then the impact of a failing node is bigger than if you have many nodes. For example, if you have only two nodes and one of them fails, then about half of your pods disappear. And if you have only one node (with the cluster configuration database, etcd, on it)... :)

So you aren't getting the benefit of rescheduling the workloads of failed nodes to other nodes.

If you have high-availability requirements, you might require a certain minimum number of nodes in your cluster.

To sum it up:

  1. The overhead of maintaining such a setup (and a bigger EC2 instance) can cost more than the managed solution (EKS);
  2. The setup looks quite fragile, and you are not getting the main reliability benefit; that could lead to devastating consequences.

Another source describes a much longer process involving two to three EC2 instances, with one as the master and the others as workers running the same setup.

There are a few ways to install k8s. You might want to check out the kops tool. Additionally, there is a very comprehensive guide called kubernetes-the-hard-way by Kelsey Hightower.

Last but not least, there is a good article on the topic. Additionally, there are some similar discussions on Stack Overflow.

I hope this info sheds some light on the problem and answers your questions :)

Upvotes: 1
