To start with, I'm not the one who created and set up this architecture; I've just been tasked with fixing the problem. It's 10+ ExpressJS microservices plus an API gateway, running as Docker containers under Kubernetes, hosted on an EC2 instance behind an ELB.
After days of testing with Apache JMeter, here is my current situation. I'm sending a simple POST request to the API gateway, which then sends a fetch (node-fetch) POST request to one of the microservices; that service executes a fairly simple database query and returns a 1.7 kB JSON response. With 2500 threads in JMeter and a total of 25,000 requests over 1 minute 20 seconds, I'm getting a minimum response time of 100 ms and a maximum of 18,800 ms, which is a huge difference. The reported throughput is 285.7/sec.
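For reference, the reported figures can be sanity-checked against each other (all numbers are from the JMeter run above):

```javascript
// At the reported throughput of 285.7 req/sec, 25,000 requests imply the
// run actually lasted about 87.5 s, slightly longer than the configured
// 1 min 20 s window.
const totalRequests = 25000;
const reportedThroughput = 285.7; // req/sec from the JMeter summary
const impliedDurationSec = totalRequests / reportedThroughput;
console.log(impliedDurationSec.toFixed(1)); // ≈ 87.5
```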
It also has an nginx setup, and one of the first things I tried was increasing worker_connections; it's currently set to 16500, but that made no difference.
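For context, this is roughly the shape of the nginx config in question (ports and values here are illustrative, not my actual setup). Note that worker_connections is per worker process, so effective capacity is worker_processes × worker_connections, and upstream keepalive needs HTTP/1.1 to work:

```nginx
worker_processes auto;          # one worker per CPU core

events {
    worker_connections 16500;   # per worker, not a global cap
}

http {
    upstream api_gateway {
        server 127.0.0.1:3000;  # hypothetical gateway port
        keepalive 64;           # keep idle upstream connections open
    }

    server {
        location / {
            proxy_pass http://api_gateway;
            proxy_http_version 1.1;          # required for upstream keepalive
            proxy_set_header Connection "";  # clear the hop-by-hop header
        }
    }
}
```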
I also tried adding an Agent to the fetch request to reuse connections, but that didn't make a difference either.
I'm really not sure where the bottleneck is: the EC2 instance hits at most 45-50% CPU usage, so I'm guessing it might be Express/Node itself?