Reputation: 323
This is a bit of a share-and-compare kind of question.
I have the following stack deployed on AWS: ELB > ECS Fargate > node/express > RDS
I'm (negatively) surprised by the latencies observed for some simple requests, whether or not they involve DB queries:
/healthcheck
would average 150-200 ms.

I tried to search for benchmark results but couldn't find anything useful, so I'd be grateful to anyone sharing their experience with a similar stack.
Thanks a lot!
Additional info on the deployment:
db.t2.micro instance and a PostgreSQL v12.4 engine

A typical cURL latency analysis on the ELB DNS:
❯ curl -kso /dev/null http://be-api-main-elb-uat.wantedtv.com -w "==============\n\n
| dnslookup: %{time_namelookup}\n
| connect: %{time_connect}\n
| appconnect: %{time_appconnect}\n
| pretransfer: %{time_pretransfer}\n
| starttransfer: %{time_starttransfer}\n
| total: %{time_total}\n
| size: %{size_download}\n
| HTTPCode=%{http_code}\n\n"
==============
| dnslookup: 0,003741
| connect: 0,065718
| appconnect: 0,000000
| pretransfer: 0,065813
| starttransfer: 0,155532
| total: 0,155639
| size: 92
| HTTPCode=200
Upvotes: 0
Views: 1009
Reputation: 3875
"ECS sits on 256 cpu units and 512 reserved memory"
It might be worth allocating more resources to see whether that improves things, especially since there are some non-obvious limitations tied to the different CPU and memory levels that might not be apparent at first. Since 0.25 vCPU doesn't even give you a full thread to work with, there could be preemption going on that isn't visible to you.
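As a rough sketch of what that looks like (the family, cluster, and service names below are placeholders, not taken from the question), moving from 0.25 vCPU / 512 MB to 0.5 vCPU / 1 GB just means registering a new task definition revision and pointing the service at it:

# Register a new revision with 0.5 vCPU / 1 GB (values are CPU units / MiB)
aws ecs register-task-definition \
  --family be-api-main \
  --requires-compatibilities FARGATE \
  --network-mode awsvpc \
  --cpu 512 \
  --memory 1024 \
  --container-definitions file://container-definitions.json

# Point the service at the new revision; omitting a revision number uses the latest ACTIVE one
aws ecs update-service \
  --cluster be-api-main-cluster \
  --service be-api-main-service \
  --task-definition be-api-main \
  --force-new-deployment

If latencies drop noticeably at 0.5 or 1 vCPU, the original 150-200 ms was at least partly contention on the fractional vCPU.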
Outside of that, there are other things you can look for.
The basic idea with a lot of this is to try to isolate the various components and identify the one causing the slowdown. I do believe it'll end up being something in the task itself (service container, sidecar, or the service code) considering you mentioned quick responses from the database server itself; see the sketch below for one way to check that.
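One quick way to do that isolation with the same curl timing trick from the question (purely illustrative: it assumes ECS Exec is enabled on the service, that curl exists in the container image, and that Express listens on port 3000; cluster, task, and container names are placeholders):

# 1) Time the app from inside the running task, bypassing the ELB and VPC networking entirely
aws ecs execute-command \
  --cluster be-api-main-cluster \
  --task <task-id> \
  --container app \
  --interactive \
  --command "curl -so /dev/null -w 'app only: %{time_total}s\n' http://localhost:3000/healthcheck"

# 2) Time the same request through the ELB, as in the question
curl -kso /dev/null -w 'via ELB: %{time_total}s\n' http://be-api-main-elb-uat.wantedtv.com/healthcheck

If the two numbers are close, the time is being spent inside the task (app code, sidecar, or DB round trips); if the ELB path is much slower, look at the load balancer, target group health checks, and networking instead.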
Upvotes: 1