Reputation: 1904
This isn't specifically a programming question, more an infrastructure one, but of all the Stack Exchange sites, Stack Overflow seems to have the most expertise with RESTful APIs.
I have a single endpoint configured to handle events, and it can receive up to 1k events within a 3-minute window. I am noticing that a lot of events are "missed", but I'm not willing to blame over-utilization right away without fully understanding what is happening.
The listening endpoint is /users/events?user=2345345, where 2345345 is the user id. From there we perform the necessary actions on that particular user. But what if, during this, the next user, 2895467, performs an action that sends a new event to /users/events?user=2895467 before the first could be processed? What happens?
I intend to alleviate this by using Celery to dispatch tasks, which should greatly reduce the problem. But is it fair to assume that events could be missed while this single endpoint remains synchronous?
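To illustrate the hand-off the question describes without requiring a running broker, here is a minimal stdlib sketch of the same pattern: the "endpoint" only enqueues the event and returns immediately, while a separate worker drains the queue. In a real deployment this role would be played by Celery plus a broker; all names here (handle_event, process loop) are illustrative, not from the question.

```python
import queue
import threading

# The endpoint enqueues and returns; the worker processes events later.
events = queue.Queue()
processed = []

def worker():
    while True:
        user_id = events.get()
        if user_id is None:          # sentinel: shut the worker down
            break
        processed.append(user_id)    # stand-in for real per-user work
        events.task_done()

def handle_event(user_id):
    """Fast path: enqueue and return at once (like Celery's task.delay())."""
    events.put(user_id)
    return 202                       # "Accepted": work happens later

t = threading.Thread(target=worker)
t.start()

for uid in ("2345345", "2895467"):
    assert handle_event(uid) == 202  # neither call blocks on processing

events.put(None)
t.join()
print(processed)  # → ['2345345', '2895467']
```

The key property is that handle_event never blocks on the actual processing, so a second event arriving while the first is still being worked on is simply queued rather than lost.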
Upvotes: 0
Views: 1058
Reputation: 4392
Real-life behavior depends on how the application is deployed.
For example, if you are running uWSGI with a single unthreaded worker behind nginx, requests are processed sequentially: if a second request arrives before the first has been processed, the second is queued (added to the backlog).
How long a request can stay queued, and how many requests the queue can hold, depend on the configuration of nginx (listen backlog), of uWSGI (concurrency, listen backlog), and even of the OS kernel (search for net.core.somaxconn and net.core.netdev_max_backlog). When the queue becomes full, new "concurrent" connections are dropped instead of being added to the queue.
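For illustration, the knobs mentioned above might be set like this. The values are examples only, not recommendations, and the module name app:app is a placeholder:

```shell
# Inspect the current kernel-level accept-queue caps:
sysctl net.core.somaxconn net.core.netdev_max_backlog

# Raise them (requires root):
sysctl -w net.core.somaxconn=1024
sysctl -w net.core.netdev_max_backlog=4096

# uWSGI: one worker, explicit listen backlog
uwsgi --processes 1 --listen 1024 --module app:app

# nginx: the listen directive in the server block takes a backlog, e.g.
#   listen 80 backlog=1024;
```

Note that the effective backlog is capped by net.core.somaxconn regardless of what the application asks for, so the kernel setting and the server settings need to be raised together.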
Upvotes: 1