Reputation: 29
I am writing a node app with various communities, and within those communities users are able to create and join rooms/lobbies. I have written the logic for these lobbies into the node app itself through a collection of lobby objects.
Lobbies require some maintenance once created. Users can change various statuses within the lobby, and I also make socket.io calls at regular intervals (about every 2 seconds) for each lobby to keep track of some user input "live".
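To make the shape of this concrete, each lobby's live loop is roughly of this form (heavily simplified, names are illustrative):

```js
// Rough sketch of one lobby object; the real code has more state and handlers.
class Lobby {
  constructor(io, id) {
    this.id = id;
    this.liveState = {};        // latest user input, updated by socket handlers
    // Broadcast the "live" state to everyone in this lobby's room every ~2s.
    this.timer = setInterval(() => {
      io.to(this.id).emit('lobby:update', this.liveState);
    }, 2000);
  }

  destroy() {
    clearInterval(this.timer);  // stop the loop when the lobby is closed
  }
}
```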
None of the tasks are too CPU intensive. The biggest potential threat I foresee is a load-distributing algorithm, but it is not one of the "live" calls; it is only activated when the lobby creator presses a button, and it never operates on more than 10 items.
My concern is that, in production, if the server gets close to around 100-200 lobbies I could be risking blocking the event loop. Are my concerns reasonable? Is the potential quantity of these operations, small though they are, large enough to warrant offloading this code to a separate executable or getting involved with various franken-thread JavaScript libs?
TL;DR: a node app has objects that each run small tasks regularly. Should I worry about event-loop blocking if many of these objects are created?
Upvotes: 1
Views: 405
Reputation: 708026
There is no way to know ahead of time whether what you describe will "fill up" the event loop and consume all the time one thread has. If you want to know, you will have to build a simulation and measure it on hardware commensurate with what you expect to use in production.
With pretty much all performance questions, you have to measure, measure and measure again to really know or understand whether you have a problem and, if so, what is the main source of the problem.
For non-compute-intensive things, your CPU can probably handle a lot of activity. If you get a lot of users all pounding transactions every two seconds, though, you could end up with a bottleneck somewhere. 200 users with a transaction every 2 seconds means 100 transactions/sec, so if each transaction takes more than 10ms of CPU or of any other serialized resource (like perhaps the network card), you may have issues, because 100 transactions at 10ms each already consumes the entire second.
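If you want a quick way to see whether the event loop is keeping up under load, Node's built-in perf_hooks event-loop delay monitor is one option; a minimal sketch (the logging interval and unit conversion are just for illustration):

```js
// Sketch: sample event-loop delay with Node's built-in perf_hooks monitor.
// Sustained high p99/max delay means callbacks are queuing behind the single thread.
const { monitorEventLoopDelay } = require('perf_hooks');

const h = monitorEventLoopDelay({ resolution: 20 }); // sample every 20ms
h.enable();

setInterval(() => {
  const ms = (ns) => (ns / 1e6).toFixed(1); // histogram values are in nanoseconds
  console.log(`event-loop delay ms: mean=${ms(h.mean)} p99=${ms(h.percentile(99))} max=${ms(h.max)}`);
  h.reset();
}, 10000);
```

Run that while simulating the lobby traffic you expect and you'll have actual numbers instead of guesses.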
As for offloading some work to another process, I wouldn't spend much time worrying about that until you've measured whether you actually have an issue. If you do, it will be very important to understand its main cause. It may make more sense to simply cluster your node.js processes to put multiple processes on the same server than to break your core logic into multiple processes. Or you may end up needing multiple network cards. Or something else. It all depends upon what the main cause of your issue turns out to be (if you even have an issue), and it's far too early to know before measuring and understanding.
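If measuring does show that one process can't keep up, the built-in cluster module is the usual first step for putting multiple processes on the same server; a bare-bones sketch (entry point and worker count are placeholders):

```js
// Sketch: fork one worker per CPU core with Node's built-in cluster module.
const cluster = require('cluster');
const os = require('os');

if (cluster.isPrimary) {               // cluster.isMaster on older Node versions
  for (let i = 0; i < os.cpus().length; i++) {
    cluster.fork();
  }
  cluster.on('exit', (worker) => {
    console.log(`worker ${worker.process.pid} exited; starting a replacement`);
    cluster.fork();
  });
} else {
  // Each worker runs the same server code and shares the listening port.
  require('./app'); // hypothetical entry point for your existing server
}
```

Keep in mind that with socket.io rooms spread across workers you would also need sticky sessions and a shared adapter, which is another reason to measure before restructuring anything.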
Upvotes: 2