Node's epoll behaviour on sockets

I wrote a simple Node.js program that sends out 1000 HTTP requests and records when the responses come back, incrementing a counter by 1 for each response. The endpoint is very lightweight: it returns a simple HTTP response with no heavy HTML. I measured that a single process gets back around 200-300 responses per second over roughly 3 seconds. On the other hand, when I start the same process 3 more times (4 processes in total, matching the number of available cores), it performs about 4x faster, so I receive approximately 300 * 4 responses per second. I want to understand what happens when epoll is triggered, i.e. when the kernel notifies the poller that a file descriptor is ready (a new TCP payload has arrived). Does V8 take that file descriptor and read/manipulate the data, and where is the actual bottleneck? Is it in parsing and unpacking the payload? It seems that only one core does the sending/receiving for a single process, and when I start multiple processes (one per core), it performs faster.
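For reference, a minimal sketch of the kind of benchmark client described above (the endpoint URL and request count are placeholders, not taken from the question):

    // Minimal sketch of the benchmark described above.
    // The URL is a placeholder for the lightweight endpoint being tested.
    const http = require('http');

    const TOTAL = 1000;
    let completed = 0;
    const start = Date.now();

    function done() {
      completed++;
      if (completed === TOTAL) {
        const secs = (Date.now() - start) / 1000;
        console.log(`${TOTAL} responses in ${secs.toFixed(2)}s (${Math.round(TOTAL / secs)} req/s)`);
      }
    }

    for (let i = 0; i < TOTAL; i++) {
      http.get('http://localhost:8080/ping', (res) => {
        res.resume();            // drain the body so the socket can be reused
        res.on('end', done);     // count the response once the body has been read
      }).on('error', done);      // count errors too so the run always finishes
    }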

where is the actual bottleneck?
Sounds like you're interested in profiling your application. See https://nodejs.org/en/docs/guides/simple-profiling/ for the official documentation on that.
What I can say up front is that V8 does not deal with file descriptors or epoll, that's all Node territory. I don't know Node's internals myself (only V8), but I do know that the code is on https://github.com/nodejs/node, so you can look up how any given feature is implemented.

Related

FIO: distribute io_size over time_based runtime

I want to test a volume's ability to handle I/O requests while I'm performing some operations on that volume.
For this purpose I want fio to send a continuous stream of I/O requests for 3 minutes, and I'll perform those operations on the volume while fio is sending requests, so that requests are issued both while I'm doing the operations and afterwards.
To achieve this I want to use the time_based attribute with a 3-minute run (that is, runtime=180, time_based=true) and finish the operations within 2 minutes, so that they take place while fio is sending I/O requests and I/O continues after they complete.
However, the snag is that, for reasons inherent in those operations, the volume can't receive more than 3 GB of I/O, so I'm using io_size=3gb. But as stated in the time_based documentation:
If set, fio will run for the duration of the runtime specified even if the file(s) are completely read or written. It will simply loop over the same workload as many times as the runtime allows.
I can't allow it to loop, and I also can't allow it to finish as soon as it has sent 3 GB, because I need the operations to take place in between.
In conclusion:
What I'm actually looking for is a way to tell fio to send a 3 GB workload that is evenly distributed over 3 minutes,
so that there are inbound I/O requests at any point during those 3 minutes and the total workload does not exceed 3 GB.
Thanks
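For illustration, here is a sketch of a job file along the lines described. The device path, block size and read/write direction are placeholders, and the rate line is only a guess at one way to get the even spread: dropping time_based and capping bandwidth at roughly 3 GiB / 180 s ≈ 17 MiB/s means the 3 GB of I/O should take close to the full 3 minutes.

    ; sketch of a possible job file; filename, bs and rw are placeholders
    [spread-3g-over-3min]
    filename=/dev/sdX      ; hypothetical target volume
    rw=write
    bs=64k
    io_size=3g             ; never issue more than 3 GB in total
    runtime=180            ; hard stop after 3 minutes regardless
    rate=17m               ; ~3 GiB / 180 s, so the 3 GB lasts roughly the whole window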

Why is Python consistently struggling to keep up with constant generation of asyncio tasks?

I have a Python project with a server that distributes work to one or more clients. Each client is given a number of assignments which contain parameters for querying a target API. This includes a maximum number of requests per second they can make with a given API key. The clients process the response and send the results back to the server to store into a database.
Both the server and clients use Tornado for asynchronous networking. My initial implementation for the clients relied on the PeriodicCallback to ensure that n-number of calls to the API would occur. I thought that this was working properly as my tests would last 1-2 minutes.
I added some telemetry to collect performance statistics and noticed that the clients were actually having issues after almost exactly 2 minutes of runtime. I had set the API request rate to 20 per second (the maximum allowed by the API itself), which the clients could reliably hit. However, after 2 minutes performance would fluctuate between 12 and 18 requests per second. The number of active tasks steadily increased until it hit the maximum number of active assignments (100) given by the server, and the HTTP request time to the API was reported by Tornado to go from 0.2-0.5 seconds to 6-10 seconds. Performance is steady if I only do 14 requests per second; anything higher than 15 will experience issues 2-3 minutes after starting. Logs can be seen here. Notice how the "Active Queries" column is steady until 01:19:26. I've truncated the log to demonstrate this.
I believed the issue was the use of a single process on the client to handle both communication to the server and the API. I proceeded to split the primary process into several different processes. One handles all communication to the server, one (or more) handles queries to the API, another processes API responses into a flattened class, and finally a multiprocessing Manager for Queues. The performance issues were still present.
I thought that, perhaps, Tornado was the bottleneck and decided to refactor. I chose aiohttp and uvloop. I split the primary process in a similar manner to that in the previous attempt. Unfortunately, performance issues are unchanged.
I took both refactors and enabled them to split work into several querying processes. However, no matter how much you split the work, you still encounter problems after 2-3 minutes.
I am using both Python 3.7 and 3.8 on MacOS and Linux.
At this point, it does not appear to be a limitation of a single package. I've thought about the following:
Python's asyncio library cannot handle more than 15 coroutines/tasks being generated per second
I doubt that this is true given that different libraries claim to be able to handle several thousand messages per second simultaneously. Also, we can hit 20 requests per second just fine at the start with very consistent results.
The API is unable to handle more than 15 requests from a single client IP
This is unlikely as I am not the only user of the API and I can request 20 times per second fairly consistently over an extended period of time if I over-subscribe processes to query from the API.
There is a system configuration causing the limitation
I've tried both MacOS and Debian, which yield the same results. It's possible that it's a *nix problem.
Variations in responses cause a backlog which grows linearly until it cannot be tackled fast enough
Sometimes response times from the API grow and shrink between 0.2 and 1.2 seconds. The number of active tasks returned by asyncio.all_tasks remains consistent in the telemetry data. If this were true, we wouldn't consistently encounter the issue at the same point every time.
We're overtaxing the hardware with the number of tasks generated per second and causing thermal throttling
Although CPU temperatures spike, neither MacOS nor Linux report any thermal throttling in the logs. We are not hitting more than 80% CPU utilization on a single core.
At this point, I'm not sure what's causing it and have considered refactoring the clients into a different language (perhaps C++ with Boost libraries). Before I dive into something so foolish, I wanted to ask if I'm missing something simple.
Conclusion
Performance appears to vary wildly depending on time of day. It's likely to be the API.
How this conclusion was made
I created a new project to demonstrate the capabilities of asyncio and determine if it's the bottleneck. This project takes two websites, one to act as the baseline and the other is the target API, and runs through different methods of testing:
Spawn one process per core, pass a semaphore, and query up to n-times per second
Create a single event loop and create n-number of tasks per second
Create multiple processes with an event loop each to distribute the work, with each loop performing (n-number / processes) tasks per second
(Note that spawning processes is incredibly slow, so that method was often commented out unless running on a high-end desktop processor with 12 or more cores.)
The baseline website would be queried up to 50 times per second. asyncio could complete 30 tasks per second reliably for an extended period, with each task completing their run in 0.01 to 0.02 seconds. Responses were very consistent.
The target website would be queried up to 20 times per second. Sometimes asyncio would struggle despite circumstances being identical (JSON handling, dumping response data to queue, returning immediately, no CPU-bound processing). However, results varied between tests and could not always be reproduced. Responses would be under 0.4 seconds initially but quickly increase to 4-10 seconds per request. 10-20 requests would return as complete per second.
As an alternative method, I chose a parent URI of the target website. This URI wouldn't trigger a large query against their database but would instead be served a static JSON response. Responses bounced between 0.06 seconds and 2.5-4.5 seconds. However, 30-40 responses would be completed per second.
Splitting requests across processes with their own event loop would decrease response time in the upper-bound range by almost half, but still took more than one second each to complete.
The inability to reproduce consistent results every time from the target website would indicate that it's a performance issue on their end.

Bash - cURL GET requests are slow (relatively)

I'm constantly querying a server for a list of items. Usually the data returned is "offers:" with an empty list; 99.999% of the time this is what I get back from my request, so it's a pretty small payload.
I have a ping of 35 ms, pretty rock solid; jitter is about 0.2 ms.
But when running a single loop of requests, I get updates about every 300 ms.
My goal is to make this as fast as I possibly can, so I parallelized it, running 8 of the loops. But now I see that frequently 4 or more of the threads seem to run simultaneously, giving me 4 requests within 10 or 20 ms of each other and leaving periods of up to 200 ms with no requests being processed.
The server I'm interfacing with has unknown specs, but it belongs to a large company, so I'd assume it's more than capable of handling anything I could possibly throw at it.

"Resequencing" messages after processing them out-of-order

I'm working on what's basically a highly-available distributed message-passing system. The system receives messages from somewhere over HTTP or TCP, performs various transformations on them, and then sends them to one or more destinations (also using TCP/HTTP).
The system has a requirement that all messages sent to a given destination are in-order, because some messages build on the content of previous ones. This limits us to processing the messages sequentially, which takes about 750ms per message. So if someone sends us, for example, one message every 250ms, we're forced to queue the messages behind each other. This eventually introduces intolerable delay in message processing under high load, as each message may have to wait for hundreds of other messages to be processed before it gets its turn.
In order to solve this problem, I want to be able to parallelize our message processing without breaking the requirement that we send them in-order.
We can easily scale our processing horizontally. The missing piece is a way to ensure that, even if messages are processed out-of-order, they are "resequenced" and sent to the destinations in the order in which they were received. I'm trying to find the best way to achieve that.
Apache Camel has a thing called a Resequencer that does this, and it includes a nice diagram (which I don't have enough rep to embed directly). This is exactly what I want: something that takes out-of-order messages and puts them in-order.
But, I don't want it to be written in Java, and I need the solution to be highly available (i.e. resistant to typical system failures like crashes or system restarts) which I don't think Apache Camel offers.
Our application is written in Node.js, with Redis and Postgresql for data persistence. We use the Kue library for our message queues. Although Kue offers priority queueing, the featureset is too limited for the use-case described above, so I think we need an alternative technology to work in tandem with Kue to resequence our messages.
I was trying to research this topic online and can't find as much information as I expected. It seems like the type of distributed architecture pattern that would have articles and implementations galore, but I don't see that many. Searching for things like "message resequencing", "out of order processing", "parallelizing message processing", etc., turns up solutions that mostly just relax the "in-order" requirement based on partitions or topics or whatnot. Alternatively, they talk about parallelization on a single machine. I need a solution that:
Can handle processing on multiple messages simultaneously in any order.
Will always send messages in the order in which they arrived in the system, no matter what order they were processed in.
Is usable from Node.js
Can operate in an HA environment (i.e. multiple instances of it running against the same message queue at once without inconsistencies).
Our current plan, which makes sense to me but which I cannot find described anywhere online, is to use Redis to maintain sets of in-progress and ready-to-send messages, sorted by their arrival time. Roughly, it works like this:
When a message is received, that message is put on the in-progress set.
When message processing is finished, that message is put on the ready-to-send set.
Whenever there's the same message at the front of both the in-progress and ready-to-send sets, that message can be sent and it will be in order.
I would write a small Node library that implements this behavior with a priority-queue-esque API using atomic Redis transactions. But this is just something I came up with myself, so I am wondering: Are there other technologies (ideally using the Node/Redis stack we're already on) that are out there for solving the problem of resequencing out-of-order messages? Or is there some other term for this problem that I can use as a keyword for research? Thanks for your help!
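As a rough illustration of the scheme described above, here is a sketch using two Redis sorted sets keyed by arrival time. It assumes the ioredis client (any client with MULTI support would do), and send() is a placeholder for the actual delivery logic:

    // Sketch of the in-progress / ready-to-send idea described above.
    // Assumes the ioredis client; send() is a placeholder for real delivery.
    const Redis = require('ioredis');
    const redis = new Redis();

    async function send(id) {
      console.log('sending', id);   // placeholder: deliver the message here
    }

    // On arrival: record the message in the in-progress set, scored by arrival time.
    async function onReceived(id, arrivalMs) {
      await redis.zadd('in-progress', arrivalMs, id);
    }

    // When processing finishes: mark the message ready and try to drain.
    async function onProcessed(id, arrivalMs) {
      await redis.zadd('ready-to-send', arrivalMs, id);
      await drain();
    }

    // Send for as long as the oldest in-progress message is also the oldest ready one.
    async function drain() {
      for (;;) {
        const [oldestInProgress] = await redis.zrange('in-progress', 0, 0);
        const [oldestReady] = await redis.zrange('ready-to-send', 0, 0);
        if (!oldestInProgress || oldestInProgress !== oldestReady) return;
        await send(oldestInProgress);
        await redis.multi()                       // remove from both sets atomically
          .zrem('in-progress', oldestInProgress)
          .zrem('ready-to-send', oldestInProgress)
          .exec();
      }
    }

Note that for the HA requirement the check in drain() would need to be made atomic (for example with a Lua script) so that two instances cannot both pop and send the same message.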
This is a common problem, so there are surely many solutions available. This is also quite a simple problem, and a good learning opportunity in the field of distributed systems. I would suggest writing your own.
You're going to have a few problems building this, namely
1: Guaranteed order of messages
2: Exactly-once delivery
You've found number 1, and you're solving it by resequencing messages in Redis, which is an OK solution. The other one, however, is not solved.
It looks like your architecture is not geared towards fault tolerance, so currently, if a server crashes, you restart it and continue with your life. This works fine when processing all requests sequentially, because then you know exactly where you crashed, based on what the last successfully completed request was.
What you need is either a strategy for finding out what requests you actually completed, and which ones failed, or a well-written apology letter to send to your customers when something crashes.
If Redis is not sharded, it is strongly consistent. It will fail and possibly lose all data if that single node crashes, but you will not have any problems with out-of-order data or data popping in and out of existence. A single Redis node can thus hold the guarantee that if a message is inserted into the to-process-set, and then into the done-set, no node will see the message in the done-set without it also being in the to-process-set.
How I would do it
Using Redis seems like too much fuss, assuming that the messages are not huge, that losing them is OK if a process crashes, and that running a message more than once, or even running multiple copies of a single request at the same time, is not a problem.
I would recommend setting up a supervisor server that takes incoming requests, dispatches each one to a randomly chosen slave, stores the responses, and puts them back in order before sending them on. You said you expect processing to take 750ms. If a slave hasn't responded within, say, 2 seconds, dispatch the request again to another randomly chosen node after a random 0-1 second delay. The first one to respond is the one we're going to use. Beware of duplicate responses.
If the retry also fails, double the maximum wait time. After 5 failures or so, each waiting up to twice as long as the previous one (or any multiple greater than one), we probably have a permanent error, so we should probably ask for human intervention. This algorithm is called exponential backoff, and it prevents a sudden spike in requests from taking down the entire cluster. Without a random interval, retrying after exactly n seconds would probably cause a DoS attack every n seconds until the cluster dies, if it ever gets a big enough load spike.
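A rough sketch of that retry logic, simplified so that each retry waits for the previous attempt to time out instead of racing duplicate requests; dispatchToRandomSlave() is a placeholder for however the supervisor actually hands work to a node:

    // Sketch of dispatch-with-exponential-backoff as described above.
    // dispatchToRandomSlave() is a placeholder, not an existing API.
    const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

    function withTimeout(promise, ms) {
      return Promise.race([
        promise,
        sleep(ms).then(() => { throw new Error('timeout'); }),
      ]);
    }

    async function dispatchToRandomSlave(message) {
      // placeholder: pick a random node and send it the message
    }

    async function dispatchWithBackoff(message, maxAttempts = 5) {
      let timeoutMs = 2000;                                    // initial per-attempt timeout
      for (let attempt = 1; attempt <= maxAttempts; attempt++) {
        if (attempt > 1) await sleep(Math.random() * 1000);    // random 0-1 s delay before retrying
        try {
          return await withTimeout(dispatchToRandomSlave(message), timeoutMs);
        } catch (err) {
          timeoutMs *= 2;                                      // exponential backoff: double the allowed wait
        }
      }
      throw new Error('permanent failure; ask for human intervention');
    }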
There are many ways this could fail, so make sure this system is not the only place data is stored. However, it will probably work 99+% of the time; it's probably at least as good as your current system, and you can implement it in a few hundred lines of code. Just make sure your supervisor uses asynchronous requests so that you can handle retries and timeouts. JavaScript is by nature single-threaded, so this is slightly trickier than normal, but I'm confident you can do it.

To child_process fork or not to fork for I/O tasks?

Does it make sense to use child_process fork for long-running (15-30 second) I/O tasks, such as fetching a feed and saving it to a DB?
The context for this question is an Express route, and I need to mention that a status response is sent to the browser early, once the feed URL has been validated. After the status response has been sent, the fetching and saving of the feed items continues and can obviously take a bit of time (10-30 sec). Should this second part be forked into a child process?
I have read contradictory posts (not on SO) about the I/O efficiency of Node with and without forking the job to a background process, so I wanted to get a clear answer to this. Does it make sense to fork I/O tasks (not CPU-intensive tasks per se, which I reckon is a separate question)?
In general, Node is great at handling I/O. Due to its event-driven architecture, as soon as an I/O-intensive action leaves Node (or any I/O action, really), Node forgets about it until the action finishes (or errors). The resulting event then comes back into the single-threaded Node process.
Take, for example, a remote DB and an intensive query. Even if the DB server takes seconds to execute the query and return the results, the Node process was only responsible for building the query (a string?) and putting that query on the TCP socket. Transferring data on the socket doesn't even tie up the Node process! Node then cares nothing about the request until the returned data has finished coming across the socket. (There could be some processing you don't see in your DB package, such as when an RDBMS result is converted into JSON.)
There might be corner cases to this, and you will have to look out for them... if they ever come up. The huge majority of the time, Node will handle I/O very well. (Post some links to the articles you mention, in your question or as comments under this answer.)
Forking child processes is typically reserved for CPU-heavy tasks that would slow down the main event loop. There could be other reasons, but that's the general rule.
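To make that concrete for the route in the question, here is a minimal sketch of responding early and letting the remaining I/O continue on the same event loop, with no fork; isValidFeedUrl(), fetchFeed() and saveItems() are hypothetical stand-ins for the real validation, fetch and save code:

    // Sketch of the Express route described in the question: validate, answer the
    // browser immediately, then keep doing the slow I/O on the same event loop.
    // isValidFeedUrl(), fetchFeed() and saveItems() are placeholders.
    const express = require('express');
    const app = express();
    app.use(express.json());

    app.post('/feeds', async (req, res) => {
      const feedUrl = req.body.url;
      if (!isValidFeedUrl(feedUrl)) {
        return res.status(400).json({ status: 'invalid feed url' });
      }
      res.status(202).json({ status: 'accepted' });   // early status response to the browser

      try {
        const items = await fetchFeed(feedUrl);       // 10-30 s of network I/O
        await saveItems(items);                       // DB writes, also I/O
      } catch (err) {
        console.error('feed import failed', err);     // response already sent; just log
      }
    });

    app.listen(3000);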
