I use Node.js to send an HTTP request, and I need to measure how long it takes.
    const http = require('http');
    const start = Date.now();
    http.get(url, (res) => { const end = Date.now(); }); // callback runs when the response arrives
If I call getTime inside the HTTP response callback, there is the risk that my callback is not called immediately when the response comes back, due to other events in the queue. Such a risk also exists if I use regular Java or C# synchronous code for this task, since another thread might get scheduled before mine.
    start = getTime()
    http.send()        // blocking call
    end = getTime()
How does Node.js compare to other (synchronous) platforms: does it make my chances of getting a good measurement better or worse?
Great observations!
Theory:
If you are performing micro-benchmarking, there are a number of considerations that can potentially skew the measurements:
Other events in the event loop that are ready to fire along with the HTTP send in question and get executed sequentially before the send gets a chance - Node specific.
Thread / process switching, which can happen at any time within the span of the send operation - generic.
The kernel's I/O buffers being limited in size, which causes arbitrary delays - OS / workload / system-load specific.
Latency incurred in gathering the system time - language / runtime specific (see the sketch after this list).
Chunking / buffering of data - socket / HTTP implementation specific.
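To get a feel for item 4, here is a quick sketch (purely illustrative, and the numbers will vary by machine) that estimates the cost of the clock calls themselves in Node using process.hrtime.bigint():

    // Rough estimate of the overhead of reading the clock itself.
    const iterations = 1_000_000;
    const t0 = process.hrtime.bigint();
    for (let i = 0; i < iterations; i++) {
      Date.now(); // the call whose latency we want to gauge
    }
    const t1 = process.hrtime.bigint();
    console.log(`~${Number(t1 - t0) / iterations} ns per Date.now() call`);

On typical hardware this comes out well under a microsecond per call, i.e. small next to network latency, but it is not zero.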
Practice:
Node suffers from (1), while a dedicated thread in Java / C# does not have this issue. But as Node implements an event-driven, non-blocking I/O model, other events will not cause blocking effects; rather, they are placed into the event queue. Only the ones which are ready will get fired, and the latency they add is a function of how much I/O work they have to carry out and of any CPU-bound work performed in their callbacks. In practice these effects tend to be negligible and to even out in the comparison, given the more visible effects of items (2) to (5). In addition, writes are generally non-blocking, which means they will be carried out without waiting for the next loop iteration to run. And finally, when the write is carried out, the callback is issued in-line and sequentially; there is no yielding to another event in between.
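For reference, here is a minimal way to take the measurement in Node - a sketch only, using http.get and process.hrtime.bigint(); the URL is a placeholder:

    import * as http from 'http';

    // Sketch: measure the time from issuing the request until the full
    // response body has been received. The URL is a placeholder.
    const start = process.hrtime.bigint();
    http.get('http://example.com/', (res) => {
      res.on('data', () => { /* consume the body so that 'end' fires */ });
      res.on('end', () => {
        const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
        console.log(`request took ${elapsedMs.toFixed(1)} ms`);
      });
    });

As discussed above, the 'end' callback can itself be delayed by whatever else is on the event loop, so for a serious benchmark take many samples and look at the distribution rather than a single number.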
In short, if you compare a dedicated Java thread doing blocking I/O with Node code, the Java measurements will look better, but in large-scale applications the cost of thread context switching will offset this gain, and Node's performance will stand out.
Hope this helps.
Related
I've been using various concurrency constructs for a while now without much consideration for how all the magic happens, which has recently made me increasingly uneasy.
In an attempt to remedy this ... feeling, I have been reading up on how async works under the hood. When I say async, in this case I'm referring to userland / greenthread / cooperative multitasking, although I assume some of the concepts will also apply to traditional OS managed threads insofar as a scheduler and workers are involved.
I see how a worker can suspend itself and let other workers execute, but at the lowest level in non-blocking library code, how does the scheduler know when a previously suspended worker's job is done and to wake up that worker?
For example, if you fire up a worker in some sort of async block and perform an operation that would normally block (e.g. an HTTP request, SQL query, or other I/O), then even though your calling code is async, that operation (library code) had better play nice with your async framework. Otherwise you've effectively defeated the purpose of using it and blocked your scheduler from calling other waiting operations (the "What Color Is Your Function" problem) while it waits for your blocking call, which was executed inside your non-blocking calling code, to complete.
So now we've got async code calling other async library code, and now I'm asking myself the question all over again - how does the async library code know when to suspend and resume operation?
The idea of firing off a HTTP request, moving on, and returning later to check for results is weird to think about for me - not conceptually but from an implementation standpoint.
How do you perform a partial operation, e.g. sending TCP packets and then continuing with the rest of the program execution, only to come back later and check whether results have been delivered? Delivered to what? A socket?
Now we're another layer deep and you are using socket selects to avoid creating threads and blocking, but, again...
how do those sockets start an operation, move on before completion, and then how does select know when data is available?
Are you continually checking some buffer to see if bytes have been delivered in an infinite loop and moving on if not?
Anyhow - I think you see where I'm going here....
I focused mainly on HTTP as a motivating example, but the same question applies for any normally blocking operations - how does it all work at the bottom?
Here are some of the resources I found helpful while researching the topic and which informed this question:
David Beazley's excellent video Build Your Own Async, where he walks you through a simple implementation of a scheduler which fires callbacks and suspends execution by sleeping on a waiting queue. I found this video tremendously instructive, but it stops a bit short in that it shows how an async sleep frees up the scheduler to execute other workers, yet doesn't really go into what happens when the code you call in those workers must itself be non-blocking so that it plays nice with the scheduler.
How does non-blocking IO work under the hood - This got me further along in my understanding, but still left me with a few uncertainties.
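To make my current mental model concrete, here is a toy sketch I put together (the names and structure are entirely mine, loosely inspired by the Beazley video): a scheduler that keeps a queue of ready tasks plus a list of sleeping tasks, and only wakes a sleeper once its deadline has passed. My question is essentially what replaces the deadline check when the thing being waited on is a socket rather than a clock.

    // Toy cooperative scheduler - illustration only, my own names.
    type Task = () => void;

    class Scheduler {
      private ready: Task[] = [];
      private sleeping: { wakeAt: number; task: Task }[] = [];

      callSoon(task: Task) { this.ready.push(task); }

      callLater(delayMs: number, task: Task) {
        this.sleeping.push({ wakeAt: Date.now() + delayMs, task });
      }

      run() {
        while (this.ready.length > 0 || this.sleeping.length > 0) {
          if (this.ready.length === 0) {
            // Nothing is ready: wait (busy-wait here, for simplicity) until the
            // nearest sleeper's deadline. A real loop would block in
            // select/epoll/kqueue instead, waking on I/O readiness OR timeout.
            const nearest = Math.min(...this.sleeping.map(s => s.wakeAt));
            while (Date.now() < nearest) { /* spin */ }
          }
          // Move any sleepers whose deadline has passed onto the ready queue.
          const now = Date.now();
          this.sleeping = this.sleeping.filter(s => {
            if (s.wakeAt <= now) { this.ready.push(s.task); return false; }
            return true;
          });
          const task = this.ready.shift();
          if (task) task();
        }
      }
    }

    // Two "workers" that repeatedly yield back to the scheduler with a delay.
    const sched = new Scheduler();
    let a = 0, b = 0;
    const workerA = () => { if (++a <= 3) { console.log('A', a); sched.callLater(100, workerA); } };
    const workerB = () => { if (++b <= 3) { console.log('B', b); sched.callLater(50, workerB); } };
    sched.callSoon(workerA);
    sched.callSoon(workerB);
    sched.run();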
I'm considering a few frameworks/programming methods for our new backend project. It concerns a Backend-for-Frontend (BFF) implementation, which aggregates downstream services. For simplicity, these are the steps it goes through:
Request comes into the webserver
Webserver makes downstream request
Downstream request returns result
Webserver returns the response
How is event-driven programming better than "regular" thread-per-request handling? Some websites try to explain, and it often comes down to something like this:
The second solution is a non-blocking call. Instead of waiting for the answer, the caller continues execution, but provides a callback that will be executed once data arrives.
What I don't understand: we need a thread/handler to await this data, right? It's nice that the event handler can continue, but we still need (in this example) a thread/handler per request that awaits each downstream request, right?
Consider this example: the downstream requests take n seconds to return. In these n seconds, r requests come in. In the thread-per-request model we need r threads: one for every request. After n seconds pass, the first thread is done processing and becomes available for a new request.
When implementing an event-driven design, we need r+1 threads: an event loop and r handlers. Each handler takes a request, performs it, and calls the callback once done.
So how does this improve things?
What I don't understand: we need a thread/handler to await this data, right?
Not really. The idea behind NIO is that no threads ever get blocked.
It is interesting because the operating system already works in a non-blocking way. It is our programming languages that were modeled in a blocking manner.
As an example, imagine that you had a computer with a single CPU. Any I/O operation that you do will be orders of magnitude slower than the CPU, right? Say you want to read a file. Do you think the CPU will sit there, idle, doing nothing while the disk head goes and fetches a few bytes and puts them in the disk buffer? Obviously not. The operating system will register an interrupt handler (i.e. a callback) and will use the valuable CPU for something else in the meantime. When the disk head has managed to read a few bytes and made them available to be consumed, an interrupt will be triggered and the OS will then give attention to it, restore the previous process control block and allocate some CPU time to handle the available data.
So, in this case, the CPU is like a thread in your application. It is never blocked. It is always doing some CPU-bound stuff.
The idea behind NIO programming is the same. In the case you're describing, imagine that your HTTP server has a single thread. When you receive a request from your client you need to make an upstream request (which represents I/O). What a NIO framework would do here is issue the request and register a callback for when the response is available.
Immediately after that, your valuable single thread is released to serve yet another request, which is going to register another callback, and so on, and so on.
When the callback resolves, it will be automatically scheduled to be processed by your single thread.
As such, that thread works as an event loop, one in which you're supposed to schedule only CPU bound stuff. Every time you need to do I/O, that's done in a non-blocking way and when that I/O is complete, some CPU-bound callback is put into the event loop to deal with the response.
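To make this concrete for the case in the question, here is a rough Node sketch (illustrative only; the downstream URL is a placeholder): a single thread accepts requests, each handler immediately issues the downstream call and registers callbacks, and the thread is free in between.

    import * as http from 'http';

    // Single-threaded sketch: the only thread never blocks on the downstream
    // call; it just registers callbacks and moves on.
    const DOWNSTREAM = 'http://localhost:9000/data'; // placeholder

    http.createServer((req, res) => {
      // Issue the downstream request and return immediately.
      http.get(DOWNSTREAM, (downstreamRes) => {
        let body = '';
        downstreamRes.on('data', (chunk) => { body += chunk; });
        downstreamRes.on('end', () => {
          // The CPU-bound part (aggregation) runs on the event-loop thread.
          res.end(`aggregated: ${body.length} bytes`);
        });
      }).on('error', () => {
        res.statusCode = 502;
        res.end('downstream error');
      });
      // Nothing here waits; the thread is free to accept the next request.
    }).listen(8080);

Even if r requests arrive during the n seconds the downstream call takes, this stays at one thread; there are simply r pending callbacks waiting for their sockets to become readable.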
This is a powerful concept, because with a very small number of threads you can process thousands of requests and therefore you can scale more easily. Do more with less.
This feature is one of the major selling points of Node.js and the reason why, even though it uses a single thread, it can be used to develop backend applications.
Likewise, this is the reason for the proliferation of frameworks like Netty, RxJava, the Reactive Streams initiative and Project Reactor. They all seek to promote this type of optimization and programming model.
There is also an interesting movement of new frameworks that leverage these powerful features and try to compete with or complement one another. I'm talking about interesting projects like Vert.x and Ratpack. And I'm pretty sure there are many more out there for other languages.
The whole idea of the non-blocking paradigm is achieved by this concept called the "Event Loop".
Interesting references:
http://www.masterraghu.com/subjects/np/introduction/unix_network_programming_v1.3/ch06lev1sec2.html
Understanding the Event Loop
https://www.youtube.com/watch?v=8aGhZQkoFbQ
I know this question has been discussed in the past in much detail (How is Node.js inherently faster when it still relies on Threads internally?), but I still fail to properly understand Node.js's event loop model and how, being single-threaded, it handles concurrent requests.
Up until now my understanding is: we receive an I/O request --> a thread is spawned internally by Node.js and the I/O request is handed to it --> since this is an I/O request, the CPU hands it to the DMA controller and frees this thread --> this thread goes back into the thread pool to serve a different request --> the DMA is still doing the I/O; once the DMA has all the data, some sort of event is fired --> this event is captured by the Node.js system and it puts the supplied callback function on the event loop --> whenever the event loop gets the opportunity, it executes the callback on the data fetched by the I/O --> thanks to closures, the callback function executes on the data fetched for it.
So this process goes on repeatedly. Could someone please elaborate on my understanding and correct me where needed?
There is only one thread (the main thread) for dealing with network I/O (file I/O is a slightly different story because not all platforms provide usable asynchronous, non-blocking file I/O APIs, so the synchronous file I/O APIs are used on those platforms in a threadpool).
So when network requests come in, they're all handled by the main thread which uses (indirectly via libuv) epoll/kqueue/IOCP/etc. for detecting (in a non-blocking way) when data is available (or when there is an incoming TCP connection for example). If there is data available, it calls out appropriately to javascript as needed, passing the socket data. If there is no data on the socket (and there's nothing else for the event loop to do, e.g. firing timers), then execution proceeds to the next iteration of the event loop where the process starts all over again.
As far as associating socket data with socket javascript objects goes, it's the combination of C++ wrapper objects (e.g. tcp_wrap, udp_wrap, etc.) and javascript objects that makes sure the data gets to the appropriate place.
Here's a slightly older diagram that explains what happens in a single cycle of node's event loop. Some of it may have changed slightly since node v0.9, but it gets you the general idea:
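As a small, checkable illustration of that ordering (a sketch; the exact phases are described in Node's event-loop documentation): inside an I/O callback, setImmediate callbacks run before 0 ms timers, because the check phase follows the poll phase within the same loop iteration, while the timer has to wait for the timers phase of the next iteration.

    import * as fs from 'fs';

    // Inside an I/O callback, setImmediate (check phase) fires before a 0 ms
    // timer (timers phase of the next loop iteration).
    fs.readFile(__filename, () => {
      setTimeout(() => console.log('timeout'), 0);
      setImmediate(() => console.log('immediate'));
    });
    // Expected output:
    //   immediate
    //   timeout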
node.js has a single threaded model which eliminates the need for locks and semaphores (used in the traditional multithreaded model). Locks and semaphores can add some costs in terms of performance and, more importantly, can provide a lot of rope to hang yourself with (in other words, many pitfalls). IO operations happen in parallel and because work between IOs is typically very small, this single threaded model usually works quite nicely.
(side note: if you have an app that does a lot of work between IO operations, i.e. a CPU-intensive app, that is a case where node doesn't scale well)
I like to think of the argument for why node's model scales well is the same as why people think NoSQL scales better than SQL databases. Obviously Java (multi-threaded) and SQL scale; big companies like Facebook and Twitter have proven that. However, like in SQL, there are a lot of things you could do incorrectly to slow down your performance. Node.js doesn't eliminate all potential problems, it just does a good job of restricting many of the common causes.
I'm writing a multi-threaded OpenMPI application, using MPI_Isend and MPI_Irecv from several threads to exchange hundreds of messages per second between ranks over InfiniBand RDMA.
Transfers are in the order of 400 - 800KByte, generating about 9 Gbps in and out for each rank, well within the capacity of FDR. Simple MPI benchmarks also show good performance.
Completion of the transfers is checked by polling all active transfers with MPI_Testsome in a dedicated thread.
The transfer rates I achieve depend on the message rate, but more importantly also on the polling frequency of MPI_Testsome. That is, if I poll, say, every 10ms, the requests finish later than if I poll every 1ms.
I'd expect that if I poll every 10ms instead of every 1ms, I'd at most be informed of finished requests 9ms later. I'd not expect the transfers themselves to have their completion delayed by fewer calls to MPI_Testsome, and thus for total transfer rates to slow down. I'd expect MPI_Testsome to be entirely passive.
Anyone here have a clue why this behaviour could occur?
The observed behaviour is due to the way operation progression is implemented in Open MPI. Posting a send or receive, no matter whether it is done synchronously or asynchronously, results in a series of internal operations being queued. Progression is basically the processing of those queued operations. There are two modes that you can select at library build time: one with an asynchronous progression thread and one without, with the latter being the default.
When the library is compiled with the async progression thread enabled, a background thread takes care of processing the queue. This allows background transfers to proceed in parallel with the user's code, but it increases latency. Without async progression, operations are faster, but progression can only happen when the user code calls into the MPI library, e.g. while in MPI_Wait or MPI_Test and family. The MPI_Test family of functions is implemented in such a way as to return as fast as possible. That means the library has to balance a trade-off between doing more work in the call, thus slowing it down, or returning quickly, which means fewer operations are progressed on each call.
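A deliberately crude toy model (not MPI code, just a sketch of "progress only happens inside the test call") shows why the polling interval affects more than just when you are notified: if a transfer needs several internal progression steps and each poll advances at most one of them, the completion time itself scales with the polling interval.

    // Toy model: a "transfer" that only makes progress when poll() is called,
    // one internal step per call - a crude stand-in for library-driven progression.
    function simulate(pollIntervalMs: number, stepsNeeded: number): number {
      let steps = 0;
      let elapsedMs = 0;
      while (steps < stepsNeeded) {
        elapsedMs += pollIntervalMs; // wait for the next poll
        steps += 1;                  // the poll progresses one step
      }
      return elapsedMs; // time until the transfer completes
    }

    console.log(simulate(1, 20));  // 20 ms when polling every 1 ms
    console.log(simulate(10, 20)); // 200 ms when polling every 10 ms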
Some of the Open MPI developers, notably Jeff Squyres, visit Stack Overflow every now and then. They could possibly provide more details.
This behaviour is hardly specific to Open MPI. Unless heavily hardware-assisted, MPI is usually implemented following the same methods.
I believed that any kind of asynchronous execution creates a thread in some invisible area. But if so,
Async code does not offer any performance gain over threaded code.
But I can't understand why so many developers are making so many features asynchronous.
Could you explain the difference between them and the costs involved?
The purpose of an asynchronous execution is to prevent the code calling the asynchronous method (the foreground code) from being blocked. This allows your foreground code to go on doing useful work while the asynchronous thread is performing your requested work in the background. Without asynchronous execution, the foreground code must wait until the background task is completed before it can continue executing.
The cost of an asynchronous execution is the same as that of any other task running on a thread.
Typically, an async result object is registered with the foreground code. The async result object can either raise an event when the background task is completed, or the foreground code can periodically check the async result object to see if its completion flag has been set.
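A minimal sketch of the two styles described here (the names are illustrative, not from any particular library): a background task that both raises a completion event and exposes a flag the foreground can poll.

    // Illustrative only: a background task exposing both completion styles.
    class AsyncResult<T> {
      done = false;
      value?: T;
      private listeners: Array<(value: T) => void> = [];

      onComplete(listener: (value: T) => void) { this.listeners.push(listener); }

      complete(value: T) {
        this.value = value;
        this.done = true;                      // style 2: pollable completion flag
        this.listeners.forEach(l => l(value)); // style 1: raised event
      }
    }

    function startBackgroundWork(): AsyncResult<string> {
      const result = new AsyncResult<string>();
      setTimeout(() => result.complete('finished'), 100); // stand-in for real work
      return result;
    }

    const r = startBackgroundWork();
    r.onComplete(v => console.log('event:', v)); // the foreground keeps running meanwhile
    // ...or the foreground could periodically check r.done instead.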
Concurrency does not necessarily require threads.
In Linux, for example, you can perform non-blocking syscalls. Using this type of calls, you can for instance start a number of network reads. Your code can keep track of the reads manually (using handles in a list or similar) and periodically ask the OS if new data is available on any of the connections. Internally, the OS also keeps a list of ongoing reads. Using this technique, you can thus achieve concurrency without any (extra) threads, neither in your program nor in the OS.
If you use threads and blocking IO, you would typically start one thread per read. In this scenario, the OS instead keeps a list of ongoing threads, which it parks when a thread tries to read data and none is available. Threads are resumed as data becomes available.
Having the OS switch between threads might involve slightly more overhead in the form of context switching - switching the program counter and register contents. But the real deal breaker is usually the stack allocation per thread, typically 8 MB by default on Linux. If you have a lot of concurrency in your program, this might push you in the direction of using non-blocking calls to handle more concurrency per thread.
So it is possible to do async programming without threads. If you want to do async programming using only blocking OS calls, you need to dedicate a thread to do the blocking while you continue. But if you use non-blocking calls you can do a lot of concurrent things with just a single thread. Have a look at Node.js, which has great support for many concurrent connections while being single-threaded for most operations.
Also check out Golang, which achieves a similar effect using a sort of green thread called goroutines. Multiple goroutines run concurrently on the same OS thread, and they start with very small stacks, pushing the limit much further.
Async code does not offer any performance gain over threaded code.
Asynchronous execution is one of the traits of multi-threaded execution, which is becoming more relevant as processors pack in more cores.
For servers, multi-core is only mildly relevant, as they are already written with concurrency in mind and will scale naturally, but multi-core is particularly relevant for desktop apps, which traditionally do only a few things concurrently - often just one foreground task with a background thread. Now, they have to be coded to do many things concurrently if they are to take advantage of the power of a multi-core CPU.
As to the performance - on a single core - asynchronous tasks slow down the system as much as they would if run sequentially (this is a simplification, but true for the most part). So, running task A, which takes 10s, and task B, which takes 5s, on a single core, the total time needed will be 15s whether or not B is run asynchronously. The reason is that as B runs, it takes CPU resources away from A - A and B compete for the same CPU.
With a multi-core machine, additional tasks run on otherwise unused cores, and so the situation is different - the additional tasks don't really consume any time - or more correctly, they don't take away time from the core running task A. So, running tasks A and B asynchronously on multi-core will consume just 10s - not 15s as with a single core. B's execution runs at the same time as A, and on a separate core, so A's execution time is unaffected.
As the number of tasks and cores increases, the potential improvement in performance also increases. In parallel computing, exploiting parallelism to produce an improvement in performance is known as speedup.
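(In the usual notation, speedup = time on one core / time on N cores; for the example above that is 15s / 10s = 1.5.)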
We are already seeing 64-core CPUs, and it's estimated that 1024-core CPUs will be commonplace in a few years. That's a potential speedup of 1024 times compared to the single-threaded synchronous case. So, to answer your question, there clearly is a performance gain to be had by using asynchronous execution.
I believed that any kind of asynchronous execution creates a thread in some invisible area.
This is your problem - this actually isn't true.
The thing is, your whole computer is actually massively asynchronous - requests to RAM, communication via a network card, accessing a HDD... those are all inherently asynchronous operations.
Modern OSes are actually built around asynchronous I/O. Even when you do a synchronous file request, for example (e.g. File.ReadAllText), the OS sends an asynchronous request. However, instead of giving control back to your code, it blocks while it waits for the response to the asynchronous request. And this is where proper asynchronous code comes in - instead of waiting for the response, you give the request a callback - a function to execute when the response comes back.
For the duration of the asynchronous request, there is no thread. The whole thing happens on a completely different level - say, the request is sent to the firmware on your NIC, and given a DMA address to fill the response. When the NIC finishes your request, it fills the memory, and signals an interrupt to the processor. The OS kernel handles the interrupt by signalling the owner application (usually an IOCP "channel") the request is done. This is still all done with no thread whatsoever - only for a short time right at the end, a thread is borrowed (in .NET this is from the IOCP thread pool) to execute the callback.
So, imagine a simple scenario. You need to send 100 simultaneous requests to a database engine. With multi-threading, you would spin up a new thread for each of those requests. That means a hundred threads, a hundred thread stacks, the cost of starting the new threads themselves (starting a new thread is cheap - starting a hundred at the same time, not so much), quite a bit of resources. And those threads would just... block. Do nothing. When the responses come, the threads are awakened, one after another, and eventually disposed of.
On the other hand, with asynchronous I/O, you can simply post all the requests from a single thread - and register a callback when each of those is finished. A hundred simultaneous requests will cost you just your original thread (which is free for other work as soon as the requests are posted), and a short time with threads from the thread pool when the requests are finished - in "worst" case scenario, about as many threads as you have CPU cores. Provided you don't use blocking code in the callback, of course :)
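A sketch of that second approach in Node terms (illustrative; fetchOne here is just a stand-in for whatever non-blocking client call you use): all one hundred requests are posted from a single thread, and only the short callbacks at the end borrow CPU time.

    // Illustrative sketch: 100 concurrent requests posted from a single thread.
    // fetchOne is a stand-in for a real non-blocking client call.
    function fetchOne(i: number): Promise<string> {
      return new Promise<string>(resolve =>
        setTimeout(() => resolve(`result ${i}`), 100) // simulated I/O latency
      );
    }

    async function main() {
      const started = Date.now();
      // Post all requests up front; nothing blocks while they are in flight.
      const results = await Promise.all(
        Array.from({ length: 100 }, (_, i) => fetchOne(i))
      );
      console.log(`${results.length} responses in ${Date.now() - started} ms`);
      // Roughly ~100 ms total, not 100 * 100 ms, and no extra threads were created.
    }

    main();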
This doesn't necessarily mean that asynchronous code is automatically more efficient. If you only need a single request, and you can't do anything until you get a response, there's little point in making the request asynchronous. But most of the time, that's not your actual scenario - for example, you need to maintain a GUI in the meantime, or you need to make simultaneous requests, or your whole code is callback-based, rather than being written synchronously (a typical .NET Windows Forms application is mostly event-based).
The real benefit from asynchronous code comes from exactly that - simplified non-blocking UI code (no more "(Not Responding)" warnings from the window manager), and massively improved parallelism. If you have a web server that handles a thousand requests simultaneously, you don't want to waste 1 GiB of address space just for the completely unnecessary thread stacks (especially on a 32-bit system) - you only use threads when you have something to do.
So, in the end, asynchronous code makes UI and server code much simpler. In some cases, mostly with servers, it can also make it much more efficient. The efficiency improvements come precisely from the fact that there is no thread during the execution of the asynchronous request.
Your comment only applies to one specific kind of asynchronous code - multi-threaded parallelism. In that case, you really are wasting a thread while executing a request. However, that's not what people mean when saying "my library offers an asynchronous API" - after all, that's a 100% worthless API; you could have just called await Task.Run(TheirAPIMethod) and gotten the exact same thing.