I was going through the details of Node.js and came to know that it supports asynchronous programming even though it essentially provides a single-threaded model.
How is asynchronous programming handled in such cases? Is it that the runtime itself creates and manages threads, while the programmer cannot create threads explicitly? It would be great if someone could point me to some resources to learn about this.
Say it with me now: async programming does not necessarily mean multi-threaded.
Javascript is a single-threaded runtime - you simply aren't able to create new threads in JS because the language/runtime doesn't support it.
Frank says it correctly (although obtusely). In English: there's a main event loop that handles when things come into your app. So "handle this HTTP request" will get added to the event queue, then handled by the event loop when appropriate.
When you call an async operation (a mysql db query, for example), node.js sends "hey, execute this query" to mysql. Since this query will take some time (milliseconds), node.js performs the query using the MySQL async library - getting back to the event loop and doing something else there while waiting for mysql to get back to us. Like handling that HTTP request.
Edit: By contrast, node.js could simply wait around (doing nothing) for mysql to get back to it. This is called a synchronous call. Imagine a restaurant, where your waiter submits your order to the cook, then sits down and twiddles his/her thumbs while the chef cooks. In a restaurant, like in a node.js program, such behavior is foolish - you have other customers who are hungry and need to be served. Thus you want to be as asynchronous as possible to make sure one waiter (or node.js process) is serving as many people as they can.
Edit done
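A minimal sketch of the contrast in Node.js; the db object below is a hypothetical stand-in for a real database driver (real drivers typically expose the same callback shape):

    // Hypothetical stand-in for a database driver; setTimeout simulates
    // the milliseconds the real query would spend on the network.
    const db = {
      query(sql, cb) {
        setTimeout(() => cb(null, [{ id: 1 }]), 50);
      },
    };

    // Asynchronous: hand the order to the kitchen and keep serving tables.
    db.query('SELECT * FROM users', (err, rows) => {
      if (err) throw err;
      console.log('query finished:', rows.length, 'row(s)'); // runs later
    });
    console.log('query in flight, serving the next request...'); // runs first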
Node.js communicates with mysql using C libraries, so technically those C libraries could spawn off threads, but inside Javascript you can't do anything with threads.
Ryan said it best: sync/async is orthogonal to single/multi-threaded. For both single- and multi-threaded cases there is a main event loop that calls registered callbacks using the Reactor Pattern. In the single-threaded case the callbacks are invoked sequentially on the main thread; in the multi-threaded case they are invoked on separate threads (typically using a thread pool). It is really a question of how much contention there will be: if all requests require synchronized access to a single data structure (say a list of subscribers), then the benefits of having multiple threads may be diminished. It's problem dependent.
As far as implementation goes, if a framework is single-threaded then it is likely using a poll/select system call, i.e. the OS is triggering the asynchronous event.
To restate the waiter/chef analogy:
Your program is a waiter ("you") and the JavaScript runtime is a kitchen full of chefs doing the things you ask.
The interface between the waiter and the kitchen is mediated by queues so requests are not lost in instances of overcapacity.
So your program is assigned one thread of execution. You can only wait one table at a time. Each time you want to offload some work (like making the food / making a network request), you run to the kitchen and pin the order to a board (queue) for the chefs (runtime) to pick up when they have spare capacity. The chefs will let you know when the order is ready (they will call you back). In the meantime, you go wait another table (you are not blocked by the kitchen).
So the accepted answer is misleading. The JavaScript runtime is definitionally multithreaded because I/O does not block your JavaScript program. As a waiter you can continue serving customers, while the kitchen cooks. That involves at least two threads of execution. The reality is that the runtime will maintain several threads of execution behind the scenes, in order to efficiently serve the single thread directly corresponding to your script.
By design, only one thread of execution is assigned to the synchronous running of your JavaScript program. This is a good thing because it makes your program easier to reason about than having to handle multiple threads of execution yourself. Don't worry: your JavaScript program can still get plenty complicated though!
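To put the analogy in code, here is a minimal sketch using Node's built-in fs module: readFile is the order pinned to the board, and its callback is the kitchen calling you back.

    const fs = require('fs');

    // Pin the order to the board: the runtime does the file I/O for you.
    fs.readFile(__filename, 'utf8', (err, contents) => {
      if (err) throw err;
      console.log('order ready:', contents.length, 'characters'); // kitchen calls back
    });

    // Meanwhile, wait another table: this line runs before the file is read.
    console.log('serving the next customer');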
Related
I was going through Node.js documentation, and couldn't understand the line :
A Node.js app runs in a single process, without creating a new thread for every request. Node.js provides a set of asynchronous I/O primitives in its standard library that prevent JavaScript code from blocking and generally, libraries in Node.js are written using non-blocking paradigms, making blocking behavior the exception rather than the norm.
Source : Introduction to node js
I couldn't understand specifically:
[...] Node.js provides a set of asynchronous I/O primitives in its standard library that prevent JavaScript code from blocking [..]
Does it simply mean that it has built-in functionality that provides the ability to work asynchronously?
If not, then what is this set of asynchronous I/O primitives? If anyone could provide some links for better understanding or for getting started with Node.js, that would be really great.
P.S.: I have practical experience with Node.js where I understand how its code works, but I don't understand why it works. I want to understand the theoretical part, so that I can understand what is actually going on in the background.
Does it simply mean that it has built-in functionality that provides the ability to work asynchronously?
Yes, that's basically what it means.
In a "traditional" one-thread-per-connection* model you accept a connection and then hand off the handling of that request to a thread (either a freshly started one or from a pool, doesn't really change much) and do all work related to that connection on that one thread, including sending the response.
This can easily be done with synchronous/blocking I/O: have a read method that simply returns the read bytes and a write method that blocks until the writing is done.
This does, however, mean that the thread handling that request cannot do anything else, and also that you need many threads to be able to handle many concurrent connections/requests. And since I/O operations take a relatively huge amount of time (when measured against the speed of memory access and computation), most of these threads will be waiting for one I/O operation or another most of the time.
Having asynchronous I/O and an event-based core architecture means that when a I/O operation is initiated the CPU can immediately go on to processing whatever action needs to be done next, which will likely be related to an entirely different request.
Therefore one can handle many more requests on a single thread.
The "primitives" just means that basic I/O operations such as "read bytes" and "write bytes" to/from network connections or files are provided as asynchronous operations and higher-level operations need to be built on top of those (again in an asynchronous way, to keep the benefits).
As a side note: many other programming environments have either had asynchronous I/O APIs for a long time or have gotten them in recent years. The one thing that sets Node.js apart is that it's the default option: if you're reading from a socket or a file, then doing it asynchronously is what is "normal" and blocking calls are the big exception. This means that the entire ecosystem surrounding Node.js (i.e. almost all third-party libraries) works with that assumption in mind and is written in that same manner.
So while Java, for example, has asynchronous I/O, you lose that advantage as soon as you use any I/O-related libraries that only support blocking I/O.
* I use connection/request interchangeably in this answer, under the assumption that each connection contains a single request. That assumption is usually wrong these days (most common protocols allow multiple request/response pairs in a single connection), but handling multiple requests on a single connection doesn't fundamentally change anything about this answer.
It means Node.js doesn't halt on input/output operations. Suppose you need to do some task that has a blocking condition, e.g. "if the space key is pressed then do this" or "while the Esc key isn't pressed, keep taking input". Since Node.js is single-threaded, a blocking wait like that would stop all other operations: the program would focus on the blocking job until it finishes. Asynchronous I/O lets the application avoid halting other tasks while one is in progress; it works on other tasks until the first one finishes. That is also why we use await to get values out of promises in Node.js async functions: when the data has been processed, control returns to the line where the await is and the value can be read or processed from there.
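A minimal sketch of that await behaviour; fetchUser below is a hypothetical stand-in for any promise-returning I/O call:

    // Hypothetical: resolves with a fake user after 100 ms of simulated I/O.
    const fetchUser = (id) =>
      new Promise((resolve) => setTimeout(() => resolve({ id, name: 'Ada' }), 100));

    async function showUser(id) {
      console.log('requesting user...');
      const user = await fetchUser(id); // suspends this function, not the program
      console.log('got user:', user.name); // resumes once the promise resolves
    }

    showUser(42);
    console.log('other work keeps running while showUser is suspended');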
I'm considering a few frameworks/programming methods for our new backend project. It concerns a BackendForFrontend implementation, which aggregates downstream services. For simplicity, these are the steps it goes through:
Request comes into the webserver
Webserver makes downstream request
Downstream request returns result
Webserver returns the response
How is event-driven programming better than "regular" thread-per-request handling? Some websites try to explain it, and it often comes down to something like this:
The second solution is a non-blocking call. Instead of waiting for the answer, the caller continues execution, but provides a callback that will be executed once data arrives.
What I don't understand: we need a thread/handler to await this data, right? It's nice that the event handler can continue, but we still need (in this example) a thread/handler per request that awaits each downstream request, right?
Consider this example: the downstream requests take n seconds to return. In those n seconds, r requests come in. In the thread-per-request model we need r threads: one for every request. After n seconds pass, the first thread is done processing and available for a new request.
When implementing an event-driven design, we need r+1 threads: an event loop and r handlers. Each handler takes a request, performs it, and calls the callback once done.
So how does this improve things?
What I don't understand: we need a thread/handler to await this data, right?
Not really. The idea behind NIO is that no threads ever get blocked.
It is interesting because the operating system already works in a non-blocking way. It is our programming languages that were modeled in a blocking manner.
As an example, imagine that you had a computer with a single CPU. Any I/O operation that you do will be orders of magnitude slower than the CPU, right? Say you want to read a file. Do you think the CPU will stay there, idle, doing nothing while the disk head goes and fetches a few bytes and puts them in the disk buffer? Obviously not. The operating system will register an interrupt handler (i.e. a callback) and will use the valuable CPU for something else in the meantime. When the disk head has managed to read a few bytes and made them available to be consumed, an interrupt will be triggered and the OS will then give attention to it, restore the previous process's state, and allocate some CPU time to handle the available data.
So, in this case, the CPU is like a thread in your application. It is never blocked. It is always doing some CPU-bound stuff.
The idea behind NIO programming is the same. In the case you're exposing, imagine that your HTTP server has a single thread. When you receive a request from your client you need to make an upstream request (which represents I/O). So what a NIO framework would do here is to issue the request and register a callback for when the response is available.
Immediately after that, your valuable single thread is released to attend to yet another request, which is going to register another callback, and so on, and so on.
When the callback resolves, it will be automatically scheduled to be processed by your single thread.
As such, that thread works as an event loop, one in which you're supposed to schedule only CPU bound stuff. Every time you need to do I/O, that's done in a non-blocking way and when that I/O is complete, some CPU-bound callback is put into the event loop to deal with the response.
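A minimal Node.js sketch of that pattern: a single thread runs a server that issues a non-blocking upstream request and registers callbacks, leaving the thread free to accept more clients while the I/O is in flight (the upstream URL is made up):

    const http = require('http');

    http.createServer((req, res) => {
      // Issue the upstream request and register callbacks; nothing blocks.
      http.get('http://upstream.example.com/data', (upstream) => {
        let body = '';
        upstream.on('data', (chunk) => { body += chunk; });
        upstream.on('end', () => res.end(body)); // CPU-bound callback, run on the loop
      });
      // The single thread returns to the event loop here, free for more clients.
    }).listen(8080);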
This is a powerful concept, because with a very small number of threads you can process thousands of requests and therefore scale more easily. Do more with less.
This feature is one of the major selling points of Node.js and the reason why even using a single thread it can be used to develop backend applications.
Likewise, this is the reason for the proliferation of frameworks and initiatives like Netty, RxJava, Reactive Streams, and Project Reactor. They all seek to promote this type of optimization and programming model.
There is also an interesting movement of new frameworks that leverage these powerful features and try to compete with or complement one another. I'm talking about interesting projects like Vert.x and Ratpack. And I'm pretty sure there are many more out there for other languages.
The whole idea of the non-blocking paradigm is achieved by this concept called the "Event Loop".
Interesting references:
http://www.masterraghu.com/subjects/np/introduction/unix_network_programming_v1.3/ch06lev1sec2.html
Understanding the Event Loop
https://www.youtube.com/watch?v=8aGhZQkoFbQ
The documentation of node.js describes the so-called phases of its underlying event loop.
It explicitly states also that idle and prepare phases are only used internally.
Since the event loop of node.js is the one provided by libuv, it goes without saying that those phases are probably mapped onto the idle and prepare handles of libuv.
They would help to achieve greater granularity when organizing the tasks in a piece of software. In particular, they are the only way to schedule something between the execution of the I/O callbacks and the poll phase.
Anyway, they are not exported from the underlying environment.
What's the reason for which those phases have been forbidden, effectively giving users an apparently poorer event loop than the one offered by libuv?
Is there any other way to schedule tasks in the way mentioned above?
Side note: it's just curiosity.
I used to work with both libuv and Node.js and I noticed this, so I want to know if there is a technical reason for it, or whether that is simply how it has been designed, with no particular reason.
I don't think there is a specific reason to "forbid" them. Moreover, they are not really forbidden, they are just not exposed. You could create a Node addon which allows you to create idle and prepare handles and there would be no problem at all. There are some things you must be aware of:
Idle handles have a terrible name: they don't run when the loop is actually idle. They run once per loop iteration, after the timers, and if any idle handle is active, the loop will block for I/O for zero seconds. So they can be dangerous, because the CPU will spin if you don't stop them.
Callbacks registered with process.nextTick are called when the C++ <-> JS boundary is crossed (see calls to MakeCallback) so i/o callbacks could be deferred and run a bit later. If you exposed prepare handles to JS you would use MakeCallback in the C++ code, so some of the process.nextTick callbacks would also be called alongside your prepare callbacks.
As a general note: idle, check and prepare handles were somehow inherited from libev (which libuv used to use internally). Check and prepare can be used when embedding libuv with other libraries and idle handles are a bit weird, as I mentioned above. Also, libuv follows its own path these days, so not everything libuv has will end up exposed in Node land.
You could ask the reverse question: "why do you need the idle phase, for example, to be exposed?" You can just use setImmediate().
Also, why would you want to execute something between the I/O callbacks and the poll phase, given that you don't explicitly control those things anyway?
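For illustration, the scheduling hooks that are exposed to JavaScript can be compared directly; a minimal sketch whose output order reflects the phases discussed above:

    // process.nextTick runs before control returns to the event loop;
    // setImmediate runs in the check phase, after the poll phase.
    setImmediate(() => console.log('3. setImmediate: check phase'));
    process.nextTick(() => console.log('2. nextTick: before the loop continues'));
    console.log('1. synchronous code runs first');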
I've been reading (and working) quite a bit with massively multi-threaded applications, and with IO, and I've found that the term asynchronous has become some sort of catch-all for multiple vague ideas. I'm wondering if I understand it correctly. The way I see it is that there are two main branches of "asynchronicity".
Asynchronous I/O, such as network read/write. What this really boils down to is efficient parallel processing between multiple CPUs, such as your main CPU and your NIC's CPU. The idea is to have multiple processors running in parallel, exchanging data, without blocking while waiting for the other to finish and return the results of its job.
Minimizing context-switching penalties by minimizing the use of threads. This seems to be what the .NET framework is focusing on with its async/await features. Instead of spawning/closing/blocking threads, break parallel jobs into tasks and use a software task scheduler to keep a pool of threads as busy as possible without resorting to spawning new ones.
These seem like two entirely separate concepts with no similarities that could tie them together, but are both referred to by the same "asynchronous computing" vocabulary.
Am I understanding all of this correctly?
Asynchronous basically means not blocking, i.e. not having to wait for an operation to complete.
Threads are just one way of accomplishing that. There are many ways of doing this, at the hardware level, the OS level, and the software level.
Someone with more experience than me can give examples of asynchronicity not related to threads.
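One everyday example of asynchronicity that involves no application threads at all is a plain timer; a minimal sketch:

    // No thread sits waiting out the 1000 ms; the runtime's timer machinery
    // simply schedules the callback for a later turn of the event loop.
    setTimeout(() => console.log('one second later'), 1000);
    console.log('scheduled, moving on');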
What this really boils down to is efficient parallel processing between multiple CPUs, such as your main CPU and your NIC CPU. The idea is to have multiple processors running in parallel...
Asynchronous programming is not all about multi-core CPUs and parallelism: consider a single-core CPU with just one thread that creates email messages and sends them. In a synchronous fashion, it would spend a few microseconds creating the message and a lot more time sending it through the network, and only then create the next message. But in an asynchronous program, the thread could create a new message while the previous one is being sent through the network. One implementation of that kind of program can use the .NET async/await feature, where you can have just one thread. But even a blocking-I/O program could be considered asynchronous: if the main thread creates the messages and queues them in a buffer, from which another thread pulls them and sends them with blocking I/O, then from the main thread's point of view it's completely async.
.NET async/await just uses the OS APIs, which are already async: reading/writing a file and sending/receiving data through the network are all async anyway - the OS doesn't block on them (the drivers themselves are async).
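Translating the email example into Node.js terms (buildMessage and sendMessage below are hypothetical stand-ins), a single thread can keep building messages while earlier ones are still on the wire:

    // Hypothetical stand-ins: building is quick CPU work, sending is slow I/O.
    const buildMessage = (to) => ({ to, body: 'hello' });
    const sendMessage = (msg) =>
      new Promise((resolve) => setTimeout(resolve, 200)); // simulated network send

    async function sendAll(recipients) {
      const inFlight = [];
      for (const to of recipients) {
        inFlight.push(sendMessage(buildMessage(to))); // start the send, don't wait
      }
      await Promise.all(inFlight); // all sends overlap on the single thread
    }

    sendAll(['a@example.com', 'b@example.com']).then(() => console.log('done'));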
Asynchronous is a general term which does not have a widely accepted meaning. Different domains assign different meanings to it.
For instance, async IO means that instead of blocking on an IO call, something else happens. That something else can be really different things, but it usually involves some sort of notification of call completion. Details might differ. For instance, the notification might be built into the call itself, like in MS Completion Ports (if memory serves). Or it can be a check you perform before you make a call, so that the call cannot block - this is what poll() and friends do.
Async might also simply mean parallel execution. For instance, one might say that 'the database is updated asynchronously', meaning that there is a dedicated thread which handles database connectivity, and that thread does not slow down the main processing thread.
I know this question has been discussed in the past in much detail (How is Node.js inherently faster when it still relies on Threads internally?) but I still fail to properly understand the node.js event loop model and how, being single-threaded, it handles concurrent requests.
Up till now my understanding is: we receive an I/O request --> a thread is spawned internally by node.js and the I/O request is handed to it --> since this is an I/O request, the CPU hands it to the DMA controller and frees this thread --> this thread goes back into the thread pool to serve a different request --> the DMA is still doing the I/O; once the DMA has all the data, a sort of event is fired --> this event is captured by the node.js system, which puts the supplied callback function on the event loop --> whenever the event loop gets the opportunity, it executes the callback on the data fetched by the I/O --> thanks to closures, the callback function executes on the data fetched by its own request only.
So this process goes on repeatedly. Could someone please elaborate on my understanding and provide some more information?
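To make the closure point at the end of that flow concrete, here is a minimal sketch:

    const fs = require('fs');

    function handleRequest(requestId) {
      // The callback closes over requestId, so when the event loop finally
      // runs it, it still operates on the data belonging to this request.
      fs.readFile(__filename, 'utf8', (err, data) => {
        if (err) throw err;
        console.log('request', requestId, 'got', data.length, 'characters');
      });
    }

    handleRequest(1);
    handleRequest(2); // both reads are in flight; neither blocks the thread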
There is only one thread (the main thread) for dealing with network I/O (file I/O is a slightly different story because not all platforms provide usable asynchronous, non-blocking file I/O APIs, so the synchronous file I/O APIs are used on those platforms in a threadpool).
So when network requests come in, they're all handled by the main thread which uses (indirectly via libuv) epoll/kqueue/IOCP/etc. for detecting (in a non-blocking way) when data is available (or when there is an incoming TCP connection for example). If there is data available, it calls out appropriately to javascript as needed, passing the socket data. If there is no data on the socket (and there's nothing else for the event loop to do, e.g. firing timers), then execution proceeds to the next iteration of the event loop where the process starts all over again.
As far as associating socket data with socket javascript objects goes, it's the combination of C++ wrapper objects (e.g. tcp_wrap, udp_wrap, etc.) and javascript objects that makes sure the data gets to the appropriate place.
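A minimal sketch of that single main thread handling many sockets at once; under the hood libuv asks epoll/kqueue/IOCP which socket actually has data ready:

    const net = require('net');

    // One thread, arbitrarily many concurrent connections: 'data' events
    // fire only when the OS reports that bytes are actually available.
    net.createServer((socket) => {
      socket.on('data', (chunk) => socket.write(chunk)); // echo it back
    }).listen(9000);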
Here's a slightly older diagram that explains what happens in a single cycle of node's event loop. Some of it may have changed slightly since node v0.9, but it gets you the general idea:
node.js has a single threaded model which eliminates the need for locks and semaphores (used in the traditional multithreaded model). Locks and semaphores can add some costs in terms of performance and, more importantly, can provide a lot of rope to hang yourself with (in other words, many pitfalls). IO operations happen in parallel and because work between IOs is typically very small, this single threaded model usually works quite nicely.
(side note: if you have an app that does a lot of work between IO operations, i.e. CPU-intensive apps, that is a case where node doesn't scale well)
I like to think of the argument for why node's model scales well is the same as why people think NoSQL scales better than SQL databases. Obviously Java (multi-threaded) and SQL scale; big companies like Facebook and Twitter have proven that. However, like in SQL, there are a lot of things you could do incorrectly to slow down your performance. Node.js doesn't eliminate all potential problems, it just does a good job of restricting many of the common causes.
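As a small illustration of the "no locks" point: because all callbacks run on the one thread, shared state can be mutated without synchronization primitives. A minimal sketch:

    let counter = 0;

    // Ten async callbacks all touch the same variable. In a multithreaded
    // runtime this would need a lock or an atomic; here callbacks never run
    // concurrently, so plain mutation is safe.
    for (let i = 0; i < 10; i++) {
      setImmediate(() => { counter += 1; });
    }
    setImmediate(() => console.log('counter =', counter)); // 10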