Why is Node.js scalable?

Node.js is said to be scalable, but what is meant by that? Which part of a Node.js server is scalable? I read that it is a single-threaded technology that is not suitable for applications that need a lot of CPU resources. Those facts do not seem to fit with scalability, so what is meant by it?

The JavaScript that Node runs is single-threaded, but a lot of the things you call in Node - such as network or file I/O - run in background threads. See this post for a basic overview:
Node is not single threaded
If you need the gritty details, you should look into libuv, which is the 'magic' piece converting threads into event loops:
http://nikhilm.github.io/uvbook/basics.html#event-loops
Additionally, if you need to do something CPU-intensive in Node itself, you can easily hand it off to a child process; see http://nodejs.org/api/child_process.html#child_process_child_process_fork_modulepath_args_options for details.
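A minimal sketch of that approach (the file names and the toy computation are invented for illustration, not from the linked docs):
    // main.js - keeps the event loop free by forking a separate Node process
    const { fork } = require('child_process');
    const child = fork('./cpu-task.js');
    child.on('message', (result) => console.log('result from child:', result));
    child.send({ n: 40 });

    // cpu-task.js - receives a job over IPC, burns CPU, sends the answer back
    process.on('message', ({ n }) => {
      let a = 0, b = 1;
      for (let i = 0; i < n; i++) [a, b] = [b, a + b];   // toy Fibonacci as a stand-in for real work
      process.send(a);
    });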

It's not that Node.js is more scalable per se.
It's more that everything you do in it (I/O) is scalable without your having to do anything special.
I/O is safer and easier to do in parallel, as it tends to share no data between execution threads. Node.js lets you do it using event programming, which is simple, elegant and easy to use. It is an old and proven programming paradigm, used for years by GUIs and other graphically intensive apps such as games.
In fact it is less scalable than full-fledged languages like C++, C, Java, etc., which can scale out much better through full multithreading. That lets you scale the CPU too, but it also opens a can of worms.
To share the CPU you have to share data, and that is another story (semaphores, locks, etc.).
You can do the same as Node.js in any of the above languages, but it is not part of the language itself, so you have to roll your own or use libraries that provide it.
That said, it is not that hard, but it is certainly harder than in Node.js.
Most web services are I/O bound, so Node.js fits well and is fine for most cases. But once you start doing CPU-intensive work, the events are not serviced and everything stops.
In that case it is better to use another language; there is no really good solution in Node for that. You can spawn multiple processes, but then you will not be able to share data between them, and without data sharing there is no way to scale CPU efficiently, so better not to try.
Use Node.js for I/O, and a better-suited language with proper multithreading for CPU-intensive work.

It is scalable due to load balancing. Essentially, you can have multiple jobs for Node to process and it can handle them with no significant burden. This makes it scalable.

All of Node's APIs are written in such a way that they support callbacks.
For example, a function to read a file may start reading the file and return control to the execution environment immediately so that the next instruction can be executed. Once the file I/O is complete, it will call the callback function, passing it the content of the file as a parameter. So there is no blocking or waiting for file I/O. This makes Node.js highly scalable, as it can process a high number of requests without waiting for any function to return a result. -- Tutorials Point
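A minimal sketch of that callback pattern (the file path is just an example):
    const fs = require('fs');
    fs.readFile('/tmp/data.txt', 'utf8', (err, contents) => {
      // runs later, once the read has completed in the background
      if (err) return console.error('read failed:', err);
      console.log('file contents:', contents);
    });
    console.log('readFile returned immediately; execution carried on');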

Related

What should nodejs NOT be doing?

I'm now a couple of weeks into my Node deep dive. I've learned a lot from Anthony's excellent course on Udemy and I'm currently going through the book "Node.js the Right Way". I've also gone through quite a few articles that brought up some very good points about real-world scenarios with Node and coupling it with other technologies out there.
HOWEVER, it seems to be accepted as law that you don't perform computationally heavy tasks with Node because of its single-threaded architecture. I get the idea of the event loop and async callbacks, etc. In fact, Node's strength stems from handling tons of concurrent I/O connections, if I understand correctly. No matter where I read, though, the source warns against tying that thread up with a long task. I can't seem to find any rule of thumb for what to avoid running in Node's process. I've seen a solution saying that Node should pass computationally heavy tasks to a message service like RabbitMQ, which a dedicated app server can churn through (any suggestions on what pairs well with Node for this task? I read something about an N-tier architecture). The reason I'm so confused is that I see Node being used for reading and writing files to highlight the usage of streams, but in my mind fetching/reading/writing files is an expensive task (I feel mistaken).
TL;DR: What kind of tasks should Node pass off to a workhorse server? What material can I read that explains the paradigm in detail?
Edit: it seems my lack of understanding stemmed from not knowing what would even halt a thread in the first place, outside of an obviously synchronous I/O request. So if I understand correctly, reading and writing data is I/O, whereas mutating that data or doing mathematical computations is computationally expensive (to varying degrees depending on the task, of course). Thanks for all the answers!
If you're using Node.js as a server, then running a long-running synchronous computational task ties up the one thread, and during that computation your server is unresponsive to other requests. That's generally a bad situation for servers.
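As a quick illustration (a toy sketch; the route name and the 5-second delay are arbitrary), one synchronous busy loop stalls every other request on the server:
    const http = require('http');
    http.createServer((req, res) => {
      if (req.url === '/block') {
        const end = Date.now() + 5000;
        while (Date.now() < end) {}          // synchronous busy loop: nothing else is served meanwhile
        res.end('done blocking\n');
      } else {
        res.end('hello\n');                  // normally answers instantly
      }
    }).listen(3000);
    // request /block in one window, then / in another: the second hangs for ~5 seconds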
So, the general design principles for Node.js server design are these:
Use only asynchronous I/O functions. For example, use fs.readFile(), not fs.readFileSync().
If you have a computationally intense operation, then move it to a child process. If you do a lot of these, then have several child processes that can process these long running operations. This keeps your main thread free so it can be responsive to I/O requests.
If you want to increase the overall scalability and responsiveness of your server, you can implement clustering with a server process per CPU. This isn't really a substitute for #2 above, but can also improve scalability and responsiveness.
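A minimal sketch of point 3, using Node's built-in cluster module (the port and the response body are placeholders):
    const cluster = require('cluster');
    const http = require('http');
    const os = require('os');
    if (cluster.isMaster) {                  // isPrimary in newer Node releases
      for (let i = 0; i < os.cpus().length; i++) cluster.fork();   // one worker process per CPU core
    } else {
      // each worker runs its own copy of the server; connections to port 3000 are shared among them
      http.createServer((req, res) => res.end('handled by pid ' + process.pid + '\n'))
          .listen(3000);
    }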
The reason I'm so confused is because I see node being used for reading and writing files to highlight the usage of streams but in my mind fetching/reading/writing files is an expensive task (I feel mistaken).
If you use the asynchronous versions of the I/O functions, then reading/writing from the disk does not block the main JS thread; they present an asynchronous interface, and the main thread can do other things while the system fetches data from the disk.
What kind of tasks should Node pass off to a workhorse server?
It depends a bit on the server load you are trying to support, what you're asking it to do, and your tolerance for responsiveness delays. The higher the load you're aiming for, the more you need to get any computationally intensive task off the main JS thread and into some other process. With a moderate number of long-running transactions and a modest server load, you may be able to use clustering alone to reach your scalability and responsiveness goals, but at some threshold of transaction length or load, you have to get the computationally intensive work out of the main JS thread.
HOWEVER, it seems to be accepted as law that you don't perform computationally heavy tasks with Node because of its single-threaded architecture.
I would reword this:
don't perform computationally heavy tasks with Node unless you need to
Sometimes, you need to crunch through a bunch of data. There are times when it's faster or better to do that in-process than it is to pass it around.
A practical example:
I have a Node.js server that reads in raw log data from a bunch of servers. No standard logging utilities could be used as I have some custom processing being done, as well as custom authentication schemes for getting the log data. The whole thing is HTTP requests, and then parsing and re-writing the data.
As you can imagine, this uses a ton of CPU. Here's the thing, though: is that CPU wasted? Am I doing anything in JS that I could do faster had I written it in another language? Often the CPU is busy for a real reason, and the benefit of switching to something more native might be marginal. And then you have to factor in the overhead of switching.
Remember that with Node.js, you can compile native extensions, so it's possible to have the best of both worlds in a well established framework.
For me, the human trade-offs came in. I'm a far more efficient Node.js developer than I am in anything that runs natively. Even if my Node.js app proved to be 5x slower than something native (which I'd imagine would be the extreme), I could just buy 5 more servers to run it, at much less cost than it would take for me to develop and maintain the native solution.
Use what you need. If you need to burn a lot of CPU in Node.js, just make sure you're doing it as efficiently as you can. If you find that you could optimize something with native code, consider making an extension, and be sure to measure the performance differences afterwards. If you feel the desire to throw out the whole stack... reconsider your approach, as there might be something you're not considering.
Reading and writing files are I/O operations, so they are NOT CPU intensive. You can do a fair amount of concurrent I/O with Node without tying up any single request (in a Node HTTP server for example).
But people use Node for CPU-intensive tasks all the time, and it's fine. You just have to realize that if it uses all of the CPU for any significant amount of time, then you will block all other requests to that server, which generally won't be acceptable if you need the server to stay available. But there are lots of times when your Node process is not trying to be a server firing back responses to many requests, such as when you have a Node program that just processes data and isn't a server at all.
Also, using another process is not the only way to do background tasks in Node. There is also webworker-threads which allows you to use threads if that is more convenient (you do have to copy the data in and out).
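These days the same idea is also available as the built-in worker_threads module (Node 12+); a rough sketch, with the workload invented for illustration:
    const { Worker, isMainThread, parentPort, workerData } = require('worker_threads');
    if (isMainThread) {
      const worker = new Worker(__filename, { workerData: { upTo: 1e8 } });
      worker.on('message', (sum) => console.log('summed off the main thread:', sum));
    } else {
      // runs in a separate thread; data is copied in via workerData and out via postMessage
      let sum = 0;
      for (let i = 0; i < workerData.upTo; i++) sum += i;
      parentPort.postMessage(sum);
    }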
I would stop reading and do some targeted experiments so you can see what they are talking about. Try to create and test three different programs:
1) An HTTP server that handles lots of requests and always returns immediately with file contents.
2) An HTTP server that handles lots of requests, but where a certain request triggers a large math computation that takes many seconds to return (which will block all the other requests -- big problem).
3) A Node program that is not an HTTP server, which does that large math computation and prints the result in the terminal (even though it takes a while, it is not handling other requests since it's not a server, so it's fine for it to block).

Why should I run two or more Node.js instances when I want to make good use of a multi-core CPU?

I know that when doing I/O operations, Node.js does the work in a separate thread, so can't that thread run on another core? If so, doesn't that mean a single Node.js process can make use of two cores?
Node does asynchronous I/O but the I/O is very little "work" in terms of CPU. It's mostly waiting for the OS to have data ready to be processed. So a single node process is only going to make good use of 1 CPU core for the actual logic in your node program itself. If you max out that CPU core and want more of your actual program's logic to run at the same time, you need another process. Yes, there are technically a handful of threads in use by any node program, but only 1 of them is the CPU-intensive one that is actually interpreting your javascript.
It's mostly waiting for the OS to have data ready to be processed. So a single node process is only going to make good use of 1 CPU core for the actual logic in your node program itself.
Why?
This is by design. By making this particular choice, Node programs do not have to deal with complex and error-prone concurrent-programming idioms such as threads, semaphores, locking, deadlocks, etc. It's just one way of dealing with that complexity. It is based on two main observations: (1) asynchronous I/O is currently regarded as the most efficient and manageable design for high-concurrency systems, and (2) most of the other programming idioms for concurrency are relatively more difficult to code correctly. This is simply what Node.js chose. The Go language, for example, made a different set of choices, favoring other idioms that allow multithreading, blocking, channels, message queueing, etc. Consequently, Go programs make better use of multi-core machines.

How is Node.js evented system different than the actor pattern of Akka?

I've worked with Node.js for a little while and consider myself pretty good with Java. But I just discovered Akka and was immediately interested in its actor pattern (from what I understand).
Now, assuming my JavaScript skills were on par with my Scala/Java skills, I want to focus on the practicality of either system. Especially in the terms of web services.
It was my understanding that Node is excellent at handling many concurrent operations. I imagine a good Node web service for an asset management system would excel at handling many users submitting changes at the same time (in a large, heavy traffic application).
But after reading about the actors in Akka, it seems it would excel at the same thing. And I like the idea of reducing work to bite-sized pieces. Plus, years ago I dabbled in Erlang and fell in love with the message-passing system it uses.
I work on many applications that deal with complex business logic and I'm thinking it's time to jump heavier into one or the other. Especially upgrading legacy Struts and C# applications.
Anyway, avoiding holy wars, how are the two systems fundamentally different? It seems both are geared towards the same goal. With maybe Akka's "self-healing" architecture having an advantage.
EDIT
It looks like I am getting close votes. Please don't take this question as a "which is better, node or akka?". What I am looking for is the fundamental differences in event driven libraries like Node and actor based ones like Akka.
Without going into the details (about which I know too little in the case of Node.js), the main difference is that Node.js supports only concurrency without parallelism while Akka supports both. Both systems are completely event-driven and can scale to large work-loads, but the lack of parallelism makes it difficult in Node.js (i.e. parallelism is explicitly coded by starting multiple nodes and dispatching requests accordingly; it is therefore inflexible at runtime), while it is quite easy in Akka due to its tunable multi-threaded executors. Given small isolated units of work (actor invocations) Akka will automatically parallelize execution for you.
Another difference of importance is that Akka includes a system for handling failure in a structured way (by having each actor supervised by its parent, which is mandatory) whereas Node.js relies upon conventions for authors to pass error conditions from callback to callback. The underlying problem is that asynchronous systems cannot use the standard approach of exceptions employed by synchronous stack-based systems, because the “calling” code will have moved on to different tasks by the time the callback’s error occurs. Having fault handling built into the system makes it more likely that applications built on that system are robust.
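For reference, the Node.js convention referred to above is the error-first callback. A rough sketch (the functions and the failure are invented for illustration):
    // toy async functions following the error-first convention
    function loadUser(id, cb) { setTimeout(() => cb(null, { id, name: 'demo' }), 10); }
    function loadOrders(user, cb) { setTimeout(() => cb(new Error('db down')), 10); }

    loadUser(1, (err, user) => {
      if (err) return console.error('failed:', err.message);   // the caller must check err by hand
      loadOrders(user, (err, orders) => {
        if (err) return console.error('failed:', err.message); // and again at every level
        console.log(user, orders);
      });
    });
Every layer has to remember to check and forward err; nothing in the runtime enforces or supervises it, which is the contrast being drawn with Akka's mandatory parent supervision.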
The above is not meant to be exhaustive, I’m sure there are a lot more differences.
I haven't used Akka yet, but it seems Erlang-like, only on the JVM. In Erlang all processes are like actors in Akka: they have mailboxes, you can send messages between them, you have supervisors, etc.
Node.js uses cooperative concurrency. That means you get concurrency only where you allow it (for example, when you call an I/O operation or wait on some asynchronous event). When you have a long operation (calculating something in a long loop), the whole system blocks.
Erlang uses preemptive task switching. When you have a long loop, the system can pause it to run another operation and continue after some time. For massive concurrency, Node.js is good as long as you only do short operations. Both support millions of clients:
http://blog.caustik.com/2012/08/19/node-js-w1m-concurrent-connections/
http://blog.whatsapp.com/index.php/2012/01/1-million-is-so-2011/
In Java you need threads to do any concurrency; otherwise you can't pause execution inside a function, which Erlang can do (strictly, Erlang pauses between function calls, but since everything is a function call this happens everywhere). You can pause execution between messages.
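Coming back to the point above that a long loop blocks Node's cooperative scheduler: a common workaround is to chunk the work and yield back to the event loop between chunks, roughly like this (the chunk size and per-item work are placeholders):
    function processInChunks(items, chunkSize, done) {
      let i = 0;
      (function next() {
        const end = Math.min(i + chunkSize, items.length);
        for (; i < end; i++) {
          // per-item work goes here (placeholder)
        }
        if (i < items.length) setImmediate(next);   // give pending I/O callbacks a turn
        else done();
      })();
    }
    processInChunks(new Array(1e6).fill(0), 10000, () => console.log('done without starving the loop'));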
I'm not sure this is a fair comparison to draw. I read this more as "how does an event-based system compare with an actor model?". Node.js can support an actor model just as Scala does in Akka, or C# does in Orleans; in fact, check out nactor, somebody appears to already be trying it.
As for how an evented system and the actor model compare, I would let wiser people describe it. A few brief points, though, about the actor model:
The actor model is message-based.
Actor models tend to do well with distributed systems (clusters). Sure, event-based systems can be distributed, but I think the actor model has distribution built in with regard to distributing computations. A new request can be routed to a new actor in a different silo; I'm not sure how this would work in an event-based system.
The actor model supports failure in that if, say, cluster 1 appears to be down, the observer can generally find a different silo to do the work.
Also, check out drama. It is another nodejs actor model implementation.

Instant Messaging Server Design

Let's suppose we have an instant messaging application, client-server based, not P2P. The actual protocol doesn't matter; what matters is the server architecture. Such a server can be coded to operate in single-threaded, non-parallel mode using non-blocking sockets, which by definition allow us to perform operations like read and write effectively immediately (or instantly). This very feature of non-blocking sockets allows us to use some sort of select/poll function at the very core of the server, waste next to no time in the actual socket read/write operations, and instead spend the time processing all this information. Properly coded, this can be very fast, as far as I understand. But there is a second approach, which is to multithread aggressively, creating a new thread (obviously using some sort of thread pool, because that very operation can be (very) slow on some platforms and under some circumstances) and letting those threads work in parallel, while the main background thread handles accept() and the like. I've seen this approach explained in various places over the Net, so it obviously does exist.
Now the question is, if we have non-blocking sockets, and immediate read/write operations, and a simple, easily coded design, why does the second variant even exist? What problems are we trying to overcome with the second design, i.e. threads? AFAIK those are usually used to work around some slow and possibly blocking operations, but no such operations seem to be present there!
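For concreteness, and since Node.js is the running theme of this page, the single-threaded non-blocking design described above looks roughly like this (the port is arbitrary and the echo stands in for real IM protocol handling):
    const net = require('net');
    // one thread, one event loop; the OS poller (epoll/kqueue/IOCP, via libuv) wakes us
    // only when a socket is readable or writable, so reads and writes never block
    const server = net.createServer((socket) => {
      socket.on('data', (chunk) => {
        // a real server would parse the protocol frame here and route the message;
        // echoing it back stands in for that work
        socket.write(chunk);
      });
    });
    server.listen(5000);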
I'm assuming you're not talking about having a thread per client, as such a design is usually chosen for completely different reasons, but rather a pool of threads, each of which handles several concurrent clients.
The reason for that architecture vs. a single-threaded server is simply to take advantage of multiple processors. You're doing more work than simply I/O: you have to parse the messages, do various work, maybe even run some more heavyweight crypto algorithms. All this takes CPU. If you want to scale, taking advantage of multiple processors will allow you to scale even more and/or keep the per-client latency even lower.
Some of the gain in such a design can be a bit offset by the fact that you might need more locking in a multithreaded environment, but done right, and certainly depending on what you're doing, it can be a huge win, at the expense of more complexity.
Also, this might help overcome OS limitations. The I/O paths in the kernel might get more distributed among the processors. Not all operating systems are fully able to thread the I/O from a single-threaded application. Back in the old days there weren't all the great alternatives to the old *nix select(), which usually had a file-descriptor limit of 1024, and similar APIs started degrading severely once you told them to monitor too many sockets. Spreading all those clients across multiple threads or processes helped overcome that limit.
As for a 1:1 mapping between threads and clients, there are several reasons to implement that architecture:
An easier programming model, which might lead to fewer hard-to-find bugs and faster implementation.
Support for blocking APIs. These are all over the place. Having a thread handle many/all of the clients and then go on to make a blocking call to a database is going to stall everyone. Even reading files can block your application, and you usually can't monitor regular file handles/descriptors for I/O events; when you can, the programming model is often exceptionally complicated.
The drawback here is that it won't scale, at least not with the most widely used languages/frameworks. Having thousands of native threads will hurt performance, though some languages provide a much more lightweight approach here, such as Erlang and, to some extent, Go.

When is multi-threading not a good idea? [closed]

I was recently working on an application that sent and received messages over Ethernet and Serial. I was then tasked to add the monitoring of DIO discretes. I thought,
"No reason to interrupt the main thread which is involved in message processing, I'll just create another thread that monitors DIO."
This decision, however, proved to be poor. Sometimes the main thread would be interrupted between a Send and a Receive serial message. This interruption would disrupt the timing and alas, messages would be lost (forever).
I found another way to monitor the DIO without using another thread and Ethernet and Serial communication were restored to their correct functionality.
The whole fiasco, however, got me thinking. Are there any general guidelines about when not to use multiple threads, and/or does anyone have more examples of situations when using multiple threads is not a good idea?
EDIT: Based on your comments and after scouring the internet for information, I have composed a blog post entitled When is multi-threading not a good idea?
On a single-processor machine running a desktop application, you use multiple threads so you don't freeze the app, but for nothing else really.
On a single-processor server running a web-based app, there is no need for multithreading because the web server handles most of it.
On a multi-processor machine running a desktop app, you are encouraged to use multiple threads and parallel programming. Make as many threads as there are processors.
On a multi-processor server running a web-based app, there is again no need for multiple threads because the web server handles it.
In short, if you use multiple threads for anything other than keeping desktop apps from freezing (and the other generic cases above), you will make the app slower on a single-core machine because the threads keep interrupting each other.
Why? Because of context switches: it takes time for the hardware to switch between threads. On a multi-core box, go ahead and use one thread per core and you will see a big ramp-up.
To paraphrase an old quote: A programmer had a problem. He thought, "I know, I'll use threads." Now the programmer has two problems. (Often attributed to JWZ, but it seems to predate his use of it talking about regexes.)
A good rule of thumb is "Don't use threads, unless there's a very compelling reason to use threads." Multiple threads are asking for trouble. Try to find a good way to solve the problem without using multiple threads, and only fall back to using threads if avoiding it is as much trouble as the extra effort to use threads. Also, consider switching to multiple threads if you're running on a multi-core/multi-CPU machine, and performance testing of the single threaded version shows that you need the performance of the extra cores.
Multi-threading is a bad idea if:
Several threads access and update the same resource (set a variable, write to a file), and you don't understand thread safety.
Several threads interact with each other and you don't understand mutexes and similar thread-management tools.
Your program uses static variables (threads typically share them by default).
You haven't debugged concurrency issues.
Actually, multithreading is not scalable and is hard to debug, so it should not be used if you can avoid it. There are a few cases where it is mandatory: when performance on a multi-CPU machine matters, or when you deal with a server that has a lot of clients taking a long time to answer.
In any other case, you can use alternatives such as a queue plus cron jobs.
You might want to take a look at Dan Kegel's "The C10K problem" web page about handling multiple data sources/sinks.
Basically it is best to use minimal threads, which for sockets can be done in most OSes with some event system (or asynchronously on Windows using IOCP).
When you run into a case where the OS and/or libraries do not offer a way to perform communication in a non-blocking manner, it is best to use a thread pool to handle it while reporting back to the same event loop.
Example diagram of layout:
Per CPU [*] EVENTLOOP ------ Handles nonblocking I/O using OS/library utilities
              |
              \___ Threadpool for various blocking events
              \___ Threadpool for handling the I/O messages that would take long
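In Node.js terms, this is roughly the layout libuv gives you out of the box: the event loop handles non-blocking socket I/O, while blocking library work (file system, DNS, some crypto) is serviced by a small internal thread pool. A sketch, with arbitrary parameters, showing an async crypto call staying off the event loop thread:
    const crypto = require('crypto');
    // the async pbkdf2 runs on libuv's worker pool, not on the event loop thread,
    // so the timer below keeps firing while the hashing grinds away
    crypto.pbkdf2('secret', 'salt', 500000, 64, 'sha512', (err, key) => {
      if (err) throw err;
      console.log('key derived:', key.toString('hex').slice(0, 16) + '...');
    });
    const tick = setInterval(() => console.log('event loop still responsive'), 250);
    setTimeout(() => clearInterval(tick), 3000);   // stop ticking after a few seconds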
Multithreading is bad except in the single case where it is good. That case is when:
The work is CPU-bound, or parts of it are CPU-bound.
The work is parallelisable.
If either or both of these conditions are missing, multithreading is not going to be a winning strategy.
If the work is not CPU-bound, then you are not waiting on threads to finish work but rather on some external event, such as network activity, for the process to complete its work. Using threads adds the cost of context switches between threads, the cost of synchronization (mutexes, etc.), and the irregularity of thread preemption. The alternative in most common use is asynchronous I/O, in which a single thread listens to several I/O ports and acts on whichever happens to be ready now, one at a time. If by some chance these slow channels all happen to become ready at the same time, it might seem like you will experience a slowdown, but in practice this is rarely true. The cost of handling each port individually is often comparable to or better than the cost of synchronizing state across multiple threads as each channel is emptied.
Many tasks may be compute-bound but still not practical for a multithreaded approach because the process must synchronise on its entire state. Such a program cannot benefit from multithreading because no work can be performed concurrently. Fortunately, most programs that require enormous amounts of CPU can be parallelised to some level.
Multi-threading is not a good idea if you need to guarantee precise physical timing (like in your example). Other cons include intensive data exchange between threads. I would say multi-threading is good for really parallel tasks if you don't care much about their relative speed/priority/timing.
A recent application I wrote that had to use multithreading (although not unbounded number of threads) was one where I had to communicate in several directions over two protocols, plus monitoring a third resource for changes. Both protocol libraries required a thread to run the respective event loop in, and when those were accounted for, it was easy to create a third loop for the resource monitoring. In addition to the event loop requirements, the messages going through the wires had strict timing requirements, and one loop couldn't be risked blocking the other, something that was further alleviated by using a multicore CPU (SPARC).
There were further discussions on whether each message processing should be considered a job that was given to a thread from a thread pool, but in the end that was an extension that wasn't worth the work.
All-in-all, threads should if possible only be considered when you can partition the work into well defined jobs (or series of jobs) such that the semantics are relatively easy to document and implement, and you can put an upper bound on the number of threads you use and that need to interact. Systems where this is best applied are almost message passing systems.
In principle, every time there is no overhead for the caller to wait in a queue.
A couple more possible reasons to use threads:
Your platform lacks asynchronous I/O operations, e.g. Windows ME (no completion ports or overlapped I/O, a pain when porting XP applications that use them) or Java 1.3 and earlier.
A third-party library function that can hang, e.g. if a remote server is down, and the library provides no way to cancel the operation and you can't modify it.
Keeping a GUI responsive during intensive processing doesn't always require additional threads. A single callback function is usually sufficient.
If none of the above apply and I still want parallelism for some reason, I prefer to launch an independent process if possible.
I would say multi-threading is generally used to:
Allow data processing in the background while a GUI remains responsive
Split very big data analysis onto multiple processing units so that you can get your results quicker.
When you're receiving data from some hardware and need something to continuously add it to a buffer while some other element decides what to do with it (write to disk, display on a GUI etc.).
So if you're not solving one of those issues, it's unlikely that adding threads will make your life easier. In fact, it'll almost certainly make it harder because, as others have mentioned, debugging multithreaded applications is considerably more work than a single-threaded solution.
Security might be a reason to avoid using multiple threads (over multiple processes). See Google chrome for an example of multi-process safety features.
Multi-threading is scalable, and will allow your UI to maintain its responsiveness while doing very complicated things in the background. I don't understand where other responses are acquiring their information on multi-threading.
When you shouldn't multi-thread is a misleading question for your problem. Your problem is this: why did multi-threading my application cause serial/Ethernet communications to fail?
The answer to that question will depend on the implementation, which should be discussed in another question. I know for a fact that you can have both ethernet and serial communications happening in a multi-threaded application at the same time as numerous other tasks without causing any data loss.
The one reason to not use multi-threading is:
There is one task, and no user interface with which the task will interfere.
The reasons to use multi-threading are:
Provides superior responsiveness to the user
Performs multiple tasks at the same time to decrease overall execution time
Uses more of the current multi-core CPUs, and the many-core CPUs of the future.
There are three basic methods of multi-threaded programming that make thread safety easy to implement; you only need to use one for success:
Thread-safe data types passed between threads.
Thread-safe methods in the threaded object to modify the data passed between them.
PostMessage capabilities to communicate between threads.
Are the processes parallel? Is performance a real concern? Are there multiple 'threads' of execution like on a web server? I don't think there is a finite answer.
A common source of threading issues is the usual approaches employed to synchronize data. Having threads share state and then implement locking at all the appropriate places is a major source of complexity for both design and debugging. Getting the locking right to balance stability, performance, and scalability is always a hard problem to solve. Even the most experienced experts get it wrong frequently. Alternative techniques to deal with threading can alleviate much of this complexity. The Clojure programming language implements several interesting techniques for dealing with concurrency.

Resources