I am trying to learn Node.js, and these are some of the points I understand so far:
Node.js doesn't create a separate process for each request; instead, a single process handles all requests.
It is asynchronous, which means you can attach a callback to a long-running operation and carry on with the rest of your work without waiting for it to finish.
What I really don't understand is the author's point in Understanding node.js: "Everything runs in parallel except your code". I understand the analogy and the code that explains it, but I still don't get the distinction between "everything" and "your code". I have heard this said about Node.js quite often.
Also, people praise Node.js for its efficiency, since the memory overhead for one concurrent connection may be as low as 8 KB, but what about CPU load? Does Node.js keep it much lower than PHP+Apache does?
Node.js uses a single thread whenever it is running the JavaScript in your application. Tasks that are asynchronous (network, filesystem, etc.) are all handled on separate threads automatically for you. This means that you get much of the benefit of a multithreaded application without having to worry about all of the trouble that comes with locking resources and whatnot.
Node is not a tool for every job. It is ideal for applications that are IO bound. For example, if your application requires a ton of work to process templates and the like, Node probably isn't for you. If instead you're just shuffling data around, Node can be very effective.
The reason Node is often quoted as being faster than servers like Apache is that it doesn't create a thread, along with all of its resources, to handle each request. In Apache, most of the time the thread handling a request is waiting on network or filesystem data, and while it waits it is wasting resources. With Node, only one thread processes those requests (in your application). Again, this is great for some things, but if you have a lot of processing to do, Node would not be effective, as in those situations it can really only handle a single request at a time.
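To make that split concrete, here is a minimal sketch (the file path is only an illustration): your JavaScript runs on one thread, while the actual disk read is handled for you in the background.

// The file read is dispatched to a background thread, so the single
// JavaScript thread is free to keep working in the meantime.
const fs = require('fs');

fs.readFile('/etc/hosts', 'utf8', (err, data) => {
  // This callback runs back on the JavaScript thread once the
  // background I/O has finished.
  if (err) throw err;
  console.log('file length:', data.length);
});

console.log('printed first; the read happens in the background');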
This video does a pretty good job of explaining: http://www.youtube.com/watch?v=F6k8lTrAE2g&feature=youtube_gdata
Everything runs in parallel except your code.
It means if you do
while(true){}
anywhere in your code, the entire Node application will stop. While the code you write is executing, nothing else runs: requests are not handled, responses are not returned, nothing. You have to be extremely careful not to hog the CPU in Node.
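Here is a small sketch of that failure mode (the port and URL are arbitrary): while one handler spins, every other request to the same process stalls.

// While /block is spinning, the single JavaScript thread is busy,
// so no other request to this server gets served.
const http = require('http');

http.createServer((req, res) => {
  if (req.url === '/block') {
    const end = Date.now() + 10000;
    while (Date.now() < end) {} // hogs the CPU for 10 seconds
  }
  res.end('done\n');
}).listen(8000);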
but what about CPU load?
That depends entirely on the nature of your application and the load. If your app is busy, it'll use more CPU.
Imagine a busy intersection with a traffic cop in the middle. When the cop is doing his job properly, hundreds of cars can pass through the intersection in a very fast and efficient way.
If the cop starts receiving and answering SMS messages on his cell while directing traffic, then things might get out of hand really quickly.
The traffic cop is your node.js app, and the time he spends on SMS is what the author refers to as "your code".
In other words: node.js performance shines the more you use it as a traffic cop. The more you use it to do things other than pulling and pushing data (e.g. sorting a list of numbers, rendering an HTML template), the more your capacity to accept and process new connections quickly will suffer.
"Everything" refers to everything else besides your code. For example, the stuff that handles HTTP. Another way to say the same thing is "your code doesn't wait for node.js to do stuff, like send data over TCP, because that's done asynchronously."
To answer your second question: I don't know which has less CPU load; I'd guess they're similar. Node.js's touted advantage is that the CPU is better utilized thanks to the aforementioned asynchronicity.
I was learning Node.js and found out that it is best used for I/O-intensive tasks, which confused me a bit. So, after some research, I found this statement: "An application that reads and/or writes a large amount of data". Does that mean Node.js is best used with data, that is, to read big data, take the necessary parts from it, and send them back to the client?
A nodejs application can be architected just fine to include non-I/O things, and it is not just suited for big data applications (in fact, big data has nothing to do with it at all).
A default, simple implementation of Node.js performs best when your application is not CPU-intensive and instead spends most of its time doing I/O (input/output) tasks such as reading from/writing to a database, reading/writing files, reading/sending network data, and so on. It's not about big data; it's about what the server spends most of its time doing.
Surprisingly enough (to some), since a web server's primary job is responding to HTTP requests, which are usually requests for data, most web servers spend most of their time fetching, reading, writing, and sending things, all of which are I/O tasks. In the node.js design, all these I/O tasks happen asynchronously, in a non-blocking fashion, and they use events to signal when those operations complete. This is where the phrase "event-driven design" comes from when describing node.js. It so happens that this makes node.js very efficient at handling things that primarily involve I/O. This is what a simple implementation of node.js does best, and it generally does it better than a purely threaded server design that devotes an OS thread to every in-flight I/O operation (the original design of many server frameworks).
If you do have CPU-intensive things (major calculations, image processing, heavy crypto operations, etc...) and you do them very often or they take very long, then you will be best served by putting those tasks in a Worker Thread or in another process and communicating back and forth between the main node.js process and that worker to get the CPU-intensive work done. It used to be that node.js didn't have Worker Threads, which made this a little more complicated: you often had to use one or more additional processes (either via clustering or additional dedicated processes) to handle the CPU-intensive work. Now you can use Worker Threads, which can be a bit more convenient.
For example, I have a server task that requires a very heavy amount of crypto (performing a billion crypto operations). If I put that in the main node.js thread, it essentially blocks the event loop, so my server can't process other requests while that heavy-duty crypto operation is running, which would ruin the responsiveness of my server.
But I was able to move the crypto work to a worker thread (actually to several worker threads), and now I can crunch away on the crypto while my main thread stays nice and lively, handling other, unrelated incoming requests in a timely fashion.
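As a rough sketch of that pattern (not the actual server code; the file names and the hashing loop are made-up stand-ins for the real crypto work):

// main.js -- push the CPU-heavy hashing into a Worker so the
// event loop in the main thread stays responsive.
const { Worker } = require('worker_threads');

function runHashWorker(input) {
  return new Promise((resolve, reject) => {
    const worker = new Worker('./hash-worker.js', { workerData: input });
    worker.on('message', resolve);
    worker.on('error', reject);
  });
}

runHashWorker('some payload').then((digest) => {
  console.log('digest from worker:', digest);
});

// hash-worker.js -- the heavy loop runs here, off the main thread.
const { parentPort, workerData } = require('worker_threads');
const crypto = require('crypto');

let digest = workerData;
for (let i = 0; i < 1e6; i++) {
  digest = crypto.createHash('sha256').update(digest).digest('hex');
}
parentPort.postMessage(digest);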
First of all, Big Data has nothing to do with Node.js.
I/O-intensive means that the given task often waits for I/O. The best examples of this are file operations and networking.
If the processor has to regularly wait for data to arrive, the task is said to be I/O intensive.
Node.js's asynchronous nature, however, makes it really good at I/O-intensive tasks, as it can keep doing other work while it waits for the data to arrive.
For example, if you have 10 clients connected to the server and one of them requests data or a task that is heavy to process, the server should not get stuck waiting until that task is finished, as that would mean longer response times for the other 9 clients and a bad user experience. Rather, the server should keep serving the other 9 clients' requests, and when the heavy task finishes, send its response back to the client that asked for it.
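A minimal sketch of that behaviour, with setTimeout standing in for a genuinely asynchronous heavy task (a database call, a queued job, and so on); note that synchronous CPU-heavy work would still block, as discussed above.

// The heavy task for one client is awaited asynchronously, so the
// other clients keep getting quick responses in the meantime.
const http = require('http');

function heavyTaskAsync() {
  return new Promise((resolve) => {
    setTimeout(() => resolve('heavy result\n'), 5000); // stand-in for real async work
  });
}

http.createServer(async (req, res) => {
  if (req.url === '/heavy') {
    res.end(await heavyTaskAsync());
  } else {
    res.end('quick response\n'); // served immediately, even while /heavy is pending
  }
}).listen(8000);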
P.S. You can read up on the event loop in Node.js.
What Node.js is great at is serving as the middle layer between clients and data sources, i.e. the inputs and outputs.
The reason Node.js is great at this is the non-blocking, event-driven approach it takes.
For example, when you make a request to a Node.js app that asks for some data from a database, Node.js will request that data and immediately return to other requests without being blocked by the database request.
Once the database sends the data back, Node.js triggers the callback (or resolves the promise) with that data and continues onwards.
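For example, a sketch of that flow; ./db and its query signature are hypothetical stand-ins for whatever async database driver you actually use.

const http = require('http');
const db = require('./db'); // hypothetical async database client

http.createServer((req, res) => {
  // The query is dispatched and Node immediately moves on to other
  // requests; nothing blocks while the database does its work.
  db.query('SELECT name FROM users', (err, rows) => {
    // Called later, once the database sends the data back.
    if (err) { res.statusCode = 500; return res.end('error'); }
    res.end(JSON.stringify(rows));
  });
}).listen(3000);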
There's no race condition between these input and output events because their synchronization is done in a single threaded mechanism called the Event Loop. Only one event gets processed at a time.
We can think of the Event Loop as a single-seat rollercoaster in an amusement park with many lines of people waiting to go on the ride, one by one. When you get to go depends on when you got in line, how important you are, or whether a friend saved you a spot, but nevertheless only one person at a time can ride.
This non-blocking, event-driven approach allows Node.js to react very efficiently to input and output events and to process many read/write operations, because it isn't really doing much processing itself; the CPU work is quite low. It's just serving as the middle layer between you and the data.
On the other hand, if these events lead to CPU-intensive operations, Node.js used to perform quite poorly, because the Event Loop can process only one event at a time.
To use the rollercoaster analogy from above, a CPU-intensive task would be as if one person is taking a really long ride while all others have to wait for them to be done.
Newer versions of Node.js did get tools that allow it to do more than one thing at a time (parallelism) by using workers. The trick here is that each worker has its own Event Loop, which lets an application move the intense work onto a different thread and run it in parallel with the rest of the application. Do note that this will only actually help if you run on a machine with more than one core. If your machine has one core, no matter what tool you use, you're gonna have a bad time, because nothing can truly run in parallel on a single-core machine.
In the case of I/O-intensive tasks, the majority of the time is spent waiting for network, filesystem, and perhaps database I/O to complete. Increasing hard disk speed or network throughput improves the overall performance.
In its most basic form, Node.js is best suited for this type of computing. All I/O in Node.js is non-blocking, which allows other requests to be served while waiting for a particular read or write to complete.
I have a simple nodejs webserver running. It:
Accepts requests
Spawns separate thread to perform background processing
Background thread returns results
App responds to client
Using Apache benchmark "ab -r -n 100 -c 10", performing 100 requests with 10 at a time.
Average response time of 5.6 seconds.
My logic for using nodejs is that it is typically quite resource-efficient, especially when the bulk of the work is being done by another process. It seems like the most lightweight webserver option for this scenario.
The Problem
With 10 concurrent requests my CPU was maxed out, which is no surprise since there is CPU-intensive work going on in the background.
Scaling horizontally is an easy thing to do, although I want to make the most out of each server, for obvious reasons.
So, with nodejs, either raw or with some framework, how can one keep the CPU under control so it doesn't get overloaded?
Potential Approach?
Could I accept the request, store it in a db or some other persistent storage, and have a separate process that uses an async library to work through x jobs at a time?
In your potential approach, you're basically describing a queue. You can store incoming messages (jobs) there and have each process take one job at a time, only picking up the next one when it has finished processing the previous job. You could spawn a number of processes working in parallel, for example one per core in your system. Spawning more won't help performance, because multiple processes sharing a core will just run slower. Keeping one core free may be preferable, to keep the system responsive for administrative tasks.
Many different queues exist. A node-based one using redis for persistence that seems to be well supported is Kue (I have no personal experience using it). I found a tutorial for building an implementation with Kue here. Depending on the software your environment is running in though, another choice might make more sense.
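To sketch the shape of it (the job type, payload, and doHeavyWork are placeholders; check Kue's documentation for the exact API):

const kue = require('kue');
const queue = kue.createQueue(); // connects to a local redis by default

// Web process: enqueue the job and respond to the client immediately.
queue.create('heavy-job', { input: 'some payload' }).save();

// Worker process: handle at most 4 jobs at a time (e.g. one per core).
queue.process('heavy-job', 4, (job, done) => {
  doHeavyWork(job.data.input); // placeholder for the CPU-intensive work
  done();
});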
Good luck and have fun!
We host about 150 websites (possibly scaling to 300+) that we are considering migrating to node.js. Most of the sites are fairly low-traffic, with under 1 million pageviews per month.
Should each website be its own node.js process, or should we serve all websites from the same node.js process (or a small set of load-balanced processes)? Is there a technical limit, or a reasonable limit, to the number of node processes per server?
Process per site: feels inefficient, but I don't know whether it actually is. It would ensure that one buggy site doesn't affect other sites.
Process per core/small set of processes: likely higher performance, but what happens when I need to update a site's codebase? Won't it take down other sites? Also, code failures in one site would affect other sites.
Ideally, I would prefer one process per site so that we could host all sites from each worker server. That way when load increases we can just spin up another identical worker server and load balance between the two without having to arbitrarily say SiteA goes to ServerA and SiteB goes to ServerB. Any node.js gurus available to offer some wisdom?
All static file requests will be handled likely by Nginx or something like Varnish.
There are a lot of issues at play here. The big-picture answer is: it depends... as it always does when you bring in the whole "performance" discussion. That being said, the simplest way to get a solid Node setup is to note the following basic facts about NodeJS; I will also comment on their implications as they pertain to your questions.
The concurrency you get with Node works really well in certain situations, namely IO-heavy operations. What we're really talking about here is minimizing the amount of time spent waiting for the next request. Because of this, Node works really well in an environment where there is one process per core on a machine. Node does really well at maximizing the amount of CPU available to serve requests under heavy load. That being said, if you have literally ZERO other work going on in your event loop, you can see minor performance increases (in terms of max requests/second/processor core) by having multiple node processes per core. But I've never seen any benefit from increasing this number past 3, even in circumstances where the entire event loop was literally just a file server.
On the process-per-site comment: this is a bad idea for many reasons. For one, a well-put-together node server can process thousands of requests per second. Our (company name omitted) servers, hosted through Amazon EC2 on medium clusters (lots of RAM, mid CPU clock, 4 cores), typically fail around 3000 requests per second per cluster. Our servers do a fair bit of CPU work; for simple file servers I'm sure you could do much better. Strictly speaking, sure, per site you would be able to serve more requests by launching each site in its own process/core/escalating quickly here! But it's not necessary from the standpoint of cost and of overcomplicating your architecture. What I WOULD recommend is investing in a setup with a lot of RAM. Your server's ability to cache often-requested files will affect your performance infinitely more than launching an abundance of processes on a given machine.
On the whole RAM thing: the number of processes you want to launch per core depends on two things. One is how much synchronous work is done in your event loop. The more synchronous work, the more time between a given request coming in and the event loop being ready to address the next one. If you have a busy event loop, you will be in a situation where you require more processes per CPU core. The other thing that can affect this, particularly relevant for file servers, is the amount of RAM. Node runs much better in a high-RAM environment, but you can say this about ANY file server, really... What this has to do with is the number of active asynchronous operations. One downside of the way node works is that under heavy loads you can get a large number of event handlers active at once. This is great for concurrency/simplicity; however, if your server is busy waiting around for a lot of async disk/IO to happen, it will slow down and crash much sooner than if you had plenty of RAM. If you don't have enough RAM to handle all of these event handlers, you will want to keep to the 1 process/core arrangement. Otherwise, Node will spin up many event handlers simultaneously and, again, cause you to crash sooner than you would otherwise.
I don't really have enough information to tell you what you SHOULD do. That depends entirely too much on the architecture of your specific server, your sites, their size, the amount of data... etc. But these three pieces of knowledge are the basic things that help you get the most out of your Node server. To be honest, your idea about load balancing, combined with the considerations above, should serve you nicely. Surely micro-optimizations are possible, but if you do these things, you should easily see requests/second in the thousands before you start experiencing crashes because of DDOS-type conditions.
No, don't do it. Keep it simple! And check out http://12factor.net/.
A few hundred processes is nothing compared to the simplicity you would otherwise lose. It would be a terrible decision, on so many levels, to have more than one site (or "logical application unit") served by a single Node process.
If you're asking this question, you may want to explore Node more before you "migrate" to Node. Error handling and separation of concerns are more complicated in Node than in other situations. Specifically, neither the domain nor cluster APIs are mature. But really it's the philosophy of clean and simple application deployment that you'd be violating. I could go on and on.
The following two diagrams show my understanding of how threads work in an event-driven web server (like Node.js + JavaScript) compared to a non-event-driven web server (like IIS + C#).
From the diagrams it is easy to tell that on a traditional web server the number of threads used to perform 3 long-running operations is larger than on an event-driven web server (3 vs. 1).
I think I got the "traditional web server" counts correct (3) but I wonder about the event-driven one (1). Here are my questions:
Is it correct to assume that only one thread was used in the event-driven scenario? That can't be right; something must have been created to handle the I/O tasks. Right?
How did the evented server handle the I/O? Let's say the I/O was a read from a database. I suspect the web server had to create a thread to hand off the job of connecting to the database. Right?
If the event-driven web server did create threads to handle the I/O, where is the gain?
A possible explanation for my confusion could be that in both scenarios, traditional and event-driven, three separate threads were indeed created to handle the I/O (not shown in the pictures), but the difference is really in the number of threads in the web server per se, not in the I/O threads. Is that accurate?
Node may use threads for IO. The JS code runs in a single thread, but all the IO requests run in parallel threads. If you want some JS code to run in parallel threads, use threads-a-gogo or some other package that enables that behaviour.
Same as 1., threads are created by Node for IO operations.
You don't have to handle threading unless you want to. It's easier to develop. At least that's my point of view.
A node application can be coded to run like another web server. Typically, JS code runs in a single thread, but there are ways to make it behave differently.
Personally, I recommend threads-a-gogo (the package name isn't that revealing, but it is easy to use) if you want to experiment with threads. It's faster.
Node also supports multiple processes; you may run a completely separate process if you want to try that out.
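A small sketch of that route using the built-in child_process module; worker.js here is a file you would write yourself.

// parent.js -- fork a completely separate Node process and exchange
// messages with it over the built-in IPC channel.
const { fork } = require('child_process');

const child = fork('./worker.js');
child.on('message', (result) => console.log('from child:', result));
child.send({ task: 'crunch numbers' });

// worker.js -- runs in its own process, with its own event loop.
process.on('message', (msg) => {
  // ...do the heavy work here, then report back to the parent...
  process.send({ done: true, task: msg.task });
});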
The best way to picture NodeJS is as a furious squirrel (i.e. your thread) running in a wheel with an infinite number of pigeons (your I/O) available to pass messages around.
I/O in node is "free". Your squirrel works to set up the connection and send the pigeon off, then can go on to do other things while the pigeon retrieves the data, only dealing with the data when the pigeon returns.
If you write bad code, you can end up having the squirrel waiting for each pigeon.
So always write non-blocking I/O code.
If you can, encourage your Pigeons to promise to come back ;)
Promises and generators are probably the best approach you can take to this.
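For instance, a promise-based sketch using the built-in fs.promises API (the file path is just an example):

const fs = require('fs').promises;

// The pigeon "promises to come back": the squirrel sends it off
// and keeps running until the promise resolves.
fs.readFile('/etc/hosts', 'utf8')
  .then((data) => console.log('got', data.length, 'characters'))
  .catch((err) => console.error('read failed:', err));

console.log('squirrel keeps running while the pigeon is away');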
HOWEVER, you can always use the Node cluster module to establish a master squirrel that will procreate child squirrels, based on the number of CPUs the master squirrel can find, to dole out the work.
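A minimal sketch of that, close in spirit to the example in Node's own cluster documentation (the port is arbitrary; newer Node versions spell isMaster as isPrimary):

const cluster = require('cluster');
const http = require('http');
const os = require('os');

if (cluster.isMaster) {
  // Master squirrel: one child squirrel per CPU core.
  os.cpus().forEach(() => cluster.fork());
} else {
  // Child squirrels all share the same port.
  http.createServer((req, res) => {
    res.end('handled by pid ' + process.pid + '\n');
  }).listen(8000);
}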
Hope this helps and note the complete lack of a car analogy.
I have a site which sometimes takes particularly long to process a request (and that's not a defect). 99% of the time it's pretty quick because it almost doesn't do any processing.
I want to show a "Loading" message when the site takes long to process a request. My site uses mod_wsgi and Apache. The way I see it, I would respond with "Loading" before completing the processing, and right before that do one of two things:
- spawn a (daemon) thread to take care of the processing, or
- communicate through a socket with another process and tell it to take care of the processing (most likely by sending a request to http://localhost:8080/do_processing).
What are the pros and cons of one approach vs the other?
Using a separate process is better. It does not have to be hard at all, as suggested in another answer, because you can use an existing system built for exactly that, such as Celery (http://celeryproject.org/). Relying on in-process threads is not necessarily a good idea unless you are going to implement an internal job-queueing system of your own to avoid blowing out the number of threads. Also, in a multiprocess server configuration you can't be guaranteed that a request comes back to the same process, so it is not easy to get the status of a running operation. Finally, the web server processes could get killed off, and thus your background task could be killed before it finishes; you would need a mechanism for holding state that can survive such an event, if that is important. It is far easier to use something like Celery.
The process route requires quite a bit of system overhead. Creating a separate process is relatively expensive and slow. However, if your worker process crashes, it doesn't affect your main governing process (you will receive the exit status code and will have an opportunity to respawn a new worker process). You will also need some sort of inter-process communication layer (a socket, pipe, shared memory, etc.), which adds to the complexity of your project.
Threads are lightweight and cheap. All you need to do is manage concurrent access to shared resources. So it really depends on the task you have in mind. Threads will probably be the more appropriate way to implement your task.