I am building a CPU-intensive web app, where I'll write the CPU-intensive parts in C++ and the web server in Node.js, with Node.js connected to the C++ code via addons. I am confused about one thing:
Say the CPU-intensive operation takes 5 seconds per request (maybe it involves inverting a huge matrix). When this request comes through, the Node.js binding would hand it over to the C++ code.
Now, does this mean that Node.js would not be tied up for the next 5 seconds and can continue serving other requests?
I am confused because I have heard that even though Node offers asynchronous features, it is still single-threaded.
Obviously I would not want Node.js to be stuck for 5 seconds, as that is a huge price to pay. Imagine hundreds of simultaneous requests for this intensive operation.
Trying to understand JS callbacks and the logic of asynchronicity, I came across many different versions of the following description:
a callback function, passed to another function as a parameter, runs after the time-taking process inside the function it is passed to.
The dilemma originates with the adjective "time-taking". Is it
time-taking because the CPU is idle, waiting for a response?
time-taking because the CPU is busy with number crunching like hell?
The description does not make this clear, and it confused me. So I tried the following two pieces of code.
getData('http://fakedomain1234.com/userlist', writeData);
document.getElementById('output').innerHTML += "show this before data ...";

function getData(dataURI, callback) {
    // Normally you would actually connect to a server here.
    // We're just going to simulate a 3-second delay.
    var timer = setTimeout(function () {
        var dataArray = [123, 456, 789, 12, 345, 678]; // 012 would be an octal literal (= 10), so it's written as 12 here
        callback(dataArray);
    }, 3000);
}

function writeData(myData) {
    document.getElementById('output').innerHTML += myData;
}

<body>
    <p id="output"></p>
</body>
and
getData('http://fakedomain1234.com/userlist', writeData);
document.getElementById('output').innerHTML += "show this before data ...";

function getData(dataURI, callback) {
    var dataArray = [123, 456, 789, 12, 345, 678];
    for (var i = 0; i < 1000000000; i++); // busy-wait: keeps the CPU occupied
    callback(dataArray);
}

function writeData(myData) {
    document.getElementById('output').innerHTML += myData;
}

<body>
    <p id="output"></p>
</body>
So in both snippets there is a time-taking activity inside the getData function. In the first, the CPU is idle during it; in the second, the CPU is busy. Clearly, when the CPU is busy, the JS runtime is not asynchronous: in the second snippet, "show this before data ..." is only appended after the loop and the callback have already finished.
The main thread of Node is the JS event loop, so all logic that interacts with JS is single-threaded. This also includes any C++ logic triggered directly from JS.
Generally, any long-running tasks should be split off into worker processes. For instance, in your case, you could have a worker process that queues up calculations and emits events back to the JS thread as they complete.
So really, it's a question of how you structure the "connected to C++ via addons" part of your code; a sketch of the idea follows.
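To illustrate, here is a minimal sketch of that worker-process pattern using Node's built-in child_process module. The file name worker.js, the message shape, and the invert() function are invented for this example:

// main.js -- the event loop stays free; heavy work happens in a child process
var cp = require('child_process');
var worker = cp.fork(__dirname + '/worker.js');

var pending = {}; // callbacks waiting for a result, keyed by job id
var nextId = 0;

worker.on('message', function (msg) {
    // a calculation finished: fire the callback registered for it
    pending[msg.id](msg.result);
    delete pending[msg.id];
});

function invertMatrix(matrix, callback) {
    var id = nextId++;
    pending[id] = callback;
    worker.send({ id: id, matrix: matrix }); // returns immediately
}

// worker.js -- its own process, so blocking for 5 seconds here is harmless
process.on('message', function (msg) {
    var result = invert(msg.matrix); // placeholder for the CPU-intensive C++-backed computation
    process.send({ id: msg.id, result: result });
});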
I'm not going to refer to the specifics of Node.js, as I'm not that familiar with its internal architecture and the possibilities it allows (though I understand it supports multiple worker threads, each representing a different event loop).
In general, if you need to process 100 requests/s that each take 5 seconds of solid CPU time, then there's nothing you can do except ensure you have 500 processors available (100 requests/s × 5 s of CPU per request = 500 concurrently busy cores).
If 100 requests/s is the peak, while the average is much lower, then the solution is queueing, and you use the queue to absorb the blow.
Things start to get interesting when it is not 5 seconds of solid CPU time but, say, 0.1 s of CPU time and 4.9 s of waiting, or anything in between. This is the case where asynchronous processing should be used to put all that waiting time to work.
Asynchronous in this case means that:
All your execution happens in an event loop.
You don't wait, no sleep, no blocking I/O, just execute or return to the event loop.
You split your task into non-blocking subtasks, interspersed with (async) events (e.g. a response arriving) that continue the execution.
You split your system into a number of event processing services, exchanging requests and responses through asynchronous events and collaborating to provide the overall functionality.
What to do if you have a subsystem you cannot turn into an asynchronous service under the principles above?
The answer is to wrap it with queues (to absorb the requests) plus multiple threads (allowing some threads to execute while other threads are waiting), providing the async request/response event interface expected by the rest of the subsystems.
In all cases it is best to keep a bounded number of threads (instead of a per-request thread model) and always keep the total number of active/hot threads in the system below the number of processing resources.
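In Node.js terms, that queue-plus-bounded-threads wrapper might look roughly like the sketch below. It uses the built-in worker_threads module (a later addition to Node, used here only to express the idea); compute.js and the task/message shapes are assumptions:

// pool.js -- wraps a blocking subsystem behind a queue and a fixed set of threads
var { Worker } = require('worker_threads');

var POOL_SIZE = 4;  // keep this at or below the number of cores
var queue = [];     // absorbs request bursts
var idle = [];      // workers waiting for a task

for (var i = 0; i < POOL_SIZE; i++) {
    var w = new Worker(__dirname + '/compute.js'); // compute.js runs the blocking code
    w.on('message', function (result) {
        this.job.callback(null, result); // async "response" event
        idle.push(this);
        drain();
    }.bind(w));
    idle.push(w);
}

function submit(task, callback) { // async "request" event
    queue.push({ task: task, callback: callback });
    drain();
}

function drain() {
    while (idle.length > 0 && queue.length > 0) {
        var w = idle.pop();
        w.job = queue.shift();
        w.postMessage(w.job.task);
    }
}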
Node.js is nice in that its input/output is inherently asynchronous, and all the infrastructure is geared towards implementing the kind of things described above.
Related
I have this code that periodically calls the load function, which does very heavy work taking 10 seconds. The problem is that while the load function is being executed, it blocks the main flow. If I send a simple GET request (like a health check) while load is executing, the GET call is blocked until the load call is finished.
function setLoadInterval() {
    var self = this;
    this.interval = setInterval(function doHeavyWork() {
        // this takes 10 sec
        self.load();
        self.emit('reloaded');
    }, 20000);
}
I tried async.parallel but the GET call was still blocked. I tried setTimeout but got the same result. How do I make load run in the background so that it doesn't block the main flow?
this.interval = setInterval(function doHeavyWork() {
    async.parallel([function(cb) {
        self.load();
        cb(null);
    }], function(err) {
        if (err) {
            // log error
        }
        self.emit('reloaded');
    });
}, 20000);
Node.js is an event-driven, non-blocking I/O model.
Anything that is I/O is offloaded to a separate thread by the underlying engine, and hence parallelism is achieved.
If the task is CPU-intensive, there is no way to achieve parallelism by default, because JavaScript execution is single-threaded and synchronous code blocks the event loop.
However, there are ways around this by offloading the CPU-intensive task to a different process.
Option 1:
exec or spawn a child process and execute the load() function in that spawned Node app. This is okay if the interval fires every 20000 ms, since by the time the next one fires, the 10-second process will have completed.
Otherwise it is dangerous, as it can spawn too many Node applications, eating up your system's resources.
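A rough sketch of Option 1 with a guard against overlapping spawns (load-worker.js is an assumed script that performs the heavy load() work and exits; self comes from the surrounding code in the question):

var cp = require('child_process');
var running = false; // guard: never run two load processes at once

this.interval = setInterval(function doHeavyWork() {
    if (running) return; // previous load still in progress, skip this tick
    running = true;
    var child = cp.fork(__dirname + '/load-worker.js'); // does the 10-sec work and exits
    child.on('exit', function () {
        running = false;
        self.emit('reloaded');
    });
}, 20000);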
Option 2:
I don't know how much data self.load() accepts and returns. If it is trivial and the network overhead is acceptable, make that task a load-balanced web service (say, 4 web servers running in parallel) which accepts (or rather, points to) the 1M records and returns the filtered records.
NOTE
It looks like you are using the async library's parallel function. But take note of this description from its documentation:
Note: parallel is about kicking-off I/O tasks in parallel, not about parallel execution of code. If your tasks do not use any timers or perform any I/O, they will actually be executed in series. Any synchronous setup sections for each task will happen one after the other. JavaScript remains single-threaded.
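A quick way to see this for yourself (a toy example; busyWait is just a synchronous CPU spin invented for the demo):

var async = require('async');

function busyWait(ms) {
    var end = Date.now() + ms;
    while (Date.now() < end) {} // synchronous CPU spin
}

async.parallel([
    function (cb) { busyWait(1000); console.log('task 1 done'); cb(null); },
    function (cb) { busyWait(1000); console.log('task 2 done'); cb(null); }
], function () {
    console.log('both done'); // prints after ~2 seconds, not ~1: the tasks ran in series
});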
Many people say that modern REST APIs should be "async", and as the main argument they say that on some platforms, for example Java, the "blocking" way of doing things produces many threads, while the "async" way allows you to limit the thread count and its overhead.
What I don't understand is how this is achieved.
Consider I have an app in a framework like Vert.x (but it actually doesn't matter; you can think of NodeJS as well), and say 1,000,000 concurrent connections for a service which makes some request to a database. The framework allows each request to be processed asynchronously across the long I/O operations, so the database exchange looks syntactically asynchronous in the business-logic code. BUT. As I understand it, the DB request is not made in a vacuum: it is processed on some other thread, and that thread actually blocks until the DB request is finished. So even though the request's business logic looks async and non-blocking, the long operations it calls are actually blocking somewhere under the hood of the framework, and the more such operations run, the more threads are consumed anyway (for NodeJS, think of the threads created in the C++ code of the framework itself).
So as I see the big picture: in the async approach there is only one thread processing all the requests, fine, but there is a bunch of threads doing the actual I/O work in the background anyway, and if you don't limit their count, the number of threads will be the same as in the blocking approach, plus one. On the other hand, if you limit the background thread pool programmatically, what benefit is left compared to a blocking approach that combines a queue for user requests with a limit on the number of request-processing threads?
Since you're asking a fairly low level question I'll answer with a low level answer. Hope you're comfortable with C.
First, a disclaimer: I'll be talking mostly about networking code, because the only widely used database I know of that uses file I/O is SQLite. Since you're asking about Postgres, I can assume you're interested in how socket I/O (be it a TCP socket or a Unix local socket) can work with only one thread.
At the core of almost all async systems and libraries is a piece of code that looks like this:
while (1)
{
    read_fd_set = active_fd_set;

    /* This blocks until we receive a packet or until the timeout expires: */
    select(FD_SETSIZE, &read_fd_set, NULL, NULL, timeout);

    /* Process timed events: */
    timeout = process_timeout();

    /* Process I/O: */
    for (i = 0; i < FD_SETSIZE; ++i) {
        if (FD_ISSET(i, &read_fd_set)) {
            if (i == sock) {
                /* Connection arriving on the listening socket */
                int new;
                size = sizeof(clientname);
                new = accept(sock, (struct sockaddr *) &clientname, &size);
                FD_SET(new, &active_fd_set);
            }
            else {
                /* Data arriving on an already-connected socket. */
                if (read_from_client(i) < 0) {
                    close(i);
                    FD_CLR(i, &active_fd_set);
                }
            }
        }
    }
}
(code example paraphrased from a GNU socket programming example)
As you can see, the code above uses no threading whatsoever. Yet it can handle many connections simultaneously. If you take a look at the for loop it is also obvious that it is basically a simple state machine that processes sockets one at a time if they have any packets waiting to be read (if not it is skipped by the if (FD_ISSET...) statement).
Non-I/O events can logically only come from timed events. And that's where the timeout management (details not shown for clarity) comes in. All I/O related stuff (basically almost all your async code) gets called back from the read_from_client() function (again, details omitted for clarity).
There is zero code running in parallel.
Where does the parallelization come from?
Basically the server you're connecting to. Most databases support some form of parallelism. Some support multithreading. Some even support node.js- or vert.x-style parallelism by supporting asynchronous disk I/O (like Postgres). Some database configurations allow a higher level of parallelism by storing data on more than one server via partitioning and/or sharding and/or master/slave servers.
That's where the big parallelism comes from -- parallel computing. Most databases have very strong support for read parallelism but weaker support for write parallelism (master/slave setups for example allow you to write only to the master database). But this is still a big win because most apps read more data than they write.
Where does disk parallelism come from?
The hardware. Mostly this has to do with DMA, which can transfer data without the CPU. DMA is not one thing; it is more like a concept. Different systems like the PCI bus, SATA, USB, even the CPU's RAM bus itself, have various kinds of DMA to transfer data directly to RAM (and in the case of RAM, to transfer data higher up into the various levels of CPU cache) or to a faster buffer.
While waiting for the DMA to complete, the CPU is not doing anything. And while it is doing nothing, if a network packet happens to come in or a setTimeout() expires, the code that handles them can be executed on the CPU. All while a file is being read into RAM.
But Node.js docs keep mentioning I/O threads
Only for disk I/O. It's not impossible to do async disk I/O with a single thread. Tcl has done that for years, and many other programming languages and frameworks have too. It's just very, very messy, since BSD does it differently from Linux, which does it differently from Windows, and even OSX may be subtly different from BSD even though it is derived from it, etc.
For the sake of simplicity and solid reliability node developers have opted to process disk I/O in separate threads.
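You can see this pool at work: by default libuv uses 4 threads (configurable via the UV_THREADPOOL_SIZE environment variable). A toy demonstration using crypto.pbkdf2, which also runs on that pool:

var crypto = require('crypto');

var start = Date.now();
for (var i = 0; i < 8; i++) {
    // pbkdf2 is CPU-heavy and runs on the libuv threadpool
    crypto.pbkdf2('secret', 'salt', 100000, 64, 'sha512', function () {
        console.log('done after ' + (Date.now() - start) + ' ms');
    });
}
// With the default pool of 4 threads the timings arrive in two
// batches of four; with UV_THREADPOOL_SIZE=8 they arrive together.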
Note that even for socket I/O it is not as simple as the code example I gave above. Since select() has some limitations (for example, you're forced to loop over ALL sockets to check for incoming data, even though most won't have any), people have come up with better APIs. And obviously different OSes do it differently. That is why there are a lot of libraries created to handle cross-platform event processing, like libevent and libuv (the one node.js uses).
OK. But postgres still runs on my PC
Asynchronous, event-oriented systems do not automagically give you performance superpowers. What they DO give you is choice: the app server is blazing fast, so where you put your database servers and what database you use is up to you.
OK. But I can do this with threads. Why async?
Benchmarks.
Since 1999, many people have run many benchmarks, and in the majority of cases single-threaded (or low-thread-count) event-oriented systems have outperformed simple multithreaded systems. This was especially true in the old days of single-CPU, single-core servers. It is still partly true now (since cores are still limited).
That is why Apache was re-written into Apache2 to use a thread pool of async listeners and why Nginx was written from scratch to use a thread pool of async code.
Yes, on modern servers you'd ideally still want some threads in order to use all your CPUs. The alternative is a process pool, like how the cluster module works in node.js. But you'd want the number of threads/processes to be constant, or as constant as possible, to avoid the overhead of context switching and thread creation.
This is true for some async frameworks where the JDBC client is still synchronous.
When querying the DB in Vert.x you reuse the same application threads.
Please see the following example:
@Test
public void testMultipleThreads() throws InterruptedException {
    Vertx vertx = Vertx.vertx();
    System.out.println("Before starting server: " + Thread.activeCount());

    // Start server
    vertx.createHttpServer().
        requestHandler(httpServerRequest -> {
            // System.out.println("Request");
            httpServerRequest.response().end();
        }).
        listen(8080, o -> {
            System.out.println("Server ready");
        });

    // Start counting threads
    vertx.setPeriodic(500, (o) -> {
        System.out.println(Thread.activeCount());
    });

    // Create requests
    HttpClient client = vertx.createHttpClient();
    int loops = 1_000_000;
    CountDownLatch latch = new CountDownLatch(loops);
    for (int i = 0; i < loops; i++) {
        client.getNow(8080, "localhost", "/", httpClientResponse -> {
            // System.out.println("Response received");
            latch.countDown();
        });
    }
    latch.await();
}
You'll notice that the number of threads doesn't change, even though you serve as many connections as you would like. You can also add Vert.x JDBC client to test it.
I am implementing a REST service for financial calculations, so each request will be a CPU-intensive task, and I think the best place to create threads is in the following function:
exports.execute = function(data, params, f, callback) {
    var queriesList = [];
    var resultList = [];

    for (var i = 0; i < data.lista.length; i++) {
        var query = (function(cod) {
            return function(callbackFlow) {
                params.paramcodneg = cod;

                doCdaQuery(params, function(err, result) {
                    if (err) {
                        return callback({ERROR: err}, null);
                    }

                    f(data, result, function(ret) {
                        resultList.push(ret);
                        callbackFlow();
                    });
                });
            };
        })(data.lista[i]);

        queriesList.push(query);
    }

    flow.parallel(queriesList, function() {
        callback(null, resultList);
    });
};
I don't know what is best: run flow.parallel in a separate thread, or run each function in queriesList in its own thread. Which is better? And how do I use the threads-a-gogo module for that?
I've tried but couldn't write the right code for that.
Thanks in advance.
Kleyson Rios.
I'll admit that I'm relatively new to node.js and I haven't yet used threads a gogo, but I have had some experience with multi-threaded programming, so I'll take a crack at answering this question.
Creating a thread for every single query (I'm assuming these queries are CPU-bound calculations rather than I/O-bound calls to a database) is not a good idea. Creating and destroying threads is an expensive operation, so creating and destroying a group of threads for every request that requires calculation is going to be a huge drag on performance. Too many threads also cause more overhead as the processor switches between them. There isn't any advantage to having more worker threads than processor cores.
Also, if each query doesn't take that much processing time, there will be more time spent creating and destroying the thread than running the query. Most of the time would be spent on threading overhead. In this case, you would be much better off using a single-threaded solution using flow or async, which distributes the processing over multiple ticks to allow the node.js event loop to run.
Single-threaded solutions are the easiest to understand and debug, but if the queries are preventing the main thread from getting other stuff done, then a multi-threaded solution is necessary.
The multi-threaded solution you propose is pretty good. Running all the queries in a separate thread prevents the main thread from bogging down. However, there isn't any point in using flow or async in this case. These modules simulate multi-threading by distributing the processing over multiple node.js ticks, and tasks run "in parallel" this way don't execute in any particular order, but they still run in a single thread. Since you're processing the queries in their own thread, where they no longer interfere with the node.js event loop, just run them one after another in a loop. Since all the action happens in a thread without a node.js event loop, using flow or async there just introduces more overhead for no additional benefit.
A more efficient solution is to have a thread pool hanging out in the background and throw tasks at it. The thread pool would ideally have the same number of threads as processor cores, and would be created when the application starts up and destroyed when the application shuts down, so the expensive creating and destroying of threads only happens once. I see that Threads a Gogo has a thread pool that you can use, although I'm afraid I'm not yet familiar enough with it to give you all the details of using it.
I'm drifting into territory I'm not familiar with here, but I believe you could do it by pushing each query individually onto the global thread pool and when all the callbacks have completed, you'll be done.
The Node.flow module would be handy here, not because it would make processing any faster, but because it would help you manage all the query tasks and their callbacks. You would use a loop to push a bunch of parallel tasks on the flow stack using flow.parallel(...), where each task would send a query to the global threadpool using threadpool.any.eval(), and then call ready() in the threadpool callback to tell flow that the task is complete. After the parallel tasks have been queued up, use flow.join() to run all the tasks. That should run the queries on the thread pool, with the thread pool running as many tasks as it can at once, using all the cores and avoiding creating or destroying threads, and all the queries will have been processed.
Other requests would also be tossing their tasks onto the thread pool as well, but you wouldn't notice that because the request being processed would only get callbacks for the tasks that the request gave to the thread pool. Note that this would all be done on the main thread. The thread pool would do all the non-main-thread processing.
You'll need to do some threads a gogo and node.flow documentation reading and figure out some of the details, but that should give you a head start. Using a separate thread is more complex than using the main thread, and making use of a thread pool is even more complex, so you'll have to choose which one is best for you. The extra complexity might or might not be worth it.
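To make the flow.parallel + threadpool.any.eval() pattern described above concrete, here is a rough sketch. It assumes the interfaces behave as described (threads_a_gogo's createPool and .any.eval, node.flow's parallel, ready, and join), and processQuery is a placeholder, so treat this as a shape to adapt after reading the docs rather than working code:

var tagg = require('threads_a_gogo'); // assumed package name
var Flow = require('node.flow');      // assumed package name

// Created once at startup and reused by every request:
// as many threads as processor cores.
var threadPool = tagg.createPool(4);

exports.execute = function(data, params, f, callback) {
    var flow = new Flow();
    var resultList = [];

    data.lista.forEach(function(cod) {
        flow.parallel(function(args, ready) {
            // Hand the CPU-bound work to whichever pool thread is free.
            threadPool.any.eval('processQuery(' + JSON.stringify(cod) + ')',
                function(err, ret) {
                    resultList.push(ret);
                    ready(); // tell flow this parallel task has completed
                });
        });
    });

    // Run all queued tasks; the pool keeps every core busy without
    // creating or destroying threads per request.
    flow.join(function() {
        callback(null, resultList);
    });
};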
Using ExpressJS and Socket.IO, I have an HTML scene where multiple users can connect to NodeJS. I am about to do some animation that has to stay in sync for all clients.
On the client, I know animation can be achieved with setInterval() (not ideal timing-wise) and then socket.emit() to NodeJS. But is there an idle loop in NodeJS that can be used for master-controlling animations and io.sockets.emit() to update everyone about everyone?
EDIT: I want to do general "animation" of values in node.js e.g. pseudocode:
process.idle(function() {
    // ...
    itempos.x += (itempos.dest - itempos.x) / 20; // easing
    itempos.y += (itempos.dest - itempos.y) / 20; // easing
    io.sockets.broadcast('update', itempos);
    // ...
});
Being a server-side framework, Node will rarely idle (on CPU or I/O). Besides, an idle loop is more suited to DOM requirements. But in node.js you have the following functions:
process.nextTick: executes the callback after the current event queue finishes, i.e. at the beginning of the next event loop iteration. It does not allow I/O to happen until maxTickDepth nextTick calls have been executed; used too much, it can starve I/O.
setImmediate: executes the callback after the I/O callbacks in the current event loop iteration are finished, and allows I/O to happen between multiple setImmediate calls.
Given what you want, setImmediate is more suited to your needs.
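A minimal sketch of that approach, reusing the io and itempos objects from the question (itempos.dest is assumed here to have x and y fields):

function animate() {
    // ease each coordinate toward its destination
    itempos.x += (itempos.dest.x - itempos.x) / 20;
    itempos.y += (itempos.dest.y - itempos.y) / 20;
    io.sockets.emit('update', itempos);

    // unlike process.nextTick, setImmediate lets pending I/O
    // (socket events, timers) run between iterations
    setImmediate(animate);
}
animate();

Note that a bare setImmediate loop runs as fast as the CPU allows, so in practice you would throttle the emits; the setInterval approach in the next answer does that naturally.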
Check out the Timers docs: http://nodejs.org/api/timers.html
All of the timer functions are globals.
setInterval(callback, delay, [arg], [...])
To schedule the repeated execution of callback every delay milliseconds. Returns an intervalId for possible use with clearInterval(). Optionally you can also pass arguments to the callback.
For synchronized client animation it may make sense to do sequences in chunks at a slower rate, rather than trying to squeeze as many websocket emissions as possible into the animation duration. Human eyes are much slower than websockets, in my experience.
There's tons of client frameworks that will do the easing for you, not a server concern.
(All of this oblivious to your use case, of course!)
With Node.js, or eventlet, or any other non-blocking server, what happens when a given request takes a long time? Does it then block all other requests?
Example: a request comes in and takes 200 ms to compute. This will block other requests, since e.g. nodejs uses a single thread.
Meaning your 15K requests per second will go down substantially because of the actual time it takes to compute the response for a given request.
But this just seems wrong to me, so I'm asking what really happens, as I can't imagine that is how things work.
Whether or not it "blocks" is dependent on your definition of "block". Typically block means that your CPU is essentially idle, but the current thread isn't able to do anything with it because it is waiting for I/O or the like. That sort of thing doesn't tend to happen in node.js unless you use the non-recommended synchronous I/O functions. Instead, functions return quickly, and when the I/O task they started complete, your callback gets called and you take it from there. In the interim, other requests can be processed.
If you are doing something computation-heavy in node, nothing else is going to be able to use the CPU until it is done, but for a very different reason: the CPU is actually busy. Typically this is not what people mean when they say "blocking", instead, it's just a long computation.
200ms is a long time for something to take if it doesn't involve I/O and is purely doing computation. That's probably not the sort of thing you should be doing in node, to be honest. A solution more in the spirit of node would be to have that sort of number crunching happen in another (non-javascript) program that is called by node, and that calls your callback when complete. Assuming you have a multi-core machine (or the other program is running on a different machine), node can continue to respond to requests while the other program crunches away.
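A bare-bones sketch of that hand-off using child_process.execFile (./crunch is a stand-in for your compiled number-crunching program):

var execFile = require('child_process').execFile;

function compute(input, callback) {
    // Run the external program; node's event loop stays free while it works.
    execFile('./crunch', [String(input)], function (err, stdout) {
        if (err) return callback(err);
        callback(null, parseFloat(stdout)); // hand the result back
    });
}

compute(42, function (err, result) {
    if (err) throw err;
    console.log('crunched:', result);
});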
There are cases where a cluster (as others have mentioned) might help, but I doubt yours is really one of those. Clusters really are made for when you have lots and lots of little requests that together are more than a single core of the CPU can handle, not for the case where you have single requests that take hundreds of milliseconds each.
Everything in node.js runs in parallel internally. However, your own code runs strictly serially. If you sleep for a second in node.js, the server sleeps for a second. It's not suitable for requests that require a lot of computation. I/O is parallel, and your code does I/O through callbacks (so your code is not running while waiting for the I/O).
On most modern platforms, node.js does use threads for I/O. It uses libev, which uses threads where that works best on the platform.
You are exactly correct. Nodejs developers must be aware of that, or their applications will be completely non-performant if long-running code is not asynchronous.
Everything that is going to take a 'long time' needs to be done asynchronously.
This is basically true, at least if you don't use the new cluster feature that balances incoming connections between multiple, automatically spawned workers. However, if you do use it, most other requests will still complete quickly.
Edit: Workers are processes.
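For reference, a minimal cluster setup looks roughly like this (one worker per core; incoming connections are balanced across the workers, so one slow request only ties up one worker):

var cluster = require('cluster');
var http = require('http');
var numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
    // Master: fork one worker process per core.
    for (var i = 0; i < numCPUs; i++) {
        cluster.fork();
    }
} else {
    // Worker: each runs its own event loop and shares the listening socket.
    http.createServer(function (req, res) {
        res.end('handled by worker ' + process.pid);
    }).listen(8000);
}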
You can think of the event loop as 10 people waiting in line to pay their bills. If somebody takes too much time to pay (thus blocking the event loop), the other people just have to hang around waiting for their turn to come... and waiting...
In other words:
Since the event loop is running on a single thread, it is very important that we do not block its execution by doing heavy computations in callback functions or synchronous I/O. Going over a large collection of values/objects or performing time-consuming computations in a callback function prevents the event loop from further processing other events in the queue.
Here is some code to actually see the blocking / non-blocking in action:
With this example (long CPU-computing task, non I/O):
var http = require('http');

var handler = function(req, res) {
    console.log('hello');
    // ~10 billion iterations of pure CPU work: the event loop is stuck here
    for (var i = 0; i < 10000000000; i++) { var a = i + 5; }
    res.end('done');
};

http.createServer(handler).listen(80);
if you do 2 requests in the browser, only a single hello will be displayed in the server console, meaning that the second request cannot be processed because the first one blocks the Node.js thread.
If we do an I/O task instead (writing 2 GB of data to disk; it took a few seconds during my test, even on an SSD):
var http = require('http');
var fs = require('fs');

var buffer = Buffer.alloc(2 * 1000 * 1000 * 1000);
var first = true;
var done = false;

var write = function() {
    fs.writeFile('big.bin', buffer, function() { done = true; });
};

var handler = function(req, res) {
    if (first) {
        first = false;
        res.end('Starting write..');
        write();
        return;
    }
    if (done) {
        res.end('write done.');
    } else {
        res.end('writing ongoing.');
    }
};

http.createServer(handler).listen(80);
Here we can see that the several-second-long I/O writing task (write) is non-blocking: if you do other requests in the meantime, you will see "writing ongoing." responses! This confirms the well-known non-blocking-for-I/O nature of Node.js.