I am currently developing a Node.js app with a REST API that exposes data from a MongoDB database.
The application needs to update some data every 5 minutes by calling an external service (it can take more than one minute to get the new data).
I decided to isolate this task into a child_process, but I am not sure what I should put in this child process:
Only the function to be executed. The schedule is managed by the main process.
Having an independent process that auto-refreshes the data every 5 minutes and sends a message to the main process every time a refresh is done.
I don't really know if there is a big cost to starting a new child process every 5 minutes, or if I should use a single long-running child process, or if I am overthinking the problem ^^
EDIT - Information about the update task
The update task can take more than one minute, but it consists of many smaller tasks (gathering information from many external providers) that run asynchronously, so maybe I don't even need a child process?
Thanks!
Node.js has an event-driven architecture capable of handling asynchronous calls, so it is unlike your typical C++ program where you would go with a multi-threaded/multi-process architecture.
For your use case you can make use of setInterval to repeatedly perform the operation, and compose the smaller async calls inside it with a promises library like Bluebird.
For more information see:
setInterval: https://developer.mozilla.org/en-US/docs/Web/API/WindowTimers/setInterval
setInterval()
Repeatedly calls a function or executes a code snippet, with a fixed
time delay between each call. Returns an intervalID.
Sample code:
var MILLISECONDS_IN_FIVE_MINUTES = 5 * 60 * 1000;

setInterval(function() {
  console.log("I was executed");
}, MILLISECONDS_IN_FIVE_MINUTES);
Promises:
http://bluebirdjs.com/docs/features.html
Sample code:
new Promise(function(resolve, reject) {
  // updateExternalService, parseExtResp and refreshData are assumed to be
  // defined elsewhere and to return promises.
  updateExternalService(data)
    .then(function(response) {
      return parseExtResp(response);
    })
    .then(function(parsedResp) {
      return refreshData(parsedResp);
    })
    .then(function(returnCode) {
      console.log("yay updated external data source and refreshed");
      return resolve();
    })
    .catch(function(error) {
      // Handle error
      console.log("oops something went wrong -> " + error.message);
      return reject();
    });
});
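Putting the two together: a minimal sketch (assuming a hypothetical runUpdate() that wraps the whole update in a promise) that also guards against overlapping runs in case one update takes longer than the interval:

var FIVE_MINUTES = 5 * 60 * 1000;
var updateInProgress = false;

setInterval(function() {
  if (updateInProgress) return; // skip this tick if the last update is still running
  updateInProgress = true;
  runUpdate() // hypothetical function returning a promise for the whole update
    .catch(function(error) {
      console.log("update failed -> " + error.message);
    })
    .then(function() {
      updateInProgress = false; // clear the flag whether it succeeded or failed
    });
}, FIVE_MINUTES);

The flag matters because setInterval keeps firing regardless of whether the previous run has finished.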
The total clock time it takes to get data from an external service does not matter, as long as you are using asynchronous requests. What matters is how much CPU you are using in doing so. If the majority of the time is spent waiting for the external service to respond or to send the data, then your node.js server is just sitting idle most of the time and you probably do not need a child process.
Because node.js is asynchronous, it can happily have many open requests that are "in flight" that it is waiting for responses to and that takes very little system resources.
Because node.js is single threaded, it is CPU usage that typically drives the need for a child process. If it takes 5 minutes to get a response from an external service, but only 50ms of actual CPU time to process that request and do something with it, then you probably don't need a child process.
If it were me, I would separate out the code for communicating with the external service into a module of its own, but I would not add the complexity of a child process until you actually have some data that such a change is needed.
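For illustration, a minimal sketch of that separation, with invented file and function names:

// externalService.js -- all communication with the external providers lives here.
// fetchFromProvider is a stand-in for whatever HTTP client you use.
function fetchFromProvider(url) {
  return Promise.resolve({ url: url, data: "..." }); // stub; replace with a real HTTP call
}

module.exports = {
  // Gather from every provider in parallel and resolve with all results.
  refresh: function() {
    var providers = ["https://provider-a.example/api", "https://provider-b.example/api"];
    return Promise.all(providers.map(fetchFromProvider));
  }
};

The main app then only needs require("./externalService").refresh(), and swapping this module for a child process later would not touch the rest of the code.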
I don't really know if there is a big cost to starting a new child
process every 5 minutes, or if I should use a single long-running
child process, or if I am overthinking the problem
There is definitely some cost to starting up a new child process. It's not huge, but if you're going to be doing it every 5 minutes and it doesn't take a huge amount of memory, then it's probably better to start the child process once, have it manage the scheduling of communication with the external service entirely on its own, and have it communicate results back to your other node.js process as needed. This makes the second node process much more self-contained, and the only point of interaction between the two processes is to communicate an update. This separation of function and responsibility is generally considered a good thing. In a multi-developer project, you could more easily have different developers working on each app.
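A rough sketch of that long-running arrangement (file names and message shape are my own invention):

// main.js -- parent process: just listens for refreshed data
var fork = require("child_process").fork;
var worker = fork(__dirname + "/refresh-worker.js");

worker.on("message", function(msg) {
  if (msg.type === "refreshed") {
    console.log("got fresh data from worker", msg.data);
    // update the in-memory cache / MongoDB here
  }
});

// refresh-worker.js -- child process: owns the schedule entirely
function fetchExternalData() {
  return Promise.resolve({ updatedAt: Date.now() }); // stub for the real provider calls
}

setInterval(function() {
  fetchExternalData().then(function(data) {
    process.send({ type: "refreshed", data: data }); // report back to the parent
  });
}, 5 * 60 * 1000);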
It depends on how much cohesion there is between your app and the auto-refresh task.
If the auto-refresh task can run standalone, without interaction with your app, then it is better to start the task as a separate process. Using child_process directly is not a good idea: spawning, monitoring, and respawning child processes is tricky, so use crontab or pm2 to manage it.
If the auto-refresh task depends on your app, you can use child_process directly and send messages to it for scheduling. But first try to break this dependency; this will simplify your app and make it easier to deploy and maintain the pieces separately. Whether the child process should be long-running or one-shot is not a question worth worrying about until you have hundreds of such tasks running on one machine.
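For the standalone route, the task can be a small self-scheduling script that pm2 or cron keeps alive; a minimal sketch with invented names:

// refresh-task.js -- standalone process, e.g. started with: pm2 start refresh-task.js
var FIVE_MINUTES = 5 * 60 * 1000;

function refresh() {
  // gather from the providers and write the result straight to MongoDB,
  // so the web app only ever reads the refreshed collection
  console.log("refreshing at", new Date().toISOString());
}

refresh(); // run once at startup
setInterval(refresh, FIVE_MINUTES);

Because the only shared state is the database, the web app and the refresher can be deployed and restarted independently.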
Related
I need to build a Node.js API that, for each different user that calls it, starts running some piece of code (a simple script that sets up a Telegram client, listens to new messages and performs a couple of tasks here) that'd then continuously run in the background.
My ideas so far have been a) launching a new child process for each API call and b) for each call automatically deploying the script on the cloud.
I assume the first idea wouldn't be scalable, as for the second I have no experience on the matter.
I searched a dozen keywords and haven't found anything relevant so far. Is there any handy way to implement this? In which direction can I search?
I look forward to any hints.
I'm not a Node dev, but as a programmer you can do something like this (see the sketch after the list):
When the user is active, it calls a function;
this function must count the seconds that have passed until they match 24h (86400 seconds == 24 hours) and then do the tasks;
when the time matches, the program stops.
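In Node.js terms that idea is just a timer; a minimal sketch (doTasks is whatever work needs to happen):

var DAY_IN_MS = 86400 * 1000; // 86400 seconds == 24 hours

function startDailyTask(doTasks) {
  var timer = setTimeout(function() {
    doTasks(); // run the user's tasks once 24h have passed
  }, DAY_IN_MS);
  return timer; // keep it so the task can be cancelled with clearTimeout(timer)
}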
Node.js is nothing more than an event loop (libuv) whose execution stack runs on V8 (JavaScript). The process will keep running until the event loop is empty.
Keep in mind that there is only one thread executing your code (the event loop) and everything will happen as callbacks.
As long as you set up your Telegram client with some listeners, node.js will wait for new messages and execute the related listener.
Just instantiate a new client on each API call and listen to it; no need to spawn a new process.
Anyway, you'll eventually run out of memory if you don't limit the number of parallel clients or if you don't close them after some time (e.g. using setInterval()).
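A rough sketch of the per-call approach with a cap and a TTL (TelegramClient and handleMessage are placeholders for your actual client library and logic):

function handleMessage(msg) {
  console.log("new message", msg);
}

var clients = new Map(); // userId -> client
var MAX_CLIENTS = 1000;
var CLIENT_TTL = 60 * 60 * 1000; // close clients after an hour, for example

function startClientFor(userId) {
  if (clients.size >= MAX_CLIENTS) throw new Error("too many active clients");
  var client = new TelegramClient(userId); // placeholder constructor
  client.on("message", handleMessage);     // listeners keep the event loop busy, not a process
  clients.set(userId, client);
  setTimeout(function() {                  // reclaim memory after the TTL
    client.close();
    clients.delete(userId);
  }, CLIENT_TTL);
}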
I have a question that nobody seems to be able to help with. How will this be handled in production with thousands of requests at the same time?
I did a simple test case:
module.exports = {
  index: function(req, res) {
    if (req.param('foo') == 'bar') {
      async.series([
        function(callback) {
          // This loop blocks the event loop until it finishes
          for (var k = 0; k <= 50000; k++) {
            console.log('did something stupid a few times');
          }
          callback();
        }
      ], function() {
        return res.json(null);
      });
    } else {
      return res.view('homepage', {});
    }
  }
};
Now if I go to http://localhost:1337/?foo=bar it will obviously wait a while before it responds. So if I now open a different session (other browser or incognito) and go to http://localhost:1337/, I am expecting a result immediately. Instead it waits for the other request to finish and only then lets this request go through.
Therefore it is not asynchronous, and that is a huge problem if I have as few as 2 people operating this app at the same time. I mean, this app will have drop-downs coming from databases, HTML files being served, etc...
My question is this: how does one handle such an issue? I hear the words "promises vs callbacks" - is this some sort of solution to this?
I know about clustering, but that only spreads the requests over the number of CPUs; ultimately you would at most allow 8 people at the same time without being blocked. It won't handle 100 requests at the same time...
P.S. That test was to simplify the example, but think of someone uploading a file, a web service that goes to a different server, a point of sales payment terminal waiting for a user to input the pin, someone downloading a file from the app, etc...
node.js is event-driven and runs your JavaScript single-threaded. So, as long as your code from the first request is sitting in that for loop, node.js can't do anything else and won't get to the next event in the event queue, so your second request has to wait for the first one to finish.
Now, if you used a true async operation such as setTimeout() instead of your big for loop, then nodejs could service other events while the first request was waiting for the setTimeout().
The general rule in nodejs is to avoid doing anything that takes a ton of CPU in your main nodejs app. If you are stuck with something CPU-intensive, then you're best to either run clusters (as many as CPUs you have) or move the CPU-intensive work to some sort of worker queue that is served by different processes and let the OS time slice those other processes while the main nodejs process stays free and ready to service new incoming requests.
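To see the difference in your test case, replace the blocking loop with a timer; this rewrite of your controller lets the second request through immediately because setTimeout yields to the event loop:

index: function(req, res) {
  if (req.param('foo') == 'bar') {
    // setTimeout does not occupy the CPU, so other requests are served meanwhile
    setTimeout(function() {
      return res.json(null);
    }, 5000); // simulate 5 seconds of "work" without blocking
  } else {
    return res.view('homepage', {});
  }
}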
My question is this: how does one handle such an issue? I hear the words "promises vs callbacks" - is this some sort of solution to this?
I know about clustering, but that only spreads the requests over the number of CPUs; ultimately you would at most allow 8 people at the same time without being blocked. It won't handle 100 requests at the same time...
Most of the time, a server process spends most of the time of a request doing things that are asynchronous in nodejs (reading files, talking to other servers, doing database operations, etc...) where the actual work is done outside the nodejs process. When that is the case, nodejs does not block and is free to work on other requests while the async operations from other requests are underway. The little bit of CPU time coordinating these operations can be helped further by clustering though it's probably worth testing a single process first to see if clustering is really needed.
P.S. That test was to simplify the example, but think of someone uploading a file, a web service that goes to a different server, a point of sales payment terminal waiting for a user to input the pin, someone downloading a file from the app, etc...
All the operations you mentioned here can be done truly asynchronously, so they won't block your nodejs app the way your for loop does; the for loop isn't a good simulation of any of them. You need to use a real async operation to simulate it. Real async operations do their work outside of the main nodejs thread and then just post an event to the event queue when they are done, allowing nodejs to do other things while the async operations are doing their work. That's the key.
I want this kind of structure:
An Express backend gets a request and runs a function; this function gets data from different APIs and saves it to the DB. Because this could take minutes, I want it to run in parallel while my web server continues processing requests.
I want this because of the following scenario:
The user has a dashboard; after they log in, the app starts to collect data from the APIs and prepares the dashboard. During that time the user can navigate through the site or even close the browser, but the function has to keep running until it finishes fetching the data. Once it finishes, all the data is saved to the DB and the dashboard is ready for the user.
How can I do this using child_process or any other kind of structure in Node.js?
Since what you're describing is all async I/O (networking or disk) and is not CPU intensive, you don't need multiple child processes in order to effectively serve multiple requests. This is the beauty of node.js. With async I/O, node.js can be working on many different requests over the same period of time.
Let's suppose part of your process is downloading an image. Your node.js code sends a request to fetch an image. That request is sent off via TCP. Immediately, there is nothing else to do on that request. It's winging its way to the destination server and the destination server is preparing the response. While all that is going on, your node.js server is completely free to pull other events from its event queue and start working on other requests. Those other requests do something similar (they start async operations and then wait for events to happen sometime later).
Your server might get 10 different async operations started and "in flight" before the first one actually starts getting a response. When a response starts coming in, the system puts an event into the node.js event queue. When node.js has a moment between other requests, it pulls the next event out of the event queue and processes it. If the processing has further async operations (like saving it to disk), the whole async and event-driven process starts over again as node.js requests a write to disk and node.js is again free to serve other events. In this manner, events are pulled from the event queue one at a time as they become available and lots of different operations can all get worked on in the idle time between async operations (of which there is a lot).
The only thing that upsets the apple cart and ruins the ability of node.js to juggle lots of different things all at once is an operation that takes a lot of CPU cycles (like say some unusually heavy duty crypto). If you had something like that, it would "hog" too much of the CPU and the CPU couldn't be effectively shared among lots of other operations. If that were the case, then you would want to move the CPU-intensive operations to a group of child processes. But, just doing async I/O (disk, networking, other hardware ports, etc...) does not hog the CPU - in fact it barely uses much node.js CPU.
So, the next question is often "how do I know if I have too much stuff that uses the CPU". The only way to really know is to just code your server properly using async I/O and then measure its performance under load and see how things go. If you're doing async things appropriately and the CPU still spikes to 100%, then you have too much CPU load and you'll want to either use generic clustering or move specific CPU-heavy operations to a group of child processes.
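As an illustration for the dashboard scenario above, a minimal Express sketch (collectFromApis and saveToDb are hypothetical stand-ins, and req.user is assumed to be set by auth middleware): start the long work, respond right away, and let the promise chain finish in the background:

// Kick off the long-running collection and answer immediately.
app.post("/dashboard/prepare", function(req, res) {
  collectFromApis(req.user.id)            // returns a promise; may take minutes
    .then(function(data) {
      return saveToDb(req.user.id, data); // persist once everything is gathered
    })
    .catch(function(err) {
      console.error("dashboard prep failed", err);
    });
  res.status(202).json({ status: "preparing" }); // respond now; work continues after
});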
I have a site that makes the standard data-bound calls, but it also has a few CPU-intensive tasks which are run a few times per day, mainly by the admin.
These tasks include grabbing data from the db, running a few different time-consuming algorithms, then re-uploading the data. What would be the best method for making these calls and having them run without blocking the event loop?
I definitely want to keep the calculations on the server so web workers wouldn't work here. Would a child process be enough here? Or should I have a separate thread running in the background handling all /api/admin calls?
The basic answer to this scenario in Node.js land is to use the core cluster module - https://nodejs.org/docs/latest/api/cluster.html
Its API makes it easy to:
launch worker node.js instances on the same machine (each instance will have its own event loop)
keep a live communication channel for short messages between instances
This way, any work done in a child instance will not block your master event loop.
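A minimal sketch of that pattern:

var cluster = require("cluster");
var http = require("http");
var os = require("os");

if (cluster.isMaster) {
  // one worker per CPU; each worker gets its own event loop
  os.cpus().forEach(function() {
    cluster.fork();
  });
  cluster.on("message", function(worker, msg) {
    console.log("worker " + worker.id + " says:", msg);
  });
} else {
  http.createServer(function(req, res) {
    // CPU-heavy /api/admin work runs here without blocking the other workers
    res.end("handled by worker " + cluster.worker.id);
  }).listen(3000);
  process.send({ ready: true }); // short message back to the master
}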
I'd like to execute untrusted JS code using runInNewContext in node.js, but as far as I can see there is no way to limit its execution time. Also, it is a sync operation. Is there a way to set a timeout on it, or an async version of it that will allow me to control its execution from 'outside'?
UPDATE: running it in an external process is no good:
it takes too many resources
more importantly, I need the code to have access to my data/code through a sandbox environment
Run the script in an external process using dnode or child_process.fork, set a deadline timer, and kill the process if the timeout is reached (or clear the timer if the script finishes).
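A sketch of that approach with child_process.fork (the script path is whatever file holds the untrusted code):

var fork = require("child_process").fork;

// Run a script in its own process and kill it if it exceeds the deadline.
function runSandboxed(scriptPath, timeoutMs, callback) {
  var child = fork(scriptPath);
  var done = false;

  function finish(err, code) {
    if (done) return; // guard: 'exit' also fires after kill()
    done = true;
    callback(err, code);
  }

  var timer = setTimeout(function() {
    child.kill("SIGKILL"); // deadline reached: kill the runaway script
    finish(new Error("script timed out"));
  }, timeoutMs);

  child.on("exit", function(code) {
    clearTimeout(timer); // script finished in time: cancel the deadline timer
    finish(null, code);
  });
}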
I'd like to execute untrusted JS code using runInNewContext in
node.js, but as far as I can see there is no way to limit its
execution time. Also, it is a sync operation. Is there a way to set a
timeout on it, or an async version of it that will allow me to
control its execution from 'outside'?
I think what you are saying is completely true. I think the only option is to file an issue with Joyent/Ryan Dahl. Hopefully they can come up with something smart, or maybe they will tell you it is not possible.
From vm.runInNewContext:
Note that running untrusted code is a tricky business requiring great
care. To prevent accidental global variable leakage,
vm.runInNewContext is quite useful, but safely running untrusted code
requires a separate process.
So to do this safely you need to run it in an external process. I think the "expensive part" can be avoided by preforking, as sketched after the quote below.
A single control process is responsible for launching child processes
which listen for connections and serve them when they arrive. Apache
always tries to maintain several spare or idle server processes, which
stand ready to serve incoming requests. In this way, clients do not
need to wait for a new child process to be forked before their
requests can be served.
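Translated to node.js, a rough preforking sketch (sandbox-worker.js is a hypothetical script that receives code over IPC, runs it, and reports the result):

var fork = require("child_process").fork;

// Keep a small pool of idle, pre-forked workers so a sandbox run never waits on fork().
var SPARE_WORKERS = 4;
var pool = [];

function refill() {
  while (pool.length < SPARE_WORKERS) {
    pool.push(fork(__dirname + "/sandbox-worker.js"));
  }
}

function runInSpareProcess(code, onResult) {
  var worker = pool.shift(); // take an already-forked worker; no startup cost here
  worker.once("message", function(result) {
    worker.kill();           // one-shot: discard the possibly-tainted process
    refill();                // replace it in the background
    onResult(result);
  });
  worker.send({ code: code });
}

refill();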
This is now possible because I added timeout parameter support to the Node vm module. You can simply pass in a millisecond timeout value to runInNewContext() and it will throw an exception if the code does not finish executing in the specified amount of time.
Note, this does not imply any kind of security model for running untrusted code. This simply allows you to timeout code which you do trust or otherwise secure.
var vm = require("vm");

try {
  // Older Node versions took the timeout as a positional argument, as here;
  // modern Node takes an options object instead: vm.runInNewContext(code, {}, { timeout: 1000 })
  vm.runInNewContext("while(true) {}", {}, "loop", 1000);
} catch (e) {
  // Exception thrown after 1000ms
}

console.log("finished"); // Will now be executed
Exactly what you would expect:
$ time ./node test.js
finished
real 0m1.069s
user 0m1.047s
sys 0m0.017s