How to load data from database when the node server start - node.js

I want to load some data from the database into a cache when the node server starts, but I am not sure how to implement it.
I have thought of this:
load.js:
var mysql = require('mysql');
var connection = mysql.createConnection({ /* connection options */ });
var cache = require('./cache'); // some cache module, assumed to exist

var loader = function() {
    connection.query('sql', function(err, rows) {
        cache.put('data', rows);
    });
};
loader();
module.exports = {}; // export nothing
Then I have two questions:
1. Is this the Node way to do the job?
2. The load process is async, which means that once the file is loaded (by the require command), the load job may not yet be completed. I need something like servlet initialization in Java EE, where the server starts only after the job is done.
Is this possible?

Here's how to perform initialization tasks in node
app.js
var project = require('./project'); // assuming project.js contains your project code

function initializationTasks(callback) {
    // perform all initialization tasks, e.g. read from the database
    callback();
}

initializationTasks(project.start); // start executing your project
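Applied to the original question, a minimal sketch could look like this (the mysql connection options, the query string and the cache object are placeholders; the point is just that the server only starts listening once the initial load has called back):
// load.js - a sketch; connection options, the query and the cache are placeholders
var mysql = require('mysql');
var connection = mysql.createConnection({ /* connection options */ });
var cache = {};

exports.cache = cache;
exports.load = function(callback) {
    connection.query('sql', function(err, rows) {
        if (err) return callback(err);
        cache.data = rows;       // populate the cache
        callback(null);
    });
};

// app.js - only start listening once the initial load has finished
var http = require('http');
var loader = require('./load');

loader.load(function(err) {
    if (err) throw err;          // fail fast if initialization fails
    http.createServer(function(req, res) {
        res.end('cached ' + loader.cache.data.length + ' rows');
    }).listen(3000);
});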

Related

How to call external function in background tasks in arangodb

I have some functionality fully working and I want to call it every 30 minutes from a background task. But it is not being called and throws an error: 'undefined'.
app.js
function hourly() { require("console"); console.log('I am running'); }

controller.get('/testOnce', function(req, res) {
    var tasks = require("org/arangodb/tasks");
    tasks.register({
        id : "Test",
        name : "Testing background task",
        period : 5,
        command : "hourly()"
    });
});
I tried defining hourly in a separate .js file and then calling it with require, but this throws: cannot locate module 'myjob'
myjob.js
function hourly() { require("console"); console.log('I am running'); }
app.js
controller.get('/testOnce', function(req, res) {
    var tasks = require("org/arangodb/tasks");
    tasks.register({
        id : "Test",
        name : "Testing background task",
        period : 5,
        command : "var job = require('myjob');"
    });
});
The contents of the command attribute cannot refer to variables defined in other scopes. For example, the app.js variant uses a variable named hourly, which may no longer be present when the command gets executed.
In the simple case of just logging something, the app.js variant can be made to work if its command parameter is changed to the following (which doesn't require any variables):
var tasks = require("org/arangodb/tasks");
tasks.register({
    period : 5,
    command : "require('console').log('I am running from inline command');"
});
The variant that defines the job function in a separate file (named myjob.js) can be made to work by making the function available via an export of that module:
function hourly() {
    require("console").log('I am running from myjob.js');
}

exports.hourly = hourly;
This is because a require() will only expose what the module exported. In the above case the module will expose a function named hourly, which can now be invoked from a background task as follows:
var tasks = require("org/arangodb/tasks");
tasks.register({
    period : 5,
    command : "require('myjob').hourly();"
});
Please note that in order for this to work, the file myjob.js needs to be located in the module search path. IIRC that is js/node by default. Also note that this directory already includes the bundled modules and may change on ArangoDB updates.
If the regular command is to be executed from within a Foxx route, then using Foxx queues might also be an option as they should allow putting the script with the job function inside the application directory. However, I have not tried this yet.
The "Foxx way" of solving this would be using the queue and a script-based job (introduced in 2.6).
I've covered this in the last Foxx webinar and am working on a blog post and cookbook recipe.
The problem with doing it this way is that Foxx jobs cannot be periodic in 2.6. The feature is planned for 2.7, but with 2.6 only just having been released, you probably won't be able to use it any time soon.
Personally I would recommend using an external scheduler and invoking the Foxx script from there (via the foxx-manager CLI or the HTTP API).

restart node.js application on file change

As you know, in node.js if you edit a server-side file you need to restart the application for the changes to take effect.
Now I was wondering: is there a way to do this inside the server? We can tell whether a file has changed (based on its last modification date), so we only need to re-run or restart the application, or do something that makes the changes available, without doing it from the command line.
We all know how to do this with Grunt.js (or something like that) or supervisor, but I want to do this without any external package.
Thanks a lot :)
You can start the server in such a way that when it exits it starts again. In a Bash script it would simply be a recursive function:
function start(){
    node index.js
    start
}
start   # kick it off
Or in a batch file a goto statement
:start
node index.js
goto start
Then, in your node server, when you detect a file change you simply end the process.
For watching the files there are modules out there that make it easier, e.g. watch:
require('watch').watchTree('./server', process.exit);
You could also use fs.watchFile to watch the files that should trigger a restart on change:
https://nodejs.org/api/fs.html#fs_fs_watchfile_filename_options_listener
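For example, a rough sketch combining fs.watchFile with the exit-and-restart wrapper above (the watched filename and polling interval are just examples):
var fs = require('fs');

// Watch the file and exit when its modification time changes;
// the wrapper script above will then restart the server.
fs.watchFile('./server.js', { interval: 1000 }, function(curr, prev) {
    if (curr.mtime.getTime() !== prev.mtime.getTime()) {
        console.log('File changed, exiting so the wrapper can restart the process');
        process.exit(0);
    }
});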
Use cluster.fork to run all code in a child process, and set the master process to fork a new child whenever the previous child exits. Then simply exit the child forcibly upon file change, using chokidar.
// Put this at the very beginning of your code
var cluster = require('cluster');
if (cluster.isMaster) {
    cluster.on('exit', cluster.fork);
    cluster.fork();
    return;
}
require('chokidar').watch('./**/*.*').on('change', process.exit);
// Rest of the code here
...

How to execute / abort long running tasks in Node JS?

NodeJS server with a Mongo DB - one feature will generate a report JSON file from the DB, which can take a while (60 seconds and up; it has to process hundreds of thousands of entries).
We want to run this as a background task. We need to be able to start a report build process, monitor it, and abort it if the user decides to change the params and rebuild it.
What is the simplest approach with node? We don't really want to get into the realm of separate worker servers processing jobs, message queues etc. - we need to keep this on the same box with a fairly simple implementation.
1) Start the build as an async method, and return to the user, with socket.io reporting progress?
2) Spin off a child process for the build script?
3) Use something like https://www.npmjs.com/package/webworker-threads?
With the few approaches I've looked at I get stuck on the same two areas:
1) How to monitor progress?
2) How to abort an existing build process if the user re-submits data?
Any pointers would be greatly appreciated...
The best would be to separate this task from your main application. That said, it'd be easy to run it in the background.
To run it in the background and monitor it without a message queue etc., the easiest would be a child_process.
1) Launch a spawned job on an endpoint (or URL) called by the user.
2) Set up a socket to return live monitoring of the child process.
3) Add another endpoint to stop the job, with a unique id returned by 1) (or not, depending on your concurrency needs).
Some coding ideas:
var spawn = require('child_process').spawn
var job = null // keep the job in memory so we can kill it

app.get('/save', function(req, res) {
    if(job && job.pid)
        return res.status(500).send('Job is already running').end()

    job = spawn('node', ['/path/to/save/job.js'],
    {
        detached: false, // if not detached and your main process dies, the child will be killed too
        stdio: [process.stdin, process.stdout, process.stderr] // those can be file streams for logs or whatever
    })

    job.on('close', function(code) {
        job = null
        // send socket information about the job ending
    })

    return res.status(201).end() // created
})

app.get('/stop', function(req, res) {
    if(!job || !job.pid)
        return res.status(404).end()

    job.kill('SIGTERM')
    // or process.kill(job.pid, 'SIGTERM')
    job = null
    return res.status(200).end()
})

app.get('/isAlive', function(req, res) {
    try {
        job.kill(0)
        return res.status(200).end()
    } catch(e) { return res.status(500).send(e).end() }
})
To monitor the child process you could use pidusage (we use it in PM2, for example). Add a route to monitor a job and call it every second. Don't forget to release memory when the job ends.
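As a rough sketch using the current pidusage API (the route name is just an example, and job is the spawned child from the code above):
var pidusage = require('pidusage')

app.get('/monitor', function(req, res) {
    if(!job || !job.pid)
        return res.status(404).end()

    // stats contains fields such as cpu (percent) and memory (bytes)
    pidusage(job.pid, function(err, stats) {
        if(err)
            return res.status(500).send(err).end()
        return res.status(200).json(stats)
    })
})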
You might want to check out this library, which will help you manage multiprocessing across microservices.

Execute node command from UI

I am not very familiar with nodejs, but I need some guidance for my task. Any help would be appreciated.
I have a nodejs file which I run from the command line as
node filename arguments
and it performs some operation based on whatever arguments I pass.
Now I have an HTML page with different options for selecting different operations. Based on the selection, I want to pass my parameters to a file - that can be any local nodejs file which calls my other nodejs file internally. Is that possible? I am not sure what my approach should be.
I currently have to run a different command from the terminal for each task, so my goal is to reduce that overhead: select options from the UI and perform the operations through the nodejs file.
I was bored so I decided to try to answer this even though I'm not totally sure it's what you're asking. If you mean you just need to run a node script from a node web app and you normally run that script from the terminal, just require your script and run it programmatically.
Let's pretend this script you run looks like this:
// myscript.js
var task = process.argv[2];

if (!task) {
    console.log('Please provide a task.');
    return;
}

switch (task.toLowerCase()) {
    case 'task1':
        console.log('Performed Task 1');
        break;
    case 'task2':
        console.log('Performed Task 2');
        break;
    default:
        console.log('Unrecognized task.');
        break;
}
With that you'd normally do something like:
$ node myscript task1
Instead you could modify the script to look like this:
// Define our task logic as functions attached to exports.
// This allows our script to be required by other node apps.
exports.task1 = function () {
    console.log('Performed Task 1');
};

exports.task2 = function () {
    console.log('Performed Task 2');
};

// If process.argv has more than 2 items then we know
// this is running from the terminal and the third item
// is the task we want to run :)
if (process.argv.length > 2) {
    var task = process.argv[2];
    if (!task) {
        console.error('Please provide a task.');
        return;
    }
    // Check the 3rd command line argument. If it matches a
    // task name, invoke the related task function.
    if (exports.hasOwnProperty(task)) {
        exports[task]();
    } else {
        console.error('Unrecognized task.');
    }
}
Now you can run it from the terminal the same way:
$ node myscript task1
Or you can require it from an application, including a web application:
// app.js
var taskScript = require('./myscript.js');
taskScript.task1();
taskScript.task2();
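To hook this up to the UI, one possible sketch (assuming an Express app; the route path and port are made up for illustration) is a route that the page's buttons or AJAX calls hit, which then invokes the exported task functions:
// server.js - a sketch, not part of the original answer
var express = require('express');
var taskScript = require('./myscript.js');
var app = express();

// e.g. a button on the HTML page sends the browser (or an AJAX call) to /run/task1
app.get('/run/:task', function (req, res) {
    var task = req.params.task;
    if (taskScript.hasOwnProperty(task)) {
        taskScript[task]();   // invoke the exported task function
        res.send('Ran ' + task);
    } else {
        res.status(400).send('Unrecognized task: ' + task);
    }
});

app.listen(3000);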
Just remember that if a user invokes your task script from your web app via a button or something, the script will be running on the web server and not on the user's local machine. That should be obvious, but I thought I'd remind you anyway :)
EDIT
Since writing the above I've discovered module.parent. The parent property is only populated if your script was loaded from another script via require. This is a better way to test whether your script is being run directly from the terminal or not. The way I did it might have problems if you pass an argument when you start your app.js file, such as --debug: it would try to run a task called "--debug" and then print "Unrecognized task." to the console when you start your app.
I suggest changing this:
if (process.argv.length > 2) {
To this:
if (!module.parent) {
Reference: Can I know, in node.js, if my script is being run directly or being loaded by another script?

using streamlinejs with nodejs express framework

I am new to the nodejs world. Wanting to explore the various technologies and frameworks involved, I am building a simple user posts system (users post something, everybody else sees the posts) backed by redis. I am using the express framework, which is recommended by most tutorials. But I have some difficulty getting data from the redis server: I need to do 3 queries against the redis server to display the posts, in which case I have to use a nested callback after each redis call. So I wanted to use streamline.js to simplify the callbacks. But I am unable to get it to work, even after I used npm install streamline -g and require('streamline').register(); before calling:
var keys = ['comments', 'timestamp', 'id'];
var posts = [];
for (var key in keys) {
    var post = client.sort("posts", 'by', 'nosort', "get", "POST:*->" + keys[key], _);
    posts.push(post);
}
I get the error ReferenceError: _ is not defined.
Please point me in the right direction or to any resources I might have missed.
The require('streamline').register() call should be in the file that starts your application (with a .js extension). The streamline code should be in another file with a ._js extension, which is required by the main script.
Streamline only allows you to have async calls (calls with an _ argument) at the top level of a main script. Here, your streamline code is in a module required by the main script, so you need to put it inside a function. Something like:
exports.myFunction = function(_) {
    var keys = ['comments', 'timestamp', 'id'];
    var posts = [];
    for (var key in keys) {
        var post = client.sort("posts", 'by', 'nosort', "get", "POST:*->" + keys[key], _);
        posts.push(post);
    }
}
This is because require is synchronous. So you cannot put asynchronous code at the top level of a script which is required by another script.
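For illustration, a minimal sketch of how the two files could fit together (the file names and function name are hypothetical, client is the redis client from the question, and it assumes streamline's convention that a plain JavaScript caller passes a node-style callback where the function expects _):
// posts._js - streamline module (compiled by streamline because of the ._js extension)
exports.getPosts = function(_) {
    var keys = ['comments', 'timestamp', 'id'];
    var posts = [];
    for (var key in keys) {
        posts.push(client.sort("posts", 'by', 'nosort', "get", "POST:*->" + keys[key], _));
    }
    return posts;
};

// main.js - plain JavaScript entry point
require('streamline').register();          // enable loading of ._js files
var postsModule = require('./posts');      // resolves posts._js

// from plain JS, the _ parameter is supplied as a regular node-style callback
postsModule.getPosts(function(err, posts) {
    if (err) throw err;
    console.log(posts);
});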
