Make scheduled tasks with node-schedule (using forever) persist after a restart - node.js

I want to develop a node.js program that will be executed at a specific time using a job scheduler (node-schedule).
This program runs in the background using forever (a node.js module).
Here's the content of my app.js:
var schedule = require('node-schedule');

// ...

var id = request.body.id;
var scheduled = schedule.scheduledJobs;
if (scheduled[id] == null) {
  // no job with this id yet: schedule the tasks here
} else {
  scheduled[id].cancel(); // cancel() is a method and must be called
  delete scheduled[id];
}
But if app.js is killed for any reason, the schedule objects are lost, and forever then restarts app.js without them.
How can I make the node-schedule jobs persist?

I faced a similar problem recently, and there are two solutions:
1. Use actual cron
2. Use a database
I solved my problem with a database. Each time you create an event, save it to the database. In your app.js file, when the application starts, read the database and recreate the scheduled events accordingly.
The first option is better if you do not have dynamic tasks, i.e. you never create new tasks and they are always the same.
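A minimal sketch of the database approach. A plain object stands in for the real database and an injected `scheduleFn` stands in for `schedule.scheduleJob`; all names here are illustrative, not part of the original code:

```javascript
// Sketch: persist job *definitions*, then rebuild the live jobs on startup.
// `db` is a plain object standing in for a real database and `scheduleFn`
// stands in for schedule.scheduleJob (both are hypothetical names).
function saveJob(db, id, cronSpec) {
  db[id] = cronSpec;               // persist the definition, not the job object
}

function restoreJobs(db, scheduleFn) {
  const jobs = {};
  for (const id of Object.keys(db)) {
    jobs[id] = scheduleFn(db[id]); // recreate every job found in the database
  }
  return jobs;
}

// On every request that creates a job, call saveJob() as well; at the top of
// app.js call restoreJobs() so a restart by forever rebuilds the schedule.
const db = {};
saveJob(db, 'job1', '*/5 * * * *');
const restored = restoreJobs(db, spec => ({ spec, cancel() {} }));
```

With a real database you would replace the object with your collection's read/write calls; the restore step stays the same.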

Related

Api route is blocked by huge process in event loop

I have a restify API similar to this (in pseudocode):
server.post('api/import', function (req, res) {
  // do some file modifications, then import the file into the database
  database.write('status of file.id is pending');
  fileModification(req.file);
  res.status(200);
  res.send('import has started');
});
server.get('api/import_info', function (req, res) {
  // report the status: is the file imported, or still pending?
  database.select('file status');
});
// in another module, after the import is finished, I update the database:
database.write('file.id import status is completed')
The import takes about 2 minutes, but even though I don't wait for it to finish in api/import, my API is blocked when I try to hit the 'info' route.
Is it possible that the event loop is blocked, or maybe a connection is not properly closed?
Thanks in advance
I have some ideas about your question:
1. You can use the cluster module. Cluster can create one worker process per CPU core; when one process is blocked, the other processes can still serve requests.
2. You can fork a new process in your API handler and let the new process handle the heavy task.

Background process in node

I am new to the whole JavaScript stack. I have been trying to learn by building a small application based on React, Express and Mongo. My application saves some config settings to Mongo, and based on these settings the app periodically fetches some values by querying an Elasticsearch index.
So far I have the part that saves the config settings done.
What I need to do now is to extract these settings from my Mongo DB and schedule a job which runs periodically (the period is one of the settings) to poll my Elasticsearch index. The thing I cannot wrap my head around is how to create this scheduled job. All I have been using till now is the Express router to interact with my UI and DB.
I did some research; would spawning a child process be the ideal way to go ahead with this?
I would suggest you take a look at node-cron. Cron is a popular task scheduler on UNIX and node-cron is its implementation in Node.
Basic usage, taken from the docs:
var CronJob = require('cron').CronJob;
new CronJob('* * * * * *', function() {
console.log('You will see this message every second');
}, null, true, 'America/Los_Angeles');
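If a fixed period taken from the saved settings is all you need, plain setInterval is also enough; node-cron earns its keep when you want cron-style expressions. A sketch with hypothetical names (`settings.periodMs`, `pollFn` are not from the question):

```javascript
// Sketch: take the polling period from the settings document saved in Mongo
// and poll the Elasticsearch index on that interval. Names are illustrative.
function startPolling(settings, pollFn) {
  // settings.periodMs is read from the config stored in Mongo
  const timer = setInterval(pollFn, settings.periodMs);
  return timer; // keep the handle so clearInterval(timer) can stop the job
}

// Usage (hypothetical model and function names):
// Settings.findOne({}).then(s => startPolling(s, pollElasticIndex));
```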

How to run a job through Queue in arangodb

I am moving from ArangoDb 2.5.7 to ArangoDb 3.1.7. I have managed to make everything work except the Jobs. I have looked at the documentation and I don't understand whether I have to create a separate service just for that.
So, I have a foxx application myApp
manifest.json
{
  "name": "myApp",
  "version": "0.0.1",
  "author": "Deepak",
  "files": {
    "/static": "static"
  },
  "engines": {
    "arangodb": "^3.1.7"
  },
  "scripts": {
    "setup": "./scripts/setup.js",
    "myJob": "./scripts/myJob.js"
  },
  "main": "index.js"
}
index.js
'use strict';
module.context.use('/one', require('./app'));
app.js
const createRouter = require('org/arangodb/foxx/router');
const controller = createRouter();
module.exports = controller;
const queues = require('@arangodb/foxx/queues');
const queue = queues.create('myQueue', 2);
queue.push({mount: "/myJob", name: "myJob"}, {"a": 4}, {"allowUnknown": true});
myJob.js
const argv = module.context.argv;
var obj = argv[0];
console.log('obj:'+obj);
I get the following error:
Job failed:
ArangoError: service not found
Mount path: "/myJob".
I am not sure if I have to extract myJob into an external service. Can you help me? I don't see a complete example of how to do this.
To answer your question:
You do not have to extract the job script into a new service. You can specify the mount point of the current service by using module.context.mount.
You can find more information about the context object in the documentation: https://docs.arangodb.com/3.1/Manual/Foxx/Context.html
By the way, it's probably not a good idea to arbitrarily create jobs at mount-time. The common use case for the queue is to create jobs in route handlers as a side-effect of incoming requests (e.g. to dispatch a welcome e-mail on signup).
If you create a job at mount-time (e.g. in your main file or a file required by it), the job will be created whenever the file is executed, which will be at least once for each Foxx thread (by default ArangoDB uses multiple Foxx threads to handle parallel requests), or once per request(!) when development mode is enabled.
Likewise, if you create a job in your setup script, it will be created whenever the setup script is executed, although this will only happen in one thread each time (but still once per request when development mode is active).
If you need e.g. a periodic job that lives alongside your service, you should put it in a unique queue and only create it in your setup script after checking whether it already exists.
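Putting that advice together, a sketch of the setup script, reusing the names from the question. The `queue.all()` existence check is an assumption about the Foxx queues API; verify it against your ArangoDB version's documentation:

```js
// scripts/setup.js — sketch: create the periodic job once, from the setup
// script, using the service's own mount point instead of a hard-coded "/myJob".
'use strict';
const queues = require('@arangodb/foxx/queues');
const queue = queues.create('myQueue', 2);

// only push if the queue has no jobs yet, so repeated setup runs
// (one per Foxx thread) don't create duplicates
if (queue.all().length === 0) {
  queue.push(
    {mount: module.context.mount, name: 'myJob'}, // the current service's mount
    {a: 4}
  );
}
```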
On the changes in the queue API:
The queue API changed in 2.6 due to a serious issue with the old API that would frequently result in pending jobs not being properly rescheduled when the ArangoDB daemon was restarted after a job had been pushed to the queue.
Specifically ArangoDB 2.6 introduced so-called script-based (rather than function-based) job types: https://docs.arangodb.com/3.1/Manual/ReleaseNotes/UpgradingChanges26.html#foxx-queues
Support for the old function-based job types was dropped in ArangoDB 2.7 and the cookbook recipe was updated to reflect script-based job types: https://docs.arangodb.com/2.8/cookbook/FoxxQueues.html
A more detailed description of the new queue can be found in the documentation: https://docs.arangodb.com/3.1/Manual/Foxx/Scripts.html

Starting a scheduling service in sails.js with forever from within sails with access to all waterline models

I have a standalone scheduling service set to execute some logic every hour. I want to start this service with forever right after sails starts, and I am not sure what's the best way to do that.
// services/Scheduler.js
sails.load(function() {
setInterval( logicFn , config.schedulingInterval);
});
Sails can execute bootstrap logic in the config.bootstrap module, and I'll be using the forever-monitor node module:
var forever = require('forever-monitor'),
scheduler = new (forever.Monitor)( schedulerPath, {
max: 20,
silent: true,
args: []
});
module.exports.bootstrap = function(cb) {
scheduler.start();
cb();
};
What if the service fails and restarts for whatever reason: would it still have access to all waterline models? How can I ensure it works as intended every time?
As brittonjb said in the comments, a simple solution is to use the cron module for scheduling.
You can specify a function for it to call at whatever interval you wish; this function could be defined within /config/bootstrap.js or somewhere else (e.g. mail.dailyReminders() if you have a mail service with a dailyReminders method).
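A minimal sketch of that suggestion in /config/bootstrap.js. It assumes the `cron` npm package is installed, and `mail.dailyReminders` is the hypothetical service method used as an example above:

```js
// /config/bootstrap.js — sketch assuming the `cron` package;
// mail.dailyReminders is a hypothetical service method.
var CronJob = require('cron').CronJob;

module.exports.bootstrap = function (cb) {
  // runs inside the sails process, so waterline models and services
  // are available in the callback (no separate forever process needed)
  new CronJob('0 0 * * * *', function () { // top of every hour
    sails.services.mail.dailyReminders();
  }, null, true);                          // true: start the job immediately
  cb();
};
```

Because the job lives inside the sails process, it always has access to the models; there is no separately monitored process that could come back up without them.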
Please please please, always share your sails.js version number! This is really important for people googling questions/answers!
There are many ways to go about doing this. However, for those that want the "sails.js" way, there are hooks for newer sails.js versions.
See this issue thread on GitHub; specifically, after the issue gets closed, some very helpful solutions are provided by users. The latest is shared by "scott-wyatt", commented on Dec 28, 2014:
https://github.com/balderdashy/sails/issues/2092

Nodejs: Async job queue processing

I am working in nodejs with express on a web app that communicates with mongodb frequently. I am currently running production with my own job queue system, which only begins processing a job once the previous job has completed (an approach that also seems to be taken by kue).
To me, this seems wildly inefficient, I was hoping for a more asynchronous job queue, so I am looking for some advice as to how other nodejs developers queue their jobs and structure their processing.
One of my ideas is to process any jobs that are received immediately and return the resulting data in the order of addition.
Also to be considered: currently each user has their own independent job queue instance, is this normal practice? Is there any reason why this should not be the case? (ie, all users send jobs to one universal queue?)
Any comments/advice are appreciated.
Why did you build your own queue system? You did quite a lot of work to serialize an async queue with addLocalJob.
Why don't you just do something like
on('request', function (req, res) { queryDB(parameter, function (result) { res.send(result); }); });
? Full parallel access, no throttling, no (async) waiting.
If you really want to do it "by hand" in your own code, why not execute the first n elements of your trafficQueue instead of only the first?
If you want to throttle the DB, there are two ways:
1. use a library like async and its parallelLimit function
2. connect to your mongodb over http(s) and use Node's built-in http.globalAgent.maxSockets.
Hope this helps.
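The parallelLimit idea can also be done by hand in a few lines. A sketch of a small concurrency-limited queue (pure JS; the limit and the job shape are illustrative): instead of one job at a time, up to `limit` jobs run concurrently.

```javascript
// Sketch: a tiny concurrency-limited queue. Each job is a function that
// receives a `done` callback to call when its (async) work finishes.
function createQueue(limit) {
  let active = 0;
  const waiting = [];

  function next() {
    while (active < limit && waiting.length > 0) {
      const job = waiting.shift();
      active++;
      job(() => { active--; next(); }); // when a job finishes, start the next
    }
  }

  return {
    push(job) { waiting.push(job); next(); }
  };
}

// Usage: up to 3 DB calls in flight at once instead of strictly one by one.
const q = createQueue(3);
const order = [];
for (let i = 0; i < 5; i++) {
  q.push(done => { order.push(i); done(); });
}
```

This keeps results in order of addition (each job runs as soon as a slot frees up) while still bounding the load on the database.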
