cronJob and HTTP requests - node.js

I'm new to web development and have some questions about HTTP requests and cron jobs. I npm installed cron and wanted to incorporate it into my app, where app.js receives requests from clients and adds the data from a form the client filled out into a database (using Mongoose). I want a script (executer.js) to be called every 10 seconds to execute a task that uses the data in that same database. Any suggestions on how I could accomplish this?

You don't need to use a cron job for this (though if you do want such a library, there is the excellent https://github.com/kelektiv/node-cron). I'd recommend using setInterval for your particular example.
See https://nodejs.org/api/timers.html#timers_setinterval_callback_delay_args for detailed documentation on this.
var intervalMs = 10000;

function updateDB() {
  console.log("Updating db..");
  /* insert db update code here. */
}

setInterval(updateDB, intervalMs);
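To tie this to your setup, one hedged sketch is to have executer.js export the task as a function and call it from the interval callback. The file name and the assumption that it returns a promise come from your description, not a fixed API:

// executer.js is assumed to export an async function that reads the
// form entries with Mongoose and processes them.
var executeTask = require('./executer');

setInterval(function () {
  executeTask().catch(function (err) {
    console.error('Scheduled task failed:', err);
  });
}, 10000); // run every 10 seconds

This keeps everything in the same Node process as app.js, so both can share the existing Mongoose connection.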

Related

How to automatically send a GET request in a loop after some interval of time?

I am making a web application that scrapes the news from a news site and saves it to my database (the scraping is done just for learning purposes). After the database is updated, all the stored data is sent to the user frontend.
Here is the route responsible for the above action.
router.get('/news', postController.getNewsPost);
New news is added to the site being scraped as the day passes. If no user logs in to my application, my database does not update, because the route mentioned above never fires.
I want my database to be updated periodically even when no users have logged into my web application.
I am new to backend development, so please guide me on how I can achieve this; also let me know if more information is required.
The simplest solution would be to use a setInterval to execute a specific function every N seconds:
setInterval(function() {
  // scrape news and save them to the database every 5 seconds
}, 5000)
More versatile and robust solutions can be implemented with an external library for scheduling tasks, such as Agenda.
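If you go the Agenda route, a minimal sketch looks roughly like this; the MongoDB connection string, job name, and scraping function are placeholders you would replace with your own:

const Agenda = require('agenda');

// Agenda persists its jobs in MongoDB; point it at your database.
const agenda = new Agenda({ db: { address: 'mongodb://127.0.0.1/agenda-jobs' } });

// Define the job body; scrapeNews() is a hypothetical function that
// fetches the news site and saves the posts to your database.
agenda.define('scrape news', async () => {
  await scrapeNews();
});

(async function () {
  await agenda.start();
  await agenda.every('5 minutes', 'scrape news');
})();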
You can use the node-cron module to schedule your scraping job.
Install node-cron using npm:
$ npm install --save node-cron
Import node-cron and schedule your scraping job:
var cron = require('node-cron');
cron.schedule('* * * * *', () => {
  console.log('Scraping....');
});
Here is the module link. https://www.npmjs.com/package/node-cron
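Note that node-cron also accepts an optional sixth field for seconds, so if you need something finer-grained than once a minute (for example, every 10 seconds as in the original question), a schedule like this should work:

var cron = require('node-cron');

// With six fields the first one is seconds: run every 10 seconds.
cron.schedule('*/10 * * * * *', () => {
  console.log('Scraping....');
});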
You can use the following method
$ npm install --save node-cron
var cron = require('node-cron');
// Replace the placeholders with real values; the field order is:
// second (optional), minute, hour, day of month, month, day of week.
cron.schedule('second minute hour day-of-month month day-of-week', () => {
  // update database code
});
Library link: https://www.npmjs.com/package/node-cron
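For a concrete example, a schedule that runs at the top of every hour might look like the sketch below; the body is a placeholder for your own update code:

var cron = require('node-cron');

// '0 0 * * * *' = at second 0 of minute 0 of every hour.
cron.schedule('0 0 * * * *', () => {
  console.log('Running the hourly database update...');
  // call your update/scraping function here
});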

Send scheduled email reports from Angular using node.js

I have set up an Angular project and it is consuming APIs from the NodeJS app.
Angular dashboards have some reports/charts; I will configure a schedule somewhere in the DB.
I want to add scheduling functionality so that I will get an automated email containing a graph/chart as an email body.
Can anyone guide me here?
Your scheduling will have to happen in the NodeJS app, since that can always be 'alive'. The Angular project only does things when you've loaded it in your browser, so it cannot process scheduled emails when you don't have it open (unless you were to make it a PWA perhaps, but even then it would be rather convoluted to do it from Angular).
Do all the processing on the server, including generating and rendering the charts to an email message that you send over SMTP or through a service like Mailgun or Sendgrid.
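A hedged sketch of that server-side flow, combining a scheduler with Nodemailer over SMTP; the cron expression, SMTP credentials, addresses, and the chart-rendering helper are all placeholders for whatever your app actually uses:

const schedule = require('node-schedule');
const nodemailer = require('nodemailer');

// SMTP transport; swap in your own server or a Mailgun/SendGrid transport.
const transporter = nodemailer.createTransport({
  host: 'smtp.example.com',
  port: 587,
  auth: { user: 'reports@example.com', pass: process.env.SMTP_PASS }
});

// Run every day at 08:00; read the real schedule from your DB instead.
schedule.scheduleJob('0 8 * * *', async () => {
  const html = await renderReportChart(); // hypothetical: renders the chart/graph to HTML
  await transporter.sendMail({
    from: 'reports@example.com',
    to: 'user@example.com',
    subject: 'Scheduled report',
    html: html
  });
});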
You can use the node-schedule package from npm; it will help you schedule cron jobs for whenever you want to send the email.
const schedule = require('node-schedule')
const job = schedule.scheduleJob('21 * * * *', function () {
  console.log('Send my email.')
})
This will execute the job at exactly minute 21 of every hour.
You can go through the npm package for more scheduling details.
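node-schedule also supports recurrence rules as an alternative to cron strings, which can be easier to build from values stored in your database; the values below are just an illustration:

const schedule = require('node-schedule');

const rule = new schedule.RecurrenceRule();
rule.hour = 8;                              // at 08:00...
rule.minute = 0;
rule.dayOfWeek = new schedule.Range(1, 5);  // ...Monday through Friday

schedule.scheduleJob(rule, function () {
  console.log('Send my email.');
});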

Node.js: How to always listen to a REST API?

Hey, I want to build my first Node.js app and I would like to get some guidance.
So I have an external service that provides a REST API with a pricing list.
The pricing updates every few seconds; I want to listen to this API and update my database every time I get new changes.
I come from PHP, where I would for example make a cron job and run a foreach over the whole table, but I know that's not a smart idea, and I know Node.js is made for this.
I would like to get any ideas how to start to build something like that.
Thanks!
One option would be to use an Event Emitter:
var EventEmitter = require('events')
var ee = new EventEmitter()

ee.on('message', function (text) {
  // Here you would update your database
  console.log(text)
})

ee.emit('message', 'hello world') // You would call this whenever/wherever the API sends your data
NPM events:
https://www.npmjs.com/package/events
NodeJS Docs:
https://nodejs.org/api/events.html
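Since the pricing API most likely has to be polled, one hedged way to wire this together is a setInterval that fetches the endpoint and emits the event to the listener above; the URL, interval, and response handling are placeholders:

var https = require('https');

setInterval(function () {
  https.get('https://api.example.com/prices', function (res) { // placeholder URL
    var body = '';
    res.on('data', function (chunk) { body += chunk; });
    res.on('end', function () {
      ee.emit('message', body); // hand the payload to the 'message' listener above
    });
  }).on('error', function (err) {
    console.error('Price fetch failed:', err);
  });
}, 5000); // poll every 5 seconds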

Node.js: Can I make a GET request to my own application?

I have a route in my app that I've defined with tasks to be run in the style of a few cron jobs. I know that this can be triggered by a GET request from an external device when necessary (and that's ideal). (FYI: I will be adding validations for security purposes to this route.)
router.get('/cron', function (req, res) {
  /**
   * Do cron things...
   */
  task();
  res.sendStatus(200); // respond so the request doesn't hang
});
What I'm wondering is if I'd also be able to trigger this via a GET request from my own system when necessary?
What would be really helpful is to reuse the same route above with an npm module like node-crontab and simply make a request to the route every few hours.
var doEveryThirtyMinutes = crontab.scheduleJob("*/30 * * * *", function () {
  /**
   * Make GET request to '/cron' controller.
   * Live a happy life.
   */
});
I can't find any information on how to make that request (to my same system), even in the npm request module documentation. Is there a reason not to do this? Am I missing something? Is this a bad practice?
The reason this setup would be incredibly beneficial is that I connect to my database via an extension of the req object and don't want to implement a new connection module. Also, I already have a logging procedure implemented for successful/unsuccessful route executions, so I would be able to reuse that as well.
Thanks ahead of time for your help!
Yes, you can make a GET request to your own application. You would make this request like any other request; just use your application's host and port.
If you want to grab the hostname from your OS, you can do this with require('os').hostname().
https://nodejs.org/api/os.html#os_os_hostname
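A minimal sketch with the built-in http module, assuming the app listens on localhost port 3000 (adjust the host and port to your setup) and reusing the node-crontab call from your snippet:

var http = require('http');
var crontab = require('node-crontab');

var doEveryThirtyMinutes = crontab.scheduleJob("*/30 * * * *", function () {
  http.get('http://localhost:3000/cron', function (res) {
    console.log('/cron responded with status', res.statusCode);
    res.resume(); // discard the body so the socket is freed
  }).on('error', function (err) {
    console.error('Could not reach /cron:', err);
  });
});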
The reason you wouldn't do this is that you are already in your application, so you shouldn't need to communicate via its network interface.

How does Meteor receive updates to the results of a MongoDB query?

I asked a question a few months ago, to which Meteor seems to have the answer.
Which, if any, of the NoSQL databases can provide stream of *changes* to a query result set?
How does Meteor receive updates to the results of a MongoDB query?
Thanks,
Chris.
You want query.observe() for this. Say you have a Posts collection with a tags field, and you want to get notified when a post with the important tag is added.
http://docs.meteor.com/#observe
// collection of posts that includes array of tags
var Posts = new Meteor.Collection('posts');
// DB cursor to find all posts with 'important' in the tags array.
var cursor = Posts.find({tags: 'important'});
// watch the cursor for changes
var handle = cursor.observe({
  added: function (post) { ... },   // run when post is added
  changed: function (post) { ... }, // run when post is changed
  removed: function (post) { ... }  // run when post is removed
});
You can run this code on the client, if you want to do something in each browser when a post changes. Or you can run this on the server, if you want to, say, send an email to the team when an important post is added.
Note that added and removed refer to the query, not the document. If you have an existing post document and run
Posts.update(my_post_id, {$addToSet: {tags: 'important'}});
this will trigger the 'added' callback, since the post is getting added to the query result.
Currently, Meteor really works well with one instance/process. In that case all queries go through this instance, and it can broadcast changes back to the other clients. Additionally, it polls MongoDB every 10s for changes to the database made by outside queries. There are plans for 1.0 to improve scalability and hopefully allow multiple instances to inform each other about changes.
DerbyJS, on the other hand, uses Redis PubSub.
From the docs:
On the server, a collection with that name is created on a backend Mongo server. When you call methods on that collection on the server, they translate directly into normal Mongo operations.
On the client, a Minimongo instance is created. Minimongo is essentially an in-memory, non-persistent implementation of Mongo in pure JavaScript. It serves as a local cache that stores just the subset of the database that this client is working with. Queries on the client (find) are served directly out of this cache, without talking to the server.
When you write to the database on the client (insert, update, remove), the command is executed immediately on the client, and, simultaneously, it's shipped up to the server and executed there too. The livedata package is responsible for this.
That explains client to server.
Server to client, from what I can gather, is handled by the livedata and mongo-livedata packages.
https://github.com/meteor/meteor/tree/master/packages/mongo-livedata
https://github.com/meteor/meteor/tree/master/packages/livedata
Hope that helps.
