I have a route in my app that I've defined with tasks to be run in the style of a few cron jobs. I know that this can be triggered by a GET request from an external device when necessary (and that's ideal). (FYI: I will be adding validations to this route for security purposes.)
router.get('/cron', function(req, res) {
  /**
   * Do cron things...
   */
  task();
  res.sendStatus(200); // acknowledge the request so the caller isn't left hanging
});
What I'm wondering is if I'd also be able to trigger this via a GET request from my own system when necessary?
What would be really helpful is to reuse the same route above with an npm module like node-crontab and simply make a request to the route on a schedule, say every thirty minutes:
var crontab = require('node-crontab');

var doEveryThirtyMinutes = crontab.scheduleJob("*/30 * * * *", function() {
  /**
   * Make GET request to '/cron' controller.
   * Live a happy life.
   */
});
I can't find any information on how to make that request (to my own system), even in the npm request module's documentation. Is there a reason not to do this? Am I missing something? Is this a bad practice?
The reason this setup would be incredibly beneficial is that I connect to my database via an extension of the req object and don't want to implement a new connection module. I also already have a logging procedure in place for successful/unsuccessful route executions, so I would be able to reuse that as well.
Thanks ahead of time for your help!
Yes, you can make a GET request to your own application. You would make this request like any other request; just use your application's host and port.
If you want to grab the hostname from your OS, you can do so with require('os').hostname():
https://nodejs.org/api/os.html#os_os_hostname
The reason you wouldn't do this is that you are already inside your application, so you shouldn't need to communicate with it via its network interface.
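For example, a minimal sketch using Node's built-in http module (port 3000 is an assumption; use whatever port your app actually listens on):

const http = require('http');

// hit our own /cron route exactly as an external device would
http.get('http://localhost:3000/cron', (res) => {
  console.log('/cron responded with status', res.statusCode);
}).on('error', (err) => {
  console.error('request to /cron failed:', err.message);
});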
Related
I am using this (contentful-export) library in my express app like so
const express = require('express');
const app = express();
...
app.get('/export', (req, res, next) => {
  const contentfulExport = require('contentful-export');
  const options = {
    ...
  };
  contentfulExport(options).then((result) => {
    res.send(result);
  });
});
Now, this does work, but the method takes a bit of time and sends status/progress messages to the node console. I would like to keep the user updated as well. Is there a way I can send the node console progress messages to the client?
This is my first time using node/express, so any help would be appreciated. I'm not sure if this already has an answer, since I'm not entirely sure what to call it.
Looking at the documentation for contentful-export, I don't think this is possible. The way this usually works in Node is that you have an object (contentfulExport in this case); you call a method on this object, and the same object is also an EventEmitter. This way you'd get a hook to react to fired events.
// pseudo code
someLibrary.on('someEvent', (event) => { /* do something */ });

someLibrary.doLongRunningTask()
  .then(/* ... */);
This is not documented for contentful-export so I assume that there is no way to hook into the log messages that are sent to the console.
Your question has another tricky angle, though. In the code you shared, you expose a single endpoint (/export). If you would like to display updates or show some progress, you'd probably need a second endpoint giving information about the progress of your long-running task (though with contentful-export you cannot access that progress information anyway).
The way this is usually handled is that you kick off a long-running task via one HTTP endpoint and then use another endpoint that serves infos via polling or a web socket connection.
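As a rough sketch of that pattern (the job store, ids, and endpoint names here are assumptions made up for the example, not contentful-export API):

const express = require('express');
const crypto = require('crypto');
const app = express();

const jobs = {}; // hypothetical in-memory job store

function runLongTask() {
  // stand-in for the long-running work, e.g. contentfulExport(options)
  return new Promise((resolve) => setTimeout(() => resolve('export finished'), 10000));
}

// kick off the long-running task and answer immediately with a job id
app.post('/export', (req, res) => {
  const id = crypto.randomUUID();
  jobs[id] = { status: 'running' };
  runLongTask()
    .then((result) => { jobs[id] = { status: 'done', result }; })
    .catch((err) => { jobs[id] = { status: 'failed', error: err.message }; });
  res.status(202).send({ id });
});

// the client polls this endpoint to keep the user updated
app.get('/export/:id/status', (req, res) => {
  res.send(jobs[req.params.id] || { status: 'unknown' });
});

app.listen(3000);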
Sorry that I can't give a proper solution, but due to the limitations of contentful-export I don't think there is a clean/easy way to show progress of the exported data.
Hope that helps. :)
I'd like to know how NodeJS processes multiple GET requests from different users/browsers when events are emitted to return the results. I'd like to think of it as if, each time a user executes the GET request, a new session is started for that user.
For example, if I have this GET request:
var tester = require('./tester-class');

app.get('/triggerEv', async function(req, res, next) {
  // Start the data processing
  tester.startProcessing('some-data');
  // tester has event emitters that are triggered when processing is complete (success or fail)
  tester.on('success', function(data) {
    return res.send('success');
  });
  tester.on('fail', function(data) {
    return res.send('fail');
  });
});
What I'm thinking is that if I open a browser and run this GET request, passing some-data, processing starts. If I then open another browser and execute this GET request with different data (to simulate multiple users accessing it at the same time), it will overwrite the previous startProcessing call and rerun it with the new data.
So if multiple users execute this GET request at the same time, would it be handled separately for each user, as if each were a different and independent session, returning a response for each user's session? Or will it do as I described above (in which case I would have to somehow manage a different session for each user that triggers this GET request)?
I want to make it so that each user that executes this GET request doesn't interfere with other users that also execute this GET request at the same time and the correct response is returned for each user based on their own data sent to the startProcessing function.
Thanks, I hope I'm making sense. Will clarify if not.
If you're sharing the global tester object among different requests, then the 2nd request will interfere with the first. Since all incoming requests use the same global environment in node.js, the usual model is that any request that may be "in flight" for a while needs to create its own resources and keep them to itself. Then, if some other request arrives while the first one is still waiting for something to complete, it will also create its own resources and the two will not conflict.
The server environment does not have a concept of "sessions" in the way you're using the term. There is no separate server-session or server state that each request lives in other than the request and response objects that are created for each incoming request. This is not like PHP - there is not a whole new interpreter state for each request.
I want to make it so that each user that executes this GET request doesn't interfere with other users that also execute this GET request at the same time and the correct response is returned for each user based on their own data sent to the startProcessing function.
Then don't share any resources between requests and don't use any objects that have global state. I don't know what your tester is, but one way to keep multiple requests separate from each other is to make a new tester object for each request, so each request can use its own to its heart's content without any conflict.
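For illustration, assuming tester-class exports a constructor (an assumption, since the class itself isn't shown), that could look like:

const Tester = require('./tester-class'); // assumes the module exports a constructor

app.get('/triggerEv', function(req, res, next) {
  const tester = new Tester();          // fresh instance per request, no shared state
  tester.on('success', function(data) { // attach listeners before starting the work
    res.send('success');
  });
  tester.on('fail', function(data) {
    res.send('fail');
  });
  tester.startProcessing('some-data');
});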
I currently have a request which is made from an angular 4 app (which uses electron, which uses chromium) to a bottleneck (nodejs/express) server. The server takes about 10 minutes to process the request.
The default timeout which I'm getting is 120 seconds.
I tried setting the timeout on the server using:
app.use(timeout('1000s'));
On the client side I have used:
options = {
  url,
  method: 'GET',
  timeout: 600 * 1000
};
let req = http.request(options, () => {});
req.end();
I have also tried giving the specific route its own timeout.
Each time the request hits 120 seconds, the socket dies and I get a "socket timeout" error.
I have read many posts asking the same question but didn't find any concrete answers. Is it possible to make a request with a long (or no) timeout using the tools above? Do I need a different library that handles long timeouts?
Any help would be greatly appreciated.
So after browsing the internet, I have discovered that there is no way to increase Chrome's timeout.
My solution to this problem was to have the request return a default answer immediately (something like "started") and then ping the server to find out its status.
Another possible solution would be to put a route in the client (I'm using electron and node modules on the client side, so it is possible) and then let the server ping the client back with the status of the query.
Writing this down so other people will have some possible patches. I will update if I find anything better.
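For anyone wanting a concrete starting point, here is a rough client-side sketch of that polling approach (the URL and response shape are assumptions):

const http = require('http');

function pollStatus(jobId) {
  const timer = setInterval(() => {
    http.get(`http://localhost:3000/status/${jobId}`, (res) => {
      let body = '';
      res.on('data', (chunk) => { body += chunk; });
      res.on('end', () => {
        const { status } = JSON.parse(body);
        console.log('job status:', status);
        if (status !== 'running') clearInterval(timer); // stop once finished
      });
    }).on('error', () => { /* transient error, retry on the next tick */ });
  }, 5000); // check every five seconds
}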
I'm new to web development and have some questions about HTTP requests and cron jobs. I npm installed cron and wanted to incorporate it into my app, where app.js receives requests from clients that add data into a database (using mongoose) from a form the client filled out. I want a script (executer.js) to be called every 10 seconds to execute a task that will use the data in that same database. Any suggestions on how I could accomplish this?
You don't need a cron job for this (though if you do need such a library, there is the excellent https://github.com/kelektiv/node-cron). I'd recommend using setInterval for your particular example.
See https://nodejs.org/api/timers.html#timers_setinterval_callback_delay_args for detailed documentation on this.
var intervalMs = 10000; // 10 seconds

function updateDB() {
  console.log("Updating db..");
  /* insert db update code here. */
}

setInterval(updateDB, intervalMs);
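If the task needs the same data the form handler stores, here is a hedged sketch of what updateDB might contain, using mongoose (the Submission model and its fields are hypothetical, and it assumes mongoose.connect(...) has already run in app.js):

const Submission = require('./models/submission'); // hypothetical model for the form data

async function updateDB() {
  // process anything the form handler stored that the task hasn't touched yet
  const pending = await Submission.find({ processed: false });
  for (const doc of pending) {
    // ... do the actual task with doc here ...
    doc.processed = true;
    await doc.save();
  }
}

setInterval(updateDB, 10 * 1000); // every 10 seconds, as asked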
I'll give a small premise of what I'm trying to do. I have a game concept in mind which requires multiple players sitting around a table somewhat like poker.
The normal interaction between different players is easy to handle via socket.io in conjunction with node js.
What I'm having a hard time figuring out is this: I have a cron job running in another process which fetches new information every minute, and that information then needs to be sent to each of those players. Since this is a different process, I'm not sure how to send it to particular clients.
socket.io's documentation does cover this, and I'm quoting it below:
In some cases, you might want to emit events to sockets in Socket.IO namespaces / rooms from outside the context of your Socket.IO processes.
There’s several ways to tackle this problem, like implementing your own channel to send messages into the process.
To facilitate this use case, we created two modules:
socket.io-redis
socket.io-emitter
From what I understand, I need these two modules to do what I mentioned earlier. What I do not understand, however, is why redis is in the equation when I just need to send some messages.
Is it used to just store the messages temporarily?
Any help will be appreciated.
There are several ways to achieve this if you just need to emit after an external event. It depends on what produces the new data you send:
/* if the other process reaches you via an incoming http post, you can, for example,
   use express and expose your io object through a custom middleware: */

// pass io into the req object
app.use('/incoming', (req, res, next) => {
  req.io = io;
  next(); // without this the request would hang in the middleware
});

// then you can do:
app.post('/incoming', (req, res, next) => {
  req.io.emit('incoming', req.body);
  res.send('data received from the http post request, then sent over the socket');
});

// if you fetch data every minute, why not just emit after your job runs:
var job = scheduleJob('* */1 * * * *', () => {
  axios.get('/myApi/someRessource').then(data => io.emit('newData', data.data));
});
Well, in the case socket.io describes, my reading is that you actually need both modules. However, that setup isn't necessarily what you want. And yes, redis is probably just used to store the data temporarily; it does a really good job there, being close to what a message queue does.
Your cron job, though, shouldn't need a message queue or similar behaviour.
My suggestion would be to run the cron job as a child_process from within your own node process, hook onto its readable stream, and then push directly to your sockets.
If the cron job process is also a nodejs process, you can exchange data through redis's pub/sub mechanism.
Let me know what your cron job process is written in, in case further help is required with the pub/sub mechanism.
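As a rough sketch of that exchange, using the redis npm package's classic (v3-style) API (the channel name and payload are made up):

const redis = require('redis');

// in the socket.io process: subscribe and forward to the players
const io = require('socket.io')(3000);
const sub = redis.createClient();
sub.subscribe('game-updates');
sub.on('message', (channel, message) => {
  io.emit('newData', JSON.parse(message));
});

// in the cron job process: publish the new information each minute
const pub = redis.createClient();
pub.publish('game-updates', JSON.stringify({ round: 12 }));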
redis is one of the memory stores used by socket.io (if you configure it).
You need redis only if you have a multi-server configuration (cluster), to establish connection and room/namespace sync between those node.js instances. It has nothing to do with storing data in this case; it works as a pub/sub machine.
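For completeness, the two modules from the quote wire together roughly like this (host, port, and event name are assumptions; the calls follow those packages' documented usage):

// in the socket.io server process: attach the redis adapter
const io = require('socket.io')(3000);
io.adapter(require('socket.io-redis')({ host: '127.0.0.1', port: 6379 }));

// in the cron job process: emit through redis without owning any sockets
const emitter = require('socket.io-emitter')({ host: '127.0.0.1', port: 6379 });
emitter.emit('newData', { tick: Date.now() }); // reaches every connected player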