Forever Node.js Script Hangs Up on Loop

I have made a Node.js script which checks for new entries in a MySQL database and uses socket.io to send data to the client's web browser. The script is meant to check for new entries approximately every 2 seconds. I am using Forever to keep the script running as this is hosted on a VPS.
I believe what's happening is that the for loop is looping infinitely (more on why I think that's the issue below). There are no error messages in the Forever-generated log file, and the script is "running" even after it starts to hang. Specifically, when it hangs, the script stops accepting browser requests on port 8888 and no longer serves the client-side socket.io JS files. I've done some troubleshooting and identified a few key components that may be causing this issue, but at the end of the day I'm not sure why it's happening and can't seem to find a workaround.
Here is the relevant part of the code:
http.listen(8888, function () {
    console.log("Listening on 8888");
});

function checkEntry() {
    pool.getConnection(function (err, connection) {
        connection.query("SELECT * FROM `data_alert` WHERE processtime > " + (Math.floor(new Date() / 1000) - 172800) + " AND pushed IS NULL", function (err, rows) {
            connection.release();
            if (!err) {
                if (Object.keys(rows).length > 0) {
                    var x;
                    for (x = 0; x < Object.keys(rows).length; x++) {
                        connection.query("UPDATE `data_alert` SET pushed = 1 WHERE id = " + rows[x]['id'], function () {
                            connection.release();
                            io.emit('refresh feed', 'refresh');
                        });
                    }
                }
            }
        });
    });
    setTimeout(function () {
        checkEntry();
        var d = new Date();
        console.log(d.getTime());
    }, 1000);
}
checkEntry();
Just a few interesting things I've discovered while troubleshooting...
This only happens when I run the script under Forever. It works completely fine if I run it from a shell and just leave my terminal open.
It starts to happen after 5-30 minutes of running the script; it does not hang on the first execution of the checkEntry function.
I originally tried this with setInterval instead of setTimeout; the issue remained exactly the same.
If I remove the setInterval/setTimeout call and run the checkEntry function only once, it does not hang.
If I take out the JavaScript for loop in the checkEntry function, the hang-ups stop (but obviously that for loop controls necessary functionality, so I at least have to find another way of writing it).
I've also tried using a for-in loop over the rows object and the behavior is exactly the same.
Any ideas would be immensely helpful at this point. I started working with Node.js just recently so there may be a glaringly obvious reason that I'm missing here.
Thank you.

So I just wanted to come back to this and address what the issue was. It took me quite some time to figure out, and it can only be explained by my own inexperience. There is a section of my script where my code contained the following:
app.get("/", (request, response) => {
// Some code to log things to the console here.
});
The issue was that I was not sending a response. The new code looks as follows and has resolved my hang up issues:
app.get("/", (request, response) => {
// Some code to log things to the console here.
response.send("OK");
});
The issue had nothing to do with the part of the code I presented in the initial question.
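For anyone reading along: the snippet in the question also releases the pooled connection twice (once right after the SELECT, then again inside every UPDATE callback) while still issuing queries on it. The hang turned out to be elsewhere, but a safer structure would release the connection exactly once, after the last UPDATE finishes. A sketch, keeping the original schema and queries:

function checkEntry() {
    pool.getConnection(function (err, connection) {
        if (err) { console.error(err); return; }
        connection.query("SELECT * FROM `data_alert` WHERE processtime > " + (Math.floor(new Date() / 1000) - 172800) + " AND pushed IS NULL", function (err, rows) {
            if (err || rows.length === 0) {
                connection.release();   // release exactly once
                return;
            }
            var remaining = rows.length;
            rows.forEach(function (row) {
                connection.query("UPDATE `data_alert` SET pushed = 1 WHERE id = " + row.id, function () {
                    io.emit('refresh feed', 'refresh');
                    if (--remaining === 0) connection.release();   // after the last UPDATE
                });
            });
        });
    });
    setTimeout(checkEntry, 1000);
}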

Related

How to detect that Chrome Extension with Manifest v3 was unloaded

Our Chrome extension has both content and background scripts communicating with each other. When the extension is updated, the background script is stopped and the content scripts start getting "Error: Extension context invalidated". In V2, we used the port.onDisconnect event as described here to clean things up. But in V3, this event is also fired after 5 minutes (when the background service worker is automatically terminated). So this event now means either extension unloading (and the cleanup should be done), or just a SW lifecycle event (no need to clean up, reconnecting is fine).
So the question is, how to unambiguously determine whether the cleanup is necessary.
I've tried:
chrome.management events (onDisabled etc.). But unfortunately chrome.management is undefined in my content script.
Checking for chrome.runtime.id inside the port.onDisconnect callback to determine whether the plugin was unloaded. But the id is still present at that moment.
Again inside port.onDisconnect, calling chrome.runtime.connect() again and catching the exception. But there's no exception! The port is created successfully, but it receives neither messages nor its own onDisconnect events.
Trying point 3 inside setTimeout(..., 0) and setTimeout(..., 100). The former doesn't produce an exception either. The latter does, but it introduces a delay of questionable duration (why 100? would it work if the CPU were overloaded?) and potential race conditions, since other plugin functionality could try to send messages in the meantime with unpredictable results. So I'd appreciate a more bullet-proof solution.
Thanks to wOxxOm's suggestions, I've found a solution that seems to work for now: every once in a while (< 5 minutes), disconnect the port in the content script and then reconnect it. The code looks like this:
let portToBackground: chrome.runtime.Port | undefined = openPortToBackground();

function openPortToBackground(): chrome.runtime.Port {
    const port = chrome.runtime.connect();
    const timeout = setTimeout(() => {
        console.log('reconnecting');
        portToBackground = openPortToBackground();
        port.disconnect();
    }, 2 * 60 * 1000); // 2 minutes here, just to be sure

    port.onDisconnect.addListener(() => {
        clearTimeout(timeout);
        if (port !== portToBackground) return;
        // a real disconnect, not our own scheduled reconnect: perform the cleanup
        portToBackground = undefined; // so isExtensionContextInvalidated() reflects it
    });

    return port;
}

export function isExtensionContextInvalidated(): boolean {
    return !portToBackground;
}
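For completeness, the background side only has to accept these connections; the original answer shows only the content-script half. A minimal sketch of what the service worker counterpart might look like:

// background service worker (a sketch; not part of the original answer)
chrome.runtime.onConnect.addListener((port) => {
    port.onDisconnect.addListener(() => {
        // fires when the content script disconnects in order to reconnect,
        // or when its tab goes away; nothing to clean up on this side
    });
});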

node.js socket.io : io.of('....')..... seems to run the code twice on page load and refresh

I have been trying to make some changes to the code below. At first I discovered that a function that returns a promise, in which a query is sent to the db for execution, was being run twice instead of once. I checked the query and the function itself just to make sure. Then I removed all the code inside io.of() except the socket.on() handlers, which didn't seem to be involved in this matter. After removing that code I put a simple console.log() statement inside, and it exhibited the same 'executed twice' problem.
io.of('....').on('connection', socket => {
    console.log("hello");
    //...
    //......
    // below are socket.on('...')... and nothing more
})
Adding this to the HTML and moving the code into a socket.on('load') handler inside io.of() fixed it for me.
$(document).ready(function () {
    socket.emit('load');
});
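For reference, the server side then handles the 'load' event inside the namespace. A minimal sketch of what that might look like (the namespace path and handler body are placeholders):

io.of('....').on('connection', socket => {
    socket.on('load', () => {
        // run the once-per-page-load logic here instead of directly
        // in the 'connection' handler
        console.log("hello");
    });
    // the remaining socket.on('...') handlers stay as before
});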

Nightmare doesn't run twice in a row - NodeJS

EDIT
I have noticed that removing the .end() call appears to solve the issue, but the Nightmare docs say of .end(): "Completes any queue operations, disconnect and close the electron process."
Now while this does solve the problem, am I now just opening more and more electron processes each time the route is called, which will eventually cause the server to run out of memory, or is this a safe way to fix the issue?
ORIGINAL TEXT
Please consider the following problem:
I am developing a Node based service that will allow the user to request screenshot of a particular URL.
For this I am using Nightmare to visit the URL, wait 2 seconds, take a screenshot (saved to disk), convert it to base64, delete the image, and then return the base64 string.
console.log('Nightmare starts');
nightmare
    .goto(url)
    .wait(2000)
    .screenshot(filename)
    .end()
    .then(function (result) {
        fs.exists(filename, function (exists) {
            if (exists) {
                data = fs.readFileSync(filename);
                var base64 = data.toString('base64');
                fs.unlink(filename);
                var output = {'message': 'success', 'map_image': base64};
                res.send(output);
            }
        });
    })
    .catch(function (error) {
        console.error('Search failed:', error);
    });
console.log("Nightmare Finished");
The above code works just fine the first time it runs. However, any subsequent calls just log "Nightmare starts" and "Nightmare Finished" instantly, with the actual code in between not running. I don't get any errors, and nothing is caught if I wrap it in a try/catch. Node requires a restart before it will work again.
Something worth noting is that I am running on a headless Ubuntu machine. As Electron (one of Nightmare's dependencies) appears to need a GUI, I am using xvfb to launch Node with the following command:
xvfb-run --auto-servernum --server-num=1 node server.js
I'm assuming this may be an issue with some resource not being released correctly on the first run, but any assistance would be appreciated.
I'm also open to any constructive criticism of my code; I'm very new to Node and I'm sure I'm not writing it in the most optimal way (sync file loading etc.).
It appears that you are simply misplacing where you create the Nightmare instances. I can't help much more without additional code and information.
Way 1
Create a Nightmare instance every time and close it after you are done with your task. It takes some time to boot up each instance, but it also lessens the memory load, and you can have multiple Nightmare instances for different users (a sketch follows below).
Way 2
Don't call .end(); re-use the same Nightmare instance. Have multiple Nightmare instances and queue the screenshot calls. Pages will load fast since there's no instance boot time, but the wait grows as the queue gets longer.
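A minimal sketch of Way 1, assuming an Express route (the route path and query parameter are made up). Per the Nightmare docs, .screenshot() without a path resolves with a PNG Buffer, which avoids the temp-file dance entirely:

const Nightmare = require('nightmare');

app.get('/screenshot', (req, res) => {
    const nightmare = Nightmare();        // fresh instance per request
    nightmare
        .goto(req.query.url)
        .wait(2000)
        .screenshot()                     // no path: resolves with a Buffer
        .end()                            // closes this request's Electron process
        .then(buffer => {
            res.send({ message: 'success', map_image: buffer.toString('base64') });
        })
        .catch(error => {
            console.error('Screenshot failed:', error);
            res.status(500).send({ message: 'error' });
        });
});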

Best way to manage unique child processes in node.js

I'm about to start coding a chat bot. However, I plan on running more than one, using a wrapper to communicate and restart them. I have done this in the past with child_process.fork(), but it was incredibly inefficient. I've looked into spawn and cluster as well, but they all seem to focus on running the same thing, not unique bots. As for plugins, I've looked into fleet, forkfriend, and workerfarm, but none seem to fit my needs.
Is there any plugin or way I'm not seeing to help me do this? Or am I just going to have to wing it again?
You can have as many chat bots as you wish in a single process. The rule of thumb in Node.js is one process per processor core, since Node has a slightly different multithreading model than what you might be used to.
Assuming you still need some multithreading on top of this, here are a couple of Node modules you might find fitting your needs:
node-webworker-threads, dnode.
UPDATE:
Now I see what you need. There is a nice example in the Node.js docs, which I saw recently. I'll just copy & paste it here:
var normal = require('child_process').fork('child.js', ['normal']);
var special = require('child_process').fork('child.js', ['special']);

// Open up the server and send sockets to child
var server = require('net').createServer();
server.on('connection', function (socket) {
    // if this is a VIP
    if (socket.remoteAddress === '74.125.127.100') {
        special.send('socket', socket);
        return;
    }
    // just the usual dudes
    normal.send('socket', socket);
});
server.listen(1337);
child.js looks like this:
process.on('message', function (m, socket) {
    if (m === 'socket') {
        socket.end('You were handled as a ' + process.argv[2] + ' person');
    }
});
I believe it's pretty much what you need. Launch several processes with different configs (if the number of configs is relatively low) and pass the socket to a particular one from the master process.
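Since the question also mentions a wrapper that restarts the bots, here is a minimal sketch of that part (the bot script names are made up), using fork and the child process's exit event:

var fork = require('child_process').fork;

function startBot(script) {
    var child = fork(script);
    child.on('exit', function (code) {
        console.log(script + ' exited with code ' + code + '; restarting');
        startBot(script);
    });
    return child;
}

// one unique script (or config) per bot
['bot-a.js', 'bot-b.js'].forEach(startBot);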

'job complete' event isn't firing in Kue

I can't see what I'm doing wrong; perhaps someone can point it out. I'm trying to figure out why my 'job complete' event isn't firing.
var kue = require('kue'),
    jobs = kue.createQueue();
var util = require('util');

var job = jobs.create('test', util.puts('123')).on('complete', function () {
    console.log("Job complete");
}).on('failed', function () {
    console.log("Job failed");
}).on('progress', function (progress) {
    process.stdout.write('\r job #' + job.id + ' ' + progress + '% complete');
});
Now when I run this in Node, it prints 123 but never says "Job complete".
This question is old, but there is still no solution on the internet for those who encounter this problem even after adding save()... I've worked out three steps for myself to solve the problem:
1. Make sure that you call the save() method on your jobs AFTER you set handlers on them.
var job = queue.create('some process', some_args);
job.on('complete', function (result) {
    console.log('complete');
}).on('failed', function (result) {
    console.log('failed');
}).removeOnComplete(true).save();
P.S. It's also good practice to remove jobs on completion; otherwise you'll fill up Redis memory.
2. Make sure your handlers are alright.
I experimented with the event handlers myself, trying to pass them several arguments. My 'failed' event handler accepted both an error code and other data I passed through the done(err, data) method. That was not right. So check the documentation and the official Kue examples to make sure your code isn't bugged.
3. If nothing helps, execute redis-cli flushall in your terminal.
And BEWARE: this will delete everything in your Redis instance. I'm a Redis novice myself, and on my system it's used only as a dependency of Kue. I don't know for certain, but I suppose this could destroy data you keep in Redis for other purposes. Still, it has somehow fixed the problem when nothing else helped.
Everyone, please feel free to suggest other, safer ways to fix Kue with Redis.
P.S. I haven't checked, but I suppose that changing the process name for your jobs (it's 'some process' in my example) might also work around the problem.
I think you need to run job.save() after the .create is executed.
As #James mentions, you must call .save() after the event handlers have been set.
See the example.
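Putting the pieces together, a minimal end-to-end sketch (the job type and data are illustrative). Note that without a queue.process() worker for the same job type, 'complete' can never fire:

var kue = require('kue');
var queue = kue.createQueue();

// something has to actually process the job, or it never completes
queue.process('test', function (job, done) {
    console.log('processing', job.data.payload);
    done();
});

var job = queue.create('test', { payload: '123' });
job.on('complete', function () {
    console.log('Job complete');
}).on('failed', function () {
    console.log('Job failed');
}).removeOnComplete(true).save();   // save() last, after the handlers are attached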
