How to add continuously running code into a nodejs postgresql client?

I'm stuck on a problem of wiring some logic into a nodejs pg client. The main logic has two parts; the first one connects to the postgres server and receives notifications, as follows:
var rules = {} // a rules object we are monitoring...

const { Client } = require('pg')

const pg_cli = new Client({
....
})
pg_cli.connect()
pg_cli.query('LISTEN zone_rules') // listen to the zone_rules channel
pg_cli.on('notification', msg => {
    // msg.payload arrives as a string; parse it first if you send JSON
    rules = msg.payload
})
This part is easy and runs without any issue. Now what I'm trying to implement is another function that keeps monitoring the rules: when an object is received and put into the rules, the function starts accumulating the time the object stays in the rules (it may be deleted by another notification from the pg server), and the monitoring function sends an alert to another server if the duration passes a certain time. I tried to write the code in the following style:
function check() {
    // watch and time accumulating code...
    process.nextTick(check)
}
check()
But I found that the notification handler then never got a chance to run! Does anybody have any idea about my problem? Or should I be doing it another way?
Thanks!!!

Well, I found that changing nextTick to setImmediate solves the problem. Callbacks queued with process.nextTick run before control returns to the event loop, so a function that reschedules itself with nextTick starves all I/O events, including the pg notifications. setImmediate queues the callback for the next event loop iteration instead, so pending I/O gets a chance to run between checks.
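For illustration, a minimal sketch of the monitoring loop; the threshold, the bookkeeping fields, and the alert call are placeholders, and it assumes rules is kept as an object keyed by id:

const ALERT_AFTER_MS = 60 * 1000 // hypothetical threshold

function check() {
    const now = Date.now()
    for (const id of Object.keys(rules)) {
        const entry = rules[id]
        if (!entry.firstSeen) entry.firstSeen = now // start the clock on arrival
        if (now - entry.firstSeen > ALERT_AFTER_MS) {
            // send the alert to the other server here...
        }
    }
    // setImmediate yields to the event loop, so pg notifications still arrive
    setImmediate(check)
}
check()

If sub-millisecond reaction time isn't needed, a plain setInterval(check, 1000) avoids spinning the CPU entirely.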

Related

Intermittent network timeouts while trying to fetch data in Node application

We have a NextJS app with an Express server.
The problem we're seeing is lots of network timeouts to the API we are calling (the underlying exception says "socket hang up"). However, that API does not show any errors or a slow response time. It's as if the API calls aren't even making it all the way to the API.
Theories and things we've tried:
Blocked event loop: we tried replacing synchronous logging with the asynchronous "winston" framework, to make sure we're not blocking the event loop. Not sure what else could be blocking (see the lag-monitor sketch below).
High CPU: the CPU can spike up to 60% sometimes. We're trying to minimize that spike by taking out some regexes we were using (since we heard those are expensive, CPU-wise).
Something about how big the JSON response is from the API? We're passing around a lot of data…
Too many complex routes in our Express routing structure: We minimized the number of routes by combining some together (which results in more complicated regexes in the route definitions)…
Any ideas why we would be seeing these fetch timeouts? They only appear during load tests and in production environments, but they can bring down the whole app with heavy load.
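One quick way to test the blocked-event-loop theory mentioned above is to measure how late timers fire; a minimal sketch, with an arbitrary 100 ms reporting threshold:

// Crude event-loop-lag monitor: if the timer fires much later than
// scheduled, something synchronous was hogging the loop.
const INTERVAL_MS = 500
let last = Date.now()
setInterval(() => {
    const now = Date.now()
    const lag = now - last - INTERVAL_MS
    if (lag > 100) console.warn('event loop lag: ' + lag + 'ms')
    last = now
}, INTERVAL_MS)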
The code that emits the error:
function socketCloseListener() {
    const socket = this;
    const req = socket._httpMessage;
    debug('HTTP socket close');

    // Pull through final chunk, if anything is buffered.
    // the ondata function will handle it properly, and this
    // is a no-op if no final chunk remains.
    socket.read();

    // NOTE: It's important to get parser here, because it could be freed by
    // the `socketOnData`.
    const parser = socket.parser;
    const res = req.res;
    if (res) {
        // Socket closed before we emitted 'end' below.
        if (!res.complete) {
            res.aborted = true;
            res.emit('aborted');
        }
        req.emit('close');
        if (res.readable) {
            res.on('end', function() {
                this.emit('close');
            });
            res.push(null);
        } else {
            res.emit('close');
        }
    } else {
        if (!req.socket._hadError) {
            // This socket error fired before we started to
            // receive a response. The error needs to
            // fire on the request.
            req.socket._hadError = true;
            req.emit('error', connResetException('socket hang up'));
        }
        req.emit('close');
    }
    // ...
}
The message is generated when the server does not send a response.
That's the easy bit.
But why would the API server not send a response?
Well, without seeing minimal code that reproduces this, I can only give you some pointers.
This issue here discusses at length the changes between version 6 and 8, in particular how a GET with a body can now cause it. This change of behaviour is more aligned with the REST specs.
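For reference, the error is easy to reproduce locally with a toy server that accepts connections but never responds; a sketch (the port is arbitrary):

const http = require('http')

// Server that accepts the connection, then closes it without a response.
const server = http.createServer((req, res) => {
    req.socket.destroy()
})

server.listen(3000, () => {
    http.get('http://localhost:3000', () => {})
        .on('error', err => console.error(err.message)) // "socket hang up"
})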

Azure function run code on startup for Node

I am developing a Chatbot using Azure Functions. I want to load some of the conversations for the Chatbot from a file. I am looking for a way to load this conversation data before the function app starts, with some function callback. Is there a way to load the conversation data only once, when the function app is started?
This question is actually a duplicate of Azure Function run code on startup. But that question was asked for C# and I wanted a way to do the same thing in NodeJS.
After like a week of messing around I got a working solution.
First some context:
The question at hand: running custom code at App Start for Node JS Azure Functions.
The issue is currently being discussed here and has been open for almost 5 years, and doesn't seem to be going anywhere.
As of now there is an Azure Functions "warmup" trigger feature, found here: AZ Funcs Warm Up Trigger. However, this trigger only runs when new instances are added as the app scales out, so the first, initial instance of your App won't run the "warmup" code.
Solution:
I created a start.js file and put the following code in there:
const ErrorHandler = require('./Classes/ErrorHandler');
const Validator = require('./Classes/Validator');
const delay = require('delay');

let flag = false;

module.exports = async () =>
{
    if (flag) return; // run-once guard, however many functions require this file
    flag = true;

    console.log('Initializing Globals');
    global.ErrorHandler = ErrorHandler;
    global.Validator = Validator;

    // this is just to test that it works with async funcs
    await delay(5000);
    // add additional logic...
    // await db.connect(); etc // initialize a db connection

    console.log('Done Waiting');
}
To run this code I just have to add
require('../start')();
to any of my functions. Just one function is fine. Since all of the function dependencies are loaded when you deploy your code, as long as this line is in one of the functions, start.js will run and initialize all of your global/singleton variables or whatever else you want done on function start. I made a literal function called "startWarmUp"; it is just a timer-triggered function that runs once a day (see the sketch below).
My use case is that almost every function relies on the ErrorHandler and Validator classes. And though making something a global variable is generally bad practice, in this case I didn't see any harm in making these 2 classes global so they're available in all of the functions.
Side Note: when developing locally you will have to include that function in your func start --functions <function requiring start.js> <other funcs> in order to have that startup code actually run.
Additionally, there is a feature request for this functionality that can be voted on here: Azure Feedback
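For completeness, a sketch of what that "startWarmUp" function could look like; the folder name and logging are illustrative, not taken from the original answer:

// startWarmUp/index.js -- hypothetical timer-triggered function whose only
// job is to pull in the startup code (the daily schedule lives in function.json)
require('../start')();

module.exports = async function (context, myTimer) {
    context.log('Warm-up function ran');
};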
I have a similar use case that I am also stuck on.
Based on this resource I have found a good way to approach the structure of my code. It is simple enough: you just need to run your initialization code before you declare your module.exports.
https://github.com/rcarmo/azure-functions-bot/blob/master/bot/index.js
I also read this thread, but it does not look like there is a recommended solution.
https://github.com/Azure/azure-functions-host/issues/586
However, in my case I have an additional complication in that I need to use promises, as I am waiting on external services to come back. These promises run within bot.initialise(). initialise() only seems to run when the first call to the bot occurs. Which would be fine, but as it is running a promise, my code doesn't block - which means that when 'listener(req, context.res)' is called, the listener doesn't exist yet.
The next thing I will try is to restructure my code so that bot.initialise returns a promise (see the sketch after the snippet below), but the code would be much simpler if there were an initialisation webhook that guaranteed the code within it executed at startup, before everything else.
Has anyone found a good workaround?
My code looks something like this:
var listener = null;

if (process.env.FUNCTIONS_EXTENSION_VERSION) {
    // If we are inside Azure Functions, export the standard handler.
    listener = bot.initialise(true);
    module.exports = function (context, req) {
        context.log("Passing body", req.body);
        listener(req, context.res);
    };
} else {
    // Local server for testing
    listener = bot.initialise(false);
}
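A sketch of the promise-based restructure described above, assuming bot.initialise can be changed to return a promise that resolves to the listener:

let listenerPromise = null;

if (process.env.FUNCTIONS_EXTENSION_VERSION) {
    // Inside Azure Functions: kick off initialisation once, reuse the promise.
    listenerPromise = bot.initialise(true);
    module.exports = async function (context, req) {
        const listener = await listenerPromise; // only actually waits on the first call
        listener(req, context.res);
    };
} else {
    // Local server for testing
    listenerPromise = bot.initialise(false);
}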
You can use a global (module-level) variable to load data before function execution.
var data = [1, 2, 3];
module.exports = function (context, req) {
    context.log(data[0]);
    context.done();
};
The data variable is initialized only once and is then reused across function calls, for as long as the host keeps the process alive.
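If the data has to be loaded asynchronously, the same trick works with a module-level promise. A minimal sketch, assuming the conversations live in a hypothetical conversations.json file next to the function:

const fs = require('fs').promises;
const path = require('path');

// Kicked off once at module load; every invocation awaits the same promise.
const dataPromise = fs.readFile(path.join(__dirname, 'conversations.json'), 'utf8')
    .then(JSON.parse);

module.exports = async function (context, req) {
    const conversations = await dataPromise;
    context.log(conversations.length);
};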

Is there any risk to read/write the same file content from different 'sessions' in Node JS?

I'm new to Node JS and I wonder whether the below-mentioned snippet of code has a multi-session problem.
Suppose I have a Node JS server (express) and I listen for some POST request:
app.post('/sync/:method', onPostRequest);

// a function declaration is hoisted, so it is safe to reference it
// in app.post() above
function onPostRequest(req, res) {
    // parse request and fetch email list
    var emails = [....]; // pseudocode
    doJob(emails);
    res.status(200).end('OK');
}
var fs = require('fs');
var _ = require('lodash');

function doJob(_emails) {
    try {
        var emailsFromFile = fs.readFileSync(FILE_PATH, "utf8") || {};
        if (_.isString(emailsFromFile)) {
            emailsFromFile = JSON.parse(emailsFromFile);
        }
        _emails.forEach(function (_email) {
            if (!emailsFromFile[_email]) {
                emailsFromFile[_email] = 0;
            } else {
                emailsFromFile[_email] += 1;
            }
        });
        // write the object back
        fs.writeFileSync(FILE_PATH, JSON.stringify(emailsFromFile));
    } catch (e) {
        console.error(e);
    }
}
So the doJob method receives the _emails list and I update (counter +1) these emails in the emailsFromFile object loaded from the file.
Suppose I get 2 requests at the same time, so doJob is triggered twice. I'm afraid that after one request has loaded emailsFromFile from the file, the second request might change the file content.
Can anybody shed some light on this issue?
Because the code in the doJob() function is all synchronous, there is no risk of multiple requests causing a concurrency problem.
If you were using async IO in that function, then there would be possible concurrency issues.
To explain, Javascript in node.js is single threaded. So, there is only one thread of Javascript execution running at a time and that thread of execution runs until it returns back to the event loop. So, any sequence of entirely synchronous code like you have in doJob() will run to completion without interruption.
If, on the other hand, you use any asynchronous operations such as fs.readFile() instead of fs.readFileSync(), then that thread of execution will return back to the event loop at the point you call fs.readFile(), and another request can run while it is reading the file. If that were the case, then you could end up with two requests conflicting over the same file. In that case, you would have to implement some form of concurrency protection (some sort of flag or queue). This is the type of thing that databases offer lots of features for.
I have a node.js app running on a Raspberry Pi that uses lots of async file I/O, and I can get conflicts in that code from multiple requests. I solved it by setting a flag any time I'm writing to a specific file; any other requests that want to write to that file first check that flag, and if it is set, those requests go into my own queue and are served when the prior request finishes its write operation. There are many other ways to solve this too. If this happens in a lot of places, then it's probably worth just getting a database that offers features for this type of write contention.
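One simple way to serialize async access, rather than a hand-rolled flag and queue, is to chain every job onto a single module-level promise. A minimal sketch, assuming FILE_PATH is defined as in the question (the counting detail is simplified):

const fs = require('fs').promises;

// All file access is chained through one promise, so jobs run one at a time.
let fileLock = Promise.resolve();

function doJobSafe(emails) {
    fileLock = fileLock
        .then(async () => {
            const text = await fs.readFile(FILE_PATH, 'utf8').catch(() => '{}');
            const counts = JSON.parse(text);
            for (const email of emails) {
                counts[email] = (counts[email] || 0) + 1;
            }
            await fs.writeFile(FILE_PATH, JSON.stringify(counts));
        })
        .catch(console.error); // keep the chain alive after a failed job
    return fileLock;
}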

ws.onclose event does not get called on nodejs exit

I use web sockets and the onclose method does not get triggered when the server is down.
I tried to close the ws connections manually on process exit, but it seems this only gets triggered on a clean exit.
Is there a way to catch a stopped nodejs app and execute code before the stop?
It should trigger on Ctrl+C and any other crash/exit.
Basically I want to tell the client when the connection is not alive anymore, before it tries to do something. Since onclose does not handle every case, what can I do to check the connection?
The only solution I came up with is to ping the server from time to time.
Since catching the exit is not possible, I started sending pings as a workaround:
var pingAnswer = true;
var pingInterval = setInterval(function () {
    if (pingAnswer) {
        // on the server side, a ping is simply sent back each time one is received
        ws.send(JSON.stringify({type: 'ping'}));
        pingAnswer = false;
    } else {
        clearInterval(pingInterval);
        // reload page or something
    }
}, 1000);

ws.onmessage = function (e) {
    var m = JSON.parse(e.data);
    switch (m.type) {
        ....
        case 'ping':
            pingAnswer = true;
            return;
    }
};
You don't provide a snippet showing how you're defining your on('close',...) handler, but it's possible that you're defining the handler incorrectly.
At the time of this writing, the documentation for websocket.onclose led me to first implement the handler as ws_connection.onclose( () => {...});, but I've found the proper way to be ws_connection.on('close', () => {...} );
Maybe the style used in the ws documentation is some kind of idiom I'm unfamiliar with.
I've tested this with node 6.2.1 and ws 1.1.1. The on('close',...) callback is triggered on either side (server/client) when the corresponding side is shut down via Ctrl+c or crashes for whatever reason (for example, for testing, JSON.parse("fail"); or throw new Error('');).
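A minimal sketch of both close handlers with the ws package (the port is arbitrary):

const WebSocket = require('ws');

const wss = new WebSocket.Server({ port: 8080 });
wss.on('connection', socket => {
    socket.on('close', () => console.log('server: client went away'));
});

const client = new WebSocket('ws://localhost:8080');
client.on('close', () => console.log('client: connection closed'));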

node-vertica throws exception for multi-queries with a callback

I have created a simple web interface for Vertica.
I expose simple operations on top of a Vertica cluster.
One piece of functionality I expose is querying Vertica.
When my user enters a multi-query, the node module throws an exception and my process exits with code 1.
Is there any way to catch this exception?
Is there any way to overcome the problem in a different way?
Right now there's no way to overcome this when using a callback for the query result.
Preventing this from happening would involve making sure there's only one query in the user's input. This is hard because it involves parsing SQL.
The callback API isn't built to deal with multi-queries. I simply haven't bothered implementing proper handling of this case, because this has never been an issue for me.
Instead of a callback, you could use the event listener API, which will send you lower level messages, and handle this yourself.
var q = conn.query("SELECT...; SELECT...");
q.on("fields", function(fields) { ... }); // 1 time per query
q.on("row", function(row) { ... });       // 0...* times per query
q.on("end", function(status) { ... });    // 1 time per query
