How to debug request-promise MaxListenersExceededWarning - node.js

I've been working on a set-up script for a database, and this evening I began getting MaxListenersExceededWarning warnings in my console.
I traced the warnings to specific request-promise-native calls. I initially thought it might have something to do with payload size; however, the warning is not happening on the largest request. I'm really lost on how to debug this further and get to the bottom of these warnings.

I figured out what I was doing wrong:
I am using a logger called pino in my application. A lot of my set-up methods had an optional logger parameter that defaults to a stdout instance of pino when undefined. This default behavior was creating multiple write streams, hence the MaxListenersExceededWarning. I changed my code to pass the same stdout logger instance to all methods, and now there are no more warnings!
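For illustration, here is a minimal sketch of the difference, assuming pino is installed; the setup function name is hypothetical and not from the original post:

const pino = require('pino');

// Before: every call without an explicit logger created a fresh pino
// instance (and another stdout write stream), which is what triggered
// the MaxListenersExceededWarning described above.
function setupUsers(logger = pino()) {
  logger.info('setting up users');
}

// After: create one logger up front and pass the same instance everywhere.
const logger = pino();
setupUsers(logger);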

Related

Implement logging levels for aws lambda

What is the recommended way to implement logging levels for AWS Lambda functions in Node.js? I was going through many third-party libraries, e.g. Winston, winston-cloudwatch, logplease, but it seems like we can also achieve this using the native console, e.g.
console.log(), console.error(), console.warn(), console.info()
Any recommendations?
The relevant code is here:
https://github.com/aws/aws-lambda-nodejs-runtime-interface-client/blob/a850fd5adad5f32251350ce23ca2c8934b2fa542/src/utils/LogPatch.ts#L69-L89
So, you can use 7 console methods to get 6 CloudWatch log levels:
FATAL: console.fatal()
ERROR: console.error()
WARN: console.warn()
INFO: console.info() or console.log()
DEBUG: console.debug()
TRACE: console.trace()
console.trace() doesn't produce the same stack trace it produces in plain Node 14
console.fatal() is missing in plain Node 14; it's added by the AWS Lambda runtime
This was tested with the Node 14.x runtime on Aug 25 2022. YMMV.
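As a quick illustration (a sketch, not taken from the linked runtime code), a handler like this would emit entries at the corresponding CloudWatch levels:

exports.handler = async (event) => {
  console.debug('DEBUG-level entry');  // DEBUG
  console.info('INFO-level entry');    // INFO (console.log() maps here too)
  console.warn('WARN-level entry');    // WARN
  console.error('ERROR-level entry');  // ERROR
  // console.fatal() is only available inside the Lambda runtime, per the note above.
  return { statusCode: 200 };
};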
Since the Lambda console output goes directly into CloudWatch Logs, you really don't need to use something like Winston CloudWatch if that is your preferred log destination. If you wanted to send the logs somewhere else like Loggly then you might want to use something like Winston Loggly.
However, even if you just want to send all console output to CloudWatch Logs, I would still recommend using a basic Winston configuration, so that you could quickly and easily enable debug logging, for example through an environment variable, and then turn off debug logging once you are ready to use the Lambda function in production.
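A minimal sketch of such a configuration, assuming the winston package is bundled with the function and a hypothetical LOG_LEVEL environment variable:

const winston = require('winston');

// The level comes from the environment, so enabling or disabling debug
// logging is a single Lambda configuration change, not a redeploy.
const logger = winston.createLogger({
  level: process.env.LOG_LEVEL || 'info',
  transports: [new winston.transports.Console()],
});

exports.handler = async (event) => {
  logger.debug('only emitted when LOG_LEVEL=debug');
  logger.info('handled event');
  return { statusCode: 200 };
};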

How to debug mystery ENOTFOUND?

I am getting random restarts on a PM2-managed Node.js cluster. The only symptom in the error log is the following pattern - an ENOTFOUND in dns.js.
Error: getaddrinfo ENOTFOUND walkinto.inhttp walkinto.inhttp:80
at errnoException (dns.js:28:10)
at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:76:26)
Clearly the problem is a malformed server name - walkinto.inhttp is incorrect; it should be walkinto.in. The challenge is that this host name is not hard-coded anywhere. There are many places in this fairly large code base that perform name resolution, and the host names are built dynamically.
I have spent considerable time trying to pinpoint the root cause but so far have had no luck. I need help printing more log information from dns.js; a call stack would probably help me move forward.
Q1: How to enable more detailed logging on Node.js core modules?
Q2: What could cause a Node.js restart to happen on an ENOTFOUND? How to avoid a restart - this path is not desirable.
Q3: Are there any smarter ways to troubleshoot this problem?
Since there's no way for us to help you solve the issue without some code to go on, I'll answer your questions:
How to enable more detailed logging on Node.js core modules?
Run node with the inspect option and attach to the debugger with Chrome DevTools or another application. See these links:
https://nodejs.org/api/debugger.html
https://nodejs.org/en/docs/guides/debugging-getting-started/
What could cause a Node.js restart to happen on an ENOTFOUND? How to avoid a restart - this path is not desirable.
The Node runtime isn't restarting. The error you're seeing is generated from something similar to throw new Error(`getaddrinfo ${err}`), and any uncaught error from throw will crash the runtime.
The restart is happening because you run the app via PM2; it can be disabled by passing the --no-autorestart option to PM2. If you want to keep the application from crashing, wrap the code that can generate this error so the error is caught (e.g. in a try/catch block or an error handler), and try to recover from it.
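One caveat, as a sketch under the assumption that these lookups come from requests made with the core http module: getaddrinfo failures on an outgoing request surface as an 'error' event rather than a synchronous throw, so attaching a listener is usually the more reliable guard:

const http = require('http');

const req = http.get('http://walkinto.in/', (res) => {
  res.resume(); // drain the response
});

// Without this handler, an ENOTFOUND from DNS becomes an uncaught
// exception, the process exits, and PM2 restarts it.
req.on('error', (err) => {
  console.error('request failed:', err.message);
});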
Are there any smarter ways to troubleshoot this problem?
This is most likely not an issue with the dns core module. If I understand correctly, you are performing name resolution on dynamically generated data, and that is most likely your issue. Somewhere in the code you have one or more functions that either fail to validate the generated data or generate invalid data due to a bug. We can't help you solve that, unfortunately, since you haven't provided any code to go on. It would be great if you could try to pinpoint what code might cause this and update the question with it.
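One way to pinpoint where the malformed host name is produced is to temporarily monkey-patch dns.lookup early in start-up and log a stack trace whenever a suspicious name comes through. This is a rough diagnostic sketch; whether it intercepts every resolution path depends on the Node version and the HTTP client in use:

const dns = require('dns');
const originalLookup = dns.lookup;

dns.lookup = function (hostname, ...args) {
  if (typeof hostname === 'string' && hostname.includes('http')) {
    // Log the offending name and where the lookup was triggered from.
    console.error('Suspicious hostname:', hostname);
    console.error(new Error('lookup call site').stack);
  }
  return originalLookup.call(this, hostname, ...args);
};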
I was getting this error in my request that was something like this:
var optionsSearch = {
  host: 'https://mysite.sharepoint.com',
  path: '_api/search/query?querytext="sharepoint"',
  method: 'GET'
};
All I did was remove the https://, leaving only mysite.sharepoint.com, and it was fixed.
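In other words, host should carry only the host name, not the protocol; the corrected options (same values as above, with the prefix removed) would look like this:

var optionsSearch = {
  host: 'mysite.sharepoint.com',  // no https:// prefix here
  path: '_api/search/query?querytext="sharepoint"',
  method: 'GET'
};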

After NodeJS script has an error, it will not start again. AWS Lambda

I have a Lambda function on the nodejs4.x runtime. If my script stops execution due to an error - let's say I try to get .length of an undefined object - then I can't start the script again. It's not even like the script runs and hits the same error; the script doesn't run at all. The Lambda handler function is never called the second time.
This Lambda function is the endpoint for Amazon Alexa. When I re-upload the code (a zip file), the system works again.
Is this some behavior of Node.js? Is the script ending prematurely corrupting the files so it cannot start again?
When the server hits an error I get this message: Process exited before completing request
And then subsequent requests hit the timeout limit.
Important Edit
I have pinpointed the issue to the npm request module. The module doesn't finish loading, i.e.:
console.log('i see this message');
var request = require('request');
console.log('this never happens');
A couple of things that I know:
If a Lambda invocation fails, for any reason, it will be invoked again (actually it will be retried at most 3 times).
However, this is only true for asynchronous invocations; there are two types of invocation.
Any external module that your Lambda's code requires must be included in the package that you deploy to Lambda; I have explained this simply here.
You can write code that accesses a property of an undefined variable; yes, it will throw an exception, and if the invocation is asynchronous it will be retried 2 more times - which will fail too, of course.
Since the Lambda function fails when calling require('request') I believe that the project has not been deployed correctly. request must be deployed with the Lambda function because it is not part of Node.js 4.3.2 (the current Lambda JavaScript runtime).
Make sure that:
request is added to your package.json file (e.g. by calling $ npm install request --save, see npm install for details).
You create a deployment package by zipping your project folder (including the node_modules folder).
The deployment .zip is uploaded to your Lambda function.
So after contacting AWS through their forums, this turns out to be a bug. The container is not cleared upon an error, so the code has to be re-uploaded.
A solution is to create a CloudWatch alarm that fires another Lambda function which re-uploads the code automatically.
They are working on a fix.
Forum post: https://forums.aws.amazon.com/thread.jspa?threadID=238434&tstart=0
In fact there are many cases when Lambda becomes unresponsive, e.g.:
Parsing invalid JSON:
exports.handler = function (event, context, callback) {
  var nonValidJson = "Not even Json";
  var jsonParse = JSON.parse(nonValidJson); // throws SyntaxError
};
Accessing a property of an undefined variable:
exports.handler = function (event, context, callback) {
  var emptyObject = {};
  var value = emptyObject.Item.Key; // throws TypeError: Cannot read property 'Key' of undefined
};
Not closing a MySQL connection after accessing RDS leads to a Lambda timeout, and then the function becomes unresponsive.
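As an illustration of the last case, here is a minimal sketch using the mysql package (connection details hypothetical); ending the connection lets the invocation finish instead of timing out:

var mysql = require('mysql');

exports.handler = function (event, context, callback) {
  var connection = mysql.createConnection({
    host: 'my-rds-endpoint',
    user: 'app',
    password: 'secret',
    database: 'mydb'
  });

  connection.query('SELECT 1', function (err, results) {
    // Always end the connection; leaving it open keeps the event loop
    // busy and the invocation runs until it hits the timeout.
    connection.end();
    callback(err, results);
  });
};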
Making a Lambda function that re-uploads the code can take some time.
After some tests it turns out that Lambda does in fact try to restart (reload the container?); there is just not enough time. If you set the timeout to 10s, Lambda starts working after ~4s of execution time, and subsequent runs behave normally. I've also tried playing with setting:
context.callbackWaitsForEmptyEventLoop = false;
and putting all 'require' statements inside the handler; nothing really worked. So a good way to prevent Lambda from becoming dead is to set a bigger timeout; 10s should be more than enough as a workaround for this bug.

Random 'ECONNABORTED' error when using sendFile in Express/Node

I have set up a Node server with Express middleware. I get the ECONNABORTED error randomly on some files when loading an HTML file that triggers about 10 other loads (js, css, etc.). The exact error is:
{ [Error: Request aborted] code: 'ECONNABORTED' }
Generated by this simplified code (after I tried to debug the issue):
res.sendFile(res.locals.physicalUrl, function (err) {
  if (err)
    console.log(err);
  // ...
});
Many posts attribute this error to not specifying the full path name. That is not the situation here: I do specify the full path, and the error is indeed random. There are times when the page and all its subsequent links load perfectly, and there are times when they do not. I tried flushing the cache and did not find any pattern connecting it to this.
This specific error appears to be a generic term for a socket connection getting aborted and is discussed in the context of other applications like FTP.
Having realized that the Node worker threads can be increased, I tried to do so using:
process.env.UV_THREADPOOL_SIZE = 20;
However, my understanding is that even without this, at worst the file transfer would have to wait for a worker thread to become free, not get aborted. I am not talking about big files here; all files are less than 1 MB.
I have a gut feeling that this has nothing to do with Node directly.
Please point to any other possibilities (Node or otherwise) to handle this error. Also, any other indirect solutions? Retrying a few times could be one, but that would be clumsy. EDIT: No, I cannot retry. The headers have already been sent when the error occurs!
A SIDE NOTE:
Many examples of sendFile skip the callback, giving the impression that it is a synchronous call. It is not. Always use the callback, check for success, and only then move on to the "next" middleware, or take appropriate steps if the send fails for whatever reason. Not doing so can make the consequences difficult to debug in an asynchronous environment.
See https://stackoverflow.com/a/36949631/2798152
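A minimal sketch of that pattern in an Express handler (the route and file path are hypothetical):

const express = require('express');
const path = require('path');
const app = express();

app.get('/page', function (req, res, next) {
  const file = path.join(__dirname, 'public', 'index.html');
  res.sendFile(file, function (err) {
    if (err) {
      // The response may already be partially sent, so log the error
      // and hand it to the error middleware instead of resending.
      console.log(err);
      return next(err);
    }
    // Success: safe to do any follow-up work here.
  });
});

app.listen(3000);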
Could it be possible that in some cases you terminate the connection by calling res.end before the asynchronous call to res.sendFile ends?
If that's not the case - can you pastebin more of your application code?
Uninstalling and re-installing MongoDB solved this for me.
I was facing the same problem. It started happening when I had to force-restart my laptop because it became unresponsive. After restarting, trying to connect to the Mongo server using Node.js always threw the ECONNABORTED error.

Winston not logging at any log level, what could be wrong?

I have run into a weird bug and don't know how to proceed or debug it. I have an app that's written in Node.js and uses Winston for logging. Everything was working fine until I brought up a new production server yesterday and retired the old one.
My prod server has 4 Node.js processes running. On the new production server, Winston logs only the very first log message per .js file, period. It stops logging after that, and changing the log level doesn't help. My app has about 6 .js files, and in case of any error in any of those files, the very first error message gets logged but any subsequent errors/warnings/info are not.
The funny thing is that Winston was working just fine on the old prod server, and the dev server still works fine.
I am on Winston 0.6.2 on both dev and prod. As far as I know, all the software packages are the same between dev and prod.
How can I debug this issue?
After some research, I came across this issue => https://github.com/flatiron/winston/issues/227
It looks like the new way of handling streams in the latest version of Node has broken the file transport in Winston. I am going back to Node v0.8.22 for the time being as a workaround.
What transports are you using for logging? Does the console transport work? Perhaps the new production server has a network issue that prevents it from logging to a remote service such as CouchDB or Loggly.
If you add a simple console.log('...') line next to your Winston log lines, do those get fired? This will confirm or deny that your Winston log lines are being called on the production server:
winston.info('winston log test')
console.log('console log test')
You can expose the logger instance and have a URL to trigger the required log level.
I had the same need, so I came up with a dynamic log level setter for Winston: https://github.com/yannvr/Winston-dynamic-loglevel
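A rough sketch of the idea using Express and a hypothetical /loglevel/:level route (this uses the modern winston.createLogger API, not the 0.6.x version from the question, and it is not the linked project's actual interface):

const express = require('express');
const winston = require('winston');

const logger = winston.createLogger({
  level: 'info',
  transports: [new winston.transports.Console()],
});

const app = express();

// Flip the log level at runtime without restarting the process.
app.get('/loglevel/:level', function (req, res) {
  logger.level = req.params.level; // e.g. 'debug', 'info', 'warn'
  res.send('log level set to ' + logger.level);
});

app.listen(3000);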
