I have set up a Node server with Express middleware. I get the ECONNABORTED error randomly on some files when loading an HTML file that triggers about 10 other loads (js, css, etc.). The exact error is:
{ [Error: Request aborted] code: 'ECONNABORTED' }
Generated by this simplified code (after I tried to debug the issue):
res.sendFile(res.locals.physicalUrl, function (err) {
    if (err)
        console.log(err);
    ...
});
Many posts talk about this error resulting from not specifying the full path name. That is not the situation here. I do specify the full path, and the error really is generated at random. There are times when the page and all its subsequent links load perfectly, and there are times when they do not. I tried flushing the cache and did not find any pattern connecting it to this.
This specific error appears to be a generic term for the socket connection getting aborted and is discussed in the context of other applications like FTP.
Having realized that the node worker threads can be increased, I tried to do so using:
process.env.UV_THREADPOOL_SIZE = 20;
However, my understanding is that even without this, at worst a file transfer might have to wait for a worker thread to become free, not get aborted. I am not talking about big files here; all files are less than 1 MB.
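A note on the threadpool setting above (this is my understanding of libuv's behaviour, so treat it as an assumption): the pool size is read when the pool is first used, which means the in-process assignment only takes effect if it runs before any fs/dns.lookup/crypto work. Setting it from the shell sidesteps the timing question entirely:

// In-process: must execute before the first threadpool-backed call.
process.env.UV_THREADPOOL_SIZE = 20;

// Shell alternative (entry point name is illustrative):
//   UV_THREADPOOL_SIZE=20 node server.js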
I have a gut feeling that this has nothing to do with node directly.
Please point to any other possibilities (Node or otherwise) to handle this error. Also, are there any indirect solutions? Retrying a few times could be one, but that would be clumsy. EDIT: No, I cannot retry; the headers have already been sent when the error occurs!
A SIDE NOTE:
Many examples on the use of sendFile skip the callback, thereby giving the impression that it is a synchronous call. It is not. Use the callback at all times, check for success, and only then move on to the "next" middleware, or take appropriate steps if the send fails for whatever reason. Not doing so can make it difficult to debug the consequences in an asynchronous environment.
See https://stackoverflow.com/a/36949631/2798152
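A minimal sketch of that pattern, using res.locals.physicalUrl from the question (the route and app object are purely illustrative):

app.get('/static/*', function (req, res, next) {
    res.sendFile(res.locals.physicalUrl, function (err) {
        if (err) {
            console.log(err);
            return next(err);   // hand the failure to the error-handling middleware
        }
        // Success: the file has been fully streamed to the client.
    });
});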
Could it be possible that in some cases you terminate the connection by calling res.end before the asynchronous call to res.sendFile ends?
If that's not the case - can you pastebin more of your application code?
Uninstalling and reinstalling MongoDB solved this for me.
I was facing the same problem. It started happening when I had to force-restart my laptop because it became unresponsive. After restarting, trying to connect to the Mongo server using Node.js always threw an ECONNABORTED error.
It doesn't happen regularly, but I was able to catch it in my debugger:
And then, because of that trailing comma, things get further complicated. That error is not even well-formed JSON, which makes it throw a DeserializationError, which is how it reaches my code. We can ignore this one; I just needed to bitch about it a little bit.
How do we find out why the "No server available to handle the request" error is happening? How can we mitigate it?
If this is the expected behavior of an overloaded ES cluster, how do we properly handle this error specifically?
Here are the health metrics:
Here's context around the error:
I am getting random restarts on a PM2-managed Node.js cluster. The only symptom I get in the error log is of the following pattern: an ENOTFOUND from dns.js.
Error: getaddrinfo ENOTFOUND walkinto.inhttp walkinto.inhttp:80
    at errnoException (dns.js:28:10)
    at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:76:26)
Clearly the problem is a malformed server name: walkinto.inhttp is incorrect and should be walkinto.in. The challenge is that this host name is not hard-coded anywhere. There are many places in this fairly large code base that perform name resolution, and the host names are dynamic.
I have spent considerable time trying to pinpoint the root cause but so far have had no luck. I need help printing more log information from dns.js; a call stack may help me move forward.
Q1: How can I enable more detailed logging in Node.js core modules?
Q2: What could cause a Node.js restart to happen on an ENOTFOUND? How can I avoid a restart? This path is not desirable.
Q3: Is there any other, smarter way to troubleshoot this problem?
Since there's no way for us to help you solve the issue without some code to go on, I'll answer your questions:
How can I enable more detailed logging in Node.js core modules?
Run node with the inspect option and attach to the debugger with Chrome DevTools or another application. See these links:
https://nodejs.org/api/debugger.html
https://nodejs.org/en/docs/guides/debugging-getting-started/
What could cause a Node.js restart to happen on an ENOTFOUND? How can I avoid a restart? This path is not desirable.
The Node runtime isn't restarting itself. The error you're seeing is generated by something similar to throw new Error(`getaddrinfo ${err}`), and any error that is thrown and never caught will crash the runtime.
The restart is happening because you run the app via PM2; it can be disabled by passing the --no-autorestart option to PM2. If you want to keep the application from crashing, you should wrap whatever code this could be generated from in a try/catch block and try to recover from the error.
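As a hedged illustration (not necessarily your exact call site): for outgoing HTTP requests, the getaddrinfo failure is usually delivered as an 'error' event on the request object rather than a synchronous throw, so attaching a handler is what keeps it from becoming an uncaught exception:

var http = require('http');

var req = http.get('http://walkinto.inhttp/', function (res) {
    res.resume();   // drain the response if the lookup ever succeeds
});

req.on('error', function (err) {
    // ENOTFOUND lands here instead of crashing the process.
    console.error('Request failed:', err.message);
});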
Is there any other, smarter way to troubleshoot this problem?
This is most likely not an issue with the dns stdlib module. If I understand correctly, you are performing name resolution on dynamically generated data, and that is most likely your issue. Somewhere in the code you have one or more functions that are either not validating the generated data or are generating invalid data due to a bug. We can't help you solve that, unfortunately, since you haven't provided any code to go on. It would be great if you could try to pinpoint what code might cause this and update the question with it.
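One way to narrow it down (a rough sketch, assuming the resolutions go through dns.lookup, which is what http/net use by default): temporarily wrap dns.lookup early in your entry file so every ENOTFOUND logs the offending hostname together with the stack of the call that requested it.

var dns = require('dns');
var originalLookup = dns.lookup;

dns.lookup = function (hostname) {
    // Capture the stack now, before the async hop loses it.
    var issuedAt = new Error('dns.lookup("' + hostname + '")').stack;
    var args = Array.prototype.slice.call(arguments);
    var cb = args[args.length - 1];

    if (typeof cb === 'function') {
        args[args.length - 1] = function (err) {
            if (err && err.code === 'ENOTFOUND') {
                console.error('ENOTFOUND for host:', hostname);
                console.error('Lookup was issued from:\n' + issuedAt);
            }
            return cb.apply(this, arguments);
        };
    }

    return originalLookup.apply(dns, args);
};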
I was getting this error in a request that looked something like this:
var optionsSearch = {
    host: 'https://mysite.sharepoint.com',
    path: '_api/search/query?querytext="sharepoint"',
    method: 'GET'
};
All I did was remove the https://, leaving only mysite.sharepoint.com, and it was fixed.
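For completeness, the working shape of the options (the host fix is the point of this answer; the leading slash and URL-encoded path are my assumptions, since http.request expects both):

var optionsSearch = {
    host: 'mysite.sharepoint.com',                           // bare hostname, no scheme
    path: '/_api/search/query?querytext=%22sharepoint%22',   // leading slash, encoded quotes
    method: 'GET'
};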
We're currently in the process of updating from Node 0.10 to Node 4.1.2 and we're seeing some weird patterns. The number of connections to our Postgres database doubles, and we're seeing the same pattern with requests to external services. We are running a clustered app using the native cluster API, and the number of workers is the same for both versions.
I'm failing to understand why upgrading the runtime language would apparently change application behaviour by doubling requests to external services.
One of the interesting things I've noticed with 0.12 and 4.x is the change in garbage collection. I've not used the pg module before, so I don't know internally how it maintains its pools or whether it would be affected by memory or garbage collection. If you haven't defined a memory setting for Node, you could try giving that a shot and see if you get different results.
node --max_old_space_size=<some sane value in MB>
I ran into something similar, but I was getting double file writes. I don't know your exact case, but I've seen a scenario where requests could almost exactly double.
In the update to 4.1.2, process.send and child.send have gone from synchronous to asynchronous.
I found an issue like this:
var fork = require('child_process').fork;

var child = fork('./request.js');
var test = null;

child.send(smallRequest);   // placeholder payloads
child.send(largeRequest);

child.on('message', function (val) {
    console.log('small request came back: ' + val);
    test = val;
});

if (!test) {
    // retry request
} ...
So whereas the previously blocking sends allowed this code to work, with the non-blocking version the check runs before the response arrives, an error is assumed, and the request is retried. No error actually occurred, so double the requests come in.
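One way to adapt (a sketch building on the snippet above; isError and retryRequest are hypothetical helpers): react inside the 'message' handler instead of inspecting shared state right after sending.

child.on('message', function (val) {
    if (isError(val)) {
        retryRequest();                 // only retry when the child reports a failure
    } else {
        console.log('request came back: ' + val);
        test = val;
    }
});

child.send(smallRequest);
child.send(largeRequest);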
I'm working on a project which uses the npm request package for making requests to an API server. On getting a response, the callback processes the returned data. During this response processing I get the error: Failed to receive keepalive! Exiting. The following code will help you understand.
var request = require('request');

request({
    url: 'http://api-link-from-where-data-is-to-be-fetched'
}, function (err, res, body) {
    // The code for processing the response
});
Can anybody who knows how to resolve this issue please help me?
This might help answer this for you:
https://github.com/meteor/meteor/issues/1302
The last post on that page says:
Note that this is just a behavior of the develop-mode meteor run (and any hosting environment that chooses to turn on the keepalive option, which probably isn't most of them), not a production issue. And in any case, if your Node process is churning CPU for seconds, it's not going to be able to respond to any network traffic.
This post might help you: Meteor error message: "Failed to receive keepalive! Exiting."
Removing autopublish with meteor remove autopublish and then writing my own publish and subscribe functions fixed the problem.
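If it helps, a hypothetical replacement for autopublish looks roughly like this (Items is a placeholder collection; the idea is to publish only what the client actually needs so the server isn't churning through the whole dataset):

if (Meteor.isServer) {
    Meteor.publish('items', function () {
        return Items.find({}, { limit: 100 });   // send a bounded subset, not everything
    });
}

if (Meteor.isClient) {
    Meteor.subscribe('items');
}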
What is the best way in Node to handle unhandled exceptions that are coming out of core Node code? I have a background process that runs and crawls web content and will run for long periods of time without issue, but every so often an unexpected exception occurs and I can't seem to gracefully handle it. The usual culprit appears to be some networking issue (lost connectivity) where the HTTP calls I'm making fail. All of the functions that I have created follow the pattern of FUNCTION_NAME(error, returned_data), but in the situations where the error occurs I'm not seeing any of the functions I created in the call stack that is printed out; instead it's showing some of the core Node modules. I'm not really worried about these infrequent errors and their root cause; the purpose of this posting is just to find a graceful way of handling these exceptions.
I've tried putting a try/catch at the top level of my code, under which everything runs, but it doesn't seem to capture these exceptions. Is it good practice in Node to use try/catch within all the lower-level functions that use any core code? Or is there some way to globally capture all unhandled exceptions?
Thanks
Chris
UPDATED TO ADD STACK
node.js:201
        throw e; // process.nextTick error, or 'error' event on first tick
              ^
Error: connect Unknown system errno 10060
    at errnoException (net.js:642:11)
    at Object.afterConnect [as oncomplete] (net.js:633:18)
is there some way to globally capture all unhandled exceptions?
You can catch all exceptions using process.on('uncaughtException'). Listening for this event avoids the default action of printing the stack and exiting. However, be aware that swallowing exceptions this way may lead to problems in your app's execution.
Link: http://nodejs.org/docs/latest/api/process.html#process_event_uncaughtexception
Pay attention to the documentation advice:
Note that uncaughtException is a very crude mechanism for exception handling. Using try / catch in your program will give you more control over your program's flow. Especially for server programs that are designed to stay running forever, uncaughtException can be a useful safety mechanism.
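A minimal sketch of the listener (what you do inside it is up to you; many setups log, finish in-flight work, and then exit deliberately rather than continuing with possibly corrupt state):

process.on('uncaughtException', function (err) {
    console.error('Uncaught exception:', err.stack || err);
    // The process stays alive because a listener is attached; exit on purpose
    // here if you would rather restart cleanly than run on in an unknown state.
});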
To catch network errors and avoid the default behavior (printing the stack and exiting), you have to listen for "error" events.
For example:
var net = require('net');
var client = net.connect(80, 'invalid.host', function () {
    console.log("Worked");
});
client.on('error', console.log);
I wrote about this recently at http://snmaynard.com/2012/12/21/node-error-handling/. A new feature of Node in version 0.8 is domains, which allow you to combine all the forms of error handling into one easier-to-manage form. You can read about them in my post and in the docs.
You can use domains to handle callback error arguments, "error" event emitters, and exceptions all in one place. The problem in this specific case is that when you don't handle an "error" event from an emitter, Node by default will print the stack trace and exit the app.
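A rough sketch of the domain-based approach (domains were the Node 0.8-era answer and were deprecated in later releases, so treat this as context for that era rather than current best practice):

var domain = require('domain');
var net = require('net');

var d = domain.create();

d.on('error', function (err) {
    console.error('Caught by domain:', err.message);
});

d.run(function () {
    // Emitters created in here are bound to the domain, so an unhandled
    // "error" event (like this failing connect) is routed to d's handler
    // instead of crashing the process.
    net.connect(80, 'invalid.host');
});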
I've put together a quick error-handling file which logs and emails me whenever an unhandled exception is thrown. It then (optionally) tries to restart the server.
Check it out!