How to handle sending out multiple HTTP requests in Node?

I have the standard code for sending out HTTP requests, using http.globalAgent.
I set maxSockets to 2500.
When I send out multiple requests at once, I get this error:
{ code: 'ECONNRESET' }
However, if I send the requests with a small delay between each one, it works.
So, my questions are:
1) What does ECONNRESET really mean? Why does this error happen?
2) How can I send out multiple requests at once without getting that error?
Original code to send out multiple requests:
// I'm using Seq() (node-seq)
Seq()
  .seq(function () {
    this(null, ['p1', 'p2', 'p3', 'p4', 'p5']);
  })
  .flatten(false)
  .parEach(function (data) {
    // send out request
    sendRemoteRequest(data); // a function that uses http.request
  })
  .seq(function () {
    console.log("done");
  });

ECONNRESET basically means that the remote server has closed the connection. I assume it only allows a certain number of concurrent connections, and once that limit is reached it simply drops further connections, which surfaces as an ECONNRESET in your program.
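So instead of firing all requests simultaneously, cap how many are in flight at once. Below is a minimal throttling sketch; it assumes sendRemoteRequest accepts a completion callback (an assumption, adjust to your real signature), and MAX_CONCURRENT is a number you would tune to what the remote server tolerates:
var MAX_CONCURRENT = 50; // tune to what the remote server tolerates

// Run worker(item, done) over all items, keeping at most
// MAX_CONCURRENT calls in flight at any moment.
function runThrottled(items, worker, done) {
  var inFlight = 0, index = 0, finished = 0;

  function next() {
    while (inFlight < MAX_CONCURRENT && index < items.length) {
      inFlight++;
      worker(items[index++], function () {
        inFlight--;
        finished++;
        if (finished === items.length) return done();
        next(); // start another request as each one completes
      });
    }
  }
  next();
}

runThrottled(['p1', 'p2', 'p3', 'p4', 'p5'], sendRemoteRequest, function () {
  console.log('done');
});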

Related

what happens if neither res.send() nor res.end() is called in express.js?

I have a security issue: someone is probing random API endpoints that are not supported on our server but are commonly used for administrator APIs in general. I set up the code below to handle these 404s by never responding to the attack:
url-not-found-handler.js
'use strict';

module.exports = function () {
  // 4XX - URLs not found: intentionally never responds
  return (req, res, next) => {
  };
};
What happens to the client is that it waits until the server responds, but I want to know whether this will affect the performance of my express.js server, and also what happens behind the scenes in the server without res.send() or res.end().
According to the documentation of res.end():
Ends the response process. This method actually comes from Node core,
specifically the response.end() method of http.ServerResponse.
And then, from the Node documentation for response.end():
This method signals to the server that all of the response headers and
body have been sent; that server should consider this message
complete. The method, response.end(), MUST be called on each response.
If you leave your requests hanging, the HTTP server will keep data about them. That means that if you let many requests hang, your memory use will grow and degrade your server's performance.
As for the client, it will simply wait until the request times out.
The best thing to do with a bad request is to reject it immediately, which frees the memory allocated for it.
You cannot prevent bad requests (maybe a firewall blocking requests from certain IP addresses?). The best you can do is handle them as fast as possible.
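A minimal sketch of that "reject immediately" approach, keeping the module shape from the question (this is an illustration, not the only valid response):
'use strict';

// url-not-found-handler.js - respond right away so no memory is
// held for a hanging request
module.exports = function () {
  return (req, res) => {
    res.status(404).end();
  };
};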

why subsequent HTTP requests

My JavaScript makes an AJAX call which retrieves a JSON array.
I am trying to simulate a long-running HTTP REST request that takes a while to return its results.
The way I do it is to delay writing anything to the response object on the server side until 5 minutes have elapsed since the request landed. After that I set the status to 200 and write the response with the JSON, ending the stream.
Putting a breakpoint on the server side, I can see the request arrive a second time, but the browser's Network tab does not show another request being made.
It may not be relevant, but I am using browser-sync middleware to serve this JSON, writing the bytes and ending the response in setTimeout().
setTimeout(() => {
  res.statusCode = 200;
  res.write(data); // data holds the JSON payload
  res.end();
}, 5 * 60 * 1000); // delay the response by the 5 minutes described above
Question:
Does anyone have an explanation as to why this is happening? And is there another way to simulate it?
In most cases the browser will retry if the connection is closed before a response arrives. The details are here => HTTP spec Client Behavior if Server Prematurely Closes Connection
By the way, it might help to use Chrome's throttling options in the Network section of the dev tools (F12).
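If the server behind browser-sync is Node's plain http server, one likely cause (an assumption, not confirmed by the question) is its default socket timeout of 120 seconds in older Node versions: the socket is destroyed before the 5-minute delay elapses, and the browser silently retries, which would explain the second hit on the breakpoint. A minimal sketch that raises the timeout past the simulated delay:
var http = require('http');

var server = http.createServer(function (req, res) {
  setTimeout(function () {
    res.statusCode = 200;
    res.end(JSON.stringify([1, 2, 3])); // stand-in for your JSON array
  }, 5 * 60 * 1000); // the simulated 5-minute delay
});

server.setTimeout(6 * 60 * 1000); // default was 2 minutes in older Node
server.listen(3000); // port is arbitrary for this sketch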

Node.js http server: "getifaddres: Too many open files"

I'm currently running a Node.js server and using GazeboJS to connect to the Gazebo server in order to send messages.
The problem is:
From my searches it seems it's due to the Linux open-file limit, which defaults to 1024 (using Ubuntu 14.04). Most solutions seem to be to increase that limit.
However, I don't know why my script is opening files and not closing them. It seems like each HTTP request opens a connection which is not closed even though a response is sent? The HTTP requests are coming from a Lua script using async.
The error
getifaddres: Too many open files
occurs after exactly 1024 requests.
I have no experience with web servers, so I hope someone can give an explanation.
Details of the Node.js server I'm running:
The server is created using
http.createServer(function(req, res))
When an HTTP GET request is received, the response is sent as a string. Example of one response:
gazebo.subscribe(obj.states[select].type, obj.states[select].topic, function (err, msg) {
  // msg is a JSON object
  if (err) {
    console.log('Error: ' + err);
    return;
  }
  res.setHeader('Content-Type', 'text/html');
  res.end(JSON.stringify(msg));
  gazebo.unsubscribe(obj.states[select].topic);
});
The script makes use of the publish/subscribe topics in the Gazebo server to extract information or publish actions. More information about Gazebo communication is here.
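One way to test the keep-alive hypothesis from the question (a sketch, under the assumption that the Lua client honors the header) is to ask clients to close the socket after every response, so idle keep-alive sockets cannot accumulate toward the 1024-descriptor limit:
var http = require('http');

http.createServer(function (req, res) {
  res.setHeader('Connection', 'close'); // one request per socket
  res.setHeader('Content-Type', 'text/html');
  res.end('ok'); // stand-in for the gazebo.subscribe handling above
}).listen(8080); // port is arbitrary for this sketch
If the error disappears, the descriptors were being held by open sockets rather than files.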

Connecting to a Reliable Webservice with Nodejs

My application needs to receive a result from a Reliable Webservice. Here is the scenario:
First I send a CreateSequence request, and the server replies with a CreateSequenceResponse message. Next I send the actual request to the webservice.
The webservice then sends a response with a 202 Accepted code and sends the result in a later message. All these messages contain the header Connection: keep-alive.
I made the request with http.ClientRequest. I could capture all responses except the result; http.ClientRequest fires only one response event.
How can I receive the message that contains the result?
Is there any way to listen on the socket for the remaining data (socket.on('data') did not work)? I checked this with the ReliableStockQuoteService shipped with Apache Synapse. I'd appreciate it if someone could help me.
When you get the response event, you are given a single argument, which is an http.IncomingMessage, which is a Readable stream. This means that you should bind your application logic on the data event of the response object, not on the request itself.
req.on('response', function (res) {
  res.on('data', console.log);
});
Edit: Here is a good article on how to make HTTP requests using Node.
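If the result arrives in several chunks, buffer them and act on the end event; a small sketch:
req.on('response', function (res) {
  var body = '';
  res.setEncoding('utf8');
  res.on('data', function (chunk) {
    body += chunk; // accumulate the chunks as they arrive
  });
  res.on('end', function () {
    console.log('full result: ' + body); // the complete message body
  });
});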

How to kill a connection in nodejs

I have a homework assignment to build an HTTP server using only Node's native modules.
I am trying to protect the server from overloading, so each request is hashed and stored.
If a certain request reaches a high count, say 500, I call socket.destroy().
Every interval (one minute) I reset the hash table. The problem is that when I do, a socket that was previously dead is working again. The only thing I do each interval is requests = {}, nothing related to the connections.
Any ideas why the connection is live again? Is there a better function to use than destroy()?
Thanks
Destroying the socket won't necessarily stop the client from retrying the request with a new socket.
You might instead try responding minimally with just a non-OK status code:
if (requests[path] >= 500) {
  res.statusCode = 503;
  res.end();
}
And, on the 503 status code:
The server is currently unable to handle the request due to a temporary overloading or maintenance of the server.
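A fuller sketch of the whole pattern, assuming the hash table is keyed by URL path as in the question; the Retry-After header is optional but pairs naturally with 503:
var http = require('http');
var requests = {};

setInterval(function () { requests = {}; }, 60 * 1000); // reset every minute

http.createServer(function (req, res) {
  var path = req.url;
  requests[path] = (requests[path] || 0) + 1;
  if (requests[path] >= 500) {
    res.statusCode = 503;
    res.setHeader('Retry-After', '60'); // hint clients to wait for the reset
    return res.end();
  }
  res.end('ok'); // normal handling goes here
}).listen(3000); // port is arbitrary for this sketch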
