My Node.js application uses http.request to call the REST API http://army.gov/launch-nukes, and I need to distinguish between three possible cases:
Success -- The server replies in the affirmative. I know my enemies are destroyed.
Failure -- Either I have received an error from the server, or I was unable to connect to it. I still have enemies.
Unknown -- After establishing a connection to the server, I sent the request -- but I'm not sure what happened. This could mean the request never made it to the server, or the server's response never made it back to me. I may or may not have just started a world war.
As you can see, it's very important for me to distinguish the Failure and Unknown case, as they have very different consequences and different actions I need to take.
I would also very much like to use HTTP keep-alive -- as, what can I say, I'm a bit of a war-monger and plan on making lots of requests in bursts (and then nothing for long periods of time).
--
The core of the question is how to separate a connection-error/time-out (which is a Failure) from an error/timeout that occurs after the request is put on the wire (which is an Unknown).
In pseudo-code logic I want this:
var tcp = openConnectionTo('army.gov') // start a new connection, or get a kept-alive one
tcp.on('error', FAILURE_CASE);
tcp.on('connectionEstablished', function (connection) {
  var req = connection.httpGetRequest('launch-nukes');
  req.on('timeout', UNKNOWN_CASE);
  req.on('response', /* read server response and decide FAILURE or SUCCESS */);
});
Here is an example:
var http = require('http');

var options = {
  hostname: 'localhost',
  port: 7777,
  path: '/',
  method: 'GET'
};

var req = http.request(options, function (res) {
  // check the returned response code
  if (('' + res.statusCode).match(/^2\d\d$/)) {
    // Request handled, happy
  } else if (('' + res.statusCode).match(/^5\d\d$/)) {
    // Server error, I have no idea what happened in the backend,
    // but the server at least returned a correctly (in an HTTP
    // protocol sense) formatted response
  }
});

req.on('error', function (e) {
  // General error, i.e.
  //  - ECONNRESET - server closed the socket unexpectedly
  //  - ECONNREFUSED - server did not listen
  //  - HPE_INVALID_VERSION
  //  - HPE_INVALID_STATUS
  //  - ... (other HPE_* codes) - server returned garbage
  console.log(e);
});

req.on('timeout', function () {
  // Timeout happened. The server received the request, but did not
  // handle it (i.e. it sent no response, or took too long).
  // You don't know what happened.
  // An 'error' event will be emitted as well (with an ECONNRESET code).
  console.log('timeout');
  req.abort();
});

req.setTimeout(5000);
req.end();
I recommend you play with it using netcat, e.g.:
$ nc -l 7777
// Just listens and does not send any response (i.e. timeout)
$ echo -e "HTTP/1.1 200 OK\n\n" | nc -l 7777
// HTTP 200 OK
$ echo -e "HTTP/1.1 500 Internal\n\n" | nc -l 7777
// HTTP 500
(and so on...)
Related
I wonder how the node.js request module works with regard to the timeout parameter.
What happens after the timeout period has passed? I.e.:
var request = require('request');
var options = {
  url: Theurl,
  timeout: 300000
};
request(options, function(error, resp, body) { ...
What happens after 300000? Does request try to request the url again or not?
I also found that the Linux kernel has a default 20-second TCP socket connection timeout (http://www.sekuda.com/overriding_the_default_linux_kernel_20_second_tcp_socket_connect_timeout).
Does that mean the timeout option in request will be at most 20 seconds (if I don't change the kernel default), regardless of what I set in the options?
I use Ubuntu.
From the readme of the request package:
Note that if the underlying TCP connection cannot be established,
the OS-wide TCP connection timeout will overrule the timeout option
So in your case, the request will be aborted after 20 sec. The request won't try to request the url again (even if the timeout is set to a lower value than 20000). You would have to write your own logic for this or use another package, such as requestretry.
Example:
var request = require('request');

var options = {
  url: 'http://www.gooooerererere.com/',
  timeout: 5000
};

var maxRequests = 5;

function requestWithTimeout(attempt) {
  request(options, function (error, response, body) {
    if (error) {
      console.log(error);
      if (attempt == maxRequests)
        return;
      else
        requestWithTimeout(attempt + 1);
    } else {
      // do something with result
    }
  });
}

requestWithTimeout(1);
You can also check for a specific error message, such as ETIMEDOUT, with
if(error.code == [ERROR_MESSAGE])
request returns an error with its code set as stated in the request readme (timeout section).
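The readme also distinguishes where the timeout happened, which matters for the retry decision: ETIMEDOUT with err.connect === true indicates a connect-phase timeout, while ESOCKETTIMEDOUT (or ETIMEDOUT without the connect flag) indicates a read timeout. A pair of small helpers, hedged against your request version, might look like:

```javascript
// Connect-phase timeout: the request never left this machine,
// so retrying is safe.
function isConnectTimeout(err) {
  return err.code === 'ETIMEDOUT' && err.connect === true;
}

// Read timeout: the request was sent but no response arrived,
// so the outcome on the server side is unknown.
function isReadTimeout(err) {
  return err.code === 'ESOCKETTIMEDOUT' ||
         (err.code === 'ETIMEDOUT' && err.connect !== true);
}
```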
Take a look at TIME_WAIT details.
But yes, the kernel will cut it down with its configuration. As stated in your link, you can change that by changing tcp_syn_retries.
If a timeout happens, your callback will be executed with error set to 'Error: ETIMEDOUT'.
This little project, https://github.com/FGRibreau/node-request-retry, provides a ready-to-use, configurable wrapper for making retries triggered by many connection error codes, timeout included.
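Internally, any such retry wrapper boils down to a predicate over the error and response of each attempt. A minimal sketch of that decision logic (the exact code list node-request-retry ships with may differ from this one) could be:

```javascript
// Error codes that typically mean the request never completed and a
// retry is reasonable (names from Node's errno list).
var RETRIABLE = ['ECONNRESET', 'ECONNREFUSED', 'ETIMEDOUT',
                 'ESOCKETTIMEDOUT', 'ENOTFOUND', 'EPIPE', 'EAI_AGAIN'];

function shouldRetry(err, response) {
  if (err) return RETRIABLE.indexOf(err.code) !== -1;
  // Also retry 5xx responses, like an HTTP-or-network-error strategy.
  return !!response && response.statusCode >= 500;
}
```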
I'm working with elasticsearch-js (NodeJS) and everything works just fine as long as ElasticSearch is running. However, I'd like to know that my connection is alive before trying to invoke one of the client's methods. I'm doing things in a bit of a synchronous fashion, but only for the purpose of performance testing (e.g., check that I have an empty index to work in, ingest some data, query the data). Looking at a snippet like this:
var elasticClient = new elasticsearch.Client({
  host: ((options.host || 'localhost') + ':' + (options.port || '9200'))
});

// Note, I already have promise handling implemented, omitting it for brevity though
var promise = elasticClient.indices.delete({index: "_all"});
// ...
Is there some mechanism I can pass in on the client config to fail fast, or some test I can perform on the client to make sure it's open before invoking delete?
Update: 2015-05-22
I'm not sure if this is correct, but perhaps attempting to get client stats is reasonable?
var getStats = elasticClient.nodes.stats();
getStats.then(function (o) {
  console.log(o);
}).catch(function (e) {
  console.log(e);
  throw e;
});
Via node-debug, I am seeing the promise rejected when ElasticSearch is down / inaccessible with: "Error: No Living connections". When it does connect, o in my then handler seems to have details about connection state. Would this approach be correct or is there a preferred way to check connection viability?
Getting stats can be a heavy call just to ensure your client is connected. You should use ping instead -- see the second example at https://github.com/elastic/elasticsearch-js#examples.
We are using ping too, after instantiating elasticsearch-js client connection on start up.
// example from above link
var elasticsearch = require('elasticsearch');

var client = new elasticsearch.Client({
  host: 'localhost:9200',
  log: 'trace'
});

client.ping({
  // ping usually has a 3000ms timeout
  requestTimeout: Infinity,
  // undocumented params are appended to the query string
  hello: "elasticsearch!"
}, function (error) {
  if (error) {
    console.trace('elasticsearch cluster is down!');
  } else {
    console.log('All is well');
  }
});
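To fail fast before the delete in the original question, you can gate the first operation behind such a ping. A sketch assuming the callback-style ping shown above (`ensureAlive` is a hypothetical helper name, not part of the elasticsearch-js API):

```javascript
// Resolves with the client if the cluster answers a ping, rejects
// quickly otherwise. `client` only needs a ping(params, callback)
// method, as elasticsearch-js provides.
function ensureAlive(client, timeoutMs) {
  return new Promise(function (resolve, reject) {
    client.ping({ requestTimeout: timeoutMs || 3000 }, function (error) {
      if (error) {
        reject(new Error('elasticsearch cluster is down'));
      } else {
        resolve(client);
      }
    });
  });
}

// Usage against the client from the question:
// ensureAlive(elasticClient).then(function (client) {
//   return client.indices.delete({ index: '_all' });
// });
```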
I have a REST API working perfectly, and now, instead of manually testing it with Postman, I have written HTTP requests in order to test the CRUD operations. I know for a fact that all the requests work, as they give me status codes of 200.
My problem is that when running the tests, the command line shows it ran through nearly all of them, but it will not go beyond some of the GET requests (even though I have the same code for requests of the same type earlier in the file).
When I comment out the GET request where it gets stuck, it runs the requests after that one with no problem.
This is the code I have for my GET requests:
var options = {
  host: 'localhost',
  port: 4000,
  path: '/api/services/00001/method/01/args/01'
};

http.get(options, function (res) {
  console.log("Got response: " + res.statusCode + "\nGET argument 01 for user 00001 method 01.\n\n ");
}).on('error', function (e) {
  console.log("Got error: " + e.message);
});
Am I missing something that occasionally causes the code to stop running?
Any help is appreciated.
I can add code for my other operations as well if needed (POST, PUT, DELETE).
After running my entire code with nothing commented out, I realized that it also stops at my second DELETE request; the code is as follows:
var options = {
  host: 'localhost',
  port: 4000,
  path: '/api/services/00001/method/01',
  method: 'DELETE'
};

http.get(options, function (res) {
  console.log("Got response: " + res.statusCode + "\nDELETE method 01 for user 00001.\n\n ");
}).on('error', function (e) {
  console.log("Got error: " + e.message);
});
The solution could be one or both of:
1. Setting agent: false in your request options. This prevents the request from using the global http agent socket pool, meaning the request gets its own new socket every time. I have seen strange things where requests never start because of node's http agent implementation. The one thing you should be aware of, though, is your OS user's (the one the process is running under) file descriptor limits. If you do a lot of concurrent requests using agent: false, you may need to raise that OS limit.
2. Using res.resume() in your request callback to drain any response data from the server whenever you don't care about it. Doing so may ensure that a socket from the http agent socket pool is available for use by another request.
I have been stuck on this "socket hang up" error for a couple days now, and I was hoping someone could help me.
I currently have two Node programs set up:
1. An HTTP server in Node that responds with the same data for every request.
2. An HTTP server which responds, for every request, with data fetched from HTTP server 1.
My code for HTTP server 2 is below.
var http = require('http');
var i = 0;

var server = http.createServer(handleRequest).listen(801);

function handleRequest(req, res) {
  var getReq = http.get('http://127.0.0.1', function (getRes) {
    setTimeout(function () { getRes.pipe(res); }, 1000);
  });
  req.on('close', function () {
    // setTimeout(function () { getReq.abort(); }, 20);
    console.log(++i + ': req closed');
    getReq.abort();
  });
}
The problem occurs when I send a request to HTTP server 2 and close it before a response is sent to my browser (I've set a timeout to give me time to abort). If I continually hold down the refresh button, I receive the "socket hang up" error after some number of refreshes, and I don't really understand how to fix it. If I set a timer before executing getReq.abort(), the problem happens less often, and if I set the timer to a large value (above 100 ms), there isn't an issue at all.
I can consistently replicate the error by executing getReq.abort() right after creating the request, so I believe it has something to do with aborting between the time the socket is assigned to the request and before the response is sent.
What is wrong with my code, and how do I prevent this from happening?
Thanks
Can someone explain this code for creating a proxy server? Everything makes sense except the last block: request.pipe(proxy). I don't get that, because when proxy is declared it makes a request and pipes its response to the client's response. What am I missing here? Why would we need to pipe the original request to the proxy, when the http.request method already makes the request contained in the options var?
var http = require('http');

function onRequest(request, response) {
  console.log('serve: ' + request.url);

  var options = {
    hostname: 'www.google.com',
    port: 80,
    path: request.url,
    method: 'GET'
  };

  var proxy = http.request(options, function (res) {
    res.pipe(response, {
      end: true
    });
  });

  request.pipe(proxy, {
    end: true
  });
}

http.createServer(onRequest).listen(8888);
What am I missing here? [...] the http.request method already makes the request contained in the options var.
http.request() doesn't actually send the request in its entirety immediately:
[...] With http.request() one must always call req.end() to signify that you're done with the request - even if there is no data being written to the request body.
The http.ClientRequest it creates is left open so that body content, such as JSON data, can be written and sent to the responding server:
var req = http.request(options);
req.write(JSON.stringify({
  // ...
}));
req.end();
.pipe() is just one option for this, when you have a readable stream, as it will .end() the client request by default.
Although, since GET requests rarely have a body that would need to be piped or written, you can typically use http.get() instead, which calls .end() itself:
Since most requests are GET requests without bodies, Node provides this convenience method. The only difference between this method and http.request() is that it sets the method to GET and calls req.end() automatically.
http.get(options, function (res) {
  res.pipe(response, {
    end: true
  });
});
Short answer: the event loop. I don't want to overreach here, and this is where node.js gets both beautiful and complicated, but the request isn't strictly MADE on the line declaring proxy: the work is deferred, and the request isn't complete until .end() is called -- which request.pipe(proxy) does for you when the incoming request ends. So when you connect the pipe, everything works as it should, piping from the incoming request > proxy > outgoing response. It's the magic / confusion of asynchronous code!