What is the default timeout for NPM request module (REST client)? - node.js

Following is my node.js call to retrieve some data, which takes more than 1 minute. The call times out at 1 minute (60 seconds). I also log the latency. I have configured the timeout for 120 seconds, but it is not taking effect. I know the default Node.js server timeout is 120 seconds, but I still get a 60-second timeout from the request module for this call. Please share your insights on this.
var options = {
    method: 'post',
    url: url,
    timeout: 120000,
    json: true,
    headers: {
        "Content-Type": "application/json",
        "X-Authorization": "abc",
        "Accept-Encoding": "gzip"
    }
};
var startTime = new Date();
request(options, function(e, r, body) {
    var endTime = new Date();
    var latencyTime = endTime - startTime;
    console.log("Ended. latencyTime: " + latencyTime / 1000);
    res.status(200).send(body);
});

From the request options docs, scrolling down to the timeout entry:
timeout - Integer containing the number of milliseconds to wait for a server to send response headers (and start the response body) before aborting the request. Note that if the underlying TCP connection cannot be established, the OS-wide TCP connection timeout will overrule the timeout option (the default in Linux can be anywhere from 20-120 seconds).
Note the last part "if the underlying TCP connection cannot be established, the OS-wide TCP connection timeout will overrule the timeout option".
There is also an entire section on Timeouts. Based on that, and your code sample, we can modify the request callback to detect this case (note the guard on e, since the error is null on success):
request(options, function(e, r, body) {
    if (e && e.code === 'ETIMEDOUT' && e.connect === true) {
        // When there's a timeout and connect is true, we're meeting the
        // conditions described for the timeout option where the OS governs
        console.log('bummer');
    }
});
If this is true, you'll need to decide if changing OS settings is possible and acceptable (this is beyond the scope of this answer and such a question would be better on Server Fault).

Related

ETIMEDOUT error when making a request to sendgrid API

I'm using the Node.js request module to send emails via SendGrid, and I am getting the error ETIMEDOUT. I intend to use the node-retry npm module to retry the sending, but how can I detect what the error code is? Does the SendGrid API return the error code somehow? Also, once I do detect the error code, is it just a matter of waiting X seconds before sending the email again? If so, how do I determine what X is?
_makeAPIRequest (httpMethod, url, body) {
    var defer = Q.defer();
    var options = {
        method: httpMethod,
        url: this.SENDGRID_API_URL + url,
        headers: {
            'content-type': 'application/json',
            authorization: 'Bearer ' + SENDGRID_API_KEY
        },
        body: body,
        json: true
    };
    request(options, function (error, response, body) {
        if (error) {
            console.dir(error);
            return defer.reject(error);
        }
        defer.resolve(body);
    });
    return defer.promise;
}
ETIMEDOUT is an OS error message. It indicates a failed attempt, at the TCP/IP level, to connect to a remote host, probably the one mentioned in SENDGRID_API_URL.
The default value for that is https://api.sendgrid.com/v3/. For some reason, possibly an outbound firewall or some sort of network configuration trouble, your Node.js program cannot reach that URL and waits for a response. You should check your value of that URL.
If this is intermittent (doesn't happen all the time) you probably can wait a few seconds and try again.
If it starts happening after you've sent a bunch of emails, you may be hitting a rate limit at SendGrid. Pace out your sending of emails; try putting a half-second delay between them.
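Both suggestions (wait a few seconds, pace out the sends) can be folded into a small helper. The sketch below is not part of the SendGrid API; retryOnTimeout is a hypothetical wrapper around any callback-style task that retries on ETIMEDOUT with a doubling delay:

```javascript
// Hypothetical helper (not part of the SendGrid API): retry any
// callback-style task when it fails with ETIMEDOUT, doubling the
// delay between attempts.
function retryOnTimeout(task, attempts, delayMs, cb) {
  task(function (err, result) {
    if (err && err.code === 'ETIMEDOUT' && attempts > 1) {
      setTimeout(function () {
        retryOnTimeout(task, attempts - 1, delayMs * 2, cb);
      }, delayMs);
      return;
    }
    cb(err, result);
  });
}

// Usage with the question's request call (sketch):
// retryOnTimeout(function (done) {
//   request(options, function (e, r, body) { done(e, body); });
// }, 3, 500, function (err, body) { /* ... */ });
```

A starting X of half a second with doubling is a common guess; the right value depends on why the connection fails, so measure rather than assume.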

Request Timeout while uploading image

I am developing a web application using Go for the web server, with a React frontend served by Node.js. I have two issues while uploading images that are big (currently I am testing with a 2.9 MB file). The first is that I get a timeout within 10 seconds, saying "request timeout" on the browser side, even though the upload is successfully saved to the database. The second issue is that the request is duplicated, and as a result it is saved to the database two times. I have searched on Stack Overflow, but nothing I found seems to work.
First Option
Here is the code using an ajax call, i.e. fetch from isomorphic-fetch, following the suggestion to implement a timeout wrapper at https://github.com/github/fetch/issues/175:
static addEvent(events){
    let config = {
        method: 'POST',
        body: events
    };
    function timeout(ms, promise) {
        return new Promise(function(resolve, reject) {
            setTimeout(function() {
                reject(new Error("timeout"));
            }, ms);
            promise.then(resolve, reject);
        });
    }
    return timeout(120000, fetch(`${SERVER_HOSTNAME}:${SERVER_PORT}/event`, config))
        .then(function(response){
            if(response.status >= 400){
                return {
                    "error": "Bad Response from Server"
                };
            } else if(response.ok){
                browserHistory.push({
                    pathname: '/events'
                });
            }
        });
}
The request timeout still occurs within 10 seconds.
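One thing worth noting about the wrapper above: it never clears its timer, so even a fetch that finishes in time leaves a pending rejection behind (harmless for the promise, since the first settle wins, but untidy). It also cannot produce a 10-second timeout when ms is 120000, so that limit is almost certainly imposed elsewhere, for example by a proxy or load balancer in front of the Go server. A variant that cancels the timer, as a sketch:

```javascript
// Sketch: same idea as the wrapper in the question, but the timer is
// cleared once the wrapped promise settles, so a request that finishes
// in time does not leave a stray timeout pending.
function timeout(ms, promise) {
  return new Promise(function (resolve, reject) {
    var timer = setTimeout(function () {
      reject(new Error('timeout'));
    }, ms);
    promise.then(
      function (value) { clearTimeout(timer); resolve(value); },
      function (err) { clearTimeout(timer); reject(err); }
    );
  });
}

// Usage (sketch): timeout(120000, fetch(url, config)).then(/* ... */);
```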
Second Option
I have tried a different node module for the ajax call, i.e. axios, since it has a timeout option, but this also didn't fix the timeout issue.
Third Option
I tried to set read and write timeouts on the server side, similar to https://blog.cloudflare.com/the-complete-guide-to-golang-net-http-timeouts/
server := &http.Server{
    Addr:         ":9292",
    Handler:      router,
    ReadTimeout:  180 * time.Second,
    WriteTimeout: 180 * time.Second,
}
Again, I am getting a request timeout on the browser side within 10 seconds.
What should I do to fix this, or can you point out where I made a mistake?

Timeout in node.js request

I wonder how the node.js request module works with regard to the timeout parameter.
What happens after the timeout period has passed? I.e.:
var request = require('request');
var options = {
    url: Theurl,
    timeout: 300000
};
request(options, function(error, resp, body) {...
What happens after 300000 ms? Does request try to request the URL again or not?
I also found that the Linux kernel has a default 20-second TCP socket connection timeout. (http://www.sekuda.com/overriding_the_default_linux_kernel_20_second_tcp_socket_connect_timeout)
Does that mean the timeout option in request will be at most 20 seconds (if I don't change the kernel setting), regardless of what I set in the options?
I use Ubuntu.
From the readme of the request package:
Note that if the underlying TCP connection cannot be established,
the OS-wide TCP connection timeout will overrule the timeout option
So in your case, if the TCP connection cannot be established, the request will be aborted after 20 seconds. The request won't try the URL again (even if the timeout is set to a value lower than 20000). You would have to write your own logic for that, or use another package, such as requestretry.
Example:
var options = {
    url: 'http://www.gooooerererere.com/',
    timeout: 5000
};
var maxRequests = 5;
function requestWithTimeout(attempt){
    request(options, function(error, response, body){
        if(error){
            console.log(error);
            if(attempt == maxRequests)
                return;
            else
                requestWithTimeout(attempt + 1);
        }
        else {
            // do something with result
        }
    });
}
requestWithTimeout(1);
You can also check for a specific error code, such as ETIMEDOUT, with
if(error.code == [ERROR_CODE])
request returns the error with the error code set as stated in the request readme (timeout section).
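Building on that, here is a sketch of a small helper that tells the two timeout flavors apart. One assumption to verify against your installed version: some releases of request report read timeouts as ESOCKETTIMEDOUT rather than ETIMEDOUT.

```javascript
// Sketch: tell a connect timeout (OS-governed, connection never came up)
// apart from a read timeout (the `timeout` option firing after the
// connection was established but before response headers arrived).
// Assumption: some request versions use ESOCKETTIMEDOUT for read timeouts.
function classifyTimeout(err) {
  if (!err || (err.code !== 'ETIMEDOUT' && err.code !== 'ESOCKETTIMEDOUT')) {
    return 'not-a-timeout';
  }
  return err.connect === true ? 'connect-timeout' : 'read-timeout';
}
```

Only a connect timeout is capped by the kernel's TCP connect setting; a read timeout honors whatever you passed in options.timeout.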
Take a look at TIME_WAIT details.
But yes, the kernel will cut the connection attempt short according to its configuration. As stated in your link, you can change that by changing tcp_syn_retries.
If a timeout happens, your callback function is executed with error set to the message 'Error: ETIMEDOUT'.
This little project https://github.com/FGRibreau/node-request-retry provides ready-to-use, configured wrapper for making retries triggered by many connection error codes, timeout included.

Azure Storage NodeJS modify default timeout settings

I am wondering if it's possible for me to change the default timeout settings for the Azure Storage BlobService. From the documentation I can see that the default settings are:
Calls to get a blob, get page ranges, or get a block list are permitted 2 minutes per megabyte to complete. If an operation is taking longer than 2 minutes per megabyte on average, it will time out.
Calls to write a blob, write a block, or write a page are permitted 10 minutes per megabyte to complete. If an operation is taking longer than 10 minutes per megabyte on average, it will time out.
Looking through the source code I see that the BlobService.getServiceProperties and setServiceProperties are listed with these two parameters:
#param {int} [options.timeoutIntervalInMs] The server timeout interval, in milliseconds, to use for the request.
#param {int} [options.maximumExecutionTimeInMs] The maximum execution time, in milliseconds, across all potential retries, to use when making this request. The maximum execution time interval begins at the time that the client begins building the request. The maximum execution time is checked intermittently while performing requests, and before executing retries.
Are these two parameters equal to the items above?
Now when I try to use getServiceProperties with the following code, I am not given any information other than logging, metrics, and CORS data, which is what is said on the GitHub page:
blobSvc.getServiceProperties(function(error, result, response) {
    if (!error) {
        console.log('Result: ', result);
        console.log('Response: ', response);
    } else {
        console.log(error);
    }
});
Result: { Logging:
   { Version: '1.0',
     Delete: false,
     Read: false,
     Write: false,
     RetentionPolicy: { Enabled: false } },
  HourMetrics:
   { Version: '1.0',
     Enabled: true,
     IncludeAPIs: true,
     RetentionPolicy: { Enabled: true, Days: 7 } },
  MinuteMetrics:
   { Version: '1.0',
     Enabled: false,
     RetentionPolicy: { Enabled: false } },
  Cors: {} }
Response: { isSuccessful: true,
  statusCode: 200,
  body:
   { StorageServiceProperties:
      { Logging: [Object],
        HourMetrics: [Object],
        MinuteMetrics: [Object],
        Cors: '' } },
  headers:
   { 'transfer-encoding': 'chunked',
     'content-type': 'application/xml',
     server: 'Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0',
     'x-ms-request-id': '45a3cfeb-0001-0127-0cf7-0149a8000000',
     'x-ms-version': '2015-02-21',
     date: 'Thu, 08 Oct 2015 18:32:36 GMT',
     connection: 'close' },
  md5: undefined }
So really I guess I am confused about the mismatch with the documentation, and about whether it is even possible to modify any timeout settings.
A sample call with the timeout option is:
var options = { maximumExecutionTimeInMs: 1000 };
blobSvc.createBlockBlobFromLocalFile('mycontainer', 'myblob', 'test.txt', options, function(error, result, response) {
    if(!error) {
        // file uploaded
    }
});
You might also want to check the APIs and their options at: http://azure.github.io/azure-storage-node/BlobService.html
The timeout settings are not 'properties associated with the service', but instead they are 'properties associated with a call to the Storage Library'. The timeoutIntervalInMs setting and the maximumExecutionTimeInMs setting are parameters that you can set on the 'options' object that can be passed in with pretty much every operation (including uploading and downloading blobs). So, if you want to modify the timeouts for a given operation, just pass the desired setting on the 'options' object when you call into the library.
The 'timeoutIntervalInMs' is the timeout sent in a request to the Azure Storage service. This is the amount of time that the service will spend attempting to fulfill the request before it times out. This is the setting in the documentation you mentioned here -
https://msdn.microsoft.com/en-us/library/azure/dd179431.aspx
If a call into the Storage Client makes multiple HTTP(S) requests to the Storage Service, this value will be passed with each call.
The 'maximumExecutionTimeInMs' is a client timeout. This is tracked by the Storage Client across all Storage Requests made from that single API call. For example, if you have retries configured in the client, this value will be checked before every potential retry, and the retry will not continue if more than the 'maximumExecutionTimeInMs' has elapsed since the start of the first request.
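Putting the two settings together, here is a sketch of a per-call options object. The numbers are arbitrary examples, and getBlobToLocalFile is one of the download methods in the azure-storage API reference linked above:

```javascript
// Both timeouts ride on the per-call options object. The numbers are
// arbitrary examples, not recommended values.
var options = {
  timeoutIntervalInMs: 90000,       // server-side timeout sent with each HTTP request
  maximumExecutionTimeInMs: 300000  // client-side cap across all retries of this call
};

// Sketch of passing the options on a download call:
// blobSvc.getBlobToLocalFile('mycontainer', 'myblob', 'local.txt', options,
//   function (error, result, response) { /* ... */ });
```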
Hope this makes sense.

Node request queue backed up

TL;DR - Are there any best practices when configuring the globalAgent that allow for high throughput with a high volume of concurrent requests?
Here's our issue:
As far as I can tell, connection pools in Node are managed by the http module, which queues requests in a globalAgent object, which is global to the Node process. The number of requests pulled from the globalAgent queue at any given time is determined by the number of open socket connections, which is determined by the maxSockets property of globalAgent (defaults to 5).
When using "keep-alive" connections, I would expect that as soon as a request is resolved, the connection that handled the request would be available and can handle the next request in the globalAgent's queue.
Instead, however, it appears that each connection up to the max number is resolved before any additional queued requests are handled.
When watching network traffic between components, we see that if maxSockets is 10, then 10 requests resolve successfully. Then there is a 3-5 second pause (presumably while new TCP connections are established), then 10 more requests resolve, then another pause, etc.
This seems wrong. Node is supposed to excel at handling a high volume of concurrent requests, so if request 1000 cannot be handled until requests 1-999 resolve, even with 1000 available socket connections, you hit a bottleneck. Yet I can't figure out what we're doing incorrectly.
Update
Here's an example of how we're making requests -- though it's worth noting that this behavior occurs whenever a node process makes an http request, including when that request is initiated by widely-used third-party libs. I don't believe it is specific to our implementation. Nevertheless...
class Client
  constructor: (@endpoint, @options = {}) ->
    @endpoint = @_cleanEndpoint(@endpoint)
    throw new Error("Endpoint required") unless @endpoint && @endpoint.length > 0
    _.defaults @options,
      maxCacheItems: 1000
      maxTokenCache: 60 * 10
      clientId: null
      bearerToken: null # If present will be added to the request header
      headers: {}
    @cache = {}
    @cards = new CardMethods @
    @lifeStreams = new LifeStreamMethods @
    @actions = new ActionsMethods @

  _cleanEndpoint: (endpoint) =>
    return null unless endpoint
    endpoint.replace /\/+$/, ""

  _handleResult: (res, bodyBeforeJson, callback) =>
    return callback new Error("Forbidden") if res.statusCode is 401 or res.statusCode is 403
    body = null
    if bodyBeforeJson and bodyBeforeJson.length > 0
      try
        body = JSON.parse(bodyBeforeJson)
      catch e
        return callback(new Error("Invalid Body Content"), bodyBeforeJson, res.statusCode)
    return callback(new Error(if body then body.message else "Request failed.")) unless res.statusCode >= 200 && res.statusCode < 300
    callback null, body, res.statusCode

  _reqWithData: (method, path, params, data, headers = {}, actor, callback) =>
    headers['Content-Type'] = 'application/json' if data
    headers['Accept'] = 'application/json'
    headers['authorization'] = "Bearer #{@options.bearerToken}" if @options.bearerToken
    headers['X-ClientId'] = @options.clientId if @options.clientId
    # Use method override (AWS ELB problems) unless told not to do so
    if (not config.get('clients:useRealHTTPMethods')) and method not in ['POST', 'PUT']
      headers['x-http-method-override'] = method
      method = 'POST'
    _.extend headers, @options.headers
    uri = "#{@endpoint}#{path}"
    # console.log "making #{method} request to #{uri} with headers", headers
    request
      uri: uri
      headers: headers
      body: if data then JSON.stringify data else null
      method: method
      timeout: 30 * 60 * 1000
    , (err, res, body) =>
      if err
        err.status = if res && res.statusCode then res.statusCode else 503
        return callback(err)
      @_handleResult res, body, callback
To be honest, CoffeeScript isn't my strong point, so I can't really comment on the code.
However, I can give you some thoughts: in what we are working on, we use nano to connect to Cloudant, and we're seeing up to 200 requests/s into Cloudant from a micro AWS instance. So you are right, Node should be up to it.
Try using request (https://github.com/mikeal/request) if you're not already. (I don't think it will make a difference, but it is nevertheless worth a try, as that is what nano uses.)
These are the areas I would look into:
The server doesn't deal well with multiple requests and throttles them. Have you run any performance tests against your server? If it can't handle the load for some reason, or your requests are throttled in the OS, then it doesn't matter what your client does.
Your client code has a long-running function somewhere which prevents Node from processing any responses you get back from the server. Perhaps one specific response causes a response callback to spend far too long.
Are the endpoints all different servers/hosts?
