Node request queue backed up - node.js

TL;DR - Are there any best practices when configuring the globalAgent that allow for high throughput with a high volume of concurrent requests?
Here's our issue:
As far as I can tell, connection pools in Node are managed by the http module, which queues requests in a globalAgent object, which is global to the Node process. The number of requests pulled from the globalAgent queue at any given time is determined by the number of open socket connections, which is determined by the maxSockets property of globalAgent (defaults to 5).
When using "keep-alive" connections, I would expect that as soon as a request resolves, the connection that handled it would become available again and could pick up the next request in the globalAgent's queue.
Instead, however, it appears that each connection up to the max number is resolved before any additional queued requests are handled.
When watching network traffic between components, we see that if maxSockets is 10, then 10 requests resolve successfully. Then there is a 3-5 second pause (presumably while new TCP connections are established), then 10 more requests resolve, then another pause, and so on.
This seems wrong. Node is supposed to excel at handling a high volume of concurrent requests, but if, even with 1000 available socket connections, request 1000 cannot be handled until requests 1-999 resolve, you'd hit a bottleneck. Yet I can't figure out what we're doing incorrectly.
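For reference, the kind of tuning we've been experimenting with looks roughly like this (a minimal sketch; the host, path and limits are placeholders, and the keepAlive agent option only exists on newer Node versions):
var http = require('http');

// Raise the cap on concurrent sockets used by the default global agent.
http.globalAgent.maxSockets = 50;

// Or hand requests an explicit agent; on newer Node versions keepAlive: true
// keeps sockets open between requests instead of tearing them down.
var keepAliveAgent = new http.Agent({ keepAlive: true, maxSockets: 50 });

http.request({
  host: 'api.example.com', // placeholder host
  path: '/resource',       // placeholder path
  agent: keepAliveAgent    // or agent: false to disable pooling entirely
}, function (res) {
  res.resume(); // drain the response
}).end();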
Update
Here's an example of how we're making requests -- though it's worth noting that this behavior occurs whenever a node process makes an http request, including when that request is initiated by widely-used third-party libs. I don't believe it is specific to our implementation. Nevertheless...
# (Assumes request, underscore's `_`, config, and the *Methods classes are required elsewhere in the module.)
class Client
  constructor: (@endpoint, @options = {}) ->
    @endpoint = @_cleanEndpoint(@endpoint)
    throw new Error("Endpoint required") unless @endpoint && @endpoint.length > 0
    _.defaults @options,
      maxCacheItems: 1000
      maxTokenCache: 60 * 10
      clientId: null
      bearerToken: null # If present will be added to the request header
      headers: {}
    @cache = {}
    @cards = new CardMethods @
    @lifeStreams = new LifeStreamMethods @
    @actions = new ActionsMethods @

  _cleanEndpoint: (endpoint) =>
    return null unless endpoint
    endpoint.replace /\/+$/, ""

  _handleResult: (res, bodyBeforeJson, callback) =>
    return callback new Error("Forbidden") if res.statusCode is 401 or res.statusCode is 403
    body = null
    if bodyBeforeJson and bodyBeforeJson.length > 0
      try
        body = JSON.parse(bodyBeforeJson)
      catch e
        return callback(new Error("Invalid Body Content"), bodyBeforeJson, res.statusCode)
    return callback(new Error(if body then body.message else "Request failed.")) unless res.statusCode >= 200 && res.statusCode < 300
    callback null, body, res.statusCode

  _reqWithData: (method, path, params, data, headers = {}, actor, callback) =>
    headers['Content-Type'] = 'application/json' if data
    headers['Accept'] = 'application/json'
    headers['authorization'] = "Bearer #{@options.bearerToken}" if @options.bearerToken
    headers['X-ClientId'] = @options.clientId if @options.clientId
    # Use method override (AWS ELB problems) unless told not to do so
    if (not config.get('clients:useRealHTTPMethods')) and method not in ['POST', 'PUT']
      headers['x-http-method-override'] = method
      method = 'POST'
    _.extend headers, @options.headers
    uri = "#{@endpoint}#{path}"
    # console.log "making #{method} request to #{uri} with headers", headers
    request
      uri: uri
      headers: headers
      body: if data then JSON.stringify data else null
      method: method
      timeout: 30*60*1000
    , (err, res, body) =>
      if err
        err.status = if res && res.statusCode then res.statusCode else 503
        return callback(err)
      @_handleResult res, body, callback

To be honest, CoffeeScript isn't my strong point, so I can't really comment on the code.
However, I can give you some thoughts: in what we are working on, we use nano to connect to Cloudant and we're seeing up to 200 requests/s into Cloudant from a micro AWS instance. So you are right, node should be up to it.
Try using request https://github.com/mikeal/request if you're not already. (I don't think it will make a difference, but it's nevertheless worth a try, as that is what nano uses.)
These are the areas I would look into:
The server doesn't deal well with multiple requests and throttles them. Have you run any performance tests against your server (see the quick probe sketched after this list)? If it can't handle the load for some reason, or your requests are throttled in the OS, then it doesn't matter what your client does.
Your client code has a long-running function somewhere which prevents node from processing any responses you get back from the server. Perhaps one specific response causes a response callback to spend far too long.
Are the endpoints all different servers/hosts?
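If it helps, here is the kind of quick probe I mean for the first point (a rough sketch using only core http; the URL and request count are placeholders):
var http = require('http');

var url = 'http://localhost:3000/some-endpoint'; // placeholder URL
var total = 50;
var finished = 0;
var started = Date.now();

// Fire all requests at once and log how long each takes, to see whether
// responses come back steadily or in bursts (server or OS throttling).
for (var i = 0; i < total; i++) {
  http.get(url, function (res) {
    res.resume(); // drain the body so the socket is released
    res.on('end', function () {
      finished++;
      console.log('response ' + finished + ' after ' + (Date.now() - started) + ' ms');
    });
  }).on('error', function (err) {
    console.error('request error:', err.message);
  });
}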

Related

Why is my node.js server chopping off the response body?

My node server behaves strangely when it comes to a GET endpoint that replies with a big JSON (30-35 MB).
I am not using any npm package. Just the core API.
The unexpected behaviour only happens when querying the server from the Internet and it behaves fine if it is queried from the local network.
The problem is that the server stops writing to the response after it writes the first 1260 bytes of the content body. It does not close the connection nor throw an error. Insomnia (the REST client I use for testing) just states that it received a 1260B chunk. If I query the same endpoint from a local machine it says that it received more and bigger chunks (a few KB each).
I don't even think the problem is caused by node, but since I am on a clean Raspberry Pi (installed Raspbian and then just node v13.0.1) and the only process I run is node.js, I don't know where else to look for the source of the problem: there is no load balancer or web server to blame. Also the public IP seems OK; every other endpoint is working fine (they reply with less than 1260 B per request).
The code for that endpoint looks like this
const text = url.parse(req.url, true).query.text;
if (text.length > 4) {
  let results = await models.fullTextSearch(text);
  results = await results.map(async result => {
    result.Data = await models.FindData(result.ProductID, 30);
    return result;
  });
  results = await Promise.all(results);
  results = JSON.stringify(results);
  res.writeHead(200, {'Content-Type': 'application/json', 'Transfer-Encoding': 'chunked', 'Access-Control-Allow-Origin': '*', 'Cache-Control': 'max-age=600'});
  res.write(results);
  res.end();
  break;
}
res.writeHead(403, {'Content-Type': 'text/plain', 'Access-Control-Allow-Origin': '*'});
res.write("You made an invalid request!");
break;
Here are a number of things to do in order to debug this:
Add console.log(results.length) to make sure the length of the data is what you expect it to be.
Add a callback to res.end(function() { console.log('finished sending response')}) to see if the http library thinks it is done sending the response.
Check the return value from res.write(). If it is false (indicating that not all data has yet been sent), add a handler for the drain event and see if it gets called (see the sketch after this list).
Try increasing the sending timeout with res.setTimeout() in case it's just taking too long to send all the data.
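A rough sketch of the second and third checks combined (same results variable as in your handler; the extra headers are trimmed here for brevity):
res.writeHead(200, {'Content-Type': 'application/json'}); // other headers omitted here

const flushed = res.write(results); // false means the chunk was buffered, not yet flushed
console.log('res.write flushed immediately:', flushed);

if (!flushed) {
  res.once('drain', () => {
    console.log('drain fired - the socket buffer emptied and writing can continue');
  });
}

res.end(() => {
  console.log('finished sending response'); // the http layer believes it is done
});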

Nodejs proxy request coalescing

I'm running into an issue with my http-proxy-middleware setup. I'm using it to proxy requests to another service which, for example, might resize images and the like.
The problem is that multiple clients might call the method multiple times and thus create a stampede on the original service. I'm now looking into a solution (what some services, such as Varnish, call request coalescing) that would call the service once, wait for the response, 'queue' the incoming requests with the same signature until the first is done, and then answer them all in a single go... This is different from 'caching' results, because I want to prevent calling the backend multiple times simultaneously, not necessarily cache the results.
I'm trying to find out whether something like this goes by a different name, or whether I'm missing something that others have already solved somewhere... but I can't find anything...
As the use case seems pretty 'basic' for a reverse-proxy type setup, I would have expected a lot of hits in my searches, but since the problem space is pretty generic I'm not getting anything...
Thanks!
A colleague of mine helped me hack together my own answer. It's currently used as an (express) middleware for specific GET endpoints: it hashes the request into a map and starts a new, separate request to the backend. Concurrent incoming requests are hashed, matched against the map, and attached to that pending request's callback, so the single backend response is reused for all of them. This also means that if the first response is particularly slow, all coalesced requests are too.
This seemed easier than hacking it into http-proxy-middleware, but oh well, it got the job done :)
const axios = require('axios');

// Map of in-flight requests: query hash -> array of Express responses waiting on it
const responses = {};

module.exports = (req, res) => {
  const queryHash = `${req.path}/${JSON.stringify(req.query)}`;
  if (responses[queryHash]) {
    // A request with the same signature is already in flight; just wait for it.
    console.log('re-using request', queryHash);
    responses[queryHash].push(res);
    return;
  }
  console.log('new request', queryHash);
  const axiosConfig = {
    method: req.method,
    url: `[the original backend url]${req.path}`,
    params: req.query,
    headers: {}
  };
  if (req.headers.cookie) {
    axiosConfig.headers.Cookie = req.headers.cookie;
  }
  responses[queryHash] = [res];
  axios.request(axiosConfig).then((axiosRes) => {
    // Answer every response that piled up while the backend call was running.
    responses[queryHash].forEach((coalescingRequest) => {
      coalescingRequest.json(axiosRes.data);
    });
    responses[queryHash] = undefined;
  }).catch((err) => {
    responses[queryHash].forEach((coalescingRequest) => {
      coalescingRequest.status(500).json(false);
    });
    responses[queryHash] = undefined;
  });
};

Nodejs - make requests wait on lazy init of an object

Let's say I have an application that returns exchange rates for today.
The service should read the data via REST, save it in a cache, and serve clients from this cache. I want the request to the 3rd-party API to happen upon the first attempt to get today's rate (a kind of lazy init for every day).
Something like this:
(1) HTTP Request to get rate (form my app's client)
(2) if rate for today is available then return it
else
(3) read it from 3rd party service (via REST request)
(4) save in cache
(5) return from cache
The whole logic is written by means of promises, but there is a problem if I have millions of requests simultaneously at the very beginning of the day. In this case, if one of the requests is on operation (3), (4) or (5) (which are organized as a promise chain), operations (1) and (2) for other requests can be handled by node in between.
E.g. while the first request is still waiting for the 3rd-party API to respond and the cache is empty, a million other requests can also fire the same request to the same 3rd-party API.
My thought is to chain operation (3) to some kind of an object A with the promise ( A.promise) inside that exposes resolve function to A. All other requests would wait (not synchronously wait of course) till the first request updates the cache and calls A.resolve() which will resolve A.promise.
But it looks a bit ugly, any idea of a better approach?
Update
I've got one solution, not sure whether it's node.js style:
function Deferred() {
  this.promise = false;
  this.markInProgress = () => {
    this.promise = new Promise((res, rej) => {
      this.resolve = res;
      this.reject = rej;
    });
  };
  this.markDone = () => {
    this.resolve();
    this.promise = false;
  };
  // Exposed as a function so it reflects the current state rather than the
  // value of this.promise at construction time.
  this.isInProgress = () => this.promise;
}

let state = new Deferred();

function updateCurrencyRate() {
  return db.any(`select name from t_currency group by name`)
    .then((currencies) => {
      return getRateFromCbr()
        .then(res => Promise.all(
          currencies.map((currency, i, currencies) =>
            saveCurrency(
              currency.name,
              parseRate(res, currency.name)))));
    });
}

function loadCurrencyRateFroDate(date) {
  if (state.isInProgress()) {
    return state.promise;
  } else {
    state.markInProgress();
    return updateCurrencyRate()
      .then(() => {
        state.markDone();
      });
  }
}

function getCurrencyRateForDate(date) {
  return getCurrencytRateFromDb(date)
    .then((rate) => {
      if (rate[0]) {
        return Promise.resolve(rate);
      } else {
        return loadCurrencyRateFroDate(date)
          .then(() => getCurrencytRateFromDb(date));
      }
    });
}
I would take a very simple queue, flush and fallback approach to this.
Implement a queueing mechanism (maybe with RabbitMQ) and route all your requests to the queue. This way you can hold off responding to requests when the cache expires.
Create an expirable cache layer (maybe a Redis cache) and expire your cache every day.
By default, route your requests from the queue to get data from the cache. On receiving the data from the cache, if the cache has expired, hold the queue, get the data directly from the 3rd party, and update your cache and its expiry.
Flush your cache every day.
With queues, you have better control over the traffic. You can also add the 3rd-party API call as a fallback way to get data when your cache fails or anything else goes wrong.
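If a full queueing setup is more than you need, the same "hold everyone until the single refresh finishes" idea can also be sketched in-process by caching the in-flight promise itself (a minimal sketch; fetchTodaysRatesFromProvider is a hypothetical stand-in for your 3rd-party REST call):
let cached = { day: null, promise: null };

function getTodaysRates() {
  const today = new Date().toISOString().slice(0, 10); // e.g. "2016-01-05"
  if (cached.day !== today) {
    // First caller of the day (or after a failure): start one refresh and share it.
    cached = {
      day: today,
      promise: fetchTodaysRatesFromProvider() // hypothetical call returning a promise
        .catch((err) => {
          cached = { day: null, promise: null }; // don't cache failures
          throw err;
        })
    };
  }
  // Everyone who arrives while the refresh is in flight gets the same promise.
  return cached.promise;
}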

Timeout in node.js request

I wonder how the node.js request module works with regard to the timeout parameter.
What happens after the timeout period has passed? I.e.:
var request = require('request');
var options = {
  url: Theurl,
  timeout: 300000
};
request(options, function(error, resp, body) {...
What happens after 300000 ms? Does request try to request the URL again or not?
I also found that Linux Kernel have a default 20 sec TCP socket connection timeout. (http://www.sekuda.com/overriding_the_default_linux_kernel_20_second_tcp_socket_connect_timeout)
Does it mean that the timeout option in request will be 20 sec at most (if I don't change the Linux kernel timeout), regardless of what I set in options?
I use Ubuntu.
From the readme of the request package:
Note that if the underlying TCP connection cannot be established,
the OS-wide TCP connection timeout will overrule the timeout option
So in your case, if the underlying TCP connection cannot be established, the request will be aborted after roughly 20 sec (the OS default). The request won't try to request the URL again (even if the timeout is set to a lower value than 20000). You would have to write your own logic for this or use another package, such as requestretry.
Example:
var options = {
  url: 'http://www.gooooerererere.com/',
  timeout: 5000
}

var maxRequests = 5;

function requestWithTimeout(attempt){
  request(options, function(error, response, body){
    if(error){
      console.log(error);
      if(attempt == maxRequests)
        return;
      else
        requestWithTimeout(attempt + 1);
    }
    else {
      //do something with result
    }
  });
}

requestWithTimeout(1);
You can also check for a specific error message, such as ETIMEDOUT, with
if(error.code == [ERROR_MESSAGE])
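For example, a minimal sketch of that check inside the request callback (reusing the options object from above):
request(options, function (error, response, body) {
  if (error) {
    if (error.code === 'ETIMEDOUT') {
      console.log('request timed out'); // decide here whether to retry, back off, or give up
    } else {
      console.log('request failed:', error.code);
    }
    return;
  }
  // handle response/body as usual
});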
request returns an error with the error code set as stated in the request readme (timeout section).
Take a look at TIME_WAIT details.
But yes, the kernel will cut it down with its configuration. As stated in your link, you can change it by changing tcp_syn_retries.
If a timeout happens, your callback function will be executed with error set to the message 'Error: ETIMEDOUT'.
This little project https://github.com/FGRibreau/node-request-retry provides a ready-to-use, configured wrapper for making retries triggered by many connection error codes, timeout included.

Node http requests are executing out of order causing problems with an API using a nonce value

I am making http requests to an external API that requires each request to have an ever increasing nonce value.
The problem I am experiencing is that even though the requests are submitted in order, they are not getting popped off the call stack in order (presumably). I am using the request library. A portion of my helper method looks like this:
Api.prototype.makeRequest = function(path, args, callback) {
  var self = this;
  var nonce = null;
  var options = null;

  // Create the key, signature, and nonce for API auth
  nonce = (new Date()).getTime() * 1000;
  args.key = self.key;
  args.signature = ( ... build signature ... );
  args.nonce = nonce;

  options = {
    url: path,
    method: 'POST',
    body: querystring.stringify(args)
  };

  request(options, function(err, resp, body) {
    console.log('My nonce is: ' + args.nonce);
    .
    .
    .
  });
};
The console log shows a nonce order that is not ever-increasing but jumbled, even though each request is necessarily created in order (I tested this by placing the console log before the request call). How do I enforce a certain order? Why isn't it already doing this? Any understanding would be much appreciated.
Because of the asynchronous nature of Node.js, if you make three HTTP requests with three different nonce values like the following:
GET http://example.com/?nonce=1
GET http://example.com/?nonce=2
GET http://example.com/?nonce=3
All three requests will happen concurrently. Whichever request gets a response back from the server first will be the first to complete (i.e. its callback will run).
You could use the functions of the async module, such as async.series or async.mapSeries, to ensure the requests run one at a time and therefore complete in order.
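For illustration, a minimal sketch with async.mapSeries (the host, paths and body format are placeholders, not your actual API or signature logic):
var async = require('async');
var request = require('request');

var paths = ['/v1/balance', '/v1/orders', '/v1/trades']; // placeholder paths

// mapSeries only starts the next request after the previous one's callback
// has fired, so the nonces reach the API in strictly increasing order.
async.mapSeries(paths, function (path, done) {
  var nonce = (new Date()).getTime() * 1000; // generated just before each request
  request({
    url: 'https://api.example.com' + path, // placeholder host
    method: 'POST',
    body: 'nonce=' + nonce                 // real code would also build the key/signature
  }, function (err, resp, body) {
    done(err, body);
  });
}, function (err, results) {
  if (err) return console.error(err);
  console.log('all responses, in request order:', results.length);
});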
