I'm using Express to build an API that will be used internally. One of the requests triggers a heavy process on the server and should return a CSV built from it. This process might take more than 10 minutes.
To avoid overloading the server, I want to restrict calls to this API so that, as long as the process hasn't terminated, the same URL can't be requested again.
For this I tried to use express-rate-limit with the following configuration:
new RateLimit({
  windowMs: 30 * 60 * 1000, // 30 minutes
  max: 1,
  delayMs: 0, // disabled
  message: 'There is already a running execution of the request. You must wait for it to be finished before starting a new one.',
  handler: function handler(req, res) {
    logger.log('max request achieved');
    logger.log(res);
    res.status(429).send('There is already a running execution of the request.'); // reply so the limited request does not hang
  },
});
But it seems that the 'max request' limit is reached after exactly 2 minutes every time, even if I start only one request. I suspect the browser retries the request after 2 minutes if it doesn't get any answer; is that possible?
I would like this request to have no retry strategy, so that the only way to reach the max request limit is by manually asking the server to execute this request twice in a row.
Thanks.
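A rate limiter's fixed time window is only an approximation of the real goal here (one execution at a time). For reference, a minimal sketch of an alternative, assuming a single Node process: middleware that keeps an in-flight flag and releases it when the response ends. The `singleFlight` name is hypothetical.

```javascript
// Hypothetical guard middleware: rejects a request while a previous one
// on the same route is still being processed, instead of using a window.
function singleFlight() {
  let running = false;
  return function (req, res, next) {
    if (running) {
      res.status(429).send('There is already a running execution of the request.'); // 429 = Too Many Requests
      return;
    }
    running = true;
    const release = () => { running = false; };
    res.on('finish', release); // response fully sent
    res.on('close', release);  // connection dropped
    next();
  };
}
```

Mounted like the limiter, e.g. `app.use('/billing', singleFlight())`, the lock clears as soon as the long request actually ends rather than after a fixed 30 minutes.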
Edit
Here is my full code:
const app = express();
const port = process.env.API_PORT || 3000;
app.enable('trust proxy');

function haltOnTimedout(req, res, next) {
  if (!req.timedout) { next(); }
}

app.use(timeout(30 * 60 * 1000)); // 30 min
app.use(haltOnTimedout);

app.listen(port, () => {
  logger.log(`Express server listening on port ${port}`);
});
// BILLING
const billingApiLimiter = new RateLimit({
  windowMs: 30 * 60 * 1000, // 30 minutes
  max: 1,
  delayMs: 0, // disabled
  message: 'There is already a running execution of the request. You must wait for it to be finished before starting a new one.',
  handler: function handler(req, res) {
    logger.log('max request achieved');
    res.status(429).send('There is already a running execution of the request.'); // reply so the limited request does not hang
  },
});
app.use('/billing', billingApiLimiter);
app.use('/billing', BillingController);
And the code of my route:
router.get('/billableElements', async (request, response) => {
  logger.log('Route [billableElements] called');
  const { startDate } = request.query;
  const { endDate } = request.query;
  try {
    const configDoc = await metadataBucket.getAsync(process.env.BILLING_CONFIG_FILE || 'CONFIG_BILLING');
    const billableElements = await getBillableElements(startDate, endDate, configDoc.value);
    const csv = await produceCSV(billableElements);
    logger.log('csv produced');
    response.status(200).send(`${csv}`);
  } catch (err) {
    logger.error('An error occurred while getting billable elements.', err);
    response.status(500).send('An internal error occurred.');
  }
});
I found the answer thanks to this GitHub issue: https://github.com/expressjs/express/issues/2512.
TL;DR: I added request.connection.setTimeout(1000 * 60 * 30); to stop the request from being fired again every 2 minutes.
But considering the code I wrote inside my question, @Paul's advice is still worth taking into account.
Related
Let's assume you have a Koa server with a POST route that takes ~3 minutes to return a response.
According to various sources, you set up the server timeout like this:
let app = new Koa();
let server = app.listen(3000);
server.timeout = 5 * 60 * 1000; // set to 5 minutes
On the client side, you set a timeout as well and send the POST request with it:
const request = await axios.post('https://somewebsite.herokuapp.com/testkeepalive', {}, {timeout:240000})
console.log('request is ', request)
Why, when sending the above, does the server still return a timeout error and not execute as intended?
Info about /testkeepalive
.post('/testkeepalive', async ctx => {
  console.log('request received')
  ctx.request.socket.setTimeout(5 * 60 * 1000)
  ctx.request.socket.setKeepAlive(true)
  await delay(3 * 60 * 1000) // we delay 3 minutes before returning the response
  ctx.body = "nice"
  ctx.status = 202
})
EDIT:
The code above is correct. The only issue was that I was using Heroku, which times out after 30 seconds. Moving the server elsewhere (e.g. AWS EC2, deploying with an HTTPS SSL certificate) made it work.
If you still want to use Heroku, you can only do it by implementing background jobs.
https://devcenter.heroku.com/articles/background-jobs-queueing
Axios is showing you ERR_NETWORK, so I think that means some intermediary (perhaps a proxy in your hosting environment) has closed the socket.
Below is a test app I wrote that lets me separately set the serverTimeout, the clientTimeout, and the serverDelay before sending a response. This lets you simulate either a client or a server timeout and see exactly what the axios response is. It's generic Express since I'm not a Koa person, but presumably it is the same http server object in either case.
With these values configured such that the serverDelay is less than both serverTimeout and clientTimeout:
const serverTimeout = 10 * 1000;
const clientTimeout = 8 * 1000;
const serverDelay = 7 * 1000;
Then, I get the appropriate response from the server (no error).
With these values configured such that the serverTimeout is shorter than the serverDelay:
const serverTimeout = 5 * 1000;
const clientTimeout = 8 * 1000;
const serverDelay = 7 * 1000;
Then, I get Error socket hang up ECONNRESET.
With these values configured such that the clientTimeout is shorter than the serverDelay or serverTimeout:
const serverTimeout = 10 * 1000;
const clientTimeout = 5 * 1000;
const serverDelay = 7 * 1000;
Then, I get Error timeout of 5000ms exceeded ECONNABORTED.
So, it all seems to be working for me with no extra infrastructure in between the client and server. So, that plus the ERR_NETWORK error you see from Axios makes me think that a proxy in your hosting infrastructure is responsible for the error.
Here's the code I used:
import express from 'express';
import axios from 'axios';

const serverTimeout = 10 * 1000;
const clientTimeout = 8 * 1000;
const serverDelay = 7 * 1000;

function delay(t) {
  return new Promise(resolve => {
    setTimeout(resolve, t);
  });
}

const app = express();

app.get("/timeout", async (req, res) => {
  await delay(serverDelay);
  res.send("Hello");
});

const server = app.listen(80);
server.timeout = serverTimeout;

try {
  let result = await axios.get("http://localhost/timeout", { timeout: clientTimeout });
  console.log("Got Result", result.data);
} catch (e) {
  console.log("Error", e.message, e.code);
}

server.close();
Note also that there is no need to set a timeout separately on each incoming socket. The server timeout itself will handle that for you.
Here is my issue:
I built a Node Express app that handles incoming requests from a webhook that sometimes sends dozens of payloads in one second. Once a request has been processed, I need to make an API POST request using Axios with the transformed data.
Unfortunately this API has a rate limit of 2 requests per second.
I am looking for a way to build some kind of queuing system that accepts every incoming request and sends the outgoing requests at a limited rate of 2 requests per second.
I tried adding a delay with setTimeout, but it only delayed the load: when hundreds of requests were received, the handling of each of them was delayed by 10 seconds, but they were still being sent out at nearly the same time, just 10 seconds later.
I was thinking of logging the time of each outgoing request and only sending a new one if (time - timeOfLastRequest > 500ms), but I'm pretty sure this is not the right way to handle this.
Here is a very basic version of the code to illustrate my issue:
// API1 SOMETIMES RECEIVES DOZENS OF API CALLS PER SECOND
app.post("/api1", async (req, res) => {
  const data = req.body.object;
  const transformedData = await transformData(data);
  // API2 ONLY ACCEPTS 2 REQUESTS PER SECOND
  const resp = await sendToApi2WithAxios(transformedData);
})
Save this code as a data.js file. You can replace the get call with your post call.
import axios from 'axios'

const delay = time => new Promise(res => setTimeout(res, time));

const delayCall = async () => {
  try {
    let [seconds, nanoseconds] = process.hrtime()
    console.log('Time is: ' + (seconds + nanoseconds / 1000000000) + ' secs')
    let resp = await axios.get(
      'https://worldtimeapi.org/api/timezone/Europe/Paris'
    );
    console.log(JSON.stringify(resp.data.datetime, null, 4));

    await delay(501);

    [seconds, nanoseconds] = process.hrtime()
    console.log('Time is: ' + (seconds + nanoseconds / 1000000000) + ' secs')
    resp = await axios.get(
      'https://worldtimeapi.org/api/timezone/Europe/Paris'
    );
    console.log(JSON.stringify(resp.data.datetime, null, 4))
  } catch (err) {
    console.error(err)
  }
};

delayCall()
In package.json:
{
  "dependencies": {
    "axios": "^1.2.1"
  },
  "type": "module"
}
Install and run it from terminal
npm install axios
node data.js
Result: it guarantees more than 501 ms between the API calls.
$ node data.js
Time is: 3074.690104402 secs
"2022-12-07T18:21:41.093291+01:00"
Time is: 3077.166384501 secs
"2022-12-07T18:21:43.411450+01:00"
So your code becomes:
const delay = time => new Promise(res => setTimeout(res, time));

// API1 SOMETIMES RECEIVES DOZENS OF API CALLS PER SECOND
app.post("/api1", async (req, res) => {
  const data = req.body.object;
  const transformedData = await transformData(data);

  await delay(501);

  // API2 ONLY ACCEPTS 2 REQUESTS PER SECOND
  const resp = await sendToApi2WithAxios(transformedData);
})
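One caveat: a delay inside each handler spaces out that handler's own calls, but two requests arriving at the same moment will still fire at the same moment, just 501 ms later. A sketch of a shared queue that actually serializes the outgoing calls across concurrent handlers (`enqueue` is a hypothetical helper; `sendToApi2WithAxios` is the call from the question):

```javascript
const delay = time => new Promise(res => setTimeout(res, time));

// Module-level promise chain shared by every incoming request, so
// concurrent handlers line up instead of delaying independently.
let queue = Promise.resolve();

function enqueue(task) {
  const result = queue.then(task);
  // Whether the task succeeds or fails, wait 501 ms before the next one.
  queue = result.then(() => delay(501), () => delay(501));
  return result;
}
```

The route would then call `const resp = await enqueue(() => sendToApi2WithAxios(transformedData));`, so every outgoing request waits for the previous one plus the 501 ms gap.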
In order to access a Swagger-UI-based API I wrote some code:
app.get('/getData', async (req, res) => {
  const token = await getToken();
  async function getData() {
    return fetch(dataurl, {
      method: 'GET',
      headers: {
        accept: 'application/json;charset=UTF-8',
        authorization: 'Bearer ' + token.access_token
      }
    })
      .then(res => res.json())
      .catch(error => console.error('Error:', error));
  }
  const result = await getData();
  res.json(result);
})
The issue I have is that some requests will take about 10 minutes to finish since the data that gets accessed is very large and it just takes that time. I can't change that.
But after exactly 300 seconds I get "Headers Timeout Error" (UND_ERR_HEADERS_TIMEOUT).
I'm not sure where the 300 seconds come from. On the Swagger UI API the time is set to 600 seconds.
I think it's the standard timeout from express / NodeJS.
const port = 3000
const server = app.listen(port,()=>{ console.log('Server started')})
server.requestTimeout = 610000
server.headersTimeout = 610000
server.keepAliveTimeout = 600000
server.timeout = 600000
As you can see, I tried to increase all the Express timeouts to about 600 seconds, but nothing changes.
I also changed the network.http.response.timeout in Firefox to 600 seconds.
But still after 300 seconds I get "Headers Timeout Error".
Can anybody help me where and how I can increase the timeout for the request to go through?
Have you tried using the connect-timeout library?
npm install connect-timeout
//...
var timeout = require('connect-timeout');
app.use(timeout('600s'));
Read more here: https://www.npmjs.com/package/connect-timeout#examples
I have a Google Cloud Function that never returns; it just hits the Cloud Functions timeout limit. It works fine locally, in under 60 seconds. Not sure what the issue might be. Code is below:
/**
 * Responds to any HTTP request.
 *
 * @param {!express:Request} req HTTP request context.
 * @param {!express:Response} res HTTP response context.
 */
const {Storage} = require('@google-cloud/storage');

exports.main = async (req, res) => {
  const storage = new Storage({projectId: 'our-project'});
  const store = storage.bucket('our-bucket');
  const incomplete = {
    LT04: [],
    LT05: [],
    LE07: [],
    LC08: []
  };
  store.getFilesStream({prefix: 'prefixToMatch', autoPaginate: true})
    .on('error', (err) => {
      return console.error(err.toString())
    })
    .on('data', (file) => {
      // Find small/bad files
      if (file.metadata.size === 162) {
        const split = file.name.split('/');
        const prefix = split[2].substr(0, 4);
        incomplete[prefix].push(file.name);
      }
    })
    .on('end', () => {
      return JSON.stringify(incomplete, false, ' ');
    });
};
Your code seems OK, but you need to take into account some additional details.
Is your Cloud Function's memory enough for this? You could increase the memory allocated to your CF.
Are you sure this is due to a timeout issue? If you haven't seen the logs yet, you can check them in the Error Reporting section.
If you have already confirmed the timeout, another option is to increase the timeout duration.
I think the issue was that I needed to call res.send instead of returning a Promise.resolve. I also needed to remove the async before the function.
Thanks for the quick response with guidelines; the error was simpler than that, apparently.
In my application, I would like to be able to catch the message produced by the express-rate-limit package. Below is an example of the code I have. I would like to catch the message part with middleware so I can post-process it (in this case I have multiple languages).
const apiCreatingAccountLimiter = rateLimit({
  windowMs: 10 * 60 * 1000, // 10 minutes
  max: 10, // limit each IP to 10 requests per windowMs
  message: {
    limiter: true,
    type: "error",
    message: 'maximum_accounts'
  }
});
and then
router.post('/signup', apiCreatingAccountLimiter, (req, res, next) => {
  // handling the post request
})
I have a similar solution middleware setup for some of my other API messages:
// error processing middleware
app.use((err, req, res, next) => {
  const statusCode = err.statusCode || 500;
  res.status(statusCode).send({
    type: 'error',
    message: err.message,
    fields: err.fields === '' ? '' : err.fields,
    code: err.code === '' ? '' : err.code,
    section: err.section === '' ? 'general' : err.section
  });
});
However, when trying to read a message from the express-rate-limit package it does not seem to be passing via this middleware at all. I guess it's because it happens before it can even reach any API and trigger this middleware.
Looking at the res part passing through, I can see there is an object with the following data:
rateLimit: {
  limit: 10,
  current: 10,
  remaining: 0,
  resetTime: 2019-10-21T12:35:46.919Z
},
But that does not seem to be transporting the message object that is set at the very top in the apiCreatingAccountLimiter. I wonder how could I get to it?
Does anyone know how this can be done? I do not want those messages to be translated on the front end. I need the translation to happen on the NodeJS server. I am only interested in the middleware part where I can catch the message and post-process it.
Reading the source code, instead of using another middleware you should play with the handler option.
const apiCreatingAccountLimiter = rateLimit({
  windowMs: 10 * 60 * 1000, // 10 minutes
  max: 10, // limit each IP to 10 requests per windowMs
  message: "my initial message",
  handler: function(req, res /*, next*/) {
    var myCustomMessage = require('anotherModuleYouWannaUse_ForExemple');
    res.status(429).send(myCustomMessage); // 429 = Too Many Requests; `options` is not in scope here
  },
});
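A concrete variant of that idea, testable in isolation (the translations table is a hypothetical stand-in for the real server-side i18n module): the handler picks the translated message from the request before sending it.

```javascript
// Hypothetical translations table; in a real app this would come from
// the server-side i18n module.
const translations = {
  en: 'You have created too many accounts. Please wait 10 minutes.',
  fr: 'Vous avez créé trop de comptes. Veuillez patienter 10 minutes.'
};

// Same shape as the express-rate-limit handler(req, res) callback.
function rateLimitHandler(req, res) {
  const lang = (req.headers['accept-language'] || 'en').slice(0, 2);
  const message = translations[lang] || translations.en;
  res.status(429).send({ type: 'error', message });
}
```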
Below you'll find an extract of the package's source code:
function RateLimit(options) {
  options = Object.assign(
    {
      windowMs: 60 * 1000, // milliseconds - how long to keep records of requests in memory
      max: 5, // max number of recent connections during `window` milliseconds before sending a 429 response
      message: "Too many requests, please try again later.",
      statusCode: 429, // 429 status = Too Many Requests (RFC 6585)
      headers: true, // Send custom rate limit header with limit and remaining
      skipFailedRequests: false, // Do not count failed requests (status >= 400)
      skipSuccessfulRequests: false, // Do not count successful requests (status < 400)
      // allows to create custom keys (by default user IP is used)
      keyGenerator: function(req /*, res*/) {
        return req.ip;
      },
      skip: function(/*req, res*/) {
        return false;
      },
      handler: function(req, res /*, next*/) {
        res.status(options.statusCode).send(options.message);
      },
      onLimitReached: function(/*req, res, optionsUsed*/) {}
    },
    options
  );
  // ...
}
I found that my frontend was only able to catch the message if I set the statusCode to 200, even though technically it should be a 429. So try this instead:
const apiCreatingAccountLimiter = rateLimit({
  windowMs: 10 * 60 * 1000, // 10 minutes
  max: 10, // limit each IP to 10 requests per windowMs
  statusCode: 200,
  message: {
    status: 429, // optional, of course
    limiter: true,
    type: "error",
    message: 'maximum_accounts'
  }
});
I did my best to match what you already had. Personally mine just looks like this basically:
const loginRatelimiter = rateLimit({
  windowMs: 6 * 60 * 1000,
  max: 10,
  statusCode: 200,
  message: {
    status: 429,
    error: 'You are doing that too much. Please try again in 10 minutes.'
  }
})
Then, on my frontend I just check for res.data.error when the response comes in, and display that to the user if it exists.