Increase Headers Timeout in express - node.js

In order to access a Swagger-UI-based API I wrote some code:
app.get('/getData', async (req, res) => {
  // getToken() already returns a promise; awaiting it directly is enough
  const token = await getToken();

  async function getData() {
    return fetch(dataurl, {
      method: 'GET',
      headers: {
        accept: 'application/json;charset=UTF-8',
        authorization: 'Bearer ' + token.access_token
      }
    })
      .then(res => res.json())
      .catch(error => console.error('Error:', error));
  }

  const result = await getData();
  res.json(result);
})
The issue I have is that some requests take about 10 minutes to finish, since the data being accessed is very large; it just takes that long and I can't change that.
But after exactly 300 seconds I get a "Headers Timeout Error" (UND_ERR_HEADERS_TIMEOUT).
I'm not sure where the 300 seconds come from. On the Swagger UI API the timeout is set to 600 seconds.
I think it's the default timeout from Express / Node.js.
const port = 3000
const server = app.listen(port,()=>{ console.log('Server started')})
server.requestTimeout = 610000
server.headersTimeout = 610000
server.keepAliveTimeout = 600000
server.timeout = 600000
As you can see, I tried to increase all the timeouts for Express to about 600 seconds, but nothing changes.
I also changed network.http.response.timeout in Firefox to 600 seconds.
But still, after 300 seconds, I get the "Headers Timeout Error".
Can anybody help me with where and how I can increase the timeout so the request goes through?

Have you tried using the connect-timeout library?
npm install connect-timeout
//...
var timeout = require('connect-timeout');
app.use(timeout('600s'));
Read more here: https://www.npmjs.com/package/connect-timeout#examples
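For what it's worth, UND_ERR_HEADERS_TIMEOUT is thrown by undici, the HTTP client behind Node's built-in fetch, and undici's default headersTimeout is 300000 ms, which would explain the exact 300 seconds. If that is the source here, the limit has to be raised on the outgoing fetch rather than on the Express server. A sketch (assumptions: the undici package is installed, and dataurl is the endpoint from the question):

```javascript
// Assumption: the 300 s limit comes from undici's default headersTimeout.
// Passing an Agent with a larger headersTimeout as the fetch dispatcher
// lifts the limit for that call.
const agentOptions = {
  headersTimeout: 610000, // undici default: 300000 ms (= the 300 s observed)
  bodyTimeout: 610000
};

// Requires: npm install undici
// const { fetch, Agent } = require('undici');
// const response = await fetch(dataurl, { dispatcher: new Agent(agentOptions) });
```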

How to space out (rate-limiting) outgoing axios requests originating from an express app handling requests received from a webhook?

Here is my issue:
I built a Node Express app that handles incoming requests from a webhook that sometimes sends dozens of packages in one second. Once a request has been processed, I need to make an API POST request using Axios with the transformed data.
Unfortunately this API has a rate limit of 2 requests per second.
I am looking for a way to build some kind of queuing system that accepts every incoming request and sends the outgoing requests at a limited rate of 2 requests per second.
I tried adding a delay with setTimeout, but it only delayed the load: when hundreds of requests were received, the handling of each of them was delayed by 10 seconds, but they were still being sent out at nearly the same time, just 10 seconds later.
I was thinking of logging the time of each outgoing request and only sending a new outgoing request if (time - timeOfLastRequest > 500ms), but I'm pretty sure this is not the right way to handle this.
Here is a very basic version of the code to illustrate my issue:
// API1 SOMETIMES RECEIVES DOZENS OF API CALLS PER SECOND
app.post("/api1", async (req, res) => {
  const data = req.body.object;
  const transformedData = await transformData(data);
  // API2 ONLY ACCEPTS 2 REQUESTS PER SECOND
  const resp = await sendToApi2WithAxios(transformedData);
})
Save this code as a data.js file.
You can replace the get call with your post call.
import axios from 'axios'

const delay = time => new Promise(res => setTimeout(res, time));

const delayCall = async () => {
  try {
    let [seconds, nanoseconds] = process.hrtime()
    console.log('Time is: ' + (seconds + nanoseconds / 1000000000) + ' secs')

    let resp = await axios.get(
      'https://worldtimeapi.org/api/timezone/Europe/Paris'
    );
    console.log(JSON.stringify(resp.data.datetime, null, 4));

    await delay(501);

    [seconds, nanoseconds] = process.hrtime()
    console.log('Time is: ' + (seconds + nanoseconds / 1000000000) + ' secs')

    resp = await axios.get(
      'https://worldtimeapi.org/api/timezone/Europe/Paris'
    );
    console.log(JSON.stringify(resp.data.datetime, null, 4))
  } catch (err) {
    console.error(err)
  }
};

delayCall()
In package.json:
{
  "dependencies": {
    "axios": "^1.2.1"
  },
  "type": "module"
}
Install and run it from the terminal:
npm install axios
node data.js
Result: this guarantees more than 501 ms between the API calls
$ node data.js
Time is: 3074.690104402 secs
"2022-12-07T18:21:41.093291+01:00"
Time is: 3077.166384501 secs
"2022-12-07T18:21:43.411450+01:00"
So your code becomes:
const delay = time => new Promise(res => setTimeout(res, time));

// API1 SOMETIMES RECEIVES DOZENS OF API CALLS PER SECOND
app.post("/api1", async (req, res) => {
  const data = req.body.object;
  const transformedData = await transformData(data);

  await delay(501);

  // API2 ONLY ACCEPTS 2 REQUESTS PER SECOND
  const resp = await sendToApi2WithAxios(transformedData);
})
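Note that a fixed delay inside each handler shifts every call by the same amount without spacing concurrent calls apart, which is what the question observed with setTimeout. A shared serial queue that drains one job every 500 ms would actually enforce the 2-per-second limit. A minimal sketch (sendToApi2WithAxios is the hypothetical sender from the question; one would call enqueue(() => sendToApi2WithAxios(transformedData)) in the route handler):

```javascript
// Minimal serial queue: jobs drain one at a time, with at least
// 500 ms between the start of consecutive jobs.
const queue = [];
let draining = false;

const delay = time => new Promise(res => setTimeout(res, time));

function enqueue(job) {
  return new Promise((resolve, reject) => {
    queue.push({ job, resolve, reject });
    drain();
  });
}

async function drain() {
  if (draining) return;
  draining = true;
  while (queue.length > 0) {
    const { job, resolve, reject } = queue.shift();
    try {
      resolve(await job());
    } catch (err) {
      reject(err);
    }
    await delay(500); // rate limit: at most 2 jobs per second
  }
  draining = false;
}
```

Each caller gets back a promise that settles when its own job has run, so the route handlers can still await their individual results.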

How to improve async node fetch with a synchronous internet connection?

I have a fast synchronous fibre connection at home. The speed is great for streams and large packages. However, multiple async node-fetch requests are very slow due to the connection overhead. I never have more than 2 concurrent fetches from my localhost, and with the connection overhead this takes roughly 1 second for every two fetches. I need more than a minute to process about 100 async fetches.
Via my 4G phone as hotspot this takes less than 2 seconds.
Is there any way to bundle fetches for synchronous internet connections?
I run this test case with Node 14:
const fetch = require('node-fetch')

const promises = []

for (var i = 0; i < 100; i++) {
  promises.push(fetch('https://geolytix.github.io/public/geolytix.svg'))
}

console.time('promise all')

Promise
  .all(promises)
  .then(arr => {
    console.log(arr.length)
    console.timeEnd('promise all')
  })
  .catch(error => {
    console.error(error)
  })
10 fetches over 4G take 0.2 seconds; 100 take 1 second.
Over my gigabit line, 10 fetch requests take 4 seconds, and 100 take 50 seconds.
The behaviour with axios.get() is exactly the same.
I was able to resolve this by using a custom agent for node-fetch. The custom agent keeps connections alive and allows 1 maxSockets. Increasing maxSockets will affect the performance with synchronous internet connections.
const https = require('https');

const httpsAgent = new https.Agent({
  keepAlive: true,
  maxSockets: 1
})

const options = {
  agent: httpsAgent
}

const getPromise = () => new Promise(resolve => {
  fetch('https://geolytix.github.io/public/geolytix.svg', options)
    .then(response => response.text())
    //.then(text => console.log(text))
    .then(() => resolve())
})

axios post socket hang up only in docker

I have a server that takes upwards of 10 minutes to begin responding to a request, and a Node.js client that uses axios to send a POST request to this server. I have configured the axios timeout to be 30 minutes.
When running the code directly on my machine, axios correctly waits for the response, and after 10 minutes I receive a 200 OK. When the same script is run in Docker (node:10 base image), after 5.5 minutes I receive a socket hang up error.
server:
const http = require('http')

const server = http.createServer(function (request, response) {
  if (request.method == 'POST') {
    var body = ''
    request.on('data', function (data) {
      body += data
      console.log('Partial body: ' + body)
    })
    request.on('end', function () {
      setTimeout(() => {
        response.writeHead(200, { 'Content-Type': 'text/html' })
        response.end('post received')
      }, 1000 * 60 * 10);
    });
  } else {
    response.writeHead(200, { 'Content-Type': 'text/plain' });
    response.end('Ok');
  }
})

const port = 5000
const host = '10.0.0.50'

server.setTimeout(1000 * 60 * 30);
server.listen(port, host)

console.log(`Listening at http://${host}:${port}`)
client:
let axios = require('axios');

async function run () {
  const axiosInstance = axios.create({
    baseURL: 'http://livedoc.transmissionmedia.ca/',
    timeout: (1000 * 60 * 30),
  });

  axiosInstance.defaults.timeout = (1000 * 60 * 30);

  console.log(`${new Date().toISOString()} - start`);

  const resp = await axiosInstance({
    method: 'post',
    url: `http://10.0.0.50:5000/weatherforecast`,
    timeout: 1000 * 60 * 30
  });

  console.log(`${new Date().toISOString()} - end`);
}

run()
  .then(() => { console.error('Succeeded!'); process.exit(0); })
  .catch(err => { console.error('Failed!'); console.error(err); process.exit(-1); });
When the client script is run in a docker container, I receive the following error after about 5.5 minutes:
{ Error: socket hang up
    at createHangUpError (_http_client.js:332:15)
    at Socket.socketOnEnd (_http_client.js:435:23)
    at Socket.emit (events.js:203:15)
    at endReadableNT (_stream_readable.js:1145:12)
    at process._tickCallback (internal/process/next_tick.js:63:19)
  code: 'ECONNRESET',
  ...
}
I don't know if this is the solution to your problem, but I was having a socket hang up issue with API calls only in Docker, running on Google Compute Engine.
When I executed the code directly it was fine; when I ran the same code in Docker, I would get socket hang up errors (ECONNRESET) if the payload was larger than some seemingly arbitrary, but not fixed, size.
In the end, the problem was that the MTU of the default docker bridge network didn't match the MTU of the host network.
I also tried --network=host, but I got the error "cannot run with network enabled in root network namespace: unknown", which apparently no-one else has ever seen.
In the end, I followed the advice here on how to configure my docker bridge network MTU:
https://mlohr.com/docker-mtu/
"With the command ip link you can display the locally configured network cards and their MTU:"
"If the outgoing interface (in this case ens3) has an MTU smaller than 1500, some action is required. If it is greater than or equal to 1500, this problem does not apply to you."
I edited my /etc/docker/daemon.json file to look like this:
{
  "runtimes": {
    "runsc": {
      "path": "/usr/bin/runsc"
    }
  },
  "mtu": 1460
}
where 1460 was the MTU of the host network interface, and the problem was solved!
Hope this helps you or someone else; it was doing my head in.
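For reference, the checks described above might look like this (a sketch; the interface name and MTU value depend on the host):

```shell
# Show the locally configured network interfaces and their MTU;
# look at the outgoing interface (e.g. ens3)
ip link

# After editing /etc/docker/daemon.json, restart the daemon
sudo systemctl restart docker

# Verify the bridge network's configured options
docker network inspect bridge --format '{{json .Options}}'
```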

angular 2 http get request getting called twice, when the size of data is large

I'm trying to fetch one month of history data using an Angular HTTP GET request. I'm using Angular 5; the server is in Node.js and the DB is MongoDB.
On the first click of the button it hits the server and a message is logged on the server console. After 2 minutes the server logs again, as if the UI hit the server, but the UI made no second call. After 1 more minute the server reports that it sent a successful response, but the UI console shows an error: "connection refused". I tried to resolve it using share from an observable, but it did not work.
This is my service.
getReports(params): Observable<number> {
  let headers = new Headers({ 'Content-Type': 'application/json' });
  let options = new RequestOptions({ headers: headers });

  return this.http.get(url, options)
    .map((res: Response) => res.json())
    .catch(this.handleError)
}
Try like below:
return this.http.get(url, options)
  .timeout(3000, new Error('timeout exceeded'))
  .map((res: Response) => res.json())
  .catch(this.handleError)
  .subscribe(
    data => this.data = data,
    error => console.debug('ERROR', error),
    () => console.log('END')
  );
Using the timeout operator this is possible; 3000 means 3000 ms, i.e. 3 seconds.

express - Limit a request to one at a time

I'm using Express to build an API that will be used internally. One of the requests triggers a heavy process on the server and should return a CSV from it. This process might take more than 10 minutes.
To avoid overloading the server I want to restrict calls to this API so that, while the process isn't finished, the same URL can't be requested again.
For this I tried to use express-rate-limit with the following configuration:
new RateLimit({
  windowMs: 30 * 60 * 1000, // 30 minutes
  max: 1,
  delayMs: 0, // disabled
  message: 'There is already a running execution of the request. You must wait for it to be finished before starting a new one.',
  handler: function handler(req, res) {
    logger.log('max request achieved');
    logger.log(res);
  },
});
But it seems that the max request count is reached every time after exactly 2 minutes, even if I only start one request. I suspect the browser retries the request after 2 minutes if it doesn't get an answer; is that possible?
I would like this request to have no retry strategy, so that the only way to reach the max request count is by manually asking the server to execute this request twice in a row.
Thanks.
Edit
Here is my full code:
const app = express();
const port = process.env.API_PORT || 3000;

app.enable('trust proxy');

function haltOnTimedout(req, res, next) {
  if (!req.timedout) { next(); }
}

app.use(timeout(30 * 60 * 1000)); // 30min
app.use(haltOnTimedout);

app.listen(port, () => {
  logger.log(`Express server listening on port ${port}`);
});

// BILLING
const billingApiLimiter = new RateLimit({
  windowMs: 30 * 60 * 1000, // 30 minutes
  max: 1,
  delayMs: 0, // disabled
  message: 'There is already a running execution of the request. You must wait for it to be finished before starting a new one.',
  handler: function handler(req, res) {
    logger.log('max request achieved');
  },
});

app.use('/billing', billingApiLimiter);
app.use('/billing', BillingController);
And the code of my route:
router.get('/billableElements', async (request, response) => {
  logger.log('Route [billableElements] called');

  const { startDate } = request.query;
  const { endDate } = request.query;

  try {
    const configDoc = await metadataBucket.getAsync(process.env.BILLING_CONFIG_FILE || 'CONFIG_BILLING');
    const billableElements = await getBillableElements(startDate, endDate, configDoc.value);
    const csv = await produceCSV(billableElements);
    logger.log('csv produced');
    response.status(200).send(`${csv}`);
  } catch (err) {
    logger.error('An error occurred while getting billable elements.', err);
    response.status(500).send('An internal error occurred.');
  }
});
I found the answer thanks to this GitHub issue: https://github.com/expressjs/express/issues/2512.
TL;DR: I added request.connection.setTimeout(1000 * 60 * 30); to avoid the request being fired again every 2 minutes.
But considering the code I wrote in my question, @Paul's advice is still worth taking into account.
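If the goal is strictly "one execution at a time" rather than a time window, a small in-flight flag middleware is another option. A sketch (singleFlight is a hypothetical helper, not part of express-rate-limit):

```javascript
// Allow a single concurrent execution of the guarded route:
// respond with 429 while a previous request is still running.
function singleFlight() {
  let busy = false;
  return function (req, res, next) {
    if (busy) {
      res.status(429).send('There is already a running execution of the request. You must wait for it to be finished before starting a new one.');
      return;
    }
    busy = true;
    // reset the flag when the response completes or the connection drops
    res.on('finish', () => { busy = false; });
    res.on('close', () => { busy = false; });
    next();
  };
}
```

Mounted with app.use('/billing', singleFlight()); a second concurrent request gets a 429 immediately, and the flag resets as soon as the first response finishes.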
