Let's assume you have a server in Koa and you have a POST route that takes ~3 minutes to return a response.
According to various sources, you set up the server timeout like this:
let app = new Koa();
let server = app.listen(3000);
server.timeout = 5 * 60 * 1000; // set to 5 minutes
On the client side, you set a timeout as well and send the POST request with it:
const request = await axios.post('https://somewebsite.herokuapp.com/testkeepalive', {}, {timeout:240000})
console.log('request is ', request)
Why, when sending the above, does the server still return a timeout error instead of completing as intended?
Info about /testkeepalive:
.post('/testkeepalive', async ctx => {
  console.log('request received')
  ctx.request.socket.setTimeout(5 * 60 * 1000)
  ctx.request.socket.setKeepAlive(true)
  await delay(3 * 60 * 1000) // we delay 3 minutes before returning the response
  ctx.body = "nice"
  ctx.status = 202
})
Client-side error: Axios reports ERR_NETWORK.
EDIT:
The code above is correct. The only issue was that I was using Heroku, which times requests out after 30 seconds. Moving the server elsewhere (e.g. AWS EC2, deploying the server with an HTTPS/SSL certificate) made it work.
If you still want to use Heroku, the only way to do it is by implementing background jobs:
https://devcenter.heroku.com/articles/background-jobs-queueing
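For anyone staying on Heroku, here is a minimal sketch of that background-job pattern on top of the route from the question (hypothetical names, in-memory job store; a real setup would use a worker process and a queue as the article describes):
// Sketch only: respond immediately with 202 and a job id, do the slow work in
// the background, and let the client poll for the result.
const { randomUUID } = require('crypto')

const delay = t => new Promise(resolve => setTimeout(resolve, t)) // same helper as in the question
const jobs = new Map() // jobId -> { status, result }

router.post('/testkeepalive', async ctx => {
  const jobId = randomUUID()
  jobs.set(jobId, { status: 'pending', result: null })

  // fire-and-forget the long-running work (3 minutes in the original example)
  delay(3 * 60 * 1000).then(() => {
    jobs.set(jobId, { status: 'done', result: 'nice' })
  })

  // returns well within Heroku's 30-second router timeout
  ctx.status = 202
  ctx.body = { jobId }
})

// the client polls this until status is 'done'
router.get('/testkeepalive/:jobId', async ctx => {
  ctx.body = jobs.get(ctx.params.jobId) || { status: 'unknown' }
})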
Axios is showing you ERR_NETWORK, so I think that means some intermediary (perhaps a proxy in your hosting environment) has closed the socket.
Below is a test app I wrote that lets me separately set the serverTimeout, the clientTimeout and the serverDelay before sending a response. This allows you to simulate either a client or a server timeout and see exactly what the axios response is. It's generic Express since I'm not a Koa person, but presumably it is the same http server object in either case.
With these values configured such that the serverDelay is less than both serverTimeout and clientTimeout:
const serverTimeout = 10 * 1000;
const clientTimeout = 8 * 1000;
const serverDelay = 7 * 1000;
Then, I get the appropriate response from the server (no error).
With these values configured such that the serverTimeout is shorter than the serverDelay:
const serverTimeout = 5 * 1000;
const clientTimeout = 8 * 1000;
const serverDelay = 7 * 1000;
Then, I get Error socket hang up ECONNRESET.
With these values configured such that the clientTimeout is shorter than the serverDelay or serverTimeout:
const serverTimeout = 10 * 1000;
const clientTimeout = 5 * 1000;
const serverDelay = 7 * 1000;
Then, I get Error timeout of 5000ms exceeded ECONNABORTED.
So it all seems to be working for me with no extra infrastructure between the client and server. That, plus the ERR_NETWORK error you see from Axios, makes me think that a proxy in your hosting infrastructure is responsible for the error.
Here's the code I used:
import express from 'express';
import axios from 'axios';
const serverTimeout = 10 * 1000;
const clientTimeout = 8 * 1000;
const serverDelay = 7 * 1000;
function delay(t) {
    return new Promise(resolve => {
        setTimeout(resolve, t);
    });
}

const app = express();

app.get("/timeout", async (req, res) => {
    await delay(serverDelay);
    res.send("Hello");
});

const server = app.listen(80);
server.timeout = serverTimeout;

try {
    let result = await axios.get("http://localhost/timeout", { timeout: clientTimeout });
    console.log("Got Result", result.data);
} catch (e) {
    console.log("Error", e.message, e.code);
}
server.close();
Note also that there is no need to set a timeout separately on each incoming socket. The server timeout itself will handle that for you.
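In Koa terms that means the route from the question can drop the per-socket calls entirely; a minimal sketch, assuming the same router and delay helper as in the question:
// with server.timeout raised on the http.Server returned by app.listen(),
// the per-socket setTimeout/setKeepAlive calls are unnecessary
router.post('/testkeepalive', async ctx => {
  await delay(3 * 60 * 1000) // still a 3-minute delay before responding
  ctx.body = "nice"
  ctx.status = 202
})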
Related
Here is my issue:
I built a Node Express app that handles incoming requests from a webhook that sometimes sends dozens of packages in one second. Once a request has been processed, I need to make an API POST request using Axios with the transformed data.
Unfortunately this API has a rate limit of 2 requests per second.
I am looking for a way to build some kind of queuing system that accepts every incoming request and sends the outgoing requests at a limited rate of 2 requests per second.
I tried adding a delay with setTimeout, but it only delayed the load: when hundreds of requests were received, the handling of each of them was delayed by 10 seconds, but they were still being sent out at nearly the same time, just 10 seconds later.
I was thinking of logging the time of each outgoing request and only sending a new outgoing request if (time - timeOfLastRequest > 500ms), but I'm pretty sure this is not the right way to handle this.
Here is a very basic version of the code to illustrate my issue:
// API1 SOMETIMES RECEIVES DOZENS OF API CALLS PER SECOND
app.post("/api1", async (req, res) => {
  const data = req.body.object;
  const transformedData = await transformData(data);
  // API2 ONLY ACCEPTS 2 REQUESTS PER SECOND
  const resp = await sendToApi2WithAxios(transformedData);
})
Save this code as a data.js file.
You can replace the GET call with your POST call.
import axios from 'axios'
const delay = time => new Promise(res=>setTimeout(res,time));
const delayCall = async () => {
  try {
    let [seconds, nanoseconds] = process.hrtime()
    console.log('Time is: ' + (seconds + nanoseconds / 1000000000) + ' secs')
    let resp = await axios.get(
      'https://worldtimeapi.org/api/timezone/Europe/Paris'
    );
    console.log(JSON.stringify(resp.data.datetime, null, 4));
    await delay(501);
    [seconds, nanoseconds] = process.hrtime()
    console.log('Time is: ' + (seconds + nanoseconds / 1000000000) + ' secs')
    resp = await axios.get(
      'https://worldtimeapi.org/api/timezone/Europe/Paris'
    );
    console.log(JSON.stringify(resp.data.datetime, null, 4))
  } catch (err) {
    console.error(err)
  }
};

delayCall()
In package.json:
{
  "dependencies": {
    "axios": "^1.2.1"
  },
  "type": "module"
}
Install and run it from the terminal:
npm install axios
node data.js
Result: it guarantees more than 501 ms between the API calls.
$ node data.js
Time is: 3074.690104402 secs
"2022-12-07T18:21:41.093291+01:00"
Time is: 3077.166384501 secs
"2022-12-07T18:21:43.411450+01:00"
So your code becomes:
const delay = time => new Promise(res => setTimeout(res, time));

// API1 SOMETIMES RECEIVES DOZENS OF API CALLS PER SECOND
app.post("/api1", async (req, res) => {
  const data = req.body.object;
  const transformedData = await transformData(data);
  await delay(501);
  // API2 ONLY ACCEPTS 2 REQUESTS PER SECOND
  const resp = await sendToApi2WithAxios(transformedData);
})
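One caveat: if many webhook calls arrive at once, a per-request delay like the one above spaces each handler out individually but not relative to the others, which is exactly the behaviour described in the question. Below is a minimal, untested sketch of one way to serialize the outgoing calls through a shared promise chain (reusing the transformData and sendToApi2WithAxios helpers from the question):
// all outgoing calls go through one shared promise chain, so consecutive
// calls to API2 start at least 501 ms apart no matter how many webhook
// requests arrive at the same time
const delay = time => new Promise(res => setTimeout(res, time));

let queue = Promise.resolve();

function enqueueApi2Call(transformedData) {
  const result = queue.then(() => sendToApi2WithAxios(transformedData));
  // the next queued call waits for this one to settle plus the spacing delay
  queue = result.catch(() => {}).then(() => delay(501));
  return result;
}

app.post("/api1", async (req, res) => {
  const data = req.body.object;
  const transformedData = await transformData(data);
  const resp = await enqueueApi2Call(transformedData);
  res.sendStatus(200);
})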
I am using the new (as of version 18) Node.js fetch API to perform HTTP requests, e.g.
const response = await fetch(SOMEURL)
const json = await response.json()
This works, but I want to "mock" those HTTP requests so that I can do some automated testing and be able to simulate some HTTP responses to see if my code works correctly.
Normally I have used the excellent nock package alongside Axios to mock HTTP requests, but it doesn't appear to work with fetch in Node 18.
So how can I mock HTTP requests and responses when using fetch in Node.js?
Node 18's fetch function isn't built on the Node.js http module like most HTTP libraries (including Axios, request, etc.); it's a complete rewrite of an HTTP client built on the lower-level net library, called undici. As such, nock cannot intercept requests made with the fetch function (I believe the team is looking to fix this, but at the time of writing, Nock 13.2.9, it does not work for fetch requests).
The solution is to use a MockAgent that is built into the undici package.
Let's say your code looks like this:
// constants
const U = 'http://127.0.0.1:5984'
const DB = 'users'
const main = async () => {
  // perform request
  const r = await fetch(U + '/' + DB)
  // parse response as JSON
  const j = await r.json()
  console.log(j)
}

main()
This code makes a real HTTP request to a CouchDB server running on localhost.
To mock this request, we need to add undici into our project:
npm install --save-dev undici
and add some code to intercept the request:
// constants
const U = 'http://127.0.0.1:5984'
const DB = 'users'

// set up mocking of HTTP requests
const { MockAgent, setGlobalDispatcher } = require('undici')
const mockAgent = new MockAgent()
const mockPool = mockAgent.get(U)
setGlobalDispatcher(mockAgent)

const main = async () => {
  // intercept GET /users requests
  mockPool.intercept({ path: '/' + DB }).reply(200, { ok: true })
  // perform request
  const r = await fetch(U + '/' + DB)
  // parse response as JSON
  const j = await r.json()
  console.log(j)
}

main()
The above code now has its fetch request intercepted and answered with the mocked response.
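If you want an unmatched request to fail loudly instead of silently going out over the network, MockAgent can also disable real connections:
// any request that does not match an intercept now throws
// instead of quietly hitting the real server
mockAgent.disableNetConnect()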
I am unable to get the mocking to work, and I am not sure the sample above is doing what it is intended to do; I think it's actually making a call to the real server. I haven't been able to use MockAgent to mock the fetch call, and I keep getting the result of the real request (which is a Google error page in this example). The code below should be equivalent to the code above, except that I changed the URL; when I used localhost I got fetch errors (since the request wasn't being intercepted).
const { MockAgent, setGlobalDispatcher } = require('undici');

test('test', async () => {
  // constants
  const U = 'http://www.google.com'
  const DB = 'users'

  const mockAgent = new MockAgent()
  const mockPool = mockAgent.get(U)
  setGlobalDispatcher(mockAgent)

  // intercept GET /users requests
  mockPool.intercept({ path: '/' + DB }).reply(200, { ok: true })

  const opts = { method: 'GET' };

  // perform request
  const r = await fetch(U + '/' + DB, opts)
  // parse response as JSON
  const j = await r.text()
  console.log(j)
});
I have a Google Cloud Function that never returns; it just hits the Cloud Functions timeout limit. It works fine locally in under 60 seconds. Not sure what the issue might be. Code is below:
/**
 * Responds to any HTTP request.
 *
 * @param {!express:Request} req HTTP request context.
 * @param {!express:Response} res HTTP response context.
 */
const {Storage} = require('@google-cloud/storage');

exports.main = async (req, res) => {
  const storage = new Storage({projectId: 'our-project'});
  const store = storage.bucket('our-bucket');
  const incomplete = {
    LT04: [],
    LT05: [],
    LE07: [],
    LC08: []
  };

  store.getFilesStream({prefix: 'prefixToMatch', autoPaginate: true})
    .on('error', (err) => {
      return console.error(err.toString())
    })
    .on('data', (file) => {
      // Find small/bad files
      if (file.metadata.size === 162) {
        const split = file.name.split('/');
        const prefix = split[2].substr(0, 4);
        incomplete[prefix].push(file.name);
      }
    })
    .on('end', () => {
      return JSON.stringify(incomplete, false, ' ');
    });
};
Your code seems OK, but you need to take into account some additional details.
Is your Cloud Function's memory enough for this? You could try increasing the memory allocated to your function.
Are you sure that this is due to a timeout issue? If you have not seen the logs, you can check them in the Error Reporting section.
If you have already confirmed this, another option is to increase the timeout duration.
I think the issue was that I needed to send a res.send instead of a Promise.resolve. I also needed to remove the async before the function.
Thanks for the quick response with guidelines; the problem was simpler than that, apparently.
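For anyone hitting the same thing, this is roughly what that change looks like applied to the handler above (a sketch only: the response is sent from the stream's end handler instead of the value being returned):
const {Storage} = require('@google-cloud/storage');

exports.main = (req, res) => {
  const storage = new Storage({projectId: 'our-project'});
  const store = storage.bucket('our-bucket');
  const incomplete = { LT04: [], LT05: [], LE07: [], LC08: [] };

  store.getFilesStream({prefix: 'prefixToMatch', autoPaginate: true})
    .on('error', (err) => {
      res.status(500).send(err.toString());
    })
    .on('data', (file) => {
      // Find small/bad files
      if (file.metadata.size === 162) {
        const prefix = file.name.split('/')[2].substr(0, 4);
        incomplete[prefix].push(file.name);
      }
    })
    .on('end', () => {
      // the function terminates only once the response has actually been sent
      res.send(JSON.stringify(incomplete, null, ' '));
    });
};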
If I have a client that may be making requests to an HTTP Google Cloud Function multiple times in a relatively short amount of time, how can I use keep-alive? Is having the client send the Connection: keep-alive header enough?
I saw this on the Google docs:
https://cloud.google.com/functions/docs/bestpractices/networking
const http = require('http');
const agent = new http.Agent({keepAlive: true});

/**
 * HTTP Cloud Function that caches an HTTP agent to pool HTTP connections.
 *
 * @param {Object} req Cloud Function request context.
 * @param {Object} res Cloud Function response context.
 */
exports.connectionPooling = (req, res) => {
  req = http.request(
    {
      host: '',
      port: 80,
      path: '',
      method: 'GET',
      agent: agent,
    },
    resInner => {
      let rawData = '';
      resInner.setEncoding('utf8');
      resInner.on('data', chunk => {
        rawData += chunk;
      });
      resInner.on('end', () => {
        res.status(200).send(`Data: ${rawData}`);
      });
    }
  );
  req.on('error', e => {
    res.status(500).send(`Error: ${e.message}`);
  });
  req.end();
};
But that would only apply to outbound requests made from the Cloud Function, right?
There was also something about the global (instance-wide) scope here:
https://cloud.google.com/functions/docs/bestpractices/tips
Is there anything I need to do to reuse connections on requests sent from the end user?
When you define agent at the global scope in your function, it is only retained for as long as any given server instance that is running it. So, while your connections may be kept alive in that one instance, it will not share any connections with other instances that are allocated when the load on your function increases. You don't have much direct control over when Cloud Functions will spin up a new instance to handle new load, or when it will deallocate an instance; you just have to accept that instances come and go over time, along with any HTTP connections that are kept alive by global-scope objects.
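To make that concrete, the relevant part is simply that the agent lives at module (global) scope rather than inside the handler; a trimmed sketch of the snippet above, with the outbound call elided:
const http = require('http');

// created once per function instance and reused by every invocation that
// this instance handles; a new instance gets its own agent and sockets
const agent = new http.Agent({keepAlive: true});

exports.connectionPooling = (req, res) => {
  // outbound requests made here with {agent} reuse pooled connections
  // for as long as this particular instance stays alive
  // (same outbound request code as in the snippet above)
  res.status(200).send('ok');
};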
I'm using Express to build an API that will be used internally. One of the requests triggers a heavy process on the server and should return a CSV built from it. This process might take more than 10 minutes.
To avoid overloading the server, I want to restrict calls to this API so that, while the process has not terminated, the same URL cannot be requested again.
For this I tried to use express-rate-limit with the following configuration:
new RateLimit({
  windowMs: 30 * 60 * 1000, // 30 minutes
  max: 1,
  delayMs: 0, // disabled
  message: 'There is already a running execution of the request. You must wait for it to be finished before starting a new one.',
  handler: function handler(req, res) {
    logger.log('max request achieved');
    logger.log(res);
  },
});
But it seems that the 'max request' limit is reached every time after exactly 2 minutes, even though I start the request only once. I suspect the browser retries the request after 2 minutes if it doesn't get any answer; is that possible?
I would like this request to have no retry strategy, so that the only way the max request limit is reached is by manually asking the server to execute the request twice in a row.
Thanks.
Edit
Here is my full code:
const app = express();
const port = process.env.API_PORT || 3000;

app.enable('trust proxy');

function haltOnTimedout(req, res, next) {
  if (!req.timedout) { next(); }
}

app.use(timeout(30 * 60 * 1000)); // 30min
app.use(haltOnTimedout);

app.listen(port, () => {
  logger.log(`Express server listening on port ${port}`);
});

// BILLING
const billingApiLimiter = new RateLimit({
  windowMs: 30 * 60 * 1000, // 30 minutes
  max: 1,
  delayMs: 0, // disabled
  message: 'There is already a running execution of the request. You must wait for it to be finished before starting a new one.',
  handler: function handler(req, res) {
    logger.log('max request achieved');
  },
});

app.use('/billing', billingApiLimiter);
app.use('/billing', BillingController);
And the code of my route:
router.get('/billableElements', async (request, response) => {
  logger.log('Route [billableElements] called');
  const { startDate } = request.query;
  const { endDate } = request.query;

  try {
    const configDoc = await metadataBucket.getAsync(process.env.BILLING_CONFIG_FILE || 'CONFIG_BILLING');
    const billableElements = await getBillableElements(startDate, endDate, configDoc.value);
    const csv = await produceCSV(billableElements);
    logger.log('csv produced');
    response.status(200).send(`${csv}`);
  } catch (err) {
    logger.error('An error occurred while getting billable elements.', err);
    response.status(500).send('An internal error occurred.');
  }
});
I found the answer thanks to this GitHub issue: https://github.com/expressjs/express/issues/2512.
TL;DR: I added request.connection.setTimeout(1000 * 60 * 30); to stop the request from being fired again every 2 minutes.
But considering the code I wrote in my question, @Paul's advice is still worth taking into account.
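For clarity, that line goes at the top of the long-running route from the question, so the socket gets the same 30-minute budget as the timeout middleware; older Node versions timed idle sockets out after 2 minutes by default, which matches the 2-minute retries observed. A sketch:
router.get('/billableElements', async (request, response) => {
  // raise the socket timeout from the old 2-minute default so the connection
  // is not dropped (and retried upstream) before the CSV is ready
  request.connection.setTimeout(1000 * 60 * 30); // 30 minutes

  // ... same CSV-producing logic as in the route above ...
});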