Handle 102 Status (Processing) in Node.js

I'm issuing a request from my server (Node 8.9.0 LTS) to another service, and I receive a 102 Processing status code:
const firstRes = await ajaxService.post('/data/search', params)
// <-- issue the same request again here, with a timeout -->
ctx.data = secondRes.response.data // return the response to the client
The ajaxService returns an async function that uses axios to issue the request.
How can I write the code to issue the same request at 1-second intervals, limited to 5 seconds overall (after which I return a timeout to the client), with async/await?
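A minimal sketch with async/await, assuming a hypothetical `pollOnce()` helper that wraps `ajaxService.post('/data/search', params)` and resolves with the HTTP status and response data (the names and the result shape are illustrative, not the asker's API):

```javascript
// Sketch: re-poll while the service answers 102 Processing, giving up
// once the overall deadline would be exceeded.
async function pollUntilDone(pollOnce, { intervalMs = 1000, timeoutMs = 5000 } = {}) {
  const deadline = Date.now() + timeoutMs;
  while (true) {
    const res = await pollOnce();
    if (res.status !== 102) return res;          // processing finished
    if (Date.now() + intervalMs > deadline) break; // no time left for another poll
    await new Promise(resolve => setTimeout(resolve, intervalMs));
  }
  throw new Error('timeout'); // caller maps this to a timeout response for the client
}
```

The caller would then do `const secondRes = await pollUntilDone(...)` and translate the thrown timeout error into an appropriate status for the client (for example 504).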

Related

RxJava - timeout while using retryWhen

I need to send requests at 10-second intervals, meaning each subsequent request should be sent 10 seconds after the response to the previous request has been received. The last request should start within 60 seconds.
For example:
0:00: 1st request: sent
0:05: 1st request: response
0:15: 2nd request: sent
0:45: 2nd request: response
0:55: 3rd request: sent
1:10: 3rd request: response
No more requests
This is my code:
mainRepository.getFlightSearchResults(uuid)
    .repeatWhen { it.delay(10, TimeUnit.SECONDS) }
    .timeout(60, TimeUnit.SECONDS)
    .observeOn(schedulerProvider.ui)
where getFlightSearchResults returns an Observable. Requests are sent as described above; however, they do not stop being sent after 60 seconds. How can I stop sending (as opposed to receiving responses to) requests after 60 seconds?
Solved using takeWhile:
val startTimeMillis = System.currentTimeMillis()
mainRepository.getFlightSearchResults(uuid)
    .repeatWhen { it.delay(10, TimeUnit.SECONDS) }
    .takeWhile { System.currentTimeMillis() - startTimeMillis < 60_000 }
    .observeOn(schedulerProvider.ui)

How to make my frontend's axios request wait long enough for a 2 minute response from nodeJS

I have a React frontend that uses axios within a generator. I've set a timeout of 0 on both the axios instance and the request itself just to be sure, as I've read that 0 means no timeout at all. I also tried a timeout value of 600000.
// REACT / REDUX SAGA
axios.defaults.timeout = 0;
const response = yield axios.post('/iteminfo/', {
  data
}, { timeout: 0 });
const itemInfo = response.data;
On my nginx/Node server with Express, I have it request, from another server, data that takes a long time to gather: approximately 1 minute and 20 seconds. I added a 10-minute timeout to the Node app to try to address this.
// NODE / EXPRESS SERVER
app.timeout = 1000 * 60 * 10; // 10 minutes
app.listen(port, () => {
  console.log(`${portalStatus} NodeJS Listening on port ${port}`)
});
// SERVER TO SERVER REQUEST
axios.defaults.timeout = 1000 * 60 * 10; // 10 minutes
axios.post(baseURL2 + 'itemgather_api.php', qs.stringify(req.body))
  .then(function (response) {
    console.log('Successfully gathered item...', response.data);
    res.json(response.data);
  })
  .catch(function (error) {
    handleError(error);
    res.status(400).json({ error: error });
  });
Looking at the backend PM2 logs, I can confirm that the axios request from my server to the other server (the one gathering the long-running data) resolved successfully. But something goes wrong when pushing that data back as the response to my frontend.
My frontend's axios POST request fails after only 1 minute with:
Error: Request failed with status code 504
at e.exports (2.9443c470.chunk.js:1)
at e.exports (2.9443c470.chunk.js:1)
at XMLHttpRequest.d.onreadystatechange (2.9443c470.chunk.js:1)
My thinking is that it's the frontend, not the backend, that times out and closes the connection after 1 minute. Can anyone help me address this?
I've already lengthened both the frontend's and backend's timeouts based on some Stack Overflow answers, so I can't figure out what else I'm missing.
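A 504 Gateway Timeout after almost exactly 60 seconds is the signature of the proxy timing out the upstream, not of axios: nginx's `proxy_read_timeout` defaults to 60 seconds, and no axios or Node setting changes that. A hedged sketch, assuming the Node app sits behind an nginx `location` block (the path and upstream port here are illustrative, not taken from the question):

```nginx
# Hypothetical location block; adjust the path and upstream to your setup.
location /iteminfo/ {
    proxy_pass http://127.0.0.1:3000;
    proxy_read_timeout 600s;  # default is 60s; raise it past the slow request
    proxy_send_timeout 600s;
}
```

After changing the config, reload nginx (`nginx -s reload`) and retry the slow request.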

Asynchronous processing of data in Expressjs

I have an Express route that receives some data and processes it, then inserts it into Mongo (using Mongoose).
This works well if I return a response after the following steps are done:
Receive request
Process the request data
Insert the processed data into Mongo
Return 204 response
But the client will be calling this API concurrently for millions of records, and the requirement is not to block the client while the data is processed. So I made a small change in the code:
Receive request
Return response immediately with 204
Process the requested data
Insert the processed data into Mongo
The above works fine for the first few requests (say, the first few thousand); after that the client gets a socket exception: connection reset by peer. I guess it is because the server is holding on to connections, as no ports are free, and at some point I notice my Node.js process throwing an out-of-memory error.
Sample code is as follows:
async function enqueue(data) {
  // 1. Process the data
  // 2. Insert the data in mongo
}

async function expressController(request, response) {
  logger.info('received request')
  response.status(204).send()
  try {
    await enqueue(request.body)
  } catch (err) {
    throw new Error(err)
  }
}
Am I doing something wrong here?
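The controller returns immediately but still starts an unbounded amount of concurrent work, so under millions of requests the in-flight `enqueue()` calls pile up until memory runs out. One fix is to bound both the concurrency and the backlog, and shed load (for example with a 503) when the backlog is full. A minimal in-process sketch (the class and limits are illustrative; production systems usually push this work to an external broker such as Redis or RabbitMQ):

```javascript
// Illustrative in-process queue: at most `concurrency` tasks run at once and
// at most `maxPending` tasks wait; push() reports whether the task was accepted.
class BoundedQueue {
  constructor({ concurrency = 4, maxPending = 1000 } = {}) {
    this.concurrency = concurrency;
    this.maxPending = maxPending;
    this.pending = [];
    this.active = 0;
  }
  push(task) {
    if (this.pending.length >= this.maxPending) return false; // shed load
    this.pending.push(task);
    this._drain();
    return true;
  }
  _drain() {
    while (this.active < this.concurrency && this.pending.length > 0) {
      const task = this.pending.shift();
      this.active++;
      Promise.resolve()
        .then(task)
        .catch(() => {}) // a real controller would log the failure
        .finally(() => { this.active--; this._drain(); });
    }
  }
}

// In the controller, a rejected push becomes a 503 instead of silent overload:
//   if (queue.push(() => enqueue(request.body))) response.status(204).send()
//   else response.status(503).send()
```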

Node.js multiple requests - no responses

I am trying to get data from a web API with axios (Node.js). I need to execute approximately 200 requests with different URLs to fetch data for further analysis. I have tried multiple libraries for the HTTP callouts, but in every case I have the same issue: I receive neither a success nor an error callback. The request just gets stuck somewhere.
async function sendRequest(url) {
  let resp = await axios.get(url);
  return resp.data;
}
I am calling this function in a for loop:
for (var url in urls) {
  try {
    setData(url)
  } catch (e) {
    console.log(e);
  }
}

async function setData(url) {
  var data = await sendRequest(url);
  // Set this data in global variable.
  globalData[url] = data;
}
I often received this error:
Error: read ECONNRESET
I think this is all connected to too many requests being sent in a short interval.
What should I do to get responses to all of the requests? My temporary fix is to periodically send 20 requests every 20 seconds (still not reliable, but I receive more responses), but this is slow and takes too much time. I need the data from all 200 requests in one variable for further analysis, and waiting for each request sequentially takes too long.
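Rather than firing all 200 requests at once or batching 20 per 20 seconds, a pool of workers can keep a fixed number of requests in flight at all times and still resolve to a single result set. A sketch (the limit of 10 is an arbitrary starting point to tune, not a value from the question):

```javascript
// Run fn over items with at most `limit` calls in flight at once.
async function mapWithLimit(items, limit, fn) {
  const results = new Array(items.length);
  let next = 0; // index of the next unclaimed item
  async function worker() {
    while (next < items.length) {
      const i = next++;               // claim an index
      results[i] = await fn(items[i], i);
    }
  }
  // Start `limit` workers; each pulls the next item as soon as it finishes one.
  await Promise.all(Array.from({ length: Math.min(limit, items.length) }, worker));
  return results;
}

// Usage with axios (assumed from the question):
//   const pages = await mapWithLimit(urls, 10, url => axios.get(url).then(r => r.data));
```

Unlike the for...in loop above, this awaits the whole batch, so the data for all 200 URLs is available in one variable when the returned promise resolves.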

Node request queue backed up

TL;DR - Are there any best practices when configuring the globalAgent that allow for high throughput with a high volume of concurrent requests?
Here's our issue:
As far as I can tell, connection pools in Node are managed by the http module, which queues requests in a globalAgent object, which is global to the Node process. The number of requests pulled from the globalAgent queue at any given time is determined by the number of open socket connections, which is determined by the maxSockets property of globalAgent (defaults to 5).
When using "keep-alive" connections, I would expect that as soon as a request resolves, the connection that handled it becomes available to handle the next request in the globalAgent's queue.
Instead, however, it appears that each connection up to the max number is resolved before any additional queued requests are handled.
When watching network traffic between components, we see that if maxSockets is 10, then 10 requests resolve successfully. Then there is a 3-5 second pause (presumably while new TCP connections are established), then 10 more requests resolve, then another pause, and so on.
This seems wrong. Node is supposed to excel at handling a high volume of concurrent requests. So if, even with 1000 available socket connections, request 1000 cannot be handled until requests 1-999 resolve, you'd hit a bottleneck. Yet I can't figure out what we're doing incorrectly.
Update
Here's an example of how we're making requests -- though it's worth noting that this behavior occurs whenever a node process makes an http request, including when that request is initiated by widely-used third-party libs. I don't believe it is specific to our implementation. Nevertheless...
class Client
  constructor: (@endpoint, @options = {}) ->
    @endpoint = @_cleanEndpoint(@endpoint)
    throw new Error("Endpoint required") unless @endpoint && @endpoint.length > 0
    _.defaults @options,
      maxCacheItems: 1000
      maxTokenCache: 60 * 10
      clientId: null
      bearerToken: null # If present will be added to the request header
      headers: {}
    @cache = {}
    @cards = new CardMethods @
    @lifeStreams = new LifeStreamMethods @
    @actions = new ActionsMethods @

  _cleanEndpoint: (endpoint) =>
    return null unless endpoint
    endpoint.replace /\/+$/, ""

  _handleResult: (res, bodyBeforeJson, callback) =>
    return callback new Error("Forbidden") if res.statusCode is 401 or res.statusCode is 403
    body = null
    if bodyBeforeJson and bodyBeforeJson.length > 0
      try
        body = JSON.parse(bodyBeforeJson)
      catch e
        return callback(new Error("Invalid Body Content"), bodyBeforeJson, res.statusCode)
    return callback(new Error(if body then body.message else "Request failed.")) unless res.statusCode >= 200 && res.statusCode < 300
    callback null, body, res.statusCode

  _reqWithData: (method, path, params, data, headers = {}, actor, callback) =>
    headers['Content-Type'] = 'application/json' if data
    headers['Accept'] = 'application/json'
    headers['authorization'] = "Bearer #{@options.bearerToken}" if @options.bearerToken
    headers['X-ClientId'] = @options.clientId if @options.clientId
    # Use method override (AWS ELB problems) unless told not to do so
    if (not config.get('clients:useRealHTTPMethods')) and method not in ['POST', 'PUT']
      headers['x-http-method-override'] = method
      method = 'POST'
    _.extend headers, @options.headers
    uri = "#{@endpoint}#{path}"
    # console.log "making #{method} request to #{uri} with headers", headers
    request
      uri: uri
      headers: headers
      body: if data then JSON.stringify data else null
      method: method
      timeout: 30*60*1000
    , (err, res, body) =>
      if err
        err.status = if res && res.statusCode then res.statusCode else 503
        return callback(err)
      @_handleResult res, body, callback
To be honest, CoffeeScript isn't my strong point, so I can't really comment on the code.
However, I can give you some thoughts: in what we are working on, we use nano to connect to Cloudant, and we see up to 200 requests/s into Cloudant from a micro AWS instance. So you are right, Node should be up to it.
Try using request (https://github.com/mikeal/request) if you're not already. (I don't think it will make a difference, but it's worth a try nevertheless, as that is what nano uses.)
These are the areas I would look into:
The server doesn't deal well with multiple requests and throttles them. Have you run any performance tests against your server? If it can't handle the load for some reason, or your requests are throttled in the OS, then it doesn't matter what your client does.
Your client code has a long-running function somewhere which prevents Node from processing the responses you get back from the server. Perhaps one specific response causes a response callback to take far too long.
Are the endpoints all different servers/hosts?
