NodeJS to NodeJS http requests hang - node.js

I have 2 services: 1. worker, 2. fetch documents.
Both are Node.js Express services.
The worker uses Promise.all to send 48 HTTP POST requests (don't ask why POST and not GET) to the fetch service; the fetch service then fetches all 48 documents (each in a separate request) and sends them back to the worker.
The logs show the fetch service finishing everything successfully and the worker sending all 48 requests, but only partial responses arrive (sometimes 0, sometimes 15/48, sometimes 31/48, but it never fully succeeds).
The worker seems to keep waiting for a response until the 15-minute timeout for the job ends and it is moved to failed.
Code examples:
worker.js (service 1 - Node.js with Express)
await Promise.all(docIDs.map(async (docID) => {
    try {
        logger.info(`Worker Get Document: Fetching ${docID.docID} transaction ID: ${transactionId} / timeStamp: ${timeStamp}`, {})
        var document = await axios.post(getVariableValue("GET_DOCUMENT_SERVICE_URL"), docID, {
            httpsAgent: new https.Agent({
                rejectUnauthorized: false,
                keepAlive: false
            }),
            auth: {
                username: getVariableValue("WEBAPI_OPTIDOCS_SERVICE_USERNAME"),
                password: getVariableValue("WEBAPI_OPTIDOCS_SERVICE_PASSWORD")
            },
            headers: {
                "x-global-transaction-id": transactionId,
                "timeStamp": timeStamp
            }
        });
        logger.info(`Worker Get Document: Fetched ${docID.docID} Status: ${document.data.status}. transaction ID: ${transactionId} / timeStamp: ${timeStamp}`, {})
        documents.push(document.data.content);
    }
    catch (err) {
        const responseData = err.response ? err.response.data : err.message
        logger.error(`Worker Get Document: Failed DocID ${docID.docID}, Error received from Get Document: ${responseData} - transaction ID: ${transactionId} / timeStamp: ${timeStamp}`, {})
        throw Error(responseData)
    }
}));
fetch.js (service 2 - Node.js with Express)
module.exports = router.post('/getDocument', async (req, res, next) => {
    try {
        var transactionId = req.headers["x-global-transaction-id"]
        var timeStamp = req.headers["timestamp"]
        logger.info(`GetDocumentService: API started. - transaction ID: ${transactionId} / timeStamp: ${timeStamp}`, {})
        var document = await getDocuemntService(req.body, transactionId, timeStamp);
        var cloneDocument = clone(document)
        logger.info(`GetDocumentService: Document size: ${JSON.stringify(cloneDocument).length / 1024 / 1024}. - transaction ID: ${transactionId} / timeStamp: ${timeStamp}`, {})
        logger.info(`GetDocumentService: API Finished. - transaction ID: ${transactionId} / timeStamp: ${timeStamp}`, {})
        res.status(200);
        res.set("Connection", "close");
        res.json({
            statusDesc: "Success",
            status: true,
            content: cloneDocument
        });
        logger.info(`GetDocumentService: Response status: ${res.finished} . - transaction ID: ${transactionId} / timeStamp: ${timeStamp}`, {})
    }
    catch (err) {
        logger.error(`GetDocumentService: Error:${err.message} / Stack: ${err.stack}. - transaction ID: ${transactionId} / timeStamp: ${timeStamp}`, {})
        res.status(500)
        res.send("stack: " + err.stack + "err: " + err.message + " fromgetdocument")
    }
});
So the logs from the get document service are complete and show success.
The logs from the worker look like this:
"fetching" x48
"fetched" x0, x15, x31 (three different attempts at fetching 48 docs)
I have tried changing keep-alive to true.
Anything else I might be missing? Does anyone know why it hangs forever (at least 15 minutes, until the job gets timed out)?
Thanks :)

I only have guesses here, but I feel they are strong guesses.
Axios has no default timeout, but I'm going to guess there is a timeout of 900 seconds somewhere. Edit: this timeout is very likely in the server-side web server.
You are hammering the server with 48 requests in a fraction of a second. I'm not surprised this is failing.
Their server is likely rate limiting you or just getting overloaded.
For testing, limit your requests to one at a time. When one finishes, send the next.
Simple work queue example (not tested). The caller of addWork or doWork will need to await.
let queue = []
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms))

async function addWork(work, cb, sleepMS) {
    if (Array.isArray(work)) queue = [...queue, ...work]
    else queue.push(work)
    if (cb) return doWork(cb, sleepMS)
}

async function doWork(cb, sleepMS) {
    let work = queue.shift() // take from the front so items are processed in order
    var document = await getDocuemntService(work)
    cb(document)
    if (queue.length > 0) {
        if (sleepMS) await sleep(sleepMS)
        return doWork(cb, sleepMS)
    }
}
Once you confirm that one at a time works, you can look at a more complex work queue. I'd check npm; I'm sure there is something out there to handle this.
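If one-at-a-time works and you want some parallelism back without the full 48-way fan-out, a small concurrency limiter can cap the number of in-flight requests. This is a minimal sketch with no external dependency; the task thunks stand in for the axios.post calls from the question, and the limit of 4 is an arbitrary example value.

```javascript
// Run async tasks with at most `limit` in flight; resolves with results in input order.
async function runWithConcurrency(tasks, limit) {
    const results = new Array(tasks.length);
    let next = 0;
    async function runner() {
        while (next < tasks.length) {
            const i = next++;              // claim the next task index
            results[i] = await tasks[i]();
        }
    }
    // start `limit` runners that cooperatively drain the task list
    const runners = Array.from({ length: Math.min(limit, tasks.length) }, () => runner());
    await Promise.all(runners);
    return results;
}

// usage sketch: wrap each docID fetch in a thunk, allow e.g. 4 at a time
// const documents = await runWithConcurrency(
//     docIDs.map(docID => () => axios.post(url, docID, config).then(r => r.data.content)),
//     4
// );
```

If any task rejects, Promise.all rejects too, which matches the fail-fast behavior of the original Promise.all fan-out.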

Related

NodeJS API NJS-003: invalid connection

First, I made an API using Node.js and oracledb.
I have 2 routes with different response times, say route A with a 10s response time and route B with 1s. When I execute route A followed by route B, I get the error NJS-003: invalid connection, because route B finishes and closes the connection, followed by route A.
Any ideas how to solve this problem?
I'm using an Oracle pool, with getConnection and a connection close on every API request.
async function DBGetData(req, res, query, params = {}) {
    try {
        connection = await oracledb.getConnection();
        connection.callTimeout = 10 * 1000;
        result = await connection.execute(
            query,
            params,
            {
                outFormat: oracledb.OUT_FORMAT_OBJECT,
            }
        );
        // send query result
        res.json({
            status: res.statusCode,
            length: result.rows.length,
            results: result.rows,
        });
    } catch (err) {
        return res.status(400).json({ error: err.toString() });
    } finally {
        if (connection) {
            // Always close connections
            await connection.close();
        }
    }
}
Add a let connection; before the try so that each DBGetData() invocation is definitely using its own connection. Currently it seems that you are referencing a global variable.
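To see why the implicit global causes exactly this cross-route behavior, here is a self-contained sketch with invented stand-ins (fakeConnection and the timings are hypothetical, not oracledb APIs): the slow call resumes after the fast call has reassigned the shared variable, so it ends up closing the other route's connection.

```javascript
let shared; // plays the role of the accidental global `connection`

function fakeConnection(id) {
    return { id, close() { this.closed = true; } };
}

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function useSharedConnection(id, workMs, log) {
    shared = fakeConnection(id);   // overwrites whatever the other call stored
    await sleep(workMs);           // simulate the query running
    log.push(`closing ${shared.id} (wanted ${id})`);
    shared.close();
}

async function demo() {
    const log = [];
    await Promise.all([
        useSharedConnection('A', 30, log), // slow route
        useSharedConnection('B', 5, log),  // fast route reassigns `shared` mid-flight
    ]);
    return log;
}
```

With `let connection;` declared inside DBGetData, each invocation closes over its own variable, and the slow route closes the connection it actually opened.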

Node.js fork on("message", ...) message limits

Using the following code in NodeJs:
const { fork } = require('child_process');
const thread = fork(path.join(__dirname, '/thread.js'));
thread.on('message', (results) => {
    console.log('RES', results.length);
    if (results.error) {
        res.send({ error: true, message: results.message });
        return;
    }
    res.send(results.data);
});
thread.on('error', (err) => {
    console.log(err);
});
thread.send({ data: JSON.stringify(dataToProcess) });
thread.on('exit', () => {
    if (thread) {
        thread.kill();
        return;
    }
});
When sending larger (1.5 MB) messages from child to parent, nothing arrives. Smaller messages are sent without issue. Is there some hard limit? If so, can it be increased?
Testing different payload sizes now:
1 MB - fails to send
0.5 MB - fails
0.2 MB - fails
0.15 MB - OK
0.1 MB - OK
On Windows it's OK... Linux seems to have a limit.
I wasn't able to figure it out, but I did change the fork to web workers and it seemed to send larger payloads. I don't know the full repercussions yet, but I'll continue to monitor it.

express-rate-limit - catching the message

In my application, I would like to be able to catch the message produced by the express-rate-limit package. This is an example of the code I have. I would like to catch the message part with middleware so I can post-process it (in this case I have multiple languages).
const apiCreatingAccountLimiter = rateLimit({
    windowMs: 10 * 60 * 1000, // 10 minutes
    max: 10, // limit each IP to 10 requests per windowMs
    message: {
        limiter: true,
        type: "error",
        message: 'maximum_accounts'
    }
});
and then
router.post('/signup', apiCreatingAccountLimiter, (req, res, next) => {
    // handling the post request
})
I have a similar solution middleware setup for some of my other API messages:
// error processing middleware
app.use((err, req, res, next) => {
const statusCode = err.statusCode || 500;
res.status(statusCode).send({
type: 'error',
message: err.message,
fields: err.fields === '' ? '' : err.fields,
code: err.code === '' ? '' : err.code,
section: err.section === '' ? 'general' : err.section
});
});
However, when trying to read a message from the express-rate-limit package, it does not seem to pass through this middleware at all. I guess that's because the limiter responds before the request can even reach any route and trigger the middleware.
Looking at the res part passing through, I can see there is an object with the following data:
rateLimit: {
    limit: 10,
    current: 10,
    remaining: 0,
    resetTime: 2019-10-21T12:35:46.919Z
},
But that does not seem to be transporting the message object that is set at the very top in the apiCreatingAccountLimiter. I wonder how could I get to it?
Does anyone know how this can be done? I do not want those messages to be translated on the front end. I need the translation to happen on the NodeJS server. I am only interested in the middleware part where I can catch the message and post-process it.
Reading the source code, instead of using another middleware you should use the handler option.
const apiCreatingAccountLimiter = rateLimit({
    windowMs: 10 * 60 * 1000, // 10 minutes
    max: 10, // limit each IP to 10 requests per windowMs
    message: "my initial message",
    handler: function(req, res /*, next*/) {
        // build the post-processed (e.g. translated) message here
        var myCustomMessage = require('anotherModuleYouWannaUse_ForExemple');
        // note: `options` is not in scope inside a custom handler, so set the status yourself
        res.status(429).send(myCustomMessage);
    },
});
Below is an extract of the source code:
function RateLimit(options) {
    options = Object.assign(
        {
            windowMs: 60 * 1000, // milliseconds - how long to keep records of requests in memory
            max: 5, // max number of recent connections during `window` milliseconds before sending a 429 response
            message: "Too many requests, please try again later.",
            statusCode: 429, // 429 status = Too Many Requests (RFC 6585)
            headers: true, // Send custom rate limit header with limit and remaining
            skipFailedRequests: false, // Do not count failed requests (status >= 400)
            skipSuccessfulRequests: false, // Do not count successful requests (status < 400)
            // allows to create custom keys (by default user IP is used)
            keyGenerator: function(req /*, res*/) {
                return req.ip;
            },
            skip: function(/*req, res*/) {
                return false;
            },
            handler: function(req, res /*, next*/) {
                res.status(options.statusCode).send(options.message);
            },
            onLimitReached: function(/*req, res, optionsUsed*/) {}
        },
        options
    );
I found that my frontend was only able to catch the message if I set the statusCode to 200, even though technically it should be a 429. So try this instead:
const apiCreatingAccountLimiter = rateLimit({
    windowMs: 10 * 60 * 1000, // 10 minutes
    max: 10, // limit each IP to 10 requests per windowMs
    statusCode: 200,
    message: {
        status: 429, // optional, of course
        limiter: true,
        type: "error",
        message: 'maximum_accounts'
    }
});
I did my best to match what you already had. Personally mine just looks like this basically:
const loginRatelimiter = rateLimit({
    windowMs: 6 * 60 * 1000,
    max: 10,
    statusCode: 200,
    message: {
        status: 429,
        error: 'You are doing that too much. Please try again in 10 minutes.'
    }
})
Then, on my frontend I just check for res.data.error when the response comes in, and display that to the user if it exists.
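Since the original goal was server-side translation, the handler approach can be combined with a lookup keyed by the client's language. This is only a sketch: the messages catalog, translateLimitMessage, and the accept-language handling are hypothetical stand-ins for whatever i18n layer you actually use.

```javascript
// Hypothetical message catalog; replace with your real i18n resources.
const messages = {
    en: { maximum_accounts: 'Too many accounts created. Please try again later.' },
    de: { maximum_accounts: 'Zu viele Konten erstellt. Bitte versuchen Sie es später erneut.' }
};

// Resolve a message key in the client's language, falling back to English,
// and to the raw key if it is missing from the catalog.
function translateLimitMessage(key, lang) {
    const table = messages[lang] || messages.en;
    return { limiter: true, type: 'error', message: table[key] || key };
}

// usage sketch inside the limiter config:
// handler: function(req, res) {
//     const lang = (req.headers['accept-language'] || 'en').slice(0, 2);
//     res.status(429).json(translateLimitMessage('maximum_accounts', lang));
// }
```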

Apollo Server timeout while waiting for stream data

I'm attempting to wait for the result of a stream with my Apollo Server. My resolver looks like this.
async currentSubs() {
    try {
        const stream = gateway.subscription.search(search => {
            search.status().is(braintree.Subscription.Status.Active);
        });
        const data = await stream.pipe(new CollectObjects()).collect();
        return data;
    } catch (e) {
        console.log(e);
        throw new Meteor.Error('issue', e.message);
    }
},
This resolver works just fine when the data stream being returned is small, but when the data coming in is larger, I'm getting a 503 (Service Unavailable). It looks like the timeout happens around 30 seconds. I've tried increasing the timeout of my Express server with graphQLServer.timeout = 240000; but that hasn't made a difference.
How can I troubleshoot this, and where is the 30-second timeout coming from? It only fails when the results take longer.
I'm using https://github.com/mrdaniellewis/node-stream-collect to collect the results from the stream.
Error coming in from the try catch:
I20180128-13:09:26.872(-7)? { proxy:
I20180128-13:09:26.872(-7)? { error: 'Post http://127.0.0.1:26474/graphql: net/http: request canceled (Client.Timeout exceeded while awaiting headers)',
I20180128-13:09:26.872(-7)? level: 'error',
I20180128-13:09:26.873(-7)? msg: 'Error sending request to origin.',
I20180128-13:09:26.873(-7)? time: '2018-01-28T13:09:26-07:00',
I20180128-13:09:26.873(-7)? url: 'http://127.0.0.1:26474/graphql' } }
I had this same problem and it was a pretty simple solution. My calls were lasting a bit over 30 seconds and the default timeout was returning 503s as well, so I increased it.
Assuming you're using apollo-engine (this may be true for some other forms of Apollo), you can set your engine configs like so:
export function startApolloEngine() {
    const engine = new Engine({
        engineConfig: {
            stores: [
                {
                    name: "publicResponseCache",
                    memcache: {
                        url: [environmentSettings.memcache.server],
                        keyPrefix: environmentSettings.memcache.keyPrefix
                    }
                }
            ],
            queryCache: {
                publicFullQueryStore: "publicResponseCache"
            },
            reporting: {
                disabled: true
            }
        },
        // GraphQL port
        graphqlPort: 9001,
        origin: {
            requestTimeout: "50s"
        },
        // GraphQL endpoint suffix - '/graphql' by default
        endpoint: "/my_api_graphql",
        // Debug configuration that logs traffic between Proxy and GraphQL server
        dumpTraffic: true
    });
    engine.start();
    app.use(engine.expressMiddleware());
}
Notice the part where I specify:
origin: {
    requestTimeout: "50s"
}
That alone is what fixed it for me. Hope this helps!
You can find more information about that here

Node.js timeout with requestify POST

I'm trying to request the status of a user, with a POST from Node.js to a PHP file.
My issue is that the web service I'm calling is VERY slow to reply (4 sec), so I think the .then finishes before the 4 sec and therefore returns nothing. Any idea if I can extend the time for the request?
requestify.post('https://example.com/', {
    email: 'foo@bar.com'
})
.then(function(response) {
    var answer = response.getBody();
    console.log("answer:" + answer);
});
I am not that knowledgeable about requestify, but are you sure you can POST to an https address? In the readme, only requestify.request(...) uses an https address as an example (see the readme).
One tip I can definitely give you though is to always catch your promise:
requestify.get(URL).then(function(response) {
    console.log(response.getBody())
}).catch(function(err) {
    console.log('Requestify Error', err);
    next(err);
});
This should at least give you the error of your promise and you can specify your problem.
Each call to Requestify allows you to pass through an Options object, the definition of that object is described here: Requestify API Reference
You are using the short method for POST, so I'll show that first, but the same syntax works for put as well; note that get, delete, and head do not accept a data argument - you send URL query parameters through the params config property.
requestify.post(url, data, config)
requestify.put(url, data, config)
requestify.get(url, config)
requestify.delete(url, config)
requestify.head(url, config)
Now, config has a timeout property:
timeout {number}
Set a timeout (in milliseconds) for the request.
So we can specify a timeout of 60 seconds with this syntax:
var config = {};
config.timeout = 60000;
requestify.post(url, data, config)
or inline:
requestify.post(url, data, { timeout: 60000 })
So lets put that together now into your original request:
As @Jabalaja pointed out, you should catch any exception messages; however, you should do this with the error argument on the continuation (.then).
requestify.post('https://example.com/', {
    email: 'foo@bar.com'
}, {
    timeout: 60000
})
.then(function(response) {
    var answer = response.getBody();
    console.log("answer:" + answer);
}, function(error) {
    var errorMessage = "Post Failed";
    if (error.code && error.body)
        errorMessage += " - " + error.code + ": " + error.body
    console.log(errorMessage);
    // dump the full object to see if you can formulate a better error message.
    console.log(error);
});
