express-rate-limit - catching the message - node.js

In my application, I would like to catch the message produced by the express-rate-limit package. This is an example of the code I have. I would like to catch the message part with middleware so I can post-process it (in this case I have multiple languages).
const apiCreatingAccountLimiter = rateLimit({
  windowMs: 10 * 60 * 1000, // 10 minutes
  max: 10, // limit each IP to 10 requests per windowMs
  message: {
    limiter: true,
    type: "error",
    message: 'maximum_accounts'
  }
});
and then
router.post('/signup', apiCreatingAccountLimiter, (req, res, next) => {
  // handling the post request
})
I have a similar middleware setup for some of my other API messages:
// error processing middleware
app.use((err, req, res, next) => {
  const statusCode = err.statusCode || 500;
  res.status(statusCode).send({
    type: 'error',
    message: err.message,
    fields: err.fields === '' ? '' : err.fields,
    code: err.code === '' ? '' : err.code,
    section: err.section === '' ? 'general' : err.section
  });
});
However, the message from the express-rate-limit package does not seem to pass through this middleware at all. I guess that's because the limiter responds before the request can even reach any API route and trigger this middleware.
Looking at the res part passing through, I can see there is an object with the following data:
rateLimit: {
  limit: 10,
  current: 10,
  remaining: 0,
  resetTime: 2019-10-21T12:35:46.919Z
},
But that does not seem to carry the message object that is set at the very top in the apiCreatingAccountLimiter. I wonder how I could get to it?
Does anyone know how this can be done? I do not want those messages to be translated on the front end. I need the translation to happen on the NodeJS server. I am only interested in the middleware part where I can catch the message and post-process it.

Reading the source code, instead of using another middleware you should use the handler option.
const apiCreatingAccountLimiter = rateLimit({
  windowMs: 10 * 60 * 1000, // 10 minutes
  max: 10, // limit each IP to 10 requests per windowMs
  message: "my initial message",
  handler: function(req, res /*, next*/) {
    var myCustomMessage = require('anotherModuleYouWannaUse_ForExample');
    res.status(429).send(myCustomMessage); // 429 = Too Many Requests
  },
});
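For the original goal (translating on the Node.js server), a minimal sketch could look like the following; the translate helper and the accept-language lookup are assumptions, not part of express-rate-limit:

const translate = require('./i18n'); // hypothetical translation helper

const apiCreatingAccountLimiter = rateLimit({
  windowMs: 10 * 60 * 1000, // 10 minutes
  max: 10, // limit each IP to 10 requests per windowMs
  handler: function (req, res /*, next*/) {
    // Pick the language from the request and translate on the server side.
    const lang = req.headers['accept-language'] || 'en';
    res.status(429).send({
      limiter: true,
      type: "error",
      message: translate('maximum_accounts', lang)
    });
  }
});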
Below you'll find an extract of the source code:
function RateLimit(options) {
  options = Object.assign(
    {
      windowMs: 60 * 1000, // milliseconds - how long to keep records of requests in memory
      max: 5, // max number of recent connections during `window` milliseconds before sending a 429 response
      message: "Too many requests, please try again later.",
      statusCode: 429, // 429 status = Too Many Requests (RFC 6585)
      headers: true, // Send custom rate limit headers with limit and remaining
      skipFailedRequests: false, // Do not count failed requests (status >= 400)
      skipSuccessfulRequests: false, // Do not count successful requests (status < 400)
      // allows to create custom keys (by default user IP is used)
      keyGenerator: function(req /*, res*/) {
        return req.ip;
      },
      skip: function(/*req, res*/) {
        return false;
      },
      handler: function(req, res /*, next*/) {
        res.status(options.statusCode).send(options.message);
      },
      onLimitReached: function(/*req, res, optionsUsed*/) {}
    },
    options
  );

I found that my frontend was only able to catch the message if I set the statusCode to 200, even though technically it should be a 429. So try this instead:
const apiCreatingAccountLimiter = rateLimit({
  windowMs: 10 * 60 * 1000, // 10 minutes
  max: 10, // limit each IP to 10 requests per windowMs
  statusCode: 200,
  message: {
    status: 429, // optional, of course
    limiter: true,
    type: "error",
    message: 'maximum_accounts'
  }
});
I did my best to match what you already had. Personally, mine basically just looks like this:
const loginRatelimiter = rateLimit({
  windowMs: 6 * 60 * 1000,
  max: 10,
  statusCode: 200,
  message: {
    status: 429,
    error: 'You are doing that too much. Please try again in 10 minutes.'
  }
})
Then, on my frontend I just check for res.data.error when the response comes in, and display that to the user if it exists.
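For completeness, a hedged sketch of that frontend check, assuming the signup call is made with axios inside an async handler and that showMessage / handleSignupSuccess are hypothetical helpers:

const res = await axios.post('/signup', formData);
if (res.data && res.data.error) {
  // The limiter answered with HTTP 200 but carried the error payload.
  showMessage(res.data.error);
} else {
  handleSignupSuccess(res.data);
}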

Related

Using an Express app with MongoDB to execute a 'create' function on the client's browser instead of the server

I have created an internet speed test app using client speed, location, and ISP. The problem is that when this code is executed it pulls the speed, ISP, and location of the data centre wherever the app is deployed (in my example an AWS server in Virginia via Heroku). My thought is that I need this code to execute on the client's browser side instead of on the server. Is this possible using Express, Mongoose, and EJS?
Relevant code posted below. This is from my controller. I didn't include the rest because it's just logic for something separate/render code.
function index(req, res, next) {
  let speedtest = new FastSpeedtest({
    token: hidden, // required
    verbose: false, // default: false
    timeout: 10000, // default: 5000
    https: true, // default: true
    urlCount: 5, // default: 5
    bufferSize: 8, // default: 8
    unit: FastSpeedtest.UNITS.Mbps, // default: Bps
    proxy: "http://optional:auth#my-proxy:123", // default: undefined
  });
  speedtest.getSpeed().then((s) => {
    fetch(ipApiToken).then(function (response) {
      response.json().then((jsonData) => {
        res.render("testSpeed", { s, jsonData });
      });
    });
  })
  .catch((e) => {
    console.error(e.message);
  });
}
async function create(req, res, next) {
  let userIsp = req.body.isp
  let userSpeed = req.body.speed
  let userLocation = req.body.location
  let newSpeedTest = await importSpeed.speedModel.create({ speed: Math.round(userSpeed), location: userLocation, isp: '', isp_id: '' });
It's not possible, since you send the request from the server.
If you want to obtain data such as ping / download speed etc. from a computer, you have to send the request from that very computer.
The API I'm pulling JSON info from (ipapi) can pull data using an IP address. Luckily, I found that Express has a built-in feature in req for pulling the client's IP address. I still have the issue of getting accurate speed tests, but this might help other people in the future.
var ip = req.headers['x-forwarded-for'] || req.connection.remoteAddress;
This pulls the client IP address in a controller function.
I just changed it to:
speedtest.getSpeed().then((s) => {
  fetch('https://ipapi.co/' + ip + '/json/?key=' + ipApiToken).then(function (response) {
    response.json().then((jsonData) => {
      res.render("testSpeed", { s, jsonData });
    });
  });
})
The full code with these changes:
function index(req, res, next) {
  var ip = req.headers['x-forwarded-for'] || req.connection.remoteAddress;
  let speedtest = new FastSpeedtest({
    token: 'YXNkZmFzZGxmbnNkYWZoYXNkZmhrYWxm', // required
    verbose: false, // default: false
    timeout: 10000, // default: 5000
    https: true, // default: true
    urlCount: 5, // default: 5
    bufferSize: 8, // default: 8
    unit: FastSpeedtest.UNITS.Mbps, // default: Bps
    proxy: "http://optional:auth#my-proxy:123", // default: undefined
  });
  speedtest.getSpeed().then((s) => {
    fetch('https://ipapi.co/' + ip + '/json/?key=' + ipApiToken).then(function (response) {
      response.json().then((jsonData) => {
        res.render("testSpeed", { s, jsonData });
      });
    });
  })
  .catch((e) => {
    console.error(e.message);
  });
}
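As an aside, a minimal sketch of an alternative that leans on Express itself, assuming the app runs behind a proxy such as Heroku: with 'trust proxy' enabled, req.ip is resolved from the X-Forwarded-For header, so the manual header lookup isn't needed.

app.set('trust proxy', true); // let Express trust X-Forwarded-For from the proxy

function index(req, res, next) {
  const ip = req.ip; // client address rather than the proxy's
  // ... rest of the controller unchanged
}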

NodeJS to NodeJS http requests hang

I have 2 services: 1. worker, 2. fetch documents.
Both are Node.js express services.
The worker uses Promise.all to send 48 HTTP POST requests (don't ask why POST and not GET) to the fetch service; the fetch service then fetches all 48 documents (each in a separate request) and sends them back to the worker.
Logs show the fetch service finished everything successfully and the worker sending all 48 requests, but only partial responses come back (sometimes 0, sometimes 15/48, sometimes 31/48, but it never fully succeeds).
The worker keeps waiting for a response, it seems, until the 15 minute timeout for the job ends and it's moved to failed.
code examples:
worker.js (service 1 - NodeJS with express)
await Promise.all(docIDs.map(async (docID) => {
  try {
    logger.info(`Worker Get Document: Fetching ${docID.docID} transaction ID: ${transactionId} / timeStamp: ${timeStamp}`, {})
    var document = await axios.post(getVariableValue("GET_DOCUMENT_SERVICE_URL"), docID, {
      httpsAgent: new https.Agent({
        rejectUnauthorized: false,
        keepAlive: false
      }),
      auth: {
        username: getVariableValue("WEBAPI_OPTIDOCS_SERVICE_USERNAME"),
        password: getVariableValue("WEBAPI_OPTIDOCS_SERVICE_PASSWORD")
      },
      headers: {
        "x-global-transaction-id": transactionId,
        "timeStamp": timeStamp
      }
    });
    logger.info(`Worker Get Document: Fetched ${docID.docID} Status: ${document.data.status}. transaction ID: ${transactionId} / timeStamp: ${timeStamp}`, {})
    documents.push(document.data.content);
  }
  catch (err) {
    const responseData = err.response ? err.response.data : err.message
    logger.error(`Worker Get Document: Failed DocID ${docID.docID}, Error received from Get Document: ${responseData} - transaction ID: ${transactionId} / timeStamp: ${timeStamp}`, {})
    throw Error(responseData)
  }
}));
fetch.js (service 2 - NodeJS with express)
module.exports = router.post('/getDocument', async (req, res, next) => {
  try {
    var transactionId = req.headers["x-global-transaction-id"]
    var timeStamp = req.headers["timestamp"]
    logger.info(`GetDocumentService: API started. - transaction ID: ${transactionId} / timeStamp: ${timeStamp}`, {})
    var document = await getDocuemntService(req.body, transactionId, timeStamp);
    var cloneDocument = clone(document)
    logger.info(`GetDocumentService: Document size: ${JSON.stringify(cloneDocument).length / 1024 / 1024}. - transaction ID: ${transactionId} / timeStamp: ${timeStamp}`, {})
    logger.info(`GetDocumentService: API Finished. - transaction ID: ${transactionId} / timeStamp: ${timeStamp}`, {})
    res.status(200);
    res.set("Connection", "close");
    res.json({
      statusDesc: "Success",
      status: true,
      content: cloneDocument
    });
    logger.info(`GetDocumentService: Response status: ${res.finished} . - transaction ID: ${transactionId} / timeStamp: ${timeStamp}`, {})
  }
  catch (err) {
    logger.error(`GetDocumentService: Error:${err.message} / Stack: ${err.stack}. - transaction ID: ${transactionId} / timeStamp: ${timeStamp}`, {})
    res.status(500)
    res.send("stack: " + err.stack + "err: " + err.message + " fromgetdocument")
  }
});
So the logs from the get document service are complete and show success.
The logs from the worker look like this:
"fetching" x48
"fetched" x0, x15, x31 (three different attempts at fetching 48 docs)
I have tried changing keep-alive to true.
Anything else I might be missing? Does anyone know why it hangs forever (at least 15 minutes, until the job gets timed out)?
Thanks :)
I only have guesses here, but I feel they are strong guesses.
Axios has a default timeout of none, but I'm going to guess somewhere there is a timeout of 900 seconds. Edit: this timeout is very likely in the server-side web server.
You are hammering the server with 48 requests in a fraction of a second. I'm not surprised this is failing.
Their server is likely rate limiting you or just getting overloaded.
For testing, limit your requests to one at a time. When one finishes, send the next.
Simple work queue example - Not tested
The addWork or doWork caller will need to await the call.
let queue = []
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms))

async function addWork(work, cb, sleepMS) {
  if (Array.isArray(work)) queue = [...queue, ...work]
  else queue.push(work)
  if (cb) return doWork(cb, sleepMS)
}

async function doWork(cb, sleepMS) {
  let work = queue.pop()
  var document = await getDocuemntService(work)
  cb(document)
  if (queue.length > 0) {
    if (sleepMS) await sleep(sleepMS)
    return doWork(cb, sleepMS)
  }
}
Once you confirm that one-at-a-time works, you can look at a more complex work queue. I'd check npm; I'm sure there is something out there to handle this, for example the sketch below.
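One option for bounded concurrency is the p-limit package from npm; a hedged sketch, where fetchDocument() is a stand-in for the axios.post call in worker.js:

const pLimit = require('p-limit');

const limit = pLimit(5); // at most 5 requests in flight at any time

const documents = await Promise.all(
  docIDs.map((docID) => limit(() => fetchDocument(docID)))
);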

Retry logic for https.get in node.js

Is there a way to implement retries for the https.get method in Node.js using async-retry?
If you are using this module https://github.com/zeit/async-retry, your answer is there in the README.md file:
// Packages
const retry = require('async-retry')
const fetch = require('node-fetch')

await retry(async bail => {
  // if anything throws, we retry
  const res = await fetch('https://google.com')

  if (403 === res.status) {
    // don't retry upon 403
    bail(new Error('Unauthorized'))
    return
  }

  const data = await res.text()
  return data.substr(0, 500)
}, {
  retries: 5
})
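If you specifically want https.get rather than a fetch library, a minimal sketch (the URL is a placeholder) is to wrap it in a Promise first and retry that:

const https = require('https')
const retry = require('async-retry')

function httpsGet(url) {
  return new Promise((resolve, reject) => {
    https.get(url, (res) => {
      let body = ''
      res.on('data', (chunk) => { body += chunk })
      res.on('end', () => {
        if (res.statusCode >= 400) {
          reject(new Error(`Request failed with status ${res.statusCode}`))
        } else {
          resolve(body)
        }
      })
    }).on('error', reject)
  })
}

const data = await retry(() => httpsGet('https://example.com'), { retries: 5 })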
And if you want a more popular solution / npm module, you can find it here: https://www.npmjs.com/package/requestretry
const request = require('requestretry');
...
// use await inside an async function
const response = await request.get({
  url: 'https://api.domain.com/v1/a/b',
  json: true,
  fullResponse: true, // (default) To resolve the promise with the full response or just the body
  // The below parameters are specific to request-retry
  maxAttempts: 5, // (default) try 5 times
  retryDelay: 5000, // (default) wait for 5s before trying again
  retryStrategy: request.RetryStrategies.HTTPOrNetworkError // (default) retry on 5xx or network errors
})

express - Limit a request to one at a time

I'm using Express to build an API that will be used internally. One of the requests triggers a heavy process on the server and should return a CSV out of it. This process might take more than 10 minutes.
To avoid overloading the server I want to restrict calls to this API so that, while the process hasn't finished, the same URL can't be requested again.
For this I tried to use express-rate-limit with the following configuration:
new RateLimit({
  windowMs: 30 * 60 * 1000, // 30 minutes
  max: 1,
  delayMs: 0, // disabled
  message: 'There is already a running execution of the request. You must wait for it to be finished before starting a new one.',
  handler: function handler(req, res) {
    logger.log('max request achieved');
    logger.log(res);
  },
});
But it seems the 'max request' limit is reached every time after exactly 2 minutes, even if I start the request only once. I suspect the browser retries the request after 2 minutes if it doesn't get any answer, is that possible?
I would like this request to have no retry strategy, so that the only way to reach the max request limit is to manually ask the server to execute the request twice in a row.
Thanks.
Edit
Here is my full code:
const app = express();
const port = process.env.API_PORT || 3000;

app.enable('trust proxy');

function haltOnTimedout(req, res, next) {
  if (!req.timedout) { next(); }
}

app.use(timeout(30 * 60 * 1000)); // 30min
app.use(haltOnTimedout);

app.listen(port, () => {
  logger.log(`Express server listening on port ${port}`);
});
// BILLING
const billingApiLimiter = new RateLimit({
  windowMs: 30 * 60 * 1000, // 30 minutes
  max: 1,
  delayMs: 0, // disabled
  message: 'There is already a running execution of the request. You must wait for it to be finished before starting a new one.',
  handler: function handler(req, res) {
    logger.log('max request achieved');
  },
});

app.use('/billing', billingApiLimiter);
app.use('/billing', BillingController);
And the code of my route:
router.get('/billableElements', async (request, response) => {
  logger.log('Route [billableElements] called');
  const { startDate } = request.query;
  const { endDate } = request.query;
  try {
    const configDoc = await metadataBucket.getAsync(process.env.BILLING_CONFIG_FILE || 'CONFIG_BILLING');
    const billableElements = await getBillableElements(startDate, endDate, configDoc.value);
    const csv = await produceCSV(billableElements);
    logger.log('csv produced');
    response.status(200).send(`${csv}`);
  } catch (err) {
    logger.error('An error occurred while getting billable elements.', err);
    response.status(500).send('An internal error occurred.');
  }
});
I found the answer thanks to this GitHub issue: https://github.com/expressjs/express/issues/2512.
TL;DR: I added request.connection.setTimeout(1000 * 60 * 30); to avoid the request firing again every 2 minutes (Node's HTTP server closes idle sockets after 2 minutes by default, which matches the pattern I was seeing).
But considering the code I wrote inside my question, @Paul's advice is still worth taking into account.
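In case it helps, a minimal sketch of where that line could sit in the route above:

router.get('/billableElements', async (request, response) => {
  // Raise the socket timeout to 30 minutes so the connection isn't closed
  // (and the request re-fired) while the CSV is still being produced.
  request.connection.setTimeout(1000 * 60 * 30);
  // ... rest of the handler unchanged
});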

Apollo Server timeout while waiting for stream data

I'm attempting to wait for the result of a stream with my Apollo Server. My resolver looks like this.
async currentSubs() {
  try {
    const stream = gateway.subscription.search(search => {
      search.status().is(braintree.Subscription.Status.Active);
    });
    const data = await stream.pipe(new CollectObjects()).collect();
    return data;
  } catch (e) {
    console.log(e);
    throw new Meteor.Error('issue', e.message);
  }
},
This resolver works just fine when the data stream being returned is small, but when the data coming in is larger, I'm getting a 503 (Service Unavailable). It looks like the timeout is happening around 30 seconds. I've tried increasing the timeout of my Express server with graphQLServer.timeout = 240000; but that hasn't made a difference.
How can I troubleshoot this, and where is the 30 second timeout coming from? It only fails when the results take longer.
I'm using https://github.com/mrdaniellewis/node-stream-collect to collect the results from the stream.
Error coming in from the try catch:
I20180128-13:09:26.872(-7)? { proxy:
I20180128-13:09:26.872(-7)? { error: 'Post http://127.0.0.1:26474/graphql: net/http: request canceled (Client.Timeout exceeded while awaiting headers)',
I20180128-13:09:26.872(-7)? level: 'error',
I20180128-13:09:26.873(-7)? msg: 'Error sending request to origin.',
I20180128-13:09:26.873(-7)? time: '2018-01-28T13:09:26-07:00',
I20180128-13:09:26.873(-7)? url: 'http://127.0.0.1:26474/graphql' } }
I had this same problem and it was a pretty simple solution. My calls were lasting a bit over 30 seconds and the default timeout was returning 503s as well, so I increased it.
Assuming you're using apollo-engine (this may be true for some other forms of Apollo), you can set your engine configs like so:
export function startApolloEngine() {
  const engine = new Engine({
    engineConfig: {
      stores: [
        {
          name: "publicResponseCache",
          memcache: {
            url: [environmentSettings.memcache.server],
            keyPrefix: environmentSettings.memcache.keyPrefix
          }
        }
      ],
      queryCache: {
        publicFullQueryStore: "publicResponseCache"
      },
      reporting: {
        disabled: true
      }
    },
    // GraphQL port
    graphqlPort: 9001,
    origin: {
      requestTimeout: "50s"
    },
    // GraphQL endpoint suffix - '/graphql' by default
    endpoint: "/my_api_graphql",
    // Debug configuration that logs traffic between Proxy and GraphQL server
    dumpTraffic: true
  });

  engine.start();
  app.use(engine.expressMiddleware());
}
Notice the part where I specify
origin: {
  requestTimeout: "50s"
}
That alone is what fixed it for me. Hope this helps!
You can find more information about that here
