Maximum time-to-live (TTL) of WNS push service? - web-push

I'm currently testing some web push examples, and I have a question about the WNS push service.
As far as I know, when I ask a push service to send a push event, I can specify the desired TTL in a request header. If the TTL exceeds the push service's own limit, the service adjusts it and responds with the actual TTL it applied.
So I expected that if I put a very large number in the TTL, such as 1234567890, the response would tell me the maximum TTL of each push service.
But only the Mozilla push service responds with its maximum TTL, which is "ttl": "5184000"; others like FCM and WNS don't respond with an adjusted TTL.
I tried to find the maximum TTL of FCM and WNS in the docs, and it seems FCM's maximum TTL is 28 days, but I couldn't find WNS's.
Does anyone know the maximum TTL of the WNS push service?
This is the simplified test code I'm currently using (JavaScript):
const webpush = require("web-push");

// `subscription` is a PushSubscription obtained from the client beforehand.
webpush
  .sendNotification(subscription, "TTL test", {
    TTL: 1234567890,
  })
  .then((res) => console.log(JSON.stringify(res)))
  .catch((res) => console.log(JSON.stringify(res)));
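For reference, when a push service does adjust the TTL, the adjusted value comes back in the response's ttl header. web-push resolves with the status code, body, and headers, so you can log just that header; a minimal sketch of that check (header names are lowercased by Node):

webpush
  .sendNotification(subscription, "TTL test", { TTL: 1234567890 })
  .then((res) => {
    // web-push resolves with { statusCode, body, headers }.
    console.log("applied TTL:", res.headers.ttl);
  });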

Related

express-rate-limit blocking requests from all users

I'm using the express-rate-limit npm package. I deployed my backend on AWS (a t2.micro EC2 instance). While the limiter is on, requests are blocked for ALL users who try to interact with my API: it works for a couple of minutes, then stops for about 10 minutes.
When I comment out the limiter everything works fine. I would expect only the one user who hammers the server with requests to be blocked, but what happens is that ALL users get blocked; all users are treated as a single user. That's my conclusion.
If that's the case, what should I do? I need my rate limiter on. And if there is another explanation, what would it be?
By default, express-rate-limit has a keyGenerator of req.ip. When I log this on my server it is '::ffff:127.0.0.1', which is obviously going to be the same for every request, thus rate-limiting all IP addresses once it's limited for one.
My solution was to use request-ip to get the correct IP address, like so:
const express = require('express');
const rateLimit = require('express-rate-limit');
const requestIp = require('request-ip');

const app = express();
app.use(requestIp.mw());
app.use(rateLimit({
  windowMs: 60 * 1000, // 1 minute
  max: 30, // limit each IP to 30 requests per windowMs
  keyGenerator: (req, res) => {
    return req.clientIp; // IP address from requestIp.mw(), as opposed to req.ip
  }
}));
Alternatively, you can derive the key from the X-Forwarded-For header set by the proxy:

keyGenerator: function (req) {
  return req.headers["x-forwarded-for"] || req.connection.remoteAddress;
}
It blocks based on IP
The express-rate-limit package blocks requests based on IP address because it provides a very basic rate-limiting configuration that suits most applications. If you block based on user, someone can easily configure a bot to hit your APIs until the limit is reached on one user account, then automatically create a new account and start hitting your server again. Blocking based on IP avoids that risk: one IP means one device, no matter how many users request from that IP. In most cases one device is used by one person, so this solution works pretty well.
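Note that when the app sits behind a reverse proxy or load balancer (common on AWS), the '::ffff:127.0.0.1' symptom can also be addressed with Express's built-in trust proxy setting, so that req.ip is derived from X-Forwarded-For and request-ip isn't needed. A minimal sketch, assuming exactly one proxy hop in front of the app:

const express = require('express');
const rateLimit = require('express-rate-limit');

const app = express();
app.set('trust proxy', 1); // trust the first proxy hop so req.ip is the real client IP
app.use(rateLimit({
  windowMs: 60 * 1000, // 1 minute
  max: 30, // limit each client IP to 30 requests per windowMs
}));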

Slow Post Vulnerability (R U Dead Yet) - Express.js - data rate limit Solution?

I am trying to solve the issue of Slow Post Vulnerability on my application.
Issue: https://blog.qualys.com/securitylabs/2011/07/07/identifying-slow-http-attack-vulnerabilities-on-web-applications
To limit the number of connections from a user, I have used express-rate-limit so that the application does not become unavailable.
const rateLimit = require('express-rate-limit')
const limiter = rateLimit({
  windowMs: 60 * 1000, // 1 minute
  max: 100 // limit each IP to 100 requests per windowMs
})
app.use(limiter)
But if I test my application with slowtesttool and run a test with 2 connections (at a rate of 1 connection per second, sending follow-up data every 10 seconds), I see the connections never get closed.
I have set a timeout on the connection as below, but it doesn't seem to work:
app.use((req, res, next) => {
  req.connection.setTimeout(30000, () => {
    req.socket.end()
  })
  next()
})
Is there a way I can limit the rate of accepting data, i.e. specifying the max time I can wait for every next chunk of body?
One solution could be to use the capabilities of your front web server (I assume you will expose your app behind a server such as nginx, Apache, Caddy, ...).
Nginx and Caddy have this built in; others probably do too.
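If you'd rather enforce this in Node itself, recent Node versions expose timeouts on the HTTP server that bound how long a client may take to deliver the headers and the full request body, which is exactly what slow-POST attacks exploit. A rough sketch, assuming Node 14.11+ (where server.requestTimeout exists):

const server = app.listen(3000);
server.headersTimeout = 10000;  // max 10s to receive the complete request headers
server.requestTimeout = 30000;  // max 30s to receive the entire request, slow bodies included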

Issue with using keepAlive if set for 20 seconds or more

Having a bit of trouble with keepAlive for Apollo subscriptions. Whenever I set the time to 20 seconds or more, the listening subscription errors out:
{
  "error": "Could not connect to websocket endpoint ws://website.test:8000/graphql. Please check if the endpoint url is correct."
}
Here is the ApolloServer setup:

const apollo = new ApolloServer({
  introspection: true,
  playground: true,
  typeDefs: schema,
  subscriptions: {
    keepAlive: 40000,
  },
  resolvers,
  context: ........
});
In my local environment, when I do not set keepAlive, the connection stays open indefinitely. If I set it to 10000 it works great. With keepAlive set to 40000 I get the error and the connection closes.
UPDATE
One thing we just noticed is that this issue happens on the playground but not on our web app. Maybe it's just a playground thing?
If you check the documentation provided by Apollo GraphQL:

keepAlive - Number
The frequency with which the subscription endpoint should send keep-alive messages to open client connections, in milliseconds, if any.
If this value isn't provided, the subscription endpoint doesn't send keep-alive messages.

As per your configuration, keepAlive set to 40000 is equivalent to 40 seconds, so the subscription endpoint sends keep-alive messages every 40 seconds. That is probably too long, and the connection is already closed by the time the next ping arrives.
You should use a smaller value.
If none of these help, I suggest you open an issue on the Apollo repository.
I had the same issue just a few days ago. I resolved it by also adding the path option, and declared the path inside the Express middleware as well - https://www.reddit.com/r/graphql/comments/fpb3yt
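For context, a minimal sketch of that fix with Apollo Server 2 and Express (typeDefs and resolvers are assumed to be defined elsewhere; the path and port are illustrative):

const http = require('http');
const express = require('express');
const { ApolloServer } = require('apollo-server-express');

const app = express();
const apollo = new ApolloServer({
  typeDefs,
  resolvers,
  subscriptions: {
    path: '/graphql',   // declare the subscription path explicitly
    keepAlive: 10000,   // a shorter interval, per the answer above
  },
});
apollo.applyMiddleware({ app, path: '/graphql' });

const httpServer = http.createServer(app);
apollo.installSubscriptionHandlers(httpServer); // attach the websocket handler
httpServer.listen(8000);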
WebSocket KeepAlive: WebSockets ping/pong, why not TCP keepalive?
How to handle WebSocket connection loss: Handling connection loss with websockets
Similar: Apollo Server - GraphQL's subscription websocket connection doesn't persist

Issue with client requesting Azure API Management endpoint - Error: ClientConnectionFailure: at transfer-response

I have a Node.js client making requests to an Azure API Management endpoint. I am sometimes getting the following exception:
ClientConnectionFailure: at transfer-response
I am using the request package, and in the client I am doing a straightforward request:
request({
  method: "GET",
  headers: {
    "Content-Type": "application/json",
    "Ocp-Apim-Subscription-Key": key
  },
  uri: endpointUrl
}, function (error, response, body) {
  (...)
});
So is it a timeout eventually happening in the client, in the middle of the request, consequently failing the connection to the Azure APIM endpoint?
Or something else? And how do you think I could solve this problem? I thought about increasing the timeout in the request, but I am assuming that, in the absence of a timeout, it uses the default timeout of the server (an Azure Function App), which is 120 seconds. Right?
Thanks.
ClientConnectionFailure suggests that the client broke the connection while APIM was processing the request. at transfer-response means this happened while APIM was sending the response to the client. APIM does not cache the request/response body by default, so while it is sending the response to the client it is simultaneously reading it from the backend. This can lead to the client breaking off the connection if the backend takes too long to respond with actual data.
This behavior is driven purely by the client deciding to stop waiting for data. Check how long it takes on the client between sending the request and seeing the error, and try adjusting the client timeout.
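For example, the request package accepts a timeout option in milliseconds; setting it explicitly (rather than relying on defaults) makes the client-side cutoff visible and tunable. A sketch, using the 120-second backend limit mentioned in the question:

request({
  method: "GET",
  headers: { "Ocp-Apim-Subscription-Key": key },
  uri: endpointUrl,
  timeout: 120000 // wait up to 120s before the client gives up
}, function (error, response, body) {
  if (error && error.code === 'ETIMEDOUT') {
    // the client-side timeout fired before the backend responded
  }
});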
We have seen this behavior when the backend takes too long to respond.
My approach would be to look at the backend first, which is a Function App in this case, see how long it takes, and then check whether any timeout limit is set in APIM.
The timeout duration of a Function App is defined by the functionTimeout property in the host.json project file. The default and maximum values differ by hosting plan and runtime version (for example, on the Consumption plan the default is 5 minutes and the maximum is 10).
For troubleshooting performance issues, follow Troubleshoot slow app performance issues in Azure App Service.
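For reference, functionTimeout is set in host.json as an hh:mm:ss string, e.g. to raise a Consumption-plan app to its 10-minute maximum:

{
  "functionTimeout": "00:10:00"
}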

How To Rate-Limit Google Cloud Pub/Sub Queue

I'm using Google's Pub/Sub queue to handle messages between services. Some of the subscribers connect to rate-limited APIs.
For example, I'm pushing street addresses onto a Pub/Sub topic. I have a Cloud Function which subscribes (via push) to that topic and calls out to an external rate-limited geocoding service. Ideally, my street addresses could be pushed onto the topic with no delay, the topic would retain those messages, and the subscriber would be called in a rate-limited fashion.
Is there any way to configure such a delay, or a message distribution rate limit? Increasing the ack window doesn't really help: I've architected this system to avoid long-running functions.
Because no answer so far describes workarounds, I'm going to answer this by stating that there is currently no way to do this. There are workarounds (see the comments on the question that explain how to build a queueing system using Cloud Scheduler), but there's no setting on a pull subscription that creates a rate limit between it and its topic.
I opened a feature request for this, though. Please speak up on the tracked issue if you'd like this feature:
https://issuetracker.google.com/issues/197906331
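To illustrate the Cloud Scheduler workaround from the comments: a scheduled job can synchronously pull a bounded batch of messages on each tick, which caps the rate of downstream calls. A rough sketch using the @google-cloud/pubsub v1 SubscriberClient (the project, subscription, and geocode helper names are hypothetical):

const { v1 } = require('@google-cloud/pubsub');
const subClient = new v1.SubscriberClient();

// Invoked by Cloud Scheduler, e.g. once per minute.
async function drainBatch() {
  const subscription = subClient.subscriptionPath('my-project', 'geocode-sub');
  // Pull at most 10 messages per tick -> at most 10 geocoder calls per minute.
  const [response] = await subClient.pull({ subscription, maxMessages: 10 });
  const received = response.receivedMessages || [];
  for (const { message } of received) {
    await geocode(message.data.toString()); // hypothetical rate-limited call
  }
  if (received.length) {
    await subClient.acknowledge({ subscription, ackIds: received.map((m) => m.ackId) });
  }
}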
An approach to solving your problem is to use async.queue.
It has a concurrency attribute with which you can manage the rate limit.
const async = require('async');

// create a queue object with concurrency 2
var q = async.queue(function(task, callback) {
  console.log('hello ' + task.name);
  callback();
}, 2);

// assign a callback
q.drain = function() {
  console.log('all items have been processed');
};

// add some items to the queue
q.push({name: 'foo'}, function(err) {
  console.log('finished processing foo');
});

// quoted from async documentation
A GCP Cloud Tasks queue also lets you limit the rate at which tasks are dispatched. Check this doc.
