Throttling only for users: skip throttling for the frontend server and getServerSideProps - node.js

I'm developing an app with NestJS, and I'm using the throttler module to ban abusive requests.
One thing I couldn't find a clear answer to: if it blocks abusive requests (for example, more than 20 requests per minute), will it also block the frontend requests made by the Node.js server?
I mean, getServerSideProps makes a request on every render. If our website has more than 100 visitors per minute, what will happen in this situation? Consider both cases:
- Frontend and backend projects are on the same server with the same IP
- They are hosted on different servers with different IP addresses

Your suspicion is valid: @nestjs/throttler does not differentiate between local and remote requests, so yes, your Next.js server will be blocked quickly.
I'd suggest doing the rate limiting in a reverse proxy instead; reverse proxies are more mature and can be configured not to throttle local requests.
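If you do want to keep @nestjs/throttler, a minimal sketch of exempting the frontend server could look like this. It relies on the module's skipIf option and assumes you know the frontend server's address (the FRONTEND_IP variable here is hypothetical):

```ts
// A minimal sketch, assuming @nestjs/throttler v4-style options; in newer
// versions ttl/limit move into a `throttlers` array, but `skipIf` is the same idea.
import { Module } from '@nestjs/common';
import { APP_GUARD } from '@nestjs/core';
import { ThrottlerModule, ThrottlerGuard } from '@nestjs/throttler';

// Hypothetical: however you identify your Next.js server's address.
const FRONTEND_IP = process.env.FRONTEND_IP ?? '127.0.0.1';

@Module({
  imports: [
    ThrottlerModule.forRoot({
      ttl: 60,   // window length in seconds
      limit: 20, // max requests per window per client
      // Short-circuit the throttler when the request comes from the
      // frontend server rather than an end user's browser.
      skipIf: (context) => {
        const req = context.switchToHttp().getRequest();
        return req.ip === FRONTEND_IP;
      },
    }),
  ],
  providers: [{ provide: APP_GUARD, useClass: ThrottlerGuard }],
})
export class AppModule {}
```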

Related

I run a timer for each HTTP request I get; I want to add a load balancer, but the second request may not be sent to the right backend server

I run a timer per user API request (over HTTP).
I want to scale horizontally (adding servers), but if I put several servers behind a load balancer, the user may not be sent to the same backend server for the second request, and my timing function wouldn't work.
If I could use cookies it would be easy with sticky sessions.
I can recognize the user by a parameter in the URL, but I would prefer not to have to create my own load-balancing scheme using Nginx or similar solutions.
If that helps:
- App is in Node.js
- Hosted at DigitalOcean
Anyone struck by a great idea?
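For illustration, the deterministic mapping hinted at above can be tiny. Here is a sketch (backend addresses and names are placeholders) that hashes the URL parameter so the same user always reaches the same server, no cookies needed; note that resizing the pool reshuffles the mapping, unlike true consistent hashing:

```ts
// Illustrative only: deterministically map a user parameter to a backend.
import { createHash } from 'node:crypto';

// Hypothetical backend pool; addresses are placeholders.
const BACKENDS = ['10.0.0.1:3000', '10.0.0.2:3000', '10.0.0.3:3000'];

function pickBackend(userId: string): string {
  // A stable digest means the same user always maps to the same index,
  // so every request for that user reaches the same server.
  const digest = createHash('md5').update(userId).digest();
  return BACKENDS[digest.readUInt32BE(0) % BACKENDS.length];
}

// e.g. inside a proxy handler:
// pickBackend(new URL(req.url!, 'http://x').searchParams.get('user') ?? '')
```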

Identify Internal Request Nginx + NodeJS Express

I have a NodeJS API using the Express framework.
I use Nginx for load balancing between my NodeJS instances, and PM2 to spawn the instances.
I noticed in the log that Nginx makes some "dummy/internal" requests, probably to check whether the instance is up (heartbeat requests might be the appropriate name for these requests).
My question is: what is the right way to identify these "dummy/internal" requests in my API?
I'm fairly certain that nginx only uses passive health checks for upstream servers. In other words, because every HTTP request is assumed to result in a response, nginx reasons: "If I send this server a bunch of requests and don't get responses, I'll consider the server unhealthy."
Can you share some access logs of the requests you're seeing?
As far as I know, nginx does not send any requests to upstream servers that are not ultimately initiated by a client.
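If the immediate goal is just to tell proxied traffic apart from direct hits while investigating, a small Express sketch along these lines might help; it assumes nginx is configured to pass X-Forwarded-For (via proxy_set_header), which is common but not automatic:

```ts
// A sketch, assuming nginx separately sets the X-Forwarded-For header.
import express from 'express';

const app = express();
app.set('trust proxy', true); // let Express parse X-Forwarded-* headers

app.use((req, res, next) => {
  // req.ips is filled from X-Forwarded-For when `trust proxy` is on;
  // an empty array means the request reached Node directly, without
  // passing through the proxy.
  const viaProxy = req.ips.length > 0;
  console.log(`${req.method} ${req.path} viaProxy=${viaProxy} from=${req.ip}`);
  next();
});

app.listen(3000);
```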

Node socket.io on load balanced Amazon EC2

I have a standard LAMP EC2 instance set-up running on Amazon's AWS. Having also installed Node.js, socket.io and Express to meet the demands of live updating, I am now at the stage of load balancing the application. That's all working, but my sockets aren't. This is how my set-up looks:
```
                 --- EC2 >> Node.js + socket.io
                /
Client >> ELB --
                \
                 --- EC2 >> Node.js + socket.io

[RDS MySQL - EC2 instances communicate with this]
```
As you can see, each instance has an installation of Node and socket.io. However, Chrome's debugger will occasionally show the socket request failing with a 400 and the reason {"code":1,"message":"Session ID unknown"}, and I guess this is because it's communicating with the other instance.
Additionally, let's say I am on page A and the socket needs to emit to page B; because of the load balancer, these two pages might well be served by different instances (they will both be open at the same time). Using something like sticky sessions wouldn't, to my knowledge, work in that scenario, because both pages would be pinned to their respective instances.
How can I get around this issue? Will I need a whole dedicated instance just for Node? That seems somewhat overkill...
The issues come up when you consider both websocket traffic (layer 4-ish) and HTTP traffic (layer 7) moving across a load balancer that can only inspect one layer at a time. For example, if you set the ELB to load balance on layer 7 (HTTP/HTTPS), then websockets will not work at all across the ELB. However, if you set the ELB to load balance on layer 4 (TCP), then any fallback HTTP polling requests could end up at any of the upstream servers.
You have two options here. You can figure out a way to effectively load balance both HTTP and websocket requests or find a way to deterministically map requests to upstream servers regardless of the protocol.
The first one is pretty involved and requires another load balancer. A good walkthrough can be found here. It's worth noting that when that post was written HAProxy didn't have native SSL support. Now that this is the case it might be possible to just remove the ELB entirely, if that's the route you want to go. If that's the case the second option might be better.
Otherwise you can use HAProxy on its own (or a paid version of Nginx) to implement a deterministic load-balancing mechanism. In this case you would use IP hashing, since socket.io does not provide a route-based mechanism for identifying a particular server the way sockjs does. This would use the first three octets of the IP address to determine which upstream server gets each request, so unless the user changes IP address between HTTP polls, this should work.
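To make that concrete, here is an illustrative sketch of what prefix-based IP hashing does conceptually; a real deployment would rely on the proxy's built-in implementation rather than hand-rolling it:

```ts
// Illustrative only: hash the /24 prefix of a client IP to pick an upstream.
function upstreamFor(ip: string, upstreams: string[]): string {
  // Only the first three octets are hashed, so the last octet can
  // change without the client moving to a different server.
  const prefix = ip.split('.').slice(0, 3).join('.');
  let hash = 0;
  for (const ch of prefix) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  return upstreams[hash % upstreams.length];
}

console.log(upstreamFor('203.0.113.42', ['10.0.0.1:3000', '10.0.0.2:3000']));
```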
The solution would be for the two (or more) Node.js installs to use a common session source.
Here is a previous question on using Redis as a common session store for Node.js: How to share session between NodeJs and PHP using Redis?
And another: Node.js Express sessions using connect-redis with Unix Domain Sockets
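A minimal sketch of that idea for socket.io itself, using the @socket.io/redis-adapter package (the maintained successor to the socket.io-redis approach those answers describe; the Redis URL is a placeholder):

```ts
// A minimal sketch: share socket.io events across instances via Redis.
import { createServer } from 'node:http';
import { Server } from 'socket.io';
import { createClient } from 'redis';
import { createAdapter } from '@socket.io/redis-adapter';

const httpServer = createServer();
const io = new Server(httpServer);

const pubClient = createClient({ url: 'redis://localhost:6379' });
const subClient = pubClient.duplicate();
await Promise.all([pubClient.connect(), subClient.connect()]);

// With the adapter installed, an emit on one instance is relayed through
// Redis to sockets connected to every other instance behind the ELB.
io.adapter(createAdapter(pubClient, subClient));

io.on('connection', (socket) => {
  socket.on('message', (msg) => io.emit('message', msg)); // reaches all instances
});

httpServer.listen(3000);
```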

Node.js Reverse Proxy/Load Balancer

I am checking out node-http-proxy and nodejs-proxy to build a DIY reverse proxy/load balancer in Node.js. After coding a small version, I set up two WEBrick servers for the same Rails app so I could load balance (round robin) between them. However, each HTTP request is sent to one server or the other, which is very inefficient, since loading the CSS and Javascript files for the home page alone takes more than 25 GET requests.
I tried to play a bit with socket events, but I didn't get anywhere, because by default it uses keep-alive connections (possibly this is why nginx just supports HTTP/1.0).
OK, so I am wondering how my proxy can send a block of HTTP requests (for instance, an entire page load) to only one server, so that I can send the next block to another server.
You need to consider stickiness, or session persistence. This ensures that connections after the first inbound connection get 'stuck' to the chosen server for the duration of the session, or until the persistent connection times out.
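As a rough illustration with node-http-proxy, which the question already uses: stickiness can be as simple as remembering which target served a client first (the targets and the in-memory map here are placeholders; a real setup would expire entries and handle backend failures):

```ts
// A rough sketch of client-address stickiness over node-http-proxy.
import http from 'node:http';
import httpProxy from 'http-proxy';

const targets = ['http://127.0.0.1:3001', 'http://127.0.0.1:3002'];
const proxy = httpProxy.createProxyServer({});
const stickyMap = new Map<string, string>(); // client address -> target
let rr = 0;

http.createServer((req, res) => {
  const client = req.socket.remoteAddress ?? 'unknown';
  let target = stickyMap.get(client);
  if (!target) {
    target = targets[rr++ % targets.length]; // round robin on first contact
    stickyMap.set(client, target);           // later requests stay "stuck"
  }
  proxy.web(req, res, { target });
}).listen(8080);
```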

nginx produces four times more traffic than node-http-proxy

We have been using node-http-proxy for a while and it works fine. But as our system grows bigger, we want to move to nginx.
We currently handle about 100 requests per second, which produces outgoing traffic of about 1 mb/s.
Our tests with nginx (same number of requests, same backend servers, same responses) produce outgoing traffic of about 4 mb/s.
We checked the headers, because that could have been the only difference in the responses, but the headers didn't change that much.
Does anyone have an idea what else could produce this increase in traffic?
Thanks, Kim
EDIT:
We don't use clustering; these are just dumb reverse proxies. Requests for domain A go to server A, domain B to server B, ...
We did the tests in our production environment, so the backend servers stayed the same during the tests; only the proxies changed.
We found out what happens: the old Node.js server doesn't send all the required SSL certificates, while nginx sends all the certificates (intermediates, etc.) with every request.
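For anyone hitting the same thing, a sketch of the relevant Node-side fix, assuming the standard node:https options (file names are placeholders): the cert option should contain the full chain, leaf plus intermediates concatenated.

```ts
// A sketch: serve the full certificate chain from Node's https server.
import https from 'node:https';
import { readFileSync } from 'node:fs';

const server = https.createServer(
  {
    key: readFileSync('server.key'),
    // fullchain.pem = leaf cert + intermediate CAs concatenated; omitting
    // the intermediates is what the old proxy was (incorrectly) doing.
    cert: readFileSync('fullchain.pem'),
  },
  (req, res) => res.end('ok'),
);

server.listen(443);
```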
