I run a timer for each HTTP request I get; I want to add a load balancer, but the second request may not be sent to the right backend server - node.js

I run a timer per user API request (over HTTP).
I want to scale horizontally (add servers), but with several servers behind a load balancer the user may not be sent to the same backend server for the second request, so my timing function wouldn't work.
If I could use cookies, it would be easy with sticky sessions.
I can recognize the user by a parameter in the URL, but I would prefer not to have to create my own load-balancing scheme with Nginx or similar solutions.
If that helps:
- The app is in Node.js
- Hosted on DigitalOcean
Anyone struck by a great idea?
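One way to sidestep stickiness entirely is to keep the per-user start time in a shared store such as Redis instead of inside the Node process, so it no longer matters which backend receives the second request. A minimal sketch, assuming the user is identified by a userId URL parameter; the route names and key format are placeholders:

    // Sketch only: per-user timer state lives in Redis, so any backend behind the
    // load balancer can finish the timing. Route names and `userId` are placeholders.
    import express from 'express';
    import { createClient } from 'redis';

    const app = express();
    const redis = createClient({ url: 'redis://localhost:6379' });

    app.get('/timer/start', async (req, res) => {
      const userId = String(req.query.userId);            // user identified via URL parameter
      await redis.set(`timer:${userId}`, String(Date.now()));
      res.send('timer started');
    });

    app.get('/timer/stop', async (req, res) => {
      const userId = String(req.query.userId);
      const startedAt = await redis.get(`timer:${userId}`);
      if (!startedAt) return res.status(404).send('no timer running');
      await redis.del(`timer:${userId}`);
      res.json({ elapsedMs: Date.now() - Number(startedAt) });
    });

    redis.connect().then(() => app.listen(3000));

With the state externalized like this, any load-balancing strategy (plain round robin included) works, and no custom routing scheme is needed.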

Related

Throttling only for users, Skip throttle for frontend server and getServerSideProps

I'm developing an app with NestJS where I'm using the throttler module to block abusive requests.
One thing I couldn't find a clear answer to: if it blocks abusive requests (for example, more than 20 requests per minute), will it also block the frontend requests made by the Node.js server?
I mean, getServerSideProps makes a request on every render. If our website has more than 100 visitors per minute, what happens in this situation? Consider both cases:
- Frontend and backend projects are on the same server with the same IP
- They are hosted on different servers with different IP addresses
Your suspicion is valid: @nestjs/throttler does not differentiate between local and remote requests, so yes, your Next.js server will be blocked quickly.
I'd suggest doing the rate limiting in a reverse proxy instead; they are more mature and can be configured not to throttle local requests.
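If you'd rather keep @nestjs/throttler, it also accepts a skipIf callback in its module options (verify the exact option name against the throttler version you use), which could exempt requests coming from your own Next.js server. A rough sketch; the internal IP list is an assumption about your setup:

    // Sketch: exempt the frontend server's getServerSideProps calls from throttling.
    // The skipIf option and the IPs below are assumptions to verify for your setup/version.
    import { Module } from '@nestjs/common';
    import { APP_GUARD } from '@nestjs/core';
    import { ThrottlerModule, ThrottlerGuard } from '@nestjs/throttler';

    const INTERNAL_IPS = ['127.0.0.1', '::1', '10.0.0.5']; // hypothetical Next.js server addresses

    @Module({
      imports: [
        ThrottlerModule.forRoot({
          ttl: 60,    // window in seconds
          limit: 20,  // max requests per window per client
          skipIf: (context) => {
            const req = context.switchToHttp().getRequest();
            return INTERNAL_IPS.includes(req.ip); // don't count internal requests
          },
        }),
      ],
      providers: [{ provide: APP_GUARD, useClass: ThrottlerGuard }],
    })
    export class AppModule {}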

Identify Internal Request Nginx + NodeJS Express

I have a Node.js API using the Express framework.
I use Nginx for load balancing between my Node.js instances, and PM2 to spawn those instances.
I noticed in the logs that Nginx makes some "dummy/internal" requests, probably to check whether the instance is up (heartbeat requests might be the appropriate name for these requests).
My question is: what is the right way to identify these "dummy/internal" requests in my API?
I'm fairly certain that nginx only uses passive health checks for upstream servers. In other words – because all HTTP requests are assumed to result in a response, nginx says "If I send this server a bunch of requests and don't get responses for them, I'll consider the server to be unhealthy".
Can you share some access logs of the requests you're seeing?
As far as I know, nginx does not send any requests to upstream servers that are not ultimately initiated by a client.
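To narrow down where those requests come from, a small Express middleware can dump exactly what each instance receives; whether X-Real-IP / X-Forwarded-For show up depends on the proxy_set_header lines in your nginx config, so treat those header names as assumptions about the setup:

    // Sketch: log enough detail per request to tell proxied requests from direct hits.
    // X-Real-IP / X-Forwarded-For only appear if nginx is configured to set them.
    import express from 'express';

    const app = express();

    app.use((req, _res, next) => {
      console.log(JSON.stringify({
        time: new Date().toISOString(),
        method: req.method,
        url: req.originalUrl,
        remoteAddress: req.socket.remoteAddress,   // nginx's address when proxied
        forwardedFor: req.headers['x-forwarded-for'],
        realIp: req.headers['x-real-ip'],
        userAgent: req.headers['user-agent'],
      }));
      next();
    });

    app.get('/', (_req, res) => res.send('ok'));
    app.listen(3000);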

Why are there multiple POST requests to a /FotorShopSurpport API on my AWS Elastic Beanstalk Node server?

I see multiple POST requests throughout my logs:
POST /FotorShopSurpport/fetchModulesByAppkey
POST /FotorShopSurpport/fetchRecommendResource
POST /FotorShopSurpport/batchResourcePkgNumByType
I don't have any API matching that route, nor am I calling these APIs from my server. I recently created this server and no one apart from me even knows the link to it.
Is this something Elastic Beanstalk doing? Or is it totally different?
I have several other servers on Elastic Beanstalk, and this is the first time I have seen these requests in any logs.
I found some access logs containing "FotorShopSurpport" on Google. The requests are for store.fotor.com.
$ host store.fotor.com
store.fotor.com is an alias for elb-store-376424179.us-west-2.elb.amazonaws.com.
elb-store-376424179.us-west-2.elb.amazonaws.com has address 52.34.194.249
elb-store-376424179.us-west-2.elb.amazonaws.com has address 35.160.57.75
Some client trying to access store.fotor.com is using the wrong IP, maybe because of overly aggressive DNS caching; ELB keeps changing IPs. I have seen such requests in my access logs too. Make sure your web server is configured to only serve requests for your own hostnames.
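At the application level, that check could look something like the sketch below (the hostnames are placeholders); at the nginx level the usual equivalent is a catch-all default server that rejects unknown Host headers.

    // Sketch: refuse requests whose Host header is not one of our own hostnames.
    // 'example.com' / 'www.example.com' are placeholders for the real domains.
    import express from 'express';

    const ALLOWED_HOSTS = new Set(['example.com', 'www.example.com']);

    const app = express();

    app.use((req, res, next) => {
      const host = (req.headers.host ?? '').split(':')[0]; // strip any port suffix
      if (!ALLOWED_HOSTS.has(host)) {
        return res.status(421).end(); // 421 Misdirected Request: not a hostname we serve
      }
      next();
    });

    app.get('/', (_req, res) => res.send('ok'));
    app.listen(3000);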

Node.js Reverse Proxy/Load Balancer

I am looking at node-http-proxy and nodejs-proxy to build a DIY reverse proxy/load balancer in Node.js. After coding a small version, I set up two WEBrick servers for the same Rails app so I could load balance (round robin) between them. However, each HTTP request is sent to one server or the other, which is very inefficient, since loading the CSS and JavaScript files for the home page alone takes more than 25 GET requests.
I tried to play a bit with socket events, but I didn't get anywhere because keep-alive connections are used by default (possibly this is why nginx only supports HTTP/1.0 here).
So I am wondering: how can my proxy send a block of HTTP requests (for instance, everything needed to load a web page) to only one server, so that I can send the next block to another server?
You need to consider stickiness, or session persistence. This ensures that, after the first inbound connection, subsequent connections get 'stuck' to the chosen server for the duration of the session, or until the persistence entry times out.
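With node-http-proxy, one simple sketch-level way to get that stickiness is to derive the target from the client address instead of rotating round-robin; the backend ports below are placeholders:

    // Sketch: pick the backend by hashing the client address, so one client keeps
    // hitting one backend; all of a page's asset requests then land on the same server.
    import http from 'http';
    import httpProxy from 'http-proxy';

    const backends = ['http://127.0.0.1:3001', 'http://127.0.0.1:3002']; // placeholder targets
    const proxy = httpProxy.createProxyServer({});

    function pickBackend(req: http.IncomingMessage): string {
      const ip = req.socket.remoteAddress ?? '';
      // trivial hash: sum of character codes; enough to keep a client on one backend
      const hash = [...ip].reduce((sum, ch) => sum + ch.charCodeAt(0), 0);
      return backends[hash % backends.length];
    }

    http.createServer((req, res) => {
      proxy.web(req, res, { target: pickBackend(req) });
    }).listen(8080);

A cookie-based scheme (set on the first response, read on later requests) gives the same effect when many clients share one IP, for example behind NAT.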

Reverse proxy websockets (SSL), traffic through Stunnel to many node.js apps

I'm looking for some ideas...
I have a series of robust Node.js apps that need to be delivered to specific users (post-authentication), with virtually no file serving beyond the initial delivery of the index page. The rest of the communication is all done via socket.io.
ClientA (login) needs to be connected to an application on, let's say, :90001
ClientB (login) on :90002
ClientC (login) on :90003
*All HTTP/1.1 and WebSocket traffic needs to be secure
I have tried a few configurations:
stunnel/varnish/nginx
stunnel/haproxy
stunnel/nginx
I was thinking a good approach would be to use Redis to store sessions and validate against a cookie; however, that would most likely mean exposing Node.js on the front end.
Questions:
- What are the risks of using node-http-proxy as the front piece?
- Is this something I should deem possible (one piece that "securely" redirects WebSocket traffic and routes specific sessions to many independent/exclusive backends)?
- I am aware that nginx 1.3 (in development) is slated to support WebSockets; is this worth holding out for?
- Has anyone had thorough experience with yao's tcp_proxy module for nginx (reliability/scalability)?
I can't say I have done this before, but I can offer some ideas perhaps:
One Node authentication server that takes login details and sets a cookie specific to the server the user should connect to. It then redirects to the index page, at which point HAProxy can direct the request based on the cookie. See this question: https://serverfault.com/questions/75385/is-there-a-way-to-configure-haproxy-to-send-traffic-based-on-a-cookie
Alternatively, you could have the above authentication on all servers instead of just one. HAProxy would have to be configured to balance across all nodes if there is no relevant cookie header. Each node would do the Set-Cookie + redirect, and subsequent requests should end up on the specific node instance.
By the way, HAProxy 1.5-dev now has built-in support for SSL, so there is no need for Stunnel anymore.
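A rough sketch of idea 1 on the Node side; the cookie name SRV, the login handling and the user-to-backend mapping are all placeholders, and the HAProxy side would match on that cookie as in the linked ServerFault answer:

    // Sketch: after login, set a cookie naming the backend this user belongs to and
    // redirect to the index; the proxy then routes subsequent requests on that cookie.
    // TLS is assumed to be terminated in front (stunnel/haproxy), hence the plain listener.
    import express from 'express';

    const app = express();
    app.use(express.urlencoded({ extended: false }));

    // hypothetical mapping of users to their dedicated socket.io backend
    const userBackend: Record<string, string> = {
      clientA: 'app01', // e.g. the app the question puts on :90001
      clientB: 'app02',
      clientC: 'app03',
    };

    app.post('/login', (req, res) => {
      const user = String(req.body.username); // real credential check omitted in this sketch
      const backend = userBackend[user];
      if (!backend) return res.status(401).send('unknown user');
      res.cookie('SRV', backend, { httpOnly: true, secure: true });
      res.redirect('/'); // index is served, socket.io connects, stuck to `backend` via the cookie
    });

    app.listen(3000);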
