Identify internal requests from Nginx to a Node.js Express API

I have a Node.js API using the Express framework.
I use Nginx for load balancing between my Node.js instances, and PM2 to spawn the instances.
I noticed in the logs that Nginx makes some "dummy/internal" requests, probably to check whether an instance is up ("heartbeat requests" might be the right name for them).
My question is: what is the right way to identify these "dummy/internal" requests in my API?

I'm fairly certain that nginx only uses passive health checks for upstream servers. In other words – because all HTTP requests are assumed to result in a response, nginx says "If I send this server a bunch of requests and don't get responses for them, I'll consider the server to be unhealthy".
Can you share some access logs of the requests you're seeing?
As far as I know, nginx does not send any requests to upstream servers that are not ultimately initiated by a client.
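nginx (open source) won't generate probes of its own, but if something in your stack does actively hit the instances (PM2 monitoring, an external uptime checker, or nginx Plus's health_check), the easiest way to make those requests identifiable is to give them a dedicated route and skip it in your request logging. A minimal Express sketch, assuming a hypothetical /healthz path:

```js
const express = require('express');
const app = express();

// Log normal API traffic, but skip anything hitting the health path
// so probe requests never pollute the application logs.
app.use((req, res, next) => {
  if (req.path !== '/healthz') {
    console.log(`${req.method} ${req.originalUrl}`);
  }
  next();
});

// Dedicated endpoint for whatever is probing the instance
// (PM2 monitoring, an uptime checker, nginx Plus health_check, ...).
app.get('/healthz', (req, res) => res.sendStatus(204));

// Regular application route.
app.get('/api/hello', (req, res) => res.json({ ok: true }));

app.listen(3000);
```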

Related

Why is a web server (i.e. Nginx) REQUIRED for FastAPI but not an Express API?

A typical request gets processed like this:
request -> Nginx reverse proxy -> AWS EC2 -> Express API/FastAPI -> response
but my biggest confusion is why FastAPI absolutely NEEDS Nginx to work while an Express API doesn't (despite Node.js and Python both having HTTP modules and hence being able to act as web servers on their own). Why do I need Nginx at all for FastAPI? Can't an AWS EC2 instance act as a web server like Nginx?
This post says it is so we can hide the port number in the url, but that being the only reason sounds unreasonable to me.
Why is a web server (i.e. Nginx) REQUIRED for FastAPI
Nginx is not required for FastAPI. You can listen on external ports and handle requests without a reverse proxy. Even the FastAPI documentation only covers setting up Nginx under its Advanced section (Ref).
This post says it's so we can hide the port number in the url, but this being the only reason seems silly to me.
You can still hide the port number in the url if you run the Express app with sudo and listen on port 80 or 443.
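For instance, a minimal sketch of an Express app bound straight to port 80, so no port shows up in the URL (the route and message are placeholders; binding below port 1024 needs root/sudo or the CAP_NET_BIND_SERVICE capability on Linux):

```js
const express = require('express');
const app = express();

app.get('/', (req, res) => {
  res.send('Served directly on port 80, no reverse proxy in front');
});

// Ports below 1024 are privileged, so start this with sudo
// (or grant the capability: setcap 'cap_net_bind_service=+ep' $(which node)).
app.listen(80, () => console.log('Listening on port 80'));
```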
Can't an AWS EC2 instance act as a web server like Nginx?
Yes it can.
There are benefits to using a proxy server like Nginx. Proxy servers can handle load balancing, caching, and SSL termination, and they can handle a large number of concurrent connections efficiently.
Using a proxy server is a best practice. However, it is not a requirement for FastAPI or Express.

I run a timer for each HTTP request I get; I'd like to add a load balancer, but the second request may (or will) not be sent to the right backend server

I run a timer per user API request (over HTTP).
I want to grow horizontally (by adding servers), but if I put several servers behind a load balancer, the user may not be sent to the same backend server for the second request, and my timing function wouldn't work.
If I could use cookies it would be easy with sticky sessions.
I can recognize the user via a parameter in the URL, but I would prefer not to have to create my own load-balancing scheme using Nginx or similar solutions.
If that helps:
- App is in nodejs
- Hosted at DigitalOcean.
Anyone struck by a great idea?

How to use an Nginx 3rd-party module when proxying connections to application servers

I developed an Nginx 3rd-party dynamic module and did the required configuration in nginx.conf. I am able to run the module and see it doing its processing. The module reads the request headers, cookies, etc., executes some business logic, and modifies the response headers before the response is sent back to the client.
Problem: "How to use the nginx module when proxying connections to application servers"
I am using Nginx as the proxy server and Tomcat or Node as the application server, with my application hosted on the app server. I am able to route requests through both the web and app servers and get a response back, but the module isn't being invoked. I am not sure how to link/configure it so that I can intercept the request and modify the response headers as needed.
Flow: Browser <-> Web Server (module sits here) <-> Application Server
Has anybody explored this part? If yes then please help.

How to get nginx to take advantage of http2 with express

I am using express with node and nginx as a reverse proxy. I'd like to know how to take advantage of http/2 with nginx to serve static content, with all other requests being forwarded to the express API.
At the moment, my express server is being served via http/1 and nginx is accepting http/2 connections, and forwarding them to express. How do I set up nginx so that it uses http/2 to serve everything in my statics folder, but forwards all requests to the API as http1?
I will break your questions into two parts:
How to take advantage of http/2.0 to serve static files from nginx?
How to set up nginx to send http/1.1 requests to the backend server when nginx acts as a reverse proxy?
Answer 1:
For serving static files, the major performance benefit comes from the multiplexing feature of the http/2.0 protocol.
Multiplexing improves on the pipelining introduced in http/1.1 and overcomes the problem of head-of-line (HOL) blocking. With multiplexing, multiple resources are loaded in parallel over the same underlying TCP connection. You should also consider stream prioritisation, to give priority to the resources you want loaded first on the page; otherwise critical resources can be delayed, since all resources contend for the same multiplexed connection.
Answer 2:
nginx always speaks plain HTTP/1.x to proxied backends (by default proxied requests use HTTP/1.0; set proxy_http_version 1.1 if you need keepalive or WebSockets), so if you have already configured nginx to accept http/2.0 from clients you do not have to do anything special for the proxied requests. This is because nginx does not support http/2.0 in the proxy module as of now. Refer to this ticket. Also, please check this DigitalOcean tutorial, which walks through setting up nginx with http/2.0 on Ubuntu 16.04.
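If you want to confirm this from the Express side, the protocol version of each incoming request is available on the request object; a small sketch (the /version route is just for illustration):

```js
const express = require('express');
const app = express();

// Behind nginx this will report 1.0 or 1.1, never 2.0, because nginx
// re-issues the proxied request to the upstream as HTTP/1.x even when
// the browser connects to nginx over HTTP/2.
app.get('/version', (req, res) => {
  res.send(`Express received this request as HTTP/${req.httpVersion}`);
});

app.listen(3000);
```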

Node.js Reverse Proxy/Load Balancer

I am checking out node-http-proxy and nodejs-proxy to build a DIY reverse proxy/load balancer in Node.js. After coding a small version, I set up two WEBrick servers for the same Rails app so I could load balance (round robin) between them. However, each HTTP request is sent to one server or the other, which is very inefficient, since loading the CSS and JavaScript files for the home page alone takes more than 25 GET requests.
I tried to play a bit with socket events, but I didn't get anywhere, because keep-alive connections are used by default (possibly this is why nginx only supports http/1.0 for proxied connections).
OK, so I am wondering how my proxy can send a block of HTTP requests (for instance, an entire page load) to only one server, so that I can send the next block to another server.
You need to look at stickiness, or session persistence. It ensures that, after the first inbound connection, subsequent connections get "stuck" to the chosen server for the duration of the session, or until the persistence times out.
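As a rough sketch of what IP-based stickiness can look like with node-http-proxy (the backend addresses, listening port, and the hash-by-client-IP choice are assumptions for illustration, not part of the original setup):

```js
const http = require('http');
const httpProxy = require('http-proxy');

// Backend servers (e.g. the two WEBrick instances).
const targets = ['http://127.0.0.1:3001', 'http://127.0.0.1:3002'];
const proxy = httpProxy.createProxyServer({});

// Pick a backend by hashing the client IP, so every request from the
// same client (including the ~25 asset GETs of a single page load)
// lands on the same server instead of being round-robined.
function pickTarget(req) {
  const ip = req.headers['x-forwarded-for'] || req.socket.remoteAddress || '';
  let hash = 0;
  for (const ch of ip) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  return targets[hash % targets.length];
}

http.createServer((req, res) => {
  proxy.web(req, res, { target: pickTarget(req) }, (err) => {
    res.writeHead(502);
    res.end('Bad gateway: ' + err.message);
  });
}).listen(8080);
```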
