NGINX load balancer routing to a different IP if a request fails - node.js

I have the following nginx configuration for my reverse proxy load balancer:
upstream appserver {
    server 192.168.1.101:3800;
    server 192.168.1.102:3800;
    server 192.168.1.103:3800;
    server 192.168.1.104:3800;
}

server {
    location /api {
        proxy_pass http://appserver;
    }
}
If one of my node instances crashes or restarts in the middle of processing a request, the load balancer redirects that request to another IP in the upstream. I don't want this to happen: it should not retry on another server, but instead respond with a 500 or something similar.

If I understand your request correctly, I think you need proxy_next_upstream off;
http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_next_upstream
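For reference, a minimal sketch of where that directive would sit in your config (the upstream name appserver is taken from the question above):

server {
    location /api {
        proxy_pass http://appserver;
        # Do not retry the request on another upstream server;
        # if the chosen server fails, the client gets the error directly.
        proxy_next_upstream off;
    }
}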

Related

Cloudflare - No further requests possible during download

My site allows users to download big .zip files. The problem I'm dealing with is that while a user is downloading such a file, all other requests to the site wait until the download is finished or cancelled, making the site practically unusable. In the Chrome network tab, the requests show as pending. Why could this be?
The server itself is implemented in Node.js using Express and is proxied through NGINX and then through Cloudflare. When I connect to the Express server or the NGINX proxy directly, this problem doesn't come up; from what I have observed, it only occurs when routed through Cloudflare.
This is my NGINX config, in case it helps:
server {
    listen 80;
    listen [::]:80;

    server_name marbleland.vani.ga;
    client_max_body_size 20m;

    location / {
        proxy_pass "http://localhost:20020/";
    }
}
Am I missing something obvious?

Nginx upstream servers all go down when one of them shuts down

I'm trying to set up upstream servers with nginx. All servers run the same Node.js app on port 8080 under pm2. Here is the nginx default.conf of the main server:
upstream backend {
    ip_hash;
    server localhost:8080;
    server sv1_ip_address;
    server sv2_ip_address;
}

server {
    listen 443 ssl;
    location / {
        proxy_pass http://backend;
        ...
    }
    ...
}
And on sv1 and sv2, I have the same default.conf as follows
server {
    listen 80 default_server;
    location / {
        proxy_pass http://localhost:8080;
        ...
    }
}
Now when I tried shutting down either sv1 or sv2 (using pm2 kill for Node, or even rebooting the machine), all upstream servers went down and I received a 500 error (?) when accessing the app by its domain name. So I thought there was something wrong with nginx on those secondary servers, and I replaced upstream backend with this:
upstream backend {
    ip_hash;
    server localhost:8080;
    server sv1_ip_address:8080;
    server sv2_ip_address:8080;
}
and now shutting down or rebooting was handled correctly (meaning nginx routes requests to one of the living servers). Is this expected behavior, or am I doing something wrong here? I don't think routing requests directly to port 8080 is a good idea, though.
I do not know why you had to install the nginx service on the sv1 and sv2 servers.
When you reboot sv1 or sv2, nginx has to start up again first; once the reboot is done, check with service nginx status whether it is running.
And killing Node means the application itself is down, so nginx answered with a 500 error.
Note also that server sv1_ip_address; without a port defaults to port 80, i.e. the nginx instance on that machine, whereas sv1_ip_address:8080 bypasses it and talks to the Node app directly.
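If the intent is for the main nginx to route around a dead secondary automatically, a minimal sketch under the question's own naming (sv1_ip_address etc. are placeholders) would mark failed servers and retry on errors:

upstream backend {
    ip_hash;
    server localhost:8080;
    # After 2 failed attempts, consider the server down for 30s.
    server sv1_ip_address:8080 max_fails=2 fail_timeout=30s;
    server sv2_ip_address:8080 max_fails=2 fail_timeout=30s;
}

server {
    listen 443 ssl;
    location / {
        proxy_pass http://backend;
        # Try the next upstream on connection errors, timeouts,
        # and bad-gateway responses.
        proxy_next_upstream error timeout http_502;
    }
}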

Use reverse proxy from HTTPS client to HTTP server running locally on my machine

I have a published site that uses HTTPS. The site needs to communicate with an HTTP Node Express API that runs on my local machine. Everything worked fine until I switched the client application to HTTPS; now I receive mixed-content warnings. I have been reading about reverse proxies and wonder if this could be the solution to my problem. Is it possible to proxy a request to my localhost? Or will localhost point to the server the proxy is on?
I have been looking at using nginx as the reverse proxy server, but I have zero experience with proxies and am not positive how to go about it.
I am mainly wondering whether it is possible before I dig any deeper.
Yes, this is a pretty standard use case for nginx (or any other reverse proxy). You would configure the location prefixes, etc. that need to go to your backend application and proxy to them via the proxy_pass directive. Any static content can be served directly from nginx, and all of it can then sit behind nginx.
Assuming that your application never issues absolute URLs starting with "http://", this should resolve your mixed-content warnings.
You will probably want to read some tutorials, but the basics of your configuration would be:
server {
    listen 443 ssl; # you can also add http2
    server_name hostnames that you listen for;

    ssl_certificate_key /path/to/cert.key;
    ssl_certificate /path/to/cert.pem;

    root /var/www/sites/foo.com;

    location /path/handled/by/application {
        proxy_pass http://localhost:8000; # or whatever the port is
    }
}
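If the backend needs to know the original host or that the request arrived over HTTPS, you can also forward that information. A hedged addition: the header names below are conventional, but whether your Express app actually reads them depends on its configuration (e.g. Express's trust proxy setting):

location /path/handled/by/application {
    proxy_pass http://localhost:8000;

    # Pass the original request details on to the backend.
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}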

Azure App Service getting Error 404 when redirected via NGINX

I created a VM with port 80 open and installed NGINX on it.
I created 2 App Services, which can be accessed via x1.azurewebsites.net and x2.azurewebsites.net.
I configured the VM to act as a load balancer, but when redirecting the traffic I get the following: https://i.gyazo.com/b94bed9c90d3b0f0c400c83f762f0544.png
I am not using my own domain. Does someone know what the issue could be?
These are my configurations:
upstream backend {
    server xx.azurewebsites.net;
    server xxx.azurewebsites.net;
}

server {
    listen 80 default_server;
    listen [::]:80 default_server;

    root /var/www/html;
    server_name _;

    location / {
        proxy_pass http://backend;
    }
}
Azure App Service uses cookies for ARR (Application Request Routing) affinity. You have to make sure that your NGINX reverse proxy configuration passes the correct cookie / header to your web app.
The other possibility (to make sure the behavior comes from ARR) is to disable instance affinity: https://azure.microsoft.com/en-us/blog/disabling-arrs-instance-affinity-in-windows-azure-web-sites/
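One hedged illustration of the header point: by default nginx sends the upstream name ("backend") as the Host header, and App Service routes requests by Host, which would explain a 404. A sketch for a single App Service (x1.azurewebsites.net stands in for the question's placeholder names):

upstream backend {
    server x1.azurewebsites.net;
}

server {
    listen 80 default_server;

    location / {
        proxy_pass http://backend;
        # App Service routes by Host header; without this, nginx
        # sends "backend" as the Host, which the front end rejects.
        proxy_set_header Host x1.azurewebsites.net;
    }
}

Note that with two different App Services in one upstream, a single fixed Host header only matches one of them, so this really only load-balances instances of the same app.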

Nginx upstream configuration

I am trying to configure nginx with an upstream.
We have 3 machines where we run the application server, and nginx proxy-passes all requests to the application servers.
I used the following configuration in nginx:
upstream appcluster {
    server host1.example.com:8080 max_fails=2 fail_timeout=300s;
    server host2.example.com:8080 max_fails=2 fail_timeout=300s;
}
Now the issue is: if a request reaches nginx while one server is down for unknown reasons, it waits a long time for a response, or sometimes hits a connection timeout.
Can someone suggest the right configuration so that the appcluster responds without latency or connection timeouts whenever a server won't respond?
Then this can help; check proxy_next_upstream.
This directive determines in which cases the request will be passed on to the next server.
Your server block should then look like this, for example:
server {
    location / {
        proxy_pass http://appcluster;
        proxy_next_upstream error timeout http_404;
    }
}
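Since the complaint is specifically about long waits, it may also help to bound how long nginx waits before giving up on a server. A minimal sketch with illustrative values (tune them to your environment):

server {
    location / {
        proxy_pass http://appcluster;
        proxy_next_upstream error timeout http_404;

        # Fail over quickly instead of hanging on a dead server.
        proxy_connect_timeout 2s;
        proxy_send_timeout 10s;
        proxy_read_timeout 10s;
    }
}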
