Nginx upstream configuration - linux

I am trying to configure nginx with an upstream block.
We have three machines running the application server, and nginx proxies all requests to those application servers.
I used the following configuration in nginx:
upstream appcluster {
    server host1.example.com:8080 max_fails=2 fail_timeout=300s;
    server host2.example.com:8080 max_fails=2 fail_timeout=300s;
}
The issue is that when a request reaches nginx while one server is down for unknown reasons, nginx waits a long time for a response, or sometimes returns a connection timeout.
Can someone suggest the right configuration so that appcluster responds without added latency or connection timeouts whenever one server fails to respond?

Check the proxy_next_upstream directive; it determines in which cases a request is passed on to the next server.
Your server block should look like this, for example:
server {
    location / {
        proxy_pass http://appcluster;
        proxy_next_upstream error timeout http_404;
    }
}
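
The long wait you describe usually comes from nginx's default proxy_connect_timeout of 60 seconds, which it spends on the dead server before giving up. A minimal sketch combining failover with a shorter connect timeout (the timeout values here are assumptions you should tune for your environment):
server {
    location / {
        proxy_pass http://appcluster;
        # pass the request to the next server on connection errors and timeouts
        proxy_next_upstream error timeout;
        # give up on an unreachable server quickly instead of waiting the default 60s
        proxy_connect_timeout 2s;
        # how long to wait for the application to produce a response
        proxy_read_timeout 30s;
    }
}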

Related

504:Gateway Timeout in Web App using Docker

I am deploying a web application. When I open the app URL it shows this error, and the logs display: "Waiting for response to warmup request for container". Please help me with this.
It is a Python-based app, and I tried using a Docker container in the Azure Portal.
Try increasing the timeout on your web server (e.g. nginx) and/or at the Python level, to allow your page to execute for longer.
You receive a 504 error when the web server waits too long for a page to be generated; it closes the HTTP request because it has reached its timeout.
Here are some nginx settings to increase the timeouts:
server {
    server_name _;
    root /var/www/html/public/;
    ###
    # Increase timeouts to avoid 504 gateway timeouts; values are in seconds.
    # Useful for big file uploads.
    ###
    client_max_body_size 0;
    proxy_send_timeout 3600;
    proxy_read_timeout 3600;
    fastcgi_send_timeout 3600;
    fastcgi_read_timeout 3600;
    # ... your config
}
I'm not a Python developer, but you may find some timeout settings at the Python level too (in PHP, for instance, max_execution_time defines the equivalent).

Cloudflare - No further requests possible during download

My site allows users to download big .zip files. The problem I'm dealing with right now is that while a user is downloading such a file, all other requests to the site wait until the download finishes or is cancelled, making the site practically unusable. In the Chrome network tab, the request shows as pending. Why could this be?
The server itself is implemented in Node.js using Express and is proxied through NGINX and then through Cloudflare. When I connect to the Express server or the NGINX proxy directly, the problem doesn't occur; from what I have observed, it only happens when traffic is routed through Cloudflare.
This is my NGINX config, if of any help:
server {
    listen 80;
    listen [::]:80;
    server_name marbleland.vani.ga;
    client_max_body_size 20m;
    location / {
        proxy_pass "http://localhost:20020/";
    }
}
Am I missing something obvious?

Nginx upstream servers all go down when one of them shuts down

I'm trying to set up upstream servers with nginx. All run the same Node.js app on port 8080 with pm2. Here is the nginx default.conf of the main server
upstream backend {
    ip_hash;
    server localhost:8080;
    server sv1_ip_address;
    server sv2_ip_address;
}
server {
    listen 443 ssl;
    location / {
        proxy_pass http://backend;
        ...
    }
    ...
}
And on sv1 and sv2, I have the same default.conf as follows
server {
    listen 80 default_server;
    location / {
        proxy_pass http://localhost:8080;
        ...
    }
}
Now when I tried shutting down either sv1 or sv2 (using pm2 kill for Node, or even a reboot), all upstream servers went down and I received a 500-range error when accessing the app by the domain name. So I thought there was something wrong with nginx on those secondary servers, and I replaced upstream backend with this:
upstream backend {
    ip_hash;
    server localhost:8080;
    server sv1_ip_address:8080;
    server sv2_ip_address:8080;
}
and now shutdowns and reboots were handled correctly (meaning nginx routes requests to one of the live servers). Is this expected behavior, or am I doing something wrong here? I don't think routing requests directly to port 8080 is a good idea, though.
I don't know why you had to install nginx on the sv1 and sv2 servers in the first place. Note that without an explicit port, server sv1_ip_address; points at port 80, i.e. at the nginx on those machines rather than at the Node app. When you kill Node there, that nginx still answers, just with a 502, and a valid HTTP response is not treated as a failure by default (proxy_next_upstream defaults to error timeout), so the main server does not try the next upstream. Connecting to :8080 directly instead gets a refused connection, which does count as an error, so failover works as you observed.
Also, after a reboot, check that nginx on the rebooted machine actually came back up (service nginx status).
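
If you do want to keep nginx on the secondary servers, a sketch (untested against this exact setup) that makes the main server fail over on their error responses as well:
upstream backend {
    ip_hash;
    server localhost:8080;
    server sv1_ip_address;
    server sv2_ip_address;
}
server {
    listen 443 ssl;
    location / {
        proxy_pass http://backend;
        # also try the next server when a secondary nginx answers
        # 502/504 because its local Node app is down
        proxy_next_upstream error timeout http_502 http_504;
    }
}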

NGINX load balancer routing to a different IP if a request fails

I have the following nginx configuration for my reverse-proxy load balancer.
upstream appserver {
    server 192.168.1.101:3800;
    server 192.168.1.102:3800;
    server 192.168.1.103:3800;
    server 192.168.1.104:3800;
}
server {
    location /api {
        proxy_pass http://appserver;
    }
}
If one of my node instances crashes or restarts in the middle of processing a request, the load balancer redirects the request to another IP in the upstream. I don't want this to happen: it should not redirect, but instead respond with a 500 or something similar.
If I understand your request correctly, I think you need proxy_next_upstream off;
http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_next_upstream
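
In context, a minimal sketch of where the directive goes (the upstream and location names are taken from your config):
server {
    location /api {
        proxy_pass http://appserver;
        # never retry a failed request on another upstream server;
        # the client gets the error from the server that was tried first
        proxy_next_upstream off;
    }
}
Note that since nginx 1.9.13, non-idempotent requests such as POST are not retried on another server by default anyway.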

502 Gateway Error on AWS - API

I am creating a test web app and have deployed it to an AWS Ubuntu server using nginx.
I am getting a 502 Bad Gateway error when it tries to reach my API.
I am new to this and have started with Node.js, and all seems to be working fine except when I perform an API call to MongoDB to read or write information. It works fine locally, so I am at a loss.
GET http://ec2-54-72-145-112.eu-west-1.compute.amazonaws.com/api/rest/golf 502 (Bad Gateway)
This is the nginx server config:
location /xxxxxxxxxxxxxxx {
    alias /home/ubuntu/xxxxxxxxxxxxxx/site/public;
}
location /api/ {
    proxy_pass http://127.0.x.1:8180/api/;
}
..
I know I may not be giving enough info but hopefully someone has an idea..
Thanks!
The HTTP 502 error indicates that nginx itself is working fine but cannot reach the configured proxy target. So I'd suggest you check whether the port and binding IP are correct.
You can check which ports are bound by which application using this command on your Ubuntu machine:
netstat -tulpen
You should see a line whose "Local Address" column shows 127.0.x.1:8180 in your case. If it's not there, try to figure out which port your Node application actually binds and reconfigure nginx to use that port.
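
Once you know the real address, point proxy_pass at it. A sketch, where 127.0.0.1:8180 stands in for whatever address netstat actually reports for your Node process:
location /api/ {
    # must match the IP and port your Node app binds to
    proxy_pass http://127.0.0.1:8180/api/;
}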
