setup nginx to use another gateway in case of 504 error - node.js

I have the following nginx config:
server {
    listen 80;
    server_name domainName.com;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/rss+xml text/javascript image/svg+xml application/vnd.ms-fontobject application/x-font-ttf font/opentype;
    access_log /var/log/nginx/logName.access.log;

    location / {
        proxy_pass http://127.0.0.1:9000/;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location ~ ^/(min/|images/|ckeditor/|img/|javascripts/|apple-touch-icon-ipad.png|apple-touch-icon-ipad3.png|apple-touch-icon-iphone.png|apple-touch-icon-iphone4.png|generated/|js/|css/|stylesheets/|robots.txt|humans.txt|favicon.ico) {
        root /root/Dropbox/nodeApps/nodeJsProject/port/public;
        access_log off;
        expires max;
    }
}
It is a proxy for a Node.js application on port 9000.
Is it possible to change this config so that nginx uses another proxy URL (on port 9001, for example) when it gets a 504 error?
I need this for the case when the Node.js server on port 9000 is down and needs several seconds to restart automatically; during those seconds nginx returns a 504 for every request. I want nginx to "guess" that the Node.js site on port 9000 is down and use a reserve Node.js site on port 9001.

Use the upstream module.
upstream node {
    server 127.0.0.1:9000;
    server 127.0.0.1:9001 backup;
}

server {
    ...
    proxy_pass http://node/;
    ...
}
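With the second server marked as backup, nginx only sends traffic to port 9001 when it considers port 9000 unavailable; connection errors and timeouts already trigger a retry on the next server by default, and proxy_next_upstream can broaden that to upstream 502/504 responses. A minimal sketch of how this could slot into the config above (the http_502/http_504 values are my assumption, not something the answer above specifies):

upstream node {
    server 127.0.0.1:9000;
    server 127.0.0.1:9001 backup;
}

server {
    listen 80;
    server_name domainName.com;

    location / {
        proxy_pass http://node/;
        # error and timeout are the defaults; http_502/http_504 additionally
        # fail over when the primary answers with a bad-gateway/gateway-timeout status
        proxy_next_upstream error timeout http_502 http_504;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}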

Related

Why is my NGINX to pm2 upstream slow when restarting?

I run a home server with nginx reverse proxied to a Node.js/PM2 upstream. Normally it works perfectly. However, when I want to make changes, I run pm2 reload pname or pm2 restart pname, which results in nginx throwing 502 Bad Gateway for about 10-20 seconds before it finds the new upstream.
My Node.js app starts very fast and I am 99% sure it is not actually taking that long for the upstream to start and bind to the port (when I don't use the nginx layer it is accessible instantly). How can I eliminate the extra time it takes for nginx to figure things out?
From nginx/error.log:
2021/01/29 17:50:35 [error] 18462#0: *85 no live upstreams while connecting to upstream, client: [ip], server: hostname.com, request: "GET /path HTTP/1.1", upstream: "http://localhost/path", host: "www.hostname.com"
From my nginx domain config:
server {
    listen 80;
    server_name hostname.com www.hostname.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name hostname.com www.hostname.com;

    # ...removed ssl stuff...

    gzip_types text/plain text/css text/xml application/json application/javascript application/xml+rss application/atom+xml image/svg+xml;
    gzip_proxied no-cache no-store private expired auth;
    gzip_min_length 1000;

    location / {
        proxy_pass http://localhost:3010;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_read_timeout 240s;
    }
}
This is caused by the default behavior of an upstream. That may not be obvious, since you're not explicitly declaring your upstream with the upstream directive; written with one, your configuration would look like this:
upstream backend {
    server localhost:3010;
}

...

server {
    listen 443 ssl;
    ...

    location / {
        proxy_pass http://backend;
        ...
    }
}
In this form it's apparent that you're relying on the default options of the server directive. The server directive has many options, but two of them matter here: max_fails and fail_timeout. They control failure states and how nginx should handle them. By default max_fails=1 and fail_timeout=10 seconds, which means that after one unsuccessful attempt to communicate with the upstream, nginx will wait 10 seconds before attempting again.
To avoid this in your environment you could simply disable this mechanism by setting max_fails=0:
upstream backend {
    server localhost:3010 max_fails=0;
}
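If you'd rather keep nginx's failure detection but shrink the window during which it refuses to retry the upstream, an alternative sketch (my own tweak, not part of the answer above) is to leave max_fails alone and lower fail_timeout:

upstream backend {
    # assumption: still count failures, but only mark the server as unavailable
    # for 2 seconds instead of the default 10
    server localhost:3010 max_fails=1 fail_timeout=2s;
}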

NGINX not routing between Node.js back-end and React front-end

I have deployed a web app with a Node.js back-end and React front-end on AWS Elastic Beanstalk, using the NGINX default configuration.
upstream nodejs {
    server 127.0.0.1:8081;
    keepalive 256;
}

server {
    listen 8080;

    location / {
        proxy_pass http://nodejs;
        proxy_set_header Connection "";
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    gzip on;
    gzip_comp_level 4;
    gzip_types text/plain text/css application/json application/javascript application/x-javascript text/xml application/xml application/xml+rss text/javascript;
}
My back-end runs on port 8081 (with Express.js) and doesn't receive any of the calls made by the front-end, i.e. fetch("http:127.0.0.1/api/volatility").
In the console I see GET https://foo-bar.us-east-1.elasticbeanstalk.com:8080/api/volatility net::ERR_CONNECTION_TIMED_OUT.
Any way to fix this?
My Elastic Beanstalk service didn't have permission to read/write in the database.

Compressing assets with NGINX in reverse proxy mode

I'm using NGINX as a reverse proxy in front of a Node.js app. The basic proxy works perfectly fine and I'm able to compress assets on the Node server with compression middleware.
To test whether it's possible to delegate the compression task to NGINX, I've disabled the middleware and am now trying to gzip with NGINX using the following configuration:
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 300;

    server {
        listen 80;

        ## gzip config
        gzip on;
        gzip_min_length 1000;
        gzip_comp_level 5;
        gzip_proxied any;
        gzip_vary on;
        gzip_types text/plain
                   text/css
                   text/javascript
                   image/gif
                   image/png
                   image/jpeg
                   image/svg+xml
                   image/x-icon;

        location / {
            proxy_pass http://app:3000/;
            proxy_http_version 1.1;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_cache_bypass $http_upgrade;
        }
    }
}
With this configuration, NGINX doesn't compress the assets. I've tried declaring these directives in the location context with different options, but none of them seems to do the trick.
I couldn't find relevant resources on this, so I'm wondering whether it can be done this way at all.
Important points:
1- Node and NGINX are in different containers, so I'm not serving the static assets with NGINX. I'm just proxying to the Node server, which serves these files. All I'm trying to achieve is to offload the Node server by having NGINX do the gzipping.
2- I'm testing all the responses with "Accept-Encoding: gzip" enabled.
Try to add the application/javascript content type:
gzip_types
    text/css
    text/javascript
    text/xml
    text/plain
    text/x-component
    application/javascript
    application/json
    application/xml
    application/rss+xml
    font/truetype
    font/opentype
    application/vnd.ms-fontobject
    image/svg+xml;
I took the values from the H5BP config.
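For context on why this helps: nginx only compresses responses whose Content-Type matches an entry in gzip_types, and a Node/Express upstream commonly labels its JS assets application/javascript rather than text/javascript (the exact type depends on the mime database in use), so the original list above skips them. A minimal sketch of the directives that matter in this proxied setup (the type list is an assumption to adapt to what your upstream actually sends):

# sketch: http- or server-level gzip settings for a proxied Node upstream
gzip on;
gzip_proxied any;        # also compress responses received from proxy_pass upstreams
gzip_vary on;
gzip_min_length 1000;
# list the types the upstream actually sends; application/javascript is the key addition
gzip_types text/plain text/css application/javascript application/json image/svg+xml;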

AWS EB - Redirect all traffic to https

My Node.js app is deployed on AWS EB. I already configured the HTTPS server and it is working fine. Now I need to redirect every non-HTTPS request to HTTPS with www. as a prefix, like this:
GET example.com => https://www.example.com
I'm using nginx, and my EB instance is a single instance without a load balancer in front of it.
I have created a config file in the .ebextensions folder with this code:
Resources:
  sslSecurityGroupIngress:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: {"Fn::GetAtt" : ["AWSEBSecurityGroup", "GroupId"]}
      IpProtocol: tcp
      ToPort: 443
      FromPort: 443
      CidrIp: 0.0.0.0/0
files:
  /etc/nginx/conf.d/999_nginx.conf:
    mode: "000644"
    owner: root
    group: root
    content: |
      upstream nodejsserver {
          server 127.0.0.1:8081;
          keepalive 256;
      }
      # HTTP server
      server {
          listen 8080;
          server_name localhost;
          return 301 https://$host$request_uri;
      }
      # HTTPS server
      server {
          listen 443;
          server_name localhost;
          ssl on;
          ssl_certificate /etc/pki/tls/certs/server.crt;
          ssl_certificate_key /etc/pki/tls/certs/server.key;
          ssl_session_timeout 5m;
          ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
          ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
          ssl_prefer_server_ciphers on;
          location / {
              proxy_pass http://nodejsserver;
              proxy_set_header Connection "";
              proxy_http_version 1.1;
              proxy_set_header Host $host;
              proxy_set_header X-Real-IP $remote_addr;
              proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
              proxy_set_header X-Forwarded-Proto https;
          }
      }
  /etc/pki/tls/certs/server.crt:
    mode: "000400"
    owner: root
    group: root
    content: |
      -----BEGIN CERTIFICATE-----
      my crt
      -----END CERTIFICATE-----
  /etc/pki/tls/certs/server.key:
    mode: "000400"
    owner: root
    group: root
    content: |
      -----BEGIN RSA PRIVATE KEY-----
      my key
      -----END RSA PRIVATE KEY-----
  /etc/nginx/conf.d/gzip.conf:
    content: |
      gzip on;
      gzip_comp_level 9;
      gzip_http_version 1.0;
      gzip_types text/plain text/css image/png image/gif image/jpeg application/json application/javascript application/x-javascript text/javascript text/xml application/xml application/rss+xml application/atom+xml application/rdf+xml;
      gzip_proxied any;
      gzip_disable "msie6";
commands:
  00_enable_site:
    command: 'rm -f /etc/nginx/sites-enabled/*'
I'm sure AWS is taking my config into account because the SSL is working fine, but the HTTP block does not work: there is no redirect.
Maybe my problem is about rewriting the original nginx config of EB; do you know how to achieve this?
Can you help me with that, please? I've tried a lot of things.
Thank you
OK, found the issue: EB creates a default config file, /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf, which listens on 8080. So your redirect isn't being picked up, as nginx is using the earlier-defined rule for 8080.
Here's a config file that I use that works. The file it generates will precede the default rule.
https://github.com/jozzhart/beanstalk-single-forced-ssl-nodejs-pm2/blob/master/.ebextensions/https-redirect.config
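The linked config isn't reproduced here, but the idea can be sketched: if the redirect lives in a conf.d file whose name sorts before 00_elastic_beanstalk_proxy.conf, its port 8080 server block is defined first and, assuming the Beanstalk block isn't marked default_server, it is the one nginx picks. Hypothetical contents of such a file (the file name and details are my assumptions, not taken from the gist):

# sketch: /etc/nginx/conf.d/000_redirect.conf, written via .ebextensions so that
# it sorts before 00_elastic_beanstalk_proxy.conf and is read first
server {
    listen 8080;
    server_name localhost;
    return 301 https://$host$request_uri;
}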

Load balance nodejs app for zero downtime deploy with nginx serving different static assets

I have 2 directories, www-1 and www-2, and both contain the same Node.js app, which has some Jade views, some endpoints, etc.
I also have 2 upstart scripts that run:
PORT=5000 node www-1/app.js
PORT=5001 node www-2/app.js
Now, I have the following nginx configuration to load balance incoming traffic to one or the other:
upstream backend {
    server 127.0.0.1:5000 fail_timeout=0;
    server 127.0.0.1:5001 fail_timeout=0;
}

server {
    listen 80 default_server;
    listen [::]:80 default_server;

    gzip on;
    gzip_types text/plain application/xml text/css application/x-javascript text/javascript application/javascript image/x-icon image/jpeg;
    gzip_vary on;

    charset UTF-8;
    index index.html index.htm;
    server_name myserver.com;

    location / {
        proxy_pass http://backend;
        proxy_next_upstream error timeout invalid_header http_500;
        proxy_connect_timeout 2;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_redirect off;
        proxy_buffering off;
    }

    location ~* \.(ico|css|js|gif|jpe?g|png|svg|woff2?|ttf|eot)$ {
        expires 168h;
        add_header Pragma public;
        add_header Cache-Control "public, must-revalidate, proxy-revalidate";
        root /home/user/myserver.com/www-1/public;
    }
}
https://gist.github.com/dciccale/2331d2e0a1a6e76e05bd
This works. However, as you can see in line 34 of the gist (the static assets location), I am also serving all static files from nginx, and I want some way to specify the root as www-1 or www-2 depending on which server is up.
Let me explain why:
On the server I have a git repo which I normally git pull whenever I make a change. Then I build the new code with gulp dist, which generates a dist directory, and then run rm -rf www-1 && cp -r dist www-1. While that last command executes, there may be some downtime in www-1 (the app running on port 5000), for example a Jade view returning a 500 because the file isn't found, so nginx will balance over to port 5001. That works, but nginx would also fail to serve static files from www-1 while those assets are being replaced. After this I restart the first upstart script to re-run the newly deployed app, and then I repeat the same process for www-2.
So that is my question: how to make that root for static assets dynamic. Or, if there is a better way of handling this, I would appreciate some help.
EDIT: A second configuration
I set up another configuration which works, although for a few seconds one user could see the updated content while another could still see the old content; that's the best I could get.
I did this by creating 2 new server blocks that listen on ports 3000 and 3001 and proxy_pass to 5000 and 5001 respectively; each of these server blocks has a route for static assets, one pointing to www-1 and the other to www-2.
I also needed to add http_502 and http_404 to the proxy_next_upstream directive to make sure that all failing requests get load balanced (like a 404 for an image that is being replaced).
https://gist.github.com/dciccale/2331d2e0a1a6e76e05bd#file-my-nginx-2-conf
You could consider putting a version id in the URL for static content and simply leaving the previous version alive for as long as you're refreshing your back ends.
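Building on that suggestion, here is a minimal sketch (my own illustration, reusing the paths from the question) of a static location that serves each asset from whichever release directory currently has it, so version-stamped files from the old and new builds can coexist during a deploy:

location ~* \.(ico|css|js|gif|jpe?g|png|svg|woff2?|ttf|eot)$ {
    root /home/user/myserver.com;
    # check both release directories; with versioned filenames the copy that is
    # mid-replacement simply won't match and the other one is served instead
    try_files /www-1/public$uri /www-2/public$uri =404;
    expires 168h;
    add_header Cache-Control "public, must-revalidate, proxy-revalidate";
}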
