Compressing assets with NGINX in reverse proxy mode - node.js

I'm using NGINX as a reverse proxy in front of a Node.js app. The basic proxy works perfectly fine and I'm able to compress assets on the Node server with compression middleware.
To test whether it's possible to delegate the compression task to NGINX, I've disabled the middleware and am now trying to gzip with NGINX using the following configuration:
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 300;

    server {
        listen 80;

        ## gzip config
        gzip on;
        gzip_min_length 1000;
        gzip_comp_level 5;
        gzip_proxied any;
        gzip_vary on;
        gzip_types text/plain
                   text/css
                   text/javascript
                   image/gif
                   image/png
                   image/jpeg
                   image/svg+xml
                   image/x-icon;

        location / {
            proxy_pass http://app:3000/;
            proxy_http_version 1.1;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_cache_bypass $http_upgrade;
        }
    }
}
With this configuration, NGINX doesn't compress the assets. I've tried declaring the gzip directives in the location context with different options, but none of them seems to do the trick.
I couldn't find relevant resources on this, so I'm questioning whether it can be done this way at all.
Important points:
1- Node and NGINX are in different containers, so I'm not serving the static assets with NGINX; I'm just proxying to the Node server, which serves these files. All I'm trying to achieve is offloading the Node server by having NGINX do the gzipping.
2- I'm testing all the responses with "Accept-Encoding: gzip" enabled.

Try adding the application/javascript content type:
gzip_types
    text/css
    text/javascript
    text/xml
    text/plain
    text/x-component
    application/javascript
    application/json
    application/xml
    application/rss+xml
    font/truetype
    font/opentype
    application/vnd.ms-fontobject
    image/svg+xml;
I took the values from the H5BP (HTML5 Boilerplate) Nginx config.
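For reference, a minimal sketch of a gzip block that works in the proxied setup above, with the missing type added. Two things are worth checking: gzip_types must exactly match the Content-Type headers the Node server actually sends (modern servers send JavaScript as application/javascript, which the original list was missing), and gzip_proxied must allow compressing proxied responses. Note also that image/gif, image/png and image/jpeg are already-compressed formats, so gzipping them mostly wastes CPU.

# Sketch only: adjust gzip_types to the Content-Type values your
# backend actually emits (inspect the response headers to confirm).
gzip on;
gzip_comp_level 5;
gzip_min_length 1000;
gzip_proxied any;     # also compress responses coming from proxy_pass
gzip_vary on;         # emit "Vary: Accept-Encoding" for caches
gzip_types text/plain
           text/css
           text/javascript
           application/javascript
           application/json
           image/svg+xml;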

Related

Node with NGINX reverse proxy seems to kill connection even when configured using long timeout

I have a Node app running behind an NGINX reverse proxy. Its job is to generate and download a large XLSX file, which takes about 80-120 seconds. It works locally without NGINX, but behind NGINX it seems to just hang and give me a timeout error.
I use MongoDB with Mongoose as the database in my Node app, and the endpoint queries the database to build the XLSX.
Here is a piece of NGINX configuration:
keepalive_timeout 70;
client_max_body_size 16m;

location / {
    gzip on;
    gzip_min_length 1100;
    gzip_buffers 4 32k;
    gzip_types text/css text/javascript text/xml text/plain text/x-component application/javascript application/x-javascript application/json application/xml application/rss+xml font/truetype application/x-font-ttf font/opentype application/vnd.ms-fontobject image/svg+xml;
    gzip_vary on;
    gzip_comp_level 6;
    proxy_pass http://indorelawan-80;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $http_connection;
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header X-Forwarded-Port $server_port;
    proxy_set_header X-Request-Start $msec;
    proxy_buffer_size 128k;
    proxy_buffers 4 256k;
    proxy_busy_buffers_size 256k;
    proxy_connect_timeout 600;
    proxy_send_timeout 600;
    proxy_read_timeout 600;
    send_timeout 600;
}
As you can see, it sets proxy_send_timeout and proxy_read_timeout to 600 seconds. When I try it locally (without NGINX), the XLSX downloads in about 83 seconds or so. But in production behind NGINX, it halts and returns a timeout. Is there any way to fix this?
Never mind, I moved to using queues like BullMQ instead.
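If moving to a queue isn't an option, it's worth noting that send_timeout and the proxy_* timeouts only govern NGINX itself; any load balancer sitting in front of NGINX has its own idle timeout that may also need raising. Below is a sketch that scopes the long timeouts to just the long-running route instead of the whole site; /download is a hypothetical path standing in for the real XLSX endpoint.

# Sketch: long timeouts only where they are needed. /download is an
# assumed path, not taken from the original question.
location /download {
    proxy_pass http://indorelawan-80;
    proxy_http_version 1.1;
    proxy_read_timeout 600s;   # wait up to 10 minutes for the upstream response
    proxy_send_timeout 600s;
    send_timeout 600s;
    proxy_buffering off;       # stream bytes to the client as they arrive
}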

NGINX not routing between Node.js back-end and React front-end

I have deployed a web app with a Node.js back-end and a React front-end on AWS Elastic Beanstalk, using the default NGINX configuration:
upstream nodejs {
    server 127.0.0.1:8081;
    keepalive 256;
}

server {
    listen 8080;

    location / {
        proxy_pass http://nodejs;
        proxy_set_header Connection "";
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    gzip on;
    gzip_comp_level 4;
    gzip_types text/plain text/css application/json application/javascript application/x-javascript text/xml application/xml application/xml+rss text/javascript;
}
My back-end runs on port 8081 (with Express.js) and doesn't receive any of the calls made by the front-end, i.e. fetch("http:127.0.0.1/api/volatility").
In the console I see GET https://foo-bar.us-east-1.elasticbeanstalk.com:8080/api/volatility net::ERR_CONNECTION_TIMED_OUT.
Any way to fix this?
It turned out my Elastic Beanstalk service didn't have permission to read/write the database.
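Independently of the accepted cause, note that fetch("http:127.0.0.1/api/volatility") issued from the browser targets the visitor's own machine, not the server; a relative URL such as /api/volatility goes through NGINX instead. A common layout for this kind of split is to serve the React build directly and proxy only /api to Node; a sketch, with hypothetical paths:

# Sketch only: the root path is an assumption, not an Elastic Beanstalk
# default; the nodejs upstream is the one defined above.
server {
    listen 8080;
    root /var/app/current/build;        # React production build (assumed path)

    location /api/ {
        proxy_pass http://nodejs;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }

    location / {
        try_files $uri /index.html;     # client-side routing fallback
    }
}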

Browser stalls request when serving static files with Nginx

I have a situation where, when I open the page for the first time, everything works fast; if I reload the page soon after, everything still works OK; but if I reload the page a third time, the browser stalls the request for about 25 seconds. Sometimes more, sometimes less. Sometimes it's the request for the root, sometimes for some static file. If I wait some time and refresh again, everything opens fast again until about the 2nd or 3rd refresh of the webpage.
What could this be? If I use Nginx but serve static files with Node, I don't have this kind of problem.
daemon off;
worker_processes <%= ENV['NGINX_WORKERS'] || 4 %>;
worker_rlimit_nofile 10000;

events {
    use epoll;
    accept_mutex on;
    multi_accept on;
    worker_connections 1024;
}

error_log logs/nginx/error.log;

http {
    charset utf-8;
    include mime.types;
    default_type application/octet-stream;
    access_log off;
    log_format l2met 'measure#nginx.service=$request_time request_id=$http_x_request_id';
    access_log logs/nginx/access.log l2met;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    types_hash_max_size 2048;
    open_file_cache max=1000 inactive=20s;
    open_file_cache_valid 30s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;
    limit_conn_zone $binary_remote_addr zone=conn_limit_per_ip:10m;
    limit_req_zone $binary_remote_addr zone=req_limit_per_ip:10m rate=5r/s;
    client_body_buffer_size 16k;
    client_header_buffer_size 1k;
    client_max_body_size 8m;
    large_client_header_buffers 2 1k;

    # - Configure timeouts
    reset_timedout_connection on;
    client_body_timeout 12;
    client_header_timeout 12;
    keepalive_timeout 15;
    keepalive_requests 100;
    send_timeout 10;
    server_tokens off;

    # - Dynamic gzip compression
    gzip on;
    gzip_http_version 1.1;
    gzip_disable "msie6";
    gzip_vary on;
    gzip_min_length 20;
    gzip_buffers 4 16k;
    gzip_comp_level 4;
    gzip_proxied any;

    # Turn on gzip for all content types that should benefit from it.
    gzip_types application/ecmascript;
    gzip_types application/javascript;
    gzip_types application/json;
    gzip_types application/pdf;
    gzip_types application/postscript;
    gzip_types application/x-javascript;
    gzip_types image/svg+xml;
    gzip_types text/css;
    gzip_types text/csv;
    gzip_types text/javascript;
    gzip_types text/plain;
    gzip_types text/xml;
    gzip_types text/json;

    # Proxying requests to other servers
    upstream nodebeats {
        server unix:/tmp/nginx.socket fail_timeout=0;
    }

    server {
        listen <%= ENV['PORT'] %>;
        server_name _;
        root "/app/";
        limit_conn conn_limit_per_ip 5;
        limit_req zone=req_limit_per_ip burst=10 nodelay;

        location ~* \.(js|css|jpg|png|ico|json|xml|svg)$ {
            root "/app/src/dist/";
            add_header Pragma public;
            add_header Cache-Control public;
            expires 1y;
            gzip_static on;
            gzip off;
            log_not_found off;
            access_log off;
        }

        location / {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_set_header Connection "";
            proxy_redirect off;
            proxy_http_version 1.1;
            proxy_pass http://nodebeats;
            add_header Cache-Control no-cache;
            proxy_read_timeout 60s;
        }
    }
}
OK, after inspecting I saw that this is happening because of the limit in Chrome (and other browsers) of 6 simultaneous TCP connections per server. When I look into chrome://net-internals/#sockets I see this. The problem is ssl_socket_pool and 6 active connections. They take a long time to go from active to idle (and then the page continues to load). How do I fix that?
I tried opening some other pages that have much more static content and many more HTTP requests than mine (which has 8), and they always reload fast. I looked at the same place and saw that there is nothing in Active; all the connections are immediately idle after a page reload.
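One commonly suggested mitigation for the per-origin limit of six concurrent HTTP/1.1 connections is HTTP/2, which multiplexes all requests over a single connection, so the browser never queues behind the connection cap. A sketch, assuming TLS certificates are already provisioned (paths are placeholders):

# Sketch: enable HTTP/2 on the TLS listener. Certificate paths are
# placeholders, not taken from the original config.
server {
    listen 443 ssl http2;
    ssl_certificate     /etc/ssl/certs/example.crt;
    ssl_certificate_key /etc/ssl/private/example.key;
    # ... rest of the server block as above ...
}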

NodeBB horribly slow over Nginx reverse proxy

I'm running NodeBB behind an Nginx reverse proxy, and I'm occasionally experiencing load times over 10 seconds, otherwise an average 2-second load time (still way too much). It should also be noted that the total load time is about 200ms when I access the forum directly on the port NodeBB is running on, but I shouldn't need to do that.
I cannot for the life of me figure out why this reverse proxy is as slow as it is.
If you want to figure out what parts are loading slowly, feel free to inspect the network traffic on the NodeBB install.
All suggestions are welcome and appreciated!
Here's my Nginx server:
server {
    listen 80;
    server_name forums.hydroxium.com;

    location / {
        proxy_http_version 1.1;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_redirect off;
        proxy_pass http://127.0.0.1:4567;
    }
}
And here's my Nginx config:
user www-data;
worker_processes 4;
worker_rlimit_nofile 20480;
pid /run/nginx.pid;

events {
    worker_connections 5120;
    multi_accept on;
    use epoll;
}

http {
    ##
    # Basic Settings
    ##
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    charset utf-8;
    client_body_timeout 65;
    client_header_timeout 65;
    client_max_body_size 10m;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # Logging Settings
    ##
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##
    gzip on;
    gzip_disable "msie6";
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_buffers 16 8k;
    gzip_http_version 1.1;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
}
Turns out the VPS it was running on was the problem; it simply didn't have enough power to run everything at the same time without being slow.

setup nginx to use another gateway in case of 504 error

I got the following nginx config:
server {
    listen 80;
    server_name domainName.com;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/rss+xml text/javascript image/svg+xml application/vnd.ms-fontobject application/x-font-ttf font/opentype;
    access_log /var/log/nginx/logName.access.log;

    location / {
        proxy_pass http://127.0.0.1:9000/;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location ~ ^/(min/|images/|ckeditor/|img/|javascripts/|apple-touch-icon-ipad.png|apple-touch-icon-ipad3.png|apple-touch-icon-iphone.png|apple-touch-icon-iphone4.png|generated/|js/|css/|stylesheets/|robots.txt|humans.txt|favicon.ico) {
        root /root/Dropbox/nodeApps/nodeJsProject/port/public;
        access_log off;
        expires max;
    }
}
It is a proxy for a Node.js application on port 9000.
Is it possible to change this config so that NGINX uses another proxy URL (on port 9001, for example) when it gets a 504 error?
I need this for the case when the Node.js server on port 9000 is down and needs several seconds to restart automatically; during those seconds NGINX returns a 504 error for every request. I want NGINX to notice that the Node.js site on port 9000 is down and use a reserve Node.js site on port 9001.
Use the upstream module:
upstream node {
    server 127.0.0.1:9000;
    server 127.0.0.1:9001 backup;
}

server {
    ...
    proxy_pass http://node/;
    ...
}
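By default NGINX only passes a request on to the backup server on connection errors and timeouts, so a 504 generated by the primary upstream itself would still reach the client. The proxy_next_upstream directive can widen the retry conditions; a sketch:

# Sketch: also retry the backup on an explicit 504 from the primary,
# in addition to the default connection-error and timeout conditions.
server {
    ...
    location / {
        proxy_pass http://node/;
        proxy_next_upstream error timeout http_504;
    }
    ...
}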
