NodeBB horribly slow over Nginx reverse proxy - node.js

I'm running NodeBB behind an Nginx reverse proxy, and I'm occasionally seeing load times over 10 seconds, with an average of around 2 seconds otherwise (still far too much). It's also worth noting that the total load time is about 200ms when I access the forum directly on the port NodeBB is running on, but I shouldn't need to do that.
I cannot for the life of me figure out why this reverse proxy is as slow as it is.
If you want to figure out what parts are loading slowly, feel free to inspect the network traffic on the NodeBB install.
All suggestions are welcome and appreciated!
Here's my Nginx server:
server {
    listen 80;
    server_name forums.hydroxium.com;

    location / {
        proxy_http_version 1.1;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_redirect off;
        proxy_pass http://127.0.0.1:4567;
    }
}
And here's my Nginx config:
user www-data;
worker_processes 4;
worker_rlimit_nofile 20480;
pid /run/nginx.pid;

events {
    worker_connections 5120;
    multi_accept on;
    use epoll;
}

http {
    ##
    # Basic Settings
    ##
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    charset utf-8;
    client_body_timeout 65;
    client_header_timeout 65;
    client_max_body_size 10m;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # Logging Settings
    ##
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##
    gzip on;
    gzip_disable "msie6";
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_buffers 16 8k;
    gzip_http_version 1.1;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
}

Turns out the VPS it was running on was the problem: it simply didn't have enough power to run everything at the same time without slowing down.
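For what it's worth, one way to cut proxy overhead regardless of hardware is to let Nginx serve NodeBB's built static assets from disk instead of proxying every request to Node. A minimal sketch, assuming the assets live under /path/to/nodebb/build/public (the path is install- and version-specific, so treat it as a placeholder):
# serve built client assets directly; fall back to NodeBB for anything
# Nginx can't find on disk (asset path below is a placeholder)
location /assets/ {
    alias /path/to/nodebb/build/public/;
    try_files $uri @nodebb;
}

location @nodebb {
    proxy_pass http://127.0.0.1:4567;
}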

Related

Why does my website pause at "Redirecting" after calling the API many times?

I use Nginx as my reverse proxy to forward requests to websites and an API. But if I call the API many times, the website gets stuck on a "Redirecting" page and I have to click the URL manually.
Here is my Nginx configuration (I have hidden the SSL configuration):
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 768;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;
    gzip on;

    server {
        listen 80;
        server_name alpha.hunghingprinting.com;
        rewrite ^(.*) https://$host$1 permanent;
    }

    server {
        listen 443;
        # set proper server name after domain set
        server_name alpha.hunghingprinting.com;

        # Add Headers for odoo proxy mode
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        add_header X-Frame-Options "SAMEORIGIN";
        add_header X-XSS-Protection "1; mode=block";
        proxy_set_header X-Client-IP $remote_addr;
        proxy_set_header HTTP_X_FORWARDED_HOST $remote_addr;

        # SSL parameters
        ssl on;
        ssl_prefer_server_ciphers on;

        # odoo log files
        access_log /var/log/nginx/odoo14-access.log;
        error_log /var/log/nginx/odoo14-error.log;

        # increase proxy buffer size
        proxy_buffers 16 64k;
        proxy_buffer_size 128k;
        proxy_read_timeout 900s;
        proxy_connect_timeout 900s;
        proxy_send_timeout 900s;

        # force timeouts if the backend dies
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503;

        types {
            text/less less;
            text/scss scss;
        }

        # enable data compression
        gzip on;
        gzip_min_length 1100;
        gzip_buffers 4 32k;
        gzip_types text/css text/less text/plain text/xml application/xml application/json application/javascript application/pdf image/jpeg image/png;
        gzip_vary on;
        client_header_buffer_size 4k;
        large_client_header_buffers 4 64k;
        client_max_body_size 0;

        location / {
            proxy_pass http://127.0.0.1:8069;
            # by default, do not forward anything
            proxy_redirect off;
        }

        location /longpolling {
            proxy_pass http://127.0.0.1:8072;
            #proxy_pass http://odoochat;
        }

        location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
            expires 2d;
            proxy_pass http://127.0.0.1:8069;
            add_header Cache-Control "public, no-transform";
        }

        # cache some static data in memory for 60mins.
        location ~ /[a-zA-Z0-9_-]*/static/ {
            proxy_cache_valid 200 302 60m;
            proxy_cache_valid 404 1m;
            proxy_buffering on;
            expires 864000;
            proxy_pass http://127.0.0.1:8069;
        }
    }
}
And if I don't call the API too many times, things work normally.
If you want Nginx to rewrite the URL directly, you can remove this line:
proxy_redirect off;
Please check the documentation: https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_redirect
Otherwise, it's an issue with your browser, not Nginx.
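To make that concrete: with proxy_redirect off removed, the directive falls back to proxy_redirect default, which rewrites Location and Refresh response headers that point at the upstream back to the public URL. A minimal sketch (purely illustrative):
location / {
    proxy_pass http://127.0.0.1:8069;
    # same as omitting the directive entirely: rewrites
    # "Location: http://127.0.0.1:8069/..." to this server's address
    proxy_redirect default;
    # or spell the mapping out explicitly:
    # proxy_redirect http://127.0.0.1:8069/ https://alpha.hunghingprinting.com/;
}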

Nginx reverse proxy not working on domain name

I have tried all the solutions on SO, but with no success. I want to use Nginx as a reverse proxy for a Node.js app. With my current configuration I was able to make it work when connecting through the server IP, but not when using its domain name. My configuration details: pastebin.com/gMqpmDwj
http://ipaddress:3000 works but http://example.com doesn't.
Here is the configuration of my Nginx proxy, stored in /etc/nginx/conf.d/domain.conf.
server {
    listen 80;
    server_name domain_name;

    location / {
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass http://ipaddress:3000;
    }
}
When I try to access it, it works fine on ip:port, but on domain:port or without the port it doesn't.
Try this configuration:
/etc/nginx/nginx.conf
user nobody;
worker_processes 1;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
    multi_accept on;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 15;
    types_hash_max_size 2048;
    client_max_body_size 8M;
    server_tokens off;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    access_log off;
    error_log /var/log/nginx/error.log crit;
    gzip on;
    gzip_min_length 100;
    gzip_http_version 1.1;
    gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
    include /etc/nginx/cloudflare.inc;
    include /etc/nginx/conf.d/*.conf;
}
/etc/nginx/conf.d/domain.conf
upstream nodejs_app {
    server <ipaddress>:3000;
    keepalive 8;
}

server {
    listen 80;
    listen [::]:80;
    server_name <domain_name>;

    location / {
        # websocket support
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://nodejs_app/;
        proxy_redirect off;
    }
}
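One caveat on the sketch above, since it hard-codes Connection "upgrade": that header value disables upstream keepalive for ordinary HTTP requests. The nginx WebSocket-proxying docs derive the header from $http_upgrade with a map in the http block, so plain requests don't claim an upgrade:
map $http_upgrade $connection_upgrade {
    # send "upgrade" only when the client requested a WebSocket upgrade
    default upgrade;
    ''      close;
}
and then use proxy_set_header Connection $connection_upgrade; in the location block.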
I solved my issue after following this link. I had multiple configuration files active, which was causing the problem.
How to Configure Nginx Reverse Proxy for Nodejs on Centos
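For anyone landing here with the same symptom: when both include /etc/nginx/conf.d/*.conf and sites-enabled are active, a leftover default vhost (e.g. the stock sites-enabled/default) can win the server-name match and shadow your proxy. A hedged sketch of one clean catch-all, so unmatched Host headers can't fall through to the wrong block:
server {
    # exactly one default_server per port; refuse requests whose Host
    # matches no configured server_name (444 closes without replying)
    listen 80 default_server;
    server_name _;
    return 444;
}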

Browser stalls request when serving static files with Nginx

I have a situation where, when I open the page for the first time, everything works fast; if I reload the page soon after, everything works OK; but if I reload the page a third time, the browser stalls the request for about 25 seconds. Sometimes more, sometimes less. Sometimes it's a request for the root, sometimes for some static file. If I wait some time and refresh again, everything opens fast again until about the second or third refresh of the webpage.
What could this be? If I use Nginx but serve the static files with Node, I don't have that kind of problem.
daemon off;
worker_processes <%= ENV['NGINX_WORKERS'] || 4 %>;
worker_rlimit_nofile 10000;

events {
    use epoll;
    accept_mutex on;
    multi_accept on;
    worker_connections 1024;
}

error_log logs/nginx/error.log;

http {
    charset utf-8;
    include mime.types;
    default_type application/octet-stream;

    access_log off;
    log_format l2met 'measure#nginx.service=$request_time request_id=$http_x_request_id';
    access_log logs/nginx/access.log l2met;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    types_hash_max_size 2048;

    open_file_cache max=1000 inactive=20s;
    open_file_cache_valid 30s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;

    limit_conn_zone $binary_remote_addr zone=conn_limit_per_ip:10m;
    limit_req_zone $binary_remote_addr zone=req_limit_per_ip:10m rate=5r/s;

    client_body_buffer_size 16k;
    client_header_buffer_size 1k;
    client_max_body_size 8m;
    large_client_header_buffers 2 1k;

    # # - Configure Timeouts
    reset_timedout_connection on;
    client_body_timeout 12;
    client_header_timeout 12;
    keepalive_timeout 15;
    keepalive_requests 100;
    send_timeout 10;
    server_tokens off;

    # # - Dynamic gzip compression
    gzip on;
    gzip_http_version 1.1;
    gzip_disable "msie6";
    gzip_vary on;
    gzip_min_length 20;
    gzip_buffers 4 16k;
    gzip_comp_level 4;
    gzip_proxied any;
    # Turn on gzip for all content types that should benefit from it.
    gzip_types application/ecmascript;
    gzip_types application/javascript;
    gzip_types application/json;
    gzip_types application/pdf;
    gzip_types application/postscript;
    gzip_types application/x-javascript;
    gzip_types image/svg+xml;
    gzip_types text/css;
    gzip_types text/csv;
    gzip_types text/javascript;
    gzip_types text/plain;
    gzip_types text/xml;
    gzip_types text/json;

    # proxying requests to other servers
    upstream nodebeats {
        server unix:/tmp/nginx.socket fail_timeout=0;
    }

    server {
        listen <%= ENV['PORT'] %>;
        server_name _;
        root "/app/";

        limit_conn conn_limit_per_ip 5;
        limit_req zone=req_limit_per_ip burst=10 nodelay;

        location ~* \.(js|css|jpg|png|ico|json|xml|svg)$ {
            root "/app/src/dist/";
            add_header Pragma public;
            add_header Cache-Control public;
            expires 1y;
            gzip_static on;
            gzip off;
            log_not_found off;
            access_log off;
        }

        location / {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_set_header Connection "";
            proxy_redirect off;
            proxy_http_version 1.1;
            proxy_pass http://nodebeats;
            add_header Cache-Control no-cache;
            proxy_read_timeout 60s;
        }
    }
}
OK, after inspecting I saw that this is happening because of a limit in Chrome (and other browsers) of 6 simultaneous TCP connections per server. I can see it when I look into chrome://net-internals/#sockets. The problem is the ssl_socket_pool and its 6 active connections: they take a long time to go from active to idle (and only then does the page continue to load). How do I fix that?
I tried opening some other pages that have much more static content and many more HTTP requests than mine (which has 8), and they always reload fast. I looked at the same place and saw that there is nothing in Active; all their connections go idle immediately after a page reload.
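For completeness, the workaround most often suggested for the 6-connections-per-host cap, assuming the site is already behind TLS (which the ssl_socket_pool suggests it is): enable HTTP/2, which multiplexes all requests over a single connection so the browser never queues on sockets. A sketch, with illustrative names and certificate paths:
server {
    # one HTTP/2 connection carries all parallel requests,
    # sidestepping the browser's per-host socket limit
    listen 443 ssl http2;
    server_name example.com;                     # illustrative
    ssl_certificate     /etc/ssl/example.crt;    # illustrative
    ssl_certificate_key /etc/ssl/example.key;    # illustrative
    # ...existing locations unchanged
}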

Nginx config with Node.js doesn't work with file-upload POST request?

I am trying to configure Nginx with Node.js (the Sails.js framework).
Nginx listens for requests on port 80 and passes them to 8080. All the requests work fine (they are all POSTs), except the file-upload POST request.
Below is my Nginx config file:
events {
    worker_connections 768;
    # multi_accept on;
}

http {
    ##
    # Basic Settings
    ##
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off

    upstream node {
        # One failed response will take a server out of circulation for 20 seconds.
        server localhost:8080 fail_timeout=20s;
        keepalive 512;
    }

    server {
        listen 80 default_server;
        listen 8191;
        listen 443 ssl;
        ssl on;
        ssl_certificate /home/ubuntu/APP/cert.pem;
        ssl_certificate_key /home/ubuntu/APP/key.pem;
        server_name localhost;

        location / {
            proxy_pass https://localhost:8080;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
            # define buffers, necessary for proper communication to prevent 502s
            proxy_buffer_size 128k;
            proxy_buffers 4 256k;
            proxy_busy_buffers_size 256k;
        }
    }

    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # Logging Settings
    ##
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##
    gzip on;
    gzip_disable "msie6";
    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

    ##
    # nginx-naxsi config
    ##
    # Uncomment it if you installed nginx-naxsi
    ##
    #include /etc/nginx/naxsi_core.rules;

    ##
    # nginx-passenger config
    ##
    # Uncomment it if you installed nginx-passenger
    ##
    #passenger_root /usr;
    #passenger_ruby /usr/bin/ruby;

    ##
    # Virtual Host Configs
    ##
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
Have you tried uncommenting these lines?
#passenger_root /usr;
#passenger_ruby /usr/bin/ruby;
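Separately from Passenger, one thing the posted config never sets and that commonly breaks file-upload POSTs: client_max_body_size defaults to 1m, and Nginx rejects larger request bodies with 413 Request Entity Too Large before they ever reach Node. A sketch (the 50m cap is an arbitrary example):
http {
    # default is 1m; raise it above your largest expected upload
    client_max_body_size 50m;
}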

Nginx 502 Bad Gateway when uploading files

I get the following error when I try to upload files to my node.js based web app:
2014/05/20 04:30:20 [error] 31070#0: *5 upstream prematurely closed connection while reading response header from upstream, client: ... [clipped]
I'm using a front-end proxy here:
upstream app_mywebsite {
    server 127.0.0.1:3000;
}

server {
    listen 0.0.0.0:80;
    server_name {{ MY IP}} mywebsite;
    access_log /var/log/nginx/mywebsite.log;

    # pass the request to the node.js server with the correct headers
    # and much more can be added, see nginx config options
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://app_mywebsite;
        proxy_redirect off;
        # web socket support
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
This is my nginx.conf file:
user www-data;
worker_processes 4;
pid /run/nginx.pid;

events {
    worker_connections 2048;
    multi_accept on;
}

http {
    ##
    # Basic Settings
    ##
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 20;
    types_hash_max_size 2048;
    # server_tokens off;
    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;
    include /etc/nginx/mime.types;
    # default_type application/octet-stream;
    default_type text/html;
    charset UTF-8;

    ##
    # Logging Settings
    ##
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##
    gzip on;
    gzip_disable "msie6";
    gzip_vary on;
    gzip_proxied any;
    gzip_min_length 256;
    gzip_comp_level 5;
    gzip_buffers 16 8k;
    gzip_http_version 1.1;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

    ##
    # nginx-naxsi config
    ##
    # Uncomment it if you installed nginx-naxsi
    ##
    #include /etc/nginx/naxsi_core.rules;

    ##
    # nginx-passenger config
    ##
    # Uncomment it if you installed nginx-passenger
    ##
    #passenger_root /usr;
    #passenger_ruby /usr/bin/ruby;

    ##
    # Virtual Host Configs
    ##
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
Any idea on how to better debug this? The things I've found haven't really worked (e.g. removing the trailing slash from my proxy_pass).
Try adding the following to your server{} block; I was able to solve an Nginx reverse proxy issue by defining these proxy attributes:
# define buffers, necessary for proper communication to prevent 502s
proxy_buffer_size 128k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;
The issue may be caused by PM2. If you've enabled watching, the app will restart on every single file change (including new uploads). The solution could be disabling watching completely or adding the uploads folder to the ignore list.
More: https://pm2.keymetrics.io/docs/usage/watch-and-restart/
So in the end I ended up changing my keepalive_timeout from 20 to 64, and it seems to handle large files fine now. The bummer is that I rewrote the image upload library I was using (node-imager) from scratch, but at least I learned something from it.
server {
    location / {
        # nginx has no plain "keepalive" directive in a location block;
        # this corresponds to raising keepalive_timeout (20 -> 64 above)
        keepalive_timeout 64;
    }
}
Try adding the following to the http section of your /etc/nginx/nginx.conf:
fastcgi_read_timeout 400s;
and restart Nginx.
Further reading: nginx docs
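A caveat on that suggestion: fastcgi_read_timeout only affects locations served via fastcgi_pass. Since this setup reaches Node through proxy_pass, the directive that actually governs how long Nginx waits for the upstream response is proxy_read_timeout (default 60s). A sketch:
http {
    # proxy_pass counterpart of fastcgi_read_timeout
    proxy_read_timeout 400s;
}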
Try this:
client_max_body_size - Maximum uploadable file size
http {
    send_timeout 10m;
    client_header_timeout 10m;
    client_body_timeout 10m;
    client_max_body_size 100m;
    large_client_header_buffers 8 32k;
}
and server section:
server {
    location / {
        proxy_buffer_size 32k;
    }
}
large_client_header_buffers 8 32k and proxy_buffer_size 32k are enough for most scripts, but you can try 64k, 128k, 256k...
(sorry, I'm not a native English speaker) =)
