NodeBB installed correctly, then 502 Bad Gateway - node.js

I am trying to install NodeBB on my server for the domain le.club.systemes.sonores.rocks.
I would like it to work like this:
nginx > node.js
For information:
nginx is configured to use FPM.
# node -v
v5.4.0
# npm -v
3.3.12
What I did
I installed NodeBB following these pages:
https://docs.nodebb.org/en/latest/installing/os/ubuntu.html
https://docs.nodebb.org/en/latest/configuring/proxies/nginx.html
I updated Node.js following this page:
https://davidwalsh.name/upgrade-nodejs
At the beginning, I got a 502 Bad Gateway error.
I tried to solve it with the help of these pages:
nginx: connect() failed (111: Connection refused) while connecting to upstream
http://jvdc.me/fix-502-bad-gateway-error-on-nginx-server-after-upgrading-php/
Finally I got the config page. At the end, I got a success message and the button to launch the forum.
But the forum never runs, and on reload I get a 502 Bad Gateway error again.
I tried running start in the nodebb folder, but it doesn't change anything apparently.
I tried restarting FPM and nginx, without success.
service php5-fpm restart
service nginx restart
My configuration:
my nginx host
#vi /etc/nginx/sites-available/le.club.systemes.sonores.rocks
upstream le.club.systemes.sonores.rocks {
ip_hash;
server localhost:4567;
keepalive 8;
}
server {
listen 80;
listen [::]:80;
#root /usr/share/nginx/html/node/le.club.systemes.sonores.rocks;
#index index.php index.html index.htm;
# Make site accessible from http://le.club.systemes.sonores.rocks/
server_name le.club.systemes.sonores.rocks;
#large_client_header_buffers 4 32k;
# Logs
access_log /var/log/leclubsystemessonoresrocks.access_log;
error_log /var/log/leclubsystemessonoresrocks.error_log;
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-NginX-Proxy true;
#proxy_pass http://localhost:4567/;
proxy_pass http://le.club.systemes.sonores.rocks/;
proxy_redirect off;
# Socket.IO Support
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
# Redirect server error pages to the static page
#error_page 403 /403.html;
#error_page 404 /404.html;
#error_page 500 502 503 504 /50x.html;
}
nodebb config
/usr/share/nginx/html/node/le.club.systemes.sonores.rocks/nodebb# vi config.json
{
"url": "http://localhost:4567",
"secret": "xxx",
"database": "mongo",
"port": 4567,
"mongo": {
"host": "localhost",
"port": "4567",
"username": "yyy",
"database": "0"
}
}
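[edit] One thing that stands out in this config.json: the mongo.port value is 4567, the same port NodeBB itself runs on, whereas MongoDB listens on 27017 by default (the NodeBB error log further down shows Mongo being refused on 127.0.0.1:4567). If the MongoDB instance uses the default port, the file would presumably need to look like this (a sketch only, placeholder credentials kept as-is):

```json
{
  "url": "http://localhost:4567",
  "secret": "xxx",
  "database": "mongo",
  "port": 4567,
  "mongo": {
    "host": "localhost",
    "port": "27017",
    "username": "yyy",
    "database": "0"
  }
}
```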
nginx.conf
user www-data;
worker_processes 4;
pid /run/nginx.pid;
events {
worker_connections 768;
# multi_accept on;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
gzip_disable "msie6";
# gzip_vary on;
# gzip_proxied any;
# gzip_comp_level 6;
# gzip_buffers 16 8k;
# gzip_http_version 1.1;
# gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
##
# nginx-naxsi config
##
# Uncomment it if you installed nginx-naxsi
##
#include /etc/nginx/naxsi_core.rules;
##
# nginx-passenger config
##
# Uncomment it if you installed nginx-passenger
##
#passenger_root /usr;
#passenger_ruby /usr/bin/ruby;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
##
# Special for 502 error
##
fastcgi_buffers 8 16k;
fastcgi_buffer_size 32k;
fastcgi_connect_timeout 300;
fastcgi_send_timeout 300;
fastcgi_read_timeout 300;
##
# Access control
##
include blockips.conf;
}
www.conf
# vi /etc/php5/fpm/pool.d/www.conf
listen = /var/run/php5-fpm.sock
;listen = 127.0.0.1:9000
Error log
for nginx
# vi /var/log/leclubsystemessonoresrocks.error_log
2016/01/08 23:20:28 [error] 31636#0: *29 connect() failed (111: Connection refused) while connecting to upstream, client: 2a01:e35:xxx:xxx:xxx:xxx:xxx:xxx, server: le.club.systemes.sonores.rocks, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:4567/", host: "le.club.systemes.sonores.rocks"
for nodebb [edit]
/usr/share/nginx/html/node/le.club.systemes.sonores.rocks/nodebb/logs# vi output.log
8/1 22:58 [30578] - info: Time: Fri Jan 08 2016 22:58:01 GMT+0100 (CET)
8/1 22:58 [30578] - info: Initializing NodeBB v0.9.3
8/1 22:58 [30578] - error: NodeBB could not connect to your Mongo database. Mongo returned the following error: connect ECONNREFUSED 127.0.0.1:4567
8/1 22:58 [30578] - error: Error: connect ECONNREFUSED 127.0.0.1:4567
at Object.exports._errnoException (util.js:856:11)
at exports._exceptionWithHostPort (util.js:879:20)
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1063:14)
[cluster] Child Process (30578) has exited (code: 0, signal: null)
Do I have to configure Node.js to start when nginx starts?
Could it be a problem with the server configuration?
Or with the proxy config?
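(For reference: nginx only proxies to the Node process, it never starts it. To have NodeBB start at boot and come back after a crash, something like the following systemd unit is one common approach — a sketch only: the paths come from the install directory shown above, and the User line is an assumption:)

```ini
# /etc/systemd/system/nodebb.service -- sketch, adjust paths and user
[Unit]
Description=NodeBB forum
After=network.target mongod.service

[Service]
Type=forking
User=www-data
WorkingDirectory=/usr/share/nginx/html/node/le.club.systemes.sonores.rocks/nodebb
ExecStart=/usr/share/nginx/html/node/le.club.systemes.sonores.rocks/nodebb/nodebb start
Restart=on-failure

[Install]
WantedBy=multi-user.target
```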
Thank you in advance for any help.
jb

Related

502 bad gateway error when deploying to nginx web server

I built a React app using the create-react-app and npm run build commands and connected it to Node with a server.js file in the directory created by create-react-app.
When I run node server locally it works perfectly fine; however, when I pushed the changes to my nginx server I started to get a 502 Bad Gateway status. Why is this happening? Node is running when I get this error.
Here is the server.js code
const express = require('express');
const path = require('path');
const app = express();
app.use('/js', express.static(path.join(__dirname, 'src/')));
app.use('/css', express.static(path.join(__dirname, 'src/')));
app.use(express.static(path.join(__dirname , '/public/build')));
// Handles any requests that don't match the ones above
app.get('/', (req,res) =>{
res.sendFile(path.join(__dirname , "/public/build/index.html"));
});
const port = process.env.PORT || 5000;
app.listen(port);
console.log('App is listening on port ' + port);
the error log
[error] 7422#7422: *4477 connect() failed (111: Connection refused) while connecting to upstream, client: 162.84.158.175, server: anthonyjimenez.me, request: "GET / HTTP/2.0", upstream: "http://127.0.0.1:3000/", host: "anthonyjimenez.me"
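One thing worth checking against this log: nginx is proxying to 127.0.0.1:3000, but the server.js shown above falls back to port 5000 whenever PORT is unset in the environment. A tiny sketch of that fallback logic (resolvePort is a hypothetical helper for illustration, not part of the app):

```javascript
// Mirrors the `process.env.PORT || 5000` line from server.js above.
// If PORT is not exported where the app runs, nginx's upstream
// (127.0.0.1:3000) and the app's actual port (5000) will not match,
// which yields exactly a 502 / "111: Connection refused".
function resolvePort(env) {
  return Number(env.PORT) || 5000; // NaN falls through to the default
}

console.log(resolvePort({}));               // 5000 -- mismatch with nginx
console.log(resolvePort({ PORT: '3000' })); // 3000 -- matches the upstream
```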
and the config file
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# SSL Settings
##
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
# gzip_vary on;
# gzip_proxied any;
# gzip_comp_level 6;
# gzip_buffers 16 8k;
# gzip_http_version 1.1;
# gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
Go to /etc/nginx/sites-enabled/default and add the following block. You have to tell Nginx to forward the request this way:
server {
listen 443 ssl ;
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # ssl
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; # ssl
index index.html index.htm index.nginx-debian.html;
server_name example.com;
location / {
proxy_pass http://localhost:3001; # your port goes here
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
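Note that this block only listens on 443. If the site should also answer plain HTTP, the usual companion is a port-80 block that redirects to HTTPS (a sketch, reusing the same placeholder server_name):

```nginx
server {
    listen 80;
    server_name example.com;
    return 301 https://$host$request_uri;
}
```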

NGINX reverse proxy / port-forwarding rule to send http traffic to port 3000 for my Node Express application causes the application to be unusable

I have a Node.js server running with Express that is being used as a web server. It connects to my database to run queries for the end user.
I have a VPS set up on DigitalOcean, with a Node app running on port 3000. When I access the Node app on ip:3000 it runs fine and as fast as can be expected. If I set up a reverse proxy with nginx, or a firewall rule that forwards traffic from port 80 to port 3000, parts of the page run extremely slowly, or not at all. I can't seem to find a pattern as to why: some of the database queries run fine, but some don't load at all and cause the page to hang. If I access the site using port 3000, the site continues to run fine, even with nginx running. It's only access via port 80 that is slow.
My NGINX conf is:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {
worker_connections 768;
# multi_accept on;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# SSL Settings
##
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
# gzip_vary on;
# gzip_proxied any;
# gzip_comp_level 6;
# gzip_buffers 16 8k;
# gzip_http_version 1.1;
# gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/$
##
# Virtual Host Configs
##
server_names_hash_bucket_size 64;
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
#mail {
# # See sample authentication script at:
# # http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript
#
# # auth_http localhost/auth.php;
# # pop3_capabilities "TOP" "USER";
# # imap_capabilities "IMAP4rev1" "UIDPLUS";
#
# server {
# listen localhost:110;
# protocol pop3;
# proxy on;
# }
#
# server {
# listen localhost:143;
# protocol imap;
# proxy on;
# }
#}
My example.com file is (where 'example.com' is my site address):
server {
listen 80;
listen [::]:80;
root /var/www/example.com/html;
index index.html index.htm index.nginx-debian.html;
server_name example.com www.example.com;
location / {
proxy_pass http://localhost:3000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
I recommend using PM2 to start an instance of your Node app in production: https://github.com/Unitech/pm2
Try the following NGINX configuration:
upstream prod_nodejs_upstream {
server 127.0.0.1:3000;
keepalive 64;
}
server {
listen 80;
server_name example.com;
root /home/www/example;
location / {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-NginX-Proxy true;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_max_temp_file_size 0;
proxy_pass http://prod_nodejs_upstream/;
proxy_redirect off;
proxy_read_timeout 240s;
}
}
Once these changes are applied, test and restart NGINX with sudo nginx -t and then sudo systemctl restart nginx.
Please update the configuration as below and share the resulting log output so that the time taken by the upstream can be measured:
upstream prod_nodejs_upstream {
server 127.0.0.1:3000;
keepalive 64;
}
server {
listen 80;
server_name example.com;
root /home/www/example;
location / {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-NginX-Proxy true;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_max_temp_file_size 0;
proxy_pass http://prod_nodejs_upstream/;
proxy_redirect off;
proxy_read_timeout 240s;
}
log_format apm '"$time_local" client=$remote_addr '
'method=$request_method request="$request" '
'request_length=$request_length '
'status=$status bytes_sent=$bytes_sent '
'body_bytes_sent=$body_bytes_sent '
'referer=$http_referer '
'user_agent="$http_user_agent" '
'upstream_addr=$upstream_addr '
'upstream_status=$upstream_status '
'request_time=$request_time '
'upstream_response_time=$upstream_response_time '
'upstream_connect_time=$upstream_connect_time '
'upstream_header_time=$upstream_header_time';
}
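One caveat about the block above: defining log_format apm by itself changes nothing; the format has to be referenced from an access_log directive, e.g. inside the server block (the log path here is only an example):

```nginx
access_log /var/log/nginx/apm.log apm;
```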

Kibana4 can't connect to Elasticsearch by IP, only localhost

After successfully completing this tutorial:
ELK on Cent OS
I'm now working on an ELK stack consisting of:
Server A: Kibana / Elasticsearch
Server B: Elasticsearch / Logstash
(After A and B work, scaling)
Server N: Elasticsearch / Logstash
So far, I've been able to install ES on server A / B, with successful curls to each server's ES instance via IP (curl -XGET "server A and B's IP:9200", returns 200 / status message.) The only changes to each ES's elasticsearch.yml file are as follows:
Server A:
host: "[server A ip]"
elasticsearch_url: "[server a ip:9200]"
Server B:
network.host: "[server b ip]"
I can also curl Kibana on server A via [server a ip]:5601
Unfortunately, when I try to open kibana in a browser, I get 502 bad gateway.
Help?
nginx config from server A (which I can't really change much due to project requirements):
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
events {
worker_connections 1024;
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
# Load modular configuration files from the /etc/nginx/conf.d directory.
# See http://nginx.org/en/docs/ngx_core_module.html#include
# for more information.
include /etc/nginx/conf.d/*.conf;
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name _;
root /usr/share/nginx/html;
# Load configuration files for the default server block.
include /etc/nginx/default.d/*.conf;
location / {
}
error_page 404 /404.html;
location = /40x.html {
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
}
}
kibana.conf "in conf.d"
server {
listen 80;
server_name kibana.redacted.com;
auth_basic "Restricted Access";
auth_basic_user_file /etc/nginx/htpasswd.users;
location / {
proxy_pass http://localhost:5601;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
nginx error log:
2015/10/15 14:41:09 [error] 3416#0: *7 connect() failed (111: Connection refused) while connecting to upstream, client: [my vm "centOS", no clue why it's in here], server: kibana.redacted.com, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:5601/", host: "kibana.redacted.com"
When I loaded in test data (one index, one doc), things magically worked. In Kibana 3 you could still get a dashboard and useful errors even if it couldn't connect.
But that is not how Kibana 4 behaves.

Configuring nginx with Node.js: file upload POST request doesn't work?

I am trying to configure nginx with Node.js (the Sails.js framework).
Nginx listens for requests on port 80 and passes them to 8080. All the requests work fine (they are all POSTs), except the file upload POST request.
Below is my nginx config file :
events {
worker_connections 768;
# multi_accept on;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off
upstream node {
# One failed response will take a server out of circulation for 20 seconds.
server localhost:8080 fail_timeout=20s;
keepalive 512;
}
server {
listen 80 default_server;
listen 8191;
listen 443 ssl;
ssl on;
ssl_certificate /home/ubuntu/APP/cert.pem;
ssl_certificate_key /home/ubuntu/APP/key.pem;
server_name localhost;
location / {
proxy_pass https://localhost:8080;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
# define buffers, necessary for proper communication to prevent 502s
proxy_buffer_size 128k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;
}
}
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
gzip_disable "msie6";
# gzip_vary on;
# gzip_proxied any;
# gzip_comp_level 6;
# gzip_buffers 16 8k;
# gzip_http_version 1.1;
# gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
##
# nginx-naxsi config
##
# Uncomment it if you installed nginx-naxsi
##
#include /etc/nginx/naxsi_core.rules;
##
# nginx-passenger config
##
# Uncomment it if you installed nginx-passenger
##
#passenger_root /usr;
#passenger_ruby /usr/bin/ruby;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
Have you tried uncommenting these lines?
#passenger_root /usr;
#passenger_ruby /usr/bin/ruby;

Nginx 502 Bad Gateway when uploading files

I get the following error when I try to upload files to my node.js based web app:
2014/05/20 04:30:20 [error] 31070#0: *5 upstream prematurely closed connection while reading response header from upstream, client: ... [clipped]
I'm using a front-end proxy here:
upstream app_mywebsite {
server 127.0.0.1:3000;
}
server {
listen 0.0.0.0:80;
server_name {{ MY IP}} mywebsite;
access_log /var/log/nginx/mywebsite.log;
# pass the request to the node.js server with the correct headers and much more can be added, see nginx config options
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-NginX-Proxy true;
proxy_pass http://app_mywebsite;
proxy_redirect off;
# web socket support
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
}
This is my nginx.conf file:
user www-data;
worker_processes 4;
pid /run/nginx.pid;
events {
worker_connections 2048;
multi_accept on;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 20;
types_hash_max_size 2048;
# server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
# default_type application/octet-stream;
default_type text/html;
charset UTF-8;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
gzip_disable "msie6";
gzip_vary on;
gzip_proxied any;
gzip_min_length 256;
gzip_comp_level 5;
gzip_buffers 16 8k;
gzip_http_version 1.1;
gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
##
# nginx-naxsi config
##
# Uncomment it if you installed nginx-naxsi
##
#include /etc/nginx/naxsi_core.rules;
##
# nginx-passenger config
##
# Uncomment it if you installed nginx-passenger
##
#passenger_root /usr;
#passenger_ruby /usr/bin/ruby;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
Any idea on how to better debug this? The things I've found haven't really worked (e.g. removing the trailing slash from my proxy_pass).
Try adding the following to your server{} block, I was able to solve an Nginx reverse proxy issue by defining these proxy attributes:
# define buffers, necessary for proper communication to prevent 502s
proxy_buffer_size 128k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;
The issue may be caused by PM2. If you have watching enabled, the app will restart on every single file change (and new uploads too). The solution could be disabling watching completely, or adding the uploads folder to the ignore list.
More: https://pm2.keymetrics.io/docs/usage/watch-and-restart/
So in the end I changed my keepalive from 20 to 64 and it seems to handle large files fine now. The bummer is that I rewrote from scratch the image upload library I was using (node-imager), but at least I learned something from it.
server {
location / {
keepalive 64;
}
}
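For what it's worth, keepalive is only valid inside an upstream block (not a location), so the working form of this change is presumably closer to:

```nginx
upstream app_mywebsite {
    server 127.0.0.1:3000;
    keepalive 64;
}
```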
Try adding the following below to the http section of your /etc/nginx/nginx.conf:
fastcgi_read_timeout 400s;
and restart nginx.
Further reading: nginx docs
Try this:
client_max_body_size - Maximum uploadable file size
http {
send_timeout 10m;
client_header_timeout 10m;
client_body_timeout 10m;
client_max_body_size 100m;
large_client_header_buffers 8 32k;
}
and server section:
server {
location / {
proxy_buffer_size 32k;
}
}
large_client_header_buffers 8 32k and proxy_buffer_size 32k are enough for most scripts, but you can try 64k, 128k, 256k...
(Sorry, I'm not a native English speaker.) =)
