Nginx 502 Bad Gateway error on EC2 Instance - node.js

I've been having some trouble configuring an nginx server on an EC2 Linux instance. I'm running an application on port 3000 and want to map it to port 80 using nginx.
Here is my configuration file:
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;
include /usr/share/nginx/modules/*.conf;
events {
worker_connections 1024;
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
server_names_hash_bucket_size 128;
include /etc/nginx/mime.types;
default_type application/octet-stream;
include /etc/nginx/conf.d/*.conf;
index index.html index.htm;
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name localhost;
include /etc/nginx/default.d/*.conf;
location / {
}
error_page 404 /404.html;
location = /40x.html {
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
}
include /etc/nginx/sites-enabled/default;
}
This is the default file that ships with nginx, with very slight changes by me, most notably the inclusion of a custom file called default, whose contents are as follows:
server {
listen 80;
server_name [my_domain_name];
location / {
proxy_pass http://[my_private_ip]:3000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
The items in square brackets are replaced with the correct values. Whenever I try to navigate to the website I get 502 Bad Gateway nginx/1.12.1.
My server is a Node.js server running on port 3000.
I've tried troubleshooting and reading other Stack Overflow questions about bad gateways, but I can't figure out the solution. Thank you.

Follow a different approach: let your application run on port 3000 (and listen on 3000 as well). In that case you would open it as
http://url:3000
Now we just need to forward all requests coming to port 80 on to 3000, which can easily be done using iptables:
sudo iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3000
Now you should simply be able to open the URL without the port number.
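Note that this rule lives only in the running kernel and is lost on reboot, so it has to be saved or re-applied at boot. In iptables-save format the NAT table entry looks roughly like this (a sketch, not a complete ruleset):

```
*nat
-A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-ports 3000
COMMIT
```

Depending on the distribution, something like `sudo service iptables save` (with the iptables-services package) or redirecting `iptables-save` into the system's rules file persists it.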

Related

Should I open port 3000 of my server to serve NodeJS application?

I have an Angular + Node.js app. When I was running it locally I defined baseurl = http://localhost:3000/ in my Angular app and used this prefix to access my Node.js backend from the links defined in my program. Now that I want to deploy the app on a remote server, I changed the definition to baseurl = http://111.222.333.444:3000/ (111.222.333.444 being my server's IP address, for example), but it doesn't work!
How should I connect my Angular app to the NodeServer on a remote server?
EDIT: This is my /etc/nginx/nginx.conf file content:
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;
events {
worker_connections 1024;
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
# Load modular configuration files from the /etc/nginx/conf.d directory.
# See http://nginx.org/en/docs/ngx_core_module.html#include
# for more information.
include /etc/nginx/conf.d/*.conf;
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name _;
root /demo/stock-front9/dist/strategy;
# Load configuration files for the default server block.
include /etc/nginx/default.d/*.conf;
location / {
#proxy_pass http://localhost:3000;
#proxy_http_version 1.1;
# First attempt to serve request as file, then
# as directory, then redirect to index(angular) if no file found.
try_files $uri $uri/ /index.html;
}
error_page 404 /404.html;
location = /40x.html {
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
}
I would not. I think it is better to run the Node app with a tool like PM2 and then place an Nginx reverse proxy in front of it: PM2 acts as an orchestrator for your service, while Nginx provides access only through the standard web ports (80, 443).
And in the case of Angular, compiling it should produce a static web app which you can serve from the same Nginx reverse proxy. Doing it this way, you save yourself the effort of configuring things like CORS, API routes and so forth; everything goes through Nginx.
Update: an example Nginx config file:
server {
listen 80;
server_name example.org;
location /api {
proxy_pass http://localhost:3000;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_redirect off;
proxy_http_version 1.1;
}
location / {
root /path/to/angular/compiled/app;
index index.html;
}
}
And then the angular app should point to the same host.
Good luck and cheers :)
You can still run your Angular app locally, and for the backend server you can use a proxy.
Please take a look at this:
https://github.com/angular/angular-cli/blob/master/docs/documentation/stories/proxy.md#using-corporate-proxy
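The proxy config that page describes is a small JSON file. A sketch (proxy.conf.json; the /api path is an assumption about the app's routes):

```json
{
  "/api": {
    "target": "http://localhost:3000",
    "secure": false,
    "changeOrigin": true
  }
}
```

It is wired in with `ng serve --proxy-config proxy.conf.json`, so the Angular dev server forwards /api requests to the Node backend and the baseurl can stay relative instead of hard-coding an IP.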

Angular not showing index page after restarting nginx

I have an application on a VPS server with the backend in Node.js and the frontend in Angular.
I restarted nginx and some problems started. My API no longer works over https, only http (before, I could make requests over https).
When I access my application's link in the browser, I get a message from my backend, as if I were making a GET on that route; before I restarted nginx, that link showed my frontend's login page...
My Angular dist files are in public_html and my Node app is in /nodeapp.
This is my nginx conf:
#user nobody;
worker_processes 1;
#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;
#pid logs/nginx.pid;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log;
error_log error.log warn;
sendfile on;
#tcp_nopush on;
#keepalive_timeout 0;
keepalive_timeout 65;
#gzip on;
server {
listen 80;
listen [::]:80 ipv6only=on;
server_name knowhowexpressapp.com;
location / {
proxy_pass http://189.90.138.98:3333;
proxy_http_version 1.1;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
}
I tried some things, like:
pm2 restart server;
nginx -s reload
service nginx restart
but my frontend still does not show when I try to access the page.
As we were able to deduce together, the nginx configuration was incorrectly redirecting requests to the backend.
Our solution was to not use nginx and instead expose the port we needed on the server, so that the Angular application could reach it directly.
Of course we could also have used nginx here and redirected only one path to a specific port.

Upstream Node server closing connection to nginx

I'm using nginx as a proxy for a Node server that's rate-limiting requests. The rate is one request every 30 seconds; most requests return a response fine, but if a request is kept open for an extended period of time, I get this:
upstream prematurely closed connection while reading response header from upstream
I cannot figure out what might be causing this. Below is my nginx configuration:
# For more information on configuration, see:
# * Official English Documentation: http://nginx.org/en/docs/
# * Official Russian Documentation: http://nginx.org/ru/docs/
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
# Load dynamic modules. See /usr/share/nginx/README.dynamic.
# include /usr/share/nginx/modules/*.conf;
events {
worker_connections 1024;
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
# Load modular configuration files from the /etc/nginx/conf.d directory.
# See http://nginx.org/en/docs/ngx_core_module.html#include
# for more information.
include /etc/nginx/conf.d/*.conf;
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name _;
root /srv/www/main/htdocs;
# Load configuration files for the default server block.
include /etc/nginx/default.d/*.conf;
location /vcheck {
proxy_pass http://127.0.0.1:8080$is_args$query_string;
# proxy_buffer_size 128k;
# proxy_buffers 4 256k;
# proxy_busy_buffers_size 256k;
# proxy_http_version 1.1;
# proxy_set_header Upgrade $http_upgrade;
# proxy_set_header Connection 'upgrade';
# proxy_set_header Host $host;
# proxy_cache_bypass $http_upgrade;
# proxy_redirect off;
proxy_read_timeout 600s;
}
location ~ \.php$ {
include fastcgi.conf;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
fastcgi_index routes.php$is_args$query_string;
}
location / {
if (-f $request_filename) {
expires max;
break;
}
if ($request_filename !~ "\.(js|htc|ico|gif|jpg|png|css)$") {
rewrite ^(.*) /routes.php last;
}
}
}
}
Is there a reason why Node could be closing the connection early?
EDIT: I'm using Node's built-in HTTP server.
It seems you have to extend the response timeout of your Node.js application.
If it's an Express app, you can try this:
install: npm i --save connect-timeout
use:
var timeout = require('connect-timeout');
app.use(timeout('60s'));
But I recommend not keeping the connection waiting: fix the issue in the Node.js app and find out why it halts for so long.
It seems the Node.js app has an issue that prevents it from responding, so the request gets lost and keeps nginx waiting.

Nginx configuration on CentOS 6.7

I'm working on a CentOS 6.7 machine and I'm trying to configure nginx to serve a Node.js application. I feel like I'm really close but I'm missing something. Here's my nginx.conf, and below it is the server.conf that's in my sites-enabled directory.
When I go to the public IP address it gives me a 502 Bad Gateway error, but if I curl the private IP with the correct port on my CentOS machine, I can see the node application running. What am I missing here? Is it a firewall issue or maybe something else?
nginx.conf
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;
keepalive_timeout 65;
#gzip on;
#include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
sites-enabled/server.conf
server {
listen 80;
#server_name localhost;
location / {
proxy_pass http://192.xxx.x.xx:8000; # private IP
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
UPDATE:
I figured this out! Here's the server block that worked for me:
server {
listen 80 default_server;
listen [::]:80 default_server;
#server_name _;
root /usr/share/nginx/html;
include /etc/nginx/default.d/*.conf;
location / {
proxy_pass http://127.0.0.1:9000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
error_page 404 /404.html;
location = /40x.html {
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
}
I wanted to write a comment, but Stack Overflow does not let me.
I am 99% sure that a Node.js website does NOT need to work with nginx or Apache.
If set up correctly, the Node.js application should listen on the port by itself.
Since you did not say much about your setup, I suggest you just try to access it through the public IP with the Node.js port.

Kibana4 can't connect to Elasticsearch by IP, only localhost

After successfully completing this tutorial:
ELK on Cent OS
I'm now working on an ELK stack consisting of:
Server A: Kibana / Elasticsearch
Server B: Elasticsearch / Logstash
(After A and B work, scaling)
Server N: Elasticsearch / Logstash
So far, I've been able to install ES on servers A and B, with successful curls to each server's ES instance via IP (curl -XGET "server A and B's IP:9200" returns a 200 status message). The only changes to each ES's elasticsearch.yml file are as follows:
Server A:
host: "[server A ip]"
elasticsearch_url: "[server a ip:9200]"
Server B:
network.host: "[server b ip]"
I can also curl Kibana on server A via [server a ip]:5601
Unfortunately, when I try to open Kibana in a browser, I get 502 Bad Gateway.
Help?
nginx config from server A (which I can't really change much due to project requirements):
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
events {
worker_connections 1024;
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
# Load modular configuration files from the /etc/nginx/conf.d directory.
# See http://nginx.org/en/docs/ngx_core_module.html#include
# for more information.
include /etc/nginx/conf.d/*.conf;
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name _;
root /usr/share/nginx/html;
# Load configuration files for the default server block.
include /etc/nginx/default.d/*.conf;
location / {
}
error_page 404 /404.html;
location = /40x.html {
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
}
}
kibana.conf (in conf.d):
server {
listen 80;
server_name kibana.redacted.com;
auth_basic "Restricted Access";
auth_basic_user_file /etc/nginx/htpasswd.users;
location / {
proxy_pass http://localhost:5601;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
nginx error log:
2015/10/15 14:41:09 [error] 3416#0: *7 connect() failed (111: Connection refused) while connecting to upstream, client: [my vm "centOS", no clue why it's in here], server: kibana.redacted.com, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:5601/", host: "kibana.redacted.com"
When I loaded in test data (one index, one doc), things magically worked. In Kibana 3 you could still get a dashboard and useful errors even if it couldn't connect.
But that is not how Kibana 4 behaves.
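For what it's worth, the "connect() failed (111: Connection refused)" on upstream http://127.0.0.1:5601/ in the error log is consistent with Kibana listening on the server's external IP rather than loopback, since its config sets host to [server A ip] while nginx proxies to localhost. One hedged fix, assuming the same kibana.yml host key used above, is to make Kibana listen on all interfaces:

```
# kibana.yml sketch: listen on every interface so both the
# nginx proxy (via 127.0.0.1) and direct IP access can connect
host: "0.0.0.0"
```

Alternatively, the proxy_pass target in kibana.conf can be changed to the same IP that Kibana binds to.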
