HTTPS POST request fails to nginx/nodejs Bad Gateway - node.js

I've got a Raspberry Pi 2 running Node.js, with an app configured via .env on port 442. Nginx is configured to serve HTTPS with a Let's Encrypt certificate. I tried the Node app by itself over HTTP and it responded fine. I tried the served index.html over HTTPS through nginx from my Mac on the LAN and it worked fine. The issue is that I'm now trying to combine them.
I'm test-posting from hurl.it but getting a Bad Gateway error, and the nginx error log for the site says:
POST /API/switches/sw1?password=123456 HTTP/1.1", upstream: "http://192.168.1.53:442/50x.html", host: "subdomain.domain.com"
2017/04/23 20:08:38 [error] 20424#0: *4 upstream prematurely closed connection while reading response header from upstream, client: 192.168.1.56, server: subdomain.domain.com, request: "GET /aism/ HTTP/1.1", upstream: "http://192.168.1.53:442/aism/", host: "subdomain.domain.com"
2017/04/23 20:08:38 [error] 20424#0: *4 upstream prematurely closed connection while reading response header from upstream, client: 192.168.1.56, server: subdomain.domain.com, request: "GET /aism/ HTTP/1.1", upstream: "http://192.168.1.53:442/50x.html", host: "subdomain.domain.com"
2017/04/23 20:09:25 [error] 20467#0: *1 upstream prematurely closed connection while reading response header from upstream, client: 23.20.198.108, server: subdomain.domain.com, request: "POST /API/switches/sw1?password=123456 HTTP/1.1", upstream: "http://192.168.1.53:442/API/switches/sw1?password=123456", host: "subdomain.domain.com"
Here is my site config:
#server {
#    listen 80;
#    listen [::]:80;
#    server_name subdomain.domain.com;
#    return 301 https://$server_name$request_uri;
#}
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name subdomain.domain.com;

    ssl_certificate /etc/letsencrypt/live/subdomain.domain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/subdomain.domain.com/privkey.pem;

    root /www/subdomain.domain.com/aism;
    index index.php index.html index.htm;

    error_page 404 /404.html;
    error_page 500 502 503 504 /50x.html;

    # Error & access logs
    error_log /www/subdomain.domain.com/logs/error.log error;
    access_log /www/subdomain.domain.com/logs/access.log;

    location / {
        index index.html index.php;
        proxy_pass http://192.168.1.53:442;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    location ~ /.well-known {
        allow all;
    }

    location /public {
        root /www/subdomain.domain.com/aism;
    }

    location ~ ^/(images/|img/|javascript/|js/|css/|stylesheets/|flash/|media/|static/) {
    }

    #location ~ [^/].php(/|$) {
    #    fastcgi_split_path_info ^(.+?.php)(/.*)$;
    #    fastcgi_pass unix:/var/run/php5-fpm.sock;
    #    fastcgi_index index.php;
    #    include fastcgi_params;
    #}
}
What is wrong with my config file for the site?
I am making my POSTs on hurl.it to my router's public IP:
https://routerIP/API/switches/sw1?password=123456
That gets routed to 192.168.1.53:443 by my router,
which according to the config file gets proxied to 192.168.1.53:442.
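For reference, the proxying leg of that chain can be written as the sketch below. The upstream address and port are taken from the question; the X-Forwarded-* headers are a common addition so the Node app sees the real client and scheme, not a confirmed fix for this 502:

```nginx
# Sketch only: HTTPS terminated by nginx, plain HTTP to the Node app on 442.
location / {
    proxy_pass http://192.168.1.53:442;
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    # Commonly added so the upstream app can log the real caller and scheme;
    # illustrative, not a confirmed fix for the Bad Gateway.
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}
```

An "upstream prematurely closed connection" error generally means the Node process accepted the TCP connection but exited or crashed before sending response headers, so it is also worth watching the app's own console output while replaying the POST.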

Related

Nginx 502 Bad Gateway errors

About a month ago I configured a Digital Ocean Droplet to forward all requests to mydomain.com to Webflow (a no-code site-builder) and any requests to mydomain.com/api/v1 to the Node.js backend running on the same Droplet.
Everything was working, but today I went to the site and got a 502 Bad Gateway Nginx error, and I'm not sure why. Whenever I try to connect, I get these errors:
2022/10/16 19:52:44 [error] 1571#1571: *7 SSL_do_handshake() failed (SSL: error:0A000438:SSL routines::tlsv1 alert internal error:SSL alert number 80) while SSL handshaking to upstream, client: ipAddress, server: mydomain.com, request: "GET / HTTP/1.1", upstream: "https://ipAddress:443/", host: "mydomain.com"
2022/10/16 19:52:45 [error] 1571#1571: *7 SSL_do_handshake() failed (SSL: error:0A000438:SSL routines::tlsv1 alert internal error:SSL alert number 80) while SSL handshaking to upstream, client: ipAddress, server: mydomain.com, request: "GET / HTTP/1.1", upstream: "https://ipAddress:443/", host: "mydomain.com"
2022/10/16 19:52:45 [error] 1571#1571: *7 SSL_do_handshake() failed (SSL: error:0A000438:SSL routines::tlsv1 alert internal error:SSL alert number 80) while SSL handshaking to upstream, client: 162.229.177.82, server: mydomain.com, request: "GET / HTTP/1.1", upstream: "https://ipAddress:443/", host: "mydomain.com"
2022/10/16 19:52:45 [error] 1571#1571: *7 no live upstreams while connecting to upstream, client: ipAddress, server: mydomain.com, request: "GET /favicon.ico HTTP/1.1", upstream: "https://webflow/favicon.ico", host: "mydomain.com", referrer: "https://example.com/"
For privacy I've changed any IP addresses to "ipAddress" and the host to "mydomain.com". What do these errors mean, and what are some potential fixes?
If it helps, my Nginx sites-available file looks like this:
upstream webflow {
    server proxy-ssl.webflow.com:443;
}

resolver 8.8.8.8 8.8.4.4;

server {
    root /var/www/html;
    index index.html index.htm index.nginx-debian.html;
    server_name mydomain.com www.mydomain.com;

    location / {
        proxy_pass https://webflow;
        proxy_ssl_server_name on;
        proxy_ssl_name $host;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }

    location /api/v1/ {
        proxy_pass http://dropletIp:3001;
    }

    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/mydomain.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/mydomain.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

Node.js+Nginx throwing 502 bad gateway error

I just installed a Node.js application in the dev environment. The configuration is:
Ubuntu 16.x
PHP 7.0
Node.js 8.x
MySQL
phpMyAdmin
Nginx
My Node app uses port 2000 and the subfolder name is nodeapp. Though phpMyAdmin opens properly, the Node app gives a 502 Bad Gateway.
Here is the nginx conf file :
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    listen 2000;

    root /home/pjsp/public_html;
    index index.php index.html index.htm index.nginx-debian.html app.js;
    server_name mydomain.com;

    location /nodeapp {
        proxy_pass http://localhost/nodeapp:2000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php7.0-fpm.sock;
    }

    location ~ /\.ht {
        deny all;
    }
}
Below is the error I am getting at /var/log/nginx/error.log file:
2018/06/02 13:13:15 [error] 32209#32209: *763 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 127.0.0.1, server: mydomain.com, request: "GET /nodeapp:2000:2000:2000:2000:2000:2000:2000:2000:2000:2000 [… the ":2000" suffix repeats hundreds more times …]
Please help!
Update:
New Config file:
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    #listen 2000;

    root /home/pjsp/public_html;
    index index.php index.html index.htm index.nginx-debian.html app.js;
    server_name app.pajasa.com www.app.pajasa.com;

    location /nodeapp {
        proxy_pass http://localhost;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php7.0-fpm.sock;
    }

    location ~ /\.ht {
        deny all;
    }
}
Now getting error :
2018/06/02 14:52:35 [alert] 3026#3026: *765 768 worker_connections are not enough while connecting to upstream, client: 127.0.0.1, server: app.pajasa.com, request: "GET /nodeapp:2000 HTTP/1.1", upstream: "http://127.0.0.1:80/nodeapp:2000", host: "www.app.pajasa.com"
url : www.app.pajasa.com/nodeapp:2000
You have an infinite loop inside your nginx; that's why you see:
/nodeapp:2000:2000:2000:2000:2000:2000:2000:2000:2000:2000:2000....
If your Node app is listening on port 2000, don't use listen 2000; in nginx.
Once you drop that, a request to http://localhost/nodeapp will be passed on to your Node app.
Furthermore, your proxy_pass is incorrect; it should be:
proxy_pass http://localhost:2000;
UPDATE
Your URL is wrong:
www.app.pajasa.com/nodeapp:2000 // INCORRECT
First of all, as already mentioned, if you're using nginx to proxy to your Node.js app, you don't have to add the port to the URL. Secondly, and more importantly, that isn't how ports work; a port belongs after the host:
www.app.pajasa.com:2000 // correct syntax for a port (unnecessary here)
Drop :2000 from the URL and nginx will proxy the request to your Node app:
www.app.pajasa.com/nodeapp
proxy_pass http://localhost:2000/nodeapp;
Your URL scheme is wrong; it's always PROTO://DOMAIN:PORT/PATH.
listen 2000;
Your nginx should not listen on the app's port. In this case nginx is calling itself recursively.
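Putting the answer's points together, the relevant parts of the server block would look roughly like this. This is a sketch assembled from the answer, not a tested config; whether /nodeapp should be appended to proxy_pass depends on which path the Node app itself expects, which is why the answer shows both variants:

```nginx
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    # No "listen 2000;" here: port 2000 belongs to the Node app, and
    # listening on it made nginx proxy requests back to itself in a loop.

    server_name app.pajasa.com www.app.pajasa.com;

    location /nodeapp {
        # Forward to the Node app, which listens on port 2000.
        proxy_pass http://localhost:2000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
```

With this in place the client URL is plain www.app.pajasa.com/nodeapp, with no port.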

Socket.io + nginx + SSL = *31 upstream prematurely closed connection

I am trying to run a Node.js server (with socket.io) as a pub/sub server for my main app. I pushed it to the server and created a subdomain (with SSL). My client HTML page can load socket.io/socket.io.js, but the WSS handshake doesn't work as expected:
WebSocket connection to
'wss://ws.example.com/socket.io/?EIO=3&transport=websocket&sid=ogEegJXhjFh5lplgAAAF'
failed: Error during WebSocket handshake: Unexpected response code: 502
I've got a SSL subdomain defined in my nginx like this:
server {
    listen 80;
    listen 443 default ssl;
    server_name ws.example.com;

    # ssl on;
    ssl_certificate /etc/nginx/certificates/certificate-ws-example-com.crt;
    ssl_certificate_key /etc/nginx/certificates/ws.example.com.key;

    # Redirect all non-SSL traffic to SSL.
    if ($ssl_protocol = "") {
        rewrite ^ https://$host$request_uri? permanent;
    }

    location / {
        proxy_pass http://localhost:6969;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection upgrade;
        proxy_set_header Host $host;
        access_log /var/log/nginx/example_production.access.log;
        error_log /var/log/nginx/example_production.error.log;
    }
}
And the nginx error log for my Node.js (pm2) server shows:
[error] 29726#0: *31 upstream prematurely closed connection while reading response header from upstream, client: 86.228.47.218, server: ws.example.com, request: "GET /socket.io/?EIO=3&transport=websocket&sid=ogEegJXhjFh5lplgAAAF HTTP/1.1", upstream: "http://127.0.0.1:6969/socket.io/?EIO=3&transport=websocket&sid=ogEegJXhjFh5lplgAAAF", host: "ws.example.com"
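For comparison, the WebSocket proxy pattern from the nginx documentation derives the Connection header from $http_upgrade via a map, so plain HTTP requests keep normal keep-alive semantics while upgrade requests get "Connection: upgrade". The sketch below applies that pattern to this server block; it is not a confirmed fix for this 502, which can equally come from the Node process crashing on the upgrade request, but it is the documented baseline to rule out first:

```nginx
# Standard WebSocket proxying pattern from the nginx docs: only send
# "Connection: upgrade" upstream when the client actually requested one.
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    listen 443 ssl;
    server_name ws.example.com;

    location / {
        proxy_pass http://localhost:6969;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $host;
        # WebSocket connections are long-lived; without traffic nginx closes
        # them after proxy_read_timeout (60s by default), so raise it.
        proxy_read_timeout 300s;
    }
}
```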

Kibana4 can't connect to Elasticsearch by IP, only localhost

After successfully completing this tutorial:
ELK on Cent OS
I'm now working on an ELK stack consisting of:
Server A: Kibana / Elasticsearch
Server B: Elasticsearch / Logstash
(After A and B work, scaling)
Server N: Elasticsearch / Logstash
So far, I've been able to install ES on servers A and B, with successful curls to each server's ES instance via IP (curl -XGET "server A or B's IP:9200" returns a 200 / status message). The only changes to each server's elasticsearch.yml file are as follows:
Server A:
host: "[server A ip]"
elasticsearch_url: "[server a ip:9200]"
Server B:
network.host: "[server b ip]"
I can also curl Kibana on server A via [server a ip]:5601
Unfortunately, when I try to open kibana in a browser, I get 502 bad gateway.
Help?
nginx config from server A (which I can't really change much due to project requirements):
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
events {
worker_connections 1024;
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
# Load modular configuration files from the /etc/nginx/conf.d directory.
# See http://nginx.org/en/docs/ngx_core_module.html#include
# for more information.
include /etc/nginx/conf.d/*.conf;
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name _;
root /usr/share/nginx/html;
# Load configuration files for the default server block.
include /etc/nginx/default.d/*.conf;
location / {
}
error_page 404 /404.html;
location = /40x.html {
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
}
}
kibana.conf (in conf.d):
server {
    listen 80;
    server_name kibana.redacted.com;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.users;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
nginx error log:
2015/10/15 14:41:09 [error] 3416#0: *7 connect() failed (111: Connection refused) while connecting to upstream, client: [my vm "centOS", no clue why it's in here], server: kibana.redacted.com, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:5601/", host: "kibana.redacted.com"
When I loaded in test data (one index, one doc), things magically worked. In Kibana 3, you could still get a dashboard and useful errors even if it couldn't connect. But that is not how Kibana 4 behaves.
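The "connect() failed (111: Connection refused) ... upstream: http://127.0.0.1:5601/" line points at a bind/proxy mismatch: nginx proxies to localhost:5601, but with host: "[server A ip]" in the YAML, Kibana binds only to the external IP, so nothing listens on 127.0.0.1:5601. A sketch of the two ways to make the pair consistent (the IP is a placeholder for server A's address):

```nginx
# Option 1 (sketch): point nginx at the address Kibana actually binds to.
location / {
    proxy_pass http://SERVER_A_IP:5601;   # placeholder for server A's IP
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
}

# Option 2 (shown as a comment since it is YAML, not nginx): set
#   host: "0.0.0.0"
# in Kibana's config so it listens on all interfaces, including loopback,
# and keep proxy_pass http://localhost:5601; unchanged.
```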

Nginx Load Balancer having two load balanced nginx+php-fpm (Primary script unknown) error

We have two web servers with nginx+php-fpm (10.0.0.10 and 10.0.0.20), which are load-balanced behind another nginx server (just nginx). When we try to browse, we get a "file not found" error, with the error logs listed at the bottom.
Load Balancer (10.0.0.1)
nginx.conf
upstream test_rack {
    server 10.0.0.10:80;
    server 10.0.0.20:80;
}

server {
    location / {
        proxy_pass http://test_rack;
    }
}
Upstream Server (10.0.0.20)
subdomains.conf
server {
    listen 80;
    server_name ~^(?<sub>.+)\.example\.com$;
    root /data/vhost/$sub.example.com/htdocs;

    location / {
        try_files $uri /index.php;
    }

    location ~ \.php$ {
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
Error on web servers (10.0.0.10 and 10.0.0.20):
*1 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 10.0.0.1, server: ~^(?<sub>.+)\.example\.com$, request: "GET / HTTP/1.0", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "test_rack"
Solutions tried:
fastcgi_param SCRIPT_FILENAME /data/vhost/$sub.example.com/htdocs/$fastcgi_script_name;
Add proxy_set_header Host $host; to the first nginx.
Otherwise your upstreams get test_rack instead of the original hostname, and the $sub variable is empty.
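Applied to the load balancer above, the fix looks like this (a sketch assembled from the answer, with the upstream addresses from the question):

```nginx
upstream test_rack {
    server 10.0.0.10:80;
    server 10.0.0.20:80;
}

server {
    listen 80;

    location / {
        proxy_pass http://test_rack;
        # Forward the original hostname so the upstream's regex
        # server_name ~^(?<sub>.+)\.example\.com$ can capture $sub.
        # Without this the upstreams see "Host: test_rack", $sub stays
        # empty, root points at a non-existent directory, and php-fpm
        # reports "Primary script unknown".
        proxy_set_header Host $host;
    }
}
```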
