"websocket connection invalid" when using nginx on node.js server - node.js

I'm using Express.js to create a server to which I can connect using web sockets.
Even though it eventually seems to work (that is, it connects and passes an event to the client), I initially get an error in Chrome's console:
Unexpected response code: 502
On the backend, socket.io only logs warn - websocket connection invalid.
However, nginx logs this:
2012/02/12 23:30:03 [error] 25061#0: *81 upstream prematurely closed
connection while reading response header from upstream, client:
71.122.117.15, server: www.example.com, request: "GET /socket.io/1/websocket/1378920683898138448 HTTP/1.1", upstream:
"http://127.0.0.1:8090/socket.io/1/websocket/1378920683898138448",
host: "www.example.com"
Note: I'm running the nginx development branch (nginx version: nginx/1.1.14), so it should support HTTP/1.1.
Also note that if I just use the node.js server without the nginx it works without any warnings.
Finally, here is my nginx config file:
server {
    listen 0.0.0.0:80;
    server_name www.example.com;
    access_log /var/log/nginx/example.com.log;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;

        proxy_pass http://node;
        proxy_redirect off;
    }
}

upstream node {
    server 127.0.0.1:8090;
}
Any help would be greatly appreciated. I tried the fix suggested in this question but that didn't work either.

nginx only has partial WebSocket support, and only in the unstable 1.1 branch. See the Socket.IO wiki.
As far as I know, there are currently only a few stable Node.js-based HTTP proxies that support WebSockets properly.
Check out node-http-proxy (we use this):
https://github.com/nodejitsu/node-http-proxy
and bouncy:
https://github.com/substack/bouncy
Or you can use a pure TCP proxy such as HAProxy.
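For the pure-TCP route, here is a minimal HAProxy sketch (the section name is made up; port 8090 matches the question's upstream) -- a starting point, not a complete production config:

```
# Hypothetical HAProxy TCP passthrough to the Node app; WebSocket
# frames pass through untouched because nothing parses HTTP here.
listen node_tcp
    bind *:80
    mode tcp
    server node1 127.0.0.1:8090 check
```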
Update!
nginx (>= 1.3.13) supports WebSockets out of the box!
http://nginx.org/en/docs/http/websocket.html
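For reference, the directives that page describes, applied to the config from the question (a sketch, not a guaranteed drop-in):

```nginx
# WebSocket proxying per the nginx docs linked above (nginx >= 1.3.13):
# the Upgrade/Connection headers are hop-by-hop, so they must be set
# explicitly, and proxying must use HTTP/1.1.
location / {
    proxy_pass http://node;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $http_host;
}
```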

Related

Connection refused, while connecting to upstream on engintron/nginx deployment

I've dealt with this error in the nginx error log for the past few hours.
*2 connect() failed (111: Connection refused) while connecting to upstream, client: my ip, server: my domain, request: "GET / HTTP/2.0", upstream: "http://127.0.0.1:3000/", host: "my domain"
I'm currently trying to deploy a Next.js app with nginx, using Engintron for cPanel as well as pm2.
default.conf
server {
    listen [::]:80 default_server ipv6only=off;
    server_name my domain domain-ip;
    # deny all; # DO NOT REMOVE OR CHANGE THIS LINE - Used when Engintron is disabled to block Nginx from becoming an open proxy

    # Set the port for HTTP proxying
    set $PROXY_TO_PORT 8080;
    include common_http.conf;
}
common_http.conf
location / {
    proxy_pass http://127.0.0.1:3000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
}
There aren't any errors on pm2's end, and sudo nginx -t passes just fine, so I'm confused about what exactly the issue is.
Any sort of help is appreciated, have a good rest of your day :)
Fixed the issue: for some reason my pm2 wasn't working properly; after a clean reinstall the issue was fixed, but now I've got a 404 Not Found for my index file. The fun life of being a developer lol

Running nginx to serve files and act as a reverse proxy for Node app on same domain

I am currently trying to run Nginx as a reverse proxy for a small Node application and serve up files for the core of a site.
E.g.:
/     - statically served files for the root of the website
/app/ - Node app running on port 3000 behind the nginx reverse proxy
server {
    listen 80;
    listen [::]:80;

    server_name example.com www.example.com;

    root /var/www/example.com/html;
    index index.html index.htm;

    # Set path for access logs
    access_log /var/log/nginx/access.example.com.log combined;

    # Set path for error logs
    error_log /var/log/nginx/error.example.com.log notice;

    # If set to on, Nginx will issue log messages for every operation
    # performed by the rewrite engine at the notice error level
    # Default value off
    rewrite_log on;

    # Settings for main website
    location / {
        try_files $uri $uri/ =404;
    }

    # Settings for Node app service
    location /app/ {
        # Header settings for application behind proxy
        proxy_set_header Host $host;
        # proxy_set_header X-NginX-Proxy true;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        # Proxy pass settings
        proxy_pass http://127.0.0.1:3000/;

        # Proxy redirect settings
        proxy_redirect off;

        # HTTP version settings
        proxy_http_version 1.1;

        # Response buffering from proxied server (default 1024m)
        proxy_max_temp_file_size 0;

        # Proxy cache bypass: defines conditions under which the
        # response will not be taken from the cache
        proxy_cache_bypass $http_upgrade;
    }
}
This appeared to work at first glance, but what I have found over time is that I am being served 502 errors constantly on the Node app route. This applies to both the app itself, as well as static assets included in the app.
I've tried using various different variations of the above config, but nothing I can find seems to fix the issue. I had read of issues with SELinux, but this is currently not on the server in question.
A few additional bits of information:
Server: Ubuntu 18.04.3
Nginx: nginx/1.17.5
2020/02/09 18:18:07 [error] 8611#8611: *44 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: x, server: example.com, request: "GET /app/assets/images/image.png HTTP/1.1", upstream: "http://127.0.0.1:3000/assets/images/image.png", host: "example.com", referrer: "http://example.com/overlay/"
2020/02/09 18:18:08 [error] 8611#8611: *46 connect() failed (111: Connection refused) while connecting to upstream, client: x, server: example.com, request: "GET /app/ HTTP/1.1", upstream: "http://127.0.0.1:3000/", host: "example.com"
Has anyone encountered similar issues, or knows what it is that I've done wrong?
Thanks in advance!
It may be because of your Node router; it would help to share the Node code too.
Anyway, try mounting your main router and static route like app.use('/app', mainRouter); and see if it makes a difference.

node.js server 502 bad gateway with no errors in application logs (nginx reverse proxy setup)

I have a node.js (+Express) application hosted on an Ubuntu 16.04 machine serving an HTTP web application, with an nginx reverse proxy serving an HTTPS server (proxying requests for my node application to port 8080). When somebody uses my web app via the browser, after a couple of requests sent back and forth between the browser and the server, the application stops responding and returns a 502 Bad Gateway response.
From what I've read about upstream errors in nginx, the fault probably lies with the node.js application and bad error handling - the server crashing and restarting. Unfortunately there is nothing in my node logs; the logs just "fall silent" at one point and log nothing. So I am frankly at a loss as to how to debug the issue. I do have an error handler set up in my node app - set up as middleware, the last to be used by the Express app.
One other thing I find very weird: when I get a 502 Bad Gateway in Chrome (after about 2 minutes of the app hanging/loading), the site just won't load or reload. But when I open the site in Chrome incognito, I manage to open the landing page, go to the login page and send a POST request with login details. Only after that does the app hang (and send a 502 Bad Gateway, after about 2 minutes). And when I use Chrome incognito the logs do show some requests; the last one is usually
GET /js/24.a34f9a13b9032f4d89b4.chunk.js HTTP/1.1 then the log goes silent again. (So express never receives the POST request with login data)
Could anyone point me in the right direction to find and fix that problem? Please be patient with me, since I am mostly a beginner in web development.
Below is the error from nginx logs:
2018/03/28 17:34:45 [error] 19696#19696: *2078 connect() failed (111: Connection refused) while connecting to upstream, client: 91.89.32.129, server: dashboard.hsseowayds.com, request: "GET /assets/css/font-awesome.min.css HTTP/1.1", upstream: "http://[::1]:8080/assets/css/font-awesome.min.css", host: "dashboard.hsseowayds.com", referrer: "https://dashboard.hsseowayds.com/"
2018/03/28 17:34:50 [error] 19696#19696: *2036 upstream prematurely closed connection while reading response header from upstream, client: 91.89.32.129, server: dashboard.hsseowayds.com, request: "POST /auth/login HTTP/1.1", upstream: "http://127.0.0.1:8080/auth/login", host: "dashboard.hsseowayds.com", referrer: "https://dashboard.hsseowayds.com/"
2018/03/28 17:34:50 [error] 19696#19696: *2036 no live upstreams while connecting to upstream, client: 91.89.32.129, server: dashboard.hsseowayds.com, request: "GET /favicon.ico HTTP/1.1", upstream: "http://localhost/favicon.ico", host: "dashboard.hsseowayds.com", referrer: "https://dashboard.hsseowayds.com/auth/login"
I also did a tcpdump on ports 80, 443 and 8080 for all interfaces (both ethernet and loopback) while using Chrome incognito during the issues with the server, and tried to use Wireshark to figure out what was wrong, but had no success. (I also used Wireshark to capture the traffic between my computer and the server, which yielded nothing helpful to me either.) The tcpdump command I used was:
sudo tcpdump -l -w tcpdump_any_fail_1832.pcap -tttt -i any -s0 port 80 or port 443 or port 8080
If anyone wants to have a look, here is a screenshot from wireshark and the .pcap file I can send you privately (I changed the login data inside), since I don't think I can attach it here:
wireshark screenshot
And this is my nginx file from sites-availables:
server {
    listen 80;
    server_name dashboard.hsseowayds.com dashboard.hsseowayds.com;
    return 301 https://dashboard.hsseowayds.com$request_uri;
}

server {
    listen 443 ssl http2 default_server;
    listen [::]:443 ssl http2 default_server;
    server_name dashboard.hsseowayds.com;

    ssl on;
    ssl_certificate /etc/nginx/ssl/dashboard.hsseowayds.com/rapidSSL.crt;
    ssl_certificate_key /etc/nginx/ssl/dashboard.hsseowayds.com/ssl_private_key.pem;
    ssl_session_timeout 180m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_session_cache shared:SSL:20m;
    ssl_stapling on;
    ssl_stapling_verify on;
    ssl_prefer_server_ciphers on;
    ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DHE+AES128:!ADH:!AECDH:!MD5;
    ssl_dhparam /etc/nginx/cert/dhparam.pem;

    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
    underscores_in_headers on;

    location / {
        proxy_pass http://localhost:8080;
        proxy_http_version 1.1;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection '';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_connect_timeout 160s;
        proxy_send_timeout 600;
        proxy_read_timeout 600;
    }
}
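One angle on the "logs fall silent" symptom: an Express error-handler middleware only sees errors thrown inside request handlers; an exception elsewhere kills the process without reaching it. A sketch of process-level hooks (not the asker's code) that at least make a silent crash leave a trace:

```javascript
// Process-level hooks: log failures that never reach Express's
// error-handler middleware, then exit so a supervisor can restart.
process.on('uncaughtException', (err) => {
  console.error('uncaughtException:', err.stack || err);
  process.exit(1); // let pm2/systemd/forever restart the app
});

process.on('unhandledRejection', (reason) => {
  console.error('unhandledRejection:', reason);
});
```

With these in place, the node logs should show *something* at the moment nginx starts reporting "connection refused".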

Socket.io behind load balancer

I am using socket.io behind nginx, which is behind an Azure load balancer. However, on the client side I constantly get errors like
WebSocket connection to 'ws://**********/socket.io/?session_id=i9bk_iqkVrUveKwvIBz4fMNDbkoYuaITQ_APO73sgQd6-tQBaRkjp8RR8N9LTA5LnqMeKXzZg5AXXgjEevFKqSKRJJI8iaK3&id=dc978ae038af4746baf68ead35d182f4&EIO=3&transport=websocket&sid=61LGLpdw53xaMYqBAAJR' failed: Error during WebSocket handshake: Unexpected response code: 502
Also nginx gives the error below
[error] 15348#0: *84812 upstream prematurely closed connection while reading response header from upstream, client: 10.100.50.14, server: _, request: "GET /prod/socket.io/?session_id=nR0P30IDeUutoavDyjcqAQ8hUw_3l7dtAHQ3tqzW4zVT8eBOxwbHZq_7mWd9K7qRNO2Aq45QXm8w2KSvzyFlq3O4w7P2tl2q&id=955bb63a4f804b42b9d85ac8cf9172a7&EIO=3&transport=websocket&sid=xXGRnAsjKX6Gj-SAAAls HTTP/1.1", upstream: "http://127.0.0.1:3000/socket.io/?session_id=nR0P30IDeUutoavDyjcqAQ8hUw_3l7dtAHQ3tqzW4zVT8eBOxwbHZq_7mWd9K7qRNO2Aq45QXm8w2KSvzyFlq3O4w7P2tl2q&id=955bb63a4f804b42b9d85ac8cf9172a7&EIO=3&transport=websocket&sid=xXGRnAsjKX6Gj-SAAAls"
Anyone have an idea about the reason?
Nginx conf:
proxy_pass http://lb-prod/; # Load balance the URL location "/" to the upstream lb1
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
The problem here was session stickiness. Configuring my load balancer to use sticky sessions solved the problem!
There is a post on the official NGINX blog describing best practices for using NGINX and NGINX Plus with Node.js and Socket.IO.
I think it will be very helpful for you.
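As an aside, when nginx itself fans out to several Socket.IO instances, the same stickiness can be configured at the nginx layer with ip_hash, so each client IP keeps hitting the same backend (addresses below are placeholders):

```nginx
# Sketch: sticky upstream selection by client IP, so Socket.IO's
# handshake and subsequent polling/websocket requests land on the
# same Node instance.
upstream socketio_nodes {
    ip_hash;
    server 10.0.0.1:3000;
    server 10.0.0.2:3000;
}
```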

nginx + node.js = failed while connecting to upstream with iframely

I'm trying to use Iframely. I installed the self-hosted version on my server (Ubuntu + nginx):
https://iframely.com/docs/host
When I start node like this:
# node server
Iframely works well.
Otherwise, I get a 502 Bad Gateway error.
ERROR
In the log error:
2016/01/25 06:06:58 [error] 13265#0: *4476 connect() failed (111: Connection refused) while connecting to upstream, client: xx:xx:xx:xx:xx:xx, server: iframely.custom.com, request: "GET /iframely?url=http://coub.com/view/2pc24rpb HTTP/1.1", upstream: "http://127.0.0.1:8061/iframely?url=http://coub.com/view/2pc24rpb", host: "iframely.custom.com"
When I try:
# curl -i 127.0.0.1:8061/iframely?url=http://coub.com/view/2pc24rpb
it confirms the error:
curl: (7) Failed to connect to 127.0.0.1 port 8061: Connection refused
I'm just getting started with Node, and I understand this probably means node.js is not listening on port 8061.
When I try:
netstat -pantu
I don't see the port in question, but I do see others, like this one used by another node.js app that works perfectly:
tcp6 0 0 127.0.0.1:4567 127.0.0.1:60724 ESTABLISHED 12329/node
CONFIGURATION
My host configuration:
upstream iframely.custom.com {
    ip_hash;
    server localhost:8061;
    keepalive 8;
}

server {
    listen 80;
    listen [::]:80;
    server_name iframely.custom.com;
    root /usr/share/nginx/html/iframely.custom.com;

    # Logs
    access_log /var/log/iframely.access_log;
    error_log /var/log/iframely.error_log;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;

        proxy_pass http://iframely.custom.com/;
        proxy_redirect off;

        # Socket.IO support
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    # Exclude from the logs to avoid bloating when it's not available
    include drop.conf;
}
I have tried to change in the configuration localhost for 127.0.0.1 but it doesn't change anything.
How do you keep a node.js app alive: do I have to restart it with something like forever?
Could it be a problem with ipv4 or ipv6?
I posted this question on Server Fault because I thought it was a problem with the nginx configuration, but someone suggested I was wrong.
Thank you in advance for any help.
jb
Firstly, you should make the Node application listen on port 8061; it should then show up in "netstat -tpln", e.g.:
tcp 0 0 127.0.0.1:8061 0.0.0.0:* LISTEN 21151/node
Secondly, you should test it with curl. If you get a response, the Node server is working.
Finally, shift your focus to nginx.
With only one backend, there's no benefit to using the upstream module. You can remove your upstream section and update your proxy_pass line like this:
proxy_pass http://127.0.0.1:8061/;
It's also possible the backend is listening on the IP, but is not responding to the name "localhost". It's unlikely, but possible. But it must be listening on some IP address, so using the IP address is safer.
The advice above by Vladislav is good, too.
I solved the issue using forever: https://github.com/foreverjs/forever
