Nginx serves node app then randomly throws 502 errors

I am trying to run a node app (using express) on port 3000 with nginx.
This is my nginx.conf for the site:
server {
    listen [::]:80 ipv6only=off;
    server_name website.dev;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $http_host;
    }
}
I'm running the node app through a nodemon task. I'm also running a gulp watch task that compiles Less to CSS, runs Browserify/Babelify, etc.
The problem I'm having is that nginx will serve the node app fine for about 30 seconds, then (apparently at random) start serving 502 errors. The nodemon task doesn't restart during these periods, and the gulp task doesn't run either.
I can't find any errors in the node application itself, and nothing shows up in nginx's error.log or access.log.
I've verified that the node app is actually running on port 3000 and that nginx is listening on port 80.
Here is the output of netstat -nlt:
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:34490 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN
tcp6 0 0 :::22 :::* LISTEN
tcp6 0 0 :::3000 :::* LISTEN
tcp6 0 0 :::111 :::* LISTEN
tcp6 0 0 :::80 :::* LISTEN
tcp6 0 0 :::42481 :::* LISTEN
Finally, after a couple of minutes, nginx goes back to serving the app for about 30 seconds, and the cycle repeats.
If I run curl localhost:3000 I get my node app, even while nginx is still throwing 502 errors.

A working nginx conf file looks like this:

upstream project {
    server 127.0.0.1:3000;
    # you can add multiple nodes here for load balancing
}

server {
    listen 80;
    server_name website.dev;

    location / {
        proxy_pass http://project;
    }
}
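If the Express app needs the original host or client IP (the question's own config already sets proxy_set_header Host), the location block can be extended. A minimal sketch with commonly used proxy headers; none of these are required for the fix above, they just pass client information through to the node app:

    location / {
        proxy_pass http://project;
        # forward the original Host header and client address to the node app
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }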

It turned out to be a zombie vagrant box.
I noticed that after running vagrant halt I still got the nginx 502 page, so I went looking for another vagrant install.
There turned out to be a vagrant box still running that did not show up in vagrant global-status and was not found by Vagrant Manager.
The host machine, for whatever reason, was switching between the two boxes at random, which is why I was intermittently getting 502 errors (from the zombie box).
Deleting the box from .vagrant/machines/ solved the issue.


Nginx does not listen to port 3000

I have a Nuxt app and I'm trying to configure Nginx. When I use curl localhost:3000 I can see my website's markup and a 200 status, but I cannot open the website via the domain name; it takes about 2 minutes to load and then shows "This site can't be reached". I enabled an error log and this was in it:
[error] 72792#72792: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 172.68.50.201, server: domain.com, request: "GET / HTTP/1.1", upstream: "http://[::1]:3000/", host: "domain.com"
I tried netstat -tulpn to make sure nginx is listening, and here is the result:
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:3000 0.0.0.0:* LISTEN 70267/node
tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN 72791/nginx: master
tcp 0 0 127.0.0.1:33060 0.0.0.0:* LISTEN 773/mysqld
tcp 0 0 127.0.0.1:3306 0.0.0.0:* LISTEN 773/mysqld
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 72791/nginx: master
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 61184/systemd-resol
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 689/sshd: /usr/sbin
tcp6 0 0 :::443 :::* LISTEN 72791/nginx: master
tcp6 0 0 :::80 :::* LISTEN 72791/nginx: master
tcp6 0 0 :::22 :::* LISTEN 689/sshd: /usr/sbin
udp 0 0 127.0.0.53:53 0.0.0.0:* 61184/systemd-resol
This is my nginx config:

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    include snippets/self-signed.conf;
    include snippets/ssl-params.conf;

    server_name domain.com www.domain.com;
    root /path/to/my/front-app;
    error_log /var/log/nginx/nuxt.error.log;
    index index.html index.htm;

    location / {
        proxy_pass http://localhost:3000;
        include /etc/nginx/proxy_params;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

server {
    listen 80;
    listen [::]:80;
    server_name myServerIp;
    return 301 $https://myServerDomain;
}
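One detail worth flagging from the output above (a hedged observation, not a definitive fix): the error log shows nginx connecting to upstream "http://[::1]:3000/", i.e. localhost resolved to IPv6, while netstat shows node listening only on 0.0.0.0:3000, i.e. IPv4. A minimal sketch of the same location block with an explicit IPv4 upstream, everything else unchanged:

    location / {
        # point at the IPv4 loopback explicitly so nginx does not try ::1
        proxy_pass http://127.0.0.1:3000;
        include /etc/nginx/proxy_params;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }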
I will write an answer here mainly because I would feel too limited in a comment regarding layout/presentation. Please don't take it as a definitive answer per se.
(Also, the OP reached out to me directly, hence my effort here.)
First, I would start by migrating to the latest version of Nuxt (v3.0 stable) rather than an RC, so that you're on the latest and greatest.
Then, I'd double-check that I'm using Node v18, because that's the LTS and you should use the LTS.
I'd start debugging the app locally, hence yarn build && yarn preview. If that works well, then the issue is probably not coming from Nuxt.
(I also recommend yarn or pnpm as a package manager rather than npm, because of speed and more explicit errors when installing packages.)
You can then try to host your app on a Node.js-powered PaaS like Render.com or Heroku. That way, you leave the configuration of the VPS for somebody else to manage and focus primarily on your app. Follow the official guides.
If that works, then it's definitely not a Nuxt issue.
I'm a bit rusty on the Nginx side, so I'm not sure I can help with that part as quickly as I used to.
My main question here is, "Do you really need to manage your own server?"
If not, then you could focus on the code itself and set the deployment part aside, especially if it is taking 2 weeks of your time and not bringing a lot of value.
If it is something that you truly care about, then the first step would be to double-check some of those points:
try to expose a simple .html file to your domain name, and access it from www.yourcoolwebsite.com
try to expose a Node.js app and render a view to double-check the Node.js part (see the sketch below)
if the two above work well, your Nuxt app should be working totally fine too!
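A minimal sketch of that second check, assuming a bare Node.js server with no framework; port 3000 matches the Nuxt setup in the question, so stop the Nuxt app first to free the port:

// check.js - minimal Node.js server to verify the nginx -> node path
const http = require('http');

const server = http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('node is reachable through nginx\n');
});

// bind to 0.0.0.0 so both direct curl and proxied requests can reach it
server.listen(3000, '0.0.0.0', () => {
  console.log('listening on http://0.0.0.0:3000');
});

If curl localhost:3000 and the domain both return that text, the nginx/DNS side is fine and the remaining suspect is the Nuxt app itself.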
Here you're trying to achieve too many things at the same time; you need to proceed by elimination and separate what you know works from what you're not sure about.
Debugging an issue with several variables at once, without isolating them, is too time-consuming and not efficient.

Site deployed on EC2 returning ERR_CONNECTION_REFUSED when making requests to Node Express Server

I am trying to make a request to my Node Express server via the browser and I am getting ERR_CONNECTION_REFUSED, i.e.:
POST http://localhost:9000/api/search net::ERR_CONNECTION_REFUSED
Requests from the Chrome console are also refused.
However, when I make curl requests from the EC2 terminal, the request is successful and I'm returned JSON.
My nginx.conf file is detailed below:
server {
    listen 80 default_server;
    server_name _;

    location / {
        root /usr/share/nginx/html;
        include /etc/nginx/mime.types;
        try_files $uri $uri/ /index.html;
        add_header 'Access-Control-Allow-Headers' *;
        add_header 'Access-Control-Allow-Origin' *;
        add_header 'Access-Control-Allow-Methods' *;
    }

    location /sockjs-node {
        proxy_pass http://localhost:80;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
    }

    location /api {
        proxy_pass http://localhost:9000;
    }
}
From within the EC2 instance, the firewall status is:
sudo ufw status
Status: active
To Action From
-- ------ ----
OpenSSH ALLOW Anywhere
Nginx Full ALLOW Anywhere
9000 ALLOW Anywhere
OpenSSH (v6) ALLOW Anywhere (v6)
Nginx Full (v6) ALLOW Anywhere (v6)
9000 (v6) ALLOW Anywhere (v6)
netstat -tunlp returns
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:27017 0.0.0.0:* LISTEN -
tcp6 0 0 :::22 :::* LISTEN -
tcp6 0 0 :::9000 :::* LISTEN -
udp 0 0 127.0.0.53:53 0.0.0.0:* -
udp 0 0 172.31.2.45:68 0.0.0.0:* -
udp 0 0 127.0.0.1:323 0.0.0.0:* -
udp6 0 0 ::1:323 :::* -
My EC2 security group rules look like this (screenshot omitted).
I've no idea what the issue could be. Any help would be appreciated.
SOLUTION: I've managed to resolve the issue by changing all fetch requests on the front-end to use the EC2 IP address instead of localhost. This doesn't seem very optimal though. Is there some sort of wildcard I could use instead, as the EC2 IP address changes on restart? Any advice would be appreciated!
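One hedged follow-up on that wildcard question, based on the location /api block in the config above: if the front-end uses a relative URL, the browser sends the request to whatever host served the page, and nginx proxies it on to port 9000, so no hard-coded IP is needed. A sketch (the /api/search path comes from the error above; the payload is purely illustrative):

// relative path: resolves against the host that served the page,
// so nginx's "location /api" block proxies it to the Express server
fetch('/api/search', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ query: 'example' }), // illustrative payload
})
  .then((res) => res.json())
  .then((data) => console.log(data));

This only helps when the page itself is served through nginx on port 80, which is the case in the config above.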

Pointing EC2 instance via domain inside Route 53 with timeout

I've spent a lot of time looking for a solution, but this is quite a weird and tricky issue.
I have an AWS EC2 instance (Ubuntu).
I have a configured domain in AWS Route 53.
Everything works properly via the IP address of the EC2 instance in a web browser, but when I change nginx.conf and add server_name with my domain, it instantly starts throwing a timeout.
To be clear:
Route 53:
added the proper IP as an A record
added the proper NS records
checked everything via dig in the terminal - it's okay.
EC2 instance:
Ubuntu instance
Node.js app on port 8000
configured security group with Outbound: All, Inbound: HTTP port 80 and Custom TCP Rule port range 8000
server {
    listen 80;
    listen [::]:80;
    server_name mydomain.dev www.mydomain.dev;
    return 301 http://$server_name$request_uri;

    root /home/ubuntu/mydomainfolder_dev/;

    location / {
        proxy_pass http://localhost:8000;
        #proxy_http_version 1.1;
        #proxy_set_header Upgrade $http_upgrade;
        #proxy_set_header Connection 'upgrade';
        #proxy_set_header Host $host;
    }
}
After this nginx.conf change and restarting the service (sudo service nginx restart), the EC2 address redirects to my domain properly, but then there is a timeout... any ideas how to fix it?
also: sudo netstat -tulpn output:
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 4581/nginx: master
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 1608/systemd-resolv
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 935/sshd
tcp6 0 0 :::80 :::* LISTEN 4581/nginx: master
tcp6 0 0 :::22 :::* LISTEN 935/sshd
tcp6 0 0 :::8000 :::* LISTEN 2486/node /home/ubu
SOLUTION
I guess I found something: checking /var/log/syslog (via sudo nano) shows a weird DNS error:
Server returned error NXDOMAIN, mitigating potential DNS violation DVE-2018-0001, retrying transaction with reduced feature level UDP.
Alter the security group to allow port 443 as well as port 80.
You will also need an SSL certificate on the nginx server.
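A minimal sketch of what that could look like, assuming a certificate has already been issued (the paths below follow the Certbot layout shown elsewhere on this page and are illustrative). Note that the original return 301 sits in the same port-80 block it redirects to; in this sketch the port-80 block redirects to https instead:

server {
    listen 80;
    listen [::]:80;
    server_name mydomain.dev www.mydomain.dev;
    # redirect plain HTTP to HTTPS rather than back to the same block
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name mydomain.dev www.mydomain.dev;

    # illustrative Certbot-style certificate paths
    ssl_certificate /etc/letsencrypt/live/mydomain.dev/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mydomain.dev/privkey.pem;

    location / {
        proxy_pass http://localhost:8000;
    }
}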

Secure WebSocket Connection through nginx 1.10.3 and LetsEncrypt

I've been ripping my hair out because I cannot figure out why I cannot connect to my WebSocket server through HTTPS.
So, I have an Express 4 server running a Vue app. It uses the ws library from npm to connect to the WebSocket server. nginx is version 1.10.3, and I use Let's Encrypt to secure it.
When I go to the site, I get (in the console):
main.js:12 WebSocket connection to 'wss://play.mysite.com:8443/' failed: Error in connection establishment: net::ERR_CONNECTION_CLOSED
Line 12:
window.ws = new WebSocket(`${tls}://${window.location.hostname}:${port}`);
That is wss://play.mysite.com:8443. It is indeed on a subdomain.
Here is my nginx block:
update: I got it!
upstream websocket {
    server 127.0.0.1:8443;
}

server {
    root /var/www/play.mysite.com/dist;
    index index.html;

    access_log /var/log/wss-access-ssl.log;
    error_log /var/log/wss-error-ssl.log;

    server_name play.mysite.com www.play.mysite.com;

    location /ws {
        proxy_pass http://websocket;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }

    listen [::]:443 ssl ipv6only=on default_server; # managed by Certbot
    listen 443 ssl http2 default_server; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/play.mysite.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/play.mysite.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
As you can see, we have the subdomain on https://play.mysite.com. My Express server always listens on port 4000, and my WebSocket server listens on wss / port 8443 in production.
When I view /var/log/wss-error-ssl.log, I get:
2018/03/01 04:45:17 [error] 30343#30343: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 68.117.172.118, server: play.mysite.com, request: "GET /static/css/app.67b8cb3fda61e1e2deaa067e29b52040.css HTTP/1.1", upstream: "http://[::1]:4000/static/css/app.67b8cb3fda61e1e2deaa067e29b52040.css", host: "play.mysite.com", referrer: "https://play.mysite.com/"
So, what exactly am I doing wrong?
The proxy_pass was set right, and I'm listening on port 8443 for WebSocket on both client and server. What does this mean? Thank you.
edit: and yes, I know port 8443 is listening:
root@MySITE:/var/www/play.mysite.com# netstat -l
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 *:https *:* LISTEN
tcp 0 0 localhost:4000 *:* LISTEN
tcp 0 0 localhost:mysql *:* LISTEN
tcp 0 0 *:http *:* LISTEN
tcp 0 0 *:ssh *:* LISTEN
tcp6 0 0 [::]:8443 [::]:* LISTEN
tcp6 0 0 [::]:https [::]:* LISTEN
tcp6 0 0 [::]:http [::]:* LISTEN
tcp6 0 0 [::]:ssh [::]:* LISTEN
udp 0 0 *:bootpc *:*
udp 0 0 45.76.1.234.vultr.c:ntp *:*
udp 0 0 localhost:ntp *:*
udp 0 0 *:ntp *:*
udp6 0 0 2001:19f0:5:2b9c:54:ntp [::]:*
udp6 0 0 fe80::5400:1ff:fe61:ntp [::]:*
udp6 0 0 localhost:ntp [::]:*
udp6 0 0 [::]:ntp [::]:*
TL;DR: Add a second location block for /websocket and have the JavaScript client hit wss://play.mysite.com/websocket.
I see a couple of issues with your nginx server conf.
Nginx is not listening on port 8443, yet your JavaScript is hitting wss://play.mysite.com:8443. So while nginx is serving ports 443 and 80 for the play.mysite.com domain, your JavaScript is trying to bypass nginx and hit Node.js directly over wss (TLS), though I suppose you meant to offload the SSL onto nginx. The wss call must go through nginx for the SSL handshake to succeed.
You are passing all calls to localhost:4000. Instead, I suggest you have two location blocks, location / { ... } and location /websocket { ... }. In the second (websocket) location block you can proxy the calls to localhost:8443. This requires the client to connect to wss://play.mysite.com/websocket; a sketch of such a config follows below.
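A minimal sketch of what that answer describes, assuming the Express app stays on port 4000 and the plain (non-TLS) WebSocket server on 8443; only the location blocks are shown, and the Certbot-managed listen/ssl lines stay as they are:

    # proxy regular HTTP traffic to the Express app
    location / {
        proxy_pass http://localhost:4000;
        proxy_set_header Host $host;
    }

    # proxy WebSocket upgrades to the ws server; the client connects to
    # wss://play.mysite.com/websocket and nginx terminates the TLS
    location /websocket {
        proxy_pass http://localhost:8443;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }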

Why does Node.js work as a proxy to backend Node.js app, but not Nginx?

I have a simple nginx config file
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name ec2-x-x-x-x.compute-1.amazonaws.com;
    #root /home/ec2-user/dashboard;

    # Load configuration files for the default server block.
    # include /etc/nginx/default.d/*.conf;

    location / {
        proxy_pass http://127.0.0.1:4000;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root html;
    }
}
But when I send the request, it says it cannot access the server.
The server works fine on port 4000 directly, though, and sudo netstat -tulpn gives me this:
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 6512/nginx: master
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1640/sshd
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1247/master
tcp6 0 0 :::80 :::* LISTEN 6512/nginx: master
tcp6 0 0 :::22 :::* LISTEN 1640/sshd
tcp6 0 0 :::3000 :::* LISTEN 15985/node
tcp6 0 0 ::1:25 :::* LISTEN 1247/master
tcp6 0 0 :::4000 :::* LISTEN 3488/node
udp 0 0 0.0.0.0:68 0.0.0.0:* 484/dhclient
udp 0 0 127.0.0.1:323 0.0.0.0:* 451/chronyd
udp 0 0 0.0.0.0:1510 0.0.0.0:* 484/dhclient
udp6 0 0 ::1:323 :::* 451/chronyd
udp6 0 0 :::1458 :::* 484/dhclient
Also, when I use Node as a proxy server:

var http = require('http'),
    httpProxy = require('http-proxy');

httpProxy.createProxyServer({ target: 'http://localhost:4000' }).listen(80);
this works just fine.
Any ideas as to what I'm doing wrong?
Thanks for the useful netstat output. It appears the issue is that your Node.js app is only listening on IPv6, as shown by the tcp6 :::4000 entry in the output.
Nginx is trying to connect to it via IPv4, where it is not listening.
Your Node.js proxy probably works because it shares the same issue on both ends. :)
You didn't share which Node.js version you are using. Some versions had an issue where attempting to set up an IPv4 listener would result in an IPv6 one. Either you've run into a bug like that, or your Node.js app is actually misconfigured to listen on IPv6.
If the Node.js app on port 4000 were correctly configured to listen on IPv4, you would see this kind of entry in the netstat output:
tcp 0 0 127.0.0.1:4000 0.0.0.0:* LISTEN 12345/node
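A minimal sketch of forcing the IPv4 bind on the app side, assuming an Express app (the question doesn't show the app code, so the route and listen call here are illustrative):

// app.js - bind explicitly to the IPv4 loopback so nginx's
// proxy_pass http://127.0.0.1:4000 can reach it
const express = require('express');
const app = express();

app.get('/', (req, res) => res.send('hello from the dashboard app'));

app.listen(4000, '127.0.0.1', () => {
  console.log('listening on 127.0.0.1:4000');
});

With that in place, the netstat entry should switch to the 127.0.0.1:4000 form shown above.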
