Best practices when running Node.js with port 80 (Ubuntu / Linode) [closed]

I am setting up my first Node.js server on a cloud Linux node and I am fairly new to the details of Linux admin. (BTW I am not trying to use Apache at the same time.)
Everything is installed correctly, but I found that unless I use the root login, I am not able to listen on port 80 with node. However, I would rather not run it as root for security reasons.
What is the best practice to:
Set good permissions / user for node so that it is secure / sandboxed?
Allow port 80 to be used within these constraints.
Start up node and run it automatically.
Handle log information sent to console.
Any other general maintenance and security concerns.
Should I be forwarding port 80 traffic to a different listening port?
Thanks

Port 80
What I do on my cloud instances is I redirect port 80 to port 3000 with this command:
sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3000
Then I launch my Node.js on port 3000. Requests to port 80 will get mapped to port 3000.
You should also edit your /etc/rc.local file and add that line minus the sudo. That will add the redirect when the machine boots up. You don't need sudo in /etc/rc.local because the commands there are run as root when the system boots.
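For reference, a minimal sketch of what /etc/rc.local might end up looking like (the interface eth0 and port 3000 come from the command above; adjust them to your setup):

#!/bin/sh -e
# Redirect incoming port 80 traffic to the Node.js app on port 3000.
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3000
exit 0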
Logs
Use the forever module to launch your Node.js app. It will make sure that your app restarts if it ever crashes, and it will redirect console logs to a file.
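For example (the script name server.js and the log paths are placeholders, not fixed names):

forever start -a -l /var/log/myapp/forever.log -o out.log -e err.log server.js

The -l, -o and -e flags set the forever log, stdout log and stderr log respectively; -a appends instead of failing if the log file already exists.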
Launch on Boot
Add your Node.js start script to the file you edited for port redirection, /etc/rc.local. That will run your Node.js launch script when the system starts.
Digital Ocean & other VPS
This not only applies to Linode, but to Digital Ocean, AWS EC2 and other VPS providers as well. However, on RedHat-based systems /etc/rc.local is /etc/rc.d/rc.local.

Give Safe User Permission To Use Port 80
Remember, we do NOT want to run your applications as the root user, but there is a hitch: your safe user does not have permission to use the default HTTP port (80). Your goal is to be able to publish a website that visitors can use by navigating to an easy-to-use URL like http://ip/ (with the default port 80 implied).
Unfortunately, unless you sign on as root, you'll normally have to use a URL like http://ip:port - where port number > 1024.
A lot of people get stuck here, but the solution is easy. There are a few options, but this is the one I like. Type the following commands:
sudo apt-get install libcap2-bin
sudo setcap cap_net_bind_service=+ep $(readlink -f $(which node))
Now, when you tell a Node application that you want it to run on port 80, it will not complain.
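A quick way to verify, as a minimal sketch to run under your unprivileged user:

// After setcap, an unprivileged user should be able to bind port 80.
const http = require('http');
http.createServer((req, res) => res.end('ok')).listen(80, () => {
  console.log('bound to port 80 as uid', process.getuid());
});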

Drop root privileges after you bind to port 80 (or 443).
This allows port 80/443 to remain protected, while still preventing you from serving requests as root:
function drop_root() {
    // Order matters: drop the group first, because once the uid is
    // no longer root we lose the privilege to change the gid.
    process.setgid('nobody');
    process.setuid('nobody');
}
A full working example using the above function:
var process = require('process'); // optional: process is also available as a global
var http = require('http');

var server = http.createServer(function(req, res) {
    res.write("Success!");
    res.end();
});

// Bind the privileged port while still root, then drop privileges in the
// listen callback, before any requests are served.
server.listen(80, null, null, function() {
    console.log('User ID: %d, Group ID: %d', process.getuid(), process.getgid());
    drop_root();
    console.log('User ID: %d, Group ID: %d', process.getuid(), process.getgid());
});
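Start the example with root privileges so the initial bind to port 80 succeeds; the second log line should then show the unprivileged IDs of nobody (server.js is a placeholder name):

sudo node server.js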

For port 80 (which was the original question), Daniel is exactly right. I recently moved to https and had to switch from iptables to a light nginx proxy managing the SSL certs. I found a useful answer along with a gist by gabrielhpugliese on how to handle that. Basically I
Created an SSL Certificate Signing Request (CSR) via OpenSSL
openssl genrsa 2048 > private-key.pem
openssl req -new -key private-key.pem -out csr.pem
Got the actual cert from one of these places (I happened to use Comodo)
Installed nginx
Changed the location in /etc/nginx/conf.d/example_ssl.conf to
location / {
    proxy_pass http://localhost:3000;
    proxy_set_header X-Real-IP $remote_addr;
}
Formatted the cert for nginx by cat-ing the individual certs together and linked to it in my nginx example_ssl.conf file (and uncommented stuff, got rid of 'example' in the name,...)
ssl_certificate /etc/nginx/ssl/cert_bundle.cert;
ssl_certificate_key /etc/nginx/ssl/private-key.pem;
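The concatenation step looks something like this (the file names are hypothetical; your own certificate must come first, followed by the intermediate certs):

cat www_example_com.crt intermediate.crt > /etc/nginx/ssl/cert_bundle.cert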
Hopefully that can save someone else some headaches. I'm sure there's a pure-node way of doing this, but nginx was quick and it worked.

Related

AWS SSL on EC2 instance without Load Balancer - NodeJS

Is it possible to have an EC2 instance running, listening on port 443, without a load balancer? I'm trying right now in my Node.js app, but it doesn't work when I call the page using https://. However, if I set it to port 80 everything works fine with http://.
I had it working earlier with a load balancer and route53, but I don't want to pay $18/mo for an ELB anymore, especially when I only have one server running.
Thanks for the help
You're right, if it's only the one instance and you feel like you don't need to be prepared for large increases in traffic, you shouldn't have to pay for an ELB.
From a high-level standpoint you'll have to go through the following steps:
Install an nginx server to serve your NodeJS application.
Install your SSL certificates on the nginx server.
-- Either do this manually, ssh'ing into the server and installing the certs as described here.
-- OR include the necessary files in your application (I believe this only works for elastic beanstalk?) which will overwrite the nginx configuration files automatically as described here.
Make sure nginx is listening on port 443 (should've been completed in the previous step)
Open the EC2 server's security group corresponding to where you want traffic to enter the server (port 80 / port 443)
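As a hedged sketch, the nginx piece from steps 2 and 3 might look like this (the paths, domain, and upstream port 3000 are assumptions):

server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/nginx/ssl/cert_bundle.crt;
    ssl_certificate_key /etc/nginx/ssl/private-key.pem;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}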
Is it possible? Yes of course. It sounds like you had an SSL certificate installed on the ELB and now you've deleted the ELB. You will have to install an SSL certificate on the EC2 server now. You can't use AWS ACM SSL certificates without an ELB or CloudFront distribution. If you don't want to pay for either of those services you will have to obtain an SSL certificate elsewhere.
For our projects (much like the other poster described) we used this setup:
nginx as load balancer and proxy for all calls on port 80 (no direct call to node.js server on port 3000 which is closed to the public)
pm2 as process manager for Node.js (and for deployment)
keymetrics.io for monitoring
Nodejs v6.9.3 boron/lts (through NVM)
Mongodb 3.2 with WiredTiger Engine (Compose.io)
Amazon EC2 instances for hosting (Amazon Linux not Ubuntu)
This setup works very well for us. And in this setup we're able to set up SSL without using the Amazon load balancers.
Once you have your certificate files, it's not so hard. You can even do this without Nginx.
Let's first create an Express web server (adding the imports the later snippets rely on):
const express = require('express');
const https = require('https');
const path = require('path');
const { readFileSync } = require('fs');

const app = express();
For the sake of example, you could put a static website inside a folder.
const wwwFolder = express.static(path.join(__dirname, '/../www'));
app.use(wwwFolder);
Next, you basically need to read your certificate files:
const key = readFileSync(__dirname + '/ssl/privkey.pem', 'utf8');
const cert = readFileSync(__dirname + '/ssl/cert.pem', 'utf8');
const ca = readFileSync(__dirname + '/ssl/chain.pem', 'utf8');
const serverOptions = { key, cert, ca }; // in TypeScript, annotate this as https.ServerOptions
And finally, you create a https server using those certificates.
const httpsPort = 4201; // any unprivileged port; see the forwarding note below
const server = https.createServer(serverOptions, app);
server.listen(httpsPort, () => console.log(`server is listening on port ${httpsPort}`));
It's usually not possible to listen directly on port 443: unless your process runs as root (or the node binary has the CAP_NET_BIND_SERVICE capability), binding to ports below 1024 is denied. Instead, listen on a port like 4201 and then use port forwarding.
If you use systemd to start/stop your service, then this port forwarding can be defined in your service configuration file. An easy solution:
[Unit]
Description=my.service
After=network.target
[Service]
Type=simple
TimeoutSec=0
User=ubuntu
PermissionsStartOnly=true
ExecStartPre=/sbin/iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-port 4201
ExecStart=/usr/local/bin/node /home/ubuntu/project/server.js
ExecStopPost=/sbin/iptables -t nat -D PREROUTING -p tcp --dport 443 -j REDIRECT --to-port 4201
Restart=on-failure
[Install]
WantedBy=multi-user.target
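To install and activate the unit, something like the following should work (my.service matching the name used above):

sudo cp my.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable --now my.service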
There are various ways to create and refresh your certificate files, so I won't go into detail about that here. But most importantly, you don't need an Amazon certificate to accomplish it. Let's Encrypt is free and easy and works fine.
Usually I also add an HTTP server (without HTTPS) that just redirects to HTTPS. And then I also use port forwarding for that, so I add a 2nd port-forwarding rule in the service file.
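A minimal sketch of such a redirect server (the internal port 4202 is an assumption; the second iptables rule would forward port 80 to it):

const http = require('http');

// Redirect every plain-HTTP request to the HTTPS version of the same URL.
const redirect = http.createServer((req, res) => {
  const host = (req.headers.host || '').split(':')[0]; // strip any port suffix
  res.writeHead(301, { Location: 'https://' + host + req.url });
  res.end();
});

redirect.listen(4202);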

Running nodejs app on Centos7 apache server

I'm trying to run a node web app (built with meteor) on a Centos7 server running EasyApache4 with WHM cPanel. I'm trying to run it on a subdomain off of one of our main websites on port 8080.
When going to the subdomain on port 8080, the connection just times out, but I can see the HTML when using curl to access it.
Does anyone have any ideas why it won't work through the browser, and also how I can get it to look like it's running straight from the subdomain instead of having to go directly to the port?
EDIT
Below is the curl command we are using to view the HTML:
curl http://subdomain.site.com:8080
Doing that brings back the HTML with no problems.
Had the same problem today. I am using Memset Centos7 server with WHM/CPanel, running EasyApache 4.
After trying everything I could think of, I realised that I had a basic firewall setup, which closed all ports that were not listed. After adding port 8080, it worked.
Used this:
sudo iptables -I INPUT 1 -i + -p tcp --dport 8080 -j ACCEPT
I am not 100% certain how secure this is, as I am still researching.

Go, sudo, and apache port 80

I am using the gorilla/mux package in golang, but there are some problems. The first is that I have no permissions to use port 80 in my application, because I cannot run the application with sudo, as $GOPATH is not set when using sudo.
Here is the error I get from my program:
$ go run app.go
2014/06/28 00:34:12 Listening...
2014/06/28 00:34:12 ListenAndServe: listen tcp :80: bind: permission denied
exit status 1
I am unsure if it will even work when I fix the sudo problem, because apache is already using port 80 and I am not sure if both my app and apache can "play nice" together.
Any advice on how to solve this would be great. Thank you.
Quoting elithar's comment,
You have two options: either turn off Apache (because only one service
can bind to a port), or (better!) use Apache's ProxyPass to proxy any
incoming requests to a specific Hostname to your Go server running on
port (e.g.) 8000. The second method is very popular, robust, and you
can use Apache to handle request logging and SSL for you.
Reverse Proxying
Using Apache on port 80 in this way is called a reverse proxy. It receives all incoming connections on port 80 (and/or port 443 for https) and passes them on, usually unencrypted, via internal localhost connections only, to your Go program running on whatever port you choose. 8000 and 8080 are often used. The traffic between Apache and your server is itself HTTP traffic.
Because your Go program does not run as root, it is unable to alter critical functions on the server. Therefore it gives an extra degree of security, should your program ever contain security flaws, because any attacker would gain only limited access.
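As a hedged sketch, the ProxyPass setup might look like this (it requires mod_proxy and mod_proxy_http, e.g. via a2enmod proxy proxy_http; the hostname and port are assumptions):

<VirtualHost *:80>
    ServerName example.com

    # Forward everything to the Go server on localhost:8000
    ProxyPreserveHost On
    ProxyPass / http://127.0.0.1:8000/
    ProxyPassReverse / http://127.0.0.1:8000/
</VirtualHost>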
FastCGI
You can improve the overall performance of the reverse proxying by not using HTTP for the connection from Apache to the Go server. This is done via the FastCGI protocol, originally developed for shell, Perl and PHP scripts, but working well with Go too. To use this, you have to modify your Go server to listen using the fcgi API. Apache FastCGI is also required. The traffic from Apache to your server uses a more compact format (not HTTP) and this puts less load on each end.
The choice of socket type is also open: instead of the usual TCP sockets, it is possible to use Unix sockets, which reduce the processing load even further. I haven't done this in Go myself, but the API supports the necessary bits (see a related question).
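A minimal sketch of the Go side (net/http/fcgi is in the standard library; the Unix socket path is an assumption):

package main

import (
    "net"
    "net/http"
    "net/http/fcgi"
)

func main() {
    // A Unix socket keeps the Apache-to-Go hop off TCP entirely;
    // "tcp", "127.0.0.1:9000" would also work here.
    ln, err := net.Listen("unix", "/tmp/app.sock")
    if err != nil {
        panic(err)
    }
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        w.Write([]byte("Hello from Go over FastCGI"))
    })
    fcgi.Serve(ln, nil) // nil means use http.DefaultServeMux
}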
Nginx
Whilst all the above describes using Apache, there are other server products that can provide a reverse proxy too. The most notable is Nginx (Nginx reverse proxy example), which will give you small but useful performance and scalability advantages. If you have this option on your servers, it is worth the effort to learn and deploy.
Based on this previous answer about environment variables, I was able to solve the sudo problem easily.
https://stackoverflow.com/a/8636711/2576956
sudo visudo
added these lines:
Defaults env_keep +="GOPATH"
Defaults env_keep +="GOROOT"
Using Ubuntu 12.04, by the way. I think the previous answer about using a proxy for port 80 is the correct choice, because after fixing the sudo issue I was given this error about port 80 instead:
$ sudo go run app.go
2014/06/28 01:26:30 Listening...
2014/06/28 01:26:30 ListenAndServe: listen tcp :80: bind: address already in use
exit status 1
Meaning the sudo problem was fixed, but the bind still would not work while another service (apache) was already using port 80.

webserver node.js as non root user

I'm a Linux beginner and have a Linux Ubuntu 12.04 server. I've installed node.js and created a webserver script. That works fine, but it runs as root user.
I know that's not good (root-user & webserver = unsafe).
How can I run the webserver script as an non-root user? Does somebody know a good detailed tutorial or can give me some advice?
You have two options:
Listen on port 80
Run as root, start your app's listen() on port 80, and then immediately drop to non-root. This is what Apache does, for example. Not recommended, since it's easy to get this wrong, and there are lots of other details to handle (writing to log files, initialization required before you can listen, etc.). Not standard practice in node.
Listen on port >= 1024
Run as non-root, listen on a port >= 1024 (say: 8000, or 8080), and have someone else listen on port 80 and relay port 80 traffic to you. That someone else can be:
A load-balancer, NAT, proxy, etc. (Maybe an EC2 load balancer if you're running on EC2, e.g.)
Another http server, say Apache httpd or nginx.
For an nginx example, see this: Node.js + Nginx - What now?
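A minimal nginx relay for this setup might look like the following (the server name and upstream port 8080 are assumptions):

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}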
You can just run node hello.js.

nginx not listening to port 80

I've just installed an Ubuntu 12.04 server and nginx 1.2.7, removed default from sites-enabled, and added my own file into sites-available with a symlink in sites-enabled. Then restarted nginx.
Problem: However, going to the URL does not load the site. netstat -nlp | grep nginx and netstat -nlp | grep 80 both return no results! lsof -i :80 also returns nothing. A dig from another server returns the correct IP address, so it shouldn't be a DNS problem. I was able to connect to apache, whose service I have now stopped. nginx logs also show nothing.
How should I troubleshoot this problem?
/etc/nginx/sites-available/mysite.com
server {
    listen 80;
    server_name www.mysite.com mysite.com *.mysite.com;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    root /var/www/mysite/public;
    index index.php index.html;

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_read_timeout 300;
    }
}
I had this same problem; the solution was that I had not symlinked my site conf file correctly. Try running vim /etc/nginx/sites-enabled/mysite.com and see whether you can get to it. I was getting "Permission Denied."
If not, run:
rm /etc/nginx/sites-enabled/mysite.com
ln -s /etc/nginx/sites-available/mysite.com /etc/nginx/sites-enabled/mysite.com
If your logs are silent on the issue, you may not be including the sites-enabled directory. One simple way to tell that the site is being loaded is to set the error/access log path within your server block to a unique path, reload nginx, and check if the files are created.
Ensure the following include directive exists within the http context in /etc/nginx/nginx.conf.
http {
    ...
    include /etc/nginx/sites-enabled/*;
}
I've found it helpful to approach debugging nginx with the following steps:
1. Make sure nginx is running.
ps aux | grep nginx
2. Check for processes already bound to the port in question.
lsof -n -i:80
3. Make sure nginx has been reloaded.
sudo nginx -t
sudo nginx -s reload
On Mac, brew services restart nginx is not sufficient to reload nginx.
4. Try creating simple responses manually to make sure your location path isn't messed up. This is especially helpful when problems arise while using proxy_pass to forward requests to other running apps.
location / {
    add_header Content-Type text/html;
    return 200 'Here I am!';
}
I ran into the same problem: I got a Failed to load resource: net::ERR_CONNECTION_REFUSED error when connecting over HTTP, but it was fine over HTTPS. I ran netstat -tulpn and saw nginx not binding to port 80 for IPv4. I had done everything described here. It turned out to be something very stupid:
Make sure the sites-available file with the default_server is actually enabled.
Hope this saves some other poor idiot out there some time.
You are probably binding nginx to port 80 twice. Is that your full config file? Don't you have another statement listening to port 80?
A missing semi-colon ; in /etc/nginx/nginx.conf, for example on the line before include /etc/nginx/servers-enabled/*;, can cause that instruction to be bypassed while the nginx -t check still succeeds.
So just check that all instructions in /etc/nginx/nginx.conf end with a semi-colon ;.
I had faced the same problem on my server. Here I am listing how I solved it:
Step 1 – Installing Nginx
sudo apt update
sudo apt install nginx
Step 2 – Adjusting the Firewall
sudo ufw app list
You should get a listing of the application profiles:
Output
Available applications:
Nginx Full
Nginx HTTP
Nginx HTTPS
OpenSSH
As you can see, there are three profiles available for Nginx:
Nginx Full: This profile opens both port 80 (normal, unencrypted web traffic) and port 443 (TLS/SSL encrypted traffic)
Nginx HTTP: This profile opens only port 80 (normal, unencrypted web traffic)
Nginx HTTPS: This profile opens only port 443 (TLS/SSL encrypted traffic)
Since I haven't configured SSL for our server yet in this guide, we will only need to allow traffic on port 80. You can enable this by typing:
sudo ufw allow 'Nginx HTTP'
You can verify the change by typing:
sudo ufw status
Step 3 – Checking your Web Server
systemctl status nginx
Now check port 80. It worked for me; hope it will work for you as well.
Have you checked if your nginx binary really exists? Please check whether
whereis nginx
outputs the binary path, and compare that path with the one in your init script from /etc/init.d/nginx, e.g.
DAEMON=/usr/sbin/nginx
(In my init script "test -x $DAEMON || exit 0" is invoked and in any case this script returned nothing - my binary was completely missing)
While we all think we don't make silly mistakes, we do.
So, if you are looking into NGINX issues and all signs show it should work, you should take a step away from the files and look downstream:
the system firewall, a hardware firewall, or a NAT router/firewall.
For me, this issue was my router. I run a home lab, and so that I can access services behind my router from afar, I use NGINX as a reverse proxy, since my router only handles incoming traffic based on IP and doesn't do any handling of hostnames. I'm sure this is all fairly normal.
In any case, my issue cropped up as I was securing my network a few days ago: while removing some port forwarding that isn't needed any longer, I accidentally removed port 80.
Yes it was as simple as forwarding that port again to NGINX and all was fixed.
I will now walk away with my head hung in extreme shame though I leave this answer to show my gratitude to the people in this thread that lead me to find my own error.
So thank you.
In my case those network command's outputs showed nginx was correctly binding to port 80, yet the ports weren't externally accessible or visible with nmap.
While I suspected a firewall, it turns out that old iptables rules on the machine were redirecting traffic from those ports and conflicting with nginx. Use sudo iptables-save to view all currently applicable rules.
I was facing the same issue. Just reloading nginx helped me:
sudo nginx -t
If you get an error, just delete the log.txt file, then:
sudo nginx -s reload
