AWS SSL on EC2 instance without Load Balancer - Node.js

Is it possible to have an EC2 instance running, listening on port 443, without a load balancer? I'm trying it right now in my Node.js app, but it doesn't work when I call the page using https://. However, if I set it to port 80, everything works fine with http://.
I had it working earlier with a load balancer and route53, but I don't want to pay $18/mo for an ELB anymore, especially when I only have one server running.
Thanks for the help

You're right: if it's only the one instance and you don't need to be prepared for large increases in traffic, you shouldn't have to pay for an ELB.
From a high-level standpoint you'll have to go through the following steps:
Install an nginx server to serve your Node.js application.
Install your SSL certificates on the nginx server.
-- Either do this manually, SSHing into the server and installing the certs as described here.
-- OR include the necessary files in your application (I believe this only works for Elastic Beanstalk?), which will overwrite the nginx configuration files automatically as described here.
Make sure nginx is listening on port 443 (should've been completed in the previous step)
Open the ports in the EC2 server's security group corresponding to where you want traffic to enter the server (port 80 / port 443). A minimal nginx config along these lines is sketched below.
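For reference, a minimal nginx server block covering steps 2-4 might look like the following sketch; the domain name, certificate paths, and the upstream port 3000 are assumptions, not values from the question:
server {
    listen 443 ssl;
    server_name example.com;

    # paths are assumptions; point these at your installed certificate files
    ssl_certificate     /etc/nginx/ssl/cert_bundle.pem;
    ssl_certificate_key /etc/nginx/ssl/privkey.pem;

    location / {
        # proxy to the Node.js app, assumed to be listening on port 3000
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}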

Is it possible? Yes of course. It sounds like you had an SSL certificate installed on the ELB and now you've deleted the ELB. You will have to install an SSL certificate on the EC2 server now. You can't use AWS ACM SSL certificates without an ELB or CloudFront distribution. If you don't want to pay for either of those services you will have to obtain an SSL certificate elsewhere.

For our projects (much like the other poster described) we used this setup:
nginx as load balancer and proxy for all calls on port 80 (no direct calls to the Node.js server on port 3000, which is closed to the public)
pm2 as process manager for Node.js (and for deployment)
keymetrics.io for monitoring
Node.js v6.9.3 Boron/LTS (through NVM)
MongoDB 3.2 with WiredTiger engine (Compose.io)
Amazon EC2 instances for hosting (Amazon Linux, not Ubuntu)
This setup works very well for us, and with it we're able to set up SSL without using the Amazon load balancers.
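To illustrate the pm2 piece of that stack, a minimal ecosystem file might look like this sketch (the app name, script path, and port are assumptions):
// ecosystem.config.js: a minimal pm2 process file
module.exports = {
  apps: [{
    name: 'api',              // hypothetical app name
    script: './server.js',    // hypothetical entry point
    instances: 1,
    env: {
      NODE_ENV: 'production',
      PORT: 3000              // closed to the public; nginx proxies to it, per the list above
    }
  }]
};
It would then be started with pm2 start ecosystem.config.js.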

Once you have your certificate files, it's not so hard. You can even do this without Nginx.
Let's first create an Express web server. (The type annotation below suggests TypeScript; imports are added here for completeness.)
import express from 'express'; // assumes esModuleInterop
import * as path from 'path';
import * as https from 'https';
import { readFileSync } from 'fs';

const app = express();
For the sake of example, you could put a static website inside a folder.
const wwwFolder = express.static(path.join(__dirname, '/../www'));
app.use(wwwFolder);
Next, you basically need to read your certificate files:
const key = readFileSync(__dirname + '/ssl/privkey.pem', 'utf8');
const cert = readFileSync(__dirname + '/ssl/cert.pem', 'utf8');
const ca = readFileSync(__dirname + '/ssl/chain.pem', 'utf8');
const serverOptions: https.ServerOptions = { key, cert, ca };
And finally, you create an HTTPS server using those certificates:
const httpsPort = 4201; // matches the port-forwarding setup described below
const server = https.createServer(serverOptions, app);
server.listen(httpsPort, () => console.log("createWebServers", `server is listening on port ${httpsPort}`));
For security reasons it's usually not possible to listen directly on port 443: binding to ports below 1024 requires elevated privileges. Instead, use a port like 4201 and then set up port forwarding.
If you use systemd to start/stop your service, then this port forwarding can be defined in your service configuration file. An easy solution:
[Unit]
Description=my.service
After=network.target
[Service]
Type=simple
TimeoutSec=0
User=ubuntu
PermissionsStartOnly=true
ExecStartPre=/sbin/iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-port 4201
ExecStart=/usr/local/bin/node /home/ubuntu/project/server.js
ExecStopPost=/sbin/iptables -t nat -D PREROUTING -p tcp --dport 443 -j REDIRECT --to-port 4201
Restart=on-failure
[Install]
WantedBy=multi-user.target
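Assuming the unit file is saved as /etc/systemd/system/my.service (the name is an assumption), it can then be enabled and started with:
sudo systemctl daemon-reload
sudo systemctl enable my.service
sudo systemctl start my.service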
There are various ways to create and refresh your certificate files, so I won't go into detail about that here. Most importantly, you don't need an Amazon certificate to accomplish this. Let's Encrypt is free, easy, and works fine.
Usually I also add an HTTP server (without HTTPS) that redirects to HTTPS, and I use port forwarding for that as well, adding a second port-forwarding rule in the service file.
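A minimal sketch of that redirect server, assuming the HTTP side listens on port 4200 behind a second iptables rule for port 80 (both ports are assumptions):
// assumes: import * as http from 'http';
// paired with a second rule in the service file, e.g.:
// ExecStartPre=/sbin/iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 4200
const httpPort = 4200;
http.createServer((req, res) => {
  // permanent redirect to the HTTPS version of the same URL
  res.writeHead(301, { Location: `https://${req.headers.host}${req.url}` });
  res.end();
}).listen(httpPort);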

Related

Application stops after configuring nginx (docker) for https

I have followed this tutorial for deploying Docker containers on an AWS EC2 instance:
https://www.digitalocean.com/community/tutorials/how-to-secure-a-containerized-node-js-application-with-nginx-let-s-encrypt-and-docker-compose
and after reaching step 5 (where nginx is configured for HTTPS), the application just stops working. Here's my application: www.alphadevop.co
Here’s my nginx configuration:
https://github.com/cyrilcabo/alphadevelopment/blob/master/nginx-conf/nginx.conf
And here’s my docker-compose.yml:
https://github.com/cyrilcabo/alphadevelopment/blob/master/docker-compose.yml
Here are the webserver logs: https://i.stack.imgur.com/oawtD.png
Silly mistake: port 443 wasn't allowed for my application. I was confused because when I checked on my server, port 443 was open. Then I checked here, https://www.yougetsignal.com/tools/open-ports/, which said it was closed. I then found out that there's an inbound rule on the AWS EC2 instance to allow port 443.
Credits here: NGINX SSL Timeout

How do I make a Node.js project publicly accessible on port 3000?

I have a Node.js/Express project on an Alibaba Cloud based Ubuntu server.
When I run the project and access it with curl localhost:3000 or curl 127.0.0.1:3000, it works!
When I access it with the public IP, e.g. curl 192.x.x.x:3000, it doesn't work, even though I have changed the Express code to server.listen(3000, "0.0.0.0") or server.listen(3000, "192.x.x.x").
FYI, I have Apache on this server, and when I access it over the Internet with the public IP there's no problem.
What can I do to solve this problem? Thanks beforehand.
PS: 192.x.x.x is my public IP, and it works when accessing the Apache project.
Issue the following command to open port 3000 for TCP traffic.
sudo ufw allow 3000/tcp
You have to configure your security group and create an inbound rule to allow port 3000. Follow this guideline:
https://www.alibabacloud.com/help/doc-detail/25471.htm
Make sure you allow TCP traffic or all traffic from all sources to the port 3000 as the inbound rule.
The fact that you can access your service locally but not publicly points to two possible problems:
The server running your application has blocked port 3000.
You have not configured your server to map port 80 to port 3000.
It is highly possible that an essential part of your server configuration has not been done. One quick check is sketched below.
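To tell the two cases apart, check which address the app is actually bound to on the server (ss comes with iproute2; netstat shows the same thing on older systems):
ss -tlnp | grep 3000
# 0.0.0.0:3000 means the app accepts external connections;
# 127.0.0.1:3000 means it only listens on localhost
If it shows 0.0.0.0:3000, the app side is fine and the block is in the firewall or the cloud security group.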

Nginx is refusing to connect on AWS EC2

I'm trying to use nginx to set up a simple Node.js server. I'm running the server in the background on port 4000, and my nginx config file is:
server {
    listen 80;
    listen [::]:80;

    server_name 52.53.196.173;

    location / {
        include /etc/nginx/proxy_params;
        proxy_pass http://127.0.0.1:4000;
    }
}
I saved it in /etc/nginx/sites-available and also symlinked it to sites-enabled; the nginx.conf file already has the include line to load files from sites-enabled. Then I restarted the service using:
sudo service nginx restart
I tried going to 52.53.196.173 and it refuses to connect; however, going to 52.53.196.173:4000 works. I'm trying to make it listen on port 80 with nginx. I tried putting my .ml domain as server_name with no luck, and I have the IP 52.53.196.173 as the A record in the domain's DNS settings. I'm doing this on an AWS EC2 instance running Ubuntu Server 16.04. I even tried the full EC2 public DNS URL with no luck. Any ideas?
Edit: I solved it by moving the file directly in sites-enabled instead of a symlink
There are a few possible causes. First of all, you need to verify that the nginx server is running and listening on port 80. You can check the listening ports using the following command:
netstat -tunlp
Then you need to check your server firewall and also the SELinux policies (or disable SELinux for a test).
Then you need to verify that the AWS security group is configured to allow HTTP/HTTPS connections on port 80.
P.S.: Output from the following commands and configurations will be helpful for troubleshooting:
netstat -tunlp
sestatus
iptables -L
* AWS Security Group rules
* Nginx configurations (including the main configuration, if changed)
P.S.: The OP fixed the problem by moving the config file directly into the sites-enabled directory. Refer to the comments for more info if you are having the same issue.
Most probably port 80 is not open in your security group, or nginx is not running to accept the connections. Please post the nginx status and check the security group.
Check the following:
In the security group, add HTTP (80) and HTTPS (443) in the inbound section with source 0.0.0.0/0. (The original answer included screenshots of the port 80 and port 443 rules.)
In the Network ACL, allow HTTP and HTTPS inbound, and set a custom TCP rule outbound. (The original answer included screenshots of the inbound and outbound rules.)
Assign an Elastic IP to the EC2 instance and use that IP for public access.
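If you prefer the CLI over the console, the equivalent inbound rules can be added with something like the following (the security group ID is a placeholder):
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 443 --cidr 0.0.0.0/0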

Node.js timed out on all ports when hosting on GoDaddy server

I've been trying to run my Node.js/Express application on my GoDaddy server, but any port I use times out. I've tried using the application on my local device and it works fine. A snippet of my connection is below.
var app = express();
app.listen(8080, function() {
    console.log("Listening on port " + 8080);
});
When I run the program through ssh, I get no errors
node index.js
Listening on port 8080
But when I go to the corresponding location in my browser, I get:
xxx took too long to respond.
ERR_CONNECTION_TIMED_OUT
I'm pretty sure it has to do with running on the GoDaddy server. If anyone has experience using this service with Node.js, is there a specific port I should be using, or is there any other setup I should do?
You have a VPS with GoDaddy, right? So I assume you also have root access.
SSH into your GoDaddy server as root and check whether the Node.js app actually listens on that port:
netstat -tunlp | grep 8080
If you see any result there for the Node.js app and that port, then the port is open.
By default, there should be a firewall on your server which blocks most ports and allows only the necessary incoming traffic.
You can check if there is any rule for that port by issuing the command below:
iptables -nvL | grep 8080
If no result is returned, then you have to add an iptables rule to allow access to that port. There are multiple methods to do that (examples follow the list):
permit full access from your IP access to the server
permit your ip to access port 8080 on the godaddy server
permit outside world to access port 8080 on your server
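Sketches of those three options as raw iptables rules (203.0.113.10 is a placeholder for your own IP address):
# 1. permit full access from your IP
iptables -I INPUT -s 203.0.113.10 -j ACCEPT
# 2. permit your IP to access port 8080 only
iptables -I INPUT -p tcp -s 203.0.113.10 --dport 8080 -j ACCEPT
# 3. permit the outside world to access port 8080
iptables -I INPUT -p tcp --dport 8080 -j ACCEPT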
You could read any iptables guide; it's pretty easy to add/edit/delete firewall rules. Most cPanel/WHM servers come with CSF Firewall (which is based on iptables and Perl scripts).
In order to allow an IP address through your firewall (if you have CSF Firewall installed), issue the following command:
csf -a ip-address
I hope that helps!

Best practices when running Node.js with port 80 (Ubuntu / Linode) [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 6 years ago.
I am setting up my first Node.js server on a cloud Linux node and I am fairly new to the details of Linux admin. (BTW I am not trying to use Apache at the same time.)
Everything is installed correctly, but I found that unless I use the root login, I am not able to listen on port 80 with node. However, I would rather not run it as root for security reasons.
What is the best practice to:
Set good permissions / user for node so that it is secure / sandboxed?
Allow port 80 to be used within these constraints.
Start up node and run it automatically.
Handle log information sent to console.
Any other general maintenance and security concerns.
Should I be forwarding port 80 traffic to a different listening port?
Thanks
Port 80
What I do on my cloud instances is I redirect port 80 to port 3000 with this command:
sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3000
Then I launch my Node.js on port 3000. Requests to port 80 will get mapped to port 3000.
You should also edit your /etc/rc.local file and add that line minus the sudo. That will add the redirect when the machine boots up. You don't need sudo in /etc/rc.local because the commands there are run as root when the system boots.
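As a sketch, /etc/rc.local might then look like this (the interface name eth0 and port 3000 are assumptions):
#!/bin/sh -e
# redirect incoming port 80 to the Node.js port at boot
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3000
exit 0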
Logs
Use the forever module to launch your Node.js app. It will make sure the app restarts if it ever crashes, and it will redirect console logs to a file.
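For example (the log file paths are assumptions):
forever start -l /var/log/app/forever.log -o /var/log/app/out.log -e /var/log/app/err.log server.js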
Launch on Boot
Add your Node.js start script to the file you edited for port redirection, /etc/rc.local. That will run your Node.js launch script when the system starts.
DigitalOcean & other VPS providers
This not only applies to Linode, but to DigitalOcean, AWS EC2, and other VPS providers as well. However, on RedHat-based systems /etc/rc.local is /etc/rc.d/rc.local.
Give Safe User Permission To Use Port 80
Remember, we do NOT want to run your applications as the root user, but there is a hitch: your safe user does not have permission to use the default HTTP port (80). Your goal is to be able to publish a website that visitors can use by navigating to an easy-to-use URL like http://ip/.
Unfortunately, unless you sign on as root, you'll normally have to use a URL like http://ip:port, where the port number is > 1024.
A lot of people get stuck here, but the solution is easy. There are a few options, but this is the one I like. Type the following commands:
sudo apt-get install libcap2-bin
sudo setcap cap_net_bind_service=+ep `readlink -f \`which node\``
Now, when you tell a Node application that you want it to run on port 80, it will not complain.
Check this reference link
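You can verify that the capability was applied with getcap (shipped in the same libcap2-bin package):
getcap $(readlink -f $(which node))
# the output should list cap_net_bind_service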
Drop root privileges after you bind to port 80 (or 443).
This allows port 80/443 to remain protected, while still preventing you from serving requests as root:
function drop_root() {
    // switch the process to an unprivileged user/group once the socket is bound
    process.setgid('nobody');
    process.setuid('nobody');
}
A full working example using the above function:
var process = require('process');
var http = require('http');

var server = http.createServer(function(req, res) {
    res.write("Success!");
    res.end();
});

// note: the process must be started as root for setgid/setuid to succeed
server.listen(80, null, null, function() {
    console.log('User ID:', process.getuid() + ', Group ID:', process.getgid());
    drop_root();
    console.log('User ID:', process.getuid() + ', Group ID:', process.getgid());
});
See more details at this full reference.
For port 80 (which was the original question), Daniel is exactly right. I recently moved to HTTPS and had to switch from iptables to a light nginx proxy managing the SSL certs. I found a useful answer, along with a gist by gabrielhpugliese, on how to handle that. Basically, I:
Created an SSL Certificate Signing Request (CSR) via OpenSSL
openssl genrsa 2048 > private-key.pem
openssl req -new -key private-key.pem -out csr.pem
Got the actual cert from one of these places (I happened to use Comodo)
Installed nginx
Changed the location block in /etc/nginx/conf.d/example_ssl.conf to:
location / {
    proxy_pass http://localhost:3000;
    proxy_set_header X-Real-IP $remote_addr;
}
Formatted the cert for nginx by cat-ing the individual certs together, linked to it in my nginx example_ssl.conf file (and uncommented stuff, got rid of 'example' in the name, ...):
ssl_certificate /etc/nginx/ssl/cert_bundle.cert;
ssl_certificate_key /etc/nginx/ssl/private-key.pem;
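The bundle referenced above would be produced by concatenating the certificates, server cert first (the file names here are placeholders):
cat your_domain.crt intermediate.crt > /etc/nginx/ssl/cert_bundle.cert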
Hopefully that can save someone else some headaches. I'm sure there's a pure-node way of doing this, but nginx was quick and it worked.
