Application stops after configuring nginx (docker) for https - node.js

I have followed this tutorial for deploying docker containers on AWS EC2 instance:
https://www.digitalocean.com/community/tutorials/how-to-secure-a-containerized-node-js-application-with-nginx-let-s-encrypt-and-docker-compose
and after reaching step 5 (where nginx is configured for HTTPS), the application just stops working. Here's my application: www.alphadevop.co
Here’s my nginx configuration:
https://github.com/cyrilcabo/alphadevelopment/blob/master/nginx-conf/nginx.conf
And here’s my docker-compose.yml:
https://github.com/cyrilcabo/alphadevelopment/blob/master/docker-compose.yml
Here are the webserver logs: https://i.stack.imgur.com/oawtD.png

Silly mistake: port 443 wasn't allowed through to my application. I was confused because when I checked on my server, port 443 appeared to be open, but https://www.yougetsignal.com/tools/open-ports/ reported it as closed. It turned out I needed to add an inbound rule to my AWS EC2 instance's security group to allow port 443.
Credits here: NGINX SSL Timeout
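For anyone hitting the same thing, that inbound rule can be added from the EC2 console (Security Groups → Inbound rules) or with the AWS CLI; the security group ID below is only a placeholder:

aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 443 \
    --cidr 0.0.0.0/0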

Related

Cannot get AWS Elastic Beanstalk single instance (no load balancer) to listen on 443

No matter what I do I cannot get my application to listen on port 443 (https). I simply need nginx to forward traffic to my app which is running https on port 8080, but nginx will only listen on port 80 and will refuse to forward to my app unless it is also running on port 80.
I've followed the instructions in this article but it makes no difference.
I do not have a domain name yet, I am simply using a self signed cert so I don't believe certbot will help here.
Please help I am so frustrated hahaaaaaa
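For readers with the same setup, a minimal nginx server block that terminates TLS with a self-signed certificate and forwards to an app serving HTTPS on port 8080 might look like the sketch below; the certificate paths are assumptions, and on Elastic Beanstalk the file would need to go wherever your platform expects nginx configuration extensions:

server {
    listen 443 ssl;
    server_name _;

    # self-signed certificate; paths are placeholders
    ssl_certificate     /etc/ssl/certs/selfsigned.crt;
    ssl_certificate_key /etc/ssl/private/selfsigned.key;

    location / {
        # the app in the question serves HTTPS on port 8080
        proxy_pass https://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
        # skipped only because the upstream cert is self-signed
        proxy_ssl_verify off;
    }
}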

Can't access nginx server through public IP

I'm learning to deploy my .NET Core app to a Linux VPS. After installing and starting nginx, I can't access the server through its IP. Port 80 is already open. Everything seems right, but the nginx default page doesn't show.
(Screenshots of the port check, the nginx status, nginx.conf, and sites-enabled/default were attached to the original post.)
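For anyone debugging the same symptom, the checks behind those screenshots are roughly the following (a sketch; firewall tooling varies by distro):

# is nginx actually running?
sudo systemctl status nginx
# is anything listening on port 80?
sudo ss -tlnp | grep ':80'
# does the default page answer locally?
curl -I http://127.0.0.1
# is a host firewall still blocking the port?
sudo ufw status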

Nginx Proxy Manager (Docker) + mail server

I have a server running Ubuntu with Docker.
I have a Docker instance running Nginx Proxy Manager to serve my multiple domains.
I want to run a mail server, but since Nginx is using port 443 for HTTPS and 80 for HTTP, I can't install any Docker images that also need ports 80 and 443.
For example, https://poste.io/doc/getting-started#download also uses the same ports.
Any idea how to have a single IP and host both web and mail?
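One common workaround, sketched below, is to keep 80/443 on Nginx Proxy Manager and publish the mail server's web UI on alternate host ports that the proxy manager then forwards to; the mail protocol ports themselves (25, 465, 587, 993, 995) don't clash with 80/443. The poste.io image name, volume path, and port list here are assumptions recalled from its docs, so verify them before use:

# mail protocol ports map straight through; only the web UI is remapped
docker run -d --name mailserver -h mail.example.com \
  -p 25:25 -p 465:465 -p 587:587 -p 993:993 -p 995:995 \
  -p 8080:80 -p 8443:443 \
  -v /opt/mail-data:/data \
  analogic/poste.io
# then add a proxy host in Nginx Proxy Manager pointing mail.example.com at 8080/8443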

AWS SSL on EC2 instance without Load Balancer - NodeJS

Is it possible to have an EC2 instance running, listening on port 443, without a load balancer? I'm trying right now in my Node.js app, but it doesn't work when I call the page using https://. However, if I set it to port 80, everything works fine with http://.
I had it working earlier with a load balancer and Route 53, but I don't want to pay $18/mo for an ELB anymore, especially when I only have one server running.
Thanks for the help
You're right, if it's only the one instance and you feel like you don't need to be prepared for large increases in traffic, you shouldn't have to pay for an ELB.
From a high-level standpoint you'll have to go through the following steps:
Install an nginx server to serve your NodeJS application.
Install your SSL certificates on the nginx server.
-- Either do this manually, ssh'ing into the server and installing the certs as described here.
-- OR include the necessary files in your application (I believe this only works for elastic beanstalk?) which will overwrite the nginx configuration files automatically as described here.
Make sure nginx is listening on port 443 (should've been completed in the previous step)
Open the EC2 server's security group corresponding to where you want traffic to enter the server (port 80 / port 443)
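As a sketch of steps 2-4 (the certificate paths and the app's local port are assumptions, not taken from the question), the nginx site config could look something like:

server {
    listen 443 ssl;
    server_name example.com;

    # wherever the certificates from step 2 were installed
    ssl_certificate     /etc/nginx/ssl/fullchain.pem;
    ssl_certificate_key /etc/nginx/ssl/privkey.pem;

    location / {
        # assumes the Node.js app listens locally on port 3000
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}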
Is it possible? Yes of course. It sounds like you had an SSL certificate installed on the ELB and now you've deleted the ELB. You will have to install an SSL certificate on the EC2 server now. You can't use AWS ACM SSL certificates without an ELB or CloudFront distribution. If you don't want to pay for either of those services you will have to obtain an SSL certificate elsewhere.
For our projects (much like the other poster described) we used this setup:
nginx as load balancer and proxy for all calls on port 80 (no direct call to node.js server on port 3000 which is closed to the public)
pm2 as process manager for Node.js (and for deployment)
keymetrics.io for monitoring
Nodejs v6.9.3 boron/lts (through NVM)
Mongodb 3.2 with WiredTiger Engine (Compose.io)
Amazon EC2 instances for hosting (Amazon Linux not Ubuntu)
This setup works very well for us, and with it we're able to set up SSL without using the Amazon load balancers.
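For the pm2 part of that list, a minimal sketch (the script and process names are placeholders, not the poster's actual project):

pm2 start server.js -i max --name api   # cluster mode across all CPU cores
pm2 save                                # persist the current process list
pm2 startup                             # generate an init script so pm2 restarts on reboot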
Once you have your certificate files, it's not so hard. You can even do this without Nginx.
Let's first create an express webserver
// imports used throughout the example
import express from "express";
import * as https from "https";
import * as path from "path";
import { readFileSync } from "fs";

const app = express();
For the sake of example, you could put a static website inside a folder.
const wwwFolder = express.static(path.join(__dirname, '/../www'));
app.use(wwwFolder);
Next, you basically need to read your certificate files:
const key = readFileSync(__dirname + '/ssl/privkey.pem', 'utf8');
const cert = readFileSync(__dirname + '/ssl/cert.pem', 'utf8');
const ca = readFileSync(__dirname + '/ssl/chain.pem', 'utf8');
const serverOptions: https.ServerOptions = { key, cert, ca };
And finally, you create an HTTPS server using those certificates.
const httpsPort = 4201; // matches the port-forwarding rule in the service file below
const server = https.createServer(serverOptions, app);
server.listen(httpsPort, () => console.log("createWebServers", `server is listening on port ${httpsPort}`));
Ports below 1024 are privileged, so the Node.js process can't bind directly to port 443 unless it runs as root, which you don't want for security reasons. Instead, listen on an unprivileged port such as 4201 and set up port forwarding.
If you use systemd to start/stop your service, then this port forwarding can be defined in your service configuration file. An easy solution:
[Unit]
Description=my.service
After=network.target
[Service]
Type=simple
TimeoutSec=0
User=ubuntu
PermissionsStartOnly=true
ExecStartPre=/sbin/iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-port 4201
ExecStart=/usr/local/bin/node /home/ubuntu/project/server.js
ExecStopPost=/sbin/iptables -t nat -D PREROUTING -p tcp --dport 443 -j REDIRECT --to-port 4201
Restart=on-failure
[Install]
WantedBy=multi-user.target
There are various ways to create and refresh your certificate files, so I won't go into detail about that here. Most importantly, you don't need an Amazon certificate to accomplish it. Let's Encrypt is free, easy, and works fine.
Usually I also add an HTTP server (without HTTPS) that just redirects to HTTPS, and I use port forwarding for that as well, so I add a second port-forwarding rule in the service file.
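A sketch of that HTTP-to-HTTPS redirect and the second forwarding rule (port 4200 is just a placeholder chosen to mirror the 4201 example above):

// alongside the HTTPS server above; also needs: import * as http from "http";
const httpPort = 4200; // forwarded from port 80, like 4201 is from 443
const httpApp = express();
httpApp.use((req, res) => res.redirect(`https://${req.headers.host}${req.url}`));
http.createServer(httpApp).listen(httpPort);

And the extra lines in the service file, next to the existing iptables rules:

ExecStartPre=/sbin/iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 4200
ExecStopPost=/sbin/iptables -t nat -D PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 4200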

Configure subdomain port forwarding on EC2 VPC

I am running a Linux instance on EC2. It is running Apache on port 80 and a custom nodejs server on port 8080.
I would like to use a subdomain to redirect requests from port 80 to port 8080.
Traffic to nodejs.mydomain.com:80 should be redirected to the EC2 server on port 8080.
Is that possible using AWS VPC?
I do not want to configure Apache as a proxy.
More details about the current configuration:
The instance has an Elastic IP address attached.
I'm using Route 53 to point my domain to this IP.
The instance is running inside a VPC.
The server is in production, so I'm looking for minimal downtime.
Thanks for your help.
