"Timeout during connect (likely firewall problem)" while renewing Certbot - linux

I am facing the following error when I try to renew my SSL certificate using
certbot renew
Challenge failed for domain ***********.com
Some challenges have failed.
The following errors were reported by the server:
Domain: arjunbroker.com
Type: connection
Detail: Fetching
http://arjunbroker.com/.well-known/acme-challenge/F9nlyrRQBpJGOpPLHGPCj1vzdJOd_rBISU7q2aX7t_o:
Timeout during connect (likely firewall problem)
I have checked UFW and firewalld, and both ports 80 and 443 are open.
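One quick way to reproduce what the validation server sees is to request a file under the challenge path from a network outside your own (the path below is a made-up example, not the real token):

curl -I http://arjunbroker.com/.well-known/acme-challenge/test

If this times out rather than returning a 404, the request is not reaching your web server at all, which points at a firewall or port-redirect problem rather than at Certbot itself.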

I finally realised that prior to installing SSL on this server, I used to forward port 80 to port 8080 using
sudo /sbin/iptables -t nat -I PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080
So I simply changed the rule to forward port 80 back to port 80, i.e. undid the redirect.
Lesson learnt: for Certbot's HTTP challenge to work, port 80 traffic has to actually reach the web server that serves the challenge, so any port 80 forwarding must be correct.
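For reference, a rule added with -I (or -A) can be removed with -D plus the same match specification; a minimal sketch, assuming the redirect above is the offending rule:

sudo /sbin/iptables -t nat -D PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080
sudo /sbin/iptables -t nat -L PREROUTING -n --line-numbers    # confirm nothing else redirects port 80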

I finally realized that I ONLY had HTTP/HTTPS open to my test client machines. I opened them wide for the certbot run and then closed them again. I'll try to determine which IPs need to be open for the Let's Encrypt probes so I can automate the certbot renewals. (Note: Let's Encrypt does not publish a fixed list of validation IP addresses, so whitelisting by IP is not a reliable approach.)

For me the issue was that Let's Encrypt uses IPv6 for the HTTP challenge when the domain has an AAAA record, and my site worked fine over IPv4 but not over IPv6 (I had it set up wrong). You can use an online IPv6 test site to check your IPv6 setup.
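You can also check this from the command line; a sketch, assuming dig and curl are installed (replace example.com with your domain):

dig AAAA example.com +short      # does the domain advertise an IPv6 address at all?
curl -6 -I http://example.com/   # can the site actually be reached over IPv6?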

I solved this by disabling 'Permanent SEO-safe 301 redirect from HTTP to HTTPS' (in Hosting Settings for Plesk on CentOS Linux 7.9).
Let's Encrypt wouldn't issue or renew its SSL certificates otherwise. I spent a day reconfiguring DNS, panel.ini, the firewall, etc., and eventually pinpointed this setting as the specific cause.
The issue surfaced about 10 months ago and we only realised what was happening recently.

I fixed this in AWS EC2 by updating the instance's Security Group to allow inbound HTTP (port 80) and HTTPS (port 443), as shown below.
More about EC2 Security Groups: https://docs.aws.amazon.com/pt_br/AWSEC2/latest/UserGuide/ec2-security-groups.html
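The same change can be scripted with the AWS CLI; a sketch, where sg-0123456789abcdef0 stands in for your security group ID:

aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 443 --cidr 0.0.0.0/0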

Related

Nginx is refusing to connect on AWS EC2

I'm trying to use nginx to set up a simple Node.js server. I'm running the server in the background on port 4000, and my nginx config file is
server {
    listen 80;
    listen [::]:80;
    server_name 52.53.196.173;

    location / {
        include /etc/nginx/proxy_params;
        proxy_pass http://127.0.0.1:4000;
    }
}
I saved it in /etc/nginx/sites-available and also symlinked it into sites-enabled; nginx.conf already has the include line to load files from sites-enabled. Then I restarted the service using
sudo service nginx restart
I tried going to 52.53.196.173 and it refuses to connect. However, going to 52.53.196.173:4000 works, but I'm trying to make it listen on port 80 with nginx. I tried putting my .ml domain as server_name with no luck, and I have the IP 52.53.196.173 as the A record in the domain's DNS settings. I'm doing this on an AWS EC2 instance running Ubuntu Server 16.04; I even tried the full EC2 public DNS URL with no luck. Any ideas?
Edit: I solved it by moving the file directly into sites-enabled instead of using a symlink.
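For reference, a symlink normally works too, as long as it points at the absolute path and nginx parses the result; a sketch, where myapp is a placeholder file name:

sudo ln -s /etc/nginx/sites-available/myapp /etc/nginx/sites-enabled/myapp
sudo nginx -t                  # verifies the configuration is actually parsed and loaded
sudo service nginx restart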
There are a few possible things. First of all, you need to verify that the nginx server is running and listening on port 80. You can check the listening ports using the following command:
netstat -tunlp
Then check your server firewall and also the SELinux policies (or disable SELinux for a test).
Then verify that the AWS security group is configured to allow HTTP/HTTPS connections on port 80 (and 443).
P.S.: Outputs from the following commands and configurations will be helpful for troubleshooting:
* netstat -tunlp
* sestatus
* iptables -L
* AWS Security Group rules
* nginx configurations (including the main configuration, if changed)
P.S.: The OP fixed the problem by moving the config file directly into the sites-enabled directory; refer to the comments for more info if you are having the same issue.
Most probably port 80 is not open in your security group, or nginx is not running to accept the connections. Please post the nginx status and check the security group.
Check the following:
In the security group, add HTTP (80) and HTTPS (443) to the inbound rules with source 0.0.0.0/0.
In the Network ACL, allow HTTP and HTTPS inbound, and add a custom TCP rule outbound for the return traffic.
Assign an Elastic IP to the EC2 instance and serve the public traffic on that IP.
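The Elastic IP step can also be done from the AWS CLI; a sketch, with placeholder IDs:

aws ec2 allocate-address --domain vpc
aws ec2 associate-address --instance-id i-0123456789abcdef0 --allocation-id eipalloc-0123456789abcdef0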

Run node app with SSL on 443 port (on 80 is working)

It's my first time configuring a server running on Amazon EC2.
I figured out how to run my Node app on port 80, but now I'm trying to run it on port 443 with a Let's Encrypt SSL certificate. To make it work on port 80, I added
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 3000
and
sudo iptables -t nat -A OUTPUT -p tcp --dport 80 -j REDIRECT --to-ports 3000
and everything worked fine. But now, after installing Let's Encrypt, I try to do the same thing with port 443 instead of 80 and it's not working.
Let's Encrypt automatically configured all the files for me, so the redirect from HTTP to HTTPS works fine, and when my iptables rules are empty I see the default Ubuntu website on https://. When I run the lines above with port 443, the app is still not working (the browser can't even load anything). It only works via http://...:3000.
I've added 443 port to Security Groups on EC2.
What can I do? Thanks.
You need to check your security group inbound/outbound rules and see which hosts port 443 is open to. A valid but dangerous configuration, just for testing, is to allow everything inbound and outbound, to see if the problem is in your Security Group.
Beyond that, you need to be sure the bound port is actually listening. Are you using Amazon Linux?
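A detail worth checking, since the symptom matches: iptables only redirects packets, it does not add TLS, so the Node process behind a port 443 redirect must itself serve HTTPS with the Let's Encrypt certificate. A quick sketch of checks, assuming the app listens on port 3000:

sudo netstat -tunlp | grep 3000                         # is the app actually listening?
sudo iptables -t nat -L PREROUTING -n --line-numbers    # inspect the existing redirects
sudo iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-ports 3000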

How to make Wildfly 10.1.0 work in port 80 and 443 (SSL) with h2 (HTTP/2) protocol in Linux Ubuntu 16.04

I'm trying to make WildFly work on Ubuntu in production.
I was able to make it work on its standard 8080 and 8443 ports, and managed to redirect port 80 to 8080 and port 443 to 8443 using iptables on Ubuntu.
But with this redirection in place, the page opens over HTTPS but the h2 protocol (HTTP/2) and gzip do not work.
If I go directly to the standard WildFly port (www.example.com:8443), gzip and h2 work perfectly.
Here is the iptables redirect command:
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 8080
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j REDIRECT --to-port 8443
I've tried using nginx to do the redirect and the same problem happens.
I also tried configuring WildFly to use ports 80 and 443 directly, but Ubuntu does not allow it (ports below 1024 require elevated privileges).
I have the following firewall status (screenshot of ufw status verbose on the server):
Is there a way to run WildFly directly on ports 80 and 443, or to make the redirect work with h2 and gzip?
System:
Ubuntu : 16.04.1
Wildfly : 10.1.0.Final
Please help me solve this problem.
Thank you very much.
I just found the solution.
The problem was my Windows 10 anti-virus (more specifically, BitDefender 2017).
All the tests I had done were on Windows 10; the moment I switched to Linux (I have dual boot), the site finally got HTTP/2.
Then I saw that the issuer of the certificate actually being used was: Bitdefender Personal CA.Net-Defender.
It was at this point that I realized that my Let's Encrypt certificate was being replaced by a BitDefender certificate, because the anti-virus intercepts TLS connections.
SOLUTION: In BitDefender, open the module settings, go to the internet module, and disable the option to verify SSL certificates. Restart your browser and you're done.
So beware when testing a website from a machine running an anti-virus.
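Regarding the question's side issue that Ubuntu would not let WildFly bind ports 80 and 443 directly: non-root processes cannot bind ports below 1024 by default. One commonly suggested workaround is to grant the JVM the bind capability; a sketch, where the java path is a placeholder for your actual install (note that some setups then need extra configuration for the JVM to locate its shared libraries):

sudo setcap 'cap_net_bind_service=+ep' /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java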

AWS SSL on EC2 instance without Load Balancer - NodeJS

Is it possible to have an EC2 instance running and listening on port 443 without a load balancer? I'm trying right now in my Node.js app, but it doesn't work when I call the page using https://. However, if I set it to port 80, everything works fine with http://.
I had it working earlier with a load balancer and Route 53, but I don't want to pay $18/mo for an ELB anymore, especially when I only have one server running.
Thanks for the help
You're right: if it's only the one instance and you don't expect large increases in traffic, you shouldn't have to pay for an ELB.
From a high-level standpoint you'll have to go through the following steps (a minimal config sketch follows the list):
Install an nginx server to serve your Node.js application.
Install your SSL certificates on the nginx server.
-- Either do this manually, SSHing into the server and installing the certs yourself.
-- OR include the necessary files in your application (I believe this only works for Elastic Beanstalk?), which will overwrite the nginx configuration files automatically.
Make sure nginx is listening on port 443 (this should have been completed in the previous step).
Open the EC2 server's security group on the ports where you want traffic to enter the server (port 80 / port 443).
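A minimal sketch of such an nginx server block, assuming Let's Encrypt certificates in the default /etc/letsencrypt/live layout and a Node app on port 3000 (the domain, paths, and port are placeholders to adjust):

server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        # hand requests to the local Node process
        proxy_pass http://127.0.0.1:3000;
    }
}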
Is it possible? Yes, of course. It sounds like you had an SSL certificate installed on the ELB and have now deleted the ELB, so you will have to install an SSL certificate on the EC2 server itself. Note that you can't use AWS ACM SSL certificates without an ELB or a CloudFront distribution; if you don't want to pay for either of those services, you will have to obtain an SSL certificate elsewhere.
For our projects (much like the other poster described) we used this setup:
nginx as load balancer and proxy for all calls on port 80 (no direct calls to the Node.js server on port 3000, which is closed to the public)
pm2 as process manager for Node.js (and for deployment)
keymetrics.io for monitoring
Node.js v6.9.3 Boron/LTS (through NVM)
MongoDB 3.2 with the WiredTiger engine (Compose.io)
Amazon EC2 instances for hosting (Amazon Linux, not Ubuntu)
This setup works very well for us, and with it we're able to set up SSL without using the Amazon load balancers.
Once you have your certificate files, it's not so hard. You can even do this without Nginx.
Let's first create an Express web server. (The imports below cover everything the following snippets use.)

import express from "express";
import https from "https";
import path from "path";
import { readFileSync } from "fs";

const app = express();
For the sake of example, you could put a static website inside a folder.
const wwwFolder = express.static(path.join(__dirname, '/../www'));
app.use(wwwFolder);
Next, you basically need to read your certificate files:
const key = readFileSync(__dirname + '/ssl/privkey.pem', 'utf8');
const cert = readFileSync(__dirname + '/ssl/cert.pem', 'utf8');
const ca = readFileSync(__dirname + '/ssl/chain.pem', 'utf8');
const serverOptions: https.ServerOptions = { key, cert, ca };
And finally, you create an HTTPS server using those certificates:

const httpsPort = 4201; // an unprivileged port; see the port-forwarding note below
const server = https.createServer(serverOptions, app);
server.listen(httpsPort, () => console.log("createWebServers", `server is listening on port ${httpsPort}`));
For security reasons it's usually not possible to listen directly on port 443: binding ports below 1024 requires root privileges. Instead, use an unprivileged port such as 4201 and set up port forwarding.
If you use systemd to start/stop your service, this port forwarding can be defined in your service configuration file. An easy solution:
[Unit]
Description=my.service
After=network.target
[Service]
Type=simple
TimeoutSec=0
User=ubuntu
PermissionsStartOnly=true
ExecStartPre=/sbin/iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-port 4201
ExecStart=/usr/local/bin/node /home/ubuntu/project/server.js
ExecStopPost=/sbin/iptables -t nat -D PREROUTING -p tcp --dport 443 -j REDIRECT --to-port 4201
Restart=on-failure
[Install]
WantedBy=multi-user.target
There are various ways to create and refresh your certificate files, so I won't go into detail about that here. Most importantly, you don't need an Amazon certificate to accomplish this: Let's Encrypt is free and easy and works fine.
Usually I also add an HTTP server (without HTTPS) that just redirects to HTTPS, with port forwarding for it as well, so I add a second port-forwarding rule to the service file (a sketch of that redirect server follows).
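A minimal sketch of such a redirect server, assuming a second iptables rule forwards port 80 to 4200 (the port number and the Host handling are placeholders to adjust):

import http from "http";

const httpPort = 4200; // forwarded from port 80, mirroring the HTTPS rule above
http.createServer((req, res) => {
  // strip any port suffix from the Host header and send the client to HTTPS
  const host = (req.headers.host || "").replace(/:\d+$/, "");
  res.writeHead(301, { Location: "https://" + host + req.url });
  res.end();
}).listen(httpPort);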

Amazon Linux cannot access nginx on port 80

I have installed nginx on my AMI via yum:
sudo yum install nginx
And then, I opened all ports in my AMI's security group:
All traffic - All - All - 0.0.0.0/0
And then, I started nginx with:
sudo service nginx start
And then, I tried to access my nginx web service at http://public-ip, but I cannot reach it that way.
I checked the connection from inside the server:
ssh my_account@my_ip
and then
wget http://localhost -O-
and it worked fine.
I could not figure out the root cause, so I changed the nginx port from 80 to 8081 and restarted the nginx server.
Then I tried to access it again at
http://public-ip:8081
and it worked fine. WTH...
I don't know exactly what is going on. Could you tell me what the problem is?
I see a few possibilities:
You are blocking the connections with a firewall on the host.
Security Group rules disallow this access
You are in a VPC and have not set up an Internet Gateway or route to host
Your nginx configuration explicitly listens on host and port combinations such that it responds on "localhost" but not on the public IP or hostname. You could post your nginx configs and be more specific about how it fails when you try remotely: is it timing out? Not resolving? Returning an HTTP response, just not the one you expected?
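Two quick checks help distinguish these cases; run the first on the server and the second from an outside machine (public-ip is a placeholder):

sudo netstat -tunlp | grep nginx   # bound to 0.0.0.0:80, or only to 127.0.0.1:80?
curl -v http://public-ip/          # a timeout points at the firewall/Security Group; "connection refused" points at nginx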
