This is my first time building a server and hosting it on AWS EC2. When running the command sudo certbot certonly --standalone or sudo certbot certonly --webroot I received the error below:
Certbot failed to authenticate some domains (authenticator: standalone). The Certificate Authority reported these problems:
Domain: matthieuxroger.com
Type: unauthorized
Detail: Invalid response from http://matthieuxroger.com/.well-known/acme-challenge/nWRAFCcRUeVxZ0C5YtRg_9bihG2YQeqacUcGjxdCMzg [18.205.22.32]: "<!DOCTYPE html>\n<html>\n <head>\n <title>Matthieux Roger</title>\n <link rel='stylesheet' href='/stylesheets/style.css' />\n "
I am using nodejs on Ubuntu 20 running on AWS EC2. Any help would be appreciated.
When using the standalone method with Certbot, a temporary web server is spun up to serve a single challenge file, so that Let's Encrypt can verify that you control the server behind the domain. (The webroot method instead writes the challenge file into an existing web server's document root.) But when LE requested that file from your domain, it reached a server that answered with your site's regular homepage instead of the challenge response. That suggests the DNS for your domain isn't pointing at the EC2 instance that is requesting the certificate (or perhaps it has been updated but just hasn't propagated yet). You need to update the DNS records to point to the server running certbot. Alternatively, you can use a challenge type that doesn't require serving anything over HTTP at all, such as dns-01.
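As a quick sanity check, you can compare what the domain resolves to against the instance's public IP from the error message, and, if you'd rather skip the HTTP challenge entirely, retry in certbot's manual dns-01 mode, which needs no server on port 80. A sketch, substituting your own values:
dig +short matthieuxroger.com    # should print 18.205.22.32 if DNS points at this instance
sudo certbot certonly --manual --preferred-challenges dns -d matthieuxroger.com
The manual dns-01 run will prompt you to create a TXT record by hand before it validates.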
Related
I am using an Ubuntu EC2 Apache server hosting a nodejs webapp,
and I am trying to use Let's Encrypt to get an SSL certificate for my domain drugfair.org.
The problem is that when I run sudo certbot --apache,
the HTTP-01 challenge fails:
Waiting for verification...
Challenge failed for domain drugfair.org
Challenge failed for domain www.drugfair.org
http-01 challenge for drugfair.org
http-01 challenge for www.drugfair.org
Cleaning up challenges
Some challenges have failed.
IMPORTANT NOTES:
- The following errors were reported by the server:
Domain: drugfair.org
Type: unauthorized
Detail: Invalid response from
http://drugfair.org/.well-known/acme-challenge/9hPXdeQ4ymWoNAoMtG0ewLzdQxljPMTuDUrTVBJWM7E
[18.***.*.**]: "<!DOCTYPE html>\n<html lang=\"en\">\n<head>\n<meta
charset=\"utf-8\">\n<title>Error</title>\n</head>\n<body>\n<pre>ReferenceError:
/var/www"
Domain: www.drugfair.org
Type: unauthorized
Detail: Invalid response from
http://www.drugfair.org/.well-known/acme-challenge/AYVcrbDpcp3ubI0P-pXp0wx_McMlGiopZOzJzhAqyQw
[18.***.*.**]: "<!DOCTYPE html>\n<html lang=\"en\">\n<head>\n<meta
charset=\"utf-8\">\n<title>Error</title>\n</head>\n<body>\n<pre>ReferenceError:
/var/www"
To fix these errors, please make sure that your domain name was
entered correctly and the DNS A/AAAA record(s) for that domain
contain(s) the right IP address.
But when I try to access my website from a browser through
http://drugfair.org or through 18.***.*.**:3100/
it is accessible without reference errors.
However, when I try to access it from inside the EC2 instance using this command
curl http://18.***.**.*:3100
I get the reference error.
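For reference, what Let's Encrypt saw can be reproduced from outside by requesting the challenge path directly (the token here is a made-up placeholder, not a real challenge):
curl -i http://drugfair.org/.well-known/acme-challenge/some-test-token
The body of the response shows which server is actually answering on port 80.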
I would like to create an SSL certificate using certbot for subdomains pointing to my domain. But upon running the command below, I am receiving this error: certbot.errors.PluginError: Unable to find a Route53 hosted zone for _acme-challenge.hello.example.com
Here's the command:
sudo certbot certonly -d hello.example.com --dns-route53 -n --agree-tos --non-interactive --server https://acme-v02.api.letsencrypt.org/directory -i apache
Main subdomain: my.subdomain.com
Other subdomains:
hello.example.com -> CNAME to secure.subdomain.com
world.another-example.com -> CNAME to secure.subdomain.com
So visiting these subdomains should show the my.subdomain.com webpage with their corresponding SSL certificate.
This error usually occurs when you have valid AWS credentials, but your domain's hosted zone is not in Route53 under that AWS account.
Check the ~/.aws/credentials file: does it contain credentials for the particular AWS account that owns this hosted zone?
They should be under the default profile, e.g.:
[default]
aws_access_key_id = your_aws_id_hosted_zone
aws_secret_access_key = your_aws_secret_hosted_zone
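Assuming the AWS CLI is installed and picking up the same credentials file, you can list the hosted zones that account can actually see; if the zone for your domain isn't in the output, certbot's dns-route53 plugin won't find it either:
aws route53 list-hosted-zones
aws route53 list-hosted-zones-by-name --dns-name example.com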
I am new to SSL encryption and need help! (Using certbot.)
I recently activated SSL on a website that runs on apache and linux on port 80. So, the current website looks like:
http://example.com --> https://example.com (done)
However, I have a backend running on port 4000 and want to encrypt that as well to avoid "Mixed Content" page errors:
http://example.com:4000 --> https://example.com:4000 (Not done yet)
This is exactly what I need, and no workaround would help. Please guide.
Thanks in advance! :-)
You can create a new subdomain such as subdomain.example.com, point its DNS at the same server, and have your web server reverse-proxy that hostname to the backend on port 4000 (a DNS record can't carry a port number). Then request a new SSL certificate from Let's Encrypt; you can specify multiple domains when requesting a certificate using certbot:
certbot certonly --webroot -w /var/www/example/ -d www.example.com -d example.com -w /var/www/other -d other.example.net -d another.other.example.net
When you have the certificate and key, add them to your web server config.
Check out the official certbot documentation for more options.
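To make the reverse-proxy part concrete, here is a minimal sketch in nginx (the question mentions Apache, but the idea is identical: terminate TLS on 443 for the new hostname and proxy to the backend on port 4000; the hostname and certificate paths below are placeholders, and the /etc/letsencrypt/live/ paths are where certbot normally writes them):
# hypothetical vhost: TLS for the subdomain, plain HTTP to the local backend
server {
  listen 443 ssl;
  server_name subdomain.example.com;
  ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
  location / {
    proxy_pass http://127.0.0.1:4000;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  }
}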
I have a NodeJS app on an Ubuntu EC2 instance with dokku. My domain points at the server with a wildcard record, and I have a wildcard SSL certificate as well. Some time ago I added the keys to dokku in app/tls/. Back then I had two apps online, production and staging. The app most recently created and deployed on dokku intercepted all requests to the host, so api.my.domain, api-stage.my.domain, and everything else landed on it. If I typed http:// there was no redirect. The deadline was close, so I stopped fighting with it and just made production the app that intercepts everything. Today I had problems with deployment; I saw rejects over and over. I deleted some plugins, including the unused dokku-domains, restarted docker a few times, and ran this command:
sudo wget -O /etc/init/docker.conf https://raw.github.com/dotcloud/docker/master/contrib/init/upstart/docker.conf
After that there were no more rejects, but... all requests to the host now return 502 Bad Gateway, including those with the green padlock. I remember that previously, during deployment, there was some output about configuring SSL; now there is none. After deleting the app and creating it from scratch there is no nginx.conf file, and SSL doesn't work at all.
I have Varnish load balancing three front-end Rails servers, with Nginx acting as a reverse proxy for FastCGI workers. Yesterday our certificate expired, so I got a new certificate from GoDaddy and installed it. When accessing static resources directly, I see the updated certificate, but when accessing them through a "virtual subdomain" I see the old certificate. My nginx config references only my new chained certificate, so I'm wondering how the old certificate is still being served. I've even removed it from the directory.
example:
https://www212.doostang.com/javascripts/base_packaged.js?1331831461 (no certificate problem with SSL)
https://asset5.doostang.com/javascripts/base_packaged.js?1331831461 (the old certificate is being used!) (maps to www212.doostang.com)
I've reloaded and even stopped-and-restarted nginx, tested nginx to make sure that it's reading from the right config, and restarted varnish with a new cache file.
When I curl the file at asset5.doostang.com I get a certificate error:
curl: (60) SSL certificate problem, verify that the CA cert is OK. Details:
error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
More details here: http://curl.haxx.se/docs/sslcerts.html
curl performs SSL certificate verification by default, using a "bundle"
of Certificate Authority (CA) public keys (CA certs). If the default
bundle file isn't adequate, you can specify an alternate file
using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in
the bundle, the certificate verification probably failed due to a
problem with the certificate (it might be expired, or the name might
not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use
the -k (or --insecure) option.
When I add the -k option, I get the file requested, and I can see it in my nginx access log. I don't get an nginx error when I don't provide the -k; nginx is silent about the certificate error.
10.99.110.27 - - [20/Apr/2012:18:02:52 -0700] "GET /javascripts/base_packaged.js?1331831461 HTTP/1.0" 200 5740 "-"
"curl/7.21.3 (x86_64-pc-linux-gnu) libcurl/7.21.3 OpenSSL/0.9.8o
zlib/1.2.3.4 libidn/1.18"
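A more direct way to see which certificate each hostname is actually serving (and therefore which process is answering), assuming openssl is available:
openssl s_client -connect asset5.doostang.com:443 -servername asset5.doostang.com </dev/null | openssl x509 -noout -subject -dates
Running the same command against www212.doostang.com and comparing the subjects and dates makes it obvious whether the two names are hitting the same TLS terminator.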
I've put what I think is the relevant part of the nginx config, below:
server {
  # port to listen on. Can also be set to an IP:PORT
  listen 443;
  server_name www.doostang.com *.doostang.com;
  passenger_enabled on;
  rails_env production;
  ssl on;
  ssl_certificate /.../doostang_combined.crt;
  ssl_certificate_key /.../doostang.com.key;
  ssl_protocols SSLv3;
  # doc root
  root /.../public/files;
  if ($host = 'doostang.com' ) {
    rewrite ^/(.*)$ https://www.doostang.com/$1 permanent;
  }
}
# Catchall redirect
server {
  # port to listen on. Can also be set to an IP:PORT
  listen 443;
  ssl on;
  ssl_certificate /.../doostang_combined.crt;
  ssl_certificate_key /.../doostang.com.key;
  rewrite ^(.*)$ https://www.doostang.com$1;
}
Ba dum ching. My non-standardized load balancer actually had nginx running for SSL termination. I failed to notice this, but I think I did everything else correctly. Point being, when you take over operations upon acquisition, standardize and document! There are some really odd engineers out there :)
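For anyone else chasing a phantom certificate: a quick way to find out which process is actually terminating TLS on each box (assumes iproute2 or net-tools is installed):
sudo ss -tlnp | grep ':443'
# or, on older systems
sudo netstat -tlnp | grep ':443'
Whatever shows up listening on 443 is the process whose certificate config matters.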