Let's Encrypt Certificate for domain mapped to localhost - node.js

I need to install a Let's Encrypt certificate to serve https on localhost. For development reasons, I mapped localhost to dev.{ MyDomain }.com in /etc/hosts, but when I run
certbot certonly --manual
and enter dev.{ MyDomain }.com, I get this error:
Certbot failed to authenticate some domains (authenticator: manual). The Certificate Authority reported these problems:
Domain: dev.{ MyDomain }.com
Type: dns
Detail: DNS problem: NXDOMAIN looking up A for dev.{ MyDomain }.com - check that a DNS record exists for this domain; DNS problem: NXDOMAIN looking up AAAA for dev.{ MyDomain }.com - check that a DNS record exists for this domain
Hint: The Certificate Authority failed to verify the manually created challenge files. Ensure that you created these in the correct location.
Some challenges have failed.
Of course, I get the correct string when I make an HTTP request to http://dev.{ MyDomain }.com/.well-known/acme-challenge/<challenge-file>

Related

Certbot DNS problem - not using /etc/hosts

I am trying to install a certificate using certbot from Let's Encrypt on a Raspberry Pi. I have installed Apache2 and created a web server at http://subdomain.mydomain.com on the Raspberry Pi. The certbot command writes its challenge file to http://subdomain.mydomain.com/.well-known/acme-challenge/<etc.>
Background info: I am doing this because I need a local server to address IoT devices, and my Ajax calls are failing because I am not allowed to mix http with https. The IoT devices are incapable of hosting a web server with SSL - they use a simple http://192.168.1.xx/<string> format.
I don't want to create a DNS entry at my registrar/ISP because I am trying to create a scalable solution, and creating hundreds (perhaps thousands, if we do well) of subdomain entries there is impractical. Creating my own DNS server is a possibility, but I would rather do it all on the Pi - my bash installation script will take care of everything (once I get it to work!).
I tried first to create an entry into the local hosts (/etc/hosts) file which looks like this:
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
127.0.1.1 SubDomain
192.168.1.111 subdomain.mydomain.com
This works for commands like ping, but not for nslookup or dig, and definitely not for certbot. The certbot command finds my main server - DNS is configured with a wildcard that sends all unknown subdomains to my public IP:
A * xx.xx.xx.xx //My public IP address
So then I installed dnsmasq (See: When using proxy_pass, can /etc/hosts be used to resolve domain names instead of "resolver"?) and followed the configuration options shown here: How to Setup a Raspberry Pi DNS Server
However, that doesn't work either: certbot still queries my main (external) DNS and finds my public (wildcard) IP. Here's a summary of the changes made in /etc/dnsmasq.conf:
domain-needed ## enabled
bogus-priv ## enabled
no-resolv ## enabled
server=8.8.8.8 ## added (#server=/localnet/192.168.0.1 left as is)
server=8.8.4.4 ## added
cache-size=1500 ## increased from 150
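(As an aside, a quick way to verify whether dnsmasq itself is answering local queries - the host name here is just the example from above - is to ask it directly:
dig @127.0.0.1 subdomain.mydomain.com +short
Even if this returns 192.168.1.111, the Let's Encrypt validation servers resolve the name through public DNS, so they will never see that answer.)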
How can I force certbot to find and use my local/private IP 192.168.1.111? Any alternative solutions using scripts/redirection?
Create a wildcard certificate using Let's Encrypt DNS validation. You will then have to renew the certificate manually. Otherwise, your server must be on the public Internet with correct DNS settings.
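A minimal sketch of that approach, assuming you control the public DNS zone for mydomain.com (the name is a placeholder): certbot prints a TXT record to create under _acme-challenge.mydomain.com, and the resulting wildcard certificate covers every subdomain, so no per-device DNS entries are needed:
sudo certbot certonly --manual --preferred-challenges dns -d mydomain.com -d '*.mydomain.com'
Because the challenge is completed by hand, each renewal (roughly every 90 days) has to be done the same way, as noted above.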
I finally solved my problem, but I abandoned Let's Encrypt entirely. The answer was not in DNS but in approaching it from a completely different angle: becoming my own certificate authority, which was pretty much 95% of the solution.
Important! This only works if you have control over the browser. We do, since it is for our kiosk application, which runs in a browser.
Step 1: Become your own CA
Step 2: Sign your SSL certificate as a CA
Step 3: Import the signed CA (.pem file) into the browser (under Authorities)
Step 4: Point your Apache conf at the local SSL files (the process generates .key and .crt files for this as well).
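A condensed sketch of those steps with openssl (file names and the subdomain are illustrative; modern browsers also require a subjectAltName, added below):
# Step 1: create the CA key and self-signed root certificate
openssl genrsa -out myCA.key 2048
openssl req -x509 -new -nodes -key myCA.key -sha256 -days 1825 -out myCA.pem
# Step 2: create a server key and CSR, then sign the CSR with the CA
openssl genrsa -out subdomain.mydomain.com.key 2048
openssl req -new -key subdomain.mydomain.com.key -out subdomain.mydomain.com.csr
openssl x509 -req -in subdomain.mydomain.com.csr -CA myCA.pem -CAkey myCA.key \
  -CAcreateserial -out subdomain.mydomain.com.crt -days 825 -sha256 \
  -extfile <(printf 'subjectAltName=DNS:subdomain.mydomain.com')
# Step 3 happens in the browser UI (import myCA.pem under Authorities); Step 4
# points Apache's SSLCertificateFile/SSLCertificateKeyFile at the .crt and .key above.
The reason this scales for a kiosk fleet is that only myCA.pem has to be distributed to the browsers; certificates for any number of subdomains can then be signed locally.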

Certbot failed to authenticate some domains

This is my first time building a server and hosting it on AWS EC2. When running the command sudo certbot certonly --standalone or sudo certbot certonly --webroot, I received the error below:
Certbot failed to authenticate some domains (authenticator: standalone). The Certificate Authority reported these problems:
Domain: matthieuxroger.com
Type: unauthorized
Detail: Invalid response from http://matthieuxroger.com/.well-known/acme-challenge/nWRAFCcRUeVxZ0C5YtRg_9bihG2YQeqacUcGjxdCMzg [18.205.22.32]: "<!DOCTYPE html>\n<html>\n <head>\n <title>Matthieux Roger</title>\n <link rel='stylesheet' href='/stylesheets/style.css' />\n "
I am using Node.js on Ubuntu 20 running on AWS EC2. Any help would be appreciated.
With the standalone method, Certbot spins up its own temporary web server to serve a single challenge file, so that Let's Encrypt can verify your control of the server behind the domain (with webroot, the file is instead written into an existing site's document root). But when Let's Encrypt accessed your domain, it reached a server that returned your site's HTML instead of the challenge file. It seems that the DNS for your domain isn't pointing to the EC2 instance that is requesting the certificate (or perhaps it has been updated but just hasn't propagated yet). You need to update the DNS records to point at the server running certbot. Alternatively, you can use a challenge type that doesn't require serving a file at all to prove ownership (such as dns-01).
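For the webroot variant, a sketch assuming the Node app serves static files from a directory such as /var/www/myapp/public (the path is hypothetical) - certbot drops the challenge file there and the existing server answers the validation request itself:
sudo certbot certonly --webroot -w /var/www/myapp/public -d matthieuxroger.com
For this to succeed, http://matthieuxroger.com/.well-known/acme-challenge/ must reach the instance where certbot runs, which is exactly what the DNS record controls.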

Certbot Unable to find a Route53 hosted zone for _acme-challenge.subdomain.domain.com

I would like to create an SSL certificate using certbot for subdomains pointing to my domain, but when running the command below I receive this error: certbot.errors.PluginError: Unable to find a Route53 hosted zone for _acme-challenge.hello.example.com
Here's the command:
sudo certbot certonly -d hello.example.com --dns-route53 -n --agree-tos --non-interactive --server https://acme-v02.api.letsencrypt.org/directory -i apache
Main subdomain: my.subdomain.com
Other subdomains:
hello.example.com -> CNAME to secure.subdomain.com
world.another-example.com -> CNAME to secure.subdomain.com
So visiting these subdomains should show the my.subdomain.com webpage, each with its corresponding SSL certificate.
This error usually occurs when you have valid AWS credentials, but your domain's hosted zone is not in Route 53 in that AWS account.
Check the ~/.aws/credentials file - does it contain credentials for the particular AWS account that holds this hosted zone?
They should be under the default profile, e.g.:
[default]
aws_access_key_id = your_aws_id_hosted_zone
aws_secret_access_key = your_aws_secret_hosted_zone
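A quick way to confirm which zones those credentials can actually see (assuming the AWS CLI is installed and uses the same default profile):
aws route53 list-hosted-zones --query 'HostedZones[].Name'
If example.com does not appear in the output, the dns-route53 plugin has no hosted zone in which to create the _acme-challenge TXT record, which is exactly the error certbot reports.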

certbot giving error DNS problem: SERVFAIL looking up CAA for

problem statement
I am in the process of creating a certificate for my domain with the help of certbot. I have used the same procedure for other environments, from the same machine, and for the same domain, but today I am unable to create a certificate.
steps taken (the same steps I took for other environments in the past, for the same domain, and they worked fine)
Please note: my domain is a registered domain name and its DNS zone is hosted in Azure DNS.
certbot -d api.stg.<my domain> --manual --preferred-challenges dns certonly
This command gives me a TXT record value, which I added to my Azure DNS zone under the name _acme-challenge.api.stg.
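(For reference, creating that TXT record can also be scripted; a sketch with the Azure CLI, where the resource group, zone name, and token value are placeholders:
az network dns record-set txt add-record -g MyResourceGroup -z mydomain.com -n _acme-challenge.api.stg -v '<token-from-certbot>'
The record name is given relative to the zone, which is why only _acme-challenge.api.stg appears.)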
When I press Enter, I get the following error:
Failed authorization procedure. api.stg.<my domain> (dns-01): urn:ietf:params:acme:error:dns :: DNS problem: SERVFAIL looking up CAA for api.stg.<my domain>
IMPORTANT NOTES:
- The following errors were reported by the server:
Domain: api.stg.<my domain>
Type: None
Detail: DNS problem: SERVFAIL looking up CAA for
api.stg.<my domain>
You need to create a CAA DNS record on your Azure DNS hosted (or any other) domain zone for Let's Encrypt to work. Here's example PowerShell (as of now this cannot be done in the portal):
# "@" creates the record at the zone apex; CAA lookups climb the DNS tree, so this covers api.stg
New-AzDnsRecordSet -Name "@" -ResourceGroupName $rg -ZoneName $zone -Ttl 3600 -RecordType CAA `
    -DnsRecords (New-AzDnsRecordConfig -CaaFlags 0 -CaaTag "issue" -CaaValue "letsencrypt.org")
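Once the record exists you can check what the CA's resolver will see (the domain is a placeholder):
dig +short CAA mydomain.com
A SERVFAIL on this query - as opposed to an empty but successful answer - usually means the zone's name servers are mishandling CAA lookups, which is what the error above reports.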

Varnish/Nginx cached SSL Certificate mystery

I have Varnish load balancing three front-end Rails servers, with Nginx acting as a reverse proxy for FastCGI workers. Yesterday our certificate expired, so I got a new certificate from GoDaddy and installed it. When accessing static resources directly, I see the updated certificate, but when accessing them through a "virtual subdomain" I see the old certificate. My nginx config cites only my new chained certificate, so I'm wondering how the old certificate is being served. I've even removed it from the directory.
example:
https://www212.doostang.com/javascripts/base_packaged.js?1331831461 (no certificate problem with SSL)
https://asset5.doostang.com/javascripts/base_packaged.js?1331831461 (the old certificate is being used!) (maps to www212.doostang.com)
I've reloaded and even stopped-and-restarted nginx, tested nginx to make sure that it's reading from the right config, and restarted varnish with a new cache file.
When I curl the file at asset5.doostang.com I get a certificate error:
curl: (60) SSL certificate problem, verify that the CA cert is OK. Details:
error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
More details here: http://curl.haxx.se/docs/sslcerts.html
curl performs SSL certificate verification by default, using a "bundle"
of Certificate Authority (CA) public keys (CA certs). If the default
bundle file isn't adequate, you can specify an alternate file
using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in
the bundle, the certificate verification probably failed due to a
problem with the certificate (it might be expired, or the name might
not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use
the -k (or --insecure) option.
When I add the -k option, I get the requested file, and I can see the request in my nginx access log. I don't get an nginx error when I omit -k; nginx is silent about the certificate error.
10.99.110.27 - - [20/Apr/2012:18:02:52 -0700] "GET /javascripts/base_packaged.js?1331831461 HTTP/1.0" 200 5740 "-"
"curl/7.21.3 (x86_64-pc-linux-gnu) libcurl/7.21.3 OpenSSL/0.9.8o
zlib/1.2.3.4 libidn/1.18"
I've put what I think is the relevant part of the nginx config below:
server {
    # port to listen on. Can also be set to an IP:PORT
    listen 443;
    server_name www.doostang.com *.doostang.com;
    passenger_enabled on;
    rails_env production;

    ssl on;
    ssl_certificate /.../doostang_combined.crt;
    ssl_certificate_key /.../doostang.com.key;
    ssl_protocols SSLv3;

    # doc root
    root /.../public/files;

    if ($host = 'doostang.com') {
        rewrite ^/(.*)$ https://www.doostang.com/$1 permanent;
    }
}
# Catchall redirect
server {
    # port to listen on. Can also be set to an IP:PORT
    listen 443;

    ssl on;
    ssl_certificate /.../doostang_combined.crt;
    ssl_certificate_key /.../doostang.com.key;

    rewrite ^(.*)$ https://www.doostang.com$1;
}
Ba dum ching. My non-standardized load balancer actually had its own nginx instance running for SSL termination. I failed to notice this, but I think I did everything else correctly. Point being: when you take over operations upon acquisition, standardize and document! There are some really odd engineers out there :)
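A handy check for this class of problem is to ask each hop directly which certificate it serves (host names are from the question; s_client and x509 are standard openssl subcommands):
echo | openssl s_client -connect asset5.doostang.com:443 -servername asset5.doostang.com 2>/dev/null | openssl x509 -noout -subject -dates
Run against the load balancer and then against an individual backend, this would have shown immediately that two different certificates were in play.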
