Redirect all undefined subdomains using Caddy

Let's say my Caddyfile defines two sites: a.example.com and b.example.com.
What should I do so that any subdomain other than these two is redirected to a.example.com? For example, going to c.example.com or xyz.example.com should redirect to a.example.com.
In other words, I want something like a 404 rule but for non-existent subdomains rather than non-existent files.

Point your DNS A/AAAA records for *.example.com to your server.
a.example.com {
	# handle here
}

b.example.com {
	# handle here
}

*.example.com {
	redir https://a.example.com
}
You handle the a.example.com and b.example.com domains as you normally would, and set up a redir for *.example.com to https://a.example.com. Note that Caddy will automatically set up a redirect from http to https, so that isn't needed.
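If you also want to preserve the request path and query string on the redirect, a small variant using Caddy's {uri} placeholder (a sketch; adjust to taste):

*.example.com {
	redir https://a.example.com{uri}
}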
Because you are using a wildcard domain in your Caddyfile, you can either use Caddy's on-demand TLS (which will fetch a certificate for that subdomain whenever such a request is received) or use a wildcard certificate (which requires the DNS challenge).
On-demand TLS
*.example.com {
	tls {
		on_demand
	}
}
There are some pitfalls to this approach which make wildcard certificates more attractive:
- Clients can abuse the setup to get as many certificates as they want, resulting in disk / memory exhaustion (one mitigation is sketched after this list).
- Rate limiting by certificate authorities.
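Caddy's global on_demand_tls option can mitigate the abuse problem by asking an endpoint you control before issuing a certificate. A minimal sketch, assuming a hypothetical check service on localhost:5555, placed at the top of the Caddyfile:

{
	on_demand_tls {
		ask http://localhost:5555/check
	}
}

Caddy sends a GET request with the requested hostname in the domain query parameter and only proceeds with issuance if the endpoint responds with HTTP 200.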
Wildcard certificates
*.example.com {
	tls {
		dns cloudflare {env.CLOUDFLARE_AUTH_TOKEN}
	}
}
To use the DNS challenge, you need to:
- Use a nameserver that supports programmatic access.
- Build Caddy with a DNS provider module for that nameserver. For example, for Cloudflare you could build with https://github.com/caddy-dns/cloudflare (xcaddy build master --with github.com/caddy-dns/cloudflare) and save the token in the environment variable CLOUDFLARE_AUTH_TOKEN (see the sketch below).
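Putting it together, the build-and-run steps might look like this (a sketch; <your-token> is a placeholder for your real API token):

xcaddy build master --with github.com/caddy-dns/cloudflare
CLOUDFLARE_AUTH_TOKEN=<your-token> ./caddy run --config Caddyfile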

Related

Changing Netlify DNS results in SSL_ERROR_BAD_CERT_DOMAIN "Your connection is not private" in Chrome

This was after the domain configuration was set (i.e. a change of settings).
The solution was to renew the certificate after changing the domain information (which at the time was not specified in the docs).
Options found under:
Site settings > Domain management > HTTPS > Renew Certificate

Automatic https when using caddy for domain and ip address together

I am trying to access my website using both its domain name and static IP address via the HTTPS protocol.
When I use just the domain it works as expected, but when I add the IP address as follows:
my.domain.com {
	respond "Hello from domain"
}

10.20.30.40 {
	tls internal
	respond "Hello"
}
it does not work. Moreover, if I use tls internal for a different port:
my.domain.com {
	respond "Hello from domain"
}

:8080 {          # <----------- Here I use port
	tls internal # <------------- and tls internal
	respond "Hello"
}
accessing it by domain name in the browser now warns that the certs are not publicly trusted. I assume that tls internal in the second block affected the first block. Is that right? Why?
Anyway, my main question is how to access my website both via domain name and IP address, even if I need to use different ports, with Caddy over HTTPS. I know that for some historical reason IP addresses cannot have publicly trusted certs, so it is OK if the IP address uses self-signed certs.
Please help!
Caddy version: v2.3.0
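A sketch of one commonly suggested arrangement (untested here, and behavior may differ between Caddy versions): keep tls internal inside a site block whose address names the IP explicitly and pins the scheme, because a host-less address like :8080 matches every hostname, so its TLS configuration can bleed into certificate automation for the domain as well.

my.domain.com {
	respond "Hello from domain"
}

https://10.20.30.40 {
	tls internal
	respond "Hello"
}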

Does the TLS cert require a common SAN?

Based on the reference link below on configuring HAProxy with TLS:
Do I need to have the certificates generated with a common SAN (Subject Alternative Name) on all the target nodes, or would individual certs without any common SAN work?
https://serversforhackers.com/c/using-ssl-certificates-with-haproxy
Look at https://security.stackexchange.com/questions/172626/chrome-requires-san-names-in-certificate-when-will-other-browsers-ie-follow : some browsers (Chrome) require names to be in the SAN part, as they now completely disregard the CN field.
So even for a one-domain certificate you need the domain both in the CN (as this is not optional) and in the SAN part.
It is also in the CAB Forum requirements, section 7.1.4.2.1:
Certificate Field: extensions:subjectAltName
Required/Optional: Required
Contents: This extension MUST contain at least one entry.
Each entry MUST be either a dNSName containing the Fully-Qualified
Domain Name or an iPAddress containing the IP address of a server.
The CA MUST confirm that the Applicant controls the Fully-Qualified
Domain Name or IP address or has been granted the right to use it by
the Domain Name Registrant or IP address assignee, as appropriate.
Wildcard FQDNs are permitted.
Note that some other browsers, like Firefox, fall back to the CN instead; see https://bugzilla.mozilla.org/show_bug.cgi?id=1245280 and the beginning of the patch at https://hg.mozilla.org/mozilla-central/rev/dc40f46fae48 for the security.pki.name_matching_mode configuration option.
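For a quick check, or for producing a self-signed test certificate with the name in both CN and SAN, openssl can do both (a sketch; -addext and -ext require OpenSSL 1.1.1 or newer, and example.com is a placeholder):

# inspect the SAN entries of an existing certificate
openssl x509 -in cert.pem -noout -ext subjectAltName

# generate a self-signed test cert with the domain in both CN and SAN
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout key.pem -out cert.pem \
  -subj "/CN=example.com" \
  -addext "subjectAltName=DNS:example.com,DNS:www.example.com"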

API Gateway - ALB: Hostname/IP doesn't match certificate's altnames

My setup currently looks like:
API Gateway --- ALB --- ECS Cluster --- NodeJS Applications
     |
     -- Lambda
I also have a custom domain name set on API Gateway (UPDATE: I used the default API gateway link and got the same problem, I don't think this is a custom domain issue)
When one service in the ECS cluster calls another service via API gateway, I get
Hostname/IP doesn't match certificate's altnames: "Host: someid.ap-southeast-1.elb.amazonaws.com. is not in the cert's altnames: DNS:*.execute-api.ap-southeast-1.amazonaws.com"
Why is this?
UPDATE
I notice when I start a local server that calls the API gateway I get a similar error:
{
	"error": "Hostname/IP doesn't match certificate's altnames: \"Host: localhost. is not in the cert's altnames: DNS:*.execute-api.ap-southeast-1.amazonaws.com\""
}
And if I try to disable the HTTPS check:
const response = await axios({
	method: req.method,
	url,
	baseURL,
	params: req.params,
	query: req.query,
	data: body || req.body,
	headers: req.headers,
	httpsAgent: new https.Agent({
		rejectUnauthorized: false // <<=== HERE!
	})
})
I get this instead ...
{
	"message": "Forbidden"
}
When I call the underlying API gateway URL directly in Postman it works ... somehow it reminds me of CORS, where the server seems to be blocking my server (either localhost or ECS/ELB) from accessing my API gateway?
It may be quite confusing, so a summary of what I tried:
- In the existing setup, services inside ECS may call one another via API gateway. When that happens, it fails because of the HTTPS error.
- To resolve it, I set rejectUnauthorized: false, but API gateway returns HTTP 403.
- When running on localhost, the error is similar.
- I tried calling the ELB instead of the API gateway, and it works ...
There are various workarounds which introduce security implications instead of providing a proper solution. In order to fix it, you need to add a CNAME entry for someid.ap-southeast-1.elb.amazonaws.com. to the DNS (this entry might already exist) and also to one SSL certificate, as described in the AWS documentation for Adding an Alternate Domain Name. This can be done with the CloudFront console & ACM. The point is that with the current certificate, that alternate (internal!) host-name will never match the certificate, which can only cover a single IP; therefore it's much more an infrastructural problem than a code problem.
When reviewing it once again: instead of extending the SSL certificate of the public-facing interface, a better solution might be to use a separate SSL certificate for the communication between the API Gateway and the ALB, according to this guide; even a self-signed one is possible in this case, because the certificate would never be accessed by any external client.
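For illustration, a minimal Node sketch of that idea (assuming axios, and a hypothetical internal-ca.pem file holding the internal CA certificate): trust the internal CA explicitly instead of turning verification off with rejectUnauthorized: false.

const fs = require('fs');
const https = require('https');
const axios = require('axios');

// Trust the internal self-signed CA explicitly, so certificate
// verification still runs, now against the internal CA.
const httpsAgent = new https.Agent({
	ca: fs.readFileSync('internal-ca.pem') // hypothetical path to the internal CA certificate
});

async function callInternal(url) {
	return axios.get(url, { httpsAgent });
}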
Concerning that HTTP 403, the docs read:
You configured an AWS WAF web access control list (web ACL) to monitor requests to your Application Load Balancer and it blocked a request.
I hope this helps in setting up end-to-end encryption, where only the public-facing interface of the API gateway needs a CA certificate; for the internal communication, self-signed should suffice.
This article is about the difference between ELB and ALB. It might be worth considering whether the most suitable load balancer for the given scenario was chosen; in case no content-based routing is required, cutting down on useless complexity might be helpful. That would also eliminate the need to define the routing rules, which you should review anyway if sticking with ALB. I mean, the question only shows the basic scenario and some code which fails, but not the routing rules.

SSL certificate propagation issue with custom domain on Bluemix app

I uploaded my SSL certificate in the custom domain section in the space of my organization. I linked the domain with my application and created the CNAME record in my DNS pointing to my Bluemix app xxxxx.eu-gb.bluemix.net.
When I try to reach my application through my custom domain, I am served the Bluemix certificate and not mine.
I tried to add a proxy on my server (NodeJS) but the situation does not change.
// assuming an Express app (imports added for completeness)
const express = require('express');
const app = express();

app.enable('trust proxy');
app.use(function (req, res, next) {
	if (req.secure) {
		// request was via https, so do no special handling
		next();
	} else {
		// request was via http, so redirect to https
		res.redirect('https://' + req.headers.host + req.url);
	}
});
How can I fix the problem? I need my own certificate: my mobile application calls my API, so the certificate must be mine and trusted.
You need to map the CNAME to the secure endpoint for the Bluemix region you are using, in your case it should be secure.eu-gb.bluemix.net.
When receiving the request from your custom domain Bluemix will map it internally to your app.
More details in the documentation link below:
https://new-console.ng.bluemix.net/docs/manageapps/updapps.html#domain
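For illustration, with a hypothetical custom hostname www.example.com, the DNS record would look something like:

www.example.com.    IN    CNAME    secure.eu-gb.bluemix.net.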
