Automatic HTTPS when using Caddy for domain and IP address together - Caddy

I am trying to access my website via HTTPS using both its domain name and its static IP address.
When I use just the domain it works as expected, but when I add the IP address as follows:
my.domain.com {
	respond "Hello from domain"
}

10.20.30.40 {
	tls internal
	respond "Hello"
}
it does not work. Moreover, if I use tls internal with a different port:
my.domain.com {
	respond "Hello from domain"
}

:8080 {            <----------- Here I use a port
	tls internal   <------------- and tls internal
	respond "Hello"
}
accessing the site by its domain name in a browser now warns that the certificate is not publicly trusted. I assume the tls internal in the second block affected the first block. Is that right? Why?
Anyway, my main question is how to access my website over HTTPS both via its domain name and via its IP address, even if I need to use different ports with Caddy. I know that for historical reasons IP addresses cannot get publicly trusted certificates, so it is fine if the IP address uses a self-signed certificate.
Please help!
Caddy version: v2.3.0
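For reference, a minimal Caddyfile sketch of the layout being asked about, assuming (not confirmed in this thread) that Caddy's internal issuer can mint certificates for IP addresses and that giving the IP site its own scheme and port keeps tls internal scoped to that one site; the port is a placeholder:

my.domain.com {
	respond "Hello from domain"
}

https://10.20.30.40:8443 {
	tls internal
	respond "Hello from IP"
}

Browsers hitting the IP would still need to trust Caddy's local root CA or click through a warning, which matches the tolerance for self-signed certificates stated above.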

Related

Redirect all undefined subdomains using Caddy

Let's say my Caddyfile defines two sites: a.example.com and b.example.com.
What should I do so that any subdomain other than these two is redirected to a.example.com? For example, going to c.example.com or xyz.example.com should redirect to a.example.com.
In other words, I want something like a 404 rule but for non-existent subdomains rather than non-existent files.
Point your DNS A/AAAA records for *.example.com to your server.
a.example.com {
	# handle here
}

b.example.com {
	# handle here
}

*.example.com {
	redir https://a.example.com
}
You handle the a.example.com and b.example.com domains as you normally would, and set up a redir for *.example.com to https://a.example.com. Note that Caddy automatically sets up a redirect from HTTP to HTTPS, so that isn't needed.
Because you are using a wildcard domain in your Caddyfile, you can either use Caddy's on-demand TLS (which fetches a certificate for a given subdomain the first time a request for it is received) or use a wildcard certificate (which requires the DNS challenge).
On-demand TLS
*.example.com {
	tls {
		on_demand
	}
}
There are some pitfalls to this approach which make wildcard certificates more attractive:
- Clients can abuse the setup to request as many certificates as they want, resulting in disk/memory exhaustion (a mitigation sketch follows this list).
- Rate limiting by certificate authorities.
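One mitigation worth noting (an addition beyond the original answer, based on Caddy's documented on_demand_tls global option): issuance can be gated behind an approval endpoint you host, so arbitrary client-supplied names are rejected before a certificate is requested. The ask URL below is a placeholder:

{
	on_demand_tls {
		ask http://localhost:5555/check
	}
}

*.example.com {
	tls {
		on_demand
	}
}

Caddy queries the ask endpoint with the candidate domain before issuing and only proceeds if the endpoint returns HTTP 200.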
Wildcard certificates
*.example.com {
	tls {
		dns cloudflare {env.CLOUDFLARE_AUTH_TOKEN}
	}
}
To use the DNS challenge, you need to:
- Use a nameserver that supports programmatic access.
- Build Caddy with a provider module for that nameserver. For example, for Cloudflare you could build with https://github.com/caddy-dns/cloudflare (xcaddy build master --with github.com/caddy-dns/cloudflare) and save the token in the environment variable CLOUDFLARE_AUTH_TOKEN.
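For completeness, a sketch of running the resulting build with the token supplied via the environment (the token value and config path are placeholders):

CLOUDFLARE_AUTH_TOKEN=your-token-here caddy run --config Caddyfile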

Node won't make a connection to a server with a self-signed certificate

A little background:
I have a Tesla Powerwall, which has its own built-in web server that can be accessed on the local network. It only allows SSL connections and uses a self-signed certificate. I have set up port forwarding that allows me to connect to the web server remotely. For a while, I've had working Node.js apps, both on a local Pi and on a remote AWS instance, that made requests to the Powerwall web server to retrieve bits of information.
Since yesterday, Tesla updated my Powerwall and now everything has stopped working. I can only assume they have changed something regarding how the web server handles its self-signed SSL certificate.
Firstly, my Pi running on the local network would not make successful Node.js requests to the local server. I managed to get this working by adding an entry to my /etc/hosts file like this:
192.168.1.42 powerwall
and now my Node.js app can successfully connect again using https://powerwall.
When using Safari or Chrome to connect remotely, I can connect if I use my IP address (after trusting the self-signed cert) but cannot connect when using my DDNS address that points to home (I have confirmed the DDNS is working). It gives me the error:
Safari can’t open the page “https://home.xxxxxx.com:4444” because Safari can’t establish a secure connection to the server “ home.xxxxxx.com”.
My AWS Node.js app will not connect regardless of whether I use the IP address or the DDNS address, giving me the error:
Client network socket disconnected before secure TLS connection was established
This is how I am trying to connect:
const request = require('request');

request({
	url: 'https://xx.xx.xx.xx:xxxx/api/system_status/soe',
	method: 'GET',
	rejectUnauthorized: false,
	requestCert: true,
	agent: false,
	headers: headers // headers defined elsewhere
}, (error, response, body) => {
	// handle response here
});
I have tried adding:
secureProtocol: 'TLSv1_method'
and attempted the methods TLSv1_method, TLSv1_1_method, and TLSv1_2_method in case it needed a specific method, with no luck.
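As an aside (not part of the original question): in newer Node.js versions the supported way to constrain the protocol range is minVersion/maxVersion rather than secureProtocol. A minimal probe with the core https module, where the host, port, and path are placeholders taken from this thread:

const https = require('https');

// Probe the server while forcing a TLS version range.
// minVersion/maxVersion are available from Node.js 11.4 onward.
const req = https.request({
	host: '192.168.1.42',             // placeholder LAN address
	port: 4444,                       // placeholder port
	path: '/api/system_status/soe',
	minVersion: 'TLSv1',
	maxVersion: 'TLSv1.2',
	rejectUnauthorized: false         // the cert is self-signed
}, (res) => {
	console.log('handshake OK, HTTP status:', res.statusCode);
});

req.on('error', (err) => console.error('TLS failure:', err.message));
req.end();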
Does the above sound like the SSL settings on the server have been screwed down?
What can I do to:
a) access the site remotely through a browser using the DDNS address
b) force Node.js to not care about the SSL certificate at all and just connect
----- EDIT
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            46:.....
        Signature Algorithm: ecdsa-with-SHA256
        Issuer: C=US, ST=California, L=Palo Alto, O=Tesla, OU=Tesla Energy Products, CN=335cbec3e3d8baee7742f095bd4f8f17
        Validity
            Not Before: Mar 29 22:17:28 2019 GMT
            Not After : Mar 22 22:17:28 2044 GMT
        Subject: C=US, ST=California, L=Palo Alto, O=Tesla, OU=Tesla Energy Products, CN=335cbec3e3d8baee7742f095bd4f8f17
        Subject Public Key Info:
            Public Key Algorithm: id-ecPublicKey
                Public-Key: (256 bit)
                pub:
                    04:ca...
                ASN1 OID: prime256v1
                NIST CURVE: P-256
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment
            X509v3 Extended Key Usage:
                TLS Web Server Authentication
            X509v3 Basic Constraints: critical
                CA:FALSE
            X509v3 Subject Alternative Name:
                DNS:teg, DNS:powerwall, DNS:powerpack, IP Address:192.168.90.1, IP Address:192.168.90.2, IP Address:192.168.91.1
With HTTPS, the hostname you connect to needs to match what's signed in the cert; usually that's the public domain.
It's not supposed to be the IP, and it certainly won't be the DDNS hostname you're pointing at it (if I understood correctly).
There are 3 possible approaches:
Add the certificate from the powerwall as a ‘known’ rootCA (as already suggested),
Tell node.js to skip checking the validity of the certificate, or
Try with HTTP 😬
Proper operation of the HTTPS connection process will also depend on you accessing the Powerwall using a domain name registered in the certificate (which may require your DNS server to respond with the appropriate IP when the lookup is made, much like a DNS-spoofing proof-of-concept in a CTF).
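A minimal sketch of the first approach with the request library, assuming (as with the /etc/hosts trick above) that powerwall resolves to the device and that its certificate has been exported to a PEM file; the path is a placeholder. Because DNS:powerwall is in the cert's SAN list, hostname verification can then pass:

const fs = require('fs');
const request = require('request');

request({
	// 'powerwall' resolves via the /etc/hosts entry and appears in
	// the certificate's SAN list, so hostname checking succeeds.
	url: 'https://powerwall:4444/api/system_status/soe',
	method: 'GET',
	agentOptions: {
		// Trust this specific self-signed cert instead of
		// disabling verification with rejectUnauthorized: false.
		ca: fs.readFileSync('/path/to/powerwall-cert.pem') // placeholder path
	}
}, (err, res, body) => {
	if (err) return console.error(err);
	console.log(body);
});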
Also, to your musings in the comments: while some browsers may allow you to override an expired or self-signed cert (or a connection via IP), it's very sketchy to connect with a domain and get a cert that specifies an entirely different domain (which is why the browser might not even present you the option).
HTH
Post-resolution update:
How to get the DNS name to match what's on the certificate:
- add an entry in the client system's /etc/hosts or equivalent
- connect using the hostname (not the IP)
When connecting over the public Internet, how to get connections through to the local host:
- get a public-facing HTTPS cert that matches your DDNS domain or /etc/hosts entry
- host an HTTP proxy that relays requests from the Internet (hopefully with filtering/validation) to the Powerwall (you will have 2 HTTPS connections: one from AWS -> proxy, one from proxy -> Powerwall)
- or host a custom API that returns exactly the [minimum] info needed by the AWS service
How to trust a self-signed certificate? (this wasn't the blocking factor)
Try this for debugging:
openssl s_client \
	-connect 192.168.1.42:4444 \
	-CAfile /path/to/self-signed-cert \
	-verify_hostname powerwall \
	-debug
You can find more options with openssl s_client -help.
Do you have any servers running on your home network (Apache, nginx, etc.)? You're probably trying to connect to https://my.ddns.com and passing the connection straight through to the Powerwall, which has a certificate for powerwall.
Connecting to a host that returns a certificate which does not contain that hostname will cause a TLS error. You probably want to run a reverse proxy: your server hosts my.ddns.com, terminates the TLS connection, and then forwards the traffic to 192.168.1.44 (since the Powerwall only accepts TLS, this inner hop is a second HTTPS connection, but its self-signed cert no longer matters to the outside client).
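A minimal Node.js sketch of such a reverse proxy, assuming certificate files for the DDNS name and the Powerwall's LAN address from earlier in the thread; all paths, hosts, and ports are placeholders. As the earlier answer noted, this produces two HTTPS hops: client to proxy (publicly trusted cert) and proxy to Powerwall (self-signed, verification relaxed):

const https = require('https');
const fs = require('fs');

// Public-facing TLS server presenting the my.ddns.com certificate.
const server = https.createServer({
	key: fs.readFileSync('/etc/ssl/my.ddns.com.key'),  // placeholder
	cert: fs.readFileSync('/etc/ssl/my.ddns.com.crt')  // placeholder
}, (clientReq, clientRes) => {
	// Relay each incoming request to the Powerwall over HTTPS.
	const proxyReq = https.request({
		host: '192.168.1.42',          // placeholder LAN address
		port: 4444,                    // placeholder port
		path: clientReq.url,
		method: clientReq.method,
		headers: { ...clientReq.headers, host: 'powerwall' },
		rejectUnauthorized: false      // Powerwall cert is self-signed
	}, (proxyRes) => {
		clientRes.writeHead(proxyRes.statusCode, proxyRes.headers);
		proxyRes.pipe(clientRes);
	});
	proxyReq.on('error', () => {
		clientRes.statusCode = 502;
		clientRes.end();
	});
	clientReq.pipe(proxyReq);
});

server.listen(4444); // placeholder public port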

Private and public IP using AWS API Gateway + Lambda + Node.js

I am trying to get the user's private IP and public IP in an AWS environment. Based on this answer (https://stackoverflow.com/a/46021715/4283738), there should be an X-Forwarded-For header with comma-separated IPs, and also per this forum thread (https://forums.aws.amazon.com/thread.jspa?threadID=200198).
But when I deploy my API via API Gateway + Lambda + Node.js v8 and console out the JSON for the event and context variables of the handler function arguments for debugging (https://y0gh8upq9d.execute-api.ap-south-1.amazonaws.com/prod), I am not getting the private IPs.
The Lambda function is:
const AWS = require('aws-sdk');

exports.handler = function(event, context, callback) {
	callback(null, {
		"statusCode": 200,
		"body": JSON.stringify({ event, context })
	});
};
API Gateway Details
GET - Integration Request
Integration type -- Lambda Function
Use Lambda Proxy integration -- True
Function API : https://y0gh8upq9d.execute-api.ap-south-1.amazonaws.com/prod
Case 1: You cannot get the private IP of the user, for security reasons. If the user is behind NAT or PAT (Network Address Translation / Port Address Translation), the NAT device records the private IP in its table and sends the request onward with the public IP (you could say the router's ID).
Case 2: If by private IP you mean multiple users sharing the same public network (Wi-Fi, etc.), then again there are two IPs: one public IP common to all, and inside that network each user has another IP that is unique within it.
For example: say there is Wi-Fi with public IP 1.1.1.1 and two users, A and B. Since they share the same Wi-Fi, the router has only one public IP common to all, but inside the router A and B have different IPs, such as 192.1.1.1 and 192.1.1.2, which can be called private.
In both cases, you will get only the public IP (at position 0 in the X-Forwarded-For header).
You can get the X-Forwarded-For header inside event.multiValueHeaders (or event.headers).
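A minimal handler sketch for reading it with Lambda proxy integration (hedged: header casing can vary, and as explained above only the public IP is available):

exports.handler = async (event) => {
	// With proxy integration, API Gateway puts the caller's public IP
	// first in X-Forwarded-For; later entries are intermediate proxies.
	const xff = (event.headers && event.headers['X-Forwarded-For']) || '';
	const clientIp = xff.split(',')[0].trim();

	// event.requestContext.identity.sourceIp carries the same public IP.
	return {
		statusCode: 200,
		body: JSON.stringify({ clientIp })
	};
};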
If you could access both, what would be the benefit of having private and public IPs?
To reach an AWS VPC private subnet you likewise have to use NAT, and the client never learns the actual IP, for security reasons. I request you to re-review your requirements once again.
I don't know what has you stuck here; correct me if I'm wrong.
From Wikipedia:
The X-Forwarded-For (XFF) HTTP header field is a common method for identifying the originating IP address of a client connecting to a web server through an HTTP proxy or load balancer.
I set X-Forwarded-For in the header and tested with Postman:
https://imgur.com/a/8QZEdyH
The X-Forwarded-For header shows the public IP of the user. That's all you get: internal IPs are not visible, only the public IP indicated in the header.

API Gateway - ALB: Hostname/IP doesn't match certificate's altnames

My setup currently looks like:
API Gateway --- ALB --- ECS Cluster --- NodeJS Applications
     |
     -- Lambda
I also have a custom domain name set on API Gateway. (UPDATE: I used the default API Gateway link and got the same problem, so I don't think this is a custom domain issue.)
When one service in the ECS cluster calls another service via API Gateway, I get:
Hostname/IP doesn't match certificate's altnames: "Host: someid.ap-southeast-1.elb.amazonaws.com. is not in the cert's altnames: DNS:*.execute-api.ap-southeast-1.amazonaws.com"
Why is this?
UPDATE
I notice that when I start a local server that calls the API Gateway, I get a similar error:
{
	"error": "Hostname/IP doesn't match certificate's altnames: \"Host: localhost. is not in the cert's altnames: DNS:*.execute-api.ap-southeast-1.amazonaws.com\""
}
And if I try to disable the HTTPS check:
const response = await axios({
	method: req.method,
	url,
	baseURL,
	params: req.params,
	query: req.query,
	data: body || req.body,
	headers: req.headers,
	httpsAgent: new https.Agent({
		rejectUnauthorized: false // <<=== HERE!
	})
})
I get this instead ...
{
	"message": "Forbidden"
}
When I call the underlying API Gateway URL directly in Postman it works ... somehow it reminds me of CORS, where the server seems to be blocking my server (either localhost or ECS/ELB) from accessing my API Gateway?
It may be quite confusing, so here is a summary of what I tried:
- In the existing setup, services inside ECS may call one another via API Gateway. When that happens, it fails because of the HTTPS error.
- To resolve it, I set rejectUnauthorized: false, but API Gateway returns HTTP 403.
- When running on localhost, the error is similar.
- I tried calling the ELB instead of API Gateway, and it works ...
There are various workarounds which introduce security implications instead of providing a proper solution. In order to fix it, you need to add a CNAME entry for someid.ap-southeast-1.elb.amazonaws.com. to the DNS (this entry might already exist) and also to one SSL certificate, as described in the AWS documentation for Adding an Alternate Domain Name. This can be done with the CloudFront console & ACM. The point is that, with the current certificate, that alternate (internal!) hostname will never match the certificate, which can only cover a single name; therefore it's much more of an infrastructural problem than a code problem.
When reviewing it once again ... instead of extending the SSL certificate of the public-facing interface, a better solution might be to use a separate SSL certificate for the communication between the API Gateway and the ALB, according to this guide; even a self-signed one is possible in this case, because the certificate would never be accessed by any external client.
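For illustration, a self-signed certificate for that internal hop could be generated with openssl along these lines (the name and validity period are placeholders) and then imported into ACM for the ALB's HTTPS listener:

openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
	-keyout alb-internal.key -out alb-internal.crt \
	-subj "/CN=internal-alb.example.com"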
Concerning the HTTP 403, the docs read:
You configured an AWS WAF web access control list (web ACL) to monitor requests to your Application Load Balancer and it blocked a request.
I hope this helps with setting up end-to-end encryption; only the public-facing interface of the API Gateway needs a CA certificate, and for whatever internal communication, self-signed should suffice.
This article is about the difference between ELB and ALB. It might be worth considering whether the most suitable load balancer for the given scenario has indeed been chosen: if no content-based routing is required, cutting down on useless complexity might be helpful, and it would eliminate the need to define routing rules ... which you should also review once if sticking with the ALB. I mean, the question only shows the basic scenario and some code which fails, but not the routing rules.

Node.js generate LetsEncrypt.org SSL certificate with specific common name

I am currently trying to create a Let's Encrypt SSL certificate using the node letsencrypt package (https://www.npmjs.com/package/letsencrypt). I have managed to generate a standard certificate suite using the following code.
'use strict';

var express = require('express');
var LE = require('letsencrypt');

// server = staging for test encryption cert generation
// server = production for generating verified certificate
var le = LE.create({ server: 'production' });

// Define encryption certificate options as JSON object
var opts = {
	domains: ['www.mydomain.com'], email: 'me@mydomain.com', agreeTos: true
};

// Submit certificate signing request to LetsEncrypt.
// Print certificates, keys (i.e. pem files) when received from server.
le.register(opts).then(function (certs) {
	console.log(certs);
	// privkey, cert, chain, expiresAt, issuedAt, subject, altnames
}, function (err) {
	console.error(err);
});

var app = express();

// Create server listening on port 80 to handle ACME challenges
// i.e. basic web server to serve files so CA can verify website ownership
app.listen(80, function () {
	console.log('Basic server started and listening on port 80');
	console.log('Server handling basic ACME protocol challenges');
});

// Allow access to all static files in server directory
// Enables CA to access file served up to verify domain ownership
app.use('/', le.middleware());
This works fine and generates a trusted certificate from LetsEncrypt.org when the site is accessed via www.mydomain.com. However, when I try to access my website on my internal (local) network via 192.168.0.myserveraddress, I get a certificate error in the browser.
Does anyone know how I can change the common name in the certificate request to LetsEncrypt to 192.168.0.myserveraddress so I don't get this error when accessing my website via our local area network?
I actually solved this issue by setting up our local area network to allow loopback connections, using what is called NAT loopback.
This means I no longer need to use the local IP address (192.168.0.myserveraddress) to access my server and can just use www.mydomain.com to access it internally.
Since this maintains the domain name, the certificate is now trusted and I no longer get the above error.
Additionally, I believe that certificate authorities (e.g. Let's Encrypt) will not issue certificates for IP addresses, so the only way to resolve the above error is to access the website via its domain name. See the link below.
https://community.letsencrypt.org/t/certificate-for-static-ip/84.
