{
    auto_https off
    servers {
        protocol {
            experimental_http3
        }
    }
}

:80 {
    redir https://{host}{uri}
}

aero.xyz:443, cdn.aero.xyz:443 {
    tls /etc/caddy/aero.xyz.crt /etc/caddy/aero.xyz.key
    # other stuff
}
The above is my Caddyfile. I only want aero.xyz to be renewed automatically, since cdn.aero.xyz is behind Cloudflare, which manages that certificate for me.
I see it is possible to do this with the tls configuration here: https://caddy.community/t/using-caddy-to-keep-certificates-renewed/7525
But is it possible while I am running an HTTPS server? If so, how do I modify the tls setting, or how exactly should I change the Caddyfile?
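One possible approach (a sketch only, not tested against this setup) is to split the two hostnames into separate site blocks, drop `auto_https off` so Caddy can manage certificates again, and keep the manual tls line only for the Cloudflare-fronted host:

```caddyfile
# Sketch: Caddy obtains and renews aero.xyz's certificate itself,
# while cdn.aero.xyz keeps using the manually provided files.
aero.xyz:443 {
    # no tls directive: automatic certificate management applies
    # other stuff
}

cdn.aero.xyz:443 {
    tls /etc/caddy/aero.xyz.crt /etc/caddy/aero.xyz.key
    # other stuff
}
```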
Related
I have this simple Caddy JSON config to proxy requests from https://localhost to my localhost server running on port 8080. That's working fine.
{
  "apps": {
    "http": {
      "servers": {
        "localhost": {
          "listen": [":443"],
          "routes": [
            {
              "match": [
                { "host": ["localhost"] }
              ],
              "handle": [
                {
                  "handler": "reverse_proxy",
                  "upstreams": [
                    { "dial": "localhost:8080" }
                  ]
                }
              ]
            }
          ]
        }
      }
    }
  }
}
But I would also like my Caddy server to either:
reverse proxy calls from wss://localhost:3000/ws to ws://localhost:3000/ws (HTTPS to HTTP),
or simply ignore WebSocket calls and let them reach ws://localhost:3000/ws directly.
I have not found a proper example or documentation for the JSON config.
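Caddy 2's reverse_proxy handler passes WebSocket upgrade requests through automatically, so a hedged sketch of an additional route (the /ws path and port 3000 are taken from the question, not verified) might look like:

```json
{
  "match": [
    { "host": ["localhost"], "path": ["/ws"] }
  ],
  "handle": [
    {
      "handler": "reverse_proxy",
      "upstreams": [
        { "dial": "localhost:3000" }
      ]
    }
  ]
}
```

This route would go in the same "routes" array as the existing one, before the catch-all route for port 8080.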
Check if Caddy 2.5 (Apr. 2022) could help in your use case.
It comes with Reverse proxy: Dynamic upstreams:
The ability to get the list of upstreams at every request (more specifically, every iteration in the proxy loop of every request) rather than just once at config-load time.
Dynamic upstream modules can be plugged in to provide Caddy with the latest list of backends in real-time.
Two standard modules have been implemented which can get upstreams from SRV and A/AAAA record lookups.
Warning: This deprecates the lookup_srv JSON field for upstreams (and srv+ scheme prefix in the Caddyfile), which will be removed in the future.
See PR 4470:
Right now, proxy upstreams have to be hard-coded into the config (with the only dynamism coming from placeholders, which all act as a single upstream anyway).
This change adds support for truly dynamic upstreams, with the potential for every request to have different upstreams -- not only every request, but every retry within a single request, too.
Instead of (or in addition to) specifying upstreams in your config, you could specify dynamic_upstreams and then define your upstream source module. Currently I'm implementing SRV and A/AAAA lookups as sources.
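As a hedged sketch of what that could look like in the JSON config (the SRV name below is made up; check the Caddy docs for the exact fields of the srv source):

```json
{
  "handler": "reverse_proxy",
  "dynamic_upstreams": {
    "source": "srv",
    "name": "_api._tcp.example.internal"
  }
}
```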
I recently set up SSL certificates for a web server with NodeJS in a manner similar to the following, and everything works great:
process.env.HTTPS_PORT = 3000; // Listen on port 3000 for HTTPS.
process.env.HTTP_PORT = 6000;  // Listen on port 6000 for HTTP.

// Create HTTPS server workflow.
https
  .createServer(
    {
      key: await fs.readFile('/etc/pki/tls/private/key.txt'),
      cert: await fs.readFile('/etc/pki/tls/certs/xxxxxxxxxxxxxx.crt'),
      ca: await fs.readFile('/etc/pki/tls/certs/gd_bundle-g2-g1.crt'),
    },
    app
  )
  .listen(process.env.HTTPS_PORT, () => {
    console.log(`Main server is running on: ${process.env.HTTPS_PORT}`);
  })
  .on('error', (err) => {
    console.log(`Failed to start main server: ${err}`);
  });

// Create HTTP server for redirection to HTTPS only.
http
  .createServer(app)
  .listen(process.env.HTTP_PORT, () => {
    console.log(`Redirection server is running on: ${process.env.HTTP_PORT}`);
  })
  .on('error', (err) => {
    console.log(`Failed to start redirection server: ${err}`);
  });
I'm listening on two ports, one serving HTTPS and the other HTTP. The HTTP server's only purpose is to redirect to HTTPS, for which I have a route set up.
This setup works for the server's FQDN (app.subdomain1.domain.com). The server also has a CNAME (web.subdomain2.domain.com). From the research I've done, it looks like the CNAME needs to be handled separately as the browser still expects a valid certificate for the URL the user requests. It is expected that users will use either of the URLs to access the application.
I could not find much information on how I can setup SSL certificates with NodeJS / ExpressJS for such CNAMEs. Any information on this would be really helpful.
The certificate must match the domain given in the URL. The simplest way is to have a single multi-domain (SAN) certificate which includes all the domains. Another way is to have multiple certificates and create a different secure context for each; see server.addContext(hostname, context) and Serve two https hostnames from single node process & port.
Short background: If we go back in time to about 2006-ish: We (ie: my company) used a java client app embedded in the browser that connected via port 443 to a C program backend running on port 8068 on an in-house server. At the time when the java app was first developed, port 443 was the only port that we knew would not be blocked by our customers that used the software (ease of installation and possibly the customer in-house staff didn't have the power or knowledge to control their internal firewall).
Fast-forward to 2016, and I'm hired to help develop a NodeJS/Javascript version of that Java app. The Java app continues to be used during development of its replacement, but whoops - we learn that browsers will drop support for embedded Java in the near future. So we switch to Java Web Start, so that the customers can continue to download the app and it still connects to the in-house server with its port 443->8068 routing.
2017 rolls around and don't you know, we can't use the up-coming JS web-app with HTTPS/SSL and the Java app at the same time, 'cause they use the same port. "Ok let's use NGINX to solve the problem." But due to in house politics, customer needs, and a turn-over of web-developer staff, we never get around to truly making that work.
So here we are at 2020, ready to deploy the new web version of the client software, and the whole 443 mess rears its ugly head again.
Essentially I am looking to allow (for the time being) the Java app to continue using 443, but now need to let the web app use HTTPS too. Back in 2017/2018 we Googled ways to let them cohabitate through NGINX, but we never really got them to work properly, or the examples and tutorials were incomplete or confusing. It seemed like we needed to either use streaming along the lines of https://www.nginx.com/blog/running-non-ssl-protocols-over-ssl-port-nginx-1-15-2/ , or look at the incoming HTTPS header and do an 'if (https) { route to nodeJS server } else { assume it must be the java app and route to port 8068 }' -sort of arrangement inside the NGINX config file.
Past Googled links appear to not exist anymore, so if anyone knows of an NGINX configuration that allows an HTTPS website to hand off to a non-SSL application that still needs to use 443, I would greatly appreciate it. And any docs and/or tutorials that point us in the right direction would be helpful too. Thanks in advance!
You can do this using the ssl_preread option. Basically, this option gives access to the variable $ssl_preread_protocol, which contains the protocol negotiated on the SSL port. If no valid protocol was detected, the variable will be empty.
Using this variable, you could apply the following configuration to your environment:
stream {
    upstream java {
        server __your_java_server_ip__:8068;
    }

    upstream nodejs {
        server __your_node_js_server_ip__:443;
    }

    map $ssl_preread_protocol $upstream {
        default java;
        "TLSv1.2" nodejs;
    }

    server {
        listen 443;
        proxy_pass $upstream;
        ssl_preread on;
    }
}
In your case, this configuration will pass the connection directly to your nodejs and java backend servers, so nodejs will need to negotiate the SSL itself. You can offload this work to NGiNX using another server context, like:
stream {
    upstream java {
        server __your_java_server_ip__:8068;
    }

    upstream nodejs {
        server 127.0.0.1:444;
    }

    map $ssl_preread_protocol $upstream {
        default java;
        "TLSv1.2" nodejs;
    }

    server {
        listen 443;
        proxy_pass $upstream;
        ssl_preread on;
    }
}

http {
    server {
        listen 444 ssl;
        __your_ssl_cert_configurations_here__

        location / {
            proxy_pass http://__your_nodejs_server_ip__:80;
        }
    }
}
You'll need NGiNX version 1.15.2 or later for this configuration to work, compiled with the ngx_stream_ssl_preread_module module (you need to build with the --with-stream_ssl_preread_module configuration parameter, because this module is not built by default).
Source: https://www.nginx.com/blog/running-non-ssl-protocols-over-ssl-port-nginx-1-15-2/
I am using the following server script to run both HTTP and HTTPS servers and redirect all HTTP requests to HTTPS.
When I access the server both locally and remotely by IP address, requests redirect to HTTPS and the API works, with an insecure-certificate warning.
But when I access the same routes via the domain, I get a "Site cannot be reached" error.
Although accessing http://example.com/test-route redirects to https://example.com/test-route, I am still getting the "Site can't be reached" error.
import http from 'http';
import https from 'https';
import redirectHttps from 'redirect-https';
import greenlock from 'greenlock';
import app from '../app';

var le = greenlock.create({
  server: 'staging', // using https://acme-v01.api.letsencrypt.org/directory in prod
  configDir: 'certs',
  approveDomains: (opts, certs, cb) => {
    if (certs) {
      opts.domains = ['example.com'];
    } else {
      opts.email = 'me@mymail.com';
      opts.agreeTos = true;
    }
    cb(null, {
      options: opts,
      certs: certs
    });
  },
});

http.createServer(le.middleware(redirectHttps())).listen(80, function() {
  console.log("Server Running On http @ port " + 80);
});

https.createServer(le.httpsOptions, le.middleware(app)).listen(443, function() {
  console.log("Server Running On https @ port " + 443);
});
There are a number of reasons this could be happening, and a lot has been updated in the library since you posted this question.
I've spent a lot of time recently updating the documentation and examples:
https://git.coolaj86.com/coolaj86/greenlock-express.js
I'd suggest taking a look at the video tutorial:
https://youtu.be/e8vaR4CEZ5s
And check each of the items in the troubleshooting section. For reference:
What if the example didn't work?
Double check the following:
Public Facing IP for http-01 challenges
Are you running this as a public-facing webserver (good)? or localhost (bad)?
Does ifconfig show a public address (good)? or a private one - 10.x, 192.168.x, etc (bad)?
If you're on a non-public server, are you using the dns-01 challenge?
correct ACME version
Let's Encrypt v2 (ACME v2) must use version: 'draft-11'
Let's Encrypt v1 must use version: 'v01'
valid email
You MUST set email to a valid address
MX records must validate (dig MX example.com for 'john@example.com')
valid DNS records
You MUST set approveDomains to real domains
Must have public DNS records (test with dig +trace A example.com; dig +trace www.example.com for [ 'example.com', 'www.example.com' ])
write access
You MUST set configDir to a writeable location (test with touch ~/acme/etc/tmp.tmp)
port binding privileges
You MUST be able to bind to ports 80 and 443
You can do this via sudo or setcap
API limits
You MUST NOT exceed the API usage limits per domain, certificate, IP address, etc
Red Lock, Untrusted
You MUST change the server value in production
Shorten the 'acme-staging-v02' part of the server URL to 'acme-v02'
Please post an issue at the repository if you're still having trouble and I'll do my best to help you sort things out. Make sure to upgrade to the latest version because it has better debug logging.
I'm starting work with Nginx to reverse proxy my app for internet access outside my customer's network.
I managed to make it work, limiting the URLs that need to be exposed etc., but one thing remains to finish my work.
I want to limit user access based on the username. But instead of creating an if for every user I want to block, I would like to use a wildcard, because all the users I want to block end with a specific string: #saspw
Sample of my /etc/nginx/conf.d/reverseproxy.conf
server {
    listen 80; # Proxy traffic for SAS Visual Analytics Transport Services on HTTP
    server_name mcdonalds-va-reverseproxy.cons.sashq-r.openstack.sas.com;

    if ($remote_user = '*#saspw') {
        return 401;
    }

    location /SASVisualAnalyticsTransport {
        proxy_pass https://mtbis.mcdonalds.com.au:8343/SASVisualAnalyticsTransport/;
    }
}
In the $remote_user if, I would like all users whose usernames end in #saspw to get a 401 error (which I will change to a 404 later).
It only works if I put the whole username, like joe#saspw. Using a wildcard (*, ?) does not work.
Is there a way to make $remote_user match wildcards that way?
Thank you,
Joe
Use the nginx map module:
map $remote_user $is_denied {
    default 0;
    "~.*#saspw" 1;
}

server {
    listen 80; # Proxy traffic for SAS Visual Analytics Transport Services on HTTP
    server_name mcdonalds-va-reverseproxy.cons.sashq-r.openstack.sas.com;

    if ($is_denied) {
        return 401;
    }

    location /SASVisualAnalyticsTransport {
        proxy_pass https://mtbis.mcdonalds.com.au:8343/SASVisualAnalyticsTransport/;
    }
}
It lets you use regexes. Note that map must be placed outside the server directive (i.e., in the http context).