I have this simple Caddy JSON config to proxy requests from https://localhost to my local server running on port 8080. That works fine.
{
  "apps": {
    "http": {
      "servers": {
        "localhost": {
          "listen": [":443"],
          "routes": [
            {
              "match": [
                {
                  "host": ["localhost"]
                }
              ],
              "handle": [
                {
                  "handler": "reverse_proxy",
                  "upstreams": [
                    {
                      "dial": "localhost:8080"
                    }
                  ]
                }
              ]
            }
          ]
        }
      }
    }
  }
}
But I would also like my Caddy server to either:
reverse proxy calls from wss://localhost:3000/ws to ws://localhost:3000/ws (HTTPS to HTTP), or
simply ignore WebSocket calls and let them reach ws://localhost:3000/ws directly.
I have not found a proper example or documentation for the JSON config.
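Note that Caddy 2's reverse_proxy handler passes WebSocket upgrades through transparently, so an extra route matching the WebSocket path, placed before the existing route, may be all you need. A minimal sketch (the /ws* path matcher and the port 3000 upstream are assumptions taken from the question, not a verified config):

{
  "match": [
    {
      "host": ["localhost"],
      "path": ["/ws*"]
    }
  ],
  "handle": [
    {
      "handler": "reverse_proxy",
      "upstreams": [
        {
          "dial": "localhost:3000"
        }
      ]
    }
  ]
}

With this route first in the "routes" array, WebSocket traffic would go to port 3000 and everything else would continue to port 8080.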
Check whether Caddy 2.5 (Apr. 2022) could help in your use case.
It comes with "Reverse proxy: Dynamic upstreams":
The ability to get the list of upstreams at every request (more specifically, every iteration in the proxy loop of every request) rather than just once at config-load time.
Dynamic upstream modules can be plugged in to provide Caddy with the latest list of backends in real-time.
Two standard modules have been implemented which can get upstreams from SRV and A/AAAA record lookups.
Warning: This deprecates the lookup_srv JSON field for upstreams (and srv+ scheme prefix in the Caddyfile), which will be removed in the future.
See PR 4470:
Right now, proxy upstreams have to be hard-coded into the config (with the only dynamism coming from placeholders, which all act as a single upstream anyway).
This change adds support for truly dynamic upstreams, with the potential for every request to have different upstreams -- not only every request, but every retry within a single request, too.
Instead of (or in addition to) specifying upstreams in your config, you could specify dynamic_upstreams and then define your upstream source module. Currently I'm implementing SRV and A/AAAA lookups as sources.
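In JSON, that would look roughly like the following inside the reverse_proxy handler, using the "a" source for A/AAAA record lookups described above (a sketch; backend.example.com and the port are placeholders):

{
  "handler": "reverse_proxy",
  "dynamic_upstreams": {
    "source": "a",
    "name": "backend.example.com",
    "port": "8080"
  }
}

The upstream list is then resolved from DNS on each request rather than fixed at config-load time.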
Related
{
    auto_https off
    servers {
        protocol {
            experimental_http3
        }
    }
}

:80 {
    redir https://{host}{uri}
}

aero.xyz:443, cdn.aero.xyz:443 {
    tls /etc/caddy/aero.xyz.crt /etc/caddy/aero.xyz.key
    # other stuff
}
The above is my Caddyfile. I only want aero.xyz renewed automatically, since cdn.aero.xyz is behind Cloudflare, which manages that certificate for me.
I see it is possible to do it with the tls configuration here: https://caddy.community/t/using-caddy-to-keep-certificates-renewed/7525
But is it possible while I am running an HTTPS server? If so, how do I modify the tls setting, or how exactly should I change the Caddyfile?
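One possible direction (a sketch, not verified against your setup): remove auto_https off from the global options so Caddy's certificate automation stays active, then split the two hosts into separate site blocks so only cdn.aero.xyz uses the manually supplied certificate:

aero.xyz:443 {
    # no tls directive: Caddy obtains and renews this certificate automatically
    # other stuff
}

cdn.aero.xyz:443 {
    tls /etc/caddy/aero.xyz.crt /etc/caddy/aero.xyz.key
    # other stuff
}

Since cdn.aero.xyz has an explicit tls directive pointing at cert files, Caddy should not try to manage a certificate for it.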
Short background: If we go back in time to about 2006: we (i.e., my company) used a Java client app embedded in the browser that connected via port 443 to a C program backend running on port 8068 on an in-house server. At the time the Java app was first developed, port 443 was the only port that we knew would not be blocked by the customers that used the software (for ease of installation, and possibly because the customers' in-house staff didn't have the power or knowledge to control their internal firewall).
Fast-forward to 2016, and I'm hired to help develop a NodeJS/JavaScript version of that Java app. The Java app continues to be used during development of its replacement, but whoops: we learn that browsers will drop support for embedded Java in the near future. So we switch to Java Web Start, so that customers can continue to download the app, and it still connects to the in-house server with its port 443->8068 routing.
2017 rolls around and, don't you know, we can't use the upcoming JS web app with HTTPS/SSL and the Java app at the same time, because they use the same port. "OK, let's use NGINX to solve the problem." But due to in-house politics, customer needs, and a turnover of web developer staff, we never get around to truly making that work.
So here we are in 2020, ready to deploy the new web version of the client software, and the whole 443 mess rears its ugly head again.
Essentially, I am looking to allow (for the time being) the Java app to continue using 443, while also letting the web app use HTTPS. Back in 2017/2018 we Googled ways to let them coexist through NGINX, but we never really got them to work properly, or the examples and tutorials were incomplete or confusing. It seemed like we needed either to use streaming along the lines of https://www.nginx.com/blog/running-non-ssl-protocols-over-ssl-port-nginx-1-15-2/ , or to look at the incoming HTTPS header and do an 'if (https) { route to NodeJS server } else { assume it must be the Java app and route to port 8068 }'-sort of arrangement inside the NGINX config file.
The links we found by Googling back then no longer exist, so if anyone knows of an NGINX configuration that allows an HTTPS website to hand off to a non-SSL application that still needs to use 443, I would greatly appreciate it. Any docs and/or tutorials that point us in the right direction would be helpful too. Thanks in advance!
You can do this using the ssl_preread option. Basically, this option gives you access to the variable $ssl_preread_protocol, which contains the TLS protocol version detected in the client hello on the SSL port. If no valid protocol was detected, the variable will be empty.
Using this parameter, you could apply the following configuration to your environment:
stream {
    upstream java {
        server __your_java_server_ip__:8068;
    }

    upstream nodejs {
        server __your_node_js_server_ip__:443;
    }

    # Route TLS clients to the Node.js backend; everything else goes to the Java app.
    map $ssl_preread_protocol $upstream {
        default java;
        "TLSv1.2" nodejs;
    }

    server {
        listen 443;
        proxy_pass $upstream;
        ssl_preread on;
    }
}
In your case, this configuration passes the connection straight through to your NodeJS and Java backend servers, so NodeJS will need to handle the SSL negotiation itself. You can hand that work to NGINX instead, using another server context, like:
stream {
    upstream java {
        server __your_java_server_ip__:8068;
    }

    upstream nodejs {
        # Loops back into the local nginx http server below, which terminates TLS.
        server 127.0.0.1:444;
    }

    map $ssl_preread_protocol $upstream {
        default java;
        "TLSv1.2" nodejs;
    }

    server {
        listen 443;
        proxy_pass $upstream;
        ssl_preread on;
    }
}

http {
    server {
        listen 444 ssl;
        __your_ssl_cert_configurations_here__

        location / {
            # Plain HTTP to the Node.js backend; TLS was already terminated above.
            proxy_pass http://__your_nodejs_server_ip__:80;
        }
    }
}
You'll need NGINX version 1.15.2 or later for this configuration to work, compiled with the ngx_stream_ssl_preread_module module (you need to compile with the --with-stream_ssl_preread_module configuration parameter, because this module is not built by default).
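To check whether your nginx binary was built with the module (a quick sanity check; nginx -V prints the configure arguments to stderr):

nginx -V 2>&1 | grep -o -- --with-stream_ssl_preread_module

If the flag is printed, the module is available.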
Source: https://www.nginx.com/blog/running-non-ssl-protocols-over-ssl-port-nginx-1-15-2/
I am using Vue.js's API proxying functionality (which internally uses http-proxy-middleware/http-proxy) to forward API requests to my local backend server. I set it up in vue.config.js like so:
module.exports = {
  devServer: {
    port: 8081,
    proxy: {
      '/api': {
        target: 'http://localhost:8080',
        xfwd: false
      }
    }
  }
}
For some weird reason, though, about every other request proxied from Chrome is slow: when a slow request is profiled in Chrome, the time is spent in a gap between fetchStart and requestStart.
Any idea what might be causing this delay between fetchStart and requestStart? When I access the proxy through 127.0.0.1, the problem goes away for some reason (DNS issues???). I've checked the backend, and it responds correctly. The problem doesn't exist in Firefox either.
The system is the latest Win10, checked on both stable and Canary Chrome.
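Given the observation that 127.0.0.1 works, one workaround to try (a sketch, not a confirmed fix) is to point the proxy target at the IPv4 loopback directly, sidestepping localhost name resolution:

// vue.config.js -- same config, but with an explicit IPv4 loopback target
module.exports = {
  devServer: {
    port: 8081,
    proxy: {
      '/api': {
        target: 'http://127.0.0.1:8080', // avoids localhost -> ::1 vs 127.0.0.1 resolution ambiguity
        xfwd: false
      }
    }
  }
}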
I am using the following server script to run both HTTP and HTTPS servers and redirect all HTTP requests to HTTPS.
When I access the server locally or remotely by IP address, the requests redirect to HTTPS and the API works, albeit with an insecure-certificate warning.
But when I access the same routes via the domain, I get a "Site cannot be reached" error.
Even though http://example.com/test-route redirects to https://example.com/test-route, I am still getting the "Site can't be reached" error.
import http from 'http';
import https from 'https';
import redirectHttps from 'redirect-https';
import greenlock from 'greenlock';
import app from '../app';

var le = greenlock.create({
  server: 'staging', // using https://acme-v01.api.letsencrypt.org/directory in prod
  configDir: 'certs',
  approveDomains: (opts, certs, cb) => {
    if (certs) {
      opts.domains = ['example.com'];
    } else {
      opts.email = 'me@mymail.com';
      opts.agreeTos = true;
    }
    cb(null, {
      options: opts,
      certs: certs
    });
  },
});

http.createServer(le.middleware(redirectHttps())).listen(80, function() {
  console.log("Server Running On http @ port " + 80);
});

https.createServer(le.httpsOptions, le.middleware(app)).listen(443, function() {
  console.log("Server Running On https @ port " + 443);
});
There are a number of reasons this could be happening, and a lot has been updated in the library since you posted this question.
I've spent a lot of time recently updating the documentation and examples:
https://git.coolaj86.com/coolaj86/greenlock-express.js
I'd suggest taking a look at the video tutorial:
https://youtu.be/e8vaR4CEZ5s
And check each of the items in the troubleshooting section. For reference:
What if the example didn't work?
Double check the following:
Public Facing IP for http-01 challenges
Are you running this as a public-facing webserver (good)? or localhost (bad)?
Does ifconfig show a public address (good)? or a private one - 10.x, 192.168.x, etc (bad)?
If you're on a non-public server, are you using the dns-01 challenge?
correct ACME version
Let's Encrypt v2 (ACME v2) must use version: 'draft-11'
Let's Encrypt v1 must use version: 'v01'
valid email
You MUST set email to a valid address
MX records must validate (dig MX example.com for 'john@example.com')
valid DNS records
You MUST set approveDomains to real domains
Must have public DNS records (test with dig +trace A example.com; dig +trace www.example.com for [ 'example.com', 'www.example.com' ])
write access
You MUST set configDir to a writable location (test with touch ~/acme/etc/tmp.tmp)
port binding privileges
You MUST be able to bind to ports 80 and 443
You can do this via sudo or setcap
API limits
You MUST NOT exceed the API usage limits per domain, certificate, IP address, etc
Red Lock, Untrusted
You MUST change the server value in production
Shorten the 'acme-staging-v02' part of the server URL to 'acme-v02'
Please post an issue at the repository if you're still having trouble and I'll do my best to help you sort things out. Make sure to upgrade to the latest version because it has better debug logging.
A simple question: I have a cluster of 10 nodes, and currently I list all the nodes in the configuration for my JS client. If I use HTTPS, will I lose the ability to query all nodes, since I only want to put a reverse proxy in front of one node? Is that correct? I haven't found any documentation about this.
I have already tried Shield or something like that, but it seems like overkill: I don't want SSL between nodes, I just want an HTTPS front.
{
  "name" : "node_01",
  "cluster_name" : "*****",
  "cluster_uuid" : "*****",
  "version" : {
    "number" : "5.4.1",
    "build_hash" : "2cfe0df",
    "build_date" : "2017-05-29T16:05:51.443Z",
    "build_snapshot" : false,
    "lucene_version" : "6.5.1"
  },
  "tagline" : "You Know, for Search"
}
Using a reverse proxy for TLS termination is a valid solution. Since Elasticsearch only speaks HTTP, it's very easy to proxy.
However, you probably want to use more than one nginx instance on more than one host; otherwise you'll have a single point of failure. You could do something like this:
Use 3 nodes for communication with your client. Only allow Elasticsearch's HTTP access on port 9200 from localhost and proxy it through nginx running on the same host. Let nginx terminate the TLS connection and accept connections from your client. Allow Elasticsearch's transport protocol access on port 9300 only among the 10 Elasticsearch nodes.
On the remaining 7 Elasticsearch nodes, don't allow any HTTP access and only allow the transport protocol for the other Elasticsearch nodes.
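As a rough illustration of the nginx piece on one of those three nodes (a sketch; the server name and certificate paths are placeholders):

http {
    server {
        listen 443 ssl;
        server_name es.example.com;

        # Placeholder certificate paths
        ssl_certificate     /etc/nginx/certs/es.example.com.crt;
        ssl_certificate_key /etc/nginx/certs/es.example.com.key;

        location / {
            # Elasticsearch HTTP API, reachable only from localhost
            proxy_pass http://127.0.0.1:9200;
            proxy_set_header Host $host;
        }
    }
}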
PS: If you are running on AWS (or a similar service), use an ELB to terminate TLS for you and keep your 10 nodes behind it.