Creating an HTTP proxy to handle mTLS connections with Hyper - rust

I need some help creating a proxy in hyper that handles an mTLS connection. I used this example https://github.com/hyperium/hyper/blob/0.14.x/examples/http_proxy.rs as a starting point and worked through adding tokio-rustls to support the mTLS connection. Here is the code that handles the HTTP tunnel:
async fn tunnel(
    mut upgraded: Upgraded,
    destination: (String, u16),
    certificates: Certificates,
) -> std::io::Result<()> {
    let root_cert_store = prepare_cert_store(&certificates);
    let tls_client_config = tls_config(certificates, root_cert_store);
    let tls_connector = TlsConnector::from(Arc::new(tls_client_config));

    let target = TcpStream::connect(target_address(&destination)).await?;
    let domain = rustls::ServerName::try_from(destination.0.as_str()).expect("Invalid DNSName");
    let mut tls_target = tls_connector.connect(domain, target).await?;
    debug!("TLS connection ready");

    let (wrote, recvd) = tokio::io::copy_bidirectional(&mut upgraded, &mut tls_target).await?;
    debug!("Client wrote {} and received {} bytes", wrote, recvd);
    Ok(())
}
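For context, tunnel is spawned from the CONNECT handler roughly as in the linked hyper example (sketch only; parse_destination stands in for my host/port parsing of the CONNECT target, and certificates comes from the surrounding state):

use hyper::{Body, Method, Request, Response};

async fn proxy(req: Request<Body>, certificates: Certificates) -> Result<Response<Body>, hyper::Error> {
    if Method::CONNECT == req.method() {
        // e.g. "CONNECT nginx-test:443" -> ("nginx-test", 443)
        let destination = parse_destination(req.uri());
        tokio::task::spawn(async move {
            match hyper::upgrade::on(req).await {
                Ok(upgraded) => {
                    if let Err(e) = tunnel(upgraded, destination, certificates).await {
                        debug!("tunnel error: {}", e);
                    }
                }
                Err(e) => debug!("upgrade error: {}", e),
            }
        });
        // Reply 200 to the CONNECT so the client starts using the tunnel.
        Ok(Response::new(Body::empty()))
    } else {
        // Non-CONNECT requests are not relevant to the mTLS problem here.
        Ok(Response::new(Body::empty()))
    }
}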
To make the proxy connection I made a really small snippet in Kotlin:
fun main() {
    val fm = FuelManager.instance
    fm.proxy = Proxy(Proxy.Type.HTTP, InetSocketAddress("127.0.0.1", 8100))
    repeat(1) {
        val response = "https://nginx-test".httpGet()
            .responseString()
        println("Response: $response")
        println("Response Data: ${response.second.data.toString(Charset.defaultCharset())}")
    }
}
It just sets the proxy address and makes a call to a local nginx server where the mTLS auth is expected to occur.
This Kotlin code throws the following error: Unsupported or unrecognized SSL message
And Nginx logs the request like this:
172.18.0.1 - - [01/Feb/2023:17:51:08 +0000] "\x16\x03\x03\x01\xBF\x01\x00\x01\xBB\x03\x03\xD90\xF6+\xCEIvRr\xEF\x84{\x82\xD0\xA0\xFB8\xAD\xEB\x11\x1D\xC4 " 400 157 "-" "-" "-"
I'm assuming that the message is being delivered still encrypted to the Nginx server, and that I can't really just copy the bytes from one connection to the other like this: let (wrote, recvd) = tokio::io::copy_bidirectional(&mut upgraded, &mut tls_target).await?;
Maybe I should wrap the "upgraded" connection in some sort of TlsAcceptor so it can decrypt the bytes before writing them, but I could not figure out how to do it.
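For reference, this is roughly the shape of what I was trying (untested sketch; proxy_server_config would be a rustls::ServerConfig with a certificate and key that the client trusts for the target host, which I don't have wired up yet):

use tokio_rustls::TlsAcceptor;

async fn tunnel_terminating_tls(
    upgraded: Upgraded,
    destination: (String, u16),
    certificates: Certificates,
    proxy_server_config: Arc<rustls::ServerConfig>,
) -> std::io::Result<()> {
    // Accept the client's TLS handshake on the upgraded CONNECT stream,
    // so the proxy sees plaintext on this side.
    let acceptor = TlsAcceptor::from(proxy_server_config);
    let mut client_tls = acceptor.accept(upgraded).await?;

    // Open the outbound mTLS connection exactly as in tunnel().
    let root_cert_store = prepare_cert_store(&certificates);
    let tls_client_config = tls_config(certificates, root_cert_store);
    let tls_connector = TlsConnector::from(Arc::new(tls_client_config));
    let target = TcpStream::connect(target_address(&destination)).await?;
    let domain = rustls::ServerName::try_from(destination.0.as_str()).expect("Invalid DNSName");
    let mut tls_target = tls_connector.connect(domain, target).await?;

    // Both sides are now decrypted/re-encrypted by the proxy, so the bytes
    // copied between them are plaintext in the middle.
    let (wrote, recvd) = tokio::io::copy_bidirectional(&mut client_tls, &mut tls_target).await?;
    debug!("Client wrote {} and received {} bytes", wrote, recvd);
    Ok(())
}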
Does anyone have any thoughts on this?
Here is my nginx config:
server {
    listen 80;
    location / {
        return 301 https://$host$request_uri;
    }
}
server {
    listen 443 ssl;
    server_name nginx.test;

    ssl_certificate /etc/ssl/server.crt;
    ssl_certificate_key /etc/ssl/server.key;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_client_certificate /etc/nginx/client_certs/ca.crt;
    ssl_verify_client optional;
    ssl_verify_depth 2;

    location / {
        if ($ssl_client_verify != SUCCESS) { return 403; }
        proxy_set_header SSL_Client_Issuer $ssl_client_i_dn;
        proxy_set_header SSL_Client $ssl_client_s_dn;
        proxy_set_header SSL_Client_Verify $ssl_client_verify;
        return 200 "yay";
    }
}

Related

What could be the misconfiguration in my api_gateway.conf file that is leading to the error SSL_do_handshake() failed?

I am getting this message in the nginx (1.18) error.log file:
*39 SSL_do_handshake() failed (SSL: error:1408F10B:SSL routines:ssl3_get_record:wrong version number) while SSL handshaking to upstream
I saw a lot of answers here, but none of them solves my problem.
I am trying to implement an API gateway. It should be the simplest thing in the world...
api_gateway.conf
include api_keys.conf;
server {
    access_log /var/log/nginx/api_access.log; # Each API may also log to a separate file

    auth_request /_validate_apikey;

    root /var/www/api;
    index index.html index.htm index.nginx-debian.html;

    listen 443 ssl;
    server_name api.example.com.br;

    location /microservices/ {
        proxy_pass https://127.0.0.1:10001/;
    }
    location /ms-email-sender/ {
        proxy_pass https://127.0.0.1:10002/;
    }

    # Error responses
    error_page 404 = @400;         # Treat invalid paths as bad requests
    proxy_intercept_errors on;     # Do not send backend errors to client
    include api_json_errors.conf;  # API client-friendly JSON errors
    default_type application/json; # If no content-type, assume JSON

    # API key validation
    location = /_validate_apikey {
        internal;
        if ($http_apikey = "") {
            return 401; # Unauthorized
        }
        if ($api_client_name = "") {
            return 403; # Forbidden
        }
        return 204; # OK (no content)
    }

    ssl_certificate /etc/letsencrypt/live/api.optimusdata.com.br/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/api.optimusdata.com.br/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
    if ($host = api.optimusdata.com.br) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    server_name api.optimusdata.com.br;
    listen 80;
    return 404; # managed by Certbot
}
server {
if ($host = api.optimusdata.com.br) {
return 301 https://$host$request_uri;
} # managed by Certbot
server_name api.optimusdata.com.br;
listen 80;
return 404; # managed by Certbot
}
My services are written in Node.js.
I tried to put some directives under the location block, like:
proxy_ssl_verify off;
I changed a lot of things in the api_gateway.conf.
I saw several tutorials on the web, and all of them look quite like that one.
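One thing I'm not sure about: from what I've read, the "wrong version number" part of this error typically appears when the upstream speaks plain HTTP while nginx tries to start a TLS handshake with it. If my Node.js services on ports 10001/10002 only serve plain HTTP, would the location blocks simply need to look like this?

location /microservices/ {
    # upstream serves plain HTTP, so no TLS handshake towards it
    proxy_pass http://127.0.0.1:10001/;
}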

Nginx multiport with if condition

I need to open multiple ports in nginx and, if a request comes in on a specific port, proxy pass it to another specific port, and so on.
I think the port mapping is okay, but I need to know what the best practice for it is.
server {
    listen 808;
    listen [::]:808;
    listen 809;
    listen [::]:809;
    server_name _;

    access_log /var/log/nginx/access_socks_proxy.log;
    error_log /var/log/nginx/error_socks_proxy.log;

    if ($server_port = 808) {
        location / {
            proxy_pass http://127.0.0.1:10808;
        }
    }
    if ($server_port = 809) {
        location / {
            proxy_pass http://127.0.0.1:10809;
        }
    }
}
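From what I've read, nginx doesn't allow location blocks inside if, so I'm guessing the cleaner pattern is one server block per port, something like the sketch below, but I'd like to know if that's really the best practice:

server {
    listen 808;
    listen [::]:808;
    server_name _;
    access_log /var/log/nginx/access_socks_proxy.log;
    error_log /var/log/nginx/error_socks_proxy.log;

    location / {
        proxy_pass http://127.0.0.1:10808;
    }
}
server {
    listen 809;
    listen [::]:809;
    server_name _;
    access_log /var/log/nginx/access_socks_proxy.log;
    error_log /var/log/nginx/error_socks_proxy.log;

    location / {
        proxy_pass http://127.0.0.1:10809;
    }
}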

Istio retry isn't triggered when an error occurs at transport layer

Over the last few days, I have been trying to understand the Istio retry policy. I found a field named "retry-on". The default value of this field is below (I use Istio version 1.14.3).
RetryOn: "connect-failure,refused-stream,unavailable,cancelled,retriable-status-codes"
link to the source code
I want to know which cases are included in "connect-failure". The documentation explains it like this:
connect-failure
Envoy will attempt a retry if a request is failed because of a connection failure to the upstream server (connect timeout, etc.). (Included in 5xx)
NOTE: A connection failure/timeout is at the TCP level, not the request level. This does not include upstream request timeouts specified via x-envoy-upstream-rq-timeout-ms or via route configuration or via virtual host retry policy.
link to doc
So, I think it will retry if any error occurs in the TCP protocol at the transport layer. I tried to prove that by creating two pods in the Kubernetes cluster. The first is Nginx, forwarding every HTTP request to the second. The second is a NodeJS TCP server that will break the TCP connection if you send an HTTP request with the "/error" path. I show it below.
Nginx
user nginx;
error_log /var/log/nginx/error.log warn;

events {
    worker_connections 65535;
    use epoll;
    multi_accept on;
}

http {
    log_format main escape=json
        '{'
        '"clientIP":"$remote_addr",'
        '"time-local":"$time_local",'
        '"server-port":"$server_port",'
        '"message":"$request",'
        '"statusCode":"$status",'
        '"dataLength":"$body_bytes_sent",'
        '"referer":"$http_referer",'
        '"userAgent":"$http_user_agent",'
        '"xForwardedFor":"$http_x_forwarded_for",'
        '"upstream-response-time":"$upstream_response_time",'
        '"correlation-id":"$http_x_correlation_id",'
        '"user-tier":"$http_x_neo_user_tier",'
        '"session-id":"$http_x_session_id"'
        '}';

    access_log /var/log/nginx/access.log main;

    client_max_body_size 100m;
    client_header_timeout 5m; # default 60s
    client_body_timeout 5m;   # default 60s
    send_timeout 5m;          # default 60s
    proxy_connect_timeout 5m;
    proxy_send_timeout 5m;
    proxy_read_timeout 5m;

    server {
        listen 8080;
        location / {
            proxy_pass http://ice-node-service.neo-platform.svc.cluster.local:8080;
        }
    }
}
NodeJS
var net = require('net');
var server = net.createServer();
server.listen(8080, '127.0.0.1');

server.addListener('close', () => {
    console.log('close');
})

server.addListener('connection', socket => {
    console.log('connect');
    socket.addListener('data', data => {
        try {
            const [method, path] = data.toString().split("\n")[0].split(" ")
            console.log(method, path);
            if (path === "/error") {
                socket.destroy(new Error("force error"))
            } else {
                socket.write(respond())
                socket.end()
            }
        } catch (e) {
            console.log(e);
        }
    })
})

server.addListener('error', err => {
    console.log('error', err);
})

server.addListener('listening', () => {
    console.log('listening');
})
function respond() {
    const body = `<html><body>Hello</body></html>`
    // Headers and body are separated by a blank line; Content-Length
    // accounts for the trailing \r\n appended after the body.
    return `HTTP/1.1 200 OK
Date: ${new Date().toGMTString()}
Server: Apache
Last-Modified: Tue, 01 Dec 2009 20:18:22 GMT
ETag: "51142bc1-7449-479b075b2891b"
Accept-Ranges: bytes
Content-Length: ${body.length + 2}
Content-Type: text/html

${body}\r\n`
}
So, I tried to send a request through Nginx to the NodeJS server on the "/error" path. I expected Istio to resend the request when the TCP connection was broken, but it wasn't retried. I want to know why.

Why aren't my creds being passed into my nginx.conf?

My original nginx.conf:
events {}
http {
    server {
        include credentials.conf;
        listen 80;
        location / {
            proxy_set_header Authorization $credentials;
            proxy_pass [website_of_choice];
        }
    }
}
My credentials.conf:
set $credentials 'Basic [long_encoded_login_details]';
But this won't work when nginx starts.
Using .conf as a file ending will not work in some cases. This should give you an error when reloading nginx with sudo nginx -s reload, or with nginx -t on a config test. It depends on your nginx.conf: check whether there is any include directive that pulls in everything matching *.conf from a given directory.
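For example, many stock nginx.conf files contain a wildcard include along these lines in the http block; any *.conf file picked up that way is parsed at http level, where a bare set directive is not allowed:

# in nginx.conf, inside the http block
include /etc/nginx/conf.d/*.conf;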
Use this instead:
credentials.include
set $credentials 'Basic [long_encoded_login_details]';
nginx.conf
events {}
http {
    server {
        include credentials.include;
        listen 80;
        location / {
            proxy_set_header Authorization $credentials;
            proxy_pass [website_of_choice];
        }
    }
}

Nginx Proxy + NodeJS WebSocket + >17KB messages. No traffic. Who is the culprit?

Impossible to increase the buffer size enough to avoid dropping frames,
OR
unable to manage WS fragmentation correctly.
Summary
My goal:
A very simple thing: a websocket tunnel that can transfer at least 2-3 MB of data per tunnel. I need to send directory structures, so the data can be quite large.
The problem:
Sending WebSocket messages over 17KB from A to B causes a "communication lost" or packet drop/loss; the connection/tunnel remains up, but no new messages can be sent over the same tunnel from A to B; conversely, B to A continues to work.
I must restart the tunnel to get functionality back.
Restarting the tunnel whenever the threshold is reached could also be an idea, but clearly I need to send more than the threshold at one time.
The "signal path":
GoLang app(Client) ---> :443 NGINX Proxy(Debian) ---> :8050 NodeJS WS Server
The tests:
Sending X messages/chunks of 1000 bytes each | messages are received up to the 17th chunk; the following ones are not received (see below)
The analyses:
Wireshark on Go app shows the flow of all packets
tcpdump, on Debian machine, set to listen on eth (public), shows the flow of all packets
tcpdump, on Debian machine, set to listen on lo interface (for rev proxy scanning), shows the flow of all packets
NodeJS/fastify-websocket ws.on('message', (msg)=>{console.log(msg)}) shows up to the 17th chunk
Code & Config:
GoLang app relevant part
websocket.DefaultDialer = &websocket.Dialer{
Proxy: http.ProxyFromEnvironment,
HandshakeTimeout: 45 * time.Second,
WriteBufferSize: 1000, //also tried with 2000, 5000, 10000, 11000
}
c, _, err := websocket.DefaultDialer.Dial(u.String(), nil)
wsConn = c
bufferChunk := 1000
bufferSample := ""
for j := 7; j <= bufferChunk; j++ {
bufferSample = bufferSample + "0"
}
i := 1
for {
sendingBytes := i * bufferChunk
fmt.Println(strconv.Itoa(sendingBytes) + " bytes sent")
wsConn.WriteMessage(websocket.TextMessage, []byte(bufferSample))
i++
time.Sleep(1000 * time.Millisecond)
}
NGINX conf:
upstream backend {
    server 127.0.0.1:8050;
}
server {
    server_name my.domain.com;

    large_client_header_buffers 8 32k;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;

        proxy_buffers 8 2m;
        proxy_buffer_size 10m;
        proxy_busy_buffers_size 10m;

        proxy_pass http://backend;
        proxy_redirect off;
        #proxy_buffering off; ### ON/OFF IT'S THE SAME

        # enables WS support
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade"; ### "upgrade" it's the same
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/my.domain.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/my.domain.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
    if ($host = my.domain.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    server_name my.domain.com;
    listen 80;
    return 404; # managed by Certbot
}
NodeJS code:
//index.js
const config = require("./config.js");
const fastify = require('fastify')();
const WsController = require("./controller");

fastify.register(require('fastify-websocket'), {
    /* these options are the same as the native nodeJS WS */
    options: {
        maxPayload: 10 * 1024 * 1024,
        maxReceivedFrameSize: 131072,
        maxReceivedMessageSize: 10 * 1024 * 1024,
        autoAcceptConnections: false
    }
});

fastify.ready(err => {
    if (err) throw err
    console.log("Server started")
    fastify.websocketServer
        .on("connection", WsController)
})
//controller.js
module.exports = (ws, req) => {
    ws.on("message", (msg) => {
        console.log("msg received"); // it is shown as long as the tunnel does not "fill" up to 17KB
    })
}
SOLVED
After updating fastify and fastify-websocket, the problem disappeared. What a shame!
I came up with this solution by creating a new cloud instance and installing everything from scratch.
Just npm update.
Thank you all for your support