Websocket Autobahn Python client: how to connect to server using server and client certificates?

A websocket client (using Autobahn/Python and Twisted) needs to connect to a websocket server: the client needs to present its client certificate to the server and the client needs to check the server's certificate. These certificates have been created, for instance, during setup of a Kubernetes minikube installation. In particular:
server certificate ~/.minikube/ca.crt (in X509 format from what I understand).
client certificate ~/.minikube/client.crt with key ~/.minikube/client.key.
I've checked that I can successfully use these certificates+key to issue Kubernetes remote API calls using curl.
From Autobahn's echo_tls/client.py example I understand that I may need to use an ssl.ClientContextFactory(); ssl here refers to the pyOpenSSL package that Twisted automatically imports.
However, I cannot figure out how to pass the certificates to the factory.
How do I tell the websocket factory to present the client certificate to the server?
How do I tell the websocket client to check the server's certificate in order to detect MITM attacks?

After some trial and error I've now arrived at the solution below. To help others, I'll show not only the code but also a reference setup to test-drive the example code.
First, install minikube and start a minikube instance; I've tested with minikube 1.0.0, which runs Kubernetes 1.14 (current at the time of writing). Then start a simple websocket server that shows whatever is sent to it and sends anything you type back to the connected websocket client.
minikube start
kubectl run wsserver --generator=run-pod/v1 --rm -i --tty \
--image ubuntu:disco -- bash -c "\
apt-get update && apt-get install -y wget && \
wget https://github.com/vi/websocat/releases/download/v1.4.0/websocat_1.4.0_ssl1.1_amd64.deb && \
dpkg -i webso*.deb && \
websocat -vv -s 0.0.0.0:8000"
Next comes the Python code. It attempts to connect to the wsserver we've just started, going through the Kubernetes remote API of the minikube instance, which acts as a reverse proxy. A minikube setup usually uses mutual SSL/TLS authentication of client and server, so this is a "hard" test. Please note that there are also other methods, such as a server certificate plus a bearer token (instead of a client certificate).
import sys
from urllib.parse import urlparse

import kubernetes.client.configuration
import kubernetes.config
from autobahn.twisted.websocket import WebSocketClientFactory, \
    WebSocketClientProtocol, connectWS
from twisted.internet import reactor
from twisted.internet import ssl
from twisted.python import log


class EchoClientProto(WebSocketClientProtocol):
    def onOpen(self):
        print('onOpen')
        self.sendMessage('testing...\n'.encode('utf8'))

    def onMessage(self, payload, isBinary):
        print('onMessage')
        if not isBinary:
            print('message %s' % payload.decode('utf8'))

    def onClose(self, wasClean, code, reason):
        print('onClose', wasClean, code, reason)
        print('stopping reactor...')
        reactor.stop()


if __name__ == '__main__':
    log.startLogging(sys.stdout)

    # Select the Kubernetes cluster context of the minikube instance,
    # and see which client and server certificates need to be used in
    # order to talk to the minikube's remote API instance...
    kubernetes.config.load_kube_config(context='minikube')
    ccfg = kubernetes.client.configuration.Configuration._default

    print('Kubernetes API server CA certificate at %s' % ccfg.ssl_ca_cert)
    with open(ccfg.ssl_ca_cert) as ca_cert:
        trust_root = ssl.Certificate.loadPEM(ca_cert.read())

    print('Kubernetes client key at %s' % ccfg.key_file)
    print('Kubernetes client certificate at %s' % ccfg.cert_file)
    with open(ccfg.key_file) as cl_key, open(ccfg.cert_file) as cl_cert:
        client_cert = ssl.PrivateCertificate.loadPEM(
            cl_key.read() + cl_cert.read())

    # Now for the real meat: construct the secure websocket URL that connects
    # us with the example wsserver inside the minikube cluster, via the
    # remote API proxy verb.
    ws_url = ('wss://%s/api/v1/namespaces/default/pods/wsserver:8000/proxy/test'
              % urlparse(ccfg.host).netloc)
    print('will contact: %s' % ws_url)
    factory = WebSocketClientFactory(ws_url)
    factory.protocol = EchoClientProto

    # We need to attach the client and server certificates to our websocket
    # factory so it can successfully connect to the remote API.
    context = ssl.optionsForClientTLS(
        trust_root.getSubject().commonName.decode('utf8'),
        trustRoot=trust_root,
        clientCertificate=client_cert)
    connectWS(factory, context)

    print('starting reactor...')
    reactor.run()
    print('reactor stopped.')
The tricky part when attaching the client and server certificates using optionsForClientTLS is that Twisted/SSL expects to be told the name of the server we're going to talk to. This name is also needed for SNI, so that virtual servers can decide which of their multiple server certificates to present -- before any HTTP headers are exchanged!
Unfortunately, this is ugly territory -- and I would be glad to get feedback here! Simply using urlparse(ccfg.host).hostname works on some minikube instances, but not on others; I haven't yet figured out why seemingly similar instances behave differently.
My current workaround is to simply use the CN (common name) of the subject from the server's certificate. A more robust way might be to resort to such tactics only when the URL of the remote API server uses an IP address literal rather than a DNS name (or at least a label); see the sketch below.
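A minimal sketch of that fallback heuristic, reusing the trust_root object from the code above (tls_hostname is a hypothetical helper, not part of Autobahn or Twisted):
import ipaddress
from urllib.parse import urlparse

def tls_hostname(api_url, ca_certificate):
    # Prefer the DNS name from the kubeconfig URL for SNI and certificate
    # verification; fall back to the CA subject's CN only when the URL
    # carries an IP address literal.
    host = urlparse(api_url).hostname
    try:
        ipaddress.ip_address(host)
    except ValueError:
        return host  # a proper DNS name (or at least a label)
    return ca_certificate.getSubject().commonName.decode('utf8')
Under these assumptions, the first argument to ssl.optionsForClientTLS above would become tls_hostname(ccfg.host, trust_root).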
Finally, run the Python 3 code above with python3 wssex.py. If the script connects correctly, you should see a log message similar to 2019-05-03 12:34:56+9600 [-] {"peer": "tcp4:192.168.99.100:8443", "headers": {"sec-websocket-accept": ...
Additionally, the websocket server that you've started before should show log messages such as [INFO websocat::net_peer] Incoming TCP connection from Some(V4(172.17.0.1:35222)), and some more.
This then is proof that the client script has successfully connected to minikube's remote API via a secure websocket, passing authentication and access control, and is now connected to the (insecure) websocket demo server inside minikube.

Related

Puppet: Server hostname 'puppetmaster' did not match server certificate; expected one of puppetmaster.us-east-2.compute.internal, DNS:puppet,

I use puppet in AWS, and I get the following error when Puppet runs:
Puppet: Server hostname 'puppetmaster' did not match server certificate; expected one of puppetmaster.us-east-2.compute.internal, DNS:puppet,
Please find the following configurations:
#master /etc/hosts
ubuntu@puppetmaster:~$ cat /etc/hosts
127.0.0.1 localhost
172.31.16.177 puppetmaster puppet
172.31.19.211 ip-172-31-19-211 #client
#client
ubuntu@ip-172-31-19-211:~$ cat /etc/hosts
127.0.0.1 localhost
172.31.16.177 puppetmaster puppet
172.31.19.211 ip-172-31-19-211
ubuntu@ip-172-31-19-211:~$ cat /etc/puppetlabs/puppet/puppet.conf
# This file can be used to override the default puppet settings.
# See the following links for more details on what settings are available:
# - https://puppet.com/docs/puppet/latest/config_important_settings.html
# - https://puppet.com/docs/puppet/latest/config_about_settings.html
# - https://puppet.com/docs/puppet/latest/config_file_main.html
# - https://puppet.com/docs/puppet/latest/configuration.html
[main]
certname = ip-172-31-19-211
server = puppetmaster
The above are the hosts files of the master and the node machine, and I have configured the puppet.conf file on the node machine as well, but the client machine still cannot connect to the master. Please help me fix this issue.
Puppet uses cryptographic certificates on both the client side and the server side to authenticate machine identities. The error message shows that this authentication is failing because the certificate the server presents to the client does not identify it as the machine the client expects.
Specifically, the client expects the server to be identified as "puppetmaster", but that is not one of the identities listed in the cert ("puppetmaster.us-east-2.compute.internal" is among those identities, but it is not equivalent for this purpose).
There is considerable flexibility in how all this is set up, but for the smoothest experience, one should
Configure the Puppet server and all Puppet clients with fully-qualified, DNS-resolvable hostnames. Do this on each machine before installing any Puppet software on that machine, or at least before starting any Puppet component for the first time.
Do not change Puppet client or server hostnames after Puppet is set up.
Always use the chosen fully-qualified name to connect to the Puppet server. In particular, specify this as the server name in clients' puppet.conf configuration files.
The question is unclear about the exact circumstances in which the error is observed, but probably it occurs on a new client, while initially trying to connect it to the server. In that case the easiest solution would probably be to update the client's puppet.conf to specify the server via the name on its cert: "puppetmaster.us-east-2.compute.internal". That supposes the server can indeed be reached via that name; if not, then a new cert will probably need to be generated for the server.
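For instance, assuming the server really is reachable under the name on its cert, the [main] section of the client's puppet.conf from the question would become:
[main]
certname = ip-172-31-19-211
server = puppetmaster.us-east-2.compute.internal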

Issues connecting to mosquitto broker with node mqtt client via SSL/TLS

Hello, I created a mosquitto broker via the Eclipse docker image and recently followed this guide to add SSL/TLS support: http://www.steves-internet-guide.com/mosquitto-tls/.
When I am sshed into the VPS which is running the broker, I can use the command:
mosquitto_pub -h VPS_NAME -t test/topic -p 8883 --cafile ca.crt -m message -u BROKER_USERNAME -P BROKER_PASSWORD
and it publishes all fine and dandy. However, when I run the same command on a local computer, I get the error:
Unable to connect (Lookup error.).
I don't get any new logs from the broker container, so I think it's not even reaching the container. However when I run:
mosquitto_pub -h BROKER_IP_ADRESS -t test/topic -p 8883 --cafile ca.crt -m message -u BROKER_USERNAME -P BROKER_PASSWORD
I do get a response, which is Error: A TLS error occurred, and in my docker logs I get:
1583004287: New connection from LOCAL_IP_ADDRESS on port 8883.
1583004287: OpenSSL Error: error:14037438:SSL routines:ACCEPT_SR_KEY_EXCH:tlsv1 alert internal error
1583004287: OpenSSL Error: error:140370E5:SSL routines:ACCEPT_SR_KEY_EXCH:ssl handshake failure
1583004287: Socket error on client <unknown>, disconnecting.
I am only able to get a successful publish when I add the --insecure flag; however, I want to make sure the client knows that it's talking to the right server, so I don't think this is the right solution.
In the end I want to run an mqtt client on a node application, I've tried this piece of code:
const mqtt = require('mqtt');
const fs = require('fs');
const optionsz = {
ca: [ fs.readFileSync(__dirname + '/ca.pem') ],
host: 'BROKER_IP_ADDRESS',
servername: 'VPS_NAME',
port: 8883,
rejectUnauthorized : false,
username : 'BROKER_USERNAME', // mqtt credentials if these are needed to connect
password : 'BROKER_PASSWORD',
clientId : 'test',
// Necessary only if the server's cert isn't for "localhost".
checkServerIdentity: () => { return null; },
};
class MqttHandler {
constructor() {
this.mqttClient = null;
};
connect() {
// Connect mqtt with credentials (in case of needed, otherwise we can omit 2nd param)
this.mqttClient = mqtt.connect(this.host, optionsz);
...
When I run this I keep getting disconnect events, and in my docker logs I get:
1583004505: New connection from LOCAL_IP_ADDRESS on port 8883.
1583004505: OpenSSL Error: error:140260FC:SSL routines:ACCEPT_SR_CLNT_HELLO:unknown protocol
1583004505: Socket error on client <unknown>, disconnecting.
I am really confused about how to even tackle this issue. I've been able to connect to a broker without SSL/TLS protection, but I want to make my device communication more secure.
Thank you for your time!
Two separate problems here.
1. It looks like you don't have a valid DNS entry for your VPS. mosquitto_pub is failing because it can't resolve the name to an IP address. It works with --insecure and the IP address because you are telling mosquitto_pub to ignore the fact that the CN and SANs in the broker's certificate only include the name, not the IP address.
2. You are trying to connect with raw MQTT, not MQTT over TLS; you need to pass a URL, not just a hostname, as the first argument of the connect() function, e.g.
this.mqttClient = mqtt.connect("mqtts://" + this.host, optionsz);
To be honest you need to fix both of these to get things working properly.
To fix 1 you need to sort your DNS entries out so you have a valid fully qualified hostname that points to your VPS and matches the certificate you've deployed there.

Python Flask End To End Encryption Behind AWS ALB

I have a Python 3 Flask app running in an ECS cluster. The Flask app is configured to run in SSL mode. The app can't be accessed via the ALB CNAME; the connection is refused, as seen here -
curl -Il https://tek-app.example.com/health
curl: (7) Failed to connect to tek-app.example.com port 443: Connection refused
When the ALB is hit directly, ignoring the SSL cert exception, it works, as seen here -
curl -Il -k https://tek-w-appli-1234.eu-west-1.elb.amazonaws.com/health
HTTP/2 200
date: Sun, 24 Feb 2019 14:49:27 GMT
content-type: text/html; charset=utf-8
content-length: 9
server: Werkzeug/0.14.1 Python/3.7.2
I understand the main recommendation is to run it behind an Nginx or Apache proxy and to set the X-Forwarded headers via their configs, but I feel this is over-engineering the solution.
I've also tried enabling the following in the app -
from werkzeug.contrib.fixers import ProxyFix
...
app = Flask(__name__)
app.wsgi_app = ProxyFix(app.wsgi_app)
...
And this fix now produces the correct source IPs in the CloudWatch logs, but doesn't allow connections via the ALB CNAME.
Is there something simple that I'm missing here?
Reply to first answer
Thank you - the CNAME is pointing to the correct ALB. I ran into a similar issue two weeks back with an Apache server, and the fix was to ensure X-Forwarded-Proto was in use in the Apache vhosts.conf file. So I'm thinking this may be something similar.
I did it again - while developing locally I had edited my /etc/hosts file to have a local entry to play with. When the Flask app was later pushed to the cloud and tested from the same desktop, it was still resolving against the local DNS entry as opposed to the public equivalent, hence the connection refused. With the local entry removed, all is now working.
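One quick way to catch this class of mistake is to check what the name actually resolves to on the machine you are testing from; a minimal Python sketch (the hostname is the example CNAME from above):
import socket

# Resolution goes through the local resolver, /etc/hosts included, so a
# stale local override shows up here as an unexpected address.
print(socket.gethostbyname('tek-app.example.com'))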

SOCKS5 request from Azure VM

I have a Python 3 script that makes requests through a SOCKS5 proxy. I want to be able to run this script from an Azure VM. But when the request is being made I get the following error:
Not supported proxy scheme SOCKS5
I'm running Python 3.5.2 with requests 2.9.1 on an Ubuntu 16.10 LTS VM. I also installed pysocks to have requests works with SOCKS5.
The code that does the request is as follows:
server = 'socks5://u:p@proxy.server.com:1080'
proxies = { 'https': server, 'all': None }
response = requests.get(request_url, proxies=proxies)
The script runs fine locally, so it seems that Azure won't allow me to make use of SOCKS5 proxies.
I've also added port 1080 as allowed outbound connection to the networking interface of the VM.
How do I configure my VM in such a way that it will allow SOCKS5 connections from the VM?
Ok, it turns out that installing pysocks isn't enough.
When you use the following command:
pip3 install -U requests[socks]
It installs the packages required for requests to work properly with SOCKS5.
-U is the same as --upgrade. This flag is also required; without it you still won't be able to connect through SOCKS5.
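With requests[socks] installed, the original snippet should then work unchanged; for reference, a self-contained version (the proxy address and credentials are placeholders):
import requests

# 'socks5h://' would additionally resolve DNS through the proxy;
# plain 'socks5://' resolves hostnames locally.
proxies = {'https': 'socks5://user:password@proxy.server.com:1080'}
response = requests.get('https://httpbin.org/ip', proxies=proxies)
print(response.text)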

SSL handshake failure with node.js https

I have an API running with express using https. For testing, I've been using tinycert.org for the certificates, which work fine on my machine.
I'm using docker to package up the app, and docker-machine with docker-compose to run it on a digital ocean server.
When I try to connect with Chrome, I get ERR_SSL_VERSION_OR_CIPHER_MISMATCH. When running this with curl, I get a handshake failure: curl: (35) SSL peer handshake failed, the server most likely requires a client certificate to connect.
I tried to debug with Wireshark's SSL dissector, but it hasn't given me much more info: I can see the "Client Hello" and then the next frame is "Handshake Failure (40)".
I considered that maybe node on the docker container has no available ciphers, but it has a huge list, so it can't be that. I'm unsure as to what's going on and how to remedy it.
EDIT
Here's my createServer() block:
let app = express();
let httpsOpts = {
key: fs.readFileSync("./secure/key.pem"),
cert: fs.readFileSync("./secure/cert.pem")
};
let port = 8080;
https.createServer(httpsOpts, app).listen(port);
I've had this problem for a really long time too; there's a weird fix:
Don't convert your certs to .pem; it works fine as .crt and .key files.
Add ca: fs.readFileSync("path to CA bundle file") to the https options.
It looks like your server is only sending the top certificate; the CA bundle file has the intermediate and root certificates, which you'll need for non-browser use.
IMPORTANT! Reinstall or update node to the latest version.
You can use sudo apt-get upgrade if you're on Linux (it may take a while).
Re-download your certificate or get a new one.
If you are acting as your own certificate authority, the client may not be recognizing / trusting the certificate, so try testing your site on ssllabs.com.
If you're using the http2 API try adding allowHTTP1: true to the options.
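Independent of node, it can also help to probe the handshake directly and see which protocol and cipher actually get negotiated; a small Python sketch, with host and port as placeholders for the server under test:
import socket
import ssl

context = ssl.create_default_context()
context.check_hostname = False       # we only want handshake diagnostics here,
context.verify_mode = ssl.CERT_NONE  # not certificate validation

with socket.create_connection(('example.com', 8080)) as sock:
    with context.wrap_socket(sock, server_hostname='example.com') as tls:
        # A cipher or protocol mismatch raises ssl.SSLError inside
        # wrap_socket instead of reaching this point.
        print(tls.version(), tls.cipher())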
