AWS Load Balancer, enable listener for HTTPS and route to 80 - node.js

I have an AWS Load Balancer in front of two servers running SailsJS with PM2. The LB works very well and routes incoming HTTP requests to the servers, which is perfect.
Now, I need to add support for HTTPS, so I followed this guide:
AWS Create a Classic Load Balancer with an HTTPS Listener, using a self-generated SSL certificate, and used this configuration for the ports
LB Port 80 - Instance 80
LB Port 443 - Instance 80
And the security group has these ports open: 22, 80, and 443.
So, if I understood correctly, the LB will receive the HTTPS request on port 443 and will forward it to port 80 of the instance. My instance, of course, is listening on port 80.
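For reference, this is roughly the same listener configuration expressed with the AWS CLI (my-lb and the certificate ARN are placeholders):
aws elb create-load-balancer-listeners --load-balancer-name my-lb --listeners "Protocol=HTTPS,LoadBalancerPort=443,InstanceProtocol=HTTP,InstancePort=80,SSLCertificateId=arn:aws:iam::111111111:server-certificate/CertificateMyName"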
The problem is, this doesn't work! I can make HTTP requests to the LB and everything is routed perfectly to the Sails instance and the response is fine. But if I use exactly the same URL with HTTPS, it doesn't work and I get an "ERR_SSL_PROTOCOL_ERROR".
What am I doing wrong, what am I missing?
Thank you!
EDIT 1
This is what I get if I try curl -v https://example.com
* Trying xx.xx.xx.xx...
* Connected to example.com (xx.xx.xx.xx) port 443 (#0)
* Unknown SSL protocol error in connection to example.com:-9838
* Closing connection 0
curl: (35) Unknown SSL protocol error in connection to mydomain.com:-9838
EDIT 2
Found another thread which suggested a different way of creating the certificate. So I tried it, but now AWS doesn't even accept the private key and the certificate:
Server Certificate not found for the key: arn:aws:iam::111111111:server-certificate/CertificateMyName
EDIT 3
OK, so I found more info on why I couldn't load the certificates into AWS, and after a few attempts I managed to load and use one.
After that, it appears to be working (with the warnings that it is not a valid cert and so on, which is expected):
* Trying XXX.XXX.XXX.XXX...
* Connected to example.com (XXX.XXX.XXX.XXX) port 443 (#0)
* SSL certificate problem: Invalid certificate chain
* Closing connection 0
curl: (60) SSL certificate problem: Invalid certificate chain
So it appears to be working, and it appears, as @MarkB suggested, that the certificate was wrong. Using the info found in EDIT 2, I created a new one and uploaded it (with the info from EDIT 3), and it appears to be working.
I'll perform more tests to make 100% sure that this works and will report back soon.

OK, so the problem was a wrongly generated certificate. I used the first method I found and it wasn't working, so just use these commands instead:
openssl genrsa -out client-key.pem 2048
openssl req -new -key client-key.pem -out client.csr
openssl x509 -req -in client.csr -signkey client-key.pem -out client-cert.pem
Even after that, AWS told me:
Server Certificate not found for the key: arn:aws:iam::111111111:server-certificate/CertificateMyName
But that last error is misleading: even with the error, the certificate WAS added to ACM, and I used it in my HTTPS 443 listener, tested the services again, and everything was working. So just create the certificate with the instructions above, import it into ACM, and if it gives you an error like the one above, just ignore it; your cert will be in place and ready to use.
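For reference, the import can also be done from the CLI; this is only a sketch, the region is an example, and the file names match the openssl commands above:
aws acm import-certificate --certificate fileb://client-cert.pem --private-key fileb://client-key.pem --region us-east-1
(Classic Load Balancers also accept IAM server certificates uploaded with aws iam upload-server-certificate, which is where the arn:aws:iam ARN in the error comes from.)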
Hope this helps others!

Related

Simulate MQTT TLS login by MQTT.fx in Linux?

I am having some problems using the mosquitto client in Linux; more specifically, I need to use mosquitto_sub but I don't really get how I should authenticate.
All I have is a JSON config file for MQTT.fx that works fine when imported into that application. I can see there are a username and password, as well as host information, and that SSL/TLS is enabled.
My question is: how can I do the same thing that MQTT.fx does automatically when the option "CA signed server certificate" is selected? I have been trying a lot of alternatives, like downloading the server certificate and passing it as --cafile, generating new certificates and signing them, and editing mosquitto.conf, but I haven't hit the right combination of operations.
Any suggestion, please?
Edit: here is current command:
mosquitto_sub -h myhost.example -p 8883 -i example1 -u myusername -P mypassword -t XXXXXXXXXXXX/# --cafile /etc/mosquitto/trycert.crt
where the file trycert.crt contains the response to the following request (of course, only the part between BEGIN CERTIFICATE and END CERTIFICATE):
openssl s_client -showcerts -servername myhost.example -connect myhost.example:8883 </dev/null
Every time I have had problems with MQTT over SSL, it has been because the server cert's chain of trust was broken on my client. In other words, the server I am connecting to has a cert. That cert is signed by another cert, and so forth. Each of the certs in the chain needs to be on the client.
If any of these certs are missing, the chain of trust is broken and the stack will abort the connection.
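As a sketch of how to check this (presented-chain.pem is just an example file name), dump everything the broker actually sends and list the subject and issuer of each certificate:
openssl s_client -showcerts -servername myhost.example -connect myhost.example:8883 </dev/null 2>/dev/null | awk '/BEGIN CERTIFICATE/,/END CERTIFICATE/' > presented-chain.pem
openssl crl2pkcs7 -nocrl -certfile presented-chain.pem | openssl pkcs7 -print_certs -noout
The issuer of the last certificate listed is the CA you need to obtain and pass to --cafile; the root itself is normally not sent by the server.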

Certificate issue with Curl

My CentOS 7 server, which is in an AWS private cloud (company network), is unable to connect to some sites. After some work I managed to narrow it down to the following problem.
(1) The following internal site is not accessible (SSL by public CA):
curl -v https://git.example.com
which returns:
About to connect() to git.example.com port 443 (#0)
Trying 10.62.124.6...
Connected to git.example.com (10.62.124.6) port 443 (#0)
Initializing NSS with certpath: sql:/etc/pki/nssdb
CAfile: /etc/pki/tls/certs/ca-bundle.crt
CApath: none
(2) But following internal site works (SSL by public CA):
curl -v https://alm.example.com
which returns:
About to connect() to alm.example.com port 443 (#0)
Trying 10.64.167.137...
Connected to alm.example.com (10.64.167.137) port 443 (#0)
Initializing NSS with certpath: sql:/etc/pki/nssdb
CAfile: /etc/pki/tls/certs/ca-bundle.crt
CApath: none
SSL connection using TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
...
...
...
Accept: */*
Any idea why number (1) is not working? These are both internal sites trusted by same public CA.
Thanks for the help.
It turned out to be the following case in our company.
git.example.com was hosted in private Azure and
alm.example.com was hosted in private AWS. And my working server also happens to be in AWS, which is why the Azure one was having some trouble on the network. As advised by the network team, I set the MTU size in the Linux kernel to 1350 and this was resolved.
Moreover, our company had also started intercepting SSL traffic, for which they have installed an intermediate certificate in the proxy and expect all internal servers to trust this certificate. My problem stated above was due to a mix of both of these issues; the certificate issue could have been sorted by trusting that certificate or ignoring SSL verification.
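Roughly, the two fixes looked like this on CentOS 7 (the interface name eth0 and the certificate file name are placeholders):
sudo ip link set dev eth0 mtu 1350
sudo cp corp-proxy-ca.crt /etc/pki/ca-trust/source/anchors/
sudo update-ca-trust extract
The first command is not persistent across reboots; the last two add the proxy's certificate to the system trust store that curl's ca-bundle.crt is built from.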
Hope this helps someone.

Download SSL Corporate Proxy Cert via Linux CLI

How do I download a proxy's SSL cert and save it to a file using the Linux command line?
It's possible to download an SSL cert via the openssl tool: https://superuser.com/questions/97201/how-to-save-a-remote-server-ssl-certificate-locally-as-a-file. But this does not work when behind a corporate proxy that rewrites the SSL cert. I would like to download the proxy's SSL cert. Changing the HOST and PORT to my proxy's host and port does not work either.
Downloading the cert using my browser works but I need to do this in a bash script.
You can only extract certificates that actually get sent inside the connection. With a MITM proxy, the root CA you want usually does not get sent, since it is expected to be installed locally as trusted, similar to a public root CA. And the reason you can extract this MITM CA within your browser is that the browser already has this CA as trusted in its CA store and can thus export it.
As mentioned here, openssl 1.1.0 and above support the -proxy argument so you can get the proxy's certificates with a command like (jcenter.bintray.com is just an example host to connect to)
openssl s_client -showcerts -proxy $https_proxy -connect jcenter.bintray.com:443
Also see this script for a more complete example how to import the certificate(s) to a JVM keystore and the system certificates.
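A sketch of how to save whatever the proxy presents to a file (note that -proxy expects host:port, so strip any http:// scheme from $https_proxy if it has one):
openssl s_client -showcerts -proxy $https_proxy -connect jcenter.bintray.com:443 </dev/null 2>/dev/null | awk '/BEGIN CERTIFICATE/,/END CERTIFICATE/' > proxy-chain.pem
As explained above, this only captures the chain the proxy actually sends; the MITM root CA itself usually has to be exported from a browser or obtained from your IT department.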

mosquitto_sub Error: A TLS error occurred, but it is OK with --insecure

I'm building an MQTT server. I used mosquitto with TLS on the server as a broker.
I encountered this problem:
I created the ca.crt, server certificate, server key, client certificate, and client key via generate-CA.sh.
I can connect to the broker and publish and subscribe to messages via MQTT.fx, but when I tried to connect to the broker with mosquitto_sub, it printed Error: A TLS error occurred on the client PC (Ubuntu); at the same time, the server prints:
New connection from xx.xx.xx.xx on port 8883.
Openssl Error: error:14094416:SSL routines:SSL3_READ_BYTES:sslv3 alert certificate unknown
Openssl Error: error:140940E5:SSL routines:SSL3_READ_BYTES:ssl handshake failure
The command I used is:
mosquitto_sub -p 8883 -i test -t mqtt -h 150.xx.xx.xx --cafile ca.crt --cert xx.crt --key xx.key
where 150.xx.xx.xx is the IP of my broker.
When I use the option --insecure with the command above, the problem disappears.
So I think it is the server hostname that leads to this problem.
In the mosquitto_sub command the option -h specifies the hostname, but I need this parameter to point to the IP address of my broker, so how can I specify the hostname of my server?
Old question but perhaps this might help someone:
If the --insecure option makes it work, you have a certificate problem. What hostname did you set whilst signing the certificate? What does openssl s_client -showcerts -connect 150.xx.xx.xx:8883 say?
Related: although it should be possible to use SSL certs for your servers using public IP addresses (see Is it possible to have SSL certificate for IP address, not domain name?), I'd recommend not doing this and just using DNS, even if this means server.localdomain and/or editing your /etc/hosts file if necessary.
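As a rough sketch (broker.local is a placeholder for whatever CN/SAN the certificate was actually issued for):
openssl s_client -showcerts -connect 150.xx.xx.xx:8883 </dev/null 2>/dev/null | openssl x509 -noout -subject
echo "150.xx.xx.xx  broker.local" | sudo tee -a /etc/hosts
mosquitto_sub -p 8883 -i test -t mqtt -h broker.local --cafile ca.crt --cert xx.crt --key xx.key
The first command shows which name the broker's certificate was issued for; the /etc/hosts entry maps that name to the broker's IP so -h can use the name instead of the IP.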

SSL handshake failure with node.js https

I have an API running with express using https. For testing, I've been using tinycert.org for the certificates, which work fine on my machine.
I'm using docker to package up the app, and docker-machine with docker-compose to run it on a digital ocean server.
When I try to connect with Chrome, I get ERR_SSL_VERSION_OR_CIPHER_MISMATCH. When running this with curl, I get a handshake failure: curl: (35) SSL peer handshake failed, the server most likely requires a client certificate to connect.
I tried to debug with Wireshark's SSL dissector, but it hasn't given me much more info: I can see the "Client Hello" and then the next frame is "Handshake Failure (40)".
I considered that maybe Node in the Docker container has no available ciphers, but it has a huge list, so it can't be that. I'm unsure what's going on or how to remedy it.
EDIT
Here's my createServer() block:
let express = require("express");
let fs = require("fs");
let https = require("https");

let app = express();

// key and cert generated via tinycert.org
let httpsOpts = {
  key: fs.readFileSync("./secure/key.pem"),
  cert: fs.readFileSync("./secure/cert.pem")
};
let port = 8080;
https.createServer(httpsOpts, app).listen(port);
I've had this problem for a really long time too, there's a weird fix:
Don't convert your certs to .pem; it works fine as .crt and .key files.
Add ca: fs.readFileSync("path to CA bundle file") to the https options.
It looks like your server is only sending the top certificate; the CA bundle file has the intermediate and root certificates, which you'll need for non-browser use (see the quick check after this list).
IMPORTANT! Reinstall or update node to the latest version.
You can use sudo apt-get upgrade if you're on Linux (it may take a while).
Re-download your certificate or get a new one.
If you are acting as your own certificate authority it could be not recognizing / trusting the certificate, so try testing your site on ssllabs.com.
If you're using the http2 API try adding allowHTTP1: true to the options.
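To see what certificate chain the server actually sends (example.host and the port are placeholders for wherever the container is exposed), something like:
openssl s_client -connect example.host:443 -servername example.host -showcerts </dev/null
If only a single certificate comes back, the intermediates are not being served, which matches the point about the CA bundle above.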
