I am using mkcert to generate a self-signed certificate for localhost.
mkcert -install
mkcert localhost
This works fine for the browser, but if I try to do a fetch from Node, I get this error:
FetchError: request to https://localhost:52882/ failed, reason: unable to verify the first certificate
I think this is because mkcert is not creating the full chain.
I have hacked around this by using the NODE_EXTRA_CA_CERTS environment variable.
NODE_EXTRA_CA_CERTS="$(mkcert -CAROOT)/rootCA.pem"
I know there is the process.env.NODE_TLS_REJECT_UNAUTHORIZED = "0"; nuclear option, but I am curious to know how this can be fixed without either of these.
It is working as designed. You have your own certificate authority (CA), and that CA issues the localhost certificate directly. No intermediate CA is involved, so the assumption that mkcert is not creating the full chain is not correct.
The CA cert must be available on your machine, and you need to define which CA certs are trustworthy. NODE_EXTRA_CA_CERTS is exactly that config: it lets you trust a particular CA cert file.
Of course, you can also add this custom CA cert to the system CA cert store. Its location depends on the OS, e.g.:
"/etc/ssl/certs/ca-certificates.crt", // Debian/Ubuntu/Gentoo etc.
"/etc/pki/tls/certs/ca-bundle.crt", // Fedora/RHEL 6
"/etc/ssl/ca-bundle.pem", // OpenSUSE
"/etc/pki/tls/cacert.pem", // OpenELEC
"/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem", // CentOS/RHEL 7
"/etc/ssl/cert.pem", // Alpine Linux
That is what mkcert -install does.
My guess is that your Node.js is not using the system CA store, so only Node's own bundled CA certs (e.g. https://github.com/nodejs/node/blob/v14.0.0/src/node_root_certs.h) are trusted by Node.
You have the option to use the system CA store (the env variable NODE_OPTIONS=--use-openssl-ca or the node CLI parameter --use-openssl-ca), or you can trust your custom CA with the env variable NODE_EXTRA_CA_CERTS, as you did.
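For example, either of the following should make the fetch work without disabling verification (a minimal sketch; app.js stands in for whatever script is doing the fetch):
# Option 1: trust the mkcert root CA in addition to Node's bundled CAs
NODE_EXTRA_CA_CERTS="$(mkcert -CAROOT)/rootCA.pem" node app.js
# Option 2: have Node use the system/OpenSSL CA store, which mkcert -install already populated
NODE_OPTIONS=--use-openssl-ca node app.js
# or, equivalently
node --use-openssl-ca app.js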
I have a Node.js server that receives a request with the Client TLS certificate supplied in the XFCC header.
I would like to perform the Mutual TLS at the Application level, i.e. validate Client TLS cert against the server's CA truststore - all of this done in application code, rather than relying on a web proxy configuration.
I am using NPM's pem dependency, which is essentially a set of JS wrappers around openssl. In particular, the verification needed to resemble mTLS is the verify method, which runs:
openssl verify -CAfile /my/server/ca-chain.crt client-chain.crt
This works in the simplest case:
ca-chain.crt: Root CA -> Int 1 CA
client-chain.crt Root CA -> Int 1 CA -> Leaf 1
But it fails in the more complex case where the Int CAs are different:
ca-chain.crt: Root CA -> Int 1 CA
client-chain.crt Root CA -> Int 2 CA -> Leaf 2
With the following:
openssl verify -CAfile /my/server/ca-chain.crt client-chain.crt
error 20 at 0 depth lookup:unable to get local issuer certificate
As far as I understand, mTLS would succeed as long as all certs are valid and chain up to the same Root CA, even with different Int CAs, which means verify doesn't work as-is for doing the mTLS equivalent at the app level.
I know about s_client and s_server capabilities, but they seem like hacks for what I need, rather than a proper solution.
I guess my question is then this:
Is it possible to use openssl to verify certificate against CA chain according to the mTLS rules?
And if not possible, then what would be the way to do it without resorting to writing it from scratch?
As dave_thompson_085 pointed out in his other answer, to make openssl verify work here you need to be aware that it does not read the entire certificate chain from the supplied client cert file, only the leaf certificate.
So I believe that this method in the pem package is not entirely correct (in fact, they have an open issue about it), but that's another discussion.
The openssl command it should have been translated to is this:
openssl verify -CAfile /my/server/ca-chain.crt -untrusted client-ca-chain.crt client-leaf.crt
Here I split the leaf client cert from the rest of the chain, which is passed in the -untrusted param, while -CAfile contains a chain with a different Int CA that eventually leads up to the same Root CA - and this is what effectively makes the client cert chain valid.
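If it helps, here is a rough shell sketch of that split (the file names match the command above; the awk one-liners assume a PEM file with the leaf certificate first):
# Separate the leaf certificate from the rest of the client chain
awk '/BEGIN CERT/{n++} n==1' client-chain.crt > client-leaf.crt
awk '/BEGIN CERT/{n++} n>1' client-chain.crt > client-ca-chain.crt
# Verify the leaf: intermediates go in -untrusted, trust anchors in -CAfile
openssl verify -CAfile /my/server/ca-chain.crt -untrusted client-ca-chain.crt client-leaf.crt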
This should be fairly trivial to implement with the Node.js openssl wrapper or a similar one.
I am working with self-signed certificates for the first time. I understand that Node-RED does not use the default CA store. The solution to this seems to be to provide a key and self-signed certificate when performing an HTTPS request. I would like to use the standard HTTP request node to do this, but I can't find documentation on how to pass a key, cert, and rejectUnauthorized through the message object. Is this even possible?
Thank you
NodeJS bundles the default CA store into the node binary so you can't just add a file to a dir and have it pick up extra CA certs.
Assuming you are using the HTTP-request node you can add certs/keys by ticking the "Enable secure (SSL/TLS) connection" check box.
This should make a drop down box appear that will let you create a new TLS configuration. In here you can add the certs and keys for the connection.
I want to ensure that client libraries (currently in Python, Ruby, PHP, Java, and .NET) are configured correctly and fail appropriately when SSL certificates are invalid. Shmatikov's paper, The Most Dangerous Code in the World: Validating SSL Certificates in Non-Browser Software, reveals how confusing SSL validation is, so I want to thoroughly test the possible failures.
Based on my research, a certificate is invalid if:
It is used before its activation date
It is used after its expiry date
It has been revoked
Certificate hostnames don't match the site hostname
Certificate chain does not contain a trusted certificate authority
Ideally, I think I would have one test case for each of the invalid cases. To that end I am currently testing an HTTP site accessed over HTTPS, which leads to a failure that I can verify in a test like so:
self.assertRaises(SSLHandshakeError, lambda: api.call_to_unmatched_hostname())
This is incomplete (only covering one case) and potentially wrong, so...
How can you test that non-browser software properly validates SSL certificates?
First off, you'll need a collection of SSL certificates, where each has just one thing wrong with it. You can generate these using the openssl command line tool. Of course, you can't sign them with a trusted root CA. You will need to use your own CA. To make this validate correctly, you'll need to install your CA certificate in the client libraries. You can do this in Java, for example, using the control panel.
Once you have the certificates, you can use the "openssl s_server" tool to serve an SSL socket using each one. I suggest you put one certificate on each port.
You now have to use the client library to connect to a port, and verify that you get the correct error message.
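For example, a certificate with the wrong host name might be produced and served like this (only a sketch; it assumes you already have testca.crt/testca.key for your own test CA, and the file names are made up):
# Issue a certificate for the wrong host name, signed by the test CA
openssl req -new -newkey rsa:2048 -nodes -keyout badhost.key -out badhost.csr -subj "/CN=wrong.example.com"
openssl x509 -req -in badhost.csr -CA testca.crt -CAkey testca.key -CAcreateserial -days 365 -out badhost.crt
# Serve it on its own port; the client under test should reject https://localhost:4433/
openssl s_server -accept 4433 -cert badhost.crt -key badhost.key -www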
I know that Python by default does no certificate validation (look at the manual for httplib.HTTPSConnection). However, m2crypto does do validation. Java by default does do validation. I don't know about other languages.
Some other cases you could test:
1) Wildcard host names.
2) Certificate chaining. I know there was a bug in old browsers where, if you had a certificate A signed by the root, A could then sign B, and B would appear valid. Certificate extensions are supposed to stop this: A would not have the "can sign" (CA) flag in its basic constraints. However, this was not verified by some old browsers.
Good luck! I'd be interested to hear how you get on.
Paul
Certificate hostnames don't match the site hostname
This is probably the easiest to check, and a failure to fail there is certainly a good indication that something is wrong. Most certificates for well-known services only use host names for their identity, not IP addresses. If, instead of asking for https://www.google.com/, you ask for https://173.194.67.99/ (for example) and it works, something is wrong.
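For example, with curl (just a sketch; that IP was taken from the example above and may no longer point at the same service):
# Should succeed: the certificate's identity matches the host name
curl -sS -o /dev/null https://www.google.com/ && echo ok
# Should fail with a certificate error: the certificate does not cover the bare IP
curl -sS -o /dev/null https://173.194.67.99/ || echo "certificate rejected"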
For the other ones, you may want to generate your own test CA.
Certificate chain does not contain a trusted certificate authority
You can generate a test certificate using your test CA (or a self-signed certificate), but let the default system CA list be used for the verification. Your test client should fail to verify that certificate.
It is used before its activation date, It is used after its expiry date
You can generate test certificates using your test CA, with notBefore/notAfter dates that make the current date invalid. Then, use your test CA as a trusted CA for the verification: your test client should fail to validate the certificate because of the dates.
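One way to do that (a sketch, assuming you sign with openssl ca and a config such as testca.cnf; all file names here are placeholders) is to set the validity window explicitly when signing:
# Already expired (valid only during 2020)
openssl ca -config testca.cnf -batch -in expired.csr -out expired.crt -startdate 20200101000000Z -enddate 20210101000000Z
# Not yet valid (valid only from 2035 onwards)
openssl ca -config testca.cnf -batch -in notyet.csr -out notyet.crt -startdate 20350101000000Z -enddate 20360101000000Z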
It has been revoked
This one is probably the hardest to set up, depending on how revocation is published. Again, generate some test certificates that you've revoked immediately, using your own test CA.
Some tools expect to be configured with a set of CRL files next to the set of trusted CAs. This requires some setup for the test itself, but very little online setup: this is probably the easiest. You can also set up a local online revocation repository, e.g. using CRL distribution points or OCSP.
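For the CRL-file flavour, something along these lines should work (again only a sketch against an openssl ca setup; file names are placeholders):
# Revoke the test certificate and publish a CRL
openssl ca -config testca.cnf -revoke revoked.crt
openssl ca -config testca.cnf -gencrl -out testca.crl
# Sanity-check with openssl itself: CA cert and CRL concatenated into one trust file
cat testca.crt testca.crl > ca-and-crl.pem
openssl verify -crl_check -CAfile ca-and-crl.pem revoked.crt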
More generally, PKI testing can be more complex than that. A full test suite would require a fairly good understanding of the specifications (RFC 5280). Indeed, you may need to check the dates of all intermediate certificates, as well as various attributes of each certificate in the chain (e.g. key usage, basic constraints, ...).
In general, client libraries separate the verification process into two operations: verifying that the certificate is trusted (the PKI part) and verifying that it was issued to the entity you want to connect to (the host name verification part). This is certainly due to the fact these are specified in different documents (RFC 3280/5280 and RFC 2818/6125, respectively).
From a practical point of view, the first two points to check when using an SSL library are:
What happens when you connect to a known host, but with a different identifier for which the certificate isn't valid (such as its IP address instead of its host name)?
What happens when you connect to a host whose certificate you know cannot be verified by any default set of trust anchors (for example, a self-signed certificate or one from your own CA)?
Failure to connect/verify should happen in both cases. If it all works, short of implementing a full PKI test suite (which requires a certain expertise), it's often the case that you need to check the documentation of that SSL library to see how these verifications can be turned on.
Bugs aside, a fair number of problems mentioned in this paper are due to the fact that some library implementations have made the assumption that it was up to their users to know what they were doing, whereas most of their users seem to have made the assumption that the library was doing the right thing by default. (In fact, even when the library is doing the right thing by default, there is certainly no shortage of programmers who just want to get rid of the error message, even if it makes their application insecure.)
It would seem fair to say that making sure the verification features are turned on is sufficient in most cases.
As for the status of a few existing implementations:
Python: there was a change between Python 2.x and Python 3.x. The ssl module of Python 3.2 has a match_hostname method that Python 2.7 doesn't have. urllib.request.urlopen in Python 3.2 also has an option to configure CA files, which its Python 2.7 equivalent doesn't have. (This being said, if it's not set, verification won't occur. I'm not sure about the host name verification.)
Java: verification is turned on by default, for both the PKI aspect and the host name, for HttpsUrlConnection, but not for the host name when using SSLSocket directly, unless you're using Java 7 and have configured its SSLParameters using setEndpointIdentificationAlgorithm("HTTPS") (for example).
PHP: as far as I'm aware, fopen("https://.../") won't perform any verification at all.
I would like to be able to determine if a remote domain's TLS/SSL certificate is 'trusted' from the command line.
Here is an openssl example I was playing with a few weeks back, where I use openssl to acquire the certificate and then pipe it to openssl's verify command. I assumed that the verify command would verify the certificate; however, as I understand it now, the verify command just verifies the certificate chain (I think). (cdn.pubnub.com is just a domain I found from a quick Twitter search to use as an example.)
echo "GET /" | openssl s_client -connect cdn.pubnub.com:443 | openssl x509 -text | openssl verify
As you can see from the cdn.pubnub.com domain (at the time of writing), the browser (Chrome at least) does not trust the certificate (because the certificate domain doesn't match); however, the openssl verify command does not output 'trusted' or 'not trusted', or anything else we can deduce that information from.
Another way I thought of doing this is to use a headless browser (such as PhantomJS) and parse any errors it returns. It turns out that PhantomJS just errors out without giving any details, so this cannot be used, as the error could have been caused by something else.
I didn't think it would be this hard to find out from the command line whether a certificate is trusted, without having to parse and check all the data that makes a certificate trusted myself, which I don't think would be wise.
Is there a library or some other way I can tell if a remote domain's certificate is trusted from the command line?
curl (and libcurl) uses OpenSSL for https URLs and checks certificate validity unless the -k / --insecure option is enabled.
zsh 29354 % curl https://cdn.pubnub.com/
curl: (51) SSL peer certificate or SSH remote key was not OK
As you see, it doesn't give much details on why the certificate is invalid, but otherwise it should be as good as a headless browser, and much lighter.
It depends on what you consider "trusted". Beside the core cryptographic checks (e.g. checking the digital signature) the client usually does the following:
1) Check that the certificate chains up to a trusted root.
2) Verify that the current time is between the certificate's notBefore and notAfter attributes.
3) Check that the certificate has not been revoked.
4) Check that keyUsage and other certificate constraints match.
5) Check that the entity we are communicating with is found in the subject of the certificate (for servers this usually means the hostname is listed as the CN or a subjectAlternativeName).
In your case, the information needed to verify step 5 (namely the hostname) is missing, so it cannot be checked. You would have to do this step yourself.
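For instance, with a reasonably recent OpenSSL (the -verify_hostname option of s_client is not available in old builds), you can let s_client do the host name check itself and read off the verification result (a rough sketch):
echo | openssl s_client -connect cdn.pubnub.com:443 -verify_hostname cdn.pubnub.com 2>/dev/null | grep "Verify return code"
# Prints "Verify return code: 0 (ok)" only when the chain and the host name both check out;
# add -CAfile/-CApath if your openssl build does not pick up the system trust store by default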
Please note that different clients perform different checks to decide whether a certificate is trusted, so one answer may not apply to all possible clients. If you want to check your setup in depth, consider using the test from SSL Labs: https://www.ssllabs.com/ssltest/
The man page does not clearly specify this, but looking at openssl's apps implementations, SSL_CTX_use_PrivateKey* calls are usually made after SSL_CTX_use_certificate_file has succeeded. I assume this is mostly used on the server side.
I recently confused the above function with SSL_CTX_load_verify_locations, with which you can specify a CA certificate file and path. It turned out that SSL_CTX_load_verify_locations is the one I needed to verify a server certificate that is signed by a trusted authority.
SSL_CTX_use_certificate_file() loads a certificate into the CTX object, in either PEM or DER format. Certificates can be chained, ultimately ending at a root certificate, but SSL_CTX_use_certificate_file loads only the first certificate into the CTX context, not the entire chain. If you need the whole chain to be loaded, use SSL_CTX_use_certificate_chain_file() instead.
http://publib.boulder.ibm.com/infocenter/tpfhelp/current/index.jsp?topic=/com.ibm.ztpf-ztpfdf.doc_put.cur/gtpc2/cpp_ssl_ctx_use_certificate_file.html