May the root store contain non-self-signed certificates, i.e. the issuer and subject are different?
If so, will certificate chain validation return “success” upon encountering a non-self-signed certificate in the root store, or will validation continue within the root store until either a self-signed certificate is reached (“success”) or none is found (“fail”)?
I suspect this behavior is implementation dependent, but I can’t find any reference.
A root as defined in RFC 5280, Section 3.2:
(a) Internet Policy Registration Authority (IPRA): This authority, operated under the auspices of the Internet Society, acts as the root of the PEM certification hierarchy at level 1. It issues certificates only for the next level of authorities, PCAs. All certification paths start with the IPRA.
Therefore, a "root store", even if it is a generic, non-specified, description (as President James K. Polk pointed out in his comment), should only contain root CA certificates, which means they have signed themselves.
If you do include non-self-signed certificates anyway, there may be unwanted side effects:
On Windows, this breaks mutual TLS with Internet Information Services (IIS).
This is also not specified by the RFC, but the issue is documented as Cause 2 here: https://learn.microsoft.com/en-us/troubleshoot/iis/http-403-forbidden-access-website
Looking at the Let's Encrypt FAQ page, they clearly state that email encryption and code signing require a different type of certificate, and that these are therefore not supported by Let's Encrypt.
My understanding is that HTTPS and S/MIME both require X.509 certificates. What is the difference between the certificates these two technologies require?
Among other things (like the public key), an X.509 certificate also specifies what it may be used for. X.509 certificates for HTTPS and for S/MIME simply have different permitted usages.
The structure of an X.509 certificate is fairly complex. Its possible usage depends on attributes and extensions within the certificate, and specific combinations of them with specific values must be present.
For example, an S/MIME certificate requires a Key Usage attribute with something like Encrypt, Verify, Wrap, Derive; a Key Usage extension that has the Critical attribute set to Yes and includes the usage Key Encipherment; plus an Extended Key Usage extension (sic!) that lists the purpose Email Protection.
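To see these fields on a concrete certificate, you can dump them programmatically. A sketch using the (assumed installed) Python cryptography package, with a placeholder file name:

from cryptography import x509
from cryptography.x509.oid import ExtensionOID, ExtendedKeyUsageOID

with open("smime-cert.pem", "rb") as f:  # placeholder file name
    cert = x509.load_pem_x509_certificate(f.read())

# Key Usage: should be critical and include keyEncipherment for S/MIME.
ku = cert.extensions.get_extension_for_oid(ExtensionOID.KEY_USAGE)
print("Key Usage critical:", ku.critical)
print("keyEncipherment:", ku.value.key_encipherment)

# Extended Key Usage: should list the emailProtection purpose.
eku = cert.extensions.get_extension_for_oid(ExtensionOID.EXTENDED_KEY_USAGE)
print("emailProtection:", ExtendedKeyUsageOID.EMAIL_PROTECTION in eku.value)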
Note that the Key Usage extension value Data Encipherment is not required, because in S/MIME the data contents are encrypted with a random symmetric key, which in turn is encrypted with the public key from the recipient's certificate. This is called hybrid encryption, and it is done for performance and scalability reasons.
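For illustration, a minimal sketch of that hybrid pattern, assuming the Python cryptography package (raw RSA-OAEP for brevity; real S/MIME wraps this in CMS structures):

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def hybrid_encrypt(recipient_public_key, plaintext: bytes):
    # 1. A fresh random symmetric key encrypts the (possibly large) body.
    session_key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    ciphertext = AESGCM(session_key).encrypt(nonce, plaintext, None)
    # 2. The recipient's public key encrypts only the small session key.
    wrapped_key = recipient_public_key.encrypt(
        session_key,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None))
    return wrapped_key, nonce, ciphertext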
The certificate requirements even extend to the certificate chain: the certificate must be signed by another certificate that has itself been issued for signing certificates for that usage.
Note that the above example may not be 100% correct, because the subject is so complex, and I don't fully understand every aspect of it myself. I found this quote which I think describes the situation fairly well:
I think a lot of purists would rather have PKI be useless to anyone in any practical terms than to have it made simple enough to use, but potentially "flawed". — Chris Zimman
Resources that helped me:
X.509 Style Guide by Peter Gutmann: an attempt to shed light on the relations of attributes and extensions and their interpretations
RFC 4262 - X.509 Certificate Extension for Secure/Multipurpose Internet Mail Extensions (S/MIME) Capabilities
I would like to know the recommended best practices and validation logic for handling TLS certificate exceptions, similar to OpenSSH's known_hosts file. I know the best practice is to use certificates that can be validated automatically, but here I am talking about the cases where the certificate cannot be validated and the user wants to accept it anyway. I would like to know the following:
1) What information should be stored in each entry
2) How the information should be stored for secure access
3) How the hashing of the certificate should be performed
AFAIK, the known_hosts file contains the following information:
<hostname> <certificate hash>
The biggest problem I can see with this approach is when we connect to the same hostname on different ports, which happens frequently when using port forwarding to map to different machines behind NAT. In this case, the extra information should probably be stored like this:
<hostname>:<port> <certificate hash>
As for storing the information itself, the known_hosts file is normally stored in a user directory with owner write permissions. Is this considered "secure"? I mean, any process running as the current user could just add new exceptions for certificates the user has not explicitly accepted.
As for the hashing, I assume it should be performed on the entire X.509 certificate? I just wanted to check, since the X.509 certificate has the "TBSCertificate" structure in it, which excludes the signature. I am not sure what should really be done here. Also, I would like to know the currently recommended algorithms for hashing the certificate for exception purposes.
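To make the question concrete, here is a rough sketch of the kind of store I have in mind, assuming SHA-256 over the full DER encoding (all names are placeholders, not a recommendation):

import hashlib

def fingerprint(der_cert: bytes) -> str:
    # SHA-256 over the complete DER-encoded certificate,
    # including the signature (not just the TBSCertificate part).
    return hashlib.sha256(der_cert).hexdigest()

def load_exceptions(path: str) -> dict:
    entries = {}
    with open(path) as f:
        for line in f:
            key, digest = line.split()
            entries[key] = digest  # key looks like "host:port"
    return entries

def is_accepted(exceptions: dict, host: str, port: int, der_cert: bytes) -> bool:
    return exceptions.get(f"{host}:{port}") == fingerprint(der_cert)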
Thank you in advance for your recommendations on the question!
I currently believe cacert.pem is a bunch of keys that I can use to check that the site I'm talking to is in fact the site it's claiming to be. As such, if I sent someone a program that depends on cacert.pem, I could just send them the version on my computer, and this would pose no security threat to me.
The only security threat would be for them and that is if I sent them a phony cacert.pem.
Is this correct and am I safe sending the version of cacert.pem on my computer to another potentially untrusted person?
EDIT:
As Steffen pointed out, cacert.pem could refer to any file. I was referring in particular to the one found in the Requests Python package.
I don't know which cacert.pem file you are talking about, but /etc/ssl/cacert.pem on BSD or the /etc/ssl/certs folder on Linux contains just a public list of trusted certificate authorities, which are used to verify trust for SSL connections. There is no secret in these files, and usually they are not even system specific (although one might add or remove CAs to manage one's own trust settings).
But again, I don't know what your cacert.pem file contains, because there is no inherent semantics to this file name. If it also contains private keys, you should definitely not give it to others.
The only security threat would be for them and that is if I sent them a phony cacert.pem.
cacert.pem is a collection of Root CAs and Subordinate CAs used to certify a site or service.
The three threats here are:
You add your own CA, and then later MitM the connection
The wrong CA certifies the site or service, and an attacker then later MitM the connection
Your copy of cacert.pem is tampered with in transit
(1) is less of a concern because it would require you to have a privileged network position, like on the same LAN or in the telecom infrastructure. You could add your own CA and the recipient would likely be no wiser.
(2) is a real problem. For example, we know Google is certified by Equifax Secure Certificate Authority. Equifax certifies a Subordinate CA called GeoTrust Global CA. And GeoTrust certifies a Google Subordinate CA called Google Internet Authority G2.
So the first problem with (2) is illustrated by DigiNotar and, more recently, MCS Holdings, which claimed to certify Google properties, which we know is wrong. They could pull it off because of the collection of Roots and Subordinates.
The second problem with (2) is related to the first. Because you trust, say, Google Internet Authority G2, Google can mint certificates for any domain, not just their own properties. The problem here is that it's an unconstrained Subordinate CA, and it was left unconstrained because constraining it was too inconvenient.
(3) is simply an attack by a MitM. He can remove a needed certificate, which could result in a DoS. Or he could insert a CA, which leads back to (1). Or he could corrupt the whole file.
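For threat (3), the usual mitigation is to publish a digest of the bundle out of band and have the recipient check it. A minimal sketch (the expected digest value is a placeholder that would come from a trusted channel):

import hashlib

# Placeholder: the real digest must come from a trusted, out-of-band channel.
EXPECTED_SHA256 = "0" * 64

with open("cacert.pem", "rb") as f:
    actual = hashlib.sha256(f.read()).hexdigest()

if actual != EXPECTED_SHA256:
    raise SystemExit("cacert.pem does not match the expected digest")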
I want to ensure that client libraries (currently in Python, Ruby, PHP, Java, and .NET) are configured correctly and fail appropriately when SSL certificates are invalid. Shmatikov's paper, The Most Dangerous Code in the World: Validating SSL Certificates in Non-Browser Software, reveals how confusing SSL validation is, so I want to thoroughly test the possible failures.
Based on research a certificate is invalid if:
It is used before its activation date
It is used after its expiry date
It has been revoked
Certificate hostnames don't match the site hostname
Certificate chain does not contain a trusted certificate authority
Ideally, I think I would have one test case for each of the invalid cases. To that end I am currently testing an HTTP site accessed over HTTPS, which leads to a failure that I can verify in a test like so:
self.assertRaises(SSLHandshakeError, lambda: api.call_to_unmatched_hostname())
This is incomplete (only covering one case) and potentially wrong, so...
How can you test that non-browser software properly validates SSL certificates?
First off, you'll need a collection of SSL certificates, where each has just one thing wrong with it. You can generate these using the openssl command-line tool. Of course, you can't sign them with a trusted root CA, so you will need to use your own CA. To make this validate correctly, you'll need to install your CA certificate in the client libraries; in Java, for example, you can do this using the control panel.
Once you have the certificates, you can use the "openssl s_server" tool to serve an SSL socket using each one. I suggest you put one certificate on each port.
You now have to use the client library to connect to a port, and verify that you get the correct error message.
I know that Python by default does no certificate validation (look at the manual for httplib.HTTPSConnection). However, m2crypto does do validation. Java by default does do validation. I don't know about other languages.
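For example, in Python you can opt in to verification explicitly via the stdlib ssl module. A sketch, assuming each bad certificate is served by openssl s_server on its own local port (the port numbers are placeholders):

import socket
import ssl

def expect_verify_failure(port: int) -> None:
    ctx = ssl.create_default_context()  # verification is on for this context
    try:
        with socket.create_connection(("localhost", port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname="localhost"):
                raise AssertionError(f"port {port}: handshake unexpectedly succeeded")
    except (ssl.SSLError, ssl.CertificateError):
        pass  # expected: the bad certificate was rejected

for port in (4433, 4434, 4435):  # placeholder ports, one bad certificate each
    expect_verify_failure(port)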
Some other cases you could test:
1) Wildcard host names.
2) Certificate chaining. I know there was a bug in old browsers where if you had a certificate A signed by the root, A could then sign B, and B would appear valid. SSL is supposed to stop this by having flags on certificates, and A would not have the "can sign" flag. However, this was not verified in some old browsers.
Good luck! I'd be interested to hear how you get on.
Paul
Certificate hostnames don't match the site hostname
This is probably the easiest to check, and failure (to fail) there is certainly a good indication that something is wrong. Most certificates for well-known services only use host names for their identity, not IP addresses. If, instead of asking for https://www.google.com/, you ask for https://173.194.67.99/ (for example) and it works, there's something wrong.
For the other ones, you may want to generate your own test CA.
Certificate chain does not contain a trusted certificate authority
You can generate a test certificate using your test CA (or a self-signed certificate), but let the default system CA list be used for the verification. Your test client should fail to verify that certificate.
It is used before its activation date / It is used after its expiry date
You can generate test certificates using your test CA, with notBefore/notAfter dates that make the current date invalid. Then, use your test CA as a trusted CA for the verification: your test client should fail to validate the certificate because of the dates.
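As an illustration, such a certificate can be minted with the Python cryptography package. A minimal sketch (self-signed here for brevity; signing with your test CA key works the same way, and all names and date offsets are arbitrary):

import datetime
from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "expired.test")])
now = datetime.datetime.now(datetime.timezone.utc)

cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)  # self-signed here; use your test CA instead
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now - datetime.timedelta(days=2))
    .not_valid_after(now - datetime.timedelta(days=1))  # expired yesterday
    .sign(key, hashes.SHA256())
)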
It has been revoked
This one is probably the hardest to set up, depending on how revocation is published. Again, generate some test certificates that you've revoked immediately, using your own test CA.
Some tools expect to be configured with a set of CRL files next to the set of trusted CAs. This requires some setup for the test itself, but very little online setup: this is probably the easiest. You can also set up a local online revocation repository, e.g. using CRL distribution points or OCSP.
More generally, PKI testing can be more complex than that. A full test suite would require a fairly good understanding of the specifications (RFC 5280). Indeed, you may need to check the dates for all intermediate certificates, as well as various attributes of each certificate in the chain (e.g. key usage, basic constraints, ...).
In general, client libraries separate the verification process into two operations: verifying that the certificate is trusted (the PKI part) and verifying that it was issued to the entity you want to connect to (the host name verification part). This is certainly due to the fact that these are specified in different documents (RFC 3280/5280 and RFC 2818/6125, respectively).
From a practical point of view, the first two points to check when using an SSL library are:
What happens when you connect to a known host, but with a different identifier for which the certificate isn't valid (such as its IP address instead of its host name)?
What happens when you connect to a host whose certificate you know cannot be verified by any default set of trust anchors (for example, a self-signed certificate or one from your own CA)?
Failure to connect/verify should happen in both cases. If it all works, short of implementing a full PKI test suite (which requires a certain expertise), it's often the case that you need to check the documentation of that SSL library to see how these verifications can be turned on.
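A sketch of these two checks with Python's stdlib ssl module (the IP address and the self-signed host name are placeholders, and any connection failure counts as "not verified" here):

import socket
import ssl

def handshake_succeeds(host: str, port: int = 443) -> bool:
    ctx = ssl.create_default_context()
    try:
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host):
                return True
    except (OSError, ssl.CertificateError):
        return False  # DNS, TCP, or TLS failure: treat all as "not verified"

# 1. A known host addressed by an identifier its certificate does not cover
#    (placeholder IP address): this should fail.
assert not handshake_succeeds("173.194.67.99")

# 2. An endpoint whose certificate no default trust anchor can verify
#    (placeholder name for a self-signed endpoint): this should also fail.
assert not handshake_succeeds("self-signed.example")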
Bugs aside, a fair number of problems mentioned in this paper are due to the fact that some library implementations have made the assumption that it was up to their users to know what they were doing, whereas most of their users seem to have made the assumption that the library was doing the right thing by default. (In fact, even when the library is doing the right thing by default, there is certainly no shortage of programmers who just want to get rid of the error message, even if it makes their application insecure.)
It would seem fair to say that making sure the verification features are turned on would be sufficient in most cases.
As for the status of a few existing implementations:
Python: there was a change between Python 2.x and Python 3.x. The ssl module of Python 3.2 has a match_hostname function that Python 2.7 doesn't have. urllib.request.urlopen in Python 3.2 also has an option to configure CA files, which its Python 2.7 equivalent doesn't have. (That being said, if it's not set, verification won't occur. I'm not sure about the host name verification.)
Java: verification is turned on by default for both PKI and host name for HttpsURLConnection, but not for the host name when using SSLSocket directly, unless you're using Java 7 and have configured its SSLParameters using setEndpointIdentificationAlgorithm("HTTPS") (for example).
PHP: as far as I'm aware, fopen("https://.../") won't perform any verification at all.
The man page does not clearly specify this, but looking at the implementations in openssl's apps, SSL_CTX_use_PrivateKey* calls are usually made after SSL_CTX_use_certificate_file has succeeded. I assume this is mostly used on the server side.
I recently confused the above function with SSL_CTX_load_verify_locations, with which you can specify a CA certificate file and path. It turned out that SSL_CTX_load_verify_locations is the one I needed to verify a server certificate signed by a trusted authority.
SSL_CTX_use_certificate_file() is used to load a certificate, in either PEM or DER format, into the CTX object. Certificates can be chained, ultimately ending at a root certificate, but the SSL_CTX_use_certificate_file API loads only the first certificate into the CTX context, not the entire chain. If you need the entire chain to be presented, you should opt for SSL_CTX_use_certificate_chain_file() instead.
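For comparison, Python's stdlib ssl module exposes the same split. A sketch with placeholder file names:

import ssl

# Server side: leaf certificate plus intermediates from one PEM file,
# together with the private key (roughly SSL_CTX_use_certificate_chain_file
# plus SSL_CTX_use_PrivateKey_file).
server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
server_ctx.load_cert_chain(certfile="server-chain.pem", keyfile="server-key.pem")

# Client side: trust anchors used to verify the peer's certificate
# (roughly SSL_CTX_load_verify_locations).
client_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
client_ctx.load_verify_locations(cafile="ca-bundle.pem")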
http://publib.boulder.ibm.com/infocenter/tpfhelp/current/index.jsp?topic=/com.ibm.ztpf-ztpfdf.doc_put.cur/gtpc2/cpp_ssl_ctx_use_certificate_file.html