I would like to be able to determine if a remote domain's TLS/SSL certificate is 'trusted' from the command line.
Here is an openssl example I was playing with a few weeks back: I use openssl to acquire the certificate and then pipe it to openssl's 'verify' command. I assumed that the 'verify' command would verify the certificate; as I understand it now, the 'verify' command only verifies the certificate chain (I think). (cdn.pubnub.com is just a domain I found from a quick Twitter search to use as an example.)
echo "GET /" | openssl s_client -connect cdn.pubnub.com:443 | openssl x509 -text | openssl verify
As you can see from the cdn.pubnub.com domain (at the time of writing), the browser (Chrome at least) does not trust the certificate (because the certificate's domain doesn't match), yet the openssl 'verify' command does not output 'trusted' or 'not trusted' or anything else from which we could deduce that information.
Another way I thought of doing this is by using a headless browser (such as PhantomJS) and parsing any errors it returns. It turns out that PhantomJS just reports an error without giving any details, so it cannot be used here, as the error could have been caused by something else.
I didn't think it would be this hard to find out from the command line whether a certificate is trusted, without having to parse and check all the data that makes a certificate trusted myself, which I don't think would be wise.
Is there a library or some other way I can tell if a remote domain's certificate is trusted from the command line?
curl (and libcurl) uses OpenSSL for https URLs and checks certificate validity unless the -k/--insecure option is used.
zsh 29354 % curl https://cdn.pubnub.com/
curl: (51) SSL peer certificate or SSH remote key was not OK
As you see, it doesn't give much detail on why the certificate is invalid, but otherwise it should be as good as a headless browser, and much lighter.
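If you want a bit more information about why the verification failed, curl's verbose flag is usually enough. This is only a sketch; the exact wording depends on your curl version and TLS backend:
curl -v https://cdn.pubnub.com/
The verbose output includes the certificate's subject and issuer and names the failing check (for cdn.pubnub.com it should point at the hostname mismatch).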
It depends on what you consider "trusted". Besides the core cryptographic checks (e.g. checking the digital signature), the client usually does the following:
Check that the certificate chains to a trusted root
Verify that the current time is between the certificate's notBefore and notAfter attributes.
Check that the certificate has not been revoked.
Check that keyUsage and other certificate constraints match.
Check that the entity we are communicating with appears in the subject of the certificate (for servers this usually means the hostname is listed as the CN or as a subjectAlternativeName).
In your case the information to verify step 5 (namely the hostname) is missing, so it cannot be checked. You would have to do this step yourself.
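For what it's worth, recent OpenSSL versions can do the hostname check for you on the command line. This is only a sketch, and it assumes OpenSSL 1.1.0 or newer (which added the -verify_hostname and -verify_return_error options to s_client); on older builds you have to check the hostname separately, and you may need to add -CAfile/-CApath if your build doesn't pick up the default trust store:
echo | openssl s_client -connect cdn.pubnub.com:443 -verify_hostname cdn.pubnub.com -verify_return_error
With -verify_return_error the handshake is aborted on any verification failure, and the output contains a "Verify return code" (0 means the chain and the hostname both checked out).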
Please note that different clients perform different checks to decide whether a certificate is trusted, so one answer may not apply to all possible clients. If you want to check your installation thoroughly, consider using the check from SSL Labs: https://www.ssllabs.com/ssltest/
Related
I'm adding automatic upgrades to an application of mine. I need code-signing for this, or else automatic upgrades could be an attack vector. I need the signing and verification to be doable with "openssl" commands, since my application can run on any platform, and OpenSSL is available on any platform. However, when I try to verify a timestamp with openssl, with the code-signing certificate I bought from Comodo, I get the error "Verify error:unable to get local issuer certificate". The commands I run are as follows:
First, I extract the private key and the certificates from the .p12 file from Comodo, with the following:
openssl pkcs12 -in full-certs-from-comodo.p12 -nocerts -out private-key.pem
openssl pkcs12 -in full-certs-from-comodo.p12 -nokeys -out certs.pem
Then, to query and verify a timestamp, I run:
openssl ts -query -data mydata.tar.gz -cert -CAfile certs.pem -sha256 -out request-256.tsq
cat request-256.tsq | curl -s -S --data-binary @- -H 'Content-Type: application/timestamp-query' 'http://timestamp.comodoca.com?td=sha256' > response-256.tsr
openssl ts -verify -sha256 -in response-256.tsr -data mydata.tar.gz -CAfile certs.pem
This is the full error that results:
Verification: FAILED
140710242829968:error:2F06D064:time stamp routines:TS_VERIFY_CERT:certificate verify error:ts_rsp_verify.c:246:Verify error:unable to get local issuer certificate
Comodo tech support can't solve it, and I've been communicating with them for a month now. Digicert says they can only sign certain kinds of files, and those don't include a .tar.gz file. *sigh*
I've never used code-signing before, but that doesn't sound right to me, unless Digicert is adding artificial restrictions. Can't I hash any file, sign the hash with a private key, and then verify it on the user end with the public key? I don't think it should be this hard. What don't I understand?
Anyway, I'd love to get this working even with a paid certificate vendor, but failing that I'm wondering if I can just create my own key pair (a la PGP) and use that. I guess I wouldn't be able to revoke the certificate; are there any other downsides? In particular, does anyone see any reduced security by doing it this way? I do need very good security for this app.
The application is a Perl script and normally runs on a Web server, i.e. usually a *nix platform, but can also run on Windows.
Thanks! I appreciate any clues in getting this working at all, in any way, paid or not. I can't be the first person to need this kind of code-signing, but Comodo and Digicert tech support seemingly haven't heard of it at all.
Maybe not an answer but definitely too much for comments.
Aside: OpenSSL is available on many platforms, but not all. That said, you only care about platforms where your app can be installed, and Perl is already fairly demanding of platforms and can't be installed anywhere near everywhere.
More Important: code-signing and trusted timestamping are different and separate things, although they are sometimes used together: some code-signing schemes, like Microsoft's and Java's, encourage (but don't require) you to get a trusted timestamp on the (code) signature; I'm not sure about Apple or Android. In particular, you can't (validly) use a code-signing cert for timestamping or verifying timestamps, and if you could get a timestamping cert (you probably can't meet the requirements to be trusted by anyone besides yourself; see below) you couldn't use it for signing or verifying code. The error you got from ts is probably not because of this misuse but because you did something else wrong; however, you don't tell us exactly what you did, and imagining and describing the very many things you could possibly have done wrong would take far more than is justified for, or would even fit in, a single Stack answer.
The cert can't restrict what you can sign, but it may restrict where that signature will be trusted. In particular for Microsoft Authenticode, only a cert from a CA specifically approved by Microsoft will work. And I believe Apple only trusts certs they themselves issue.
Yes, if you control both/all ends you don't need a 'real' cert; the (only) value of a trusted-thirdparty CA, and certs from it, is allowing your system(s) and/or code to trust data or code from those of other people, and/or other people's to trust yours, under known and more or less reasonable conditions. You presumably trust yourself entirely, unless you're Michael Garibaldi. If you use OpenSSL's 'primitive' signing functions (commandline dgst -sign/verify or rsautl/pkeyutl -sign/verify, or the equivalent library calls) you only need the two keys, private and public. If you use CMS (aka PKCS7) or S/MIME signatures you need a cert, but it can be a self-signed cert with any identity information, true or false, you feel like putting in it.
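For completeness, a minimal sketch of the 'primitive' dgst route mentioned above, with made-up file names; the private key stays with you, and the public key ships with the application:
openssl genrsa -out signing-key.pem 4096
openssl rsa -in signing-key.pem -pubout -out signing-pub.pem
openssl dgst -sha256 -sign signing-key.pem -out mydata.tar.gz.sig mydata.tar.gz
openssl dgst -sha256 -verify signing-pub.pem -signature mydata.tar.gz.sig mydata.tar.gz
The last command prints "Verified OK" on success; your updater would refuse to install anything for which it doesn't.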
We have installed a server certificate in IIS for a website. When browsing to the website over HTTPS and inspecting the padlock icon in Chrome, we get the message "Your connection ... is encrypted with obsolete cryptography".
How do I configure IIS so that Chrome stops displaying this message, while still balancing the need to support IE >= 8?
[EDIT]: As per the screenshot, we can see that the encryption method used is "AES_256_CBC with SHA1 for message authentication". The question is how we change this in IIS so that Chrome no longer complains about "obsolete cryptography".
The answer Steffen gave is incorrect (although the link he provided does provide the answer if you read further down). The reason Chrome gives the error regarding obsolete cryptography in this case is due to AES in CBC mode.
It has nothing to do with having a SHA-1 certificate.
The TL;DR - ignore this error, it doesn't matter.
If you really want to get rid of the error then you need to enable AES-GCM instead. However, this is easier said than done. I answered this in full on Server Fault recently; see the second half of my answer here:
https://serverfault.com/questions/683697/change-key-exchange-mechanism-in-iis-8/683705#683705
Since I am new to SSL and certificates, I struggled with this too. Here's how we solved the issue. Note that in our case we are working with an internal web application and use a self-signed certificate.
Using OpenSSL on Linux, create a private key:
openssl genrsa -out box.key 2048
Then create and sign a certificate with the key (we set the expiration date to a year and 10 days out):
openssl req -new -x509 -sha256 -days 375 -key box.key -out box.crt
Answer the questions (make sure the Common Name matches the web server's FQDN)
Configure your web server to use SSL using this key and certificate
Using Chrome on Windows, enter your web site's HTTPS URL
Click on the lock icon in the address bar, then select the Certificate Information link in the popup
Go to the Details tab, select the Copy to File... button to launch the Certificate Export Wizard
Using the wizard, select PKCS #7 as the export format, and save the certificate (e.g. mykey.p7b)
Install the certificate in the Trusted Root Certification Authorities certificate store (use certmgr.msc, or right-click on the certificate and select Install Certificate)
Close Chrome, logout and re-login to Windows (force the old site warning out of the cache)
Re-open Chrome and enter your web site's HTTPS URL
Admire your shiny green lock icon with modern cryptography
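If you ever need to double-check the certificate from the command line, something like this (a sketch; box.crt is the file created above) shows the subject and validity dates Chrome is looking at:
openssl x509 -in box.crt -noout -subject -dates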
You might want to read https://www.chromium.org/Home/chromium-security/education/tls#TOC-Deprecation-of-TLS-Features-Algorithms-in-Chrome, which was the first hit when looking for this specific error message.
It is hard to know for sure without having a look at your certificate, but I guess the following description from the linked page will match your certificate:
SHA-1 is deprecated in Chrome at the start of 2015.
Certificates expiring in 2016 will be marked as "secure, but with minor errors".
Certificates expiring in 2017 or later will be treated as "affirmatively insecure".
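You can check which signature algorithm your server is actually presenting with openssl. A sketch, with yoursite.example standing in for your host name:
echo | openssl s_client -connect yoursite.example:443 2>/dev/null | openssl x509 -noout -text | grep "Signature Algorithm"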
To answer my own question:
Ensure latest Windows Updates have been installed
Download and run IIS Crypto (https://www.nartac.com/Products/IISCrypto)
Ensure that this cipher suite is at the top of the list on the left-hand side:
TLS_DHE_RSA_WITH_AES_128_GCM_SHA256
Apply changes in IIS Crypto
Restart the server
In this link there is a blacklist and a whitelist of ciphers. Maybe if you just use the whitelisted ones it will solve your problem. If you look at the comments after the lists, you will see that things have changed a little since the answer was written.
It helped me a lot when I started having this problem with Glassfish; I hope it helps you with IIS too.
My intermediate certificate on https://paper-shape.com got a weak signature algorithm (SHA-1): https://www.ssllabs.com/ssltest/analyze.html?d=paper-shape.com
I followed these instructions. I created my pfx file both with OpenSSL and with the certificate export wizard.
The CRT and PEM (intermediate certificate from StartCom) seem to be OK, because the following command shows "Signature Algorithm: sha256WithRSAEncryption" for both (CRT and PEM):
$ openssl x509 -text -in paper-shape.com.crt
Either something went wrong during my pfx creation process, or the Azure website overrules my intermediate certificate.
Does anybody have an idea?
Check your locally-installed certificates (on Windows, 'certmgr.msc'). You may have an old SHA-1-signed copy of the StartCom intermediate certificate which is still valid (say, to 2017) and being used in preference to that provided by the server.
You can find (and chain) the SHA-256 intermediate certificate for Class-1 in PEM format, here: https://www.startssl.com/certs/class1/sha2/pem/sub.class1.server.sha2.ca.pem
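Since Azure web sites take a pfx, one way to make sure the SHA-256 intermediate is what gets served is to include it when you (re)build the pfx. A sketch with placeholder file names (paper-shape.com.key is your private key, sub.class1.server.sha2.ca.pem is the intermediate linked above):
openssl pkcs12 -export -inkey paper-shape.com.key -in paper-shape.com.crt -certfile sub.class1.server.sha2.ca.pem -out paper-shape.com.pfx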
I have been facing this same problem. I was about to pull my hair out when the certificate seemed to be right in some browsers and OSes, while in others it claimed I was using SHA-1, even though https://shaaaaaaaaaaaaa.com was telling me that I had a SHA-2 signed crt.
So! Here is a huge thread in StartCom forum about this issue: https://forum.startcom.org/viewtopic.php?f=15&t=15929&st=0&sk=t&sd=a
The thing is that the browser is using an Intermediate crt that is SHA-1 signed.
The solution: you need to configure the intermediate crt on your server!
You can see more details here:
https://sslmate.com/blog/post/chrome_cached_sha1_chains
I want to ensure that client libraries (currently in Python, Ruby, PHP, Java, and .NET) are configured correctly and failing appropriately when SSL certificates are invalid. Shmatikov's paper, The Most Dangerous Code in the World:
Validating SSL Certificates in Non-Browser Software, reveals how confusing SSL validation is, so I want to thoroughly test the possible failures.
Based on my research, a certificate is invalid if:
It is used before its activation date
It is used after its expiry date
It has been revoked
Certificate hostnames don't match the site hostname
Certificate chain does not contain a trusted certificate authority
Ideally, I think I would have one test case for each of the invalid cases. To that end I am currently testing an HTTP site accessed over HTTPS, which leads to a failure that I can verify in a test like so:
self.assertRaises(SSLHandshakeError, lambda: api.call_to_unmatched_hostname())
This is incomplete (only covering one case) and potentially wrong, so...
How can you test that non-browser software properly validates SSL certificates?
First off, you'll need a collection of SSL certificates, where each has just one thing wrong with it. You can generate these using the openssl command line tool. Of course, you can't sign them with a trusted root CA. You will need to use your own CA. To make this validate correctly, you'll need to install your CA certificate in the client libraries. You can do this in Java, for example, using the control panel.
Once you have the certificates, you can use the "openssl s_server" tool to serve an SSL socket using each one. I suggest you put one certificate on each port.
You now have to use the client library to connect to a port, and verify that you get the correct error message.
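As a sketch of that setup (file names are placeholders; this assumes you've already generated a broken-on-purpose certificate and its key with your test CA):
openssl s_server -accept 4433 -cert expired-cert.pem -key expired-key.pem -www
Then point the client library under test at https://localhost:4433/ and check that it refuses the connection with the error you expect.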
I know that Python by default does no certificate validation (look at the manual for httplib.HTTPSConnection). However, m2crypto does do validation. Java by default does do validation. I don't know about other languages.
Some other cases you could test:
1) Wildcard host names.
2) Certificate chaining. I know there was a bug in old browsers where if you had a certificate A signed by the root, A could then sign B, and B would appear valid. SSL is supposed to stop this by having flags on certificates, and A would not have the "can sign" flag. However, this was not verified in some old browsers.
Good luck! I'd be interested to hear how you get on.
Paul
Certificate hostnames don't match the site hostname
This is probably the easiest to check, and a failure to fail there is certainly a good indication that something is wrong. Most certificates for well-known services only use host names for their identity, not IP addresses. If, instead of asking for https://www.google.com/, you ask for https://173.194.67.99/ (for example) and it works, there's something wrong.
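As a quick sanity check of what a correctly-validating client should do in that case (a sketch; the IP above is just an example and may no longer point at Google):
curl -v https://173.194.67.99/
curl should refuse with a certificate/host-name mismatch error; if the library you're testing happily accepts the same connection, its hostname verification is broken.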
For the other ones, you may want to generate your own test CA.
Certificate chain does not contain a trusted certificate authority
You can generate a test certificate using your test CA (or a self-signed certificate), but let the default system CA list be used for the verification. Your test client should fail to verify that certificate.
It is used before its activation date / It is used after its expiry date
You can generate test certificates using your test CA, with notBefore/notAfter dates that make the current date invalid. Then, use your test CA as a trusted CA for the verification: your test client should fail to validate the certificate because of the dates.
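One way to mint such certificates is openssl ca, which lets you set the validity dates explicitly. A sketch, assuming you already have a test CA configured in testca.cnf and a CSR for the test server (file names are placeholders):
openssl ca -config testca.cnf -in server.csr -out expired-cert.pem -startdate 20100101000000Z -enddate 20110101000000Z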
It has been revoked
This one is probably the hardest to set up, depending on how revocation is published. Again, generate some test certificates that you've revoked immediately, using your own test CA.
Some tools expect to be configured with a set of CRL files next to the set of trusted CAs. This requires some setup for the test itself, but very little online setup: this is probably the easiest. You can also set up a local online revocation repository, e.g. using CRL distribution points or OCSP.
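For the CRL-file variant, a sketch using the same test CA (option names are per the openssl ca and verify man pages; -CRLfile may be missing on very old builds):
openssl ca -config testca.cnf -revoke revoked-cert.pem
openssl ca -config testca.cnf -gencrl -out testca.crl
openssl verify -crl_check -CAfile testca.pem -CRLfile testca.crl revoked-cert.pem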
More generally, PKI testing can be more complex than that. A full test suite would require a fairly good understanding of the specifications (RFC 5280). Indeed, you may need to check the dates of all intermediate certificates, as well as various attributes of each certificate in the chain (e.g. key usage, basic constraints, ...).
In general, client libraries separate the verification process into two operations: verifying that the certificate is trusted (the PKI part) and verifying that it was issued to the entity you want to connect to (the host name verification part). This is certainly due to the fact these are specified in different documents (RFC 3280/5280 and RFC 2818/6125, respectively).
From a practical point of view, the first two points to check when using an SSL library are:
What happens when you connect to a known host, but with a different identifier for which the certificate isn't valid (such as its IP address instead of the host name)?
What happens when you connect to a certificate that you know cannot be verified by any default set of trust anchors (for example, a self-signed certificate or one from your own CA)?
Failure to connect/verify should happen in both cases. If it all works, short of implementing a full PKI test suite (which requires a certain expertise), it's often the case that you need to check the documentation of that SSL library to see how these verifications can be turned on.
Bugs aside, a fair number of problems mentioned in this paper are due to the fact that some library implementations have made the assumption that it was up to their users to know what they were doing, whereas most of their users seem to have made the assumption that the library was doing the right thing by default. (In fact, even when the library is doing the right thing by default, there is certainly no shortage of programmers who just want to get rid of the error message, even if it makes their application insecure.)
It would seem fair to say that making sure the verification features are turned on is sufficient in most cases.
As for the status of a few existing implementations:
Python: there was a change between Python 2.x and Python 3.x. The ssl module of Python 3.2 has a match_hostname method that Python 2.7 doesn't have. urllib.request.urlopen in Python 3.2 also has an option to configure CA files, which its Python 2.7 equivalent doesn't have. (This being said, if it's not set, verification won't occur. I'm not sure about the host name verification.)
Java: verification is turned on by default for both PKI and host name for HttpsURLConnection, but not for the host name when using SSLSocket directly, unless you're using Java 7 and have configured its SSLParameters using setEndpointIdentificationAlgorithm("HTTPS") (for example).
PHP: as far as I'm aware, fopen("https://.../") won't perform any verification at all.
The man page did not clearly specify this, but looking at openssl's apps implementations, the SSL_CTX_use_PrivateKey* calls are usually made after SSL_CTX_use_certificate_file has succeeded. I assume this is mostly used on the server side.
I recently confused the above function with SSL_CTX_load_verify_locations, where you can specify a CA certificate file and path. It turned out that SSL_CTX_load_verify_locations is the one I needed to verify a server certificate that is signed by a trusted authority.
SSL_CTX_use_certificate_file() is used to load a certificate, in either PEM or DER format, into the CTX object. Certificates can be chained, ultimately ending at a root certificate. However, SSL_CTX_use_certificate_file loads only the first certificate into the CTX context, not the entire chain. If you want the whole chain to be loaded and checked, you need to opt for SSL_CTX_use_certificate_chain_file() instead.
http://publib.boulder.ibm.com/infocenter/tpfhelp/current/index.jsp?topic=/com.ibm.ztpf-ztpfdf.doc_put.cur/gtpc2/cpp_ssl_ctx_use_certificate_file.html