Cross-platform code-signing of .tar.gz file using OpenSSL? - security

I'm adding automatic upgrades to an application of mine. I need code-signing for this, or else automatic upgrades could be an attack vector. I need the signing and verification to be doable with "openssl" commands, since my application can run on any platform, and OpenSSL is available on any platform. However, when I try to verify a timestamp with openssl, with the code-signing certificate I bought from Comodo, I get the error "Verify error:unable to get local issuer certificate". The commands I run are as follows:
First, I extract the private key and the certificates from the .p12 file from Comodo, with the following:
openssl pkcs12 -in full-certs-from-comodo.p12 -nocerts -out private-key.pem
openssl pkcs12 -in full-certs-from-comodo.p12 -nokeys -out certs.pem
Then, to query and verify a timestamp, I run:
openssl ts -query -data mydata.tar.gz -cert -CAfile certs.pem -sha256 -out request-256.tsq
cat request-256.tsq | curl -s -S --data-binary @- -H 'Content-Type: application/timestamp-query' 'http://timestamp.comodoca.com?td=sha256' > response-256.tsr
openssl ts -verify -sha256 -in response-256.tsr -data mydata.tar.gz -CAfile certs.pem
This is the full error that results:
Verification: FAILED
140710242829968:error:2F06D064:time stamp routines:TS_VERIFY_CERT:certificate verify error:ts_rsp_verify.c:246:Verify error:unable to get local issuer certificate
Comodo tech support can't solve it, and I've been communicating with them for a month now. Digicert says they can only sign certain kinds of files, and those don't include a .tar.gz file. *sigh*
I've never used code-signing before, but that doesn't sound right to me, unless Digicert is adding artificial restrictions. Can't I hash any file, sign the hash with a private key, and then verify it on the user end with the public key? I don't think it should be this hard. What don't I understand?
Anyway, I'd love to get this working even with a paid certificate vendor, but failing that I'm wondering if I can just create my own key pair (a la PGP) and use that. I guess I wouldn't be able to revoke the certificate; are there any other downsides? In particular, does anyone see any reduced security by doing it this way? I do need very good security for this app.
The application is a Perl script and normally runs on a Web server, i.e. usually a *nix platform, but can also run on Windows.
Thanks! I appreciate any clues in getting this working at all, in any way, paid or not. I can't be the first person to need this kind of code-signing, but Comodo and Digicert tech support seemingly haven't heard of it at all.

Maybe not an answer but definitely too much for comments.
Aside: OpenSSL is available on many platforms, but not all. You only care about platforms where your app can be installed, though, and Perl is already pretty demanding of platforms; it can't be installed anywhere near everywhere either.
More important: code-signing and trusted timestamping are different and separate things, although they are sometimes used together. Some code-signing schemes, like Microsoft's and Java's, encourage (but don't require) you to get a trusted timestamp on the (code) signature; I'm not sure about Apple or Android. In particular, you can't (validly) use a code-signing cert for timestamping or for verifying timestamps, and if you could get a timestamping cert (you probably can't meet the requirements to be trusted by anyone besides yourself, see below) you couldn't use it for signing or verifying code. The error you got from ts is probably not because of this misuse but because of something else; imagining and describing the very many things that could possibly be wrong would take far more than is justified for, or would even fit in, a single Stack answer.
The cert can't restrict what you can sign, but it may restrict where that signature will be trusted. In particular for Microsoft Authenticode, only a cert from a CA specifically approved by Microsoft will work. And I believe Apple only trusts certs they themselves issue.
Yes, if you control both/all ends you don't need a 'real' cert; the (only) value of a trusted third-party CA, and of certs from it, is allowing your systems and/or code to trust data or code from other people's, and other people's to trust yours, under known and more or less reasonable conditions. You presumably trust yourself entirely, unless you're Michael Garibaldi. If you use OpenSSL's 'primitive' signing functions (command-line dgst -sign/-verify or rsautl/pkeyutl -sign/-verify, or the equivalent library calls) you only need the two keys, private and public. If you use CMS (aka PKCS7) or S/MIME signatures you need a cert, but it can be a self-signed cert with any identity information, true or false, you feel like putting in it.
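If you go the keypair-only route, a minimal sketch with OpenSSL's dgst -sign/-verify might look like the following (file names are illustrative, and there is no certificate at all, just the two keys):
# Generate the key pair once, on your build machine:
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:3072 -out signing-key.pem
openssl pkey -in signing-key.pem -pubout -out signing-pub.pem
# Sign each release:
openssl dgst -sha256 -sign signing-key.pem -out mydata.tar.gz.sig mydata.tar.gz
# Verify on the user's end (signing-pub.pem ships with the application):
openssl dgst -sha256 -verify signing-pub.pem -signature mydata.tar.gz.sig mydata.tar.gz
The last command prints "Verified OK" on success and exits non-zero otherwise, which is easy to script around.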

Related

NodeJS 14: Using a SSL CA Bundle

I'm currently trying to use a CA bundle with Node.js 14.0. I've been using Namecheap's article as a guide to implement this feature. I'm currently stuck on a few things:
For the ca parameter for https.createServer(), what file formats are allowed to be passed in?
How do I check that a CA bundle is actually being used?
For the ca parameter for https.createServer(), what file formats are allowed to be passed in?
From NodeJS tls.createSecureContext:
Any string or Buffer can contain multiple PEM CAs concatenated together
In general, though, Node.js uses PEM format.
How do I check that a CA bundle is actually being used?
You can use a certificate not signed by your CA, e.g. a self-signed certificate.
One point, possibly more subtle than you wanted: Node.js tls.createSecureContext internally calls OpenSSL's PEM_read_bio_X509_AUX, which actually accepts three PEM formats (or any sequence of those three formats, since Node.js loops). For two of them the base64/64cpl blob contains (exactly) an X.509 certificate, as respecified in RFC 7468 section 5, with either the preferred label "CERTIFICATE" or the deprecated label "X509 CERTIFICATE". In addition OpenSSL accepts a format of its own with the label "TRUSTED CERTIFICATE", where the blob contains an X.509 certificate plus additional (ASN.1) data defined by OpenSSL; see e.g. the d2i_X509_AUX man page. OpenSSL doesn't use this additional data for much, and of course nothing else uses it at all, so it's rare.
And to avoid confusion it might be worth noting that all OpenSSL PEM_read_ routines, including this one, skip any 'comment' data while searching for the PEM data, so actually a file/buffer that contains garbage, then a PEM cert, then more garbage, then another PEM cert, etc. will work the same as if it contained only the PEM certs.
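To illustrate both points, here is a quick sanity check you could run on a bundle (root-ca.pem and intermediate-ca.pem are placeholder names for the PEM certificates you want to combine):
# Build a bundle; stray text around the PEM blocks is harmless:
{ echo "comment before"; cat root-ca.pem; echo "comment between"; cat intermediate-ca.pem; } > ca-bundle.pem
# List every certificate OpenSSL (and therefore Node.js) will find in it:
openssl crl2pkcs7 -nocrl -certfile ca-bundle.pem | openssl pkcs7 -print_certs -noout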

Why do I have a wrong (SHA-1) intermediate StartCom certificate in my chain on an Azure website?

My intermediate certificate on https://paper-shape.com has a weak signature algorithm (SHA-1): https://www.ssllabs.com/ssltest/analyze.html?d=paper-shape.com
I followed these instructions. I created my PFX file both with OpenSSL and with the certificate export wizard.
The CRT and PEM (intermediate certificate from StartCom) seem to be OK, because the following command shows "Signature Algorithm: sha256WithRSAEncryption" for both (CRT and PEM):
$ openssl x509 -text -in paper-shape.com.crt
Either something went wrong during my PFX creation process, or the Azure website overrides my intermediate certificate.
Does anybody have an idea?
Check your locally-installed certificates (on Windows, 'certmgr.msc'). You may have an old SHA-1-signed copy of the StartCom intermediate certificate which is still valid (say, to 2017) and being used in preference to that provided by the server.
You can find (and chain) the SHA-256 intermediate certificate for Class-1 in PEM format, here: https://www.startssl.com/certs/class1/sha2/pem/sub.class1.server.sha2.ca.pem
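If that URL is still live, you could download the intermediate and confirm its signature algorithm before chaining it (a hedged sketch):
curl -sO https://www.startssl.com/certs/class1/sha2/pem/sub.class1.server.sha2.ca.pem
openssl x509 -in sub.class1.server.sha2.ca.pem -noout -text | grep 'Signature Algorithm'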
I have been facing this same problem; I was about to pull my hair out. The certificate seemed to be right in some browsers and OSes, while in others it claimed I was using SHA-1, even though https://shaaaaaaaaaaaaa.com was telling me I had a SHA-2-signed cert.
So! Here is a huge thread in StartCom forum about this issue: https://forum.startcom.org/viewtopic.php?f=15&t=15929&st=0&sk=t&sd=a
The thing is that the browser is using an intermediate cert that is SHA-1-signed.
The solution: you need to configure the intermediate cert on your server!
You can see more details here:
https://sslmate.com/blog/post/chrome_cached_sha1_chains
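Since Azure websites are fed a PFX, configuring the intermediate effectively means rebuilding the PFX so it carries the SHA-256 intermediate. A hedged sketch, assuming paper-shape.com.key and paper-shape.com.crt are your server key and certificate and sub.class1.server.sha2.ca.pem is the intermediate downloaded above:
openssl pkcs12 -export -inkey paper-shape.com.key -in paper-shape.com.crt -certfile sub.class1.server.sha2.ca.pem -out paper-shape.com.pfx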

Setting up HTTPS on Express.js: PEM routines:PEM_read_bio:no start line

A few similar threads exist but none has a checked answer or much discussion. I'm trying to setup an https server on express js but I'm getting
crypto.js:100
c.context.setKey(options.key);
^
Error: error:0906D06C:PEM routines:PEM_read_bio:no start line
I generated my .csr and .key files with
openssl req -nodes -newkey rsa:2048 -keyout myserver.key -out myserver.csr
One suggestion was to convert the .csr to a .pem by following these instructions: http://silas.sewell.org/blog/2010/06/03/node-js-https-ssl-server-example/
That didn't work.
The Express.js docs (http://nodejs.org/api/https.html) show both of these files as .pem, however. If that's the issue, how would you convert a .key file to a .pem? This thread is partially helpful: How to get .pem file from .key and .crt files? But if anyone knows what Express.js requires, I feel that's the missing component.
How would I check that the files are properly in ANSI, or convert them if not?
There is also some discussion on whether the file should begin with -----BEGIN ENCRYPTED PRIVATE KEY----- or -----BEGIN RSA PRIVATE KEY-----
Any help is greatly appreciated.
So I think there's at least a little bit of terminological confusion, and the Node.js example you have there doesn't help by renaming everything to .pem.
Here's a general overview of how SSL works:
You generate a pair of public and private keys. For our purposes, the public key goes into your "certificate signing request" (CSR for short), and the private key is your signing key (just "your key").
If you wanted to generate a self-signed certificate (this is useful for local testing purposes) you can turn around and use your key and your CSR to generate a certificate. This link http://www.akadia.com/services/ssh_test_certificate.html has a pretty clear run down of how to do that on a *nix based system.
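With the files from the question's openssl req command, that self-signing step might look like this (for local testing only):
openssl x509 -req -days 365 -in myserver.csr -signkey myserver.key -out myserver.crt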
For the purposes of web browsers, SSL certificates need to be signed by a trusted authority, i.e. a Certificate Authority (CA). You pay a CA to sign your cert and to vouch for your authenticity with browser vendors (who will in turn display a green padlock for your site when your website presents its certificate to browsers).
The signing process starts with you uploading your CSR to your CA. They take that CSR and generate your certificate, then provide you with several certificates: your certificate, their root certificate, and possibly some intermediate certificates.
You then need to form a combined certificate that proves a chain of authenticity back to browsers. You do this literally just by concatenating your certificate, followed by the intermediate certificates (in whatever order was specified) ending with the root certificate. This combined certificate is what you hand to your web server.
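Concretely, the concatenation is just this (intermediate.crt and root.crt stand in for whatever files your CA actually sends you):
cat myserver.crt intermediate.crt root.crt > myserver-bundle.crt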
In order to enable your web server to serve over SSL, you need to hand it your (combined) certificate as its public encryption key (which it provides to web browsers upon request), and your private encryption key, so that it can decrypt the traffic sent to it by web browsers.
So. Now with all of that in mind, you should take that CSR that you have and provide it to your CA, and get the various certificates back, concatenate them, and then use that w/ your private key in your express server.

Is cacert.pem unique to my computer?

I currently believe cacert.pem is a bunch of keys that I can use to check that the site I'm talking to is in fact the site it's claiming to be. As such, if I sent someone a program that was dependent on cacert.pem, I could just send them the version on my computer, and this would pose no security threat to me.
The only security threat would be for them and that is if I sent them a phony cacert.pem.
Is this correct and am I safe sending the version of cacert.pem on my computer to another potentially untrusted person?
EDIT:
As Steffen pointed out, cacert.pem could refer to any file. I was referring in particular to the one that is found in the Requests Python package.
I don't know which cacert.pem file you are talking about, but /etc/ssl/cacert.pem on BSD or the /etc/ssl/certs folder on Linux contains just a public list of trusted certificate authorities, which are used to verify trust for SSL connections. There is no secret in these files and usually they are not even system-specific (although one might add or remove CAs to manage one's own trust settings).
But again, I don't know what your cacert.pem file contains, because there is no inherent semantic with this file name. If it contains also private keys you should definitely not give it to others.
The only security threat would be for them and that is if I sent them a phony cacert.pem.
cacert.pem is a collection of Root CAs and Subordinate CAs used to certify a site or service.
The three threats here are:
You add your own CA, and then later MitM the connection
The wrong CA certifies the site or service, and an attacker then later MitM the connection
Your copy of cacert.pem is tampered in transit
(1) is less of a concern because it would require you to have a privileged network position, like on the same LAN or in the telecom infrastructure. You could add your own CA and the recipient would likely be no wiser.
(2) is a real problem. For example, we know Google is certified by Equifax Secure Certificate Authority. Equifax certifies a Subordinate CA called GeoTrust Global CA. And GeoTrust certifies a Google Subordinate CA called Google Internet Authority G2.
So the first problem with (2) is that DigiNotar, and more recently MCS Holdings, claimed to certify Google properties, which we know is wrong. They could pull it off because of the collection of roots and subordinates.
The second problem with (2) is related to the first. Because you trust, say, Google Internet Authority G2, Google can mint certificates for any domain, not just their own properties. The problem here is that it's an unconstrained subordinate CA, presumably because constraining it was too inconvenient.
(3) is simply an attack by a MitM. He can remove a needed certificate, which could result in a DoS. Or he could insert a CA, which leads back to (1). Or he could corrupt the whole file.
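For (3), one simple mitigation is to compare a digest of the file against one obtained over a separate channel, for example:
openssl dgst -sha256 cacert.pem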

Determining if a TLS/SSL certificate is 'trusted' from the command line?

I would like to be able to determine if a remote domain's TLS/SSL certificate is 'trusted' from the command line.
Here is an openssl example I was playing with a few weeks back; I use openssl to acquire the certificate and then pipe it to openssl's 'verify' command. I assumed that the 'verify' command would verify the certificate, but as I understand it now, the 'verify' command just verifies the certificate chain (I think). (cdn.pubnub.com is just a domain I found from a quick Twitter search to use as an example.)
echo "GET /" | openssl s_client -connect cdn.pubnub.com:443 | openssl x509 -text | openssl verify
As you can see from the cdn.pubnub.com domain (at the time of writing), the browser (Chrome at least) does not trust the certificate (because the certificate domain doesn't match); however, the openssl 'verify' command does not output 'trusted' or 'not trusted' or anything else from which we can deduce that information.
Another way I thought of doing this is by using a headless browser (such as PhantomJS) and parsing any errors it returns. It turns out that PhantomJS just errors out without giving any details, so this cannot be used, as the error could have been caused by something else.
I didn't think it would be this hard to find out whether a certificate is trusted from the command line, without having to parse and check all the data that makes a certificate trusted myself, which I don't think would be wise.
Is there a library or some other way I can tell if a remote domain's certificate is trusted from the command line?
curl (and libcurl) uses OpenSSL for https URLs, and checks certificate validity unless the -k/--insecure option is enabled.
zsh 29354 % curl https://cdn.pubnub.com/
curl: (51) SSL peer certificate or SSH remote key was not OK
As you can see, it doesn't give much detail on why the certificate is invalid, but otherwise it should be as good as a headless browser, and much lighter.
It depends on what you consider "trusted". Besides the core cryptographic checks (e.g. checking the digital signature), the client usually does the following:
1. Check that the certificate chains to a trusted root.
2. Verify that the current time is between the notValidBefore and notValidAfter attributes.
3. The certificate is not revoked.
4. keyUsage and other certificate constraints match.
5. The entity we are communicating with is somehow found in the subject of the certificate (for servers this usually means the hostname is listed as CN or subjectAlternativeName).
In your case the information to verify step 5 (namely the hostname) is missing, so it cannot be checked. You would have to do this step yourself.
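If your OpenSSL is recent enough (roughly 1.1.0 and later), s_client can perform that hostname check for you; a sketch:
openssl s_client -connect cdn.pubnub.com:443 -servername cdn.pubnub.com -verify_hostname cdn.pubnub.com -verify_return_error < /dev/null
With -verify_return_error the handshake is aborted on any verification failure, and the 'Verify return code' line in the output tells you why.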
Please note that different clients perform different checks to see if a certificate is trusted, so one answer may not apply to all possible clients. If you want to check your installation deeply, consider using the check from SSL Labs: https://www.ssllabs.com/ssltest/
