How can I use a digital signature to control software upgrades?

I'm after some concrete advice on how best to prevent (or at least deter) unauthorised software upgrades on an embedded system. It doesn't need to be bullet-proof and we can assume for now that the system itself is sufficiently locked down so that no-one can get unauthorised access to it.
My plan is basically to have an installer process running on the system which would receive update packages from anywhere, but which would ensure those packages came from a trusted source (i.e., me) before attempting to install them.
In simple form, the update package would contain the actual installation package, plus a matching digital signature that only I could generate. Moreover, the signatures would be purely self-generated, with no external authorities involved.
So these are my thoughts on the possible process:
1. Generate a private/public key pair and distribute the public key along with the embedded system itself.
2. When creating a software install package, pipe the contents of the package (or an MD5 of the package) through a signature generator using our private key.
3. Distribute the software install package along with that signature.
4. Have the installer check the signature against the software install package (using the public key it already has) and only install if there's a match (sketched below).
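To make step 4 concrete, here is a rough sketch of the installer-side check I have in mind (file names are placeholders, and I'm assuming Python's cryptography package is available on the device):

```python
# Sketch: installer-side check of a detached RSA signature.
# Assumes the public key was shipped with the device as a PEM file.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def package_is_authentic(pkg_path, sig_path, pubkey_path):
    with open(pubkey_path, "rb") as f:
        pub = serialization.load_pem_public_key(f.read())
    with open(pkg_path, "rb") as f:
        data = f.read()
    with open(sig_path, "rb") as f:
        sig = f.read()
    try:
        # Padding and hash must match whatever the signer used.
        pub.verify(sig, data,
                   padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                               salt_length=padding.PSS.MAX_LENGTH),
                   hashes.SHA256())
        return True
    except InvalidSignature:
        return False
```

The installer would refuse to proceed unless package_is_authentic(...) returns True.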
If anyone can find any problems with this scheme, I'd appreciate the details, along with any specific advice on how to avoid them. In addition (though this is not the primary purpose of the question), any advice on tools to generate the keys would be appreciated.

I do not see any apparent problems with your solution, but I can suggest some improvements that you may have already taken into account:
If the embedded system is sufficiently locked down, it is not necessary to take additional measures to protect the integrity of the public key distributed with the software (e.g., by signing and obfuscating the installer itself, which could be a headache).
I've considered suggesting a TLS connection to download the updates, but it is not really needed, because the packages are already going to be protected with a digital signature.
I suggest encapsulating the public key in an X.509 certificate. This way you can control the validity period and even handle a possible revocation if the private key is compromised. In this case you will need a hierarchical Certificate Authority, with a root certificate that issues the signing certificates. Include the public part of the root certificate in the truststore of the installer. Then switching to a different signing certificate after expiration/revocation will be transparent to the installer.
The root certificate should have a long lifetime and a large key size (and should be kept suitably secure), while the signing certificates have a shorter lifetime and can use a smaller key.
With this CA you could also generate a TLS certificate if you need some additional service, e.g., checking for available updates. In that case, include the certificate in the truststore of the installer to avoid man-in-the-middle attacks (SSL pinning).
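As a hedged sketch of what such a two-tier hierarchy could look like with the x509 module of Python's cryptography package (all names, key sizes, and lifetimes here are illustrative, not a hardened production setup):

```python
# Sketch: a long-lived root CA that issues a shorter-lived,
# smaller-key code-signing certificate.
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

def make_cert(subject, issuer, public_key, signer_key, days, is_ca):
    now = datetime.datetime.utcnow()
    return (x509.CertificateBuilder()
            .subject_name(subject)
            .issuer_name(issuer)
            .public_key(public_key)
            .serial_number(x509.random_serial_number())
            .not_valid_before(now)
            .not_valid_after(now + datetime.timedelta(days=days))
            .add_extension(x509.BasicConstraints(ca=is_ca, path_length=None),
                           critical=True)
            .sign(signer_key, hashes.SHA256()))

# Root: large key, long lifetime, self-signed; this is what goes in the
# installer's truststore.
root_key = rsa.generate_private_key(public_exponent=65537, key_size=4096)
root_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example Root CA")])
root_cert = make_cert(root_name, root_name, root_key.public_key(),
                      root_key, days=7300, is_ca=True)

# Signing cert: smaller key, shorter lifetime, issued (signed) by the root.
sign_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
sign_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Package Signing")])
sign_cert = make_cert(sign_name, root_name, sign_key.public_key(),
                      root_key, days=365, is_ca=False)
```

Because the installer trusts only the root, you can rotate the signing certificate at will without touching the devices.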
You can sign the full distribution or a hash of it; this does not affect security (see https://crypto.stackexchange.com/questions/6335/is-signing-a-hash-instead-of-the-full-data-considered-secure). But do not use MD5, because it has extensive known vulnerabilities; use a SHA-2 function instead.
To generate the keys you can use OpenSSL on the command line, or the GUI application KeyStore Explorer.
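For example (a sketch only; openssl genpkey and openssl dgst -sign achieve the same from the shell), generating a keypair and producing a detached SHA-256 signature with Python's cryptography package could look like this:

```python
# Sketch: generate an RSA keypair, save both halves, and sign a package.
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding, rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=3072)

# The private key stays with you (ideally offline and passphrase-protected);
# the public key ships with the embedded system.
with open("signing_key.pem", "wb") as f:
    f.write(key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.PKCS8,
        serialization.NoEncryption()))  # use BestAvailableEncryption in practice
with open("signing_pub.pem", "wb") as f:
    f.write(key.public_key().public_bytes(
        serialization.Encoding.PEM,
        serialization.PublicFormat.SubjectPublicKeyInfo))

# Sign the update package; the signature is distributed alongside it.
with open("update.pkg", "rb") as f:
    signature = key.sign(f.read(),
                         padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                                     salt_length=padding.PSS.MAX_LENGTH),
                         hashes.SHA256())
with open("update.pkg.sig", "wb") as f:
    f.write(signature)
```

The padding and hash here match the verification sketch in the question above; both sides must agree on them.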

Related

What are typical use cases for self-signed code certificates?

I work as a developer for a young company, and I also develop personal projects. These are mainly C# and Python apps. Our company purchased a code-signing certificate from a CA to avoid the "unknown publisher" warnings and some antivirus protection issues, but I wanted to avoid that cost for personal projects.
From my understanding, the only way to accomplish this is using a certificate from a trusted CA, but then why would one use a self-signed certificate? I know that they exist, but since most users aren't going to edit their trust stores, what do they realistically accomplish?
Note: I'm asking specifically about code-signing certs, not SSL or anything else.
Self-signed certificates are best suited for development, test, and learning environments.
Nowhere else should you be thinking about them.
Certificates establish trust, and it is impossible to trust a certificate that anyone can create, because anyone else can also create one; e.g., a self-signed certificate allows a man-in-the-middle attack.
Your question is mixing several issues, and I think that's what's causing the trouble. A commercial CA is useful in exactly one, and only one, situation: where you need a third party that everyone trusts. They are useless, and actually a detriment, in cases where you do not want that.
So a commercial CA is useful for certifying public web sites. It is less useful for signing private API certificates (though on some platforms, particularly iOS, there are reasons to use one anyway).
Similarly, a commercial CA is useful if you have an OS that trusts that CA for code signing. On a recent version of macOS, however, you really need a certificate issued specifically by Apple.
But if you control the platform yourself, for example in an embedded system or a plugin engine, it is completely appropriate to self-sign the binaries. "Self-sign" just means "sign using your own root certificate." There's nothing magical about commercial roots; they're "self-signed" too. It's just that others trust them. If you don't need anyone's trust but your own, then using your own root is better than a commercial one.
(There are some details I'm glossing over here to get to the core point. In particular, "self-signed" certificates are often really secondary certificates that chain to some self-signed root cert. That's normal for commercial certs, and good practice even if you create your own root. But the basic intuitions are the same.)
If the question is specifically "why would I use a self-signed cert for signing Windows binaries outside of a controlled environment like an enterprise," then the answer is you probably shouldn't, and why do you think you should? But for the general problem of "code signing" across all possible platforms, there are many cases where using your own root is ideal. And inside an enterprise, signing your own binaries is very normal.

What are mix and match attacks? (Docker - snapshot key)

Can someone help me understand what mix-and-match attacks are in security? I tried searching on Google but could not get a clear idea; any explanation with an example would be helpful.
Also, how is this related to the snapshot key in Docker Content Trust?
That expression is used in "Manage keys for content trust":
snapshot: This key signs the current collection of image tags, preventing mix and match attacks.
When doing a docker push with Content Trust enabled for the first time, the root, targets, snapshot, and timestamp keys are generated automatically for the image repository:
The timestamp and snapshot keys are safely generated and stored in a signing server that is deployed alongside the Docker registry. These keys are generated in a backend service that isn't directly exposed to the internet and are encrypted at rest.
That is part of the Docker Notary architecture:
Rollback, Freeze, Mix and Match - The attacker can request that the Notary signer sign any arbitrary timestamp (and maybe snapshot) metadata they want. Attackers can launch a freeze attack, and, depending on whether the snapshot key is available, a mix-and-match attack up to the expiration of the targets file.
Clients both with and without pinned trust would be vulnerable to these attacks, so long as the attacker ensures that the version number of their malicious metadata is higher than the version number of the most recent good metadata that any client may have.
Note that the timestamp and snapshot keys cannot be compromised in a server-only compromise, so a key rotation would not be necessary. Once the Server compromise is mitigated, an attacker will not be able to generate valid timestamp or snapshot metadata and serve them on a malicious mirror, for example.
An attacker can add malicious content, remove legitimate content from a collection, and mix up the targets in a collection, but only within the particular delegation roles that the key can sign for.
For a definition of those terms, one can look at "Improving Hackage security" (written for Haskell, but it applies to a Docker registry as well):
Rollback attacks where an attacker gets a client to install an older version of a package than a version the client previously knew about.
Consider for example a case where the older package might have known security vulnerabilities.
Freeze attacks where the attacker prevents the entire set of packages from being updated (e.g. by always responding with an old snapshot).
Mix and match attacks where the attacker supplies combinations of packages or package metadata that never existed in the upstream repository.
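To make the mix-and-match case concrete, here is a toy sketch (not Notary's actual code) of why a snapshot signature over the whole collection helps: every individual digest may be genuine, but the signature covers the exact combination, so substituting even one entry breaks verification:

```python
# Toy sketch: a "snapshot" signature over the whole tag->digest collection.
# Genuine entries cannot be recombined without invalidating the signature.
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

snapshot_key = ed25519.Ed25519PrivateKey.generate()

collection = {"v1.0": "sha256:1111", "v1.1": "sha256:2222"}  # made-up digests
canonical = json.dumps(collection, sort_keys=True).encode()
snapshot_sig = snapshot_key.sign(canonical)

pub = snapshot_key.public_key()
pub.verify(snapshot_sig, canonical)  # passes: collection verified as one unit

# Attacker mixes in an older (but once-genuine) digest for v1.1:
tampered = dict(collection, **{"v1.1": "sha256:0000"})
try:
    pub.verify(snapshot_sig, json.dumps(tampered, sort_keys=True).encode())
except InvalidSignature:
    print("mix-and-match detected: snapshot signature does not match")
```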

Signing upgrades w/o Certification Authority

Can I have my InstallScript upgrades signed so that, when an upgrade takes place, some validation occurs that the installed application and the upgrade both carry the same signature?
I know I can use a certification authority, but I need to do it without one.
I know that there are cheap certificates, but this is not about the money.
Validation from inside your own code makes no sense. The only reliable validation is the one built into the outside entity, i.e., the OS. In Windows such a mechanism is named Authenticode, and it involves certificates issued by Certificate Authorities. Similar mechanisms exist for Java, for Adobe scripting, and for Office scripts, yet all of them use certificates. So your (most likely only) option is to get a cheap code-signing certificate from one of the CAs.

Secure Authenticode key on a build server

I'm trying to figure out how best to set up Authenticode signing at my workplace. The security implications are stressing me out.
My initial thought is that the person who controls the key should install it to the build server and secure it so that only the build account can access it.
This seems reasonably secure, but it actually isn't. Yes, you can't steal the cert at this point, but if you can create a build you can get the build account to sign any binary.
Can anyone who is familiar with the process give me some pointers?
Indeed, if the key is available for use by the build account, it's available to the admin's account, and it can be used to sign other files. Whatever you give into others' possession is not yours anymore. If you can't secure the server from other people's access, then you don't fully control the server, and this leaves a chance for misuse. Frankly speaking, I can't imagine a single way (other than moving signing to some other trusted system) to protect the key from misuse. Even when the key can't be extracted or copied (say it's kept on a cryptotoken), it can still be used in some way.

Loading only third party trusted assemblies in an Application Server

Scenario
I want to design a server which loads plug-in assemblies from third-party vendors. The third-party vendors need to follow a contract when implementing plug-in assemblies. The third-party plug-in assemblies need to be copied to a specified deployment folder of the server. The server dynamically loads the plug-in assemblies, and it needs to load only those assemblies which come from a trusted source.
Possible Solution
As one possible solution, the server could rely on digital certificate technology: the server should load only those assemblies which are digitally signed under a trusted root certificate authority. I am planning to derive test cases from the following diagram:
The leaf node (highlighted in purple) denotes the possible test cases.
I need to get ideas/feedback on the following:
Is the above mechanism based on digital certificates good enough for the scenario described?
What are other alternatives in addition to digital certificate technology?
Are there any test cases (based on the above diagram) that I have not considered?
Thanks.
Just some random thoughts.
While it is not the only way to do this (off the top of my head, you could for example use an HMAC with specific keys, or just a public-key algorithm such as RSA or DSA on its own), it is probably the best way to achieve what you want with the minimum of effort.
Of course, I presume you would act as the CA in this scenario and any third party could get a certificate signed by you? If not, and you would just accept, say, a VeriSign cert, you might want to consider checking the key usage and enhanced key usage fields of the certificate to ensure it is suitable for signing binaries (to stop someone, for example, using an SSL cert).
As pointed out in the comment above, you want to check any certificate revocation lists, although that might be covered by signed versus unsigned. You probably also want distinct test cases for a file which is completely unsigned, a file which is signed incorrectly (say, the public keys don't match), and one which is signed but invalidated, e.g. the signature is not timestamped by a trusted authority and the certificate has expired, or it appears on a CRL.
Also, are you excluding the possibility where the signing cert is itself the CA? It is a dumb thing to have, but technically there is nothing wrong with doing so. You could even skip the whole CA business and have each third party generate their own self-signed cert and send it to the administrator of the server, who would add it to the list of valid certificates. The only reason for the CA is that they are supposed to check the details of the person requesting the certificate; depending on how you plan to use this system, that might not be necessary.
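For that last variant (an administrator-maintained list of vendor certificates, no CA chain), a hedged sketch of the load-time check might look like this, assuming RSA keys and detached PKCS#1 v1.5 signatures; real code would also check the certificate's validity period and revocation status:

```python
# Sketch: accept a plug-in only if its detached signature verifies against
# one of the vendor certificates the server administrator has approved.
from cryptography import x509
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding

def is_trusted_plugin(assembly_bytes, signature, approved_cert_pems):
    for pem in approved_cert_pems:
        cert = x509.load_pem_x509_certificate(pem)
        try:
            cert.public_key().verify(signature, assembly_bytes,
                                     padding.PKCS1v15(), hashes.SHA256())
            return True   # signed by an approved vendor
        except InvalidSignature:
            continue      # try the next approved certificate
    return False          # unsigned, tampered, or unknown signer
```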
