Purpose of verifying the digital signature of a mirrored download

I've done a fair bit of reading about digital signatures but I can't for the life of me work out why I'd need to verify the signature of a digitally signed file uploaded on a mirror site.
The question popped up when I tried to install Maven:
https://maven.apache.org/download.cgi
The site urges you to "verify the signature of the release bundles against the public KEYS used by the Apache Maven developers".
I understand the need for integrity, which a checksum provides (granted, MD5 is considered weak), but why do I need to do more? I know the file that I've downloaded has not been modified since the checksum was initially generated.
Digital signatures are supposed to provide integrity, authenticity and non-repudiation.
1) Integrity is already provided anyway by confirming the checksum.
2) Authenticity - verifies identity of signer. In this case, the signer is the owner of the public key (supposedly a maven developer). In this scenario, do I really care who signed the file?
3) Non-repudiation - Do I really care that the developer can't deny the file was signed by him/her? Maybe if the maven developer created a malicious file and I wanted to sue them for distributing it...
I don't see the significance of authenticity here. I KNOW the sender (e.g. someMirrorSite.com) is not the one who signed the file, since if I use their public key to try to verify the signature, it would be invalid. If I use the Maven developer's public key, all I'm verifying is the fact that the Maven developer signed the file (granted they have a valid certificate which links that Maven developer with their public key).
So basically the question is: Provided that I trust maven.apache.org, why should I verify the signature of the file hosted by the mirror site when I can just simply verify the checksum of the file?

It's basically a matter of perspective: do I care who made or compiled a file? Who is behind this so-called signed document/file/byte array?
Behind a signed file is a public key infrastructure; it's like having a notarized document when a particularly risky transaction is in play and a bunch of lawyers around. Why should I care? Well, it depends how important that activity is for you or a third party. Let's say you want to sell a company (a small business). This is risky, maybe not for you but for the third party, so he will make sure there is a notary present to witness the transaction and the state in which he is receiving the company. At this point it becomes clear to you that you also need a notarized document of you selling the company. Who knows what's going to happen next? Maybe you don't care what happens next, as long as it's sold, so no notarized document (aka digitally signed document).
Check Integrity = Just lawyers
Digitally Signed Document = You don't trust lawyers that much so you need a notary (who is basically a lawyer anyway :-))

If there is a file F, and C = checksum(F), then when you download F you can recompute C and see if it matches the published value. But, how do you know if this published value of C can be trusted? If I am "Evil Inc." I can make a compromised copy of F, call it Fe and then compute its checksum Ce, and publish both on my website. Many people will not bother to even look at the checksum, but even those that do will be fooled, because they will compute Ce too.
Using a public-key signature rather than a bare checksum is an attempt to strengthen this procedure.
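To make the distinction concrete, here is a minimal sketch in .NET terms (not Maven's actual PGP tooling; the key objects and method names are illustrative): recomputing a checksum only shows the file matches whatever value was published next to it, while forging a signature would require the publisher's private key.

```csharp
using System;
using System.IO;
using System.Security.Cryptography;

static class ChecksumVsSignature
{
    // Anyone who can replace the file on a mirror can also replace the
    // published checksum, so a match proves only internal consistency.
    public static bool ChecksumMatches(string path, string publishedHex)
    {
        byte[] actual = SHA256.HashData(File.ReadAllBytes(path));
        return Convert.ToHexString(actual)
            .Equals(publishedHex, StringComparison.OrdinalIgnoreCase);
    }

    // Forging this requires the publisher's *private* key, which a
    // compromised mirror does not have.
    public static bool SignatureMatches(string path, byte[] signature, RSA publisherPublicKey)
    {
        byte[] data = File.ReadAllBytes(path);
        return publisherPublicKey.VerifyData(
            data, signature, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);
    }
}
```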

Related

Method to verify a signed archive's X.509 CoT

I am trying to understand the key high level details behind verifying trust when downloading an archive.
This is my understanding of how it could be done:
On the Software Developer side:
Obtain a certificate from a public CA like Verisign
Generate a hash of your archive and then encrypt this hash using the private key from your certificate; this is the "signature"
Host the archive for download, along with a separate file which contains the public key from your certificate + the signature generated in step 2.
On the (user) client side:
Download and unpack the archive, download the signature + public key file
Decrypt the downloaded signature using the downloaded public key, save this value
Iterate through the public root certificates embedded within your operating system. For each root certificate, decrypt the signature value and compare the result to the result in step 5.
Once a match is found, you have verified that the author's private key descends from the chain of trust of the CA whose root certificate you matched.
This all assumes that the software developer used a CA for which we have an embedded root certificate in our clients OS.
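For concreteness, here is a rough .NET sketch of the flow I have in mind. It follows the RSA-specific description above; the names, and the use of X509Chain for the chain-of-trust check, are illustrative rather than a finished design.

```csharp
using System.IO;
using System.Security.Cryptography;
using System.Security.Cryptography.X509Certificates;

static class ArchiveVerifier
{
    public static bool Verify(string archivePath, byte[] signature, X509Certificate2 signerCert)
    {
        // "Decrypt the signature and compare hashes" is what VerifyData
        // does internally for RSA signatures.
        using RSA? publicKey = signerCert.GetRSAPublicKey();
        if (publicKey is null) return false;

        byte[] archive = File.ReadAllBytes(archivePath);
        if (!publicKey.VerifyData(archive, signature,
                HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1))
            return false;

        // Instead of iterating root certificates by hand, X509Chain walks
        // the chain of trust up to the OS root store.
        using var chain = new X509Chain();
        chain.ChainPolicy.RevocationMode = X509RevocationMode.NoCheck; // sketch only
        return chain.Build(signerCert);
    }
}
```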
Questions:
Is the above method sound, or am I overlooking key details?
Given a blank slate client that you control, if I wanted to combine the public key + signature + archive into a single file that I could make the client understand and parse, are there any widely supported formats to leverage for organizing this data?
Aside from being a little too specific in Developer step 2 (that describes how RSA signatures work, but ECDSA is perfectly well suited to this task), that sounds rather like Authenticode minus some EKU restrictions. This leads me to ask "why not use Authenticode signing?".
The structure I'd consider is the PKCS#7/CMS SignedData format. It can describe multiple signatures from multiple certificates (sign it ECDSA-brainpoolP320t1-SHA-3-512 for anyone who can read it as well as RSA-2048-SHA-2-256 for most of us, and DSA-1024-SHA-1 for anyone whose computer was built in 2001).
For data files you can just use SignedData normally; for executables it's harder, since there are semantic portions (so you have to squirrel the signature away somewhere and use indirect signing).
If you do your signing with .NET, PKCS#7/CMS SignedData is available for both signing and verifying via System.Security.Cryptography.Pkcs.SignedCms (though you probably have to define your own chain trust rules outside of that class).
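For reference, a minimal sketch of what signing and verifying with SignedCms might look like (chain-trust rules, as noted, still need to be enforced separately, e.g. with X509Chain):

```csharp
using System.Security.Cryptography.Pkcs;
using System.Security.Cryptography.X509Certificates;

static class CmsExample
{
    public static byte[] Sign(byte[] data, X509Certificate2 signerWithPrivateKey)
    {
        var cms = new SignedCms(new ContentInfo(data), detached: false);
        cms.ComputeSignature(new CmsSigner(signerWithPrivateKey));
        return cms.Encode(); // PKCS#7/CMS blob: data + signature(s) + certificate(s)
    }

    public static byte[] VerifyAndExtract(byte[] encoded)
    {
        var cms = new SignedCms();
        cms.Decode(encoded);
        // true = check the cryptographic signature only; pass false to also
        // run certificate validation, or apply your own chain rules instead.
        cms.CheckSignature(verifySignatureOnly: true);
        return cms.ContentInfo.Content;
    }
}
```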

Security tokens with unreadable private keys?

I'd like to uniquely identify users by issuing security tokens to them. In order to guarantee non-repudiation I'd like to ensure that no one has access to the user's private key, not even the person who issues the tokens.
What's the cheapest/most-secure way to implement this?
Use a 3rd party certificate authority: you don't know the private key and you don't have to care about how the client gets and secures the private key (but you can worry about it). Not the cheapest solution ever...
OR:
Share a secret with each client (printed on paper, through email, phone, whatever...).
Have the client generate the keys based on that secret, time (let's say 5-minute intervals) and whatever else you can get (computer hardware ID if you already know it, client IP, etc.); see the sketch after this list. Make sure that you have the user input the secret and never store it in an app/browser.
Invalidate/expire the tokens often and negotiate new ones
This is only somewhat safe (just like any other solution)... if you want to be safe, make sure that the client computer is not compromised.
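For what it's worth, here is one possible reading of the "secret + time" idea as a sketch: an HMAC over a 5-minute counter, similar in spirit to TOTP. The interval length and token format are illustrative, and note that this yields a shared-secret token rather than an asymmetric key pair, so it cannot give non-repudiation on its own.

```csharp
using System;
using System.Security.Cryptography;

static class TimeBucketedToken
{
    public static string Derive(byte[] sharedSecret, DateTimeOffset now)
    {
        long bucket = now.ToUnixTimeSeconds() / 300;            // 5-minute window
        byte[] counter = BitConverter.GetBytes(bucket);
        using var hmac = new HMACSHA256(sharedSecret);
        return Convert.ToHexString(hmac.ComputeHash(counter));  // short-lived token
    }
}
```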
It depends on where and how you want to use those keys*, but the bottom line is that in the case of asymmetric keys, the client will encrypt (in effect, sign) the data sent to you (the server) using their private key, and you (the server) will decrypt (verify) that data using the client's public key (the opposite of how HTTPS works).
Can you verify, at any point in time, the identity of your clients?
If the client computer is compromised, you can safely assume that the private key is compromised too. What's wrong with SSL/HTTPS? Do you really need to have one certificate per client?
Tokens are not the same thing as keys and they don't have to rely on public/private keys. Transport, however, might require encryption.
*my bank gave me a certificate (which only works in IE) used to access my online banking account. I had to go through several convoluted steps to generate and install that certificate and it only works on one computer - do you think that your clients/users would agree to go through this kind of setup?
It would be relatively easy for compromised computers to steal the user's private key if it were stored as a soft key (e.g., on the hard drive). (APT and botnet malware has been known to include functionality to do exactly this.)
But more fundamentally, nothing short of physically incapacitating the user will guarantee non-repudiation. Repudiation is something the user can choose to attempt, opposing evidence notwithstanding, and proving that a user didn't do something is impossible. Ultimately, non-repudiation involves a legal (or at least a business) question: what level of confidence do you have that the user performed the action he is denying having performed and that his denial is dishonest? Cryptosystems can only provide reasonable confidence of a user's involvement in an action; they cannot provide absolute proof.
PIV cards (and PIV-I cards) use a number of safeguards for signing certificates. First, the private key is stored on the smart card, and there is no trivial way to extract it. The card requires a numeric PIN to use the private signing key, and effectively destroys the key after a certain number of incorrect attempts. The hardware cryptomodule must meet Level-2 standards and be tamper-resistant, and transport of the card requires Level-3 physical security (FIPS 201). The certificate is signed by a trusted CA. The PIN, if entered using a keyboard, must be sent directly to the card to avoid keylogger-type attacks.
These precautions are elaborate, intensive, and still do not guarantee non-repudiation. (What if malware convinces the user to sign a different document than the one he is intending to sign? Or the user is under duress? Or an intelligence agency obtains the card in transit and uses a secret vulnerability to extract the private key before replacing the card?)
Security is not generally a question of cheapest/most secure, but rather of risk assessment, mitigation, and ultimately acceptance. What are your significant risks? If you assess the types of non-repudiation risks you face and implement effective compensating controls, you will be more likely to find a cost-effective solution than if you seek to eliminate risk altogether.
The standard way to handle non-repudiation in a digital signature app is to use a trusted third party, a 3rd party cert authority.
Be careful about trying to create your own system: since you're not an expert in the field, you'll most probably end up either losing the non-repudiation ability that you seek or introducing some other flaw.
The reason the standards for digital signatures exist is that this stuff is very hard to get right in a provable way. See "Schneier's Law"
Also, non-repudiation eventually comes down to someone being sued: you say that "B" did it (signed the agreement, pressed the button, etc.), but "B" denies it. You say you can "prove" that B did it. But so what? You'll need to prove in court that B did it, to get the court to grant you relief (to order B to do something such as pay damages).
But it will be very, very expensive to sue someone and to prove your case on the strength of a digital signature system. And if you went to all that trouble and the digital signature system turned out to be some homebrew system, not a standard, then your odds of relief would drop down to about 0%.
Conclusion: if you care enough about the digital sig to sue people, then use a standard for digital sig. If you will ultimately negotiate rather than sue, then look at the different options.
For example, why not use a hardware security token? They're now available as apps for people's phones, too.

Signing and verifying an automatically generated report

Last summer, I was working on an application that tested the suitability of a prospective customer's computer for integrating our hardware. One of the notions suggested was to use the HTML report generated by the tool as justification for a refund in certain situations.
My immediate reaction was, "well we have to sign these reports to verify their authenticity." The solution I envisioned involved creating a signature for the report, then embedding it in a meta tag. Unfortunately, this scenario would require the application to sign the report, which means it would need a private key. Once the application is storing the private key, we're back at square one with no guarantee of authenticity.
My next idea was to phone home and have a server sign the report, but then the user needs an internet connection just to test hardware compatibility. Plus, the application would need to authenticate with the server, and an interested party could figure out what credentials it was using to do that.
So my question is this. Is there any way, outside of obfuscation, to verify that the application did indeed generate a given report?
As Eugene has rightly pointed out, my initial answer was to authenticate the receiver. Let me propose an alternative approach for authenticating the sender.
authenticate the sender:
When your application is deployed at your client's end, you generate and deploy a self-signed PFX certificate which holds the private key.
The details of your client and the passphrase for the PFX are set by your client, and maybe you can get this printed and signed by your client on paper to hold them accountable for the keys they have just generated.
Now you have a private key which can sign, and when exporting the HTML report you can export the certificate along with the report.
This is a low-cost solution and is not as secure as having your private keys in a cryptotoken, as indicated by Eugene in the previous post.
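For illustration, a hedged sketch of this PFX approach (file names and passphrase handling are placeholders): load the client-generated PFX, sign the report bytes, and ship the report together with the signature and the certificate's public part.

```csharp
using System;
using System.IO;
using System.Security.Cryptography;
using System.Security.Cryptography.X509Certificates;

static class ReportSigner
{
    public static byte[] SignReport(string reportPath, string pfxPath, string pfxPassphrase)
    {
        var cert = new X509Certificate2(pfxPath, pfxPassphrase);
        using RSA privateKey = cert.GetRSAPrivateKey()
            ?? throw new InvalidOperationException("PFX contains no RSA private key");

        byte[] report = File.ReadAllBytes(reportPath);
        // Ship cert.Export(X509ContentType.Cert) alongside this signature
        // so the receiver can verify it against the report.
        return privateKey.SignData(report, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);
    }
}
```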
authenticate the receiver:
Have an RSA-2048 key pair at your receiving end. Export your public key to your senders.
When the sender has generated the report, let the report be encrypted by a symmetric key, say AES-256. Let the symmetric key itself be encrypted/wrapped by your public key.
When you receive the encrypted report, use your private key to unwrap/decrypt the symmetric key and in turn decrypt the encrypted report with the symmetric key.
This way, you make sure that only the intended receiver alone can view the report.
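A minimal sketch of this envelope scheme, assuming AES-256-GCM for the report and RSA-OAEP to wrap the key (in practice EnvelopedCms or another established container would handle this; the names here are illustrative):

```csharp
using System.Security.Cryptography;

static class ReportEnvelope
{
    public static (byte[] wrappedKey, byte[] nonce, byte[] tag, byte[] ciphertext)
        EncryptForReceiver(byte[] report, RSA receiverPublicKey)
    {
        byte[] aesKey = RandomNumberGenerator.GetBytes(32);   // AES-256 key
        byte[] nonce  = RandomNumberGenerator.GetBytes(12);
        byte[] ciphertext = new byte[report.Length];
        byte[] tag = new byte[16];

        using var aes = new AesGcm(aesKey);
        aes.Encrypt(nonce, report, ciphertext, tag);

        // Only the holder of the matching private key can unwrap the AES key.
        byte[] wrappedKey = receiverPublicKey.Encrypt(aesKey, RSAEncryptionPadding.OaepSHA256);
        return (wrappedKey, nonce, tag, ciphertext);
    }
}
```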
I'd say that you need to re-evaluate the possible risks, and most likely you will find them to be not as important as you might think. The reason is that the report has value for you, but less likely for a customer. So it's more or less a business task, not a programming one.
To answer your concrete question: there's no simple way to protect the private key used for signing from being stolen (if one really wants to steal it). A more complex solution employing a cryptotoken with the private key stored inside would work, but a cryptotoken is itself hardware, and in your scenario it would unnecessarily complicate the scheme.

Keygen tag in HTML5

So I came across this new tag in HTML5, <keygen>. I can't quite figure out what it is for, how it is applied, and how it might affect browser behavior.
I understand that this tag is for form encryption, but what is the difference between <keygen> and having an SSL certificate for your domain? Also, what is the challenge attribute?
I'm not planning on using it as it is far from implemented in an acceptable range of browsers, but I am curious as to what EXACTLY this tag does. All I can find is vague cookie-cutter documentation with no real examples of usage.
Edit:
I have found a VERY informative document, here. This runs through both client-side and server-side implementation of the keygen tag.
I am still curious as to what the benefit of this over a domain SSL certificate would be.
SSL is about "server identification" or "server AND client authentication (mutual authentication)".
In most cases only the server presents its server-certificate during the SSL handshake so that you could make sure that this really is the server you expect to connect to. In some cases the server also wants to verify that you really are the person you pretend to be. For this you need a client-certificate.
The <keygen> tag generates a public/private key pair and then creates a certificate request. This certificate request will be sent to a Certificate Authority (CA). The CA creates a certificate and sends it back to the browser. Now you are able to use this certificate for user authentication.
You're missing some history. keygen was first supported by Netscape when it was still a relevant browser. IE, OTOH, supported the same use cases through its ActiveX APIs. Opera and WebKit (or even KHTML), unwilling to reverse-engineer the entire Win32 API, reverse-engineered keygen instead.
It was specified in Web Forms 2.0 (which has now been merged into the HTML specification), in order to improve interoperability between the browsers that implemented it.
Since then, the IE team has reiterated their refusal to implement keygen, and the specification (in order to avoid turning into dry science fiction) has been changed to not require an actual implementation:
Note: This specification does not specify what key types user agents are to support — it is possible for a user agent to not support any key types at all.
In short, this is not a new element, and unless you can ignore IE, it's probably not what you want.
If you're looking for "exactly" then I'd recommend reading the RFC.
The keygen element is for creating a key for authentication of the user, while SSL is concerned with privacy of communication and the authentication of the server. Quoting from the RFC:
This specification does not specify how the private key generated is to be used. It is expected that after receiving the SignedPublicKeyAndChallenge (SPKAC) structure, the server will generate a client certificate and offer it back to the user for download; this certificate, once downloaded and stored in the key store along with the private key, can then be used to authenticate to services that use TLS and certificate authentication.
Deprecated: This feature has been removed from the Web standards. Though some browsers may still support it, it is in the process of being dropped. Avoid using it and update existing code if possible. Be aware that this feature may cease to work at any time. (Source)
The doc is useful for elaborating on what the keygen element is. The need for it arises in WebID, which may be understood as part of the Semantic Web of Linked Data, as seen in section 2.1.1 of https://dvcs.w3.org/hg/WebID/raw-file/tip/spec/index-respec.html#creating-a-certificate
This might be useful for websites that provide paid services, like video on demand, or news sites for professionals, like Bloomberg. With these keys people can only watch the content on their own computer and not on several computers simultaneously. You decide how the data is stored and processed: you can specify an .asp or .php file that will receive the variables, and your file will store that key in the user's profile. This way your users will not be able to log in from a different computer if you don't want them to. You may force them to check their email to authorize the new computer, just like Steam does. Basically it allows you to individualize service access if your licensing model is per machine, like an operating system's.
You can check the specs here:
http://www.w3.org/TR/html-markup/keygen.html

SSL authentication by comparing certificate fingerprint?

Question for all the SSL experts out there:
We have an embedded device with a little web server on it, and we can install our own SSL self-signed certificates on it. The client is written in .NET (but that doesn't matter so much).
How can I authenticate the device in .NET? Is it enough to compare the fingerprint of the certificate against a known entry in the database?
My understanding is that the fingerprint is a hash of the whole certificate, including the public key. A device faking to be my device could of course send the same public certificate, but it couldn't know the private key, right?
Or do I have to build up my own chain of trust, create my own CA root certificate, sign the web server certificate and install that on the client?
What you propose is in principle OK. It is for example used during key signing parties. Here the participants usually just exchange their names and the fingerprints of their public keys and make sure that the person at the party really is who he/she claims to be. Just verifying fingerprints is much easier than verifying a long public key.
Another example is the so-called self-certifying file system. Here again only hashes of public keys get exchanged over a secure channel (i.e., these hashes are embedded in URLs). In this scheme the public keys don't have to be sent securely. The receiver only has to check that the hashes of the public keys match the hashes embedded in the URLs. Of course, the receiver also has to make sure that these URLs come from a trusted source.
This scheme and what you propose are simpler than using a CA. But there is a disadvantage: you have to make sure that your database with hashes is authentic. If your database is large then this will likely be difficult. If you use CAs then you only have to ensure that the root keys are authentic. This usually simplifies key management significantly and is of course one reason why CA-based schemes are more popular than, e.g., the self-certifying file system mentioned above.
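To make that concrete for the .NET client in the question, a sketch of fingerprint pinning might look like this (the expected value is whatever you recorded when provisioning the device; SHA-256 is used rather than the default SHA-1 thumbprint, and the names are illustrative). It could be wired into a RemoteCertificateValidationCallback.

```csharp
using System;
using System.Security.Cryptography;
using System.Security.Cryptography.X509Certificates;

static class DevicePinning
{
    public static bool FingerprintMatches(X509Certificate2 cert, string expectedSha256Hex)
    {
        // Hashing the whole DER-encoded certificate also covers the embedded
        // public key; an impostor without the matching private key gains
        // nothing by replaying the same certificate.
        string actual = Convert.ToHexString(SHA256.HashData(cert.RawData));
        return actual.Equals(expectedSha256Hex, StringComparison.OrdinalIgnoreCase);
    }
}
```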
In the same way you wouldn't and shouldn't consider two objects to be equal just because their hash codes matched, you shouldn't consider a certificate to be authentic just because its fingerprint appears in a list of "known certificate fingerprints".
Collisions are a fact of life with hash algorithms, even good ones, and you should guard against the possibility that a motivated attacker could craft a rogue certificate with a matching fingerprint hash. The only way to guard against that is to check the validity of the certificate itself, i.e. check the chain of trust as you're implying in your last statement.
Short:
Well in theory you then do exactly what a Certificate Authority does for you. So it should be fine.
Longer:
When a Certificate Authority signs your public key/certificate/certificate request, it doesn't sign the whole certificate data, but just the calculated hash value of the whole certificate data.
This keeps the signature small.
When you don't want to establish your own CA or use a commercial/free one, then by comparing the fingerprint with the one you trust you'll gain the second most trustworthy configuration. The most trustworthy solution would be comparing the whole certificate, because that also protects you from hash collision attacks.
As the other guys here stated you should make sure to use a secure/safe hashing algorithm. SHA-1 is no longer secure.
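If you want the "compare the whole certificate" variant, a byte-for-byte comparison of the DER encoding sidesteps the hash question entirely; a minimal sketch, assuming the stored copy came from provisioning:

```csharp
using System;
using System.Security.Cryptography.X509Certificates;

static class WholeCertComparison
{
    // Compares the presented certificate's full DER bytes against a stored copy.
    public static bool Matches(X509Certificate2 presented, byte[] storedRawData) =>
        presented.RawData.AsSpan().SequenceEqual(storedRawData);
}
```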
More detailed information on this topic:
https://security.stackexchange.com/questions/6737
https://security.stackexchange.com/questions/14330
