Signing and verifying an automatically generated report - digital-signature

Last summer, I was working on an application that tested the suitability of a prospective customer's computer for integrating our hardware. One of the notions suggested was to use the HTML report generated by the tool as justification for a refund in certain situations.
My immediate reaction was, "Well, we have to sign these reports to verify their authenticity." The solution I envisioned involved creating a signature for the report and embedding it in a meta tag. Unfortunately, this scheme would require the application to sign the report, which means it would need a private key. And once the application is storing the private key, we're back at square one with no guarantee of authenticity.
My next idea was to phone home and have a server sign the report, but then the user needs an internet connection just to test hardware compatibility. Plus, the application would need to authenticate with the server, and an interested party could figure out what credentials it was using to do that.
So my question is this: is there any way, outside of obfuscation, to verify that the application did indeed generate a given report?

As Eugene has rightly pointed out, my initial answer was about authenticating the receiver, so let me first propose an alternative approach for authenticating the sender.
authenticate the sender:
When your application is deployed at your client's end, you generate and deploy a self-signed certificate in a PFX (PKCS#12) file, which holds the private key.
The client's details and the passphrase for the PFX are set by your client; perhaps you can even get them printed and signed by your client on paper, to hold them accountable for the keys they have just generated.
Now you have a private key that can sign, and when exporting the HTML report you can export the certificate along with it.
This is a low-cost solution, and not as secure as keeping your private keys in a cryptotoken, as Eugene indicated in the previous post.
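A rough sketch of that deployment step, using Python's cryptography package purely for illustration; the client name, passphrase, and validity period are placeholder assumptions, not part of the answer:

```python
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.serialization import pkcs12
from cryptography.x509.oid import NameOID

CLIENT_NAME = "Example Client Ltd"    # assumption: collected at deployment
PASSPHRASE = b"chosen-by-the-client"  # assumption: chosen by the client

# 1. Generate the client's private key at deployment time.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# 2. Build a self-signed certificate holding the client's details.
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, CLIENT_NAME)])
now = datetime.datetime.now(datetime.timezone.utc)
cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)  # self-signed: subject == issuer
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=365))
    .sign(key, hashes.SHA256())
)

# 3. Bundle key and certificate as a passphrase-protected PFX (PKCS#12).
pfx_bytes = pkcs12.serialize_key_and_certificates(
    b"report-signing", key, cert, None,
    serialization.BestAvailableEncryption(PASSPHRASE),
)

# 4. Later, sign the generated HTML report with the private key and
#    ship the signature (and the certificate) alongside the report.
report = b"<html>...report contents...</html>"
signature = key.sign(report, padding.PKCS1v15(), hashes.SHA256())
```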
authenticate the receiver:
Have an RSA-2048 key pair at your receiving end. Export your public key to your senders.
When the sender has generated the report, encrypt the report with a symmetric key, say AES-256. The symmetric key itself is then encrypted/wrapped with your public key.
When you receive the encrypted report, use your private key to unwrap/decrypt the symmetric key, and in turn decrypt the encrypted report with the symmetric key.
This way you make sure that only the intended receiver can view the report.
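A minimal sketch of this hybrid scheme with Python's cryptography package; the answer specifies AES-256 and RSA-2048 but not a cipher mode or padding, so AES-GCM and OAEP here are my own assumptions:

```python
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Receiver's key pair; in practice only the public key is exported to senders.
receiver_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
receiver_pub = receiver_key.public_key()
OAEP = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# --- Sender side ---
report = b"<html>...report contents...</html>"
data_key = AESGCM.generate_key(bit_length=256)  # fresh AES-256 key per report
nonce = os.urandom(12)
ciphertext = AESGCM(data_key).encrypt(nonce, report, None)
wrapped_key = receiver_pub.encrypt(data_key, OAEP)  # wrap key with public key

# --- Receiver side ---
unwrapped = receiver_key.decrypt(wrapped_key, OAEP)
assert AESGCM(unwrapped).decrypt(nonce, ciphertext, None) == report
```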

I'd say that you need to re-evaluate the possible risks; most likely you will find them to be less important than you might think. The reason is that the report has value for you, but much less for a customer. So it's more or less a business task, not a programming one.
To answer your concrete question: there's no simple way to protect the private key used for signing from being stolen (if someone really wants to steal it). A more complex solution employing a cryptotoken, with the private key stored inside, would work, but a cryptotoken is itself hardware, and in your scenario it would unnecessarily complicate the scheme.

Related

How to make sure that only the authorized user can access a feature provided by the server?

We are building an Android application, and one of its features is to book a cab from a cab service provider (say an Uber).
We have an application-specific user ID. Let us call it AUID. To book the cab, the application would POST a request to the server and send the AUID along with other relevant information (like latitude, longitude, etc.). How do I make sure at the server end that the request is indeed coming from the correct user and it is safe to book the cab? In the current form, if a third party gets to know the AUID of another person, the third party can book a cab on behalf of that person.
One of the solutions I thought of was using asymmetric encryption. The application would hold the public key and the server would hold the private key. Instead of sending the user ID to the server in the clear, we'll send an encrypted message: the AUID plus a timestamp, encrypted using the public key. We'll then decrypt the message using the private key at the server end to obtain the AUID. If the server's time does not lie within a certain interval of the timestamp sent by the client, we reject the request.
Is this a safe enough approach? Is there any other practice widely followed for such scenarios?
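For concreteness, here is a minimal sketch of the proposed exchange using Python's cryptography package; the message format, field names, and the freshness window are illustrative assumptions, not part of the question:

```python
import json
import time

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

MAX_SKEW = 120  # seconds; an assumed acceptable interval

# In reality the key pair is generated once; the app ships only the public key.
server_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
OAEP = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Client side: encrypt the AUID plus a timestamp with the public key.
message = json.dumps({"auid": "user-123", "ts": int(time.time())}).encode()
token = server_key.public_key().encrypt(message, OAEP)

# Server side: decrypt with the private key and reject stale requests.
fields = json.loads(server_key.decrypt(token, OAEP))
if abs(time.time() - fields["ts"]) > MAX_SKEW:
    raise ValueError("stale or replayed request")
```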
What you propose is sensible: encrypt the AUID on the client app and verify on the server. As comments suggest, SSL is vital.
The problem is that if the logic for encrypting the AUID is in your app, it can be figured out by anyone dedicated enough.
You can drastically reduce the risks of fake requests by issuing a separate encryption key for each user. This means that if someone cracks your code, they can still only spoof from one account. However, once an attacker had decompiled your app, they could theoretically start new accounts, get a valid encryption key and spoof requests.
What you need for 100% reliability is some form of authentication that is not stored in the client app, like a password, Touch ID on iOS, or the fingerprint API on Android M. So when a user orders a cab, they need to enter some piece of information which you also encode with the AUID and check on the server. That secret information is not stored in your app, so no one can fake requests.
Requiring a password from a user is pretty inconvenient; fingerprint scanning is much easier and probably acceptable. You could also use a trust system: if the user has ordered cabs before and everything was OK, they can order without special authentication. Using trust together with individual encryption keys is pretty effective, because anyone trying to spoof requests would need to complete a successful order before being able to spoof, which is probably too much hassle for them.

Decrypt data using master key and not key used for encryption

I am trying to build an application that stores user-related information client-side in localStorage. I am encrypting that data with a password given by the user.
If I implement "forgot password" and generate a new password, how can I get back my data that was encrypted with the old password?
I am using sjcl for encrypting the data. Is there any technique to encrypt data with two passwords?
What would be an ideal pattern for this scenario?
The conventional approach for this is called "key escrow." Basically, it means giving a copy of the key to someone that you trust.
If you won't trust anyone, then key escrow is not for you. Your only option is to make sure that you don't lose the one-and-only key, and this is a fairly common approach too. Many products that advertise secure storage emphasize this point. As examples, see Bruce Schneier's password manager "Password Safe" and LaCie's security-focused Dropbox alternative, "Wuala."
There are accepted methods for encrypting data so that it can be decrypted with any one of several passwords. But I don't see how this helps; if you can't remember one password, how will you remember two?
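For reference, a minimal sketch of that multiple-passwords pattern (the question uses sjcl; this Python version with the cryptography package just illustrates the structure): one random data key encrypts the data, and a copy of that key is wrapped under a key derived from each password. The passwords and KDF parameters are illustrative.

```python
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def wrap(data_key: bytes, password: bytes) -> dict:
    """Encrypt the data key under a key derived from one password."""
    salt, nonce = os.urandom(16), os.urandom(12)
    kek = PBKDF2HMAC(hashes.SHA256(), 32, salt, 600_000).derive(password)
    return {"salt": salt, "nonce": nonce,
            "blob": AESGCM(kek).encrypt(nonce, data_key, None)}

def unwrap(entry: dict, password: bytes) -> bytes:
    """Recover the data key given the matching password."""
    kek = PBKDF2HMAC(hashes.SHA256(), 32, entry["salt"], 600_000).derive(password)
    return AESGCM(kek).decrypt(entry["nonce"], entry["blob"], None)

data_key = AESGCM.generate_key(bit_length=256)  # encrypts the actual data
slots = [wrap(data_key, b"user password"), wrap(data_key, b"recovery password")]
# Either password recovers the same data key:
assert unwrap(slots[0], b"user password") == unwrap(slots[1], b"recovery password")
```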
Any other approach that pretends to avoid key escrow but still provides a backdoor to access your data if you lose the key is not secure and no one should trust it.

Security tokens with unreadable private keys?

I'd like to uniquely identify users by issuing security tokens to them. In order to guarantee non-repudiation I'd like to ensure that no one has access to the user's private key, not even the person who issues the tokens.
What's the cheapest/most-secure way to implement this?
Use a 3rd party certificate authority: you don't know the private key and you don't have to care about how the client gets and secures the private key (but you can worry about it). Not the cheapest solution ever...
OR:
Share a secret with each client (printed on paper, through email, phone, whatever...).
Have the client generate the keys based on that secret, the time (let's say 5-minute intervals), and whatever else you can get (computer hardware ID, if you already know it, client IP, etc.). Make sure that you have the user input the secret and never store it in an app/browser (see the sketch below).
Invalidate/expire the tokens often and negotiate new ones.
This is only somewhat safe (just like any other solution). If you want to be truly safe, make sure that the client computer is not compromised.
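A rough sketch of that derivation, under my own assumptions: the answer doesn't prescribe a KDF, so HKDF from Python's cryptography package is used here, and the secret and hardware ID values are placeholders.

```python
import time

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

INTERVAL = 300  # five minutes, as suggested in the answer

def derive_token_key(secret: bytes, hardware_id: bytes) -> bytes:
    window = int(time.time() // INTERVAL)  # changes every 5 minutes
    return HKDF(
        algorithm=hashes.SHA256(),
        length=32,
        salt=hardware_id,                   # whatever else you can get
        info=window.to_bytes(8, "big"),     # binds the key to the time window
    ).derive(secret)

# The secret comes from user input each time and is never stored.
key = derive_token_key(b"secret-from-paper", b"machine-1234")
```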
It depends on where/how you want to use those keys,* but the bottom line is that in the case of asymmetric keys, the client signs the data sent to you (the server) using their private key, and you (the server) verify that data using the client's public key (the opposite of how HTTPS works, where the server is the one holding the private key).
Can you verify, at any point in time, the identity of your clients?
If the client computer is compromised, you can safely assume that the private key is compromised too. What's wrong with SSL/HTTPS? Do you really need to have one certificate per client?
Tokens are not the same thing as keys and they don't have to rely on public/private keys. Transport, however, might require encryption.
*my bank gave me a certificate (which only works in IE) used to access my online banking account. I had to go through several convoluted steps to generate and install that certificate and it only works on one computer - do you think that your clients/users would agree to go through this kind of setup?
It would be relatively easy for compromised computers to steal the user's private key if it were stored as a soft key (e.g., on the hard drive). APT and botnet malware has been known to include functionality to do exactly this.
But more fundamentally, nothing short of physically incapacitating the user will guarantee non-repudiation. Repudiation is something the user can always choose to do, opposing evidence notwithstanding, and proving beyond doubt that a user did something is impossible. Ultimately, non-repudiation involves a legal (or at least a business) question: what level of confidence do you have that the user performed the action he is denying having performed, and that his denial is dishonest? Cryptosystems can only provide reasonable confidence of a user's involvement in an action; they cannot provide absolute proof.
PIV cards (and PIV-I cards) use a number of safeguards for signing certificates. First, the private key is stored on the smart card, and there is no trivial way to extract it. The card requires a numeric PIN to use the private signing key, and effectively destroys the key after a certain number of incorrect attempts. The hardware cryptomodule must meet Level-2 standards and be tamper-resistant, and transport of the card requires Level-3 physical security (FIPS 201). The certificate is signed by a trusted CA. The PIN, if entered using a keyboard, must be sent directly to the card to avoid keylogger-type attacks.
These precautions are elaborate, intensive, and still do not guarantee non-repudiation. (What if malware convinces the user to sign a different document than the one he is intending to sign? Or the user is under duress? Or an intelligence agency obtains the card in transit and uses a secret vulnerability to extract the private key before replacing the card?)
Security is not generally a question of cheapest/most secure, but rather of risk assessment, mitigation, and ultimately acceptance. What are your significant risks? If you assess the types of non-repudiation risks you face and implement effective compensating controls, you will be more likely to find a cost-effective solution than if you seek to eliminate risk altogether.
The standard way to handle non-repudiation in a digital-signature app is to use a trusted third party: a third-party certificate authority.
Be careful trying to create your own system: since you're not an expert in the field, you'll most probably end up either losing the non-repudiation ability that you seek or introducing some other flaw.
The reason the standards for digital signatures exist is that this stuff is very hard to get right in a provable way. See "Schneier's Law"
Also, non-repudiation eventually comes down to someone being sued: you say that "B" did it (signed the agreement, pressed the button, etc.), but "B" denies it. You say you can "prove" that B did it. But so what? You'll need to prove in court that B did it to get the court to grant you relief (to order B to do something, such as pay damages).
But it will be very, very expensive to sue someone and prove your case on the strength of a digital-signature system. And if you went to all that trouble and the system turned out to be some homebrew scheme rather than a standard, your odds of relief would drop to about zero.
Conclusion: if you care enough about digital signatures to sue people, then use a standard digital-signature scheme. If you will ultimately negotiate rather than sue, then look at the different options.
For example, why not use a hardware security token? They're now available as apps for people's phones, too.

Backwards HTTPS; User communicates with previously generated private key

I am looking for something like HTTPS, but backwards. The user generates their own private key (in advance) and then (only later) provides the web application with the associated public key. This part of the exchange should (if necessary) occur out-of-band. Communication is then encrypted/decrypted with these keys.
I've thought of some strange JavaScript approaches to implement this (from the client's perspective: form submissions are encrypted on their way out, while web content is decrypted on Ajax responses). I recognize this is horrible, but you can't deny that it would be a fun hack. However, I wondered if there was already something out there... something commonly implemented in browsers and web/application servers.
Primarily this is to address compromised security when (unknowingly) communicating through a rogue access point that may be intercepting HTTPS connections and issuing its own certificates. Recently (in my own network) I recreated this and (with due horror) soon saw my Gmail password in plain text! I have a web application going that only I and a few others use, but where security (from a learning standpoint) needs to be top notch.
I should add, the solution does not need to be practical.
Also, if there is something intrinsically wrong with my thought process, I would greatly appreciate it if someone set me on the right track or directed me to the proper literature. Science is not about finding better answers; science is about forming better questions.
Thank you for your time,
O∴D
This is already done. They're called TLS client certificates. SSL doesn't have to be one-way; it can be two-party mutual authentication.
What you do is have the client generate a private key. The client then sends a CSR (Certificate Signing Request) to the server, who signs the public key therein and returns it to the client. The private key is never sent over the network. If the AP intercepts and modifies the key, the client will know.
However, this does not stop a rogue AP from requesting a certificate on behalf of a client. You need an out-of-band channel to verify identity. There is no way to stop a man in the middle from impersonating a client without some way to get around that MITM.
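For illustration, the client side of that flow might look like the following sketch with Python's cryptography package (the subject name is a placeholder; any TLS toolkit offers equivalents):

```python
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

# Client side: generate the private key locally; it never leaves the machine.
client_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Build a CSR carrying the public key and the client's identity.
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.COMMON_NAME, "client-42"),  # illustrative
    ]))
    .sign(client_key, hashes.SHA256())
)

# Only the CSR is sent to the server, which signs it and returns a certificate.
csr_pem = csr.public_bytes(serialization.Encoding.PEM)
```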
If a rogue access point can sniff packets, it can also change packets (an ‘active’ man-in-the-middle attack). So any security measure a client-side script could possibly provide would be easily circumvented by nobbling the script itself on the way to the client.
HTTPS—and the unauthorised-certificate warning you get when a MitM is trying to fool you—is as good as it gets.
SSL, and therefore HTTPS, allows for client certificates. On the server side you can use environment variables to verify a certificate. If you only have one server and a bunch of clients, then a full PKI isn't necessary; instead, you can keep a list of valid client certificates in the database. Here is more info on the topic.
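As a rough illustration of that database check, assuming an Apache/mod_ssl-style front end that exports the documented SSL_CLIENT_* variables to the application; the known_clients set stands in for the database of valid client certificates:

```python
# Assumption: the web server terminates TLS, requests a client certificate,
# and passes mod_ssl's SSL_CLIENT_* variables through to the application.
known_clients = {"CN=client-42,O=Example"}  # stand-in for your database

def is_authorized(environ: dict) -> bool:
    # mod_ssl sets SSL_CLIENT_VERIFY to "SUCCESS" only when the client
    # certificate validated against the configured CA list.
    if environ.get("SSL_CLIENT_VERIFY") != "SUCCESS":
        return False
    # Then match the presented certificate's subject DN against the
    # list of certificates you issued.
    return environ.get("SSL_CLIENT_S_DN") in known_clients
```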
Implementing anything like this in JavaScript is a bad idea.
I don't see why you are using asymmetric encryption here. For one, it is slow; for another, it is vulnerable to a man in the middle anyhow.
Usually, you use asymmetric encryption to get a relatively secure session negotiation, including an exchange of keys for symmetric encryption valid for that session.
Since you use a secure channel for the negotiation, I don't really understand why you even send around public keys, which themselves are only valid for one session.
Asymmetric encryption makes sense if you have a shared secret that allows verifying a public key. Having this shared secret is significantly easier if you don't change the key for every session, and if the key is generated in a central place (i.e., on the server, not on every client).
Also, as Rook already pointed out, JavaScript is a bad idea. You would have to write everything from scratch, starting with basic arithmetic operations, since Number won't get you very far if you want to work with keys of a size that provides reasonable security.
greetz
back2dos

SSL authentication by comparing certificate fingerprint?

Question for all the SSL experts out there:
We have an embedded device with a little web server on it, and we can install our own SSL self-signed certificates on it. The client is written in .NET (but that doesn't matter so much).
How can I authenticate the device in .NET? Is it enough to compare the fingerprint of the certificate against a known entry in the database?
My understanding is that the fingerprint is a hash of the whole certificate, including the public key. A device pretending to be my device could of course send the same public certificate, but it couldn't know the private key, right?
Or do I have to build up my own chain of trust, create my own CA root certificate, sign the web server certificate and install that on the client?
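For illustration, here is a rough Python sketch of the fingerprint comparison being proposed (the client in the question is .NET, but the idea is the same); the pinned value is a placeholder:

```python
import hashlib
import hmac
import ssl

KNOWN_FINGERPRINT = bytes.fromhex("aa" * 32)  # placeholder: from your database

def device_is_authentic(host: str, port: int = 443) -> bool:
    # Fetch the device's certificate; no chain validation is attempted,
    # since the certificate is self-signed.
    pem = ssl.get_server_certificate((host, port))
    der = ssl.PEM_cert_to_DER_cert(pem)
    fingerprint = hashlib.sha256(der).digest()  # SHA-256, not SHA-1
    # Constant-time comparison against the pinned value.
    return hmac.compare_digest(fingerprint, KNOWN_FINGERPRINT)
```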
What you propose is in principle OK. It is, for example, used during key-signing parties: the participants usually just exchange their names and the fingerprints of their public keys, and make sure that the person at the party really is who they claim to be. Verifying fingerprints is much easier than verifying a long public key.
Another example is the so-called self-certifying file system. Here, again, only hashes of public keys get exchanged over a secure channel (i.e., these hashes are embedded in URLs). In this scheme the public keys don't have to be sent securely; the receiver only has to check that the hashes of the public keys match the hashes embedded in the URLs. Of course, the receiver also has to make sure that these URLs come from a trusted source.
This scheme, and what you propose, are simpler than using a CA. But there is a disadvantage: you have to make sure that your database of hashes is authentic. If your database is large, this will likely be difficult. If you use CAs, then you only have to ensure that the root keys are authentic. This usually simplifies key management significantly, and is of course one reason why CA-based schemes are more popular than, e.g., the self-certifying file system mentioned above.
In the same way you wouldn't and shouldn't consider two objects to be equal just because their hash codes matched, you shouldn't consider a certificate to be authentic just because its fingerprint appears in a list of "known certificate fingerprints".
Collisions are a fact of life with hash algorithms, even good ones, and you should guard against the possibility that a motivated attacker could craft a rogue certificate with a matching fingerprint hash. The only way to guard against that is to check the validity of the certificate itself, i.e. check the chain of trust as you're implying in your last statement.
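For the chain-of-trust route suggested here, a minimal sketch using Python's ssl module: trust only your own root certificate and let the library validate the whole chain. The file name and host are illustrative.

```python
import socket
import ssl

# Trust only your own CA root; any certificate not chaining to it is rejected.
context = ssl.create_default_context(cafile="device-ca.pem")  # illustrative file
context.check_hostname = True  # the device certificate must name its host

with socket.create_connection(("device.local", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="device.local") as tls:
        # Reaching this point means the chain validated against your root.
        print(tls.getpeercert()["subject"])
```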
Short:
Well in theory you then do exactly what a Certificate Authority does for you. So it should be fine.
Longer:
When a Certificate Authority signs your public key/certificate/certificate request, it doesn't sign the whole certificate data, but just a hash value calculated over that data.
This keeps the signature small.
If you don't want to establish your own CA or use a commercial/free one, then comparing the fingerprint with the one you trust gives you the second most trustworthy configuration. The most trustworthy solution would be to compare the whole certificate, because that also protects you from hash-collision attacks.
As the others here stated, make sure you use a secure hashing algorithm; SHA-1 is no longer considered secure.
More detailed information on this topic:
https://security.stackexchange.com/questions/6737
https://security.stackexchange.com/questions/14330
