I asked a related question here earlier. I have come up with a scheme, which I describe below, and I request feedback from experts.
Since the target application is a consumer application, the implicit assumption is that it won't be deployed on BES. If required, a separate application would be developed that is more suitable for, and integrates well with, the BES environment.
First, the build system of the application (including its source code) is tied to the user's registration. That is, when the user registers, an application is built for that user alone, with a download link provided as soon as registration is complete. The following sequence of steps is executed by the server on behalf of a user.
Installation
A (private key, public key) pair called the "Master Keys" is generated for that user.
A (private key, public key) pair called the "Channel Keys" is generated for that user.
The Master Public Key is signed with the server's code-signing key.
The Channel Public Key is signed with the server's code-signing key.
The Channel Keys and Master Keys are packaged along with the application source code.
A unique identifier for that application is generated and bundled along with the application.
The above source code is compiled using RIM's tools and signed with the RIM signer certificate.
Any intermediate files that are generated for the above process are deleted immediately after the build is complete.
Master Keys are used to carry out sensitive operations such as (a) resetting the user's password on the device, (b) resetting the application's password, (c) remote wipe when the device is lost, and (d) turning on remote tracking when the device is lost.
Channel Keys are used to encrypt/sign the data when the client communicates with the server.
Creating session keys. Session keys are used for one-time communication between client and server. They are exchanged between the device and the server over HTTPS, encrypted (perhaps using AES-256).
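The per-user provisioning steps above can be sketched as a server-side routine. Real key-pair generation would use an asymmetric-crypto library; here opaque random identifiers stand in for the key material, and `provision_user` is an illustrative name, not part of any real build system:

```python
import secrets
import uuid

def provision_user(user_id: str) -> dict:
    """Sketch of the server-side build record created at registration.
    The *_key fields are placeholders for real (private, public) key pairs."""
    return {
        "user_id": user_id,
        "master_key": secrets.token_hex(32),   # placeholder for the Master key pair
        "channel_key": secrets.token_hex(32),  # placeholder for the Channel key pair
        "app_id": str(uuid.uuid4()),           # unique per-application identifier
    }
```

The record would then be baked into the per-user build, and any intermediate files deleted once compilation completes.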
When the user downloads the application on to the phone and installs it successfully, on the first launch the user selects a password for the application. This password is known only to the user.
The application sends (user id, application id), encrypted with the session key, to the server over HTTPS.
The application generates a 128-bit UUID called the "Rescue Code" and prompts the user for an e-mail id. An e-mail containing this "Rescue Code" is sent to that address. The user is required to keep it safe and produce it if the user loses the phone or forgets the password.
This rescue code is stored on the device.
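The rescue-code step can be sketched with Python's standard library; the function names are illustrative, not part of the scheme:

```python
import hmac
import secrets

def generate_rescue_code() -> str:
    """Generate a random 128-bit rescue code, hex-encoded (32 chars)."""
    return secrets.token_hex(16)

def verify_rescue_code(stored: str, presented: str) -> bool:
    """Compare codes in constant time to avoid timing side channels."""
    return hmac.compare_digest(stored.encode(), presented.encode())
```

Using `hmac.compare_digest` rather than `==` keeps the comparison time independent of how many leading characters match.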
Once started, the application ALWAYS runs in the background and starts up when the phone boots.
Recovery
When the user forgets the password or loses the phone:
The user proves identity by producing a valid identity card (government-issued, perhaps) to the appropriate authority.
The server requests the client to create a secure channel. The client re-connects to the server by presenting a token that is encrypted with the Master Key.
The client presents the server with a challenge, requesting the "Rescue Code". This can be shown on a web UI.
The user presents the "Rescue Code" to the server.
The client matches the rescue code presented by the server against the stored one, and then sends a success code to the server.
Now the client can perform sensitive operations on behalf of the user.
I recently met an expert who has designed security systems for very large banks (which I cannot name) and who has implemented this kind of security model in some situations. I can now say with a certain degree of confidence that this is indeed a commercially workable and acceptable solution.
Desktop Client
Protected Resource Server
Authorization Server (Google)
User-Agent (Browser)
The Desktop Client generates a pub/priv key pair and directs the user agent, via webbrowser.open_new(), to the Protected Resource Server's OAuth initiation page, which stores the public key in the state parameter field of the Auth_URI redirect.
The User-Agent successfully auths with the Authorization Server and is redirected back to the Protected Resource Server with the Auth_Code and public key in the state parameter field.
The Protected Resource Server exchanges the Auth_Code using the confidential client secret and validates the id_token.
If the id_token is valid, (server-side processing happens), and the server then redirects on loopback to the listening Desktop Client with a query parameter containing an encrypted value that is only accessible by the initiating app.
It's a process very similar to PKCE, but I'm keeping the client secret confidential on the server rather than embedding it in the Desktop Client.
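For comparison, the PKCE code-verifier/code-challenge pair this flow resembles (RFC 7636, S256 method) can be generated with the standard library alone:

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple[str, str]:
    """Return (code_verifier, code_challenge) per RFC 7636, S256 method."""
    # 32 random bytes -> 43-char base64url verifier, padding stripped
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge
```

The client sends the challenge with the authorization request and the verifier with the token request, so an interceptor of the redirect alone cannot redeem the code.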
My concern is a malicious 3rd-party app that is able to intercept the initial OAuth_URI redirect and modify its values. Is this a mitigable threat once the device/browser is compromised? PKCE would suffer from the same issue, and I've seen no explanations that call out my concern as a particular issue, so I am assuming it is fine.
REDIRECT TAMPERING
The standard protection against malicious redirect tampering is in these emerging standards:
PAR
JAR
JARM
In high-security scenarios you may need to follow profiles that mandate some of these, and some profiles include financial-grade recommendations such as the FAPI 2.0 client requirements, which include PAR.
DESKTOP APPS
One known issue for desktop apps is that a malicious app could send the same client ID and use the same redirect URI to trigger a complete flow. I don't think you can fully protect against that, and any efforts may just be obfuscation.
Your concern is perhaps this issue, and it is not currently solvable, since any code your app runs could also be run by a malicious app, including use of PAR / DPoP or other advanced options.
CLIENT ATTESTATION
The behaviour you are after is client attestation, where a malicious party cannot attempt authentication without proving its identity cryptographically. For example, an iOS app can send proof of ownership of its App Store signing key.
Mobile and web apps can achieve a reasonable amount of client attestation by owning the domain for an HTTPS-based redirect URI, but these cannot be used for desktop apps. It would be good to see improved client attestation options for desktop apps some time soon.
WHAT TO DO?
Generally I would say keep your code based on standards that have been vetted by experts. This should mean that you use libraries in places but that your code remains simple.
Your desktop app will be as secure as other desktop apps even without solving this problem. In addition, users these days are socialised not to run arbitrary EXEs, and only to run properly signed apps, whose code-signing certificate identifies the publisher and chains up to an authority that has approved the app (we hope).
I wrote a small chat application where users can write each other messages:
On first login, a user generates a public/private key pair, derived from the user's password.
The public key is sent to the server (database).
If a user (A) wants to send user (B) a message, user A encrypts the message with the public key of user B and sends it to the server (and the server then delivers it to user B).
But what if somebody with database access changes the public key of user B in the database? Then the attacker can read all messages.
Is it somehow possible to authenticate the public key in the database and make sure it was not changed and that it 100% belongs to user B?
So you're trying to protect against the scenario where an attacker has control over the server and the server cannot be trusted. Since you can't trust any information provided by the server, you cannot use it directly in any form of verification either. The server can only be relegated to being a dumb transport, and the verification needs to happen directly against the other peer.
Being able to exchange the key out of band would help a lot here, meaning you somehow facilitate a direct peer-to-peer exchange of the key. Since it is difficult to trust the identity of a random remote peer over the general internet, you'd need to employ a strategy like Threema's: you can get any remote peer's public key anonymously, but your relationship to this peer is not verified at that point. Only if you're able to meet in person and exchange/verify keys by physically scanning each other's QR codes is the key trustworthy.
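The QR-code step boils down to comparing a short fingerprint of the peer's key out of band. A minimal sketch, where the key bytes are a placeholder for whatever encoding your keys use:

```python
import hashlib

def key_fingerprint(public_key_bytes: bytes) -> str:
    """Short, human-comparable fingerprint of a public key
    (first 16 hex chars of its SHA-256, grouped for readability)."""
    short = hashlib.sha256(public_key_bytes).hexdigest()[:16]
    return " ".join(short[i:i + 4] for i in range(0, 16, 4))
```

Both users compute this locally over the key they each hold and compare the strings in person (or via scanned QR codes); a server-side key swap would change the fingerprint and be detected.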
To facilitate any sort of key exchange with a remote peer via an untrustworthy server, you'd basically need to implement a Diffie-Hellman key exchange; the server can facilitate the communication but has no visibility into the data being exchanged. This has to happen with both peers online at the same time (or as a very slow offline back-and-forth), so it may be somewhat problematic in practice depending on your use case.
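A bare finite-field Diffie-Hellman exchange can be sketched with the standard library, using the 2048-bit MODP group 14 prime from RFC 3526. Note this sketch is unauthenticated: without out-of-band verification of the exchanged values, the untrusted server could still man-in-the-middle it.

```python
import hashlib
import secrets

# 2048-bit MODP prime (RFC 3526, group 14); generator is 2
P = int(
    "FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD1"
    "29024E088A67CC74020BBEA63B139B22514A08798E3404DD"
    "EF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245"
    "E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7ED"
    "EE386BFB5A899FA5AE9F24117C4B1FE649286651ECE45B3D"
    "C2007CB8A163BF0598DA48361C55D39A69163FA8FD24CF5F"
    "83655D23DCA3AD961C62F356208552BB9ED529077096966D"
    "670C354E4ABC9804F1746C08CA18217C32905E462E36CE3B"
    "E39E772C180E86039B2783A2EC07A28FB5C55DF06F4C52C9"
    "DE2BCBF6955817183995497CEA956AE515D2261898FA0510"
    "15728E5A8AACAA68FFFFFFFFFFFFFFFF", 16)
G = 2

def dh_keypair() -> tuple[int, int]:
    """Generate an ephemeral DH (private, public) pair."""
    priv = secrets.randbelow(P - 2) + 2
    return priv, pow(G, priv, P)

def dh_shared_key(priv: int, peer_pub: int) -> bytes:
    """Derive a 256-bit symmetric key from the DH shared secret."""
    shared = pow(peer_pub, priv, P)
    return hashlib.sha256(shared.to_bytes(256, "big")).digest()
```

Each peer sends only its public value through the server; both then derive the same symmetric key, which the server never sees.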
I'm building a web application that builds an XML document based on user input. After the doc is created, it needs to follow an approval path, e.g. a workflow, where several users "sign" the document. The signature from the user's point of view is just checking a field and clicking "accept", but what I need is for the document to be digitally signed at each step, and finally stored signed in a database.
What kind of devices/tools do I need to use? X.509 certificates in the client browser? Public/private keys generated by the app? Any link to documentation will be appreciated.
Certificates are not normally generated by the application (since PKI is about trust, which in the case of certificates is hierarchical). Users acquire certificates with private keys (let's say so for simplicity) and store them in a safe place or on hardware devices (smartcards, USB tokens).
Those certificates are then used to sign information. In the case of a web application, you can either transfer the data itself to the client or send a hash of the data there; in either case, signing takes place on the client side (except in rare cases where certificates are stored on a central server and access to them is authorized by the client each time a certificate is used).
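The hash-then-sign variant mentioned above can be sketched as follows; the server computes a digest of the XML document and only that digest travels to the client for signing (the function name is illustrative):

```python
import base64
import hashlib

def digest_for_signing(document: bytes) -> str:
    """Server side: compute the SHA-256 digest the client will sign.
    Only this short digest, not the (possibly large) document, is
    sent to the client, base64-encoded for transport."""
    return base64.b64encode(hashlib.sha256(document).digest()).decode("ascii")
```

The client then signs the digest with the private key on its smartcard/token and returns the signature, which is stored alongside the XML for that workflow step.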
We offer components for distributed signing of data. This answer contains a detailed description of how such signing works. You can use our solution or create your own that does the same.
What are the best practices for delivering an Adobe Air app that needs a private key in order to communicate with some online API?
Adobe AIR apps seem to be delivered to the user with full source code, so storing any keys in the source would be a really bad idea. I've read suggestions to download the key from your server, but that has the same problem, because the URL allowing the download would have to be stored in the source code. Suggestions to store the key in encrypted local storage don't make sense to me either, because I still have to obtain the private key somehow.
I think this is a general problem of delivering secret keys in any application, since everything can be reverse-engineered (disassembly for executables, IL readers, etc.).
No matter what you do, if the client application needs to somehow "know" a secret key, then the user can know the secret key.
Assuming that:
You deliver a product ("client application") which relies on some 3rd-party web service ("the service").
Your company has just one secret key ("company key") for using the service.
The company key must never be exposed (due to possible abuse).
Every piece of information held by or transmitted by the client application is exposed
A solution might be to use some proxy:
The proxy implements the API of the service
The client application connects to the proxy
The proxy connects to the service using the company key
The proxy delegates all calls from the client to the service and vice-versa
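The proxy idea above can be sketched as a thin forwarding layer that attaches the company key server-side. Here `call_service` is a stand-in for the real 3rd-party API call (in practice an HTTPS request), not an actual endpoint:

```python
# Minimal proxy sketch: the client never sees COMPANY_KEY; only the
# proxy process, running on your own server, holds and attaches it.

COMPANY_KEY = "s3cret-company-key"  # loaded from server-side config in practice

def call_service(endpoint: str, params: dict, api_key: str) -> dict:
    """Stand-in for the real 3rd-party service call."""
    return {
        "endpoint": endpoint,
        "authenticated": api_key == COMPANY_KEY,
        "params": params,
    }

def proxy_request(endpoint: str, params: dict) -> dict:
    """What the client actually calls: the proxy delegates the call
    to the service with the company key attached."""
    return call_service(endpoint, params, api_key=COMPANY_KEY)
```

The client application is configured to talk only to `proxy_request`'s server-side equivalent, so the key never ships inside the AIR package.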
I have a Silverlight 3 app which connects to a server to perform various actions. My users log in using Forms Authentication, but the actions they request are run on the server under the AppPool account, so in the audit logs they're recorded against the AppPool account. PCI DSS regulations now require that the user's own ID appear in the audit logs, which means the action must be taken using the user's credentials. Now, I can save the user's credentials when they log on and submit them with each request, so the actions taken by the server can use those credentials. But the PCI regs say that saved credentials must be encrypted (to prevent someone taking a memory dump of the PC and getting the password).
The only way I can see of doing this is to get a public key from the server, encrypt the password with it, then submit the encrypted password and decrypt it on the server using the private key. But Silverlight doesn't have asymmetric cryptography.
I guess I'm too close to the problem and there must be another solution but I can't see what it is. Can anyone help?
CLARIFICATIONS
It's an internal application. Up until now I've been using IIS Forms AuthN over SSL to Active Directory. I'm not worried about protecting the password in transit, just while it's held in memory on the client. As I understand it, because I'm using Forms Authentication, impersonation is not possible on the server unless I use LogonUser, which means I need the password on the server, so I need to transmit it each time, so I need to hold it in the client, in memory, until the app closes.
Are you saying you need to store the password for re-use in the Silverlight app? If you are concerned about the password appearing in memory unencrypted in Silverlight, then I think you're in trouble.
The .NET Framework does have a SecureString class for the exact purpose you outline.
Unfortunately, the Silverlight version of the framework does not have this class. Hence, even if you were to keep the logical storage of the password encrypted, at some point your code would need to decrypt it before using it. At that point there is memory allocated containing the string in unencrypted form.
I don't know much about Forms Authentication, but if you can map the user principal to a domain user (which you seem to indicate you need), then you will want to use impersonation when running your code on the server.
Alternatively, stop using Forms Authentication and use Windows integrated authentication, where you definitely can use impersonation server-side.
Encryption should never be used for passwords. When you encrypt something, it follows that there should be a way to decrypt it. One-way hashes should always be used for passwords. MD5 and SHA-1 have been proven far too weak for any security system.
SHA-256 should be used, and in Silverlight this library will take care of it:
http://msdn.microsoft.com/en-us/library/system.security.cryptography.sha256%28VS.95%29.aspx
In fact, storing passwords using "encryption" is recognized as vulnerability family CWE-257. The use of a message digest is the ONLY way to safely store passwords. I didn't just make this up; this comes from NIST. There are many other vulnerabilities that come up when storing passwords. Here is THE LIST that NIST has put together:
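As a side note, current guidance goes a step further than a bare SHA-256: password digests should use a per-user random salt and many iterations to slow down brute force. A sketch using the standard library's PBKDF2 (the iteration count is illustrative; tune it to your hardware):

```python
import hashlib
import hmac
import secrets

ITERATIONS = 100_000  # illustrative; pick the highest count you can afford

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest) using PBKDF2-HMAC-SHA256 with a random 16-byte salt."""
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the digest and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)
```

The salt and digest are stored together; the password itself is never stored and cannot be recovered from them.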