I'm building a web application that builds an XML document based on user input. After the doc is created, it needs to follow an approval path, e.g. a workflow, where several users "sign" the document. The signature from the user's point of view is just checking a field and clicking "accept", but what I need is for the document to be digitally signed at each step, and finally stored, signed, in a database.
What kind of devices/tools do I need to use? X.509 certificates in the client browser? Public/private keys generated by the app? Any links to documentation will be appreciated.
Certificates are not normally generated by the application (since PKI is about trust, which is hierarchical in the case of certificates). Users acquire certificates with private keys (let's say so for simplicity) and store them in a safe place or on hardware devices (smartcards, USB tokens).
Those certificates are then used to sign information. In the case of a web application you can either transfer the data itself to the client or send a hash of the data there, but in either case signing takes place on the client side (except in rare cases where certificates are stored on a central server and access to them is authorized by the client each time the certificate is used).
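As a rough illustration of the signing and verification steps themselves, here is a minimal Python sketch, assuming an RSA key simply loaded from a PEM file; in a real deployment the private key would stay on the user's token or smartcard and the signing call would happen there, and the file names are hypothetical:

```python
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

# Hypothetical path; in practice the key never leaves the user's token.
with open("user_key.pem", "rb") as f:
    private_key = serialization.load_pem_private_key(f.read(), password=None)

# The data produced by the current workflow step.
with open("document.xml", "rb") as f:
    xml_bytes = f.read()

# Sign the document bytes (RSA PKCS#1 v1.5 over SHA-256, as an example scheme).
signature = private_key.sign(xml_bytes, padding.PKCS1v15(), hashes.SHA256())

# Verification side: anyone holding the signer's certificate/public key can check it.
public_key = private_key.public_key()
public_key.verify(signature, xml_bytes, padding.PKCS1v15(), hashes.SHA256())
print("signature verified")
```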
We offer components for distributed signing of data. This answer contains a detailed description of how such signing works. You can use our solution or create your own that does the same.
Context
I'm using mTLS to secure DocuSign webhooks (DocuSign Connect service). I'm able to make a successful mTLS connection and get the certificate fingerprint, according to the documentation.
The next suggested step is to do access control by validating the certificate fingerprint and possibly the Distinguished Name (DN), but I'm confused about the correct way to do so.
Questions
How do we know which client certificate is going to be sent by DocuSign to our listener in the live environment (theoretically it can be any one of these), and what logic is used to determine which one is sent? Should we validate which certificate is sent by the DN (e.g. connect.docusign.net)?
What information should we validate from the certificate message? The fingerprint, DN, both or more?
With the above, how can we know all possible fingerprints to validate on the server side, assuming different webhook messages can send different client certificates? Should we compute the fingerprint of all public Connect certificates to get a full list?
What is the best way to handle expirations of client certificates?
Re:
Q. How do we know which client certificate is going to be sent by DocuSign to our listener in the live environment (theoretically it can be any one of these), and what logic is used to determine which one is sent? Should we validate which certificate is sent by the DN (e.g. connect.docusign.net)?
A. Best is to validate based on the certificate's fingerprint matching a fingerprint of one of the expected certificates. DocuSign uses different certificates depending on the platform. But there's a limited set of certs used, so it should not be a big deal to check to see if the offered cert matches one of the expected certs.
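A minimal sketch of that check in Python, assuming your TLS terminator or web framework hands you the client certificate as raw DER bytes; the fingerprint values shown are placeholders, not real DocuSign fingerprints:

```python
import hashlib

# Placeholder values; compute the real ones from the certificates published
# by DocuSign in the Connect Certificates section of the trust center.
EXPECTED_FINGERPRINTS = {
    "aa11bb22...",  # e.g. the cert presented by connect.docusign.net
    "cc33dd44...",  # another expected Connect certificate
}

def is_trusted_client_cert(der_bytes: bytes) -> bool:
    """Return True if the presented client cert is one we expect."""
    fingerprint = hashlib.sha256(der_bytes).hexdigest()
    return fingerprint in EXPECTED_FINGERPRINTS
```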
Q. What information should we validate from the certificate message? The fingerprint, DN, both or more?
A. I'd recommend the fingerprint since it is more specific than the DN. With the DN, you're trusting the CAs to not issue a cert with a DocuSign DN to a bad guy. It should never happen but it has in the past (not to DocuSign though). See Rogue certificates
Q. With the above, how can we know all possible fingerprints to validate on the server side, assuming different webhook messages can send different client certificates? Should we compute the fingerprint of all public Connect certificates to get a full list?
A. DocuSign uses a limited set of five certificates for webhook notifications, see the list on the trust center in the Connect Certificates section. Checking the incoming certificate against five or ten (see below) fingerprints is not a big deal.
Q. What is the best way to handle expirations of client certificates?
A. When the new certificates are announced, compute their fingerprints and add them to your system.
Then test by switching your DocuSign account to use the new certificates. Once the test succeeds, you can delete the fingerprints of the old certs.
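Computing a fingerprint from a downloaded certificate is a one-liner; a sketch in Python (the file name is hypothetical, and the certificate would be one of those published on the trust center):

```python
from cryptography import x509
from cryptography.hazmat.primitives import hashes

# A newly announced Connect certificate downloaded as PEM.
with open("connect_new.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

print(cert.subject.rfc4514_string())            # the DocuSign DN, for reference
print(cert.fingerprint(hashes.SHA256()).hex())  # value to add to the allowlist
```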
I'm currently trying to use IdentityServer4 to build a single sign-on experience for my users across different apps I have. They are all hosted in the same local network and no third-party apps authenticate with it. The client apps are still Katana/Owin-based.
I'm using the implicit workflow.
At the moment I still use a certificate randomly generated at runtime to sign tokens.
I wonder:
whether I actually need more and what the implications are of leaving it as it is, and
how the signature is actually validated by clients.
Regarding the second question, I found this piece in the OpenID Connect spec:
The OP advertises its public keys via its Discovery document, or may supply this information by other means. The RP declares its public keys via its Dynamic Registration request, or may communicate this information by other means.
So does that mean Katana is actually getting the public keys from IdentityServer4 and validates accordingly? And if so, would it matter if the certificate changes? The time between issuing and validating a token is always very small, correct? So why would I need a proper, rarely-changing certificate?
Generating a new certificate on app startup has a few downsides:
If you restart your IDS4 process you effectively invalidate any otherwise valid tokens as the signature will no longer be valid
Inability to scale out - all servers need to have the same signing and validation keys
Clients might only periodically update their discovery info so you need to allow for a rollover period, something that IDS4 supports as you can have more than one validation key.
See the guidance here: http://docs.identityserver.io/en/release/topics/crypto.html
The next simplest option would be to use a self-issued cert that's installed in the host machine's certificate store.
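The IdentityServer4 configuration itself is C#, but as a sketch of the one-time step of creating such a long-lived self-issued signing cert (here in Python with the cryptography package; the name, lifetime, and password are purely illustrative), you could do something like:

```python
import datetime
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.hazmat.primitives.serialization import pkcs12
from cryptography.x509.oid import NameOID

# Generate a key pair and a self-signed certificate valid for five years.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "ids4-signing")])

cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365 * 5))
    .sign(key, hashes.SHA256())
)

# Export as PFX so it can be imported into the machine's certificate store.
pfx = pkcs12.serialize_key_and_certificates(
    b"ids4-signing", key, cert, None,
    serialization.BestAvailableEncryption(b"change-me"),
)
with open("ids4-signing.pfx", "wb") as f:
    f.write(pfx)
```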
First of all, OpenID Connect discovery is the process by which a relying party dynamically retrieves the provider's information. There is a dedicated specification for this, OpenID Connect Discovery 1.0.
According to its metadata section, jwks_uri describes where the token signing keys are published.
1. So does that mean Katana is actually getting the public keys from IdentityServer4 and validates accordingly?
Yes, it should. If your applications (relying parties) want dynamic information, you should use the discovery document to retrieve token signing key information.
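As an illustration of what a relying party does with that document, here is a rough Python sketch (the URLs, audience, and use of the PyJWT library are assumptions for the example, not what Katana does internally):

```python
import requests
import jwt  # PyJWT
from jwt import PyJWKClient

issuer = "https://ids.example.com"  # hypothetical IdentityServer4 address

# Fetch the provider metadata, then follow jwks_uri for the signing keys.
metadata = requests.get(f"{issuer}/.well-known/openid-configuration").json()
jwks_client = PyJWKClient(metadata["jwks_uri"])

def validate(token: str) -> dict:
    # Pick the key matching the token's "kid" header and verify the signature.
    signing_key = jwks_client.get_signing_key_from_jwt(token)
    return jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],
        audience="my-client-app",   # hypothetical client id
        issuer=metadata["issuer"],
    )
```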
2. And if so, would it matter if the certificate changes? The time between issuing and validating a token is always very small, correct?
The discovery document is part of OpenID Connect Dynamic (reference - http://openid.net/connect/ ). So yes, it can be used to convey certificate changes to relying parties (token consumers).
3. So why would I need a proper, rarely-changing certificate?
The certificate must be there to validate ID tokens issued by the identity provider, so at a minimum the certificate must live until the last token expires. Other than that, one might be using proper certificates issued by a CA, which comes with a cost. So, some implementations could have rarely changing certificates.
Bonus: how the signature is actually validated by clients.
You hash the received message and compare it against the decrypted signature, using the public key of the certificate. Also, if you are wondering about the format of the key information, it is a JWK, defined by RFC 7517.
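For example, a bare-bones sketch of that check for an RS256-signed token (Python, using the cryptography package; in practice a JWT library does this for you):

```python
import base64
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding

def b64url_decode(part: str) -> bytes:
    # JWTs use unpadded base64url; restore the padding before decoding.
    return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))

def verify_rs256(token: str, public_key) -> None:
    header_b64, payload_b64, sig_b64 = token.split(".")
    signed_input = f"{header_b64}.{payload_b64}".encode()
    signature = b64url_decode(sig_b64)
    # Raises InvalidSignature if the token was tampered with or signed
    # by a different key than the one published by the provider.
    public_key.verify(signature, signed_input, padding.PKCS1v15(), hashes.SHA256())
```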
P.S. - ID token validation is the same as validating a JWT, as explained in the JWT spec.
Note - I am not an expert in the PKI domain. Some expert could point out something else for short-lived certificates, independent of the OpenID Connect protocol.
I have a web application where some data (not a file) needs to be digitally signed using a PKI private key. The PKI certificate & private key will be in a USB cryptotoken which registers the certificates with the browser when inserted into the USB slot. This eases the pain of doing authentication using the certificate, because I do that by triggering SSL renegotiation in my application.
However, using a certificate for digital signing seems to be a bit more tricky. I can think of several ways to do this:
CAPICOM - http://en.wikipedia.org/wiki/CAPICOM
This will work for browsers which support CAPICOM (e.g. IE). However, it seems that Microsoft has discontinued this.
Mozilla Crypto Object - https://developer.mozilla.org/en-US/docs/JavaScript_crypto
WebCrypto API - this is not yet supported by most browsers.
A custom Java applet or some open-source, freely available Java applet control.
Any other options?
I am trying to figure out what is the common, convenient and secure way of doing this in a web-application.
Note:
I am OK with just supporting the popular browsers.
I am signing a small piece of data - say 100-200 bytes rather than a file.
I would prefer PKCS#7 signatures.
[Disclosure: I work for CoSign.]
The problem that you're running into is a common one with old-style PKI systems that store the signer's private key at the boundary (eg in a smart card, a token, etc). This system was designed when the PC (and apps running on it) was the focus. But that isn't true this century. Now either the browser or the mobile is the focus.
You have tension between the nature of web apps (they're either running on the host or are sandboxed JavaScript on the browser) versus the idea of local hardware that "protects" the private key.
Breaking out of the browser's sandbox
One design direction is to try to break out of the browser's sandbox to access the local hardware private key store. You've listed a number of options. An additional one is the Chrome USB access library. But all of these solutions are:
Limited to specific browsers
Hard (and expensive) to install
Hard (and expensive) to maintain
Subject to a high level of administrative overhead to help the users with their questions about keeping the system working.
Re your question 5 "Any other options?"
Yes: Centralized signing
A better option (IMHO) is to sign centrally. This way the keys are kept in a centralized FIPS-secure server. Meanwhile, the signers just use a webapp to authorize the signing. The signers don't need to hold the private key since it is stored in the secure server.
To authenticate the signers, you can use whatever level of security your app needs: user name/password; One Time Password; two factor authentication via SMS; etc.
The CoSign Signature API and CoSign Signature Web Agent are designed for this. Centralized PKI signing is also available from other vendors.
Added in response to comment
From the 2nd part of your answer - If the certificate is stored in the server and retrieved by authenticating the user by using uname/pwd or with 2FA, then why do digital signing at all? i.e. what advantage does it offer over just authenticating the transaction with uname/pwd or 2FA?
A: In the centralized design, the private key does not leave the central server. Rather, the document or data to be signed is sent to the server, is signed, and then the signed doc or data (e.g. XML) is returned to the webapp.
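A very rough sketch of that shape of service (Python/Flask here purely for illustration; the endpoint, authentication, and key handling are assumptions and not the actual CoSign API):

```python
import base64
from flask import Flask, request, jsonify
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

app = Flask(__name__)

# The signing key lives only on this server; hypothetical key file.
with open("server_signing_key.pem", "rb") as f:
    SIGNING_KEY = serialization.load_pem_private_key(f.read(), password=None)

@app.route("/sign", methods=["POST"])
def sign():
    # A real deployment would first authenticate the signer here
    # (username/password, one-time password, 2FA via SMS, ...).
    data = request.get_data()
    signature = SIGNING_KEY.sign(data, padding.PKCS1v15(), hashes.SHA256())
    # The webapp gets back the signature (or a signed document) only;
    # the private key never leaves the server.
    return jsonify({"signature": base64.b64encode(signature).decode()})
```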
Re: why do this? Because a digitally signed document or data set (e.g. XML) can be verified to guarantee that the document was not changed since it was signed, and it provides a trust chain to give assurance of the signer's identity. In contrast, passwords, even when strengthened by 2FA etc., only provide the app with signer identity assurance, not third parties.
PKI digital signing enables third parties to assure themselves of the signer's identity through the verification process. And the strength of the assurance can be set, as needed, by choosing different CAs.
I am looking into ways of securing the channel between my client apps and the server.
I have a rich desktop client (win) and mobile client connecting to a webservice, exchanging data.
Using SSL certificates, server and clients may trust each other. On the secured connection I can exchange username and password and therefore authenticate the user.
However, I have certain circumstances where a user must connect to the server via either of the two methods without his credentials, but only with a literal, like, say, a license plate number.
I really want to make sure that in this case I ONLY allow client connections from devices I am sure I know, since there are no further checks on the authentication, and a license plate number would be a pretty common literal.
How can I ensure that only "devices" which are known to my server can interact with my server?
If you want to authenticate the device, you'll need to find a way for the device to prove what it is, without disclosing its secret.
A system similar to a number plate would be quite easy to spoof, for anyone in a position to see that number. Depending on how much control you have on this device, you might not be able to hide it, even if the connection to your server is secured with SSL/TLS.
A potential way to do this would be to use a cryptographic hardware token (or smart card). Some of these tokens can be configured to hold a certificate and private key, with the ability to use the private key without being able to export that private key. The cryptographic operations (signing and decryption) happen on the token itself.
You can use these to perform client-certificate authentication to your server. In this case, you would know that the client has that token. This could work on the condition that you know the CA issued its certificates only for key pairs held in such tokens: there will be a cost in administering the CA to handle this.
This would at least allow you to tie the authentication to a particular token. Whether you can integrate this with your overall device depends on the kind of device you have.
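For illustration, a minimal sketch of the server side of such client-certificate authentication (Python's ssl module; the file paths and the dedicated device CA are assumptions):

```python
import socket
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain("server_cert.pem", "server_key.pem")
ctx.load_verify_locations("device_ca.pem")   # the CA that issues token-only certs
ctx.verify_mode = ssl.CERT_REQUIRED          # reject clients without a valid cert

with socket.create_server(("0.0.0.0", 8443)) as server:
    with ctx.wrap_socket(server, server_side=True) as tls_server:
        # The handshake fails for any device that cannot present a certificate
        # chained to the device CA, so unknown devices never get this far.
        conn, addr = tls_server.accept()
        print("device authenticated:", conn.getpeercert()["subject"])
```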
Please check if TLS Pre-Shared Keys (RFC 4279) can be used for your scenario.
I have earlier asked a related question here. I have come up with a scheme which I shall describe below. I request experts out there to provide feedback.
Since the target application is a consumer application, the implicit assumption is that the application won't be deployed on BES. If required, a separate application would be developed that is more suitable and integrates well with the BES environment.
First, the build system of the application (including source code) is tied to the user's registration. That is, when the user registers, an application is built for that user only, with a link provided as soon as the registration is complete. The following sequence of steps is executed by the server on behalf of a user.
Installation
A (Private Key, Public Key) pair called the "Master Keys" is generated for that user.
A (Private Key, Public Key) pair called the "Channel Keys" is generated for that user.
The Master Public Key is signed by the server's code signing keys.
The Channel Public Key is signed by the server's code signing keys.
The Channel Keys and Master Keys are packaged along with the application source code.
A unique identifier for that application is generated and bundled along with the application.
The above source code is compiled using RIM's tools and signed with the RIM signing certificate.
Any intermediate files that are generated by the above process are deleted immediately after the build is complete.
Master Keys are used to carry out sensitive operations such as (a) Reset the user's password on device (b) Reset application's password (c) Remote wipe when the device is lost (d) Turn on remote tracking when the device is lost.
Channel Keys are used to encrypt/sign the data when the client communicates with the server.
Creating Session Keys. Session keys are used for one-time communication between client and server. They are exchanged over HTTPS between the device and the server, encrypted (perhaps using AES-256).
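A rough sketch of the key-generation part of the installation steps above (Python; the key sizes, padding scheme, and file names are illustrative assumptions, not what RIM tooling would use):

```python
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# The server's own code signing key (hypothetical path).
with open("server_code_signing_key.pem", "rb") as f:
    server_signing_key = serialization.load_pem_private_key(f.read(), password=None)

def new_user_keypair():
    """Generate one (private, public) pair and sign the public key."""
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    pub_pem = key.public_key().public_bytes(
        serialization.Encoding.PEM, serialization.PublicFormat.SubjectPublicKeyInfo
    )
    # Server's signature over the public key, bundled with the per-user build
    # so the client can check that its keys really came from the server.
    sig = server_signing_key.sign(pub_pem, padding.PKCS1v15(), hashes.SHA256())
    return key, pub_pem, sig

master_key, master_pub, master_pub_sig = new_user_keypair()
channel_key, channel_pub, channel_pub_sig = new_user_keypair()
```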
When the user downloads the application on to the phone and installs it successfully, on the first launch the user selects a password for the application. This password is known only to the user.
Application sends (user id, application id) encrypted with session key to the server over HTTPS
The application generates a 128-bit UUID called the "Rescue Code" and prompts the user for an e-mail ID. An e-mail containing this "Rescue Code" is sent to that address. The user is required to keep it safe and produce it if the user loses the phone or forgets the password.
This rescue code is stored on the device.
Once started, the application ALWAYS runs in the background and starts up as the phone boots.
Recovery
When the user forgets the password or loses the phone:
The user proves identity by producing valid identity card (provided by Government, perhaps) to appropriate authority.
The server requests the client to create a secure channel. The client reconnects to the server by presenting a token that is encrypted with the Master Key.
The client presents the server with a challenge, requesting the "Rescue Code". This can be shown on the web UI.
The user presents the "Rescue Code" to the server.
The client matches the rescue code presented by the server against the one that is stored, and then a success code is sent to the server.
Now the client can perform sensitive operations on behalf of the user.
Recently I met an expert who has designed security systems for very large banks (which I cannot reveal) and has implemented this kind of security model in some situations. So now I can say with a certain degree of confidence that this is indeed a commercially workable and acceptable solution.