I am using Azure ACS with my MVC4 client using passive redirection. I've configured Azure to issue JWT, and I'm using the new WIF JWT library for validating the token. This is all working just fine.
The problem that I am encountering is that Azure signs the JWT with the X.509 certificate rather than the symmetric key, which means my MVC application has to have the certificate, which is annoying at the moment.
I know that the JWT support is in Beta, and that there are security issues with using a Symmetric Key (in that anyone with the Key could fake a token).
Is there some setting that I am missing? I tried making the Certificate the "Secondary" signing key in Azure, but this had no effect.
ACS chooses keys to sign a JWT in the following precedence order:
Relying party symmetric key
Relying party certificate
Service-wide certificate
What you don't see anywhere on this list is a Symmetric service key, because there are security issues with using a symmetric key between more than two entities.
What this means is that your key needs to be associated with the relying party, not the namespace; in the ACS portal, this is configured on the relying party application itself.
Related
I'm currently trying to use IdentityServer4 to build a single sign-on experience for my users across the different apps I have. They are all hosted in the same local network, and no third-party apps authenticate with it. The client apps are still Katana/Owin-based.
I'm using the implicit workflow.
At the moment I still use a certificate randomly generated at runtime to sign tokens.
I wonder
1. whether I actually need more and what the implications are of leaving it as it is, and
2. how the signature is actually validated by clients.
On the second question I found this piece in the OpenID Connect spec:
The OP advertises its public keys via its Discovery document, or may supply this information by other means. The RP declares its public keys via its Dynamic Registration request, or may communicate this information by other means.
So does that mean Katana is actually getting the public keys from IdentityServer4 and validates accordingly? And if so, would it matter if the certificate changes? The time between issuing and validating a token is always very small, correct? So why would I need a proper, rarely-changing certificate?
Generating a new certificate on app startup has a few downsides:
If you restart your IDS4 process you effectively invalidate any otherwise valid tokens as the signature will no longer be valid
Inability to scale out - all servers need to have the same signing and validation keys
Clients might only periodically update their discovery info, so you need to allow for a rollover period - something IDS4 supports, as you can have more than one validation key.
See the guidance here: http://docs.identityserver.io/en/release/topics/crypto.html
The next simplest option would be to use a self-issued cert that's installed in the host machine's certificate store.
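For illustration, here is a minimal sketch of that setup, assuming roughly the IdentityServer4 2.x API (the store lookup and subject names are hypothetical); the previous certificate stays registered as a validation key so tokens signed before a rollover remain valid:

using System.Security.Cryptography.X509Certificates;
using Microsoft.Extensions.DependencyInjection;

public void ConfigureServices(IServiceCollection services)
{
    X509Certificate2 current, previous;
    using (var store = new X509Store(StoreName.My, StoreLocation.LocalMachine))
    {
        store.Open(OpenFlags.ReadOnly);
        // Hypothetical subject names for the current and previous signing certs.
        current = store.Certificates.Find(
            X509FindType.FindBySubjectName, "ids-signing-current", validOnly: false)[0];
        previous = store.Certificates.Find(
            X509FindType.FindBySubjectName, "ids-signing-previous", validOnly: false)[0];
    }

    services.AddIdentityServer()
        .AddSigningCredential(current)   // signs newly issued tokens
        .AddValidationKey(previous);     // still accepted when validating older tokens
}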
First of all, OpenID Connect discovery is the process by which a relying party dynamically retrieves the provider's configuration. There is a dedicated specification for this: OpenID Connect Discovery 1.0.
According to its metadata section, jwks_uri is where the provider publishes its token signing keys.
1. So does that mean Katana is actually getting the public keys from IdentityServer4 and validates accordingly?
Yes, it should. If your applications (relying parties) want this information dynamically, they should use the discovery document to retrieve the token signing keys.
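For example, a relying party can fetch the keys itself. A minimal sketch in C# (the authority URL is hypothetical; JsonWebKeySet comes from the Microsoft.IdentityModel.Tokens package):

using System;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;
using Microsoft.IdentityModel.Tokens;

class DiscoverKeys
{
    static async Task Main()
    {
        using var http = new HttpClient();

        // 1. Fetch the discovery document (OpenID Connect Discovery 1.0).
        var discoJson = await http.GetStringAsync(
            "https://ids.example.com/.well-known/openid-configuration");
        using var disco = JsonDocument.Parse(discoJson);
        var jwksUri = disco.RootElement.GetProperty("jwks_uri").GetString();

        // 2. Fetch the key set the provider advertises at jwks_uri.
        var jwks = new JsonWebKeySet(await http.GetStringAsync(jwksUri));
        foreach (var key in jwks.GetSigningKeys())
            Console.WriteLine($"Signing key: {key.KeyId}");
    }
}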
2. And if so, would it matter if the certificate changes? The time between issuing and validating a token is always very small, correct?
The discovery document is part of the OpenID Connect Dynamic profile (reference: http://openid.net/connect/). So yes, it can be used to convey certificate changes to relying parties (token consumers).
3. So why would I need a proper, rarely-changing certificate?
The certificate must be available to validate the ID tokens issued by the identity provider, so at a minimum the certificate must live until the last token signed with it expires. Other than that, one might be using proper certificates issued by a CA, which come at a cost, so some implementations have rarely-changing certificates.
Bonus: how the signature is actually validated by clients.
You hash the received message and compare that hash against the signature, decrypted using the public key of the certificate. Also, if you are wondering about the format of the key information, it is a JWK, defined by RFC 7517.
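Conceptually, for an RS256-signed JWT this boils down to something like the following sketch (the RSA public key would be built from the JWK published at jwks_uri):

using System.Security.Cryptography;
using System.Text;

static class Rs256
{
    // 'signingInput' is the "header.payload" part of the JWT;
    // 'signature' is the base64url-decoded third segment.
    public static bool Verify(RSA publicKey, string signingInput, byte[] signature)
    {
        byte[] data = Encoding.ASCII.GetBytes(signingInput);
        // VerifyData hashes the input with SHA-256 and checks it against the
        // signature using PKCS#1 v1.5 padding - which is what RS256 specifies.
        return publicKey.VerifyData(data, signature,
            HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);
    }
}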
P.S. ID token validation is the same as validating a JWT, as explained in the JWT spec.
Note: I am not an expert in the PKI domain. An expert might point out something else for short-lived certificates, independent of the OpenID Connect protocol.
We currently only use reference tokens as access tokens. This has me wondering if we could just skip the entire certificate management hell by including a self-signed X509 certificate with a ridiculously long validity and store it with our source code (private github) - stop screaming, this might make sense soon.
The worst case I see would be that someone with access to the private key (i.e. any employee or force with access to our github repository) could issue any JWT and use that in the client (angular) - but that's client-side. The APIs are protected via IdentityServer Access Token Validation and all clients are configured to use reference tokens.
Another possible pitfall would be if we ever added a client that uses JWT for access tokens, but I don't really see that happening.
To me, using a long lived self-signed certificate under source-control seems to be the easiest and okay(-ish) solution for this case - unless I've overlooked something. We would never do that with SSL certificates or similar. I'm focussing only on the IdentityServer4 signing credential in combination with exclusive use of reference tokens.
Otherwise we'll have to somehow get certificate rollover (at runtime), certificate management etc. running. I guess we could implement ISigningCredentialStore to manage where the certificates are loaded from - but that still leaves us with the issue on how to handle certificates in a Docker swarm (or just plain Docker containers).
Am I missing something here? Would this solution have any flaws?
Could you not just look it up?
Like this:
var cert = new CertificateService().GetCertificate(appSettings.CertificateName, StoreName.TrustedPeople, StoreLocation.LocalMachine);
services
.AddSigningCredential(cert);
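CertificateService is not a framework type, so treat it as a helper you would write yourself. A minimal sketch of such a store lookup might look like this:

using System;
using System.Security.Cryptography.X509Certificates;

public class CertificateService
{
    public X509Certificate2 GetCertificate(string subjectName, StoreName storeName, StoreLocation location)
    {
        using (var store = new X509Store(storeName, location))
        {
            store.Open(OpenFlags.ReadOnly);
            var matches = store.Certificates.Find(
                X509FindType.FindBySubjectName, subjectName, validOnly: false);
            if (matches.Count == 0)
                throw new InvalidOperationException($"Certificate '{subjectName}' not found.");
            return matches[0];
        }
    }
}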
I want to increase the safety of my web app in case of an attack.
The following components are present in my system:
Azure Web App
Azure Blob Storage
Azure SQL Database
Azure KeyVault
Now there is the scenario that the app encrypts and stores uploaded documents.
This works as described:
1) The user uploads a document to the web app
2) A random encryption key is generated
3) The random encryption key is stored in Azure Key Vault
4) SQL Azure stores the blob URL and the key URL
Now my question is:
How is using Key Vault safer in case the web app instance is hacked? I mean, the client id and client secret for accessing Key Vault are in the app.config; we need them to read and write keys. So whether I use Key Vault or not does not increase safety in terms of hacking the web app, right?
The Key Vault is an API wrapped around an HSM. What makes the Key Vault or HSM secure is that the keys cannot be extracted from them once imported or created. Also, the crypto operations (encrypt/decrypt in your case) happen inside the vault, so the keys are never exposed, even in memory.
If someone was able to hack your web application and get the credentials to your Key Vault, they could use the vault to decrypt the data. In that case you could regenerate the credentials for the Key Vault and continue to use the same keys in the vault, because the keys themselves were never exposed: any encrypted data that the attacker didn't already decrypt is still safe.
Typically, HSMs aren't designed to store a large number of keys, only a few really important ones. You might want to consider a key wrapping solution where you have just one key in the vault.
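As a sketch of what key wrapping looks like with the current Azure.Security.KeyVault.Keys SDK (the vault URL and key name are hypothetical; the question's era used the older Microsoft.Azure.KeyVault package, but the idea is the same):

using System;
using System.Security.Cryptography;
using Azure.Identity;
using Azure.Security.KeyVault.Keys.Cryptography;

class KeyWrapSketch
{
    static void Main()
    {
        // One RSA key-encryption key lives in the vault and never leaves it.
        var crypto = new CryptographyClient(
            new Uri("https://myvault.vault.azure.net/keys/doc-kek"),
            new DefaultAzureCredential());

        // Per-document AES key, generated locally.
        byte[] dataKey = RandomNumberGenerator.GetBytes(32);

        // Wrap it inside the vault; persist only the wrapped form next to the blob.
        WrapResult wrapped = crypto.WrapKey(KeyWrapAlgorithm.RsaOaep, dataKey);

        // Later: unwrap inside the vault to decrypt the document.
        UnwrapResult unwrapped = crypto.UnwrapKey(KeyWrapAlgorithm.RsaOaep, wrapped.EncryptedKey);
    }
}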
You probably want to encrypt the client id and client secret in your config and decrypt them at runtime - this adds another layer of security. Now the attacker either needs to read the keys out of your application's memory while it is running on your Cloud Service / VM (not an easy task), or needs to obtain both the config file and the private key of the certificate used to encrypt your config values (easier than reading memory, but still requiring a lot of access to your system).
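One possible shape for that certificate-based layer is a CMS/PKCS#7 blob in the config, decrypted with EnvelopedCms from System.Security.Cryptography.Pkcs - a sketch of one approach, not the only way to do it:

using System;
using System.Security.Cryptography.Pkcs;
using System.Security.Cryptography.X509Certificates;
using System.Text;

static class ConfigCrypto
{
    // Only a machine holding the certificate's private key can recover the value.
    public static string Decrypt(string base64Blob, X509Certificate2 certWithPrivateKey)
    {
        var cms = new EnvelopedCms();
        cms.Decode(Convert.FromBase64String(base64Blob));
        // Decrypt matches the recipient info against the supplied certificate.
        cms.Decrypt(new X509Certificate2Collection(certWithPrivateKey));
        return Encoding.UTF8.GetString(cms.ContentInfo.Content);
    }
}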
So whether I use Key Vault or not does not increase safety in terms of hacking the web app, right?
It all depends at what level they were able to hack the site. In the case you describe, if they obtained your source code then yes, it's game over. But it doesn't have to be that way; it truly comes down to your configuration.
However, most of the time developers forget that security is a layered approach. When you're talking about encryption of data and related subjects, these are generally a last line of defense: if a malicious actor has acquired access to the encrypted sensitive data, they have already breached other vulnerable areas.
The problem is not Key Vault but your solution of using a client secret. A client secret is a constant string, which is not considered safe. You can use a certificate and its thumbprint as the "client secret" instead: your application reads the .pfx file stored with the web app and authenticates with it, and only once that succeeds are your Key Vault secrets retrievable. Moreover, Key Vault gives you the ability to use your own certificate rather than just a masked string in a secret. This is so-called "nested encryption".
If the hacker gets access to your app.config, he gets nothing but the path to the .pfx file; he doesn't have the file itself or even know what it looks like. Generating the same .pfx file is impossible - if he could, he would break the entire crypto world.
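For illustration, a sketch of certificate-based authentication to Key Vault using the modern Azure.Identity SDK (tenant id, client id, vault URL, secret name, and the .pfx path are all hypothetical; the same idea works with the older ADAL ClientAssertionCertificate):

using System;
using System.Security.Cryptography.X509Certificates;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

class CertAuthSketch
{
    static void Main()
    {
        // The app authenticates with an assertion signed by this certificate's
        // private key - no constant secret string sits in app.config.
        var cert = new X509Certificate2("client-auth.pfx", "pfx-password");

        var credential = new ClientCertificateCredential(
            "00000000-0000-0000-0000-000000000000",   // tenant id
            "11111111-1111-1111-1111-111111111111",   // client (application) id
            cert);

        var secrets = new SecretClient(new Uri("https://myvault.vault.azure.net/"), credential);
        KeyVaultSecret secret = secrets.GetSecret("document-key-name").Value;
        Console.WriteLine(secret.Name);
    }
}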
I imported a third-party CA-issued PFX certificate using PFXImportCertStore. Upon successful import, the PCERT_KEY_PROV_INFO_PROP_ID property is set to certain default values by the same PFXImportCertStore call.
1. Why is the dwKeySpec recognised as the AT_KEYEXCHANGE key type rather than AT_SIGNATURE?
2. Why is the pwszProvName set to Microsoft Base Cryptographic Provider v1.0?
3. The certificate in the first place was issued ONLY for digital signing, but the key usage field indicates that the certificate can be used for Digital Signature, Non-Repudiation, Key Encipherment, Data Encipherment (f0), and the enhanced key usage indicates Client Authentication and Secure Email. Has the CA issued the certificate correctly? A warning shown on the front of the certificate makes me suspect that it was not issued for digital signing. Am I thinking correctly or not?
4. Because of these issues, I am unable to sign data using CryptSignMessage - the internal call fails to acquire a context to the private key for signing. Any suggestions on how I can get around this issue?
I am able to sign with a self-signed PFX cert which I generated. Do you think I could export the private key into a new container, set its property to AT_SIGNATURE, and set the CSP provider type to PROV_RSA_AES, as I require SHA-256?
I am working with XP SP3.
Thanks
Answer 1: The key is automatically classified as AT_KEYEXCHANGE because its usage also includes encrypting session keys, etc. That is, though my application's main purpose is to digitally sign data, the CA has defined the key usage policy to include encipherment, which forces CryptoAPI to map the key type to AT_KEYEXCHANGE.
Answer 2: I assume that it is the default CSP on this machine, so...? Any better explanation is welcome.
Answer 3: From many replies from people in the crypto Google group, an AT_KEYEXCHANGE key can also be used to sign data, provided your certificate's key usage allows digital signing. It seems to be common practice for third-party CAs to issue certificates that can be used for multiple purposes, so the third-party CA has issued the certificate correctly.
Answer 4: I managed to sign data using CryptSignMessage with the same third-party-issued certificate. I changed the dwProvType in PCERT_KEY_PROV_INFO_PROP_ID to PROV_RSA_AES and passed NULL for pwszProvName. This change is performed by first using CertGetCertificateContextProperty to get the properties and then using CertSetCertificateContextProperty to set the properties of your choice. This fixed the signing issue; I am now able to sign with SHA256/RSA 1024 using the AT_KEYEXCHANGE key.
I have recently set up a WCF service against an STS using WIF, and I am trying to understand the certificates needed and what they affect. I have a certificate against IIS allowing HTTPS communication, but in the STS configuration there is a reference to two more certificates, e.g.
<appSettings>
<add key="SigningCertificateName" value="CN=STSTestCert"/>
<add key="EncryptingCertificateName" value="CN=DefaultApplicationCertificate"/>
</appSettings>
In the MSDN documentation (http://msdn.microsoft.com/en-us/library/ee748498.aspx) it states:
The STS uses a default certificate to sign the tokens it issues. This cert is named “STSTestCert” and it is added to your certificate store automatically for use by the STS. The certificate file is present in the STS project. The password for the file is “STSTest”. This should not be used in a production exercise. You can replace the default certificate with any other certificate
My question is what are the Signing Certificate and Encrypting Certificate used for and what would be suitable certificates for a public facing service? Do I need 3 different ones?
The claims that WIF is built around are delivered via tokens.
Each token is signed to prove that it came from the expected STS.
AFAIK, there is no way to remove the signed component of a token (which makes sense as otherwise any third party could generate them and "pretend" that they came from the STS).
These tokens can also be encrypted. If you were running across https, the whole message would be encrypted with the IIS certificate and the token would itself be encrypted again with the WIF encrypting certificate. The token encryption is optional. When you use FedUtil, one of the questions is "Do you want token encryption?". If you say "No", it is not encrypted. If you say "Yes", it is encrypted and you are then asked for the certificate.
If you wanted, you could use the same certificate for both token encryption and signing. From a security perspective, it makes sense to use two.
So the "most secure" solution would use three certificates.
You get the certificates in the normal manner from a trusted issuer.
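To make the two roles concrete, here is a rough sketch of how a custom WIF STS (System.IdentityModel in .NET 4.5) wires them up; the relying-party certificate lookup is a hypothetical helper:

using System.IdentityModel;
using System.IdentityModel.Configuration;
using System.IdentityModel.Protocols.WSTrust;
using System.IdentityModel.Tokens;
using System.Security.Claims;
using System.Security.Cryptography.X509Certificates;

public class SampleSts : SecurityTokenService
{
    public SampleSts(SecurityTokenServiceConfiguration config) : base(config) { }
    // The signing credentials (e.g. CN=STSTestCert) are configured once on the
    // SecurityTokenServiceConfiguration and prove the token came from this STS.

    protected override Scope GetScope(ClaimsPrincipal principal, RequestSecurityToken request)
    {
        var scope = new Scope(request.AppliesTo.Uri.OriginalString,
            SecurityTokenServiceConfiguration.SigningCredentials);

        // Optional token encryption: encrypt the token for the relying party's
        // public key (e.g. CN=DefaultApplicationCertificate).
        scope.EncryptingCredentials = new X509EncryptingCredentials(
            LoadRelyingPartyCert(scope.AppliesToAddress));
        return scope;
    }

    protected override ClaimsIdentity GetOutputClaimsIdentity(
        ClaimsPrincipal principal, RequestSecurityToken request, Scope scope)
    {
        return new ClaimsIdentity(principal.Claims, "SampleSts");
    }

    private X509Certificate2 LoadRelyingPartyCert(string appliesTo)
    {
        // Hypothetical: look up the RP's encryption certificate, e.g. in the machine store.
        throw new System.NotImplementedException();
    }
}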