PFXImportCertStore issue - cryptoapi - security

I imported a third-party, CA-issued PFX certificate using PFXImportCertStore. Upon successful import, the CERT_KEY_PROV_INFO_PROP_ID property is set to the following default values by that same PFXImportCertStore call.
1. Why is the dwKeySpec recognised as the AT_KEYEXCHANGE key type rather than AT_SIGNATURE?
2. Why is pwszProvName set to Microsoft Base Cryptographic Provider v1.0?
3. The certificate was issued ONLY for digital signing in the first place, but the Key Usage field indicates that the certificate can be used for Digital Signature, Non-Repudiation, Key Encipherment and Data Encipherment (f0), and the Enhanced Key Usage field indicates Client Authentication and Secure Email. Has the CA issued the certificate correctly? The front of the certificate shows the following message (screenshot below), which makes me suspect that this certificate was not issued for digital signing. Am I thinking correctly or not?
4. Because of these issues, I am unable to sign data using CryptSignMessage. The internal call fails to acquire a context to the private key for signing. Any suggestions on how I can get around this issue?
I am able to sign with a self-signed PFX certificate that I generated. Do you think I could export the private key into a new container, set its key spec to AT_SIGNATURE and the CSP provider type to PROV_RSA_AES, as I require SHA-256?
I am working on Windows XP SP3.
Thanks

Answer 1: The key is automatically classified as AT_KEYEXCHANGE because its usage also covers encrypting session keys, etc. That is, although my application's main purpose is to digitally sign data, the CA has defined the key usage policy to include encipherment, which forces CryptoAPI to map the key type to AT_KEYEXCHANGE.
Answer 2: I ASSUME that it is the default CSP on this machine, so...? If anyone has a better explanation, please share it.
Answer 3: According to many replies from people on the Crypto Google group, an AT_KEYEXCHANGE key can also be used to sign data, provided your certificate's key usage allows digital signing. It seems to be common practice for third-party CAs to issue certificates that can be used for multiple purposes, so the third-party CA has issued the certificate correctly.
Answer 4: I managed to sign data using CryptSignMessage with the same third-party-issued certificate. I changed dwProvType in the CERT_KEY_PROV_INFO_PROP_ID property to PROV_RSA_AES and passed NULL for pwszProvName. This is done by first calling CertGetCertificateContextProperty to read the current values and then CertSetCertificateContextProperty to set the values of your choice, as sketched below. This fixed the signing issue; I am now able to sign with SHA-256/RSA-1024 using the AT_KEYEXCHANGE key.
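For reference, the property rewrite in Answer 4 looks roughly like the following sketch in C (error handling trimmed; the certificate context is assumed to have been obtained from the store returned by PFXImportCertStore, and the function name is made up for illustration):

```c
#include <windows.h>
#include <wincrypt.h>
#pragma comment(lib, "crypt32.lib")

/* Sketch: point the certificate's key-provider info at an AES-capable
   provider type so CryptSignMessage can use SHA-256. */
BOOL UseAesCapableProvider(PCCERT_CONTEXT pCertContext)
{
    DWORD cb = 0;
    PCRYPT_KEY_PROV_INFO pInfo;
    BOOL ok;

    /* First call: query the size of the CRYPT_KEY_PROV_INFO property. */
    if (!CertGetCertificateContextProperty(pCertContext,
            CERT_KEY_PROV_INFO_PROP_ID, NULL, &cb))
        return FALSE;

    pInfo = (PCRYPT_KEY_PROV_INFO)LocalAlloc(LPTR, cb);
    if (!pInfo)
        return FALSE;

    /* Second call: read the current values set by PFXImportCertStore. */
    ok = CertGetCertificateContextProperty(pCertContext,
            CERT_KEY_PROV_INFO_PROP_ID, pInfo, &cb);
    if (ok)
    {
        pInfo->pwszProvName = NULL;          /* let the system pick the default CSP */
        pInfo->dwProvType   = PROV_RSA_AES;  /* AES-capable provider type (SHA-256) */
        /* dwKeySpec stays AT_KEYEXCHANGE; signing still works because the
           certificate's key usage includes Digital Signature. */
        ok = CertSetCertificateContextProperty(pCertContext,
                CERT_KEY_PROV_INFO_PROP_ID, 0, pInfo);
    }

    LocalFree(pInfo);
    return ok;
}
```

With pwszProvName left NULL, CryptoAPI picks whatever provider is registered as the default for PROV_RSA_AES, which is what allows the subsequent CryptSignMessage call to use SHA-256.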

Related

Public keys in OpenID Connect

I'm currently trying to use IdentityServer4 to build a single sign-on experience for my users across different apps I have. They are all hosted in the same local network and no third-party apps authenticate with it. The client apps are still Katana/Owin-based.
I'm using the implicit workflow.
At the moment I still use a certificate randomly generated at runtime to sign tokens.
I wonder:
1. whether I actually need more and what the implications are of leaving it as it is, and
2. how the signature is actually validated by clients.
On the second question, I found this piece in the OpenID Connect spec:
The OP advertises its public keys via its Discovery document, or may
supply this information by other means. The RP declares its public
keys via its Dynamic Registration request, or may communicate this
information by other means.
So does that mean Katana is actually getting the public keys from IdentityServer4 and validates accordingly? And if so, would it matter if the certificate changes? The time between issuing and validating a token is always very small, correct? So why would I need a proper, rarely-changing certificate?
Generating a new certificate on app startup has a few downsides:
If you restart your IDS4 process you effectively invalidate any otherwise valid tokens as the signature will no longer be valid
Inability to scale out - all servers need to have the same signing and validation keys
Clients might only periodically update their discovery info so you need to allow for a rollover period, something that IDS4 supports as you can have more than one validation key.
See the guidance here: http://docs.identityserver.io/en/release/topics/crypto.html
The next simplest option would be to use a self-issued cert that's installed in the host machine's certificate store.
First of all, OpenID Connect Discovery is the process by which a relying party dynamically retrieves the provider's information. There is a dedicated specification for this: OpenID Connect Discovery 1.0.
According to its metadata section, jwks_uri is the endpoint where the token signing keys are published.
1. So does that mean Katana is actually getting the public keys from IdentityServer4 and validates accordingly?
Yes, it should. If your applications (relying parties) want this information dynamically, you should use the discovery document to retrieve the token signing key information.
2. And if so, would it matter if the certificate changes? The time between issuing and validating a token is always very small, correct?
The discovery document is part of the dynamic OpenID Connect specifications (reference: http://openid.net/connect/). So yes, it can be used to convey certificate changes to relying parties (token consumers).
3. So why would I need a proper, rarely-changing certificate?
The certificate must be available to validate the ID tokens issued by the identity provider, so at a minimum the certificate must live until the last token expires. Beyond that, one might be using proper certificates issued by a CA, which come at a cost, so some implementations end up with rarely changing certificates.
Bonus: how the signature is actually validated by clients.
You hash the received message and compare it against the decrypted signature, using the public key from the certificate. Also, if you are wondering about the format of the key information, it is a JWK, defined by RFC 7517.
P.S. - ID token validation is the same as validating a JWT, as explained by the JWT spec.
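To make that concrete outside of any particular OIDC stack, here is a rough verification sketch using OpenSSL's EVP API (a hedged illustration, not how Katana does it internally; a real JWT library also handles the base64url decoding and the header.payload framing for you):

```c
#include <openssl/evp.h>
#include <openssl/pem.h>
#include <openssl/x509.h>

/* Sketch: verify an RS256 (RSA + SHA-256) signature over `msg` using the
   public key taken from an X.509 certificate supplied as PEM text.
   For a JWT, `msg` would be the ASCII "header.payload" portion of the token
   and `sig` the base64url-decoded third part.
   Returns 1 if the signature is valid, 0 otherwise. */
int verify_rs256(const unsigned char *msg, size_t msg_len,
                 const unsigned char *sig, size_t sig_len,
                 const char *cert_pem, int cert_pem_len)
{
    int ok = 0;
    BIO *bio = BIO_new_mem_buf(cert_pem, cert_pem_len);
    X509 *cert = bio ? PEM_read_bio_X509(bio, NULL, NULL, NULL) : NULL;
    EVP_PKEY *pub = cert ? X509_get_pubkey(cert) : NULL;
    EVP_MD_CTX *ctx = EVP_MD_CTX_new();

    if (pub && ctx
        && EVP_DigestVerifyInit(ctx, NULL, EVP_sha256(), NULL, pub) == 1
        && EVP_DigestVerifyUpdate(ctx, msg, msg_len) == 1
        && EVP_DigestVerifyFinal(ctx, sig, sig_len) == 1)
        ok = 1;

    EVP_MD_CTX_free(ctx);
    EVP_PKEY_free(pub);
    X509_free(cert);
    BIO_free(bio);
    return ok;
}
```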
Note - I am not an expert in the PKI domain. An expert might point out other considerations for short-lived certificates, independent of the OpenID Connect protocol.

JWT RS256: Is it safe to fetch public key over https?

I'm signing JWTs using the RS256 algorithm. To verify those tokens on the client, I somehow need to access the public key.
Are there security concerns (spoofing, ...) if I set up an unprotected API route ('/api/certificate') that returns the certificate containing the public key? And do I need to take any extra security measures?
Several concepts are often mixed up, maybe not for you, but let me try to explain a few things in this answer.
Asymmetric cryptography obviously needs a public and a private key; both are basically just numbers. The private key is kept secret, the public key is, well, public; anybody can have it. When signing, you use the private key to sign, and then anybody can verify with the public key that the signature was made by somebody who had the corresponding private key (i.e. you).
But the question is then how you distribute your public key, or in your jwt example, how clients get it. As you correctly pointed out in the question, simply downloading the public key over an insecure channel is not good enough as an attacker could replace it with his own, resulting in the attacker being able to sign tokens.
One solution could be getting it over HTTPS as you proposed, which in practice means using a second public/private key pair (the web server's keys) to secure the delivery of the first one. The theoretical question is still the same, by the way; it's just inherently solved in the background for you: how does the browser know that the public key it receives from the server upon connection actually belongs to the server? There is no secure channel between them yet.
Enter certificates.
A certificate is a document that essentially ties a public key to its owner, and that is exactly what you want. When a browser connects to a website, the server sends its public key along with its certificate, so that the browser can verify that the public key actually belongs to the server (the domain name, in this case) that sent it. How that verification works is beyond the scope of this post; the point is that the certificate is signed with another key, whose certificate may in turn be signed with yet another key, and so on, and the chain is terminated by a list of so-called trusted root certificates already set up on your computer and/or browser by your OS/browser vendor.
And you too should verify the public key with its certificate in the same way. You don't even need the burden of SSL (HTTPS) transport for this; verifying that a public key belongs to a particular subject is the main purpose of certificates.
So all you have to do is not just get the public key from the API, but get it along with its certificate. You are probably already doing this; bare public keys are very rarely used, and you are most probably already receiving a .pfx, .cer, .crt or similar from the server. Depending on the technology stack you are developing on, you can almost certainly use built-in mechanisms to fully verify a certificate and make sure it is valid. Please don't implement your own validation though, as that's tricky business and quite hard to get right. If the certificate passes validation, you can trust that the public key you received from the API is authentic and belongs to whatever it claims to belong to. There may be caveats though (for example, beyond basic validation, make sure you check a combination of fields in the certificate that others cannot have).
As an additional security measure, you can also implement certificate pinning to make it even more secure against certain types of attacks by having a list of fingerprints for valid certificates in the client (less so in a browser client, but still the concept is the same).
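A pinning check boils down to a fingerprint comparison. A minimal sketch with OpenSSL (the pinned fingerprint is an assumption, something you would ship with or configure into the client):

```c
#include <string.h>
#include <openssl/evp.h>
#include <openssl/x509.h>

/* Sketch: compare the SHA-256 fingerprint of a received certificate against
   a fingerprint pinned in the client. Returns 1 on a match, 0 otherwise. */
int cert_matches_pin(X509 *received_cert,
                     const unsigned char *pinned_sha256 /* 32 bytes */)
{
    unsigned char md[EVP_MAX_MD_SIZE];
    unsigned int md_len = 0;

    if (!X509_digest(received_cert, EVP_sha256(), md, &md_len))
        return 0;

    return md_len == 32 && memcmp(md, pinned_sha256, 32) == 0;
}
```

The trade-off is that pinning the whole certificate means the pin list has to be updated whenever the certificate is rotated.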
Edit (what fields to check in the certificate after it passed general validation of expiry, etc):
In the general case it depends on who signed the certificate and what kind of certificate it is.
A server certificate signed by a real certificate authority (CA) can only have the server domain as its common name (CN) field; a real CA won't normally sign anything else, and they also won't sign a certificate for yourdomain.com unless you can prove you control yourdomain.com. So in this case it may be enough to check the CN after the cert has passed validation. You do need to check the CN though, as anybody can have a valid certificate from, say, GlobalSign or Thawte or another trusted CA; it just costs money. What they can't have is a certificate for yourdomain.com.
If you sign your own certificates, you likewise won't sign anything for anyone else, so in that case it could be enough to check the issuer (that you signed it) and the CN (whom it was issued for). If the certificate otherwise passes validation (meaning a trusted root certificate signed it), it should be OK, as an attacker won't normally be able to get his own CA certificate trusted on your computer.
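Reading those fields for such a check looks roughly like this with OpenSSL (a sketch; production code should also handle certificates whose subject carries no CN, and for host checks prefer the subject alternative names, e.g. via X509_check_host):

```c
#include <stdio.h>
#include <openssl/x509.h>
#include <openssl/objects.h>

/* Sketch: after chain validation has passed, extract the subject CN and the
   issuer CN so the application can compare them against expected values. */
void print_cn_and_issuer(X509 *cert)
{
    char subject_cn[256] = {0};
    char issuer_cn[256]  = {0};

    X509_NAME_get_text_by_NID(X509_get_subject_name(cert),
                              NID_commonName, subject_cn, (int)sizeof(subject_cn));
    X509_NAME_get_text_by_NID(X509_get_issuer_name(cert),
                              NID_commonName, issuer_cn, (int)sizeof(issuer_cn));

    printf("subject CN: %s\nissuer CN: %s\n", subject_cn, issuer_cn);
}
```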
The point in general is that you want to check something that others can't have. This is easier if you are relying on real CAs, and it's usually best to check the fingerprint.

Authentication using digital signatures

I know a bit of authentication theory, but I would like to know how it is really put into practice.
There are these software patches that must be distributed periodically. To ensure that only the genuine content reaches our users, we have been advised to sign our content before distribution.
The plan is to generate a public/private key pair. The patch would first be signed with our private key, and recipients would then authenticate the downloaded patch using our public key. Our idea of signing is to generate a hash of the patch and encrypt that hash with our private key. The encrypted hash (the signature) is to be bundled along with the patch before distribution.
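For illustration, that signing step is what a library's hash-and-sign calls do under the hood. A rough sketch with OpenSSL (the function name is made up, and the distributor's private key is assumed to have been loaded into an EVP_PKEY elsewhere, e.g. with PEM_read_bio_PrivateKey):

```c
#include <openssl/crypto.h>
#include <openssl/evp.h>

/* Sketch: produce a detached RSA/SHA-256 signature over a patch held in
   memory. On success, *sig_out/*sig_len_out hold the signature bytes to
   bundle with the patch; free with OPENSSL_free. Returns 1 on success. */
int sign_patch(EVP_PKEY *priv,
               const unsigned char *patch, size_t patch_len,
               unsigned char **sig_out, size_t *sig_len_out)
{
    int ok = 0;
    size_t sig_len = 0;
    unsigned char *sig = NULL;
    EVP_MD_CTX *ctx = EVP_MD_CTX_new();

    if (ctx
        && EVP_DigestSignInit(ctx, NULL, EVP_sha256(), NULL, priv) == 1
        && EVP_DigestSignUpdate(ctx, patch, patch_len) == 1
        && EVP_DigestSignFinal(ctx, NULL, &sig_len) == 1       /* query size */
        && (sig = OPENSSL_malloc(sig_len)) != NULL
        && EVP_DigestSignFinal(ctx, sig, &sig_len) == 1)
    {
        *sig_out = sig;
        *sig_len_out = sig_len;
        sig = NULL;   /* ownership passed to the caller */
        ok = 1;
    }

    if (sig)
        OPENSSL_free(sig);
    EVP_MD_CTX_free(ctx);
    return ok;
}
```

Recipients would run the mirror-image verify calls with the public key and reject the patch if verification fails.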
We have been advised further that it is a good practice to get a digital certificate for our public key from a CA and post it on a certificate server in our premises. We are told that the CA would create this certificate using its private key. Our users are expected to download the public key certificate from our server and authenticate it using the public key of the CA. Thus our users would be confident that they have the right public key from us to authenticate the genuineness of the patch.
And finally, the questions:
1. How/where can the exact public key of the CA be downloaded to authenticate the public key certificate downloaded from our server?
2. In what formats are these certificates available? Are they plain text files, XML, or something else?
To answer your questions in order:
Using a browser and SSL. In that case you rely on the certificate store already in the browser. It may be a good idea to also publish the fingerprint of your own certificate. Note that you can also distribute a certificate - or certificate chain - within your software. If the software download is trusted, then you may not even need an external Certificate Authority; but in that case you must keep the private key of your own CA very secure.
X.509 certificates are created using ASN.1 DER encoding. DER is a binary encoding (the textual ASN.1 definition specifies the contents). Certificates are also often distributed in PEM format, which is a Base64 encoding of the binary certificate with an additional header and footer line.
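To make the two encodings concrete, here is a rough sketch of loading a certificate file in either format with OpenSSL (the function name and the try-DER-then-PEM strategy are just illustrative choices):

```c
#include <openssl/bio.h>
#include <openssl/pem.h>
#include <openssl/x509.h>

/* Sketch: load an X.509 certificate from a file that may be either
   DER (binary ASN.1) or PEM (Base64 with "-----BEGIN CERTIFICATE-----"
   header and footer lines). Returns NULL on failure. */
X509 *load_certificate(const char *path)
{
    X509 *cert = NULL;
    BIO *bio = BIO_new_file(path, "rb");
    if (!bio)
        return NULL;

    /* Try binary DER first... */
    cert = d2i_X509_bio(bio, NULL);
    if (!cert)
    {
        /* ...then rewind and try PEM. */
        BIO_reset(bio);
        cert = PEM_read_bio_X509(bio, NULL, NULL, NULL);
    }

    BIO_free(bio);
    return cert;
}
```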

Why is a certificate needed for signing instead of just a private/public key pair?

Newbie question: some vendors propose solutions like generating dynamic certificates to allow users who don't have a classic certificate to sign documents. But why not just generate a private/public key pair alone instead of bothering with the certificate format?
The purpose of the (public key) certificate is to bind the public key to the identity of its subject (i.e. the owner/entity associated with the key pair), and possibly various attributes telling you what the certificate may be used for. (You may be interested in this question on Security.SE.)
You always sign with the private key (not the public key or the certificate), but the public key or certificate are often attached with the signed document.
If you have an explicit list of public keys you know and can link independently to a user, you don't need a certificate.
The certificate allows third parties (who have signed the certificate) to assert the binding between the identifier and the public key. Even if you don't know that identity in advance, you can link the signature to the signer's identity as long as you trust the entity that signed the certificate.
Dynamically generated certificates may not be very useful in this case, unless you trust the party that generates the certificate dynamically (I'm not sure if you meant the tool itself or perhaps a website that you would also know).
Often, X.509 certificates will be used just to attach to that signature, because the tooling requires it, whereas you may be able to match the public key against an identity you know directly in the tool with which you verify the signature. Sometimes, it's also just done in anticipation of a case where it will be useful one day.
For example, if you publish your own artifacts to the central Maven repository, you will be required to sign them with your PGP certificate (often referred to as just the PGP public key). Yet no verification of the certificate is made at all during the process (a PGP certificate with only its self-signed signature is good enough). This makes the process relatively pointless in this case, but it makes it possible to be stricter about which artifacts you want to use, if you're able to verify those certificates later on.
It's the same, but you need a third party to attest that the private key belongs to whoever you think it belongs to.
Signing proves, first of all, authorship (or approval) of the document by some person, and the key alone won't prove anything. This is what the certificate is needed for: some certificate authority signs the certificate of the user and certifies that the key pair belongs to the person (or legal entity) to which the certificate is issued. The reader of the document can check that the signature is valid not just by verifying the signature itself, but also by validating the certificate and seeing the name of the certificate owner.
I don't quite understand how vendors can issue certificates dynamically - issuing certificates in such a way that they are not self-signed (and self-signed certificates make little sense in the context of document signing) requires that the private key used to sign those certificates be embedded in the vendors' software, and as such it is also prone to misuse.

Validate digital signature with a self-signed certificate

I have a question regarding validation of digital signatures using a self-signed certificate:
The following tutorial works for me:
http://www.oracle.com/technetwork/articles/javase/dig-signature-api-140772.html
However, when an X.509 certificate is self-signed, how can a receiver trust the certificate data attached to an XML message? Anyone can generate a self-signed cert and claim to be the same sender. The validation in the above tutorial always returns true. The sender's cert must be loaded into the receiver's truststore, so the receiver can use whatever is in the truststore to validate the signed document. I cannot find any reference for such a scenario.
Your understanding is correct - with self-signed certificates anyone can create a certificate and signature validation will succeed. The reason is that signature validation is, first of all, a cryptographic operation, and that operation completes successfully. The second step is to validate the certificate itself AND its origin. When a CA-signed certificate is used, the certificate is validated using the CA certificate(s) up to a trusted CA (or known root CA). With a self-signed certificate such validation is not possible. In the above tutorial the certificate validation procedure was skipped for simplicity, as it is quite complex and beyond the scope of the tutorial.
The problem you're describing is usually addressed by Public Key Infrastructures (PKI).
This is the traditional model for verifying certificates for HTTPS sites, for example. It starts with a set of trusted Certification Authorities (CAs) from which you import the CA certificates as "trusted". The entity certificates that you get are then verified against this set of trusted anchors by building a certification path between the certificate to verify and a CA certificate you know (linking the certificate to a trusted issuer, perhaps via intermediate CA certificates).
The various rules to do this are described in RFC 5280. The PKI system doesn't apply only to web servers, but to any entity (there are additional rules for web servers to verify that they're the one you want to talk to, on top of having a valid certificate).
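Independent of the platform, that path-building step is what a library's verify routine does for you. A rough sketch with OpenSSL (the trust anchor and the optional stack of intermediate certificates are assumptions about what you have loaded elsewhere):

```c
#include <openssl/x509.h>
#include <openssl/x509_vfy.h>

/* Sketch: verify `cert` against a trust anchor, optionally using a stack of
   untrusted intermediate certificates to help build the path.
   Returns 1 if a valid certification path was found, 0 otherwise. */
int verify_against_anchor(X509 *cert, X509 *trusted_root,
                          STACK_OF(X509) *intermediates)
{
    int ok = 0;
    X509_STORE *store = X509_STORE_new();
    X509_STORE_CTX *ctx = X509_STORE_CTX_new();

    if (store && ctx
        && X509_STORE_add_cert(store, trusted_root) == 1
        && X509_STORE_CTX_init(ctx, store, cert, intermediates) == 1)
    {
        ok = (X509_verify_cert(ctx) == 1);
    }

    X509_STORE_CTX_free(ctx);
    X509_STORE_free(store);
    return ok;
}
```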
(In particular because the choice of which CA certificates to trust is often done on behalf of the user, at least by default, by the OS or browser vendor, this model isn't perfect, but it's the most common in use.)
Alternatively, there's nothing wrong with establishing a list of self-signed certificates you would trust in advance.
Either way, you need to pre-establish what you trust through out-of-band mechanisms (e.g. by meeting someone you trust and using the certificate they give you in person).
This PKI model goes hand-in-hand with the X.509 format thanks to the notion of Issuer DN and Subject DN. You could have other models, for example relying on PGP certificates, where you would build a web of trust; you would still need an initial set of trusted anchors.
For XML-DSig in Java, you should implement an X509KeySelector that only returns a key you trust. In a simple scenario, where you have a pre-defined set of self-signed certificates you trust, you can iterate over a keystore containing those trusted certificates. Otherwise, use the Java PKI Programmer's Guide (as linked from the tutorial you used).
