Two PIV certificates - one YubiKey 5 - yubico

I'm trying to import two PIV certificates onto one YubiKey 5 (slot 9a):
one certificate for regular use and another for elevated privileges. For the life of me, I can't figure it out!
I've tried the YubiKey Manager GUI (PIV > Configure Certificates > Import),
but all this does is overwrite the existing certificate with the one being imported to the key.
I've also tried to figure out which command line to use from the following PDF: https://www.yubico.com/wp-content/uploads/2016/05/Yubico_PIV_Tool_Command_Line_Guide_en.pdf
At this point, I'm just banging my skull against the wall and not seeing how to solve this. Does anyone have any ideas or insights on this?

It's not possible to store more than one certificate in one slot. There are different slots for different purposes. See this page for details: https://developers.yubico.com/PIV/Introduction/Certificate_slots.html
So I think what you planned to do is not possible.

This is unfortunate, because Gemalto smartcards allow multiple certificates to be loaded onto a single card. The inability to load multiple certs into slot 9a will require using two different YubiKeys for the two different certs.

Technically the slot numbers refer to key slots, where the private key is stored. Tools typically allow you to import certificates 'to' slot 9a when they really mean 'for' slot 9a. Certificates are stored separately from keys in PIV, and there is a well-known mapping from key slot to certificate slot. For example, the certificate for slot 9a is stored in a certificate slot named 0x5fc105. Each such certificate slot can only store one certificate according to the PIV standard (which specifies the format of the data in the certificate slot), but there are 24 key slots (with corresponding certificate slots) on a YubiKey. Depending on tooling, you may be able to use other slots for your alternate key and certificate.

Since certificates are public information, they could also be stored somewhere other than on the YubiKey entirely, as long as you can convince your tools to use them that way.

If your goal is to store two separate certificates for one key, your best bet would be to import the same private key into two separate key slots and store your two different certificates in their respective certificate slots. That won't work for on-board generated keys, since you can't copy or extract those.
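To make that concrete, here is a rough sketch (my own illustration, not an official Yubico recipe) that drives yubico-piv-tool from Python to load the same private key into slot 9a and into one of the retired slots (82-95), each with its own certificate. The file names and the choice of slot 82 are placeholders, and the default management key is assumed; if you have changed it, you would need to supply your own key via the tool's --key option.

import subprocess

def piv(*args):
    # Thin wrapper around the yubico-piv-tool CLI; raises if the command fails.
    subprocess.run(["yubico-piv-tool", *args], check=True)

# Slot 9a: the private key plus the "regular use" certificate.
piv("-s", "9a", "-a", "import-key", "-i", "user-key.pem")
piv("-s", "9a", "-a", "import-certificate", "-i", "regular-cert.pem")

# A retired slot (82 here): the same private key plus the "elevated privileges" certificate.
piv("-s", "82", "-a", "import-key", "-i", "user-key.pem")
piv("-s", "82", "-a", "import-certificate", "-i", "elevated-cert.pem")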

Related

Programmatically regenerate keys for group enrollments in Azure Device provisioning Service (DPS)

I want to programmatically regenerate the symmetric keys (primary and secondary) in group enrollments of Azure DPS; there is an API provided by Azure in the link.
I used this GitHub repo and was able to run it.
I used the API, but it returned 404 Not Found.
I used the mentioned GitHub repo and was able to get an instance of an enrollment group.
Now I want a way to regenerate the keys for the current group, but there seems to be no function that would allow that.
One approach would be to redo the attestation, which in turn would change the symmetric keys, but I have not found a way to do that yet.
If anyone could help me, that would be great.
There's no API specifically for regenerating group enrollment keys. However, you can use the CreateOrUpdateEnrollmentGroupAsync method to update an existing enrollment group, passing in a new set of keys. See: https://learn.microsoft.com/en-us/dotnet/api/microsoft.azure.devices.provisioning.service.provisioningserviceclient.createorupdateenrollmentgroupasync?view=azure-dotnet&viewFallbackFrom=azure-dotnet-preview
You will need to generate your new symmetric keys to pass in as part of the EnrollmentGroup parameter.
The following sample shows an example of using this method with an enrollment group that uses X.509 certs, but you should be able to easily modify it to use symmetric keys instead: https://github.com/Azure/azure-iot-sdk-csharp/tree/main/provisioning/service/samples/getting%20started/EnrollmentGroupSample
To generate a suitable key in Python, you could use the following:
from hashlib import sha256
from base64 import b64encode

# Derive a 32-byte key from a secret passphrase and base64-encode it,
# which is the format DPS expects for enrollment group symmetric keys.
s = 'mysecretkeyfordps'
h = sha256()
h.update(s.encode())
b64bytes = b64encode(h.digest())
print(b64bytes.decode())
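If you just need fresh random keys rather than ones derived from a passphrase (my own variation, not part of the linked sample), the Python standard library can generate both the primary and secondary key:

import secrets
from base64 import b64encode

# Two independent 32-byte random values, base64-encoded as DPS expects
# for an enrollment group's primary and secondary symmetric keys.
primary_key = b64encode(secrets.token_bytes(32)).decode()
secondary_key = b64encode(secrets.token_bytes(32)).decode()
print(primary_key)
print(secondary_key)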

When using Azure Key Vault or JWT, what is the proper design for setting and retrieving/decrypting metadata? 1-to-many or 1-to-1 keys?

The use case is that a user has metadata that needs to be encrypted, so that when they sign in, a protected, stored ("encrypted") object is checked to verify that the object information arriving in plaintext matches what is in the encrypted object.
The question is: is it more appropriate in Azure Key Vault to give each and every user a key with public and private key capability, or to just use a single key that encrypts the stored object and un-signs/decrypts it when it is accessed?
To me, it is the object that needs to be encrypted, and that doesn't really depend on which key encrypts it, hence a universal one-key-to-many approach.
The other approach makes sense too, but I would have to create a huge number of keys to facilitate it. Is creating thousands or millions of keys, one per user, appropriate?
What are the advantages and disadvantages of each approach?
I think the same practice would apply to JWT token signing.
I think it's better to have one key and rotate it on a regular basis.
For example, like they do in the ASP.NET Core Data Protection API (I know you are using Node), where the current key is replaced with a new one every 90 days (by default), and the old one is still kept to allow decryption of old data. In .NET they call this the key ring, which holds many keys.
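To make the key-ring idea concrete outside of .NET, here is a minimal Python sketch (my illustration) using MultiFernet from the cryptography package: the first key in the list is used for new encryptions, while older keys are kept only so that existing data can still be decrypted.

from cryptography.fernet import Fernet, MultiFernet

# The newest key comes first and is used to encrypt; the older key is kept
# around so tokens produced before the rotation still decrypt.
current_key = Fernet(Fernet.generate_key())
previous_key = Fernet(Fernet.generate_key())
key_ring = MultiFernet([current_key, previous_key])

token = key_ring.encrypt(b"user metadata")
print(key_ring.decrypt(token))  # works for data encrypted with either key

MultiFernet also has a rotate() method that re-encrypts an old token under the current key, which is handy when you eventually retire a key from the ring.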
I did blog about this here.
Also, be aware that some SDKs used with Azure Key Vault try to download all secrets at start-up, one by one. That can be quite time-consuming if you have many secrets.

Security on Azure Cosmos DB

I want to use Cosmos DB from C# code. A really important point is that the data should stay encrypted at all times. As I understand it, once the data is on the server, it is automatically encrypted by Azure via encryption at rest. But during transport, do I have to use a certificate, or is it encrypted automatically? I used this link to manage the database: https://learn.microsoft.com/fr-fr/azure/cosmos-db/create-sql-api-dotnet. So my question is: is there any security risk if I just follow this tutorial?
Thanks.
I think that's a great starting point.
Just one note: your data is only as secure as the access keys to the account, so on top of encryption at rest and in transit, the access key is probably the most sensitive piece of information you need to protect.
My advice is to use a Key Vault to store the database access key rather than defining it as an environment variable. Combined with Managed Identity, your key will never leave the confines of the Azure portal, which makes it the most secure option. I'm not sure how you plan on deploying your code, but more often than not I've seen those keys embedded in source code or in some configuration file that ends up exposed.
A while ago I wrote a step-by-step tutorial describing how to implement this. You can find my article here
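The question is about C#, but the flow is the same in any SDK. A minimal Python sketch of the Key Vault plus Managed Identity pattern (assuming the azure-identity, azure-keyvault-secrets and azure-cosmos packages; the vault URL, account URL and the "cosmos-key" secret name are placeholders you would replace):

from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient
from azure.cosmos import CosmosClient

# DefaultAzureCredential picks up the Managed Identity when running in Azure,
# so no secret ever lives in code, config files or environment variables.
credential = DefaultAzureCredential()
secrets = SecretClient(vault_url="https://my-vault.vault.azure.net/", credential=credential)

# "cosmos-key" is a placeholder secret holding the Cosmos DB access key.
cosmos_key = secrets.get_secret("cosmos-key").value
cosmos = CosmosClient("https://my-account.documents.azure.com:443/", credential=cosmos_key)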
I would suggest you follow the instructions mentioned here and not use access keys at all, because if they are accidentally exposed, it does not matter whether you stored them in a Key Vault or not: your database is out there. Besides, if you do want to use access keys, it is recommended to change them periodically, which you then need to automate and make known to your key vault; here is a description of how you could automate that.

Migrating Thales payshield 9000 to Azure Key vault

We want to migrate HSM keys from a Thales payShield 9000 to Azure Key Vault. We would like to know whether this migration is supported and, if so, what the migration approach is and which use cases customers have already migrated to Azure. We have gone through the article https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/key-vault/key-vault-hsm-protected-keys.md; it talks about the Thales nShield family, but we are using https://www.thalesesecurity.com/products/payment-hsms/payshield-9000
Thanks in advance.
Excellent question. As Dan suggests, you should contact Microsoft for clarification, but unfortunately I don't think it's possible.
Recapping: as I'm sure you are aware, the purpose of HSMs is that the keys are not exportable.
Microsoft (and I assume Thales) supports key backup: https://learn.microsoft.com/en-us/rest/api/keyvault/backupkey but it can only be restored to the same geographical area.
In the article you supplied, it mentions a "Key Exchange Key" in each geographical area, which I assume means that Microsoft will be using a different key than that of another install of an HSM.
Having said this I'm not a general HSM expert, these are just links I have come across over time using KeyVault.
Please do contact Microsoft, as I would be interested to know whether this is possible; please post an answer once you have heard back, or perhaps a Microsoft employee can answer directly.
On the Thales literature it states:
"With nShield BYOK for Microsoft Azure, your on-premises
nShield HSM generates, stores, wraps, and exports keys to the
Microsoft Azure Key Vault on your behalf"
http://go.thalesesecurity.com/rs/480-LWA-970/images/Thales-e-Security-Microsoft-Azure-UK-sb.pdf
Interestingly, it says generates / stores, which suggests a pre-created key could be migrated. However, on the contrary, I'm guessing the export must happen using the "Key Exchange Key", with the key stored on-prem and exported for Azure at the same time, not created on-prem first, in the BYOK process.
This blog post has keyvault team's contact details if it helps: https://blog.romyn.ca/key-management-in-azure/
The migration of important keys, which are encrypted under the current LMK on your Thales payShield on premises, is a fairly straightforward process:
1. Use the console command GC to generate a new ZMK as a clear-format component. Do this by choosing key type 000, which is the ZMK key type, and select the clear-format-components option by entering the letter 'x' in the GC command steps.
2. Repeat the GC command above 3 times to generate 3 different plaintext-format components of the new ZMK.
3. Now, at your payShield 9000 HSM, use the console command FK (Form Key from components); the result is the new ZMK encrypted under the old LMK.
4. Use the command KE (export key) to export the important data encryption keys (DEKs), such as a ZPK, which are encrypted under the old LMK, so that they become encrypted under the new ZMK. Note: in the KE command here, use key type 001, which is the ZPK key type.
5. Now you need to manually distribute the same new ZMK to the other party you are migrating to.
6. You can do this manual distribution of such an important key (the new ZMK) by sending the 3 different plaintext-format components, which you generated earlier in step 2, to three different security officers at your corporation; for security reasons, no single person may hold all 3 components.
7. On the other side you want to migrate your keys to, which is the Microsoft Azure Key Vault cloud service, Azure offers securing your keys in a hardware HSM environment of the nShield type, which is a general-purpose HSM and is not specific to payment transactions like the Thales payShield HSM.
8. Refer to the Microsoft Azure Key Vault documentation to learn how to form the new ZMK from the 3 different plaintext-format components you generated before, and refer to the nShield manuals to check the command responsible for importing keys.
9. Now your important keys, such as the ZPK that was exported under the new ZMK, are imported under the same ZMK and finally stored encrypted under the new LMK of your nShield-backed cloud service.

AWS KMI - force key rotation if compromised

How do I force AWS KMI to rotate a key after a compromise? It seems I can instruct AWS to automatically rotate keys once a year, but rotating on demand, if a key is compromised, doesn't seem possible. Specifically, the PCI-DSS requirements:
3.6.5
a) Do cryptographic key procedures include retirement or replacement (for example, archiving, destruction, and/or revocation) of cryptographic keys when the integrity of the key has been weakened (for example, departure of an employee with knowledge of a clear-text key)?
b) Do cryptographic key procedures include replacement of known or suspected compromised keys?
(When you say KMI, I guess you mean AWS KMS.)
This is a valid question. The way to rotate keys manually is to create a new key and change the alias from the old key to the new one. Unfortunately, most AWS resources use the key ID, so you have to assess which ones are still using the old key.
If you use Infrastructure as Code tools such as Terraform or Pulumi, you just need to taint the KMS resource so it will be recreated with a new ID, and if you did everything right (e.g. used alias-based data queries in Terraform), you just need to run the pipelines for those resources and you are good to go.
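As an illustration of that manual rotation (a boto3 sketch of my own, not a single AWS-provided call; the alias name is a placeholder): create a replacement key and repoint the alias, leaving the old key in place so existing ciphertexts remain decryptable.

import boto3

kms = boto3.client("kms")

# Create the replacement key and move the alias to it. Anything resolving the
# alias now encrypts with the new key, while the old key stays available
# (and can be disabled) so existing ciphertexts can still be decrypted.
new_key = kms.create_key(Description="replacement after suspected compromise")
kms.update_alias(
    AliasName="alias/my-app-key",
    TargetKeyId=new_key["KeyMetadata"]["KeyId"],
)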
How do I force AWS KMI to rotate a key after a compromise?
Key rotation doesn't mitigate compromised key data.
Key rotation is a mechanism to prevent encrypting too much data under a single key (there is math relating the amount of data encrypted under a key to the probability of the key being recovered).
Basically, key rotation creates a new version of the key for new encryptions, but KMS will still allow decrypting any ciphertext encrypted with an older key version.
See:
https://docs.aws.amazon.com/kms/latest/developerguide/rotate-keys.html#rotate-keys-manually
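For reference, the automatic rotation mentioned above is just a per-key flag; a minimal boto3 sketch (the key ID is a placeholder):

import boto3

kms = boto3.client("kms")

key_id = "1234abcd-12ab-34cd-56ef-1234567890ab"   # placeholder customer managed key
kms.enable_key_rotation(KeyId=key_id)             # yearly rotation of the backing key material
print(kms.get_key_rotation_status(KeyId=key_id))  # shows whether rotation is enabled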
Unfortunately, most AWS resources use the key id, so you have to assess which one is still using the old one.
When using manual key rotation, the application needs to know which key is the current one (e.g. using an alias), but the ciphertext needs a reference to the key id/arn to decrypt.
But on demand, if compromised - doesn't seem possible.
By default the KMS key doesn't leave the hardware. It is very unlikely that the KMS key itself is compromised, so automatic rotation may well be sufficient, with all its advantages (keeping the same ARN/ID/alias).
However, KMS is meant for envelope encryption: KMS is used to encrypt a random data key or a service-specific key, which can theoretically be leaked. Then you need to create policies to manage this risk.
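To show what that envelope encryption looks like in practice, a short boto3 sketch (my illustration; the alias is a placeholder):

import boto3

kms = boto3.client("kms")

# KMS hands back a fresh data key: use the Plaintext locally to encrypt your
# data and then discard it, and store the CiphertextBlob alongside the data.
resp = kms.generate_data_key(KeyId="alias/my-app-key", KeySpec="AES_256")
plaintext_data_key = resp["Plaintext"]
encrypted_data_key = resp["CiphertextBlob"]

# Later, ask KMS to unwrap the stored blob to recover the data key.
recovered_key = kms.decrypt(CiphertextBlob=encrypted_data_key)["Plaintext"]
assert recovered_key == plaintext_data_key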
