I am reading a paper about Qubes OS (a security-oriented operating system) and there are two acronyms that I do not understand: TPM and PV. Does anybody know what they mean?
They are used in these sentences:
TPM
Those secrets are released to Dom0 by the TPM ...
...attacker should not be able to re-seal secrets in TPM ...
...TPM-based trusted/verified boot process...
...hypervisor would still be loaded and started, but TPM would not release the secret needed to decrypt the rest of the filesystems...
...are placed into special TPM registers...
If the measurements are correct they will later allow to unseal a secret from the TPM that will be needed to get access to various disk encryption keys.
... would need to unseal a secret from the TPM needed to decrypt the rest of the file systems
Even though the TPM-based verified boot process...
This authentication passphrase would, of course, be normally encrypted, and the decryption key should be sealed into the TPM...
and more...
PV
The I/O Emulation vs. PV drivers.
If one doesnʼt need to run HVM guests, e.g. Windows, and runs only PV guests in Xen...
...in case the user wants to use only PV guests...
...dedicated minimal PV domains.
A driver domain is an unprivileged PV-domain...
(that is for hosting PV driver backends...
... KVM doesnʼt support PV domains...
... if the user only uses the PV guests.
...support only for PV Linux)
...in a separate PV domain...
... it is a regular unprivileged PV domain.
...USB PV backends...
TPM = Trusted Platform Module, a dedicated security chip that records measurements of the boot process and can seal/unseal secrets (such as disk encryption keys) against those measurements.
PV = Paravirtualized, the Xen guest type that runs a modified kernel and talks to the hypervisor through PV drivers instead of relying on hardware emulation (as HVM guests do).
Related
My code (C++ on Linux) needs to uniquely identify the machine (VM guest/container/physical) that it is running on, and I have been researching TPM (2.0) for this purpose. Since each physical TPM has a unique EK, it seems that I could use the TPM API to verify that the machine it is actually running on is the one expected.
However, with VMs/containers, it seems like a virtual TPM can be copied along with the guest/container to another hypervisor/host. In that case, I assume the EK is copied along with it, which defeats the ability to uniquely identify the machine (VM guest/container/physical) on which my code runs.
Is this correct? Or do virtual TPMs somehow connect to the host TPM to ensure uniqueness? If someone wanted to bypass my check, I assume running a TPM simulator would have the same effect?
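To make the concern concrete, here is a minimal Python sketch (the helper name and the literal key bytes are hypothetical; in practice the EK public key would be read from the TPM, e.g. with tpm2-tools or a TSS library). It derives a machine fingerprint by hashing the EK public key, and shows why a copied vTPM defeats the check: the clone carries the same EK state, so it produces the identical fingerprint.

```python
import hashlib

def machine_fingerprint(ek_pub_der: bytes) -> str:
    """Derive a stable machine ID by hashing the TPM's EK public key."""
    return hashlib.sha256(ek_pub_der).hexdigest()

# Pretend these bytes were read from two different hosts' TPMs.
physical_ek = b"-----EK PUBLIC KEY (unique per physical TPM)-----"
cloned_vtpm_ek = physical_ek  # a copied vTPM carries the same EK state

original = machine_fingerprint(physical_ek)
clone = machine_fingerprint(cloned_vtpm_ek)

# The fingerprints match, so the copy is indistinguishable from the original.
print(original == clone)  # True
```

This is exactly why a software-only check cannot distinguish the clone: unless the virtual TPM is anchored to the host's hardware TPM (or a simulator is ruled out by remote attestation against a trusted CA-issued EK certificate), identical EK state means identical identity.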
I am trying to add a TPM 2.0-enabled device to the Azure Device Provisioning Service enrollment list. This requires the endorsement key (EKPub) of the TPM.
What would be the best way to extract (find out) the EKPub (Endorsement Key) of a TPM? I appreciate your help.
Intel provides a suite of tools for interacting with a TPM 2.0, which you can download from here: https://github.com/tpm2-software/tpm2-tools
Note you'll also have to compile and install tpm2-abrmd (a resource manager) and the TSS stack/libraries. The tools work on Linux (Ubuntu, Red Hat, CentOS, Debian at least, and Raspbian on the Raspberry Pi with a suitable TPM board).
The command you're looking for here is tpm2_createek, which will generate the EK and store it in the TPM. The TPM 2.0 holds a seed value from which the EK (and AK) is generated when needed. Typically (at least this is what we do) you generate the EK and AK, then move them to persistent handles so they survive power-down.
https://github.com/tpm2-software/tpm2-tools/blob/master/man/tpm2_createek.1.md
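A sketch of that workflow using tpm2-tools, wrapped in Python so it degrades gracefully when the tools (or a TPM/resource manager) are not present. The persistent handle 0x81010001 is an arbitrary choice from the owner-hierarchy persistent range, not something the answer above prescribes.

```python
import shutil
import subprocess

def provision_ek(persistent_handle: str = "0x81010001") -> str:
    """Generate the EK, export its public part, and persist it in the TPM."""
    if shutil.which("tpm2_createek") is None:
        return "skipped"  # tpm2-tools (and a TPM/tpm2-abrmd) not available
    # Derive the EK from the TPM's endorsement seed; save a context file.
    subprocess.run(["tpm2_createek", "-c", "ek.ctx", "-G", "rsa", "-u", "ek.pub"],
                   check=True)
    # Export the EK public key as PEM (this is the EKPub that Azure DPS wants).
    subprocess.run(["tpm2_readpublic", "-c", "ek.ctx", "-o", "ek.pem", "-f", "pem"],
                   check=True)
    # Move the EK to a persistent handle so it survives power-down.
    subprocess.run(["tpm2_evictcontrol", "-C", "o", "-c", "ek.ctx", persistent_handle],
                   check=True)
    return "persisted"

print(provision_ek())
```

On a machine with a TPM and tpm2-abrmd running, `ek.pem` afterwards contains the EKPub to paste into the DPS enrollment entry.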
I know that VMware vSphere VMs can be encrypted using a KMS server, but can the actual drive that vSphere is hosted on be encrypted? On the Microsoft side, the Hyper-V host can be encrypted if BitLocker is enabled.
Not explicitly. You can, however, use Secure Boot to ensure that only signed code is run: https://blogs.vmware.com/vsphere/2017/05/secure-boot-esxi-6-5-hypervisor-assurance.html
Based on Kyle Rudy's VMware link above, the following is good to note:
TPM and TXT
The question always comes up in customer conversations of “Does this require TPM or TXT??”. The answer is no. They are mutually exclusive. Secure Boot for ESXi is purely a function of the UEFI firmware and the validation of cryptographically signed code. Period.
Note that TPM 1.2 and TPM 2.0 are two vastly different implementations. They are not backwards compatible. There is support, via 3rd parties like HyTrust, for TPM 1.2 in ESXi 6.5.
TPM 2.0 is not supported in 6.5.
Standard BIOS firmware vs UEFI firmware
Typically, switching your hosts from their standard (legacy) BIOS firmware to UEFI firmware in some operating systems will cause issues. With ESXi, you can switch with no modification to ESXi. If you have installed 6.5 using standard BIOS and you want to try out Secure Boot then in the host firmware you can switch and ESXi will come up.
So I noticed that with a hardware TPM you don't need a password (you just save the private key to an external USB drive).
Now, imagine someone stole my PC (which has the TPM hardware on it). Couldn't they just install a fresh copy of Windows 10 on a new hard drive, connect my old BitLocker-protected drive as a secondary drive, and access all my data?
Because the TPM hardware module is still on the same motherboard.
Remember, they didn't just steal the HDD but the whole PC.
Thanks for reading,
Sean
https://learn.microsoft.com/en-us/windows/device-security/bitlocker/bitlocker-frequently-asked-questions#bkmk-deploy
What system changes would cause the integrity check on my operating system drive to fail?
The following types of system changes can cause an integrity check failure and prevent the TPM from releasing the BitLocker key to decrypt the protected operating system drive:
Moving the BitLocker-protected drive into a new computer.
Installing a new motherboard with a new TPM.
Turning off, disabling, or clearing the TPM.
Changing any boot configuration settings.
Changing the BIOS, UEFI firmware, master boot record, boot sector, boot manager, option ROM, or other early boot components or boot configuration data.
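The "integrity check" in the FAQ above is the TPM's measure-and-extend mechanism. Here is a self-contained Python simulation of the arithmetic (not a real TPM API): each early-boot component is measured into a PCR by extending it, the BitLocker key is sealed against the expected final PCR value, and any change to an early-boot component yields a different PCR, so the secret is not released.

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    # PCR extend operation: new_pcr = H(old_pcr || H(measurement))
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

def measure_boot(components):
    pcr = b"\x00" * 32  # PCRs start zeroed at platform reset
    for c in components:
        pcr = extend(pcr, c)
    return pcr

good_boot = [b"UEFI firmware", b"boot manager", b"OS loader"]
sealed_policy = measure_boot(good_boot)  # PCR value the key is sealed against

def unseal(secret, pcr):
    # The TPM releases the secret only if the PCR matches the sealed policy.
    return secret if pcr == sealed_policy else None

vmk = b"volume-master-key"
print(unseal(vmk, measure_boot(good_boot)) == vmk)   # True: clean boot
tampered = [b"UEFI firmware", b"modified boot manager", b"OS loader"]
print(unseal(vmk, measure_boot(tampered)) is None)   # True: key withheld
```

This is also why moving the drive to another PC (or swapping the motherboard) fails: a different TPM, or different boot components, can never reproduce the sealed PCR value.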
How can I develop applications that use Arm's TrustZone? Specifically, I want to develop a program that can save sensitive data in the secure world.
Should this program run in the normal world or the secure world? I know there are trustlets in the secure world; do I need to develop trustlets? Are there SDKs or APIs
that I can use to directly interact with an existing secure-world OS, or do I need to compile and install my own secure OS?
Any advice will be greatly appreciated.
Thank you!
There are two extremes. These are documented in the Software overview chapter of ARM's Security Technology: Building a Secure System using TrustZone Technology.
APIs
At one end of the spectrum, there is only a set of APIs which can be called from the normal world. This is detailed in the SMC calls for Linux. For instance, if the device contains a public/private key pair, an API call could sign data. The normal world would never have access to the private key, but anyone can verify that the device is original by verifying the signature. So the normal world is free to forward this request over any communications interface. This may be part of authenticating a device.
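A toy illustration of that split in Python (textbook RSA with tiny, insecure parameters, purely to show the data flow; a real device would keep the key in TrustZone-protected storage and use a proper signature scheme). Only the "secure world" function sees the private exponent; the normal world merely forwards the request, and anyone holding the public key can verify.

```python
import hashlib

# Textbook-RSA toy parameters (insecure, for illustration only).
N, E = 3233, 17          # public key: anyone may hold this
_D = 2753                # private exponent: exists ONLY in the secure world

def secure_world_sign(message: bytes) -> int:
    """Runs behind the SMC boundary; the private key never leaves it."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % N
    return pow(digest, _D, N)

def normal_world_verify(message: bytes, signature: int) -> bool:
    """Runs anywhere: only the public key (N, E) is needed."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % N
    return pow(signature, E, N) == digest

msg = b"device #42 challenge"
sig = secure_world_sign(msg)                       # normal world forwards this
print(normal_world_verify(msg, sig))               # True: device is genuine
print(normal_world_verify(msg, (sig + 1) % N))     # False: tampered signature
```

The design point is the boundary, not the math: compromising the normal world lets an attacker request signatures, but never extract the key itself.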
Co-operative OSs
In this mode, there is a full blown OS in both the secure and normal world (called TEE and REE elsewhere). The OSs must co-operate with interrupts and scheduling. They may also use SMC calls, lock free algorithms and semaphores along with shared memory.
ARM recommends using the FIQ for the secure world and leaving the IRQ for the normal world. Specifically, there are settings to stop the normal world from ever masking the FIQ. All of these issues depend on the type of IPC, scheduling, interrupt response, etc. that the system needs.
The simplest Secure scheduler would always pre-empt the normal world. Only the idle task would yield the CPU to the normal world. A more flexible solution would have the schedulers co-operate so that both worlds can have higher and lower priority tasks.
A better way is to install both a REE OS and a TEE OS on one device. When a program needs to do something sensitive, the device switches to the TEE OS so the sensitive data can be handled securely; when it is done with the sensitive data, the device switches back to the REE OS.
But implementing this two-OS switching on a device is tough work.
Operating Systems such as MobiCore already exist and have been deployed on mass market devices such as Samsung Galaxy S3.
MobiCore is an OS that runs alongside Android, so trustlets (= MobiCore apps) can communicate with Android apps via a set of system calls to the MobiCore driver, which is the part of the Android OS in charge of communicating with the trusted execution environment.
If you are looking to develop trustlets for MobiCore as explained above, you must become a MobiCore developer, which you could theoretically do by signing up as a developer for MobiCore's Trustonic venture.
If you wish to use ARM's TrustZone technology on your own device / dev board with an open-source secure OS, perhaps you can use OpenVirtualization's SierraTEE, which seems to be compiled for Xilinx Zynq-7000 AP SOC and also compatible with Android as the rich OS.
You can use the OP-TEE (Open Portable Trusted Execution Environment) OS. If you are looking for examples of trusted execution environment applications, also known as Trusted Applications (TAs), then you can check this OP-TEE trusted-application examples repository and this TA using OP-TEE and Comcast Crypto API.
OP-TEE OS provides the following APIs for writing Trusted Applications:
Secure Storage APIs for secure storage
Cryptographic Operations APIs for encryption and decryption of secure credentials and data
Secure Element API, which helps host applications or applets on a tamper-resistant platform
Time APIs
Arithmetical APIs
For the client side (normal world), OP-TEE provides:
TEE Client API
You can refer to the documentation here.