I am trying to read the TrustZone white paper but it is really difficult to understand some of the basic stuff. I have some questions about it. They may be simple questions but I am a beginner in this field:
What makes the secure world really "secure"? I mean, why might the normal world be tampered with but not the secure world?
Who can change the secure OS? I mean, like adding a "service". Can, for example, the developer of a mobile payment application add a service to the secure OS to work with his app? If yes, then how can any developer add to the secure OS and it still be secure?
What prevents a malicious application in the normal world OS from raising an SMC exception and transferring to the secure OS?
The idea of a secure world is to keep the code executing there as small and as simple as possible - the bare minimum to fulfil its duties (usually controlling access to some resource like encryption keys or hardware or facilitating some secure functions like encryption/decryption).
Because the amount of code in the secure world is small, it can be audited easily and there's reduced surface area for bugs to be introduced. However, it does not mean that the secure world is automatically 'secure'. If there is a vulnerability in the secure world code, it can be exploited just like any other security vulnerability.
Contrast this with code executing in the normal world. For example, the Linux kernel is much more complex and much harder to audit. There are plenty of examples of kernel vulnerabilities and exploits that allow malicious code to take over the kernel.
To illustrate this point, let's suppose you have a system where users can pay money via some challenge-response transaction system. When they want to make a transaction, the device must wait for the user to press a physical button before it signs the transaction with a cryptographic key and authorises the payment.
But what if some malicious code exploited a kernel bug and is able to run arbitrary code in kernel mode? Normally this means total defeat. The malware is able to bypass all control mechanisms and read out the signing keys. Now the malware can make payments to anyone it wants without even needing the user to press a button.
What if there was a way that allows for signing transactions without the Linux kernel knowing the actual key? Enter the secure world system.
We can have a small secure world OS with the sole purpose of signing transactions and holding onto the signing key. However, it will refuse to sign a transaction unless the user presses a special button. It's a very small OS (in the kilobytes) and you've hired people to audit it. For all intents and purposes, there are no bugs or security vulnerabilities in the secure world OS.
When the normal world OS (e.g. Linux) needs to sign a transaction, it makes an SMC call to transfer control to the secure world (note: the normal world is not allowed to modify or read the secure world at all), passing along the transaction it wants signed. The secure world OS will wait for a button press from the user, sign the transaction, then transfer control back to the normal world.
Now, imagine the same situation where malware has taken over the Linux kernel. The malware now can't read the signing key because it's in the secure world. The malware can't sign transactions without the user's consent since the secure world OS will refuse to sign a transaction unless the user presses the button.
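To make the trust boundary concrete, here is a toy, host-side C sketch of the scenario above. This is not real TrustZone code - the "secure world" is just a module whose key the caller cannot read, the function names are invented, and the XOR "signature" stands in for real cryptography:

```c
#include <stddef.h>
#include <stdint.h>

/* Toy model of a secure-world signing service. The key is private to
 * this "secure world" module; the normal world can only call
 * secure_sign(), never read the key directly. */
static const uint8_t signing_key[8] = {0xde, 0xad, 0xbe, 0xef,
                                       0x13, 0x37, 0x42, 0x99};

/* Simulates the physical button the secure OS waits for. */
static int button_pressed = 0;

void press_button(void) { button_pressed = 1; }

/* Entry point, analogous to an SMC handler: refuses to sign unless
 * the user pressed the button. Returns 0 on success, -1 on refusal. */
int secure_sign(const uint8_t *msg, size_t len, uint8_t sig[8]) {
    if (!button_pressed)
        return -1;              /* no user consent, no signature */
    button_pressed = 0;         /* consent covers one transaction */
    for (size_t i = 0; i < 8; i++)
        sig[i] = signing_key[i];
    for (size_t i = 0; i < len; i++)
        sig[i % 8] ^= msg[i];   /* mix the message into the tag */
    return 0;
}
```

The point is structural: the key never crosses the interface, and consent is consumed per transaction, mirroring how the secure OS gates signing on the physical button press.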
This kind of use case is what the secure world is designed for. The whole idea is the hardware enforced separation between the secure and normal world. From the normal world, there is no way to directly tamper with the secure world because the hardware guarantees that.
I haven't worked with TrustZone in particular but I imagine once the secure world OS has booted, there is no way to directly modify it. I don't think application developers should be able to 'add' services to the secure world OS since that would defeat the purpose of it. I haven't seen any vendors allowing third parties to add code to their secure world OS.
To answer your last question: SMC exceptions are how you request a service from the secure world OS - they're basically system calls, but for the secure world OS. What would malicious code gain by transferring control to the secure world?
You cannot modify/read the secure world from the normal world
When you transfer control to the secure world, you lose control in the normal world
What makes the secure world really "secure"? I mean, why might the normal world be tampered with but not the secure world?
The security system designer makes it secure. TrustZone is a tool. It provides a way to partition physical memory, which can prevent a DMA attack. TrustZone generally supports lock-at-boot features, so once a physical mapping (secure/normal world permissions) is complete, it cannot be changed. TrustZone also gives tools to partition interrupts and to boot securely.
It is important to note that the secure world is a technical term. It is just a different state from the normal world. The name secure world doesn't make it secure! The system designer must. It is highly dependent on what the secure assets are. TrustZone only gives tools to partition things so that normal world access can be prevented.
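As a rough illustration of the partitioning idea, a lock-at-boot address controller can be modelled as below. The region layout and function names are invented for illustration; a real system would use the SoC's TrustZone address-space controller or equivalent:

```c
#include <stdint.h>

/* Toy model of a TrustZone-style address-space controller: physical
 * regions are marked secure or non-secure at boot, then locked so the
 * normal world can never reclassify them. Layout is invented. */
typedef struct {
    uint32_t base, size;
    int      secure;          /* 1 = secure-world only */
} region_t;

static region_t regions[2] = {
    {0x00000000, 0x10000000, 1},   /* secure RAM + key storage */
    {0x10000000, 0x70000000, 0},   /* normal-world DRAM */
};
static int locked = 0;

void lock_config(void) { locked = 1; }   /* done once by secure boot */

/* Reconfiguration attempt: refused after lock (lock-at-boot). */
int set_region_secure(int idx, int secure) {
    if (locked) return -1;
    regions[idx].secure = secure;
    return 0;
}

/* Bus-level check: may a non-secure master (CPU in normal world, or
 * a DMA engine) access this physical address? */
int ns_access_ok(uint32_t addr) {
    for (int i = 0; i < 2; i++)
        if (addr - regions[i].base < regions[i].size)
            return !regions[i].secure;
    return 0;   /* unmapped: deny */
}
```

Because the check sits at the bus level, it also stops DMA masters, not just the CPU - this is the sense in which TrustZone partitions physical memory.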
Conceptually there are two types of TrustZone secure world code.
A library - here there will not usually be interrupts used in the secure world. The secure API is a magic eight ball: you can ask it a question and it will give you an answer. For instance, it is possible that some digital rights management system might use this approach. The secret keys will be hidden and inaccessible from the normal world.
A secure OS - here the secure world will have interrupts. This is more complex as interrupts imply some sort of pre-emption. The secure OS may or may not use the MMU. Usually the MMU is needed if the system cache will be used.
Those are the two big differences between final TrustZone solutions. They depend on the system design and what the end application is. TrustZone is ONLY part of the tools for trying to achieve this.
Who can change the secure OS? I mean, like adding a "service". Can, for example, the developer of a mobile payment application add a service to the secure OS to work with his app? If yes, then how can any developer add to the secure OS and it still be secure?
This is not defined by TrustZone. It is up to the SOC vendor (the people who license from ARM and build the CPU) to provide a secure boot technology. The secure OS might be in ROM and not changeable, for instance. Another method is for the secure code to be digitally signed; in this case, there is probably on-chip secure ROM that verifies the signature. The SOC vendor will provide (usually under NDA) information and techniques for the secure boot. It usually depends on their target market. For instance, physical tamper protection and encrypt/decrypt hardware may also be included in the SOC.
The on-chip ROM (only programmed by the SOC vendor) usually has mechanisms to boot from different sources (NAND flash, serial NOR flash, eMMC, ROM, Ethernet, etc.). Typically it will have some one-time-programmable fuse memory that a device/solution vendor (the people who make things secure for the application) can program to configure the system.
Other features are secure debug, secure JTAG, etc. Obviously all of these are possible attack vectors.
Related
A lot of the literature that I stumbled upon referred to TrustZone as a mechanism that facilitates Secure Boot (as can be seen here, and in a lot more).
To my knowledge, Secure Boot operates this way:
"Root-of-Trust verifies img1 verifies img2 ..."
So in case the chip is booting from a ROM that verifies the first image which resides in a flash memory, what added value do I get by using TrustZone?
It seems to me that TrustZone cannot provide Secure Boot if there is no ROM Root-of-Trust in the system, because it can only isolate RAM, not flash. So at run time, if the non-trusted OS is compromised, it has no way of protecting its own flash from being rewritten.
Am I missing something here?
So in case the chip is booting from a ROM that verifies the first image which resides in a flash memory, what added value do I get by using TrustZone?
Secure boot and TrustZone are separate features/functions. They often work together. Things will always depend on your threat model and system design/requirements - e.g., does an attacker have physical access to the device?
If you have an image in flash and someone can re-write the flash, it may be the case that the system is 'OK' as long as the boot fails - i.e., someone cannot re-program the flash and have a user think the software is legitimate. In this case, you can allow the untrusted OS access to the flash. If the image is re-written, the secure boot will fail and an attacker cannot present a trojan image.
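The "Root-of-Trust verifies img1 verifies img2 ..." chain quoted in the question can be sketched as follows. This is a toy model: FNV-1a stands in for a real cryptographic hash plus signature check, and the structures are invented for illustration:

```c
#include <stddef.h>
#include <stdint.h>

/* Toy chain of trust: each stage carries the expected digest of the
 * next stage's code and refuses to hand over control on a mismatch. */
static uint32_t digest(const uint8_t *p, size_t n) {
    uint32_t h = 2166136261u;            /* FNV-1a, 32-bit */
    for (size_t i = 0; i < n; i++) { h ^= p[i]; h *= 16777619u; }
    return h;
}

typedef struct {
    const uint8_t *code;
    size_t         len;
    uint32_t       next_digest;  /* expected digest of the next image */
} image_t;

/* ROM root of trust verifies img1; img1 verifies img2; and so on.
 * Returns the number of stages verified, stopping at the first
 * mismatch - i.e. a tampered image means the boot refuses to go on. */
int boot_chain(uint32_t rom_digest, const image_t *imgs, int n) {
    uint32_t expected = rom_digest;
    for (int i = 0; i < n; i++) {
        if (digest(imgs[i].code, imgs[i].len) != expected)
            return i;                    /* tampered: refuse to boot */
        expected = imgs[i].next_digest;
    }
    return n;
}
```

Note that the root digest lives in ROM, which is exactly why the on-chip ROM root of trust matters: rewriting flash changes a stage's digest and the chain stops there.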
Am I missing something here?
If your system fails when someone can stop it from booting, then you need to assign the flash controller to secure memory and only allow access to the flash via controlled channels between the worlds. With this design/requirement, secure boot might not really do much, as you are trying to construct the system not to run unauthorized software at all.
This is probably next to impossible if an attacker has physical access. They can disassemble the device and re-program the flash by removing the chip, programming it externally, and reinstalling it. Also, an attacker can swap the device for some mocked-up trojan device that doesn't even have the same CPU, but just the same external appearance and similar behaviour.
If the first case is acceptable (rogue code can reprogram the flash, but the result won't boot), then you have designs/requirements where the in-memory code cannot compromise the functionality of the running system. I.e., you may not want this code to grab passwords, etc. So TrustZone and secure boot work together in a lot of cases. It is entirely possible to find some model that works with only one. It is probably more common that you need both and don't understand all the threats.
Pretty sure TrustZone can isolate flash, depending on the vendor's implementation of the Secure Configuration Register (SCR).
Note this is with regard to TrustZone-M (TrustZone for the Cortex-M architecture), which may not be what you are looking for.
Hi everyone.
When we talk about information security, we usually assume that the more a system relies on secure hardware, the safer it is than one that relies on software for the same security function. Why? Won't secure hardware have bugs in it too?
Thanks
It depends upon your system - what type of system are you talking about: a stand-alone system, a server, an application system, etc.? If you are talking about a server, developing a firewall in software is not enough; we have to use various hardware devices as well, and secure the server against other hazards.
When we talk about a stand-alone application, there can be a firewall, password security, and also user lock devices. So every system has its own type of security requirements.
How can I develop applications that use ARM's TrustZone? Specifically, I want to develop a program that can save sensitive data in the secure world.
Should this program run in the normal world or the secure world? I know there are trustlets in the secure world - do I need to develop trustlets? Are there SDKs or APIs that I can use to interact directly with an existing secure world OS, or do I need to compile and install my own secure OS?
Any advice will be greatly appreciated.
Thank you!
There are two extremes. These are documented in the Software overview chapter of ARM's Security Technology: Building a Secure System using TrustZone Technology.
APIs
At one end of the spectrum, there is only a set of APIs which can be called from the normal world. This is detailed in SMC calls for Linux. For instance, if the device contains a public-private key pair, an API call could sign data. The normal world would never have access to the private key, but anyone can verify that the device is original by verifying the signature. So the normal world is free to forward this request over any communications interface. This may be part of authenticating a device.
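As a rough sketch of how such a call looks at the interface level, the normal world fills a small register file (function ID first, arguments after it) and the secure dispatcher answers in place. The function IDs and key handling below are invented for illustration; real IDs are allocated per the Arm SMC Calling Convention:

```c
#include <stdint.h>

/* Host-side mock of an SMC-style call interface. */
#define FID_GET_PUBKEY 0x82000001u       /* invented function IDs */
#define FID_SIGN       0x82000002u
#define SMC_UNKNOWN    0xFFFFFFFFu       /* "unknown function" result */

static const uint32_t private_key = 0x5ec2e7u;   /* secure world only */

/* The "SMC handler": reads the function ID from regs[0], arguments
 * from the following registers, and writes results back in place. */
void smc_dispatch(uint32_t regs[4]) {
    switch (regs[0]) {
    case FID_GET_PUBKEY:
        regs[0] = 0;
        regs[1] = private_key * 2654435761u;  /* toy "public" value */
        break;
    case FID_SIGN:
        regs[0] = 0;
        regs[1] = regs[1] ^ private_key;      /* toy signature of arg */
        break;
    default:
        regs[0] = SMC_UNKNOWN;                /* unrecognised service */
    }
}
```

This also answers the earlier question about malicious SMC calls: all the normal world can do is pick a function ID; anything the dispatcher doesn't recognise gets an error back, and the key never leaves the secure side.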
Co-operative OSs
In this mode, there is a full-blown OS in both the secure and normal worlds (called the TEE and REE elsewhere). The OSs must co-operate on interrupts and scheduling. They may also use SMC calls, lock-free algorithms, and semaphores, along with shared memory.
ARM recommends using the FIQ for the secure world and leaving the IRQ for the normal world. Specifically, there are settings to stop the normal world from ever masking the FIQ. All of these issues depend on the type of IPC, scheduling, interrupt response, etc. that the system needs.
The simplest Secure scheduler would always pre-empt the normal world. Only the idle task would yield the CPU to the normal world. A more flexible solution would have the schedulers co-operate so that both worlds can have higher and lower priority tasks.
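That simplest policy - the secure world pre-empts unless it is idle - can be modelled in a few lines. This is a toy simulation, not scheduler code from any real TEE:

```c
#include <stddef.h>

typedef enum { RUN_SECURE, RUN_NORMAL } world_t;

/* One scheduling decision: secure world pre-empts whenever it has
 * ready work; only its idle state yields the CPU to the normal world. */
world_t pick_next(int secure_ready) {
    return secure_ready > 0 ? RUN_SECURE : RUN_NORMAL;
}

/* Drive the scheduler over a timeline: arrivals[t] secure tasks
 * arrive at tick t, each taking one tick; ran[t] records which world
 * got the CPU that tick. */
void simulate(const int *arrivals, world_t *ran, size_t ticks) {
    int backlog = 0;
    for (size_t t = 0; t < ticks; t++) {
        backlog += arrivals[t];
        ran[t] = pick_next(backlog);
        if (ran[t] == RUN_SECURE) backlog--;
    }
}
```

The drawback is visible in the simulation: the normal world is starved whenever secure work is pending, which is why a more flexible design lets the two schedulers co-operate on priorities.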
The better way is to install both an REE OS and a TEE OS on one device. When a program wants to do something sensitive, the device switches to the TEE OS, so you can handle the sensitive data securely. When you are done with the sensitive data, the device switches back to the REE OS.
But implementing the switch between two OSs on one device is tough work.
Operating Systems such as MobiCore already exist and have been deployed on mass market devices such as Samsung Galaxy S3.
MobiCore is an OS that runs alongside Android, so trustlets (= MobiCore apps) can communicate with Android apps via a set of system calls to the MobiCore driver, which is the part of the Android OS in charge of communicating with the trusted execution environment.
If you are looking to develop trustlets for MobiCore as explained above, you must become a MobiCore developer, which you could theoretically do by signing up as a developer for MobiCore's Trustonic venture.
If you wish to use ARM's TrustZone technology on your own device / dev board with an open-source secure OS, perhaps you can use OpenVirtualization's SierraTEE, which seems to be compiled for Xilinx Zynq-7000 AP SOC and also compatible with Android as the rich OS.
You can use the OP-TEE (Open Portable Trusted Execution Environment) OS. If you are looking for trusted execution environment application examples, also known as Trusted Applications (TAs), then you can check this OP-TEE trusted application examples repository and this TA using OP-TEE and the Comcast Crypto API.
OP-TEE OS provides the following APIs for writing Trusted Applications:
Secure Storage APIs for secure storage
Cryptographic Operations APIs for encryption and decryption of secure credentials and data
Secure Element API, which helps host applications or applets on a tamper-resistant platform
Time APIs
Arithmetical APIs
For the client side (normal world), OP-TEE provides:
TEE Client APIs
You can refer to the documentation here.
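To give a feel for what the Secure Storage API enforces, here is a host-side mock with invented names - the real interface is GlobalPlatform's TEE_CreatePersistentObject / TEE_OpenPersistentObject family - in which each TA can only read back the objects it owns:

```c
#include <string.h>

/* Mock of per-TA secure storage: an object stored by one trusted
 * application cannot be read back by another. All names invented. */
#define MAX_OBJS 8

typedef struct {
    int  owner_ta;            /* identity of the TA that stored it */
    char id[16];
    char data[32];
    int  used;
} obj_t;

static obj_t store[MAX_OBJS];

int ss_put(int ta, const char *id, const char *data) {
    for (int i = 0; i < MAX_OBJS; i++)
        if (!store[i].used) {
            store[i].used = 1;
            store[i].owner_ta = ta;
            strncpy(store[i].id, id, sizeof store[i].id - 1);
            strncpy(store[i].data, data, sizeof store[i].data - 1);
            return 0;
        }
    return -1;                /* storage full */
}

/* Returns 0 and copies the data out only if the caller owns it. */
int ss_get(int ta, const char *id, char *out, int outlen) {
    for (int i = 0; i < MAX_OBJS; i++)
        if (store[i].used && strcmp(store[i].id, id) == 0) {
            if (store[i].owner_ta != ta)
                return -1;    /* not the owner: access denied */
            strncpy(out, store[i].data, outlen - 1);
            out[outlen - 1] = '\0';
            return 0;
        }
    return -1;                /* no such object */
}
```

In a real TEE the owner check is enforced by the secure OS using the TA's UUID, and the stored data is additionally encrypted with a device-unique key.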
We developed an application in C, cross compiled it, and ported it over to an ARM single-board computer (SBC) running Linux. We are starting to sell this SBC to customers for their specific needs using this application. However, we have heard rumors that they are also trying to hack into the SBC to copy our compiled code and then decompile it so that in the future they can develop their own.
So my question is: How to protect my software application in the SBC?
We tried standard Linux software protection solutions such as TrueCrypt, but they are difficult to cross compile for the SBC. Any suggestions?
Best regards,
new2RoR
You appear to have two separate issues to deal with here:
Disclosure of confidential information (e.g. details of the implementation of your software)
Unauthorised use/modification of your software
The only entirely reliable solution to both is using a SBC with secure boot, and fully trusted execution path (e.g. everything code-signed). This is by no means impossible, but will limit your hardware choices and be high-effort. This is fundamentally contrary to the ethos of the open source movement, and solutions are hard to come by.
Dealing with the issue of confidentiality, disassembling compiled C and C++ is not particularly easy or useful and requires a fairly high level of skill; it won't be an economic attack unless the value of doing so is very high.
Unless you can prevent the attacker from getting access to the binary form of the software, you can reduce the attack surface and make life more difficult for any attacker by:
Stripping symbols
Obfuscating those symbols that need to remain
Statically linking libraries
Obfuscating or encrypting any data compiled into the software
Preventing unauthorised usage can be achieved by some kind of authentication against something a legitimate user holds, and/or code signing:
Code signing can be used to prevent modified versions of the software from being used (so long as you trust the operating system to enforce it).
Using a hardware authentication token or unique identity device can ensure the software isn't copied and used on another system.
In practice, you probably want both of these.
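As a toy illustration of the second idea - binding software to a unique device identity - the program derives a check value from a device-unique ID (e.g. read from fuses or an authentication token) and refuses to run elsewhere. The mixing function below is a cheap stand-in for a real keyed MAC, and all names are invented:

```c
#include <stdint.h>

/* Derive a check value from the device ID and a vendor secret.
 * A real implementation would use an HMAC; this toy mix is only
 * for illustrating the flow. */
static uint32_t bind_check(uint32_t device_id, uint32_t vendor_secret) {
    uint32_t v = device_id ^ vendor_secret;
    v *= 2654435761u;          /* cheap stand-in for a keyed MAC */
    v ^= v >> 16;
    return v;
}

/* At manufacture time the vendor stores expected = bind_check(id, s)
 * alongside the software; at startup the software recomputes it and
 * refuses to run on a device whose ID doesn't match. */
int license_ok(uint32_t device_id, uint32_t vendor_secret,
               uint32_t expected) {
    return bind_check(device_id, vendor_secret) == expected;
}
```

Of course, this only raises the bar: if the attacker can patch the binary, they can remove the check, which is why it is usually combined with code signing enforced below the level they can modify.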
I am looking at an embedded system where secrets are stored in flash that is internal to the chip package, and there is no physical interface to get that information out - all access to this flash is policed by program code.
All DMA attacks and JTAG and such are disabled. This seems to be a common locked-down configuration for system-on-a-chip.
How might an attacker recover the secrets in that Flash?
I understand they can fuzz for vulnerabilities in the app code and exploit them, and that there could be some general side-channel attack or the like.
But how would an attacker really go about trying to recover those keys? Are there viable approaches for a determined attacker to shave down the chip, or some kind of microscope attack?
I've been searching for information on how various game consoles, satellite TV, trusted computing and DVD systems have been physically attacked to see how this threat works and how vulnerable SoC is, but without success.
It seems that all those keys have actually been extracted through software, or from multi-chip systems?
http://www.youtube.com/watch?v=tnY7UVyaFiQ
A security researcher analysing a smart card: he chemically strips the case, then uses an oscilloscope to see what it's doing while decrypting.
The attack against MiFare RFID.
"...For the MiFare crack, they shaved off layers of silicon and photographed them. Using Matlab they visually identified the various gates and looked for crypto like parts..."