extracting secrets from an embedded chip - security

I am looking at an embedded system where secrets are stored in flash that is internal to the chip package, and there is no physical interface to get that information out - all access to this flash is policed by program code.
DMA, JTAG and similar debug interfaces are all disabled. This seems to be a common locked-down configuration for a system-on-a-chip.
How might an attacker recover the secrets in that Flash?
I understand an attacker could fuzz the application code for vulnerabilities and exploit them, or mount some kind of general side-channel attack.
But how would an attacker really go about trying to recover those keys? Are there viable approaches for a determined attacker, such as shaving down the chip or some kind of microscope attack?
I've been searching for information on how various game consoles, satellite TV, trusted computing and DVD systems have been physically attacked, to see how this threat works and how vulnerable SoCs are, but without success.
It seems that all of those keys were actually extracted from software or from multi-chip systems?

http://www.youtube.com/watch?v=tnY7UVyaFiQ
A security researcher analysing a smart card: he chemically strips the case, then uses an oscilloscope to see what it is doing while decrypting.

The attack against MiFare RFID.
"...For the MiFare crack, they shaved off layers of silicon and photographed them. Using Matlab they visually identified the various gates and looked for crypto like parts..."

Related

Why are software-based attacks on TPMs so rare? How do TPMs protect against them?

I'm working on obtaining a unique hardware fingerprint on a host through WMI.
However, I found that this approach is quite vulnerable.
There are at least two attack vectors:
attack on kernel space by memory manipulation
https://github.com/Alex3434/wmi-static-spoofer
attack on user space by dll-hooking
https://dzone.com/articles/windows-api-hooking-and-dll-injection
Eventually, I surveyed the TPM as a hardware identification solution.
To my astonishment, software-based attacks on TPMs are quite rare.
My question is
how does the TPM protect against software-based attacks such as DLL hooking and memory manipulation?
There are features in the TPM (at least in 2.0) that help to prevent such attacks.
You can use attestation to check whether a key is protected by the TPM.
You can use session-based encryption/decryption to protect at least some data between your application and the TPM.
But if an attacker can perform any of the attacks you listed, they must already have managed to get admin or root rights on the system, and can therefore perform other attacks to steal your data.
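As a rough illustration of the attestation check, here is a sketch of the verifier side. The tpm_get_quote(), verify_quote_signature() and get_random_bytes() helpers are hypothetical placeholders (a real implementation would go through a TSS library); only the protocol logic is the point: a fresh nonce prevents replay, the signature ties the report to the TPM's attestation key, and the PCR digest ties it to a known-good software state.

```c
/* Conceptual sketch of TPM quote verification -- the tpm_*, verify_* and
 * get_random_bytes helpers are hypothetical placeholders, not a real TSS API. */
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

struct quote {
    unsigned char pcr_digest[32];   /* digest of the selected PCRs        */
    unsigned char nonce[32];        /* qualifying data echoed by the TPM  */
    unsigned char signature[256];   /* signed by the attestation key (AK) */
    size_t        sig_len;
};

/* Hypothetical helpers: random nonce, quote request, signature check. */
void get_random_bytes(unsigned char *buf, size_t len);
bool tpm_get_quote(const unsigned char nonce[32], struct quote *out);
bool verify_quote_signature(const struct quote *q,
                            const unsigned char *ak_pub, size_t ak_pub_len);

bool attest_host(const unsigned char *ak_pub, size_t ak_pub_len,
                 const unsigned char expected_pcrs[32])
{
    unsigned char nonce[32];
    struct quote q;

    get_random_bytes(nonce, sizeof nonce);     /* fresh nonce -> no replay   */

    if (!tpm_get_quote(nonce, &q))
        return false;
    if (!verify_quote_signature(&q, ak_pub, ak_pub_len))
        return false;                          /* not signed by the real TPM */
    if (memcmp(q.nonce, nonce, sizeof nonce) != 0)
        return false;                          /* stale / replayed quote     */
    return memcmp(q.pcr_digest, expected_pcrs, 32) == 0;  /* known-good state */
}
```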

TrustZone vs ROM as root-of-trust in Secure Boot

A lot of the literature I have stumbled upon refers to TrustZone as a mechanism that facilitates Secure Boot (as can be seen here, and in many other places).
To my knowledge, Secure Boot operates this way:
"Root-of-Trust verifies img1 verifies img2 ..."
So in case the chip is booting from a ROM that verifies the first image which resides in a flash memory, what added value do I get by using TrustZone?
It seems to me that TrustZone cannot provide Secure Boot if the system has no ROM Root-of-Trust, because it can only isolate RAM and not flash; so at run time, if the non-trusted OS is compromised, there is no way of protecting the flash from being rewritten.
Am I missing something here?
So in case the chip is booting from a ROM that verifies the first image which resides in a flash memory, what added value do I get by using TrustZone?
Secure boot and TrustZone are separate features/functions, though they often work together. Things will always depend on your threat model and system design/requirements, e.g. whether an attacker has physical access to the device.
If you have an image in flash and someone can rewrite that flash, it may be the case that the system is 'OK' as long as the boot simply fails; i.e., someone cannot reprogram the flash and have a user believe the software is legitimate. In this case you can allow the untrusted OS access to the flash: if the image is rewritten, secure boot will fail and an attacker cannot present a trojan image.
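To make that failure mode concrete, here is a minimal sketch of the ROM-anchored verification step. The header layout, the sha256()/verify_rsa_signature() primitives and the panic()/jump_to_image() hooks are illustrative assumptions, not any particular vendor's boot ROM API:

```c
/* Boot-ROM verification sketch: the public key lives in mask ROM, so a
 * rewritten flash image simply fails the check and never runs. */
#include <stdint.h>
#include <stddef.h>

struct image_header {
    uint32_t magic;
    uint32_t image_size;            /* payload length in bytes        */
    uint8_t  signature[256];        /* e.g. RSA-2048 over the payload */
};

extern const uint8_t rom_pubkey[];  /* baked in at manufacture time   */
extern const size_t  rom_pubkey_len;

/* Hypothetical ROM primitives. */
void sha256(const void *data, size_t len, uint8_t digest[32]);
int  verify_rsa_signature(const uint8_t *pubkey, size_t pubkey_len,
                          const uint8_t digest[32],
                          const uint8_t *sig, size_t sig_len); /* nonzero = valid */
void panic(void);
void jump_to_image(const uint8_t *entry);

void boot_from_flash(const struct image_header *hdr)
{
    const uint8_t *payload = (const uint8_t *)(hdr + 1);
    uint8_t digest[32];

    if (hdr->magic != 0x494D4731u)          /* "IMG1", arbitrary example */
        panic();                            /* refuse to boot            */

    sha256(payload, hdr->image_size, digest);
    if (!verify_rsa_signature(rom_pubkey, rom_pubkey_len, digest,
                              hdr->signature, sizeof hdr->signature))
        panic();                            /* tampered flash stops here */

    jump_to_image(payload);                 /* chain of trust continues  */
}
```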
Am I missing something here?
If your system fails when someone can stop it from booting, then you need to assign the flash controller to the secure world and only allow access to the flash via controlled channels between worlds. With this design/requirement, secure boot might not really do much, as you are trying to construct the system so that it never runs unauthorized software.
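A sketch of what such a controlled channel might look like on the secure side; the function ID, partition limits and flash_write()/image_signature_ok() helpers are made-up placeholders:

```c
/* Secure-world handler sketch: the flash controller is mapped secure-only,
 * so the normal world can only request writes through this SMC service.
 * In a real monitor the normal-world buffer would also be range-checked
 * against non-secure memory and copied before use. */
#include <stdint.h>

#define SMC_FLASH_WRITE   0x82000010u   /* hypothetical SiP function ID      */
#define UPDATE_PART_BASE  0x00100000u   /* only region the normal world may  */
#define UPDATE_PART_SIZE  0x00080000u   /* ask to have rewritten             */

int flash_write(uint32_t offset, const void *src, uint32_t len); /* secure driver */
int image_signature_ok(const void *img, uint32_t len);           /* hypothetical  */

/* Invoked by the secure monitor when the normal world issues SMC_FLASH_WRITE. */
long handle_flash_write_smc(uint32_t offset, const void *ns_buf, uint32_t len)
{
    /* The policy lives here, out of reach of a compromised normal-world OS:
     * only the update partition is writable, and only with a signed image. */
    if (len > UPDATE_PART_SIZE ||
        offset < UPDATE_PART_BASE ||
        offset - UPDATE_PART_BASE > UPDATE_PART_SIZE - len)
        return -1;                           /* outside the allowed window */
    if (!image_signature_ok(ns_buf, len))
        return -2;                           /* unsigned / tampered image  */

    return flash_write(offset, ns_buf, len); /* 0 on success */
}
```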
Preventing this is probably next to impossible if an attacker has physical access. They can disassemble the device and reprogram the flash by removing the chip, programming it externally and refitting it. An attacker can also swap the device for a mocked-up trojan device that doesn't even have the same CPU, just the same external appearance and similar behaviour.
If the first case is acceptable (rogue code can reprogram the flash, but the result won't boot), then you still have designs/requirements where the in-memory code must not compromise the functionality of the running system; i.e., you may not want that code to grab passwords, etc. So TrustZone and secure boot work together for a lot of cases. It is entirely possible to find some model that works with only one; it is probably more common that you need both and don't understand all the threats.
Pretty sure TrustZone can isolate flash, depending on the vendor's implementation of the Secure Configuration Register (SCR).
Note this is with regard to TrustZone-M (TrustZone for the Cortex-M architecture), which may not be what you are looking for.
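For reference, on ARMv8-M cores the architectural piece of this partitioning is the SAU (Security Attribution Unit), combined with any vendor-defined IDAU and flash-controller security options. A minimal CMSIS-style sketch, with arbitrary example addresses and "device.h" standing in for the vendor's device header:

```c
/* TrustZone-M partitioning sketch (ARMv8-M SAU).  Addresses are arbitrary
 * examples; "device.h" stands in for the vendor CMSIS device header that
 * provides the SAU register block and the __DSB/__ISB intrinsics. */
#include <stdint.h>
#include "device.h"

void sau_setup(void)
{
    /* Region 0: non-secure flash for the untrusted image.  Anything not
     * covered by an enabled SAU region stays secure, so the secure-boot
     * code and key storage below 0x00200000 remain out of reach of the
     * normal world. */
    SAU->RNR  = 0;
    SAU->RBAR = 0x00200000u;                 /* base address, 32-byte aligned */
    SAU->RLAR = 0x003FFFE0u | 0x1u;          /* limit address | region enable */

    /* Region 1: non-secure SRAM for the normal-world OS. */
    SAU->RNR  = 1;
    SAU->RBAR = 0x20040000u;
    SAU->RLAR = 0x2007FFE0u | 0x1u;

    SAU->CTRL = 0x1u;                        /* ENABLE=1, ALLNS=0             */
    __DSB();
    __ISB();
}
```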

How is an ARM TrustZone secure OS secure?

I am trying to read the TrustZone white paper but it is really difficult to understand some of the basic stuff. I have some questions about it. They may be simple questions but I am a beginner in this field:
What makes the secure world really "secure"? I mean, why might the normal world be tampered with but not the secure world?
Who can change the secure OS? I mean, can a "service" be added - can, for example, the developer of a mobile payment application add a service to the secure OS to work with their app? If yes, then how can any developer add to the secure OS and it still be secure?
What prevents a malicious application in the normal OS from raising an SMC exception and transferring to the secure OS?
The idea of a secure world is to keep the code executing there as small and as simple as possible - the bare minimum to fulfil its duties (usually controlling access to some resource like encryption keys or hardware or facilitating some secure functions like encryption/decryption).
Because the amount of code in the secure world is small, it can be audited easily and there's reduced surface area for bugs to be introduced. However, it does not mean that the secure world is automatically 'secure'. If there is a vulnerability in the secure world code, it can be exploited just like any other security vulnerability.
Contrast this with code executing in the normal world. For example, the Linux kernel is much more complex and much harder to audit. There are plenty of examples of kernel vulnerabilities and exploits that allow malicious code to take over the kernel.
To illustrate this point, let's suppose you have a system where users can pay money via some challenge-response transaction system. When they want to make a transaction, the device must wait for the user to press a physical button before it signs the transaction with a cryptographic key and authorises the payment.
But what if some malicious code exploited a kernel bug and is able to run arbitrary code in kernel mode? Normally this means total defeat. The malware is able to bypass all control mechanisms and read out the signing keys. Now the malware can make payments to anyone it wants without even needing the user to press a button.
What if there was a way that allows for signing transactions without the Linux kernel knowing the actual key? Enter the secure world system.
We can have a small secure world OS with the sole purpose of signing transactions and holding onto the signing key. However, it will refuse to sign a transaction unless the user presses a special button. It's a very small OS (in the kilobytes) and you've hired people to audit it. For all intents and purposes, there are no bugs or security vulnerabilities in the secure world OS.
When the normal world OS (e.g. Linux) needs to sign a transaction, it makes an SMC call to transfer control to the secure world (note, the normal world is not allowed to modify/read the secure world at all) with the transaction it wants to sign. The secure world OS will wait for a button press from the user, sign the transaction, then transfer control back to the normal world.
Now, imagine the same situation where malware has taken over the Linux kernel. The malware now can't read the signing key because it's in the secure world. The malware can't sign transactions without the user's consent since the secure world OS will refuse to sign a transaction unless the user presses the button.
This kind of use case is what the secure world is designed for. The whole idea is the hardware enforced separation between the secure and normal world. From the normal world, there is no way to directly tamper with the secure world because the hardware guarantees that.
I haven't worked with TrustZone in particular but I imagine once the secure world OS has booted, there is no way to directly modify it. I don't think application developers should be able to 'add' services to the secure world OS since that would defeat the purpose of it. I haven't seen any vendors allowing third parties to add code to their secure world OS.
To answer your last question, I've already answered it in an answer here. SMC exceptions are how you request a service from the secure world OS - they're basically system calls but for the secure world OS. What would malicious code gain by transferring control to the secure world?
You cannot modify/read the secure world from the normal world
When you transfer control to the secure world, you lose control in the normal world
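To make the mechanism concrete, here is a sketch of the normal-world side of such a signing service. This is kernel-level (EL1) code, the function ID and argument layout are invented for illustration, and a real Linux driver would use the kernel's arm_smccc helpers rather than raw inline assembly:

```c
/* Normal-world (EL1) helper that traps into the secure monitor to request a
 * signature.  The function ID and arguments follow the general SMC Calling
 * Convention shape but are illustrative, not a real vendor allocation. */
#include <stdint.h>

#define SIGN_TX_FUNC_ID 0xC2000001u   /* hypothetical SiP-range fast call */

static inline long smc_sign_transaction(uint64_t tx_buf_pa, uint64_t tx_len,
                                        uint64_t sig_buf_pa)
{
    register uint64_t x0 asm("x0") = SIGN_TX_FUNC_ID;
    register uint64_t x1 asm("x1") = tx_buf_pa;   /* where the transaction is  */
    register uint64_t x2 asm("x2") = tx_len;
    register uint64_t x3 asm("x3") = sig_buf_pa;  /* where the signature lands */

    /* Control transfers to the secure world; the secure OS waits for the
     * physical button press, signs (the key never leaves it), and returns
     * a status code in x0. */
    asm volatile("smc #0"
                 : "+r"(x0)
                 : "r"(x1), "r"(x2), "r"(x3)
                 : "memory");
    return (long)x0;
}
```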
What makes the secure world really "secure"? I mean, why might the normal world be tampered with but not the secure world?
The security system designer makes it secure. TrustZone is a tool. It provides a way to partition PHYSICAL memory, which can prevent a DMA attack. TrustZone generally supports lock-at-boot features, so once a physical mapping (secure/normal world permissions) is complete, it cannot be changed. TrustZone also gives tools to partition interrupts and to boot securely.
It is important to note that "secure world" is a technical term: it is just a different state than the normal world. The name secure world doesn't make it secure - the system designer must. It is highly dependent on what the secure assets are. TrustZone only gives tools to partition things so that normal-world access can be prevented.
Conceptually there are two types of TrustZone secure world code.
A library - here interrupts are not usually used in the secure world. The secure API is a magic eight ball: you ask it a question and it gives you an answer. For instance, some digital rights management system might use this approach; the secret keys remain hidden and inaccessible from the normal world.
A secure OS - here the secure world will have interrupts. This is more complex as interrupts imply some sort of pre-emption. The secure OS may or may not use the MMU. Usually the MMU is needed if the system cache will be used.
Those are two big differences between final TrustZone solutions. They depend on the system design and what the end application is. TrustZone is ONLY part of the toolset for achieving this.
Who can change the secure OS? I mean, can a "service" be added - can, for example, the developer of a mobile payment application add a service to the secure OS to work with their app? If yes, then how can any developer add to the secure OS and it still be secure?
This is not defined by TrustZone. It is up to the SoC vendor (the people who license from ARM and build the CPU) to provide a secure boot technology. The secure OS might be in ROM and not changeable, for instance. Another method is for the secure code to be digitally signed; in this case there is probably an on-chip secure ROM that verifies the signature. The SoC vendor will provide (usually under NDA) information and techniques for the secure boot. It usually depends on their target market; for instance, physical tamper protection and encrypt/decrypt hardware may also be included in the SoC.
The on-chip ROM (programmed only by the SoC vendor) usually has mechanisms to boot from different sources (NAND flash, serial NOR flash, eMMC, ROM, Ethernet, etc.). Typically it will have some one-time-fusable memory (OTP fuses) that a device/solution vendor (the people who make things secure for the application) can program to configure the system.
Other features are secure debug, secure JTAG, etc. Obviously all of these are possible attack vectors.

How to secure the software application in a single board arm computer?

We developed an application in C. We then cross-compiled it and ported it over to an ARM single-board computer (SBC) running Linux. We are starting to sell this SBC to customers for their specific needs using this application. However, we have heard rumors that they are also trying to hack into the SBC to copy our compiled code and decompile it, so that in the future they can develop their own.
So my question is: How to protect my software application in the SBC?
We have tried standard Linux software protection solutions such as TrueCrypt, but they are difficult to cross-compile for the SBC. Any suggestions?
Best regards,
new2RoR
You appear to have two separate issues to deal with here:
Disclosure of confidential information (e.g. details of the implementation of your software)
Unauthorised use/modification of your software
The only entirely reliable solution to both is using an SBC with secure boot and a fully trusted execution path (e.g. everything code-signed). This is by no means impossible, but it will limit your hardware choices and be high-effort; it is also fundamentally contrary to the ethos of the open-source movement, so ready-made solutions are hard to come by.
Dealing with the issue of confidentiality, disassembling compiled C and C++ is not particularly easy or useful and requires a fairly high level of skill; it won't be an economic attack unless the value of doing so is very high.
Even if you cannot prevent the attacker from getting access to the binary form of the software, you can reduce the attack surface and make life more difficult for any attacker by:
Stripping symbols
Obfuscating those symbols that need to remain
Statically linking with libraries
Obfuscating or encrypting any data compiled into the software (sketched below)
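As a small illustration of that last point, compiled-in secrets can at least be kept out of a casual pass with the strings tool by storing them masked and decoding them only at runtime. This is obfuscation, not real protection - it only raises the effort bar:

```c
/* Keep a compiled-in secret out of trivial binary inspection by storing it
 * XOR-masked and decoding it only at runtime.  The masked bytes below encode
 * the example string "api-key-12345" and would be produced by a build step. */
#include <stdio.h>
#include <stddef.h>

#define MASK 0x5Au

static const unsigned char masked_secret[] = {
    0x3B, 0x2A, 0x33, 0x77, 0x31, 0x3F, 0x23, 0x77,
    0x6B, 0x68, 0x69, 0x6E, 0x6F
};

static void unmask(const unsigned char *in, size_t len, char *out)
{
    for (size_t i = 0; i < len; i++)
        out[i] = (char)(in[i] ^ MASK);
    out[len] = '\0';
}

int main(void)
{
    char secret[sizeof masked_secret + 1];

    unmask(masked_secret, sizeof masked_secret, secret);
    /* Use the decoded value here; avoid logging it. */
    printf("decoded %zu secret bytes\n", sizeof masked_secret);
    return 0;
}
```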
Preventing unauthorised usage can be achieved by some kind of authentication with something a legitimate user holds and/or code signing
Code signing can be used to prevent modified versions of the software being used (so long as you trust the operating system to enforce it)
Using a hardware authentication token or unique identity device can ensure the software isn't copied and used on another system (sketched below).
In practice, you probably want both of these.
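For the last point, a common low-tech variant is to bind the binary to a per-device identifier. The sketch below uses OpenSSL's one-shot HMAC; the identifier source (a device-tree serial number here) and the key handling are deployment-specific placeholders, and an attacker who controls the binary can patch such a check out, so it only complements the measures above:

```c
/* Tie the binary to one board: compare an HMAC of a per-device identifier
 * against a licence token shipped with that unit.  The identifier path and
 * key handling are placeholders; adapt to the target board. */
#include <stdio.h>
#include <string.h>
#include <openssl/crypto.h>
#include <openssl/evp.h>
#include <openssl/hmac.h>

static int read_device_serial(char *buf, size_t len)
{
    /* Placeholder: many ARM SBCs expose a serial number here or in
     * /proc/cpuinfo; use whatever identity source your board provides. */
    FILE *f = fopen("/sys/firmware/devicetree/base/serial-number", "r");
    if (!f)
        return -1;
    size_t n = fread(buf, 1, len - 1, f);
    buf[n] = '\0';
    fclose(f);
    return 0;
}

int licence_valid(const unsigned char *key, int key_len,
                  const unsigned char expected_tag[32])
{
    char serial[128];
    unsigned char tag[32];
    unsigned int tag_len = sizeof tag;

    if (read_device_serial(serial, sizeof serial) != 0)
        return 0;
    HMAC(EVP_sha256(), key, key_len,
         (const unsigned char *)serial, strlen(serial), tag, &tag_len);
    return CRYPTO_memcmp(tag, expected_tag, sizeof tag) == 0;
}
```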

Changing bios code/flashing the bios

I've spent a lot of time developing an operating system and working on my low-level boot loader. Now I want to take some time off from my operating system, without leaving the low-level environment, and do something involving security.
So I chose to build my own password utility following the pre-boot authentication scheme. Since I want the software to be at least a little portable, I want it to use as little external support as possible. I figured it would be best if I somehow managed to 'hook' into the BIOS somewhere between the self-checks and the int 19h bootstrap, from within a running real-mode OS.
However, finding information on how to modify the BIOS code has proved impossible. I've found nothing on how to achieve the above, only pages describing how to flash the BIOS.
Does anyone know how I can read/write BIOS code, or can someone provide links to pages that describe this?
I know that bricking my device is not only possible but likely; I'm aware of the risk and willing to take it.
Pinczakko's articles on BIOS reverse engineering are a great place to start looking at this. There was also a book published by the same author but it is now out of print.
I'm not sure if this approach is the best approach towards a secure boot, but the articles on this site are very detailed and should point you towards a method for modifying your BIOS firmware.
I'm not really sure what you are trying to achieve, but:
The BIOS is completely hardware-specific - each manufacturer has its own mechanism for updating/flashing the BIOS, so trying to come up with a portable mechanism for updating a BIOS is destined to fail. For example, when using Bochs you "update" the BIOS by specifying a different BIOS ROM image.
If you want to modify/write your own BIOS then it's going to be completely specific to that hardware. Your best bet would be to start with something like Bochs, as it is open source - since you can look at the source code for the BIOS (and easily test/debug it), you stand a reasonable chance of understanding the BIOS code and modifying it into something that works. However, I suspect this isn't what you are trying to do.
Why not just perform this authentication as your OS boots? If you want to protect the data, then you should encrypt it and require that the user log in / supply the decryption key on startup.
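On the "supply the decryption key on startup" point: the usual approach is not to store the key anywhere, but to derive it from a passphrase typed at boot. A sketch using OpenSSL's PBKDF2 (a pre-boot loader would need a freestanding KDF implementation instead, and the iteration count is only a ballpark figure):

```c
/* Derive the volume key from a boot-time passphrase instead of storing it.
 * Shown with OpenSSL for brevity; returns 1 on success, 0 on failure. */
#include <string.h>
#include <openssl/evp.h>

int derive_volume_key(const char *passphrase,
                      const unsigned char *salt, int salt_len,
                      unsigned char key_out[32])
{
    /* Iteration count is a rough current ballpark for PBKDF2-HMAC-SHA256;
     * tune for an acceptable delay on the target hardware. */
    return PKCS5_PBKDF2_HMAC(passphrase, (int)strlen(passphrase),
                             salt, salt_len, 600000,
                             EVP_sha256(), 32, key_out);
}
```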
If you were thinking of working with "legacy" PC BIOS, I would dissuade you from trying for many of the reasons Justin mentioned: 1) legacy BIOS is PC vendor-specific; 2) it is closed source and proprietary; 3) there are no industry standards defining legacy BIOS interfaces for extending the system as you are trying to do.
On the other hand, if you have access to a UEFI-based BIOS PC, you may be able to write your own PEI/DXE driver(s) to implement such a feature. This will at least point you in the right direction:
http://sourceforge.net/apps/mediawiki/tianocore/index.php?title=Welcome
Intel Press book on the topic: Beyond BIOS
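As a starting point, a pre-boot password check on UEFI can be written as a small DXE driver. The skeleton below assumes EDK2's MdePkg headers; the entry point name is arbitrary and the literal password is a placeholder - a real implementation would compare a salted hash stored in an authenticated UEFI variable and hook itself properly into the boot flow:

```c
/* Skeleton of a DXE driver that asks for a password before boot continues.
 * Built against EDK2 (MdePkg); the check itself is a placeholder. */
#include <Uefi.h>
#include <Library/BaseLib.h>

STATIC CONST CHAR16 *mPrompt = L"Pre-boot password: ";

EFI_STATUS
EFIAPI
PreBootAuthEntry (
  IN EFI_HANDLE        ImageHandle,
  IN EFI_SYSTEM_TABLE  *SystemTable
  )
{
  EFI_INPUT_KEY  Key;
  CHAR16         Buf[32];
  UINTN          Pos = 0;
  UINTN          Index;

  SystemTable->ConOut->OutputString (SystemTable->ConOut, (CHAR16 *)mPrompt);

  for (;;) {
    SystemTable->BootServices->WaitForEvent (1, &SystemTable->ConIn->WaitForKey, &Index);
    SystemTable->ConIn->ReadKeyStroke (SystemTable->ConIn, &Key);
    if (Key.UnicodeChar == L'\r') {
      break;                               // Enter finishes the entry
    }
    if (Key.UnicodeChar != 0 && Pos < 31) {
      Buf[Pos++] = Key.UnicodeChar;        // collect, but do not echo
    }
  }
  Buf[Pos] = L'\0';

  // Placeholder check: replace with a salted-hash comparison against an
  // authenticated UEFI variable.
  if (StrCmp (Buf, L"hunter2") != 0) {
    SystemTable->RuntimeServices->ResetSystem (EfiResetCold, EFI_ACCESS_DENIED, 0, NULL);
  }
  return EFI_SUCCESS;
}
```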
Regarding the practicality of reading/writing the BIOS, you'll need to identify the SPI part containing the BIOS and get a ROM burner. The SPI part may or may not be socketed; if it is not, you'll need a soldering iron and the ability to create a socket/header for the part. You obviously do not want to embark on this project with your primary computer system; perhaps you could find an older system or a reference board.
