Why is a function implemented in hardware generally considered more secure than one implemented in software? - security

Hi everyone.
When we talk about information security, we usually assume that the more a system relies on secure hardware, the safer it is compared to one that relies on secure software for the same security function. Why? Can't secure hardware have bugs in it as well?
Thanks

It depends on your system. What type of system are you talking about: a stand-alone system, a server, an application system, etc.? If you are talking about a server, for example, building a firewall in software alone is not enough. We also have to use various hardware devices and protect the server against other hazards.
For a stand-alone application there can be a firewall, password security, and also user lock devices. So every system has its own security requirements.

Related

Why is a Trusted Execution Environment more significant for mobile devices?

I've been trying to understand what a Trusted Execution Environment is and how it works. Why is there such a strong emphasis on mobile devices? I've been trying to find out what the difference is between personal computers and mobile devices with respect to a TEE. What am I missing?
Even though it's late, I will add my comments in the simplest possible way for reference.
As the world moves toward enterprise mobility, using mobile devices for work is becoming essential for many companies and organizations. From that comes the need to secure those devices: not only the data, but the processes and memory allocation as well, especially when governments and sensitive departments start to use mobile devices.
Starting from the very lowest level of the mobile device architecture: every mobile device has a processor, and processor manufacturers have come up with technology (e.g. ARM TrustZone) that creates two isolated areas running at the same time on the CPU, controlled at the level of the SoC (System on Chip).
The first area is the one everyone uses on a mobile device (Normal World / Rich Execution Environment - REE); the second is the secure area (Secure World / Trusted Execution Environment - TEE). Each area has its own operating system running on the same CPU, but their processes and memory allocation are completely separate.
Many mobile device manufacturers (e.g. Samsung) have started to utilize that area by loading a third-party secure operating system (OS) into it (e.g. the Kinibi OS from Trustonic).
Developing applications (Trusted Applications - TAs) for the secure world is not an easy process, provisioning them there is another story, and integrating those applications with the normal world is yet another (some sort of special SDK provided by the TEE OS owner has to be used).
It is worth mentioning that applications running in the TEE can have extraordinary privileges, so TEE OS owners normally limit what TAs can do.
Lastly, although the TEE is considered a secure area for sensitive processes (so far), there are other ways to achieve the same level of security (or even better) on mobile devices.
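To make the normal-world integration a bit more concrete, below is a minimal sketch of how an REE client application might call into a TA through the GlobalPlatform TEE Client API (the kind of interface the vendor SDKs mentioned above expose). The TA UUID and command ID here are made up purely for illustration, and details vary by TEE OS.

    /* Hypothetical normal-world (REE) client calling a Trusted Application.
     * Based on the GlobalPlatform TEE Client API; the TA UUID and command ID
     * below are made up for illustration. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <tee_client_api.h>

    /* Made-up UUID of the TA we want to talk to. */
    static const TEEC_UUID example_ta_uuid = {
        0x12345678, 0x0000, 0x0000,
        { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01 }
    };

    #define CMD_STORE_SECRET 0  /* hypothetical command ID implemented by the TA */

    int main(void)
    {
        TEEC_Context ctx;
        TEEC_Session sess;
        TEEC_Operation op;
        uint32_t origin;
        char secret[] = "sensitive data";

        /* Connect to the TEE through its normal-world driver... */
        if (TEEC_InitializeContext(NULL, &ctx) != TEEC_SUCCESS)
            return 1;

        /* ...open a session with the TA identified by its UUID... */
        if (TEEC_OpenSession(&ctx, &sess, &example_ta_uuid,
                             TEEC_LOGIN_PUBLIC, NULL, NULL, &origin) != TEEC_SUCCESS)
            return 1;

        /* ...and invoke a command, passing the data in a temporary memref. */
        memset(&op, 0, sizeof(op));
        op.paramTypes = TEEC_PARAM_TYPES(TEEC_MEMREF_TEMP_INPUT,
                                         TEEC_NONE, TEEC_NONE, TEEC_NONE);
        op.params[0].tmpref.buffer = secret;
        op.params[0].tmpref.size = sizeof(secret);

        if (TEEC_InvokeCommand(&sess, CMD_STORE_SECRET, &op, &origin) == TEEC_SUCCESS)
            printf("TA accepted the data\n");

        TEEC_CloseSession(&sess);
        TEEC_FinalizeContext(&ctx);
        return 0;
    }

The key point is that the normal-world side only ever sees handles and command results; whatever the TA does with the data happens inside the secure world.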

How is an ARM TrustZone secure OS secure?

I am trying to read the TrustZone white paper but it is really difficult to understand some of the basic stuff. I have some questions about it. They may be simple questions but I am a beginner in this field:
What makes the secure world really "secure"? I mean, why might the normal world be tampered with but not the secure world?
Who can change the secure OS? I mean, for example, adding a "service": can an application developer for a mobile payment application add a service to the secure OS to work with his app? If yes, then how can any developer add to the secure OS while it remains secure?
What prevents a malicious application in the normal OS from raising an SMC exception and transferring control to the secure OS?
The idea of a secure world is to keep the code executing there as small and as simple as possible - the bare minimum to fulfil its duties (usually controlling access to some resource like encryption keys or hardware or facilitating some secure functions like encryption/decryption).
Because the amount of code in the secure world is small, it can be audited easily and there's reduced surface area for bugs to be introduced. However, it does not mean that the secure world is automatically 'secure'. If there is a vulnerability in the secure world code, it can be exploited just like any other security vulnerability.
Contrast this with code executing in the normal world. For example, the Linux kernel is much more complex and much harder to audit. There are plenty of examples of kernel vulnerabilities and exploits that allow malicious code to take over the kernel.
To illustrate this point, let's suppose you have a system where users can pay money via some challenge-response transaction system. When they want to make a transaction, the device must wait for the user to press a physical button before it signs the transaction with a cryptographic key and authorises the payment.
But what if some malicious code exploited a kernel bug and is able to run arbitrary code in kernel mode? Normally this means total defeat. The malware is able to bypass all control mechanisms and read out the signing keys. Now the malware can make payments to anyone it wants without even needing the user to press a button.
What if there was a way that allows for signing transactions without the Linux kernel knowing the actual key? Enter the secure world system.
We can have a small secure world OS with the sole purpose of signing transactions and holding onto the signing key. However, it will refuse to sign a transaction unless the user presses a special button. It's a very small OS (in the kilobytes) and you've hired people to audit it. For all intents and purposes, there are no bugs or security vulnerabilities in the secure world OS.
When the normal world OS (e.g. Linux) needs to sign a transaction, it makes an SMC call to transfer control to the secure world (note, the normal world is not allowed to modify/read the secure world at all) with the transaction it wants to sign. The secure world OS will wait for a button press from the user, sign the transaction, then transfer control back to the normal world.
Now, imagine the same situation where malware has taken over the Linux kernel. The malware now can't read the signing key because it's in the secure world. The malware can't sign transactions without the user's consent since the secure world OS will refuse to sign a transaction unless the user presses the button.
This kind of use case is what the secure world is designed for. The whole idea is the hardware enforced separation between the secure and normal world. From the normal world, there is no way to directly tamper with the secure world because the hardware guarantees that.
I haven't worked with TrustZone in particular but I imagine once the secure world OS has booted, there is no way to directly modify it. I don't think application developers should be able to 'add' services to the secure world OS since that would defeat the purpose of it. I haven't seen any vendors allowing third parties to add code to their secure world OS.
To answer your last question, I've already answered it in an answer here. SMC exceptions are how you request a service from the secure world OS - they're basically system calls but for the secure world OS. What would malicious code gain by transferring control to the secure world?
You cannot modify/read the secure world from the normal world
When you transfer control to the secure world, you lose control in the normal world
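To make the payment example concrete, a secure-world service along those lines could conceptually look like the sketch below. This is not real TrustZone or TEE-vendor code; every function here is a hypothetical stand-in for "wait for the physical button" and "sign with the key that never leaves the secure world".

    /* Purely illustrative secure-world command handler for the payment example.
     * All functions and error codes here are hypothetical, not a real TEE API. */
    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical primitives provided by the secure-world OS/hardware. */
    extern int secure_button_pressed_blocking(uint32_t timeout_ms); /* waits for the physical button */
    extern int secure_sign(const uint8_t *msg, size_t len,
                           uint8_t *sig, size_t *sig_len);          /* key never leaves the secure world */

    #define ERR_USER_DENIED  -1
    #define ERR_SIGN_FAILED  -2

    /* Entry point reached via an SMC from the normal world. The normal world
     * passes the transaction bytes in; it only ever gets the signature back. */
    int handle_sign_transaction(const uint8_t *tx, size_t tx_len,
                                uint8_t *sig, size_t *sig_len)
    {
        /* Refuse to sign unless the user physically consents. A compromised
         * normal-world kernel cannot fake this button press. */
        if (!secure_button_pressed_blocking(30000))
            return ERR_USER_DENIED;

        if (secure_sign(tx, tx_len, sig, sig_len) != 0)
            return ERR_SIGN_FAILED;

        return 0; /* control then returns to the normal world via the SMC path */
    }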
What makes the secure world really "secure"? I mean, why might the normal world be tampered with but not the secure world?
The security system designer makes it secure. TrustZone is a tool. It provides a way to partition PHYSICAL memory, which can prevent a DMA attack. TrustZone generally supports lock-at-boot features, so once a physical mapping is complete (secure/normal world permissions), it cannot be changed. TrustZone also gives tools to partition interrupts and to boot securely.
It is important to note that the secure world is a technical term. It is just a different state from the normal world. The name "secure world" doesn't make it secure; the system designer must. It is highly dependent on what the secure assets are. TrustZone only gives tools to partition things so that normal-world access can be prevented.
Conceptually there are two types of TrustZone secure world code.
A library - here there will not usually be interrupts used in the secure world. The secure API is a magic eight ball: you can ask it a question and it will give you an answer. For instance, it is possible that some digital rights management system might use this approach. The secret keys would be hidden and inaccessible to the normal world.
A secure OS - here the secure world will have interrupts. This is more complex, as interrupts imply some sort of pre-emption. The secure OS may or may not use the MMU. Usually the MMU is needed if the system cache will be used.
Those are the two big differences between final TrustZone solutions. They depend on the system design and what the end application is. TrustZone is ONLY part of the tools for trying to achieve this.
Who can change the secure OS? I mean, for example, adding a "service": can an application developer for a mobile payment application add a service to the secure OS to work with his app? If yes, then how can any developer add to the secure OS while it remains secure?
This is not defined by TrustZone. It is up to the SoC vendor (the people who license from ARM and build the CPU) to provide a secure boot technology. The secure OS might be in ROM and not changeable, for instance. Another method is for the secure code to be digitally signed. In this case, there is probably an on-chip secure ROM that verifies the digital signature. The SoC vendor will provide (usually under NDA) information and techniques for the secure boot. It usually depends on their target market. For instance, physical tamper protection and encryption/decryption hardware may also be included in the SoC.
The on-chip ROM (programmed only by the SoC vendor) usually has mechanisms to boot from different sources (NAND flash, serial NOR flash, eMMC, ROM, Ethernet, etc.). Typically it will have some one-time programmable fuse memory (EPROM) that a device/solution vendor (the people who make things secure for the application) can program to configure the system.
Other features are secure debug, secure JTAG, etc. Obviously all of these are possible attack vectors.
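As a rough illustration of the "on-chip ROM verifies the digital signature" step, first-stage boot code conceptually does something like the sketch below before jumping to the secure OS image. The helper functions, the image layout, and the fused-key scheme are hypothetical; real SoC boot ROMs are vendor-specific.

    /* Conceptual secure-boot flow in on-chip ROM: verify the next-stage image
     * signature against a public-key hash burned into one-time-programmable
     * fuses. All helpers and the header layout are hypothetical. */
    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    extern int  rom_sha256(const void *data, size_t len, uint8_t digest[32]);
    extern int  rom_rsa_verify(const uint8_t *pubkey, size_t pubkey_len,
                               const uint8_t digest[32],
                               const uint8_t *sig, size_t sig_len);
    extern void rom_read_fused_key_hash(uint8_t hash[32]);   /* from OTP fuses */
    extern void rom_halt(void);                              /* refuse to boot */

    struct image_header {          /* hypothetical signed-image layout */
        uint32_t payload_len;
        uint8_t  pubkey[256];
        uint8_t  signature[256];
        /* payload (the secure OS) follows */
    };

    void rom_boot_secure_os(const struct image_header *img)
    {
        uint8_t digest[32], fused[32], key_digest[32];
        const uint8_t *payload = (const uint8_t *)(img + 1);

        /* 1. The public key itself must match the hash fused by the vendor,
         *    otherwise an attacker could just sign with their own key. */
        rom_sha256(img->pubkey, sizeof(img->pubkey), key_digest);
        rom_read_fused_key_hash(fused);
        if (memcmp(key_digest, fused, 32) != 0)
            rom_halt();

        /* 2. Verify the signature over the secure OS image. */
        rom_sha256(payload, img->payload_len, digest);
        if (rom_rsa_verify(img->pubkey, sizeof(img->pubkey), digest,
                           img->signature, sizeof(img->signature)) != 0)
            rom_halt();

        /* 3. Only now transfer control to the verified secure OS. */
        ((void (*)(void))(uintptr_t)payload)();
    }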

Develop programs for ARM TrustZone

How can I develop applications that use ARM's TrustZone? Specifically, I want to develop a program that can save sensitive data in the secure world.
Should this program run in the normal world or the secure world? I know there are trustlets in the secure world; do I need to develop trustlets? Are there SDKs or APIs
that I can use to interact directly with an existing secure-world OS, or do I need to compile and install my own secure OS?
Any advice will be greatly appreciated.
Thank you!
There are two extremes. These are documented in the Software overview chapter of ARM's Security Technology: Building a Secure System using TrustZone Technology.
APIs
At one end of the spectrum, there is only a set of APIs which can be called from the normal world. This is detailed in the SMC calls for Linux. For instance, if the device contains a public-private key pair, an API call could sign data. The normal world would never have access to the private key, but anyone can verify that the device is original by verifying the signature. So the normal world is free to forward this request over any communications interface. This may be part of authenticating a device.
Co-operative OSs
In this mode, there is a full-blown OS in both the secure and normal world (called TEE and REE elsewhere). The OSs must co-operate on interrupts and scheduling. They may also use SMC calls, lock-free algorithms and semaphores, along with shared memory.
ARM recommends using the FIQ for the secure world and leaving the IRQ for the normal world. Specifically, there are settings to stop the normal world from ever masking the FIQ. All of these issues depend on the type of IPC, scheduling, interrupt response, etc. that the system needs.
The simplest Secure scheduler would always pre-empt the normal world. Only the idle task would yield the CPU to the normal world. A more flexible solution would have the schedulers co-operate so that both worlds can have higher and lower priority tasks.
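A tiny sketch of the "only the idle task yields" idea, assuming an AArch64 secure world whose idle loop hands the CPU back to the normal world with an SMC to the monitor; the SMC function ID and the monitor's behaviour are hypothetical, and the clobber list is simplified.

    /* Illustrative secure-world idle task: when the secure scheduler has
     * nothing to run, yield the CPU back to the normal world through the
     * secure monitor. The SMC function ID (0) and the monitor behaviour
     * (resume us on the next secure interrupt or call) are hypothetical. */
    static inline void yield_to_normal_world(void)
    {
        __asm__ volatile("smc #0" ::: "x0", "x1", "x2", "x3", "memory");
    }

    void secure_idle_task(void)
    {
        for (;;)
            yield_to_normal_world();
    }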
A better way is to install a REE OS and a TEE OS on one device. When a program wants to do something sensitive, the device switches to the TEE OS, so you can handle sensitive data securely. When you are done with the sensitive data, the device switches back to the REE OS.
But implementing this two-OS switching on a device is tough work.
Operating systems such as MobiCore already exist and have been deployed on mass-market devices such as the Samsung Galaxy S3.
MobiCore is an OS that runs alongside Android, so trustlets (= MobiCore apps) can communicate with Android apps via a set of system calls to the MobiCore driver, which is the part of the Android OS in charge of communicating with the trusted execution environment.
If you are looking to develop trustlets for MobiCore as explained above, you must become a MobiCore developer, which you could theoretically do by signing up as a developer for MobiCore's Trustonic venture.
If you wish to use ARM's TrustZone technology on your own device / dev board with an open-source secure OS, perhaps you can use OpenVirtualization's SierraTEE, which seems to be compiled for Xilinx Zynq-7000 AP SOC and also compatible with Android as the rich OS.
You can use the OP-TEE (Open Portable Trusted Execution Environment) OS. If you are looking for trusted execution environment application examples, which are also known as Trusted Applications (TAs), then you can check this optee trusted applications examples repository and this TA using OP-TEE and Comcast Crypto API.
OP-TEE OS provides the following APIs for writing Trusted Applications:
Secure Storage APIs for secure storage
Cryptographic Operations APIs for encryption and decryption of secure credentials and data
Secure Element API, which helps in hosting applications or applets on a tamper-resistant platform
Time APIs
Arithmetical APIs
For the client side (normal world), OP-TEE provides:
TEE Client-side APIs
You can refer to the documentation here.
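For the secure-storage part specifically, here is a sketch of what the TA (secure world) side of a "store a secret" command could look like using the GlobalPlatform Internal Core API that OP-TEE implements. The command ID and object name are made up, and a real TA also needs the other TA_* entry points plus OP-TEE's TA build scaffolding; see the examples repository linked above for complete, buildable TAs.

    /* Sketch of the TA-side (secure world) handler for a "store a secret"
     * command on OP-TEE, using the GlobalPlatform TEE Internal Core API.
     * The command ID and object name are made up for illustration. */
    #include <tee_internal_api.h>

    #define CMD_STORE_SECRET 0   /* hypothetical command ID, matching the client */

    static TEE_Result store_secret(uint32_t param_types, TEE_Param params[4])
    {
        const char obj_id[] = "example-secret";   /* name inside secure storage */
        TEE_ObjectHandle obj;
        TEE_Result res;

        if (TEE_PARAM_TYPE_GET(param_types, 0) != TEE_PARAM_TYPE_MEMREF_INPUT)
            return TEE_ERROR_BAD_PARAMETERS;

        /* Create (or overwrite) a persistent object in this TA's private
         * storage. The TEE encrypts it and binds it to the TA; the normal
         * world never sees it in the clear. */
        res = TEE_CreatePersistentObject(TEE_STORAGE_PRIVATE,
                                         obj_id, sizeof(obj_id) - 1,
                                         TEE_DATA_FLAG_ACCESS_WRITE |
                                         TEE_DATA_FLAG_OVERWRITE,
                                         TEE_HANDLE_NULL,
                                         params[0].memref.buffer,
                                         params[0].memref.size,
                                         &obj);
        if (res == TEE_SUCCESS)
            TEE_CloseObject(obj);
        return res;
    }

    /* Standard TA entry point invoked when the client calls TEEC_InvokeCommand. */
    TEE_Result TA_InvokeCommandEntryPoint(void *session, uint32_t cmd_id,
                                          uint32_t param_types, TEE_Param params[4])
    {
        (void)session;
        if (cmd_id == CMD_STORE_SECRET)
            return store_secret(param_types, params);
        return TEE_ERROR_NOT_SUPPORTED;
    }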

How to secure a software application on a single-board ARM computer?

We developed an application in C. We then cross-compiled it and ported it over to an ARM single-board computer (SBC) running Linux. We are starting to sell this SBC to customers for their specific needs using this application. However, we have heard rumors that they are also trying to hack into the SBC to copy our compiled code and then decompile it so that in the future they can develop their own.
So my question is: how can I protect my software application on the SBC?
We have tried standard Linux software protection solutions such as TrueCrypt, but it is difficult to cross-compile them for the SBC. Any suggestions?
Best regards,
new2RoR
You appear to have two separate issues to deal with here:
Disclosure of confidential information (e.g. details of the implementation of your software)
Unauthorised use/modification of your software
The only entirely reliable solution to both is using an SBC with secure boot and a fully trusted execution path (e.g. everything code-signed). This is by no means impossible, but it will limit your hardware choices and be high-effort. It is also fundamentally contrary to the ethos of the open source movement, and solutions are hard to come by.
Dealing with the issue of confidentiality, disassembling compiled C and C++ is not particularly easy or useful and requires a fairly high level of skill; it won't be an economic attack unless the value of doing so is very high.
Unless you can prevent the attacker from getting access to the binary form of the software, you can reduce the attack surface and make life more difficult for any attacker by:
Stripping symbols
Obfuscating those symbols that need to remain
Statically linking with libraries
Obfuscating or encrypting any data compiled into the software (see the sketch at the end of this answer)
Preventing unauthorised usage can be achieved by some kind of authentication with something a legitimate user holds, and/or code signing:
Code signing can be used to prevent modified versions of the software being used (so long as you trust the operating system to enforce it)
Using a hardware authentication token or unique identity device can ensure the software isn't copied and used on another system.
In practice, you probably want both of these.
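On the "obfuscating data compiled into the software" point, a minimal sketch of the idea is below: the value is stored XOR-masked so it never appears as a plain string in the binary, and is only unmasked in memory when needed. The mask and the data are example values; this only raises the bar against casual strings/hex-editor inspection, it is not real protection against a determined reverse engineer.

    /* Minimal illustration of obfuscating data compiled into the binary. */
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    #define MASK 0x5A

    /* "api-secret" XOR-masked with 0x5A at build time (example bytes). */
    static const uint8_t masked_secret[] = {
        'a' ^ MASK, 'p' ^ MASK, 'i' ^ MASK, '-' ^ MASK, 's' ^ MASK,
        'e' ^ MASK, 'c' ^ MASK, 'r' ^ MASK, 'e' ^ MASK, 't' ^ MASK
    };

    /* Unmask into a caller-provided buffer; never store the plain value globally. */
    static void unmask(const uint8_t *in, size_t len, char *out)
    {
        for (size_t i = 0; i < len; i++)
            out[i] = (char)(in[i] ^ MASK);
        out[len] = '\0';
    }

    int main(void)
    {
        char secret[sizeof(masked_secret) + 1];

        unmask(masked_secret, sizeof(masked_secret), secret);
        printf("using secret of length %zu\n", sizeof(masked_secret));
        /* ... use 'secret', then wipe it from memory when done ... */
        return 0;
    }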

Sandboxed operating system

On most operating systems today, the default is that when we install a program, it is given access to many resources that it may not need, and that its user may not intend to give it access to. For example, when one installs a closed-source program, in principle there is nothing to stop it from reading the private keys in ~/.ssh and sending them to a malicious third party over the internet, and unless the user is a security expert proficient in using tracing programs, they will likely not be able to detect such a breach.
With the proliferation of closed-source programs being installed on computers, what actions are different operating systems taking to solve the problem of sandboxing third-party programs?
Are there any operating systems designed from the ground up with security in mind, where every program or executable has to declare, in a format clearly readable by the user, what resources it requires to run, so that the OS runs it in a sandbox where it has access only to those resources? For example, an executable would have to declare that it requires access to a certain directory or file on the filesystem, that it needs to reach certain domains or IP addresses over the network, that it will require a certain amount of memory, etc. If the executable lies in its declaration of system resource requirements, it should be prevented by the operating system from accessing them.
This is the beauty of virtualization. Anyone performing testing or running a questionable application would be wise to use a virtual machine.
Virtual Machines:
Provide advantages of a full Operating System without direct hardware access
Can crash or fail and be restarted without affecting the host machine
Are cheap to deploy and configure to a variety of environments
Great for using applications designed for other platforms
Sandboxes applications that may attempt to access other private data on your computer
With the seamless modes that virtualization programs such as VirtualBox provide, you can take advantage of a virtual machine's sandboxing in a nearly seamless fashion.
You have just described MAC (Mandatory Access Control) in your last paragraph.
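MAC frameworks such as SELinux and AppArmor enforce that kind of policy from outside the program. As a related, self-imposed building block on Linux, seccomp lets a process declare up front which system calls it will use and have the kernel block everything else. A minimal sketch using libseccomp, assuming the libseccomp development package is installed (build with -lseccomp):

    /* Minimal sketch of a process restricting itself to a declared set of
     * system calls with seccomp (via libseccomp). This is self-imposed syscall
     * filtering, not full MAC like SELinux/AppArmor, but it illustrates the
     * "declare what you need, the kernel blocks the rest" idea. */
    #include <unistd.h>
    #include <seccomp.h>

    int main(void)
    {
        /* Default action: kill the process on any syscall not explicitly allowed. */
        scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_KILL);
        if (!ctx)
            return 1;

        /* Declare the handful of syscalls this program actually needs. */
        seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(write), 0);
        seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(exit_group), 0);

        if (seccomp_load(ctx) != 0)   /* hand the filter to the kernel */
            return 1;
        seccomp_release(ctx);

        /* Allowed: write(2). Anything else, e.g. opening ~/.ssh/id_rsa,
         * would now terminate the process. */
        write(STDOUT_FILENO, "sandboxed\n", 10);
        return 0;
    }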
I was always curious about that too.
Nowadays mobile OSes like Android do have sandboxing built in. When installing an app, it asks for permission to access a set of resources/features. Windows does too, as far as I know, at least to some extent; it is more permissive though.
Ironically, Linux and others seem to be far, far away when it comes to "software-based permissions" and are stuck in the past, which is a pity... at least, as far as I know. I would be pleased for someone to prove me wrong and show me a "usable" open source system where application sandboxing/privileges are built in. Currently, as far as I know, permissions are solely user-based.
I think this awareness that not only users need rights to access documents, but executables also need rights to access resources, has been missing for several decades. It might have avoided the plague of viruses and security issues of our century.
