ARM TrustZone, connecting peripherals? - security

I'm currently doing some research on ARM's TrustZone, e.g. here: ARM information center. As far as I understand, TrustZone can be used to create a secure environment based on the AMBA AXI bus.
The ARM website says: "This concept of secure and non-secure worlds extends beyond the processor to encompass memory, software, bus transactions, interrupts and peripherals within an SoC." I read that peripherals can be connected to TrustZone via the Non-Secure (NS) bit of the AMBA AXI bus (an extra signal used to differentiate between trusted and non-trusted requests).
1) What, apart from the extra signal on the AMBA AXI bus, is the TrustZone-specific hardware in an SoC with TrustZone?
2) Is it possible to connect an external non-volatile memory (e.g. flash), or a partition of it, to TrustZone with secure-world access (via the external memory interface and then, internally, the AXI bus)? If not, how are secrets (such as keys) stored for use in the secure world (with the help of fuses?)? If so, how is it prevented that a flash device containing malicious code is connected?
3) Is it possible to deploy code into the secure world as a customer of a chip vendor (e.g. TI or NXP), either before or after the chip has left the factory?
Thank you for your answers.

TrustZone is a set of standards released by ARM. It gives OEMs (embedded software programmers) and SoC vendors tools for building a secure solution. Their needs differ depending on what has to be secured, so each SoC will be different. Some SoC manufacturers compete on the same security applications, but they still differentiate their implementations.
1) What, apart from the extra signal on the AMBA AXI bus, is the TrustZone-specific hardware in an SoC with TrustZone?
Anything the vendor wants. The GIC (ARMv7-A) interrupt controller, the L1 and L2 cache controllers, and the MMU are all TrustZone-aware peripherals in most Cortex-A CPUs. These are designed by ARM and implemented in the SoC. There are also various memory partitioning/exclusion devices that can be placed between a peripheral and the rest of the SoC; examples are the NIC-301 and various proprietary bus interconnect technologies.
Other hardware may include physical tamper detection, voltage and temperature monitoring, clock monitoring, and cryptography accelerators.
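To make the "memory partitioning/exclusion" idea concrete, here is a hedged sketch of secure-world boot code programming a TZPC-style protection controller. The base address, register offsets and bit assignment are hypothetical; every SoC maps its partitioning hardware differently:

```c
#include <stdint.h>

/* Hypothetical addresses/bits: each SoC places its TrustZone protection
 * controller (TZPC-style device) and assigns peripheral bits differently. */
#define TZPC_BASE            0x10010000u
#define TZPC_DECPROT0_SET    (TZPC_BASE + 0x804u)  /* set bit -> peripheral becomes non-secure accessible */
#define TZPC_DECPROT0_CLR    (TZPC_BASE + 0x808u)  /* clear bit -> peripheral is secure-world only */
#define UART1_PROT_BIT       (1u << 3)             /* made-up bit assignment for a UART */

static inline void reg_write(uint32_t addr, uint32_t value)
{
    /* volatile store: this is a device register, not ordinary memory */
    *(volatile uint32_t *)(uintptr_t)addr = value;
}

/* Called early by secure-world boot code: keep UART1 accessible only to
 * secure-world masters; non-secure AXI transactions to it will be rejected
 * by the partitioning hardware. */
void make_uart1_secure_only(void)
{
    reg_write(TZPC_DECPROT0_CLR, UART1_PROT_BIT);
}
```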
2) Is it possible to connect an external non-volatile memory (e.g. flash), or a partition of it, to TrustZone with secure-world access (via the external memory interface and then, internally, the AXI bus)? If not, how are secrets (such as keys) stored for use in the secure world (with the help of fuses?)? If so, how is it prevented that a flash device containing malicious code is connected?
As alluded to above, interconnects like the NIC-301 can physically partition AXI peripherals (see the image below). Part of any TrustZone solution is some secure boot mechanism. All CPUs boot in the secure world. The secure boot mechanism may vary: for instance, a one-time-programmable ROM might be appropriate for some applications. Many SoCs have programmable fuses combined with a public/private key mechanism implemented in the SoC ROM. The ROM boot software verifies that the image in flash is properly signed by whoever burned the one-time fuses.
This OEM image can set up many TrustZone peripherals, most of which have a lock bit. Once it is set, the registers in that peripheral cannot be changed until the next hard reset.
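To illustrate the signature check described above, here is a rough sketch of what an SoC ROM might do. The header layout and the helper functions (sha256, rsa_verify, read_fused_key_hash) are illustrative placeholders, not any vendor's actual ROM API:

```c
#include <stdint.h>
#include <stdbool.h>
#include <string.h>

/* Illustrative image header: real vendors define their own layouts. */
struct boot_image_header {
    uint8_t  oem_public_key[256];   /* public key shipped with the image */
    uint8_t  signature[256];        /* signature over the image payload  */
    uint32_t payload_size;
    uint8_t  payload[];             /* the OEM secure-world code         */
};

/* These would be provided by the SoC ROM / crypto accelerator; hypothetical. */
extern void sha256(const void *data, uint32_t len, uint8_t digest[32]);
extern bool rsa_verify(const uint8_t pubkey[256],
                       const uint8_t digest[32],
                       const uint8_t signature[256]);
extern void read_fused_key_hash(uint8_t digest[32]);   /* from OTP fuses */

bool rom_verify_image(const struct boot_image_header *img)
{
    uint8_t digest[32], fused[32];

    /* 1. The public key itself is authenticated against a hash burned into
     *    fuses, so an attacker cannot simply ship their own key with the image. */
    sha256(img->oem_public_key, sizeof(img->oem_public_key), digest);
    read_fused_key_hash(fused);
    if (memcmp(digest, fused, sizeof(digest)) != 0)
        return false;

    /* 2. The image payload must be signed by the matching private key. */
    sha256(img->payload, img->payload_size, digest);
    return rsa_verify(img->oem_public_key, digest, img->signature);
}
```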
3) Is it possible to deploy code into the secure world as a customer of a chip vendor (e.g. TI or NXP), either before or after the chip has left the factory?
Yes, this is the secure boot mechanism. The ARM TrustZone documents do not specify how code is to be secured. If you manufacture the chip yourself, an on-chip ROM protected by a mesh layer may be sufficient for secure boot. However, vendors such as TI and NXP implement a public/private key mechanism and verify that only software signed by an OEM can be loaded. This OEM software can have bugs (and possibly the SoC vendor's ROM loader as well), but at least it is possible to create a secure boot chain.
With a public key scheme, even complete access to the chip only allows an attacker to load previously released software from the OEM. Some solutions also have revocation mechanisms to prevent previously released software from being used.
See: trust-zone
[Image: Typical ARM bus]
[Image: ARM partition checker]
Handling ARM TrustZone

Related

Does a HDD/SSD use 1 or 2 device controllers to connect to the OS?

I am studying Operating Systems, and came across device controllers.
I gathered that a device controller is hardware whereas a device driver is software.
I also know that an HDD and an SSD both have a small PCB built into them, and I assume those PCBs are the device controllers.
Now what I want to know is if there is another device controller on the PC/motherboard side of the bus/cable connecting the HDD/SSD to the OS?
Is the configuration: OS >> Device Driver >> Bus >> Device Controller >> HDD/SSD
Or is it: OS >> Device Driver >> Device Controller >> Bus >> Device Controller >> HDD/SSD
Or is it some other configuration?
Sites I visited for answers:
Tutorialspoint
JavaPoint
Idc online
Quora
Most hard disks on desktops are SATA or NVMe. eMMC is popular for smartphones, though some use something else. These are hardware interface standards that describe the way to interact electrically with those disks: what voltage, at what frequency, and for what amount of time you need to apply a signal to a certain pin (a bus line) to make the device behave or react in a certain way.
Most computers are split across a few external chips. On a desktop, that is mostly SATA, NVMe, DRAM, USB, audio output, the network card and the graphics card. Even though there are only a few chips, the CPU would be very expensive if it had to support all those hardware interface standards on the same silicon die. Instead, the CPU implements PCI/PCIe as a general interface to interact with all those chips using memory-mapped registers. Each of these devices has a controller between the device and the CPU. In the same order as above, you have AHCI, an NVMe controller, the DRAM controller (not PCI, and inside the CPU), xHCI (almost everywhere) and Intel HDA (as an example). Network cards are PCIe devices and there isn't really a controller outside the card; graphics cards are also self-standing PCIe devices.
So the OS detects the registers of those devices that are mapped into the address space. When the OS writes to those locations, it writes the registers of the devices. PCIe devices can read/write DRAM directly, but this is managed by the CPU in its general implementation of the PCIe standard, most likely by doing some bus arbitration. The CPU really doesn't care which device it is writing to. It knows that there is a PCI register there, and the OS instructs it to write something there, so it does. It just happens that this device implements a standard, and the OS developers have read that standard, so they write the proper values into those registers and the proper data structures into DRAM to make sure the device knows what to do.
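To illustrate what "writing a memory-mapped register" means, here is a minimal sketch; the base address and register offset are made up, and in a real OS the address would come from the PCI BARs and would have to be mapped first:

```c
#include <stdint.h>

/* Hypothetical example: base address and register offset are made up,
 * purely to illustrate the idea of a memory-mapped register. */
#define DEVICE_MMIO_BASE   0xFEDC0000u   /* where the device's BAR is mapped (hypothetical) */
#define REG_COMMAND_OFFSET 0x04u         /* offset of a "command" register (hypothetical)   */

static inline void mmio_write32(uintptr_t base, uint32_t offset, uint32_t value)
{
    /* volatile: the compiler must not cache or reorder this store,
     * because the "memory" is really a device register. */
    *(volatile uint32_t *)(base + offset) = value;
}

/* Usage (only meaningful once the address range is actually mapped):
 *     mmio_write32(DEVICE_MMIO_BASE, REG_COMMAND_OFFSET, 0x1);
 */
```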
Drivers implement the software interface standard of those controllers. The drivers are the ones instructing the CPU which values to write, and writing the proper data structures in DRAM to give commands to the controllers. The user thread simply places the syscall number in a conventional register determined by the OS developers and executes an instruction that jumps into the kernel at a specific address, which the kernel chose by writing a register at boot. Once there, the kernel looks at the register holding the number and determines which driver to call based on the operation.
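As a small, Linux-specific illustration of that syscall path (using the libc syscall() wrapper instead of raw assembly):

```c
#define _GNU_SOURCE
#include <unistd.h>
#include <sys/syscall.h>   /* SYS_write */

int main(void)
{
    const char msg[] = "hello from a raw syscall\n";

    /* syscall() loads SYS_write into the register the ABI reserves for the
     * syscall number, puts the arguments in the argument registers, and
     * executes the trap instruction (e.g. "syscall" on x86-64). */
    syscall(SYS_write, 1, msg, sizeof msg - 1);
    return 0;
}
```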
On Linux and in some other places, this is done with files. You call syscalls on files and the OS has a driver attached to the file; these are called virtual files. A lot of transfer mechanisms look like the read/write-a-file pattern, so Linux uses that to build a general driver model in which the kernel doesn't even need to understand the driver. The driver essentially says: create a file for me at this path that isn't really on the hard disk, and if someone opens it and calls an operation on it, call this function in my driver. From there, the driver can do whatever it wants because it runs in kernel mode. It just creates the proper data structures in DRAM and writes the registers of the device it drives to make it do something.
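Here is a minimal sketch of that model as a Linux kernel module: a misc character device whose read() is served entirely by the driver (the device name and message are made up):

```c
/* Minimal sketch of a Linux "virtual file": a misc character device
 * whose read() callback is supplied by this driver. */
#include <linux/module.h>
#include <linux/fs.h>
#include <linux/miscdevice.h>

static ssize_t hello_read(struct file *file, char __user *buf,
                          size_t count, loff_t *ppos)
{
    static const char msg[] = "hello from the driver\n";

    /* simple_read_from_buffer handles the offset and copying to user space */
    return simple_read_from_buffer(buf, count, ppos, msg, sizeof(msg) - 1);
}

static const struct file_operations hello_fops = {
    .owner = THIS_MODULE,
    .read  = hello_read,
};

static struct miscdevice hello_dev = {
    .minor = MISC_DYNAMIC_MINOR,
    .name  = "hello_virtual",      /* appears as /dev/hello_virtual */
    .fops  = &hello_fops,
};

static int __init hello_init(void)
{
    return misc_register(&hello_dev);
}

static void __exit hello_exit(void)
{
    misc_deregister(&hello_dev);
}

module_init(hello_init);
module_exit(hello_exit);
MODULE_LICENSE("GPL");
```

After loading the module, reading /dev/hello_virtual (e.g. with cat) returns the string, even though there is no hardware behind the file.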

Non-Hardware Device Drivers

In the book Windows Internals, 7th Edition, the following text is mentioned under Windows Kernel Architecture:
Device drivers -This includes both hardware device drivers, which translate user I/O function
calls into specific hardware device I/O requests, and non-hardware device drivers, such as file
system and network drivers.
Can anyone please elaborate on hardware device drivers and non-hardware device drivers?
Assume you have multiple layers - e.g. when a process makes a file IO request it goes to a virtual file system layer, which may send a request to a file system layer, which may send requests to a software RAID layer, which may send requests to a USB mass storage device driver, which may send a request to a USB controller driver.
You can split these layers into 2 main categories:
a) "device drivers", where there's an actual device. For these, the relationships between device drivers tends to mirror the hierarchical relationships between hardware devices (e.g. "PCI bus with controllers plugged in, with various devices plugged into those controllers, with various peripherals plugged into those devices" may become a tree of "parent device driver communicating with none or more child device drivers that are...").
b) "things that do not drive a device, and therefore are not technically device drivers". For the file IO example above, this is the VFS, file systems and software RAID layer. For networking it'll be code to handle a TCP/IP stack (and figure out routing, etc - which network card should send a packet based on the destination IP address). For user input (keyboard, etc) it could be things like Input Method Editors. For sound it can be code to determine how loud the sound should be on which speakers (on which sound card/s) based on a 2D position.
For most operating systems, device drivers need to be treated as "special" because they need to use interfaces (and possibly direct hardware access) that normal software/processes can't use. For example, in monolithic kernels they might be treated as kernel extensions and (dynamically) linked directly into the kernel.
However, "things that do not drive a device, and therefore are not technically device drivers" end up needing similar special support (e.g. the ability to use the same or similar interfaces that normal software/processes can't use but device drivers can, the ability to be linked into a monolithic kernel, etc.). For an OS designer, the difference between device drivers and "things that aren't technically device drivers but need the same access" is relatively insignificant (compared to normal software/processes, which don't have/need special access); so it's tempting to use the same word to describe both - e.g. call them all "kernel modules" (regardless of whether they're device drivers or not), or call them all "device drivers" (regardless of whether they're technically device drivers or not).
Note that there are a few things that confuse this even more:
a) There's actually a third category - "virtual devices". In some cases software is trying to emulate a real device (e.g. RAM disks that use software/RAM to emulate a hard drive; printers that use a PDF file format converter to "print" to a file, etc). For these cases, emulation/virtualization necessitates implementation as a device driver (but there's technically no device being driven).
b) To make terminology seem more consistent; some operating systems are biased towards defining interfaces as "virtual devices". If you try hard enough you can pretend anything is some kind of abstract virtual device ("It's not a compression/decompression library, it's a virtual compression/decompression device", "It's not a database management engine, it's a virtual relational data storage device", ...).
c) Some operating systems also try to pretend that everything is a file (e.g. Unix - https://en.wikipedia.org/wiki/Everything_is_a_file ). In this case you might have a directory of "device drivers pretending to be files" (e.g. /dev) and end up with "things that are not device drivers that are pretending to be device drivers that are pretending to be files" slapped into the same directory.
Your question is unclear. If you are asking for an example of a non-hardware device driver, one example would be the random number generator device. On Linux, for example, the "/dev/random" device provides a software implementation of a random number generator, so systems without the necessary hardware can still have this function.
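For illustration, reading that software-backed device from user space looks just like reading any other file (this sketch uses /dev/urandom and keeps error handling minimal):

```c
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdint.h>

int main(void)
{
    uint8_t key[16];

    /* /dev/urandom is served by a kernel driver, not by hardware behind a bus */
    int fd = open("/dev/urandom", O_RDONLY);
    if (fd < 0)
        return 1;

    if (read(fd, key, sizeof key) != (ssize_t)sizeof key) {
        close(fd);
        return 1;
    }
    close(fd);

    /* print the random bytes as hex */
    for (size_t i = 0; i < sizeof key; i++)
        printf("%02x", key[i]);
    printf("\n");
    return 0;
}
```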

How to programmatically read ThunderBolt firmware from UEFI

ThunderBolt firmware is stored in its own SPI flash and is updatable from the OS. The system's UEFI firmware is also able to access its configuration data in the flash - users are able to change the ThunderBolt Security Level (SL) from the firmware setup menu during pre-boot. This means there is definitely some way to access the ThunderBolt firmware via some UEFI protocol, but nothing I've tried seems to work.
What I've Tried
I'm able to successfully identify the ThunderBolt device based on its vendor ID and device ID using the EFI_PCI_IO_PROTOCOL.
I initially thought the firmware is an option ROM, so it should be accessible via EFI_PCI_IO_PROTOCOL.RomImage. However, the value is 0. I then thought the Expansion ROM Base Address Register (XROMBAR) inside the PCI Configuration Space might have it, but the XROMBAR is also 0. Extracting the firmware by reading the SPI flash with a hardware programmer, I found that it doesn't have the option ROM signatures 0xAA55 and "PCIR" anywhere. So it seems the firmware is not an option ROM.
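For reference, a minimal EDK2-style sketch of the enumeration and RomImage check I am doing; the device ID below is a placeholder rather than a specific controller's:

```c
#include <Uefi.h>
#include <Library/UefiBootServicesTableLib.h>
#include <Protocol/PciIo.h>

#define TARGET_VID  0x8086   /* Intel */
#define TARGET_DID  0x1234   /* placeholder: use the actual controller's device ID */

EFI_STATUS FindThunderboltAndCheckRom(VOID)
{
  EFI_HANDLE  *Handles;
  UINTN        Count;
  UINTN        Index;
  EFI_STATUS   Status;

  Status = gBS->LocateHandleBuffer(ByProtocol, &gEfiPciIoProtocolGuid,
                                   NULL, &Count, &Handles);
  if (EFI_ERROR(Status)) {
    return Status;
  }

  for (Index = 0; Index < Count; Index++) {
    EFI_PCI_IO_PROTOCOL *PciIo;
    UINT16               Ids[2];   /* [0] = vendor ID, [1] = device ID */

    Status = gBS->HandleProtocol(Handles[Index], &gEfiPciIoProtocolGuid, (VOID **)&PciIo);
    if (EFI_ERROR(Status)) {
      continue;
    }

    /* Read vendor/device ID from PCI config space offset 0x00 */
    Status = PciIo->Pci.Read(PciIo, EfiPciIoWidthUint16, 0x00, 2, Ids);
    if (EFI_ERROR(Status) || Ids[0] != TARGET_VID || Ids[1] != TARGET_DID) {
      continue;
    }

    /* RomImage/RomSize describe the option ROM the PCI bus driver found;
     * in my case RomSize comes back as 0. */
    if (PciIo->RomSize == 0 || PciIo->RomImage == NULL) {
      /* no option ROM exposed for this device */
    }
  }

  gBS->FreePool(Handles);
  return EFI_SUCCESS;
}
```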
I then thought it could be stored in a firmware volume and thus be accessible via the EFI_FIRMWARE_VOLUME2_PROTOCOL. I searched through all the firmware volumes and found a few option ROMs, but none of them belongs to ThunderBolt (as seen from their vendor IDs and device IDs).
Background
I was looking at the ThunderSpy exploit and the report states that the ThunderBolt firmware is not verified during boot. I thought this was unusual since my thinking then was that the firmware should be an option ROM, and option ROMs must be signed and verified by Secure Boot during every boot. From my findings so far, it seems like the firmware isn't an option ROM and is most likely executed directly on the ThunderBolt controller chip and not on the CPU, hence it is outside the purview of Secure Boot. I'm trying to programmatically access the firmware so as to see if there are ways to defend against ThunderSpy-like attacks where malicious modifications were made to the firmware.

How does the Linux Operating System understand the underlying hardware?

I want to learn how the Linux OS understands the underlying hardware. Can anyone suggest where to start to get this understanding? As of now I just know that the '/dev' sub-directory plays a vital role in that.
It has the device special files, which are like a portal to the device driver, which then takes the request to the physical device.
I read somewhere that the udev daemon listens to a netlink socket to collect this information, and the udev device manager detects the addition and removal of devices as they occur.
But even with these, I am just not satisfied with my understanding of how Linux reads the hardware.
Please let me know where to start to understand this; I am thankful to anyone trying to help.
I think at first you need to find out how memory mapping works: what the address space is and how it relates to physical memory. Then you can read about how the hardware is mapped into the address space and how to access it. There is a large amount of documentation to read.
Some of this information is in the Linux Documentation Project.
Additionally, some knowledge about electronics would be helpful.
In general, Linux needs some "channel" of communication with devices. This channel may be, for example, an ISA, PCI or USB bus. PCI devices, for instance, are memory-mapped, and the Linux kernel communicates with them via memory accesses. So first Linux needs to see a given device in some memory area, and then it is able to configure the device and communicate with it.
In the case of USB devices it is a little more complicated, because USB devices are not memory-mapped. You need to configure the USB host controller first to be able to communicate with USB devices; all communication with a USB device goes through the USB host.
There are also devices that are not connected via ISA, PCI or USB. They are connected directly to the processor and are visible at some memory address. This solution is usually found in embedded devices; ARM SoCs, for example, use this approach.
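As a sketch of what "visible at some memory address" looks like from a Linux driver (the physical address and register offset below are purely hypothetical):

```c
#include <linux/module.h>
#include <linux/io.h>

/* Hypothetical physical address of a memory-mapped peripheral
 * (on a real SoC this would come from the device tree or the PCI BARs). */
#define PERIPH_PHYS_BASE 0x40001000UL
#define PERIPH_REG_ID    0x00        /* offset of an "ID" register (made up) */

static void __iomem *regs;

static int __init periph_probe(void)
{
    u32 id;

    /* Map the physical register window into the kernel's virtual address space */
    regs = ioremap(PERIPH_PHYS_BASE, 0x1000);
    if (!regs)
        return -ENOMEM;

    /* Reading a device register is now just a readl() at the mapped address */
    id = readl(regs + PERIPH_REG_ID);
    pr_info("peripheral ID register: 0x%08x\n", id);
    return 0;
}

static void __exit periph_remove(void)
{
    iounmap(regs);
}

module_init(periph_probe);
module_exit(periph_remove);
MODULE_LICENSE("GPL");
```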
Regarding udev: it is a user-space application that listens for events from the Linux kernel and helps other applications recognize device addition and configuration.

HAL layer vs Device driver

In Linux, the HAL provides hardware abstraction, and device drivers also provide hardware abstraction. Can you please clarify the difference between the two?
The device driver communicates with a specific device at a specific buffer and control flag block location. A hardware abstraction layer abstracts away the details of how specific devices work. For example, the driver for a USB mouse is very different from the driver for a PS2 mouse but at the HAL layer they are both mice and can be treated interchangeably.
I would say that HAL provides hardware abstraction using device drivers. From a certain point of view, no device can work without a driver. HAL goes one step ahead, offering a uniform (or, "easier") API for the application.
You can bypass the HAL and talk directly to the device driver, but you cannot bypass the device driver and talk directly to the hardware (this last sentence is more or less valid in general, depending on the OS and environment).
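One common way to picture this in C is the HAL as a table of function pointers that each driver fills in, so the code above it never changes. This is only a toy sketch of the USB vs PS/2 mouse example above; the struct and functions are made up:

```c
#include <stdio.h>

/* The HAL-level view of "a mouse": every mouse driver provides these. */
struct mouse_ops {
    int  (*init)(void);
    void (*read_position)(int *x, int *y);
};

/* One driver per hardware interface; each knows its own registers/protocol. */
static int  ps2_init(void)           { /* talk to the PS/2 controller */ return 0; }
static void ps2_read(int *x, int *y) { *x = 1; *y = 2; }  /* stub values */

static int  usb_init(void)           { /* set up a USB HID transfer */ return 0; }
static void usb_read(int *x, int *y) { *x = 3; *y = 4; }  /* stub values */

static const struct mouse_ops ps2_mouse = { ps2_init, ps2_read };
static const struct mouse_ops usb_mouse = { usb_init, usb_read };

/* Application code only ever sees "a mouse", never the bus it hangs off. */
static void application(const struct mouse_ops *mouse)
{
    int x, y;
    mouse->init();
    mouse->read_position(&x, &y);
    printf("cursor at (%d, %d)\n", x, y);
}

int main(void)
{
    application(&ps2_mouse);   /* swap in &usb_mouse and nothing else changes */
    return 0;
}
```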
The main difference is what they provide abstraction for. HAL abstracts processors, device drivers abstract different devices. So in a sense HAL is the "device" driver of the processor or the motherboard in PCs.
Back in the day, every programmer who coded an app also coded drivers for the various hardware they wanted to support. So, if you had an idea for an app that needed network capabilities, you also needed to know how to program hardware drivers for the network card. Then came the HAL.
So instead of having your software and OS reach out to the hardware directly, there is now a layer in between called the HAL. The HAL lies underneath, or within, the operating system layer.
Now nobody accesses the hardware directly; everything goes through the hardware abstraction layer (HAL). Only the HAL is allowed to access the hardware.
Now it's something standard: all devs have to do is make the game/app work with the HAL.
Now we have the drivers. The drivers tell the HAL how to access the actual hardware.
So whoever makes the sound card, they just make a driver that tells the HAL how to access that sound card.
So overall, our software interacts with the HAL, and the HAL uses drivers to interact with the hardware. The drivers are what tell the HAL how to access that sound card, network card, etc.
