ThunderBolt firmware is stored in its own SPI flash and is updatable from the OS. The system's UEFI firmware is also able to access its configuration data in the flash - users are able to change the ThunderBolt Security Level (SL) from the firmware setup menu during pre-boot. This means there is definitely some way to access the ThunderBolt firmware via some UEFI protocol, but nothing I've tried seems to work.
What I've Tried
I'm able to successfully identify the ThunderBolt device based on its vendor ID and device ID using the EFI_PCI_IO_PROTOCOL.
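A minimal sketch of that step (EDK II style; TB_VENDOR_ID and TB_DEVICE_ID are placeholders for the controller's IDs, not values from the original post):

    // Sketch: enumerate EFI_PCI_IO_PROTOCOL handles and match the
    // controller by vendor/device ID read from config space offset 0.
    EFI_STATUS FindThunderboltPciIo(OUT EFI_PCI_IO_PROTOCOL **Found)
    {
      EFI_HANDLE *Handles;
      UINTN      Count;
      UINTN      Index;
      EFI_STATUS Status;

      Status = gBS->LocateHandleBuffer(ByProtocol, &gEfiPciIoProtocolGuid,
                                       NULL, &Count, &Handles);
      if (EFI_ERROR(Status)) {
        return Status;
      }

      for (Index = 0; Index < Count; Index++) {
        EFI_PCI_IO_PROTOCOL *PciIo;
        UINT32              Id;   // [15:0] = Vendor ID, [31:16] = Device ID

        Status = gBS->HandleProtocol(Handles[Index], &gEfiPciIoProtocolGuid,
                                     (VOID **)&PciIo);
        if (EFI_ERROR(Status)) {
          continue;
        }

        // Read the first dword of PCI config space (Vendor ID + Device ID).
        Status = PciIo->Pci.Read(PciIo, EfiPciIoWidthUint32, 0, 1, &Id);
        if (!EFI_ERROR(Status) &&
            (Id & 0xFFFF) == TB_VENDOR_ID && (Id >> 16) == TB_DEVICE_ID) {
          *Found = PciIo;
          gBS->FreePool(Handles);
          return EFI_SUCCESS;
        }
      }

      gBS->FreePool(Handles);
      return EFI_NOT_FOUND;
    }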
I initially thought the firmware was an option ROM, so it should be accessible via EFI_PCI_IO_PROTOCOL.RomImage. However, the value is 0. I then thought the Expansion ROM Base Address Register (XROMBAR) inside the PCI Configuration Space might point to it, but the XROMBAR is also 0. After extracting the firmware by reading the SPI flash with a hardware programmer, I found that it doesn't contain the option ROM signatures 0xAA55 and "PCIR" anywhere. So it seems the firmware is not an option ROM.
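Roughly, the checks look like this (a sketch continuing from the PciIo handle found above; offset 0x30 is the Expansion ROM BAR in a type 0 configuration header):

    // Sketch: inspect the copied option ROM and the Expansion ROM BAR.
    VOID CheckOptionRom(IN EFI_PCI_IO_PROTOCOL *PciIo)
    {
      UINT32 XRomBar = 0;

      // RomImage/RomSize hold the option ROM copied by the PCI bus driver.
      Print(L"RomImage=%p RomSize=0x%lx\n", PciIo->RomImage, PciIo->RomSize);

      // XROMBAR (type 0 header, offset 0x30): bit 0 = enable,
      // upper bits = ROM base address.
      PciIo->Pci.Read(PciIo, EfiPciIoWidthUint32, 0x30, 1, &XRomBar);
      Print(L"XROMBAR=0x%08x\n", XRomBar);
    }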
I then thought it could be stored in a firmware volume and thus be accessible via the EFI_FIRMWARE_VOLUME2_PROTOCOL. I searched through all the firmware volumes and found a few option ROMs, but none of them belongs to the ThunderBolt controller (judging by their vendor and device IDs).
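The search was along these lines (a sketch: walk every firmware volume and list its files with GetNextFile; error handling trimmed):

    VOID ListFvFiles(VOID)
    {
      EFI_HANDLE *Handles;
      UINTN      Count;
      UINTN      Index;

      if (EFI_ERROR(gBS->LocateHandleBuffer(ByProtocol,
            &gEfiFirmwareVolume2ProtocolGuid, NULL, &Count, &Handles))) {
        return;
      }

      for (Index = 0; Index < Count; Index++) {
        EFI_FIRMWARE_VOLUME2_PROTOCOL *Fv;
        UINT8                         *Key;

        if (EFI_ERROR(gBS->HandleProtocol(Handles[Index],
              &gEfiFirmwareVolume2ProtocolGuid, (VOID **)&Fv))) {
          continue;
        }

        // GetNextFile keeps its iteration state in a caller-allocated,
        // zero-initialized buffer of at least Fv->KeySize bytes.
        Key = AllocateZeroPool(Fv->KeySize);

        for (;;) {
          EFI_FV_FILETYPE        Type = EFI_FV_FILETYPE_ALL;
          EFI_GUID               Name;
          EFI_FV_FILE_ATTRIBUTES Attr;
          UINTN                  Size;

          if (EFI_ERROR(Fv->GetNextFile(Fv, Key, &Type, &Name, &Attr, &Size))) {
            break;   // no more files in this volume
          }
          Print(L"FV %u: file %g, type 0x%x, size 0x%lx\n",
                (UINT32)Index, &Name, Type, (UINT64)Size);
        }

        FreePool(Key);
      }
      gBS->FreePool(Handles);
    }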
Background
I was looking at the ThunderSpy exploit, and the report states that the ThunderBolt firmware is not verified during boot. I thought this was unusual, since my thinking at the time was that the firmware should be an option ROM, and option ROMs must be signed and verified by Secure Boot on every boot. From my findings so far, it seems the firmware isn't an option ROM and is most likely executed directly on the ThunderBolt controller chip rather than on the CPU, hence it is outside the purview of Secure Boot. I'm trying to access the firmware programmatically to see whether there are ways to defend against ThunderSpy-like attacks, where malicious modifications are made to the firmware.
Related
The STM32 has a read-out protection (RDP) Level 2 feature, set from the ST-LINK Utility app (Option Bytes), so that code cannot be read out via the debug interface (SWD) or any other way.
There was an explanation on the ST website. I tried the proposed tool, Device Firmware Update (DFU). However, it did not work.
This is the MCU I work on.
Read-out protection Level 2 was selected and applied. After that, the MCU cannot be seen or communicated with.
RDP Level 2 can't be disabled.
It is a permanent state: the debug interfaces are disabled and the factory bootloader is disabled.
In RDP Level 2, only a custom bootloader (i.e. code running from the chip's flash memory) can modify the flash, but it can't disable the protection.
You need to physically replace the microcontroller. At the moment your board is bricked and there is no way of unbricking it.
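For future boards, a custom bootloader or application can at least report the programmed RDP level at run time before you lock anything. A minimal sketch using the STM32 HAL (assuming an F4-family part and its header; field and macro names vary slightly between families):

    #include "stm32f4xx_hal.h"

    uint32_t ReadRdpLevel(void)
    {
      FLASH_OBProgramInitTypeDef ob = {0};

      HAL_FLASHEx_OBGetConfig(&ob);   // read back current option-byte settings

      // ob.RDPLevel is OB_RDP_LEVEL_0, OB_RDP_LEVEL_1 or OB_RDP_LEVEL_2.
      // Once Level 2 is programmed it can never be lowered again, so this
      // is purely a status check, not something that can "fix" a locked part.
      return ob.RDPLevel;
    }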
I want to learn how the Linux OS understands the underlying hardware. Can anyone suggest where to start to get this understanding? As of now I just know that the '/dev' sub-directory plays a vital role in that.
It contains the device special files, which are like a portal to the device driver, which in turn talks to the physical device.
I read somewhere that the udev daemon listens on a netlink socket to collect this information, and that the udev device manager detects the addition and removal of devices as they occur.
But even with these pieces, I am still not satisfied that I understand how Linux reads the hardware.
Please let me know where to start to understand this; I am thankful to anyone trying to help.
I think you first need to find out how memory mapping works: what the address space is and how it relates to physical memory. Then you can read about how the hardware is mapped into the address space and how to access it. It is a large amount of documentation to read.
Some of this information is in the Linux Documentation Project.
Additionally, some knowledge of electronics would be helpful.
In general, Linux needs some "channel" of communication with devices. This channel may be, for example, an ISA, PCI, or USB bus. PCI devices, for instance, are memory-mapped, and the Linux kernel communicates with them via memory accesses. So Linux first needs to see a given device in some memory area, and then it is able to configure the device and communicate with it.
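To make the "memory-mapped" part concrete, here is a rough user-space sketch that maps BAR0 of a PCI device through sysfs (the device path is a placeholder; it needs root and works only for memory BARs, not I/O port BARs):

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        const char *path = "/sys/bus/pci/devices/0000:03:00.0/resource0";
        size_t len = 4096;                 /* map just the first page of BAR0 */

        int fd = open(path, O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        volatile uint32_t *bar = mmap(NULL, len, PROT_READ, MAP_SHARED, fd, 0);
        if (bar == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

        /* Every read here is a memory access that goes out to the device. */
        printf("first register of BAR0: 0x%08x\n", bar[0]);

        munmap((void *)bar, len);
        close(fd);
        return 0;
    }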
In the case of USB devices it is a little more complicated, because USB devices are not memory-mapped. You need to configure the USB host controller first to be able to communicate with USB devices; every communication with a USB device goes through the USB host.
There are also devices which are not connected via ISA, PCI, or USB. They are connected directly to the processor and are visible at some memory address. This solution is usually found in embedded devices; ARM processors, for example, use this approach.
Regarding udev: it is a user-space application which listens for events from the Linux kernel and helps other applications recognize device addition and configuration.
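For example, a minimal event listener built on libudev looks roughly like this (compile with -ludev; it prints block-device add/remove events arriving over the kernel's netlink socket):

    #include <libudev.h>
    #include <poll.h>
    #include <stdio.h>

    int main(void)
    {
        struct udev *udev = udev_new();
        struct udev_monitor *mon = udev_monitor_new_from_netlink(udev, "udev");

        /* Only care about block devices (disks, partitions). */
        udev_monitor_filter_add_match_subsystem_devtype(mon, "block", NULL);
        udev_monitor_enable_receiving(mon);

        struct pollfd pfd = { .fd = udev_monitor_get_fd(mon), .events = POLLIN };

        for (;;) {
            if (poll(&pfd, 1, -1) <= 0)
                continue;

            struct udev_device *dev = udev_monitor_receive_device(mon);
            if (!dev)
                continue;   /* the monitor socket is non-blocking */

            printf("%s %s\n",
                   udev_device_get_action(dev),
                   udev_device_get_devnode(dev) ? udev_device_get_devnode(dev)
                                                : "(no node)");
            udev_device_unref(dev);
        }
    }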
I want to know who fills in the configuration space of a particular PCI device in the first place, when a new device is connected to the PCI bus. I know both the BIOS and the operating system can configure the PCI configuration space, but who gives the information about the device to them?
The read-only fields of the PCI configuration space, identifying the device and its capabilities, are built into the device, not filled in by software.
Some fields, such as the BARs, are configured by the BIOS, as part of its responsibility to set up the address map of the system. The rest of the fields are programmed by the OS or the device driver. (The BIOS may also have a driver for the device, if the device may be used to boot the system.)
Decisions of these three software components (BIOS, OS, and driver) are based on rules and policies built into the software by its designers and/or configured by the system installer or user. For example, BIOS setup menus often have settings to control where the BAR regions may be placed. In Windows, information used to configure devices may come from the registry.
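You can see this split for yourself by dumping one device's configuration header from sysfs; a rough sketch (the device path is a placeholder):

    /* Bytes 0x00-0x0B (IDs, class code) are hard-wired by the device;
       bytes 0x10-0x27 (the BARs) hold the addresses programmed by the
       BIOS/OS. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        uint8_t cfg[64];
        int fd = open("/sys/bus/pci/devices/0000:03:00.0/config", O_RDONLY);
        if (fd < 0 || read(fd, cfg, sizeof(cfg)) != sizeof(cfg)) {
            perror("config");
            return 1;
        }
        close(fd);

        uint16_t vid = cfg[0] | (cfg[1] << 8);
        uint16_t did = cfg[2] | (cfg[3] << 8);
        printf("vendor %04x device %04x\n", vid, did);

        for (int bar = 0; bar < 6; bar++) {
            int b = 0x10 + bar * 4;
            uint32_t v = (uint32_t)cfg[b]           | (uint32_t)cfg[b + 1] << 8 |
                         (uint32_t)cfg[b + 2] << 16 | (uint32_t)cfg[b + 3] << 24;
            printf("BAR%d = 0x%08x\n", bar, v);
        }
        return 0;
    }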
I've finished developing a PCIe driver for an FPGA under a Linux distribution. Everything works fine, but I'm wondering where the Base Address Register in the PCIe endpoint of the FPGA gets its base address. When I generated the PCIe endpoint I was able to set up the length of the BAR, but not more.
In the PCIe driver I call the standard functions like pci_enable_device, but I do not explicitly set up a base address.
So does the operating system set up the base address during startup? Or how does it work?
As an aside, I would like to know what initialization the operating system generally performs when a PCIe device is connected, since I can see my PCI device in lspci even when the driver is unloaded.
Kind regards
Thomas
Address allocation for the PCI devices is generally done at the BIOS level. Let us take the x86 platform as a reference. If we look closely at the system address map, it looks something like the address-map figure in BIOS Disassembly Ninjutsu, by Darmawan Salihun.
In the address map there is a dedicated space for mapping the PCI memory regions. The same can be seen in the output of /proc/iomem.
This implementation is platform dependent, and as the BIOS "knows" about the platform, it sets aside the addresses dedicated to the PCI slots. When a device is plugged into a slot, the BIOS interacts with the device and actually sets up the memory regions for it, so that the OS can make use of them.
Now, coming to the driver part: in Linux, drivers follow a specific structure known as the 'Linux Device Model', which consists of a core layer (the PCI core), host controller drivers (PCI controllers/masters) and client drivers (PCI devices). When a PCI device (client) is plugged into a slot, the corresponding host controller knows about the attachment and informs the PCI core about it, and hence the device appears in the output of lspci.
lspci shows the devices that have been identified by the host controller, whether or not they are bound to a driver. The PCI core then traverses the drivers registered in the system, finds a matching one, and attaches it to the device.
So the reason you are seeing the device in the output of lspci is that the host controller has identified the device and informed the PCI core; it doesn't matter whether any driver is attached to the device or not.
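In the driver itself you never choose a BAR address; you only read back what the firmware/kernel has already programmed. A rough sketch of a probe() showing this (the name "my_fpga" is a placeholder):

    #include <linux/module.h>
    #include <linux/pci.h>

    static int my_fpga_probe(struct pci_dev *pdev, const struct pci_device_id *id)
    {
        void __iomem *regs;
        int err;

        err = pci_enable_device(pdev);              /* wake the device up     */
        if (err)
            return err;

        err = pci_request_regions(pdev, "my_fpga"); /* claim all BARs         */
        if (err)
            goto disable;

        /* The BAR0 address was assigned by the BIOS/kernel, not by us. */
        dev_info(&pdev->dev, "BAR0 at %pR\n", &pdev->resource[0]);

        regs = pci_iomap(pdev, 0, 0);               /* map the whole of BAR0  */
        if (!regs) {
            err = -ENOMEM;
            goto release;
        }

        pci_set_drvdata(pdev, regs);
        return 0;

    release:
        pci_release_regions(pdev);
    disable:
        pci_disable_device(pdev);
        return err;
    }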
On most consumer-grade computers, BAR allocation seems to be done in the BIOS.
I suppose that in a hotplug-capable architecture this must be done, or at least triggered, by the OS.
I'm trying to develop an automounter for cryptofs-encrypted devices/partitions. The thing is that I don't have experience with the low-level layers of Linux.
Is there any way I can detect when a cryptofs device or partition has been inserted into the system? (e.g. when you insert a dongle with a regular partition and an encrypted one)
I've never tried it, but I would follow this approach:
In Linux, plug and play is handled by HAL and/or udev. HAL is a bit older, and most recent distributions use udev.
You can start by looking into "libudev". Using the libudev APIs you will be able to get information about connected devices.
This should help: http://www.signal11.us/oss/udev/
After that, open the device, read the filesystem information, and figure out whether it is cryptofs.
See if this answer helps: How to programmatically discover the filesystem without mounting the device (like "fdisk -l")
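As a rough sketch of that last step, libblkid can read the on-disk signature without mounting anything (link with -lblkid). For a LUKS/dm-crypt partition it reports the type "crypto_LUKS"; adjust the check if your "cryptofs" container uses a different signature:

    #include <blkid/blkid.h>
    #include <stdio.h>
    #include <string.h>

    int is_encrypted(const char *devnode)
    {
        blkid_probe pr = blkid_new_probe_from_filename(devnode);
        const char *type = NULL;
        int rc = 0;

        if (!pr)
            return 0;

        if (blkid_do_safeprobe(pr) == 0 &&
            blkid_probe_lookup_value(pr, "TYPE", &type, NULL) == 0 && type)
            rc = (strcmp(type, "crypto_LUKS") == 0);

        blkid_free_probe(pr);
        return rc;
    }

    int main(int argc, char **argv)
    {
        if (argc > 1)
            printf("%s: %s\n", argv[1],
                   is_encrypted(argv[1]) ? "encrypted" : "not encrypted");
        return 0;
    }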