What causes dma_map_page/dma_unmap_page to take longer on some hardware? - linux

I've been programming a Linux kernel module for several years for a PCIe device. One of its main features is to transfer data from the PCIe card to host memory using DMA.
I'm using streaming DMA, i.e. the user program allocates the memory, and my kernel module does the job of locking the pages and creating the scatter-gather structure. It works correctly.
However, when used on some more recent hardware with Intel processors, the calls to dma_map_page and dma_unmap_page take much longer to execute.
I've tried using dma_map_sg and dma_unmap_sg instead; they take approximately the same, longer, time.
I've tried splitting dma_unmap_sg into a first call to dma_sync_sg_for_cpu, followed by a call to dma_unmap_sg_attrs with the attribute DMA_ATTR_SKIP_CPU_SYNC. It works correctly, and I can see that the additional time is spent on the unmap operation, not on the sync.
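For reference, the split I describe looks roughly like this (a simplified sketch; dev, sgl and nents come from my own mapping code):

```c
#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

/* Sketch of the unmap split described above: sync the buffers for the
 * CPU first, then tear down the mapping while skipping the CPU sync
 * that has already been done. */
static void unmap_split(struct device *dev, struct scatterlist *sgl, int nents)
{
	/* This part turned out to be fast. */
	dma_sync_sg_for_cpu(dev, sgl, nents, DMA_FROM_DEVICE);

	/* This is where the additional time is spent. */
	dma_unmap_sg_attrs(dev, sgl, nents, DMA_FROM_DEVICE,
			   DMA_ATTR_SKIP_CPU_SYNC);
}
```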
I've tried playing with the Linux command line parameters relating to the iommu (on, force, strict=0), and also intel_iommu, with no change in the behavior.
Other hardware shows a decent transfer rate, i.e. more than 6 GB/s on PCIe3 x8 (max 8 GB/s).
On some recent hardware the issue limits the transfer rate to ~3 GB/s. (I've checked that the card is correctly configured for PCIe3 x8, and the programmer of the Windows device driver manages to achieve 6 GB/s on the same system. More happens behind the curtain on Windows, and I cannot get much information from him.)
On some hardware the behavior is either normal or slowed, depending on the Linux distribution (and, I guess, the Linux kernel version). On other hardware the roles are reversed, i.e. the slow one becomes the fast one and vice versa.
I cannot figure out the cause of this. Any clue?

The trouble was bounce buffers (SWIOTLB); I didn't know about them.
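For anyone else hitting this: bounce buffering means the kernel copies every mapped buffer to and from a region the device can address, which easily explains a halved transfer rate. One common trigger - an assumption here, not necessarily the cause on every system - is a DMA mask narrower than the physical addresses actually in use; declaring full 64-bit addressing (only if the device really supports it) avoids that particular case:

```c
#include <linux/dma-mapping.h>
#include <linux/pci.h>

/* Sketch: advertise 64-bit DMA addressing so buffers above 4 GB are not
 * bounced through SWIOTLB. Only valid if the PCIe device really drives
 * 64-bit addresses. */
static int setup_dma_mask(struct pci_dev *pdev)
{
	if (!dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)))
		return 0;

	/* Fall back to 32 bits; the kernel may then use bounce buffers. */
	return dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
}
```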

Related

PCIe device discovery algorithm pseudo code

I have a PCIe model written in SystemVerilog, although I think this question is language agnostic. The model performs PCIe configuration reads and writes and memory reads and writes perfectly in simulation. However, what I need to do is "discover" my PCIe device and configure my config space registers in simulation. Is there a boilerplate chunk of pseudo code that represents the Linux PCIe enumeration process, to which I can just add my own model's transaction functions so that I get a "bus walk", followed by BAR programming, SR-IOV enable if discovered, and MSI-X config? It seems like this would be a common exercise for a PCIe device, so maybe there is a model.
It isn't terribly difficult to do. Basically you loop through the config space, checking for each possible device on the first root bus, bus 0. When a device is found, you allocate a memory space for it based on its requested size and program the BARs accordingly. If you find any bridges, you also configure and enable them - the basic bridge registers for this are standard. This includes assigning the upstream and downstream bus numbers, which then allows you to enumerate the new downstream bus, and so on.
I had to do this once to access a PCI I/O card on a system that had no OS or other software environment. It wasn't too bad, and that was across two bridges from two vendors, as well as the I/O card registers and the CPU bus root bridge setup. This was PCI, not PCIe, but it would be very much the same. You could even do it with completely hard-coded numbers if the hardware never changed, but in my case there were a couple of variants, so I actually had to do some simple enumeration to find the device numbers dynamically. One gotcha is that you may have to delay a bit, or retry, to give all the devices time to come online before you try to access them.
In doing that I found this book to be invaluable: PCI System Architecture (4th Edition). I notice there is also a version for PCIe: PCI Express System Architecture (1st Edition). I would definitely get one of those if you haven't already. These books contain detailed algorithms and explanations about how to do all of this. At the time I didn't really use or refer to any code to speak of, but...
The best code resource I have found is U-Boot. It operates at a similarly low level, is totally self-contained, and is still fairly small and as simple as possible. For example, the enumeration appears to start with the function pci_init(), which calls a board-specific pci_xxx_init(). This then sets up the root bridge and calls pci_hose_scan_bus() in drivers/pci/pci.c to do the real work. Also check out the routines in drivers/pci/pci_auto.c, as well as the rest of the folder.
For your task you probably only need a very small subset and could just hack out parts of these files into a simple driver. Basically a for() loop and some pci_read/write_config() calls with logic to recognize your device and bridge IDs.
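As a rough illustration of that loop (a sketch only: pci_cfg_read32/pci_cfg_write32 are hypothetical stand-ins for your model's config-space accessors, and only 32-bit memory BARs on bus 0 are handled - no bridges, I/O BARs, 64-bit BARs, multi-function devices, SR-IOV or MSI-X):

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical accessors provided by your simulation model. */
uint32_t pci_cfg_read32(int bus, int dev, int fn, int reg);
void     pci_cfg_write32(int bus, int dev, int fn, int reg, uint32_t val);

/* Minimal enumeration sketch: scan bus 0, size and program 32-bit memory
 * BARs from a simple bump allocator. */
void scan_bus0(void)
{
	uint32_t next_mem = 0x80000000u;	/* arbitrary MMIO window base */

	for (int dev = 0; dev < 32; dev++) {
		uint32_t id = pci_cfg_read32(0, dev, 0, 0x00);
		if ((id & 0xffff) == 0xffff)	/* no device here */
			continue;

		printf("found %04x:%04x at 00:%02x.0\n",
		       id & 0xffff, id >> 16, dev);

		for (int bar = 0; bar < 6; bar++) {
			int reg = 0x10 + bar * 4;

			/* Size the BAR: write all ones, read back the mask. */
			pci_cfg_write32(0, dev, 0, reg, 0xffffffffu);
			uint32_t mask = pci_cfg_read32(0, dev, 0, reg);
			if (mask == 0 || (mask & 1))	/* unimplemented or I/O BAR */
				continue;

			uint32_t size = ~(mask & ~0xfu) + 1;
			next_mem = (next_mem + size - 1) & ~(size - 1);	/* align */
			pci_cfg_write32(0, dev, 0, reg, next_mem);
			next_mem += size;
		}

		/* Enable memory decoding and bus mastering in the command register. */
		uint32_t cmd = pci_cfg_read32(0, dev, 0, 0x04);
		pci_cfg_write32(0, dev, 0, 0x04, cmd | 0x6);
	}
}
```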

How to generate a steady 37kHz GPIO trigger from inside linux kernel?

I have a microcontroller taking care of infrared TX carrier-wave generation at the moment, but I started wondering whether I could dispose of it and do this work on the Linux side, thus bringing the cost of my embedded system down.
I'm running on a Freescale i.MX233 (454 MHz ARM9), and if I access the registers directly through /dev/mem, I can achieve a quite steady 5 MHz toggle on a GPIO pin.
Since I need 37 kHz, I started looking for ways of slowing it down, but it seems that at least nanosleep() is far too coarse for this purpose.
I found one solution of calling rand() in a for loop as a delay, and I seem to be able to generate a 38.4 kHz signal quite well. However, there is some unacceptable jitter from time to time according to the oscilloscope. (I understand that this is quite a waste of resources, but when the TX needs to be done, the system really has no other tasks.)
My questions:
Freescale's kernel code (3.8 branch) doesn't have the CONFIG_PREEMPT_RT patches, so that is one thing maybe I should look into, but before that:
Could I achieve more accurate timing by writing a kernel module that drives the GPIO from inside the kernel? I do need to read some data from user space (the data to be sent), but other than that I only need to toggle the LED at the specified frequency on the GPIO, so the driver should be pretty simple.
Can I force the priority of my driver so that other tasks don't interrupt this GPIO toggling? (Sending the data currently takes roughly 400 ms, and it's done very seldom.)
Is there some better way to create an interrupt at 37 kHz, so that I don't stall the system with software?
A microcontroller is perfect for this kind of task, but it would be nice to avoid that cost overhead if possible...
The i.MX23 PWM in "Multi-Chip Attachment Mode" is designed exactly for this requirement.
Use one of the PWMs in "Multi-Chip Attachment Mode", for example, assuming you are using a 24 MHz clock, with
MATT=1 (enable multi-chip attachment mode)
MATT_SEL=1 (use the 24 MHz clock)
CDIV=0x2 (or DIV_4, i.e. divide by 4)
INACTIVE_STATE=0x2 or 0x3
ACTIVE_STATE=0x3 or 0x2
PERIOD=175 (i.e. 176-1, which gives 24 MHz / 4 / 176 ≈ 34.1 kHz; for 37 kHz you would need PERIOD≈161, i.e. 162-1)
If you use a 32 MHz clock you will need other CDIV and PERIOD parameters to get to 34 kHz.
See the "i.MX23 Applications Processor Reference Manual" for example code. If I am not mistaken the driver code is in arch/arm/plat-mxc/pwm.c but it doesn't seem to support the MATT mode. You will probably have to extend the code yourself.
Regarding the implementation -
The above answer relates to the CPU only. In practice, the ability to implement the idea depends on the board design. The board would need a header (pins for external connection) that connects to a GPIO pin that can be routed via the pinmux to one of the PWMs. I would assume that most reference designs have at least one PWM-configurable GPIO exposed through a header. Then the question is whether there is only one and whether you are already using it for some other control purpose.
After determining that there is a header with a free PWM-configurable GPIO, you need to configure the pinmux and activate the PWM. There are instructions for this in the processor reference manual noted above. Most systems do this configuration in the boot loader's board_init() (assuming U-Boot), although it can probably also be done from userspace with some mmap trickery after Linux boots.
Finally you would need to write a driver based on the interface to the PWM module in platform-mxc_pwm.c.
If you are using the i.MX23 EVK 10.05 you might be able to modify the LED PWM driver, since it is already configured at the level of the bootloader and kernel, and connect your device to the LED output instead of the LED. (You will need a hardware technician to help you with this.) Make sure you configure the kernel with CONFIG_LEDS_MXS.
The above comments regarding implementation are somewhat speculative since I don't know the EVK. Perhaps someone who knows it can improve on this.
Update September 21, 2013
Another way to generate a 37 kHz signal with the i.MX23, or with any SoC with a similar ARM CPU core, is to use an unused on-chip timer to generate a FIQ interrupt at the required frequency and write a FIQ interrupt handler that toggles a GPIO pin. Maxime Ripard posted a complete example of this method using the i.MX28 SoC on his Free Electrons blog on April 30 this year. To use this method you need an unused timer, and you must not be using the FIQ interrupt for another purpose, such as one of the SPI, camera, or brownout-detection drivers that use the ARM FIQ. You will also need to write the ISR in ARM assembler.
The best way to get a 37 kHz signal would be to find some serial/audio/PWM output that can generate it in hardware.
It might be possible to raise the priority of your userspace process, but this won't help against interrupts or high-priority kernel tasks.
An RT kernel would allow you to get priority over more kernel tasks, but wouldn't help against all interrupts.
I don't know if you will be able to get the maximum latency below the 27 µs period of a 37 kHz signal; I think it's unlikely.
Doing this in the kernel would help because you could disable interrupt handling.
However, disabling interrupts for as long as 400 ms is frowned upon.
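If you do go the in-kernel route, the shape of the code is roughly the following (a sketch using the legacy gpiolib and delay APIs; as said above, keeping interrupts off for a full 400 ms transmission is frowned upon, so a real driver would chop the data into short bursts):

```c
#include <linux/delay.h>
#include <linux/gpio.h>
#include <linux/irqflags.h>

/* Sketch: bit-bang a ~37 kHz carrier on 'gpio' for 'cycles' periods with
 * interrupts disabled on the local CPU to reduce jitter. */
static void blast_carrier(unsigned int gpio, unsigned long cycles)
{
	unsigned long flags;

	local_irq_save(flags);
	while (cycles--) {
		gpio_set_value(gpio, 1);
		ndelay(13500);		/* roughly half of the 27 us period */
		gpio_set_value(gpio, 0);
		ndelay(13500);
	}
	local_irq_restore(flags);
}
```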

Why do we need a bootloader in an embedded device?

I'm working with an embedded Linux kernel on an ARM Cortex-A8.
I know how the bootloader works and what job it does. But I've got a question: why do we need a bootloader, and why was the bootloader born?
Why can't we load the kernel directly into RAM from flash memory, without a bootloader? What would happen if we did? In fact, the processor will not support it, but why do we follow this procedure?
In the context of Linux, the boot loader is responsible for some predefined tasks. As this question is arm tagged, I think that ARM booting might be a useful resource. Specifically, the boot loader was/is responsible for setting up an ATAG list describing the amount of RAM, the kernel command line, and other parameters. One of the most important parameters is the machine type. With device trees, an entire description of the board is passed instead. This makes it impossible to boot a stock ARM Linux without some code to set up these parameters.
The parameters allow one generic Linux kernel to support multiple devices. For instance, an ARM Debian kernel can support hundreds of different board types. U-Boot or another boot loader can determine this information dynamically, or it can be hard-coded for the board.
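To make the "parameters" concrete: with the older ATAG mechanism the boot loader builds a tagged list in RAM and hands its address (together with the machine type number) to the kernel. A stripped-down sketch, using simplified local definitions rather than the kernel's own setup.h, and example memory values that are not from any particular board:

```c
#include <stdint.h>
#include <string.h>

/* Simplified local definitions; the kernel's own ones are in
 * arch/arm/include/uapi/asm/setup.h. */
#define ATAG_CORE    0x54410001
#define ATAG_MEM     0x54410002
#define ATAG_CMDLINE 0x54410009
#define ATAG_NONE    0x00000000

struct tag_header { uint32_t size; uint32_t tag; };	/* size in 32-bit words */

static uint32_t *tag;	/* cursor into the ATAG area in RAM */

static void add_tag(uint32_t id, const void *data, uint32_t bytes)
{
	struct tag_header *hdr = (struct tag_header *)tag;

	hdr->size = 2 + (bytes + 3) / 4;	/* header + payload, in words */
	hdr->tag  = id;
	if (bytes)
		memcpy(tag + 2, data, bytes);
	tag += hdr->size;
}

/* Example: 64 MB of RAM at 0x40000000 plus a console command line
 * (both values are made up for illustration). */
static void setup_atags(void *atag_base)
{
	uint32_t core[3] = { 1, 4096, 0 };		/* flags, pagesize, rootdev */
	uint32_t mem[2]  = { 64 << 20, 0x40000000 };	/* size, start */
	const char *cmdline = "console=ttyS0,115200";

	tag = atag_base;
	add_tag(ATAG_CORE, core, sizeof(core));
	add_tag(ATAG_MEM,  mem,  sizeof(mem));
	add_tag(ATAG_CMDLINE, cmdline, strlen(cmdline) + 1);

	/* Terminate the list: ATAG_NONE with size 0. */
	((struct tag_header *)tag)->size = 0;
	((struct tag_header *)tag)->tag  = ATAG_NONE;
}
```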
You might also like to look at the bootloader info page here at Stack Overflow.
A basic system might be able to set up the ATAGs and copy NOR flash to SRAM. However, it is usually a little more complex than this. Linux needs RAM set up, so you may have to initialize an SDRAM controller. If you use NAND flash, you have to handle bad blocks, and the copy may be a little more complex than a memcpy().
Linux often has some latent driver bugs where a driver assumes that a clock is already initialized. For instance, if U-Boot always initializes an Ethernet clock for a particular machine, the Linux Ethernet driver may have neglected to set up this clock. This can be especially true with clock trees.
Some systems require boot image formats that are not supported by Linux; for example, a special header which can initialize hardware immediately, like configuring the device to read the initial code from. Additionally, there is often hardware that should be configured immediately; a boot loader can do this quickly, whereas the normal structure of Linux may delay this significantly, resulting in I/O conflicts, etc.
From a pragmatic perspective, it is simpler to use a boot loader. However, there is nothing to prevent you from altering Linux's source to boot directly from it, although it may be like pasting the boot loader code directly onto the start of Linux.
See also: Coreboot, U-Boot, and Wikipedia's comparison of boot loaders. Barebox is a lesser-known but well-structured and modern boot loader for ARM. RedBoot is also used in some ARM systems; RedBoot partitions are supported in the kernel tree.
A boot loader is a computer program that loads the main operating system or runtime environment for the computer after completion of the self-tests.
^ From Wikipedia Article
So basically the bootloader is doing just what you wanted: copying data from flash into operating memory. It's really that simple.
If you want to know more about bootstrapping the OS, I highly recommend you read the linked article. Apart from the tests, the boot phase also consists of checking peripherals and some other things. Skipping them makes sense only on very simple embedded devices, and that's why their bootloaders are even simpler:
Some embedded systems do not require a noticeable boot sequence to begin functioning and when turned on may simply run operational programs that are stored in ROM.
The same source
The primary bootloader is usually built into the silicon and performs the load of the first user code that will be run in the system.
The bootloader exists because there is no standardized protocol for loading the first code, since it is chip dependent. Sometimes the code can be loaded through a serial port, a flash memory, or even a hard drive. It is the bootloader's function to locate it.
Once the user code is loaded and running, the bootloader is no longer used and the correctness of the system's execution is the user's responsibility.
In the embedded Linux chain, the primary bootloader will set up and run U-Boot. U-Boot will then find the Linux kernel and load it.
Why can't we load the kernel directly into RAM from flash memory, without a bootloader? What would happen if we did? In fact, the processor will not support it, but why do we follow this procedure?
Bartek, Artless, and Felipe all give parts of the picture.
Every embedded processor type (e.g. 386EX, Cortex-A53, EM5200) will do something automatically when it is reset or powered on. Sometimes that something is different depending on whether the power is cycled or the device is reset. Some embedded processors allow you to change that something based on voltages applied to different pins when the device is powered or reset.
Regardless, there is a limited amount of something that a processor can do, because of the physical space on the processor required to define that something, whether it is on-chip flash, instruction microcode, or some other mechanism.
This limit means that the something is
fixed purpose, does one thing as quickly as possible.
limited in scope and capability, typically loading a small block of code (often a few kilobytes or less) into a fixed memory location and executing from the start of the loaded code.
unmodifiable.
So what a processor does in response to reset or power-cycle cannot be changed, and it cannot do very much; we also don't want it to automatically copy hundreds of megabytes or gigabytes into memory which may not exist or may not be initialized, a copy which could take a looooong time.
So....
We set up a small program which is smaller than the smallest size permitted across all of the devices we are going to use. That program is stored wherever the something needs it to be.
Sometimes the small program is U-Boot. Sometimes even U-Boot is too big for initial load, so the small program then in turn loads U-Boot.
The point is that whatever gets loaded by the something is modifiable as needed for a particular system. If it is U-Boot, great; if not, it knows where to load the main operating system or where to load U-Boot (or some other bootloader).
U-Boot (speaking of bootloaders in general) then configures a minimal set of devices, memory, chip settings, etc., to enable the main OS to be loaded and started. The main OS init takes care of any additional configuration or initialization.
So the sequence is:
Processor power-on or reset
Something loads initial boot code (or U-Boot style embedded bootloader)
Initial boot code (may not be needed)
U-Boot (or other general embedded bootloader)
Linux init
The kernel requires the hardware on which it runs to be in a particular state. All the hardware you use needs to be checked for its state and initialized for its further operation. This is one of the main reasons to use a boot loader in an embedded (or any other) environment, apart from its use to load the kernel image into RAM.
When you turn on a system, the RAM is also not in a useful state (fully initialized and ready to use) for us to load the kernel into it. Therefore, we cannot load a kernel directly (to answer your question), and thus arises the need for a construct to initialize it.
Apart from what is stated in all the other answers - which is correct - in some cases the system has to go through different execution modes; take TrustZone for secure ARM chips as an example. It is still possible to consider this a sort of hardware initialization, but what makes it peculiar is that there are additional limitations (e.g. available memory) that make it impractical, if not impossible, to do everything in a single binary, so multiple bootloader stages are used.
Furthermore, for security reasons, each of them is signed and can perform its job only if it meets the security requirements.

Address space identifiers using qemu for i386 linux kernel

Friends, I am working on an in-house architectural simulator which is used to simulate the timing effect of code running under different architectural parameters like the cores, memory hierarchy, and interconnects.
I am working on a module that takes the actual trace of a running program from an emulator like "PinTool" or "qemu-linux-user" and feeds this trace to the simulator.
Till now my approach was like this:
1) Take the objdump of a binary executable and parse this information.
2) Now the emulator just has to feed me an instruction pointer and other info like the load/store address.
Such approaches work only if the program content is known.
But now I have been trying to take traces of an executable running on top of a standard Linux kernel. The problem now is that the base kernel image does not contain the code for LKMs (Loadable Kernel Modules). Also, the daemons are not known when starting a kernel.
So, my approach to this solution is :
1) Use qemu to emulate a machine.
2) When an instruction is encountered for the first time, I will parse it and save this info for later.
3) Create a helper function which sends the IP and load/store address when an instruction is executed.
I am stuck at step 2: how do I differentiate between different processes from qemu, which is just an emulator and does not know anything about the guest OS?
I can modify the scheduler of the guest OS, but I am really not able to figure out the way forward.
Sorry if the question is very lengthy. I know I could have abstracted some parts, but I felt that some of it gives the context of the problem.
In the first case, using qemu-linux-user to perform user-mode emulation of a single program, the task is quite easy because the memory is linear and there is no virtual memory involved in the emulator. The second case, whole-system emulation, is a lot more complex, because you basically have to parse the addresses out of the kernel structures.
If you can get the virtual addresses directly out of QEmu, your job is a bit easier; then you just need to identify the process and everything else works just like in the single-process case. You might be able to get the PID by faking a system call to getpid().
Otherwise, this all seems quite similar to debugging a system from a physical memory dump. There are some tools for this task. They are probably too slow to run for every instruction, but you can look for hints there.
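One hedged way to make the "identify the process" part concrete for i386: each user process has its own page tables, so the guest's CR3 value (the page-directory base) can serve as an address-space identifier in the trace. A minimal sketch of a trace record and logging helper in plain C - it is not tied to QEMU's internal helper API, and how you fetch pc, addr and CR3 from the emulator is left to your instrumentation:

```c
#include <stdint.h>
#include <stdio.h>

/* One record per traced event; CR3 differs per user process on i386,
 * so it distinguishes address spaces without knowing the guest OS. */
struct trace_rec {
	uint32_t asid;		/* guest CR3 at the time of the access */
	uint32_t pc;		/* guest virtual instruction pointer */
	uint32_t addr;		/* load/store virtual address, 0 if none */
	uint32_t is_store;	/* 0 = load or none, 1 = store */
};

static FILE *trace_fp;

/* Called from the instrumentation hook for every executed instruction. */
void trace_event(uint32_t cr3, uint32_t pc, uint32_t addr, int is_store)
{
	struct trace_rec rec = { cr3, pc, addr, is_store ? 1u : 0u };

	if (!trace_fp) {
		trace_fp = fopen("sim_trace.bin", "wb");
		if (!trace_fp)
			return;
	}
	fwrite(&rec, sizeof(rec), 1, trace_fp);
}
```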

Is forcing I2C communication safe?

For a project I'm working on, I have to talk to a multi-function chip via I2C. I can do this from Linux user space via the /dev/i2c-1 interface.
However, it seems that a driver is talking to the same chip at the same time. This results in my I2C_SLAVE accesses failing with an errno value of EBUSY. Well - I can override this via the ioctl I2C_SLAVE_FORCE. I tried it, and it works. My commands reach the chip.
Question: Is it safe to do this? I know for sure that the address ranges that I write are never accessed by any kernel driver. However, I am not sure whether forcing I2C communication that way may confuse some internal state machine or so. (I'm not that into I2C, I just use it...)
For reference, the hardware facts:
OS: Linux
Architecture: TI OMAP3 3530
I2C-Chip: TWL4030 (does power, audio, usb and lots of other things..)
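For reference, the forced user-space access looks roughly like this (a sketch; the 0x48 slave address and 0x10 register are placeholders, not the TWL4030's real ones):

```c
#include <fcntl.h>
#include <linux/i2c-dev.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/dev/i2c-1", O_RDWR);
	if (fd < 0)
		return 1;

	/* I2C_SLAVE fails with EBUSY because a kernel driver owns the
	 * address; I2C_SLAVE_FORCE binds to it anyway. */
	if (ioctl(fd, I2C_SLAVE_FORCE, 0x48) < 0)
		return 1;

	/* Simple write-register-then-read-byte transaction. */
	uint8_t reg = 0x10;
	uint8_t val;
	if (write(fd, &reg, 1) != 1 || read(fd, &val, 1) != 1)
		return 1;

	close(fd);
	return 0;
}
```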
I don't know that particular chip, but often you have commands that require a sequence of writes: first to one address to set a certain mode, then you read or write another address - where the function of the second address changes based on what you wrote to the first one. So if the driver is in the middle of one of those operations and you interrupt it (or vice versa), you have a race condition that will be difficult to debug. For a reliable solution, you had better communicate through the chip's driver...
I mostly agree with @Wim, but I would like to add that this can definitely cause irreversible problems, or destruction, depending on the device.
I know of a gyroscope (the L3GD20) that requires that you don't write to certain locations. The way the chip is set up, these locations contain manufacturer settings which determine how the device functions and performs.
This may seem like an easy problem to avoid, but if you think about how I2C works, all of the bytes are passed one bit at a time. If you interrupt in the middle of the transmission of another byte, the results are not only truly unpredictable, but the risk of permanent damage also rises sharply. This is, of course, entirely up to the chip and how it handles the problem.
Since microcontrollers tend to operate at speeds much faster than the bus speeds allowed on I2C, and since the bus speeds themselves vary based on the speeds at which devices process the information, the best bet is to insert pauses or loops between transmissions that wait for things to finish. If you have to, you can even insert a timeout. If these pauses aren't working, then something is wrong with the implementation.

Resources