Using reserved memory in Linux device drivers

When writing a device driver that lets the device and the user-space code share some memory, are there any benefits in using reserved memory for this task?
Some things I can think of:
If the reserved memory is assigned to this particular device alone, other processes won't be able to use up available memory and leave nothing for the driver. This might be especially useful when the driver requires a large contiguous chunk of memory.
The driver could always use the same fixed (physical) memory address, which might be something the device requires.
Are there any other good reasons (not) to use reserved memory for the driver in this situation?

Related

kernel driver or user space driver?

I would like to ask your advice on the following: I need to write drivers for an OMAP3 board, for accessing an external DSP through an FPGA (over the GPMC interface). The driver is required to load a file to the DSP and to read/write buffers from the DSP. There is already an FPGA driver in the kernel. The kernel is 2.6.32. So I thought of the following options:
writing a DSP driver in the kernel, which uses the existing FPGA driver.
writing a user-space driver which interfaces with the FPGA kernel driver.
writing a user-space driver using UIO, which will not use the kernel FPGA driver, but will access the FPGA directly as part of a single, complete user-space DSP driver.
What do you think is the preferred option?
What is the advantage of a kernel driver over a user-space driver, and vice versa?
Thanks, Ran
* User-space driver:
Easier to debug.
Loads of libraries to support you.
Allows you to hide the details of your IP if you want to (people will really hate you if you do!).
A crash won't affect the whole system.
Higher latency in handling interrupts, since the kernel has to relay the interrupt to user space somehow.
You can't control access to your device from user space.
* Kernel-space driver:
Harder to debug.
Only the Linux kernel's own frameworks are available to support you.
You can always provide a binary blob to hide the details of your IP but this is annoying since it has to be generated against a specific kernel.
A crash will bring down the whole system.
Less latency in handling interrupts.
You can control access to your device from kernel space because it's a global context that all processes see.
As a kernel engineer I'm more comfortable/happy hacking code in a kernel context; that's probably why I would write the whole driver in the kernel.
However, I would say that the best thing to do is to divide the functionality of your driver into units and only put the unit in the kernel when there's a reason to do so.
For example:
If your device has a shared resource (like an MMU or a hardware FIFO) and you want multiple processes to be able to use it safely, then you probably need some buffer manager in the kernel, with all the processes communicating with it through ioctl.
If your driver needs to respond to an interrupt as fast as possible (very low latency), then you will need to put the interrupt-handling part of the code in the kernel interrupt handler, rather than putting it in user space and notifying user space when an interrupt occurs.
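To make the interrupt example concrete, here is a minimal kernel-module sketch in which the ISR does the time-critical work in kernel context and then wakes user space; the IRQ number and the "dsp-demo" name are placeholders, not taken from the question:

    #include <linux/module.h>
    #include <linux/interrupt.h>
    #include <linux/wait.h>

    #define DSP_IRQ 42   /* hypothetical IRQ line */

    static DECLARE_WAIT_QUEUE_HEAD(dsp_wq);
    static int dsp_event;

    static irqreturn_t dsp_isr(int irq, void *dev_id)
    {
        /* the time-critical work runs here, in interrupt context */
        dsp_event = 1;
        wake_up_interruptible(&dsp_wq);  /* then user space is notified */
        return IRQ_HANDLED;
    }

    static int __init dsp_init(void)
    {
        return request_irq(DSP_IRQ, dsp_isr, 0, "dsp-demo", NULL);
    }

    static void __exit dsp_exit(void)
    {
        free_irq(DSP_IRQ, NULL);
    }

    module_init(dsp_init);
    module_exit(dsp_exit);
    MODULE_LICENSE("GPL");

A user-space process would then sleep in read() or poll() on the driver's device node and wake up only after the latency-critical work is already done.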

Linux: allocate memory at a specific physical address

I am testing a PCI Endpoint driver and would like to do a simple copy from the PCI RootPort side to the PCI Endpoint side. On the PCI Endpoint side, we have address translation from PCI addresses to CPU physical addresses. We can configure the CPU physical address in the translation so that it maps to a specific DRAM region. The problem is: how can we allocate a memory buffer at that specific CPU physical address, to make sure the write from the RootPort side really works?
Any recommendations are appreciated. Thanks a lot!
You need to first reserve the physical memory area. The easiest (but ugly) way to do that is to pass a mem= parameter on the kernel command line, which excludes the physical memory range you are interested in from kernel memory management, and then use ioremap() to get a virtual mapping of it.
For example, if your machine has 256 MB of RAM, use mem=255M to reserve the last megabyte for your own use, and then map it via ioremap().
NOTE: original answer fixed based on feedback from @Adrian Cox.
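As a rough illustration of the mem= approach above, here is a minimal module sketch that maps the excluded megabyte; the base address assumes RAM starts at physical address 0, which is an assumption for illustration only:

    #include <linux/module.h>
    #include <linux/io.h>

    #define RESV_BASE 0x0FF00000UL   /* 255 MB -- assumes RAM begins at 0 */
    #define RESV_SIZE 0x00100000UL   /* the 1 MB hidden via mem=255M */

    static void __iomem *resv;

    static int __init resv_init(void)
    {
        resv = ioremap(RESV_BASE, RESV_SIZE);
        if (!resv)
            return -ENOMEM;
        iowrite32(0x12345678, resv);  /* the region is now usable */
        return 0;
    }

    static void __exit resv_exit(void)
    {
        iounmap(resv);
    }

    module_init(resv_init);
    module_exit(resv_exit);
    MODULE_LICENSE("GPL");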
If you can remap the translation on the fly, then you should work like any driver that uses DMA. Your basic reference for this is Chapter 15 of LDD3, plus the Linux DMA API.
What you are allocating is a DMA-coherent buffer, via dma_alloc_coherent. On most platforms you should be able to pass in a null struct device pointer and get a generic DMA address. This gives you both a kernel virtual address for accessing the data and a DMA address, which is the CPU physical address to map through your translation layer.
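A minimal sketch of that allocation; the buffer size is a made-up example, and note that while the null-device trick worked on the older platforms the answer refers to, recent kernels require a real struct device:

    #include <linux/dma-mapping.h>

    #define BUF_SIZE (64 * 1024)      /* hypothetical buffer size */

    static void *cpu_addr;            /* kernel virtual address */
    static dma_addr_t dma_handle;     /* address for the translation window */

    static int alloc_shared_buf(struct device *dev)
    {
        /* dev may be NULL only on older platforms; recent kernels
         * require a real struct device here */
        cpu_addr = dma_alloc_coherent(dev, BUF_SIZE, &dma_handle,
                                      GFP_KERNEL);
        if (!cpu_addr)
            return -ENOMEM;
        /* program dma_handle into the endpoint's PCI-to-CPU translation */
        return 0;
    }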
If your address translation is not very flexible, you may need to modify the platform code for your endpoint to reserve this buffer early on, in order to meet address-alignment requirements. This is a bit more complicated, but there is an update of the bigphysarea patch for recent kernels that may help as a starting point.

Access NOR memory from userspace

On my Compulab cm-x270 CoM, the Linux kernel is stored in NOR flash. The kernel was built without MTD support, so after boot I can't access the NOR as an MTD partition. My goal is to update this kernel from userspace. Yes, updating from the bootloader over tftp would be the easiest way, but I can't use it for this task. Is it possible to map the NOR through /dev/mem, or is there another way?
I had a similar situation with SRAM. I wrote a block device driver for /dev/sram. Access through a device driver preserves all of the Linux security rules.
You didn't mention how this NOR memory is accessed. If it's in the physical memory address space, then the driver would perform request_mem_region() and ioremap() to map the NOR memory into the kernel's virtual address space. User programs can then use standard file I/O on this block (or char) device.
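A minimal sketch of that mapping step, with a hypothetical flash base address and size:

    #include <linux/ioport.h>
    #include <linux/io.h>

    #define NOR_BASE 0x00000000UL   /* hypothetical flash base address */
    #define NOR_SIZE 0x01000000UL   /* hypothetical 16 MB flash */

    static void __iomem *nor;

    static int nor_map(void)
    {
        if (!request_mem_region(NOR_BASE, NOR_SIZE, "nor-flash"))
            return -EBUSY;
        nor = ioremap(NOR_BASE, NOR_SIZE);
        if (!nor) {
            release_mem_region(NOR_BASE, NOR_SIZE);
            return -ENOMEM;
        }
        return 0;   /* the device's read()/write() paths can now use nor */
    }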

Linux: create mmap()able virtual file (e.g. SPI memory)

I have a char device which enables access to an external SPI memory, and I'd like to mmap() the external memory so that I can access it from a program as if it were normal memory.
If I use the usual mmap() page-remapping implementation on the char device file, it only lets me see a device memory region, not the contents behind the virtual char file...
Is there a trick to allow me to do that?
TIA
If the character device driver provided an mmap implementation, it would work. There's probably a good reason it doesn't:
Memory access instructions create memory transactions on the bus. An SPI memory is not addressable that way (although the SPI controller may use memory-mapped I/O, that's for its own register-level interface, not the memory content). You could build an SPI memory controller with a memory-bus interface, I suppose, but you'd lose the device-independence of the SPI standard.
Emulating a memory region is possible (grab a page of memory, mark it for no access, and handle the resulting access violations, SIGBUS and SIGSEGV), but that would be horribly inefficient. You sometimes find virtual machines doing this, and performance is very bad.
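For what it's worth, here is a small user-space sketch of that fault-driven emulation; it only demonstrates the mechanism (handling SIGSEGV on a no-access page), with the actual SPI transfer left as a comment:

    #define _GNU_SOURCE
    #include <signal.h>
    #include <stdio.h>
    #include <sys/mman.h>

    static char *page;

    static void on_fault(int sig, siginfo_t *si, void *ctx)
    {
        /* a real implementation would run the SPI transfer here, fill
         * the page with the result, and re-protect it afterwards */
        mprotect(page, 4096, PROT_READ | PROT_WRITE);
    }

    int main(void)
    {
        struct sigaction sa = { .sa_sigaction = on_fault,
                                .sa_flags = SA_SIGINFO };
        sigaction(SIGSEGV, &sa, NULL);

        page = mmap(NULL, 4096, PROT_NONE,
                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        page[0] = 1;            /* faults once; the handler grants access */
        printf("after fault: %d\n", page[0]);
        return 0;
    }

Every first touch of a page costs a signal delivery plus a system call, which is why the answer calls this horribly inefficient.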
It sounds like you would need some sort of driver that translates the memory-region accesses into commands sent through the character-oriented interface. This would probably be a pretty straightforward block device driver.

How do you access the high speed SRAM in ARM CPUs from user-mode code on WinCE?

When writing embedded ARM code, it's easy to access the built-in zero-wait-state memory to accelerate your application. Windows CE doesn't expose this to user-mode applications, but there is probably a way to do it. The internal SRAM is usually used for the video buffer, but there's usually some left over. Anyone know how to do it?
Thanks,
Larry B.
Unfortunately you can't access the high-speed RAM from user-mode processes.
The only way to get access to it on Windows CE is to write a driver that maps the fixed address of the TCM into the user-mode process's address space and passes the pointer to the process.
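A hedged sketch of what that driver-side mapping might look like, using the classic WinCE VirtualAlloc()/VirtualCopy() pattern; the SRAM base address and size are placeholders for the SoC's actual values:

    #include <windows.h>

    #define TCM_PHYS 0x40000000     /* placeholder: internal SRAM base */
    #define TCM_SIZE 0x00010000     /* placeholder: 64 KB of SRAM */

    LPVOID MapTcm(void)
    {
        LPVOID va = VirtualAlloc(NULL, TCM_SIZE, MEM_RESERVE,
                                 PAGE_NOACCESS);
        if (!va)
            return NULL;

        /* with PAGE_PHYSICAL, VirtualCopy expects the source physical
         * address divided by 256 */
        if (!VirtualCopy(va, (LPVOID)(TCM_PHYS >> 8), TCM_SIZE,
                         PAGE_READWRITE | PAGE_NOCACHE | PAGE_PHYSICAL)) {
            VirtualFree(va, 0, MEM_RELEASE);
            return NULL;
        }
        return va;
    }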
