On my Compulab cm-x270 CoM, the Linux kernel is stored in NOR flash. The kernel was built without MTD support, so after boot I can't access the NOR as an MTD partition. My goal is to update this kernel from userspace. Yes, updating from the bootloader via TFTP would be the easiest way, but I can't use it for this task. Is it possible to map the NOR through /dev/mem, or is there some other way?
I had a similar situation with SRAM. I wrote a block device driver for /dev/sram. Access through a device driver preserves all of the Linux security rules.
You didn't mention how this NOR memory is accessed. If it's in the physical memory address space, then the driver would perform request_mem_region() and ioremap() to map the NOR memory into virtual kernel memory space. Then user programs can use standard file I/O on this block (or char) device.
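For illustration, here is a minimal sketch of that approach as a read-only char device. The physical base and size of the NOR window are assumptions; take the real values from the cm-x270 memory map. Note that reading NOR this way is simple, but updating the kernel image would additionally require issuing the flash chip's CFI erase/program command sequences, since NOR cannot be written with plain memory stores:

    #include <linux/module.h>
    #include <linux/kernel.h>
    #include <linux/fs.h>
    #include <linux/io.h>
    #include <linux/uaccess.h>

    #define NOR_PHYS_BASE 0x00000000UL  /* hypothetical: board-specific */
    #define NOR_SIZE      0x02000000UL  /* hypothetical: 32 MiB window  */

    static void __iomem *nor_base;
    static int major;

    static ssize_t nor_read(struct file *f, char __user *buf,
                            size_t count, loff_t *ppos)
    {
        char tmp[256];
        size_t n;

        if (*ppos >= NOR_SIZE)
            return 0;
        n = min3(count, (size_t)(NOR_SIZE - *ppos), sizeof(tmp));
        memcpy_fromio(tmp, nor_base + *ppos, n);  /* bus reads from NOR */
        if (copy_to_user(buf, tmp, n))
            return -EFAULT;
        *ppos += n;
        return n;
    }

    static const struct file_operations nor_fops = {
        .owner  = THIS_MODULE,
        .read   = nor_read,
        .llseek = default_llseek,
    };

    static int __init nor_init(void)
    {
        if (!request_mem_region(NOR_PHYS_BASE, NOR_SIZE, "nor"))
            return -EBUSY;
        nor_base = ioremap(NOR_PHYS_BASE, NOR_SIZE);
        if (!nor_base) {
            release_mem_region(NOR_PHYS_BASE, NOR_SIZE);
            return -ENOMEM;
        }
        major = register_chrdev(0, "nor", &nor_fops); /* dynamic major */
        if (major < 0) {
            iounmap(nor_base);
            release_mem_region(NOR_PHYS_BASE, NOR_SIZE);
            return major;
        }
        return 0;
    }

    static void __exit nor_exit(void)
    {
        unregister_chrdev(major, "nor");
        iounmap(nor_base);
        release_mem_region(NOR_PHYS_BASE, NOR_SIZE);
    }

    module_init(nor_init);
    module_exit(nor_exit);
    MODULE_LICENSE("GPL");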
When writing a device driver that lets the device and the user-space code share some memory, are there any benefits in using reserved memory for this task?
Some things I can think of:
If the reserved memory is assigned to this particular device alone, other processes won't be able to use up available memory and leave nothing for the driver. This might be especially useful when a large contiguous chunk of memory is required by the driver.
The driver could always use the same fixed (physical) memory address, which might be something the device requires.
Are there any other good reasons (not) to use reserved memory for the driver in this situation?
What information is contained in the memory map of an application processor? Does it tell which subsystem can access which area of RAM, or does it mean that when the CPU tries to access an address, the memory map determines whether that address is a RAM address or a device address? I am referring to this documentation:
http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ddi0515b/CIHIJJJA.html.
Here, 0x00_0000_0000 to 0x00_0800_0000 is mapped to the boot region; what does that imply?
The style of memory map diagram you've linked to shows how the processor and peripherals will decode physical memory addresses. This is a normal diagram for any System-on-Chip device, though the precise layout will vary. The linked page actually lists which units of the SoC use this memory map for their address decoding, and it includes the ARM and the Mali graphics processor. In a Linux system, much of this information will be passed to the kernel in the device tree. It's important to remember that this tells us nothing about how the operating system chooses to organise the virtual memory addresses.
Interesting regions of this are:
DRAM - these addresses will be passed to the DRAM controller. There is no guarantee that the specific board being used has DRAM at all of that address space. The boot firmware will set up the DRAM controller and pass those details to the operating system.
PCIe - these addresses will be mapped to the PCIe controller, and ultimately to transfers on the PCIe links.
The boot region on this chip by default contains an on-chip boot ROM and working space. On this particular chip there's added complexity caused by ARM's TrustZone security architecture, which means that application code loaded after boot may not have access to this region. On the development board it should be possible to override this mapping and boot from external devices.
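As an aside, on a running Linux system you can see how the kernel has carved up this physical address space by reading /proc/iomem (as root, so the addresses aren't zeroed out). A trivial dumper, for illustration:

    #include <stdio.h>

    int main(void)
    {
        /* Each line of /proc/iomem is "start-end : resource name",
           showing the kernel's view of the physical address map. */
        FILE *f = fopen("/proc/iomem", "r");
        char line[256];

        if (!f) {
            perror("/proc/iomem");
            return 1;
        }
        while (fgets(line, sizeof line, f))
            fputs(line, stdout);
        fclose(f);
        return 0;
    }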
The memory map describes the layout of your device's memory.
It tells your OS where it can place data and how that data is accessed, as some areas may only be accessible in a privileged state.
Your boot image will be placed in the boot area. Among other things, this defines your entry point.
I would like to ask your advice on the following: I need to write drivers for an OMAP3, for accessing an external DSP through an FPGA (over the GPMC interface). The driver needs to load a file to the DSP, and to read/write buffers from the DSP. There is already an FPGA driver in the kernel. The kernel is 2.6.32. So I thought of the following options:
writing a DSP driver in the kernel which uses the existing FPGA driver.
writing a user-space driver which interfaces with the FPGA kernel driver.
writing a user-space driver using UIO, which will not use the kernel FPGA driver but will do the FPGA access itself, as part of a single, complete user-space DSP driver.
What do you think is the preferred option?
What are the advantages of a kernel driver over a user-space driver, and vice versa?
Thanks, Ran
* User-space driver:
Easier to debug.
Loads of libraries to support you.
Allows you to hide the details of your IP if you want to (people will really hate you if you do!).
A crash won't affect the whole system.
Higher latency in handling interrupts since the kernel will have to relay the interrupt to the user space somehow.
You can't control access to your device from user space.
* Kernel-space driver:
Harder to debug.
Only Linux kernel frameworks are there to support you.
You can always provide a binary blob to hide the details of your IP but this is annoying since it has to be generated against a specific kernel.
A crash will bring down the whole system.
Less latency in handling interrupts.
You can control access to your device from kernel space because it's a global context that all processes see.
As a kernel engineer I'm more comfortable/happy hacking code in a kernel context; that's probably why I would write the whole driver in the kernel.
However, I would say that the best thing to do is to divide the functionality of your driver into units and only put the unit in the kernel when there's a reason to do so.
For example:
If your device has a shared resource (like an MMU or a hardware FIFO) and you want multiple processes to be able to use it safely, then you probably need some buffer manager to be in the kernel, with all the processes communicating with it through ioctl.
If your driver needs to respond to an interrupt as fast as possible (very low latency), then you will need to put the part of the code that handles interrupts in the kernel interrupt handler, rather than putting it in user space and notifying user space when an interrupt occurs; a sketch of the user-space (UIO) side of such a hand-off follows below.
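To make the UIO option from the question concrete, here is a minimal user-space sketch. The /dev/uio0 node, the 4 KiB map size, and the register layout are all assumptions; a real program would take the map size from /sys/class/uio/uio0/maps/map0/size:

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/uio0", O_RDWR);   /* hypothetical uio node */
        if (fd < 0) { perror("open"); return 1; }

        /* Map register window 0 of the device; with UIO, the mmap
           offset selects which map (N * page size maps map N). */
        volatile uint32_t *regs = mmap(NULL, 4096,
                                       PROT_READ | PROT_WRITE,
                                       MAP_SHARED, fd, 0);
        if (regs == MAP_FAILED) { perror("mmap"); return 1; }

        uint32_t irq_count;
        /* read() blocks until the kernel half of the UIO driver
           acknowledges an interrupt. */
        if (read(fd, &irq_count, sizeof irq_count) == sizeof irq_count)
            printf("interrupt %u, status reg = 0x%08x\n",
                   (unsigned)irq_count, (unsigned)regs[0]);

        munmap((void *)regs, 4096);
        close(fd);
        return 0;
    }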
I am testing a PCI Endpoint driver, and I would like to do a simple copy from the PCI Root Port side to the PCI Endpoint side. On the PCI Endpoint side, we have address translation from PCI addresses to CPU physical addresses. We can configure the CPU physical address in the translation so that it maps to a specific DRAM region. The problem is: how can we allocate a memory buffer at that specific CPU physical address, to make sure the write from the Root Port side really works?
Any recommendations are appreciated. Thanks a lot!
You need to first reserve the physical memory area. The easiest (but ugly) way to do that is to pass a "mem=" parameter on the kernel command line that excludes the physical memory range you are interested in from kernel memory management, and then use ioremap() to get a virtual mapping of it.
For example, if your machine has 256 MB of RAM, use mem=255M to reserve the last megabyte for your own use, and then map it via ioremap().
NOTE: original answer fixed based on feedback from @Adrian Cox.
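A minimal sketch of that approach, using the 256 MB example above (booted with mem=255M) and assuming RAM starts at physical address 0; the names and addresses are illustrative:

    #include <linux/io.h>
    #include <linux/ioport.h>

    #define RESERVED_PHYS 0x0FF00000UL  /* 255 MB mark: start of hidden MB */
    #define RESERVED_SIZE 0x00100000UL  /* the 1 MB kept from the kernel   */

    static void __iomem *resv;

    static int reserve_map(void)
    {
        /* The kernel never managed this range (mem=255M), so we can
           claim it and get a virtual mapping for it. */
        if (!request_mem_region(RESERVED_PHYS, RESERVED_SIZE, "pcie-buf"))
            return -EBUSY;
        resv = ioremap(RESERVED_PHYS, RESERVED_SIZE);
        if (!resv) {
            release_mem_region(RESERVED_PHYS, RESERVED_SIZE);
            return -ENOMEM;
        }
        /* RESERVED_PHYS is the CPU physical address to program into
           the endpoint's PCI-to-CPU address translation window. */
        return 0;
    }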
If you can remap the translation on the fly, then you should work like any driver that uses DMA. Your basic reference for this is Chapter 15 of LDD3, plus the Linux DMA API.
What you are allocating is a DMA coherent buffer, via dma_alloc_coherent. On most platforms you should be able to pass in a null struct device pointer and get a generic DMA address. This will give you both a kernel virtual address to access the data, and a dma address which is the CPU physical address to map through your translation layer.
If your address translation is not very flexible, you may need to modify the platform code for your endpoint to reserve this buffer early on, in order to meet address alignment requirements. This is a bit more complicated, but there is an update of the bigphysarea patch to recent kernels that may help as a starting point.
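A minimal sketch of the coherent-buffer allocation described above; BUF_SIZE is illustrative and 'dev' would be your endpoint's struct device:

    #include <linux/dma-mapping.h>

    #define BUF_SIZE (64 * 1024)   /* illustrative buffer size */

    static void *buf_virt;     /* kernel virtual address of the buffer */
    static dma_addr_t buf_dma; /* bus/CPU physical address of the buffer */

    static int ep_alloc_buffer(struct device *dev)
    {
        buf_virt = dma_alloc_coherent(dev, BUF_SIZE, &buf_dma, GFP_KERNEL);
        if (!buf_virt)
            return -ENOMEM;

        /* Program buf_dma into the endpoint's PCI-to-CPU address
           translation window here (controller-specific register
           writes), so Root Port writes land in this buffer, then
           read the data back through buf_virt. */
        return 0;
    }

    static void ep_free_buffer(struct device *dev)
    {
        dma_free_coherent(dev, BUF_SIZE, buf_virt, buf_dma);
    }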
I have a char device which enables access to an external SPI memory, and I'd like to mmap() the external memory so that I can access it from a program as if it were normal memory.
If I use the usual mmap() page-remapping implementation on the char device file, it only lets me expose a physical device memory region, not the contents the char device file presents...
Is there a trick to allow me to do that?
TIA
If the character device driver provided an mmap implementation, it would work. There's probably a good reason it doesn't:
Memory access instructions create memory transactions on the bus. An SPI memory is not addressable that way (although the SPI controller may use memory-mapped I/O, that's for its own register-level interface, not the memory content). You could build an SPI memory controller with a memory bus interface, I suppose, but you'd lose the device-independence of the SPI standard.
Emulating a memory region is possible (grab a page of memory, mark it no-access, and handle the resulting access violations, SIGBUS and SIGSEGV), but that would be horribly inefficient. Sometimes you find virtual machines doing this, but performance is very bad.
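For illustration only, here is a user-space sketch of that fault-driven emulation for reads, assuming a hypothetical /dev/spimem char device. (Calling mprotect() from a signal handler is not strictly portable, which is part of why this approach is so ugly.)

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <signal.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define PAGE 4096

    static char *win;     /* emulated "memory" window   */
    static int   dev_fd;  /* the char device backing it */

    static void fault(int sig, siginfo_t *si, void *uctx)
    {
        char *page = (char *)((uintptr_t)si->si_addr
                              & ~(uintptr_t)(PAGE - 1));

        if (page != win)
            _exit(1);                   /* a real crash, not ours */

        /* Fill the page from the device, then allow reads. */
        mprotect(page, PAGE, PROT_READ | PROT_WRITE);
        if (pread(dev_fd, page, PAGE, 0) < 0)
            _exit(1);
        mprotect(page, PAGE, PROT_READ);
    }

    int main(void)
    {
        struct sigaction sa = { .sa_sigaction = fault,
                                .sa_flags = SA_SIGINFO };

        dev_fd = open("/dev/spimem", O_RDONLY); /* hypothetical device */
        if (dev_fd < 0) { perror("open"); return 1; }

        sigaction(SIGSEGV, &sa, NULL);
        sigaction(SIGBUS, &sa, NULL);

        win = mmap(NULL, PAGE, PROT_NONE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (win == MAP_FAILED) { perror("mmap"); return 1; }

        /* This access faults once, the handler services it, and the
           faulting load is then retried successfully. */
        printf("first byte: 0x%02x\n", (unsigned char)win[0]);
        return 0;
    }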
It sounds like you would need some sort of driver that translates memory-region accesses into commands sent through the character-oriented interface. This would probably be a pretty straightforward block device driver.