I'm developing on an embedded system that has a DSP and a CPU. There is a shared memory region between the DSP and CPU. The DSP processes data and signals to the CPU when data is ready. I'm running Linux on the CPU and have excluded the shared memory region from the kernel mapping.
I have Linux kernel driver code that can ioremap the physical address sent by the DSP. Using the returned virtual address, I can inspect the data and see that the data looks valid. I'm now having trouble trying to map the data so I can DMA it out of the CPU to a peripheral device. I want to avoid copying the data to a dma_alloc_coherent region to minimize latency.
I believe this is because there is an IOMMU between the memory and the device doing the DMA. What do I need to do in order to generate the DMA-address-to-physical-address mapping?
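For reference, a minimal sketch of one possible approach on kernels that provide it: dma_map_resource() asks the DMA API to create an IOMMU mapping for a raw physical address that the kernel does not otherwise manage. This is only a sketch under assumptions, not a definitive answer; dev, dsp_phys, and len are placeholders for the peripheral's struct device and the address/size the DSP hands over.

    /*
     * Hedged sketch: create an IOMMU mapping for a physical region the
     * kernel does not manage, so a peripheral can DMA from it directly.
     * dsp_phys and len are placeholders for the values the DSP provides.
     */
    #include <linux/dma-mapping.h>

    static dma_addr_t map_dsp_buffer(struct device *dev,
                                     phys_addr_t dsp_phys, size_t len)
    {
        dma_addr_t dma = dma_map_resource(dev, dsp_phys, len,
                                          DMA_TO_DEVICE, 0);

        if (dma_mapping_error(dev, dma))
            return 0;       /* 0 = failure in this sketch */

        /* Program 'dma' into the peripheral; dma_unmap_resource() when done. */
        return dma;
    }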
I am currently studying virtual memory in operating systems and I have a few questions.
Is a swap partition or swap file the same as virtual memory in Linux?
If yes, then if I have no swapping enabled on my Linux system, does that mean my system has no virtual memory?
I have also read that virtual memory makes a system more secure: with virtual memory, the CPU generates virtual addresses, which are then translated to actual physical addresses by the MMU, so no process can interact directly with physical memory. So if I just enable swapping on my Linux system, will my CPU start generating virtual addresses? Is it currently generating physical addresses directly, since I have no swap partition?
How does the CPU know whether virtual memory is present or not?
Having no swap file/partition doesn't imply that you don't have virtual memory. Modern operating systems always use paging/virtual memory, no matter what.
Is a swap partition or swap file the same as virtual memory in Linux?
No, a swap file and virtual memory are not the same thing in any OS. Virtual memory just means that all memory accesses are translated by the MMU using the page tables. Modern OSes always use paging.
If yes, then if I have no swapping enabled on my Linux system, does that mean my system has no virtual memory?
Your system certainly has virtual memory. To use long mode (64-bit mode), the OS must enable paging. I doubt that you have a system old enough not to use paging. Swapping pages to the hard disk is not virtual memory; it is more like a feature of virtual memory that can be used to extend physical memory, because a page which isn't required immediately can be swapped out to the hard disk temporarily.
I have also read that virtual memory makes a system more secure: with virtual memory, the CPU generates virtual addresses, which are then translated to actual physical addresses by the MMU, so no process can interact directly with physical memory. So if I just enable swapping on my Linux system, will my CPU start generating virtual addresses? Is it currently generating physical addresses directly, since I have no swap partition?
Your computer certainly has paging/virtual memory enabled. Having no swap partition doesn't mean that you don't have virtual memory. Paging can also be used to avoid fragmentation of RAM and for security. You are right that paging secures your system, because the page tables prevent a process from accessing the memory of another process. Paging also carries privilege information on a per-page basis, which allows the CPU to differentiate between kernel-mode and user-mode code.
How does the CPU know whether virtual memory is present or not?
The OS just enables paging by setting a bit in a control register. From then on, the CPU blindly translates every memory access using the MMU.
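To make that step concrete, here is an illustrative sketch of what it looks like on 32-bit x86, assuming a page directory has already been built at a known physical address (it deliberately ignores details such as PAE, long mode, and protected-mode setup):

    /*
     * Illustrative sketch (32-bit x86, already in protected mode):
     * CR3 holds the physical address of the top-level page table;
     * setting CR0.PG (bit 31) switches on translation for every
     * subsequent memory access.
     */
    static void enable_paging(unsigned long page_directory_phys)
    {
        unsigned long cr0;

        asm volatile("mov %0, %%cr3" : : "r"(page_directory_phys));

        asm volatile("mov %%cr0, %0" : "=r"(cr0));
        cr0 |= 0x80000000UL;                    /* CR0.PG */
        asm volatile("mov %0, %%cr0" : : "r"(cr0));
    }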
No. A swap file is not the same thing as virtual memory.
Once the firmware/kernel sets up the necessary registers and/or in-memory data structures and switches the processor mode, virtual memory mappings are used for accessing the physical memory.
Yes, the inability of processes to refer to memory locations without a mapping allows the kernel to employ isolation and access control mechanisms.
Through active mappings, different virtual addresses can map to the same physical memory region at different times. The kernel can maintain the illusion that more memory is available than the capacity of the actual physical memory, where only a subset of the virtual memory resides in the physical memory at any given time. The rest is stored in the swap file.
Accesses to virtual addresses where the corresponding data is currently in the swap file are trapped by the kernel (via a page fault) and might lead to the kernel swapping the data in, and swapping some other data from physical memory out.
If you disable the swap file, the kernel has no place to store the swapped-out data. This reduces the amount of virtual memory available.
Say that I have a device that uses memory mapped IO.
And we know that in Linux, each process has 3 GB of user space and 1 GB of kernel space.
Now I assume that the address(es) for this device will be mapped to the kernel space of a process, so that a process (which is running in user mode) cannot access the device. Am I correct?
Now I assume that the address(es) for this device will be mapped to the kernel space of a process, so that a process (which is running in user mode) cannot access the device. Am I correct?
Mostly. Since devices exist in the physical address space, they can be mapped to multiple virtual addresses. An appropriately privileged userspace application can use mmap() on /dev/mem to remap portions of I/O memory into its address space.
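A minimal sketch of that technique, assuming root privileges, a kernel that permits the range through /dev/mem (CONFIG_STRICT_DEVMEM can forbid it), and a hypothetical page-aligned device address PHYS_BASE:

    /*
     * Sketch: map a page of device memory into this process via
     * /dev/mem and read a 32-bit register from it.
     */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define PHYS_BASE 0x40000000UL   /* hypothetical device address */
    #define MAP_LEN   4096UL

    int main(void)
    {
        int fd = open("/dev/mem", O_RDWR | O_SYNC);
        if (fd < 0) {
            perror("open /dev/mem");
            return 1;
        }

        volatile uint32_t *regs = mmap(NULL, MAP_LEN,
                                       PROT_READ | PROT_WRITE,
                                       MAP_SHARED, fd, PHYS_BASE);
        if (regs == MAP_FAILED) {
            perror("mmap");
            close(fd);
            return 1;
        }

        printf("reg[0] = 0x%08x\n", regs[0]);   /* device register read */

        munmap((void *)regs, MAP_LEN);
        close(fd);
        return 0;
    }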
What information is contained in the memory map of an application processor? Does it tell which subsystem can access which area of RAM, or does it mean that when the CPU tries to access an address, the memory map determines whether it is a RAM address or a device address? I am referring to this documentation:
http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ddi0515b/CIHIJJJA.html.
Here 0x00_0000_0000 to 0x00_0800_0000 is mapped to the boot region; what does that imply?
The style of memory map diagram you've linked to shows how the processor and peripherals will decode physical memory addresses. This is a normal diagram for any System-on-Chip device, though the precise layout will vary. The linked page actually lists which units of the SoC use this memory map for their address decoding, and it includes the ARM and the Mali graphics processor. In a Linux system, much of this information will be passed to the kernel in the device tree. It's important to remember that this tells us nothing about how the operating system chooses to organise the virtual memory addresses.
Interesting regions of this are:
DRAM - these addresses will be passed to the DRAM controller. There is no guarantee that the specific board being used has DRAM at all of that address space. The boot firmware will set up the DRAM controller and pass those details to the operating system.
PCIe - these addresses will be mapped to the PCIe controller, and ultimately to transfers on the PCIe links.
The boot region on this chip by default contains an on-chip boot ROM and working space. On this particular chip there's added complexity caused by ARM's TrustZone security architecture, which means that application code loaded after boot may not have access to this region. On the development board it should be possible to override this mapping and boot from external devices.
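Since much of this layout reaches the kernel via the device tree, on a running Linux system the kernel's resulting view of the physical address map can be inspected through /proc/iomem (full addresses typically require root). A hedged sketch that lists only the "System RAM" entries, i.e. where firmware actually reported DRAM as opposed to the full decode window in the SoC diagram:

    /*
     * Sketch: print the "System RAM" regions from /proc/iomem to see
     * which parts of the DRAM window are actually populated.
     */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        FILE *f = fopen("/proc/iomem", "r");
        char line[256];

        if (!f) {
            perror("fopen /proc/iomem");
            return 1;
        }
        while (fgets(line, sizeof line, f))
            if (strstr(line, "System RAM"))
                fputs(line, stdout);
        fclose(f);
        return 0;
    }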
The memory map describes the layout of the memory of your device.
It tells your OS where it can place data and how that data is accessed, as some areas may only be accessible in a privileged state.
Your boot image will be placed in the boot area. Among other things, this defines your entry point.
I know that access to ports in the I/O address space requires specific IN/OUT instructions and that they are not part of physical memory (RAM), but I have not understood where the I/O address space is actually located physically. Is it some sort of RAM in the I/O controller? A reserved part of physical memory?
On the early x86 processors (and also the 8080, Z80, etc.), the I/O address space was on the same data and address bus as the memory, but it was accessed by activating a dedicated I/O-request pin on the CPU.
So electrically, I/O was in parallel with the RAM.
These days the CPU speaks HDMI and PCIe directly, so much of the I/O space is either internal to the CPU (e.g. the VGA I/O interface) or accessed over the serial bus that is PCIe.
PCIe is also used for memory-mapped I/O, so in that respect port I/O is still accessed over mostly the same electrical interface as memory-mapped I/O, but no longer over the same pins that are used for RAM.
A list of I/O addresses can be found in Ralf Brown's x86/MS-DOS Interrupt List:
http://www.pobox.com/~ralf
http://www.pobox.com/~ralf/files.html
ftp://ftp.cs.cmu.edu/afs/cs.cmu.edu/user/ralf/pub/
inter61d.zip: "PORTS.A", "PORTS.B", "PORTS.C"
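For completeness, a minimal sketch of how those IN/OUT instructions are still reachable from user space on x86 Linux (requires root; 0x3f8, the traditional COM1 base, is used purely as an example port):

    /*
     * Sketch: read a legacy I/O port with the dedicated port-I/O
     * instructions. ioperm() grants this process access to the
     * port range; inb() compiles down to the IN instruction.
     */
    #include <stdio.h>
    #include <sys/io.h>

    int main(void)
    {
        if (ioperm(0x3f8, 8, 1)) {          /* COM1 port range */
            perror("ioperm");
            return 1;
        }

        unsigned char lsr = inb(0x3f8 + 5); /* line status register */
        printf("COM1 LSR = 0x%02x\n", lsr);

        ioperm(0x3f8, 8, 0);                /* drop access again */
        return 0;
    }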
First, you should understand that a device can be programmed to respond to any address, even if that address is not part of physical memory. This is done by programming its memory decoder. In short, the memory for I/O devices is located on the device. The I/O space provided to the device usually maps the memory which is on the device, i.e., each I/O device provides its own memory.
Well, in the old days, there were certain "well-known" addresses, for instance 0x3f8 and 0x2f8 for the COM (serial) ports, and 0xCF8-0xCFC for PCI configuration space. These addresses do not use any physical memory; a separate I/O signal is asserted to indicate such an access. These devices' memory decoders were programmed at the factory to respond to these addresses only when the I/O pin is asserted.
But this became obsolete. Even in the later days of PCI, most devices were initially configured through I/O space, but then their memory decoders were programmed to respond to a memory-mapped address in the physical address space above the top of RAM. When a memory decoder is programmed, not only is the base address provided, but the size of that address space as well, to avoid collisions between devices. The memory is located on the device, not in the host computer's RAM or chipset.
For PCI Express, I believe the ACPI tables are now consulted for the memory-mapped configuration space, and the I/O instructions are essentially deprecated. Serial ports are not usually included on modern hardware, and even if they were, they would be implemented on a PCI or PCIe device.
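As a hedged illustration of those programmed decoders: in a Linux PCI driver, the base and size that firmware (or the kernel) wrote into a device's BAR can be read back and mapped roughly like this, where pdev is the device handed to the driver's probe callback:

    /*
     * Sketch: discover where the device's memory decoder (BAR 0) was
     * programmed, then get a kernel virtual mapping of that window.
     */
    #include <linux/io.h>
    #include <linux/pci.h>

    static void __iomem *map_bar0(struct pci_dev *pdev)
    {
        resource_size_t start = pci_resource_start(pdev, 0);
        resource_size_t len   = pci_resource_len(pdev, 0);

        pr_info("BAR0 decoder: base %pa, size %pa\n", &start, &len);

        return pci_iomap(pdev, 0, 0);   /* 0 = map the whole BAR */
    }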
On my Compulab cm-x270 CoM, the Linux kernel is placed in NOR flash. This kernel is built without MTD support, and after boot I can't access the NOR as an MTD partition. My goal is to update this kernel from userspace. Yes, updating from the bootloader via TFTP is the easier way, but I can't use it for this task. Is it possible to map the NOR through /dev/mem, or is there another way?
I had a similar situation with SRAM. I wrote a block device driver for /dev/sram. Access through a device driver preserves all of the Linux security rules.
You didn't mention how this NOR memory is accessed. If it's in the physical memory address space, then the driver would call request_mem_region() and ioremap() to map the NOR memory into kernel virtual address space. Then user programs can use standard file I/O on this block (or char) device.
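To illustrate just the mapping step described above, a hedged sketch: NOR_PHYS_BASE and NOR_SIZE are placeholders for the board-specific values, and a real driver would still register a block or char device on top of this.

    /*
     * Sketch: claim the NOR's physical window and obtain a kernel
     * virtual address for it. A block/char driver would then expose
     * reads of nor_base to user space.
     */
    #include <linux/io.h>
    #include <linux/ioport.h>
    #include <linux/module.h>

    #define NOR_PHYS_BASE 0x00000000UL         /* hypothetical, board-specific */
    #define NOR_SIZE      (32 * 1024 * 1024)   /* hypothetical, board-specific */

    static void __iomem *nor_base;

    static int __init nor_map_init(void)
    {
        if (!request_mem_region(NOR_PHYS_BASE, NOR_SIZE, "nor-flash"))
            return -EBUSY;

        nor_base = ioremap(NOR_PHYS_BASE, NOR_SIZE);
        if (!nor_base) {
            release_mem_region(NOR_PHYS_BASE, NOR_SIZE);
            return -ENOMEM;
        }
        return 0;
    }

    static void __exit nor_map_exit(void)
    {
        iounmap(nor_base);
        release_mem_region(NOR_PHYS_BASE, NOR_SIZE);
    }

    module_init(nor_map_init);
    module_exit(nor_map_exit);
    MODULE_LICENSE("GPL");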