Accessing DDR memory addresses between 2 GB and 4 GB through DSP DMA (64-bit addressing)

We are trying to access DDR addresses in the 2 GB to 4 GB range and beyond through ShimDMA, where the address is wider than 32 bits. Does the DMA support more than 32-bit addressing of physical memory? We are using the following APIs for this: OctSysDmaMemWrite and OctSysDmaMemRead.

The hardware supports more than 32-bit addressing. However, the DSP kernel APIs do not. We will try to add this in the next release (3.20).

Related

Access 36-bit physical address from 32-bit userspace

I am working on a system in which the memory mapping is done using 36-bit addressing. In particular, I am trying to access the PCIe memory space (which is mapped at address 0xc00000000) from a 32-bit userspace Linux application.
I had planned to use mmap for this purpose, but the last argument of mmap is of type off_t, which is 4 bytes wide on a 32-bit OS. Can someone explain how to access the address 0xc00000000 from a userspace app using mmap?
P.S.: My machine is running a 32-bit Linux kernel and a 32-bit RootFS.
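One commonly suggested approach, assuming access to /dev/mem is permitted on the target (CONFIG_STRICT_DEVMEM can forbid it), is to build the application with a 64-bit off_t so the mmap offset can carry the 36-bit physical address; on 32-bit Linux, glibc routes the call to the mmap2 syscall, which passes the offset in page-sized units, so offsets above 4 GiB are representable. A minimal sketch; the 1 MiB mapping length is an assumption:

    #define _FILE_OFFSET_BITS 64   /* make off_t 64 bits wide on 32-bit glibc */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        const off_t  pcie_base = 0xc00000000LL; /* 36-bit physical address from the question */
        const size_t map_len   = 0x100000;      /* 1 MiB window; adjust to the real region size */

        int fd = open("/dev/mem", O_RDWR | O_SYNC);
        if (fd < 0) {
            perror("open /dev/mem");
            return 1;
        }

        /* With a 64-bit off_t, the offset survives intact down to mmap2(). */
        void *p = mmap(NULL, map_len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, pcie_base);
        if (p == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        printf("PCIe window mapped at %p\n", p);
        munmap(p, map_len);
        close(fd);
        return 0;
    }

Equivalently, compile with -D_FILE_OFFSET_BITS=64 on the command line, or call mmap64() directly instead of mmap().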

Meltdown on x86 CPU with PAE in 32-bit mode

Most modern x86 CPUs implement Physical Address Extension (PAE). With it, the operating system can address up to 64 GB of memory, while applications can usually only see and address 4 GB of RAM at a time. Even a PAE-aware application needs to switch its 4 GB memory window to address more memory.
Could this be used to protect a 32-bit PAE operating system kernel from Meltdown attacks?

Linux Kernel Mapping Physical Memory for DMA

I'm developing on an embedded system that has a DSP and a CPU. There is a shared memory region between the DSP and CPU. The DSP processes data and signals to the CPU when data is ready. I'm running Linux on the CPU and have excluded the shared memory region from the kernel mapping.
I have Linux kernel driver code that can ioremap the physical address sent by the DSP. Using the returned virtual address, I can inspect the data and see that the data looks valid. I'm now having trouble trying to map the data so I can DMA it out of the CPU to a peripheral device. I want to avoid copying the data to a dma_alloc_coherent region to minimize latency.
I believe the trouble is because there is an IOMMU between the memory and the device doing the DMA. What do I need to do in order to generate the DMA-address-to-physical-address mapping?
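One possible route, assuming a reasonably recent kernel, is dma_map_resource(), which creates an IOMMU mapping for a raw physical address; it is meant for regions without struct page backing, which is exactly why dma_map_single() cannot be used on carved-out memory. A minimal sketch with illustrative names:

    #include <linux/device.h>
    #include <linux/dma-mapping.h>

    /* Map the DSP's carved-out region through the peripheral device's IOMMU so
     * the peripheral can DMA from it directly, without first copying into a
     * dma_alloc_coherent() buffer. */
    static dma_addr_t map_dsp_buffer(struct device *dev, phys_addr_t dsp_phys,
                                     size_t len)
    {
        dma_addr_t bus = dma_map_resource(dev, dsp_phys, len, DMA_TO_DEVICE, 0);

        if (dma_mapping_error(dev, bus))
            dev_err(dev, "failed to map DSP buffer at %pap\n", &dsp_phys);

        return bus;  /* program this bus address into the peripheral's DMA engine */
    }

Here dev must be the peripheral device that sits behind the IOMMU, not the DSP; the mapping is released with dma_unmap_resource() once the transfer completes.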

Use Linux to share contiguous RAM between processors

I am working with the Xilinx Zynq-7000. On the Zynq there is an FPGA, an ARM processor, and 512 MiB of DDR RAM. When the board is powered on, the ARM processor starts Ubuntu, which initializes the DDR RAM and claims it as its own. On the FPGA, I am developing another processor and I want to give it a piece of DDR memory. Since I am still developing, I would like to allocate a 64 MiB piece of contiguous DDR RAM from Linux userspace (the device has an MMU). I would then get the start address of this piece of RAM, pass it to the FPGA processor, and let it work with it. While it works on it, I could check from the same program, in Ubuntu, that everything is fine.
The question is about the Linux side of this: what would be a good way to do it?
Here is what I have gathered myself:
I read a bit about CMA, and I noticed that the Ubuntu instance already allocates 128 MiB of CMA RAM at boot. So I think the best way would be to find or develop a driver that takes some of that RAM and "locks" it so that the OS will not give it to other programs. Then I would still need a way to access it from userspace. Is this the right track of thinking?
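A minimal sketch of that "driver that locks a chunk of CMA" idea, assuming CONFIG_DMA_CMA is enabled so that a large dma_alloc_coherent() request is served from the CMA pool; the driver name, compatible string, and 64 MiB size are made up for illustration:

    /* Hypothetical platform driver: grabs 64 MiB of physically contiguous RAM
     * from the CMA pool and reports its bus address for the FPGA processor. */
    #include <linux/module.h>
    #include <linux/platform_device.h>
    #include <linux/dma-mapping.h>
    #include <linux/of.h>

    #define FPGA_BUF_SIZE (64 * 1024 * 1024)

    static void *fpga_vaddr;
    static dma_addr_t fpga_dma_addr;

    static int fpga_shm_probe(struct platform_device *pdev)
    {
        /* With CONFIG_DMA_CMA, large coherent allocations come from CMA. */
        fpga_vaddr = dma_alloc_coherent(&pdev->dev, FPGA_BUF_SIZE,
                                        &fpga_dma_addr, GFP_KERNEL);
        if (!fpga_vaddr)
            return -ENOMEM;

        dev_info(&pdev->dev, "FPGA buffer at bus address %pad (%d bytes)\n",
                 &fpga_dma_addr, FPGA_BUF_SIZE);
        /* Hand fpga_dma_addr to the FPGA processor (e.g. via a register write);
         * expose the buffer to userspace through an mmap file operation backed
         * by dma_mmap_coherent() if the Ubuntu-side program needs to inspect it. */
        return 0;
    }

    static int fpga_shm_remove(struct platform_device *pdev)
    {
        dma_free_coherent(&pdev->dev, FPGA_BUF_SIZE, fpga_vaddr, fpga_dma_addr);
        return 0;
    }

    static const struct of_device_id fpga_shm_of_match[] = {
        { .compatible = "example,fpga-shm" },  /* hypothetical compatible string */
        { }
    };
    MODULE_DEVICE_TABLE(of, fpga_shm_of_match);

    static struct platform_driver fpga_shm_driver = {
        .probe  = fpga_shm_probe,
        .remove = fpga_shm_remove,
        .driver = {
            .name = "fpga-shm",
            .of_match_table = fpga_shm_of_match,
        },
    };
    module_platform_driver(fpga_shm_driver);
    MODULE_LICENSE("GPL");

The driver needs a matching device-tree node to probe. Since the buffer stays allocated for as long as the module is loaded, the OS will not hand that memory to other programs, which covers the "locking" part of the question.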

Xen E820 memory map for domU kernels

How does Xen handle the E820 memory map for domU kernels? In my specific problem I am trying to map non-volatile RAM (NVRAM) into domU kernels. The dom0 memory map returns the following relevant parts:
100000000-17fffffff : System RAM (4GB to 6GB)
180000000-37fffffff : reserved (6GB to 14GB)
The second line corresponds to the NVRAM, which is the region from 6 GB to 14 GB in the dom0 kernel. How can I map this NVRAM region into a domU kernel that does not map this region at all?
Ultimately I want the NVRAM region to be available in other domU VMs, so any solutions or advice would be highly helpful.
P.S.: If I attempt to write to this region from the domU kernel, will Xen intercept the write operation? It is really just a memory write, which should not be a problem, but it might appear as a hardware access.
Guest domains in Xen have two different models on x86:
1. Hardware Virtual Machine (HVM): takes advantage of the Intel VT or AMD SVM extensions to enable true virtualization on the x86 platform.
2. Paravirtualized (PV): this mode modifies the source code of the operating system to work around the x86 virtualization problems, and it also gives the system a performance boost.
These two models handle the E820 memory map differently. The E820 memory map basically gives an OS the physical address space it can operate on, along with the location of I/O devices. In PV mode, I/O devices are made available through XenStore: the domain builder provides only a console device to the PV guest during boot, and all other I/O devices have to be mapped by the guest itself. A guest in this mode starts execution in protected mode instead of x86 real mode. The domain builder maps the start_info pages into the guest domain's physical address space; these pages contain most of the information needed to initialize a kernel, such as the number of available pages, the number of CPUs, console information, XenStore, and so on. The E820 memory map in this context consists of little more than the number of available memory pages, because the BIOS is not emulated and I/O device information is provided separately through XenStore.
On the other hand, for an HVM guest the BIOS and other devices have to be emulated by Xen. This mode has to support any unmodified OS, so the previous method cannot be used. BIOS emulation is done with code borrowed from Bochs, while devices are emulated using QEMU code. Here the OS is provided with an E820 memory map built by the domain builder. The HVM domain builder typically passes the memory layout information to the Bochs emulator, which then performs the required task.
To get hold of the NVRAM pages you will have to build a separate MMU (memory manager) for the NVRAM. It should handle all the NVM pages and allocate/free them on demand, just like RAM pages. It is a lot of work.
