LDD3 (p. 453) demonstrates dma_map_single() using a buffer passed in as a parameter:
bus_addr = dma_map_single(&dev->pci_dev->dev, buffer, count, dev->dma_dir);
Q1: What/where does this buffer come from?
kmalloc?
Q2: Why does DMA-API-HOWTO.txt state that I can DMA into raw kmalloc'd memory?
From http://www.mjmwired.net/kernel/Documentation/DMA-API-HOWTO.txt
L:51 — If you acquired your memory via the page allocator or kmalloc(), then you may DMA to/from that memory using the addresses returned from those routines.
L:74 — you cannot take the return of a kmap() call and DMA to/from that.
So I can pass the address returned from kmalloc to my hardware device?
Or should I run virt_to_bus on it first?
Or should I pass this into dma_map_single?
Q3: When the DMA transfer is complete, can I read the data in the kernel driver via the kmalloc address?
u32 *addr = kmalloc(...);
...
printk("test result : 0x%08x\n", addr[0]);
Q4: What's the best way to get this to user space?
copy_to_user?
mmap the kmalloc memory?
others?
kmalloc is indeed one way to get the buffer. Another is alloc_page() with the GFP_DMA flag.
The point is that the memory kmalloc returns is guaranteed to be contiguous in physical memory, not just virtual memory, so you can hand the bus address of that buffer to your hardware. You do need to call dma_map_single() on the returned address; depending on the exact platform this might be no more than a wrapper around virt_to_bus, or it might do more (set up IOMMU or GART tables).
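A minimal sketch of that flow, assuming a hypothetical driver context struct mydev (the names here are illustrative, not from LDD3):

#include <linux/dma-mapping.h>
#include <linux/slab.h>

/* Hypothetical driver context; all names are illustrative. */
struct mydev {
	struct device *dev;		/* e.g. &pci_dev->dev */
	void *buf;			/* kmalloc'd, physically contiguous */
	dma_addr_t bus_addr;		/* what the hardware gets */
	enum dma_data_direction dma_dir;
};

static int mydev_start_dma(struct mydev *md, size_t count)
{
	md->buf = kmalloc(count, GFP_KERNEL);
	if (!md->buf)
		return -ENOMEM;

	md->bus_addr = dma_map_single(md->dev, md->buf, count, md->dma_dir);
	if (dma_mapping_error(md->dev, md->bus_addr)) {
		kfree(md->buf);
		return -EIO;
	}

	/* Program md->bus_addr (not md->buf!) into the device here. */
	return 0;
}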
Correct, just make sure to follow cache coherency guidelines as the DMA guide explains.
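Continuing the sketch above (same hypothetical names): before the CPU touches the data, ownership has to come back from the device:

static void mydev_dma_done(struct mydev *md, size_t count)
{
	/* Hand the buffer back to the CPU before reading it;
	 * dma_sync_single_for_cpu() works too if you keep the mapping. */
	dma_unmap_single(md->dev, md->bus_addr, count, md->dma_dir);
	printk("test result : 0x%08x\n", ((u32 *)md->buf)[0]);
}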
copy_to_user will work fine and is the easiest answer. Depending on your specific case it might be enough, or you might need something with better performance. You cannot normally map kmalloc'ed addresses to user space, but you can DMA into a user-provided address (some caveats apply) or allocate user pages (alloc_page with GFP_USER).
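A sketch of the copy_to_user route, reusing the hypothetical struct mydev from the snippets above in a read() handler:

#include <linux/fs.h>
#include <linux/uaccess.h>

static ssize_t mydev_read(struct file *f, char __user *ubuf,
			  size_t count, loff_t *ppos)
{
	struct mydev *md = f->private_data;

	/* Assumes the DMA is complete and the buffer was unmapped/synced. */
	if (copy_to_user(ubuf, md->buf, count))
		return -EFAULT;
	return count;
}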
Good luck!
Related
I'm not sure I understand the full flow of direct CPU access to memory on ARM processors.
I'm interested to know which parts of a memory access the caches (L1 and L2), the DMA engine, and the MMU (or secure MMU) participate in.
I'm also not sure I understand the process of sending data from the non-secure OS to the secure OS: does it start with allocating a shared buffer via DMA, writing data to the shared buffer (between the secure OS and non-secure OS), and then sending it?
Additional questions:
Why is DMA needed to communicate between the secure and non-secure worlds? Why is it not possible to communicate via a kernel buffer (kmalloc(), kzalloc(), get_page(), etc.)?
Generally, is it possible for the CPU to access memory without DMA? Must DMA participate?
Is it possible to have non-coherency between the CPU (L1 or L2 cache) and DMA?
For example:
The non-secure OS writes its own data to the DMA buffer and sends it to the secure OS.
The secure OS receives the buffer; the non-secure OS changes the buffer again without flushing (I think the changes stay in the cache), and finally the secure OS reads stale data.
Everything with TrustZone is accomplished with the 'NS' bit that augments the bus.
For a TrustZone CPU, the L1/L2 caches and TLB (via the MMU) need to be aware of the 'NS' bit. Cache and TLB entries are tagged with an 'NS' bit and are not accessible from the normal world if 'NS' is clear.
I'm also not sure I understand the process of sending data from the non-secure OS to the secure OS: does it start with allocating a shared buffer via DMA, writing data to the shared buffer, and then sending it?
The secure and non-secure OS have several means to communicate. A DMA buffer is one way, but it is probably complex and would not be a normal mode. The most basic mechanism is the SMC instruction; it is trapped by monitor mode and accomplishes the same thing as a 'syscall' (see the sketch after the links below).
Interpret ARM SMC calls
ARM SMC Calling Convention
SMC on StackOverflow
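As a rough illustration of the 'syscall-like' nature of SMC, a sketch for ARMv7 with GCC inline assembly (the function ID below is made up; real IDs follow the SMC Calling Convention linked above):

/* Normal-world SMC call sketch (ARMv7, GCC). Illustrative only. */
static unsigned long smc_call(unsigned long fid, unsigned long arg)
{
	register unsigned long r0 asm("r0") = fid;
	register unsigned long r1 asm("r1") = arg;

	asm volatile(".arch_extension sec\n\t"
		     "smc #0"			/* trap to monitor mode */
		     : "+r" (r0), "+r" (r1)
		     :
		     : "r2", "r3", "memory");
	return r0;	/* result comes back in r0, per the convention */
}

/* e.g.: ret = smc_call(0x83000001UL, buffer_pa);  -- made-up ID */

In kernel code you would normally use the arm_smccc_smc() helper from linux/arm-smccc.h rather than raw inline asm.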
Another way is to map RAM as world-shareable. Typically this is done with a TZASC, but other TrustZone memory controllers may exist on a system. This is probably best 'bootstrapped' via the SMC mechanics.
A DMA controller could copy to or from the world-shareable memory buffer to offload CPU work. However, I think this case is a little pathological and would never be done. Even faster than copying the memory via DMA is just updating the TZASC to make the buffer shareable; then there is no copying at all.
Normal world reads 'secure memory' -> faults.
Normal world reads 'world shared memory' -> access as per normal.
The secure OS can flip the TZASC permissions during run time, if the device is not boot locked.
Why is DMA needed to communicate between the secure and non-secure worlds? Why is it not possible to communicate via a kernel buffer (kmalloc(), kzalloc(), get_page(), etc.)?
It is as detailed above: it requires world-shareable memory.
Generally, is it possible for the CPU to access memory without DMA? Must DMA participate?
No, DMA does not need to be involved at all. In fact, I wonder what made you think it must be.
Is it possible to have non-coherency between the CPU (L1 or L2 cache) and DMA? For example: the non-secure OS writes its own data to the DMA buffer and sends it to the secure OS; the non-secure OS then changes the buffer again without flushing (the changes stay in the cache), and finally the secure OS reads stale data.
DMA and caches always have coherency issues; TrustZone doesn't add anything new. If you are using DMA, you need the MMU to map that memory as device memory, and then it will not be cached.
Also, the DMA devices themselves are considered bus masters. They can either be TrustZone aware or have some front-end logic placed in front of them. In the first case, the controller will flip the 'NS' bit based on documented use patterns; for example, a crypto device may present banked registers to the normal and secure worlds, and depending on which world accessed the device, the DMA will be performed with NS set or clear. In the second case, another device/gasket sets up fixed access for the DMA; it is always either normal or secure access, and this is often boot locked.
The DMA controllers (and all hardware besides the CPU) are outside the scope of the CPU. The SoC designer and OEM have to configure the system to match the security requirements of the application, so different devices should map to normal/secure (or dynamic if required). The safest choice is to fix these mappings and lock them at boot time; otherwise, your attack surface against TrustZone grows.
I have buffer coming in from the user space which needs to be filled with device registers as a debugging mechanism. Is it safe to use copy_to_user() / copy_from_user() for device memory? If not, what's the best alternative given that the device driver lies in kernel space?
All the comments are wrong.
For any data movement between user and kernel space, you have to use copy_from/to_user().
memcpy_from/toio() are reserved for addresses in kernel space and MMIO; it's unsafe to use those functions with user-space addresses.
Answer:
You can simply use copy_from/to_user() directly with the mapped MMIO address as the void *to or void *from argument, so you don't need a useless intermediate buffer.
This should only be done with prefetchable memory, since copy_from/to_user() might read/write the same memory several times and/or in an unordered way.
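A minimal sketch of that idea, assuming a hypothetical read() handler with an ioremap'd register base stashed in private_data (the __force cast silences the sparse address-space check):

#include <linux/fs.h>
#include <linux/io.h>
#include <linux/uaccess.h>

static ssize_t regs_read(struct file *f, char __user *ubuf,
			 size_t count, loff_t *ppos)
{
	void __iomem *regs = f->private_data;	/* from ioremap() */

	/* Prefetchable memory only: copy_to_user() may touch the
	 * source more than once and out of order. */
	if (copy_to_user(ubuf, (const void __force *)regs + *ppos, count))
		return -EFAULT;
	*ppos += count;
	return count;
}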
I have a custom device driver that implements an mmap operation to map a shared RAM buffer (outside of the OS) to userspace. The buffer is reserved by passing mem=32M as a boot argument for the OS, leaving the rest of the 512MB available as a buffer. I would like to perform zero-copy operations from the mapped memory, which is not possible if the vm_flags include VM_PFNMAP and VM_IO.
My driver currently performs the mapping by calling vm_iomap_memory(vma, start, size), which in turn calls io_remap_pfn_range and remap_pfn_range, which set up the vma with VM_PFNMAP and VM_IO set. This works to map the memory to userspace, but zero-copy socket operations fail at get_user_pages, due either to the VM_PFNMAP flag being set or to the struct page being missing. The comments for remap_pfn_range show this is intended behavior, as pfn-mapped memory should not be treated as 'normal'. However, for my case it is just a block of reserved RAM, so I don't see why it should not be treated as normal. I have set up cache invalidation/flushing to manage the memory manually.
I have tried unsetting the VM_PFNMAP and VM_IO flags on the vm_area_struct both during and after the mapping, but get_user_pages still fails. I have also looked at the dma libraries but it looks like they rely on a call to remap_pfn_range behind the scenes.
My question is how do I map physical memory as a normal, non-pfn, struct page-backed userspace address? Or is there some other way I should be looking at it? Thanks!
I've found the solution for mapping a memory buffer outside the kernel; it requires correcting several wrong starting points that I mentioned above. It's not possible to post full source code here, but the steps to get it working are:
Device tree: define a reserved-memory region for the buffer, with no associated driver. Do not use the mem or memmap bootargs. The kernel will confine itself to memory outside of this reserved space, but will now be able to create struct pages for the reserved memory.
In the device driver (an LKM in my case), mapping the physical address to a kernel virtual address requires using memremap instead of ioremap, as it is real memory we are mapping.
In the device driver's mmap routine, do not use any variant of remap_pfn_range to set up the vma for userspace; instead assign a custom nopage fault routine to vma->vm_ops->fault to look up the page when the userspace virtual address is used. This approach is described in LDD3 ch. 15.
The nopage function in the driver should use the struct vm_fault argument passed to it to calculate the offset into the vma for the address that needs a page, then use that offset to calculate a kernel virtual address (against the memremap'd base), get the page with page = virt_to_page(pageptr); followed by get_page(page);, and assign it to the fault structure with vmf->page = page;. The latter part of this is illustrated in LDD3 chapter 15 as well. A sketch combining these steps follows the list.
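A minimal sketch of steps 2-4, assuming a recent kernel where .fault takes only a struct vm_fault * (all names are illustrative; res_base/res_size would come from the reserved-memory node):

#include <linux/io.h>
#include <linux/mm.h>

static phys_addr_t res_base;	/* from the reserved-memory node */
static size_t res_size;
static void *res_virt;		/* kernel mapping of the buffer */

static int mydrv_map_buffer(void)
{
	res_virt = memremap(res_base, res_size, MEMREMAP_WB); /* not ioremap */
	return res_virt ? 0 : -ENOMEM;
}

static vm_fault_t mydrv_fault(struct vm_fault *vmf)
{
	unsigned long off = vmf->pgoff << PAGE_SHIFT;
	struct page *page;

	if (off >= res_size)
		return VM_FAULT_SIGBUS;

	/* Valid because the reserved RAM now has struct pages. */
	page = virt_to_page(res_virt + off);
	get_page(page);
	vmf->page = page;
	return 0;
}

static const struct vm_operations_struct mydrv_vm_ops = {
	.fault = mydrv_fault,
};

static int mydrv_mmap(struct file *f, struct vm_area_struct *vma)
{
	vma->vm_ops = &mydrv_vm_ops;	/* no remap_pfn_range() */
	return 0;
}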
The memory mapped in this fashion using mmap against the custom device driver can be used just like normal malloc'd memory as far as I can tell. There are probably ways to achieve a similar result with the DMA libraries, but I had constraints preventing that route, or associating the device tree node with the driver.
I've been struggling with this one, would really appreciate some help. I want to use the internal SRAM (stepping stone - not used after boot) of my At91sam9g45 to speed up some intensive computations and am having trouble meeting all the following conditions:
Memory is accessible from user space. This was easy using mmap() in user space and then remap_pfn_range() in the kernel. Using the returned pointer, my user-space programs can read/write the SRAM.
Using the kernel DMA API call dma_async_memcpy_buf_to_buf() to do a memcpy using DMA. Within my basic driver, I want to call this operation to copy data from DDR (allocated with kmalloc()) into the SRAM buffer.
So my problem is that I have the user space and physical addresses, but no kernel-space DMA API friendly mapping.
I've tried using ioremap, and using the fixed virtual address provided to iotable_init(). Neither seems to result in a kernel virtual address that can be used with something like virt_to_bus (which works for the kmalloc addresses and, I think, is used within the DMA API).
There's a way around this, which is just triggering the DMA manually using the physical addresses, but I'd like to figure this out properly. I've been reading through LDD3 and googling, but I can't see any examples of using non-kmalloc memory with the DMA API (except for PCI buses).
I have a PCI device that needs to read and write from userspace. I'm trying to use zero copy; is there a way to allocate, pin, and get the physical address of a userspace address completely within userspace or do I need to have a kernel module that, say, calls virt_to_phys or get_user_pages? The device's memory is mapped into userspace memory via MMIO so I can pass it any data that's needed. Thanks.
It was a total hack, but I limited Linux to a range of memory and used MMIO to allocate memory for my device that the kernel was unaware of.
Basically you need the memory to be DMA-able, and as far as I know only a kernel module can do that. See http://lxr.free-electrons.com/source/Documentation/PCI/PCI-DMA-mapping.txt
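For reference, a hedged sketch of what such a kernel module would do for a single page of a user buffer (illustrative names; newer kernels prefer pin_user_pages() over the get_user_pages variants):

#include <linux/dma-mapping.h>
#include <linux/mm.h>

/* Pin one user page and map it for DMA; illustrative only. */
static int pin_and_map(struct device *dev, unsigned long uaddr,
		       struct page **page, dma_addr_t *dma)
{
	int n = get_user_pages_fast(uaddr, 1, FOLL_WRITE, page);

	if (n != 1)
		return n < 0 ? n : -EFAULT;

	*dma = dma_map_page(dev, *page, offset_in_page(uaddr),
			    PAGE_SIZE - offset_in_page(uaddr),
			    DMA_BIDIRECTIONAL);
	if (dma_mapping_error(dev, *dma)) {
		put_page(*page);
		return -EIO;
	}
	return 0;	/* *dma is what the PCI device gets */
}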