I am currently working with the Xilinx XDMA driver (see here for source code: XDMA Source), and am attempting to get it to run (before you ask: I have contacted my technical support point of contact and the Xilinx forum is riddled with people having the same issue). However, I may have found a snag in Xilinx's code that might be a deal breaker for me. I am hoping there is something that I'm not considering.
First off, there are two primary modes of the driver, AXI-Memory Mapped (AXI-MM) and AXI-Streaming (AXI-ST). For my particular application, I require AXI-ST, since data will continuously be flowing from the device.
The driver is written to take advantage of scatter-gather lists. In AXI-MM mode, this works because reads are rather random events (i.e., there isn't a continuous flow of data out of the device; instead, the userspace application simply requests data when it needs it). As such, the DMA transfer is built up, the data is transferred, and the transfer is then torn down. This is a combination of get_user_pages(), pci_map_sg(), and pci_unmap_sg().
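For reference, the per-transfer pattern in AXI-MM mode looks roughly like the sketch below. The names are illustrative rather than the actual XDMA symbols, exact kernel signatures vary by version, and error handling and partial-page lengths are elided:

    #include <linux/kernel.h>
    #include <linux/mm.h>
    #include <linux/pci.h>
    #include <linux/scatterlist.h>

    /* Sketch of the build-up / transfer / tear-down cycle described above. */
    static int axi_mm_read(struct pci_dev *pdev, unsigned long uaddr, size_t len,
                           struct scatterlist *sgl, struct page **pages)
    {
        int npages = DIV_ROUND_UP(len, PAGE_SIZE);
        int i;

        /* Pin the userspace buffer so the device can DMA into it. */
        npages = get_user_pages_fast(uaddr, npages, FOLL_WRITE, pages);
        if (npages <= 0)
            return -EFAULT;

        sg_init_table(sgl, npages);
        for (i = 0; i < npages; i++)
            sg_set_page(&sgl[i], pages[i], PAGE_SIZE, 0);

        /* Build up the DMA mapping for this transfer... */
        pci_map_sg(pdev, sgl, npages, PCI_DMA_FROMDEVICE);

        /* ...the hardware performs the transfer here... */

        /* ...and tear it down again afterwards. */
        pci_unmap_sg(pdev, sgl, npages, PCI_DMA_FROMDEVICE);
        for (i = 0; i < npages; i++)
            put_page(pages[i]);
        return 0;
    }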
For AXI-ST, things get weird, and the source code is far from orthodox. The driver allocates a circular buffer into which the data is meant to continuously flow. This buffer is generally sized to be somewhat large (mine is on the order of 32 MB), since you want to be able to handle transient events where the userspace application forgot about the driver for a while and can then later work off the incoming data.
Here's where things get wonky... the circular buffer is allocated using vmalloc32(), and the pages from that allocation are mapped in the same way as the userspace buffer is in AXI-MM mode (i.e., using the pci_map_sg() interface). As a result, because the circular buffer is shared between the device and the CPU, every read() call requires me to call pci_dma_sync_sg_for_cpu() and pci_dma_sync_sg_for_device(), which absolutely destroys my performance (I cannot keep up with the device!), since this works on the entire buffer. Funnily enough, Xilinx never included these sync calls in their code, so I first knew I had a problem when I edited their test script to attempt more than one DMA transfer before exiting and the resulting data buffer was corrupted.
As a result, I'm wondering how I can fix this. I've considered rewriting the code to build up my own buffer allocated using pci_alloc_consistent()/dma_alloc_coherent(), but this is easier said than done. Namely, the code is architected to assume scatter-gather lists everywhere (there appears to be a strange, proprietary mapping between the scatter-gather list and the memory descriptors that the FPGA understands).
Are there any other API calls I should be aware of? Can I use the "single" variants (i.e., pci_dma_sync_single_for_cpu()) via some translation mechanism so that I don't sync the entire buffer? Alternatively, is there perhaps some function that can make the circular buffer allocated with vmalloc() coherent?
Alright, I figured it out.
Basically, my assumptions about the sync API, and my understanding of the kernel documentation around it, were incorrect. I was wrong on two key points:
1. If the buffer is never written to by the CPU, you don't need to sync it for the device. Removing this call doubled my read() throughput.
2. You don't need to sync the entire scatterlist. Instead, in my read() call, I now figure out which pages will be affected by the copy_to_user() call (i.e., what is going to be copied out of the circular buffer) and only sync the pages I care about. Basically, I can call something like pci_dma_sync_sg_for_cpu(lro->pci_dev, &transfer->sgm->sgl[sgl_index], pages_to_sync, DMA_FROM_DEVICE), where sgl_index is where I figured the copy will start and pages_to_sync is the size of the data in number of pages.
With the above two changes my code now meets my throughput requirements.
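For reference, the targeted sync in read() looks roughly like the sketch below. lro and transfer->sgm->sgl are the driver's own structures from the call above; ring_vaddr and the offset arithmetic are illustrative:

    #include <linux/pci.h>
    #include <linux/uaccess.h>

    extern char *ring_vaddr;  /* illustrative: base of the vmalloc'd ring */

    static ssize_t ring_read(struct xdma_dev *lro, struct xdma_transfer *transfer,
                             char __user *buf, size_t count, loff_t off)
    {
        int sgl_index = off >> PAGE_SHIFT;   /* first page the copy touches */
        int pages_to_sync = DIV_ROUND_UP((off & ~PAGE_MASK) + count, PAGE_SIZE);

        /* Hand only the affected window of the ring back to the CPU. */
        pci_dma_sync_sg_for_cpu(lro->pci_dev, &transfer->sgm->sgl[sgl_index],
                                pages_to_sync, DMA_FROM_DEVICE);

        if (copy_to_user(buf, ring_vaddr + off, count))
            return -EFAULT;

        /* No sync_for_device: the CPU never writes the ring (point 1 above). */
        return count;
    }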
I think XDMA was originally written for x86, in which case the sync functions do nothing.
It does not seem likely that you can use the single sync variants unless you modify the circular buffer. Replacing the circular buffer with a list of buffers to send seems like a good idea to me. You pre-allocate a number of such buffers and have a list of buffers to send and a free list for your app to reuse.
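A minimal sketch of that pre-allocated list scheme, assuming coherent allocations and illustrative names:

    #include <linux/dma-mapping.h>
    #include <linux/list.h>
    #include <linux/slab.h>

    struct dma_buf_entry {
        struct list_head node;
        void *cpu_addr;        /* kernel virtual address */
        dma_addr_t dma_addr;   /* bus address handed to the device */
        size_t len;
    };

    /* Buffers currently owned by the device, and free ones for the app
     * to reuse.  A driver lock protecting both lists is not shown. */
    static LIST_HEAD(pending_list);
    static LIST_HEAD(free_list);

    static int preallocate_buffers(struct device *dev, int n, size_t len)
    {
        int i;

        for (i = 0; i < n; i++) {
            struct dma_buf_entry *e = kzalloc(sizeof(*e), GFP_KERNEL);

            if (!e)
                return -ENOMEM;
            /* Coherent memory: no per-transfer sync calls are needed. */
            e->cpu_addr = dma_alloc_coherent(dev, len, &e->dma_addr, GFP_KERNEL);
            if (!e->cpu_addr) {
                kfree(e);
                return -ENOMEM;
            }
            e->len = len;
            list_add_tail(&e->node, &free_list);
        }
        return 0;
    }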
If you're using a Zynq FPGA, you could connect the DMA engine to the ACP port so that FPGA memory access will be coherent. Alternatively, you can map the memory regions as uncached/buffered instead of cached.
Finally, in my FPGA applications, I map the control registers and buffers into the application process and only implement mmap() and poll() in the driver, to give apps more flexibility in how they do DMA. I generally implement my own DMA engines.
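The driver side of that approach can stay very small. A rough sketch, where bar_phys, irq_wait, and irq_count stand in for real driver state:

    #include <linux/fs.h>
    #include <linux/mm.h>
    #include <linux/module.h>
    #include <linux/poll.h>
    #include <linux/wait.h>

    static phys_addr_t bar_phys;              /* register BAR, illustrative */
    static DECLARE_WAIT_QUEUE_HEAD(irq_wait); /* woken by the IRQ handler */
    static atomic_t irq_count;

    /* Map the (uncached) control-register BAR into the calling process. */
    static int fpga_mmap(struct file *f, struct vm_area_struct *vma)
    {
        vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
        return io_remap_pfn_range(vma, vma->vm_start, bar_phys >> PAGE_SHIFT,
                                  vma->vm_end - vma->vm_start, vma->vm_page_prot);
    }

    /* Let the app sleep until a DMA-completion interrupt has fired. */
    static unsigned int fpga_poll(struct file *f, poll_table *pt)
    {
        poll_wait(f, &irq_wait, pt);
        return atomic_read(&irq_count) ? (POLLIN | POLLRDNORM) : 0;
    }

    static const struct file_operations fpga_fops = {
        .owner = THIS_MODULE,
        .mmap  = fpga_mmap,
        .poll  = fpga_poll,
    };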
Pete, I am the original developer of the driver code (before the X of XDMA came into place).
The ringbuffer was always an unorthodox thing, indeed meant for cache-coherent systems, and disabled by default. Its initial purpose was to get rid of the DMA (re)start latency; even with full asynchronous I/O support (even with zero-latency descriptor chaining in some cases) we had use cases where this could not be guaranteed, and where a true hardware ringbuffer/cyclic/loop mode was required.
There is no equivalent to a ringbuffer API in Linux, so it's open-coded a bit.
I am happy to re-think the IP/driver design.
Can you share your fix?
After going through these links,
https://linuxtv.org/downloads/v4l-dvb-apis/uapi/v4l/userp.html
https://linuxtv.org/downloads/v4l-dvb-apis/uapi/v4l/mmap.html
I understood that there are two ways to create a buffer in the v4l2 framework:
Userpointer buffer: the buffer is created in user space.
Memory-mapped buffer: the buffer is created in kernel space.
I am a bit confused about which one to use while doing v4l2 driver development. I mean, which is the better approach in terms of performance and buffer handling?
I will be using scatter-gather DMA (DMA-SG) for data transfer in my hardware.
It depends... on your requirements.
Case: Visualization of the video stream.
In this case, you might want to write the video data directly to memory that is accessible to the video driver, saving a copy operation. You will also get the shortest camera-to-display time. In this case, a user pointer would be the way to go.
Case: Recording of the video stream.
In this case, you do not care about the timely delivery, but you do care about not missing frames. In this case, you can use memory mapped acquisition with multiple buffers.
Case: Single image acquisition for processing.
In this case, timely delivery and missed frames are both less important, so you could use either method, but buffered operation will give the fastest acquisition time, since there is always a buffer with recent image data available.
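For the memory-mapped case, the userspace setup looks roughly like this (error handling trimmed; /dev/video0 is an assumed device node):

    #include <fcntl.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <linux/videodev2.h>

    int main(void)
    {
        int fd = open("/dev/video0", O_RDWR);

        /* Ask the driver to allocate four kernel buffers we can mmap. */
        struct v4l2_requestbuffers req;
        memset(&req, 0, sizeof(req));
        req.count  = 4;
        req.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        req.memory = V4L2_MEMORY_MMAP;
        ioctl(fd, VIDIOC_REQBUFS, &req);

        for (unsigned int i = 0; i < req.count; i++) {
            struct v4l2_buffer buf;
            memset(&buf, 0, sizeof(buf));
            buf.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
            buf.memory = V4L2_MEMORY_MMAP;
            buf.index  = i;
            ioctl(fd, VIDIOC_QUERYBUF, &buf);      /* learn offset/length */
            mmap(NULL, buf.length, PROT_READ | PROT_WRITE,
                 MAP_SHARED, fd, buf.m.offset);    /* map it into the app */
            ioctl(fd, VIDIOC_QBUF, &buf);          /* queue it for capture */
        }

        enum v4l2_buf_type t = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        ioctl(fd, VIDIOC_STREAMON, &t);
        /* ...loop: VIDIOC_DQBUF for a filled frame, VIDIOC_QBUF to recycle... */
        return 0;
    }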
Context
Debian 64-bit.
Raw sockets.
Trying to find the most efficient way to send data to the NIC.
User space networking. Avoid the kernel as much as possible.
Question
Does a NIC need a single contiguous buffer ready to send, or is it possible to pass it multiple pointers to where the information lives and let the NIC fetch it just before sending? I am referring to the send/sendto calls.
That would avoid multiple memcpy calls.
Is it possible or not? (I will use netmap or DPDK right after, but it is important for me to know up front whether I can avoid multiple memcpy calls to stage the data to send.)
Edit: I am quite sure what I want to do is not possible, since the NIC won't access the OS mapping. So there is no way to avoid multiple memcpy calls, I guess.
Edit 2: What I am looking for is called scatter-gather. My main concern now is the kernel-bypass constraint.
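For what it's worth, on the normal kernel path the syscall-level form of scatter-gather is sendmsg() with an iovec array, which hands the kernel several pointers in a single call; netmap and DPDK have their own equivalents (chained buffers/mbuf segments). A minimal sketch:

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    /* Send a frame assembled from two separate pieces (header + payload)
     * without first memcpy'ing them into one contiguous buffer. */
    ssize_t send_gathered(int sock, struct sockaddr *dst, socklen_t dstlen,
                          void *hdr, size_t hdrlen, void *payload, size_t plen)
    {
        struct iovec iov[2] = {
            { .iov_base = hdr,     .iov_len = hdrlen },
            { .iov_base = payload, .iov_len = plen   },
        };
        struct msghdr msg = {
            .msg_name    = dst,
            .msg_namelen = dstlen,
            .msg_iov     = iov,
            .msg_iovlen  = 2,
        };
        return sendmsg(sock, &msg, 0);
    }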
Some Linux code is calling malloc in 100 places and I need to know how big any one chunk is.
Normally I'd just record these sizes in a my_malloc() wrapper function, but I'm not allowed to do that in this instance. Is there any way to ask the malloc subsystem for the chunk size of a malloc'd pointer?
Your best bet is to use the LD_PRELOAD trick to intercept calls to malloc (definition here). You do not even need to recompile your source code.
Depending on what you are trying to discover, Google Perftools might be useful as well.
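A minimal interposer sketch (the recursion caveat is real: dlsym() and stdio can themselves allocate, so a production version needs re-entrancy guards):

    /* mallocspy.c - build with:
     *   gcc -shared -fPIC -o mallocspy.so mallocspy.c -ldl
     * Run the unmodified program as:  LD_PRELOAD=./mallocspy.so ./program */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <stddef.h>
    #include <stdio.h>
    #include <unistd.h>

    static void *(*real_malloc)(size_t);

    void *malloc(size_t size)
    {
        char line[64];
        int n;

        if (!real_malloc)
            real_malloc = (void *(*)(size_t))dlsym(RTLD_NEXT, "malloc");

        void *p = real_malloc(size);

        /* Record the size of every chunk without touching the callers. */
        n = snprintf(line, sizeof(line), "malloc(%zu) = %p\n", size, p);
        write(STDERR_FILENO, line, n);
        return p;
    }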
*((size_t *)ptr - 1) & ~7
/me covers.
Unfortunately, there is no portable way to do that.
How can I get the sparse block size and check whether data is present at a given offset in a sparse file on reiserfs/ext3 in Linux?
I want to use it to implement simple copy-on-write block device using FUSE.
Or would I be better off keeping a bitmap in a separate file?
From /usr/src/linux/Documentation/filesystems/fiemap.txt:
"The fiemap ioctl is an efficient method for userspace to get file extent mappings. Instead of block-by-block mapping (such as bmap), fiemap returns a list of extents."
There's a quick example of usage in git://kernel.ubuntu.com/cking/debug-code/fiemap/. A sparse file will lack extents for the "missing" portions.
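A minimal sketch of querying extents with the fiemap ioctl (only the first 32 extents are requested here; a real tool would loop):

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/ioctl.h>
    #include <linux/fiemap.h>
    #include <linux/fs.h>

    int main(int argc, char **argv)
    {
        if (argc < 2)
            return 1;

        int fd = open(argv[1], O_RDONLY);
        struct fiemap *fm = calloc(1, sizeof(*fm) + 32 * sizeof(struct fiemap_extent));

        fm->fm_start        = 0;
        fm->fm_length       = FIEMAP_MAX_OFFSET;   /* map the whole file */
        fm->fm_extent_count = 32;
        if (fd < 0 || ioctl(fd, FS_IOC_FIEMAP, fm) < 0) {
            perror("FS_IOC_FIEMAP");
            return 1;
        }
        /* Holes simply have no extent covering them. */
        for (unsigned int i = 0; i < fm->fm_mapped_extents; i++)
            printf("extent %u: logical %llu, length %llu\n", i,
                   (unsigned long long)fm->fm_extents[i].fe_logical,
                   (unsigned long long)fm->fm_extents[i].fe_length);
        return 0;
    }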
Since Linux 3.1, lseek() provides the flags SEEK_HOLE and SEEK_DATA for seeking to the beginning of the next hole or the next data region, so this might be an alternative to the ioctl-based solution. I haven't tried either in practice, so I don't have any real experience to compare the two.
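A sketch of the lseek() approach for the "is there data at this offset?" check (SEEK_DATA needs _GNU_SOURCE):

    #define _GNU_SOURCE
    #include <unistd.h>

    /* Returns 1 if the given offset lies inside a data extent,
     * 0 if it falls in a hole (or past EOF, where lseek sets ENXIO). */
    int offset_has_data(int fd, off_t off)
    {
        off_t next_data = lseek(fd, off, SEEK_DATA);

        if (next_data == (off_t)-1)
            return 0;              /* no data at or after this offset */
        return next_data == off;   /* data starts exactly here */
    }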
Well, http://lxr.linux.no/#linux+v2.6.33/arch/um/drivers/cow_user.c indicates that User Mode Linux uses an explicit bitmap for this, FWIW.