I have a driver that needs to:
receive data from an FPGA
DMA data to another device (DSP) for encoding
send the encoded data via UDP to an external host
The original plan was to have the application handle step 3, but the application doesn't get the processor in time to process the data before the next set of data arrives from the FPGA.
Is there a way to force the scheduler (from the driver) to run my application?
If not, I think work queues are likely the solution I need to use, but I'm not sure how/where to call into the network stack/driver to accomplish the UDP transfers from the work queues.
Any ideas?
You should try to discover why the application "can't get the data fast enough".
Your memory bandwidth is probably vastly superior to the typical Ethernet bandwidth, so even if passing data from the driver to the application involves copying, that copy should not be the bottleneck.
If the udp link is not fast enough in userspace, it won't be faster in kernelspace.
What you need to do is :
understand why your application is not fast enough, maybe by stracing it.
implement queuing in userspace.
You can probably split your application into two threads sharing a buffer list (a sketch follows the two points below):
thread A waits for the driver to have data available, and puts it at the tail of the list.
thread B reads data from the head of the list and sends it through UDP. If for some reason thread B is busy waiting for a particular buffer to be sent, the FIFO fills a bit, but as long as the UDP link bandwidth is larger than the rate of data coming from the DSP you should be fine.
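A minimal sketch of that two-thread scheme using POSIX threads (the device node, buffer size, port and destination address are made-up placeholders, not anything from the original setup):

```c
/*
 * Sketch of the two-thread scheme described above: thread A pulls encoded
 * buffers from the driver, thread B drains them over UDP.  The device node,
 * buffer size, port and destination address are made-up placeholders.
 */
#include <pthread.h>
#include <stdint.h>
#include <unistd.h>
#include <fcntl.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

#define NBUF  64
#define BUFSZ 8192                       /* assumed size of one encoded chunk */

static uint8_t ring[NBUF][BUFSZ];
static size_t  len[NBUF];
static int     head, tail, count;
static pthread_mutex_t lock      = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;
static pthread_cond_t  not_full  = PTHREAD_COND_INITIALIZER;

static void *thread_a(void *arg)         /* driver -> tail of the list */
{
    int fd = open("/dev/dsp_encoder", O_RDONLY);   /* hypothetical node */
    (void)arg;

    for (;;) {
        pthread_mutex_lock(&lock);
        while (count == NBUF)            /* FIFO full: wait for thread B */
            pthread_cond_wait(&not_full, &lock);
        pthread_mutex_unlock(&lock);

        ssize_t n = read(fd, ring[tail], BUFSZ);    /* blocks until data */
        if (n <= 0)
            break;

        pthread_mutex_lock(&lock);
        len[tail] = (size_t)n;
        tail = (tail + 1) % NBUF;
        count++;
        pthread_cond_signal(&not_empty);
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

static void *thread_b(void *arg)         /* head of the list -> UDP */
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in dst = { .sin_family = AF_INET,
                               .sin_port   = htons(5000) };  /* example port */
    inet_pton(AF_INET, "192.168.1.10", &dst.sin_addr);       /* example host */
    (void)arg;

    for (;;) {
        pthread_mutex_lock(&lock);
        while (count == 0)               /* FIFO empty: wait for thread A */
            pthread_cond_wait(&not_empty, &lock);
        int i = head;
        pthread_mutex_unlock(&lock);

        sendto(s, ring[i], len[i], 0, (struct sockaddr *)&dst, sizeof(dst));

        pthread_mutex_lock(&lock);       /* release the slot after sending */
        head = (head + 1) % NBUF;
        count--;
        pthread_cond_signal(&not_full);
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, thread_a, NULL);
    pthread_create(&b, NULL, thread_b, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}
```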
Moving things into the kernel does not make things magically faster; it is just MUCH harder to code, debug and trace.
Suppose you have a PCIE device presenting a single BAR and one DMA area declared with pci_alloc_consistent(..). The BAR's flags indicate non-prefetchable, non-cacheable, memory region.
What are the principal causes of latency in reading the DMA area, and similarly, what are the causes of latency in reading the BAR?
Thank you for answering this simple question :D!
This smells a bit like homework but I suspect the concepts are not well understood by many so I'll add an answer.
The best way to think through this is to consider what needs to happen in order for a read to complete. The CPU and the device are on separate sides of the PCIe link. It's helpful to view PCI-Express as a mini network. Each link is point-to-point (like your PC connected to another PC). There may also be intermediate switches (aka bridges in PCI). In that case, it's like your PC is connected to a switch that is in turn connected to the other PC.
So, if the CPU wants to read its own memory (the "DMA" region you allocated), it's relatively fast. It has a high speed bus that is designed to make that happen fast. Also, there are multiple layers of caching built in to keep frequently (or recently) used data "close" to the CPU.
But if the CPU wants to read from the BAR in the device, the CPU (actually the PCIe root complex integrated with the CPU) must compose a PCIe read request, send the request, and wait while the device decodes the request, accesses the BAR location and sends back the requested data. Tick tock. Your CPU is doing nothing else while it waits for this to complete.
This is pretty much analogous to asking for a web page from another computer. You formulate an HTTP request, send it and wait while the web server accesses the content, formulates a return packet and sends it to you.
If the device wishes to access memory residing "in" the CPU, it's pretty much the exact same thing in reverse. ("Direct memory access" just means that it doesn't need to interrupt the CPU to handle it, but something [the root complex here] is still responsible for decoding the request, fulfilling the read and sending back the resulting data.)
Also, if there are intermediate PCIe switches between CPU and device, those may add additional buffering/queuing delays (exactly as a switch or router might in a network). And any such delays are doubled since they're incurred in both directions.
Of course PCIe is very fast, so all of that happens in mere nanoseconds, but that's still orders of magnitude slower than a "local" read.
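To make the contrast concrete, here is a rough driver-side sketch (the register offset, buffer size, and structure are invented for illustration): the read from the coherent DMA buffer is an ordinary, cacheable memory access, while the read from the ioremapped BAR turns into a PCIe read request that stalls until the completion returns.

```c
/*
 * Sketch contrasting the two reads (kernel driver context).  The register
 * offset, buffer size, and structure are invented for illustration.
 */
#include <linux/pci.h>
#include <linux/io.h>

#define DMA_BUF_SIZE 4096
#define STATUS_REG   0x10                 /* hypothetical register offset */

struct mydev {
    void __iomem *bar;                    /* mapped BAR 0                  */
    void         *dma_virt;               /* kernel virtual addr of buffer */
    dma_addr_t    dma_bus;                /* bus address handed to device  */
};

static int mydev_setup(struct pci_dev *pdev, struct mydev *d)
{
    d->bar = pci_iomap(pdev, 0, 0);       /* the non-prefetchable BAR      */
    d->dma_virt = pci_alloc_consistent(pdev, DMA_BUF_SIZE, &d->dma_bus);
    if (!d->bar || !d->dma_virt)
        return -ENOMEM;
    return 0;
}

static void mydev_poll(struct mydev *d)
{
    u32 first_word, status;

    /* Ordinary (cacheable) memory read of the DMA area: cheap, and served
     * from cache if the line is already hot.                              */
    first_word = *(volatile u32 *)d->dma_virt;

    /* Read of the BAR: the root complex issues a PCIe read request and the
     * CPU stalls until the device sends the completion back.              */
    status = ioread32(d->bar + STATUS_REG);

    pr_info("status=%08x first_word=%08x\n", status, first_word);
}
```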
I'm struggling with why I should try the multi-packet-send PACKET_MMAP method.
I've got around 3 million bytes of data every 20 ms that I'm going to send over a 10 Gbps interface.
I need to process all data in a packet, so the data is going to be in the cache; then I just send it the 'normal' way (sendto). In this case the move to the kernel would be from cache, so that's one memory transfer.
Since I need to process all data in the packet, using PACKET_MMAP would also be one move of the data from userspace to userspace, then DMA from userspace. So would PACKET_MMAP gain me anything? My guess is that it would not, since both methods move the data once; even though it looks like two moves in the sendto case, because the data will reside in the cache it will effectively be only one.
Am I wrong?
Thanks for any help.
/Anders.
So, I have an incoming UDP stream composed of 272-byte packets at a data rate of about 5.12 Gb/s (around 2.35e6 packets per second). This data is being sent by an FPGA-based custom board. The packet size is a limit of the digital design being run, so although it could theoretically be increased to make things more efficient, that would require a large amount of work. At the receiving end these packets are read and interpreted by a network thread and placed in a circular buffer shared with a buffering thread, which copies the data to a GPU for processing.
The above setup at the receiving end could cope with 5.12 Gb/s for 4096-byte packets (used in a different design) using simple recv calls; however, with the current packet size I'm having a hard time keeping up with the packet flow: too much time is being "wasted" in context switching and copying small data segments from kernel space to user space. I did a quick test implementation using recvmmsg, but things didn't improve by much. On average I can process about 40% of the incoming packets.
So I was wondering whether it was possible to get a handle of the kernel's UDP data buffer for my application (mmap style), or use some sort of zero-copying from kernel to user space?
Alternatively, do you know of any other method which would reduce this overhead and be capable of performing the required processing?
This is running on a Linux machine (kernel 3.2.0-40) using C code.
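For reference, a receive loop along the lines of the recvmmsg test mentioned above might look roughly like this (the port number and batch size are arbitrary examples); it amortizes the system-call cost over a batch, but every datagram is still copied from kernel space to user space:

```c
/*
 * Sketch of a recvmmsg()-based receive loop like the quick test described
 * above.  The port number and batch size are arbitrary examples.
 */
#define _GNU_SOURCE
#include <sys/socket.h>
#include <netinet/in.h>
#include <string.h>
#include <stdio.h>

#define BATCH    64
#define PKT_SIZE 272

int main(void)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in addr = { .sin_family      = AF_INET,
                                .sin_port        = htons(60000), /* example */
                                .sin_addr.s_addr = htonl(INADDR_ANY) };
    bind(s, (struct sockaddr *)&addr, sizeof(addr));

    static char bufs[BATCH][PKT_SIZE];
    struct iovec   iov[BATCH];
    struct mmsghdr msgs[BATCH];
    memset(msgs, 0, sizeof(msgs));
    for (int i = 0; i < BATCH; i++) {
        iov[i].iov_base            = bufs[i];
        iov[i].iov_len             = PKT_SIZE;
        msgs[i].msg_hdr.msg_iov    = &iov[i];
        msgs[i].msg_hdr.msg_iovlen = 1;
    }

    unsigned long long received = 0;
    for (;;) {
        /* One system call returns up to BATCH datagrams. */
        int n = recvmmsg(s, msgs, BATCH, MSG_WAITFORONE, NULL);
        if (n < 0)
            break;
        for (int i = 0; i < n; i++) {
            /* msgs[i].msg_len bytes of payload are now in bufs[i]; this is
             * where they would be handed to the ring buffer / GPU stage.  */
            received += msgs[i].msg_len;
        }
    }
    printf("received %llu bytes\n", received);
    return 0;
}
```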
There is support for mmap packet receiving in Linux.
It's not as easy to use as UDP sockets, because you will receive packets as from a RAW socket (i.e., with the link-layer headers still attached).
See this for more information.
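A minimal sketch of what setting up a PACKET_RX_RING looks like (the block/frame sizes here are arbitrary examples; as noted, the application then has to parse the raw Ethernet/IP/UDP headers in each frame itself):

```c
/*
 * Sketch: PACKET_MMAP receive ring.  Block/frame sizes are arbitrary
 * examples; real code must walk the raw frames and parse the
 * Ethernet/IP/UDP headers itself.
 */
#include <sys/socket.h>
#include <sys/mman.h>
#include <linux/if_packet.h>
#include <linux/if_ether.h>
#include <arpa/inet.h>
#include <poll.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int s = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));

    struct tpacket_req req = {
        .tp_block_size = 4096,
        .tp_block_nr   = 256,
        .tp_frame_size = 2048,      /* >= 272-byte payload + headers */
        .tp_frame_nr   = 512,
    };
    setsockopt(s, SOL_PACKET, PACKET_RX_RING, &req, sizeof(req));

    size_t ring_size = (size_t)req.tp_block_size * req.tp_block_nr;
    uint8_t *ring = mmap(NULL, ring_size, PROT_READ | PROT_WRITE,
                         MAP_SHARED, s, 0);

    unsigned int frame = 0;
    for (;;) {
        struct tpacket_hdr *hdr =
            (void *)(ring + (size_t)frame * req.tp_frame_size);

        if (!(hdr->tp_status & TP_STATUS_USER)) {
            /* Ring slot still owned by the kernel: wait for packets. */
            struct pollfd pfd = { .fd = s, .events = POLLIN };
            poll(&pfd, 1, -1);
            continue;
        }

        uint8_t *pkt = (uint8_t *)hdr + hdr->tp_mac;   /* raw frame    */
        /* ...parse Ethernet/IP/UDP headers and consume the payload... */
        (void)pkt;

        hdr->tp_status = TP_STATUS_KERNEL;             /* give it back */
        frame = (frame + 1) % req.tp_frame_nr;
    }
}
```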
We have some statistics information from our custom hardware that is displayed whenever the user asks for it with a command in Linux user space. The implementation currently uses the PROC interface. When we started adding more statistics information we hit a problem: the statistics command had to be executed twice to get all the data, because the PROC interface is restricted to one page.
As mentioned above, the data transfer between the kernel and user space is not critical, but the user may take some decisions based on the data. Our requirements for this interface are that it should be capable of transferring amounts of data possibly greater than 8192 bytes, the command should use minimal kernel resources (locks etc.), and it needs to be quick.
Using ioctl can solve the issue, but since the command is not exactly controlling the device, just collecting some statistics information, I'm not sure whether it is considered a good mechanism to use in Linux. We are currently using the 3.4 kernel; I'm not sure whether Netlink is lossy in this version (in previous versions I came across issues where the socket starts to drop data when the queue becomes full). mmap is another option. Can anyone suggest what would be the best interface to use?
Kernel services can send information directly to user applications over Netlink, while you'd have to explicitly poll the kernel with ioctl functions, a relatively expensive operation.
Netlink comms is very much asynchronous, with each side receiving messages at some point after the other side sends them. ioctls are purely synchronous: “Hey kernel, WAKE UP! I need you to process my request NOW! CHOP CHOP!”
Netlink supports multicast communications between the kernel and multiple user-space processes, while ioctls are strictly one-to-one.
Netlink messages can be lost for various reasons (e.g. out of memory), while ioctls are generally more reliable due to their immediate-processing nature.
So, if you are asking the kernel for statistics from user space (application-initiated requests), it is more reliable and easier to use ioctl, while if you generate statistics in kernel space and want the kernel to push that data to user space (the application) asynchronously, you have to use Netlink sockets.
You can do an ioctl _IO call (rather than _IOR, _IOW, or _IOWR). Ioctls can be very useful for collecting information. You'll have a lot of flexibility this way, in that you can pass different-size buffers or structs to fill with data.
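As a rough sketch of that approach, the user-space side of such a statistics ioctl might look like this (the magic number, command, device node and struct layout are all invented; the matching driver handler would copy_to_user() the data into the struct whose pointer is passed as the argument):

```c
/*
 * Sketch of the user-space side of a statistics ioctl.  The magic number,
 * command, device node and struct layout are invented; the matching driver
 * handler would copy_to_user() the data into the struct whose pointer is
 * passed as the ioctl argument.
 */
#include <stdint.h>
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/ioctl.h>

struct hw_stats {                      /* shared with the kernel module  */
    uint32_t version;
    uint64_t rx_frames;
    uint64_t tx_frames;
    uint64_t crc_errors;
    /* ...can grow well past one page, unlike the PROC output...         */
};

#define HWSTATS_MAGIC   'H'
#define HWSTATS_GET_ALL _IO(HWSTATS_MAGIC, 0x01)   /* plain _IO command  */

int main(void)
{
    struct hw_stats st;
    int fd = open("/dev/hwstats", O_RDWR);         /* hypothetical node  */

    if (fd < 0)
        return 1;

    /* Driver side (sketch): in unlocked_ioctl, fill a struct hw_stats and
     *     copy_to_user((void __user *)arg, &stats, sizeof(stats));       */
    if (ioctl(fd, HWSTATS_GET_ALL, &st) == 0)
        printf("rx=%llu tx=%llu crc_err=%llu\n",
               (unsigned long long)st.rx_frames,
               (unsigned long long)st.tx_frames,
               (unsigned long long)st.crc_errors);

    close(fd);
    return 0;
}
```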
I've been reading about what happens after packets are captured by NICs, and the more I read, the more I'm confused.
Firstly, I've read that traditionally, after a packet is captured by the NIC, it gets copied to a block of memory in the kernel space, then to the user space for whatever application that then works on the packet data. Then I read about DMA, where the NIC directly copies the packet into memory, bypassing the CPU. So is the NIC -> kernel memory -> User space memory flow still valid? Also, do most NIC (e.g. Myricom) use DMA to improve packet capture rates?
Secondly, does RSS (Receive Side Scaling) work similarly in both Windows and Linux systems? I can only find detailed explanations on how RSS works in MSDN articles, where they talk about how RSS (and MSI-X) works on Windows Server 2008. But the same concept of RSS and MSI-X should still apply for linux systems, right?
Thank you.
Regards,
Rayne
How this process plays out is mostly up to the driver author and the hardware, but for the drivers I've looked at or written and the hardware I've worked with, this is usually the way it works:
At driver initialization, it will allocate some number of buffers and give these to the NIC.
When a packet is received by the NIC, it pulls the next address off its list of buffers, DMAs the data directly into it, and notifies the driver via an interrupt.
The driver gets the interrupt and either turns the buffer over to the kernel, or allocates a new kernel buffer and copies the data into it. "Zero-copy networking" is the former, and obviously requires support from the operating system (more on this below).
The driver then either allocates a new buffer (in the zero-copy case) or re-uses the existing one; in either case, a buffer is given back to the NIC for future packets, as sketched in code below.
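A very rough sketch of those steps from the driver's point of view (the structure, ring size and helper names are invented; every real NIC has its own descriptor format and registers, and error handling is omitted):

```c
/*
 * Rough sketch of the buffer handling described above.  The structure,
 * ring size and helper names are invented; real NICs each have their own
 * descriptor formats and registers, and error handling is omitted.
 */
#include <linux/netdevice.h>
#include <linux/etherdevice.h>
#include <linux/skbuff.h>
#include <linux/dma-mapping.h>

#define RX_RING_SIZE 256
#define RX_BUF_LEN   2048

struct rx_slot {
    struct sk_buff *skb;            /* buffer currently owned by the NIC   */
    dma_addr_t      dma;            /* bus address programmed into the NIC */
};

struct mynic {
    struct net_device *ndev;
    struct device     *dev;
    struct rx_slot     ring[RX_RING_SIZE];
    unsigned int       next;        /* next slot the NIC will fill         */
};

/* Step 1: at init, allocate buffers and hand their bus addresses to the NIC. */
static int mynic_rx_init(struct mynic *nic)
{
    int i;

    for (i = 0; i < RX_RING_SIZE; i++) {
        struct sk_buff *skb = netdev_alloc_skb(nic->ndev, RX_BUF_LEN);

        if (!skb)
            return -ENOMEM;
        nic->ring[i].skb = skb;
        nic->ring[i].dma = dma_map_single(nic->dev, skb->data,
                                          RX_BUF_LEN, DMA_FROM_DEVICE);
        /* Writing nic->ring[i].dma into the NIC's descriptor ring would
         * happen here (hardware-specific, omitted).                      */
    }
    return 0;
}

/* Steps 2-4: on interrupt, pass the filled buffer up the stack ("zero copy"
 * within the kernel) and post a fresh buffer to the NIC in its place.      */
static void mynic_rx_irq(struct mynic *nic, unsigned int bytes)
{
    struct rx_slot *slot = &nic->ring[nic->next];
    struct sk_buff *skb  = slot->skb;

    dma_unmap_single(nic->dev, slot->dma, RX_BUF_LEN, DMA_FROM_DEVICE);
    skb_put(skb, bytes);
    skb->protocol = eth_type_trans(skb, nic->ndev);
    netif_rx(skb);                                  /* hand it to the kernel */

    /* Replace the buffer we just gave away. */
    slot->skb = netdev_alloc_skb(nic->ndev, RX_BUF_LEN);
    slot->dma = dma_map_single(nic->dev, slot->skb->data,
                               RX_BUF_LEN, DMA_FROM_DEVICE);
    nic->next = (nic->next + 1) % RX_RING_SIZE;
}
```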
Zero-copy networking within the kernel isn't so bad. Zero-copy all the way down to userland is much harder. Userland gets data, but network packets are made up of both header and data. At the least, true zero-copy all the way to userland requires support from your NIC so that it can DMA packets into separate header/data buffers. The headers are recycled once the kernel routes the packet to its destination and verifies the checksum (for TCP, either in hardware if the NIC supports it or in software if not; note that if the kernel has to compute the checksum itself, it may as well copy the data too: looking at the data incurs the cache misses anyway, and copying it elsewhere can be nearly free with tuned code).
Even assuming all the stars align, the data isn't actually in your user buffer when it is received by the system. Until an application asks for the data, the kernel doesn't know where it will end up. Consider the case of a multi-process daemon like Apache. There are many child processes, all listening on the same socket. You can also establish a connection, fork(), and both processes are able to recv() incoming data.
TCP packets on the Internet are usually 1460 bytes of payload (MTU of 1500 = 20 byte IP header + 20 byte TCP header + 1460 bytes data). 1460 is not a power of 2 and won't match the page size on any system you'll find. This presents problems for reassembly of the data stream. Remember that TCP is stream-oriented: there is no boundary preserved between the sender's writes, and two 1000-byte writes waiting at the receiver will be consumed entirely by a single 2000-byte read.
Taking this further, consider the user buffers. These are allocated by the application. In order to be used for zero-copy all the way down, the buffer needs to be page-aligned and not share that memory page with anything else. At recv() time, the kernel could theoretically remap the old page with the one containing the data and "flip" it into place, but this is complicated by the reassembly issue above since successive packets will be on separate pages. The kernel could limit the data it hands back to each packet's payload, but this will mean a lot of additional system calls, page remapping and likely lower throughput overall.
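Just to illustrate the alignment requirement (and only that; the reassembly and remapping problems above remain), a page-aligned, whole-page user buffer can be obtained like this:

```c
/*
 * Sketch: allocating a recv() buffer that starts on a page boundary and
 * occupies whole pages, as the page-flipping scheme above would require.
 */
#include <stdlib.h>
#include <unistd.h>

void *alloc_aligned_buf(size_t len)
{
    long   page    = sysconf(_SC_PAGESIZE);
    size_t rounded = ((len + page - 1) / page) * page;  /* whole pages */
    void  *buf     = NULL;

    if (posix_memalign(&buf, page, rounded) != 0)
        return NULL;
    return buf;
}
```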
I'm really only scratching the surface on this topic. I worked at a couple of companies in the early 2000s trying to extend the zero-copy concepts down into userland. We even implemented a TCP stack in userland and circumvented the kernel entirely for applications using the stack, but that brought its own set of problems and was never production quality. It's a very hard problem to solve.
Take a look at this paper, http://www.ece.virginia.edu/cheetah/documents/papers/TCPlinux.pdf; it might help clear up some of the memory management questions.