I have a kernel module in which I capture a packet in the PRE_ROUTING hook for some processing. I then allocate a new skb (I can't reuse the original one) and put the processed payload of the input skb, together with the IP header, into this new skb. I then want to call netif_rx() on this new skb and let it traverse the kernel networking stack.
I am a little confused about the size of the new skb I should allocate and about where skb->data should point (to the network header or the MAC header). What should skb->len be, and should it include the MAC header or not?
len; // total length of new ip datagram
skb_new = dev_alloc_skb(len + LL_ALLOCATED_SPACE(skb->dev) + ETH_LEN);
After this, how much should I reserve for the link-layer header and trailer, and where should skb_new->data point?
I want to call netif_rx(skb_new) after filling in the required details in the skb. Basically, what should happen between allocating the skb and calling netif_rx()?
Any link or description will help.
Thanks in advance.
A socket buffer (skb) holds ALL of the information for the current protocol data unit (PDU) and accounts for ALL of the data being passed around for this PDU. Regardless of how much slack space you leave at the beginning of the skb, you point your skb->data to the beginning of YOUR data.
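To make that concrete, here is a minimal sketch of what typically sits between allocating the new skb and calling netif_rx() on a reasonably recent kernel. It assumes the new packet is a complete IP datagram of len bytes sitting in payload, and that dev is the device the original packet arrived on; the helper name inject_ip_datagram() is made up for illustration, and fixing up L4 checksums is left to you if you changed the payload.

#include <linux/etherdevice.h>
#include <linux/ip.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* Hypothetical helper: wrap a complete IP datagram of 'len' bytes in a new
 * skb and hand it to the stack as if it had just been received on 'dev'. */
static int inject_ip_datagram(struct net_device *dev,
                              const void *payload, unsigned int len)
{
    struct sk_buff *skb_new;

    /* Room for the data plus the headroom the device's link layer wants. */
    skb_new = dev_alloc_skb(len + LL_RESERVED_SPACE(dev));
    if (!skb_new)
        return -ENOMEM;

    /* Reserve the link-layer headroom; skb->data now points at what will
     * become the start of the IP header. */
    skb_reserve(skb_new, LL_RESERVED_SPACE(dev));

    /* Copy in IP header + payload; skb->len becomes 'len' - on receive the
     * MAC header is not counted in skb->len. */
    skb_put_data(skb_new, payload, len);
    skb_reset_network_header(skb_new);

    skb_new->dev = dev;
    skb_new->protocol = htons(ETH_P_IP);
    skb_new->pkt_type = PACKET_HOST;
    skb_new->ip_summed = CHECKSUM_UNNECESSARY; /* or recompute L4 checksums */

    return netif_rx(skb_new);
}

The key points: skb->data points at the network header (that is what the stack expects for ETH_P_IP on receive), and skb->len counts only the IP datagram, not the MAC header, which lives in the reserved headroom if you need one at all.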
I'm running Linux 5.1 on a Cyclone V SoC, which is an FPGA with two ARMv7 cores in one chip. My goal is to gather lots of data from an external interface and stream (part of) this data out through a TCP socket. The challenge here is that the data rate is very high and could come close to saturating the GbE interface. I have a working implementation that just uses write() calls to the socket, but it tops out at 55MB/s; roughly half the theoretical GbE limit. I'm now trying to get zero-copy TCP transmission to work to increase the throughput, but I'm hitting a wall.
To get the data out of the FPGA into Linux user-space, I've written a kernel driver. This driver uses a DMA block in the FPGA to copy a large amount of data from an external interface into DDR3 memory attached to the ARMv7 cores. The driver allocates this memory as a bunch of contiguous 1MB buffers when probed using dma_alloc_coherent() with GFP_USER, and exposes these to the userspace application by implementing mmap() on a file in /dev/ and returning an address to the application using dma_mmap_coherent() on the preallocated buffers.
So far so good; the user-space application is seeing valid data and the throughput is more than enough at >360MB/s with room to spare (the external interface is not fast enough to really see what the upper bound is).
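For context, the mmap() path described above usually boils down to something like the sketch below; the struct and field names (my_drvdata, cpu_addr, and so on) are invented for illustration, but the dma_mmap_coherent() call is the relevant part, since internally it ends up in remap_pfn_range(), which matters later.

#include <linux/dma-mapping.h>
#include <linux/fs.h>
#include <linux/mm.h>

/* Hypothetical per-device state for one preallocated coherent buffer. */
struct my_drvdata {
    struct device *dev;
    void          *cpu_addr;  /* from dma_alloc_coherent() */
    dma_addr_t     dma_addr;
    size_t         size;
};

static int my_mmap_coherent(struct file *file, struct vm_area_struct *vma)
{
    struct my_drvdata *d = file->private_data;
    size_t requested = vma->vm_end - vma->vm_start;

    if (requested > d->size)
        return -EINVAL;

    /* Map the coherent buffer into the calling process's address space. */
    return dma_mmap_coherent(d->dev, vma, d->cpu_addr, d->dma_addr, requested);
}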
To implement zero-copy TCP networking, my first approach was to enable SO_ZEROCOPY on the socket and pass MSG_ZEROCOPY to send():
/* SO_ZEROCOPY has already been enabled on fd via setsockopt() */
ssize_t sent_bytes = send(fd, buf, len, MSG_ZEROCOPY);
if (sent_bytes < 0) {
    perror("send");
    return -1;
}
However, this results in send: Bad address.
After googling for a bit, my second approach was to use a pipe with vmsplice() followed by splice():
ssize_t sent_bytes;
int pipes[2];
struct iovec iov = {
    .iov_base = buf,
    .iov_len = len
};

pipe(pipes);

sent_bytes = vmsplice(pipes[1], &iov, 1, 0);
if (sent_bytes < 0) {
    perror("vmsplice");
    return -1;
}

sent_bytes = splice(pipes[0], 0, fd, 0, sent_bytes, SPLICE_F_MOVE);
if (sent_bytes < 0) {
    perror("splice");
    return -1;
}
However, the result is the same: vmsplice: Bad address.
Note that if I replace the call to vmsplice() or send() with a function that just prints the data pointed to by buf (or with a send() without MSG_ZEROCOPY), everything works just fine; so the data is accessible to userspace, but the vmsplice()/send(..., MSG_ZEROCOPY) calls seem unable to handle it.
What am I missing here? Is there any way of using zero-copy TCP sending with a user-space address obtained from a kernel driver through dma_mmap_coherent()? Is there another approach I could use?
UPDATE
So I dove a bit deeper into the sendmsg() MSG_ZEROCOPY path in the kernel, and the call that eventually fails is get_user_pages_fast(). This call returns -EFAULT because check_vma_flags() finds the VM_PFNMAP flag set in the vma. This flag is apparently set when the pages are mapped into user space using remap_pfn_range() or dma_mmap_coherent(). My next approach is to find another way to mmap these pages.
As I posted in an update to my question, the underlying problem is that zero-copy networking does not work for memory that has been mapped using remap_pfn_range() (which dma_mmap_coherent() also uses under the hood). The reason is that this type of memory (with the VM_PFNMAP flag set) does not have metadata in the form of a struct page* associated with each page, which the zero-copy path needs in order to pin the pages.
The solution then is to allocate the memory in a way that does associate a struct page* with every page.
The workflow that now works for me to allocate the memory is:
Use struct page* page = alloc_pages(GFP_USER, page_order); to allocate a block of contiguous physical memory, where the number of contiguous pages that will be allocated is given by 2**page_order.
Split the high-order/compound page into 0-order pages by calling split_page(page, page_order);. This now means that struct page* page has become an array with 2**page_order entries.
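A minimal sketch of those two steps (error handling kept to a minimum; page_order is assumed to have been chosen so that (1 << page_order) * PAGE_SIZE covers the buffer you need):

#include <linux/gfp.h>
#include <linux/mm.h>

/* Allocate (1 << page_order) physically contiguous pages and split the
 * higher-order allocation into individual 0-order pages, each with its own
 * struct page, so they can later be handed to vm_insert_page(). */
static struct page *alloc_split_pages(unsigned int page_order)
{
    struct page *page = alloc_pages(GFP_USER, page_order);

    if (!page)
        return NULL;

    /* 'page' now behaves like an array of (1 << page_order) struct pages. */
    split_page(page, page_order);
    return page;
}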
Now to submit such a region to the DMA (for data reception):
dma_addr = dma_map_page(dev, page, 0, length, DMA_FROM_DEVICE);
dma_desc = dmaengine_prep_slave_single(dma_chan, dma_addr, length, DMA_DEV_TO_MEM, 0);
dmaengine_submit(dma_desc);
When we get a callback from the DMA that the transfer has finished, we need to unmap the region to transfer ownership of this block of memory back to the CPU, which takes care of caches to make sure we're not reading stale data:
dma_unmap_page(dev, dma_addr, length, DMA_FROM_DEVICE);
Now, when we want to implement mmap(), all we really have to do is call vm_insert_page() repeatedly for all of the 0-order pages that we pre-allocated:
static int my_mmap(struct file *file, struct vm_area_struct *vma) {
    int i, res = 0;
    ...
    for (i = 0; i < (1 << page_order); ++i) {  /* i.e. 2**page_order pages */
        if ((res = vm_insert_page(vma, vma->vm_start + i * PAGE_SIZE, &page[i])) < 0) {
            break;
        }
    }
    vma->vm_flags |= VM_LOCKED | VM_DONTCOPY | VM_DONTEXPAND | VM_DENYWRITE;
    ...
    return res;
}
When the file is closed, don't forget to free the pages:
for (i = 0; i < (1 << page_order); ++i) {
    __free_page(&dev->shm[i].pages[i]);
}
Implementing mmap() this way now allows a socket to use this buffer for sendmsg() with the MSG_ZEROCOPY flag.
Although this works, there are two things about this approach that don't sit well with me:
You can only allocate power-of-2-sized buffers with this method, although you could implement logic to call alloc_pages as many times as needed with decreasing orders to get any size of buffer made up of sub-buffers of varying sizes. This would then require some logic to tie these buffers together in the mmap() and to DMA them with scatter-gather (sg) calls rather than single.
split_page() says in its documentation:
* Note: this is probably too low level an operation for use in drivers.
* Please consult with lkml before using this in your driver.
These issues would be easily solved if there was some interface in the kernel to allocate an arbitrary amount of contiguous physical pages. I don't know why there isn't, but I don't find the above issues so important as to go digging into why this isn't available / how to implement it :-)
Maybe this will help you understand why alloc_pages requires a power-of-2 page count.
To optimize the page allocation process (which is invoked very frequently) and to reduce external fragmentation, the Linux kernel uses a per-CPU page cache and the buddy allocator to allocate memory (there is another allocator, slab, that serves allocations smaller than a page).
The per-CPU page cache serves single-page allocation requests, while the buddy allocator keeps 11 free lists, containing blocks of 2^0 through 2^10 physical pages respectively. These lists perform well when allocating and freeing pages, and of course the premise is that you are requesting a power-of-2-sized buffer.
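As an aside, if you want to see which order a given byte count maps to, the kernel's get_order() helper does the rounding for you; a small illustrative sketch:

#include <linux/gfp.h>
#include <linux/mm.h>

/* Illustrative only: round an arbitrary size up to a buddy-allocator order.
 * With 4 KiB pages, 1 MiB needs 256 pages, i.e. order 8; asking for
 * 1 MiB + 1 byte already rounds up to 512 pages, i.e. order 9. */
static struct page *alloc_buffer_pages(size_t bytes)
{
    unsigned int order = get_order(bytes); /* smallest order such that
                                              (1 << order) * PAGE_SIZE >= bytes */

    return alloc_pages(GFP_KERNEL, order);
}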
I know that each NIC has RX/TX rings in RAM that the OS uses for receiving/transmitting packets, and that one item (packet descriptor) in a ring includes the physical address of a packet, the length of the packet, etc. I wonder: does this descriptor point to an sk_buff? What happens if the packet is a GSO packet? Is it true that one descriptor in the ring = one packet = one sk_buff?
Does this descriptor point to an sk_buff?
Not exactly. An sk_buff is a software construct; roughly, a data structure containing meta-information to describe some chunk of network data AND point to the data itself. So a NIC descriptor doesn't need to point to an sk_buff - it may only point to a data buffer (a DMA/physical address is used).
And what happens if the packet is a GSO packet?
It is quite an ambiguous question to answer, since such offloads may be implemented in software (say, by the network stack) or may be done in hardware.
In the former case there is nothing to discuss in terms of NIC SW descriptors - the upper-layer application provides a contiguous chunk of data, and the network stack produces smaller packets from it, so the sk_buff-s handed over to the network driver already describe small packets.
In the latter case (HW offload) the network driver is supplied with huge chunks of data (by handing over single sk_buff-s or sk_buff chains to it), and the network driver in turn posts appropriate descriptors to the NIC. It may be one descriptor pointing to a big chunk of data, or a handful of descriptors pointing to smaller parts of the same contiguous data buffer - it doesn't matter much, since the offload magic takes place in the HW: the overall data chunk will be sliced and packet headers will be prepended accordingly, yielding many smaller network packets to be put on the wire.
Is this true that one descriptor in the ring = one packet = one sk_buff?
Strictly speaking, no. It depends. Your network driver may be asked to transmit one sk_buff describing one data buffer. However, under certain circumstances your driver may decide to post multiple descriptors pointing to the same chunk of data but with different offsets - i.e. the submission will be done in parts, and there will be multiple descriptors in the NIC's ring related to a single sk_buff. Also, one packet is not always the same as one sk_buff - a packet may be represented as a handful of segments, each described by a separate sk_buff, forming an sk_buff chain (see the next and prev fields in sk_buff).
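As a purely illustrative sketch (real descriptor layouts are entirely NIC-specific, and the struct and field names here are invented), a driver's TX path conceptually DMA-maps the skb's data and writes the resulting bus address and length into the next free ring slot:

#include <linux/dma-mapping.h>
#include <linux/skbuff.h>
#include <linux/types.h>

/* Purely illustrative descriptor layout; every NIC defines its own. */
struct my_tx_desc {
    __le64 buf_addr;   /* DMA/bus address of the packet data */
    __le16 len;        /* length of this fragment            */
    __le16 flags;      /* e.g. "end of packet", "ring owner" */
};

/* Hypothetical ring state for the sketch. */
struct my_tx_ring {
    struct my_tx_desc *desc;
    unsigned int       next;
    unsigned int       size;
};

static int my_post_tx(struct device *dev, struct my_tx_ring *ring,
                      struct sk_buff *skb)
{
    dma_addr_t addr = dma_map_single(dev, skb->data, skb->len, DMA_TO_DEVICE);
    struct my_tx_desc *d;

    if (dma_mapping_error(dev, addr))
        return -ENOMEM;

    d = &ring->desc[ring->next];
    d->buf_addr = cpu_to_le64(addr);
    d->len      = cpu_to_le16(skb->len);
    d->flags    = cpu_to_le16(0x1 /* hypothetical "last fragment" bit */);

    ring->next = (ring->next + 1) % ring->size;
    /* The driver would then kick the NIC's doorbell register and remember
     * 'skb' so it can dma_unmap_single() and free it on TX completion. */
    return 0;
}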
The Linux kernel uses an sk_buff data structure to describe each packet. When a packet arrives at the NIC, it invokes the DMA engine to place the packet into the kernel memory via empty sk_buff's stored in a ring buffer called rx_ring. An incoming packet is dropped if the ring buffer is full. When a packet is processed at higher layers, packet data remains in the same kernel memory, avoiding any extra memory copies.
http://www.ece.virginia.edu/cheetah/documents/papers/TCPlinux.pdf
That last sentence seems to indicate that incoming packet data is kept in kernel memory in sk_buff structs without redundancy. So I'd say the answer to your question is yes: that descriptor would point to an sk_buff. And yes, each packet is put in its own sk_buff in rx_ring.
sk_buff has nothing to do with physical network interfaces (not directly, at least). sk_buff lists store data as seen by the socket accessing software and kernel protocol handlers (which manipulate those lists to add/remove headers and/or alter data, e.g. when encryption is employed).
It is the responsibility of a low-level driver to translate sk_buff list contents into something the physical network adapter will understand. In particular, network hardware can be really dumb (as when doing networking over serial lines), in which case the driver will basically read the sk_buff lists byte by byte and send those over the wire.
More advanced adapters are usually capable of doing scatter/gather DMA - given a list of addresses in RAM, they are able to access each address and either obtain packet data from there or put received data back. However, the exact details of this mechanism are very much adapter-specific and in many cases are not even consistent between a single vendor's products.
Does this descriptor point to an sk_buff?
The answer is YES. This avoids copying memory from one place (the rx_ring dma buffer) to another (the sk_buff).
You can check the implementation of the b44 NIC driver (in drivers/net/ethernet/broadcom/b44.c): the function b44_init_rings() pre-allocates a constant number of sk_buffs for the rx_ring, which are also used as DMA buffers for the NIC.
static void b44_init_rings(struct b44 *bp)
{
    int i;

    b44_free_rings(bp);

    memset(bp->rx_ring, 0, B44_RX_RING_BYTES);
    memset(bp->tx_ring, 0, B44_TX_RING_BYTES);

    if (bp->flags & B44_FLAG_RX_RING_HACK)
        dma_sync_single_for_device(bp->sdev->dma_dev, bp->rx_ring_dma,
                                   DMA_TABLE_BYTES, DMA_BIDIRECTIONAL);

    if (bp->flags & B44_FLAG_TX_RING_HACK)
        dma_sync_single_for_device(bp->sdev->dma_dev, bp->tx_ring_dma,
                                   DMA_TABLE_BYTES, DMA_TO_DEVICE);

    for (i = 0; i < bp->rx_pending; i++) {
        if (b44_alloc_rx_skb(bp, -1, i) < 0)
            break;
    }
}
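Conceptually, the per-slot work done by b44_alloc_rx_skb() (allocate an skb whose data area doubles as the NIC's RX DMA buffer and remember the mapping for the descriptor) looks roughly like the simplified, driver-agnostic sketch below; the b44-specific offsets and bookkeeping are omitted, and my_rx_slot is invented for illustration:

#include <linux/dma-mapping.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* Hypothetical ring-slot bookkeeping. */
struct my_rx_slot {
    struct sk_buff *skb;      /* handed to the stack once the NIC fills it */
    dma_addr_t      mapping;  /* what actually goes into the RX descriptor */
};

static int rx_slot_alloc(struct net_device *ndev, struct device *dma_dev,
                         struct my_rx_slot *slot, unsigned int buf_len)
{
    struct sk_buff *skb = netdev_alloc_skb(ndev, buf_len);
    dma_addr_t mapping;

    if (!skb)
        return -ENOMEM;

    /* The skb's data area is the DMA buffer the NIC will write into. */
    mapping = dma_map_single(dma_dev, skb->data, buf_len, DMA_FROM_DEVICE);
    if (dma_mapping_error(dma_dev, mapping)) {
        dev_kfree_skb(skb);
        return -ENOMEM;
    }

    slot->skb = skb;
    slot->mapping = mapping;
    return 0;
}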
I am writing an application server that processes images (large data). I am trying to minimize copies when sending image data back to clients. The processed images that I need to send to clients are in buffers obtained from jemalloc. The ways I have thought of for sending the data back to the client are:
1) Simple write call.
// Allocate buffer buf.
// Store image data in this buffer.
write(socket, buf, len);
2) I obtain the buffer through mmap instead of jemalloc, though I presume jemalloc already creates the buffer using mmap. I then make a simple call to write.
buf = mmap(file, len); // Imagine proper options.
// Store image data in this buffer.
write(socket, buf, len);
3) I obtain a buffer through mmap like before. I then use sendfile to send the data:
buf = mmap(in_fd, len); // Imagine proper options.
// Store image data in this buffer.
int rc;
rc = sendfile(out_fd, in_fd, &offset, count);
// Deal with rc.
It seems like (1) and (2) will probably do the same thing, given that jemalloc probably allocates memory through mmap in the first place. I am not sure about (3) though. Will this really lead to any benefit? Figure 4 in this article on Linux zero-copy methods suggests that a further copy can be prevented by using sendfile:
no data is copied into the socket buffer. Instead, only descriptors with information about the whereabouts and length of the data are appended to the socket buffer. The DMA engine passes data directly from the kernel buffer to the protocol engine, thus eliminating the remaining final copy.
This seems like a win if everything works out. I don't know if my mmapped buffer counts as a kernel buffer, though. Also, I don't know when it is safe to re-use this buffer. Since only the fd and length are appended to the socket buffer, I assume that the kernel actually writes this data to the socket asynchronously. If it does, what does the return from sendfile signify? How would I know when to re-use this buffer?
So my questions are:
What is the fastest way to write large buffers (images in my case) to a socket? The images are held in memory.
Is it a good idea to call sendfile on a mmapped file? If yes, what are the gotchas? Does this even lead to any wins?
It seems like my suspicions were correct. I got my information from this article. Quoting from it:
Also these network write system calls, including sendfile, might and in many cases do return before the data sent over TCP by the method call has been acknowledged. These methods return as soon as all data is written into the socket buffers (sk buff) and is pushed to the TCP write queue, the TCP engine can manage alone from that point on. In other words at the time sendfile returns the last TCP send window is not actually sent to the remote host but queued. In cases where scatter-gather DMA is supported there is no separate buffer which holds these bytes, rather the buffers (sk buffs) just hold pointers to the pages of OS buffer cache, where the contents of file is located. This might lead to a race condition if we modify the content of the file corresponding to the data in the last TCP send window as soon as sendfile is returned. As a result TCP engine may send newly written data to the remote host instead of what we originally intended to send.
Provided the buffer from an mmapped file is even considered "DMA-able", it seems like there is no way to know when it is safe to re-use it without an explicit acknowledgement (over the network) from the actual client. I might have to stick to simple write calls and incur the extra copy. There is a paper (also from the article) with more details.
Edit: This article on the splice call also shows the problems. Quoting it:
Be aware, when splicing data from a mmap'ed buffer to a network socket, it is not possible to say when all data has been sent. Even if splice() returns, the network stack may not have sent all data yet. So reusing the buffer may overwrite unsent data.
For cases 1 and 2 - does the operation you marked as // Store image data in this buffer require any conversion, or is it just a plain copy from memory to buf?
If it's just plain copy, you can use write directly on the pointer obtained from jemalloc.
Assuming that img is a pointer obtained from jemalloc and size is the size of your image, just run the following code:
int result;
int sent = 0;

while (sent < size) {
    result = write(socket, img + sent, size - sent);
    if (result < 0) {
        /* error handling here */
        break;
    }
    sent += result;
}
This works correctly for blocking I/O (the default behavior). If you need to write the data in a non-blocking manner, you should be able to rework the code on your own, but now you have the idea.
For case 3 - sendfile is for sending data from one descriptor to another. That means you can, for example, send data from a file directly to a TCP socket without needing to allocate any additional buffer. So, if the image you want to send to a client is in a file, just go for sendfile. If you have it in memory (because you processed it somehow, or just generated it), use the approach I mentioned earlier.
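If the image does live in a file, a sendfile()-based loop can look roughly like the sketch below (sock_fd, img_fd and file_size are assumed to come from socket()/open()/fstat()):

#include <stdio.h>
#include <sys/sendfile.h>
#include <sys/types.h>

/* Send 'file_size' bytes from img_fd (a regular file) to sock_fd.
 * sendfile() copies inside the kernel, so no user-space buffer is needed. */
static int send_image_file(int sock_fd, int img_fd, off_t file_size)
{
    off_t offset = 0;

    while (offset < file_size) {
        ssize_t n = sendfile(sock_fd, img_fd, &offset, file_size - offset);
        if (n < 0) {
            perror("sendfile");
            return -1;
        }
        if (n == 0)          /* unexpected EOF */
            break;
        /* sendfile() advances 'offset' by the number of bytes written. */
    }
    return (offset == file_size) ? 0 : -1;
}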
I want to enqueue an skb on more than one queue, so I thought of using the cloning option.
Now my question is: if I call kfree_skb on the cloned skb, will it release the original skb as well, or just drop one reference?
Thanks!
kfree_skb() will do the right thing with cloned skbuffs, i.e. free the skbuff structure itself but not the data, if it is still referenced by other skbuffs.
This is done in skb_release_data(), which checks whether the skbuff is not a clone, or whether this was the last reference to skb->data (done in a roundabout way to support headerless skbuffs, which hold references to the payload part of skb->data (the higher 16 bits of skb->dataref), besides the usual reference to the whole of skb->data).
When a clone of an skb is made, new memory is allocated for the cloned sk_buff, and all of the struct sk_buff members of the clone are private to the clone.
The data, i.e. the packet, however, is shared between the original skb and its clone; only the sk_buff structure is copied to new memory. If you free the original skb, the packet data is released only once the dataref count drops to zero, i.e. when no other skb references it any more. Here the data is your packet.
If you don't want to lose the data when freeing either skb, use skb_copy instead of skb_clone: skb_copy copies both the sk_buff and the packet to a new memory area.
EDIT: edited the previous reply with some corrections.
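A minimal sketch of the difference, assuming skb is the buffer you want on two queues (q1 and q2 being struct sk_buff_head queues you own):

#include <linux/skbuff.h>

/* Enqueue the same packet data on two queues without duplicating it: the
 * clone shares skb->data with the original, only the sk_buff metadata is
 * duplicated. kfree_skb() on either one just drops that sk_buff's reference;
 * the shared data goes away only when the last user frees it. */
static int enqueue_on_two_queues(struct sk_buff *skb,
                                 struct sk_buff_head *q1,
                                 struct sk_buff_head *q2)
{
    struct sk_buff *clone = skb_clone(skb, GFP_ATOMIC);

    if (!clone)
        return -ENOMEM;

    skb_queue_tail(q1, skb);
    skb_queue_tail(q2, clone);
    return 0;
}

If one consumer needs to modify the payload independently, skb_copy(skb, GFP_ATOMIC) is the drop-in alternative that duplicates the data as well.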
I am studying the 8139too.c driver. For transmit, the driver calls skb_copy_and_csum_dev() to copy the entire socket buffer into a descriptor ring whose buffer is big enough for the entire socket buffer.
If the descriptor ring buffer were smaller than skb->data, what would be the correct way to break skb->data up and copy it into multiple descriptors?
(assuming scatter/gather is not being used)
Thank you very much.
In the function rtl8139_start_xmit() of the 8139 driver, the driver first checks whether the length of skb->data is larger than TX_BUF_SIZE, which is MAX_ETH_FRAME_SIZE. If it is larger than TX_BUF_SIZE, the driver drops the packet.
if (likely(len < TX_BUF_SIZE)) {
    if (len < ETH_ZLEN)
        memset(tp->tx_buf[entry], 0, ETH_ZLEN);
    skb_copy_and_csum_dev(skb, tp->tx_buf[entry]);
    dev_kfree_skb(skb);
} else {
    dev_kfree_skb(skb);
    tp->stats.tx_dropped++;
    return 0;
}
Generally, if the packet you try to send is larger than MAX_ETH_FRAME_SIZE, the IP layer of the protocol stack will fragment the packet, which is the "breaking up" you mentioned. But by the time a packet goes down to the driver, it won't be broken up any further.
More information:
IP fragmentation on wikipedia
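That said, if your hardware really did force you to spread one skb's data across several small per-descriptor buffers by copying (still without scatter/gather), skb_copy_bits() lets you copy arbitrary byte ranges out of an skb, including paged fragments; a hedged sketch, with DESC_BUF_SIZE and the buffer array invented for illustration:

#include <linux/kernel.h>
#include <linux/skbuff.h>

#define DESC_BUF_SIZE 512  /* hypothetical per-descriptor buffer size */

/* Copy the whole packet into consecutive fixed-size descriptor buffers.
 * Returns the number of buffers used, or a negative error. */
static int copy_skb_to_desc_bufs(const struct sk_buff *skb,
                                 u8 desc_buf[][DESC_BUF_SIZE], int max_bufs)
{
    unsigned int offset = 0;
    int used = 0;

    while (offset < skb->len && used < max_bufs) {
        unsigned int chunk = min_t(unsigned int, DESC_BUF_SIZE,
                                   skb->len - offset);

        if (skb_copy_bits(skb, offset, desc_buf[used], chunk) < 0)
            return -EFAULT;

        offset += chunk;
        used++;
    }
    return (offset == skb->len) ? used : -ENOSPC;
}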