I am writing a driver for the Altera SoC Development Kit and need to support two modes of data transfer to/from an FPGA:
FIFO transfers: When writing to (or reading from) an FPGA FIFO, the destination (or source) address must not be incremented by the DMA controller.
non-FIFO transfers: These are normal (RAM-like) transfers where both the source and destination addresses require an increment for each word transferred.
The particular DMA controller I am using is the ARM CoreLink DMA-330, and its Linux driver is pl330.c (drivers/dma/pl330.c). This DMA controller provides a mechanism to switch between "Fixed-address burst" and "Incrementing-address burst" (these are synonymous with my "FIFO transfers" and "non-FIFO transfers"). The pl330 driver specifies which behavior it wants by setting the appropriate bits in the CCRn register:
#define CC_SRCINC (1 << 0)
#define CC_DSTINC (1 << 14)
My question: it is not at all clear to me how clients of the pl330 (my driver, for example) should specify the address-incrementing behavior.
The DMA engine client API says nothing about how to specify this, while the DMA engine provider API simply states:
Addresses pointing to RAM are typically incremented (or decremented)
after each transfer. In case of a ring buffer, they may loop
(DMA_CYCLIC). Addresses pointing to a device's register (e.g. a FIFO)
are typically fixed.
without giving any detail as to how the address types are communicated to providers (in my case, the pl330 driver).
Then, in the pl330_prep_slave_sg method, it does:
if (direction == DMA_MEM_TO_DEV) {
    desc->rqcfg.src_inc = 1;
    desc->rqcfg.dst_inc = 0;
    desc->req.rqtype = MEMTODEV;
    fill_px(&desc->px,
            addr, sg_dma_address(sg), sg_dma_len(sg));
} else {
    desc->rqcfg.src_inc = 0;
    desc->rqcfg.dst_inc = 1;
    desc->req.rqtype = DEVTOMEM;
    fill_px(&desc->px,
            sg_dma_address(sg), addr, sg_dma_len(sg));
}
where later desc->rqcfg.src_inc and desc->rqcfg.dst_inc are used by the driver to specify the address-increment behavior.
This implies the following:
Specifying direction = DMA_MEM_TO_DEV means the client wishes to push data from RAM into a FIFO, and presumably DMA_DEV_TO_MEM means the client wishes to pull data from a FIFO into RAM.
Scatter-gather DMA operations (for the pl330 at least) are restricted to cases where either the source or destination end point is a FIFO. What if I wanted to do a scatter-gather operation from system RAM into FPGA (non-FIFO) memory?
Am I misunderstanding and/or overlooking something? Does the DMA engine already provide an (undocumented) mechanism to specify address-increment behavior?
Look at this:
pd->device_prep_dma_memcpy = pl330_prep_dma_memcpy;
pd->device_prep_dma_cyclic = pl330_prep_dma_cyclic;
pd->device_prep_slave_sg = pl330_prep_slave_sg;
It means you have different approaches available, just as you have read in the documentation. RAM-like transfers could be done, I suspect, via device_prep_dma_memcpy().
It appears to me (after looking at various drivers in the kernel) that the only DMA transfer styles which let you (indirectly) control the auto-increment behavior are the ones whose corresponding device_prep_... functions take an enum dma_transfer_direction parameter.
And this parameter is declared only for device_prep_slave_sg and device_prep_dma_cyclic, according to include/linux/dmaengine.h.
Another option would be to use struct dma_interleaved_template, which lets you specify the increment behaviour directly (see the sketch below). But support for this method is limited (only the i.MX DMA driver supports it in the 3.8 kernel, for example, and even that support seems to be limited).
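For illustration, a rough sketch of a RAM-to-FPGA-memory copy with both addresses incrementing, assuming a controller whose driver implements device_prep_interleaved_dma (the function and variable names here are mine, not from any driver; check your kernel's include/linux/dmaengine.h for the exact field set):

#include <linux/dmaengine.h>
#include <linux/slab.h>

/* Copy 'len' bytes from RAM to FPGA memory with BOTH addresses
 * incrementing; src_dma_addr/fpga_dma_addr are bus addresses obtained
 * from your own mapping code. */
static struct dma_async_tx_descriptor *
prep_ram_to_fpga(struct dma_chan *chan, dma_addr_t src_dma_addr,
                 dma_addr_t fpga_dma_addr, size_t len)
{
    struct dma_interleaved_template *xt;
    struct dma_async_tx_descriptor *desc;

    /* one frame containing one chunk */
    xt = kzalloc(sizeof(*xt) + sizeof(struct data_chunk), GFP_KERNEL);
    if (!xt)
        return NULL;

    xt->src_start   = src_dma_addr;
    xt->dst_start   = fpga_dma_addr;
    xt->dir         = DMA_MEM_TO_MEM;
    xt->src_inc     = true;     /* increment the source address */
    xt->dst_inc     = true;     /* ...and the destination address too */
    xt->numf        = 1;        /* a single frame... */
    xt->frame_size  = 1;        /* ...of a single chunk */
    xt->sgl[0].size = len;
    xt->sgl[0].icg  = 0;        /* no inter-chunk gap */

    desc = dmaengine_prep_interleaved_dma(chan, xt, DMA_PREP_INTERRUPT);
    kfree(xt);  /* assumes the provider copies what it needs during prep */
    return desc;
}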
So I think we are stuck with the device_prep_slave_sg case, with all its sg-related complexities, for some while.
That is what I am doing at the moment (although it is for accessing an EBI-connected device on an Atmel SAM9 SoC); the client side looks roughly like the sketch below.
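A minimal sketch of such a client, assuming the channel, the FIFO bus address, and the bus width come from your own setup code (names are mine, not from the pl330 driver):

#include <linux/dmaengine.h>

/* Push an SG list from RAM into a device FIFO at 'fifo_addr'. The
 * direction argument is what makes the provider (e.g. pl330) fix the
 * destination address and increment the source. */
static int start_fifo_write(struct dma_chan *chan, struct scatterlist *sgl,
                            unsigned int sg_len, dma_addr_t fifo_addr)
{
    struct dma_slave_config cfg = {
        .direction      = DMA_MEM_TO_DEV,
        .dst_addr       = fifo_addr,                  /* fixed FIFO address */
        .dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES, /* match FIFO width */
        .dst_maxburst   = 1,
    };
    struct dma_async_tx_descriptor *desc;
    dma_cookie_t cookie;
    int ret;

    ret = dmaengine_slave_config(chan, &cfg);
    if (ret)
        return ret;

    desc = dmaengine_prep_slave_sg(chan, sgl, sg_len, DMA_MEM_TO_DEV,
                                   DMA_PREP_INTERRUPT);
    if (!desc)
        return -ENOMEM;

    cookie = dmaengine_submit(desc);
    if (dma_submit_error(cookie))
        return -EIO;

    dma_async_issue_pending(chan);
    return 0;
}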
Another thing to consider is the device's bus width. The memcpy variant can perform transfers of different bus widths, depending on the source and target addresses and sizes, and this may not match the size of a FIFO element.
I know that each NIC has its RX/TX rings in RAM for the OS to receive/transmit packets. One item in a ring (a packet descriptor) includes the physical address of a packet, the length of the packet, etc. I wonder: does this descriptor point to an sk_buff? And what happens if the packet is a GSO packet? Is it true that one descriptor in the ring = one packet = one sk_buff?
I wonder: does this descriptor point to an sk_buff?
Not exactly. sk_buff is a software construct; roughly, a data structure containing meta information that describes a chunk of network data AND points to the data itself. So a NIC descriptor doesn't need to point to an sk_buff - it may point only to the data buffer (its DMA/physical address is used).
And what happens if the packet is a GSO packet?
It's quite an ambiguous question to answer, since such offloads may be implemented in software (say, by the network stack) or done in hardware.
In the former case there is nothing to discuss in terms of NIC SW descriptors: the upper-layer application provides a contiguous chunk of data, and the network stack produces smaller packets from it, so the sk_buff-s handed over to the network driver already describe small packets.
In the latter case (HW offload) the network driver is supplied with huge chunks of data (by means of handing over single sk_buff-s or sk_buff chains to it), and the network driver in turn posts appropriate descriptors to the NIC. It may be one descriptor pointing to a big chunk of data, or a handful of descriptors pointing to smaller parts of the same contiguous buffer; it doesn't matter much, since the offload magic takes place in the HW: the overall data chunk will be sliced, and packet headers will be prepended accordingly, yielding many smaller network packets to be put on the wire.
Is it true that one descriptor in the ring = one packet = one sk_buff?
Strictly speaking, no. It depends. Your network driver may be asked to transmit one sk_buff describing one data buffer. However, under certain circumstances your driver may decide to post multiple descriptors pointing to the same chunk of data but with different offsets - i.e. the submission is done in parts, and there will be multiple descriptors in the NIC's ring related to a single sk_buff. Also, one packet is not always the same as one sk_buff: a packet may be presented as a handful of segments, each described by a separate sk_buff, forming an sk_buff chain (see the next and prev fields in sk_buff).
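To make the "descriptor points to data, not to the sk_buff" point concrete, here is a hypothetical TX-path sketch (my_desc and my_xmit_one are invented names; real descriptor layouts are NIC-specific):

#include <linux/types.h>
#include <linux/skbuff.h>
#include <linux/dma-mapping.h>

/* A made-up NIC descriptor: it carries the bus address and length of
 * the packet data. The sk_buff pointer is kept in a parallel software
 * array so the driver can free it on TX completion. */
struct my_desc {
    __le64 buf_dma;
    __le16 len;
    __le16 flags;
};

static int my_xmit_one(struct device *dev, struct my_desc *desc,
                       struct sk_buff *skb)
{
    dma_addr_t mapping;

    mapping = dma_map_single(dev, skb->data, skb->len, DMA_TO_DEVICE);
    if (dma_mapping_error(dev, mapping))
        return -ENOMEM;

    desc->buf_dma = cpu_to_le64(mapping);   /* data buffer, not the skb */
    desc->len     = cpu_to_le16(skb->len);
    desc->flags   = cpu_to_le16(1);         /* e.g. an "owned by NIC" bit */
    return 0;
}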
The Linux kernel uses an sk_buff data structure to describe each
packet. When a packet arrives at the NIC, it invokes the DMA engine to
place the packet into the kernel memory via empty sk_buff's stored in
a ring buffer called rx_ring. An incoming packet is dropped if
the ring buffer is full. When a packet is processed at higher layers,
packet data remains in the same kernel memory, avoiding any extra
memory copies.
http://www.ece.virginia.edu/cheetah/documents/papers/TCPlinux.pdf
That last sentence seems to indicate that incoming packet data is kept in kernel memory in sk_buff structs without redundancy. So I'd say the answer to your question is yes: that descriptor would point to an sk_buff, and each packet is put in its own sk_buff in rx_ring.
sk_buff has nothing to do with physical network interfaces (not directly, at least). sk_buff lists store data as seen by the socket-accessing software and kernel protocol handlers (which manipulate those lists to add/remove headers and/or alter data, e.g. when encryption is employed).
It is the responsibility of the low-level driver to translate sk_buff list contents into something the physical network adapter will understand. In particular, network hardware can be really dumb (as when doing networking over serial lines), in which case the driver basically reads sk_buff lists byte by byte and sends those bytes over the wire.
More advanced adapters are usually capable of scatter/gather DMA: given a list of addresses in RAM, they can access each address and either fetch packet data from it or store received data into it. However, the exact details of this mechanism are very much adapter-specific, and in many cases are not even consistent between a single vendor's products.
I wonder: does this descriptor point to an sk_buff?
The answer is YES. This avoids copying memory from one place (the rx_ring DMA buffer) to another (the sk_buff).
You can check the implementation of the b44 NIC driver (in drivers/net/ethernet/broadcom/b44.c): the function b44_init_rings pre-allocates a constant number of sk_buffs for the rx_ring, which are also used as DMA buffers for the NIC.
static void b44_init_rings(struct b44 *bp)
{
    int i;

    b44_free_rings(bp);

    memset(bp->rx_ring, 0, B44_RX_RING_BYTES);
    memset(bp->tx_ring, 0, B44_TX_RING_BYTES);

    if (bp->flags & B44_FLAG_RX_RING_HACK)
        dma_sync_single_for_device(bp->sdev->dma_dev, bp->rx_ring_dma,
                                   DMA_TABLE_BYTES, DMA_BIDIRECTIONAL);

    if (bp->flags & B44_FLAG_TX_RING_HACK)
        dma_sync_single_for_device(bp->sdev->dma_dev, bp->tx_ring_dma,
                                   DMA_TABLE_BYTES, DMA_TO_DEVICE);

    /* one sk_buff per RX slot; each doubles as the NIC's DMA buffer */
    for (i = 0; i < bp->rx_pending; i++) {
        if (b44_alloc_rx_skb(bp, -1, i) < 0)
            break;
    }
}
The dmaengine framework in Linux significantly simplifies the writing of drivers for devices using DMA, especially if they support and use scatter-gather (SG) transfers.
However, there is a problem when the length of such a transfer is not known a priori. This situation is quite common; it may happen with USB transfers, or with AXI Stream transfers. In the case of an AXI Stream-connected device, the transfer is terminated by setting the tlast signal to '1'. The driver should prepare a buffer for the longest possible transfer, but after the transfer completes it should be able to find out the number of actually transferred bytes. Unfortunately, there seems to be no documented way to read the length of a completed transfer.
A single transfer is represented by an SG table, which is converted into a dma_async_tx_descriptor using the dmaengine_prep_slave_sg function, submitted to the DMA layer (e.g. like here), and finally scheduled for execution via dma_async_issue_pending. After that, the transfer may be tracked only via the dma_cookie returned by the dmaengine_submit function.
The problem with determining the actual length of a transfer was reported for USB transfers, and a patch adding a transferred field to the dma_async_tx_descriptor was proposed. However, that proposal was rejected, and after a discussion a solution was suggested based on calling dmaengine_tx_status and checking the residue field in the returned dma_tx_state structure.
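The suggested pattern looks roughly like this (a sketch; total_len, the size of the prepared buffer, is assumed to be known to the caller):

#include <linux/dmaengine.h>

/* After the completion callback fires, query the channel state for
 * this cookie and derive the transferred length from the residue. */
static size_t get_transferred_len(struct dma_chan *chan, dma_cookie_t cookie,
                                  size_t total_len)
{
    struct dma_tx_state state;
    enum dma_status status;

    status = dmaengine_tx_status(chan, cookie, &state);
    if (status != DMA_COMPLETE)
        return 0;   /* still running, or an error */

    return total_len - state.residue;
}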
Unfortunately, it seems that the proposed solution works in neither the 4.4 kernel (used for Xilinx SoC devices) nor the newest 4.7 kernel.
The residue field is set to 0 for completed transfers, regardless of the actual number of transferred bytes.
So my question is: how can I reliably determine the actual number of bytes transferred in a completed SG DMA transfer, in a dmaengine-compatible driver?
PS. This question, with more Xilinx-specific details related to the AXI DMA IP core, has also been asked on the Xilinx forum.
I'm working on enhancing the stock ahci driver provided in Linux in order to perform some needed tasks. I'm at the point of attempting to issue commands to the AHCI HBA for the hard drive to process. However, whenever I do so, my system locks up and reboots. Explaining the full process of issuing a command to an AHCI drive is far too much for this question; if needed, reference this link for the full discussion (the process is spread across several pieces, but chapter 4 has the necessary data structures).
Essentially, one writes the appropriate structures into memory regions defined by either the BIOS or the OS. The first memory region I should write to is the Command List Base Address contained in the register PxCLB (and PxCLBU if 64-bit addressing applies). My system is 64-bit, so I'm trying to read both 32-bit registers. My code is essentially this:
void __iomem *pbase = ahci_port_base(ap);
u32 __iomem *temp = (u32 *)(pbase + PORT_LST_ADDR);
struct ahci_cmd_hdr *cmd_hdr = NULL;

cmd_hdr = (struct ahci_cmd_hdr *)(u64)
          ((u64)(*(temp + PORT_LST_ADDR_HI)) << 32 | *temp);
pr_info("%s:%d cmd_list is %p\n", __func__, __LINE__, cmd_hdr);

// problems with this next line, makes the system reboot
//pr_info("%s:%d cl[0]:0x%08x\n", __func__, __LINE__, cmd_hdr->opts);
The function ahci_port_base() is found in the ahci driver (at least it is for CentOS 6.x); basically, it returns the proper address for that port in the AHCI memory region. PORT_LST_ADDR and PORT_LST_ADDR_HI are both macros defined in that driver. The address that I get after getting both the high and low addresses is usually something like 0x0000000037900000. Is this memory address in a space that I cannot simply dereference?
I'm hitting my head against the wall at this point, because this link shows that accessing it in this manner is essentially how it's done.
The address that I get after getting both the high and low addresses
is usually something like 0x0000000037900000. Is this memory address
in a space that I cannot simply dereference?
Yes, you are correct: that's a bus address, and you can't just dereference it, because paging is enabled. (You shouldn't be dereferencing the iomapped addresses directly either - you should be using readl()/writel() for those - but that breakage is more subtle.)
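In other words, the register itself must be accessed through the iomapped pointer with readl(), and the value it yields is a DMA (bus) address intended for the HBA, not a CPU pointer. A sketch using the same macros as the question:

void __iomem *pbase = ahci_port_base(ap);

/* proper MMIO accesses for the two halves of the Command List Base */
u32 clb_lo = readl(pbase + PORT_LST_ADDR);
u32 clb_hi = readl(pbase + PORT_LST_ADDR_HI);
u64 clb_bus = ((u64)clb_hi << 32) | clb_lo;

/* clb_bus is meaningful only to the device; the CPU-visible view of
 * the same memory is the coherent buffer the driver allocated, which
 * it keeps in its private data (see below). */
pr_info("command list bus address: 0x%016llx\n",
        (unsigned long long)clb_bus);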
It looks like the right way to access the ahci_cmd_hdr in that driver is:
struct ahci_port_priv *pp = ap->private_data;
struct ahci_cmd_hdr *cmd_hdr = pp->cmd_slot;
I'm writing a Linux device driver for a DMA controller, and while going through the sources of the DMA drivers in LXR I came across the functions dma_cap_zero and dma_cap_set, and the whole dma_cap_* family. What are these functions?
There is also an enum called dma_transaction_type:
enum dma_transaction_type {
    DMA_MEMCPY,
    DMA_XOR,
    DMA_PQ,
    DMA_XOR_VAL,
    DMA_PQ_VAL,
    DMA_MEMSET,
    DMA_INTERRUPT,
    DMA_SG,
    DMA_PRIVATE,
    DMA_ASYNC_TX,
    DMA_SLAVE,
    DMA_CYCLIC,
    DMA_INTERLEAVE,
    /* last transaction type for creation of the capabilities mask */
    DMA_TX_TYPE_END,
};
What do the enum types represent ?
These functions are actually preprocessor macros, and they are used by slave DMA client drivers to configure and request DMA channels.
Here is an example of them being used:
dma_cap_mask_t mask;
struct dma_chan *dma_chan1;

dma_cap_zero(mask);
dma_cap_set(DMA_MEMCPY, mask);
dma_chan1 = dma_request_channel(mask, NULL, NULL);
This code is from http://ecourse.wikidot.com/dmatest.
First, there's the datatype dma_cap_mask_t, defined in dmaengine.h (~line 233). It is a bitfield in which the bits indicate what kinds of transfers a DMA channel is capable of.
In the code snippet above, which occurs in the linked code's __init routine, mask is declared with the special dma_cap_mask_t datatype. Then the dma_cap_zero() function is called with mask passed to it.
I believe dma_cap_zero simply zeroes out the capability mask. It is defined in dmaengine.h (~line 733), returns void, and I think just clears the bitfield. I'm not entirely sure, though, because kernel code is a massive pile of macro magic that I have a hard time deciphering sometimes.
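For the curious, the expansion is roughly this (paraphrased from include/linux/dmaengine.h; line numbers and details vary by kernel version):

/* dma_cap_mask_t is a bitmap with one bit per dma_transaction_type */
typedef struct { DECLARE_BITMAP(bits, DMA_TX_TYPE_END); } dma_cap_mask_t;

#define dma_cap_zero(mask)    __dma_cap_zero(&(mask))
static inline void __dma_cap_zero(dma_cap_mask_t *dstp)
{
    bitmap_zero(dstp->bits, DMA_TX_TYPE_END);   /* clear every capability */
}

#define dma_cap_set(tx, mask) __dma_cap_set((tx), &(mask))
static inline void __dma_cap_set(enum dma_transaction_type tx_type,
                                 dma_cap_mask_t *dstp)
{
    set_bit(tx_type, dstp->bits);               /* mark one capability */
}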
After the mask is zeroed (or initialized in some fashion) by dma_cap_zero, the capabilities of the channel must be set. The dma_cap_set function accomplishes this: it takes the requested transaction type and sets the mask according to the capabilities required to perform that type of transaction. If you're confused about how the enumeration is being used, take a look at this page for a simple review of enums. In this case, the values in the enum describe the different types of DMA transactions, each of which needs a different set of "capabilities".
Once the mask is properly set for the type of DMA transactions you want to perform, you request the DMA channel.
The other dma_cap_* macros perform other manipulations on the DMA mask without the caller really needing to know what's going on behind the scenes. Macros of this kind are all over the place in the kernel code, for many more manipulations than just DMA. They allow device drivers to get things done in the kernel without having to worry about how the kernel does it.
I am writing a device driver to handle interrupts for a PCIe card, which currently works for any interrupt vector raised on the IRQ line.
But the card has a few interrupt types that can be raised, flagged by its Vector register. So now I need to read the vector information and be a bit cleverer...
So, do I :-
1/ Have separate dev nodes /dev/int1, /dev/int2, etc. for each interrupt type, and just document that int1 is for vector type A, etc.?
1.1/ As each file/char-device will have its own minor number, when opened I'll know which is which. I think.
1.2/ LDD3 seems to demo this method.
2/ Have one node /dev/int (as I do now) and have multiple processes hanging off the same read method? Sounds better?!
2.1/ Then only wake the correct process up...?
2.2/ Do I use separate wait_queue_head_t wait_queues? Or different flag/test conditions?
In the read method:-
wait_event_interruptible(wait_queue, flag);
In the handler (not real code!):-
int vector = read_vector();

if (vector == A) {
    flag = 1;
    wake_up_interruptible(&wait_queue);
    return IRQ_HANDLED;
}
return IRQ_NONE; /* or IRQ_RETVAL(...)? */
EDIT: notes from people's comments:-
1) My user-space code mmaps all of the PCIe firmware registers.
2) User-space code has a few threads, each performing a blocking read on one of the device driver's device nodes, which then returns data from the firmware when an interrupt occurs. I need the correct thread woken up, depending on the interrupt type.
I am not sure I understand correctly what you mean by the Vector register (a pointer to some documentation would help me be precise about your case).
Anyway, any PCI device gets a unique interrupt number (given by the BIOS, or by firmware on architectures other than x86). You just need to register this interrupt in your driver:
priv->name = DRV_NAME;
err = request_irq(pdev->irq, your_irqhandler, IRQF_SHARED, priv->name,
                  pdev);
if (err) {
    dev_err(&pdev->dev, "cannot request IRQ\n");
    goto err_out_unmap;
}
One other thing that I do not really understand is why you would export your interrupts as dev nodes: interrupts are certainly something that needs to remain in your driver/kernel code. But I guess what you want is to export a device that is then accessed from userspace; I just find /dev/int not to be a good name.
For your question about multiple dev nodes: if your different interrupt sources provide access to different hardware resources (even if on the same PCI board), I would go for option 1), with a wait_queue for each device. Otherwise, I would go for option 2).
Since your interrupts come from the same physical device, whichever of option 1) or option 2) you choose, the interrupt line will have to be shared, and you will have to read the vector in your interrupt handler to determine which hardware resource raised the interrupt.
For option 1), it would be something like this:
static irqreturn_t pex_irqhandler(int irq, void *dev)
{
    struct pci_dev *pdev = dev;
    u8 myirq;
    int result;

    result = pci_read_config_byte(pdev, PCI_INTERRUPT_LINE, &myirq);
    if (result == 0) { /* config read succeeded */
        int vector = read_vector();

        if (vector == A)
            set_flagA(flag);
        else if (vector == B)
            set_flagB(flag);
        wake_up_interruptible(&wait_queue);
        return IRQ_HANDLED;
    }
    return IRQ_NONE;
}
For option 2), it would be similar, but you would have only one if clause (for the respective vector value) in each of the different interrupt handlers that you would request, one per node.
If you have different channels you can read() from, then you should definitely use different minor numbers. Imagine you have a card with four serial ports: you would definitely want four /dev/ttySx nodes.
But does your device fit this model?
First, I assume you're not trying to get your code into the mainline kernel. If you are, expect a vigorous discussion about the best way to do this. If you're writing a simple interrupt-handling driver for a card which is mostly driven by mmap from user space, there are a lot of ways to solve this problem.
If you use multiple device nodes (option 1), you can also implement poll, so that a single application can open multiple device nodes and wait for a selection of interrupts. The minor number will be sufficient to tell them apart. If you have a wait queue for each vector, you can wake only the relevant listeners. You'll need to latch the vector after a successful poll to be sure that the subsequent read succeeds.
If you use a single device node (option 2), you'll need to add some extra magic so that the threads can register their interest in particular interrupt vectors. You could do this with an ioctl, or have the threads write the interrupt vectors to the device. Each thread should open the device node to get its own file descriptor. You can then associate the list of requested vectors with each open file descriptor. As a bonus, you can let the application read the interrupt vector from the device, so it knows which one happened.
You'll need to think about how the interrupt gets cleared. The interrupt handler will need to acknowledge the interrupt, then store the result so it can be passed to user space. You might find a kfifo more useful for this than a bare wait queue. If you have a fifo for each open file descriptor, you can distribute the interrupt notifications to each listening application; a minimal sketch follows.
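A minimal sketch of that idea (hypothetical names throughout: my_dev, read_vector(); locking and error handling trimmed; the kfifo must be initialized with INIT_KFIFO() at probe/open time):

#include <linux/kfifo.h>
#include <linux/wait.h>
#include <linux/interrupt.h>
#include <linux/fs.h>
#include <linux/uaccess.h>

struct my_dev {
    DECLARE_KFIFO(vectors, u8, 64);   /* latched, not-yet-read vectors */
    wait_queue_head_t wq;
};

static irqreturn_t my_irqhandler(int irq, void *data)
{
    struct my_dev *dev = data;
    u8 vector = read_vector();        /* also acks the device interrupt */

    if (!vector)
        return IRQ_NONE;              /* shared line, not ours */

    kfifo_put(&dev->vectors, vector); /* single producer: this handler */
    wake_up_interruptible(&dev->wq);
    return IRQ_HANDLED;
}

static ssize_t my_read(struct file *file, char __user *buf,
                       size_t count, loff_t *ppos)
{
    struct my_dev *dev = file->private_data;
    u8 vector;

    if (wait_event_interruptible(dev->wq, !kfifo_is_empty(&dev->vectors)))
        return -ERESTARTSYS;
    if (!kfifo_get(&dev->vectors, &vector))
        return 0;                     /* raced with another reader */
    if (copy_to_user(buf, &vector, sizeof(vector)))
        return -EFAULT;
    return sizeof(vector);
}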