Dual Channel Tuner allocation - redhawksdr

I have a dual channel tuner and am trying to allocate two channels using Frontend 2.0. I am using a REDHAWK 1.9 installation. When allocateCapacity is called, it reports that the capacities length is 1. Should this be 2 for a dual channel tuner? I thought I read that the number of tuners is specified in the .prf.xml file, but I don't see where the number of tuners is specified. Is this the correct approach?
CORBA::Boolean DEVICE_i::allocateCapacity(const CF::Properties & capacities)
    throw (CORBA::SystemException, CF::Device::InvalidCapacity, CF::Device::InvalidState) {
    std::cout << "In DEVICE_i::allocateCapacity...capacities length = " << capacities.length() << std::endl;

A call to allocateCapacity should be made for each tuner. To allocate two tuners, make two calls to allocateCapacity.
The capacities argument in this context refers to the request that is passed into allocateCapacity, not the Device's capacity. A single request should be made at a time, meaning the length of the request should be 1, as you are seeing.
The capacity of the Device is advertised using the frontend_tuner_status struct sequence Property, which has an entry for each tuner. For a dual tuner, there should be two entries in the frontend_tuner_status struct sequence. This could be populated in the .prf.xml if known and constant, but it is more commonly populated at run-time in the Device code. To specify that there are two tuners, add two struct entries to the frontend_tuner_status struct sequence (either in the prf or at run-time).
The USRP_UHD is an FEI 2.0 Device that supports multiple tuners and that you could use as a reference.

Related

What is ethtool and is it important for a usable Ethernet NIC? Can I remove it from the device driver and still use the network?

I am studying the Realtek device driver and came across ethtool operations such as ethtool_ops; there are many operations included in the operations object. The following is the code:
static const struct ethtool_ops rtl8169_ethtool_ops = {
    .supported_coalesce_params = ETHTOOL_COALESCE_USECS |
                                 ETHTOOL_COALESCE_MAX_FRAMES,
    .get_drvinfo        = rtl8169_get_drvinfo,
    .get_regs_len       = rtl8169_get_regs_len,
    .get_link           = ethtool_op_get_link,
    .get_coalesce       = rtl_get_coalesce,
    .set_coalesce       = rtl_set_coalesce,
    .get_regs           = rtl8169_get_regs,
    .get_wol            = rtl8169_get_wol,
    .set_wol            = rtl8169_set_wol,
    .get_strings        = rtl8169_get_strings,
    .get_sset_count     = rtl8169_get_sset_count,
    .get_ethtool_stats  = rtl8169_get_ethtool_stats,
    .get_ts_info        = ethtool_op_get_ts_info,
    .nway_reset         = phy_ethtool_nway_reset,
    .get_eee            = rtl8169_get_eee,
    .set_eee            = rtl8169_set_eee,
    .get_link_ksettings = phy_ethtool_get_link_ksettings,
    .set_link_ksettings = phy_ethtool_set_link_ksettings,
};
What are these operations?
For example, eee is mentioned, so I am interested in how its functionality maps to higher-level layers like IP or TCP.
What are coalesce, wol, stats, ksettings, ts_info and eee from the list above? I believe they have something to do with the physical layer of the OSI model, but it would help if you could tell me what higher-level functions they serve. A brief, high-level description for each of these functions would be great.
For example, wol is something like:
Wake-on-LAN is an Ethernet or Token Ring computer networking standard that allows a computer to be turned on or awakened by a network message. The message is usually sent to the target computer by a program executed on a device connected to the same local area network, such as a smartphone. (Wikipedia)
And coalescing is:
Packet coalescing is the grouping of packets as a way of limiting the number of receive interrupts and, as a result, lowering the amount of processing required.
Just give me some technical info on the kernel API for these functions. Thanks.
Long story short, ethtool is a means to display and adjust generic NIC/driver parameters (identification, the number of receive and transmit queues, receive and transmit offloads, you name it) from a userspace application. On one side, there is the kernel API to which a NIC driver connects (in particular, by means of struct ethtool_ops). On the other side, there is an ioctl command named SIOCETHTOOL, which a userspace application can use to send requests to the kernel side and get responses.
Sub-identifiers enumerating specific commands for SIOCETHTOOL exist, and they correspond to the methods of struct ethtool_ops. From your question it follows that it is struct ethtool_ops that should be explained.
I suggest that you take a look at include/linux/ethtool.h. There's a comment section right before the definition of struct ethtool_ops. In particular:
* @get_wol: Report whether Wake-on-Lan is enabled
* @set_wol: Turn Wake-on-Lan on or off. Returns a negative error code
* or zero.
* @get_coalesce: Get interrupt coalescing parameters. Returns a negative
* error code or zero.
* @set_coalesce: Set interrupt coalescing parameters. Supported coalescing
* types should be set in @supported_coalesce_params.
* Returns a negative error code or zero.
* @get_ts_info: Get the time stamping and PTP hardware clock capabilities.
* Drivers supporting transmit time stamps in software should set this to
* ethtool_op_get_ts_info().
* @get_eee: Get Energy-Efficient Ethernet (EEE) supported and status.
* @set_eee: Set EEE status (enable/disable) as well as LPI timers.
For the WoL feature and interrupt coalescing, these brief comments mostly match the explanations in your question. The rest of the comments in the header file are also of great use; you may refer to them to get the gist.
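To tie these comments back to the driver side, here is a hedged sketch (hypothetical mydrv_* names and private structure, not taken from the rtl8169 driver) of how a NIC driver might implement the WoL pair and plug it into its struct ethtool_ops:

#include <linux/ethtool.h>
#include <linux/netdevice.h>
#include <linux/errno.h>

/* Hypothetical private state: just remember whether WoL is armed. */
struct mydrv_priv {
    bool wol_enabled;
};

/* Report that only magic-packet wake-up is supported and whether it is on. */
static void mydrv_get_wol(struct net_device *dev, struct ethtool_wolinfo *wol)
{
    struct mydrv_priv *priv = netdev_priv(dev);

    wol->supported = WAKE_MAGIC;
    wol->wolopts = priv->wol_enabled ? WAKE_MAGIC : 0;
}

static int mydrv_set_wol(struct net_device *dev, struct ethtool_wolinfo *wol)
{
    struct mydrv_priv *priv = netdev_priv(dev);

    if (wol->wolopts & ~WAKE_MAGIC)
        return -EOPNOTSUPP;               /* only magic packet is supported */

    priv->wol_enabled = !!(wol->wolopts & WAKE_MAGIC);
    /* ... program the hardware's WoL registers here ... */
    return 0;
}

static const struct ethtool_ops mydrv_ethtool_ops = {
    .get_wol = mydrv_get_wol,
    .set_wol = mydrv_set_wol,
};

/* In the probe path: netdev->ethtool_ops = &mydrv_ethtool_ops; */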
It may also come in handy to take a look at the ethtool mechanism from the userspace perspective. The most commonly used userspace application for the ethtool mechanism is also named ethtool; it is available as a package in popular distributions (Debian, Ubuntu, you name it). Its man page contains useful information. For example:
ethtool -c|--show-coalesce devname
ethtool -C|--coalesce devname [adaptive-rx on|off]
[adaptive-tx on|off] [rx-usecs N] [rx-frames N]
[rx-usecs-irq N] [rx-frames-irq N] [tx-usecs N]
[tx-frames N] [tx-usecs-irq N] [tx-frames-irq N]
[stats-block-usecs N] [pkt-rate-low N] [rx-usecs-low N]
[rx-frames-low N] [tx-usecs-low N] [tx-frames-low N]
[pkt-rate-high N] [rx-usecs-high N] [rx-frames-high N]
[tx-usecs-high N] [tx-frames-high N] [sample-interval N]
-c --show-coalesce
Queries the specified network device for coalescing
information.
-C --coalesce
Changes the coalescing settings of the specified network
device.
All in all, there are quite a few places you can look at to clarify the meaning of ethtool commands/methods. These commands (as well as the features they serve) do not necessarily have something to do with the physical layer of the OSI model. Some may be out of scope of the OSI model (device identification, for example); some may correspond to higher levels of the OSI model, for example controls for packet offloads like TCP segmentation offload, RSS (Receive Side Scaling), etc. A particular feature can be discussed separately should the need arise.
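As a footnote on the userspace side, the ethtool utility ultimately goes through the SIOCETHTOOL ioctl described above. Here is a minimal hedged sketch (the interface name "eth0" is only an example) that issues ETHTOOL_GLINK, which is served by the driver's get_link method:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

int main(void)
{
    struct ethtool_value ev = { .cmd = ETHTOOL_GLINK }; /* ethtool sub-command */
    struct ifreq ifr;
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    if (fd < 0)
        return 1;

    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);
    ifr.ifr_data = (char *)&ev;           /* ethtool request travels in ifr_data */

    if (ioctl(fd, SIOCETHTOOL, &ifr) == 0)
        printf("link is %s\n", ev.data ? "up" : "down");

    close(fd);
    return 0;
}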

spi_write_then_read with variant register size

As I understand it, the term "word length" (spi_bits_per_word) in SPI defines the CS (chip select) active time.
It therefore seems that the Linux driver will function correctly when dealing with simple SPI protocols that keep the word size constant.
But how can we deal with SPI protocols that use different word sizes as part of the protocol?
For example, CS needs to be active while sending an SPI word of 9 bits and then reading back 8 bits or 24 bits (the length of the register read differs each time, depending on the register).
How can we implement that using spi_write_then_read?
Do we need to set one bits_per_word size for sending and then another bits_per_word for receiving?
Regards,
Ran
"word length" means number of bits you can send in one transaction. It doesn't defines the CS (chip select) active time. You can keep it active for whatever time you want(least is for word-length).
SPI has got some format. You cannot randomly read-write whatever number of bits you want.Most of SPI supports 4-bit, 8-bit, 16-bit and 32-bit mode. If the given mode doesn't satisfy your requirement then you need to break your requirement. For eg:- To read 24-bit data, we need to use 8-bit word-length transfer for 3 times.
Generally SPI is fullduplex means it will read at same time it will write.
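One possible approach, assuming the SPI controller driver honours a per-transfer bits_per_word (not all controllers do): build a single spi_message containing two spi_transfer entries, so CS stays asserted across both, with 9 bits per word for the command and 8 bits per word for the data read. The helper name and sizes below are only illustrative, and in real code the buffers should be DMA-safe (e.g. kmalloc'd) rather than on the stack.

#include <linux/spi/spi.h>

/* Illustrative helper: send one 9-bit command word, then read rx_len bytes
 * (e.g. 3 bytes for a 24-bit register) within a single chip-select assertion. */
static int mydev_read_reg(struct spi_device *spi, u16 cmd, u8 *rx, size_t rx_len)
{
    struct spi_transfer xfers[2] = {
        {
            .tx_buf        = &cmd,
            .len           = 2,       /* 9..16-bit words occupy two bytes each */
            .bits_per_word = 9,       /* per-transfer word size for the command */
        },
        {
            .rx_buf        = rx,
            .len           = rx_len,
            .bits_per_word = 8,       /* read the register back as plain bytes */
        },
    };
    struct spi_message msg;

    spi_message_init(&msg);
    spi_message_add_tail(&xfers[0], &msg);
    spi_message_add_tail(&xfers[1], &msg);

    return spi_sync(spi, &msg);       /* CS is deasserted when the message ends */
}

spi_write_then_read() itself builds a similar two-transfer message, but it uses the device's default bits_per_word for both halves, so as far as I can tell it cannot switch word sizes mid-message; setting bits_per_word per spi_transfer (or changing spi->bits_per_word and calling spi_setup() between calls) is the usual way to handle mixed word sizes.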

Linux DMA API: specifying address increment behavior?

I am writing a driver for the Altera SoC Development Kit and need to support two modes of data transfer to/from an FPGA:
FIFO transfers: When writing to (or reading from) an FPGA FIFO, the destination (or source) address must not be incremented by the DMA controller.
non-FIFO transfers: These are normal (RAM-like) transfers where both the source and destination addresses require an increment for each word transferred.
The particular DMA controller I am using is the CoreLink DMA-330 DMA Controller and its Linux driver is pl330.c (drivers/dma/pl330.c). This DMA controller does provide a mechanism to switch between "Fixed-address burst" and "Incrementing-address burst" (these are synonymous with my "FIFO transfers" and "non-FIFO transfers"). The pl330 driver specifies which behavior it wants by setting the appropriate bits in the CCRn register
#define CC_SRCINC (1 << 0)
#define CC_DSTINC (1 << 14)
My question: it is not at all clear to me how clients of the pl330 (my driver, for example) should specify the address-incrementing behavior.
The DMA engine client API says nothing about how to specify this while the DMA engine provider API simply states:
Addresses pointing to RAM are typically incremented (or decremented)
after each transfer. In case of a ring buffer, they may loop
(DMA_CYCLIC). Addresses pointing to a device's register (e.g. a FIFO)
are typically fixed.
without giving any detail as to how the address types are communicated to providers (in my case, the pl330 driver).
Then, in the pl330_prep_slave_sg method, it does:
if (direction == DMA_MEM_TO_DEV) {
    desc->rqcfg.src_inc = 1;
    desc->rqcfg.dst_inc = 0;
    desc->req.rqtype = MEMTODEV;
    fill_px(&desc->px,
        addr, sg_dma_address(sg), sg_dma_len(sg));
} else {
    desc->rqcfg.src_inc = 0;
    desc->rqcfg.dst_inc = 1;
    desc->req.rqtype = DEVTOMEM;
    fill_px(&desc->px,
        sg_dma_address(sg), addr, sg_dma_len(sg));
}
where later, the desc->rqcfg.src_inc, and desc->rqcfg.dst_inc are used by the driver to specify the address-increment behavior.
This implies the following:
Specifying direction = DMA_MEM_TO_DEV means the client wishes to push data from RAM into a FIFO, and presumably DMA_DEV_TO_MEM means the client wishes to pull data from a FIFO into RAM.
Scatter-gather DMA operations (for the pl330 at least) are restricted to cases where either the source or the destination endpoint is a FIFO. What if I wanted to do a scatter-gather operation from system RAM into FPGA (non-FIFO) memory?
Am I misunderstanding and/or overlooking something? Does the DMA engine already provide an (undocumented) mechanism to specify address-increment behavior?
Look at this:
pd->device_prep_dma_memcpy = pl330_prep_dma_memcpy;
pd->device_prep_dma_cyclic = pl330_prep_dma_cyclic;
pd->device_prep_slave_sg = pl330_prep_slave_sg;
It means you have different approaches, as you have read in the documentation. RAM-like transfers could be done, I suspect, via device_prep_dma_memcpy().
It appears to me (after looking at various drivers in the kernel) that the only DMA transfer styles which allow you to (indirectly) control the auto-increment behavior are the ones which take an enum dma_transfer_direction in their corresponding device_prep_... function.
And this parameter is declared only for device_prep_slave_sg and device_prep_dma_cyclic, according to include/linux/dmaengine.h.
Another option would be to use struct dma_interleaved_template, which allows you to specify the increment behaviour directly. But support for this method is limited (only the i.MX DMA driver supports it in the 3.8 kernel, for example, and even that support seems to be limited).
So I think we are stuck with the device_prep_slave_sg case, with all its sg-related complexities, for some while.
That is what I am doing at the moment (although it is for accessing an EBI-connected device on an Atmel SAM9 SoC).
Another thing to consider is the device's bus width. The memcpy variant can perform transfers of different bus widths, depending on the source and target addresses and sizes, and this may not match the size of a FIFO element.
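To make the slave_sg route concrete, here is a minimal client-side sketch under the assumptions above (a reasonably recent dmaengine API, a channel already requested, and a made-up function name and FIFO address). The direction argument plus the slave config is what tells the provider which endpoint stays fixed:

#include <linux/dmaengine.h>
#include <linux/scatterlist.h>
#include <linux/errno.h>

/* Hypothetical client: push a scatter-gather list from RAM into a FIFO at
 * fpga_fifo_phys.  DMA_MEM_TO_DEV makes the source (RAM) increment while
 * the destination (the FIFO) stays fixed. */
static int mydrv_start_fifo_tx(struct dma_chan *chan, struct scatterlist *sgl,
                               unsigned int nents, dma_addr_t fpga_fifo_phys)
{
    struct dma_slave_config cfg = {
        .direction      = DMA_MEM_TO_DEV,
        .dst_addr       = fpga_fifo_phys,           /* fixed FIFO address */
        .dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES,
        .dst_maxburst   = 16,
    };
    struct dma_async_tx_descriptor *desc;
    int ret;

    ret = dmaengine_slave_config(chan, &cfg);
    if (ret)
        return ret;

    desc = dmaengine_prep_slave_sg(chan, sgl, nents, DMA_MEM_TO_DEV,
                                   DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
    if (!desc)
        return -ENOMEM;

    dmaengine_submit(desc);
    dma_async_issue_pending(chan);    /* actually kick off the transfer */
    return 0;
}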

Enabling multiple MSI in PCI driver with different IRQ handlers

Currently I have a requirement to support MSI with 2 vectors on my PCI device. Each vector needs to have a different handler routine. The HW document says the following:
vector 0 is for temperature sensor
vector 1 is for power sensor
Below is the driver code I am following:
1. First, enable two vectors using pci_enable_msi_block(pdev, 2).
2. Next, assign interrupt handlers using request_irq (two different IRQs, two different interrupt handlers).
int vecs = 2;
struct pci_dev *pdev = dev->pci_dev;
result = pci_enable_msi_block(pdev, vecs);
Here result is zero, which indicates the call succeeded in enabling two vectors.
The questions I have are:
The HW document says vector 0; I hope this is not vector 0 of the OS, right? In any case, I can't get vector 0 in the OS.
The difficult problem I am facing is this: when I do request_irq() for the first IRQ, how do I tell the OS that I need to map this request to vector 0 of the HW? Correspondingly, for the second IRQ, how do I map it to vector 1 of the HW?
pci_enable_msi_block:
If 2 MSI messages are requested using this function and if the function call returns 0, then 2 MSI messages are allocated for the device and pdev->irq is updated to the lowest of the interrupts assigned to the device.
So pdev->irq and pdev->irq+1 are the new interrupts assigned to the device. You can now register two interrupt handlers:
request_irq(pdev->irq, handler1, ...)
request_irq(pdev->irq+1, handler2, ...)
With MSI and MSI-X, the interrupt number (irq) is a CPU "vector". Message signaled interrupts allow the device to write a small amount of data to a special memory-mapped I/O address; the chipset then delivers the corresponding interrupt to a processor.
There may be two different MSI interrupt data values that can be written to an MSI address. It's like your hardware supports 2 MSIs (one for the temperature sensor and one for the power sensor). So when you issue pci_enable_msi_block(pdev, 2), the interrupt will be asserted by the chipset to the processor whenever either of the two MSI data values is written to that special memory-mapped I/O address (the MSI address).
After the call to pci_enable_msi_block(pdev, 2), you can request two IRQs through request_irq(pdev->irq, handler, flags, ...) and request_irq(pdev->irq + 1, handler, flags, ...). So whenever the MSI data is written to the MSI address, pdev->irq or pdev->irq + 1 will be asserted, depending on which sensor sent the MSI, and the corresponding handler will be invoked.
These two MSI data values can be configured in the hardware's MSI data register.
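A sketch that puts the above together (the handler names and the device cookie are hypothetical; note also that pci_enable_msi_block() belongs to that era of the kernel, and newer kernels replaced it with pci_alloc_irq_vectors()):

#include <linux/pci.h>
#include <linux/interrupt.h>
#include <linux/errno.h>

/* Hypothetical handlers: HW vector 0 = temperature sensor, vector 1 = power sensor. */
static irqreturn_t temp_irq_handler(int irq, void *data)
{
    /* read/acknowledge the temperature event on the device ... */
    return IRQ_HANDLED;
}

static irqreturn_t power_irq_handler(int irq, void *data)
{
    /* read/acknowledge the power event on the device ... */
    return IRQ_HANDLED;
}

static int mydev_setup_irqs(struct pci_dev *pdev, void *mydev)
{
    int ret;

    ret = pci_enable_msi_block(pdev, 2);     /* 0 on success */
    if (ret)
        return ret < 0 ? ret : -ENOSPC;      /* positive = fewer vectors available */

    /* HW vector 0 maps to pdev->irq, HW vector 1 to pdev->irq + 1 */
    ret = request_irq(pdev->irq, temp_irq_handler, 0, "mydev-temp", mydev);
    if (ret)
        goto disable_msi;

    ret = request_irq(pdev->irq + 1, power_irq_handler, 0, "mydev-power", mydev);
    if (ret)
        goto free_first;

    return 0;

free_first:
    free_irq(pdev->irq, mydev);
disable_msi:
    pci_disable_msi(pdev);
    return ret;
}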

Difference between skbuff frags and frag_list

The sk_buff structure has two places where it can store fragment data:
skb_shinfo(head)->frag_list
skb_shinfo(head)->frags[]
What are the differences between these two ways to handle fragmentation?
Both are used for different cases.
frags[]
When your device supports scatter-gather I/O, and you want it to do the combining of data, etc., you can populate the frags[] structure starting with the second fragment through the nth fragment. The first fragment is always specified by the data and tail pointers. The rest of the fragments are filled in the frags[] structure. If you don't use scatter-gather, this variable is empty.
frag_list
This is the list of IP fragments. This will be filled during ip_push_pending_frames.
Say your sk_buffs are in this arrangement,
sk_buff0->next = sk_buff1
sk_buff1->next = sk_buff2
...
sk_buffn-1->next = sk_buffn
After ip_push_pending_frames is called
sk_buff0->frag_list = sk_buff1
sk_buff1->next = sk_buff2
...
sk_buffn-1->next = sk_buffn
Simply put
frags[] are for scatter-gather I/O buffers
frag_list is for IP fragments
skb_shinfo(head)->frags[]
If the NIC supports SG I/O, __ip_append_data will copy user space data into skb_shinfo(head)->frags. The NIC driver (e.g., ixgbe_add_rx_frag) can also use these frags[] to carry received network traffic; please note that each entry in frags[] is a part of a complete packet. A complete packet consists of all frags[] + (skb->data ~ skb->tail).
skb_shinfo(head)->frag_list
This member is not used by IP fragmentation directly.
In __ip_make_skb(), the frag_list is used to collect all skbs from sk->sk_write_queue; some NIC drivers also use this frag_list to carry a packet up the network stack. Each skb in frag_list is likewise not a complete packet. An example call chain: tcp_v4_send_ack -> ip_send_unicast_reply -> ip_push_pending_frames -> ip_finish_skb -> __ip_make_skb.
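To make the layout tangible, here is a small hedged sketch (the printing is just illustrative) that walks one skb: the linear area plus frags[] together form a single packet, while frag_list chains further skbs onto the head skb.

#include <linux/skbuff.h>
#include <linux/printk.h>

/* Walk one packet: linear data, then page fragments (frags[]),
 * then any chained skbs hanging off frag_list. */
static void dump_skb_layout(struct sk_buff *skb)
{
    struct skb_shared_info *shinfo = skb_shinfo(skb);
    struct sk_buff *frag_skb;
    int i;

    pr_info("linear part: %u bytes\n", skb_headlen(skb));

    /* frags[]: page fragments that, together with the linear part,
     * make up a single packet (scatter-gather style) */
    for (i = 0; i < shinfo->nr_frags; i++)
        pr_info("frags[%d]: %u bytes\n", i,
                skb_frag_size(&shinfo->frags[i]));

    /* frag_list: a chain of additional skbs attached to this head skb */
    skb_walk_frags(skb, frag_skb)
        pr_info("frag_list skb: %u bytes\n", frag_skb->len);
}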
