What type of FEI should I use? - redhawksdr

I have a device with one 40 MHz wideband input that can produce one wideband output and 8 narrowband DDC channels. I am not sure how to set up the FEI interface for this device. Should it be a CHANNELIZER or an RX_DIGITIZER_CHANNELIZER? I am leaning towards the latter due to the wideband output.
Regarding the allocation of such a device: for the RX_DIGITIZER_CHANNELIZER type, the manual is vague about how the CHANNELIZER portion is used in this case. Should I still allow CHANNELIZERs to be allocated, or just allocate the RX_DIGITIZER_CHANNELIZERs and DDCs? Should changes to the RX_DIGITIZER_CHANNELIZER determine when to drop the DDCs in this case? If CHANNELIZERs can still be allocated alongside an RX_DIGITIZER_CHANNELIZER, how does this work?

A CHANNELIZER device takes already-digitized data and provides DDCs from this digital stream. An RX_DIGITIZER_CHANNELIZER has an analog input and performs the receiver, digitizer, and channelizer functions in a non-separable device. It sounds like you chose correctly in using the RX_DIGITIZER_CHANNELIZER for a device with an analog input. Typically, allocating the RX_DIGITIZER_CHANNELIZER takes care of allocating the CHANNELIZER because all three parts are linked together. Therefore the only additional allocations are usually DDCs.

Related

Synchronization of WASAPI Audio Devices

Is there a way with WASAPI to determine if two devices (an input and an output device) are both synced to the same underlying clock source?
In all the examples I've seen input and output devices are handled separately - typically a different thread or event handle is used for each and I've not seen any discussion about how to keep two devices in sync (or how to handle the devices going out of sync).
For my app I basically need to do real-time input-to-output processing, where each audio cycle I get a certain number of incoming samples and send the same number of output samples. I.e., I need one triggering event for the audio cycle that will be correct for both devices, not separate events for each device.
I also need to understand how this works in both exclusive and shared modes. For exclusive mode, I guess this will come down to finding out whether the devices share a common clock source. For shared mode, some information on what Windows guarantees about synchronization of devices would be great.
You can use the IAudioClock API to detect drift of a given audio client, relative to QPC; if two endpoints share a clock, their drift relative to QPC will be identical (that is, they will have zero drift relative to each other.)
You can use the IAudioClockAdjustment API to adjust for drift that you can detect. For example, you could correct both sides for drift relative to QPC; you could correct either side for drift relative to the other; or you could split the difference and correct both sides to the mean.
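As a rough illustration (a sketch only, not production code: it assumes clock is an IAudioClock* already obtained via IAudioClient::GetService, and uses the C COM macros), the drift measurement might look like this:

    /* Sketch: measure one audio client's drift relative to QPC.
     * 'pos0' and 'qpc0' are a reference sample taken at startup. */
    #define COBJMACROS
    #include <windows.h>
    #include <audioclient.h>

    double drift_vs_qpc(IAudioClock *clock, UINT64 pos0, UINT64 qpc0)
    {
        UINT64 freq, pos, qpc;
        IAudioClock_GetFrequency(clock, &freq);     /* device ticks per second */
        IAudioClock_GetPosition(clock, &pos, &qpc); /* qpc is in 100 ns units  */

        double dev_s = (double)(pos - pos0) / (double)freq;
        double qpc_s = (double)(qpc - qpc0) / 1e7;
        return dev_s - qpc_s; /* flat and near zero => no drift vs. QPC */
    }

Two endpoints whose drift curves have the same slope are advancing at the same rate; IAudioClockAdjustment::SetSampleRate can then be used to null out any remaining difference, as described above.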

What do the ALSA timestamping functions return and how do the results relate to each other?

There are several "hi-res" timestamping functions in ALSA:
snd_pcm_status_get_trigger_htstamp
snd_pcm_status_get_audio_htstamp
snd_pcm_status_get_driver_htstamp
snd_pcm_status_get_htstamp
I would like to understand what points in time the returned timestamps represent.
My current understanding is that trigger_htstamp represents the time when the stream was started/stopped/paused. snd_pcm_status_get_trigger_htstamp returns a constant value, and when I add audio_htstamp to that value the result is very close to the current system time.
audio_htstamp seems to start from zero on my system and is incremented by a value equal to the period size I use. Hence on my system it is a simple frame counter. If I understand ALSA correctly, audio_htstamp can also work in a different, more accurate way depending on the system capabilities.
driver_htstamp, I guess from the name, is a timestamp generated by the audio driver.
Question 1: When is the timestamp driver_htstamp usually generated?
With htstamp I am really unsure where and when it is generated. I have a hunch that it may be related to DMA.
Question 2: Where is htstamp generated?
Question 3: When is htstamp generated?
Question 4: Is the assumption audio_htstamp < htstamp < driver_htstamp generally correct?
A little test program I wrote suggests it is, but I want to verify my assumption.
I cannot find this information in the ALSA documentation.
I just dug through the code for this stuff for my own purposes, so I figured I would share what I found.
The purpose of these timestamps is to allow you to determine subtle differences in the rate of different clocks; most importantly in this case the main system clock that Linux uses for general timekeeping compared with the different clock that determines the rate at which samples move in and out of the sound device. This can be very important for applications that need to keep audio from different hardware devices in sync, since the rates of different physical clocks are never exactly the same.
The technique used is sometimes called "cross-timestamping"; you capture timestamps from the clocks you want to compare as close to simultaneously as possible, and repeat this at regular intervals. There is usually some measurement error introduced, but some relatively simple filtering can get you a good characterization of the difference in the rate at which the clocks count.
The core PCM driver arranges to take a system clock timestamp as closely as possible to when an audio stream starts, and then it does a cross-timestamp between the system clock and audio clock (which can be measured in different ways) whenever it is asked to check the state of the hardware pointers for the DMA engine that moves samples around.
The default method of measuring the audio clock is via DMA hardware pointer comparison. This isn't terribly precise, but over longer periods of time you can still get a good measure of the rate difference. At the start of snd_pcm_update_hw_ptr0, a system timestamp is captured; this will end up being htstamp. The DMA pointers are then checked, and if it's determined that they've moved since the last check, audio_htstamp is calculated based on the number of frames DMA has copied and the nominal frequency of the audio clock. Then, once all the DMA pointer updating is done and right before snd_pcm_update_hw_ptr0 returns, another system timestamp is captured in driver_htstamp. This isn't meant to be used when you're using the DMA hw_ptr method of calculating the audio_htstamp, though.
If you happen to have an audio device using the HDAudio driver, you can use an alternate and much more precise method of measuring the audio clock. It supplies an extra operation callback called get_time_info that is used instead of the default method of capturing the system and audio timestamps. In the HDAudio case, it takes a system timestamp for htstamp as close as possible to when it reads an internal counter driven by the same clock source as the audio clock; this forms the audio_htstamp. Afterwards, the same DMA hw_ptr bookkeeping is done, but the code that translates the pointer movement into time is skipped. The driver_htstamp is still taken right before the routine ends, though; this is "to let apps detect if the reference tstamp read by low-level hardware was provided with a delay", as the comment says in the code. This is because there's no guarantee that the get_time_info callback is going to take a new system timestamp; it may have previously recorded an audio timestamp along with a system timestamp as part of an interrupt handler. In this case, the timestamps you get might not match the available frames and delay frames counts calculated by hw_ptr bookkeeping, but the driver_htstamp will let you know the closest system time to when those calculations were made.
In both cases, the code is designed to capture htstamp and audio_htstamp as closely together as possible, and for htstamp - trigger_htstamp to represent the amount of system time that passed during the period measured by audio_htstamp of the audio clock. You mostly shouldn't need to use driver_htstamp, but I guess it might be used with the USB Audio driver, as I think it and HDAudio are the only ones that do anything special with these interfaces right now.
The documentation for this, although it doesn't contain all the details you might want to know, is part of the kernel documentation: http://lxr.free-electrons.com/source/Documentation/sound/alsa/timestamping.txt?v=4.9
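For reference, here is a minimal sketch of the comparison described above (it assumes pcm is a running handle with hi-res timestamping enabled via the sw_params tstamp settings):

    #include <alsa/asoundlib.h>
    #include <stdio.h>

    static double ts_to_s(snd_htimestamp_t t)
    {
        return t.tv_sec + t.tv_nsec / 1e9;
    }

    void print_clock_skew(snd_pcm_t *pcm)
    {
        snd_pcm_status_t *status;
        snd_htimestamp_t trigger, audio, sys;

        snd_pcm_status_alloca(&status);
        if (snd_pcm_status(pcm, status) < 0)
            return;

        snd_pcm_status_get_trigger_htstamp(status, &trigger);
        snd_pcm_status_get_audio_htstamp(status, &audio);
        snd_pcm_status_get_htstamp(status, &sys);

        /* System time elapsed since trigger vs. audio clock time elapsed;
         * a steadily growing difference indicates clock rate skew. */
        double system_elapsed = ts_to_s(sys) - ts_to_s(trigger);
        double audio_elapsed  = ts_to_s(audio);
        printf("system - audio: %.9f s\n", system_elapsed - audio_elapsed);
    }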

The dmaengine in Linux - how to check the number of actually transferred bytes in a single SG transfer?

The dmaengine in Linux significantly simplifies writing of drivers for devices using DMA, especially if they support and use scatter-gather (SG) transfers.
However, there is a problem when the length of such a transfer is not known a priori. This situation is quite common; it may happen with USB transfers, or with AXI Stream transfers. In the case of an AXI Stream connected device, the transfer is terminated by setting the tlast signal to '1'. The driver should prepare the buffer for the longest possible transfer, but after the transfer is complete it should be able to find the number of actually transferred bytes. Unfortunately, it seems that there is no documented way to read the length of the completed transfer.
A single transfer is represented by an SG table that is converted into a dma_async_tx_descriptor using the dmaengine_prep_slave_sg function, submitted to the DMA layer, and finally scheduled for execution via dma_async_issue_pending. After that, the transfer descriptor may be accessed only via the dma_cookie returned when it was submitted with dmaengine_submit.
The problem of determining the actual length of a transfer was reported for USB transfers, and a patch adding a transferred field to the dma_async_tx_descriptor was proposed. However, that proposal was rejected, and after a discussion a solution was suggested instead, based on calling dmaengine_tx_status and checking the residue field in the returned dma_tx_state structure.
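In driver code, that residue-based check reads roughly like this (a sketch; chan, cookie, and buf_len come from the driver's own prep/submit path):

    #include <linux/dmaengine.h>

    static size_t bytes_transferred(struct dma_chan *chan,
                                    dma_cookie_t cookie, size_t buf_len)
    {
        struct dma_tx_state state;
        enum dma_status status;

        status = dmaengine_tx_status(chan, cookie, &state);
        if (status != DMA_COMPLETE)
            return 0; /* still in flight, or failed */

        /* bytes actually moved = buffer length minus unfinished residue */
        return buf_len - state.residue;
    }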
Unfortunately, the proposed solution seems to work in neither the 4.4 kernel (used for Xilinx SoC devices) nor the newest 4.7 kernel: the residue field is set to 0 for completed transfers regardless of the actual number of transferred bytes.
So my question is: How can I reliably determine the actual number of transferred bytes in the completed SG DMA transfer in a dmaengine compatible driver?
PS. That question, with more Xilinx-specific details related to the AXI DMA IP core, has also been asked on the Xilinx forum.

Designing a Linux char device driver so multiple processes can read

I notice that for serial devices, e.g. /dev/ttyUSB0, multiple processes can open the device but only one process gets the bytes (whichever reads them first).
However, for the Linux input API, e.g. /dev/input/event0, multiple processes can open the device, and all of the processes are able to read the input events.
My current goal:
I'd like to write a driver for several multi-position switches (e.g. a slider switch with 3 or 4 possible positions), where apps can get a notification of any switch position changes. Ideally I'd like to use the Linux input API; however, it seems that the Linux input API has no support for the concept of multi-position switches. So I'm looking at making a custom driver with similar capabilities to the Linux input API.
Two questions:
From a driver design point of view, why is there this difference in behaviour between the Linux input API and Linux serial devices? I reckon it could be useful for multiple processes to be able to open one serial port and all listen to incoming bytes.
What is a good way to write a Linux character device driver so that it's like the Linux input API, so multiple processes can open the device and read all the data?
The distinction is partly historical and partly due to the different expectation models.
The event subsystem is designed for unidirectional notification of simple events from multiple writers into the system with very little (or no) configuration options.
The tty subsystem is intended for bidirectional end-to-end communication of potentially large amounts of data and provides a reasonably flexible (albeit fairly baroque) configuration mechanism.
Historically, the tty subsystem was the main mechanism of communicating with the system: you plugged your "teletype" into a serial port and bits went in and out. Different teletypes from different vendors used different protocols, and thus the termios interface was born. To make the system perform well in a multi-user context, buffering was added in the kernel (and made configurable). The expectation model of the tty subsystem is that of a point-to-point link between moderately intelligent endpoints that agree on what the data passing between them will look like.
While there are circumstances where "single writer, multiple readers" would make sense in the tty subsystem (a GPS receiver connected to a serial port, continually reporting its position, for instance), that's not the main purpose of the system. But you can easily accomplish this "multiple readers" behaviour in userspace.
The event system on the other hand, is basically an interrupt mechanism intended for things like mice and keyboards. Unlike teletypes, input devices are unidirectional and provide little or no control over the data they produce. There is also little point in buffering the data. Nobody is going to be interested in where the mouse moved ten minutes ago.
I hope that answers your first question.
For your second question: "it depends". What do you want to accomplish? And what is the "longevity" of the data? You also have to ask yourself whether it makes sense to put the complexity in the kernel or if it wouldn't be better to put it in userspace.
Getting data out to multiple readers isn't particularly difficult. You could create a receive buffer per reader and fill each of them as the data comes in. Things get a little more interesting if the data comes in faster than the readers can consume it, but even that is mostly a solved problem. Look at the network stack for inspiration!
If your device is simple and just produces events, maybe you just want to be an input driver?
Your second question is a lot more difficult to answer without knowing more about what you want to accomplish.
Update after you added your specific goal:
When I do position switches, I usually just create a character device and implement poll and read. If you want to be fancy and have a lot of switches, you could do mmap but I wouldn't bother.
Userspace just opens your /dev/foo, reads the current state, and starts polling. When your switches change state, you just wake up the readers; they'll all read the new state and everyone will be happy.
Be careful to only wake up readers when your switches are 'settled'. Many position switches are very noisy and they'll bounce around a fair bit.
In other words: I would ignore the input system altogether for this. As you surmise, position switches are not really "inputs".
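A minimal sketch of that poll/read pattern follows. Everything here is hypothetical: switch_settled() would be called from your debounce logic, and .open is assumed to stash a per-reader u32 sequence counter in filp->private_data.

    #include <linux/fs.h>
    #include <linux/poll.h>
    #include <linux/uaccess.h>
    #include <linux/wait.h>

    static DECLARE_WAIT_QUEUE_HEAD(sw_waitq);
    static u8  switch_state;  /* current debounced position        */
    static u32 state_seq;     /* bumped on every settled change    */

    /* Call this from the debounce timer once the switch has settled. */
    static void switch_settled(u8 new_state)
    {
        switch_state = new_state;
        state_seq++;
        wake_up_interruptible(&sw_waitq);  /* wake all polling readers */
    }

    static unsigned int sw_poll(struct file *filp, poll_table *wait)
    {
        u32 *last_seen = filp->private_data;

        poll_wait(filp, &sw_waitq, wait);
        if (*last_seen != state_seq)       /* changed since last read */
            return POLLIN | POLLRDNORM;
        return 0;
    }

    static ssize_t sw_read(struct file *filp, char __user *buf,
                           size_t count, loff_t *ppos)
    {
        u32 *last_seen = filp->private_data;

        if (count < sizeof(switch_state))
            return -EINVAL;
        if (copy_to_user(buf, &switch_state, sizeof(switch_state)))
            return -EFAULT;
        *last_seen = state_seq;  /* this reader is now up to date */
        return sizeof(switch_state);
    }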
How a character device handles these kinds of semantics is completely up to the driver to define and implement.
It would certainly be possible, for example, to implement a driver for a serial device that will deliver all read data to every process that has the character driver open. And it would also be possible to implement an input device driver that delivers events to only one process, whichever one is queued up to receive the latest event. It's all a matter of coding the appropriate implementation.
The difference comes down to a simple question: "what makes sense". For a serial device, it's been decided that it makes more sense for read data to be handled by a single process. For an input device, it's been decided that it makes more sense to deliver all input events to every process that has the input device open. It would be reasonable to expect, for example, that one process might only care about a particular input event, say pointer button #3 being pressed, while another process wants to process all pointer motion events. So, in this situation, it makes more sense to distribute all input events to all concerned parties.
I am ignoring some side issues for simplicity, such as, when serial data is delivered to all reading processes, what should happen if one of them stops reading from the device. That's also something that would be factored in when deciding how to implement the semantics of a particular device.
What is a good way to write a Linux character device driver so that it's like the Linux input API, so multiple processes can open the device and read all the data?
See the .open member of struct file_operations for the char device. Whenever userspace opens the device, the .open function is called. It can add the open file to a list of open files for the device (and then .release removes it).
The char device data struct should most likely use a kernel struct list_head to keep a list of open files:
struct my_dev_data {
    ...
    struct cdev cdev;
    struct list_head file_open_list;
    ...
};
Data for each file:
struct file_data {
    struct my_dev_data *dev_data;
    struct list_head file_open_list;
    ...
};
In the .open function, add the open file to dev_data->file_open_list (use a mutex to protect these operations as needed):
static int my_dev_input_open(struct inode *inode, struct file *filp)
{
    struct my_dev_data *dev_data;
    struct file_data *file_data;

    dev_data = container_of(inode->i_cdev, struct my_dev_data, cdev);
    ...
    /* Allocate memory for file data and channel data */
    file_data = devm_kzalloc(&dev_data->pdev->dev,
                             sizeof(struct file_data), GFP_KERNEL);
    if (!file_data)
        return -ENOMEM;
    ...
    /* Add open file data to list */
    INIT_LIST_HEAD(&file_data->file_open_list);
    list_add(&file_data->file_open_list, &dev_data->file_open_list);
    ...
    file_data->dev_data = dev_data;
    filp->private_data = file_data;

    return 0;
}
The .release function should remove the open file from dev_data->file_open_list, and release the memory of file_data.
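A matching .release might look like this (a sketch, using the same hypothetical fields as in the .open example, with the same locking caveat):

    static int my_dev_input_release(struct inode *inode, struct file *filp)
    {
        struct file_data *file_data = filp->private_data;

        /* Drop this file from the device's open-file list */
        list_del(&file_data->file_open_list);
        devm_kfree(&file_data->dev_data->pdev->dev, file_data);
        return 0;
    }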
Now that the .open and .release functions maintain the list of open files, it is possible for all open files to read data. Two possible strategies:
A separate read buffer for each open file: when data is received, it is copied into the buffers of all open files (sketched after this list).
A single read buffer for the device, but a separate read pointer for each open file: data can be freed from the buffer once it has been read by all open files.
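A sketch of the first strategy, extending the struct file_data above with a per-reader kfifo (locking and reader wake-up elided):

    #include <linux/kfifo.h>
    #include <linux/list.h>

    struct file_data {
        struct my_dev_data *dev_data;
        struct list_head file_open_list;
        DECLARE_KFIFO(rx, u8, 1024); /* per-reader receive buffer   */
    };                               /* (run INIT_KFIFO(rx) in .open) */

    static void distribute_rx(struct my_dev_data *dev_data,
                              const u8 *data, size_t len)
    {
        struct file_data *fd;

        /* Copy the new data into every open file's private buffer. */
        list_for_each_entry(fd, &dev_data->file_open_list, file_open_list)
            kfifo_in(&fd->rx, data, len); /* drops data if a reader lags */
    }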
Serial to input/event
You could try looking into the serial mouse driver source code.
It seems to be what you're searching for: building an input/event device from a TTYSx.
Simpler: creating a server instead of a driver.
Historically, the first character device I remember is /dev/lp0.
To be able to write to it from many sources, without overlap or other conflicts,
an LPR server was written.
To share a device, you have to:
open the device in exclusive (rw) mode,
create a socket (un*x or TCP) to listen on,
redirect requests from the socket's clients to the device, and maybe
store the device status (from reading the device's answers) and
send the device status to the socket's clients when required.
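A bare-bones sketch of such a server (userspace; error handling, client removal, and exclusive locking of the device, e.g. via flock, are all elided, and the paths are hypothetical):

    #include <fcntl.h>
    #include <string.h>
    #include <sys/select.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    int main(void)
    {
        int dev = open("/dev/foo", O_RDWR);  /* single exclusive owner */
        int srv = socket(AF_UNIX, SOCK_STREAM, 0);
        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        strcpy(addr.sun_path, "/tmp/foo.sock");
        bind(srv, (struct sockaddr *)&addr, sizeof(addr));
        listen(srv, 8);

        int clients[64], nclients = 0;
        char buf[256];
        for (;;) {
            fd_set rfds;
            FD_ZERO(&rfds);
            FD_SET(dev, &rfds);
            FD_SET(srv, &rfds);
            int maxfd = dev > srv ? dev : srv;
            for (int i = 0; i < nclients; i++) {
                FD_SET(clients[i], &rfds);
                if (clients[i] > maxfd)
                    maxfd = clients[i];
            }
            select(maxfd + 1, &rfds, NULL, NULL, NULL);

            if (FD_ISSET(srv, &rfds) && nclients < 64) /* new client */
                clients[nclients++] = accept(srv, NULL, NULL);

            if (FD_ISSET(dev, &rfds)) {        /* device status changed */
                ssize_t n = read(dev, buf, sizeof(buf));
                if (n > 0)
                    for (int i = 0; i < nclients; i++)
                        write(clients[i], buf, n); /* broadcast */
            }
            for (int i = 0; i < nclients; i++) /* forward client writes */
                if (FD_ISSET(clients[i], &rfds)) {
                    ssize_t n = read(clients[i], buf, sizeof(buf));
                    if (n > 0)
                        write(dev, buf, n);
                }
        }
    }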

What would be the simplest way to interface custom hardware with one input to have a switch somewhere in /proc?

I have a device that takes a low-current 3-12 V input signal to do its magic, and I would like to interface it to my Linux box. What options do I have for this? It would be great to have a low-cost, possibly user-space solution.
If I understand right, you need to control your box by changing 3-12 V input signals to it. Here are the choices I can think of off the top of my head:
a: Use RS232 serial handshake lines. RTS/CTS can usually be controlled programmatically as "on/off" signals, without driver development, using IOCTL calls.
b: Use a "GPI dongle" such as the Advantech ADAM range. These typically take serial or TCP/IP inputs and convert them to suitable output signals.
c: You may be able to do something with a parallel printer port, if your PC still has such a thing.
As shodanex says, be aware that RS232 levels are NOT directly compatible with TTL/CMOS inputs so you may need some minor level shifting/clamping electronics to fix this.
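For option (a), the ioctl interface looks like this from userspace (a minimal sketch; the device path is hypothetical, and the level-shifting caveat above still applies):

    #include <fcntl.h>
    #include <sys/ioctl.h>
    #include <termios.h>
    #include <unistd.h>

    /* Drive RTS high or low as an on/off output signal. */
    static int set_rts(int fd, int on)
    {
        int flag = TIOCM_RTS;
        return ioctl(fd, on ? TIOCMBIS : TIOCMBIC, &flag);
    }

    /* Read CTS as an on/off input signal. */
    static int get_cts(int fd)
    {
        int status;
        if (ioctl(fd, TIOCMGET, &status) < 0)
            return -1;
        return !!(status & TIOCM_CTS);
    }

    /* usage:
     *   int fd = open("/dev/ttyS0", O_RDWR | O_NOCTTY);
     *   set_rts(fd, 1);
     *   int level = get_cts(fd);
     */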
