We are using REDHAWK for an FM modulator. It reads an audio modulating signal from a file, performs the modulation, and then sends the modulated data from REDHAWK to an external program via TCP/IP for digital-to-analog conversion and up-conversion to RF.
The data flows through the following components: rh.FileReader, rh.DataConverter, rh.fastfilter, an FM modulator (a custom component), rh.DataConverter, and rh.sinksocket.
The rh.sinksocket component sends data to an external server program that passes the samples from REDHAWK to an FPGA and DAC.
At present the sample rate appears to be controlled via the rh.FileReader component. However, we would like the external DAC, not the rh.FileReader component, to set the sample rate of the system, for example via TCP/IP flow control.
Is it possible to use an external DAC as the clock source for a Redhawk waveform?
The property on FileReader dictating the sample rate simply tells it what the sample rate of the provided file is. This is used for the Signal Related Information (SRI) passed to downstream components, and it sets the output rate if you do not block or throttle. That is, FileReader does not do any resampling of the given file to meet the given sample rate.
If you want to resample to a given rate you can try the ArbitraryRateResampler component.
Regarding setting these properties via some external mechanism (TCP/IP), you would want to write a specific component or REDHAWK service that listens for this external event and then makes a configure call to set the property you'd like changed.
If these events are global and can apply to many applications on your domain, then a service is the right pattern; if they are specific to a single application, then a component might make more sense.
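As a rough illustration of the listener half of that pattern, the sketch below parses a sample-rate message arriving over TCP and hands it to a callback. In a real REDHAWK component or service the callback would wrap the configure call on the target property; here it is just a `std::function` so the parsing logic stands alone. The `SAMPLE_RATE=<value>` wire format is an assumption for illustration, not a REDHAWK convention.

```cpp
// Sketch: parse an externally supplied sample-rate message and dispatch it
// to a configure-style callback. Hypothetical message format.
#include <cstdlib>
#include <functional>
#include <string>

// Parse a "SAMPLE_RATE=48000" style message; returns true on success.
bool parseSampleRateMessage(const std::string& msg, double& rateOut) {
    const std::string key = "SAMPLE_RATE=";
    if (msg.compare(0, key.size(), key) != 0) return false;
    char* end = nullptr;
    double rate = std::strtod(msg.c_str() + key.size(), &end);
    if (end == msg.c_str() + key.size() || rate <= 0.0) return false;
    rateOut = rate;
    return true;
}

// Dispatch a received message to the callback, which in REDHAWK would
// make the configure call to update the property.
void handleMessage(const std::string& msg,
                   const std::function<void(double)>& applyRate) {
    double rate = 0.0;
    if (parseSampleRateMessage(msg, rate)) {
        applyRate(rate);
    }
}
```

The socket-accept loop and the CORBA configure call are omitted; only the glue between "bytes arrived" and "property updated" is shown.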
Related
I have created (C++, Win10, VS2022) a simple source DirectShow filter. It gets an audio stream from an external source (a file for testing, the network in future) and produces an audio stream on its output pin, which I connect to the speaker.
To do this I have implemented the FillBuffer method for the output pin (CSourceStream) of the filter. Media type: MEDIATYPE_Stream/MEDIASUBTYPE_PCM.
Before being connected, the pin gets information about the media type via SetMediaType (WAVEFORMATEX) and remembers the audio parameters: wBitsPerSample, nSamplesPerSec, and nChannels. The audio stream comes from the external source (file or network) into FillBuffer with those parameters. It works fine.
But I need to handle the situation where the external source sends an audio stream to the filter with different parameters (for example, the old stream had 11025 Hz and the current one has 22050 Hz).
Could you tell me which actions and calls I should make in the FillBuffer() method if I receive an audio stream with a changed wBitsPerSample, nSamplesPerSec, or nChannels parameter?
The fact is that these parameters have already been agreed between my output pin and the input pin of the speaker, and I need to change this agreement correctly.
You need to improve the implementation and handle
Dynamic Format Changes
...
QueryAccept (Downstream) is used when an output pin proposes a format change to its downstream peer, but only if the new format does not require a larger buffer.
This might not be trivial, because baseline DirectShow filters are not required to support dynamic changes. That is, the ability to change format depends on your actual pipeline and on the implementation of the other filters.
You should also be able to find the SDK helpers CDynamicSourceStream and CDynamicSource.
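The "does the new format need a larger buffer" test that gates the lightweight QueryAccept path can be sketched without the Windows SDK. The struct below mirrors the WAVEFORMATEX fields the question names; the 100 ms buffer duration is an assumption for illustration, not a DirectShow default.

```cpp
// Sketch: deciding whether a dynamic format change can use the lightweight
// QueryAccept path (attach the new media type to the next sample) or
// requires renegotiating the connection. Standalone stand-in for
// WAVEFORMATEX so the logic is testable anywhere.
#include <cstdint>

struct AudioFormat {
    uint16_t bitsPerSample;  // wBitsPerSample
    uint32_t samplesPerSec;  // nSamplesPerSec
    uint16_t channels;       // nChannels
};

// Bytes needed to hold `ms` milliseconds of audio in this format.
uint32_t bufferBytes(const AudioFormat& f, uint32_t ms) {
    uint32_t bytesPerFrame = (f.bitsPerSample / 8) * f.channels;
    return bytesPerFrame * (f.samplesPerSec / 1000) * ms;
}

// True if the new format still fits the allocator buffers negotiated for
// the old one -- the precondition for QueryAccept (Downstream). If false,
// the pin must renegotiate (e.g. reconnect) instead.
bool canUseQueryAcceptPath(const AudioFormat& oldFmt,
                           const AudioFormat& newFmt,
                           uint32_t bufferMs = 100) {
    return bufferBytes(newFmt, bufferMs) <= bufferBytes(oldFmt, bufferMs);
}
```

In the 11025 Hz to 22050 Hz case from the question, the new format needs a larger buffer, so QueryAccept alone would not be enough.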
Is there a way with WASAPI to determine if two devices (an input and an output device) are both synced to the same underlying clock source?
In all the examples I've seen input and output devices are handled separately - typically a different thread or event handle is used for each and I've not seen any discussion about how to keep two devices in sync (or how to handle the devices going out of sync).
For my app I basically need to do real-time input-to-output processing where each audio cycle I get a certain number of incoming samples and I send the same number of output samples. That is, I need one triggering event for the audio cycle that is correct for both devices, not separate events for each device.
I also need to understand how this works in both exclusive and shared modes. For exclusive I guess this will come down to finding if devices have a common clock source. For shared mode some information on what Windows guarantees about synchronization of devices would be great.
You can use the IAudioClock API to detect drift of a given audio client, relative to QPC; if two endpoints share a clock, their drift relative to QPC will be identical (that is, they will have zero drift relative to each other.)
You can use the IAudioClockAdjustment API to adjust for drift that you can detect. For example, you could correct both sides for drift relative to QPC; you could correct either side for drift relative to the other; or you could split the difference and correct both sides to the mean.
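The drift arithmetic behind that answer can be sketched independently of the API. The snapshots below stand in for pairs of values you would obtain from IAudioClock::GetPosition (device position plus its QPC timestamp); the structs and numbers are illustrative, not actual API output.

```cpp
// Sketch: estimating an endpoint's drift relative to QPC from two
// (position, QPC time) snapshots, and comparing two endpoints. If both
// endpoints share the same underlying hardware clock, their drift ratios
// match and the relative drift is exactly 1.0.
struct ClockSnapshot {
    double positionSec;  // device position converted to seconds
    double qpcSec;       // QPC timestamp converted to seconds
};

// Ratio of device-clock elapsed time to QPC elapsed time.
// 1.0 means no drift; 1.0001 means the device runs 100 ppm fast.
double driftRatio(const ClockSnapshot& a, const ClockSnapshot& b) {
    return (b.positionSec - a.positionSec) / (b.qpcSec - a.qpcSec);
}

// Drift of endpoint A relative to endpoint B: 1.0 means zero relative
// drift, i.e. they plausibly share a clock source.
double relativeDrift(double ratioA, double ratioB) {
    return ratioA / ratioB;
}
```

A nonzero relative drift is what IAudioClockAdjustment would then be used to correct, on either side or split between both.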
Hey everyone, I was developing a J2ME app that records FM radio. I have tried many methods but have failed. The major problem I faced is that once the code for tuning into a specific FM channel is written (it works, but only outputs directly to the speaker), I couldn't find a way in the J2ME Media API to buffer the output and write it to a file. Thanks in advance.
I think it is not possible with MMAPI directly. I assume the FM radio streams via RTSP, and you can specify that as the data source for MMAPI, but if you want to store the audio data, you need to fetch it into your own buffer instead and then pass it to the MMAPI Player via an InputStream.
That way you will need to code your own handling for RTSP (or whatever your FM radio uses) and convert the data into a format acceptable to the MMAPI Player via InputStream, for example audio/x-wav or audio/amr. If the format's header doesn't need to specify the length of the data, then you can probably 'stream' it via your buffer while receiving data from the RTSP source.
This is fairly low-level coding; I think it will be hard to implement in J2ME.
I have a raw data file of a sound recording, with each sample stored as a 16 bit short. I want to play this file through Redhawk.
I have a file_source_s connected to AudioSink like so:
I was expecting to hear sound from my speakers when starting these components. But when I start both components, I cannot hear any sound.
Here are the file_source_s properties values:
filename: name
itemsize: 2
repeat: true
seek: true
seek_point: 0
whence: SEEK_SET
I know:
the problem is not AudioSink. I have tested the AudioSink with the signal generator (SigGen) and I could hear sound through my speakers.
file_source_s is finding the file. When I put in a non-existent file name, file_source_s gives the "No such file or directory" error. I can also see the first 1024 bytes of the file when I plot the short_out port, but the plot does not update.
The AudioSink component uses the information from the received SRI (Signal Related Information) to determine the audio sample rate, as seen on line 156 of the AudioSink component:
int sample_rate = static_cast<int>(rint(1.0/current_sri.xdelta));
It receives the SRI from the upstream component, in this case file_source_s.
The component file_source_s is part of the GNUHAWK component package. The GNUHAWK library provides software that enables a GNU Radio block to be integrated into the REDHAWK software framework. Since SRI is a REDHAWK construct and not present in GNU Radio, it does not appear that the file_source_s block gathers enough information via its properties to represent the correct xdelta / sample rate for the audio file.
I'd recommend using a pure REDHAWK component like DataReader which takes in as a property the sample rate.
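To make the relationship concrete, the sketch below shows the reciprocal link between a sample-rate property and the SRI xdelta that AudioSink consumes. The function names are illustrative; only the `rint(1.0/xdelta)` computation comes from the AudioSink source quoted above.

```cpp
// Sketch: a component that knows the file's true sample rate should
// publish xdelta = 1 / sample_rate in the SRI it pushes; AudioSink then
// recovers the rate exactly as its line 156 does.
#include <cmath>

// What the upstream component should write into SRI.
double xdeltaForRate(double sampleRate) {
    return 1.0 / sampleRate;  // seconds between samples
}

// What AudioSink computes from the received SRI.
int rateFromXdelta(double xdelta) {
    return static_cast<int>(std::rint(1.0 / xdelta));
}
```

With file_source_s leaving xdelta at a default value, AudioSink computes the wrong rate, which is consistent with hearing nothing (or garbage) from the speakers.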
I have a dual channel radio where I have two RX_DIGITIZER_CHANNELIZERs and two DDCs. My waveform allocates both channels. The waveform just takes the data from each channel and outputs it to two DataConverters. I am using the snapshot function to capture data. When I start to collect data at higher rates, some of the packets get dropped. Is there a way to measure how long a call such as pushPacket takes? If I used the logging function, it would produce too much output to measure how long it takes.
#michael_sw can you plot the data coming from the device in the IDE instead of saving it to disk?
How are you monitoring the packet drops?
Do you need to go through the DataConverter? If you do, it is possible to set the blocking flag in the SRI (see chapter 15 in the manual) so that the downstream REDHAWK component applies back pressure and blocks until the DataConverter has consumed the previous data. This only helps if the DataConverter is the one dropping packets.
In the IDE there is a port monitoring mode that can tell you when data is being dropped by a component (right-click on the port and select port monitoring).
Another option: in the DataConverter you could modify the code to watch the getPacket call for inputQueueFlushed being true.
I commonly use timestamping: make a call to one of the system clock functions and either log the time or print it to the console. If you do this in the function that calls pushPacket and again in the pushPacket handler, you simply take the difference. If this produces too much data, you can use a counter and log only every 1000th call, or collect the data in an array for a period of time and log/print it after the component shuts down. Calls to the system clock do not affect performance much compared to CORBA calls.
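A minimal sketch of that timestamping approach, using `std::chrono` and reporting only every Nth call to keep the log volume down. The `pushPacket` here is a placeholder `std::function` standing in for the real CORBA call; the `CallTimer` name and the reporting interval are illustrative.

```cpp
// Sketch: time a pushPacket-style call and report the running average
// only every `reportEvery` calls, to avoid flooding the log.
#include <chrono>
#include <cstdio>
#include <functional>

struct CallTimer {
    long long totalNs = 0;   // accumulated call time in nanoseconds
    unsigned count = 0;      // number of timed calls so far
    unsigned reportEvery;    // how often to print the average

    explicit CallTimer(unsigned every) : reportEvery(every) {}

    void timedCall(const std::function<void()>& pushPacket) {
        auto t0 = std::chrono::steady_clock::now();
        pushPacket();  // the call being measured
        auto t1 = std::chrono::steady_clock::now();
        totalNs += std::chrono::duration_cast<std::chrono::nanoseconds>(
                       t1 - t0).count();
        if (++count % reportEvery == 0) {
            std::printf("avg pushPacket: %lld ns over %u calls\n",
                        totalNs / count, count);
        }
    }
};
```

Wrapping the existing pushPacket call site in `timedCall` keeps the measurement in one place, and the counter-based reporting matches the "log only every 1000 calls" suggestion above.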