gnuradio - handle no stream of data

I am working on a GNU Radio project and I need a block with two inputs and one output that does the following:
1. Transfer the data from the first input to the output.
2. When the stream of data stops on the first input, the block switches to the second input; that is, the data from the second input is transferred to the output until the stream on the first input starts again.
Are you familiar with such a block?
If not, do you have an idea how to do it?
Thanks

There's no such thing as a stream "stopping", unless the upstream block signals "I'm done", but then it can't start again.
So: What you want is impossible, architecturally.
I presume this is partly due to a slight misconception about the signal processing being done: "wall clock" time doesn't matter to the processing at all; all that counts is the sequence of numbers, not when they arrive. The signal is the same whether there are 10 µs between two batches of samples or 10 hours.
Therefore, there can't be a block that does what you want; you're trying to break the DSP abstraction; GNU Radio has no means for that.
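To make that concrete, here is a minimal GNU Radio embedded Python block (a sketch to illustrate the abstraction, not a solution to the question): all that work() ever receives is a batch of samples, with no notion of when they arrived.

import numpy as np
from gnuradio import gr

class passthrough(gr.sync_block):
    """Illustration: a block only ever sees the sample sequence."""

    def __init__(self):
        gr.sync_block.__init__(self,
                               name="passthrough",
                               in_sig=[np.complex64],
                               out_sig=[np.complex64])

    def work(self, input_items, output_items):
        # Everything the scheduler hands us is a batch of samples; there
        # is no wall-clock information here, so the block cannot tell
        # whether the previous batch arrived 10 microseconds or 10 hours ago.
        n = len(output_items[0])
        output_items[0][:] = input_items[0][:n]
        return n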

Related

How to playback realtime audio in python while also constantly recording?

I want to create a speech jammer. It is essentially something that continuously repeats back to you what you just said. I was trying to use the sounddevice library to record what I am saying while also playing it back. Then I changed it to first record what I was saying, then play that back while recording something new. However, it is not functioning as I would like. Any suggestions for other libraries? Or does someone have a suggestion for the code I already have?
Instead of constantly playing back to me, it is starting and stopping, at intervals of the specified duration. So it will record for 500 ms, then play that back for 500 ms, and then start recording again. The wanted behavior would be: recording for 500 ms while playing back the audio it is recording, delayed by some milliseconds.
import sounddevice as sd
import numpy as np
fs = 44100
sd.default.samplerate = fs
sd.default.channels = 2
#the above is to avoid having to specify arguments in every function call
duration = .5
myarray = sd.rec(int(duration*fs))
while True:
    sd.wait()
    myarray = sd.playrec(myarray)
    sd.wait()
Paraphrasing my own answer from https://stackoverflow.com/a/54569667:
The functions sd.play(), sd.rec() and sd.playrec() are not meant to be used repeatedly in rapid succession. Internally, each call creates an sd.OutputStream, sd.InputStream or sd.Stream (respectively), plays/records the audio data and closes the stream again. Because of this opening and closing of the stream, gaps occur. This is expected.
For continuous playback you can use the so-called "blocking mode" by creating a single stream and calling the read() and/or write() methods on it.
Or, what I normally prefer, you can use the so-called "non-blocking mode" by creating a custom "callback" function and passing it to the stream on creation.
In this callback function, you can e.g. write the input data to a queue.Queue and read the output data from the same queue. By filling the queue by a certain amount of zeros beforehand, you can specify how long the delay between input and output shall be.
You can have a look at the examples to see how callback functions and queues are used.
Let me know if you need more help. As a starting point, a minimal sketch of the callback-plus-queue approach is below.
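This sketch uses arbitrary example values for the block size and delay; the queue is pre-filled with silence, so the output lags the input by roughly the requested delay.

import queue
import numpy as np
import sounddevice as sd

fs = 44100
blocksize = 1024
delay = 0.5  # desired input-to-output delay in seconds

q = queue.Queue()
# Pre-fill the queue with silence; its length determines the delay.
for _ in range(int(delay * fs / blocksize)):
    q.put(np.zeros((blocksize, 2), dtype='float32'))

def callback(indata, outdata, frames, time, status):
    if status:
        print(status)
    q.put(indata.copy())             # enqueue what was just recorded
    try:
        outdata[:] = q.get_nowait()  # play the oldest queued block
    except queue.Empty:
        outdata.fill(0)

with sd.Stream(samplerate=fs, blocksize=blocksize, channels=2,
               dtype='float32', callback=callback):
    input('Jamming; press Enter to stop\n')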
I see a potential problem here: you are trying to use myarray as both the input and the output of the .playrec() function. I would recommend having two arrays, one for recording the live audio and one for playing back the recorded audio.
Instead of using the .playrec() command, you could just rapidly alternate between .rec() and .play() within your while loop.
For example, the following code should record for one millisecond, wait for that recording to finish, and then play back the one millisecond of audio:
duration = 0.001
while True:
    myarray = sd.rec(int(duration * fs))
    sd.wait()         # wait for the recording to finish
    sd.play(myarray)  # the defaults set above supply the samplerate
There is no millisecond delay after the playback because you want to go straight back to recording the next millisecond. It should be noted, however, that this does not keep a recording of your audio for more than one millisecond! You would have to add your own code that appends each recorded chunk to an array and fills it up over time, as sketched below.
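A minimal sketch of that accumulation step, reusing the defaults from the question (the chunk count is just an arbitrary bound for the demo):

import numpy as np
import sounddevice as sd

fs = 44100
duration = 0.001
sd.default.samplerate = fs
sd.default.channels = 2

chunks = []
for _ in range(1000):                     # record a bounded number of chunks
    chunk = sd.rec(int(duration * fs))
    sd.wait()                             # let this chunk finish recording
    sd.play(chunk)
    chunks.append(chunk.copy())           # keep every chunk
full_recording = np.concatenate(chunks)   # the whole history in one array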

When does WASAPI GetNextPacketSize return 0

The sample code for WASAPI capture on MSDN loops until GetNextPacketSize returns 0.
I just want to understand when this will happen:
Will it happen if there is silence registered on the microphone? (In that case, will it loop infinitely if I keep making noise into the microphone?)
Or does it depend on some fundamental audio capture concept that I am missing? (I am quite new to audio APIs :))
The API helps in determining the size of the data buffer to be captured, so that the API client does not need to guess or allocate an oversized buffer. The API returns zero when there is no data to capture yet (not a single frame). This can happen in an ongoing capture session if you call the API too early; the caller is simply expected to try again later, since new data can still be generated.
Under some conditions a zero return might indicate the end of the stream. Specifically, if you capture from a loopback device and there are no active playback sessions that could generate data for loopback delivery, the capture API may keep delivering no data until a new playback session emerges.
The sample code checks for a zero packet size in conjunction with a Sleep call. The loop expects that at least some data is generated during the sleep, so under normal conditions of continuous audio generation the first call within the outer loop does not return zero. The inner loop then reads as many non-empty buffers as possible, until zero indicates that all data which was ready for delivery has been returned to the client.
The outer loop keeps running until the sink signals an end-of-capture event through the bDone variable. There is a catch here: according to the sample code, the inner loop could keep rolling without ever breaking back into the outer loop, in which case capture is never stopped correctly. The sample assumes the sink processes data fast enough that the inner loop can drain all currently available data and break out to reach the Sleep call. That is, the WASAPI calls are all non-blocking, and the assumption is that these loops run fast enough that audio data is processed faster than it is captured, with the thread spending most of its time in the Sleep call. Perhaps not the best sample code for beginners. You can make it more reliable by checking bDone in the inner loop as well.
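To make the loop structure concrete, here is a Python sketch; FakeCaptureClient and its methods are inventions standing in for IAudioCaptureClient (GetNextPacketSize/GetBuffer), and the done flag plays the role of bDone.

import time
from collections import deque

class FakeCaptureClient:
    """Hypothetical stand-in for IAudioCaptureClient, purely for illustration."""
    def __init__(self, packets):
        self._packets = deque(packets)

    def next_packet_size(self):    # plays the role of GetNextPacketSize
        return len(self._packets[0]) if self._packets else 0

    def read_packet(self):         # plays the role of GetBuffer/ReleaseBuffer
        return self._packets.popleft()

def capture_loop(client, iterations=3):
    done = False                   # plays the role of bDone
    for _ in range(iterations):    # outer loop (the real one runs until done)
        time.sleep(0.01)           # Sleep: let data accumulate before draining
        # Inner drain loop. Checking `done` here as well is the suggested
        # fix, so a source that never runs dry cannot trap the thread here.
        while not done and client.next_packet_size() != 0:
            packet = client.read_packet()
            print('drained', len(packet), 'bytes')

capture_loop(FakeCaptureClient([b'abcd', b'efghij']))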

Streaming output from program to an arbitrary number of programs under Linux?

How should I stream the output of one program to an undefined number of programs, in such a fashion that the data isn't buffered anywhere and the application where the stream originates doesn't block even if there's nothing reading the stream, while the programs reading the stream do block if there's no output from the first program?
I've been trying to Google around for a while now, but all I find are methods where the program does block if nothing is reading the stream.
How should I stream the output of one program to an undefined number of programs, in such a fashion that the data isn't buffered anywhere and the application where the stream originates doesn't block even if there's nothing reading the stream
Your requirements as stated cannot possibly be satisfied without some form of buffer.
The most straightforward option is to write the output to a file and let consumers read that file.
Another option is a ring buffer in the form of a memory-mapped file. As the capacity of a ring buffer is normally fixed, there needs to be a policy for dealing with slow consumers. The options are: block the producer; terminate the slow consumer; or let the slow consumer somehow recover when it has missed data.
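For illustration, here is a minimal in-process sketch of such a ring buffer with the "let the slow consumer recover" policy (a real cross-process version would place the storage in a memory-mapped file):

import threading

class RingBuffer:
    """Producer never blocks; slow readers detect how much they missed."""

    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.head = 0                # total number of items ever written
        self.lock = threading.Lock()

    def put(self, item):
        # Producer side: overwrite the oldest slot, never block.
        with self.lock:
            self.buf[self.head % len(self.buf)] = item
            self.head += 1

    def get(self, cursor):
        """Each consumer keeps its own cursor; returns (item, cursor, lost)."""
        with self.lock:
            lost = max(0, self.head - len(self.buf) - cursor)
            cursor += lost           # skip over overwritten items
            if cursor >= self.head:
                return None, cursor, lost   # nothing new yet
            return self.buf[cursor % len(self.buf)], cursor + 1, lost

rb = RingBuffer(4)
for i in range(10):
    rb.put(i)
item, cursor, lost = rb.get(0)       # item == 6, lost == 6 (overwritten)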
Many years ago I wrote something like what you describe for an audio stream processing app (http://hewgill.com/nwr/). It's on github as splitter.cpp and has a small man page.
The splitter program currently does not support dynamically changing the set of output programs. The output programs are fixed when the command is started.
Without knowing exactly what sort of data you are talking about (how large it is, what format it is in, etc.), it is hard to come up with a concrete answer. Let's say, for example, you want a "ticker tape" application that sends out information on share purchases at the stock exchange: you could quite easily have a server that accepts a socket from each application, starts a thread, and sends the relevant data as it arrives from the stock-market feed. I'm not aware of any "multiplexer" that exists today (but Greg's may be a starting point). If you use (for example) XML to package the data, a client that connects mid-stream might receive only the second half of a packet; its code would detect that the packet is incomplete and throw it away.
If, on the other hand, you are sending out highly detailed, live-updating weather maps for the whole country, the data is probably large enough that you don't want to wait for a complete new one to arrive, so you need some sort of lock'n'load protocol that sets the current updated map and then sends that one out until, say, a minute later you have a new one. Again, it's not that complex to write code for this, but it's quite a different set of code from the "ticker tape" solution above, because the packet of data is larger, and getting "half a packet" is quite wasteful and completely useless.
If you are streaming live video from the 2016 Olympics in Brazil, then you probably want yet another solution, as timing is everything with video: you need the client to buffer, pick up key frames, throw away "stale" frames, etc., and the server will have to be different too.
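As a rough sketch of the "ticker tape" variant (the port number and the tick data are arbitrary placeholders; a production server would need send timeouts so one stuck client cannot stall the broadcast loop):

import socket
import threading
import time

clients = []
lock = threading.Lock()

def accept_loop(server):
    # Accept any number of consumers; each one just gets appended.
    while True:
        conn, _ = server.accept()
        with lock:
            clients.append(conn)

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(('', 9000))
server.listen()
threading.Thread(target=accept_loop, args=(server,), daemon=True).start()

tick = 0
while True:                              # producer: never waits for consumers
    line = f'tick {tick}\n'.encode()
    with lock:
        for conn in clients[:]:
            try:
                conn.sendall(line)
            except OSError:
                clients.remove(conn)     # drop disconnected consumers
    tick += 1
    time.sleep(1)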

Can I execute a function while recording an audio signal in MATLAB?

I would like to get the pitch of each frame of audio data while recording the signal (without stopping the recording).
First, I executed the following code:
r = audiorecorder(fs, 16, 1);
k = 1;
while 1
    recordblocking(r, T);            % T is the frame length in seconds (A)
    sample{k} = getaudiodata(r);
    pitch{k} = get_pitch(sample{k}); % (B)
    k = k + 1;
end
However, the recording stops while the get_pitch function is executed, which causes part of the music signal to be missed.
I want the recording to run without stopping, while data of length T seconds is sent to get_pitch so that the pitch of every frame is obtained continuously.
Can anyone give me some advice? I really appreciate all of your comments.
Generally, if you are using MATLAB and you want multithreading, your only hope is the Parallel Computing Toolbox.
What you have here is a typical producer/consumer scenario; try googling for it.
However, the problem with your approach is not necessarily a lack of threads. If the get_pitch command were fast enough, you would have no problem. You might as well save all of the samples and do the analysis afterwards (if that fits the application).
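For illustration, here is the producer/consumer pattern in Python, using the sounddevice library from the earlier question (get_pitch stands in for the asker's own analysis function): the audio callback produces frames, a worker thread consumes them, and the recording never pauses.

import queue
import threading
import sounddevice as sd

fs = 44100
T = 0.5                          # frame length in seconds
frames = queue.Queue()

def callback(indata, nframes, time, status):
    frames.put(indata.copy())    # producer: hand the frame off and return

def analyzer():
    while True:
        frame = frames.get()     # consumer: blocks until a frame arrives
        # A real version would run the asker's analysis here, e.g.:
        # p = get_pitch(frame)

threading.Thread(target=analyzer, daemon=True).start()
with sd.InputStream(samplerate=fs, channels=1,
                    blocksize=int(T * fs), callback=callback):
    input('Recording; press Enter to stop\n')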

realtime midi input and synchronisation with audio

I have built a standalone app version of a project that until now was just a VST/audiounit. I am providing audio support via rtaudio.
I would like to add MIDI support using rtmidi but it's not clear to me how to synchronise the audio and MIDI parts.
In VST/audiounit land, I am used to MIDI events that have a timestamp indicating their offset in samples from the start of the audio block.
rtmidi provides a delta time in seconds since the previous event, but I am not sure how I should grab those events and how I can work out their time in relation to the current sample in the audio thread.
How do plugin hosts do this?
I can understand how events can be sample accurate on playback, but it's not clear how they could be sample accurate when using realtime input.
rtaudio gives me a callback function. I will run at a low block size (32 samples). I guess I will pass a pointer to an rtmidi instance as the userData part of the callback and then call midiin->getMessage(&message); inside the audio callback, but I am not sure whether this is sensible with regard to threading.
Many thanks for any tips you can give me
In your case, you don't need to worry about it. Your program should send the MIDI events to the plugin with a timestamp of zero as soon as they arrive. I think you have perhaps misunderstood the idea behind what it means to be "sample accurate".
As @Brad noted in his comment on your question, MIDI is indeed very slow. But that's only part of the problem: when you are working in a block-based environment, incoming MIDI events cannot be processed by the plugin until the start of a block. When computers were slower and block sizes of 512 (or, god forbid, >1024) were common, this introduced a non-trivial amount of latency, which made arrangements sound less "tight". Therefore sequencers came up with a clever way to get around this problem. Since the MIDI events are already known ahead of time, they can be sent to the instrument one block early, with an offset in sample frames. The plugin then receives these events at the start of the block and knows not to start actually processing them until N samples have passed. This is what "sample accurate" means in sequencers.
However, if you are dealing with live input from a keyboard or some sort of other MIDI device, there is no way to "schedule" these events. In fact, by the time you receive them, the clock is already ticking! Therefore these events should just be sent to the plugin at the start of the very next block with an offset of 0. Sequencers such as Ableton Live, which allow a plugin to simultaneously receive both pre-sequenced and live events, simply send any live events with an offset of 0 frames.
Since you are using a very small block size, the worst-case scenario is a latency of about 0.7 ms, which isn't bad at all. In the case of rtmidi, the timestamp does not represent an offset which you need to schedule around, but rather the time at which the event was captured. But since you only intend to receive live events (you aren't writing a sequencer, are you?), you can simply pass any incoming MIDI to the plugin right away.
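A sketch of that approach, assuming the Python bindings python-rtmidi and sounddevice as stand-ins for the C++ rtmidi/rtaudio APIs (render() is a placeholder for the actual synth processing): drain all pending MIDI at the start of each audio block and hand it over with an offset of 0.

import numpy as np
import rtmidi
import sounddevice as sd

fs = 44100
blocksize = 32                   # the small block size mentioned above

midiin = rtmidi.MidiIn()
midiin.open_port(0)              # assumes at least one MIDI input port

def render(events, nframes):
    """Placeholder for the plugin's processing; renders silence here."""
    return np.zeros((nframes, 2), dtype='float32')

def callback(outdata, nframes, time, status):
    events = []
    msg = midiin.get_message()      # returns (bytes, delta_t) or None
    while msg is not None:          # drain everything that has arrived
        events.append((0, msg[0]))  # offset 0: "process immediately"
        msg = midiin.get_message()
    outdata[:] = render(events, nframes)

with sd.OutputStream(samplerate=fs, blocksize=blocksize,
                     channels=2, dtype='float32', callback=callback):
    input('Running; press Enter to stop\n')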
