NI Reaktor Audio Input Vertical Line

I recently restarted patching in Reaktor and found something strange. In one of my Primary structures there is an input pin that switches its icon to a vertical line every now and then when I edit the structure. It somehow depends on the audio processing order.
Unfortunately I cannot post an image of the structure, because that needs more reputation... however, does anyone know what this port indicates? It feels spooky. Thanks!

Sounds to me like an audio loop indicator.
It appears when the signal flow is fed back on itself, so the feedback path can't be processed in the same audio tick. In that case the feedback signal is fed in one audio clock later, which means this port has a delay of one sample.
To determine yourself where this delay applies, you can insert a Unit Delay module, which does the same job (a one-sample delay) but at the place you choose.
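
To make the one-sample feedback concrete outside Reaktor, here is a minimal C sketch (the function name and the feedback gain g are just illustrative): each output sample mixes the input with the previous output, which is exactly what an implicit unit delay in a feedback loop does.

#include <stddef.h>

/* One-pole feedback: y[n] = x[n] + g * y[n-1].
   The y[n-1] term is the one-sample (unit) delay that the host
   inserts implicitly where the vertical-line port appears. */
void feedback_process(const float *in, float *out, size_t n, float g)
{
    float z1 = 0.0f;              /* state: previous output sample */
    for (size_t i = 0; i < n; ++i) {
        out[i] = in[i] + g * z1;  /* the feedback arrives one sample late */
        z1 = out[i];              /* update the unit-delay state */
    }
}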

Programmatic access to a sound played through OpenAL

I am working with an application that uses OpenAL API quite extensively. In particular, there are multiple sound sources, non-trivial listener filters, etc.
I want to be able to run this application significantly faster than real-time. At the same time, the sound must be saved for later postprocessing. Is there a way to access the OpenAL output programmatically (virtually) without ever playing the sound on the real playback device?
Ideally, I'd like access to the audio that would be played during every tick of the main loop of my application. Normally one tick corresponds to one rendered frame (e.g. 1/30th of a second), but in this case we would be running the app as fast as possible.
We ended up using OpenAL Soft to do this. Example:

#include "AL/alext.h"

LPALCLOOPBACKOPENDEVICESOFT alcLoopbackOpenDeviceSOFT =
    (LPALCLOOPBACKOPENDEVICESOFT)alcGetProcAddress(NULL, "alcLoopbackOpenDeviceSOFT");

Replace your default device with this loopback device:

ALCdevice *device = alcLoopbackOpenDeviceSOFT(NULL);
ALCcontext *context = alcCreateContext(device, attrs);

Set the attrs as you would for your default device. Then in the main loop use:

LPALCRENDERSAMPLESSOFT alcRenderSamplesSOFT =
    (LPALCRENDERSAMPLESSOFT)alcGetProcAddress(NULL, "alcRenderSamplesSOFT");

alcRenderSamplesSOFT(device, buffer, 1024);

Here the buffer will receive 1024 sample frames. This rendering runs faster than real time, so you can pull one chunk of samples on every tick.
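
For a loopback device, the context attributes must also describe the render format. A minimal sketch using the constants from the ALC_SOFT_loopback extension, assuming 44.1 kHz stereo 16-bit output (adjust to taste):

ALCint attrs[] = {
    ALC_FORMAT_CHANNELS_SOFT, ALC_STEREO_SOFT,  /* render stereo */
    ALC_FORMAT_TYPE_SOFT,     ALC_SHORT_SOFT,   /* 16-bit samples */
    ALC_FREQUENCY,            44100,            /* sample rate */
    0                                           /* terminator */
};
short buffer[1024 * 2];  /* 1024 frames x 2 channels */

You can check that the extension is available first with alcIsExtensionPresent(NULL, "ALC_SOFT_loopback").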
Are you able to perform your required processing on the audio data before it is shipped to OpenAL? I've done a lot with javax.sound.sampled when it is untethered from the blocking write() method of SourceDataLine, especially when saving to file rather than playing back.
From what little I know about OpenAL, there is also a blocking process that occurs when data is shipped, with a managed queue of buffer arrays. I've been meaning to look into this further...
(Probably not being very helpful here. Apologies.)

Custom player using NDK/C++/MediaCodec - starvation/buffering in decoder

I have a very interesting problem.
I am running a custom movie player based on an NDK/C++/CMake toolchain that opens a streaming URL (mp4, H.264 & stereo audio). In order to restart from a given position, the player opens the stream, buffers frames to some length, then seeks to the new position and starts decoding and playing. This works fine every time, except if we power-cycle the device and follow the same steps.
This was reproduced on a few versions of the software (plugin built against android-22 through android-26) and hardware (LG G6, G5, and LeEco). The issue does not happen if you keep the app open for 10 minutes first.
I am looking for possible areas of concern. I have played with the decode logic (it is based on the approach described as synchronous processing using buffers).
Edit - More Information (4/23)
I modified the player to pick a stream and play only video instead of video+audio. This resulted in constant starvation and hence buffering. The behavior appears to vary across Android versions (no firm data here). I do believe I am running into decoder starvation. Previously I had set timeouts of 0 for both AMediaCodec_dequeueInputBuffer and AMediaCodec_dequeueOutputBuffer; I changed the input-side timeout to 1000 and 10000 microseconds, but it does not make much difference.
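For reference, here is roughly what the synchronous dequeue loop looks like with non-zero timeouts (a sketch only; the 10 ms values and the omitted extractor plumbing are illustrative):

#include <media/NdkMediaCodec.h>
#include <stdbool.h>
#include <stdint.h>

/* One iteration of the synchronous decode loop. Non-zero timeouts make
   the dequeue calls block briefly instead of spinning and returning -1. */
static void decode_step(AMediaCodec *codec)
{
    ssize_t inIdx = AMediaCodec_dequeueInputBuffer(codec, 10000 /* us */);
    if (inIdx >= 0) {
        size_t cap = 0;
        uint8_t *buf = AMediaCodec_getInputBuffer(codec, (size_t)inIdx, &cap);
        (void)buf;  /* ...fill buf from the extractor here... */
        AMediaCodec_queueInputBuffer(codec, (size_t)inIdx, 0, 0 /* size */,
                                     0 /* ptsUs */, 0 /* flags */);
    }

    AMediaCodecBufferInfo info;
    ssize_t outIdx = AMediaCodec_dequeueOutputBuffer(codec, &info, 10000);
    if (outIdx >= 0) {
        /* true = render this frame to the codec's output surface */
        AMediaCodec_releaseOutputBuffer(codec, (size_t)outIdx, true);
    }
}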
My player is based on the NDK/C++ interface to MediaCodec; the CMake build passes -DANDROID_ABI="armeabi-v7a with NEON", -DANDROID_NATIVE_API_LEVEL="android-22", and -DANDROID_STL=c++_static.
Can anyone share timeouts they have used with success, or anything else that would help avoid the starvation and the resulting buffering?
This is solved for now. The starvation was not on the decoding side: images were being consumed at a faster pace because the clock values returned were out of sync. I was using clock_gettime with the CLOCK_MONOTONIC clock id, which is the recommended approach, but it consistently ran fast for the first 5-10 minutes after restarting the device. This device only had a Wi-Fi connection. Changing the clock id to CLOCK_REALTIME ensures correct presentation of images and no starvation.
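For illustration, the fix amounts to something like this (a sketch; the helper name is mine):

#include <stdint.h>
#include <time.h>

/* Current time in microseconds, used as the playback reference clock.
   Switching CLOCK_MONOTONIC -> CLOCK_REALTIME is what removed the
   drift seen right after a device restart. */
static int64_t now_us(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_REALTIME, &ts);   /* was CLOCK_MONOTONIC */
    return (int64_t)ts.tv_sec * 1000000 + ts.tv_nsec / 1000;
}

Note that CLOCK_REALTIME can jump when the system time is adjusted (e.g. by NTP over that Wi-Fi connection), so this trades one failure mode for another.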

Measuring Multiple Voltages in LabView w/USB 6001

I'm trying to set up my LabView VI + my USB 6001 I/O box to be able to read multiple independent voltages at once, while also outputting a single constant voltage.
I've successfully gotten my USB box to output the voltage I want while reading back a single voltage, but so far I've been unable to read back more than one voltage (and if I do, the two voltages seem to be co-dependent on one another in some way).
Here's a screenshot of my VI:
Everything to the right of the screenshot window should be unimportant to the question.
If anyone is curious, this is to drive multiple LVDT's and read back their respective voltages.
Thank you all for your help!
Look at your DAQ's manual, especially the pages I noted below.
http://www.ni.com/pdf/manuals/374259a.pdf
Page 11
All the AI channels get multiplexed, and the low-side reference can be switched (RSE vs. differential). So sampling your two channels requires both of those to switch. It might be a settling issue, where the ADC takes its sample before the input value is stable.
To verify this, try using the same low-side configuration (differential or RSE) on both channels. Also try slowing down your sample rate (though your 1 kHz should already be slow enough...).
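The same idea expressed in the NI-DAQmx C API, just to make the configuration concrete (in LabVIEW you would set the same terminal configuration input on the DAQmx Create Virtual Channel VI; "Dev1" is a placeholder for your USB-6001's device name):

#include <NIDAQmx.h>

int main(void)
{
    TaskHandle task = 0;
    DAQmxCreateTask("", &task);
    /* Both AI channels use the same terminal configuration (RSE here),
       so the mux never switches reference modes between conversions. */
    DAQmxCreateAIVoltageChan(task, "Dev1/ai0:1", "",
                             DAQmx_Val_RSE, -10.0, 10.0,
                             DAQmx_Val_Volts, NULL);
    DAQmxCfgSampClkTiming(task, "", 1000.0, DAQmx_Val_Rising,
                          DAQmx_Val_FiniteSamps, 1000);

    float64 data[2000];  /* 1000 samples x 2 channels */
    int32 read = 0;
    DAQmxStartTask(task);
    DAQmxReadAnalogF64(task, 1000, 10.0, DAQmx_Val_GroupByChannel,
                       data, 2000, &read, NULL);
    DAQmxStopTask(task);
    DAQmxClearTask(task);
    return 0;
}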
Page 14
Check this to make sure you have everything connected and grounded correctly.
Page 18
Check this for more details about switching between 2 sources quickly.
Perhaps you could try it using the DAQmx Express VIs:
http://www.ni.com/tutorial/2744/en/

Overlap add in audio analysis-synthesis

I wrote some code that takes an audio signal (currently a sine wave) as an input and does the following:
Take frames of n (1024) samples
Apply FFT
Apply iFFT
Play output
With this process the output signal is basically the same as the input signal.
Now, in a second attempt I do:
Take overlapping frames from the input
Apply a window function
FFT
iFFT
Overlap the output frames
In step 1, if I take overlapping frames using a hop size (the number of samples to jump to take the next frame) that is a power of 2 (4, 8, 256...), the output sound is smooth and resembles the original input sound, but with any other hop size the sound starts to crackle. This happens for any frequency of the input signal. Question 1: Why is the sound smooth only if the hop size is 2^n?
Currently I use a Hanning window. When the hop size is large (e.g. 512) the output sound has a lower volume than when the hop size is small (e.g. 64). This seems like expected behavior, because a small hop size implies that each sample is reconstructed from more frames, so more signals are added. Question 2: Is there a way to properly scale the output signal so that the volume resembles the original signal?
Thank you!
This should not be happening; the overlap-add method can reconstruct your signal without the problems described. We don't know exactly what you are doing, but I did this some time ago and it works for any hop size and window size. One small secret is to pad your signal with zeros before and after, to ensure a continuous signal. If you look closely, you will realize that the window function works like a fade-in/fade-out; if you just concatenate the frames, you will notice clicks, or the output will sound like a vibrato. It's a little tricky to tell where your problem actually lies!
Just for debugging, skip the FFT and iFFT steps and see if your signal is correctly reconstructed. If it is, your overlap-add process works, and your problem is in your FFT/iFFT...
Overlap-add is normally done without using a non-rectangular window function. Zero-pad instead.
If you do use a window function, then you have to make sure that all the offset window functions sum to a constant level, which for a Von Hann window happens at certain offsets (except at the very beginning and end of the series sum). For instance, two Hann-shaped windows offset by half a period sum to a constant, because cos(x + pi) = -cos(x):
(1 - cos(x)) + (1 - cos(x + pi)) = 2 - (cos(x) + cos(x + pi)) = 2
If you sum more windows into the result without any scaling, the level of the sum will of course increase.
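
A quick way to see (and fix) the level change is to overlap-add the window itself and use that sum as a normalizer. A sketch in C, assuming a Hann window of size N and hop H (no FFT involved, per the debugging suggestion above; the function name is mine):

#include <math.h>
#include <stdlib.h>

/* Windowed overlap-add identity test: window frames of size N at hop H,
   accumulate them, then divide by the summed window envelope so any hop
   size reconstructs the input at the original level. */
void ola_identity(const float *in, float *out, size_t len, size_t N, size_t H)
{
    const float PI = 3.14159265358979f;
    float *wsum = calloc(len, sizeof(float));
    for (size_t i = 0; i < len; ++i) out[i] = 0.0f;

    for (size_t start = 0; start + N <= len; start += H) {
        for (size_t n = 0; n < N; ++n) {
            float w = 0.5f - 0.5f * cosf(2.0f * PI * n / N);  /* Hann */
            /* FFT -> processing -> iFFT would act on the windowed frame
               here; for the identity test we pass it straight through. */
            out[start + n]  += w * in[start + n];
            wsum[start + n] += w;  /* overlap-added window envelope */
        }
    }
    for (size_t i = 0; i < len; ++i)
        if (wsum[i] > 1e-6f) out[i] /= wsum[i];  /* normalize level */
    free(wsum);
}

Since out accumulates in[i] times the summed envelope, dividing by wsum returns exactly the input wherever the envelope is non-zero, regardless of hop size.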

low latency sounds on key presses

I am trying to write an application (I'm a GUI first-timer) for my son, who has autism. There is a video player in the top half and a text entry area in the bottom. When letters are typed, sounds are produced to mimic the words in the video.
There have been other posts on this site about playing sounds on key presses using gstreamer as a system call. I have also tried libcanberra, but both seem to have significant delays between sounds. I can write the app in Python or C, but will likely do at least some of it in C.
I also want to mention that the video portion is played by gstreamer. I tried to create two instances of gstreamer, to avoid expensive system calls, but the audio instance seemed to kill the app when called.
If anyone has any tips on creating faster responding sounds I would really appreciate it.
You can upload a raw audio sample directly to PulseAudio, so there is no decoding (and perhaps some extra context switches are saved), by using the following function from libcanberra:
http://developer.gnome.org/libcanberra/unstable/libcanberra-canberra.html#ca-context-cache
The next ca_context_play() will use it.
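A minimal sketch of the cache-then-play pattern (the event id and file path are placeholders):

#include <canberra.h>

static ca_context *ctx;

void sounds_init(void)
{
    ca_context_create(&ctx);
    /* Upload the sample into the sound server's cache once,
       so later triggers need no decoding. */
    ca_context_cache(ctx,
                     CA_PROP_EVENT_ID, "key-click",
                     CA_PROP_MEDIA_FILENAME, "/usr/share/sounds/key-click.wav",
                     NULL);
}

void on_key_press(void)
{
    /* Plays the cached sample; the call returns immediately. */
    ca_context_play(ctx, 0, CA_PROP_EVENT_ID, "key-click", NULL);
}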
However, the biggest problem you'll encounter in this scenario (with simultaneous video playback) is that PulseAudio might configure the audio device with a large latency (up to half a second or more for normal playback). It may be reasonable to file a bug against libcanberra to support a LOW_LATENCY flag, as it currently doesn't attempt to minimize delay for sound events, afaik. That would be great to have.
GStreamer's pulsesink could probably achieve low latency too (it has some properties for that), but I'm afraid it won't be as lightweight as libcanberra, and you won't be able to cache a sample, for instance. Ideally, GStreamer could also learn to cache samples, or pre-fill PulseAudio...
