Altering default DirectSound audio renderer buffer size - audio

I've implemented a custom "sample grabber" filter for DirectShow. I grab samples in my host app, perform an FFT on them, and display the results via Direct3D.
The problem is there is nearly a 1 second delay between my visual result and when I hear the audio (the data is visualized before I hear it).
I've looked into it, and the reason is that the default audio renderer has an internal one-second buffer, as stated by this guy. He states that implementing either IAMBufferNegotiation or IAMPushSource should solve the problem. I have tried both and neither seems to make a difference.
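(For reference, the IAMBufferNegotiation route is usually attempted roughly as in the sketch below, on the audio renderer's input pin before it is connected; the buffer count and size here are illustrative assumptions, not values from this post.)

```cpp
// Hedged sketch: suggest many small allocator buffers to the audio renderer's
// input pin *before* connecting it. The 20 x ~50 ms split is an assumption.
#include <dshow.h>

HRESULT SuggestSmallAudioBuffers(IPin *pRendererInputPin, long bytesPerSecond)
{
    IAMBufferNegotiation *pNeg = nullptr;
    HRESULT hr = pRendererInputPin->QueryInterface(IID_IAMBufferNegotiation,
                                                   reinterpret_cast<void**>(&pNeg));
    if (FAILED(hr))
        return hr;

    ALLOCATOR_PROPERTIES props;
    props.cBuffers = 20;                   // many small buffers...
    props.cbBuffer = bytesPerSecond / 20;  // ...of roughly 50 ms of PCM each
    props.cbAlign  = -1;                   // -1 = no preference
    props.cbPrefix = -1;

    hr = pNeg->SuggestAllocatorProperties(&props);
    pNeg->Release();
    return hr;
}
```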
I was curious if anyone else has had the same problem, and I want to make sure there is no other (easy) solution before I write my own audio renderer.
ALL input is appreciated!

Instead of changing the audio renderer filter's internal buffer size, you have to synchronize your drawing (rendering the result) with the sample timestamps of the buffer on which you calculated the FFT. You can use IReferenceClock::AdviseTime for the synchronization.
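A rough sketch of that approach is below, assuming a DirectShow filter that receives timestamped samples; the event handle, the helper name, and the blocking wait are illustrative simplifications, not part of the answer above.

```cpp
// Hedged sketch: delay drawing until the graph's reference clock reaches the
// sample's timestamp. tStreamStart is the time passed to IBaseFilter::Run();
// hRenderEvent is an auto-reset Win32 event created by the host app.
#include <dshow.h>

void DrawWhenDue(IReferenceClock *pClock,
                 REFERENCE_TIME   tStreamStart,
                 IMediaSample    *pSample,
                 HANDLE           hRenderEvent)
{
    REFERENCE_TIME tSampleStart = 0, tSampleEnd = 0;
    if (FAILED(pSample->GetTime(&tSampleStart, &tSampleEnd)))
        return;                                       // sample carries no timestamp

    DWORD_PTR adviseCookie = 0;
    // Signal hRenderEvent when clock time reaches tStreamStart + tSampleStart.
    if (SUCCEEDED(pClock->AdviseTime(tStreamStart, tSampleStart,
                                     (HEVENT)hRenderEvent, &adviseCookie)))
    {
        WaitForSingleObject(hRenderEvent, INFINITE);  // block until it is due
        pClock->Unadvise(adviseCookie);
        // ...now render the FFT result that belongs to this sample...
    }
}
```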

Related

Digital audio formatting

I have a vectorized WAV file with values between -1 and 1, 88,200 samples at a 44.1 kHz sampling rate, so the clip plays in two seconds. I'd like to send the audio over Bluetooth to a Bluetooth module connected to an Arduino, then out through a DAC and a 3.5 mm breakout board with earbuds.
I am getting crackly audio when I receive it at the other end. I tried to recreate this in MATLAB, and it turns out to be a combination of the scaling (multiplying and shifting the values above 0) and the sampling rate change introduced by the receivers. Of course, I could be completely wrecking the sampling frequency with inefficient Arduino code, but since the initial scaling is also a factor, my guess is that I am misunderstanding something fundamental to audio processing.
What is the proper way to format and/or scale the values into 0-4095 (which the DAC input needs) so that the audio itself is not distorted by the scaling, sampling rate retention aside? Or is there something else I am missing in the big picture?
Clarification: currently I am using the Python sockets library to send an audio string array char by char into an Arduino array, reading the values as integers, and then feeding them to the DAC. I'm not sure Python sockets is the best way to go; there may be something better, or a more robust way to use sockets to send the data.
UPDATE: I realized that the HC-05 uses the SPP Bluetooth profile, which seems to be far too low-bandwidth to send reliable audio. I will see if I can send a more compressed audio file, store it on the Arduino, and then output it to the DAC. That could give more reliable audio.
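(On the scaling question itself: the usual mapping from [-1, 1] floats to a 12-bit DAC code is a plain linear shift and scale, which by itself should not distort the audio. A minimal sketch, with a made-up helper name:)

```cpp
#include <stdint.h>

// Map a sample in [-1.0, 1.0] to a 12-bit DAC code in [0, 4095].
// Distortion normally comes from clipping or sample-rate problems,
// not from this linear shift-and-scale.
uint16_t sampleToDacCode(float s)
{
    if (s >  1.0f) s =  1.0f;   // clip out-of-range input
    if (s < -1.0f) s = -1.0f;
    return (uint16_t)((s + 1.0f) * 0.5f * 4095.0f + 0.5f);  // round to nearest
}
```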
Have you tried setting in and out points for your samples? With video that includes audio, missing in/out points is one thing that gets overlooked and can cause issues when uploading to YouTube. This seems similar, because if the receiver doesn't know where the audio begins and ends, that can cause problems too.
Another issue may be the format of the samples versus what the Bluetooth link accepts. AAC would probably be the format to use, but confirm this, because I am not 100% sure what it will accept.
The library has an example for bandwidth:
https://www.arduino.cc/en/Reference/AudioFrequencyMeter
But there are other functions, begin() and end(). You could set those to the start and end times within your samples, so that only one is the active track at a given time. You could also set your frequency() to a constant value of 44.1 kHz, though you might have to escape the period for that (it otherwise reads 60 to 1500 Hz).

How to decrease pitch of audio file in nodejs server side?

I have an MP3 file stored on my server, and I'd like to modify it to be a bit lower in pitch. I know this can be achieved by increasing the length of the audio; however, I don't know of any Node libraries that can do this.
I've tried using the node web audio api and soundbank-pitch-shift, but the former doesn't seem to have pitch-shifting capabilities (AFAIK), and the latter seems designed for client-side use.
I need the solution to stay within the realm of Node ONLY - that means no external programs, etc. It also needs to be automated, so I can't pitch shift manually.
An ideal solution would be a function that takes a file/filepath as an input, and then creates (or overwrites) another MP3 file but with the pitch shifted by x amount, but really, any solution that produces something with a lower pitch than the original, works.
I'm totally lost here. Please help.
An audio file is basically a list of numbers. Those numbers are read one at a time at a particular speed called the 'sample rate'. The sample rate is the number of audio samples read every second, e.g. if an audio file's sample rate is 44100, then 44100 samples (or numbers) are read every second.
If you are with me so far, the simplest way to lower the pitch of an audio file is to play the file back at a lower sample rate (which is normally fixed in place). In most cases you won't be able to do this, so you need to achieve the same effect by resampling the file, i.e. adding new samples in between the old samples to make it literally longer. For this you would need to understand interpolation.
The drawback to this technique in either case is that the sound will also play back at a slower speed, as well as at a lower pitch. If it is a problem that the sound has slowed down as well as lowered in pitch as a result of your processing, then you will also have to use a timestretching algorithm to fix the playback speed.
You may also have problems doing this with MP3 files. In that case you may have to decompress the data in the MP3 file before you can operate on it in a way that changes the pitch. WAV files are better suited to audio processing. In any case, you essentially need to turn the file into a list of floating-point numbers, and change those numbers so they are effectively read back at a slower rate.
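To make the resampling idea concrete, here is a minimal sketch of a linear-interpolation resampler over raw floating-point samples (decoded audio, not MP3 frames); it is only an illustration of the technique, not a production resampler:

```cpp
#include <cstddef>
#include <vector>

// Resample by linear interpolation. A ratio below 1.0 produces MORE output
// samples, so the result plays longer and lower in pitch when read back at
// the original sample rate. No anti-aliasing filtering is done here.
std::vector<float> resampleLinear(const std::vector<float>& in, double ratio)
{
    std::vector<float> out;
    if (in.size() < 2 || ratio <= 0.0)
        return out;

    for (double pos = 0.0; pos < static_cast<double>(in.size() - 1); pos += ratio)
    {
        std::size_t i = static_cast<std::size_t>(pos);
        double frac = pos - static_cast<double>(i);
        out.push_back(static_cast<float>(in[i] * (1.0 - frac) + in[i + 1] * frac));
    }
    return out;
}

// Example: lower the pitch by two semitones, ratio = 2^(-2/12), about 0.891:
// std::vector<float> lower = resampleLinear(samples, 0.891);
```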
Other methods of pitch shifting would probably involve the use of FFTs, and would be a more complicated affair, to say the least.
I am not familiar with nodejs I'm afraid.
I managed to get it working with help from Ollie M's answer and node-lame.
I hadn't known previously that sample rate could affect the speed, but thanks to Ollie, suddenly this problem became a lot more simple.
Using node-lame, all I did was take one of the examples (mp32wav.js) and change the sampleRate parameter of the format object so that it is lower than the input sample rate, which in my application was always a static 24,000. I could also make it dynamic, since node-lame can report the parameters of the input file in the format object.
Ollie, however, perfectly describes the drawback of this method:
"The drawback to this technique in either case is that the sound will also play back at a slower speed, as well as at a lower pitch. If it is a problem that the sound has slowed down as well as lowered in pitch as a result of your processing, then you will also have to use a timestretching algorithm to fix the playback speed."
I don't have a particular need to implement a time stretching algorithm at the moment (thankfully, because that's a whole other can of worms), since I have the ability to change the initial speed of the file, but others may in the future.
See https://www.npmjs.com/package/audio-decode, https://github.com/audiojs/audio-buffer, and the related packages linked at the bottom of the audio-buffer readme.

Is there a way to use ffmpeg audio filters to automatically synchronize 2 streams with similar content

I have a situation where I capture HD content via HDMI, with audio from a sound board that goes through an impedance drop into the microphone input of a camcorder. That same signal is split at line level to a 'line in' jack on the same computer that is capturing the HDMI. Alternatively, I can capture the audio via USB from the soundboard, which is probably the best plan, but it carries the same issue.
The point is that the line-in or USB capture will be much higher quality than the one on HDMI, because the line out -> impedance change -> mic in path produces inferior quality: simply brushing the mic jack on the camera while trying to change the zoom (they're in close proximity) can cause noise on the recording.
So I can do this today:
Take the good sound and the camera-captured sound, load each into Audacity, quickly use the time shift tool to fit the good audio to the questionable audio from the HDMI capture, and cut the good audio to the exact length of the video. Then use ffmpeg or other video editing software to replace the questionable audio with the better audio.
But while somewhat quick and easy, it always carries a bit of human error and takes time. I'd like to automate this if possible, as the process is repeated at least weekly throughout the year.
Can anyone say whether any of these ideas have merit, or suggest another approach?
I suspect, but have yet to confirm, that the system timestamp of the start time may be recorded both in audio captured with something like Audacity (or with the USB capture tool from the sound board) and in the HDMI MPEG-2 video. I tried ffprobe on a couple of Audacity-captured .wav files but didn't see anything in the results about such a time code; perhaps other audio formats or other probing tools include this info. Can anyone advise whether this is common with any particular capture tools or file formats?
If so, I think I could get the best results by extracting this information and then using simple adelay and atrim filters in ffmpeg to sync reliably, directly from the two sources, in one ffmpeg call. This is all theoretical for me right now -- I've never tried either of these filters -- I'm just trying to avoid blind alleys by asking for advice up front.
If such timestamps are not embedded, possibly I could use the file system timestamp for the same idea expressed in 1a, but I suspect the file-open of the two capture tools may have different inherent delays. Possibly these delays are nearly constant and the approach could work with a built-in constant offset, but that sounds messy and less reliable than idea 1. Still, I'd take it if it turns out to be reasonably reliable.
Are there any ffmpeg or general digital-audio experts out there who know of filters that can work on the actual data to look for similarities? For example: normalize the peak amplitudes (or normalize both streams to some RMS value), then step through a short 10-second snippet, repeatedly shifting one stream 0.01 s against the other, subtracting the two, and looking for a minimum. It sounds like it could take a while, but if it could do this in under a minute and be reliable, I suspect it could work. I have only rudimentary knowledge of audio streams, and perhaps what I suggest is just not plausible, but since each stream starts from the same source I think there should be a chance. I am just way out of my depth on how to go down this road, so if someone knows such magic or can throw me the names of filters and example calls, I can explore whether I can make it work. (A rough sketch of this correlation idea appears at the end of this post.)
Any hardware-level suggestions for taking a line-level output down to a mic-level input without the problems I am seeing with a simple in-line impedance drop module, so that I could simply rely on the audio from the HDMI?
Thanks in advance for any pointers or suggestions!
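For what it's worth, a rough sketch of the correlation idea from point 3: decode a short mono snippet of each capture to floats at the same sample rate (ffmpeg can dump raw PCM for this), then slide one against the other and keep the lag with the highest normalized cross-correlation. All names and parameters below are illustrative assumptions.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Returns the lag (in samples) at which 'noisy' best lines up with 'ref'.
// A positive result means 'noisy' lags behind 'ref' by that many samples.
long bestLagSamples(const std::vector<float>& ref,    // good (line-in/USB) snippet
                    const std::vector<float>& noisy,  // HDMI-audio snippet
                    long maxLag)                      // search range, in samples
{
    long   bestLag   = 0;
    double bestScore = -1.0;

    for (long lag = -maxLag; lag <= maxLag; ++lag)
    {
        double dot = 0.0, energyA = 0.0, energyB = 0.0;
        for (std::size_t i = 0; i < ref.size(); ++i)
        {
            long j = static_cast<long>(i) + lag;
            if (j < 0 || j >= static_cast<long>(noisy.size()))
                continue;
            dot     += ref[i] * noisy[j];
            energyA += ref[i] * ref[i];
            energyB += noisy[j] * noisy[j];
        }
        double denom = std::sqrt(energyA * energyB);
        double score = (denom > 0.0) ? dot / denom : 0.0;
        if (score > bestScore) { bestScore = score; bestLag = lag; }
    }
    return bestLag;
}
// Dividing the returned lag by the sample rate gives the offset in seconds,
// which could then be fed to ffmpeg's adelay or atrim filters.
```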

Signal/Sound Processing: Making text vibrate to music

I'm working on a simple music visualization. Probably not relevant, but I am doing the sound processing using the new WebKit Audio Data API and the dsp.js library.
I want to make a text vibrate (grow/shrink) to the rhythm of the music. What is the best way to do this?
What I've done so far is run the signal through an FFT. I look at the bottom 10% of frequency bins (bass notes?) and, when the amplitude surpasses a certain threshold, I animate the text.
Does this sound right? Or am I completely off?
You say you've done it, and then you ask if you are way off? Well, you tell us: does it work for your application?
One potential problem is that the FFT is slow: there may be a lag between your input and output, and it will use a lot of CPU. I don't expect this to matter for your application, but, in general, you are better off using a low-pass filter. When the output of the low-pass goes above some level, you can use that to trigger something for a short amount of time.
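For illustration, a minimal sketch of that suggestion: a one-pole low-pass to isolate the low end, a threshold on its output, and a short hold time. The cutoff, threshold, and hold values are assumptions to be tuned by ear.

```cpp
#include <cmath>

// Sketch: low-pass the signal, and when the filter output exceeds a level,
// hold a "beat" trigger active for a short time.
class BassTrigger
{
public:
    BassTrigger(float sampleRate, float cutoffHz, float threshold, int holdSamples)
        : alpha_(1.0f - std::exp(-2.0f * 3.14159265f * cutoffHz / sampleRate)),
          threshold_(threshold), hold_(holdSamples) {}

    // Feed one sample; returns true while the trigger is active.
    bool process(float sample)
    {
        lowpassed_ += alpha_ * (sample - lowpassed_);      // one-pole low-pass
        if (std::fabs(lowpassed_) > threshold_ && countdown_ == 0)
            countdown_ = hold_;                            // start the trigger
        if (countdown_ > 0) --countdown_;
        return countdown_ > 0;
    }

private:
    float alpha_;
    float threshold_;
    int   hold_;
    float lowpassed_ = 0.0f;
    int   countdown_ = 0;
};

// Example: BassTrigger trig(44100.0f, 150.0f, 0.3f, 4410);
// grow the text whenever trig.process(sample) returns true (~100 ms hold).
```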
Another issue is that the FFT-plus-threshold approach you describe is only a very basic beat detection algorithm. It might work for bass-heavy "four on the floor" music, but you'll need to figure out where the threshold goes and how to adapt it when the bass stops, and so on. You may want to research beat detection algorithms; the open-source aubio library has some.
http://aubio.org/

Using After Effects expressions to trigger an audio file

Is there a way to trigger an audio file to play in the After Effects timeline when a layer has visible content?
It's a small click sound: when the text layer's IN point is reached, I simply want the click WAV file to play. Any help would be appreciated.
You have to use scripts to change anything other than keyframe-able layer and effect parameters. You might be able to fake a "click" effect with an expression by triggering a momentary change in the volume of a constant noise audio layer, and using markers to trigger it.
I think using the start times of other layers is problematic because writing an expression that would check any number of layers would involve some kind of for-loop that could get complicated, and you can't easily pass values or variables among different expressions. The question with expressions in AE is always whether the solution saves you time in the long run over just doing it manually, so it depends on your needs.
The quickest way to do it would probably be to just pre-comp your sound effect and whatever layer it needs to match, so that each time the pre-comp plays, you also get the click.
Try pressing period (.). After Effects doesn't let you listen to audio while scrubbing, because you're not looking at the true frame rate. So if you click RAM Preview and play your timeline, you will hear your audio files. But in your case, if you press period (.) it will override that and play your audio file. I use it when placing a small accent or foley sound.
