How to determine if an audio track is a Dolby Pro Logic II mixdown

I'm trying to find out if there's a way to determine whether an AAC-encoded audio track is encoded with Dolby Pro Logic II data. Is there a way of examining the file such that you can see this information? I have, for example, encoded a media file in HandBrake with (truncated to audio options) -E av_aac -B 320 --mixdown dpl2, and this is the audio track output that mediainfo shows:
Audio #1
ID : 2
Format : AAC
Format/Info : Advanced Audio Codec
Format profile : LC
Codec ID : 40
Duration : 2h 5mn
Bit rate mode : Variable
Bit rate : 321 Kbps
Channel(s) : 2 channels
Channel positions : Front: L R
Sampling rate : 48.0 KHz
Compression mode : Lossy
Stream size : 288 MiB (3%)
Title : Stereo / Stereo
Language : English
Encoded date : UTC 2017-04-11 22:21:41
Tagged date : UTC 2017-04-11 22:21:41
but I can't tell if there's anything in this output that would suggest that it's encoded with DPL2 data.

tl;dr: it's probably possible; it may be easier if you're a programmer.
Because the information encoded is just a stereo analog pair, there is no guaranteed way of detecting a Dolby Pro Logic II (DPL2) signal therein, unless you specifically store your own metadata saying "this is a DPL2 file." But you can probably make a pretty good guess.
All of the old analog Dolby Surround formats, including DPL2, store surround information in two channels by inverting the phase of the surround or surrounds and then mixing them into the original left and right channels. Dolby Surround type decoders, including DPL2, attempt to recover this information by inverting the phase of one of the two channels and then looking for similarities between the two signals. This is either done trivially, as in Dolby Surround, or else these similarities are artificially steered much further to the left or right, or to the left or right surround, as in DPL2.
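To make the phase trick concrete, the classic Dolby Surround encode (simplified; DPL2 uses two surrounds with additional steering, so treat the exact coefficients here as illustrative only) looks roughly like:

Lt = L + 0.707·C + j·0.707·S
Rt = R + 0.707·C − j·0.707·S      (j = a 90° phase shift)

The surround S lands in the two channels with opposite phase, so Lt − Rt recovers mostly surround while Lt + Rt cancels it; that asymmetry is exactly what the method below probes for.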
So the trick is to detect whether important data is being stored in the surround channel(s). I'll sketch out for you a method that might work, and I'll try to express it without writing code, but it's up to you to implement and refine it to your liking.
Crop the first N seconds or so of program content into a stereo file, where N is between one and thirty. Call this file Input.
Mix down the Input stereo channels to a new mono file at -3dB per channel. Call this file Center.
Split the left and right channels of Input into separate files. Call these Left and Right.
Invert the right channel. Call this file RightInvert.
Mix down the Left and RightInvert channels to a new mono file at -3dB per channel. Call this file Surround.
Determine the RMS and peak dB of the Surround file.
If the RMS or peak dB of the Surround file is below "a tolerance", stop; the original file is either mono or center-panned and hence contains no surround information. You'll have to experiment with several DPL2 and non-DPL2 sources to see what these tolerances are, but after a dozen or so files the numbers should become clear. I'm guessing around -30 dB or so.
Invert the Center file into a new file. Call this file CenterInvert.
Mix the CenterInvert file into the Surround file at 0 dB (both CenterInvert and Surround should be mono). Call this new file SurroundInvert.
Determine the RMS and peak dB of the SurroundInvert file.
If either the RMS or peak dB of SurroundInvert is below "a tolerance," stop; your original source contains panned left or right front information, not surround information. You'll have to experiment with several DPL2 and non-DPL2 sources to see what these tolerances are, but after a dozen or so files the numbers should become clear -- I'm guessing around -35 dB or so.
If you've gotten this far, your original Input probably contains surround information, and hence is probably a member of the Dolby Surround family of encodings.
I've written this algorithm out such that you can do each of these steps with a specific command in sox. If you want to be fancier, instead of doing the RMS/peak value step in sox, you could run an ebur128 program and check your levels in LUFS against a tolerance. If you want to be even fancier, after you create the Surround and Center files, you could filter out all frequencies higher than 7kHz and do de-emphasis on them, just like a real DPL2 decoder would.
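As a minimal sketch of those steps as sox commands (the file names, the 30-second window, and the use of 0.707 for -3 dB are assumptions for you to adapt):

# 1. Crop the first 30 seconds into a stereo working file
sox source.wav Input.wav trim 0 30
# 2. Mix both channels to mono at -3 dB each (0.707 ~= -3 dB)
sox Input.wav Center.wav remix 1v0.707,2v0.707
# 3-4. Split the channels, then invert the right one
sox Input.wav Left.wav  remix 1
sox Input.wav Right.wav remix 2
sox Right.wav RightInvert.wav vol -1
# 5. Mix Left and RightInvert to mono at -3 dB each
sox -m -v 0.707 Left.wav -v 0.707 RightInvert.wav Surround.wav
# 6-7. Read "Pk lev dB" and "RMS lev dB" from the stats output
sox Surround.wav -n stats
# 8-9. Invert Center, mix it into Surround at 0 dB, measure again
sox Center.wav CenterInvert.wav vol -1
sox -m CenterInvert.wav Surround.wav SurroundInvert.wav
sox SurroundInvert.wav -n stats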
To keep this algorithm simple, I've sketched it out entirely in the amplitude domain. The calculation of the Surround file would probably be a lot more accurately done in the frequency domain, if you know how to calculate the magnitude and angle of FFT bins and you use windows of 30 to 100 ms. But this cheapo version above should get you started.
One last caution. AAC is a modern psychoacoustic codec, which means that it likes to play games with stereo phasing and imaging to achieve its compression. So I consider it likely that the mere act of encapsulating DPL2 into an AAC stream will hose some of the imaging present in DPL2. To be candid, neither DPL2 nor AAC belongs anywhere in this pipeline. If you must store an analog stream originally encoded with DPL2, do it in a lossless format like WAV or FLAC, not AAC.
As of this writing, operational concepts behind Dolby Pro Logic (I) are here. These basic concepts still apply to DPL2; operational concepts for DPL2 are here.

If the file has more than one channel, you can assume with some certainty that they are used for surround purposes, although they could just be multiple tracks.
In that case it falls to the playback system to handle the channels as it "thinks" best (if the file header doesn't say what to do).
But your file is stereo. If you want to know whether it is a virtual surround file, you look in the header for an encoder field to see which encoder was used.
This may help somewhat, although not much. The encoder field is usually left empty, and the encoder doesn't have to be the same as the tool that mixed down the surround data.
That is, the mixdown tool will first create raw PCM data, then feed it to some encoder to produce the compressed file (AAC or whatever).
Also, there are many applications, and as versions vary, so might the encoder field, so tracking all of them would be nasty work.
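If you do want to check, a quick way to dump whatever encoder tags exist is ffprobe (assuming it's installed; with many files these tags are simply absent):

# dump the encoder/handler tags, if any, from the first audio stream
ffprobe -v error -select_streams a:0 \
  -show_entries stream_tags=encoder,handler_name:format_tags=encoder \
  -of default=noprint_wrappers=1 input.m4a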
However, you can, with over 60% certainty, deduce whether something is virtual surround or not by examining the data.
This would be advanced DSP, and, for speed, machine learning might even be involved.
You would have to find out whether the stereo signals contain certain features of an HRTF (head-related transfer function).
This might be achieved by examining intensity-difference and delay features between the same sounds in the time domain, and harmonic features (characteristic frequency changes) in the frequency domain.
You would have to do both, because one without the other may just tell you that something is a very good stereo recording, not virtual surround.
I don't know whether HRTF-specific features are mapped somewhere already, or whether you would need to do it yourself.
It's a very complicated solution that would take a lot of time to build properly. Its performance would also be problematic.
With this method you could also decompose the stereo mixdown into nearly the original surround channels.
But for stereo-to-surround conversion other methods are used, and they sound good.
If you are determined to perform such a detection, dedicate half a year or more of hard work if no HRTF features are mapped (a few weeks if they are),
brace yourself for a lot of stress, and good luck. I have done something similar. It is a killer.
If you want an out-of-the-box solution, then the answer to your question is no, unless the header provides you with an encoder field and the encoder is distinctive and known to be used only for surround-to-stereo conversion.
I don't think anyone has done this from the actual data as I described, or if they have, it's part of a commercial product. Doing what you want is not usually needed, but it can be done.
Oh, BTW, try googling "HRTF inversion"; it might give some help.

Related

Is there a command line utility for Linux that can change the tempo of an audio track at non-fixed rates, i.e. with the rate changing in the process?

I need a command line utility for Linux which can change the tempo of an audio track at a non-constant rate: for example, the tempo rate might be 100% right at the beginning of the output track and 150% at its end, and would grow between the two points linearly. I only need tempo changes, which means that pitch should not be affected. I expect the application to be capable of changing the tempo linearly, any additional modes are, of course, welcome. I expect the application to support “wav” audio format, any other common audio formats, like “mp3” or “ogg”, are fine too.
I've considered "sox", "rubberband", "ffmpeg", and "soundstretch"; all four applications only seem to change tempo by a fixed amount. I haven't considered any GUI applications without CLI counterparts, because I would like to use the utility in Bash scripts. I have considered splitting the audio into multiple small fragments and changing the tempo of each one, but I'm not inclined to use this method because, given the amount of material I expect to work with, it would either take too much time or produce unsatisfactory sound quality.
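One lead that may be worth checking before giving up on the listed tools: the rubberband CLI documents a --timemap (-M) option that pins source frames to target frames, which can approximate a gradually changing tempo. This is only a sketch under that assumption; check rubberband --help in your version for the exact map-file format and whether the overall ratio must also be given:

# map.txt holds "source_frame target_frame" anchor pairs; for a 44100 Hz
# file this pins 0 s -> 0 s and 60 s -> 75 s (add more pairs for a
# smoother ramp), with the overall stretch given by --time
printf '0 0\n2646000 3307500\n' > map.txt
rubberband --timemap map.txt --time 1.25 input.wav output.wav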

Fieldwork audio recording for acoustic analysis: stereo or mono? appropriate gain?

I work in the field of phonetics and often need to record human speech for acoustic analysis. I have two questions that I couldn't find answers to:
If I record in stereo, I need to convert to mono later on to proceed with annotation. So in principle a mono signal is good enough. Are there reasons that stereo should be used (e.g., would the signal be better)?
Also, we were warned that the gain level should be kept small so that the recording level doesn't exceed the maximum, which leads to signal cutoff. However, I was also criticised when the recording shows too low an amplitude (even though it's still very clear), because that leads to a low SNR. How do people choose an appropriate gain level?
As the act of recording is involved, the Sound Design forum might be your best bet.
I can't think of anything that might be gained, in terms of frequency analysis, by having a stereo signal. Stereo is more about locating the source of a sound in 3D space. Does the source of the sound emit different frequency profiles in different directions? Does the environment filter the sound differently over the course of the two paths to the stereo inputs? If the answer is "not significantly", then mono should be fine.
Choosing an appropriate gain level is mostly a matter of knowing your equipment. Ideally, your recording setup will provide feedback (usually a visual meter of some sort) that shows the signal strength. The "best" would be (theoretically) the loudest level that does not distort. So you have to know at what level distortion happens on all the elements of the recording chain.
There can be some fudging on this, given that the loudest peak on a recorded segment may be an outlier.

How to decrease pitch of audio file in nodejs server side?

I have an MP3 file stored on my server, and I'd like to modify it to be a bit lower in pitch. I know this can be achieved by increasing the length of the audio; however, I don't know of any libraries in node that can do this.
I've tried using the node web audio api and soundbank-pitch-shift, but the former doesn't seem to have pitch-shifting capabilities (AFAIK), and the latter seems designed for client-side use.
I need the solution within the realm of node ONLY -- that means no external programs, etc. -- and it needs to be automated as well, so I can't manually pitch shift.
An ideal solution would be a function that takes a file/filepath as an input, and then creates (or overwrites) another MP3 file but with the pitch shifted by x amount, but really, any solution that produces something with a lower pitch than the original, works.
I'm totally lost here. Please help.
An audio file is basically a list of numbers. Those numbers are read one at a time at a particular speed called the 'sample rate'. The sample rate is otherwise defined as the number of audio samples read every second, e.g. if an audio file's sample rate is 44100, then 44100 samples (or numbers) are read every second.
If you are with me so far, the simplest way to lower the pitch of an audio file is to play the file back at a lower sample rate (which is normally fixed in place). In most cases you won't be able to do this, so you need to achieve the same effect by resampling the file, i.e. adding new samples in between the old samples to make the file literally longer. For this you would need to understand interpolation.
The drawback to this technique in either case is that the sound will also play back at a slower speed, as well as at a lower pitch. If it is a problem that the sound has slowed down as well as lowered in pitch as a result of your processing, then you will also have to use a timestretching algorithm to fix the playback speed.
You may also have problems doing this with MP3 files. In this case you may have to uncompress the data in the MP3 file before you can operate on it in a way that changes the pitch. WAV files are better suited to audio processing. In any case, you essentially need to turn the file into a list of floating-point numbers, and arrange for those numbers to be read back at a slower rate.
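The asker wants node only, but as an illustration of the technique being described, the resample-and-slow-down approach is a one-liner in common CLI tools (sox and ffmpeg are assumed to be available; the 0.891 factor, two semitones down, is just an example):

ffmpeg -i input.mp3 input.wav
# 0.891 = 2^(-2/12): read the samples back ~11% slower, so both pitch
# and tempo drop -- exactly the trade-off described above
sox input.wav output.wav speed 0.891
ffmpeg -i output.wav -codec:a libmp3lame -q:a 2 output.mp3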
Other methods of pitch shifting would probably involve the use of FFTs, and would be a more complicated affair, to say the least.
I am not familiar with nodejs, I'm afraid.
I managed to get it working with help from Ollie M's answer and node-lame.
I hadn't previously known that the sample rate could affect the speed, but thanks to Ollie, this problem suddenly became a lot simpler.
Using node-lame, all I did was take one of the examples (mp32wav.js) and change the sampleRate parameter of the format object so that it is lower than the base sample rate, which in my application was always a static 24,000. I could also make it dynamic, since node-lame can grab the parameters of the input file in the format object.
Ollie, however, perfectly describes the drawback of this method:
The drawback to this technique in either case is that the sound will also play back at a slower speed, as well as at a lower pitch. If it is a problem that the sound has slowed down as well as lowered in pitch as a result of your processing, then you will also have to use a timestretching algorithm to fix the playback speed.
I don't have a particular need to implement a time stretching algorithm at the moment (thankfully, because that's a whole other can of worms), since I have the ability to change the initial speed of the file, but others may in the future.
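For anyone reading along without node-lame, the same header-rate trick can be sketched with ffmpeg alone (a stereo 16-bit source and the 24000 figure are assumptions):

# decode to raw PCM, then re-wrap the identical samples with a lower
# declared sample rate -- pitch and speed both drop
ffmpeg -i input.mp3 -f s16le -ac 2 - |
  ffmpeg -f s16le -ar 24000 -ac 2 -i - output.mp3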
See https://www.npmjs.com/package/audio-decode, https://github.com/audiojs/audio-buffer, and the related links at the bottom of the audio-buffer readme.

Is there a way to use ffmpeg audio filters to automatically synchronize 2 streams with similar content

I have a situation where I have a video capture of HD content via HDMI, with audio from a sound board that goes through an impedance drop into the microphone input of a camcorder. That same signal is split at line level to a 'line in' jack on the same computer that is capturing the HDMI. Alternatively, I can capture the audio via USB from the soundboard, which is probably the best plan, but it carries the same issue.
The point is that the line-in or USB capture will be much higher quality than the one on HDMI, because the line out -> impedance change -> mic in path produces inferior quality: simply brushing the mic jack on the camera while trying to change the zoom (close proximity) can cause noise on the recording.
So I can do this today:
Take the good sound and the camera-captured sound, load each into Audacity, and pretty quickly use the time-shift tool to perfectly fit the good audio to the questionable audio from the HDMI capture, then cut the good audio to the exact size of the video. Then I can use ffmpeg or other video editing software to replace the questionable audio with the better audio.
But while somewhat quick and easy, it always carries with it a bit of human error and time. I'd like to automate this if possible as this process is repeated at least weekly throughout the year.
Does anyone have a suggestion if any of these ideas have merit or could suggest another approach?
I suspect, but have yet to confirm, that the system timestamp of the start time may be recorded both in audio captured with something like Audacity or the USB capture tool from the sound board, and in the HDMI MPEG-2 video. I tried ffprobe on a couple of Audacity-captured .wav files but didn't see anything in the results about such a time code; perhaps other audio formats or other probing tools include this info. Can anyone advise if this is common with any particular capture tools or file formats?
If so, I think I could get the best results by extracting this information and then using simple adelay and atrim filters in ffmpeg to sync reliably, directly from the two sources, in one ffmpeg call. This is all theoretical for me right now -- I've never tried either of these filters -- so I'm just trying to avoid blind alleys by asking for advice up front.
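As a hedged sketch of that one-call idea once the offset is known (the 1234 ms figure and file names are hypothetical):

# delay the good audio by 1234 ms per channel, lay it over the video,
# and drop the noisy HDMI audio track
ffmpeg -i capture.mp4 -i good_audio.wav \
  -filter_complex "[1:a]adelay=1234|1234[a]" \
  -map 0:v -map "[a]" -c:v copy -shortest synced.mp4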
If such timestamps are not embedded, possibly I can use the file system timestamp for the same idea, but I suspect the file opens of the two capture tools may have different inherent delays. Possibly these delays will turn out to be nearly constant, and the approach could work with a built-in constant anticipation delay, but that sounds messy and less reliable than idea 1. Still, I'd take it if it turns out reasonably reliable.
Are there any ffmpeg or general digital-audio experts out there who know of filters that can be employed on the actual data to look for similarities? For example: normalize the peak amplitudes (or normalize both streams to some RMS value), then step through a short 10-second snippet of audio, repeatedly shifting one stream 0.01 s against the other, subtracting the two, and looking for a minimum. It sounds like it could take a while, but if it could do this in less than a minute and be reliable, I suspect it could work. I have only rudimentary knowledge of audio streams, and perhaps what I suggest is just not plausible -- but since each stream starts from the same source, I think there should be a chance. I am just way out of my depth as to how to go down this road, so if someone out there knows such magic or can throw me some names of filters and example calls, I can explore whether I can make it work.
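That brute-force minimum search is easy to prototype with sox before committing to an ffmpeg filter chain; a sketch assuming both captures have been mixed down to mono WAVs at the same sample rate (file names, the 60 s start point, and the 0-2 s search range are all assumptions):

# shift the HDMI snippet in 10 ms steps, subtract it from the good
# audio, and report residual RMS; the lowest RMS marks the best offset
sox good.wav g10.wav trim 60 10
for ms in $(seq 0 10 2000); do
  off=$(echo "scale=3; 60 + $ms / 1000" | bc)
  sox hdmi.wav h10.wav trim "$off" 10
  sox -m -v 1 g10.wav -v -1 h10.wav diff.wav
  rms=$(sox diff.wav -n stats 2>&1 | awk '/^RMS lev dB/ {print $4}')
  echo "$ms ms -> $rms dB"
done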
Are there any hardware-level suggestions to take a line-level output down to a mic-level input without the problems I'm seeing with a simple in-line impedance drop module, so that I can simply rely on the audio from the HDMI?
Thanks in advance for any pointers or suggestions!

Frequency differences from MP3 to mic

I'm trying to compare sound clips based on microphone recordings. Simply put, I play an MP3 file while recording from the speakers, then attempt to match the two files. I have the algorithms in place, and they work, but I'm seeing a slight difference I'd like to sort out to get better accuracy.
The microphone seems to favor some frequencies (adding amplitude) and be slightly off on others (peaks are wider on the mic).
I'm wondering what the cause of this difference is, and how to compensate for it.
Background:
Because of speed issues in how I'm doing the comparison, I select certain frequencies with certain characteristics. The problem is that a high percentage of these (depending on how many I choose) don't match between the MP3 and the mic.
It's called the response characteristic of the microphone. Unfortunately, you can't easily get around it without buying a different, presumably more expensive, microphone.
If you can measure the actual microphone frequency response by some method (which generally requires a reference ("etalon") acoustic system and an anechoic chamber), you can compensate for it by applying an equaliser tuned to exactly the inverse characteristic, as discussed here. But in practice, as Kilian says, it's much simpler to get a more precise microphone. I'd recommend a condenser or an electrostatic one.
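If you do go the compensation route, one band of such an inverse equaliser looks like this in sox (the 3 kHz centre, 1 q width, and -4 dB cut are made-up numbers you would derive from your measured response):

# cut a hypothetical +4 dB mic bump around 3 kHz before comparing
sox mic_recording.wav compensated.wav equalizer 3000 1q -4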
