Dynamic range compression for audio volume normalization [closed] - audio

I already asked about audio volume normalization. With most methods (e.g. ReplayGain, which I am most interested in), I might get peaks that exceed the PCM limit (as can also be read here).
Simple clipping would probably be the worst thing I can do. As Wikipedia suggests, I should do some form of dynamic range compression.
I am talking about the function that I apply to each individual PCM sample value. On another similar question, one answer suggests that doing this alone is not enough, or not the thing I should do. However, I don't really understand that, as I still have to handle the clipping case. Does the answer suggest doing the range compression on multiple samples at once and, in addition, doing simple hard clipping on every sample?
Leaving that aside, the functions discussed in the Wikipedia article don't seem to be quite what I want (in many cases I would still end up with clipping). I am thinking about using something like tanh instead. Is that a bad idea? It would reduce the volume slightly but guarantee that I never clip.
My application is a generic music player. I am looking for a solution that works well for just about everyone, so that I can leave it on by default and users are unlikely to want to turn it off.

Using any instantaneous dynamic range processing (such as clipping or a tanh non-linearity) will introduce audible distortion. Put a sine wave into an instantaneous non-linear function and you no longer have a sine wave. Such processing is useful for certain audio applications, but it sounds like you do not want these artefacts.
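
To see why, here is a small sketch (Python/NumPy purely for illustration; the answer itself names no language): a pure tone pushed through tanh comes out with harmonics that were never in the input.

    import numpy as np

    sr = 44100
    t = np.arange(sr) / sr
    sine = 0.9 * np.sin(2 * np.pi * 440.0 * t)   # pure 440 Hz tone
    shaped = np.tanh(3.0 * sine)                 # instantaneous soft clipping

    spectrum = np.abs(np.fft.rfft(shaped))
    freqs = np.fft.rfftfreq(len(shaped), 1.0 / sr)
    # The four strongest bins are now 440, 1320, 2200 and 3080 Hz: the odd
    # harmonics added by the non-linearity, i.e. audible distortion.
    print(sorted(freqs[np.argsort(spectrum)[-4:]].round()))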
Normalization does not affect the dynamics (in terms of min/max ratio) of a waveform. Normalization involves element-wise multiplication of a waveform by a constant scalar value to ensure no samples ever exceed a maximum value. This process can only be done off-line, as you need to analyse the entire signal before processing. Normalization is also a bad idea if your waveform contains any intense transients: your entire signal will be attenuated by the ratio of the transient peak value to the clipping threshold.
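
In other words (a minimal sketch, assuming floating-point samples; the function name is made up for illustration):

    import numpy as np

    def peak_normalize(x, ceiling=1.0):
        # One constant scalar for the whole waveform: dynamics are unchanged,
        # but a single loud transient dictates how much everything else is
        # turned down.
        peak = np.max(np.abs(x))
        return x if peak == 0 else x * (ceiling / peak)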
If you just want to protect the output from clipping, you are best off using a side-chain type compressor. A specific form of this is the limiter (infinite compression ratio above a threshold with zero attack time). A side-chain compressor calculates the smoothed energy envelope of a signal and then applies a varying gain according to that envelope. It is not instantaneous, so you avoid the audible distortion you'd get from the functions you mention. A limiter can have an instantaneous attack to prevent clipping, but you allow a release time so that the limiter keeps attenuating subsequent waveform peaks; the following waveform is simply turned down, so there is no distortion. After the intense sound, the limiter recovers.
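
As a rough sketch of the idea (Python/NumPy assumed, with arbitrary threshold and release values; a real limiter would be more careful):

    import numpy as np

    def limiter(x, sr, threshold=0.95, release_ms=200.0):
        # x is assumed to be a float array scaled to roughly [-1, 1].
        release = np.exp(-1.0 / (sr * release_ms / 1000.0))
        env = 0.0
        out = np.empty(len(x))
        for n, s in enumerate(x):
            # Peak envelope: jumps up instantly (zero attack), decays slowly.
            env = max(abs(s), env * release)
            # Varying gain derived from the envelope, never letting the
            # output exceed the threshold.
            out[n] = s * min(1.0, threshold / max(env, 1e-12))
        return out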
You can get a pumping-type sound from this kind of processing if there are a lot of high-intensity peaks in the waveform. If this becomes problematic, you can move to the next level and do the dynamics processing within sub-bands. This way, only the offending parts of the frequency spectrum are attenuated, leaving the rest of the sound unaffected.
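
A very crude two-band illustration (Python/SciPy assumed; the crossover frequency, threshold and static per-band gain are simplifications of what a real multiband compressor does):

    import numpy as np
    from scipy.signal import butter, sosfilt

    def two_band_limit(x, sr, crossover_hz=200.0, threshold=0.9):
        # Split the signal at the crossover into a low and a high band.
        low = sosfilt(butter(4, crossover_hz, "low", fs=sr, output="sos"), x)
        high = sosfilt(butter(4, crossover_hz, "high", fs=sr, output="sos"), x)

        def tame(band):
            # Static per-band gain for illustration only; a real design would
            # use a time-varying envelope per band, as in the limiter above.
            peak = np.max(np.abs(band))
            return band * (threshold / peak) if peak > threshold else band

        # Only the offending band gets attenuated; the other is untouched.
        return tame(low) + tame(high)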

The general solution is to normalize to some gain level significantly below 1, such that very few songs require adding gain. In other words, most of the time you will be lowering the volume of the signal rather than increasing it. Experiment with a wide variety of songs in different styles to figure out what this level is.
Now, occasionally you'll still come across a song that requires enough gain that, at some point, it would clip. You have two options: 1. don't add that much gain; this one song will sound a bit quieter. C'est la vie. (This is a common approach.) Or 2. apply a small amount of dynamic range compression and/or limiting. Of course, you can also do some combination of 1 and 2. I believe iTunes uses a combination of 1 and 2, but they've worked very hard on #2, and they apply very little.
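
Option 1 boils down to capping the gain at whatever the track's known peak allows. A hedged sketch, assuming `replay_gain_db` and `track_peak` come out of your ReplayGain analysis (names invented for illustration):

    def safe_gain(replay_gain_db, track_peak, ceiling=1.0):
        wanted = 10.0 ** (replay_gain_db / 20.0)   # dB -> linear gain
        allowed = ceiling / track_peak             # gain at which we'd clip
        # If the wanted gain would clip, the track simply stays a bit quieter.
        return min(wanted, allowed)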
Your suggestion, using a function like tanh on a sample-by-sample basis, will result in audible distortion. You don't want to do this for a generic music player. This is the sort of thing that's done in guitar amp simulators to make them sound "dirty" and "grungy". It might not be audible in rock, pop, or other modern music that is already heavy on distortion, but on carefully recorded choral, jazz or solo violin music people will be upset. This has nothing to do with the choice of tanh, by the way; any nonlinear function will produce distortion.
Dynamic range compression uses envelopes that are applied over time to the signal: http://en.wikipedia.org/wiki/Dynamic_range_compression
This is tricky to get right, and you can never create a compressor that is truly "transparent". A limiter can be thought of as an extreme version of a compressor that (at least in theory) prevents the signal from going above a certain level. A digital "lookahead" limiter can do so without noticeable clipping. When judiciously used, it is pretty transparent.
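
For the curious, here is a rough off-line sketch of the lookahead idea (Python/NumPy assumed; threshold and lookahead time are arbitrary): the gain starts ramping down shortly before a peak arrives, so nothing is ever hard-clipped. A real-time player gets the same effect by delaying the audio path by the lookahead amount.

    import numpy as np

    def lookahead_limit(x, sr, threshold=0.95, lookahead_ms=5.0):
        # x is assumed to be a float array scaled to roughly [-1, 1].
        look = max(1, int(sr * lookahead_ms / 1000.0))
        # Gain each sample would need in order to stay under the threshold.
        need = np.minimum(1.0, threshold / np.maximum(np.abs(x), 1e-12))
        # Running minimum over the next `look` samples, so the gain begins
        # to fall before the peak it protects against.
        padded = np.concatenate([need, np.ones(look)])
        env = np.array([padded[i:i + look].min() for i in range(len(x))])
        # A moving average over the previous `look` samples smooths the
        # abrupt steps into ramps without ever exceeding the needed gain.
        hist = np.concatenate([np.full(look - 1, env[0]), env])
        gain = np.convolve(hist, np.ones(look) / look, mode="valid")
        return x * gain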
If you take this approach, make sure that this feature can be turned off, because no matter how transparent you think it is, someone will hear it and not like it.

Related

Methods for simulating moving audio source

I'm currently researching a problem regarding DOA (direction of arrival) regression for an audio source, and I need to generate training data in the form of audio signals of moving sound sources. In particular, I have the stationary sound files, and I need to simulate a source and microphone(s) with the distances between them changing to reflect movement.
Is there any software online that could potentially do the trick? I've looked into pyroomacoustics and VA as well as other potential libraries, but none of them seem to deal with moving audio sources, due to the difficulties in simulating the Doppler effect.
If I were to write up my own simulation code for dealing with this, how difficult would it be? My use case would be an audio source and a microphone in some 2D landscape, both moving with their own velocities, where I would want to collect the recording from the microphone as an audio file.
Some speculation here on my part, as I have only dabbled with writing some aspects of what you are asking about and am not experienced with any particular libraries. Likelihood is good that something exists and will turn up.
That said, I wonder if it would be possible to use either the Unreal or Unity game engine. Both, as far as I can remember, let you load your own audio cues and support 3D sound, including Doppler.
As far as writing your own, a lot depends on what you already know. With a single-point mic (as opposed to stereo), the pitch shifting involved is not that hard. There is a technique that involves stepping through the audio file's sample data with a variable-sized step, using linear interpolation for read positions that lie between data points, which is considered to have sufficient fidelity for most purposes. Lots of trig, too, to track the changes in relative velocity.
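
A bare-bones sketch of that read-pointer idea (Python/NumPy assumed; `src_pos_fn` is a made-up callback giving the source's 2D position at time t, and distance attenuation, reflections and stereo are all ignored):

    import numpy as np

    def doppler_render(src, sr, src_pos_fn, mic_pos, c=343.0):
        out = np.zeros(len(src))
        for n in range(len(out)):
            t = n / sr
            # Distance from the moving source to the fixed mic at time t.
            d = np.linalg.norm(np.asarray(src_pos_fn(t)) - np.asarray(mic_pos))
            # The sound arriving now left the source d / c seconds ago, so
            # read the source at that (fractional) position.
            pos = (t - d / c) * sr
            i = int(np.floor(pos))
            frac = pos - i
            if 0 <= i < len(src) - 1:
                out[n] = (1 - frac) * src[i] + frac * src[i + 1]  # lin. interp
        return out

    # e.g. a source flying past a mic at the origin:
    # wet = doppler_render(dry, 44100, lambda t: (-50.0 + 20.0 * t, 2.0), (0.0, 0.0))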
If we are dealing with stereo, though, it does get more complicated, depending on how far you want to go with it. The head masks high frequencies, so real-time filtering would be needed. It would also be good to implement a delay to match the different arrival times at each ear. And if you start talking about pinnae, I'm way out of my league.
As of now it seems that pyroomacoustics does not support moving sound sources. However, do check the possible workaround suggested by the developers in Issue #105, where using a time-varying convolution on a dense microphone array is proposed.

audio mixing wrt time

I've read through many questions on Stack Overflow which state that to mix audio, you just add the sample frames together (and clip when necessary). But what should I do if I want to, say, mix one audio stream with another at some offset? For example, I want to mix the second audio into the first one when the first audio reaches its 5th second.
Any help would be appreciated!
Typically when working with audio on a computer, you will be working with audio in the time domain, in the format of PCM samples. That is, many times per second, the pressure level at that point in time is measured and quantified into a number. If you are working with CD-quality audio, the sample rate is 44,100 samples per second, and each sample is often quantified into a 16-bit integer (-32,768 to 32,767). (Other sample rates, bit depths, and quantization schemes are out there and often used; this is just an example.)
If you want to mix two audio streams of the same sample rate, it is possible to simply add the values of each sample together. If you think about it, if you were to hear sound from two sources, their pressure levels would affect each other in much the same way. Sometimes they cancel each other out, sometimes they add to each other. You mentioned clipping... you can do this, but you will be introducing distortion into the mix. When a sound is too loud to be quantified, it is clipped at the maximum and minimum of the representable range, causing audible clicks, pops, and poor-quality sound. If you want to avoid this problem, you can cut the level of each stream in half, guaranteeing that even with both streams at their maximum level, the sum will stay within the appropriate range.
Now, your question is about mixing audio with an offset. It's absolutely no different. If you want to start mixing 5 seconds in, then 5 * 44,100 = 220,500, meaning you align sample zero of one stream with sample 220,500 of the other stream and mix.
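
A minimal sketch (Python/NumPy assumed, both streams as float arrays at the same sample rate), combining the offset with the halve-the-levels suggestion above:

    import numpy as np

    def mix_with_offset(a, b, sr, offset_seconds=5.0):
        start = int(offset_seconds * sr)          # e.g. 5 * 44100 = 220500
        out = np.zeros(max(len(a), start + len(b)))
        out[:len(a)] += 0.5 * a                   # halved to avoid clipping
        out[start:start + len(b)] += 0.5 * b      # sample-wise add, offset
        return out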

Quickest and easiest algorithm for comparing the frequency content of two sounds

I want to take two sounds that contain a dominant frequency and say 'this one is higher than this one'. I could do FFT, find the frequency with the greatest amplitude of each and compare them. I'm wondering if, as I have a specific task, there may be a simpler algorithm.
The sounds are quite dirty with many frequencies, but contain a clear dominant pitch. They aren't perfectly produced sine waves.
Given that the sounds are quite dirty, I would suggest starting to develop the algorithm with the output of an FFT as it'll be much simpler to diagnose any problems. Then when you're happy that it's working you can think about optimising/simplifying.
As a rule of thumb when developing this kind of numeric algorithm, I always try to operate in the most relevant domain first (in this case you're interested in frequencies, so analyse in frequency space), and once everything is behaving itself, consider shortcuts/optimisations. That way you can test any later shortcut against the best-performing reference.
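
For reference, the FFT-peak comparison the question describes is only a few lines (Python/NumPy assumed; a Hann window is applied to reduce leakage):

    import numpy as np

    def dominant_frequency(x, sr):
        spectrum = np.abs(np.fft.rfft(x * np.hanning(len(x))))
        return np.fft.rfftfreq(len(x), 1.0 / sr)[np.argmax(spectrum)]

    def which_is_higher(a, b, sr):
        # Compare the strongest spectral peak of each recording.
        return "first" if dominant_frequency(a, sr) > dominant_frequency(b, sr) else "second"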
In the general case, decent pitch detection/estimation requires a more sophisticated algorithm than looking at FFT peaks, not a simpler one.
There are a variety of pitch detection methods, ranging in sophistication from counting zero-crossings (which obviously won't work in your case) to extremely complex algorithms.
While the frequency-domain methods seem most appropriate, it's not as simple as "taking the FFT". If your data is very noisy, you may have spurious peaks that are higher than what you would consider to be the dominant frequency. One solution is to window overlapping segments of your signal, take STFTs, and average the results. But this raises more questions: how big should the windows be? That depends on how far apart you expect the dominant peaks to be, how long your recordings are, etc. (Note: FFT methods can resolve to better than one bin by taking phase information into account, but in that case you would have to do something more complex than averaging all your FFT windows together.)
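
A hedged sketch of that windowed-averaging idea (Python/NumPy assumed; frame and hop sizes are arbitrary, and the recording is assumed to be longer than one frame):

    import numpy as np

    def averaged_peak(x, sr, frame=4096, hop=1024):
        win = np.hanning(frame)
        mags = [np.abs(np.fft.rfft(win * x[i:i + frame]))
                for i in range(0, len(x) - frame, hop)]
        avg = np.mean(mags, axis=0)   # averaging suppresses spurious peaks
        return np.fft.rfftfreq(frame, 1.0 / sr)[np.argmax(avg)]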
Another approach would be a time-domain method, such as YIN:
http://recherche.ircam.fr/equipes/pcm/cheveign/pss/2002_JASA_YIN.pdf
Wikipedia discusses some more methods:
http://en.wikipedia.org/wiki/Pitch_detection_algorithm
You can also explore some more methods in chapter 9 of this book:
http://www.amazon.com/DAFX-Digital-Udo-ouml-lzer/dp/0471490784
You can get MATLAB source code for YIN from chapter 9 of that book here:
http://www2.hsu-hh.de/ant/dafx2002/DAFX_Book_Page_2nd_edition/matlab.html

Frequency differences from MP3 to mic

I'm trying to compare sound clips based on microphone recordings. Simply put, I play an MP3 file while recording from the speakers, then attempt to match the two files. I have the algorithms in place that work, but I'm seeing a slight difference I'd like to sort out to get better accuracy.
The microphone seems to favor some frequencies (adding amplitude) and to be slightly off on others (peaks are wider on the mic).
I'm wondering what the cause of this difference is, and how to compensate for it.
Background:
Because of speed issues in how I'm doing the comparison, I select certain frequencies with certain characteristics. The problem is that a high percentage of these (depending on how many I choose) don't match between the MP3 and the mic recording.
It's called the response characteristic of the microphone. Unfortunately, you can't easily get around it without buying a different, presumably more expensive, microphone.
If you can measure the actual microphone frequency response by some method (which generally requires a calibrated reference acoustic source and an anechoic chamber), you can compensate for it by applying an equaliser tuned to exactly the inverse characteristic, as discussed here. But in practice, as Kilian says, it's much simpler to get a more precise microphone. I'd recommend a condenser or an electrostatic one.
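
As a rough sketch of what "applying the inverse characteristic" means (Python/NumPy assumed; `mic_response` is a hypothetical measured magnitude response sampled at the FFT bin frequencies, i.e. of length len(recording)//2 + 1):

    import numpy as np

    def compensate(recording, mic_response):
        spectrum = np.fft.rfft(recording)
        # Dividing by the mic's response is the same as multiplying by the
        # inverse equaliser curve; the floor keeps bins where the mic barely
        # responds from blowing up.
        corrected = spectrum / np.maximum(mic_response, 1e-3)
        return np.fft.irfft(corrected, n=len(recording))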

Is algebraic sound synthesis possible? [closed]

Let's say you have a normal song with two layers, one instrumental and another of just vocals. Now let's say you also have just the instrumental layer. Is it possible to "subtract" the instrumentals and obtain the pure vocals? Is there going to be loss? How would I go about performing this specific type of subtractive synthesis?
Yes, it's possible, but yes, there can be loss as well, since sound waves can cancel each other out when added (destructive interference). For example, two sine waves that are 180 degrees out of phase sum to silence.
Ideally, it should be possible. The catch is, "ideal" is pretty restrictive. In order to pull this off properly, you would have to have a song file that was constructed by additive synthesis in the first place, i.e. by adding the vocal track to the exact same instrumental track you have. Now, if you do have that situation, then it's simple enough; as others have said, you just add the inverse of the instrumental track to the overall song.
Unfortunately, there are a lot of things that can get in the way of that. For example, if the additive synthesis was clipped at some points (which means that the sum of the instrumental and vocal tracks was louder than the maximum volume that can be stored), you won't be able to recover the vocals at those points. More generally, lossy audio compression tends to remove different pieces of the sound depending on what is most/least audible, and that's heavily dependent on whether you have vocals or not, so if any of these sound files have been compressed using a lossy codec like MP3, you've probably lost the information you need to reconstruct the vocal track.
The thing is, even minor changes to a signal can sometimes produce a big difference when you add it to or subtract it from another signal (because of wave interference and such things), so the results are kind of unpredictable when you don't have the exact sound to work with.
By the way, if you do have the exact signals you need to do this, you can perform the subtraction using Audacity or any other decent audio editor. There are even some mathematical programs you can use (like Matlab, which is able to read/write WAV files IIRC).
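
The subtraction itself is trivial once the preconditions above are met. A minimal sketch (Python/NumPy assumed, both tracks sample-aligned float arrays):

    import numpy as np

    def extract_vocals(mix, instrumental):
        n = min(len(mix), len(instrumental))
        # Adding the inverted instrumental is the same as subtracting it.
        return mix[:n] - instrumental[:n]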
Using a technique (and software) like this Audacity Vocal Removal approach, I bet you can achieve what you need. As Daniel and Paolo said, if you can apply the inverse of a sound wave to the original sound wave, you are able to cancel it out (muting that sound).
Generally, the word 'synthesis' is used slightly differently, though the dictionary meaning might agree with your question. As pointed out, Audacity/VST-plugin/Pro Tools versions of 'extract vocals' exist.
"Is there going to be a loss?" Of course, there will be some loss. Vocal and instrumental tracks are 'mixed' and 'mastered': panning, effects (echo/reverb), and additional shaping (compression etc.) take place in these stages. Besides, there will be many instrument tracks (keyboards, guitars, bass, drums) in most of the music that's produced these days.
I mean to say, even if you make your own music with one instrument track and one vocal track, merely panning your tracks will upset your subtraction logic. And if you recorded those tracks, say by playing guitar and singing, there will most probably be some 'leakage' between the two tracks, which makes matters worse.
Hypothetically speaking, wonderful things are possible with this idea. Practically, there are too many imperfections in the actual music production process.
