How to change pitch and tempo together, reliably with ffmpeg - audio

I know how to change tempo with atempo, but the audio file becomes a bit distorted, and I can't find a reliable way to change pitch. (Say, increase tempo and pitch together to 140%.)
SoX has a speed option, but it clips the volume AND isn't as widely available as ffmpeg. MPlayer has a speed option that works perfectly, but I can't write output without additional libraries.
From what I understand, ffmpeg doesn't have a way to change pitch (maybe it does recently?), but is there a way to change the sample rate or use some other flags to emulate a pitch change? I've looked quite far and can't find a decent solution.
Edit: asetrate=48k*1.4 (assuming originally 48k) doesn't seem to work; there's still distortion and the pitch doesn't really change much.
Edit2: https://superuser.com/a/1076762 this answer sort of works, but the quality is much lower than sox's speed 1.4 option.

ffmpeg -i <input file name> -filter:a "asetrate=<new frequency>" -y <output file name> seems to be working for me. I checked the properties of both the input and output files with ffprobe and there don't seem to be any differences that could affect quality. That said, on some runs the resulting file had artifacts even though the command line was the same, so it may be caused by some ffmpeg bug; try running it again if you aren't satisfied with the quality.

As of 2022 (though the filter was contributed in 2015), FFmpeg has a rubberband filter that works out of the box, without any of the aforementioned ugly, slow, poor-quality, or unintuitive workarounds.
To change the pitch using the rubberband filter, you specify the pitch as a frequency ratio. This is based on the formula 2^(x/12), where x is the number of semitones you would like to transpose.
For example, to transpose up by one semitone you would use the following command:
ffmpeg -i my.mp3 -filter:a "rubberband=pitch=1.0594630943592953" my-up.mp3
(Note that -acodec copy can't be combined with an audio filter; filtering requires re-encoding.)
To transpose down, simply use a negative number for x.
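The frequency ratios used above can be computed for any transposition; a quick sketch in Python (the helper name semitone_ratio is made up for illustration):

```python
def semitone_ratio(semitones: float) -> float:
    """Frequency ratio for transposing by the given number of semitones."""
    return 2 ** (semitones / 12)

# One semitone up and one semitone down -- the values used in the commands here.
print(f"rubberband=pitch={semitone_ratio(1)}")   # ~1.0594630943592953
print(f"rubberband=pitch={semitone_ratio(-1)}")  # ~0.9438743126816935
```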
To alter both properties simultaneously, specify tempo and pitch values. The tempo value is specified as a multiple of the original speed.
The following command transposes down by one semitone and bumps the speed up 4x:
ffmpeg -i slow.mp3 -filter:a "rubberband=pitch=0.9438743126816935:tempo=4" fast.mp3
Quality degradation is imperceptible unless measured statistically.

Related

How to merge 2 overlapping videos into one video using ffmpeg or opencv?

Merging two videos is easy and has been answered a couple of times. What I have is multiple overlapping videos: a video might overlap with the video before it. Meaning, if video 1 covers timeline 1-5, then video 2 may overlap it and cover 3 to 8. Merging them as-is would result in 1-5|3-8, when I need 1-8 only.
Videos are alphabetically sorted.
My general idea of solution is...
grab the last frame of the video
if it's the first video, continue
if it's not the first video (i.e. the 2nd), search for the frame saved in the previous step, frame by frame
if it reaches the last frame of the current video, there is no overlap; continue
if it finds a matching frame, clip the 2nd video up to that frame inclusive, then move on to the next video
once all videos have been analyzed, merge them into one video.
I need to translate this to ffmpeg commands. Or opencv if that's a better tool.
If there is better way of doing that, I'm interested in that too.
For ffmpeg you can use the script below; I tested it. But timing-wise, you have to change the STARTPTS+5 to +25 for your video. I put 5 here just to verify that the merging happens.
ffmpeg -i 2.mp4 -i 1.mp4 -filter_complex "[1]setpts=PTS-STARTPTS+5/TB[top];[0:0][top]overlay=enable='between(t\,10,15)'[out]" -shortest -map [out] -map 0:1 -pix_fmt yuv420p -c:a copy -c:v libx264 -crf 18 output1.mp4
Limitation
This needs the source to be long enough, which means you need a video canvas and then use this script to add each video onto the canvas.
And there is no fully autonomous way to use it in ffmpeg.
You are right, OpenCV can't deal with audio; it needs 3rd-party library support to run concurrently. Back then I had to use ROS to get both sound and vision from a webcam into the robot system. The sound was then processed with NLP for a natural-language user interface, and the vision was used separately for localization and mapping.
There is a way to work around it, though.
First, use OpenCV template matching or image differencing on a local window of frames. The position with the smallest error gives you the correct insertion location A. This should be accurate at the millisecond level. (If the error is always large, there is no overlap; raise an exception.)
Second, based on the correct location A obtained from OpenCV, use a system call to invoke the above script with A as an input parameter to do the merge automatically.
Depending on your application: if you need to do this frequently, write an OpenCV Python script to fuse the videos automatically. If it's just once a month, doing it manually with ffmpeg is good enough.
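The matching step described above can be sketched independently of any video I/O. In this sketch the frames are stand-ins (flat lists of pixel values) and frame_diff/find_overlap are made-up helper names; in practice you would load real frames with OpenCV's cv2.VideoCapture and could use cv2.matchTemplate for the comparison instead of the plain difference below.

```python
def frame_diff(a, b):
    """Mean absolute difference between two same-sized frames."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def find_overlap(last_frame, candidate_frames, threshold=10.0):
    """Return the index in candidate_frames that best matches last_frame,
    or None if every difference exceeds the threshold (no overlap)."""
    best_idx, best_err = None, threshold
    for i, frame in enumerate(candidate_frames):
        err = frame_diff(last_frame, frame)
        if err < best_err:
            best_idx, best_err = i, err
    return best_idx

# Toy data: video 2's third frame nearly matches video 1's last frame.
last = [10, 20, 30, 40]
video2 = [[0, 0, 0, 0], [5, 5, 5, 5], [10, 20, 30, 41], [99, 99, 99, 99]]
print(find_overlap(last, video2))  # → 2
```

Everything up to and including that index would then be clipped from the second video before concatenation.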

Compare the volume of two audio files

I want to test the performance of a hand-made microphone, so I recorded the same audio source with and without the microphone and got two files. Is there a way to compare the volume of the two files so that I know the mic actually works?
Could the possible solution be a package in Python or Audacity?
You will want to compare by loudness. The minimally accurate measure for this is A-weighted RMS. RMS is root-mean-square, i.e. the square root of the mean of the squares of all the sample values. Plain RMS is significantly thrown off by low-frequency energy, so you need to apply a frequency weighting; the A curve is commonly used.
The answer here explains how to do this with Python, though it doesn't go into detail on how to apply the weighting curve: Using Python to measure audio "loudness"
There doesn't seem to be a built-in function to do this with Audacity, but viable plugins might be available, eg: http://forum.audacityteam.org/viewtopic.php?f=39&t=38134&p=99454#p99454
Another promising route might be ffmpeg, but all the options I found either normalise or tag the files, rather than simply printing a measurement. You might look into http://r128gain.sourceforge.net/ (it uses LUFS, a more sophisticated loudness measure).
Update: for a quick and dirty un-weighted RMS reading, looks like you can use the following command from https://trac.ffmpeg.org/wiki/AudioVolume :
ffmpeg -i input.wav -filter:a volumedetect -f null /dev/null
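An un-weighted RMS reading can also be computed directly in Python. This is a minimal sketch assuming samples already normalized to [-1, 1] (reading them from a WAV file, e.g. with the wave module, is left out); a real comparison should add the A-weighting discussed above.

```python
import math

def rms_dbfs(samples):
    """Un-weighted RMS level of normalized samples, in dB relative to full scale."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms)

# Sanity check: a full-scale sine wave reads about -3.01 dBFS.
sine = [math.sin(2 * math.pi * 440 * n / 44100) for n in range(44100)]
print(round(rms_dbfs(sine), 2))  # → -3.01
```

Running this on both recordings and comparing the two dBFS figures gives a rough answer to "which file is louder".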
This question might be best migrated to Sound Design Stack Exchange.

What Is the Difference Between asetpts and atempo in FFmpeg Audio Filters?

I have been using FFmpeg to slow down or speed up video files (with audio). It seems that to speed up the video, setpts=0.5*PTS should be used. However, for speeding up the audio, both asetpts=0.5*PTS and atempo=2.0 are available. What is the difference between these two options, and which one is better?
Like setpts, asetpts only rewrites the presentation timestamps of the audio frames, while atempo actually changes the speed of the audio.
Comparing asetpts=PTS/2 and atempo=2.0, some information is lost when you use asetpts. Try it and you can hear the difference.
If you only use setpts=0.5*PTS as part of your filter, you'll notice that it only applies to the video stream, causing the output to become desynchronized. That's why atempo=2.0 is available and intended to be used in conjunction with setpts.
More information can be found here
From the official FFmpeg wiki, we can see that atempo is recommended.
In my own test case, asetpts didn't work. (I used ffprobe to check the pkt_pts and it didn't change; I also played the file and the speed didn't change either.)
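One practical wrinkle with atempo: historically it only accepted factors between 0.5 and 2.0 (newer builds are more permissive), so larger changes had to be chained. A small sketch that builds such a filter string (the function name atempo_chain is made up):

```python
def atempo_chain(factor: float) -> str:
    """Decompose a tempo factor into a chain of atempo filters,
    each within the historical [0.5, 2.0] limit."""
    parts = []
    while factor > 2.0:
        parts.append("atempo=2.0")
        factor /= 2.0
    while factor < 0.5:
        parts.append("atempo=0.5")
        factor /= 0.5
    parts.append(f"atempo={factor}")
    return ",".join(parts)

print(atempo_chain(4.0))   # → atempo=2.0,atempo=2.0
print(atempo_chain(0.25))  # → atempo=0.5,atempo=0.5
```

The resulting string can be dropped into -filter:a alongside setpts on the video stream.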

Normalize audio, then reduce the volume in ffmpeg

I have a question relating to ffmpeg. First, here is the scenario: I am working on a project where I need some audio with a presenter talking and then potentially some background music. I also have the requirement to normalize the audio, and I would like to do this without presenting a bunch of options to the user.
For normalization I use something similar to this post:
How to normalize audio with ffmpeg.
In short, I get a volume adjustment which I then apply to ffmpeg like this:
ffmpeg -i <input> -af "volume=xxxdB" <output>
So far so good. Now let's consider the backing track: it shouldn't be the same volume as the presenter's voice, as that would be really distracting, so I want to lower it by some percentage. I can also do this with ffmpeg (this example would set the volume to 50%):
ffmpeg -i <input> -af "volume=0.5" <output>
Using these two commands back to back, I can get the desired result.
My question has two parts:
Is there a way to do this in one step?
Is there any benefit to doing it in one step?
Thanks for any help!
After testing some more, I actually think the answer was pretty straightforward; I just needed to do this:
ffmpeg -i <input> -af "volume=xxxdB,volume=0.5" <output>
Took me a while to realize it; I had to try a few samples before I felt confident.
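This works because chained volume filters simply multiply as linear gains. A quick check of the arithmetic (the +6 dB value is just an example):

```python
import math

def db_to_linear(db: float) -> float:
    """Convert a dB gain to a linear amplitude factor."""
    return 10 ** (db / 20)

# volume=6dB followed by volume=0.5 is one combined linear gain:
combined = db_to_linear(6.0) * 0.5
print(round(combined, 4))  # ~0.9976
# ...which could equally be expressed as a single volume filter in dB:
print(round(20 * math.log10(combined), 2))
```

So there is no quality benefit to one step versus two; doing it in one command just avoids an extra encode of the intermediate file.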

change pitch of multiple audio files with Sox

I am intending to take my entire music collection and change the pitch
from the original recorded a=440hz to the more natural sounding/feeling a=432hz.
For those of you who are not familiar with this concept, or the "why" for doing this,
I highly encourage you to do a google search and see what it's all about.
But that is not entirely relevant.
I understand that I could take Audacity and, one by one,
convert and re-export the files with the new pitch. I have tried this
and yes, it does work. However, my collection is quite large, and I was
excited to find a more fitting command-line option, SoX. Any ideas?
$ sox your_440Hz_music_file.wav your_432Hz_music_file.wav pitch -31
This is asking way more than one question. Break it down into subproblems, for instance:
how to batch-process files (in whatever language you like: perl, bash, .bat, ruby)
how to structure a set of directories to simplify that task
how to change the pitch (with or without changing duration) of a single audio file
how to detect the mean pitch (concert, baroque, or whatever) of a recording of tonal music, by using a wide FFT, so you don't accidentally change something that's already 432 to 424
As you work through these, when you get stuck, ask a question in the form of a "simplest possible example" (SO gives much more advice about how to ask). Often, while formulating such a question, you'll find the answer in the related questions that SO offers you.
sox's pitch effect only accepts 'cents' (hundredths of a semitone), so you have to calculate the distance between 432 Hz and 440 Hz in cents. This involves the following logarithmic calculation:
2^(x/12) = 432/440
x/12 = log(432/440) / log(2)
x = log(432/440) / log(2) * 12
x = -0.3176665363342927165015877324608 semitones
x = -31.76665363342927165015877324608 'cents'
So this sox command should work:
sox input.wav output.wav pitch -31.76665363342927165015877324608
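The derivation above can be verified in a couple of lines; the resulting number is exactly what feeds sox's pitch effect:

```python
import math

# Distance from A=440 Hz to A=432 Hz, in semitones and cents.
semitones = 12 * math.log2(432 / 440)
cents = 100 * semitones

print(round(cents, 4))  # → -31.7667
```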
For those interested: this can also be done with ffmpeg:
ffmpeg -i input.wav -af "asetrate=44100*432/440,aresample=44100,atempo=440/432" output.wav
Or if ffmpeg is compiled with the Rubberband library:
ffmpeg -i input.wav -af "rubberband=pitch=432/440" output.wav
