I am intending to take my entire music collection and change the pitch
from the original recorded A = 440 Hz to the more natural-sounding/feeling A = 432 Hz.
For those of you who are not familiar with this concept, or the "why" for doing this,
I highly encourage you to do a google search and see what it's all about.
But that is not entirely relevant.
I understand that I could even take Audacity and, one by one,
convert and re-export the files with the new pitch. I have tried this
and yes, it does work. However, my collection is quite large, and I was
excited to find a more fitting command-line option, SoX. Any ideas?
$ sox your_440Hz_music_file.wav your_432Hz_music_file.wav pitch -31
This is asking way more than one question. Break it down into subproblems, for instance:
how to batch-process files (in whatever language you like: perl, bash, .bat, ruby)
how to structure a set of directories to simplify that task
how to change the pitch (with or without changing duration) of a single audio file
how to detect the mean pitch (concert, baroque, or whatever) of a recording of tonal music, by using a wide FFT, so you don't accidentally change something that's already 432 to 424
As you work through these, when you get stuck, ask a question in the form of a "simplest possible example" (SO gives much more advice about how to ask). Often, while formulating such a question, you'll find the answer in the related questions that SO offers you.
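A minimal sketch of the batch-processing subproblem, under the assumptions that sox is installed, the files are flat .wav files in the current directory, and a hypothetical pitched/ directory collects the output (the 440→432 shift is approximately -31.77 cents):

```shell
# Dry run: print the sox command that would be run for each .wav file.
# Remove 'echo' to perform the conversion for real (requires sox on PATH).
mkdir -p pitched
for f in *.wav; do
  echo sox "$f" "pitched/$f" pitch -31.77
done
```

Writing output to a separate directory also makes it easy to resume an interrupted run by skipping files that already exist in pitched/.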
sox's pitch effect only accepts 'cents' (hundredths of a semitone), so you have to calculate the distance between 432 Hz and 440 Hz in cents. This involves the following logarithmic calculation:
2^(x/12) = 432/440
x/12 = log(432/440) / log(2)
x = log(432/440) / log(2) * 12
x ≈ -0.31766654 semitones
100x ≈ -31.766654 cents (1 semitone = 100 cents)
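The arithmetic can be double-checked with a quick one-liner (awk is used here purely as a calculator; there are 1200 cents per octave):

```shell
# 1200 * log2(432/440) gives the pitch shift in cents
awk 'BEGIN { printf "%.6f\n", 1200 * log(432/440) / log(2) }'
# prints -31.766654
```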
So this sox command should work:
sox input.wav output.wav pitch -31.766654
For those interested, this can also be done with ffmpeg:
ffmpeg -i input.wav -af "asetrate=44100*432/440,aresample=44100,atempo=440/432" output.wav
Or if ffmpeg is compiled with the Rubberband library:
ffmpeg -i input.wav -af "rubberband=pitch=432/440" output.wav
I want to test the performance of a hand-made microphone, so I recorded the same audio source with and without the microphone and got two files. Is there a way to compare the volume of the two files so that I know the mic actually works?
Could the solution be a Python package, or something in Audacity?
You will want to compare by loudness. The minimally accurate measure for this is A-weighted RMS. RMS is root-mean-square, i.e. the square root of the mean of the squares of all the sample values. Unweighted RMS is significantly thrown off by low-frequency energy, so you need to apply a frequency weighting; the A curve is commonly used.
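As a toy illustration of the RMS formula itself (unweighted, using made-up sample values rather than decoded audio):

```shell
# RMS of the samples 0.5, -0.5, 0.5, -0.5: sqrt((4 * 0.25) / 4) = 0.5
printf '0.5\n-0.5\n0.5\n-0.5\n' | awk '{ s += $1 * $1; n++ } END { print sqrt(s / n) }'
# prints 0.5
```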
The answer here explains how to do this with Python, though it doesn't go into detail on how to apply the weighting curve: Using Python to measure audio "loudness"
There doesn't seem to be a built-in function to do this with Audacity, but viable plugins might be available, eg: http://forum.audacityteam.org/viewtopic.php?f=39&t=38134&p=99454#p99454
Another promising route might be ffmpeg, but all the options I found either normalise or tag the files, rather than simply printing a measurement. You might look into http://r128gain.sourceforge.net/ (it uses LUFS, a more sophisticated loudness measure).
Update: for a quick and dirty un-weighted RMS reading, looks like you can use the following command from https://trac.ffmpeg.org/wiki/AudioVolume :
ffmpeg -i input.wav -filter:a volumedetect -f null /dev/null
This question might be best migrated to Sound Design Stack Exchange.
I know how to change tempo with atempo, but the audio file becomes a bit distorted, and I can't find a reliable way to change pitch (say, to increase tempo and pitch together to 140%).
Sox has a speed option, but it truncates the volume and isn't as widely available as ffmpeg. mplayer has a speed option which works perfectly, but I can't output without additional libraries.
As I understand it, ffmpeg doesn't have a way to change pitch (maybe it does recently?), but is there a way to change the frequency, or some other flags, to emulate changing pitch? I've looked quite far and can't find a decent solution.
Edit: asetrate:48k*1.4 (assuming originally 48k) doesn't seem to work, still distortion and pitch doesn't really change much.
Edit2: https://superuser.com/a/1076762 this answer sort of works, but the quality is so much lower than sox speed 1.4 option
ffmpeg -i <input file name> -filter:a "asetrate=<new frequency>" -y <output file name> seems to be working for me. I checked the properties of both the input and output files with ffprobe, and there don't seem to be any differences that could affect quality. That said, a few of my runs produced files with some artifacts even though the command line was identical, which may be caused by an ffmpeg bug; try running it again if you aren't satisfied with the quality.
As of 2022 (though the filter was contributed in 2015), FFmpeg has a rubberband filter that works out of the box, without any of the aforementioned ugly, allegedly slow, poor-quality, or unintuitive workarounds.
To change the pitch using the rubberband filter, specify the pitch as a frequency ratio, computed as 2^(x/12), where x is the number of semitones you would like to transpose.
For example, to transpose up by one semitone you would use the following command:
ffmpeg -i my.mp3 -filter:a "rubberband=pitch=1.0594630943592953" my-up.mp3
(Note that -acodec copy cannot be combined with -filter:a; the filtered audio must be re-encoded.)
To transpose down, simply use a negative number for x.
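The ratio for any transposition can be computed the same way; a quick sketch (awk used only as a calculator, with x as the semitone count, negative to go down):

```shell
x=-1  # one semitone down
awk -v x="$x" 'BEGIN { printf "%.10f\n", 2 ^ (x / 12) }'
# prints 0.9438743127
```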
To alter both properties simultaneously, specify tempo and pitch values. The tempo value is specified as a multiple of the original speed.
The following command transposes down by one semitone and bumps the speed up 4x:
ffmpeg -i slow.mp3 -filter:a "rubberband=pitch=0.9438743126816935:tempo=4" fast.mp3
(Stream copy cannot be combined with a filter, so the audio is re-encoded.)
Quality degradation is imperceptible unless measured statistically.
I have a question about ffmpeg. First, the scenario: I am working on a project where I need some audio of a presenter talking, potentially mixed with some background music. I also have the requirement to normalize the audio, and I would like to do this without presenting a bunch of options to the user.
For normalization I use something similar to this post:
How to normalize audio with ffmpeg.
In short, I get a volume adjustment which I then apply to ffmpeg like this:
ffmpeg -i <input> -af "volume=xxxdB" <output>
So far so good. Now let's consider the backing track: it shouldn't be at the same volume as the presenter's voice, as that would be really distracting, so I want to lower it by some percentage. I can also do this with ffmpeg; for example, this sets the volume to 50%:
ffmpeg -i <input> -af "volume=0.5" <output>
Using these two commands back to back, I can get the desired result.
My question has two parts:
Is there a way to do this in one step?
Is there any benefit to doing it in one step?
Thanks for any help!
After testing some more, I think the answer is actually pretty straightforward; I just needed to do this:
ffmpeg -i <input> -af "volume=xxxdB,volume=0.5" <output>
It took me a while to realize; I had to try a few samples before I felt confident.
Earlier, I would write:
ffmpeg -i input.mp4 -sameq output.mp3
...and thus got the audio out of a video file. ffmpeg simply extracted or converted the audio to mp3 at an appropriate quality, all thanks to the -sameq flag [use same quantizer as source].
Now, in Ubuntu, we have libav instead of ffmpeg, and in the man page for avconv I see no -sameq option. So here is the question: what do I do now?
What do I do now to get a converted audio file with the same quality as the original?
PS. -sameq : Use same quantizer as source (implies VBR).
$ man ffmpeg | col -b > ./man_ffmpeg
The resulting man_ffmpeg is here: http://pastebin.com/qYxz1M1E
FFMPEG(1)
NAME
ffmpeg - ffmpeg video converter
SYNOPSIS
ffmpeg [[infile options][-i infile]]... {[outfile options] outfile}...
...
...
...
-sameq
Use same quantizer as source (implies VBR).
...
...
...
SEE ALSO
avplay(1), avprobe(1), avserver(1) and the Libav HTML documentation
AUTHORS
The Libav developers
2014-02-06
FFMPEG(1)
You are correct: the -sameq option was deprecated and then removed from avconv, for many reasons. Not the least of them is that there are different quantizers, and it makes little sense to talk about "the same quantizer parameters" when re-encoding between different codecs.
The majority of people, when re-encoding, are looking for quality, not quantizers. So they should use -qscale n, where n is between 1 and 31, representing quality from best to worst.
In a way, if you got used to the -sameq option, you fell victim to a tool that should have existed, at best, for testing purposes. It doesn't produce anything reasonable, and can be likened to trying to put "same metadata" into a container that doesn't support it, or doing "copy stream" into an archaic file format (leading to things like AVI with Vorbis audio, which can't even be played). You can hack something together that does all these things, but it has no place in a video encoding tool.
I suggest that if you are going to be doing a lot of stress testing of different containers and codecs, you install ffmpeg, which has more tools for creating frankensteins. If you are re-encoding in order to actually keep or distribute the files you produce, then you can create another question explaining your situation and your desired outcome.
In short, "How can I create a re-encoding process with exactly the same quantizer?" can only be answered with "no".
From ASP.NET, I am using FFmpeg to convert FLV files on a Flash Media Server to WAVs that I need to mix into a single MP3 file. I originally attempted this entirely with FFmpeg, but eventually gave up on the mixing step because I don't believe it is possible to combine audio-only tracks into a single result file. I would love to be wrong.
I am now using FFmpeg to access the FLV files and extract the audio track to WAV so that SoX can mix them. The problem is that I must offset one of the audio tracks by a few seconds so that they are synchronized. Each file is one half of a conversation between a student and a teacher; for example, teacher.wav might need to begin 3.3 seconds after student.wav. I can only figure out how to mix the files with SoX when both tracks begin at the same time.
My best attempt at this point is:
ffmpeg -y -i rtmp://server/appName/instance/student.flv -ac 1 student.wav
ffmpeg -y -i rtmp://server/appName/instance/teacher.flv -ac 1 teacher.wav
sox -m student.wav teacher.wav combined.mp3 splice 3.3
These tools (FFmpeg/SoX) were chosen based on my best research, but are not required. Any working solution would allow an ASP.NET service to take the two FMS FLVs as input and create a combined MP3 using open-source or free tools.
EDIT:
I was able to offset the files using the delay effect in SoX.
sox -M student.wav teacher.wav combined.mp3 delay 2.8
I'm leaving the question open in case someone has a better approach than the combined FFMPEG/SOX solution.
For what it's worth, this should be possible with a combination of -itsoffset and the amix filter, but a bug with -itsoffset prevents it. If it worked, the command would look something like this:
ffmpeg -i student.flv -itsoffset 3.3 -i teacher.flv -vn -filter_complex amix out.mp3
mixing can be pretty simple: how to mix two audio channels?
Well, I suggest you use Flash.
It may sound weird (correct me if I'm wrong), but with Flash's new multimedia abilities you can mix a couple of tracks.
I'm not sure, but I'm just trying to help you.
These two links may help you with your aim (especially the second one, I guess):
http://3d2f.com/programs/25-187-swf-to-mp3-converter-download.shtml
http://blog.debit.nl/2009/02/mp3-to-swf-converter-in-actionscript-3/