I have spent some time experimenting with the MPlayer slave-mode protocol: in a custom application I have two controls, one for changing pitch and one for changing speed.
This is easy to implement using the scaletempo filter and the *speed_set* / *speed_mult* commands from the MPlayer API.
There's a problem, however, if I try to modify pitch and speed independently. To give an example: I would like to be able to slow the speed down by, e.g., 20% while transposing the pitch up two or three semitones.
I've tried to do this by adding two scaletempo filters, but without success:
af_add scaletempo=scale=1.0:speed=pitch
speed_mult 1.1224620482959342
af_add scaletempo=scale=0.8:speed=tempo
This only changes the speed while preserving the original pitch.
Is there any other way to do this, with MPlayer or with any other media player?
Thanks in advance!
Interesting question. As far as MPlayer goes, here is one idea; it looks to be free. This may be more what you are after. Of course, you could go in a different direction with this; there's quite a bit of material on the net. I hope this helps you get started. Cheers!
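For context, here is a minimal sketch (TypeScript for Node.js) of how a custom application might drive MPlayer's slave mode, sending the same commands from the question over stdin. It assumes mplayer is on the PATH and was built with the scaletempo filter; the file name is a placeholder.

```typescript
// Minimal sketch: drive MPlayer's slave mode from Node.js.
// Assumes `mplayer` is on the PATH; "song.mp3" is a placeholder file name.
import { spawn } from "child_process";

const player = spawn("mplayer", ["-slave", "-quiet", "song.mp3"]);

// Slave-mode commands are plain text lines terminated by a newline on stdin.
function send(cmd: string): void {
  player.stdin.write(cmd + "\n");
}

// The filter chain from the question: scaletempo plus a speed change.
send("af_add scaletempo=scale=1.0:speed=pitch");
send("speed_mult 1.1224620482959342"); // 2^(2/12), i.e. two semitones
send("af_add scaletempo=scale=0.8:speed=tempo");

// Quit after 30 seconds so the sketch terminates on its own.
setTimeout(() => send("quit"), 30_000);
```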
I'm not sure if this is the right site for this question; please point me in the right direction if it isn't :)
I literally started using iMovie today, so I have no idea what anything is called; bear with me.
I have two audio clips next to each other in iMovie. I want one audio clip to end when one image changes to another image, and the next audio clip to start immediately thereafter.
For some reason, when I place these (disconnected) audio files next to each other, iMovie does an automatic crossfade between them that ruins the effect I was going for. I have put those fader circle thingies at the extreme ends of the audio clips so as to avoid any kind of fade, but iMovie still adds it by default. I have no idea how to remove this; I can't find anything in the settings that would work.
I'm probably missing something super obvious. Can someone point me in the right direction please?
Thanks.
I was having this problem too, and I managed to find a small workaround: when the audio clips touch or immediately follow on from each other, they will blend together, but if you change either clip's duration by even a tiny amount, they'll disconnect and won't mix.
How can I get the full waveform of a sound, or even better, the amplitude (volume) of the sound at regular time intervals?
In fact I need both: the full waveform, to show an overview of the song, and the per-interval measurement, for visual effects.
This is for Android (NDK).
Come on, people, I'm not asking for a full code answer; I just want some advice or anything that can help me. You could even simply say that the question is hard or makes no sense, but say something.
Anyway, I researched a little and didn't find an answer to the question as asked, but I did find a better solution to the problem: a free library named "Superpowered". It is simple, fast, and cross-platform, and it has all the functions needed to analyze sounds.
I hope this helps people who are new to the world of sound programming.
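Library aside, the per-interval measurement itself is simple once you have decoded PCM samples. Here is a minimal sketch (TypeScript; it assumes mono float samples in the range -1..1 and a known sample rate): split the signal into fixed windows and compute the RMS amplitude of each, which is what you would plot as a coarse waveform or feed into visual effects.

```typescript
// Minimal sketch: per-window RMS amplitude of decoded PCM samples.
// Assumes mono Float32 samples in [-1, 1]; decoding to PCM is up to the caller.
function rmsPerWindow(
  samples: Float32Array,
  sampleRate: number,
  windowMs: number
): number[] {
  const windowSize = Math.max(1, Math.round((sampleRate * windowMs) / 1000));
  const result: number[] = [];
  for (let start = 0; start < samples.length; start += windowSize) {
    const end = Math.min(start + windowSize, samples.length);
    let sumSquares = 0;
    for (let i = start; i < end; i++) {
      sumSquares += samples[i] * samples[i];
    }
    result.push(Math.sqrt(sumSquares / (end - start)));
  }
  return result;
}

// Example: 50 ms windows at 44.1 kHz give ~20 amplitude values per second.
// const envelope = rmsPerWindow(decodedSamples, 44100, 50);
```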
Let's say I have two separate recordings of the same concert (each created on a user's phone and then uploaded to our server). These recordings are then aligned according to their creation timestamps. However, when the recordings are played together or quickly toggled between, it becomes clear that the creation timestamps must be off, because there is a perceptible delay.
Since the timestamp is not a reliable way to align these recordings, what is an alternative? I would really prefer not to have to learn about audio signal processing to solve this problem, but I recognize this may be the only way. So, I guess my question is:
1. Can I get away with doing some kind of clock synchronization? Is that even possible if the internal device clocks are clearly off by an unknown amount? If yes, a general outline of how this would work, and some keywords, would be appreciated.
2. If #1 is not an option, I guess I need to learn about audio signal processing? Again, a general outline of how to tackle the problem from that angle, and some keywords, would be appreciated.
There are two separate issues you need to deal with. Issue 1 is the alignment of the start times of the recordings. I doubt you can expect that both users pressed record at the exact same moment, and even if they did, they may be located at different distances from the speakers, and it takes time for sound to travel. Aligning the start times by hand is pretty trivial; the human brain is good at comparing the similarities of sounds. Programmatically it's a different story. You might try something like cross-correlation, or look over on dsp.stackexchange.com. There is no exact method, though.
Issue 2 is that the clocks driving the A/D converters on the two devices are not going to be running at exactly the same rate. So even if you synchronize the start times, the two recordings will eventually drift apart. The time it takes for the drift to become noticeable is a function of the difference between the two clock frequencies; if they are relatively close, you may not notice it in a short recording. To counteract this you need to time-stretch one of the recordings, which increases or decreases its duration without affecting the pitch. There are plenty of audio apps that let you time-stretch, but they don't give you any help in figuring out by how much. Start by googling "time stretching", or again have a look at dsp.stackexchange.com.
I realize neither of these is a direct answer; they are suggestions, rather.
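For issue 1, here is a minimal sketch of the brute-force cross-correlation idea (TypeScript; it assumes both recordings have been decoded to mono float arrays at the same sample rate, and it only searches lags up to a chosen maximum). The lag with the highest correlation score is the estimated offset between the two recordings.

```typescript
// Minimal sketch: estimate the offset between two recordings by brute-force
// cross-correlation. Assumes mono Float32 samples at the same sample rate.
// The returned lag is the shift (in samples) that best aligns `b` to `a`,
// i.e. a[i] lines up with b[i + lag].
function estimateOffset(a: Float32Array, b: Float32Array, maxLagSamples: number): number {
  let bestLag = 0;
  let bestScore = -Infinity;
  for (let lag = -maxLagSamples; lag <= maxLagSamples; lag++) {
    let score = 0;
    for (let i = 0; i < a.length; i++) {
      const j = i + lag;
      if (j >= 0 && j < b.length) score += a[i] * b[j];
    }
    if (score > bestScore) {
      bestScore = score;
      bestLag = lag;
    }
  }
  return bestLag;
}

// Example: search +/- 5 seconds at 44.1 kHz on short excerpts of each recording:
// const lag = estimateOffset(excerptA, excerptB, 5 * 44100);
// This is O(N * maxLag); for long recordings, use FFT-based correlation instead.
```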
Take a look at this document; it describes how you can align recordings using Sonic Visualizer (GPL) and a plugin.
I've not used it before, but found the document (and this question) when I was faced with a similar problem.
I am trying to change the pitch of a buffer sample using a ScriptProcessorNode, but what kind of formulas do I need to do this? I am not looking for exact JS code, just for some general mathematical how-to. That said, I would love to have some code for this, as the first answer I found has a lot of formulas that I have no idea how to implement in JS.
I know that this has to do with manipulating time, but according to this it can also be done with the FFT, and I have no idea how one would do that.
For one method of doing time-pitch modification using an FFT, look up phase vocoder. Here's one explanation of how a phase vocoder works (but a search will turn up many others): http://www.guitarpitchshifter.com/algorithm.html
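One common way to see the math: a pitch shift by a ratio r can be decomposed into a time stretch by r (the phase vocoder's job, which changes duration but not pitch) followed by resampling at r times the rate, which restores the original duration and leaves only the pitch changed. The resampling half is straightforward; here is a minimal sketch in TypeScript using linear interpolation (the time-stretch half is the part the phase vocoder material covers).

```typescript
// Minimal sketch: the resampling half of a pitch shift.
// Reading the input at `ratio` times the normal rate shortens it by `ratio`
// and raises the pitch by `ratio`; combined with a time stretch by the same
// ratio (e.g. from a phase vocoder), only the pitch change remains.
function resample(input: Float32Array, ratio: number): Float32Array {
  const outLength = Math.floor(input.length / ratio);
  const output = new Float32Array(outLength);
  for (let i = 0; i < outLength; i++) {
    const pos = i * ratio;               // fractional read position in the input
    const i0 = Math.floor(pos);
    const i1 = Math.min(i0 + 1, input.length - 1);
    const frac = pos - i0;
    output[i] = input[i0] * (1 - frac) + input[i1] * frac; // linear interpolation
  }
  return output;
}

// Example: two semitones up is ratio = Math.pow(2, 2 / 12) ≈ 1.122.
```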
I believe https://github.com/mikolalysenko/pitch-shift would be appropriate (the quality is not on par with other code, but this library is rather easy to understand/use). You can hear a demo at http://mikolalysenko.github.io/pitch-shift/.
I'm working on a simple music visualization. Probably not relevant, but I am doing the sound processing using the new WebKit Audio Data API and the dsp.js library.
I want to make a text vibrate (grow/shrink) to the rhythm of the music. What is the best way to do this?
What I've done so far is run the signal through an FFT. I look at the bottom 10% of frequencies (bass notes?), and when the amplitude surpasses a certain threshold, I animate the text.
Does this sound right? Or am I completely off?
You say you've done it, and then you ask if you are way off? Well, you tell us: does it work for your application?
One potential problem is that the FFT is relatively expensive: there may be a lag between your input and output, and it will use a lot of CPU. I don't expect this will matter for your application, but in general you are better off using a low-pass filter. When the output of the low-pass goes above some level, you can use that to trigger something for a short amount of time.
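For reference, here is a minimal sketch of one variant of that idea (TypeScript): a one-pole low-pass filter on the rectified signal (a rough envelope follower), with a threshold trigger and a short hold-off. The smoothing factor, threshold, and hold time are made-up placeholder values you would tune by ear.

```typescript
// Minimal sketch: one-pole low-pass on the rectified signal plus a threshold
// trigger. All constants are placeholder values to tune for your material.
class BeatTrigger {
  private lowpassed = 0;
  private holdSamplesLeft = 0;

  constructor(
    private readonly alpha = 0.01,      // one-pole smoothing factor (~cutoff)
    private readonly threshold = 0.2,   // trigger level on the filtered signal
    private readonly holdSamples = 4410 // ~100 ms at 44.1 kHz before re-triggering
  ) {}

  // Feed one block of samples; returns true if a beat was detected in it.
  process(block: Float32Array): boolean {
    let triggered = false;
    for (const x of block) {
      // One-pole low-pass on the absolute value: a rough energy envelope.
      this.lowpassed += this.alpha * (Math.abs(x) - this.lowpassed);
      if (this.holdSamplesLeft > 0) {
        this.holdSamplesLeft--;
      } else if (this.lowpassed > this.threshold) {
        triggered = true;
        this.holdSamplesLeft = this.holdSamples; // hold off to avoid retriggers
      }
    }
    return triggered;
  }
}

// Example: call trigger.process(samples) for each audio block and scale the
// text for a short time whenever it returns true.
```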
Another issue is simply that this is only a very basic beat-detection algorithm. It might work for bass-heavy "four on the floor" music, but you'll need to figure out where the threshold goes and how to adapt it when the bass stops, and so on. You may want to research beat-detection algorithms; the open-source aubio library has some: http://aubio.org/