I'm not sure if this is the right site for this question; please point me in the right direction if it isn't :)
I literally started using iMovie today, so I have no idea what anything's called; bear with me.
I have two audio clips next to each other in iMovie. I want one audio clip to end when one image changes to another image, and the next audio clip to start immediately thereafter.
For some reason, when I place these (disconnected) audio files next to each other, iMovie does an automatic crossfade between them that ruins the effect I was going for. I have put those fader circle thingies at the extreme ends of the audio clips so as to avoid any kind of fade, but iMovie still adds it by default. I have no idea how to remove this; I can't find anything in the settings that would work.
I'm probably missing something super obvious. Can someone point me in the right direction please?
Thanks.
I was having this problem too, and I managed to find a small workaround: when the audio clips touch/immediately follow on from each other, they will blend together, but if you change either clip's duration by even a tiny amount, they'll disconnect and won't mix.
My Zoom H4n somehow decided it didn't want to properly save two recordings this weekend, leaving me with four zero-byte files (which I have tried every which way to open/convert, but nothing has worked).
I then used CardRescue to scan the SD card for any audio it could find, and, lo and behold, I got .wav files! However, instead of two files for each session (one the XLR output from the desk, the other the on-Zoom mics), or even a nice stereo file with one on the left and the other on the right, I have a mess.
When importing as raw data into Audacity (the rescued .wavs themselves do not open), the right channel has the on-Zoom mic audio, with intermittent silence. The left has the on-Zoom audio, followed by the same part of the XLR input audio. This follows the same pattern as the silences.
I have spent hours chopping it up in GarageBand, but as it is audio for a video, it needs to match what 'really' happened perfectly (I appreciate that for a podcast/audio-only project I could fairly simply take away the on-Zoom mic audio from the left channel). I began attempting to sync the mic audio to the on-camera audio (which, despite playing around with settings, is as unusable as it always is), but because it's a regular pattern, I can't help but wonder if there's a cleaner fix: either analysing the audio somehow, since there are clean lines when I look at the spectral data, or adding a couple of numbers to the wav's binary that would click the two into place?
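To show what I mean by analysing it, this is the rough kind of thing I picture (just a sketch, assuming the data can be read as 16-bit little-endian PCM, the same settings I used for the raw import into Audacity; the filename, sample rate and threshold are placeholders):

```python
import numpy as np

# Sketch: read the rescued data as raw 16-bit PCM, de-interleave the stereo
# channels, and find the silent spans in the right channel so the left channel
# could be cut at the same points. All values here are placeholders to tune.
rate = 44100
data = np.fromfile("rescued_raw.pcm", dtype="<i2")
left, right = data[0::2], data[1::2]          # de-interleave stereo

window = rate // 10                           # 100 ms analysis windows
n = len(right) // window
rms = np.sqrt(np.mean(
    right[: n * window].astype(np.float64).reshape(n, window) ** 2, axis=1))

silent = rms < 200                            # "silence" threshold, chosen by eye
edges = np.flatnonzero(np.diff(silent.astype(int)))
for e in edges:                               # boundaries where silence starts/ends
    print(round((e + 1) * window / rate, 2), "seconds")
```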
I've tried importing into Audacity with different settings and different offsets; this has resulted in either slow audio, fast audio, or heavily distorted audio (but always the same patterns in the files).
I use a Mac (and don't know any PC users close by!) so any software suggestions will need to run on Mac. However, I'm willing to try just about anything that's not dragging tiny clips.
I just want to see exactly how this works, and I can't seem to find it on either moviepy's or pygame's website. Basically, I just want to see at what time a user presses a specific key during a clip, record that time, and possibly insert an image at that point while the movie is playing. I know moviepy does that already to some extent, but only for mouse clicks.
Thank you for your time.
I found the source code but no answer. I ended up editing the source code, and while that works, I would much rather avoid that if possible.
To give a more elaborate answer to the rest of my question: I don't think it's feasible to directly edit the video file WHILE it's playing. I also don't know if it would be a good idea to save every single frame and just combine them. I was able to find an efficient, if niche, solution by modifying the preview frame while it plays and having that change persist across every new frame. Then I saved JUST the overlay to a file, and I can use that however else I like.
I have seen no other threads/users actually deal with moviepy in this way, so feel free to PM me or ask on the thread if you want more info.
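For anyone who just wants the general idea without patching moviepy itself, here is a minimal sketch of the approach (not my actual modified preview, only an illustration; the filename, fps and key are placeholders): step through the frames yourself with pygame and log the clip time of each key press.

```python
import pygame
from moviepy.editor import VideoFileClip

# Sketch only: display each frame with pygame and record when the space bar
# is pressed, relative to the clip's timeline. "movie.mp4" is a placeholder.
clip = VideoFileClip("movie.mp4")
fps = 24
pygame.init()
screen = pygame.display.set_mode(clip.size)
pressed_times = []
clock = pygame.time.Clock()

t = 0.0
for frame in clip.iter_frames(fps=fps, dtype="uint8"):
    # moviepy frames are (height, width, 3); pygame surfaces want (width, height, 3)
    surface = pygame.surfarray.make_surface(frame.swapaxes(0, 1))
    screen.blit(surface, (0, 0))
    pygame.display.flip()
    for event in pygame.event.get():
        if event.type == pygame.KEYDOWN and event.key == pygame.K_SPACE:
            pressed_times.append(t)        # clip time (seconds) of the key press
        elif event.type == pygame.QUIT:
            raise SystemExit
    t += 1.0 / fps
    clock.tick(fps)

pygame.quit()
print(pressed_times)
```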
Source code here
Let's say I have two separate recordings of the same concert (each created on a user's phone and then uploaded to our server). These recordings are then aligned according to their creation timestamps. However, when the recordings are played together or quickly toggled between, it becomes clear that the creation timestamps must be off, because there is a perceptible delay.
Since the timestamp is not a reliable way to align these recordings, what is an alternative? I would really prefer not to have to learn about audio signal processing to solve this problem, but I recognize that may be the only way. So, I guess my question is:
1. Can I get away with doing some kind of clock synchronization? Is that even possible if the internal device clocks are clearly off by an unknown amount? If yes, a general outline of how this would work and key words would be appreciated.
2. If #1 is not an option, I guess I need to learn about audio signal processing? Again, a general outline of how to tackle the problem from that angle and some key words would be appreciated.
There are two separate issues you need to deal with. Issue 1 is aligning the start times of the recordings. I doubt you can expect that both users pressed record at the exact same moment. Even if they did, they may be at different distances from the speakers, and sound takes time to travel. Aligning the start times by hand is pretty trivial; the human brain is good at comparing sounds for similarity. Programmatically it's a different story. You might try something like cross-correlation, or look over on dsp.stackexchange.com. There is no exact method, though.
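As a rough illustration of the cross-correlation idea (a sketch only; it assumes both recordings are already mono numpy arrays at the same sample rate, and the data below is made up):

```python
import numpy as np
from scipy.signal import correlate

# Sketch: estimate by how many samples recording b lags recording a.
# a and b stand in for the two recordings (mono, same sample rate).
rate = 44100
a = np.random.randn(rate * 10)                                # placeholder signal
b = np.concatenate([np.zeros(rate // 2), a])[: rate * 10]     # b starts 0.5 s late

# Cross-correlate a short chunk to keep it fast, then find the peak lag.
chunk = rate * 5
corr = correlate(b[:chunk], a[:chunk], mode="full")
lag = np.argmax(corr) - (chunk - 1)
print("b lags a by", lag / rate, "seconds")
```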
Issue 2 is that the clocks driving the A/D converters on the two devices are not going to run at exactly the same rate. So even if you synchronize the start times, the two recordings will eventually drift apart. The time it takes for the drift to become noticeable is a function of the difference between the two clock frequencies; if they are relatively close, you may not notice it in a short recording. To counteract this you need to time-stretch one of the recordings, which increases or decreases its duration without affecting the pitch. There are plenty of audio apps that let you time-stretch, but they don't give you any help in figuring out by how much. Start by googling "time stretching", or again have a look at dsp.stackexchange.com.
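And a sketch of the time-stretching side, assuming you have already estimated how much one recording drifts relative to the other (the filenames and the 1.0005 factor are made up; librosa and soundfile are just one convenient pair of libraries for this):

```python
import librosa
import soundfile as sf

# Sketch: stretch recording b by an estimated clock-rate ratio so it stays
# in sync with recording a. The factor here is a placeholder; rate > 1
# speeds the audio up (shortens it), rate < 1 slows it down.
y, sr = librosa.load("recording_b.wav", sr=None)
stretched = librosa.effects.time_stretch(y, rate=1.0005)
sf.write("recording_b_stretched.wav", stretched, sr)
```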
I realize neither of these is a direct answer; they're suggestions.
Take a look at this document; it describes how you can align recordings using Sonic Visualiser (GPL) and a plugin.
I've not used it before, but found the document (and this question) when I was faced with a similar problem.
For one of our projects, we got a new requirement on our hands, but I don't have any idea how to do it.
We need to process audio captured from the environment and do something in the app (show a message, picture, etc) when a specific pattern is recognized.
The first thing that came to my mind when I heard this requirement was Shazam. I did a little research and found the Echoprint library (http://echoprint.me). I think that works on whole songs, whereas what I need to do is constantly listen to the environment and act when the patterns are recognized. I don't know anything about audio processing (at least for now), but this sounds more like steganography to me. Correct me if I'm wrong.
Any help will be greatly appreciated!
EDIT: I think I need to correct some points in my question. Yes, the application will listen to the environment, but it will recognize patterns in pre-configured audio: a specific song on the radio, a dog bark, etc. So the patterns will be defined in the audio ahead of time.
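To make "a specific pattern" a bit more concrete, this is the naive kind of matching I have in mind, just as a sketch (I realize real fingerprinting like Echoprint is far more robust than this; filenames are placeholders):

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram, correlate2d

# Naive sketch: match a short pre-configured pattern against a longer recording
# by cross-correlating their log-spectrograms. Far less robust than real
# fingerprinting, but it shows the idea of "pattern defined ahead of time".
def log_spec(path):
    rate, audio = wavfile.read(path)
    if audio.ndim > 1:                       # mix down to mono if needed
        audio = audio.mean(axis=1)
    _, _, s = spectrogram(audio.astype(np.float64), fs=rate, nperseg=1024)
    return np.log(s + 1e-10)

pattern = log_spec("dog_bark.wav")           # placeholder pattern file
recording = log_spec("environment.wav")      # placeholder captured audio

score = correlate2d(recording, pattern, mode="valid")   # slide pattern over time
print("best match score:", score.max())      # compare against a tuned threshold
```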
Thanks.
I have spent some time experimenting with MPlayer's slave mode protocol. In a custom application I have two controls: one for changing pitch and one for changing speed.
This is easy to implement using the scaletempo filter and *speed_set* / *speed_mult* commands from the MPlayer API.
There's a problem, however, if I try to modify pitch and speed independently. To give an example: I would like to be able to slow down the speed by, e.g., 20%, while transposing the pitch up two or three semitones.
I've tried to do this by adding two scaletempo filters, but without success:
af_add scaletempo=scale=1.0:speed=pitch
speed_mult 1.1224620482959342
af_add scaletempo=scale=0.8:speed=tempo
This method only changes the speed while preserving the original pitch, so the transposition never takes effect.
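(For reference, the speed_mult value above is just the equal-temperament ratio for two semitones; a quick sketch of the arithmetic:)

```python
# Each semitone multiplies the playback rate by 2**(1/12) in equal temperament.
semitones = 2
ratio = 2 ** (semitones / 12)
print(ratio)   # ~1.1224620483, which is where the speed_mult value comes from
```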
Is there another way to do this with MPlayer, or with any other media player?
Thanks in advance!
Interesting question. As far as MPlayer goes, here is one idea; it looks to be free. This may be more what you are after. Of course, you could go in a different direction with this. There's quite a bit of stuff on the net. I hope this helps you get started! Cheers!