Is there any way to start playing a track from a certain time spot? I found this old hints & tips post, but it didn't work when I tried it in a libspotify program. Any update on this?
Thanks.
In libSpotify, you need to start playing the track, then immediately seek to the offset you want.
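If you happen to be driving libSpotify from Python through the pyspotify bindings rather than the C API directly, the same play-then-seek pattern looks roughly like this. This is a minimal sketch, not a tested snippet: it assumes a session that gets logged in and has an audio sink attached, and the track URI is a placeholder.

    import spotify

    # Real code needs the login / event-loop plumbing and an audio sink
    # (e.g. spotify.AlsaSink(session)); that is omitted here.
    config = spotify.Config()
    config.load_application_key_file('spotify_appkey.key')
    session = spotify.Session(config)
    # ... log in and process events until the session is connected ...

    track = session.get_track('spotify:track:...').load()

    session.player.load(track)
    session.player.play()            # start playback first...
    session.player.seek(60 * 1000)   # ...then immediately seek (offset in ms)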
I'm not sure if this is the right site for this question; please point me in the right direction if it isn't :)
I literally started using iMovie today, so I have no idea what anything's called; bear with me.
I have two audio clips next to each other in iMovie. I want one audio clip to end when one image changes to another image, and the next audio clip to start immediately thereafter.
For some reason, when I place these (disconnected) audio files next to each other, iMovie does an automatic crossfade between them that ruins the effect I was going for. I have put those fader circle thingies at the extreme ends of the audio clips so as to avoid any kind of fade, but iMovie still adds it by default. I have no idea how to remove this; I can't find anything in settings that would work.
I'm probably missing something super obvious. Can someone point me in the right direction please?
Thanks.
I was having this problem too, and I managed to find a small workaround - when the audio clips touch/immediately follow on from each other, they will blend together, but if you change either clip's duration by even a tiny amount, they'll disconnect and won't mix.
I just want to see how exactly key-press events work during playback, and I can't seem to find them documented on either moviepy's or pygame's website. Basically I just want to see at what time a user presses a specific key during a clip, and record that time/possibly insert an image at that time while the movie is playing. I know moviepy does that already to some extent, but it's only for mouse clicks.
Thank you for your time.
I found the source code but no answer. I ended up editing the source code, and while that works, I would much rather do something other than that if possible.
To give a more elaborate answer to the rest of my question: I don't think it's feasible to directly edit the video file WHILE it's playing. I also don't know if it would be a good idea to save every single frame and just combine them. I was able to find an extremely efficient, but niche, solution by modifying the preview frame while it plays and having that change persist across every new frame. Then I saved JUST the overlay to a file, and can use that however else I feel.
I have seen no other threads/users actually deal with moviepy in this way, so feel free to PM me or ask on the thread if you want more info.
Source code here
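In case it helps anyone later: if all you need is the time of each key press during playback (without patching moviepy), a rough sketch of the idea is below. It steps through the clip's frames itself and polls pygame for keyboard events; the function name and the choice of spacebar are mine, the moviepy/pygame calls are the standard ones, but treat it as an untested outline rather than moviepy's own preview mechanism.

    import pygame
    from moviepy.editor import VideoFileClip

    def collect_keypress_times(path, key=pygame.K_SPACE, fps=24):
        """Step through a clip with pygame and record the clip time (in
        seconds) at every press of `key`. Rough sketch, not moviepy's
        built-in preview."""
        clip = VideoFileClip(path)
        pygame.init()
        screen = pygame.display.set_mode(clip.size)
        clock = pygame.time.Clock()

        press_times = []
        t = 0.0
        for frame in clip.iter_frames(fps=fps, dtype='uint8'):
            # moviepy frames are (height, width, 3); pygame surfaces want
            # (width, height, 3), hence the axis swap.
            surface = pygame.surfarray.make_surface(frame.swapaxes(0, 1))
            screen.blit(surface, (0, 0))
            pygame.display.flip()

            for event in pygame.event.get():
                if event.type == pygame.KEYDOWN and event.key == key:
                    press_times.append(t)
                elif event.type == pygame.QUIT:
                    pygame.quit()
                    return press_times

            clock.tick(fps)   # roughly hold the playback rate
            t += 1.0 / fps

        pygame.quit()
        return press_times

    print(collect_keypress_times('my_clip.mp4'))  # path is a placeholder

Once you have the timestamps, you can feed them back into moviepy (for example as start times for an overlay) in a separate render pass.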
Let's say I have two separate recordings of the same concert (created on a user's phone and then uploaded to our server). These recordings are then aligned according to their creation timestamp. However, when these recordings are played together or quickly toggled between, it is revealed that their creation timestamps must be off because there is a perceptible delay.
Since the time stamp is not a reliable way to align these recordings, what is an alternative? I would really prefer not to have to learn about audio signal processing to solve this problem, but recognize this may be the only way. So, I guess my question is:
1. Can I get away with doing some kind of clock synchronization? Is that even possible if the internal device clocks are clearly off by an unknown amount? If so, a general outline of how this would work and key words would be appreciated.
2. If #1 is not an option, I guess I need to learn about audio signal processing? Again, a general outline of how to tackle the problem from that angle and some key words would be appreciated.
There are two separate issues you need to deal with. Issue 1 is the alignment of the start times of the recordings. I doubt you can expect that both users pressed record at the exact same moment. Even if they did, they may be located at different distances from the speakers, and it takes time for sound to travel. Aligning the start times by hand is pretty trivial; the human brain is good at comparing the similarities of sound. Programmatically it's a different story. You might try using something like cross-correlation, or look over on dsp.stackexchange.com. There is no exact method, though.
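If you do try the cross-correlation route, the core of it is small. A minimal numpy/scipy sketch, assuming both recordings have already been decoded to mono WAV at the same sample rate (the file names are placeholders):

    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import correlate

    def estimate_offset_seconds(path_a, path_b):
        """Estimate the start-time offset between two recordings of the
        same event. A positive result means recording B started roughly
        that many seconds after recording A. Assumes mono WAVs with the
        same sample rate."""
        rate_a, a = wavfile.read(path_a)
        rate_b, b = wavfile.read(path_b)
        assert rate_a == rate_b, "resample first if the rates differ"

        a = a.astype(np.float64)
        b = b.astype(np.float64)

        # Full cross-correlation; the index of the peak gives the lag
        # (in samples) at which the two signals line up best.
        xcorr = correlate(a, b, mode='full')
        lag = int(np.argmax(xcorr)) - (len(b) - 1)
        return lag / rate_a

    # offset = estimate_offset_seconds('phone1.wav', 'phone2.wav')

On long recordings, correlating only a short excerpt (a few seconds around a loud, distinctive moment) is much faster and usually just as reliable.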
Issue 2 is that the clocks driving the A/D converters on the two devices are not going to be running at the exact same rate. So even if you synchronize the start times, eventually the two are going to drift apart. The time it takes to noticeably drift is a function of the difference between the two clock frequencies; if they are relatively close you may not notice it in a short recording. To counteract this you need to time-stretch one of the recordings, which increases or decreases its duration without affecting the pitch. There are plenty of audio recording apps that let you time-stretch, but they don't give you any help in figuring out by how much. Start by googling "time stretching" or, again, have a look at dsp.stackexchange.com.
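For the stretching step itself, librosa's time stretch keeps the pitch intact. A rough sketch, assuming you have already estimated the required ratio (for example by running the same alignment on a window near the start and another near the end and comparing the two offsets); the ratio shown in the comment is illustrative only:

    import librosa
    import soundfile as sf

    def correct_drift(path, stretch_ratio, out_path):
        """Time-stretch a recording to compensate for clock drift without
        changing its pitch. stretch_ratio > 1 shortens the audio, < 1
        lengthens it; estimating the right value is the hard part."""
        y, sr = librosa.load(path, sr=None, mono=True)
        y_fixed = librosa.effects.time_stretch(y, rate=stretch_ratio)
        sf.write(out_path, y_fixed, sr)

    # correct_drift('phone2.wav', 1.0002, 'phone2_stretched.wav')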
I realize neither of these is a direct answer - they're more suggestions.
Take a look at this document; it describes how you can align recordings using Sonic Visualizer (GPL) and a plugin.
I've not used it before, but found the document (and this question) when I was faced with a similar problem.
How can I increase the animation speed in Corona? Changing the sprite.timeScale property makes the animation restart.
I don't have an answer for the first question, but for the second one: we know the forum search is borked and we are in the process of fixing it, but in the meantime your best search results come from Google. Do your Google search like:
site:coronalabs.com magic formula
and that will give you the best search results until the problem gets solved.
What's a quick and easy way to find out how much silence is at the start of an MP3? I know there's a lot that goes into that... I don't need anything too precise. Within 50 or so milliseconds is great.
Note that I don't want to remove the silence. I just want to find out the length of it.
Also, I need to do this with some 1000 files, so a scripting solution would be great.
Open it in Audacity, scale it so you get the resolution you want, and eyeball it.
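If you do want the scripted route for the ~1000 files, pydub ships a helper aimed at exactly this. A small sketch, assuming pydub with ffmpeg available for MP3 decoding; the directory name is a placeholder, and the default 10 ms chunk size is comfortably inside the 50 ms tolerance:

    from pathlib import Path
    from pydub import AudioSegment
    from pydub.silence import detect_leading_silence

    def leading_silence_ms(path, threshold_dbfs=-50.0):
        """Length of silence at the start of an audio file, in milliseconds.
        The threshold is a judgement call; tweak it for your material."""
        audio = AudioSegment.from_file(str(path))
        return detect_leading_silence(audio, silence_threshold=threshold_dbfs)

    for mp3 in sorted(Path('mp3s').glob('*.mp3')):
        print(mp3.name, leading_silence_ms(mp3), 'ms')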