Safari: AudioContext MediaElementAudioSourceNode does not respect playbackRate

I'm currently working on an application that uses the AudioContext API to control audio for both video clips and background audio. We would like to use the AudioContext (and therefore MediaElementAudioSourceNodes) so we can adjust the audio programmatically.
Because the application syncs media to a timeline, it often has to adjust the playbackRate of a media element to catch up. In Chrome this works fine: you adjust playbackRate and the media speeds up or slows down accordingly. In Safari, however, any audio piped through a MediaElementAudioSourceNode ignores the changed playbackRate: it plays at normal speed and then sputters out after a few seconds. (Safari does respect playbackRate when the audio is played directly from the media element, notably without pitch correction, but that is a separate, known issue.)
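In code form, the failing pattern is roughly the following (a minimal sketch; the element ID and rate are placeholders):

```javascript
// Route an <audio>/<video> element through the Web Audio graph.
const mediaEl = document.querySelector('#player'); // placeholder ID
const ctx = new (window.AudioContext || window.webkitAudioContext)();
const source = ctx.createMediaElementSource(mediaEl);
source.connect(ctx.destination);

mediaEl.play();
// Chrome: the audio speeds up accordingly. Safari: the audio stays
// at 1x through the MediaElementAudioSourceNode, then sputters out.
mediaEl.playbackRate = 1.5;
```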
Here's a CodeSandbox that replicates the issue. The first player on the page plays audio directly from the HTMLMediaElement, whereas the second pipes it through a MediaElementAudioSourceNode.
We've tried a couple of other avenues, such as using an AudioBufferSourceNode as the audio source, but given the size of the clips we often work with, that is not a desirable option. If at all possible, we would like to keep using the AudioContext API in both Chrome and Safari.
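For context, the AudioBufferSourceNode route we ruled out looks roughly like this (a sketch, assuming the clip is fetchable as a URL; older Safari versions only support the callback form of decodeAudioData). Its playbackRate is honored in Safari, but the whole clip must be decoded into memory first:

```javascript
// Decode the entire clip up front and play it from a buffer.
async function playBuffered(url, rate) {
  const ctx = new (window.AudioContext || window.webkitAudioContext)();
  const response = await fetch(url);
  const audioBuffer = await ctx.decodeAudioData(await response.arrayBuffer());

  const node = ctx.createBufferSource();
  node.buffer = audioBuffer;      // entire decoded clip held in memory
  node.playbackRate.value = rate; // respected in Safari, unlike the element route
  node.connect(ctx.destination);
  node.start();
}
```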

Related

Blender VSE Audio out-of-sync when animation (video) is rendered

Ok, so I found out that Blender has this really cool video-editing interface and I was beginning to love it. Until I created this awesome project composition, exported the animation as a video file, and found the audio was out of sync :(.
Actual Problem
Audio is in-sync with video when the animation is played in Blender but is out-of-sync in the rendered video.
Solutions I tried out and failed
I used the 'Audio-Sync' option in the sequencer, but that made no difference.
Then I thought my scene audio frequency might be the issue, since it was initially 48 kHz and my videos were at 24 kHz, so I changed the scene audio frequency to 24 kHz; this still failed to solve the issue.
Initially, I was combining videos with different frame rates and thought that might be the issue (although the animation played as expected in Blender), so I recreated the source videos to ensure all the videos in my project had the same frame rate, but this also did not work.
Someone online suggested exporting the video and audio separately and then combining them with a command-line tool like FFmpeg, but this also failed.
What's really frustrating
This lag (the audio is a few frames ahead of the video) is noticeable only in longer videos (>12 min; my video is 1 hr long), suggesting a very small rate difference between the rendered video and audio.
Also, note that the animation plays absolutely fine in Blender, so all I could figure out was that this was a rendering issue.
So if anyone figured this out please let me know. I am a noob in video/audio codecs so please forgive me if I used some incorrect nomenclature above.
I encountered this issue with an OBS capture (a 13-minute clip) in Blender 2.93.3. The OBS capture is constant frame rate at 60 fps; I also tried a HandBrake conversion to 60 fps constant frame rate, with no help. The workaround that solves the issue is to set Blender's rendering fps to 59.94: the sequencer shows the audio track extending past the video track, but after rendering everything matches perfectly. Unfortunately, you cannot edit the video in 59.94 fps mode, so you need to switch back to 60 fps for editing.
If your video is 24 fps, use the 23.98 fps preset; for 30 fps, use the 29.97 fps preset.
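A rough back-of-the-envelope for why the fractional preset helps (assuming the capture's true rate is the NTSC 60000/1001 ≈ 59.94 fps while the audio clock is accurate): a 13-minute clip holds about 780 s × 59.94 ≈ 46,753 frames; played back at a flat 60 fps those frames last only ≈ 779.2 s, so the video finishes ≈ 0.78 s before the audio, roughly 47 frames of drift, which is easily noticeable.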
May 2021, Blender v2.92.0 - I experienced the same out-of-sync problem described above with rendered videos that were over five minutes long. The source was an as-is (3.6 GB, 10 min) file from a Canon EOS 5D Mk II, which is an old camera, so pretty much any software can handle the encoding.
In Blender's preview mode everything looks in sync. The audio and video tracks are the same length, and I didn't even cut or merge any segments of the source video. I tried rendering after a clean boot, gave Blender the highest resource priority in Win10, allocated more memory to caching, etc. Source and output were on an SSD. The rendered result still didn't match what the GUI showed. Very frustrating, and a lot of wasted time.
What worked better for me is the following:
1. Change the video codec to "FFmpeg video codec #1". This produces a lossless file that is about 27 times bigger (13.8 GB for 10 min) than the H.264 file (0.5 GB). However, the audio remains in sync all the way through.
2. Use the HandBrake open-source video transcoder to convert the FFmpeg file into H.264 (or H.265). The end result is a smallish file with A/V in sync.
This workaround is relatively painless and produces good-quality results because there is only a single lossy compression step. The time required to get to the final file more than triples, though. I believe the issue continues to be with the way H.264 rendering is implemented in Blender. I also experienced similar out-of-sync issues in Shotcut a year ago while working with cheap action-cam H.265 files; I also found Shotcut to be less stable than Blender.
So after a lot of online searching, I did find a way to fix this problem, but not in Blender. If you, like me, would like to use Blender for video editing and still get around the issue, here is a workaround; you need Shotcut for it. Shotcut is another great free and open-source video editor.
1. Export the entire long video from Blender (the rendered video has desync issues, as expected).
2. Open the video in Shotcut and detach the audio from it.
3. Use the audio properties to make very fine adjustments to the audio playback speed until the video and audio are in sync.
Follow the GIF attached (I am using a shorter video in the GIF, but you get the idea).
Explanation
Blender has issues rendering long videos: I noticed the video is exported at 1.0x speed but the audio is sometimes faster (1.00400x or something like that), and hence the rendered file's audio is not in sync with the video.
Another problem is that Blender does not allow very fine playback-speed adjustment of just the audio.
One trick is to adjust the pitch of the audio in Blender, which in turn changes the playback speed, but this is only allowed up to 2 decimal places (not what we want for long videos) and it makes the audio sound funny (since it actually changes the pitch).
Shotcut is a great tool that allows fine playback adjustment, and it also has a pitch-compensation feature so the pitch is mostly unaffected (since we don't want the characters to sound funny in our edited video).
Shotcut allows playback speed adjustment up to 6 decimal places.
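For the curious, the required precision follows from the numbers above: if the audio really is 1.004x too fast, it has to be slowed to 1/1.004 ≈ 0.996016x to line up again, so a two-decimal control (0.99 or 1.00) is too coarse, and Shotcut's six decimal places are exactly what is needed.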
I landed on this thread because the same issue happened in a video I had just finished. The "View Animation (Ctrl F11)" command starts an internal player that has sync issues with long videos; opening the same video file in "Videos" on Fedora, it plays perfectly synchronized.

Speed up playback of a video with a video editor

Recently, I discovered that my tutorial videos can be watched at 1.5x playback speed without loss of quality (they are actually better to watch, as I normally speak slowly). My problem is that if I change the speed of the video in a video editor like Kdenlive, the audio becomes distorted and turns into a mess (higher pitch, I believe).
How can I obtain the same audio quality as VLC's "playback fast" and YouTube's "playback speed 1.5"? I'm a layman in audio/video editing, so I'm also satisfied with partial answers, like the identification of which terms I should search for in this case.
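The feature being asked about is usually called "time stretching": changing playback speed while correcting the pitch. VLC, YouTube, and modern browsers apply it automatically (FFmpeg exposes it as the atempo audio filter). As a minimal browser-side sketch, with a placeholder selector:

```javascript
// Watch a <video> at 1.5x with pitch correction, like YouTube's speed menu.
const video = document.querySelector('video'); // placeholder selector
video.preservesPitch = true; // the default in modern browsers;
                             // older WebKit used webkitPreservesPitch
video.playbackRate = 1.5;
video.play();
```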
It might be better to take your audio track and use something like Sound Forge to automatically remove silence. Just be sure to add a pad to that (built into Sound Forge), otherwise the speech will sound way too chopped and fast.
Aside from that, you could also use Vegas to then chop the video to keep pace with your new speech rate. Vegas is a video-editing program that is well suited to this kind of down-and-dirty editing.

Making an audio equalizer video

I need to make a video of an audio equalizer.
So I need a script that analyses the audio every frame and extracts the frequency spectrum, so I can draw it somehow and make an equalizer.
The first part of the problem is easily solvable on the frontend, as there are a myriad of open-source equalizer visualisations for canvas.
The thing works nicely in the browser, but I have a problem making an MP4 of it.
I've tried using headless browsers (Puppeteer and PhantomJS) to capture frames from the canvas, but I could not get the frame rate above 10 fps, resulting in unacceptable video quality and sync issues when joining the JPG frames and the MP3 via FFmpeg. The plan was to speed the process up so you don't have to wait out the full audio length to get an MP4, but I can't even get it above 10 fps at regular playback speed.
I feel the tech I thought would work is not there yet, and I might need a different approach.
The only condition is that it has to run as a script on a Linux server, so any programming language or equalizer design will work.
Any ideas or resources are more than welcome. Thanks!
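One avenue worth sketching for the speed-up goal: the Web Audio API can analyse audio faster than real time via OfflineAudioContext, which supports scheduled suspend points, so each frame's spectrum can be pulled from an AnalyserNode without waiting out playback (this still needs a browser or headless browser to host it, but not real-time capture). A rough sketch, assuming the audio is fetchable as a URL:

```javascript
// Extract one frequency spectrum per video frame, faster than real time.
async function spectraPerFrame(url, fps) {
  const ctx = new AudioContext();
  const buf = await ctx.decodeAudioData(await (await fetch(url)).arrayBuffer());

  const offline = new OfflineAudioContext(buf.numberOfChannels, buf.length, buf.sampleRate);
  const src = offline.createBufferSource();
  const analyser = offline.createAnalyser();
  analyser.fftSize = 2048; // 1024 frequency bins
  src.buffer = buf;
  src.connect(analyser);
  analyser.connect(offline.destination);

  const spectra = [];
  const frameCount = Math.floor(buf.duration * fps);
  // Schedule a suspend point at every frame boundary before rendering starts.
  for (let i = 1; i <= frameCount; i++) {
    offline.suspend(i / fps).then(() => {
      const bins = new Uint8Array(analyser.frequencyBinCount);
      analyser.getByteFrequencyData(bins); // spectrum at this frame's timestamp
      spectra.push(bins);
      offline.resume();
    });
  }
  src.start();
  await offline.startRendering();
  return spectra; // one Uint8Array of bins per frame, ready to draw
}
```

The spectra can then be drawn to a canvas and encoded with FFmpeg as before, but the analysis itself no longer runs at playback speed.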

Corona SDK: Is there a difference between audio.play() and media.play() and which one is better?

Is there a difference between audio.play() and media.play() and which one is better?
The audio.* API calls use the OpenAL audio layer and are considered the safer and better way to play audio in Corona SDK. You can have 32 different sounds playing at once, control the volume on each channel independently, pause and resume, fade in, fade out, etc. It is the preferred way to play sound.
The media.* API calls write directly to the hardware, so you cannot control the volume or have multiple sounds playing. The media.* calls, though, are good for video and for playing long clips like podcasts, since that audio can be backgrounded. More importantly, on Android, Google has implemented OpenAL poorly, and under 4.x there is a significant lag between calling audio.play() and the sound actually playing. The lag isn't as bad under 2.2 and 2.3, but there is still a lag. The media.* API calls, if you're playing a short clip, will play in a timely fashion.
media API: only one sound can be playing with this API; calling it with a different sound file will stop the existing sound and play the new one.

HTML5 access to an MP3 file's ByteArray

I would like to build a gapless audio player in HTML5: the end of the currently playing song should overlap the following one so that there is no pause between them. Conventional gapless players in HTML5 are not reliable.
Some say: "Gapless playback cannot be reliably implemented using HTML5 Audio. There is always going to be an inherent pause between songs. The only way I could simulate gapless playback is by using two HTML5 audio objects, but I would never be able to perfect the timing between the two objects on all devices. So sometimes the songs would play with no gap, sometimes there would be a gap, and sometimes the audio from two consecutive songs would overlap."
source: http://forums.precentral.net/webos-homebrew-apps/261502-music-player-remix-2-0-homebrew-edition-62.html
I believe I can work around this problem if HTML5 can access an MP3 file's ByteArray and play "data-generated sound". Do you know if HTML5 is capable of that?
Thanks a lot for any feedback.
Unfortunately, at this point the HTML specification doesn't provide any means of accessing raw audio data. Some browsers (e.g. Firefox, Chrome) provide audio APIs that can accomplish this, but obviously that's not cross-browser compliant; it would leave you writing implementations for various browsers, with no support for IE.
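For what it's worth, this is exactly the gap the Web Audio API (which grew out of those per-browser efforts) later filled: decoded clips can be scheduled back-to-back on the audio clock with sample accuracy. A minimal sketch, assuming the tracks are fetchable as URLs:

```javascript
// Gapless playback: schedule each decoded clip on the AudioContext clock.
async function playGapless(urls) {
  const ctx = new (window.AudioContext || window.webkitAudioContext)();
  const buffers = await Promise.all(
    urls.map(async (u) => ctx.decodeAudioData(await (await fetch(u)).arrayBuffer()))
  );

  let t = ctx.currentTime + 0.1; // small lead-in before the first clip starts
  for (const buf of buffers) {
    const src = ctx.createBufferSource();
    src.buffer = buf;
    src.connect(ctx.destination);
    src.start(t);      // back-to-back on the audio clock: no gap, no overlap
    t += buf.duration;
  }
}
```

One caveat: MP3 encoder padding ("encoder delay") can leave a few milliseconds of silence inside the decoded buffers themselves, so truly gapless albums may still need that padding trimmed.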
