Encode Side-by-Side Video Synced by Audio (FFmpeg or similar)

I am trying to encode two videos side by side, synced by the audio of the two clips. I can successfully encode the two videos side by side and select the audio from one of the input streams. However, the system we are using to record the two videos does not start and stop recording at the same time (the start times can differ by up to a second between cameras). We are using a CCTV system to capture what's going on in a room from multiple angles. We export the two cameras between two timestamps, and because of the way the system records, the two clips do not start at the same point in time.
e.g. Export videos between 09:00:00.000 and 09:10:00.000
Video 1 - exports from 08:59:59.123 to 09:10:00.123
Video 2 - exports from 08:59:59.789 to 09:10:00.789
Therefore, when video 1 and video 2 are stitched together side by side, they are out of sync by 666 ms, which is very noticeable in the encoded video.
Both input streams have (near) identical audio and are in exactly the same format. We currently place these videos into Premiere Pro, sync them by their audio, and export them side by side. However, we have a project where we need to do a lot of these in quick succession, so that is not really an option; we need to script this.
Does anyone know if FFmpeg can do this? Or anything else?
Any info would be greatly appreciated.

You can use audio-offset-finder in a bash script to calculate the offset, cut the head off one of the videos, and stitch them together (as described here).
You would need to extract the audio streams into separate files and use the finder to calculate the offset:
offset=$(audio-offset-finder --find-offset-of file1.wav --within file2.wav)
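
For example, here is a minimal end-to-end sketch. It assumes the clips are named video1.mp4 and video2.mp4 (illustrative names), that both cameras record at the same resolution (hstack requires equal frame heights), and that audio-offset-finder prints a bare number of seconds (newer versions print labelled output you would have to parse):

#!/bin/bash
# 1. Extract mono audio from each clip for offset detection.
ffmpeg -y -i video1.mp4 -vn -ac 1 file1.wav
ffmpeg -y -i video2.mp4 -vn -ac 1 file2.wav

# 2. Find where file1's audio begins inside file2's audio.
#    A positive offset means video 2 started recording earlier.
offset=$(audio-offset-finder --find-offset-of file1.wav --within file2.wav)

# 3. Trim the head of the earlier clip, stack the two videos side by
#    side (video 1 left, trimmed video 2 right), and keep video 1's
#    audio. If the offset comes out the other way round on your
#    footage, swap the roles of the two inputs.
ffmpeg -y -ss "$offset" -i video2.mp4 -i video1.mp4 \
  -filter_complex "[1:v][0:v]hstack=inputs=2[v]" \
  -map "[v]" -map 1:a -c:v libx264 -c:a aac output.mp4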

Related

Looking to split audio from different sources that's become enmeshed in recovery

My Zoom H4n somehow decided it didn't want to properly save two recordings this weekend, leaving me with four zero-byte files (which I have tried every which way to open/convert, but nothing worked).
I then used CardRescue to scan the SD card for any audio it could find, and - lo and behold - I got .wav files! However, instead of two files for each session (one the XLR feed from the desk, the other the on-Zoom mics), or even a nice stereo file with one on the left and the other on the right, I have a mess.
When importing as raw data into Audacity (the rescued .wavs themselves do not open), the right channel has the on-Zoom mic audio, with intermittent silence. The left has the on-Zoom audio, followed by the same part of the XLR input audio, following the same pattern as the silences.
I have spent hours chopping it up in GarageBand, but as this is audio for a video, it needs to match what 'really' happened perfectly (I appreciate that for a podcast or audio-only project I could simply take the on-Zoom mic audio from the left channel). I began attempting to sync the mic audio to the on-camera audio (which, despite playing around with settings, is as unusable as it always is). But because the damage follows a pattern, I can't help but wonder if there's a cleaner fix: either analysing the audio somehow, since there are clean lines when I look at the spectral data, or adding a couple of numbers to the wav's binary that would click the two into place?
I've tried importing into Audacity with different settings and different offsets - this has resulted in either slow, fast, or heavily distorted audio (but always the same patterns in the files).
I use a Mac (and don't know any PC users close by!), so any software suggestions will need to run on Mac. However, I'm willing to try just about anything short of dragging tiny clips around by hand.

Making an audio equalizer video

I need to make a video of an audio equalizer.
So I need a script that analyses the audio frame by frame and extracts the frequency spectrum, so I can draw it somehow and make an equalizer.
The first part of the problem is easily solvable on the frontend, as there is a myriad of open-source equalizer visualisations in canvas.
It works nicely in the browser, but I have a problem making an MP4 of it.
I've tried using headless browsers (Puppeteer and PhantomJS) to capture frames from the canvas, but I could not get the frame rate above 10 fps, resulting in unacceptable video quality and sync issues when joining the JPG frames and the MP3 via ffmpeg (see the sketch after this question). The plan was to speed the capture up so you don't have to wait for the full audio length to get an MP4, but I can't even get it above 10 fps at regular playback speed.
I feel the tech I thought would work isn't there yet, and I might need a different approach.
The only condition is that it has to run as a script on a Linux server, so any programming language or equalizer design will work.
Any ideas or resources are more than welcome. Thanks.
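
For the final muxing step described above (joining the captured frames and the MP3), a minimal sketch, assuming frames numbered frame_00001.jpg onward and a steady 30 fps capture rate (both illustrative):

# Join numbered JPG frames with the MP3. -framerate must match the
# real capture rate, or the video drifts against the audio.
ffmpeg -framerate 30 -i frame_%05d.jpg -i audio.mp3 \
  -c:v libx264 -pix_fmt yuv420p -c:a aac -shortest out.mp4

If the capture rate is unsteady (as with the ~10 fps headless-browser runs), recording a timestamp per frame and feeding ffmpeg a concat-demuxer list with per-frame duration entries avoids that drift.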

Collection of video samples with different codecs

I'm in the middle of trying to buy an IPTV device, and of course different IPTV devices support different file formats, video codecs, and audio codecs.
Can someone recommend a collection of videos encoded with different versions of different video and audio codecs - as many different combinations as possible?
I understand that supporting everything (all video and all audio codecs) is pretty much impossible - so it would be good if they were sorted from most used to least used. For example:
.avi - xvid vx.xx video codec + yyy audio codec
.mkv - ....
YouTube .flv format ...
...
But of course, which codec is used where depends on which movies you get and from where. I could order the videos myself.
Preferably the videos should be as small as possible - for example, 20 seconds per clip - and contain video/audio you can easily inspect and understand (the language does not matter).
I suspect such a collection does not exist - in that case, it's fine to post clips for different codecs here and I will gather them into one collection.
Eventually I want to put all these clips on a USB stick, take it to the shop, and try out which clips play on which IPTV device.
Two collections of video test files are on the kodi.tv site: https://kodi.tv/media-samples/ (archived link - right-click + save to download the files) and http://kodi.wiki/view/Samples
Another one is on the MPlayer site: http://samples.mplayerhq.hu/
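
Once you have gathered clips, ffprobe (bundled with FFmpeg) can verify what each one actually contains; a small sketch, assuming the clips sit in the current directory:

# Print the container format and every stream's codec for each clip.
for f in *.avi *.mkv *.mp4 *.flv; do
  [ -e "$f" ] || continue  # skip patterns that matched nothing
  echo "== $f =="
  ffprobe -v error \
    -show_entries format=format_name:stream=codec_type,codec_name \
    -of default=noprint_wrappers=1 "$f"
done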

WinRT - Rendering audio to different devices

I'm working on a WinRT project in which I'm playing multiple video files at the same time. I have three audio devices attached to the machine, each of which will be used to render the audio of one of the playing video files. The maximum number of videos that can be played simultaneously is three, so each audio device is paired with one video file: audio device 1 plays video 1, and so on. That's the requirement.
So far, I have come across two approaches. First, use Dolby or another API to route each audio channel to the corresponding device, i.e. the left channel is rendered to device 1, the middle/center to device 2, and the right to device 3. I've tried the Dolby Audio sample app for Windows 10, but they do the channel routing in the embedded video, not in code, and I couldn't find documentation for the Windows 10 Dolby API. So for this approach: can I render one channel of the audio to a particular audio device? I don't want to merge the audio in any way.
Second, use three sound cards and attach an audio device to each one, choosing the device to play audio on by providing its device ID. I've tried this approach with XAudio2 by calling createMasteringVoice() with the device ID I want. That worked for a single audio file; however, I want to render the audio of multiple videos playing at once.
Neither approach has met the core requirement yet. Considering the scenario, what is the best approach to follow?
I would say you can go with XAudio2, as you mentioned in your second approach. Since you can pass a device ID to createMasteringVoice(), you can create multiple instances of UniversalAudioPlayer and pass a different ID to each one. This way multiple sounds can be played concurrently. Take a look at the function definition and community additions here.

mp3 website player with synchronized playback (not streaming)

I want a player (easy enough to put up) that plays back a directory of mp3s in such a way that if you join at 3:33:33 pm, you hear what everyone else hears, not track one - like a pseudo broadcast/stream. How do I achieve that? What looks nice, is probably minimizable, and is easy?
I am trying to use mirvling, but with no luck. Any ideas?
It's unlikely you're going to find something to drop in place, and this isn't typically handled on the client side. You didn't specify which languages and tools you're using, so I'll give a general answer.
There are two methods to accomplish this.
Method 1: Encode the stream on the server
Basically with this, you create an audio stream on the server that is made up of the audio files being played back. The clients play an audio stream like any traditional "live" internet radio station, without knowledge of how the stream was created. You can use SHOUTcast/Icecast for the servers, and a number of different source stream encoders, such as Ices.
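
As a concrete sketch of Method 1, FFmpeg can also serve as the source client in place of Ices. This assumes an Icecast server already running on localhost:8000 with source password hackme, and mp3s in /srv/music (all illustrative values):

# Build a concat-demuxer playlist from the mp3 directory.
for f in /srv/music/*.mp3; do printf "file '%s'\n" "$f"; done > playlist.txt

# Stream it in real time (-re) to an Icecast mount point; everyone
# who tunes in hears the same point in the stream.
ffmpeg -re -f concat -safe 0 -i playlist.txt \
  -c:a libmp3lame -b:a 128k -content_type audio/mpeg \
  icecast://source:hackme@localhost:8000/stream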
Method 2: Make the media available and let the clients figure it out
For this, you'll be starting from scratch. Have a JSON feed or similar served up that contains a playlist of the audio files that should be played and when. On the client side, you can use JWPlayer or similar, and seek to the desired position of the current track when it starts, and then play tracks in order from there.
