Collection of video samples with different codecs - audio

I'm in the middle of trying to buy an IPTV device, and of course different IPTV devices support different file formats, video codecs and audio codecs.
Can someone recommend a collection of videos encoded with different versions of different video and audio codecs - as many different combinations as possible?
I understand that supporting everything (all video and all audio codecs) is pretty much impossible - so it would be good if they were sorted from most used to least used. For example:
.avi - xvid vx.xx video codec + yyy audio codec
.mkv - ....
YouTube .flv format ...
...
But of course which codec is used where depends on which movies you get and from where. I could do the ordering of the videos on my own.
Preferably the videos would be as small as possible - for example 20 seconds per clip - with video/audio that is easy to inspect and understand (the language does not matter).
I also suspect that this kind of collection does not exist - in that case it's OK to post video clips for different codecs here, and I will gather them into one collection.
Eventually I want to place all these clips on a USB stick, take it to a shop and try out which of the clips can be played on which IPTV device.

Two collections of video test files are on the kodi.tv site: https://kodi.tv/media-samples/ (archived link - right click + save to download the files) and http://kodi.wiki/view/Samples
Another one is on the MPlayer site: http://samples.mplayerhq.hu/
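If you end up assembling your own collection, ffprobe (shipped with FFmpeg) can print the container and codecs of every clip, which makes sorting them by codec combination easy. A minimal sketch, assuming the clips sit in the current directory (the file patterns are just examples):

# Print the container format plus the video and audio codec of each clip.
for f in *.avi *.mkv *.mp4 *.flv; do
  [ -e "$f" ] || continue   # skip patterns that match no files
  echo "== $f"
  ffprobe -v error \
    -show_entries format=format_name:stream=codec_type,codec_name \
    -of default=noprint_wrappers=1 "$f"
done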

Related

Encode Side by Side Video Sync'd by Audio (FFMPEG or similar)

I am trying to encode 2 videos side by side, synced by the audio of the 2 clips. I can successfully encode the 2 videos side by side and select the audio from one of the input streams. However, the system we are using to record the 2 videos does not start and stop the recording at the same time (there can be up to a second's difference between cameras). Basically we are using a CCTV system to capture what's going on in a room from multiple angles. We export the 2 cameras between 2 timestamps, and due to the way the system records the videos, the start of the 2 clips is not the same point in time.
e.g. Export videos between 09:00:00:000 and 09:10:00:000
Video 1 - exports from 08:59:59:123 to 09:10:00:123
Video 2 - exports from 08:59:59:789 to 09:10:00:789
Therefore when video 1 and video 2 are stitched together side by side, they are out of sync by 666 ms (which is very noticeable in the encoded video).
Both input streams have (near) identical audio and are both in the exact same format. We are currently placing these videos into Premiere Pro and syncing these videos by the audio and exporting them side by side, however we have a project where we need to do a lot of these in quick succession and this is not really an option. We need to look at scripting this.
Does anyone know if FFMPEG can do this? Or anything else?
Any info would be greatly appreciated.
You can use audio-offset-finder in a bash script to calculate the offset, cut the head off one of the videos, and stitch them together (as stated here).
You would need to extract the audio streams into separate files and use the finder to calculate the offset.
offset=`audio-offset-finder --find-offset-of file1.wav --within file2.wav`
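Putting the pieces together, a rough sketch of the whole pipeline (the file names and the side-by-side layout are illustrative; it assumes video2 started recording earlier than video1, and audio-offset-finder may print text around the number depending on the version, so check its output format):

#!/bin/bash
# Extract both audio tracks to mono WAV files for the offset search.
ffmpeg -y -i video1.mp4 -vn -ac 1 video1.wav
ffmpeg -y -i video2.mp4 -vn -ac 1 video2.wav

# How many seconds into video2 does video1's audio begin?
offset=`audio-offset-finder --find-offset-of video1.wav --within video2.wav`

# Trim that many seconds off the head of video2, then place the two
# videos side by side (hstack needs equal heights) and keep video1's audio.
ffmpeg -ss "$offset" -i video2.mp4 -i video1.mp4 \
  -filter_complex "[0:v][1:v]hstack=inputs=2[v]" \
  -map "[v]" -map 1:a -c:v libx264 output.mp4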

WinRT - Rendering audio to different devices

I'm working on a WinRT project in which I'm playing multiple video files at the same time. I have 3 audio devices attached to the machine, each of which will be used to render the audio of one of the video files being played. The maximum number of videos that can be played simultaneously is 3, so each audio device would be used to render the audio from its corresponding video file, i.e. audio device 1 would play video 1 and so on. That's the requirement I have.
So far, I have come across two approaches. First, we use Dolby or some other API to channel the audio to the corresponding device, i.e. the left channel is rendered to device 1, the middle/center to device 2, and the right to device 3. I've tried the Dolby Audio sample app for Windows 10; they've done the channelling in the embedded video, not in code, and I couldn't find documentation for the Windows 10 Dolby API. So for this approach: can I render the audio of one channel to a particular audio device? And I don't want to merge the audio in any way.
Second, we use 3 sound cards and attach an audio device to each one. We choose the device we want to play audio on by providing a device ID. I've tried this approach with XAudio2 by calling the createMasteringVoice() method with the device ID I want. That worked for a single audio file; however, I want to render the audio of multiple videos that are being played.
Neither approach has solved the core requirement yet. So, considering the scenario, what is the best approach to fulfil the requirement?
I would say you can go with XAudio2, as you mentioned in the second approach. Since you can pass a device ID to createMasteringVoice(), you can create multiple instances of UniversalAudioPlayer and pass a different ID to each one. This way multiple sounds can be played concurrently. Take a look at the function definition and community additions here.

Difference between audio encoding/decoding and format conversion

Recently I have been trying to convert an audio file from one format to another with ffmpeg. I did some googling, but the results left me a little confused about the difference between encoding/decoding an audio file and converting it from one format to another.
Let me describe it this way: there are several different file formats for video files (sometimes also called "wrappers" or containers). There are also several different codecs which can be used to encode (or compress) the audio and video. Audio and video use different codecs, and the encoded streams can be stored in different file types/formats.
So when you talk about "encoding" vs. "converting" a couple of things come into play.
"Encoding" would be the act of taking audio/video and encoding them into a given codec(s). "Converting" implies having stuff in one format, but wanting it in another. There are two ways of looking at this:
1. Often called "repackaging" - this is when the video (for example) has already been encoded correctly (let's say H.264, with a bunch of parameters), but you want it in a different file type - maybe it's an .AVI and you wanted it in an .MP4. This doesn't involve changing the actual video - just re-wrapping the H.264 stream in a new "wrapper" - and is thus a fast operation.
2. Re-encoding. Let's say your audio was in MP3 format and you wanted it in AAC format. This would require decoding the entire MP3 stream and re-encoding it into AAC.
Obviously you can also do "1" and "2" together.
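In ffmpeg terms, the two cases might look like this (hypothetical file names; the stream copy in case 1 assumes the codecs are compatible with the target container):

# 1. Repackage: copy the existing streams into a new container, no re-encode.
ffmpeg -i input.avi -c copy output.mp4

# 2. Re-encode: decode the MP3 and encode the audio again as AAC.
ffmpeg -i input.mp3 -c:a aac output.m4a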
Refer to Formats and Codecs for detailed information.
Hope it helps!

html5 access of a MP3 file's ByteArray

I would like to build a gapless audio player in HTML5. The end of the currently playing song should overlap the following one so that there is no pause between them. Conventional gapless players in HTML5 are not reliable.
Some say: Gapless playback cannot be reliably implemented using HTML5 Audio. There is always going to be an inherent pause between songs. The only way I could simulate gapless playback would be by using two HTML5 audio objects, but I would never be able to perfect the timing between the two objects on all devices. So sometimes the songs would play with no gap, sometimes there would be a gap, and sometimes the audio from two consecutive songs would overlap.
source: http://forums.precentral.net/webos-homebrew-apps/261502-music-player-remix-2-0-homebrew-edition-62.html
I believe I can work around this problem if HTML5 can access an MP3 file's ByteArray and play "data-generated sound". Do you know if HTML5 is capable of doing that?
Thanks a lot for any feedback.
Unfortunately, at this point the HTML specification doesn't provide any means of accessing raw audio data. Some browsers (e.g. Firefox, Chrome) provide audio APIs that can accomplish this, but obviously that's not cross-browser compliant. This would leave you writing implementations for various browsers, with no support for IE.

Multiple audio streams in a MPEG-4 file

The MPEG-4 file format allows multiple streams to be present in a file.
This is useful for videos containing audio in multiple languages. In the case of such a video, the audio streams are synchronized to the video.
Is it possible to create an MPEG-4 file that contains desynchronized audio streams, i.e. where the audio tracks are played one after another?
I want to design an MPEG-4 file that contains a music album, so it is crucial that the tracks are played one after another by media players such as VLC.
When I use MP4Box (from the GPAC framework) the resulting file is recognised by VLC as having synchronized audio streams. Which box of the MPEG-4 file format is responsible for this? Or how can I tell VLC that these audio streams are not synchronized?
Thanks in advance!
I can think of two ways you could do that, and both would be somewhat problematic.
You could concatenate all the audio streams into one audio track in the MP4 file. This won't be ideal, for some obvious reasons. For one thing, it's not exactly what you were asking for.
You could also just store the tracks as synchronized audio streams, but set the timing information in such a way that the first sample of the second track won't start playing until the first track has finished playing, etc.
I'm not aware of any tools that can do this, but the file format will support such a scheme. Since it's an unusual way to store audio in an MP4 file, I would expect players to have problems with it, too.
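For what it's worth, MP4Box does have a -delay option that shifts a track's start time, so a sketch of this second approach could look like the following (the 220000 ms delay is hypothetical - it would have to match the real duration of track 1 - and, as said above, player support is not guaranteed):

# Import both tracks into one file, then delay track 2 by track 1's length (ms).
MP4Box -add track1.m4a -add track2.m4a -new album.mp4
MP4Box -delay 2=220000 album.mp4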
Concatenating all streams would work and the individual tracks can be addressed by adding chapters. It works at least with VLC.
MP4Box -new -cat track1.m4a -cat track2.m4a -chap chapters.txt album.m4a
The chapters.txt would look something like this:
CHAPTER1=00:00:00.00
CHAPTER1NAME=Track 1
CHAPTER2=00:03:40.00
CHAPTER2NAME=Track 2
But this is only a hack.
The solution I'm looking for should preserve the tracks as individual streams.
