Is it possible for a podcast audio stream to have a script/text?
I am interested in file formats for audio and timed text and how well these formats are supported by players (Android/iPhone/desktop).
I found:
https://en.wikipedia.org/wiki/LRC_(file_format)
https://en.wikipedia.org/wiki/WebVTT
but I have no idea how these, or any other timed text format, can be used with streaming audio.
My use case is creating a language-learning podcast and embedding the script in the audio file.
Do I need to use a video container to provide text with audio?
Is it normal to provide links to MP4/AVI/MPEG containers in RSS/Atom podcast feeds? Do podcast players understand container formats?
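For reference, WebVTT itself is just plain text with timestamped cues; a minimal file (the cue text here is only an example) would look something like:
WEBVTT

00:00:01.000 --> 00:00:04.000
Hola, ¿cómo estás?

00:00:04.500 --> 00:00:07.000
Hello, how are you?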
I am pretty new to processing audio files.
I want to build a web app that takes an audio file and turns it into a visualization for the user, like this: https://github.com/CrowdCurio/audio-annotator
Right now I am researching how to visualize audio data. The original data stored in S3 comes in two forms, .ts and .flac, so I want to ask whether there is any visualization tool that can use .ts or .flac audio files directly.
The solution I can think of right now is to first convert them to .wav or .mp3 so that most visualization tools can process them, but as far as I know .wav files waste a lot of storage.
If you know of any approach or tool to do this, please let me know!
Audio visualization requires audio data. Your compressed audio isn't audible until it is decoded, so you must decode it to PCM before visualizing.
This doesn't require that you store the files as WAV, but you'll at least have to decode them on the fly.
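For example, assuming you have ffmpeg available (the file names and sample rate here are just placeholders), decoding a FLAC file to raw 16-bit PCM for analysis could look like:
ffmpeg -i input.flac -ac 1 -ar 44100 -f s16le -acodec pcm_s16le input.pcm
Or, if a given visualization tool insists on WAV input, decode to a temporary WAV and delete it once the visualization has been generated.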
I have recorded a video on my phone, and I don't get why it needs to be encoded at all. Doesn't the format persist? Maybe I'm missing the point of encoding here. After recording, isn't it already in a format that is viewable to users?
It's a valid question: you could just upload the existing MP4 file that was encoded on your phone and stream it as a single-bitrate HLS or DASH packaged file.
Most users of our service prefer that the uploaded MP4 file is first encoded to multiple bitrates and resolutions to allow for Adaptive Bitrate Streaming.
If you are not familiar with what Adaptive Streaming is or how it works, I recommend watching a few of these - https://www.youtube.com/results?search_query=Adaptive+bitrate+streaming+overview
Or read through this article
https://en.wikipedia.org/wiki/Adaptive_bitrate_streaming
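To make that concrete: an HLS package is essentially a master playlist pointing at several renditions of the same content at different bitrates and resolutions, and the player switches between them as network conditions change. A simplified master playlist (illustrative names and numbers only) looks roughly like this:
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
video_360p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2500000,RESOLUTION=1280x720
video_720p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=5000000,RESOLUTION=1920x1080
video_1080p.m3u8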
We have two types of encoding presets to enable this: one called Adaptive Streaming, which generates a fixed "ladder" of bitrates and qualities, and one called Content Aware Encoding, which analyzes your video and generates the best set of tracks and bitrates for the content type.
https://learn.microsoft.com/en-us/azure/media-services/latest/content-aware-encoding
Thanks,
John D.
I'm in the middle of trying to buy an IPTV device, and of course different IPTV devices support different file formats, video codecs, and audio codecs.
Can someone recommend a collection of videos encoded with different video and audio codecs and codec versions - covering as many different combinations as possible?
I understand that supporting everything (all video and all audio codecs) is pretty much impossible - so it would be good if they were sorted from most used to least used. For example:
.avi - xvid vx.xx video codec + yyy audio codec
.mkv - ....
YouTube .flv format ...
...
But of course, which codec is used where depends on which movies you get and from where; I could order the videos on my own.
Preferably the videos should be as small as possible - for example 20 seconds per clip - with video/audio that you can easily inspect and understand (the language does not matter).
I also suspect that this kind of collection does not exist - in that case it's OK to post video clips for different codecs here, and I will gather them into one collection.
Eventually I want to put all these clips on a USB stick, go to the shop, and try out which clips can be played on which IPTV device.
Two collections of video test files are on the kodi.tv site: https://kodi.tv/media-samples/ (archived link - right click + save to download files) and http://kodi.wiki/view/Samples
Another one is on the site of MPlayer: http://samples.mplayerhq.hu/
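If a particular container/codec combination is missing from those collections, you can also cut short clips yourself with ffmpeg (the input file, duration, and codec choices below are just examples):
ffmpeg -i source.mp4 -t 20 -c:v libxvid -c:a libmp3lame clip_xvid_mp3.avi
ffmpeg -i source.mp4 -t 20 -c:v libx264 -c:a aac clip_h264_aac.mkv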
I am trying to write code in C/C++ (Objective-C) to parse the audio and video data from an MP4 file.
I know that the data in an MP4 file is contained in MP4 atoms, but I am not sure how I can parse out the audio and video data separately.
Thanks in advance for any help.
The MP4 format is fairly complicated, so I suggest you use a library. But if you can't use a library, or you just want to learn the format, then you must parse about a dozen boxes (atoms) under the root moov box. The information there can be used to find the frames within the mdat atom. The full specification is ISO/IEC 14496-12; you should be able to find a copy online.
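As a starting point, every box begins with a 32-bit big-endian size (which includes the 8-byte header) followed by a 4-character type, so you can walk the top-level boxes with something like the sketch below (plain C, no error handling, and it does not yet descend into nested boxes):

#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <sys/types.h>   /* off_t for fseeko (POSIX) */

/* MP4 box fields are big-endian. */
static uint32_t read_u32_be(const unsigned char *p)
{
    return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
           ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
}

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s file.mp4\n", argv[0]);
        return 1;
    }
    FILE *f = fopen(argv[1], "rb");
    if (!f) {
        perror("fopen");
        return 1;
    }

    unsigned char header[8];
    while (fread(header, 1, 8, f) == 8) {
        uint64_t box_size = read_u32_be(header);   /* size includes the 8-byte header */
        uint64_t header_size = 8;
        char type[5] = {0};
        memcpy(type, header + 4, 4);               /* 4-character type: "ftyp", "moov", "mdat", ... */

        if (box_size == 1) {                       /* 64-bit "largesize" follows the header */
            unsigned char ext[8];
            if (fread(ext, 1, 8, f) != 8)
                break;
            box_size = ((uint64_t)read_u32_be(ext) << 32) | read_u32_be(ext + 4);
            header_size = 16;
        } else if (box_size == 0) {                /* box extends to the end of the file */
            printf("%s: extends to end of file\n", type);
            break;
        }

        printf("%s: %llu bytes\n", type, (unsigned long long)box_size);

        /* Skip the payload here. To get at the audio/video frames you would instead
           descend into moov -> trak -> mdia -> minf -> stbl and read the sample
           tables (stsd, stts, stsc, stsz, stco), which give the size and offset of
           each sample inside the mdat box. */
        if (fseeko(f, (off_t)(box_size - header_size), SEEK_CUR) != 0)
            break;
    }
    fclose(f);
    return 0;
}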
The MPEG-4 file format allows multiple streams to be present in a file.
This is useful for videos containing audio in multiple languages. In the case of such a video, the audio streams are synchronized to the video.
Is it possible to create an MPEG-4 file that contains desynchronized audio streams, i.e. where the audio tracks are played one after another?
I want to create an MPEG-4 file that contains a music album, so it is crucial that the tracks are played one after another by media players such as VLC.
When I use MP4Box (from the GPAC framework), the resulting file is recognised by VLC as having synchronized audio streams. Which box of the MPEG-4 file format is responsible for this? Or how can I tell VLC that these audio streams are not synchronized?
Thanks in advance!
I can think of two ways you could do that, and both would be somewhat problematic.
You could concatenate all the audio streams into one audio track in the MP4 file. This won't be ideal, for some obvious reasons. For one thing, it's not exactly what you were asking for.
You could also store the tracks as synchronized audio streams, but set the timing information in such a way that the first sample of the second track doesn't start playing until the first track has finished playing, and so on.
I'm not aware of any tools that can do this, but the file format will support such a scheme. Since it's an unusual way to store audio in an MP4 file, I would expect players to have problems with this, too.
Concatenating all streams would work and the individual tracks can be addressed by adding chapters. It works at least with VLC.
MP4Box -new -cat track1.m4a -cat track2.m4a -chap chapters.txt album.m4a
The chapters.txt would look something like this:
CHAPTER1=00:00:00.00
CHAPTER1NAME=Track 1
CHAPTER2=00:03:40.00
CHAPTER2NAME=Track 2
But this is only a hack.
The solution I'm looking for should preserve the tracks as individual streams.