The MPEG-4 file format allows multiple streams to be present in a file.
This is useful for videos containing audio in multiple languages; in such a file, the audio streams are synchronized to the video.
Is it possible to create an MPEG-4 file that contains desynchronized audio streams, i.e. audio tracks that are played one after another?
I want to design an MPEG-4 file that contains a music album, so it is crucial that the tracks are played one after another by media players such as VLC.
When I use MP4Box (from the GPAC framework), the resulting file is recognised by VLC as having synchronized audio streams. Which box of the MPEG-4 file format is responsible for this? Or how can I tell VLC that these audio streams are not synchronized?
Thanks in advance!
I can think of two ways you could do that, and both would be somewhat problematic.
You could concatenate all the audio streams into one audio track in the MP4 file. This won't be ideal, for some obvious reasons. For one thing, it's not exactly what you were asking for.
You could also just store the tracks as synchronized audio streams, but set the timing information in such a way that the first sample of the second track won't start playing until the first track has finished playing, and so on.
I'm not aware of any tools that can do this, but the file format will support such a scheme. Since it's an unusual way to store audio in an MP4 file, I would expect players to have problems with this, too.
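That said, if you want to experiment with the second scheme, MP4Box's :delay import option (which offsets a track's start time, in milliseconds) is one possible starting point, assuming your MP4Box build supports it; the 220000 below is a hypothetical value matching a 3:40 first track:
# Sketch only: import the second track offset by the duration of the
# first one (220000 ms = 3 min 40 s), so its samples are timed to
# start after track 1 ends. Player support for this is the open question.
MP4Box -add track1.m4a -add track2.m4a:delay=220000 -new album.mp4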
Concatenating all streams works, and the individual tracks can be addressed by adding chapters; this works at least with VLC:
MP4Box -new -cat track1.m4a -cat track2.m4a -chap chapters.txt album.m4a
The chapters.txt would look something like this:
CHAPTER1=00:00:00.00
CHAPTER1NAME=Track 1
CHAPTER2=00:03:40.00
CHAPTER2NAME=Track 2
But this is only a hack.
The solution I'm looking for should preserve the tracks as individual streams.
Related
I am using moviepy to generate MP4 files from sets of shorter clips, each with its own audio. The problem is that the resulting MP4 often has a very high dynamic range from one clip to the next, and I would like to apply audio compression to make it easier on the ears. On Google I can only find results about audio data compression (reducing file size), not dynamic range compression in the audio-engineering sense.
I would like to know if there is some way of doing this with moviepy, or with some other library. I have no issue with invoking (non-interactive) command-line utilities either.
Thank you.
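Since command-line utilities are acceptable, one way to sketch this is ffmpeg's acompressor filter, which is a classic dynamic range compressor with threshold/ratio/attack/release controls. The parameter values below are illustrative starting points rather than tuned recommendations, and the file names are placeholders:
# Copy the video stream untouched and compress the audio's dynamic range.
# threshold is linear (0.1 is roughly -20 dB); tune ratio/attack/release by ear.
ffmpeg -i input.mp4 -c:v copy -af acompressor=threshold=0.1:ratio=4:attack=20:release=250 output.mp4
If the goal is consistent loudness across clips rather than compression per se, ffmpeg's loudnorm filter (EBU R128 loudness normalization) is another option.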
Currently, I am implementing a new feature of my software using the Libav API. This is the requirement: to merge a list of audio files (MP3 and WAV) and create a single audio file (MP3) as output. Note: the challenge is not about concatenating files, but merging them. When the output is played, all the input audio content must sound at the same time, as when you merge several files in a video editor.
I was researching Libav audio streams, and I am guessing that my requirement is related to the "channels" concept, i.e. that it is possible to include several audio sources in a stream, using one channel per source or something like that. I was hoping to find more information about this topic, but the FFmpeg/Libav documentation is actually scarce.
Right now, I am able to mux several audio streams with a video stream successfully and can create a playable MP4 file. My problem is that players like MPlayer/VLC only play the first audio stream with the video; the other two audio streams are ignored.
I was looking at the set of examples included in the FFmpeg source code, but there is nothing specifically related to my requirement, so I would appreciate any source code reference or algorithm explanation about how to merge several audio files into one using Libav. Thanks.
Update:
The ffmpeg command to merge several audio files requires the "amix" filter, as in this example:
ffmpeg -i 1.mp3 -i 2.mp3 -i 3.mp3 -filter_complex amix=inputs=3:duration=first result.mp3
All the syntax related to this option is described in the FFmpeg Documentation.
Checking the FFmpeg source code, it seems the amix feature implementation is in the file af_amix.c.
I am not 100% sure, but it seems the general algorithm is described in the function:
static int activate(AVFilterContext *ctx)
Do you know how to merge several audio files using command-line ffmpeg? It would help if you first understand how to do it with the ffmpeg command, then reverse engineer how it achieves it. It's all about how to construct a filtergraph and pass data through it.
As for examples, check out examples/filter_audio.c and examples/filtering_audio.c
This C example takes two WAV audio files and merges them to generate a new WAV file using the ffmpeg 4.4 API. Tip: the key to the process is to use these filters: abuffer, amix, and abuffersink.
https://github.com/xtingray/audio_mixer/
Although it doesn't support MP3 format as the output, it gives you the basics to understand how to implement your own requirements. I hope it can be handy for anyone looking for references about this specific topic.
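For reference, the command-line analogue of what that example implements (mixing two WAV files into one) would be something along these lines, with placeholder file names:
# Mix two WAV inputs into a single WAV; with duration=longest the
# output runs as long as the longer input.
ffmpeg -i first.wav -i second.wav -filter_complex amix=inputs=2:duration=longest out.wav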
I'm in the middle of trying to buy an IPTV device, and of course different IPTV devices support different file formats, video codecs, and audio codecs.
Can someone recommend a collection of videos encoded with different versions of different video and audio codecs, in as many different combinations as possible?
I understand that supporting everything (all video and all audio codecs) is pretty much impossible, so it would be good if they were sorted from most used to least used. For example:
.avi - xvid vx.xx video codec + yyy audio codec
.mkv - ....
YouTube .flv format ...
...
But of course which codec is used where depends on which movies you get and from where. I could do the ordering of the videos on my own.
Preferably the videos should be as small as possible, for example 20 seconds per clip, with video/audio content that you can easily inspect and understand (language does not matter).
I also suspect that this kind of collection does not exist; in that case it's OK to post video clips for different codecs here, and I will collect them into one collection.
Eventually I want to put all these clips on a USB stick, take it to the shop, and try out which clips can be played on which IPTV device.
Two collections of video test files are on the kodi.tv site: https://kodi.tv/media-samples/ (archived link; right click + save to download the files) and http://kodi.wiki/view/Samples
Another one is on the MPlayer site: http://samples.mplayerhq.hu/
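If you end up assembling your own collection, one way to keep the clips small and easy to inspect is to generate them with ffmpeg's synthetic test sources, re-encoding the same 20-second source once per codec combination. A sketch, with the codec pair chosen just as an example:
# 20-second test pattern plus a 440 Hz tone, encoded as H.264 + AAC in MP4.
# Swap -c:v, -c:a and the output extension for other codec/container combinations.
ffmpeg -f lavfi -i testsrc=duration=20:size=640x360:rate=25 -f lavfi -i sine=frequency=440:duration=20 -c:v libx264 -c:a aac -shortest test_h264_aac.mp4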
I am trying to merge two different AAC audio files and an H264 video file to form a single TS file using C++ code, and I have been successful at it. The resulting TS file has the following order: first a video part from the video file, then an audio part from the first audio file, then an audio part from the second audio file, then the video part again, and so on.
On listening to the resulting file, I can recognize the presence of the different audio files along with the video. The problem is that the resulting audio isn't very clear; there are noticeable distortions that make it hard to listen to. Also note that the resulting audio seems slow compared to the original.
Can anyone guide me in getting rid of those distortions and producing an exact replica of my original files?
Thanks,
Ashish.
Recently I have been trying to convert an audio file from one format to another with ffmpeg. I tried to do some googling, but the results left me a little confused about the difference between encoding and decoding an audio file and converting from one format to another.
Let me describe it this way: there are several different file formats for video files (sometimes also called "wrappers" or containers). There are also several different codecs which can be used to encode (or compress) the audio and video. Audio and video use different codecs, and the encoded streams can be stored in different file types/formats.
So when you talk about "encoding" vs. "converting", a couple of things come into play.
"Encoding" would be the act of taking audio/video and encoding it with a given codec (or codecs). "Converting" implies having stuff in one format but wanting it in another. There are two ways of looking at this:
Often called "repackaging" (or remuxing): this is when the video (for example) has been encoded correctly (let's say h264, with a bunch of parameters), but you want it in a different file type; maybe it's an .AVI and you want it in an .MP4. This doesn't involve changing the actual video, just re-wrapping the h264 stream in a new "wrapper", and is thus a fast operation.
Re-encoding. Let's say your audio was in MP3 format, and you wanted it in AAC format. This would require decoding the MP3 stream and re-encoding it into AAC.
Obviously you can also do "1" and "2" together; both cases are sketched with example commands below.
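Sketching both with ffmpeg (file names are placeholders):
# Case 1, repackaging/remuxing: copy the encoded streams into a new container.
ffmpeg -i input.avi -c copy output.mp4
# Case 2, re-encoding: decode the MP3 stream and encode it as AAC.
ffmpeg -i input.mp3 -c:a aac output.m4a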
Refer to Formats and Codecs for detailed information.
Hope it helps!