Rebuilding MP4 file from fragmented MP4 "mdat" atom? - c#-4.0

I'm trying to rebuild a video file from a Smooth Streaming server. Smooth Streaming serves fMP4 files, which are regular MP4 files with neither their FTYP nor their MOOV atoms.
All the information stored in those atoms is placed into a Manifest XML file, which I have.
Is there a way to programmatically rebuild the original MP4 file, either by:
rebuilding a new file straight from the H264/AAC content located in MDAT (and picture format info); or
rebuilding the FTYP and MOOV atoms?
Or else, is there a tool which can merge fMP4 fragments?

Yes. It is completely possible.
You can do this with FFmpeg. Study the mov.c [MP4 demuxer] from libavformat.
You will need to complete the MP4 in memory with all the data that is "missing" from the fMP4. In other words, whenever you need an atom that doesn't exist in the fMP4 [almost all of them], you will have to supply that information hard-coded (most of it comes from the manifest).
It's not easy... but it's certainly possible. I've done it myself; unfortunately, the code is not mine to share.
Good luck! ;-)
UPDATE: the PIFF format specification will be very useful (http://go.microsoft.com/?linkid=9682897) so one can understand what is already in the fMP4 and what is not!
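To give a sense of the work involved: every MP4 atom is just a 32-bit big-endian size, a four-character type, and a payload, so the ftyp atom is trivial to emit by hand; the moov tree (trak/mdia/stsd, filled with the codec-specific data from the manifest) is where the real effort goes. Here is a rough C# sketch of the easy part; the brand and version values below are placeholders, so use whatever your target players expect:

using System;
using System.IO;
using System.Text;

static class Mp4BoxWriter
{
    // Every MP4 box: 4-byte big-endian size (header + payload), 4-char type, payload.
    static void WriteBox(Stream output, string type, byte[] payload)
    {
        byte[] size = BitConverter.GetBytes((uint)(8 + payload.Length));
        if (BitConverter.IsLittleEndian) Array.Reverse(size);   // MP4 sizes are big-endian
        output.Write(size, 0, 4);
        output.Write(Encoding.ASCII.GetBytes(type), 0, 4);
        output.Write(payload, 0, payload.Length);
    }

    // ftyp payload: major brand, minor version, then a list of compatible brands.
    public static void WriteFtyp(Stream output)
    {
        var payload = new MemoryStream();
        payload.Write(Encoding.ASCII.GetBytes("isom"), 0, 4);   // major brand (placeholder)
        payload.Write(new byte[] { 0, 0, 2, 0 }, 0, 4);         // minor version (placeholder)
        payload.Write(Encoding.ASCII.GetBytes("isom"), 0, 4);   // compatible brands
        payload.Write(Encoding.ASCII.GetBytes("mp42"), 0, 4);
        WriteBox(output, "ftyp", payload.ToArray());
    }
}

The moov atom is built the same way, box by box, but its stsd entries (avcC for H.264, esds for AAC) have to be filled from the codec-specific data in the manifest, which is the part the PIFF specification helps with.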

It is simple to rebuild an MP4 file if you have the .ism and .ismc files related to the fragmented MP4 files.
To rebuild the moov and ftyp atoms, you need to know the media type, codec type, codec-specific data, and time scale of each track.
This information can be retrieved from the .ism and .ismc files:
You can retrieve the media type of each track from the .ism file.
You can retrieve the codec type, codec-specific data, and time scale of each track from the .ismc file.
Simply put, the .ism/.ismc files are metadata for the server and the client, so from them you can rebuild the metadata (the ftyp and moov atoms) for an MP4 file.
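For illustration, the client manifest (.ismc) is plain XML, so those per-track values can be pulled out with a few LINQ to XML queries. A rough sketch, assuming the usual Smooth Streaming element and attribute names (StreamIndex, QualityLevel, TimeScale, FourCC, CodecPrivateData) and no XML namespace; check your own manifest for the exact layout:

using System;
using System.Xml.Linq;

class IsmcReader
{
    static void Main()
    {
        // Hypothetical path to the client manifest.
        XDocument doc = XDocument.Load("manifest.ismc");

        foreach (XElement stream in doc.Descendants("StreamIndex"))
        {
            // Track-level info: media type ("video"/"audio") and time scale.
            string type = (string)stream.Attribute("Type");
            string timeScale = (string)stream.Attribute("TimeScale")
                               ?? (string)doc.Root.Attribute("TimeScale");

            foreach (XElement quality in stream.Elements("QualityLevel"))
            {
                // Codec type and codec-specific data (e.g. SPS/PPS for H.264).
                string fourCc = (string)quality.Attribute("FourCC");
                string codecPrivateData = (string)quality.Attribute("CodecPrivateData");
                Console.WriteLine("{0} {1} timescale={2} cpd={3}",
                                  type, fourCc, timeScale, codecPrivateData);
            }
        }
    }
}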

Related

How to write a webm (or other) audio/video block of data from MediaRecorder to a properly formatted .webm (or other) container file?

I am using javascript to capture audio data from MediaRecorder, and base64 encode it so I can send it back to the web server where it can be saved for later playback.
data:audio/webm;codecs=opus;base64,GkXfo59ChoEBQveBA...(too much data to post, but you get the idea)
I can put that data into an HTML5 audio element's .src field and play it back in a Chrome browser just fine. But Safari can't handle the data in that format; I guess it doesn't support the Opus codec.
One solution for me would be to figure out how to write the audio data into a properly formatted .webm container file, and then use ffmpeg.exe to convert it to some other Safari friendly format.
But I don't know the .webm file format; I'm looking for tips or guidance on how to write such a .webm file.
Does anybody have any suggestions, libraries, or tips for writing data like the above to a .webm file? I'd prefer a C# .NET answer, but JavaScript will also do, and any examples are appreciated.
Well, I got a tip from a smart developer (earnabler) that if I stripped off the header portion of the content:
"data:audio/webm;codecs=opus;base64,"
and decoded just the base64 portion:
"GkXfo59ChoEBQveBA...(too much data to post, but you get the idea)"
...back to binary (example in C#):
// Decode the base64 payload back to raw bytes.
byte[] decodedBinaryData = Convert.FromBase64String(encodedBase64String);
...and wrote that binary to a file with a matching file extension (.webm in this example), the file would be a properly formatted file of that type, understandable by other media software.
Lo and behold, it was! I could play the file in Media Player, QuickTime, or whatever, and could use FFmpeg to convert it to other types.
So that gives me a pathway to save/use/convert the media in many ways. Problem solved.
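Pulling those steps together, here is a minimal C# sketch of the whole round trip (the class and method names are just for illustration):

using System;
using System.IO;

static class WebmSaver
{
    // dataUri: the full "data:audio/webm;codecs=opus;base64,..." string posted by the browser.
    public static void Save(string dataUri, string path)
    {
        // 1. Strip everything up to and including the comma, keeping only the base64 payload.
        string base64 = dataUri.Substring(dataUri.IndexOf(',') + 1);

        // 2. Decode the base64 text back to the original binary WebM bytes.
        byte[] bytes = Convert.FromBase64String(base64);

        // 3. Write the bytes to a file whose extension matches the container (.webm here).
        File.WriteAllBytes(path, bytes);
    }
}

The resulting file can then be handed to FFmpeg for conversion to a Safari-friendly format.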

Easily differentiate video files from image files in Node

I'm building a project where people can upload files. I would then like to display those files in a browser where people can interact with them (vote, comment, etc.).
However, this means I need to programmatically build the HTML depending on the format of the video or image. Is there a way to feed a file (or filename) into a library and determine whether I need to display it in a video element or an image element? Even a list of video formats vs. image formats would help, but I haven't seen anything in that regard.
No module can reliably determine the file type. The user could change the extension or even the magic number of the file to obfuscate it. The only reliable way is to pass the file to an image/video transcoder and let it decide, or error out if the format is invalid. This way you know you are working with known formats, since all files are transcoded to your specific extensions; that could be mp4 or png. I recommend using handbrake for videos and sharp for images. The npm links are below:
https://www.npmjs.com/package/handbrake-js
https://www.npmjs.com/package/sharp

How to play an MPD file

I am trying to understand how an MPD file plays, and I am referring to the following data set:
http://www-itec.uni-klu.ac.at/ftp/datasets/mmsys12/Valkaama/MPDs/Valkaama_1s_act_isoffmain_DIS_23009_1_v_2_1c2_2011_08_30.mpd
In the MPD file format there is a SegmentBase consisting of an MP4 chunk, and within it a chunk list of segments with the .m4s extension. I downloaded the MP4 file using:
http://www-itec.uni-klu.ac.at/ftp/datasets/mmsys12/Valkaama/valkaama_1s/valkaama_1s_50kbit/valkaama_50kbit_dash.mp4
and an .m4s chunk from the following link:
http://www-itec.uni-klu.ac.at/ftp/datasets/mmsys12/Valkaama/valkaama_1s/valkaama_1s_50kbit/valkaama_1s1.m4s
I tried to play both the .mp4 and the .m4s in VLC player but was not able to play either of them, so I want to ask which of the chunk links in the MPD file format I can play standalone in VLC player.
Please correct me if any of my observations are wrong.
An MPD file is just an index of streams in various formats, used to adapt to your bandwidth; to get more information, follow the links in the other answers here.
It's possible to download all streams and merge them into a single file; you can achieve this by using youtube-dl:
youtube-dl http://URL/TO/manifest.mpd
You can get more information in https://stackoverflow.com/a/39931712/1522342.
Also, VLC 3.0.0+ can play that kind of file from a URL: just open VLC, use the shortcut CTRL+N, paste the URL and enjoy.
A DASH player plays an MPD by selecting a Period, within the Period one or more AdaptationSets, and then one Representation per AdaptationSet. For the chosen Representation, it downloads the initialization segment and some media segments and passes them to the media engine. As indicated by others, you can simulate that by concatenating them (simply using cat on Linux).
The MPEG-DASH standard requires that initialization segments (in your case the mp4 file) contain no media data. This is because, when switching, the player might use the initialization segment several times. You can open it in a player, but it does not contain any media.
The m4s files do contain media data, but they cannot be interpreted without the associated initialization segment.
You can download the init segment (SegmentBase/Initialization#sourceURL) and all media segments (SegmentList/SegmentURL#media) and concatenate everything (e.g. with the copy command on Windows). The result should be playable in VLC. This has to be done for audio and video separately. In the next step you can then use MP4Box or similar tools to mux audio and video.
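The same concatenation can be scripted; here is a small C# sketch that glues the init segment and the media segments into one progressive file (the file names are placeholders based on the dataset above):

using System.IO;

class DashConcat
{
    static void Main()
    {
        // Init segment first, then the media segments in playback order (placeholder names).
        string[] inputs =
        {
            "valkaama_50kbit_dash.mp4",   // initialization segment
            "valkaama_1s1.m4s",           // media segments, in order
            "valkaama_1s2.m4s",
            "valkaama_1s3.m4s"
        };

        using (FileStream output = File.Create("valkaama_video.mp4"))
        {
            foreach (string input in inputs)
            {
                using (FileStream segment = File.OpenRead(input))
                {
                    segment.CopyTo(output);   // byte-for-byte append, like "copy /b" or "cat"
                }
            }
        }
    }
}

Repeat the same for the audio representation, then mux the two results with MP4Box as described above.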
Alternatively, you can use www.dash-downloader.com to download everything in one step. The page will display a log explaining what it's doing; that might be helpful.
(Full disclosure: that's my website.)
I don't think it is possible to play any of the files in a standalone player. The mp4 is the init segment, which is required to decode the .m4s media segments. I don't think there is an option in the VLC player to map an init segment to multiple media segments. Nevertheless, you can try a DASH player to play the manifest file, for instance dash.js.
Initialization segment: A sequence of bytes that contain all of the initialization information required to decode a sequence of media segments. This includes codec initialization data, Track ID mappings for multiplexed segments, and timestamp offsets (e.g. edit lists).
Media segment: A sequence of bytes that contain packetized & timestamped media data for a portion of the media timeline. Media segments are always associated with the most recently appended initialization segment.
Source: http://www.w3.org/TR/media-source/#init-segment

Need a way to write headers on a wav file generated by sox

I'm using SoX to convert some MP3 files to WAV for a project. The problem is that the software that plays the files does not have the media name for the element it is playing. I can't seem to find a Win32 CLI tool to read the header of the WAV file and write what I need to it.
SoX will read the header, but it's not showing the title of the media element that was inserted with the old software I used. I just couldn't automate it, or I would have used it instead.
I have determined the info is written in either the CART chunk section or just the file headers. I can't figure out how to write my own data there.
The way I was able to do this was with this project: https://github.com/JamesHeinrich/getID3
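getID3 does the heavy lifting, but for reference, the title of a WAV file lives in a LIST/INFO chunk as an INAM entry (the broadcast CART chunk has many more fields and is not covered here). A rough C# sketch of appending such a chunk to an existing file and patching the RIFF size:

using System;
using System.IO;
using System.Text;

static class WavInfoWriter
{
    // Appends a LIST/INFO chunk containing an INAM (title) entry to an existing WAV file.
    public static void SetTitle(string path, string title)
    {
        // INAM value: ASCII text, null-terminated, padded to an even length.
        byte[] text = Encoding.ASCII.GetBytes(title + "\0");
        int padded = text.Length + (text.Length % 2);

        using (var stream = new FileStream(path, FileMode.Open, FileAccess.ReadWrite))
        using (var writer = new BinaryWriter(stream))
        {
            // Append the LIST chunk ("INFO" type + one INAM sub-chunk) at the end of the file.
            stream.Seek(0, SeekOrigin.End);
            writer.Write(Encoding.ASCII.GetBytes("LIST"));
            writer.Write(4 + 8 + padded);                       // LIST payload size, little-endian
            writer.Write(Encoding.ASCII.GetBytes("INFO"));
            writer.Write(Encoding.ASCII.GetBytes("INAM"));
            writer.Write(text.Length);                          // INAM data size
            writer.Write(text);
            if (padded != text.Length) writer.Write((byte)0);   // word-align the chunk
            writer.Flush();

            // Patch the RIFF size at offset 4: total file length minus the 8-byte RIFF header.
            stream.Seek(4, SeekOrigin.Begin);
            writer.Write((uint)(stream.Length - 8));
        }
    }
}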

How to programmatically convert .rm (RealPlayer Media) file to MP3 or another format?

I would like to convert a .rm (RealPlayer Media) file to MP3 or another format.
First, I successfully managed that using VLC, but the quality was not good. Then I tried the Real Alternative codec with DirectShow; this also worked OK, but then I found that the codec is no longer developed because RealNetworks sued the developer.
Now I have installed RealPlayer and am trying to use its DirectShow filters to convert .rm to .mp3, but without success. (Actually, after adding the RealPlayer Transcode filter and choosing a file, GraphStudio crashes.)
Is there a legal way to programmatically convert a .rm file to another format? How can I make RealPlayer convert files programmatically? Do you have any hints or examples on how to use the RealPlayer Transcode filter? (I am new to DirectShow.)
UPDATE, to make the question more concrete: How can I list the implemented interfaces (and their members) of the RealPlayer Transcode filter? I have not found any documentation. (GraphStudio says it has 0 pins and just common properties.)
You need to build a DirectShow graph to read and decode .rm, then compress audio into MP3 and write it into a file. This is similar to recompressing an AVI file, described in some detail on MSDN: Recompressing an AVI File. You just have audio without video there, and the container formats are different.
UPDATE: There is no way to reliably list implemented interfaces in COM. Sometimes you can find this out by checking the type library; however, a lot of DirectShow filters come without one. Typically, you need an SDK header file from the filter vendor to get definitions of the implemented, so-called 'private' interfaces.
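In other words, you can only probe for interfaces you already know about. A small C# sketch of such a probe via QueryInterface (the object and the GUIDs you test against are up to you; none of this is specific to the RealPlayer filter):

using System;
using System.Runtime.InteropServices;

static class ComProbe
{
    // Returns true if the COM object answers QueryInterface for the given IID.
    // COM has no way to enumerate interfaces; you can only test GUIDs you already know.
    public static bool Supports(object comObject, Guid iid)
    {
        IntPtr unknown = Marshal.GetIUnknownForObject(comObject);
        try
        {
            IntPtr ppv;
            int hr = Marshal.QueryInterface(unknown, ref iid, out ppv);
            if (hr != 0) return false;    // typically E_NOINTERFACE
            Marshal.Release(ppv);
            return true;
        }
        finally
        {
            Marshal.Release(unknown);
        }
    }
}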
