Is there any way to find the type of stream data in the mdat of an MP4 file?
In other words, how can I separate the video and audio units of the file?
Suppose that we don't have any headers or footers (only the mdat is available).
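Strictly speaking, without the moov box's sample tables (stsc, stsz, stco) the mdat payload is an opaque blob, so there is no fully reliable way to separate the streams. As a heuristic only, and assuming the video is H.264 stored as 4-byte length-prefixed NAL units (the actual length size comes from the avcC box, which is also missing here), a sketch like this can walk plausible video samples; regions that fail to parse are likely audio or other data:

```python
import struct

def walk_video_samples(mdat_payload):
    """Heuristically walk 4-byte length-prefixed H.264 NAL units in an
    mdat payload. Stops at the first implausible length, which usually
    means the bytes belong to another stream (e.g. audio)."""
    pos = 0
    while pos + 5 <= len(mdat_payload):
        (length,) = struct.unpack_from(">I", mdat_payload, pos)
        if length == 0 or pos + 4 + length > len(mdat_payload):
            break  # implausible length: probably not a video NAL here
        nal_type = mdat_payload[pos + 4] & 0x1F  # H.264 NAL unit type
        yield pos, length, nal_type
        pos += 4 + length
```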
Is there an audio file format where I can save all the individual chunks (recorded in JavaScript), split them up at any point into different files, and still have them all playable?
Yes, this is what a WAV file does: if you save the data so that it conforms to the WAV payload format, you can play back the file you create as a WAV file, even without the file having its normal 44-byte header.
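As a minimal sketch of that idea in Python (assuming 16-bit mono PCM at 44.1 kHz; the chunk contents here are placeholders), the standard wave module writes the 44-byte header for you, and raw PCM chunks simply concatenate:

```python
import wave

# Placeholder raw PCM chunks (16-bit mono, 44.1 kHz) captured separately.
chunks = [b"\x00\x00" * 44100, b"\x10\x00" * 44100]

with wave.open("combined.wav", "wb") as wav:
    wav.setnchannels(1)         # mono
    wav.setsampwidth(2)         # 16-bit samples
    wav.setframerate(44100)     # 44.1 kHz
    for chunk in chunks:
        wav.writeframes(chunk)  # raw PCM chunks concatenate seamlessly
```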
I store raw audio data in arrays that can be sent to the Web Audio API's AudioBuffer. The raw audio data arrays can be manipulated as you wish.
Specifics for obtaining the raw data will vary from language to language. I've not obtained raw data from within JavaScript; my experience comes from generating the data algorithmically or from reading .wav files with Java's AudioInputStream, and shipping the data to JavaScript via Thymeleaf.
I am converting .ima files, collected by an audio logger, into .wav format. It works fine, but in doing so I lose the information about the date/time at which the original .ima files were created. Is there a way of having the .wav files somehow 'timestamped' so I could recover the date/time at which the audio was recorded?
Many thanks for any hint provided.
As commented, you can either:
Store the date/time information in the file name
For example, store files with file names in the format 2018-09-23-19-53-45.wav, or whatever time format you like.
Store the audio in Broadcast WAV format files (BWF)
Broadcast WAV is based on the WAV format but allows for metadata in the file. The difference between a Broadcast WAV file and a normal WAV file is the presence of the BEXT chunk, so the file remains compatible with existing WAV players.
The BEXT chunk contains two appropriate fields called OriginationDate and OriginationTime. The layout for the chunk can be found here: BEXT Audio Metadata Information.
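As a hedged sketch of that approach in Python (the field layout follows EBU Tech 3285 version 1; bext_chunk and add_bext are names invented for this example, and the file names are placeholders):

```python
import struct

def bext_chunk(date_str, time_str):
    """Build a minimal bext chunk (EBU Tech 3285, version 1) carrying
    only OriginationDate ('yyyy-mm-dd') and OriginationTime ('hh:mm:ss')."""
    body = (
        b"\x00" * 256                                   # Description
        + b"\x00" * 32                                  # Originator
        + b"\x00" * 32                                  # OriginatorReference
        + date_str.encode("ascii").ljust(10, b"\x00")   # OriginationDate
        + time_str.encode("ascii").ljust(8, b"\x00")    # OriginationTime
        + struct.pack("<IIH", 0, 0, 1)                  # TimeReference lo/hi, Version
        + b"\x00" * 64                                  # UMID
        + b"\x00" * 190                                 # loudness fields + Reserved
    )
    return b"bext" + struct.pack("<I", len(body)) + body

def add_bext(wav_path, out_path, date_str, time_str):
    """Append a bext chunk to an existing WAV file and patch the RIFF size."""
    with open(wav_path, "rb") as f:
        data = f.read()
    chunk = bext_chunk(date_str, time_str)
    riff_size = struct.unpack_from("<I", data, 4)[0] + len(chunk)
    patched = data[:4] + struct.pack("<I", riff_size) + data[8:] + chunk
    with open(out_path, "wb") as f:
        f.write(patched)

add_bext("recording.wav", "recording-bwf.wav", "2018-09-23", "19:53:45")
```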
Here is the background of the problem I'm trying to solve:
I have a video file (MPEG-2 encoded) sitting on some remote server.
My job is to write a program to conduct the face detection on this video file. The output is the collection of frames on which the face(s) detected. The frames are saved as JPEG files.
My current thinking is like this:
Using a HTTP client to download the remote video file;
For each chunk of video data being downloaded, I split it on GOP boundaries; the output of this step is a video segment that contains one or more GOPs;
Create an RDD for each video segment aligned on the GOP boundary;
Transform each RDD into a collection of frames;
For each frame, run face detection;
If a face is detected, mark it and save the frame to a JPEG file.
My question is: is Apache Spark the right tool for this kind of work? If so, could someone point me to an example that does something similar?
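For what it's worth, here is a rough sketch of the Spark side of such a pipeline in Python, assuming PySpark and OpenCV (opencv-python, cv2) are installed on every worker, that the download/GOP-splitting step has already produced segment files reachable from all workers, and that a Haar-cascade detector is an acceptable stand-in for your face detector (the segment paths are placeholders):

```python
import cv2
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("face-detect").getOrCreate()

def frames_with_faces(segment_path):
    """Decode one GOP-aligned segment and yield JPEG bytes for every
    frame containing at least one detected face."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(segment_path)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if len(cascade.detectMultiScale(gray, 1.1, 5)) > 0:
            ok_enc, jpeg = cv2.imencode(".jpg", frame)
            if ok_enc:
                yield segment_path, index, jpeg.tobytes()
        index += 1
    cap.release()

segments = ["segment_000.ts", "segment_001.ts"]  # placeholder paths
hits = (spark.sparkContext
             .parallelize(segments)   # one RDD partition per segment
             .flatMap(frames_with_faces)
             .collect())
for path, index, jpeg in hits:
    with open(f"{path}.{index}.jpg", "wb") as out:
        out.write(jpeg)
```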
Given an MP4 video and its VTT subtitle file, how can I encode this video to H.264 multiple-bitrate adaptive streaming, including subtitles?
How do I include the caption files in the encoding process?
We only have a solution that is specific to Azure Media Player. Go to http://aka.ms/azuremediaplayer, and in the Samples drop-down, select “Subtitles (WebVTT) – On Demand [Tears of Steel]”. This will show you an example where the multiple-bitrate MP4 files are stored in one Asset and published, and the WebVTT is stored in a separate asset with its own locator.
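Outside of Azure Media Services, one generic route is FFmpeg's HLS muxer, which can package multiple H.264 renditions together with a WebVTT subtitle rendition. A hedged sketch (input.mp4 and subs.vtt are placeholder names; the flags follow FFmpeg's documented hls muxer options, but exact behavior varies by FFmpeg version):

```python
import subprocess

subprocess.run([
    "ffmpeg", "-i", "input.mp4", "-i", "subs.vtt",
    # map video and audio twice (two renditions) and the subtitles twice
    "-map", "0:v:0", "-map", "0:v:0", "-map", "0:a:0", "-map", "0:a:0",
    "-map", "1:s:0", "-map", "1:s:0",
    "-c:v", "libx264",
    "-b:v:0", "3000k", "-s:v:0", "1280x720",   # high rendition
    "-b:v:1", "800k",  "-s:v:1", "640x360",    # low rendition
    "-c:a", "aac", "-c:s", "webvtt",
    "-f", "hls", "-hls_playlist_type", "vod",
    # pair each video/audio variant with a subtitle stream in group "subs"
    "-var_stream_map", "v:0,a:0,s:0,sgroup:subs v:1,a:1,s:1,sgroup:subs",
    "-master_pl_name", "master.m3u8",
    "stream_%v.m3u8",
], check=True)
```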
We need to extract the volume information for every second of a video file, in order to produce a graphical representation of volume changes as the video progresses.
I'm trying to use FFmpeg with an audio filter, but I'm stuck on how to extract the volume information for every second (or frame) and then export this information to a report file.
Thanks in advance.
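One way to do this is with FFmpeg's astats filter. A sketch (input.mp4 and volume.log are placeholder names, and 44100 is assumed to be the audio sample rate so that each window is one second):

```python
import subprocess

subprocess.run([
    "ffmpeg", "-i", "input.mp4",
    "-af",
    # group samples into 1-second windows, recompute stats per window,
    # and print each window's overall RMS level to a log file
    "asetnsamples=44100,astats=metadata=1:reset=1,"
    "ametadata=print:key=lavfi.astats.Overall.RMS_level:file=volume.log",
    "-f", "null", "-",
], check=True)
# volume.log then holds one RMS level (in dB) per second, each tagged
# with a pts_time timestamp, ready to parse for plotting.
```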