MPEG-DASH with live stream

I would like to use MPEG-DASH in a situation where I am constantly receiving a live video stream from a client. The web server receives the live stream, keeps generating m4s segment files, and declares them in the MPD, so that new segments can be played back continuously.
(I'm using FFmpeg's ffserver, so the video stream keeps accumulating in the /tmp/feed1.ffm file.)
MP4Box seems to be able to generate the MPD, init.mp4, and m4s segments for files that already exist, but it does not seem to support live streaming.
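For reference, the kind of MP4Box run I mean for an existing file looks something like this (just a sketch; the filenames and segment duration are placeholders):
MP4Box -dash 4000 -rap -segment-name segment_ -out manifest.mpd input.mp4
This produces manifest.mpd, an init segment, and segment_*.m4s media segments, but only for a finished input file.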
I want fragmented MP4 segments rather than MPEG-TS.
Any advice would be greatly appreciated!

GPAC maintainer here. The dashcast project (and likely its dashcastx replacement from our Signals platform) should help you. Please open issues on GitHub if you run into any problems.
Please note that there are some projects, like this one, that use FFmpeg to generate HLS and then GPAC to ingest the TS segments and produce MPEG-DASH. This introduces some latency but has proved to be very robust.

The information below may be useful.
Recent FFmpeg versions support live streaming as well as MP4 fragmenting.
Example command:
ffmpeg -re -y -i <input> -c copy -f dash -window_size 10 -use_template 1 -use_timeline 1 <ClearLive>.mpd
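If re-encoding is acceptable, a fuller sketch along the same lines (the codec choices, segment duration, and segment names are only examples; -seg_duration requires a reasonably recent FFmpeg):
ffmpeg -re -i <input> -c:v libx264 -c:a aac -f dash -seg_duration 4 -window_size 10 -use_template 1 -use_timeline 1 -init_seg_name 'init-$RepresentationID$.m4s' -media_seg_name 'chunk-$RepresentationID$-$Number$.m4s' live.mpd
This keeps a sliding window of 10 fragmented-MP4 segments and rewrites live.mpd as new segments are produced.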

Related

Is there a way to ensure mp3 duration accuracy with variable bit rate using FFMPEG?

In our application, we process audio files using ffmpeg. Specifically, we use the NodeJS library fluent-ffmpeg (npm link).
Our audio files are generated by various text-to-speech providers. We recently noticed that when we converted audio using SSML to add pauses to the generated audio, the duration reported for the file was no longer correct. On further investigation, we noticed that the durations of the standard audio files were also off, just closer overall because the data is more consistent. When we put a pause at the beginning of the audio, the estimate was the worst, overshooting by a very large margin (e.g., a 25 s audio clip would read as 3 minutes long, but skip to the end when playing past the 25 s mark).
I did some searching and research into the structure of MP3 files, and it seems to me that the issue is that the duration is being estimated by the various audio players. Windows Media Player is one example, but Firefox's web player seems to do this as well. I tried changing the ffmpeg command from using .audioQuality(0), which sets ffmpeg to use VBR, to .audioBitrate(320), which tells ffmpeg to use a constant bitrate.
For reference, we are using libmp3lame, and the full command that gets run is the following, for the VBR and CBR cases respectively:
For VBR (broken durations): ffmpeg -i <URL> -acodec libmp3lame -aq 0 -f mp3 pipe:1
For CBR (correct duration): ffmpeg -i <URL> -acodec libmp3lame -b:a 320k -f mp3 pipe:1
Note: we then pipe the output to the requesting client application after sending the appropriate file headers, hence the pipe:1 output. The input is a cloud storage URL where the source file is located.
This fixes the duration problem, and it makes sense to me why it would if the duration is being estimated by some of these players / audio consumers. But it comes at the cost of a significantly larger file, which also makes sense to me. While testing we found that, compared to the same file in WAV, the VBR MP3 was about 10% of the WAV file size, while the CBR MP3 was still 50% of the WAV file size. This practically defeats the purpose of supporting the MP3 format for our use case, which is to have a smaller but slightly lossy alternative to the large WAV file.
While researching, I found that there can be a chunk of ID3 tags at the beginning of the MP3 file that tells the consumer of the audio things like the duration before it has processed the whole file. But I also found that there doesn't seem to be a standard tag for duration; ID3 is more about things like song title, album, artist, etc.
My question is, is there a way to get the proper duration onto an mp3 file, preferably via some ffmpeg mechanism, while still using VBR? Thanks!
FFmpeg does write a Xing header by default, which carries the duration info. However, that value is only known after the entire stream has been encoded, so ffmpeg has to seek back to the head of the file to write it. Since you're piping the output, that can't be done.
Write the file locally or to some seekable destination, and then upload.
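A minimal sketch of that approach (the temporary path is just a placeholder):
ffmpeg -i <URL> -acodec libmp3lame -aq 0 -f mp3 /tmp/out.mp3
Because the output is now seekable, ffmpeg can go back and fill in the Xing header; once it exits, stream /tmp/out.mp3 to the client and delete it.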

HLS Live streaming with re-encoding

I've run into a technical problem and I need your help.
Situation data:
I record the screen as well as 1 to 2 audio tracks (microphone and speaker).
These three recordings are done separately (they could be mixed, but I'd rather not), and every 10 s (this is configurable) I send the chunk of recorded data to my backend. We therefore have 2 to 3 chunks sent every 10 s.
These data chunks are interdependent. For example, the first video chunk starts with the headers and a keyframe, while the second chunk can begin in the middle of a frame. It is as if you took the entire video and split it at an arbitrary byte boundary.
The video stream is in h264 in a WebM container. I don't have a lot of control over it.
The audio stream is in opus in a WebM container. I can't use aac directly, nor do I have much control.
In practice, the server may be restarted at any time (crash, update, scaling, ...). This doesn't happen often (about 4 times a week). In addition, once the recording ends on the customer's side, they can close the application or their computer, which prevents the end of the recording from being sent; once the client reconnects, the missing data chunks are sent. This therefore rules out using a "live" stream on the backend side.
Goals:
Store the video and audio in cloud storage as they are received on the server.
Be able to start playing the video/audio even while the upload is still in progress (i.e., as a live stream).
As soon as the last chunks have been received on the server, I want the entire video to be available as VoD (Video on Demand) with as little delay as possible.
Everything must be delivered with the audio in AAC. The audio tracks may or may not be mixed together, and may or may not be muxed with the video.
Current and blocking solution:
The most promising solution I have seen is using HLS to support the Live and VoD mode that I need. It would also bring a lot of optimization possibilities for the future.
Video isn't a problem in this context, here's what I do:
1. Every time I get a data chunk, I append it to a screen.webm file.
2. Then I split the file with ffmpeg (the full per-chunk flow is sketched after the notes below):
ffmpeg -ss {total_duration_in_storage} -i screen.webm -c:v copy -f hls -hls_time 8 -hls_list_size 0 output.m3u8
3. I ignore the last segment file unless it's the last chunk.
4. I upload all the files to cloud storage along with a newly updated output.m3u8 containing the new segment information.
Note: total_duration_in_storage corresponds to the duration already uploaded to cloud storage, i.e. the sum of the segments present in the last output.m3u8.
Note 2: I ignore the last file in point 3 because that guarantees every segment in my playlist starts with a keyframe, and therefore lets me use seeking to segment only the parts needed for each new chunk.
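Putting steps 1-4 together, the per-chunk video flow looks roughly like this (the chunk filename and paths are placeholders):
cat chunk_0042.webm >> screen.webm
ffmpeg -ss {total_duration_in_storage} -i screen.webm -c:v copy -f hls -hls_time 8 -hls_list_size 0 output.m3u8
Then I drop the last (possibly truncated) segment unless this was the final chunk, and upload the new segments plus the updated output.m3u8.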
My problem is with the audio. I could use the same method and it would work fine without re-encoding, but I need to re-encode to AAC to be compatible with HLS and, in particular, with Safari.
If I re-encode only the new chunks as they arrive, there is an audible glitch.
The only possible avenue I have found is to re-encode and re-segment all of the audio each time a new chunk arrives, which will be problematic for long recordings (multiple hours).
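For reference, that full re-encode is roughly the following, run over the whole accumulated audio on every new chunk (filenames and bitrate are placeholders):
ffmpeg -i audio.webm -c:a aac -b:a 128k -f hls -hls_time 8 -hls_list_size 0 audio.m3u8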
Do you have any solutions for this problem or another way to achieve my goal?
Thanks a lot for your help!

How to merge video file with audio file and maintain creation time?

I was fiddling around with youtube-dl and ended up with a download whose separate audio and video streams youtube-dl wasn't able to merge. After some investigation, I found that there was an issue in my ffmpeg config.
Normally, if you run youtube-dl a second time after fixing ffmpeg, it will merge the files for you automatically. But as fate would have it, the online video has since been deleted, so youtube-dl freaks out.
Fortunately, ffmpeg itself can also merge audio and video files, but doing so loses a very nice feature of youtube-dl's implementation: keeping the creation time of the files (i.e. the creation time rather than the download or publication time).
Is there any way to merge an audio and video file and keep the creation/last modified date?
Here's my own solution on macOS (it should work on any UNIX), partially adapted from https://superuser.com/a/277667/776444:
I'm sure there's a way to do this using only FFmpeg, but I ended up using touch:
ffmpeg -i originalVideo.mp4 -i originalAudio.mp4 -c:v copy -c:a aac combined.mp4
touch -r originalVideo.mp4 combined.mp4
Using these commands, I was able to set the file creation time of combined.mp4 to 28 April 2020, matching originalVideo.mp4.

Play local .avi videos in Node.js / Electron app

A maddening gap in an app I'm developing is that there appears to be little (or no) support for AVI in the HTML5 video implementation. So I need a workaround that is cross-platform and can be packaged with my Electron app.
Videos are hosted locally
I'm not averse to encoding on the fly (ffmpeg avi -> mp4 and use HTML5 natively?)
WebChimera appears to be dying due to VLC and Electron changes (the devs can't keep up). Is there another npm package that can do this?
A wrapper that calls a native VLC instance might work, but how do I ensure that VLC is available on the system with my packaging?
Should I just spawn a native app in a separate window (e.g., Totem on Linux)? (Seems clunky.)
The latest videojs-java plugin apparently has an issue (https://github.com/Afterster/videojs-java/issues/2), and adding another layer (Java) to the Electron stack seems somehow unsavory.
FFBinaries (https://github.com/vot/ffbinaries-node) seems promising... but oddly ffplay is not available for Linux (though I suspect my Linux users likely have ffmpeg installed already).
NB: Files are decidedly AVI. I can't change this.
Any hints / pointers greatly appreciated!
UPDATE
On my system, using ffmpeg to convert:
ffmpeg -i infile.AVI -vcodec copy -acodec copy outfile.mp4
Takes no time at all (they are short videos):
real 0m0.138s
user 0m0.100s
sys 0m0.032s
So I'm leaning toward packaging ffmpeg with my program and converting before loading.
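Note that -vcodec copy only works when the streams inside the AVI are already MP4/HTML5-friendly (e.g. H.264 video and AAC audio); otherwise a real transcode is needed, roughly along these lines (a sketch; filenames are placeholders):
ffmpeg -i infile.AVI -c:v libx264 -c:a aac -movflags +faststart outfile.mp4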
Take a look at this project:
https://github.com/RIAEvangelist/electron-video-player
According to the known supported formats:
https://github.com/RIAEvangelist/electron-video-player#known-supported-video-types
it supports:
mp4
webm
ogg
mov (MPEG4 | H.264)
avi (MPEG4 | H.264)
mkv (MPEG4 | H.264)
m4v (MPEG4 | H.264)
Take a look at its source code and see if you can implement it similarly.
You said that you need AVI support, but AVI is just a container: if you need codecs other than the ones supported by this project, then you will still need to transcode first.
If you cannot do it like this then you may try using something similar to:
https://www.npmjs.com/package/mplayermanager
and bundle mplayer with your app, or some other player.
According to this SO answer, Electron now supports multiple video formats in the <video> tag, including .mkv, .avi, and others, so you don't need to rely on an external player.

Update .m3u8 playlist file for HTTP Live streaming?

I am converting an incoming movie to the MPEG-2 transport stream format for live streaming, but the result is not playable. When I validate the .m3u8 file with mediastreamvalidator, it says "WARNING: stream discontinuity detected without EXT-X-DISCONTINUITY tag". The conversion is done with FFmpeg; please help me understand what I am missing.
Sri
This possibly refers to a PTS and PCR discontinuity. When different streams are generated from a fresh context, the PTS and PCR values might be completely out of sync, which can be the problem that triggers the discontinuity warning.
With FFmpeg, you can rewrite the timestamps on the output files with the setpts filter.
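A minimal sketch of resetting timestamps with the setpts/asetpts filters (filtering forces a re-encode; the filenames and codec choices are only examples):
ffmpeg -i input.ts -vf setpts=PTS-STARTPTS -af asetpts=PTS-STARTPTS -c:v libx264 -c:a aac -f mpegts output.ts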
