Livestream of prerecorded flv videos with ffmpeg and red5 - linux

My goal is to achieve the following:
RTMP livestream of prerecorded FLV videos using ffmpeg.
Videos should play continuously, just like a TV station.
We are currently using Red5 and ffmpeg to achieve this, and we have successfully published a live stream of a single prerecorded video to JW Player using the following conversion command:
for i in *.avi; do ffmpeg -i "$i" -acodec copy -vcodec copy -f flv rtmp://localhost/oflaDemo/livestream; done
The problem comes when we need to livestream two videos one after the other. The user has to click the play button again to stream the second video, which is not how a TV station works; instead, the stream should continue playing without the user having to click play at the end of each video.

Maybe this is helpful: concatenating media files using ffmpeg:
http://ffmpeg.org/trac/ffmpeg/wiki/How%20to%20concatenate%20%28join,%20merge%29%20media%20files
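If all the source files share the same codecs, the concat demuxer described on that page can feed the RTMP endpoint as one continuous stream. A minimal sketch, assuming FLV inputs and the Red5 URL from the question (the playlist file name is illustrative):

# build a playlist of all local flv files, then stream it as a single flv
printf "file '%s'\n" *.flv > playlist.txt
ffmpeg -re -f concat -safe 0 -i playlist.txt -c copy -f flv rtmp://localhost/oflaDemo/livestream

The -re flag paces reading at native frame rate, which is what you want for a live feed; -c copy avoids re-encoding, so all inputs must share codec parameters.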

As an alternative, I can suggest Wowza Streaming Engine (commercial software, though a developer license is free, with a limit on connections and 180-day validity). I tried the ffmpeg concatenation route, but all in all it was just a big mess with huge files.
With Wowza you can quite easily create your own playlists with scheduling, put them on repeat, etc., by writing your own simple modules in Java or by using the premade modules ( http://www.wowza.com/forums/content.php?145-How-to-schedule-streaming-with-Wowza-Streaming-Engine-ServerListenerStreamPublisher ).
I've done this and have several live streams of prerecorded video files.

Related

When video or audio is played from a URI, is it streamed or downloaded fully and then played?

I have a content creation site I am building, and I'm confused about audio and video.
If a content creator's audio or video is stored in S3 and I want to display their file, will the HTML video or audio player stream the media, or will it download it fully and then play it?
I ask because the video or audio may be significantly long, like 2 hours, for example. I need to know how to handle that use case.
Lastly, what file type is most suitable for viewing on web pages? It seems like MPEG-4 is the best bet. Is that true?
Most video player clients and browsers will attempt to stream the video if they can.
For an mp4 video file hosted on a server, as long as the header is at the start of the file and the server accepts range requests, the player will download the video in chunks and start playing as soon as it has enough to decode the first frames.
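Both conditions are easy to verify and fix yourself. A hedged sketch (the URL is a placeholder; a 206 Partial Content response means the server honours range requests):

# ask for only the first kilobyte; a 206 response means byte ranges are supported
curl -s -o /dev/null -D - -H "Range: bytes=0-1023" https://example.com/video.mp4

# remux without re-encoding so the moov atom (the header) sits at the start
ffmpeg -i input.mp4 -c copy -movflags +faststart output.mp4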
For more professional streaming services, they will generally use an adaptive bit rate streaming protocol like DASH or HLS (see this answer: https://stackoverflow.com/a/42365034/334402) and again the video will be streamed in chunks, or segments, and will start playing while it is streaming.
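As a rough illustration of the segmented approach, ffmpeg can package an existing file into HLS; a minimal sketch, assuming the source is already H.264/AAC so the streams can be copied (segment length and file names are illustrative):

# split into ~6 second .ts segments plus an index playlist for the player
ffmpeg -i input.mp4 -c copy -f hls -hls_time 6 -hls_playlist_type vod \
  -hls_segment_filename 'seg_%03d.ts' playlist.m3u8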
To answer your last question, be aware that the raw video is encoded (e.g. H.264, VP9, etc.) and the video, audio, subtitle, etc. tracks are stored in a container (e.g. mp4, WebM, etc.).
The most common combination at this time is probably H.264 encoding in an mp4 container.
The particular profile for H.264 can also matter depending on the device; baseline is probably the most widely supported profile at this time. You can find examples of media support for different devices online, e.g. for Android: https://developer.android.com/guide/topics/media/media-formats
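For instance, a hedged encoding sketch aimed at maximum device compatibility (input/output names are illustrative; yuv420p is the pixel format older decoders expect):

ffmpeg -i input.mov -c:v libx264 -profile:v baseline -level 3.0 -pix_fmt yuv420p \
  -c:a aac -movflags +faststart output.mp4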
@Mick's answer is spot on. I'll just add that mp4 (with H.264 encoding) will work in just about every browser out there.
The issue with mp4 files (especially with a 2-hour-long movie) isn't so much the seeking and streaming. If your creator uploads a 4K video, that's what you'll deliver to everyone (even mobile phones). HLS streaming, on the other hand, has adaptive bitrates, where the video adapts to both the screen and the available network speed. You'll get better playback results with less buffering (and, if you're using AWS, a lot less data egress) with video streaming.
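To make "adaptive bitrates" concrete, here is a hedged ffmpeg sketch that produces two HLS renditions plus a master playlist the player switches between (resolutions, bitrates, and file names are illustrative):

# one 720p and one 360p rendition, each paired with the same audio track
ffmpeg -i input.mp4 \
  -filter_complex "[0:v]split=2[v1][v2];[v1]scale=-2:720[v720];[v2]scale=-2:360[v360]" \
  -map "[v720]" -map "[v360]" -map 0:a -map 0:a \
  -c:v libx264 -c:a aac -b:v:0 3000k -b:v:1 800k -b:a 128k \
  -f hls -hls_time 6 -var_stream_map "v:0,a:0 v:1,a:1" \
  -master_pl_name master.m3u8 \
  -hls_segment_filename "stream_%v_%03d.ts" stream_%v.m3u8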
(There are a bunch of APIs and services that can help you do this, including api.video (where I work), Mux, and others.)

Create dynamic audio broadcast stream (node, ffmpeg, ..?)

I have coded a videoboard, like a soundboard but with video. You go to one URL that's just a black screen and another one that has a list of different videos (sender). When you click one of these videos, it plays on the black screen (receiver). If you play 2 different videos at the same time, both videos are shown next to each other on the receiver. That's been working fine for several months now. It just creates multiple HTML video elements with multiple source tags (H.265 mp4 and VP9 webm).
I recently made a Discord bot that takes the webm, extracts the Opus stream, and plays its sound in the voice channel where the bot is connected. This has one disadvantage: it can only play one sound at a time. It happens a lot that multiple videos/sounds are playing at the same time, so this is a bit of a bummer.
So I thought I should create an audio stream on the server that hosts the videoboard and just connect the bot to that stream. But I have no clue how to do this. All I know is that it's very likely going to involve ffmpeg.
What would be the best option here? What I think I need is basically an infinite silence stream and the possibility to add an audio file onto that stream at any point, which would then play simultaneously with other audio files that were added before and have not finished playback yet. How is that possible? Somehow with m3u8 playlist files or via the RTSP protocol?
Thanks :)
I think this can be helpful for you: https://bitbucket.org/kaleniuk_ihor/neuro_vision/src/db_watch/
This library was also very useful for me: https://github.com/kyriesent/node-rtsp-stream (install it with npm i node-rtsp-stream).
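For the "infinite silence stream" part of the question, ffmpeg's anullsrc source can generate an endless silent base. A hedged sketch pushing it to an Icecast mount the bot could consume (host, mount name, and password are hypothetical); note that plain ffmpeg cannot attach new inputs to an already-running mix, so the dynamic overlaying would need something like Liquidsoap on top of this:

# endless silence, encoded as Opus and pushed to a (hypothetical) Icecast mount
ffmpeg -re -f lavfi -i anullsrc=channel_layout=stereo:sample_rate=48000 \
  -c:a libopus -f ogg icecast://source:hackme@localhost:8000/videoboard.ogg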

Azure Media Services for transcoding and delivering audio

I have a common use case where I want to do the following:
Upload an audio file (wav/mp3).
Transcode it to 128k or 192k mp3.
Store the audio asset.
Allow the audio asset to be streamed.
Support streaming actions such as play, pause, and seek.
The documentation for Azure Media Services suggests it might support this, but I am not too sure; it seems to focus on video content. Does anyone have experience with this?
You can manage audio and encode audio-only assets with Azure Media Services.
WAV is a supported input format/container for an input asset. To see the full list of supported formats, check the following link:
https://azure.microsoft.com/en-us/documentation/articles/media-services-media-encoder-standard-formats/
Check https://github.com/Azure/azure-content/blob/master/articles/media-services/media-services-custom-mes-presets-with-dotnet.md#audio_only to see the audio-only preset options you can use to encode an audio-only asset.
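Independently of Azure, the transcode step itself is a one-liner in ffmpeg, which can be handy for sanity-checking the output you expect from the service (bitrates taken from the question; file names are illustrative):

ffmpeg -i input.wav -c:a libmp3lame -b:a 128k output_128k.mp3
ffmpeg -i input.wav -c:a libmp3lame -b:a 192k output_192k.mp3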

How to stream an mp3 audio file on the web

We all probably know gaana.com and saavn.com; those websites stream mp3 audio files to the client side but don't allow users to grab the audio files. We want to know what technology they use to stream the mp3 files.
Are they using a streaming server or something else?
Can you describe the technology they are using to stream the audio files?
We are also creating a web app where audio files will be streamed on the client side, and we likewise don't want users to be able to download our mp3 files, just like gaana.com and saavn.com.
We are also curious: if we want to stream our mp3 files in three different qualities, what should we do? Should we convert all the mp3 files into the three different qualities and upload them to the server, or does another solution exist for this purpose?
If you want to code your own streaming server, you can use this link:
https://pypi.python.org/pypi/DeeFuzzer/ (a Python-based streaming server). You could also use ffmpeg or even VLC.
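For the "three qualities" part of the question: yes, the usual approach is to pre-encode every file at each bitrate and let the client pick one. A hedged sketch with ffmpeg (bitrates and naming scheme are illustrative):

# encode each mp3 at three bitrates, e.g. song.mp3 -> song_64k.mp3 etc.
for f in *.mp3; do
  for br in 64k 128k 192k; do
    ffmpeg -i "$f" -c:a libmp3lame -b:a "$br" "${f%.mp3}_${br}.mp3"
  done
done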

qt faststart and ffmpeg to generate a live mp4 file [duplicate]

This question is a duplicate of: Live video streaming using progressive download (and not RTMP) in Flash (closed).
I am using ffmpeg to create an mp4 file on my server. I am also trying to use qt-faststart to move the moov atom to the front so the file will stream. I have searched all over the internet with no luck. Is it possible to put my video/audio into an mp4 file and play it while ffmpeg is still dumping video and audio data into it? The point is that I am trying to stream from a camera, and Android is horrid... I know both iOS and Android support mp4, so I was trying to figure out a way to turn my RTSP camera feed into mp4.
Main point of the story: I want to continuously feed my camera feed into an mp4 container and still be able to play back the file so my clients can watch.
Any help appreciated, thank you.
You can publish a live stream, and when the stream has ended, publish the progressive download.
In FFmpeg, to stream live and simultaneously save a duplicate of that stream into a file without encoding twice, you can use the tee pseudo-muxer. Something like this:
ffmpeg \
  -i <input-stream> \
  -map 0 -c copy \
  -f tee "[movflags=+faststart]output.mp4|[f=ffm]http://<ffserver>/<feed_name>"
(-map 0 -c copy keeps every stream and avoids re-encoding; the tee muxer needs the output format spelled out where it cannot be inferred from a file extension, hence f=ffm for the ffserver feed.)
Update: You might try to directly stream a fragmented mp4.
Update 2:
Create a fragmented mp4 (note that -frag_duration is given in microseconds, so this cuts a fragment roughly every second):
ffmpeg -i input -frag_duration 1000000 stream.mp4
Normally, when serving a file, a web server wants to know the file size in advance, so to serve a file whose size is not yet known you need to configure your web server to use Chunked Transfer Encoding.
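A quick way to confirm chunked transfer is actually in effect (the URL is a placeholder):

# a chunked response carries a Transfer-Encoding header and no Content-Length
curl -s -D - -o /dev/null http://example.com/stream.mp4 | grep -i transfer-encoding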
