How do I use DASH instead of HLS in Cloudflare video streaming? - http-live-streaming

I'm using Cloudflare as the video streaming provider for a project. I'm trying to pre-fetch multiple videos on a mobile device, and using HLS (with its larger chunk size) is hurting performance; this is why I would like to request the video using DASH. The Cloudflare team writes: "Cloudflare uses two standards for adaptive streaming: HLS and MPEG-DASH".
Every GET request for the video has yielded an HLS stream. Is there any way to request DASH given my Cloudflare video ID?

Typically a video origin server and CDN will serve the stream that best matches a device's capabilities; usually this is triggered by the device requesting either an HLS or an MPEG-DASH stream, the two most popular streaming formats today.
Cloudflare Stream should provide you URLs to both an HLS manifest and a DASH manifest automatically; they should look something like:
MPEG-DASH: https://videodelivery.net/VIDEOID/manifest/video.mpd
HLS: https://videodelivery.net/VIDEOID/manifest/video.m3u8
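If your player needs the DASH manifest explicitly, you can build the .mpd URL from the video ID and fetch it yourself. A minimal TypeScript sketch, assuming the videodelivery.net URL scheme above (the video ID is a placeholder):

// Sketch: fetch the MPEG-DASH manifest for a Cloudflare Stream video.
// VIDEO_ID is a placeholder; the URL scheme follows the answer above.
const VIDEO_ID = 'your-video-id';
const dashUrl = `https://videodelivery.net/${VIDEO_ID}/manifest/video.mpd`;

async function fetchDashManifest(): Promise<string> {
  const res = await fetch(dashUrl);
  if (!res.ok) throw new Error(`Manifest request failed: ${res.status}`);
  return res.text(); // the DASH MPD document (XML)
}

fetchDashManifest().then((mpd) => console.log(mpd.slice(0, 200)));

Hand the same .mpd URL to a DASH-capable player (e.g. dash.js or Shaka Player) and it will use DASH instead of HLS.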

Related

DVR RTMP Stream into HLS (m3u8) in SRS

For SRS SaaS, the DVR output is HLS (m3u8), as mentioned here https://github.com/ossrs/srs/issues/2856 and here: https://mp.weixin.qq.com/s/UXR5EBKZ-LnthwKN_rlIjg.
The same idea was also discussed recently in https://www.bilibili.com/video/BV1234y1b7Pv?spm_id_from=333.999.0.0 (around timestamp 9:50), which mentions that for SRS SaaS the DVR output is HLS (m3u8).
Question: can we also DVR an RTMP stream into HLS (m3u8) in SRS, since only the MP4 and FLV options are discussed in the wiki https://github.com/ossrs/srs/wiki/v4_EN_DVR?
The answer is that SRS supports DVR to FLV/MP4 files, and you can also use HLS as the DVR format, because what DVR does is convert RTMP to a file format such as FLV/MP4/HLS.
If you only want a recording of the live stream, you can simply use the DVR feature of SRS and you will see the recorded files being generated. It works like this:
OBS --RTMP--> SRS --DVR--> FLV/MP4 file
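A minimal sketch of the built-in DVR configuration (directive names follow the SRS wiki linked above; verify them against your SRS version):

# srs.conf fragment: record each publishing session to an FLV file.
vhost __defaultVhost__ {
    dvr {
        enabled     on;
        # session = one file per publishing session; segment also exists.
        dvr_plan    session;
        # [app], [stream] and [timestamp] are expanded by SRS.
        dvr_path    ./objs/nginx/html/[app]/[stream].[timestamp].flv;
    }
}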
But you can also use HLS to DVR the live stream, which is a more complex but more powerful approach. For example, if you stop publishing, adjust the encoder parameters (or even swap the encoder), then continue publishing, how do you DVR it all into one file?
If you use the built-in DVR of SRS, you will get multiple files, because each stream is converted to a file and DVR starts a new file whenever a new publishing session begins.
If you use HLS, you need to write a backend server that receives the on_hls callback; your server decides whether to keep writing to the previous m3u8 or start a new one. Because you must write a backend server, this approach is more complex. It works like this:
OBS --RTMP--> SRS --HLS--> m3u8/ts file
               +
               +--on_hls--> Your Backend Server
                            (HTTP Callback)
There is an example of how to use HLS to convert RTMP to a VoD file; please read srs-cloud for details.
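As a sketch of that backend side, assuming SRS is configured to send the on_hls HTTP callback to your server (the payload field names follow the SRS callback docs, so verify them against your SRS version):

// Sketch: a Node/Express receiver for the SRS on_hls callback.
// Assumes SRS posts JSON for each new .ts segment; field names may
// differ between SRS versions, so check the HTTP callback docs.
import express from 'express';

const app = express();
app.use(express.json());

app.post('/api/v1/hls', (req, res) => {
  const { app: streamApp, stream, file, seq_no } = req.body;
  console.log(`segment #${seq_no} of ${streamApp}/${stream}: ${file}`);
  // Here you decide whether to append this segment to the previous
  // m3u8 or to start a new playlist for a new publishing session.
  res.send('0'); // SRS treats a 0 response as success
});

app.listen(8085, () => console.log('on_hls receiver on :8085'));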

How do I receive video stream data in node server?

I don't know how to get started with this.
What I am trying to do is get a video + audio stream from the front-end and host the live stream as MP4 that's accessible in the browser.
I was able to find information on WebRTC, socket.io and RTMP, but I'm not really sure which tool to use or what's best suited for something like this.
Also, a follow-up question: my front-end is an iOS app, so in what format would I send the live stream to the server?
It depends on which live streaming protocol you want the player to use; as @Brad said, HLS is the most common protocol for players.
Note: besides HLS, an iOS native app can use fijkplayer or FFmpeg to play almost any live streaming format, like HLS, RTMP or HTTP-FLV, even MKV. However, the most straightforward solution is HLS: you only need a video tag to play MP4 or HLS, and MSE is also an option, using flv.js/hls.js to play live streams on iOS/Android/PC; this post is about these protocols.
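As an illustration of that note, a browser-side sketch that plays an HLS URL natively where supported and falls back to MSE via hls.js elsewhere (the stream URL is a placeholder):

// Sketch: play HLS in the browser, natively or through hls.js (MSE).
import Hls from 'hls.js';

const video = document.querySelector('video') as HTMLVideoElement;
const src = 'https://example.com/live/stream.m3u8'; // placeholder URL

if (video.canPlayType('application/vnd.apple.mpegurl')) {
  video.src = src; // Safari/iOS play HLS natively in the video tag
} else if (Hls.isSupported()) {
  const hls = new Hls();
  hls.loadSource(src);
  hls.attachMedia(video); // other browsers play HLS through MSE
}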
The stream flow is like this:
FFmpeg/OBS ---RTMP--->--+
                        +--> Media Server ---> HLS/HTTP-FLV ---> Player
Browser ----WebRTC--->--+
The protocol you push to the media server, or receive in your node server, depends on your encoder: RTMP or H5 (WebRTC):
For RTMP, you could use FFmpeg or OBS to push the stream to your media server.
If you want to push the stream from an H5 page, the only way is to use WebRTC.
The media server converts the protocol from the publisher side to the player side, which use different protocols in live streaming right now (as of 2022.01); please read more in this post.
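For the node server itself, one option (an assumption on my part, not the only choice) is the node-media-server package, which accepts RTMP ingest and remuxes it to HTTP-FLV for players; a minimal sketch:

// Sketch: a Node media server using the node-media-server package.
// It accepts RTMP from FFmpeg/OBS and serves HTTP-FLV to players.
import NodeMediaServer from 'node-media-server';

const nms = new NodeMediaServer({
  rtmp: { port: 1935, chunk_size: 60000, gop_cache: true, ping: 30, ping_timeout: 60 },
  http: { port: 8000, mediaroot: './media', allow_origin: '*' },
});
nms.run();

// Publish:  ffmpeg -re -i input.mp4 -c copy -f flv rtmp://localhost/live/demo
// Play:     http://localhost:8000/live/demo.flv  (e.g. with flv.js)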

Is there any point in supporting byte-range requests for VBR (variable bitrate) audio?

I'm working on a server application which will hand out audio streams. These audio streams are being consumed in web browsers, specifically through the <audio> element.
Now, let's say the server application offers MP3 streams. If these MP3 streams are VBR (variable bitrate), is there any use for the server application to support byte-range requests at all?
The server application will respond with a Content-Length header, but I can't see how the client (browser) will be capable of letting the user seek to any given position in the stream as we're dealing with variable bitrate.
Am I missing something?
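For reference, this is what the byte-range mechanics being discussed look like on the wire (a sketch; the path and byte counts are made up):

GET /streams/song.mp3 HTTP/1.1
Range: bytes=1000000-1999999

HTTP/1.1 206 Partial Content
Content-Range: bytes 1000000-1999999/8000000
Content-Length: 1000000

The question is whether such a byte offset is useful for seeking when, with VBR, the mapping from playback time to byte position is not linear.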

Stream audio from web browser

Is it possible to capture all the audio on my PC (from a web browser) and stream it via LAN?
I use the Yandex Music (music.yandex.ru) service. I'm logged into my Yandex account and have no audio files, just an online stream. I want to make something like a LAN radio: users will visit an HTML page located on our server and listen to my audio stream.
Can I use Icecast or similar software to stream non-file audio?
Or should I connect my PC's line out to line in (or mic) and read the audio stream via Java or Flash? Any ideas?
Have you tried looking at things like JACK and Soundflower? These allow you to reroute the audio from one program to another. You could then reroute the sound into Java or Flash and go from there.
https://rogueamoeba.com/freebies/soundflower/
http://jackaudio.org/
You can try WebRTC and the MediaStream API for that. You can get audio from the user's audio device or a stream they are playing in the browser. You can find documentation on those APIs on the MDN pages.
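A browser-side sketch of that approach (mimeType support varies by browser):

// Sketch: capture audio with the MediaStream API and chunk it with
// MediaRecorder; each chunk could be sent to a server for relaying.
async function captureAudio(): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const recorder = new MediaRecorder(stream, { mimeType: 'audio/webm' });
  recorder.ondataavailable = (e) => {
    // e.data is a Blob with ~1s of audio; ship it to your server,
    // e.g. over a WebSocket, to distribute on the LAN.
    console.log(`got ${e.data.size} bytes of audio`);
  };
  recorder.start(1000); // emit a chunk every 1000 ms
}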

how to create a RTSP streaming server

So I am trying to create an RTSP server that streams music.
I do not understand how the server plays music and how different requests get whatever is playing at that time.
So, to organize my questions:
1) How does the server play a music file?
2) What does the request to the server look like to get what's currently playing?
3) What does the response look like that gets the music playing in the client that requested it?
First: READ THIS (RTSP), and THEN READ THIS (SDP), and then READ THIS (RTP). Then you can ask more sensible questions.
It doesn't; the server streams small chunks of the audio data to the client, telling it when each chunk is to be played.
There is no such request. If you want, you can have a URL for live streaming and, in the reply to the RTSP DESCRIBE request, tell the client what is currently on.
Read the first (RTSP) document; it's all there! The answer to your question is this:
RTSP/1.0 200 OK
CSeq: 3
Session: 123456
Range: npt=now-
RTP-Info: url=trackID=1;seq=987654
But to get the music playing you will have to do a lot more to initiate a streaming session.
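A sketch of such a session, following the RTSP spec (the URL, ports and session ID here are made up):

C->S: DESCRIBE rtsp://example.com/radio RTSP/1.0
      CSeq: 1
S->C: RTSP/1.0 200 OK
      CSeq: 1
      Content-Type: application/sdp
      ...SDP body describing the audio track...
C->S: SETUP rtsp://example.com/radio/trackID=1 RTSP/1.0
      CSeq: 2
      Transport: RTP/AVP;unicast;client_port=8000-8001
S->C: RTSP/1.0 200 OK
      CSeq: 2
      Session: 123456
      Transport: RTP/AVP;unicast;client_port=8000-8001;server_port=9000-9001
C->S: PLAY rtsp://example.com/radio RTSP/1.0
      CSeq: 3
      Session: 123456

Only after SETUP and PLAY does the server start sending RTP packets carrying the audio.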
You should first be clear about what RTSP and RTP are. The Real Time Streaming Protocol (RTSP) is a network control protocol designed for use in communications systems to control streaming media servers, whereas most RTSP servers use the Real-time Transport Protocol (RTP) for the media stream delivery itself. RTP uses UDP to deliver the packet stream. Try to understand these concepts first.
Then have a look at this project:
http://sourceforge.net/projects/unvedu/
This is an open source project developed by our university, which is used to stream video (MKV) and audio files over UDP.
You can also find a .NET implementation of RTP and RTSP at https://net7mma.codeplex.com/, which includes an RTSP client and server implementation and many other useful utilities, e.g. implementations of many popular digital media container formats.
The solution has a modular design and, at the current time, better performance than FFmpeg or libav.
