How to ensure access to the right backend M3U8 file in origin cluster mode - http-live-streaming

From the SRS "how to transmux HLS" wiki, we know SRS generates the corresponding M3U8 playlist in hls_path. Here is my config file:
http_server {
    enabled on;
    listen 8080;
    dir ./objs/nginx/html;
}
vhost __defaultVhost__ {
    hls {
        enabled on;
        hls_path /data/hls-records;
        hls_fragment 10;
        hls_window 60;
    }
}
In the single-SRS-server case, every client playing the HLS stream reaches the same SRS server the stream is pushed to, so that's fine. But in origin cluster mode there are many SRS servers, and each stream lives on only one of them. When a client plays an HLS stream we can't guarantee it reaches the right origin SRS server (it gets a 404 HTTP status if the stream doesn't exist there). Unlike RTMP and HTTP-FLV streams, for which SRS uses the coworker HTTP-API feature to redirect to the right origin, HLS has no such redirect.
To fix this issue, I can think of the following two solutions:
Use a specialized backend HLS segment SRS server:
Don't generate the M3U8 on the origin SRS servers; instead, forward every stream to this dedicated server, generate all M3U8 playlists there, and proxy every HLS request to it using nginx (a minimal proxy sketch follows the forward config below). The cons of this solution: it is limited to one instance, has no scaling ability, and is a single point of failure.
The origin srs.conf forward config looks like this:
vhost same.vhost.forward.srs.com {
    # forward stream to other servers.
    forward {
        enabled on;
        destination 192.168.1.120:1935;
    }
}
where 192.168.1.120 is the backend HLS segment SRS server.
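For completeness, here is a minimal nginx sketch of that proxy layer; the listen port, the matched extensions, and the backend address/port are assumptions to adapt to your deployment:
http {
    server {
        listen 80;
        # Send every playlist/segment request to the single backend
        # HLS segment SRS server's HTTP server.
        location ~ \.(m3u8|ts)$ {
            proxy_pass http://192.168.1.120:8080;
        }
    }
}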
Use cloud storage such as NFS/K8S PV/Distributed File System:
Mount the cloud storage as a local folder on every SRS server. No matter which SRS server a stream lands on, its M3U8 file and TS segments are written to the same big storage, so for any HLS request the HTTP server can serve them as static files. From my tests, if the cloud storage's write speed is reliable, this is a good solution. But if the network shakes or the write speed can't keep up with the receive speed, the writes block other coroutines and SRS misbehaves.
The hls_path config looks like this:
vhost __defaultVhost__ {
    hls {
        enabled on;
        hls_path /shared_storage/hls-records;
        hls_fragment 10;
        hls_window 60;
    }
}
Here '/shared_storage' is an NFS/CephFS/PV mount point.
In my view, the above solutions do not fundamentally resolve the access issue. Is there a better, more reliable production solution for this case?

Since you use OriginCluster, you must have lots of streams to serve, with lots of encoders publishing streams to your media servers. The keys to solving the problem:
Never use a single server; use a cluster for elasticity, because you might get many more streams in the future. So forward is not good, because you must configure a specific set of streams to forward to, which is effectively a manual hash algorithm.
Besides bandwidth, disk IO is also a bottleneck. You definitely need a high-performance network storage cluster. But be careful: never let SRS write directly to that storage, because it will block the SRS coroutines.
So the best solution, as far as I know, is to:
Use SRS Origin Cluster and write HLS to your local disk, or better a RAM disk, to make sure disk IO never blocks the SRS coroutines (which are driven by state-threads network IO); a RAM-disk config sketch follows this list.
Use a network storage cluster to store the HLS files, for example cloud storage like AWS S3, or NFS/K8S PV/a distributed file system, whatever fits. Use nginx or a CDN to deliver the HLS.
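As a hedged sketch of the RAM-disk idea (the mount point and size are assumptions):
# Back hls_path with a tmpfs (RAM disk) so segment writes never hit a slow disk:
# mount -t tmpfs -o size=2g tmpfs /mnt/ramdisk
vhost __defaultVhost__ {
    hls {
        enabled on;
        hls_path /mnt/ramdisk/hls;
        hls_fragment 10;
        hls_window 60;
    }
}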
Now the problem is: how do you move the data from memory/disk to the network storage cluster?
You must build a service, in Python or Go, to:
Use the on_hls callback to notify your service to move the HLS files.
Use the on_publish callback to notify your service to start FFmpeg to convert RTMP to HLS.
Note that FFmpeg should pull the stream from an SRS edge, never from the origin server directly. A sketch of such a callback service follows.
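Below is a minimal sketch of such a service, assuming SRS's http_hooks on_hls callback; the JSON field names ("file", "m3u8_url", ...) and the uploadToStorage helper are assumptions to verify against your SRS version. On the SRS side the hook is enabled like this:
vhost __defaultVhost__ {
    http_hooks {
        enabled on;
        on_hls http://127.0.0.1:8085/api/v1/hls;
    }
}
And the receiving service, sketched in Go:
package main

import (
    "encoding/json"
    "log"
    "net/http"
)

// Fields assumed from the SRS on_hls callback; verify against your version.
type onHlsEvent struct {
    App      string  `json:"app"`
    Stream   string  `json:"stream"`
    File     string  `json:"file"`     // local path of the finished .ts segment
    M3u8URL  string  `json:"m3u8_url"` // playlist URL
    Duration float64 `json:"duration"`
}

// uploadToStorage is a hypothetical helper that copies a segment from the
// local/RAM disk to the network storage cluster (S3, NFS, ...).
func uploadToStorage(localPath, remoteKey string) error {
    log.Printf("upload %s -> %s", localPath, remoteKey)
    return nil
}

func main() {
    http.HandleFunc("/api/v1/hls", func(w http.ResponseWriter, r *http.Request) {
        var ev onHlsEvent
        if err := json.NewDecoder(r.Body).Decode(&ev); err != nil {
            http.Error(w, err.Error(), http.StatusBadRequest)
            return
        }
        // Copy asynchronously so the callback returns immediately and the
        // slow storage never blocks SRS.
        go uploadToStorage(ev.File, ev.App+"/"+ev.Stream)
        w.Write([]byte("0")) // SRS treats a "0" response as success
    })
    log.Fatal(http.ListenAndServe(":8085", nil))
}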

Related

DVR RTMP Stream into HLS (m3u8) in SRS

For SRS SaaS, the DVR output is HLS (m3u8), as mentioned here https://github.com/ossrs/srs/issues/2856 and here: https://mp.weixin.qq.com/s/UXR5EBKZ-LnthwKN_rlIjg.
The same idea is also discussed recently at https://www.bilibili.com/video/BV1234y1b7Pv?spm_id_from=333.999.0.0; around timestamp 9:50 it is mentioned that, for SRS SaaS, the DVR output is HLS (m3u8).
Question: can we also DVR an RTMP stream into HLS (m3u8) in SRS, since only MP4 and FLV options are discussed in the wiki https://github.com/ossrs/srs/wiki/v4_EN_DVR?
The answer: SRS supports DVR to FLV/MP4 files, and you can also use HLS as DVR, because what DVR does is convert RTMP into files such as FLV/MP4/HLS.
If you only want a recording of the live stream, you can simply use SRS's DVR and you will see the files being generated. It works like this:
OBS --RTMP--> SRS --DVR--> FLV/MP4 file
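A hedged config sketch of the built-in DVR (the path template and the session plan are assumptions; see the DVR wiki linked above for the full options):
vhost __defaultVhost__ {
    dvr {
        enabled on;
        # one recording file per publish session
        dvr_plan session;
        dvr_path ./objs/nginx/html/[app]/[stream].[timestamp].flv;
    }
}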
But you could also use HLS to DVR the live stream, which is a more complex and more powerful way. For example, if you stop publishing, adjust the encoder params or simply switch encoders, then continue publishing, how do you DVR it all into one file?
If you use SRS's DVR, you will get multiple files, because each stream is converted to a file and DVR starts a new file whenever a new publish begins.
If you use HLS, you need to write a backend server that receives the on_hls callback. That server decides whether to keep writing to the previous m3u8 or start a new one; it is fully under your control, but because you must write a backend server it is more complex (a small sketch of that decision follows below). It works like this:
OBS --RTMP--> SRS --HLS--> m3u8/ts file
               |
               +--on_hls--> Your Backend Server
                            (HTTP Callback)
There is an example of how to use HLS to convert RTMP into a VoD file; please read srs-cloud for details.
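A minimal sketch (not SRS or srs-cloud code) of the "continue or start a new playlist" decision an on_hls backend can make; the directory layout, the 60-second idle threshold, and the appendSegment helper are assumptions:
package main

import (
    "fmt"
    "os"
    "path/filepath"
    "time"
)

// appendSegment records one finished TS segment into <dir>/<stream>.m3u8.
// If the playlist is missing or has been idle for over 60s (the publisher
// stopped for a while), start a fresh playlist; otherwise keep appending,
// which stitches re-publishes of the same stream into one recording.
func appendSegment(dir, stream, tsFile string, duration float64) error {
    playlist := filepath.Join(dir, stream+".m3u8")

    info, err := os.Stat(playlist)
    freshStart := err != nil || time.Since(info.ModTime()) > 60*time.Second

    flags := os.O_CREATE | os.O_WRONLY | os.O_APPEND
    if freshStart {
        flags = os.O_CREATE | os.O_WRONLY | os.O_TRUNC
    }
    f, err := os.OpenFile(playlist, flags, 0644)
    if err != nil {
        return err
    }
    defer f.Close()

    if freshStart {
        fmt.Fprintln(f, "#EXTM3U")
        fmt.Fprintln(f, "#EXT-X-VERSION:3")
        fmt.Fprintln(f, "#EXT-X-TARGETDURATION:10")
    }
    // Write #EXT-X-ENDLIST separately when the whole recording is finished.
    _, err = fmt.Fprintf(f, "#EXTINF:%.3f,\n%s\n", duration, tsFile)
    return err
}

func main() {
    // Example call, as an on_hls handler might make it.
    if err := appendSegment("/data/hls-records", "livestream", "livestream-001.ts", 9.97); err != nil {
        fmt.Println(err)
    }
}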

What would be the best way to simulcast multiple broadcasters' streams using the Nginx RTMP module? Considering using Docker, but it seems like overkill

It looks like simulcasting a single user's RTMP stream is as simple as creating an RTMP server configuration in my nginx.conf file and using the push directive to push the stream to the different social media RTMP URLs, but what would be the best way to do this if I have multiple users needing to stream to their own social media live accounts?
Here are the possible solutions I can think of:
Use Docker to create a container with Nginx RTMP installed for each individual who signs up. I could then edit and manage separate RTMP server configurations and reload them so each user can begin streaming. This sounds like a lot of overhead, though.
If possible, I could set up multiple RTMP server configs, one per user, in a single environment (sites-enabled) and reload the config without NGINX going down, but using different ports doesn't seem ideal, and if something happens while the server is reloading the config, every individual currently streaming might drop their connection. Edit: sites-enabled seems out of the question since the rtmp block needs to be in the root context (nginx.conf only), as per https://github.com/arut/nginx-rtmp-module/issues/1492
Map each user's stream key to their own RTMP push destinations and forward to that user's social media?
Any thoughts?
Here's my example single server configuration:
rtmp {
    server {
        listen 1935;
        chunk_size 4096;
        application live {
            live on;
            record off;
            push rtmp://facebook/STREAM KEY HERE;
            push rtmp://youtube/STREAM KEY HERE;
        }
    }
}
I'm new to RTMP if you haven't picked that up yet :P
Check out the following project on GitHub; it does exactly what you need:
https://github.com/jprjr/multistreamer
Note: the project is now archived.
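For the "map each user's stream key to their destinations" idea (option 3 in the question), one hedged sketch uses nginx-rtmp's exec_push directive to start a relay per publish; push_targets.sh is a hypothetical script that looks up the user's social media RTMP URLs by stream key and runs ffmpeg against them:
rtmp {
    server {
        listen 1935;
        chunk_size 4096;
        application live {
            live on;
            record off;
            # $name is the stream key; the script decides where to relay, e.g.
            #   ffmpeg -i rtmp://127.0.0.1/live/$name -c copy -f flv <per-user destination>
            exec_push /usr/local/bin/push_targets.sh $name;
        }
    }
}
This keeps a single nginx.conf and avoids reloads when users are added, at the cost of one ffmpeg relay process per push destination.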

CDN for RTMP live streaming URL

I am new to RTMP and live streaming. We have a live stream URL, rtmp://someIPaddress/match, that we receive from a 3rd-party source.
What I am doing is installing the Flussonic streamer on an Ubuntu machine in AWS and feeding the live stream URL into Flussonic.
Flussonic provides a URL that is used in my Android app, through which the end users watch.
Is it possible to use a CDN in my scenario? I'd prefer to use the AWS CDN, but I am confused; I have only used a CDN when images are stored in an AWS S3 bucket.
CloudFront supports RTMP (as a distribution type) but only for VOD; it cannot serve a live stream. The other, better way would be to convert your RTMP live stream to an HLS/DASH stream using MediaLive, push it to MediaPackage, and use the CDN (CloudFront) to serve the HLS/DASH stream; however, I think it will be more costly than your current solution.

How to stream audio files in real time

I'm writing an audio streaming server - similar to Icecast - and I'm running into a problem with streaming audio files. Proxying audio works fine (an audio source connects and sends audio in real time, which is then transmitted to clients over HTTP), but when I try to stream an audio file it goes by too quickly - clients end up with the entire audio file in their local buffer. I want them to have only a few tens of seconds in their local buffer.
Essentially, how can I slow down the sending of an audio file over HTTP?
The files are all MP3. I've managed to get it pretty much working by experimenting with hardcoded thread delays etc... but that's not a sustainable solution.
If you're sticking with HTTP, you could use chunked transfer encoding and delay sending the packets/chunks. This would indeed be something similar to a hardcoded thread::sleep, but you could use an event loop to determine when to send the next chunk instead of pausing the thread.
You might run into timing issues, though; maybe your sleep logic causes longer delays than the runtime of the song. YouTube has logic similar to what you're describing: it looks like they break videos into multiple HTTP requests, and the frontend client requests a new chunk when its buffer runs low. Breaking the file into multiple HTTP requests and reassembling them at the client might have the characteristics you're looking for.
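A sketch of that pacing idea in Go (the asker uses Rust's std::net::TcpStream, but the approach carries over): send fixed-size chunks on a timer sized to the stream's bitrate, so clients only buffer a few seconds ahead. The 128 kbps constant bitrate, chunk size, and file name are assumptions; a robust server would pace per MP3 frame.
package main

import (
    "net/http"
    "os"
    "time"
)

const (
    bitrateBytesPerSec = 128000 / 8 // assume 128 kbps CBR
    chunkBytes         = 4096
)

func streamMP3(w http.ResponseWriter, r *http.Request) {
    f, err := os.Open("song.mp3") // hypothetical file
    if err != nil {
        http.Error(w, err.Error(), http.StatusNotFound)
        return
    }
    defer f.Close()

    w.Header().Set("Content-Type", "audio/mpeg")
    flusher, _ := w.(http.Flusher)

    // One chunk per tick, spaced so throughput matches the audio bitrate.
    interval := time.Duration(float64(chunkBytes) / bitrateBytesPerSec * float64(time.Second))
    ticker := time.NewTicker(interval)
    defer ticker.Stop()

    buf := make([]byte, chunkBytes)
    for range ticker.C {
        n, err := f.Read(buf)
        if n > 0 {
            if _, werr := w.Write(buf[:n]); werr != nil {
                return // client disconnected
            }
            if flusher != nil {
                flusher.Flush()
            }
        }
        if err != nil { // io.EOF or a real error: stop streaming
            return
        }
    }
}

func main() {
    http.HandleFunc("/stream", streamMP3)
    http.ListenAndServe(":8000", nil)
}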
You could simply implement the HTTP Range header and allow the client to request only a specific range of the MP3 file: https://developer.mozilla.org/en-US/docs/Web/HTTP/Range_requests
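As an illustration of the Range approach (shown in Go for brevity; the same idea applies in Rust by parsing the Range header yourself), Go's http.ServeContent already implements Range and Content-Range handling for any seekable file; the file name is an assumption:
package main

import (
    "net/http"
    "os"
    "time"
)

func main() {
    http.HandleFunc("/song.mp3", func(w http.ResponseWriter, r *http.Request) {
        f, err := os.Open("song.mp3") // hypothetical file
        if err != nil {
            http.NotFound(w, r)
            return
        }
        defer f.Close()
        // ServeContent answers Range requests with 206 Partial Content,
        // letting the client pull only the byte ranges it needs.
        http.ServeContent(w, r, "song.mp3", time.Time{}, f)
    })
    http.ListenAndServe(":8000", nil)
}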
The easiest method (by far) would be to have the client request chunks of the audio file on demand. std::net::TcpStream (which is what you said you're using) doesn't have a method to throttle the transfer rate, so you don't have many options for limiting the streaming backend short of hard-coded thread delays.
As an example, you can have your client store a segment of audio, and when the user listening to the audio reaches a certain point before the end of the segment (or skips ahead), the client makes a request to the server to fetch the relevant segment.
This is similar to how real-world streaming services (like Youtube) work, because as you said, it would be a bad idea to store the entire file client-side.

Which is more secure for audio streams: RTMP or HTTP Streaming?

I'd like the .mp3 files being streamed to be inaccessible to the listeners, but without sacrificing mobile compatibility. Which protocol would be best for that?
There is no such thing as inaccessible streaming. How are you going to stream if it is inaccessible? :) If a user can listen to any song via streaming, it is accessible to the user.
If you are trying to prevent users from recording or downloading the .mp3 files of your stream, you are relying on security through obscurity. If a device can get data over a network and play it, there is surely a way to record that data, either by capturing the network traffic or by reverse-engineering your application to understand the protocol you use to play songs. Whatever you do to obscure your protocol, it will surely be reverse-engineered.
