What would be the best way to simulcast multiple broadcasters' streams using the Nginx RTMP module? Considering Docker, but it seems like overkill - node.js

It looks like simulcasting a single user's RTMP stream is as simple as creating an RTMP server configuration in my nginx.conf file and using the push directive to push the stream to the different social media RTMP URLs. But what would be the best way to do this if I have multiple users who each need to stream to their own social media live accounts?
Here are the possible solutions I can think of:
Use Docker to create a separate container with Nginx RTMP installed for each individual who signs up. I could then edit and manage separate RTMP server configurations and reload them so each user can begin streaming. This sounds like a lot of overhead, though.
If it's possible, I could set up multiple RTMP server configs, one per user, in a single environment (sites-enabled) and reload the config without NGINX going down. But using different ports doesn't seem ideal, and if something happens while the server is reloading the config, there is a possibility of every individual who is streaming dropping their connection. Edit: sites-enabled seems out of the question, since the rtmp block needs to be in the root context (nginx.conf only), as per https://github.com/arut/nginx-rtmp-module/issues/1492
Map each user's stream key to their own set of push directives and forward to that user's social media accounts?
Any thoughts?
Here's my example single server configuration:
rtmp {
    server {
        listen 1935;
        chunk_size 4096;

        application live {
            live on;
            record off;
            push rtmp://facebook/STREAM_KEY_HERE;
            push rtmp://youtube/STREAM_KEY_HERE;
        }
    }
}
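For what it's worth, one way to sketch the per-user idea without extra ports is to give each user their own application block, generated from a template, each with its own push targets. A graceful `nginx -s reload` lets old worker processes keep serving existing connections while new ones pick up the new config. The application names, keys, and ingest URLs below are placeholders, not real values:

```nginx
rtmp {
    server {
        listen 1935;
        chunk_size 4096;

        # One application per user, generated from a template when they sign up.
        # "user_12345", "user_67890", and the keys are made-up placeholders.
        application user_12345 {
            live on;
            record off;
            push rtmp://live-api-s.facebook.com:443/rtmp/FB_KEY_FOR_USER_12345;
            push rtmp://a.rtmp.youtube.com/live2/YT_KEY_FOR_USER_12345;
        }

        application user_67890 {
            live on;
            record off;
            push rtmp://a.rtmp.youtube.com/live2/YT_KEY_FOR_USER_67890;
        }
    }
}
```

Each user would then publish to rtmp://yourhost/user_12345/&lt;key&gt;, so the application name itself acts as the routing key.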
I'm new to RTMP if you haven't picked that up yet :P

Check out the following project on GitHub, it does exactly what you need:
https://github.com/jprjr/multistreamer
Note: the project is now archived.

Related

How to ensure access to the right backend M3U8 file in origin cluster mode

From the SRS "how to transmux HLS" wiki, we know SRS generates the corresponding M3U8 playlist in hls_path. Here is my config file:
http_server {
    enabled on;
    listen 8080;
    dir ./objs/nginx/html;
}
vhost __defaultVhost__ {
    hls {
        enabled on;
        hls_path /data/hls-records;
        hls_fragment 10;
        hls_window 60;
    }
}
With a single SRS server, every client playing the HLS stream hits the same server the stream was pushed to, so that's OK. But in origin cluster mode there are many SRS servers, and each stream lives on only one of them. When a client plays an HLS stream, we can't guarantee it reaches the right origin SRS server (it gets a 404 if the stream isn't there). This is unlike RTMP and HTTP-FLV streams, where SRS uses the coworker HTTP-API feature to redirect the client to the right origin.
To fix this issue, I can think of two solutions:
Use a specialized backend HLS segment SRS server:
Don't generate the M3U8 on the origin SRS servers; instead, forward every stream to this one server, generate all the M3U8 playlists there, and proxy all HLS requests to it (using nginx). The cons of this solution: it is limited to one instance, has no scaling ability, and is a single point of failure.
The origin srs.conf forward config looks like this:
vhost same.vhost.forward.srs.com {
    # Forward the stream to other servers.
    forward {
        enabled on;
        destination 192.168.1.120:1935;
    }
}
where 192.168.1.120 is the backend hls segment SRS server.
Use cloud storage such as NFS, a K8S PV, or a distributed file system:
Mount the cloud storage as a local folder on every SRS server, so that whichever server a stream lands on, the M3U8 file and ts segments are written to the same shared storage, and the HTTP server serves them as static files. From my testing, if the cloud storage's write speed is reliable, this is a good solution. But if the network is shaky or the write speed cannot keep up with the ingest rate, it blocks the other coroutines and makes SRS behave abnormally.
The hls_path config looks like this:
vhost __defaultVhost__ {
    hls {
        enabled on;
        hls_path /shared_storage/hls-records;
        hls_fragment 10;
        hls_window 60;
    }
}
Here '/shared_storage' is an NFS/CephFS/PV mount point.
From my perspective, neither of these solutions radically resolves the access issue. I'm looking for a more reliable, production-grade solution for this case.
Since you use OriginCluster, you must have lots of streams to serve, with lots of encoders publishing to your media servers. The keys to solving the problem:
Never use a single server; use a cluster for elasticity, because you might get many more streams in the future. So forward is not good, because you must configure a special set of streams to forward to, which amounts to a manual hash algorithm.
Besides bandwidth, disk IO is also a bottleneck. You definitely need a high-performance network storage cluster. But be careful: never let SRS write directly to that storage, as it will block SRS coroutines.
So the best solution, as I know, is to:
Use the SRS Origin Cluster and write HLS to the local disk (a RAM disk is even better), so that disk IO never blocks the SRS coroutines (which are driven by state-threads network IO).
Use a network storage cluster to store the HLS files, for example cloud storage like AWS S3, or NFS/K8S PV/a distributed file system. Use nginx or a CDN to deliver the HLS.
Now the problem is: How to move data from memory/disk to a network storage cluster?
You must build a service, in Python or Go, that:
Uses the on_hls callback to be notified when HLS files are ready to be moved.
Uses the on_publish callback to be notified to start FFmpeg to convert RTMP to HLS.
Note that FFmpeg should pull the stream from an SRS edge, never from the origin server directly.
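A minimal sketch of such a mover service in Python, using only the standard library. The payload field names (`action`, `file`, `app`, `stream`) and the response convention follow the SRS HTTP callback docs, but verify them against your SRS version; the port, endpoint, and shared path are assumptions, and the local copy stands in for a real S3/NFS upload:

```python
import json
import shutil
from http.server import BaseHTTPRequestHandler, HTTPServer

SHARED_ROOT = "/shared_storage/hls-records"  # assumed shared mount point

def dest_path(app: str, stream: str, ts_file: str) -> str:
    """Compute where a local .ts segment should land on shared storage."""
    name = ts_file.rsplit("/", 1)[-1]
    return f"{SHARED_ROOT}/{app}/{stream}/{name}"

class OnHlsHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        msg = json.loads(body)
        if msg.get("action") == "on_hls":
            # Field names assumed from the SRS callback docs.
            src = msg["file"]
            shutil.copy(src, dest_path(msg["app"], msg["stream"], src))
        # SRS treats a 0 response code as success.
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"0")

# To run: HTTPServer(("0.0.0.0", 8085), OnHlsHandler).serve_forever()
```

On the SRS side this would be wired up with something like `on_hls http://127.0.0.1:8085/;` inside the vhost's `http_hooks` block (again, check the exact directive against your SRS version).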

WebRTC through host with nodeJS express and socketio

I created a web app to let people communicate. I want to implement screen sharing and audio calls.
My current app is programmed in NodeJs and uses express and socket.io to serve the client connection and open a socket connection. I want to stream video and audio. My problem with WebRTC is that all those who connect to a call are vulnerable to a DDoS attack since it is p2p. I found an article from Discord explaining how they managed to let the entire traffic go through their servers: https://blog.discord.com/how-discord-handles-two-and-half-million-concurrent-voice-users-using-webrtc-ce01c3187429, that's exactly what I want to achieve.
Could I possibly use socket.io-stream https://www.npmjs.com/package/socket.io-stream ? I didn't yet figure out how, and it seems like all socket.io streaming libraries are made for file upload/download, not for actual video/audio streaming.
If that doesn't work, a library like what Discord built would be the perfect solution, since all traffic is proxied rather than p2p. But I couldn't find any such libraries; maybe I'm just searching for the wrong thing?
Best regards
You will want to use a SFU.
Each peer negotiates a session with the SFU and then exchanges media through it, so each peer only ever communicates with the server. This has lots of other benefits and is what most WebRTC deployments today use.
There are lots of Open Source SFUs out there. You can even build your own with Open Source libraries.

Video Stream Hosting

Good day! I'm a newbie at video streaming. Can you help me find good ways to make video streaming secure?
I'm having some issues on my video hosting project security.
I am creating a web page which embeds a video stream hosted on a different server from the one where my web page is deployed.
Server 1 (web page video embed) calls the video to stream from Server 2 (video host).
The problem is that they are hosted on completely different networks. Should Server 2, where the video is hosted, be private and only allow Server 1 to fetch the video stream (a server-to-server transfer), or should it be public so that clients can access it directly?
Can you help me decide what to do to secure my videos?
I badly need some idea on this... thanks guys!
How are you streaming and what streaming protocol are you using?
Server-to-server won't help in securing the video. It is better to stream the video directly from Server 2 (the video host) to the client, so there is no extra overhead on Server 1 (the web page embed). You need a secure way to protect your video on Server 2; if Server 2 is not secure, streaming through Server 1 won't help either.
Here are the security levels of different video streaming approaches:
1. Progressive download. This can be done over plain HTTP. In this approach the video URL is visible in the browser, and once a user has the URL they can download the video as a normal file. Security is very low here; even if you sign the video URL, the user can still download the video easily.
2. Streaming. You can stream the video using a protocol like RTMP. In this approach the video can't be downloaded directly, but good capture software can still record the stream and save it to the PC.
3. Secure streaming. There are protocols like RTMPE. I have only tried RTMPE: the content is encrypted on the server and decrypted on the client, so capture software can't simply grab the stream.
Along with approach 3, signing the video URL adds more security. Hope this helps.
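Signed URLs only need a secret shared between the page server and the video host. A minimal sketch in Python, where the query parameter names, the TTL, and the secret are all made up for illustration:

```python
import hashlib
import hmac
import time

# Shared between your web app (signer) and the video host (verifier).
SECRET = b"replace-with-a-real-secret"

def sign_url(path: str, ttl: int = 300) -> str:
    """Append an expiry timestamp and an HMAC token to a video path."""
    expires = int(time.time()) + ttl
    token = hmac.new(SECRET, f"{path}{expires}".encode(), hashlib.sha256).hexdigest()
    return f"{path}?expires={expires}&token={token}"

def verify_url(path: str, expires: int, token: str) -> bool:
    """Check the token on the video host before serving the stream."""
    if time.time() > expires:
        return False  # link has expired
    expected = hmac.new(SECRET, f"{path}{expires}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token)
```

The video host rejects any request whose token doesn't verify, so a copied URL stops working once the expiry passes.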

Node.JS/Meteor on multiple servers serving the same application

When deploying Node.JS/Meteor for a large-scale application, a single CPU will not be sufficient, and we would also like to run it on multiple servers for redundancy.
What is the recommended setup for such a deployment? How does the load balancing work? Will this support the push data technology across servers (one client connects to server 1, a second client connects to server 2, and we would like an update in client one to be seen in client two and vice versa)?
Thanks Roni.
At the moment you just need to use a proxy between them. The paid Galaxy solution should help, but details are scarce at the moment as the product isn't out yet.
You can't simply proxy (normally using nginx, etc.) between two servers, as each server stores the user's state (i.e. their login state) during the DDP session (the raw wire protocol Meteor uses to transmit data).
There is one way you could do it at the moment: get Meteorite and install a package called meteor-cluster.
The package helps you relay data between the instances via Redis. A YouTube video also shows this and how to set it up.
An alternative solution is to use Hipache to manage the load balancing. You can use multiple workers (backends) for one frontend, like so:
$ redis-cli rpush frontend:www.yourdomain.com http://address.of.server.1
$ redis-cli rpush frontend:www.yourdomain.com http://address.of.server.2
There is more information on how to do this in the git page I linked to above, there is a single config file to edit and the rest is done for you. You may also want to have a dedicated server for MongoDB.
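If you do put nginx in front instead, the usual workaround for the per-server session state is sticky sessions, so each client keeps hitting the same backend for the lifetime of its DDP/websocket connection. A minimal sketch, where the backend addresses and domain are placeholders:

```nginx
upstream meteor_app {
    ip_hash;  # pin each client IP to one backend, preserving its session state
    server 10.0.0.1:3000;
    server 10.0.0.2:3000;
}

server {
    listen 80;
    server_name www.yourdomain.com;

    location / {
        proxy_pass http://meteor_app;
        proxy_http_version 1.1;
        # Required for the websocket (DDP) connection to be proxied.
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```

Note that ip_hash-style stickiness only routes clients consistently; it does not relay updates between instances, so you would still need something like the Redis-based approach above for cross-server reactivity.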

Which is more secure for audio streams: RTMP or HTTP Streaming?

I'd like the .mp3 files being streamed to be inaccessible to listeners, but without having to sacrifice mobile compatibility. Which protocol would be best for that?
There is no such thing as inaccessible streaming. How are you going to stream if it is inaccessible? :) If a user can listen to a song via streaming, it is accessible to that user.
If you are trying to prevent users from recording or downloading the .mp3 files from your stream, you are falsely relying on security through obscurity. If a device can get data over a network and play it, there is surely a way to record that data: either by capturing the network traffic, or by reverse-engineering your application to understand the protocol you use to play songs. Whatever you do to obscure your protocol, it will surely be reverse engineered.