Video chat: is Red5 faster/needed? Why not just P2P?

Pardon my ignorance, but I am researching how to build a video chatroom, and what I am finding seems really counterintuitive to me. From what I have read, it sounds like the standard is for each user to stream their video to a media server, like Red5, and then the server sends the stream to the other person. Intuitively, this seems to add a middleman that would add lag to the video streaming, because the video has to go to a server and then turn around and go to a person, rather than just directly to a person. Why not just go P2P with something like Adobe Stratus/Cirrus? Just use the service to get the other user's IP, and then stream your video to them directly? Yet it seems like almost everyone uses an FMS-style media server like Red5.
What am I failing to understand here? What is the advantage of having this "middleman"?

Sending the video directly to all viewers would require a lot of bandwidth (download speeds may be high enough, but upload speeds are usually much lower). NAT also makes it difficult to connect to a specific computer: from the public side, there is only one IP for all the computers behind the router.
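To make the upload constraint concrete, here is a back-of-the-envelope sketch (the per-stream bitrate and the upload budget are assumed numbers for illustration, not figures from the answer above):

```typescript
// In a full P2P mesh, every participant uploads one copy of their stream
// to every other participant, so upload cost grows with room size.
const STREAM_KBPS = 1500;        // assumed bitrate of one video stream
const UPLOAD_BUDGET_KBPS = 5000; // assumed residential upload capacity

function meshUploadKbps(participants: number): number {
  // Each peer sends (participants - 1) copies of its own stream.
  return (participants - 1) * STREAM_KBPS;
}

for (const n of [2, 3, 4, 5]) {
  const needed = meshUploadKbps(n);
  const verdict = needed <= UPLOAD_BUDGET_KBPS ? "OK" : "over budget";
  console.log(`${n} participants: needs ${needed} kbps upload (${verdict})`);
}
// With a media server in the middle, each peer uploads exactly one stream
// (1500 kbps here) regardless of room size; the server absorbs the fan-out.
```

For a one-to-one call, P2P is indeed the cheaper path; the server starts paying for itself as soon as the room grows or NAT traversal fails.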

Related

How to use the `getUserMedia()` API to simulate WebRTC-like behaviour?

My primary intention is to set up a VoIP session between two users, A and B, where the raw audio/video media bytes fetched from A's browser are played in B's browser and vice versa.
The reason is that when users C and D are added to the call, we don't have to create a P2P mesh network, which limits performance.
I tried recording media with getUserMedia() and playing it back, but it is not real time and gives a bad user experience. (However, I haven't yet experimented with small chunks of around 200 ms.)
Is there an approach where I can get the raw bytes of the media and play them in the other browser? Currently I have a server in between that can connect to both peers if required.
Any online examples or libraries are welcome.
I have already asked two questions in this regard, each with a 100-point bounty, but they weren't of much use:
How to use libsrtp or similar library to decrypt/encrypt the WebRTC data stream?
How to integrate part of WebRTC as a static / dynamic library with the existing C++ code?
Related: How to stream live video playing in my browser to another user's browser?
If I understand you correctly, you're looking for a way to have more than two users in the session without using a mesh topology, right?
That's possible and configurable: some users may be active speakers, or everyone can be an active speaker rather than only a receiver, whichever configuration you choose. It sounds to me like you're asking about video conferencing.
There are a couple of tools for this; the best one I can recommend is mediasoup, an SFU (Selective Forwarding Unit).
I don't know if I understand correctly, but it is not likely that you will get raw video data and play it in the browser; it would just kill your bandwidth and performance, because the raw data is huge.
You need to use compressed data (a media codec, e.g. H.264), and you need a protocol to send and receive it. If you are looking for sub-second latency, then WebRTC is already your best choice here. If you have a server in between, distribute your media through that server instead of a mesh. Check this out for WebRTC network topologies:
https://antmedia.io/webrtc-servers/
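For the plain two-user A/B case, the real-time path is to feed the `getUserMedia()` tracks into an `RTCPeerConnection` instead of recording and replaying chunks. A minimal caller-side sketch follows; `sendToPeer` is a hypothetical stand-in for your own signaling channel, not a browser API:

```typescript
// Stub for whatever signaling transport you use (WebSocket, HTTP, ...).
const sendToPeer = (msg: unknown): void => {
  console.log("signal to remote peer:", JSON.stringify(msg));
};

async function startCall(remoteVideo: HTMLVideoElement): Promise<RTCPeerConnection> {
  const pc = new RTCPeerConnection({
    iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
  });

  // Capture live audio/video and attach every track to the connection.
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
  for (const track of stream.getTracks()) pc.addTrack(track, stream);

  // Play whatever the remote side sends back.
  pc.ontrack = (e) => { remoteVideo.srcObject = e.streams[0]; };

  // Trickle ICE candidates and the offer through the signaling server.
  pc.onicecandidate = (e) => { if (e.candidate) sendToPeer({ candidate: e.candidate }); };
  await pc.setLocalDescription(await pc.createOffer());
  sendToPeer({ sdp: pc.localDescription });

  return pc;
}
```

For the C-and-D case, this same `RTCPeerConnection` API is what an SFU such as mediasoup terminates on its side; the browser code stays much the same while the server does the forwarding.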

WebRTC 5-person conference with recording for playback?

I am working on a project for large-group broadcasting in WebRTC. Since it needs to work on iOS and Android devices, I am using Kurento and the iOSWEBRTC Cordova plugin to build it. I am curious whether anyone can help improve my plan, or whether there is an easier way to achieve this.
We need to have a video/audio conference with 5 people per room; however, we also need to be able to show that video to large audiences. My idea would be to use Kurento as a middleman and capture the streams into .webm files for live playback as the conference is going on.
Is there a better way to achieve this? And how would I play back the .webm file as it is being recorded? It needs to update and continue playing as more video is sent: basically a live-stream copy of the camera.
I am unsure whether I am going the best route, but I figured this would reduce the bandwidth compared with my original idea, which was:
A 5-person conference for the broadcasters, with X viewers each downloading those streams directly. However, I realized the upload bandwidth requirement would be crazy high, which is why I settled on this idea. Additionally, the viewers do not have to see the conference in real time like the broadcasters do; the broadcasters need to be able to see and communicate with each other at the same time, while the viewers can be a few seconds behind.
TL;DR:
I'm trying to make a 5-person video conference with video/audio capture, and then live-stream it to the viewers' players. This would avoid PeerConnection bandwidth limitations. Would this work, or am I forgetting something?
You'll need to look into using an SFU or MCU. An MCU is very costly, but it multiplexes video streams, sends down a single video stream to all peers, and can also record that stream. An SFU is a single point of receipt for all streams, and it selectively forwards them to clients. It could record the individual streams, and you could then do post-processing to make a single recording out of the multiple recorded streams. A mesh network of connections really doesn't work for this use case.
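For the "play the .webm while it is still being recorded" part, browsers can append chunks to a Media Source Extensions buffer as they arrive. A rough sketch, assuming a hypothetical `/live.webm` endpoint that serves the growing recording with chunked transfer, and a VP8/Vorbis WebM file:

```typescript
// Progressively append a growing WebM recording to a <video> element.
// The /live.webm URL and the codec string are assumptions for illustration.
async function playGrowingWebm(video: HTMLVideoElement): Promise<void> {
  const mediaSource = new MediaSource();
  video.src = URL.createObjectURL(mediaSource);
  await new Promise<void>((resolve) =>
    mediaSource.addEventListener("sourceopen", () => resolve(), { once: true })
  );

  const buffer = mediaSource.addSourceBuffer('video/webm; codecs="vp8,vorbis"');
  const reader = (await fetch("/live.webm")).body!.getReader();

  for (;;) {
    const { done, value } = await reader.read();
    if (done) break; // a live source normally never finishes; handle errors instead
    if (buffer.updating) {
      // SourceBuffer accepts one append at a time; wait for the previous one.
      await new Promise((r) => buffer.addEventListener("updateend", r, { once: true }));
    }
    buffer.appendBuffer(value);
  }
}
```

Note that this only works if the WebM is written so that it stays decodable as it grows (clustered output, no seeking back to rewrite headers), which is worth verifying against what Kurento actually produces.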

Linux streaming server with playlist

Hello everybody, I would like your help: I need a recommendation for free video-streaming software that meets a few needs:
real-time transmission
the ability to create playlists for playback (for when there is no live broadcast)
the ability to transmit live remotely.
Some good streaming-video software for Linux requires a webcam, or requires the video files to be on the server itself, to stream live; I need to broadcast remotely.
My dedicated server will take charge of the transmission, and it will receive the stream from a client computer located in Brazil, so I need to do this remotely. So far I have not found anything, and I'm hoping you can point me to something good.
Note: it has to be free software.
If someone can make a recommendation I will be very grateful. Thank you for your attention.
The only big, robust free software I know of is Red5:
http://www.red5.org/
You may also want to look into the NGINX streaming module. I played with it for a little while and it worked great, though I never tested it under high load:
https://github.com/arut/nginx-rtmp-module
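If you try the module, a minimal configuration looks roughly like this (the application name is a placeholder; see the module's README for the full directive list):

```nginx
rtmp {
    server {
        listen 1935;        # default RTMP port

        application live {
            live on;        # accept live streams pushed to rtmp://host/live/<name>
            record off;     # don't write recordings to disk
        }
    }
}
```

A client-side encoder (ffmpeg, OBS, or VLC) then pushes its output to `rtmp://your-server/live/streamname`, and viewers pull from the same application.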
However, where I work we use Wowza. It's not free, but man is it easy and good:
http://www.wowza.com/
The thing about all of these is that I have never done exactly what you are describing. I've used all three for live streaming and they all work, but I have never done simulated live streaming like what you are looking for. I know Wowza can do it, I would be shocked if Red5 couldn't, and I have no idea about NGINX. It's not the best answer, but hopefully it gives you some options.
I know VLC has some playlist-to-stream abilities, so if nothing else you can use VLC on the client side and just push the stream to Red5. Hopefully this points you in the right direction!

Live media streaming involving different kinds of devices

I am working on a project that will involve HTTP live media streaming from a variety of devices: Android phones/tablets, iPhone, iPad, browsers, etc. It will be two-way communication for all the devices, with multiple devices connected to a conversation. I have implemented it partially, i.e. one way, by capturing audio from an Android phone (native app) and streaming it to a web browser (HTML5 app) via a PHP server using ffmpeg and cvlc.
I wanted to know the best way to go about this, and whether there are any standards to follow. Also, what kind of server should I be using? I don't want to use a streaming server like Red5; I would like to implement streaming logic similar to Apple's HTTP Live Streaming myself. I have come across MPEG-DASH, which seems to be a standard for HTTP streaming, though I still have to look deeper into it. I was also thinking of using NodeJS, given its popularity for streaming.
Another worry is how to capture media from the devices. Should I use the native capability of each device to convert media into MP4 (or whatever container it supports) and then stream that to the server, or capture audio and images for a particular period, send those to the server, and create a common output (I am not really sure about this idea)? The separate capture is basically to simplify video streaming from the server end to any device. I was also wondering whether I could completely bypass the server in some cases, such as a phone-to-phone or phone-to-tablet connection.
I just wanted to be sure of the things I will be using/implementing so that I wouldn't have to make drastic changes later on. Any help is deeply appreciated. Thank you.
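Since the question mentions NodeJS and Apple-style HTTP Live Streaming: the serving side of HLS is just plain HTTP, a `.m3u8` playlist plus media segments delivered with the right MIME types. A minimal sketch (the `./media` directory and the port are assumptions; segmenting and encoding happen in an external tool such as ffmpeg):

```typescript
// Minimal HLS file server for Node.js: serves the playlist and segments
// produced by an external packager. Not production-ready, just the idea.
import * as http from "http";
import * as fs from "fs";
import * as path from "path";

const MEDIA_DIR = path.resolve("./media"); // assumed playlist + segment location

const MIME: Record<string, string> = {
  ".m3u8": "application/vnd.apple.mpegurl",
  ".ts": "video/mp2t",
};

http.createServer((req, res) => {
  const file = path.join(MEDIA_DIR, path.normalize(req.url ?? "/"));
  const type = MIME[path.extname(file)];
  // Serve only known types from inside MEDIA_DIR (avoids path traversal).
  if (!type || !file.startsWith(MEDIA_DIR) || !fs.existsSync(file)) {
    res.writeHead(404);
    res.end();
    return;
  }
  res.writeHead(200, { "Content-Type": type });
  fs.createReadStream(file).pipe(res);
}).listen(8080, () => console.log("HLS server at http://localhost:8080"));
```

Any HLS-capable player (Safari natively, hls.js elsewhere) can then be pointed at the playlist URL it serves.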

How to program an audio/video application over a network?

I want to make a videoconferencing application (for fun, as a challenge). I have some ideas about this:
1) Take the audio/video streams (I don't know exactly what an audio/video stream is).
2) Pass them to a server that lets the clients communicate. I can figure out how to write a server (there are a lot of books and documentation about that), but I really don't know how to interact with the webcam or with audio/video in general.
I would like some links, books, and suggestions about the basics of digital audio/video, especially on the programming side. Please help me!
I want to make it run on a Linux platform.
Linux makes video grabbing really nice, as long as you have a driver that outputs the video stream to the /dev/video* devices. All you have to do is open a control connection to the device (an exercise for the OP) and then read from the device like a file (given the parameters set by the control connection). Audio should work the same way, but don't quote me on that.
By the way: video streaming from a server is a very complex issue. You have to develop your own protocol or use an existing one, and you have to be very aware of network delays, adjusting what you send to the client (resizing or recompressing) based on the link capacity between the client and the server.
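To illustrate the "adjust what you send to the link" point, here is a toy quality selector that picks an encoding preset from a measured throughput estimate (the preset ladder and the 25% headroom factor are invented for the example):

```typescript
// Choose the best preset that fits the measured client link,
// leaving headroom for jitter and competing traffic.
interface Preset { name: string; width: number; height: number; kbps: number }

const PRESETS: Preset[] = [ // illustrative ladder, highest quality first
  { name: "720p", width: 1280, height: 720, kbps: 2500 },
  { name: "480p", width: 854,  height: 480, kbps: 1200 },
  { name: "240p", width: 426,  height: 240, kbps: 400 },
];

function choosePreset(measuredKbps: number): Preset {
  const budget = measuredKbps * 0.75; // keep ~25% headroom
  return PRESETS.find((p) => p.kbps <= budget) ?? PRESETS[PRESETS.length - 1];
}

console.log(choosePreset(4000).name); // "720p" on a ~4 Mbps link
console.log(choosePreset(900).name);  // "240p" on a constrained link
```

Real systems (RTP with RTCP feedback, or adaptive HTTP streaming) do essentially this continuously, re-measuring and switching presets as the link changes.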
