We have deployed WebRTC on Wowza. However, we are getting our own voice back. Could this be feedback or echo?
As far as I can see, there is currently no built-in way in Wowza to combat echo. However, you can add extra layers of audio filtering - for example, this article shows how to use PBXMate for echo cancellation. In case the link becomes invalid, the full set of requirements is as follows:
The Flashphoner Client is a Flash-based client. It could be replaced by other Flash-based clients.
The Wowza server is a standard streaming server.
The Flashphoner server is responsible for translating the streaming protocol to the standard SIP protocol.
The Elastix server is a well-known unified communications server.
PBXMate is an Elastix add-on for audio filtering.
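Separately from the server-side filtering chain above, it is also worth confirming that acoustic echo cancellation is enabled on the capturing client and that the publishing page is not playing the publisher's own stream back out loud. Below is a minimal browser-side sketch using standard getUserMedia constraints; this is a complementary client-side measure, not a Wowza or PBXMate feature.

// Browser-side sketch: request a microphone track with built-in audio processing.
// These are standard MediaTrackConstraints; browsers treat them as hints, so
// support and behaviour can vary.
async function getProcessedMicrophone(): Promise<MediaStream> {
  return navigator.mediaDevices.getUserMedia({
    audio: {
      echoCancellation: true,  // acoustic echo cancellation
      noiseSuppression: true,  // background noise reduction
      autoGainControl: true,   // level normalisation
    },
  });
}

// Also mute any local <video>/<audio> preview of the publisher's own stream,
// otherwise the microphone picks the playback up again as echo.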
I would like to use WebRTC, but instead of going peer-to-peer I would like to broadcast my audio/video feed to Node.js in real time. I can encode the video at 125 kbps and 10-12 frames per second for smooth transmission. The idea is that Node.js will receive this feed, save it, and broadcast it at the same time as a real-time session/webinar. I can connect peer-to-peer, but I am not sure how to:
send the feed to Node.js instead of a peer
receive the feed on Node.js
The WebRTC protocol suite is complex enough that implementing a selective forwarding unit (SFU) from scratch would likely take a team of experts at least a year. It requires handling a variety of networking protocols, including UDP datagrams and TCP, and it may require transcoding between video and audio codecs.
The good news is that browser endpoints are now excellent, and open-source server implementations are good enough to get you to a minimum viable product.
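To make "send the feed to Node.js instead of a peer" concrete, here is a minimal browser-side sketch. The WebSocket signalling endpoint and message shape are assumptions; on the Node side you would terminate the connection with an existing SFU or WebRTC library (for example mediasoup, Janus or node-webrtc) rather than implementing the protocol stack yourself, as the answer above suggests.

// Hypothetical signalling endpoint on the Node server.
const signalling = new WebSocket("wss://example.com/webrtc-signalling");

async function publish(): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
  const pc = new RTCPeerConnection();

  // Send the local tracks to the server instead of to a browser peer.
  stream.getTracks().forEach((track) => pc.addTrack(track, stream));

  // Trickle ICE candidates to the server as they are discovered.
  pc.onicecandidate = (e) => {
    if (e.candidate) {
      signalling.send(JSON.stringify({ type: "candidate", candidate: e.candidate }));
    }
  };

  // Apply the server's answer when it arrives.
  signalling.onmessage = async (msg) => {
    const data = JSON.parse(msg.data);
    if (data.type === "answer") {
      await pc.setRemoteDescription({ type: "answer", sdp: data.sdp });
    }
  };

  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  signalling.send(JSON.stringify({ type: "offer", sdp: offer.sdp }));
}

publish().catch(console.error);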
What's the most straightforward way of sending and receiving real-time audio between a VoIP calling service and a Node app that is only connected to the internet? It needs to be able to 'dial' a call and send/receive audio.
Right now, the architecture I've figured out is roughly: use Twilio's Elastic SIP Trunking, set up a SIP server like Asterisk that proxies RTP to WebRTC, connect that to Twilio, and then use something like JsSIP (although I'm not even sure it allows getting an audio stream in a Node environment) as a SIP-over-WebRTC client. But this is extremely complicated to set up and just feels like overkill.
Is there an easier way or service that provides this functionality, or at least an existing guide on how to do this?
You have the following options:
1) WebRTC
2) EAGI (one-way caller audio delivered to your script on file descriptor 3) - see the sketch after this list
3) Asterisk to JACK
4) Write your own C/C++ handler and process the audio in whatever format you want.
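For option 2, here is a minimal sketch of an EAGI script in Node, assuming it is launched from the Asterisk dialplan (for example EAGI(/path/to/script.js), a hypothetical path). Asterisk speaks the AGI protocol on stdin/stdout and delivers one-way caller audio as 8 kHz 16-bit signed linear PCM on file descriptor 3.

import * as fs from "fs";

// Caller audio arrives on file descriptor 3 (receive-only).
const audio = fs.createReadStream("", { fd: 3 });
audio.on("data", (chunk: Buffer) => {
  // chunk is raw 16-bit signed linear PCM at 8 kHz; forward it wherever needed.
  process.stderr.write(`got ${chunk.length} bytes of caller audio\n`);
});

// The AGI environment (channel name, caller ID, ...) arrives on stdin first;
// AGI commands can be written back on stdout.
process.stdin.on("data", (line) => {
  process.stderr.write(`AGI: ${line}`);
});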
Good day! I'm a newbie at video streaming. Can you help me find good ways to make video streaming secure?
I'm having some security issues with my video hosting project.
I am creating a web page that embeds a video stream hosted on a different server from the one where the web page is deployed.
Server 1 (the web page with the video embed) calls the video stream on Server 2 (the video host).
The problem is that they are hosted on completely different networks. Should Server 2, where the video is hosted, be private and only allow Server 1 to fetch the video stream (a server-to-server transfer of data), or should it be public so clients can access it directly?
Can you help me decide what to do to secure my videos?
I badly need some ideas on this... thanks, guys!
How are you streaming and what streaming protocol are you using?
Server-to-server transfer won't help secure the video. It is better to stream the video directly from Server 2 (the video host) to the client, so there is no extra overhead on Server 1 (the web page with the embed). You need a secure way to protect the video on Server 2; if Server 2 is not secure, streaming through Server 1 won't help either.
Here are the security levels of different ways of delivering video:
1) Progressive download. This works over plain HTTP. In this approach the video URL is visible in the browser, and once someone has the URL they can download the video like any normal file. Security is very low here; even if you sign the video URL, the user can still download the video easily.
2) Streaming. You can stream the video using a protocol such as RTMP. In this approach the video cannot be downloaded directly, but capture software can still record the stream and save it to a PC.
3) Secure streaming. There are protocols such as RTMPE (I have only tried RTMPE). Here the stream is encrypted on the server and decrypted on the client, so capture software cannot grab the raw stream.
Along with approach 3, signing the video URL adds more security (see the sketch below). Hope this helps.
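As a concrete illustration of the URL-signing mentioned above, here is a minimal Node sketch. The query parameter names, the TTL, and the shared secret are assumptions; Server 2 has to recompute and validate the token and expiry the same way before serving the stream.

import * as crypto from "crypto";

// Hypothetical shared secret known only to Server 1 and Server 2.
const SECRET = process.env.VIDEO_URL_SECRET ?? "change-me";

function signVideoUrl(path: string, ttlSeconds = 300): string {
  const expires = Math.floor(Date.now() / 1000) + ttlSeconds;
  const token = crypto
    .createHmac("sha256", SECRET)
    .update(`${path}:${expires}`)
    .digest("hex");
  return `${path}?expires=${expires}&token=${token}`;
}

// Example: Server 1 embeds this URL in the page; Server 2 rejects the request
// if the token does not match or the expiry has passed.
console.log(signVideoUrl("/videos/lecture-01.mp4"));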
I am using RTSP to transmit video from a server to a client.
At some points during the transmission I need the server to "send" metadata to the client (some information that something has changed).
I need the sessions to be fully "standard" (VLC should be able to display the video).
I thought of having the client send DESCRIBE to the server at some interval and using the SDP data to carry the relevant information.
Is that a "standard" approach? Shouldn't DESCRIBE be used for initialization purposes only?
Thanks.
According to the RTSP standard, the DESCRIBE method simply describes the URL in the request and should only be used for that purpose. Try using the GET_PARAMETER method, or use the extensibility features of RTSP.
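For illustration, a GET_PARAMETER exchange could look roughly like the following. The parameter name scene_metadata is made up; RFC 2326 leaves the actual parameters implementation-defined, and the Content-Length values assume CRLF-terminated bodies.

GET_PARAMETER rtsp://example.com/stream RTSP/1.0
CSeq: 431
Session: 123456
Content-Type: text/parameters
Content-Length: 16

scene_metadata

RTSP/1.0 200 OK
CSeq: 431
Content-Type: text/parameters
Content-Length: 26

scene_metadata: camera-2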
RTSP 2.0 (still a draft) has support for PLAY_NOTIFY, although I am not 100% sure that is what you need. You may just need a server capable of sending an ANNOUNCE to the client when the media changes... or that may be covered by using dynamic payload types and specifying an additional payload type in the SDP...
My media server implementation should handle this easily and contains an RtspClient which may help as well!
http://net7mma.codeplex.com
So I am trying to create an RTSP server that streams music.
I do not understand how the server plays the music and how different requests get whatever is playing at that time.
So, to organize my questions:
1) How does the server play a music file?
2) What does the request to the server look like to get what is currently playing?
3) What does the response look like, so that the music plays in the client that requested it?
First, read the RTSP specification (RFC 2326), then SDP (RFC 4566), then RTP (RFC 3550). Then you can ask more sensible questions.
1) It doesn't play anything itself: the server streams small chunks of the audio data to the client, telling it when each chunk is to be played.
2) There is no such request. If you want, you can have a URL for live streaming and, in the response to the client's RTSP DESCRIBE request, tell it what is currently on.
3) Read the RTSP document; it is all there. The answer to this question is a response like:
RTSP/1.0 200 OK
CSeq: 3
Session: 123456
Range: npt=now-
RTP-Info: url=trackID=1;seq=987654
But to actually get the music playing you will have to do a lot more to initiate a streaming session (OPTIONS, DESCRIBE, SETUP and then PLAY). A sketch of the RTP side follows.
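To make answer 1) above more concrete, here is a minimal Node sketch of RTP packetisation: each chunk of audio is prefixed with a 12-byte RTP header whose sequence number and timestamp let the client order the chunks and schedule playback. The payload type 10 (L16, 44.1 kHz stereo) and the SSRC value are assumptions and must match whatever the SDP announces.

import * as dgram from "dgram";

const sock = dgram.createSocket("udp4");
let seq = 0;
let timestamp = 0;
const ssrc = 0x12345678; // arbitrary identifier for this stream

function sendRtp(payload: Buffer, samplesInPayload: number, host: string, port: number): void {
  const header = Buffer.alloc(12);
  header[0] = 0x80;                    // version 2, no padding, no extension, no CSRCs
  header[1] = 10;                      // payload type 10 (L16/44100/2), marker bit clear
  header.writeUInt16BE(seq & 0xffff, 2);
  header.writeUInt32BE(timestamp >>> 0, 4);
  header.writeUInt32BE(ssrc, 8);

  sock.send(Buffer.concat([header, payload]), port, host);

  seq += 1;
  timestamp += samplesInPayload;       // timestamp advances in media-clock units
}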
You should first be clear about what RTSP and RTP are. The Real Time Streaming Protocol (RTSP) is a network control protocol designed for use in communications systems to control streaming media servers, whereas most RTSP servers use the Real-time Transport Protocol (RTP) for media stream delivery; RTP typically uses UDP to deliver the packet stream. Try to understand these concepts first.
Then have a look at this project:
http://sourceforge.net/projects/unvedu/
This is an open-source project developed by our university, which is used to stream video (MKV) and audio files over UDP.
You can also find a .NET implementation of RTP and RTSP at https://net7mma.codeplex.com/, which includes an RTSP client and server implementation and many other useful utilities, e.g. implementations of many popular digital media container formats.
The solution has a modular design and, at the time of writing, better performance than ffmpeg or libav.