What are the protocols used in TeamViewer audio chat?

My country is blocking VoIP, so basically I can't make audio or video calls. However, when using TeamViewer it is always possible to hold audio and video conferences. What I want to know is: how is that possible, and exactly which VoIP protocols are used to let me chat over audio and video?
I know from a previous question that TeamViewer is closed source, but I'm still hoping to hear about any protocols that would work even when VoIP is blocked.
Regards.

If your network / your country's network supports WebRTC, then this could help.
WebRTC (real-time communication) is the technology this kind of calling is built on; Skype, TeamViewer, WhatsApp video calls and the like all rely on similar real-time media stacks.
Set up your own WebRTC server and obtain an absolute URL with its protocol. It only has to work within your own network: when your devices are on the same Wi-Fi / network, they can connect to each other through that absolute URL (a minimal browser-side sketch follows below).
Refer - https://github.com/ISBX/apprtc-ios/blob/master/README.md
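To make the suggestion above concrete, here is a minimal, hypothetical browser-side sketch. The signaling WebSocket URL, the STUN server, and the `#remote` video element are placeholders rather than part of any specific product, and the server half of the signaling is left out entirely.

```typescript
// Minimal browser-side sketch: capture mic + camera and offer a WebRTC call.
// "wss://your-webrtc-server.example/signal" is a placeholder for whatever
// signaling channel your own server exposes (WebSocket, HTTP, etc.).
const signaling = new WebSocket("wss://your-webrtc-server.example/signal");

async function startCall(): Promise<void> {
  const pc = new RTCPeerConnection({
    // A STUN/TURN server may still be needed; this public STUN URL is a common default.
    iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
  });

  // Send our ICE candidates to the other peer via the signaling channel.
  pc.onicecandidate = (e) => {
    if (e.candidate) signaling.send(JSON.stringify({ candidate: e.candidate }));
  };

  // Play whatever audio/video the remote peer sends us.
  pc.ontrack = (e) => {
    const video = document.querySelector<HTMLVideoElement>("#remote")!;
    video.srcObject = e.streams[0];
  };

  // Handle the answer and remote candidates coming back over signaling.
  signaling.onmessage = async (msg) => {
    const data = JSON.parse(msg.data);
    if (data.sdp) await pc.setRemoteDescription(data.sdp);
    if (data.candidate) await pc.addIceCandidate(data.candidate);
  };

  // Capture local audio/video and add it to the connection.
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
  stream.getTracks().forEach((track) => pc.addTrack(track, stream));

  // Create and send the SDP offer.
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  signaling.send(JSON.stringify({ sdp: pc.localDescription }));
}

startCall();
```

Note that the media itself still flows over UDP/SRTP (or TURN over TCP/TLS as a fallback), which is why WebRTC sometimes slips past blunt VoIP blocking and sometimes does not.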

Related

How to use the `getUserMedia()` API to simulate WebRTC-like behaviour?

My primary intention is to set up a VoIP session between two users, A and B, where the raw audio/video media bytes fetched from A's browser are played in B's browser and vice versa.
The reason is that, when users C and D are added to the call, we should not have to create a P2P mesh network, which limits performance.
I tried recording media with getUserMedia() and playing it back, but it is not real time and gives a bad user experience. (However, I haven't experimented yet with chunks as small as 200 ms; a sketch of that chunked approach is included below.)
Is there any approach where I can get the raw bytes of the media and play them in the other browser? Currently I have a server in between which can connect to both peers if required.
Any online examples or libraries are welcome.
I have already asked two questions in this regard, each with a 100-point bounty, but without much success:
How to use libsrtp or similar library to decrypt/encrypt the WebRTC data stream?
How to integrate part of WebRTC as a static / dynamic library with the existing C++ code?
Related: How to stream live video playing in my browser to the browser of another user?
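As an aside on the chunked-recording idea mentioned in the question, a minimal MediaRecorder sketch with a 200 ms timeslice might look like the following; the relay WebSocket URL is a placeholder, and this approach still adds noticeable latency compared with a real WebRTC transport.

```typescript
// Hypothetical sketch of the "small chunks" idea from the question:
// record 200 ms slices with MediaRecorder and ship each blob to a relay server.
async function streamInChunks(): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
  const ws = new WebSocket("wss://relay.example.com/media"); // placeholder relay

  const recorder = new MediaRecorder(stream, { mimeType: "video/webm;codecs=vp8,opus" });

  // Each dataavailable event carries roughly one timeslice worth of encoded media.
  recorder.ondataavailable = (e) => {
    if (e.data.size > 0) ws.send(e.data);
  };

  // 200 ms timeslice, as mentioned in the question.
  recorder.start(200);
}
```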
If I understand you correctly, you're asking how to have more than two users in the session, right? Without using a mesh topology.
That's possible, and configurable as well: some participants may be active speakers, or everyone may be an active speaker rather than only a receiver, whatever configuration you choose. To me it sounds like you're asking about video conferencing.
There are a couple of tools for this; the best one I can recommend is mediasoup, which is an SFU (Selective Forwarding Unit). A minimal client-side sketch is shown below.
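For reference, here is a hedged sketch of what the browser side of a mediasoup-based SFU session can look like with the mediasoup-client library (v3-style API). The `signal()` helper and every parameter it returns are placeholders for your own server-side signaling, not part of mediasoup itself.

```typescript
import { Device } from "mediasoup-client";

// Sketch of the browser side of an SFU call with mediasoup-client.
// `signal()` stands in for your own request/response signaling to the
// mediasoup server; router capabilities and transport parameters come
// from that server.
async function joinRoom(signal: (event: string, data?: unknown) => Promise<any>) {
  const device = new Device();

  // Learn which codecs/extensions the server-side router supports.
  const routerRtpCapabilities = await signal("getRouterRtpCapabilities");
  await device.load({ routerRtpCapabilities });

  // Create a send transport from parameters the server generated.
  const transportInfo = await signal("createWebRtcTransport");
  const sendTransport = device.createSendTransport(transportInfo);

  // mediasoup-client fires these events; we just relay them to the server.
  sendTransport.on("connect", ({ dtlsParameters }, callback, errback) => {
    signal("connectTransport", { dtlsParameters }).then(callback).catch(errback);
  });
  sendTransport.on("produce", ({ kind, rtpParameters }, callback, errback) => {
    signal("produce", { kind, rtpParameters })
      .then(({ id }) => callback({ id }))
      .catch(errback);
  });

  // Capture media and hand each track to the SFU.
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
  for (const track of stream.getTracks()) {
    await sendTransport.produce({ track });
  }
}
```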
I don't know if I understand correctly, but it is not likely that you will get raw video data and play it in the browser; that would just kill your bandwidth and performance, because raw data is huge.
You need to use compressed data (a media codec, e.g. H.264) and a protocol to send and receive it (a small codec-preference sketch follows below). If you are looking for sub-second latency, then WebRTC is already your best choice here. If you have a server in between, distribute your media through that server instead of a mesh. Check this out for WebRTC network topologies:
https://antmedia.io/webrtc-servers/
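To illustrate the codec point above, here is a hedged sketch of asking the browser to prefer H.264 on an outgoing WebRTC video track via setCodecPreferences; support varies by browser, so treat it as illustrative rather than a guaranteed recipe.

```typescript
// Sketch: ask the browser to prefer H.264 for an outgoing WebRTC video track,
// so a server in the middle can forward the compressed stream without transcoding.
const pc = new RTCPeerConnection();
const transceiver = pc.addTransceiver("video", { direction: "sendonly" });

const capabilities = RTCRtpSender.getCapabilities("video");
if (capabilities && typeof transceiver.setCodecPreferences === "function") {
  // Put H.264 payloads first; keep the rest as fallbacks.
  const h264 = capabilities.codecs.filter((c) => c.mimeType === "video/H264");
  const others = capabilities.codecs.filter((c) => c.mimeType !== "video/H264");
  transceiver.setCodecPreferences([...h264, ...others]);
}
```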

Is SIP required for WebRTC calling to legacy VTC products?

I am working on a WebRTC application and would like to be able to support multiple calls, and to call from the browser to legacy VoIP or videoconferencing systems as well as browser to browser.
Now that Asterisk has added WebSocket support in its latest builds, would you need SIP and a SIP proxy in order to communicate with VoIP systems, or will Asterisk handle this on its own?
Now that H.264 has been open-sourced by Cisco, would you still need a transcoder in order to call a legacy VTC system?
Is Node.js the preferred technology for implementing WebRTC client/server deployments? I've looked into Mobicents SIP Servlets a bit, but that seems to be the only alternative technology available besides a Node.js solution.
If needed, I am planning on creating a SIP trunk between an Asterisk server and our Polycom VBP, so the WebRTC clients should be able to get presence information through that connection. If no media transcoding is required with the recent changes, then media should be able to pass directly from the Polycom endpoint to the browser, with Asterisk handling the signalling.
Thank you to anyone who is able to answer any of these questions; it is still early in the R&D portion of this project for me and I'd like to get as much information as possible.
Also: I did see SIP over WebSockets bridged to plain SIP. I understand that "something" needs to stand between the WebRTC client and the VoIP phone or legacy SIP endpoint; what I would like to know is whether that can be just Asterisk with the recent update. If Asterisk is all that is required, is there a way to include a media transcoder like Red5? I haven't seen anything in the WebRTC API that would let you include a transcoder, and Asterisk has transcoding modules, but none that will do VP8 to H.26x or Opus to anything, as far as I know.
The answer to that question depends heavily on the destination "legacy" system. Cisco "legacy" systems use H.323 and SIP, which are not directly compatible with WebRTC.
Sure, there are a lot of ways to set up Asterisk, Red5, OpenSIPS or others as a translation layer.
WebRTC's goal is calling from the browser; it was never supposed to have any API for transcoding. That has to be done by the server side (which requires special knowledge and experience to be properly set up).
There is a lot of documentation available on the internet; there is no way to fit an answer into less than 30 pages of text.
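For what the browser half of such a setup typically looks like, here is a hedged sketch using the JsSIP library to register over SIP-over-WebSocket against an Asterisk box and place a call. The host, port, path, extension and password are all placeholders for your own configuration, and SIP.js or another library would work just as well.

```typescript
import JsSIP from "jssip";

// Sketch of a browser SIP-over-WebSocket client registering against Asterisk
// and placing a call. Port 8089 with the /ws path is a common Asterisk default,
// but every value here is a placeholder.
const socket = new JsSIP.WebSocketInterface("wss://asterisk.example.com:8089/ws");

const ua = new JsSIP.UA({
  sockets: [socket],
  uri: "sip:webrtc-user@asterisk.example.com",
  password: "secret",
});

ua.on("registered", () => {
  // Call a legacy endpoint reachable through Asterisk (e.g. a unit on a SIP trunk).
  ua.call("sip:1000@asterisk.example.com", {
    mediaConstraints: { audio: true, video: true },
  });
});

ua.start();
```

Whether media then needs transcoding still depends on the codecs the legacy endpoint offers, as the answer above explains.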

Build your own Chromecast device

The Chromecast device is a "receiver device [that] runs a scaled-down Chrome browser with a receiver application". Can I download and install this receiver app in a Chrome browser, for example on my Windows notebook?
I have implemented a complete Chromecast v2 receiver, called YouMap ChromeCast Receiver, available in the Google Play store and the Amazon store; xda-developers thread here: http://forum.xda-developers.com/android-tv/chromecast/app-youmap-chromecast-receiver-android-t3161851
The current Chromecast protocol is completely different from the original DIAL-based protocol. Right now, only YouTube still uses the old protocol, for which Chromecast maintains backward compatibility.
Discovery is mDNS, exactly the same as the Apple TV Bonjour protocol (a small discovery sketch is included below).
The most difficult part is device authentication: the sender and the receiver perform handshakes by exchanging keys and certificates in a way that is extremely difficult to crack. Apple TV does the same using FairPlay encryption.
The next difficult part is the mirroring protocol, which is also very complicated; you need to deal with packet splits and packet retransmissions. Overall, the Chromecast mirroring protocol is well designed, better than Miracast and better than AirPlay mirroring (I have implemented both of those as well, so I know what I am talking about).
When I get a chance, I will write more here.
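As a concrete illustration of the mDNS discovery step mentioned above, here is a hedged Node sketch using the third-party multicast-dns package; Chromecast devices advertise the _googlecast._tcp.local service, and everything else here is just a placeholder listener.

```typescript
import makeMdns from "multicast-dns"; // third-party npm package "multicast-dns"

// Sketch: discover Chromecast devices on the LAN via mDNS, the discovery
// step described above. Chromecasts advertise the _googlecast._tcp.local service.
const mdns = makeMdns();

mdns.on("response", (response: any) => {
  for (const answer of response.answers) {
    if (answer.type === "PTR" && answer.name === "_googlecast._tcp.local") {
      console.log("Found Chromecast service instance:", answer.data);
    }
  }
});

// Ask every device on the multicast group to report Chromecast services.
mdns.query({
  questions: [{ name: "_googlecast._tcp.local", type: "PTR" }],
});
```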
The Chromecast device works using the DIAL protocol. It is completely possible to emulate this protocol with some simple code that listens on the multicast group for discovery (a discovery sketch follows below) and then handles the HTTP requests used to launch applications. It is then the launched application that communicates with the casting device, I believe using the RAMP protocol.
Luckily for us, the applications that the Chromecast device uses are mostly web applications, meaning our device emulator just needs to launch a web browser and point it at a specific URL when it receives an application request.
For example, the YouTube app, after device discovery and after establishing where the applications are located (part of DIAL), will send an HTTP POST request containing a pairing key to /<apps url>/YouTube. All the emulating device needs to do then is open https://www.youtube.com/tv?<pairing key> in a browser window. From here, I believe, communication for controlling the YouTube app is not sent through the casting device but through the open tabs on the casting device and the emulator.
This is my understanding of how the Chromecast device works, and specifically the YouTube app, from looking at https://github.com/dz0ny/leapcast which is a Python emulator that has YouTube and Google Music working.
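To make the discovery half concrete, here is a hedged Node sketch that broadcasts the SSDP M-SEARCH a DIAL client uses. The search target urn:dial-multiscreen-org:service:dial:1 is the DIAL one; fetching the device description and its Application-URL header is left out.

```typescript
import dgram from "node:dgram";

// Sketch: the SSDP M-SEARCH used for DIAL discovery. A DIAL device replies
// over UDP with a LOCATION header pointing at its device description; the
// DIAL "apps URL" comes from the Application-URL header of that description's
// HTTP response (not shown here).
const SSDP_ADDR = "239.255.255.250";
const SSDP_PORT = 1900;

const mSearch = [
  "M-SEARCH * HTTP/1.1",
  `HOST: ${SSDP_ADDR}:${SSDP_PORT}`,
  'MAN: "ssdp:discover"',
  "MX: 2",
  "ST: urn:dial-multiscreen-org:service:dial:1",
  "", "",
].join("\r\n");

const socket = dgram.createSocket("udp4");

socket.on("message", (msg, rinfo) => {
  // Each responding device sends back an HTTP-like response.
  console.log(`Reply from ${rinfo.address}:\n${msg.toString()}`);
});

socket.bind(() => {
  socket.send(mSearch, SSDP_PORT, SSDP_ADDR);
});
```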
Google is in the process of open-sourcing some parts of Chromecast:
https://code.google.com/p/chromium/codesearch#chromium/src/chromecast/
https://code.google.com/p/chromium/issues/list?q=label:Chromecast
So, theoretically, you could build a similar device.

Two-way audio for a software IP camera

I am trying to set up a Raspberry Pi box with a USB camera as an IP camera that can be viewed from a generic Android IP-camera monitor app. I've found some examples of how to get the video stream, and that works, but what I also need is two-way audio. This seems to come out of the box in standalone network cameras; any ideas how that works? I want to set it up in a way that is compatible with typical network cameras, so that my cam can be used by any generic IP-camera viewer app.
Well, modern cameras nowadays implement the ONVIF protocol. This protocol specifies that you have an RTSP server that streams audio and video from the camera to the PC, but it also mandates a so-called audio backchannel for the return direction. It's a bit long to explain how it works; check the specs.
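For orientation, here is a hedged sketch of the one ONVIF-specific detail a client needs: requesting the backchannel by adding the Require header from the ONVIF streaming specification to the RTSP DESCRIBE. The camera address and path are placeholders, and a real client would use a proper RTSP library rather than a raw socket.

```typescript
import net from "node:net";

// Sketch: ask an ONVIF camera to include its audio backchannel in the RTSP
// session by adding the Require header from the ONVIF streaming spec.
// The camera address and stream path are placeholders; a real client would
// then SETUP/PLAY the extra audio track and push talk-back audio into it.
const CAMERA_HOST = "192.168.1.64";   // placeholder
const STREAM_PATH = "/onvif/stream1"; // placeholder

const request = [
  `DESCRIBE rtsp://${CAMERA_HOST}${STREAM_PATH} RTSP/1.0`,
  "CSeq: 1",
  "Require: www.onvif.org/ver20/backchannel",
  "Accept: application/sdp",
  "", "",
].join("\r\n");

const socket = net.connect(554, CAMERA_HOST, () => socket.write(request));

socket.on("data", (data) => {
  // The SDP answer should contain an extra audio media section marked
  // a=sendonly: that is the backchannel described in the answer above.
  console.log(data.toString());
  socket.end();
});
```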
ONVIF is the standard, but you could also install an existing SIP client and do a video/audio VoIP call rather than implementing ONVIF; it depends on the long-term goals of your project.

How to program an audio/video application on a network?

I want to make (for fun, as a challenge) a videoconference application. I have some ideas about this:
1) taking the audio/video streams (I don't know exactly what an audio/video stream is)
2) passing them to a server that lets the clients communicate. I can figure out how to write a server (there are a lot of books and documentation about that), but I really don't know how to interact with the webcam and with audio/video in general.
I would like some links, books and suggestions about the basics of digital audio/video, especially on the programming side. Please help me!
I want to make it run on a Linux platform.
Linux makes video grabbing really nice, as long as you have a driver that exposes the video stream through the /dev/video* devices. All you have to do is open a control connection to the device [an exercise for the OP] and then read the channel like a file [given the parameters set by the control connection]. Audio should work the same way, but don't quote me on that. (A small capture sketch follows below.)
BTW: video streaming from a server is a very complex issue. You have to develop or use an existing protocol, you have to be very aware of network delays, and you have to adjust the information sent (resize or recompress) for each client based on the link between that client and the server.
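Rather than negotiating V4L2 formats by hand as described above, a practical shortcut is to lean on ffmpeg's v4l2 and ALSA inputs and script around them. This hedged sketch assumes ffmpeg is installed and that an RTSP server (e.g. MediaMTX) is listening on the placeholder output URL.

```typescript
import { spawn } from "node:child_process";

// Sketch: grab /dev/video0 via ffmpeg's v4l2 input and the default ALSA
// microphone, then push an RTSP stream. Device names, resolution and the
// output URL are placeholders for your own setup.
const ffmpeg = spawn("ffmpeg", [
  "-f", "v4l2", "-framerate", "25", "-video_size", "640x480", "-i", "/dev/video0",
  "-f", "alsa", "-i", "default",
  "-c:v", "libx264", "-preset", "ultrafast", "-tune", "zerolatency",
  "-c:a", "aac",
  "-f", "rtsp", "rtsp://localhost:8554/cam", // assumes an RTSP server is listening here
]);

ffmpeg.stderr.on("data", (chunk) => process.stderr.write(chunk)); // ffmpeg logs to stderr
ffmpeg.on("exit", (code) => console.log(`ffmpeg exited with code ${code}`));
```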
