I am developing a VoIP solution where my voice has to pass through two hops, besides the originator of the voice, before being handed over to TDM.
Let's say Hop A generates the voice (RTP) and sends it to Hop B (which may mix the stream with other streams), which sends it on to Hop C (which may also do some processing), which finally hands the stream over to a PBX (TDM).
Now, how can I measure how much delay is introduced by each hop?
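One way to approach this (a sketch, not a ready-made tool) is to capture each RTP packet at every hop's ingress and egress, correlate the captures by SSRC and sequence number, and take the time difference: per-hop processing delay needs no cross-host clock synchronisation, while inter-hop network delay does (e.g. via NTP). Note that a hop that mixes or re-encodes will change SSRC and sequence numbers, so correlation across it needs a different key. The record layout and function name below are hypothetical:

// Illustrative sketch: correlate capture records by (ssrc, seq) and compute
// how long each hop held on to a packet. The record layout is hypothetical.
interface CaptureRecord {
  ssrc: number;         // RTP synchronization source
  seq: number;          // RTP sequence number
  capturedAtMs: number; // capture timestamp in ms (e.g. from a pcap on the hop)
}

// Delay added by a hop = time the packet left the hop - time it entered the hop.
function hopDelays(ingress: CaptureRecord[], egress: CaptureRecord[]): number[] {
  const seen = new Map<string, number>();
  for (const r of ingress) seen.set(`${r.ssrc}:${r.seq}`, r.capturedAtMs);
  const delays: number[] = [];
  for (const r of egress) {
    const inTs = seen.get(`${r.ssrc}:${r.seq}`);
    if (inTs !== undefined) delays.push(r.capturedAtMs - inTs);
  }
  return delays;
}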
In an SFU audio conference platform, the media server simply routes audio packets. Let's say that on the client side I keep an audio packet queue for each present participant (updated by the signaling server), and at a fixed rate I dequeue from every queue, pick the top 4-6 voice packets, and mix them for playback. If a sequence number is missing for some participant, I even send a NACK and wait up to a threshold time before dequeuing that participant's queue (to keep the voice flowing).
But to make this solution scalable, I have to move this dequeue-then-pick-top-4-6 step to the media server and send the result to everyone. Now, on the client side, when some participant's packet sequence has a gap, I can no longer tell whether the packet was actually lost or simply didn't make it into the server's top 4-6 voice packets (and I need to know, because I only want to send a NACK and wait if the packet really was lost).
How can I handle this use case efficiently? Any suggestion on the number of streams to mix, or anything else, is highly appreciated.
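For reference, here is a minimal sketch of the client-side per-participant queue described above: packets are enqueued as they arrive, a fixed-rate playout tick dequeues the next expected frame for each participant, and a missing sequence number triggers a NACK followed by a bounded wait before the gap is skipped. All names and the 50 ms threshold are assumptions for illustration only:

// Hypothetical sketch of the per-participant receive queue described above.
interface AudioPacket { seq: number; payload: Uint8Array; }

class ParticipantQueue {
  private packets = new Map<number, AudioPacket>(); // keyed by sequence number
  private nextSeq = 0;                 // in practice, initialised from the first packet seen
  private nackSentAtMs: number | null = null;

  push(p: AudioPacket) { this.packets.set(p.seq, p); }

  // Called on every playout tick (e.g. every 20 ms).
  pop(nowMs: number, sendNack: (seq: number) => void, waitMs = 50): AudioPacket | null {
    const p = this.packets.get(this.nextSeq);
    if (p) {                                   // the expected packet is here
      this.packets.delete(this.nextSeq);
      this.nextSeq++;
      this.nackSentAtMs = null;
      return p;
    }
    if (this.nackSentAtMs === null) {          // first time we notice the gap
      sendNack(this.nextSeq);
      this.nackSentAtMs = nowMs;
      return null;
    }
    if (nowMs - this.nackSentAtMs > waitMs) {  // waited long enough: skip the gap
      this.nextSeq++;
      this.nackSentAtMs = null;
    }
    return null;
  }
}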
I am developing an MCU-based VoIP service. I think the traditional way of doing MCU is that you have N audio mixers at the server, and every participant in the call receives a stream that does not have their own voice encoded in it.
What I wish to do is have only one audio mixer running at the server and (in a broadcast-like model) send the final mixed audio to every participant (for scalability, obviously).
Now this obviously creates the problem of hearing your own voice coming back from the speaker as part of the MCU's output stream.
I am wondering if there is any "client-side echo cancellation" project that I can use to cancel the user's own voice at the desktop/mobile level.
The general approach is to filter/subtract each participant's own voice in the MCU (i.e. give everyone a "mix-minus" of their own stream). Doing this on the client side does not work.
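For concreteness, a minimal sketch of that per-participant "mix-minus" is shown below: the server produces one output per participant that sums every decoded input except that participant's own. The names, and the assumption of time-aligned 16-bit PCM frames, are illustrative rather than any particular MCU's API:

// Mix-minus sketch: each participant gets the sum of everyone else's audio.
// Frames are assumed to be equal-length, time-aligned 16-bit PCM.
function mixMinus(frames: Map<string, Int16Array>, frameLen: number): Map<string, Int16Array> {
  const outputs = new Map<string, Int16Array>();
  for (const receiver of frames.keys()) {
    const out = new Int16Array(frameLen);
    for (const [sender, frame] of frames) {
      if (sender === receiver) continue;   // leave out the receiver's own voice
      for (let i = 0; i < frameLen; i++) {
        // sum and clamp to the 16-bit range to avoid wrap-around distortion
        out[i] = Math.max(-32768, Math.min(32767, out[i] + frame[i]));
      }
    }
    outputs.set(receiver, out);
  }
  return outputs;
}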
I am trying to build a basic conference call system based on plain RTP.
                  ______
RTP IN #1 _______|      |_______ MIX RTP receiver #1
                 | MIX  |
RTP IN #2 _______|      |_______ MIX RTP receiver #2
                 |______|
I am creating RTP streams on Android via the AudioStream class and using a server written in Node.js to receive them.
The naive approach I've been using is that the server receives the UDP packets and forwards them to the participants of the conversation. This works perfectly as long as there are two participants, and it's basically the same as if the two were sending their RTP stream to each other.
I would like this to work with multiple participants, but forwarding the RTP packets as they arrive at the server doesn't seem to work, probably for obvious reasons. With more than two participants, delivering the packets coming in from the different sources to each participant (excluding the sender of each packet) results in completely broken audio.
Without changing the topology of the network (star rather than mesh), I presume the server will need to perform some operations on the packets in order to produce a single output RTP stream containing the mixed input RTP streams.
I'm just not sure how to go about doing this.
In your case I know of two options:
An MCU (Multipoint Control Unit)
RTP simulcast
MCU (Multipoint Control Unit)
This is a middle box (network element) that receives several RTP streams and generates one or more RTP streams.
You can implement it yourself, but it is not trivial, because you need to deal with:
Stream decoding (and therefore you need a jitter buffer and codec implementations)
Stream mixing - so you need some synchronisation between streams (collect some data from source 1 and source 2, mix them, and send the result to destination 3); a rough sketch of this step follows this list
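As an illustration of that mixing/synchronisation step (not a complete MCU), here is a rough sketch of a single mixing tick, assuming each source already sits behind a jitter buffer that yields time-aligned 16-bit PCM frames; all names are made up:

// Rough shape of one mixing tick: pull a time-aligned frame from each
// source's jitter buffer, sum the samples, and hand the result back so the
// caller can re-encode and packetise it as the outgoing RTP stream.
interface JitterBuffer { nextFrame(): Int16Array | null; } // e.g. 20 ms of PCM, or null if late

function mixTick(sources: JitterBuffer[], frameLen: number): Int16Array {
  const mixed = new Int16Array(frameLen);
  for (const src of sources) {
    const frame = src.nextFrame() ?? new Int16Array(frameLen); // treat a late source as silence
    for (let i = 0; i < frameLen; i++) {
      mixed[i] = Math.max(-32768, Math.min(32767, mixed[i] + frame[i])); // sum and clamp
    }
  }
  return mixed;
}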
There are also several projects that can do this for you (like Asterisk, FreeSWITCH, etc.); you could try to write an integration layer with them. I haven't heard of anything for Node.js.
Simulcast
This is a pretty new technology, and its specifications are only available as IETF drafts. The core idea is to send several RTP streams within one RTP session simultaneously.
When the destination receives several RTP streams, it needs to do exactly what the MCU does: decode all the streams and mix them together. In this case, though, the destination may be able to use a hardware audio mixer to do it.
The main drawback of this approach is the bandwidth to the client device. If you have N participants, you need to:
either send all N streams to every other participant,
or select streams based on some metadata, like voice activity or audio level.
The first is not efficient; the second is very tricky.
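If you go the selection route, one common approach is to rank senders by a recent audio-level measurement, for example the RFC 6464 client-to-mixer audio level, and forward only the loudest few. A minimal sketch, with all type and field names made up for illustration:

// Pick the N loudest senders based on a per-sender audio level.
// Levels here follow the RFC 6464 convention: 0 dBov is loudest, 127 quietest.
interface SenderState { ssrc: number; levelDbov: number; lastPacket: Uint8Array; }

function selectLoudest(senders: SenderState[], n = 4): SenderState[] {
  return [...senders]
    .sort((a, b) => a.levelDbov - b.levelDbov) // lower value = louder
    .slice(0, n);
}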
The options suggested by Dimtry's answer were not feasible in my case because:
The middle-box solution is difficult to implement, requires too many resources, or means relying on an external piece of software, which I didn't want to depend on; the Android RTP stack should work out of the box with only basic support from a server component, especially for hole punching.
The simulcast solution cannot be used because the Android RTP package cannot handle it; as far as I understand, it is only capable of handling simple RTP streams.
Other options I've been evaluating:
SIP
Android supports it, but it's more of a high-level feature, and I wanted to build the solution into my own custom application without relying on the additional abstractions introduced by a high-level protocol such as SIP. It also felt too complex to set up, and conferencing doesn't even seem to be a core feature, but rather an extension.
WebRTC
This is supposed to be the de facto standard for peer-to-peer voice and video conferencing, but looking through code examples it just looks too difficult to set up. It also requires server support for hole punching.
Our solution
Even though I had, and still have, little experience in this area, I thought there must be a way to make it work using plain RTP and some support from a simple server component.
The server component is necessary for hole punching; otherwise, getting the clients to talk to each other is really tricky.
So what we ended up doing for conference calling is to have the caller act as the mixer and the server component act as the middle-man that delivers RTP packets to the participants.
In practice:
whenever an N-user call is started, we instantiate N-1 simple UDP broadcast servers, listening on N-1 different ports
We send those N-1 ports to the initiator of the call via a signaling mechanism built on socket.io, and one port to each of the remaining participants
The server component listening on those ports simply acts as a relay: whenever it receives a UDP packet containing RTP data, it forwards it to all the connected clients (the sockets it has seen so far) except the sender
The initiator of the call receives data from and sends data to the other participants, mixing it via the Android AudioGroup class
The participants send data only to the initiator of the call, and they receive the mixed audio (containing the caller's own voice and the other participants' voices) on the server port that has been assigned to them
This allows for a very simple implementation on both the client and the server side, with minimal signaling work required. It's certainly not a bullet-proof conferencing solution, but given its simplicity and completeness (especially regarding common network issues like NAT traversal, which is basically a non-issue when a server assists), in my opinion it beats writing lots of resource-hungry server-side mixing code, relying on external software like SIP servers, or using protocols like WebRTC, which achieve basically the same thing with far more implementation effort.
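For illustration, here is a minimal sketch of the relay component described above. The real server is written in Node.js; this is TypeScript using the built-in dgram module, with error handling, port allocation and signaling omitted:

import * as dgram from "node:dgram";

// Minimal relay sketch: remember every endpoint that has sent us a packet
// and forward each incoming packet to all other known endpoints.
function startRelay(port: number): dgram.Socket {
  const socket = dgram.createSocket("udp4");
  const clients = new Map<string, { address: string; port: number }>();

  socket.on("message", (msg, rinfo) => {
    const key = `${rinfo.address}:${rinfo.port}`;
    clients.set(key, { address: rinfo.address, port: rinfo.port }); // learn the sender
    for (const [otherKey, other] of clients) {
      if (otherKey === key) continue;            // don't echo back to the sender
      socket.send(msg, other.port, other.address);
    }
  });

  socket.bind(port);
  return socket;
}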
This is my first question here, and I realize it might be open-ended, but I'm looking for specific solutions, and any solution would be accepted.
I have GPS devices which send data packets to an IP and port, both of which I can configure. I wish to use one of Google's, Amazon's or Microsoft's cloud offerings. I am using Python. Here is an implementation I found online:
https://github.com/rdkls/gps-tracker-server
The data comes as packets that are not sent over HTTP. I have considered building a network listener on a socket on Google Compute Engine, but I'm not sure whether it would be able to handle simultaneous requests from 1,000 devices if such a situation ever arises. The Google Cloud IoT Core offering seems to fit my need perfectly, but it is in private beta right now, which means I can't use it. I think I'll need a message queue service, but most of the offerings from these three companies require messages over HTTP. Keep in mind that I can't change how the messages are sent from the GPS devices.
The messages are sent in this format:
https://drive.google.com/file/d/0B2EklrIn3KugS2NJYWZGWlVWeGdMbjM4WHQ2TUZmYWhIRmt3/view?usp=drive_web
Format:
Data is sent in (byte-sized) packets directly to the IP:port over GPRS connections: one heartbeat packet every minute and GPS details every minute from each device. It also requires the server to reply to each message as an acknowledgement, since it's not over TCP/IP.
So basically, which service and which architecture should I use, keeping scalability, reliability and cost in mind?
I think for 1,000 devices sending such messages every minute, the total would be about 43 million messages per month. I'm not sure, but I'm looking for something that will cost me about $1,000 per month, i.e. $1 per device per month.
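Whatever provider is chosen, the listener itself can stay very small. Below is a rough sketch of such a raw listener (shown as TypeScript on Node.js for illustration; the asker plans Python, where e.g. asyncio has the same shape). It assumes the trackers send UDP datagrams and expect a short acknowledgement; the acknowledgement bytes and port are placeholders, since the real protocol is device-specific. A single event-loop process like this comfortably handles a thousand devices each sending a couple of small packets per minute.

import * as dgram from "node:dgram";

// Rough sketch of a raw listener for the tracker packets: one UDP socket,
// one handler per datagram, and a placeholder acknowledgement sent back.
const server = dgram.createSocket("udp4");

server.on("message", (packet, rinfo) => {
  // TODO: parse the device-specific binary format here and persist/queue it.
  console.log(`got ${packet.length} bytes from ${rinfo.address}:${rinfo.port}`);

  const ack = Buffer.from([0x01]); // placeholder ack; real bytes depend on the device protocol
  server.send(ack, rinfo.port, rinfo.address);
});

server.bind(5023); // the port is arbitrary; the devices are configured to send here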
I have connected two Linux machines using netcat over WLAN in a server-client design, and I am now able to send and receive messages between them. On the server I create a UDP socket:
$ nc -u -l 3333
and on the client side I connect to it using the port number and destination IP:
$ nc -u 192.168.178.160 3333
This gives a bi-directional connection between server and client. It's hard to tell exactly, but I'd guess it is fairly close to real time.
Now I want to extend this and establish a real-time speech connection between the two sides. Recording from a microphone is feasible through arecord commands, which write the speech data to a .wav file. Transmission of the .wav file is only possible after it has been fully recorded, but that is of no use, since what is desired is real-time communication. Of course, the received speech has to be played back instantly on the other end.
Does anyone have any idea how to make it real-time?
Fidelity means a large buffer count, to preserve sound continuity despite network latency and latency variation; low sound delay approximating real time means a small buffer count, to reduce overall latency. You cannot have both.
IME, you need to keep roughly 250 ms max of sound buffered at both ends to maintain the illusion of 'real-time' speech. This queue of buffers needs to be emptied at the fixed rate necessary to reproduce the speech, and kept topped up by the network protocol as necessary. If the network cannot keep buffer pools of that size topped up (latency or jitter is too high), the buffer pool has to be made larger, the queue longer, and the perceived real-time performance will suffer.
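As a rough illustration of that figure: at 8 kHz, 16-bit mono, 250 ms is 2,000 samples, i.e. about a dozen 20 ms frames. A sketch of such a playout queue, with those sizes as assumptions:

// Sketch of a fixed-size playout (jitter) queue: the network side pushes
// frames as they arrive, the audio side pops one frame every 20 ms and
// substitutes silence when the queue runs dry.
class PlayoutQueue {
  private frames: Int16Array[] = [];
  constructor(private maxFrames = 12) {}       // ~250 ms of 20 ms frames

  pushFromNetwork(frame: Int16Array): void {
    if (this.frames.length >= this.maxFrames) this.frames.shift(); // drop the oldest if we fall behind
    this.frames.push(frame);
  }

  popForPlayback(frameLen = 160): Int16Array { // 160 samples = 20 ms at 8 kHz
    return this.frames.shift() ?? new Int16Array(frameLen); // silence on underrun
  }
}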
The TCP vs. UDP issue is a red herring on most network connections.
Just be thankful that you are not streaming video:)