AudioTrack Latency - VoIP

I'm making an application that will be capable of VoIP communication over a WAN, using AudioManager, specifically AudioTrack and AudioRecord. AudioRecord works fine, but I have serious problems with playback latency in AudioTrack. It is really high and unacceptable. I'm receiving chunks of 160 bytes, and my audio settings are 16-bit, 8 kHz, 1 channel, so each 160-byte chunk holds about 10 ms of audio and should not add significant latency.
I know there are a lot of people with the same problem, but VoIP applications exist, so the problem is probably mine.
PS: I'm programming in Java. I have tested it between a Motorola Milestone (Droid, Android 2.2) and a Samsung phone (Android 2.3), with the same problem on both playback devices. I have also tried streaming the sound to my computer and playing it there, and it plays in real time, so the problem is in the player (AudioTrack). The network latency is very low (I'm on a WAN) and I receive more than 99% of the packets (about 16 KB/s).
Is there any way to continue with a VoIP program and make it usable?
Thanks a lot. Beyond this problem, I haven't found a clear solution, and one surely exists. This is a very common and useful feature, especially on a communication device.

Do you use UDP instead of TCP? If not, consider googling "Android UDP example". If yes, sorry for bothering.
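
For reference, here is a minimal sketch of the playback side described above (8 kHz, 16-bit, mono, 160-byte / 10 ms chunks), using the classic AudioTrack constructor that applies to the Android 2.2/2.3 devices mentioned. The class and method names are made up for the example, and the network receive loop is assumed to exist elsewhere; keeping the track buffer at the minimum the device reports is the usual first step against AudioTrack playback latency.

    import android.media.AudioFormat;
    import android.media.AudioManager;
    import android.media.AudioTrack;

    public class VoipPlayer {
        private static final int SAMPLE_RATE = 8000;
        private AudioTrack track;

        public void start() {
            // Smallest buffer the device will accept; an oversized buffer adds
            // latency because samples sit in it before they are rendered.
            int minBuf = AudioTrack.getMinBufferSize(SAMPLE_RATE,
                    AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);

            track = new AudioTrack(AudioManager.STREAM_VOICE_CALL, SAMPLE_RATE,
                    AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
                    minBuf, AudioTrack.MODE_STREAM);
            track.play();
        }

        // Call this from the network receive loop with each 160-byte (10 ms) chunk.
        public void onChunk(byte[] chunk) {
            track.write(chunk, 0, chunk.length);
        }
    }

Note that write() blocks in MODE_STREAM, so the chunks should be written from a dedicated playback thread rather than the UI thread.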

Related

How BLE 4.2 streaming works

I cannot understand how BLE 4.2 headphones work.
As far as I know, with the BLE protocol you can only send 20 bytes in each packet, so normal listening quality should not be possible in this case.
Does somebody know the correct answer?
Thanks a lot!
Headsets running the Bluetooth 4.2 spec don't carry audio over the BLE links. This is a quite common misunderstanding, but for streaming music, making phone calls, etc., all phones and computers today still use the Bluetooth 2.1 spec and what are called "classic" profiles to do the work (e.g., A2DP for music, HFP for voice calls).
There are indeed streaming audio profiles for GATT/BLE in the making, but nothing is available yet, and consequently nothing is supported by products on the market today.
It's quite common to see headsets that claim superior audio quality "because we use the latest Bluetooth 4.2 spec". :) The only reason the product IS listed/qualified against the 4.2 or 5.0 spec is that you typically qualify your products against the latest spec, but that doesn't imply the product USES all the latest bits and pieces in that spec...
Even though packets are small, the radio still operates at 1 Mbit/s when it actually sends/receives something. What you want to make sure is that the radio is active as much as possible with as low overhead as possible. With data length extension each packet can be up to 251 bytes instead of 27 bytes. And with several packets per connection event you can get very high throughput (over 800 kbit/s).
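
To make the "1 Mbit/s radio, bigger packets" point concrete, here is a rough back-of-the-envelope throughput calculation as a small Java snippet. The link-layer framing sizes (1-byte preamble, 4-byte access address, 2-byte header, 3-byte CRC), the 150 µs inter-frame space and the 251-byte DLE payload are standard BLE 4.2 numbers; modelling every exchange as one full data packet plus an empty acknowledgement is a simplifying assumption.

    public class BleThroughput {
        public static void main(String[] args) {
            // On-air time at 1 Mbit/s: 8 microseconds per byte.
            int dataPdu = 1 + 4 + 2 + 251 + 3;  // preamble + access addr + header + DLE payload + CRC
            int emptyPdu = 1 + 4 + 2 + 0 + 3;   // the peer's empty acknowledgement
            int tIfs = 150;                     // inter-frame space in microseconds

            int exchangeUs = dataPdu * 8 + tIfs + emptyPdu * 8 + tIfs;

            // Application payload per exchange: 251 bytes minus the 4-byte L2CAP
            // header and the 3-byte ATT notification header.
            int payloadBits = (251 - 4 - 3) * 8;

            double kbps = payloadBits / (exchangeUs / 1000.0);
            System.out.printf("Theoretical best case: ~%.0f kbit/s%n", kbps);
            // Prints roughly 790 kbit/s, in the same ballpark as the figure
            // quoted above for a link kept busy with DLE-sized packets.
        }
    }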

Decoding audio from RJ11 / Phone Plug

What I would like to do involves a small bit of hardware: 1) a phone headset, 2) a PCI modem, and 3) a phone wire. I would like to read audio from the modem and then digitize it for processing. I'm sure the best way to do this is with Linux, but if it can be done in Windows as well that would be awesome. As a second extension of this, I would like to be able to translate digital audio to analog audio and send that to the modem so it can be heard from the headset.
Any advice would be greatly appreciated. (Also, if anybody has a general "pointer" to what I should investigate to replicate the audio stream to a TCP server so it can be accessed over the LAN, that would be even cooler. I know how to handle TCP well enough, but I haven't a clue about audio encoding/decoding.)
If anybody's curious, I want to create a home-wide audio stream with ears and mouths. Since the phone cables can do that with normal headsets, I thought "why not".
Not just any modem will do. You need a "voice modem", which includes audio capability as well as general modem functionality. These devices usually expose themselves as a regular sound card on the system, once the drivers are installed. From there, you can use any mechanism you want to read/write from those audio streams.
Be warned though that your plan of a whole-house speakerphone isn't simple at all. There are significant feedback issues when using regular POTS lines. There are entire companies that work to solve this problem. The best of them use microphone arrays that are steerable in software. You would be better off using one of these off-the-shelf systems.
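
As a concrete illustration of "use any mechanism you want to read/write those audio streams", here is a minimal Java sketch that captures telephony-grade PCM from the default capture device and forwards it to a TCP server on the LAN. The host and port are made up for the example, and selecting the voice modem's mixer explicitly (via AudioSystem.getMixerInfo()) is left out for brevity.

    import javax.sound.sampled.AudioFormat;
    import javax.sound.sampled.AudioSystem;
    import javax.sound.sampled.TargetDataLine;
    import java.io.OutputStream;
    import java.net.Socket;

    public class ModemAudioRelay {
        public static void main(String[] args) throws Exception {
            // 8 kHz, 16-bit, mono, signed, little-endian: telephony-grade PCM.
            AudioFormat fmt = new AudioFormat(8000f, 16, 1, true, false);

            // Once its driver is installed, the voice modem appears as a normal
            // capture device; here we simply take the default capture line.
            TargetDataLine line = AudioSystem.getTargetDataLine(fmt);
            line.open(fmt);
            line.start();

            // Hypothetical LAN endpoint that receives the raw PCM stream.
            try (Socket sock = new Socket("192.168.1.50", 9000);
                 OutputStream out = sock.getOutputStream()) {
                byte[] buf = new byte[320];              // 20 ms of audio per write
                while (true) {
                    int n = line.read(buf, 0, buf.length);
                    if (n > 0) out.write(buf, 0, n);     // raw PCM over TCP
                }
            }
        }
    }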

Is 600 kbps upload speed too slow for a VoIP app?

I've built a VoIP app for iPhone and Android (audio only). I've been testing the call quality, and I noticed that if a person has an upload speed of less than 600 kbps, their partner will have trouble hearing them clearly. The partner might hear lots of crackling, dropped sentences in speech, or nothing at all.
I am currently using the GSM codec on my Android and iPhone clients and on my Asterisk server.
So my question is whether an upload speed of less than 600 kbps is generally considered too slow for VoIP calls.
If so, is there anything I can do to reduce the bandwidth a call needs? I could consider the G.722 codec, but from what I remember it doesn't offer a significant bandwidth improvement compared to GSM.
Additional Notes
I used speedtest.net to measure my upload speed. I'm not sure if that's a reliable way to test the speeds needed for a VoIP service.
Also, I'm using Linphone core as my SIP library.
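
For a sanity check on the numbers: the GSM full-rate codec produces a 33-byte frame every 20 ms (13 kbit/s), and each frame typically rides in its own RTP/UDP/IPv4 packet. A quick back-of-the-envelope calculation (ignoring link-layer overhead and any relaying) suggests raw uplink bandwidth is not the limiting factor at 600 kbps:

    public class VoipBandwidth {
        public static void main(String[] args) {
            int codecPayloadBytes = 33;        // GSM full rate: 33 bytes per 20 ms frame
            int packetsPerSecond = 50;         // one frame per packet, 20 ms framing
            int headerBytes = 20 + 8 + 12;     // IPv4 + UDP + RTP headers

            int bitsPerSecond = (codecPayloadBytes + headerBytes) * 8 * packetsPerSecond;
            System.out.printf("Approx. uplink per call: %.1f kbit/s%n", bitsPerSecond / 1000.0);
            // Roughly 29 kbit/s, far below 600 kbps, so the crackling and drops
            // are more likely caused by jitter or packet loss on a congested
            // uplink than by a lack of raw bandwidth.
        }
    }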

How to redirect audio stream from microphone to headphone instantly

I need to open the incoming audio stream from the microphone and route it instantly to the headphones, so you hear what the microphone is picking up.
It is not possible to do this using the XNA Microphone, because it has to pass through a buffer that delays what you hear in the headphones. I think I need to use Windows.Phone.Media.Capture, probably AudioDevice and perhaps AudioSink, but I don't understand how. Do you have any suggestions?
Thanks.
It looks like you've got tighter latency requirements than a previous question in this area. For Windows Phone 8, the best latency result is probably going to come from using the DirectX audio APIs from C++.
The documentation on MSDN will tell you which APIs you can use from the phone, but if I know MSDN I expect you will need to go digging to find examples of how to use them - and when you find them you'll need to be familiar with C/C++, and how to set up a "hybrid" C#/C++ project in Visual Studio.
Even then you'll have to try it out on a real phone to see if the latency performance is good enough for your purposes.
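
This isn't Windows Phone code, but as a plain-Java illustration of the point that the buffer size, not the copy loop, is what sets the latency in a mic-to-headphone loopback, here is a desktop javax.sound.sampled sketch; the ~20 ms buffer value is just an example choice.

    import javax.sound.sampled.AudioFormat;
    import javax.sound.sampled.AudioSystem;
    import javax.sound.sampled.SourceDataLine;
    import javax.sound.sampled.TargetDataLine;

    public class MicLoopback {
        public static void main(String[] args) throws Exception {
            AudioFormat fmt = new AudioFormat(44100f, 16, 1, true, false);

            TargetDataLine mic = AudioSystem.getTargetDataLine(fmt);
            SourceDataLine phones = AudioSystem.getSourceDataLine(fmt);

            // Open both lines with deliberately small internal buffers (~20 ms);
            // the buffered audio, not the copy loop below, dominates the delay.
            int twentyMsBytes = (44100 / 50) * 2;    // frames in 20 ms * 2 bytes/frame
            mic.open(fmt, twentyMsBytes);
            phones.open(fmt, twentyMsBytes);
            mic.start();
            phones.start();

            byte[] buf = new byte[twentyMsBytes];
            while (true) {
                int n = mic.read(buf, 0, buf.length);
                if (n > 0) phones.write(buf, 0, n);  // straight mic -> headphone copy
            }
        }
    }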

Streaming audio over wifi: feasible and how?

I'm evaluating building an application which, simplifying the requirements, records from a microphone-equipped small computer (e.g., a Raspberry Pi) and streams the digitized sound over a wireless connection in near real time to a server on the same LAN (no Internet involved). Ideally, the server application would record the different streams from the various Wi-Fi microphones and mix them together.
I'm currently looking to get pretty good quality out of this, somewhat comparable to a 128 kbit/s stereo MP3.
At this point I'm still evaluating options, so I'm also looking for your opinion on the feasibility of this. If you think it's doable, what libraries, APIs, and protocols would you use? Consider that this will likely be deployed on Linux-based embedded computers (for the Wi-Fi mic part) and Linux-based servers.
Thanks for your help.
I often listen to Shoutcast on the iPad, and it sounds pretty good to me. I don't know the bit rate there; I think they stream MP3. So I don't think this would be a big issue if you can live with the quality loss that comes with MP3. The bigger issue might be how good your wireless connection is: when the network is busy, there are more errors and lower speeds. It also depends on the wireless standard and the hardware you are using. You may want to think about buffering, too.
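
On the server-side mixing mentioned in the question: if the streams arrive as raw PCM (or are decoded to it), mixing is mostly summing samples with clamping. Here is a minimal Java sketch for two 16-bit little-endian mono buffers, ignoring resampling and clock drift between the microphones; the class name is just an example.

    public class PcmMixer {
        // Mix two buffers of 16-bit little-endian PCM by summing corresponding
        // samples and clamping the result to the 16-bit range.
        public static byte[] mix(byte[] a, byte[] b) {
            int len = Math.min(a.length, b.length) & ~1;     // whole samples only
            byte[] out = new byte[len];
            for (int i = 0; i < len; i += 2) {
                int sa = (short) ((a[i] & 0xff) | (a[i + 1] << 8));
                int sb = (short) ((b[i] & 0xff) | (b[i + 1] << 8));
                int sum = Math.max(-32768, Math.min(32767, sa + sb));
                out[i] = (byte) sum;
                out[i + 1] = (byte) (sum >> 8);
            }
            return out;
        }
    }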

Resources