I don't get any audio on SIP calls when using Asterisk's early media feature.
Normally everything works fine, but because of the Playback application Asterisk answers the SIP channel and the caller's timer starts, which should not happen.
That is why I am trying to use the early media option in Asterisk.
It works as described, but there is a problem with the audio: I cannot hear anything until the user picks up the phone.
extensions.conf
exten => _X.,1,Progress()
exten => _X.,n,Playback(/var/lib/asterisk/sounds/verification,noanswer)
exten => _X.,n,Dial(SIP/channel/number)
sip.conf
[xxx]
fullname=xxx
type=friend
host=dynamic
disallow=all
allow=g729
allow=ulaw
allow=alaw
allow=gsm
username=xxx
secret=xxx
context=sip-calling-test
qualify=yes
call-limit=2
nat=yes
Does anyone know what the problem is?
Early media is highly dependent on the endpoints and the provider used.
So you need to use an ATA (or other endpoint) with early media support, and a provider that supports early media.
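If your endpoints and provider do support it, one thing worth checking, purely as an assumption about this chan_sip setup rather than a confirmed fix, is whether the peer is set to get progress audio in-band, e.g. via the progressinband option in sip.conf:

[xxx]
; ... existing settings as above ...
progressinband=yes    ; let Asterisk indicate progress with in-band audio (183 + early media)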
I am using react-native-webrtc to handle the WebRTC portion of this.
I am using Websockets to signal and using ICE trickling to keep track of the ICE candidates.
I queue my ICE candidates until setLocalDescription has been called on the callee side. Then I addIceCandidate for each candidate in the queue.
On the caller side I am doing the same thing and not processing my ICE candidates until setRemoteDescription has been called.
I am only doing audio so no video being used.
When I test this with two mobile devices on the same network I have no issues.
But if I disconnect one device from the Wi-Fi, the calls still connect just fine, except that the audio cannot be heard on either device.
The onConnectionStateChange handler will still return "connected" and the onIceGatheringStateChanged will still return "complete".
I thought maybe I needed a TURN server to get this working, so I started using Twilio's paid TURN/STUN service, but the issue still persists.
Any ideas what to look into?
BACKGROUND
OK, so you need a bit of background on P2P connections on RTC platforms. Here it is, in a very short version:
To establish a connection you have to establish a direct connection between the two clients (obvious, I know). To find these routes you need the help of network servers.
That is why you set up the local SDP with the servers that can be reached: ICE, TURN, STUN (you can find plenty of information on these, for example this one). Host ICE candidates are the most obvious ones, because those endpoints are inside your local network, which is why your version does not work across different networks.
Right, you have to use TURN/STUN to traverse NAT and find correct routes between the peers. Most TURN servers are private and paid, but for a lightly loaded application public STUN servers can be more than enough.
You can find many of them available; one example list is here.
stun.l.google.com:19302
stun1.l.google.com:19302
stun2.l.google.com:19302
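For example, passing those servers when creating the peer connection might look like the sketch below; the TURN entry and its credentials are placeholders you would replace with your own (e.g. the ones issued by Twilio).

// Sketch: configure STUN (and optionally TURN) servers for the peer connection.
const configuration = {
  iceServers: [
    { urls: 'stun:stun.l.google.com:19302' },
    // Placeholder TURN entry; replace with the URL and credentials from your provider.
    { urls: 'turn:turn.example.com:3478', username: 'user', credential: 'pass' }
  ]
};
const peerConnection = new RTCPeerConnection(configuration);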
SOLUTION
Now, coming to your problem. Just because your signaling says the devices are connected does not mean the peer connection itself is established. (To clarify: if you have no media on your devices, the RTC connection failed to establish; it is not just an audio problem.)
The problem is in how the TURN/STUN servers are used on your devices: you have to trace the SDP that is established during setRemoteDescription and check that the servers were included. Furthermore, there is always the Google demo, which works perfectly.
UPDATE
To trace how the remote SDP is set and how the connection is established, you have to print the candidates that will be used for the setup. To do that, print the information about the candidates gathered around setLocalDescription and setRemoteDescription.
In the place where you are gathering candidates, add logging to print the information. You should see that STUN and TURN candidates are there. Below is an example in JavaScript. The word ICE shouldn't bother you; it just means these are the candidates found after ICE traversal.
// Listen for local ICE candidates on the local RTCPeerConnection
peerConnection.addEventListener('icecandidate', event => {
  if (event.candidate) {
    // Log the entire candidate; you should see "srflx" (STUN) and/or "relay" (TURN)
    // entries in addition to plain "host" ones.
    console.log('Local ICE candidate:', event.candidate.candidate);

    // Here should be your part where you send this candidate to your signaling channel
  }
});
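A rough additional check, under the assumption that the remote candidates end up bundled in the SDP rather than only being trickled separately, is to inspect the remote description after it has been set:

// Look for candidate lines in the remote SDP once setRemoteDescription has resolved.
// "typ srflx" indicates a STUN-derived candidate, "typ relay" a TURN one.
const sdp = peerConnection.remoteDescription ? peerConnection.remoteDescription.sdp : '';
const candidateLines = sdp.split('\r\n').filter(line => line.startsWith('a=candidate'));
console.log('Remote candidates in SDP:', candidateLines);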
I am developing an MCU-based VoIP service. I think the traditional way of doing an MCU is to have N audio mixers at the server, so that every participant in the call receives a stream that does not have their own voice encoded in it.
What I wish to do instead is have only one audio mixer running at the server and (on a broadcast-like model) send the final mixed audio to every participant (for scalability, obviously).
Now this obviously creates the problem of hearing your own voice coming back from the speaker as part of the MCU's output stream.
I am wondering if there is any "client side echo cancellation" project that I can use to cancel the user's own voice at the desktop/mobile level.
The general approach is to filter/subtract each participant's own voice in the MCU, i.e. give every participant their own mix-minus. Doing this on the client side does not work.
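For illustration only, a minimal mix-minus sketch (assuming each participant's audio arrives as a Float32Array of PCM samples of equal length); it also shows why this stays cheap on the server: one full mix plus a single subtraction per participant, rather than N separate mixes.

// Compute one full mix, then derive each participant's stream by subtracting
// that participant's own input from it (mix-minus).
function mixMinus(inputs) {
  const length = inputs[0].length;
  const fullMix = new Float32Array(length);
  for (const input of inputs) {
    for (let i = 0; i < length; i++) fullMix[i] += input[i];
  }
  return inputs.map(input => {
    const out = new Float32Array(length);
    for (let i = 0; i < length; i++) out[i] = fullMix[i] - input[i];
    return out;
  });
}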
We have deployed WebRTC on Wowza. However, we are getting our own voice back. Could this be feedback or echo?
As far as I can see, there is currently no built-in way in Wowza to combat echo. However, you can install extra layers of audio filtering; for example, this article shows how to use PBXMate for echo cancellation. In case the link becomes invalid, the full list of components is as follows:
The Flashphoner Client is a Flash-based client. It could be replaced by other Flash-based clients.
The Wowza server is a standard streaming server.
The Flashphoner is responsible for translating the protocol of the streaming data to the standard SIP protocol.
The Elastix server is a well known unified communication server.
The PBXMate is an Elastix AddOn for audio filtering.
I'm looking for a SIP client for Linux (console only, Debian if possible) with one simple goal: to let my CRM app know the number of the incoming call.
There is no need for voice, an autoresponder, etc. I just need the incoming call number sent somewhere (written to a file, added as a row to a SQL database, passed in a curl request to my CRM, or anything else like that).
Do you know a SIP client that lets me do this?
Is your intent to receive a SIP INVITE and identify the calling number from it? Since you mentioned you don't need voice or anything else, a simple SIPp-style test tool should be fine.
Or do you want to test it over the mobile network and hence want to use a VoIP client, or just use freeware such as X-Lite from either a desktop or mobile device?
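If you end up rolling something minimal yourself instead, a hypothetical sketch of the idea in Node.js (not a full SIP stack; it only watches for INVITEs on UDP 5060 and logs the user part of the From header) could look like this:

// Minimal listener that logs the caller number from incoming SIP INVITEs.
const dgram = require('dgram');
const socket = dgram.createSocket('udp4');

socket.on('message', (msg, rinfo) => {
  const text = msg.toString();
  if (text.startsWith('INVITE')) {
    // Example header: From: "Alice" <sip:+15551234567@example.com>;tag=abc
    const match = text.match(/^From:.*?sip:([^@;>]+)/m);
    if (match) {
      console.log(`Incoming call from ${match[1]} (signaled by ${rinfo.address})`);
      // Here you could append to a file, insert into SQL, or curl your CRM.
    }
  }
});

socket.bind(5060); // standard SIP port

Note that this only observes the INVITE; registering with your provider or answering the call is out of scope for the sketch.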
I want to write a Java ME application that transfers any SMS received to a PC using Bluetooth. The PC can then direct the Java ME application via Bluetooth to send a response SMS. Is there a library available for this architecture, or do I have to design it myself?
Is this approach correct, or does a better one exist? I want to use Bluetooth so that I don't have a dependency on a cable.
You'll need to create this yourself; however, you'll find that you can't do what you want with J2ME.
J2ME can't access any old SMS that the handset receives, only ones sent to a specific port on which the MIDlet is listening. So to get all the other SMSes, create a Bluetooth serial/dial-up connection to your handset in the way I've described in this answer.
Create a PC client which repeatedly issues AT+CMGL commands (as described in the AT command set document linked to in the answer above) to see when an SMS has been received. Use AT+CMGR to read and parse the message text. Then use AT+CMGS to send a response. This can all be done over Bluetooth.
It's better to use the serial connection to send the response, because a MIDlet cannot usually be triggered to open based on incoming Bluetooth data.
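A made-up example of such an exchange over the serial link (text mode; the numbers, timestamps and message text are placeholders):

AT+CMGF=1                      (switch the modem to SMS text mode)
OK
AT+CMGL="REC UNREAD"           (list unread messages)
+CMGL: 1,"REC UNREAD","+15551234567",,"24/01/01,12:00:00+00"
Hello from the handset
OK
AT+CMGS="+15551234567"         (send the response)
> Reply text here, terminated with Ctrl+Z
+CMGS: 42
OK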
Hope this helps.
You may have already achieved your task; anyway, for reference, I think it is much better if you try using Gammu. I'm using it for the same task (sending/receiving SMS through the PC) with a simple bat file I have written, and it works like a charm.
You don't need any J2ME program for this.
Gammu takes care of making the connection to the phone and sending the AT commands.
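For example, a couple of commands of the kind such a bat file might call (treat these as a sketch and check your Gammu version's documentation for the exact syntax):

gammu getallsms
gammu sendsms TEXT +15551234567 -text "Your reply here"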