Convert .mp3/.wav file into g729/amr/g711 codec file and vice versa using PJSIP - audio

PJSIP is used for SIP registration, audio/video calling, and other VoIP call features. If you want to create a VoIP application, you build it on top of its libraries.
The library ships with a lot of codecs, and some further codecs are available as third-party codecs that you can integrate into PJSIP. You can then support those codecs when calling another SIP user/client.
Generally, on mobile phones the voice is recorded as audio through the mic and passed to the PJSIP library. The codec/pjmedia layer then takes care of the other operations, such as converting the audio into whatever codec format is to be sent.
Instead of that, can we pass an .mp3/.wav file into the PJSIP library and convert it into codec files such as .g729/.amr/.g711-u, and vice versa?
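For illustration, this is roughly the flow I have in mind, pieced together from the pjmedia docs. I am not sure these are the right calls, and I guess G.729/AMR would need the third-party codecs registered first; "input.wav" and the G.711 u-law codec id are just examples:

    #include <pjlib.h>
    #include <pjmedia.h>
    #include <pjmedia-codec.h>

    /* Sketch of an offline transcode: pull PCM frames from a WAV file
     * port and push them through a pjmedia codec's encode() op.
     * Error handling and cleanup are omitted. */
    int wav_to_g711u(void)
    {
        pj_caching_pool cp;
        pjmedia_endpt *endpt;
        pj_pool_t *pool;
        pjmedia_port *wav;
        pjmedia_codec_mgr *mgr;
        const pjmedia_codec_info *info[1];
        pjmedia_codec_param param;
        pjmedia_codec *codec;
        unsigned count = 1;
        pj_str_t codec_id = pj_str("PCMU/8000/1");

        pj_init();
        pj_caching_pool_init(&cp, NULL, 0);
        pjmedia_endpt_create(&cp.factory, NULL, 1, &endpt);
        pjmedia_codec_g711_init(endpt);          /* register G.711 */
        pool = pj_pool_create(&cp.factory, "xcode", 4000, 4000, NULL);

        /* The file port delivers 20 ms PCM frames; the WAV should be
         * 8 kHz mono to match the codec. */
        pjmedia_wav_player_port_create(pool, "input.wav", 20,
                                       PJMEDIA_FILE_NO_LOOP, 0, &wav);

        mgr = pjmedia_endpt_get_codec_mgr(endpt);
        pjmedia_codec_mgr_find_codecs_by_id(mgr, &codec_id, &count,
                                            info, NULL);
        pjmedia_codec_mgr_get_default_param(mgr, info[0], &param);
        pjmedia_codec_mgr_alloc_codec(mgr, info[0], &codec);
        codec->op->init(codec, pool);
        codec->op->open(codec, &param);

        for (;;) {
            char pcm[320], enc[320];             /* 20 ms @ 8 kHz, 16-bit */
            pjmedia_frame in, out;
            in.buf = pcm; in.size = sizeof(pcm);
            if (pjmedia_port_get_frame(wav, &in) != PJ_SUCCESS ||
                in.type != PJMEDIA_FRAME_TYPE_AUDIO)
                break;                           /* end of file */
            out.buf = enc; out.size = sizeof(enc);
            codec->op->encode(codec, &in, sizeof(enc), &out);
            /* out.buf now holds out.size bytes of encoded audio;
             * append them to the output file here. */
        }
        return 0;
    }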
I don't know how codec conversion / the codec engine works internally.
If you know whether conversion from the .mp3 format is possible using the PJSIP library, please suggest how I might solve this problem.
Thanks in Advance!

Related

Spotify Streaming - Wireless Bluetooth Codec

As I understand it, streaming via Bluetooth is handled via the A2DP profile. While the SBC codec is the default, A2DP supports AAC, MP3, and a few other codecs.
My question is, since spotify files are in the OGG VORBIS format (OGG Container, Vorbis Codec), what is the best way to handle streaming via Bluetooth without quality loss? Is there a specific A2DP implementation? Are folks like Jambox, etc just using the SBC implementation?
Spotify's streaming format is an implementation detail hidden from all clients; assuming it is Ogg Vorbis is not something you should do, and in some circumstances it is actually a false assumption.
Since you've managed to use every single Spotify tag in your question, I don't know which platform you're developing for. However, the correct thing to do is to take the PCM data the Spotify playback library gives you and hand it to whatever playback stack your target platform provides. On Android, iOS, Mac OS, etc., the system will handle audio output devices for you, including Bluetooth streaming.

Capture audio stream from Xtion pro with OpenNI2?

Does anyone try to capture the audio stream using the OpenNI2 library from the Xtion Pro?
I searched the Internet and found the audio API in the OpenNI2 source code (Audio API).
It seems that it can only "play" audio, but not capture an audio stream.
And it doesn't demonstrate how to use those APIs.
Is there any example code that records the audio stream using OpenNI2 from the Xtion Pro?
BTW, my OpenNI version is 2.2.0.33.
Thanks for anyone's help : )
After surveying a lot of information, I found that OpenNI2 doesn't support audio streams anymore. Hence, someone suggested that I use another library to capture the audio stream from the Xtion Pro.
Now I'm using PortAudio to deal with the audio stream. It's quite a powerful tool and easy to use.
Moreover, it's a cross-platform library supporting Windows, Mac OS, and Unix from C/C++, the documentation is clear, and the example code is understandable.
So, to any newbies like me who want to capture an audio stream from the Xtion Pro, I recommend this library.
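To give a flavour, a minimal capture skeleton looks roughly like this (my own sketch against the standard PortAudio API; the sample rate, sample format, and buffer size are arbitrary choices):

    #include <stdio.h>
    #include <portaudio.h>

    #define SAMPLE_RATE       44100
    #define FRAMES_PER_BUFFER 512

    /* PortAudio invokes this callback with each captured buffer. */
    static int recordCallback(const void *input, void *output,
                              unsigned long frameCount,
                              const PaStreamCallbackTimeInfo *timeInfo,
                              PaStreamCallbackFlags statusFlags,
                              void *userData)
    {
        const float *in = (const float *) input;
        /* Process or store the frameCount captured samples here. */
        (void) in; (void) output; (void) timeInfo;
        (void) statusFlags; (void) userData;
        return paContinue;
    }

    int main(void)
    {
        PaStream *stream;

        Pa_Initialize();
        /* 1 input channel, 0 output channels, 32-bit float samples. */
        Pa_OpenDefaultStream(&stream, 1, 0, paFloat32,
                             SAMPLE_RATE, FRAMES_PER_BUFFER,
                             recordCallback, NULL);
        Pa_StartStream(stream);
        Pa_Sleep(5000);              /* capture for five seconds */
        Pa_StopStream(stream);
        Pa_CloseStream(stream);
        Pa_Terminate();
        return 0;
    }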
Any other suggestions are very welcome : )

manipulating audio input buffer on Ubuntu Linux

Suppose I want to code an audio filter in C++ that is applied to all audio, or to a specific microphone/source. Where should I start with this on Ubuntu?
Edit: to be clear, I don't understand how to do this or what the roles of PulseAudio, ALSA and GStreamer are.
ALSA provides an API for accessing and controlling audio and MIDI hardware. One portion of ALSA is a series of kernel-mode device drivers, whilst the other is a user-space library that applications link against. ALSA is single-client.
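As a concrete taste of the ALSA user-space API, a minimal capture loop looks something like the sketch below (my own example using the "default" device; error handling and xrun recovery are elided):

    #include <alsa/asoundlib.h>

    int main(void)
    {
        snd_pcm_t *pcm;
        short buf[1024];            /* 16-bit mono samples */

        /* Open the default capture device and configure it in one call. */
        snd_pcm_open(&pcm, "default", SND_PCM_STREAM_CAPTURE, 0);
        snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                           SND_PCM_ACCESS_RW_INTERLEAVED,
                           1 /* channels */, 44100 /* rate */,
                           1 /* allow resampling */,
                           500000 /* 0.5 s latency */);

        for (;;) {
            /* Blocks until 1024 frames have been captured. */
            snd_pcm_sframes_t n = snd_pcm_readi(pcm, buf, 1024);
            if (n < 0) break;       /* xrun or error; real code recovers */
            /* Apply your filter to buf[0..n-1] here. */
        }

        snd_pcm_close(pcm);
        return 0;
    }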
PulseAudio is a framework that facilitates multiple client applications accessing a single audio interface (ALSA being single-client). It provides a daemon process which 'owns' the audio interface and provides an IPC transport for audio between the daemon and the applications using it. This is used heavily in open source desktop environments. Use of Pulse is largely transparent to applications: they continue to access audio input and output through the ALSA API, with Pulse taking care of transport and mixing behind the scenes. There is also JACK, which is targeted more towards 'professional' audio applications - perhaps a bit of a misnomer, although what is meant here is low-latency music production tools.
GStreamer is a general-purpose multimedia framework based on the signal-graph pattern, in which components have a number of input and output pins and provide a transformation function. A graph of these components is built to implement operations such as media decoding, with special nodes for audio and video input or output. It is similar in concept to CoreAudio and DirectShow. VLC and libAV are both open source alternatives that operate along similar lines. Your choice between these is a matter of API style and implementation language. GStreamer, in particular, is an OO API implemented in C. VLC is C++.
The obvious way of implementing what you describe is to write a GStreamer/libAV/VLC component. If you want to process the audio and then route it to another application, this can be achieved by looping it back through Pulse or JACK.
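For instance, a minimal GStreamer filter pipeline might look like the sketch below. My assumptions: the PulseAudio source/sink elements and the audioecho element from gst-plugins-good are installed, and you build against gstreamer-1.0 via pkg-config:

    #include <gst/gst.h>

    int main(int argc, char *argv[])
    {
        gst_init(&argc, &argv);

        /* Capture from the default Pulse source, apply a filter element
         * (an echo here, standing in for your own filter), and play the
         * result back out. */
        GstElement *pipeline = gst_parse_launch(
            "pulsesrc ! audioconvert "
            "! audioecho delay=250000000 intensity=0.4 "
            "! audioconvert ! pulsesink", NULL);

        gst_element_set_state(pipeline, GST_STATE_PLAYING);

        /* Run until an error or end-of-stream is posted on the bus. */
        GstBus *bus = gst_element_get_bus(pipeline);
        GstMessage *msg = gst_bus_timed_pop_filtered(
            bus, GST_CLOCK_TIME_NONE,
            GST_MESSAGE_ERROR | GST_MESSAGE_EOS);

        if (msg) gst_message_unref(msg);
        gst_object_unref(bus);
        gst_element_set_state(pipeline, GST_STATE_NULL);
        gst_object_unref(pipeline);
        return 0;
    }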
ALSA provides a plug-in mechanism, but I suspect that implementing this from the ALSA documentation will be tough going.
The de-facto architecture for building effects plug-ins of the type you describe is Steinberg's VST. There are plenty of open source hosts and examples of plug-ins that can be used on Linux, and crucially, there is decent documentation. As with GStreamer/libAV/VLC, you will be able to route audio in and out of this.
Out of these, VST is probably the easiest to pick up.

Play sound over SIP without sound card

Is it possible to play custom audio (a *.wav file) over VoIP (SIP) without a sound card being installed on the SIP client machine? All I need is to perform a SIP call and play a custom sound message.
You can transmit recorded audio in the form of a WAV file over a SIP call if you convert it to the appropriate codec first. Transmitting the audio does not require a sound card; a sound card is only required to listen to it. Which codec to use depends on the platform. Here is a link for converting to appropriate codecs when using Asterisk; there are a lot more if you just Google something like "audio codec conversion".
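For G.711 u-law in particular, the per-sample conversion is simple enough to do yourself. A sketch of the classic bias-and-segment routine (my own rendering of the well-known algorithm, not taken from any particular library); feed it each 16-bit sample of an 8 kHz mono WAV and write out the resulting bytes:

    #include <stdint.h>

    /* Convert one 16-bit linear PCM sample to an 8-bit G.711 u-law byte. */
    static uint8_t linear_to_ulaw(int16_t pcm)
    {
        const int BIAS = 0x84;   /* 132: shifts the segment boundaries */
        const int CLIP = 32635;
        int sample = pcm;
        int sign = (sample >> 8) & 0x80;   /* sign bit of the u-law byte */
        int exponent, mantissa;

        if (sign) sample = -sample;
        if (sample > CLIP) sample = CLIP;
        sample += BIAS;

        /* Find the segment (exponent): position of the highest set bit. */
        exponent = 7;
        for (int mask = 0x4000; (sample & mask) == 0 && exponent > 0;
             mask >>= 1)
            exponent--;

        mantissa = (sample >> (exponent + 3)) & 0x0F;

        /* u-law bytes are stored inverted. */
        return (uint8_t) ~(sign | (exponent << 4) | mantissa);
    }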
A simpler approach is to just use a platform that does this for you, like Voxeo Prophecy. This is a software-only IVR solution that has a free 2-port version. It is easy to install and program using the open standard VoiceXML. It will play back audio files recorded in WAV format, and the telephony interface is SIP.

Audio Streaming Using J2ME

I've got audio online in the form of MP3 files; how do I stream the audio from my J2ME app? A website gives the app a list of audio files to play; the user selects one, and the app must then stream it from the website.
Sample code would be nice. Thanks.
There is no reliable way to ensure that a MIDlet will stream audio data because you don't control how the phone manufacturer implemented JSR-135 (the specification that gives you the API to play audio in a MIDlet).
Technically, creating a Java media player using javax.microedition.media.Manager.createPlayer(String aUrl) should make the JSR-135 implementation stream the audio data located at the URL.
Unfortunately, only streaming of very simple audio content (WAV more often than MP3), if any, is usually supported over a network connection, and more often than not a call to createPlayer(String aUrl) will throw an exception if the URL doesn't begin with "file://".
There are probably devices where the manufacturer managed to plug a more complete audio/networking module into the JSR-135 implementation but finding them will require a lot of testing for you.
J2ME won't let you do this over HTTP. It will download the entire audio before it starts playback. What you need is to host it on an RTP server instead; only then will J2ME stream the audio.
If that's no good, then you might be stuck looking for devices that have their own proprietary libraries for this kind of thing.
There's a better way to do this.
Create your own class extending InputStream, say MyHTTPInputStream, and implement all of its methods. Run a thread to retrieve the data over HTTP and store it in a buffer; when the Player class calls the InputStream.read() method, serve the data from the buffer.
Before using this class with Player, test MyHTTPInputStream with a dummy WAV file stored in phone memory or on the add-on card, so you can see which InputStream methods are called and the sequence of method calls the Player class makes.