Spotify Streaming - Wireless Bluetooth Codec

As I understand it, streaming via Bluetooth is handled via the A2DP profile. While the SBC codec is the default, A2DP also supports AAC, MP3, and a few other codecs.
My question is: since Spotify files are in the Ogg Vorbis format (Ogg container, Vorbis codec), what is the best way to handle streaming via Bluetooth without quality loss? Is there a specific A2DP implementation? Are devices like the Jambox just using the SBC implementation?

Spotify's streaming format is an implementation detail hidden from all clients, and assuming that it's Ogg Vorbis is not something you should do; in some circumstances it's actually a false assumption.
Since you've managed to use every single Spotify tag in your question, I don't know which platform you're developing for. However, the correct thing to do is take the PCM data the Spotify playback library gives you and feed it to whatever playback stack your target platform provides. On Android, iOS, Mac OS, etc., the system will handle audio output devices for you, including Bluetooth streaming.
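For illustration, here is a minimal sketch of that hand-off, assuming the libspotify C API (music_delivery and sp_audioformat are libspotify's; platform_queue_pcm is a hypothetical stand-in for whatever output stack your platform provides):

```cpp
// Minimal sketch: receive PCM from libspotify and hand it to the platform.
// platform_queue_pcm is hypothetical; replace it with ALSA, AudioTrack,
// Core Audio, or whatever your target OS offers.
#include <libspotify/api.h>
#include <cstdint>

extern void platform_queue_pcm(const int16_t* frames, int num_frames,
                               int sample_rate, int channels); // hypothetical

static int music_delivery(sp_session* session, const sp_audioformat* format,
                          const void* frames, int num_frames)
{
    if (num_frames == 0)
        return 0; // discontinuity (seek/track change): flush your buffers

    // libspotify delivers interleaved signed 16-bit native-endian PCM.
    platform_queue_pcm(static_cast<const int16_t*>(frames), num_frames,
                       format->sample_rate, format->channels);

    // Return how many frames were consumed; returning fewer applies
    // backpressure and libspotify will redeliver the rest.
    return num_frames;
}
```

Once the PCM is in the system mixer, routing to a Bluetooth sink (SBC, AAC, or whatever the device negotiates) is entirely the OS's job.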

Related

Can anyone explain how voice commands work via a Bluetooth remote (Nexus Player remote) in Android (Nexus Player)?

Can anyone please elaborate on the following questions?
How does the Bluetooth stack handle audio data?
How are audio commands processed?
Do we need any service to process the audio data?
Thanks in advance.
Basically, voice commands over BLE require:
some audio codec to reduce the required bandwidth (ADPCM and SBC are common, Opus is emerging),
some audio streaming method through BLE,
decoding and getting the audio stream from the BLE daemon to a command processing framework.
In the Android world, the command processing framework is Google sauce (closed source) that most easily gets its audio from an ALSA device. What is left to be done is getting the audio from the remote to an ALSA device.
So for audio streaming, you either:
use a custom L2CAP channel or a custom GATT service; this requires a custom Android service app and/or modifications to Bluedroid to handle them, and it will need a way to inject the audio stream into ALSA, most probably with a "loop" audio device driver, or
declare the audio as custom HID reports; this way Bluedroid injects them back into the kernel, and you then add a custom HID driver that processes these reports and exposes an audio device.
Audio over BLE is not standardized, so implementations do not all do the same thing. In the Nexus Player's case, the implementation uses HID: it streams an ADPCM audio stream, chunked into HID reports. A special HID driver, "hid-atv-remote.c", in the Android Linux kernel exposes an ALSA device in addition to the input device. Bluedroid has no knowledge of the audio; all it does is forward HID reports from BLE to UHID.
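To make the decoding step concrete, here is a minimal sketch of a standard IMA ADPCM nibble decoder, the kind of transform such a driver applies before handing samples to ALSA (the nibble packing and block framing assumed here are illustrative; the actual per-device HID report framing varies):

```cpp
#include <cstdint>
#include <vector>
#include <algorithm>

// Standard IMA ADPCM tables.
static const int kIndexTable[16] = { -1, -1, -1, -1, 2, 4, 6, 8,
                                     -1, -1, -1, -1, 2, 4, 6, 8 };
static const int kStepTable[89] = {
        7,     8,     9,    10,    11,    12,    13,    14,    16,    17,
       19,    21,    23,    25,    28,    31,    34,    37,    41,    45,
       50,    55,    60,    66,    73,    80,    88,    97,   107,   118,
      130,   143,   157,   173,   190,   209,   230,   253,   279,   307,
      337,   371,   408,   449,   494,   544,   598,   658,   724,   796,
      876,   963,  1060,  1166,  1282,  1411,  1552,  1707,  1878,  2066,
     2272,  2499,  2749,  3024,  3327,  3660,  4026,  4428,  4871,  5358,
     5894,  6484,  7132,  7845,  8630,  9493, 10442, 11487, 12635, 13899,
    15289, 16818, 18500, 20350, 22385, 24623, 27086, 29794, 32767 };

struct AdpcmState { int predictor = 0; int step_index = 0; };

// Decode one 4-bit ADPCM code into one 16-bit PCM sample, updating state.
static int16_t decode_nibble(AdpcmState& s, uint8_t nibble) {
    int step = kStepTable[s.step_index];
    int diff = step >> 3;               // reconstruct the delta from the bits
    if (nibble & 1) diff += step >> 2;
    if (nibble & 2) diff += step >> 1;
    if (nibble & 4) diff += step;
    if (nibble & 8) diff = -diff;       // sign bit
    s.predictor  = std::clamp(s.predictor + diff, -32768, 32767);
    s.step_index = std::clamp(s.step_index + kIndexTable[nibble & 0x0f], 0, 88);
    return static_cast<int16_t>(s.predictor);
}

// Decode a payload of packed nibbles (high nibble first), e.g. the body of
// one HID report, into PCM samples. The packing order is an assumption.
std::vector<int16_t> decode_block(AdpcmState& s,
                                  const uint8_t* data, size_t len) {
    std::vector<int16_t> pcm;
    pcm.reserve(len * 2);
    for (size_t i = 0; i < len; ++i) {
        pcm.push_back(decode_nibble(s, data[i] >> 4));
        pcm.push_back(decode_nibble(s, data[i] & 0x0f));
    }
    return pcm;
}
```

The appeal of ADPCM here is exactly what this shows: 4 bits per sample on the radio, a few integer operations per sample to decode, and no licensing strings attached.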

Manipulating audio input buffer on Ubuntu Linux

Suppose that I want to code an audio filter in C++ that is applied to all audio or to a specific microphone/source. Where should I start with this on Ubuntu?
Edit: to be clear, I don't understand how to do this, nor what the roles of PulseAudio, ALSA and GStreamer are.
ALSA provides an API for accessing and controlling audio and MIDI hardware. One portion of ALSA is a series of kernel-mode device drivers, whilst the other is a user-space library that applications link against. ALSA is single-client.
PulseAudio is a framework that facilitates multiple client applications accessing a single audio interface (ALSA being single-client). It provides a daemon process which 'owns' the audio interface and provides an IPC transport for audio between the daemon and the applications using it. This is used heavily in open source desktop environments. Use of Pulse is largely transparent to applications: they continue to access the audio input and output using the ALSA API, with Pulse handling the audio transport and mixing. There is also JACK, which is targeted more towards 'professional' audio applications - perhaps a bit of a misnomer, although what is meant here is low-latency music production tools.
GStreamer is a general-purpose multimedia framework based on the signal-graph pattern, in which components have a number of input and output pins and provide a transformation function. A graph of these components is built to implement operations such as media decoding, with special nodes for audio and video input or output. It is similar in concept to CoreAudio and DirectShow. VLC and libAV are both open source alternatives that operate along similar lines. Your choice between these is a matter of API style and implementation language: GStreamer, in particular, is an OO API implemented in C, while VLC is C++.
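As a concrete taste of the graph idea, here is a minimal GStreamer sketch that captures from PulseAudio, runs the stream through a transformation element (a stock volume filter standing in for your custom filter component), and plays it back out:

```cpp
// Build with: g++ filter.cpp $(pkg-config --cflags --libs gstreamer-1.0)
#include <gst/gst.h>

int main(int argc, char** argv) {
    gst_init(&argc, &argv);

    // source -> transform -> sink, described as a textual pipeline.
    GError* err = nullptr;
    GstElement* pipeline = gst_parse_launch(
        "pulsesrc ! audioconvert ! volume volume=0.5 ! "
        "audioconvert ! pulsesink", &err);
    if (!pipeline) {
        g_printerr("Pipeline error: %s\n", err->message);
        g_clear_error(&err);
        return 1;
    }

    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    // Block until an error or end-of-stream.
    GstBus* bus = gst_element_get_bus(pipeline);
    GstMessage* msg = gst_bus_timed_pop_filtered(
        bus, GST_CLOCK_TIME_NONE,
        static_cast<GstMessageType>(GST_MESSAGE_ERROR | GST_MESSAGE_EOS));

    if (msg) gst_message_unref(msg);
    gst_object_unref(bus);
    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(pipeline);
    return 0;
}
```

Writing your own filter means replacing the volume element with a custom GstElement, but the surrounding graph plumbing looks exactly like this.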
The obvious way of implementing what you describe is a GStreamer/libAV/VLC component. If you want to process the audio and then route it to another application, this can be achieved by looping it back through Pulse or JACK.
ALSA provides a plug-in mechanism, but I suspect that implementing this from the ALSA documentation will be tough going.
The de facto architecture for building effects plug-ins of the type you describe is Steinberg's VST. There are plenty of open source hosts and examples of plug-ins that can be used on Linux, and crucially, there is decent documentation. As with a GStreamer/libAV/VLC component, you would be able to route audio in and out of this.
Out of these, VST is probably the easiest to pick up.
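For a feel of what such a plug-in boils down to, here is a self-contained sketch of the per-block processing a VST host drives. The one-pole low-pass and its interface are illustrative only, not the actual VST SDK:

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

// A one-pole low-pass filter. Hosts hand plug-ins blocks of float samples
// per channel; the plug-in fills the output buffer from the input buffer.
struct OnePoleLP {
    float a = 0.1f;  // smoothing coefficient in (0, 1]; smaller = darker
    float z = 0.0f;  // filter state (last output sample)

    void process(const float* in, float* out, size_t frames) {
        for (size_t i = 0; i < frames; ++i) {
            z += a * (in[i] - z);  // y[n] = y[n-1] + a * (x[n] - y[n-1])
            out[i] = z;
        }
    }
};

int main() {
    // Feed an impulse through the filter and print the decaying response.
    std::vector<float> in(8, 0.0f), out(8, 0.0f);
    in[0] = 1.0f;
    OnePoleLP lp;
    lp.process(in.data(), out.data(), in.size());
    for (float s : out) std::printf("%.4f ", s);
    std::printf("\n");
    return 0;
}
```

A real VST wraps this kind of process function in the SDK's plug-in class and adds parameter handling, but the DSP core is the same shape.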

Play sound over SIP without sound card

Is it possible to play custom audio (a *.wav file) over VoIP (SIP) without a sound card installed on the SIP client machine? All I need is to make a SIP call and play a custom sound message.
You can transmit recorded audio in the form of a WAV file over a SIP call if you convert it to the appropriate codec first. Transmitting the audio does not require a sound card; a sound card is only required to listen to it. Which codec to use depends on the platform. Here is a link for converting to the appropriate codecs when using Asterisk. There are a lot more if you just Google something like "audio codec conversion".
A simpler approach is to just use a platform that does this for you, like Voxeo Prophecy. This is a software-only IVR solution that has a free two-port version. It is easy to install and program using the open standard VoiceXML. It will play back audio files recorded in WAV format, and its telephony interface is SIP.

Audio hooking or a custom audio driver for audio processing and routing to the default audio device

I have developed a pretty complex piece of audio software for my client, with plugins for Winamp, Windows Media Player and VST. Now the client is interested in some method of avoiding the maintenance of this multitude of plugins; we have no way to support all the media players out there.
The client does not care about Unix/Mac yet, so I can look only at Windows XP and Vista/7.
Basically, what we need is a way to reliably intercept as many audio stream protocols as possible (well, except maybe ASIO, but that's another story, I guess), pass the audio through our custom effects engine, and then route it back to the default audio device, whatever it is.
Now I am thinking: what options do I have (theoretically)?
I could use hooks. I would need to globally hook the older waveOut API and also DirectSound.
But will this still work on Vista/7?
I could use a virtual driver, like the author of the Virtual Audio Cable did:
http://software.muzychenko.net/eng/vac.htm
Seems a pretty daunting task. Anyway, the client will contact the author of VAC to see if he agrees to sell his source code for a reasonable price.
This driver could install itself as the default audio output device, intercept the audio stream from Windows, and pass it back to the real default device. Hmm, but what about the various DirectSound audio buffers: do I have to mix them myself, or is there a way to tell the Windows mixer to do all the mixing for me and hand me a single mixed audio stream?
It seems this custom driver will of course kill all hardware audio acceleration, but we can live with that if we warn our customers about the issue.
As I understand it, the most current Windows driver standard is WDF.
But maybe it does not work for audio on Windows Vista/7?
I know Vista/7 has a different audio stack from XP.
If I can do it using WDF, what kind of driver should I write: kernel-mode or user-mode?
Maybe I am missing more elegant and simple options to intercept, process and route audio on Windows?
Try the Virtual Audio Streaming SDK. It is also a virtual sound card, and it lets you read/process audio data in real time.
http://www.virtualaudiostreaming.net/sdk-license.html
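One more option worth knowing about on Vista/7: the Core Audio stack can hand you the rendered system mix directly via WASAPI loopback capture, with no global hooks and no kernel driver. A minimal sketch (error handling omitted; process_pcm is a hypothetical stand-in for your effects engine):

```cpp
#include <windows.h>
#include <mmdeviceapi.h>
#include <audioclient.h>

extern void process_pcm(const BYTE* data, UINT32 frames,
                        const WAVEFORMATEX* fmt); // hypothetical

int main() {
    CoInitialize(nullptr);

    IMMDeviceEnumerator* enumr = nullptr;
    CoCreateInstance(__uuidof(MMDeviceEnumerator), nullptr, CLSCTX_ALL,
                     __uuidof(IMMDeviceEnumerator), (void**)&enumr);

    IMMDevice* device = nullptr;
    enumr->GetDefaultAudioEndpoint(eRender, eConsole, &device);

    IAudioClient* client = nullptr;
    device->Activate(__uuidof(IAudioClient), CLSCTX_ALL, nullptr,
                     (void**)&client);

    WAVEFORMATEX* fmt = nullptr;
    client->GetMixFormat(&fmt);

    // AUDCLNT_STREAMFLAGS_LOOPBACK on a *render* endpoint captures the
    // post-mix stream that Windows is sending to the speakers.
    client->Initialize(AUDCLNT_SHAREMODE_SHARED, AUDCLNT_STREAMFLAGS_LOOPBACK,
                       10 * 1000 * 1000 /* 1 s buffer, in 100 ns units */,
                       0, fmt, nullptr);

    IAudioCaptureClient* capture = nullptr;
    client->GetService(__uuidof(IAudioCaptureClient), (void**)&capture);
    client->Start();

    for (;;) {  // real code would wait on an event and check HRESULTs
        UINT32 packet = 0;
        capture->GetNextPacketSize(&packet);
        while (packet != 0) {
            BYTE* data = nullptr;
            UINT32 frames = 0;
            DWORD flags = 0;
            capture->GetBuffer(&data, &frames, &flags, nullptr, nullptr);
            process_pcm(data, frames, fmt);  // run through the effects engine
            capture->ReleaseBuffer(frames);
            capture->GetNextPacketSize(&packet);
        }
        Sleep(10);
    }
}
```

The catch, compared with a virtual device, is that loopback only reads the mix; to play the processed audio back out you still need a second (virtual or real) output device, which is why SDKs like the one above bundle a virtual sound card.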

Audio/video streaming to mobile browsers

I am developing a WAP/mobile website that would allow users to stream audio/video (although the priority is audio) via their mobile browsers.
For music I would be streaming MP3 files, and for video I would be streaming FLV and 3GP files (but mostly 3GP).
Can anyone recommend solutions (i.e. what to use and/or a pointer in the right direction) to enable streaming audio/video to a mobile browser?
AFAIK, there is RTSP (probably via Darwin Streaming Server?), which is supported on most 3G devices, and Flash Lite. (Would using Flash Lite as a player even be a good idea, since users would need to have Flash Lite installed on their mobile devices? I'm not that familiar with Flash.)
Most mobile phones support video streaming via RTSP, and the cheapest method is the Darwin Streaming Server, which integrates with the Real video player.
As for Flash Lite, it has limited handset support, so I wouldn't recommend using it.
The only thing I would add is that, without wireless carrier support, streaming data to a mobile phone can be very expensive for the end user, so please ensure that the end user knows about the potential data charges.
