video conferencing stack for embedded devices - linux

I am looking for a video conferencing stack that I can run on an embedded device. The camera will be connected through USB; hardware video acceleration and Ethernet are available. We are running Linux and DirectFB. Any suggestions?

GStreamer might be an option. It is a C stack, and it is used for a similar purpose (I think) on embedded hardware, e.g. TI's DaVinci processors.
I don't know to what extent it is actually used or usable on such hardware. However, GStreamer has all the components needed for video and audio muxing and streaming.
Since it is a pipelined / modular approach, you can plug into GStreamer at any stage, e.g. keep the video acquisition / compression as custom code and hand only the RTP side of your app over to GStreamer. Or you can write a custom compression plugin and use "standard" GStreamer apps with your hardware-accelerated encoder.
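For illustration, here is a minimal sketch of what the RTP sending side could look like, assuming GStreamer 1.x and a V4L2 camera; the element names (v4l2src, x264enc, rtph264pay) and the peer address are placeholders, and on real embedded hardware you would swap in the vendor's hardware-accelerated encoder element:

```c
/* Minimal RTP video sender sketch (GStreamer 1.x assumed).
 * Build with: gcc sender.c $(pkg-config --cflags --libs gstreamer-1.0)
 */
#include <gst/gst.h>

int main(int argc, char *argv[]) {
    gst_init(&argc, &argv);

    GError *err = NULL;
    GstElement *pipeline = gst_parse_launch(
        "v4l2src device=/dev/video0 ! videoconvert ! "
        "x264enc tune=zerolatency ! rtph264pay ! "
        "udpsink host=192.168.1.10 port=5000",   /* hypothetical peer address */
        &err);
    if (!pipeline) {
        g_printerr("Pipeline error: %s\n", err->message);
        g_clear_error(&err);
        return 1;
    }

    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    /* Block until an error or end-of-stream message appears on the bus;
     * the streaming threads keep the pipeline running in the meantime. */
    GstBus *bus = gst_element_get_bus(pipeline);
    GstMessage *msg = gst_bus_timed_pop_filtered(
        bus, GST_CLOCK_TIME_NONE, GST_MESSAGE_ERROR | GST_MESSAGE_EOS);
    if (msg)
        gst_message_unref(msg);
    gst_object_unref(bus);

    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(pipeline);
    return 0;
}
```

The same graph can be built element by element with gst_element_factory_make() and gst_element_link(); gst_parse_launch() just keeps the sketch short.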

Related

Can PulseAudio/ALSA work without built-in soundcard?

I am new to PulseAudio and ALSA, so please go easy on me. This might seem like a dumb question, but it is quite important to have it answered.
I am developing an application on an ARM i.MX6 board (let's call it BOARD1) with built-in sound card support. With ALSA, I am able to play audio through Headset_OUT. Now we want to move to a new board (let's call it BOARD2), which does not have a built-in sound card. The idea is to connect a Bluetooth module to BOARD2 and have the audio streamed to a Bluetooth speaker.
My question is: is it possible to use PulseAudio to send/receive audio to an external (Bluetooth) audio device without a local embedded sound card (i.e. is it possible to do the audio encoding/decoding purely in software with a PulseAudio and GStreamer combination)?
Regards

Is the sound system in Linux layered like the OSI model?

I'm new to Linux and especially to its sound system. I've read many articles on this subject but I'm still confused. I know that ALSA provides audio functionality to the rest of the system, which means that ALSA is the lowest "layer" of the sound system (after the hardware itself). I also know that ALSA by itself can only handle one application at a time. So here are my questions:
1) Is PulseAudio a bridge that lets multiple apps use ALSA?
2) Are GStreamer, Phonon and Xine the same kind of bridge programs as PulseAudio?
3) Does ALSA convert the analog signal into a digital signal?
My questions may seem stupid. Thank you.
The OSI model isn't really a good fit for ALSA, which really only provides layer 1.
PulseAudio is an audio server and is the single client of an ALSA device interface. It provides something analogous to layer 7 of the OSI model to applications. It mixes the audio output streams from each client application connection down to a single stream for output. It provides an ALSA-compatible interface to audio client software (e.g. GStreamer and Xine) which acts as a proxy and connects to the audio server.
Analogue to digital (and digital to analogue) conversion takes place in hardware in what is referred to, rather confusingly, as a CoDec.
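To make the proxying concrete, here is a minimal playback sketch against the alsa-lib API; the assumption is that the "default" PCM is routed through PulseAudio's ALSA plugin (the usual desktop configuration), so the application talks plain ALSA while Pulse does the mixing behind it:

```c
/* Minimal ALSA playback sketch; "default" is assumed to resolve to the
 * PulseAudio ALSA plugin on a typical desktop install.
 * Build with: gcc play.c -lasound
 */
#include <alsa/asoundlib.h>

int main(void) {
    snd_pcm_t *pcm;
    if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0) < 0)
        return 1;

    /* 48 kHz, stereo, 16-bit interleaved, ~100 ms of buffering */
    if (snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                           SND_PCM_ACCESS_RW_INTERLEAVED,
                           2, 48000, 1, 100000) < 0) {
        snd_pcm_close(pcm);
        return 1;
    }

    static short buf[48000 * 2];          /* one second of silence */
    snd_pcm_writei(pcm, buf, 48000);      /* count is in frames, not samples */

    snd_pcm_drain(pcm);
    snd_pcm_close(pcm);
    return 0;
}
```

Exactly the same code runs against a raw hw:0 device; only what sits behind "default" changes, which is why Pulse can stay invisible to client software.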

Two-way audio for software ip camera

I am trying to set up a Raspberry Pi box with a USB camera as an IP camera that can be viewed from a generic Android IP camera monitor app. I've found some examples of how to get the video stream, and that works, but what I also need is two-way audio. This seems to come out of the box in standalone network cameras -- any idea how that works? I want to set it up in a way compatible with typical network cameras so that my cam can be used by any generic IP camera viewer app.
Well, modern cameras nowadays implement the ONVIF protocol. This protocol specifies that you have an RTSP server that streams audio and video from the camera to the PC, but it also defines a so-called audio backchannel for audio going the other way. It's a bit long to explain how it works; check it in the specs.
ONVIF is the standard, but you could also install an existing SIP client and do a video/audio VoIP call rather than implementing ONVIF - it depends on the long-term goals of your project.

manipulating audio input buffer on Ubuntu Linux

Suppose I want to code an audio filter in C++ that is applied to all audio, or to a specific microphone/source - where should I start with this on Ubuntu?
Edit: to be clear, I don't understand how to do this, or what the roles of PulseAudio, ALSA and GStreamer are.
ALSA provides an API for accessing and controlling audio and MIDI hardware. One portion of ALSA is a series of kernel-mode device drivers, whilst the other is a user-space library that applications link against. ALSA is single-client.
PulseAudio is a framework that facilitates multiple client applications sharing a single audio interface (ALSA being single-client). It provides a daemon process which 'owns' the audio interface and provides an IPC transport for audio between the daemon and the applications using it. It is used heavily in open-source desktop environments. Use of Pulse is largely transparent to applications - they continue to access audio input and output through the ALSA API while Pulse handles the audio transport and mixing. There is also JACK, which is targeted more towards 'professional' audio applications - perhaps a bit of a misnomer, although what is meant here is low-latency music production tools.
GStreamer is a general-purpose multimedia framework based on the signal-graph pattern, in which components have a number of input and output pins and provide a transformation function. A graph of these components is built to implement operations such as media decoding, with special nodes for audio and video input or output. It is similar in concept to CoreAudio and DirectShow. VLC and libav are both open-source alternatives that operate along similar lines. Your choice between these is a matter of API style and implementation language; GStreamer, in particular, is an OO API implemented in C, while VLC is C++.
The obvious way of tackling the problem you describe is to implement a GStreamer/libav/VLC component. If you want to process the audio and then route it to another application, this can be achieved by looping it back through Pulse or JACK, as in the sketch below.
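As a minimal sketch of that loop-back idea, assuming GStreamer 1.x with the PulseAudio plugins installed: the pipeline below captures from the default Pulse source, runs it through a stock filter element (audioecho from gst-plugins-good, standing in for your own filter) and plays it back out through Pulse:

```c
/* Capture -> filter -> playback loop through PulseAudio (GStreamer 1.x assumed).
 * Build with: gcc filter.c $(pkg-config --cflags --libs gstreamer-1.0)
 */
#include <gst/gst.h>

int main(int argc, char *argv[]) {
    gst_init(&argc, &argv);

    GError *err = NULL;
    GstElement *pipeline = gst_parse_launch(
        "pulsesrc ! audioconvert ! "
        /* 250 ms echo as a stand-in for your own effect */
        "audioecho delay=250000000 intensity=0.4 feedback=0.3 ! "
        "audioconvert ! pulsesink",
        &err);
    if (!pipeline) {
        g_printerr("Pipeline error: %s\n", err->message);
        g_clear_error(&err);
        return 1;
    }

    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    /* Runs until an error is posted on the bus (or the process is killed). */
    GstBus *bus = gst_element_get_bus(pipeline);
    GstMessage *msg = gst_bus_timed_pop_filtered(
        bus, GST_CLOCK_TIME_NONE, GST_MESSAGE_ERROR | GST_MESSAGE_EOS);
    if (msg)
        gst_message_unref(msg);
    gst_object_unref(bus);

    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(pipeline);
    return 0;
}
```

Replacing audioecho with your own processing means either subclassing GstAudioFilter in a plugin of your own, or pulling the raw samples into your application code with appsink/appsrc.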
ALSA provides a plug-in mechanism, but I suspect that implementing this from the ALSA documentation will be tough going.
The de facto architecture for building effects plug-ins of the type you describe is Steinberg's VST. There are plenty of open-source hosts and examples of plug-ins that can be used on Linux, and crucially, there is decent documentation. As with GStreamer/libav/VLC, you should be able to route audio in and out of this.
Out of these, VST is probably the easiest to pick up.

Audio hooking or a custom audio driver for audio processing and routing to the default audio device

I have developed a pretty complex piece of audio software for my client, with plugins for Winamp, Windows Media Player and VST. Now the client is interested in some method of avoiding the maintenance of this multitude of plugins; we have no way to support all the media players out there.
The client does not care about Unix/Mac yet, so I can look only at Windows XP and Vista/7.
Basically, what we need is a way to always reliably intercept as many audio output APIs as possible (well, except maybe ASIO - that's another story, I guess), pass this audio through our custom effects engine and then route it back to the default audio device, whatever it is.
Now I am thinking, what options do I have (theoretically).
I could use hooks. I would need to globally hook the older waveOut API and also DirectSound.
But will this still work on Vista/7?
I could use a virtual driver, like the author of the Virtual Audio Cable did:
http://software.muzychenko.net/eng/vac.htm
Seems a pretty daunting task. Anyway, the client will contact the author of VAC to see if he agrees to sell his source code for a reasonable price.
This driver could install itself as the default audio output device, intercept the audio stream from Windows, and pass it on to the real default device. Hmm, but what about the various DirectSound audio buffers - do I have to mix them myself, or is there some way I can tell the Windows mixer to mix them all for me and pass me a single mixed audio stream?
It seems this custom driver will of course kill all hardware audio acceleration, but we can live with that if we warn our customers about this issue.
As I understand it, the most current Windows driver framework is WDF.
But maybe it does not work for audio on Windows Vista/7?
I know, Vista/7 has a different audio stack from XP.
If I can do it using WDF, what driver should I write - kernel mode or user mode?
Maybe I am missing more elegant and simple options to intercept, process and route audio on Windows?
Try the Virtual Audio Streaming SDK. It is also a virtual sound card and lets you read/process audio data in real time.
http://www.virtualaudiostreaming.net/sdk-license.html
