Streaming audio to a "speaker server" in Linux

Is there a way to stream all audio from a laptop (which has low-quality speakers) to a desktop with much better speakers, with Linux running on both computers? I think this would have to be a kernel driver, since it would have to sit under the ALSA system to be application-transparent.
Thanks,
Andrew

PulseAudio has network support, and it is the only way I know of to do low-level sound-device streaming.
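As a minimal sketch (the host name desktop.local and the subnet are placeholders to adapt, and this assumes PulseAudio is running on both machines): load the native TCP protocol module on the desktop, then point the laptop at it.

    # On the desktop (the "speaker server"): accept clients from the local network
    pactl load-module module-native-protocol-tcp auth-ip-acl=192.168.1.0/24

    # On the laptop: send one application's audio to the desktop...
    PULSE_SERVER=desktop.local paplay test.wav

    # ...or create a tunnel sink so all audio can be routed there
    pactl load-module module-tunnel-sink server=desktop.local

If Avahi is available, PulseAudio can also publish and discover network sinks automatically via its zeroconf modules, so the tunnel shows up as just another output device.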

Related

Standard approach for controlling an amplifier?

I am currently working on a custom board with a TPA3118D audio amplifier. The amplifier has three GPIO pins (Mute, Enable, Fault) which are controlled from the processor. We are able to hear audio once I drive Enable high from the Linux system running on the processor. I was thinking of automating this by having Linux drive the Enable pin high during boot, and this is where I am confused. Is this the right approach? Is it okay to enable the amplifier as soon as the hardware starts, or should the amplifier be enabled only while audio is playing?
I would like to understand the standard approach for controlling an amplifier. How is the enable pin handled? Do we keep the amplifier always on, or enable it only during playback?
Thanks in advance.
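For reference, the boot-time approach described above can be sketched with the legacy sysfs GPIO interface, e.g. from an init script or a systemd oneshot unit (the GPIO number 42 here is hypothetical; take the real one from your schematic):

    # Export the amplifier Enable pin and drive it high at boot
    echo 42 > /sys/class/gpio/export
    echo out > /sys/class/gpio/gpio42/direction
    echo 1 > /sys/class/gpio/gpio42/value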

Is the sound system in Linux layered like the OSI model?

I'm new to Linux and especially to its sound system. I've read many articles on this subject but I'm still confused. I know that ALSA provides audio functionality to the rest of the system, which means ALSA is the lowest "layer" of the sound system (just above the hardware itself). I also know that ALSA by itself can only handle one application at a time. So here are my questions:
1) Is PulseAudio a bridge that lets multiple apps use ALSA?
2) Are GStreamer, Phonon, and Xine bridge programs like PulseAudio?
3) Does ALSA convert the analog signal into a digital signal?
My questions may seem stupid. Thank you.
The OSI model isn't really a good fit for ALSA, as ALSA only provides something like layer 1.
PulseAudio is an audio server and is the single client of an ALSA device interface. To applications it provides something analogous to layer 7 of the OSI model: it mixes the audio output streams from each client application connection down to a single stream for output. It also provides an ALSA-compatible interface to audio client software (e.g. GStreamer and Xine) which acts as a proxy and connects to the audio server.
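That ALSA-compatible interface is the pulse plugin from alsa-plugins. As a minimal sketch, an /etc/asound.conf (or per-user ~/.asoundrc) that routes plain ALSA clients through PulseAudio looks like this:

    # Send default ALSA PCM and mixer access to the PulseAudio server
    pcm.!default {
        type pulse
    }
    ctl.!default {
        type pulse
    }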
Analogue to digital (and digital to analogue) conversion takes place in hardware in what is referred to, rather confusingly, as a CoDec.

Build a Core Audio compliant ADC (USB or FireWire)

I'm looking for documentation on how to build a Core Audio compliant ADC that connects to a Mac over USB or FireWire. All I've been finding is info on how to deal with Core Audio when programming the computer side.
I need info on how to make audio hardware Core Audio compliant.
Can anyone point me in the right direction?
This is a nice solution; it does all the hard work for you. If you have even basic hardware engineering experience, this should get you on your way. This chip will work great, and very few external components are needed.
http://www.silabs.com/products/interface/usbtouart/Pages/usb-to-i2s-digital-audio-bridge.aspx

Audio / Camera Driver - FriendlyARM Mini2440 - s3c2440

I am a newbie to embedded Linux and keen to learn how to write device drivers. I have a FriendlyARM Mini2440 board.
Please suggest which device driver would be easier to start learning with - audio, camera, or something else?
Need suggestions from experts.
Thanks a lot!
Between those two, I would say that a camera driver would be simpler. Audio drivers in Linux are more complex than most other drivers, and there seems to be very little documentation on writing them.
Have you read Linux Device Drivers by Jonathan Corbet, Alessandro Rubini, and Greg Kroah-Hartman? That is probably the best way to start.
I'd recommend starting with serial, flash, or ethernet drivers, in that order. Those are common, the code is straightforward, and there's good documentation and examples for them.

video conferencing stack for embedded devices

I am looking for a video conferencing stack that I can run on an embedded device. The camera will be connected through USB; hardware video acceleration and Ethernet are available. We are running Linux and DirectFB. Any suggestions?
GStreamer might be an option. It is a C stack, and it is used for a similar purpose (I think) on embedded hardware, e.g. TI's DaVinci processors.
I don't know to what extent it is effectively used or usable on such hardware. However, GStreamer does have all the components needed for video and audio muxing and streaming.
Since it takes a pipelined / modular approach, you can plug into GStreamer at any stage, i.e. keep the video acquisition / compression as custom code and hand only the RTP side of your app to GStreamer. Or you can write a custom compression plugin and use "standard" GStreamer apps with your custom hardware-accelerated encoder.
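As a rough sketch (assuming GStreamer 1.x, a V4L2 USB camera, software H.264 encoding via x264enc, and placeholder device/host values), a one-way RTP video link can be prototyped entirely from the command line:

    # Sender: capture, encode to H.264, packetize as RTP, stream over UDP
    gst-launch-1.0 v4l2src device=/dev/video0 ! videoconvert ! \
        x264enc tune=zerolatency bitrate=512 ! rtph264pay ! \
        udpsink host=192.168.1.20 port=5000

    # Receiver: depacketize, decode, and display
    gst-launch-1.0 udpsrc port=5000 \
        caps="application/x-rtp,media=video,clock-rate=90000,encoding-name=H264,payload=96" ! \
        rtph264depay ! avdec_h264 ! videoconvert ! autovideosink

Swapping x264enc for a hardware encoder element is exactly the kind of stage-level substitution described above.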
