I have an embedded system running an application which is OSS based. Unfortunately, this application runs at a fixed sample rate (8 kHz), but I need it to run at 48 kHz. Furthermore, I can't change the application.
I'm researching sample rate conversion plugins, such as dmix or libsamplerate, but I don't see how to use them with OSS.
Can somebody please point me in the right direction? Can I configure ALSA in such a way as to convert the OSS interface from 8 kHz to 48 kHz in/out of the system?
TIA
Mike
What you want is the alsa-oss package, which provides a wrapper (aoss) you can use to run a program and redirect its OSS sound output into ALSA, where all the normal ALSA plugins are available.
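As a minimal sketch (assuming the sound card is card 0; the program name is a placeholder), you run the application under the wrapper and let ALSA's plug plugin do the 8 kHz -> 48 kHz conversion on the default PCM:

    # run the OSS-only application through the ALSA OSS emulation
    aoss ./legacy-app

    # ~/.asoundrc
    pcm.!default {
        type plug
        slave {
            pcm "hw:0,0"    # real hardware device (assumption: card 0)
            rate 48000      # plug inserts a sample rate converter as needed
        }
    }

The plug plugin automatically converts rate, format and channels between what the application asks for and what the slave is fixed to.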
Related
Does anyone have any idea how to make the sound output on a Mac or PC play at 432 Hz tuning instead of 440 Hz? Are there any SDKs or open-source frameworks that could enable this?
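For what it's worth, retuning from A=440 Hz to A=432 Hz is just a constant pitch shift of 1200 * log2(432/440) ≈ -31.8 cents applied to everything. As a rough offline sketch (assuming sox is installed; file names are placeholders):

    # shift all material down ~31.8 cents (440 -> 432 tuning)
    sox input.wav output.wav pitch -31.8

Doing the same system-wide in real time would require inserting a pitch-shifting effect into the OS audio output chain.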
In an embedded Linux project I have exactly two processes that need to access the audio device. So far I'm using ALSA dmix for that. However, dmix is giving me a lot of trouble (as explained in this question).
Now I'm wondering: are there any simple alternatives to dmix? I can imagine that PulseAudio does a much better job, but I'm not sure it isn't overkill to bring a general-purpose sound server into a small embedded project just for mixing two audio streams.
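For reference, a minimal dmix setup for two processes looks like the sketch below (the card number and ipc_key are assumptions). A lot of dmix trouble comes from the slave parameters, so pinning rate, format and buffer geometry explicitly is often worth trying before replacing dmix entirely:

    pcm.shared {
        type dmix
        ipc_key 2048            # arbitrary key; must be the same for all clients
        slave {
            pcm "hw:0,0"        # assumption: card 0, device 0
            rate 48000
            format S16_LE
            period_size 1024
            buffer_size 4096
        }
    }

Both processes then open the PCM "shared" instead of the hardware device.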
I have a project that I am working on; for the purposes of this question, let's say they are wireless speakers.
We are using the Raspberry Pi for development right now, but we plan to move to our own embedded solution. The codec we've chosen fits our needs best, although it isn't an "ALSA supported" codec; that is, the ALSA website doesn't have any info on it.
Much of the PCM code I've found to develop this on the Raspberry Pi uses ALSA streams. Is it standard to use an ALSA-supported codec for these types of projects?
I haven't worked much with embedded Linux or an RTOS. I work with bare-metal systems quite frequently, though, which explains my confusion about what exactly ALSA is. It seems like some odd middleware or something.
ALSA is
the API that applications that want to use sound use; and
the library that implements this API; and
the interface between this library and the kernel; and
the kernel implementation of this interface; and
the framework to be used by sound drivers.
To have your codec supported in Linux, you must write an ALSA driver.
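In practice, for a codec chip, that means writing an ASoC codec driver. Below is a heavily stripped-down sketch of the modern component API; all names and capabilities (my_codec, 48 kHz / S16_LE stereo) are placeholders, and a real driver also needs register I/O, controls and DAPM widgets:

    #include <linux/module.h>
    #include <linux/platform_device.h>
    #include <sound/soc.h>

    /* Describe the digital audio interface the codec exposes. */
    static struct snd_soc_dai_driver my_codec_dai = {
        .name = "my-codec-dai",
        .playback = {
            .stream_name  = "Playback",
            .channels_min = 2,
            .channels_max = 2,
            .rates        = SNDRV_PCM_RATE_48000,
            .formats      = SNDRV_PCM_FMTBIT_S16_LE,
        },
    };

    /* Controls and DAPM routes for the chip would be filled in here. */
    static const struct snd_soc_component_driver my_codec_component = { };

    static int my_codec_probe(struct platform_device *pdev)
    {
        /* Register the codec with the ASoC core. */
        return devm_snd_soc_register_component(&pdev->dev,
                &my_codec_component, &my_codec_dai, 1);
    }

    static struct platform_driver my_codec_driver = {
        .probe  = my_codec_probe,
        .driver = { .name = "my-codec" },
    };
    module_platform_driver(my_codec_driver);

    MODULE_LICENSE("GPL");

A real codec is usually an I2C or SPI device rather than a platform device, but the registration pattern is the same.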
I would like to use the ALSA library to play multiple sound streams, with each stream having its own adjustable volume level. I would like to avoid higher-level abstractions like PulseAudio: this is for an ARM target board with single channel output, and I would like to avoid compiling PulseAudio and the associated issues. Please suggest the possible ways such an implementation could be done. Any guidance on using the ALSA plugins dmix / softvol is welcome.
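One plugin-only way to do this (a sketch, assuming card 0; the PCM and control names are placeholders) is to mix the streams with dmix and wrap each stream in its own softvol instance, so every stream gets an independent volume control:

    pcm.mixed {
        type dmix
        ipc_key 1024            # arbitrary shared key
        slave.pcm "hw:0,0"      # assumption: card 0, device 0
    }

    pcm.stream1 {
        type softvol
        slave.pcm "mixed"
        control {
            name "Stream1 Volume"   # appears as a normal mixer control
            card 0
        }
    }

    pcm.stream2 {
        type softvol
        slave.pcm "mixed"
        control {
            name "Stream2 Volume"
            card 0
        }
    }

Each application opens "stream1" or "stream2" with snd_pcm_open(), and the per-stream volumes can then be changed through amixer or the mixer API.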
I'm looking for documentation on how to build a Core Audio compliant ADC that connects to a Mac over USB or FireWire. All I've been finding is info on how to deal with Core Audio when programming the computer side.
I need info on how to make audio hardware Core Audio compliant.
Can anyone point me in the right direction?
This is a nice solution. It does all the hard work for you. If you have even basic hardware engineering experience, this should get you on your way. This chip will work great, with very few external components needed.
http://www.silabs.com/products/interface/usbtouart/Pages/usb-to-i2s-digital-audio-bridge.aspx