Setting channel volume in ALSA - Linux

My app plays raw PCM audio data through various channels using ALSA. I allocate a new audio channel with snd_pcm_open(), set the PCM format via the snd_pcm_hw_params_xxx() calls, and finally feed raw PCM audio data to ALSA with the snd_pcm_writei() API.
This is all working fine so far, but I haven't found any way to tell ALSA to reduce the volume of a sound channel allocated in the way outlined above. Of course, I could just manually apply volume scaling to the PCM data before sending it to ALSA via snd_pcm_writei(), but is there really no way to have ALSA do this on its own?
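For reference, a minimal sketch of that flow, using the snd_pcm_set_params() convenience wrapper in place of the individual snd_pcm_hw_params_xxx() calls; the "default" device name and the S16_LE / 2-channel / 48 kHz parameters are placeholder assumptions:

#include <alsa/asoundlib.h>

/* Minimal sketch of the playback flow described above. Device name,
 * sample format, channel count and rate are assumptions. */
static snd_pcm_t *open_playback(void)
{
    snd_pcm_t *pcm;

    if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0) < 0)
        return NULL;
    if (snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                           SND_PCM_ACCESS_RW_INTERLEAVED,
                           2, 48000, 1, 500000) < 0) { /* 0.5 s latency */
        snd_pcm_close(pcm);
        return NULL;
    }
    /* ...then feed raw PCM frames with snd_pcm_writei(pcm, buf, frames) */
    return pcm;
}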

ALSA has no such function.
You have to do the scaling yourself, or use a sound server like PulseAudio.
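A minimal sketch of doing the scaling yourself, assuming interleaved signed 16-bit samples: apply a linear gain to each sample in place just before the buffer goes to snd_pcm_writei().

#include <stddef.h>
#include <stdint.h>

/* Scale interleaved S16 PCM in place; "gain" is a linear factor,
 * normally in [0.0, 1.0]. */
static void scale_pcm_s16(int16_t *samples, size_t count, float gain)
{
    for (size_t i = 0; i < count; i++) {
        int32_t v = (int32_t)(samples[i] * gain);
        /* Clamp defensively in case a gain above 1.0 is ever used. */
        if (v > INT16_MAX) v = INT16_MAX;
        if (v < INT16_MIN) v = INT16_MIN;
        samples[i] = (int16_t)v;
    }
}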

You can via amixer:
amixer cset name='Headphone Playback Volume' 98%,100%
To get the name value, check the control names listed in alsamixer, appending 'Playback Volume' to each.
And via the alsamixer keyboard:
q increases the left channel, z decreases it.
e increases the right channel, c decreases it.
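The same thing can also be done programmatically through ALSA's mixer API; note that this adjusts the card's mixer control, not an individual PCM stream. A rough sketch, where the card "default" and the control name "Headphone" are assumptions to adapt to your hardware:

#include <alsa/asoundlib.h>

/* Set a playback volume control by percentage via the ALSA mixer API. */
static int set_playback_volume(const char *card, const char *name, long percent)
{
    snd_mixer_t *handle;
    snd_mixer_selem_id_t *sid;
    snd_mixer_elem_t *elem;
    long min, max;

    if (snd_mixer_open(&handle, 0) < 0)
        return -1;
    if (snd_mixer_attach(handle, card) < 0 ||
        snd_mixer_selem_register(handle, NULL, NULL) < 0 ||
        snd_mixer_load(handle) < 0) {
        snd_mixer_close(handle);
        return -1;
    }

    snd_mixer_selem_id_alloca(&sid);
    snd_mixer_selem_id_set_index(sid, 0);
    snd_mixer_selem_id_set_name(sid, name);
    elem = snd_mixer_find_selem(handle, sid);
    if (elem == NULL) {
        snd_mixer_close(handle);
        return -1;
    }

    /* Map the percentage onto the control's native range. */
    snd_mixer_selem_get_playback_volume_range(elem, &min, &max);
    snd_mixer_selem_set_playback_volume_all(elem, min + (max - min) * percent / 100);

    snd_mixer_close(handle);
    return 0;
}

/* e.g. set_playback_volume("default", "Headphone", 98); */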

Related

Streaming data over bluetooth

I'm working on a project that streams data to a laptop. The data is an analog signal with 5 kHz bandwidth (almost like audio) that is digitized first and should be transmitted over a Bluetooth module to a laptop. I've searched a lot for modules that use this protocol to stream data. I ruled out simple Bluetooth modules like the HC-05 or HC-06, because their limits on packet size and interval time make them unusable for this application. It has been suggested to use audio Bluetooth modules like the BC127 and CSR because of their appropriate sample rate (I want more than 20 kS/s) and their applications, so I want to use them, but for my signal rather than an audio signal. Now I want to ask you:
1. Can I use these modules to acquire my signal (which is not an audio signal) wirelessly?
2. Do these modules compress the signal for transmission, and should I decompress it on the receiver side? (I know they have some audio DSPs, but I don't know what they are or what they do.)
3. Can a laptop's Bluetooth hardware receive this data without any problem? If not, what are the alternatives?
4. Is there any filtering in the process? I mean filtering to the voice band (300 Hz ~ 4 kHz).
Thank you.

PortAudio unreliable: Expression '...' failed

I'm currently experimenting with real-time signal processing, so I went and tried out PortAudio (from C).
I have two audio interfaces on my computer, onboard sound (Intel HD Audio) and a USB audio interface. Both generally work fine under ALSA on Linux. I also tried the USB audio interface under JACK on Linux and this also works perfectly.
What I do:
My code just initializes PortAudio, then opens and starts a stream (one channel, paInt32 sample format, defaultLowInputLatency / defaultLowOutputLatency; I also tried paFloat32 and defaultHighInputLatency / defaultHighOutputLatency, which didn't improve anything).
On each invocation of the callback, it copies sizeof(int32_t) * frameCount bytes via memcpy from the input buffer to the output buffer, then returns paContinue. It does nothing else in the callback: no memory allocation, no system calls, nothing it could block on. It just outputs what it has read. The code is very simple, yet I can't get it running.
Replacing the memcpy with a loop copying frameCount elements of type int32_t over from the input to the output buffer didn't change anything.
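In code, the callback looks essentially like this (a reconstruction from the description above, not the verbatim source):

#include <stdint.h>
#include <string.h>
#include <portaudio.h>

/* Pass-through callback: copy the input block straight to the output
 * (one channel, paInt32) and continue. */
static int passthrough_callback(const void *input, void *output,
                                unsigned long frameCount,
                                const PaStreamCallbackTimeInfo *timeInfo,
                                PaStreamCallbackFlags statusFlags,
                                void *userData)
{
    (void)timeInfo; (void)statusFlags; (void)userData;
    memcpy(output, input, sizeof(int32_t) * frameCount);
    return paContinue;
}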
What I've tried:
The following scenarios were tried out with PortAudio.
1. Input and output via USB audio interface, callback mechanism on PortAudio, ALSA backend.
2. Input and output via USB audio interface, blocking I/O on PortAudio with 1024 samples buffer size, ALSA backend.
3. Input via USB audio interface, output via onboard sound, callback mechanism on PortAudio, ALSA backend.
4. Input via USB audio interface, output via onboard sound, blocking I/O on PortAudio with 1024 samples buffer size, ALSA backend.
5. Input and output via USB audio interface, callback mechanism on PortAudio, JACK backend.
6. Input and output via USB audio interface, blocking I/O on PortAudio with 1024 samples buffer size, JACK backend.
The problems I encountered:
The results were as follows. (Numbers represent the scenarios described above.)
1. No output from device.
2. Output from device unsteady (interrupted); lots of buffer underruns all the time.
3. No output from device.
4. Output from device unreliable: sometimes it works, sometimes it doesn't, without changing anything, just running the executable multiple times. When it works, latency starts off low but increases over time and becomes very noticeable.
5. No output from device.
6. No output from device.
Between tries, I checked whether ALSA was still responsive (sometimes it got completely "locked up", so that no application could output sound any longer); whenever that happened, I rebooted the system and then continued testing.
More details, that might be useful when tracking the problem down:
In the scenarios where there is no output at all, I get the following error messages when using ALSA as the backend.
Expression 'err' failed in 'src/hostapi/alsa/pa_linux_alsa.c', line: 3350
Expression 'ContinuePoll( self, StreamDirection_In, &pollTimeout, &pollCapture )' failed in 'src/hostapi/alsa/pa_linux_alsa.c', line: 3876
Expression 'PaAlsaStream_WaitForFrames( stream, &framesAvail, &xrun )' failed in 'src/hostapi/alsa/pa_linux_alsa.c', line: 4248
I get the following error message when using JACK as the backend.
Cannot lock down 42435354 byte memory area (Cannot allocate memory)
In addition, no matter what method I use, I always get these warnings.
ALSA lib pcm.c:2267:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.rear
ALSA lib pcm.c:2267:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.center_lfe
ALSA lib pcm.c:2267:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.side
ALSA lib pcm_route.c:867:(find_matching_chmap) Found no matching channel map
ALSA lib pcm_route.c:867:(find_matching_chmap) Found no matching channel map
When using ALSA, I also get one or two complaints about underruns.
ALSA lib pcm.c:7905:(snd_pcm_recover) underrun occurred
The PortAudio functions that I call (Pa_Initialize, Pa_OpenStream, Pa_StartStream, Pa_StopStream, Pa_CloseStream, Pa_Terminate, in this order), all return paNoError.
The paex_read_write_wire.c (blocking I/O) example that comes with PortAudio can usually access the device, but it also experiences lots of underruns (like scenario 2 above).
In either case, there's nothing interesting showing up in dmesg. (Just checked that, since ALSA has a kernel-level component.)
My question:
Does anyone know what the problem is here and how I could fix it? Or at least, how I could narrow it down a bit more?
When you write only a single block of samples, the playback device will run out of samples just when you're about to write the next block.
You should fill up the playback device's buffer with zero samples before you start the read/write loop.
I'm unclear on the details, but for me switching from default low input/output latency to default high input/output latency cured this problem, and I can't perceive a change in latency.
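A minimal sketch of that zero-fill suggestion, assuming the blocking-I/O setup from the scenarios above (mono, paInt32, 1024-frame buffers):

#include <stdint.h>
#include <string.h>
#include <portaudio.h>

#define FRAMES_PER_BUFFER 1024

/* Write one buffer of silence before entering the read/write loop,
 * so the playback side has a full buffer of headroom when the first
 * real block arrives. */
static void prefill_with_silence(PaStream *stream)
{
    int32_t silence[FRAMES_PER_BUFFER] = {0};
    /* Write more than one buffer for extra safety margin,
     * at the cost of added latency. */
    Pa_WriteStream(stream, silence, FRAMES_PER_BUFFER);
}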

Audio channels change/swap automatically

I am working with digital TV on a Linux platform. Currently I am facing an issue with audio. When I feed stereo audio to the
snd_pcm_writei()
function, after the application has been running for a long time the audio channels get swapped: right-channel audio is heard on the left channel and left on the right. In the failure case, I dumped the PCM data to a file before handing it to ALSA and played it with 'aplay', and the audio was fine, so I think the PCM data is OK. My system uses an 'AK4643' audio codec device. Has anyone faced this issue? If so, please help me.
The issue was associated with the I2S driver.
Fixed the issue with an updated driver from the chip vendor.

Sound driver works fine with aplay/arecord, but not with another application

I wrote an I2S sound driver for the Raspberry Pi. It looks like it works fine.
With alsa-aplay I can play back some music, and with alsa-arecord I can record some sound, which sounds great!
Now, the thing is: I cross-compiled a simple application with pjproject/pjsip, with the aim of using the Raspberry Pi as a softphone. Some additional info: I built my own kernel (Angstrom distribution) and rootfs with OpenEmbedded, also with pjproject included.
pjproject has some test applications, and I am using one of them (pjsystest) to simply test whether my sound driver works fine. And it doesn't...
pjsystest has several test options, such as Play a Tone and Record Audio.
Through debugging I can conclude that when I start the Play a Tone option, the following callbacks are called:
- PCM playback open
- PCM playback hw params
- PCM playback prepare
- PCM playback trigger (start)
- PCM playback trigger (stop)
- PCM playback prepare
Using just aplay:
- PCM playback open
- PCM playback hw params
- PCM playback prepare
- PCM playback trigger (start)
//when the file is at the end:
- PCM playback trigger (stop)
- PCM playback hw free
- PCM playback close
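For context, the "trigger (start)" and "trigger (stop)" lines above correspond to the driver's .trigger callback in snd_pcm_ops being invoked with SNDRV_PCM_TRIGGER_START / SNDRV_PCM_TRIGGER_STOP. A bare-bones sketch, where my_start_dma()/my_stop_dma() are hypothetical placeholders for the hardware-specific work:

#include <sound/pcm.h>  /* kernel build context */

/* Hypothetical hardware-specific helpers, placeholders only. */
static void my_start_dma(struct snd_pcm_substream *substream);
static void my_stop_dma(struct snd_pcm_substream *substream);

static int my_pcm_trigger(struct snd_pcm_substream *substream, int cmd)
{
    switch (cmd) {
    case SNDRV_PCM_TRIGGER_START:
        my_start_dma(substream);   /* start shifting samples out */
        return 0;
    case SNDRV_PCM_TRIGGER_STOP:
        my_stop_dma(substream);    /* halt the transfer */
        return 0;
    default:
        return -EINVAL;
    }
}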
Debugging with pjsystest gives some strange results.
I don't understand why the trigger callback is called a second time to stop the stream, and why after that only the prepare callback is called (and not the trigger start callback as well). Because of that, no tone plays (the playback stream has already been stopped and is not started again).
I hope you understand what my problem is, and I hope someone can give me an answer or a suggestion pointing in the right direction (or, even better, the solution).
Thanks a lot!
[edit]
In the meantime I tried some other pjproject sample applications: stereotest and auddemo. Those actually work :) That's good; it tells me (I think):
- nothing is wrong with my kernel and rootfs
- nothing is wrong with my driver.
The question now is: why is pjsystest not working? Why does it call trigger start and then stop it immediately?
[/edit]

Can v4l2 be used to read audio and video from the same device?

I have a capture card that captures SDI video with embedded audio. I have source code for a Linux driver, which I am trying to enhance to add video4linux2 support. My changes are based on the vivi example.
The problem I've come up against is that all the examples I can find deal with only video or only audio. Even on the client side, everything seems to assume v4l is video-only, like ffmpeg's libavdevice.
Do I need to have my driver create two separate devices, a v4l2 device and an alsa device? It seems like this makes the job of keeping audio and video in sync much more difficult.
I would prefer some way for each buffer passed between the driver and the app (through v4l2's mmap interface) to contain a frame, plus some audio that matches up (with respect to time) with that frame.
Or perhaps have each buffer contain a flag indicating if it is a video frame, or a chunk of audio. Then the time stamps on the buffers could be used to sync things up.
But I don't see a way to do this with the V4L2 API spec, nor do I see any examples of v4l2-enabled apps (gstreamer, ffmpeg, transcode, etc) reading both audio and video from a single device.
Generally, the audio capture part of a device shows up as a separate device. It's usually a different physical device (possibly sharing a card), which makes sense. I'm not sure how much help that is, but it's how all of the software I'm familiar with works...
There are some spare or reserved fields in the v4l2 buffers that can be used to pass audio or other data from the driver to the calling application via pointers to mmaped buffers.
I modified the BT8x8 driver to use this approach to pass data from an A/D card synchronized to the video on Ubuntu 6.06.
It worked OK, but the effort of maintaining my modified driver caused me to abandon this approach.
If you are still interested I could dig out the details.
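For anyone experimenting with the timestamp-based sync idea from the question, here is a rough sketch of the application side, assuming fd is an open V4L2 capture device already streaming with mmap buffers: dequeue a frame, read the driver-supplied capture timestamp (usable for matching against a separately captured audio stream), then re-queue the buffer.

#include <linux/videodev2.h>
#include <string.h>
#include <stdio.h>
#include <sys/ioctl.h>

static int dequeue_and_requeue(int fd)
{
    struct v4l2_buffer buf;

    memset(&buf, 0, sizeof(buf));
    buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    buf.memory = V4L2_MEMORY_MMAP;

    if (ioctl(fd, VIDIOC_DQBUF, &buf) < 0)
        return -1;

    /* The driver stamps each buffer at capture time. */
    printf("frame %u captured at %ld.%06ld\n", buf.index,
           (long)buf.timestamp.tv_sec, (long)buf.timestamp.tv_usec);

    /* ...process the frame, then hand the buffer back. */
    return ioctl(fd, VIDIOC_QBUF, &buf);
}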
If you want your driver to play well with gstreamer etc., a separate audio device is generally what is expected.
On most cheap v4l2 capture cards, the audio is only an analog pass-through with a volume control, requiring a jumper cable to capture the audio via the sound card's line input.
