Unable to play a WAV file using the ALSA API

I am reading a WAV file and trying to play it through the ALSA API using snd_pcm_writei().
The WAV file header has the following values:
Audio Format: 1 (PCM)
Num Channels: 1
Sample Rate: 11025
Byte Rate: 11025
Block Align: 1
Bits Per Sample: 8
Subchunk2 id: 0x61746164
Subchunk2 Size: 24569
I did not change the buffer size or period size. The period size for my hw device is 4096 frames (I read it using snd_pcm_hw_params_get_period_size()).
The call to snd_pcm_writei() looks like this:
// buff_size = period_size * size of each frame = 4096 * 1 byte
int16_t *buff = (int16_t *) malloc(buff_size);

for (i = 0; i < 6; ++i) {
    memcpy(buff, &samples[i * period_size], buff_size);

    if ((err = snd_pcm_writei(pcm, buff, period_size)) == -EPIPE) {
        printf("XRUN.\n");
        snd_pcm_prepare(pcm);
    } else if (err < 0) {
        printf("ERROR. Can't write to PCM device. %s\n", snd_strerror(err));
    }
}
As the WAV file is 8-bit PCM mono, the frame size is 1 byte, so this file's data amounts to 24569 frames. Using the default period_size, the buffer size for each writei() call = period_size * channels * 1 byte = 4096 bytes.
So I need 6 calls to writei() to play the entire file. But when I do that I cannot hear anything. Any idea what is wrong?
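For reference, here is a minimal playback sketch under the same assumptions (8-bit mono data, PCM handle already opened and hw params applied elsewhere). The handle pcm and the samples array are the ones from the question; the final snd_pcm_drain() is an extra step, not in the code above, so that buffered frames are actually played before the device is closed:

#include <stdio.h>
#include <alsa/asoundlib.h>

/* Sketch only: assumes `pcm` is an opened snd_pcm_t* with hw params already set
   (8-bit, mono, 11025 Hz) and `samples` holds the 24569 one-byte frames. */
static void play_all(snd_pcm_t *pcm, const uint8_t *samples,
                     snd_pcm_uframes_t total_frames,
                     snd_pcm_uframes_t period_size)
{
    snd_pcm_uframes_t written = 0;

    while (written < total_frames) {
        snd_pcm_uframes_t chunk = total_frames - written;
        if (chunk > period_size)
            chunk = period_size;                 /* last chunk may be shorter */

        snd_pcm_sframes_t err = snd_pcm_writei(pcm, samples + written, chunk);
        if (err == -EPIPE) {
            printf("XRUN.\n");
            snd_pcm_prepare(pcm);                /* recover and retry */
        } else if (err < 0) {
            printf("ERROR. Can't write to PCM device. %s\n", snd_strerror((int)err));
            break;
        } else {
            written += (snd_pcm_uframes_t)err;   /* err = frames actually accepted */
        }
    }

    snd_pcm_drain(pcm);                          /* block until buffered audio has played */
}

With the values from the question this would be called as play_all(pcm, samples, 24569, 4096).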

Related

Get over 32 bytes limit python smbus

I tried to read 50 bytes of data from an Arduino with a Raspberry Pi 3. There is a limitation that a block can only be 32 bytes long when using SMBus. Python has a library called smbus2 that provides SMBus access. This library also provides a way to read beyond that 32-byte limit using i2c_rdwr, which I'm using. But instead of getting those 50 bytes, I'm only receiving 32 bytes, and bytes 33 to 49 are filled with 0xFF.
The code:
from smbus2 import SMBus, i2c_msg

bus = SMBus(1)
read = i2c_msg.read(addr, 50)  # addr is the Arduino's I2C slave address
bus.i2c_rdwr(read)
res = list(read)
print(''.join([chr(i) for i in res]))
It gives:
[some text 32 char long]+ÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿ
In case someone stumbles upon this: you can't send more than 32 bytes of data in one transmission from the Arduino. You could split the data and send fewer than 32 bytes per transmission, or loop over the string and send each byte. For example:
// onReceiveEvent, doesn't work
void sendMessage()
{
    Wire.write(data, LENGTH); // data length is 50 bytes
    // this however works if LENGTH < 32
}

// onReceiveEvent, works
void sendMessage()
{
    for (int i = 0; i < LENGTH; i++)
    {
        Wire.write(data[i]);
    }
}
// reason: in file twi.c:332
/*
 * Function twi_transmit
 * Desc     fills slave tx buffer with data
 *          must be called in slave tx event callback
 * Input    data: pointer to byte array
 *          length: number of bytes in array
 * Output   1 length too long for buffer
 *          2 not slave transmitter
 *          0 ok
 */
uint8_t twi_transmit(const uint8_t* data, uint8_t length)
{
    uint8_t i;

    // ensure data will fit into buffer
    if (TWI_BUFFER_LENGTH < (twi_txBufferLength + length)) {
        return 1;
    }
    ...
    twi_txBufferLength += length;
    return 0;
}
I can still use the same Python script; it is just that the Arduino is not capable of sending more than 32 bytes in a single transmission.
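To illustrate the "split the data" idea on the master side, here is a rough C sketch using the Linux i2c-dev interface (C is used only to match the other examples on this page; the original setup used Python/smbus2). It assumes a hypothetical protocol, not present in the code above, where the Arduino serves its 50-byte message starting at whatever offset byte the master last wrote:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/i2c-dev.h>

#define SLAVE_ADDR 0x08   /* example address, not taken from the question */
#define TOTAL_LEN  50
#define CHUNK_LEN  30     /* stay below the 32-byte Wire buffer */

int main(void)
{
    char buf[TOTAL_LEN + 1];
    int fd = open("/dev/i2c-1", O_RDWR);

    if (fd < 0 || ioctl(fd, I2C_SLAVE, SLAVE_ADDR) < 0) {
        perror("i2c setup");
        return 1;
    }

    for (int off = 0; off < TOTAL_LEN; off += CHUNK_LEN) {
        int want = TOTAL_LEN - off;
        if (want > CHUNK_LEN)
            want = CHUNK_LEN;

        unsigned char reg = (unsigned char)off;   /* tell the slave where to start */
        if (write(fd, &reg, 1) != 1 || read(fd, buf + off, want) != want) {
            perror("i2c transfer");
            return 1;
        }
    }

    buf[TOTAL_LEN] = '\0';
    printf("%s\n", buf);
    close(fd);
    return 0;
}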

sending audio via bluetooth a2dp source esp32

I am trying to send a measured I2S analogue signal (e.g. from a mic) to the sink device via Bluetooth instead of the default noise.
Currently I am trying to change bt_app_a2d_data_cb():
static int32_t bt_app_a2d_data_cb(uint8_t *data, int32_t i2s_read_len)
{
    if (i2s_read_len < 0 || data == NULL) {
        return 0;
    }

    char *i2s_read_buff = (char *) calloc(i2s_read_len, sizeof(char));
    size_t bytes_read = 0;

    i2s_adc_enable(I2S_NUM_0);
    while (bytes_read == 0)
    {
        i2s_read(I2S_NUM_0, i2s_read_buff, i2s_read_len, &bytes_read, portMAX_DELAY);
    }
    i2s_adc_disable(I2S_NUM_0);

    // taking care of the watchdog
    TIMERG0.wdt_wprotect = TIMG_WDT_WKEY_VALUE;
    TIMERG0.wdt_feed = 1;
    TIMERG0.wdt_wprotect = 0;

    uint32_t j = 0;
    uint16_t dac_value = 0;

    // change 16bit input signal to 8bit
    for (int i = 0; i < i2s_read_len; i += 2) {
        dac_value = ((((uint16_t) (i2s_read_buff[i + 1] & 0xf) << 8) | ((i2s_read_buff[i + 0]))));
        data[j] = (uint8_t) dac_value * 256 / 4096;
        j++;
    }

    // testing for loop
    //uint8_t da = 0;
    //for (int i = 0; i < i2s_read_len; i++) {
    //    data[i] = (uint8_t) (i2s_read_buff[i] >> 8); // & 0xff;
    //    da++;
    //    if (da > 254) da = 0;
    //}

    free(i2s_read_buff);
    i2s_read_buff = NULL;
    return i2s_read_len;
}
I can hear the sawtooth sound from the sink device.
Any ideas what to do?
Your data can be an array of numbers representing the analog signal or its variations; for example, a 32 kHz sound signal contains 32,000 samples for every second of captured sound. If your data is meant to be transmitted in offline mode, you can prepare the outgoing data as a buffer plus a terminator character, then send the buffer through the Bluetooth module of the sender device, which is connected to the appropriate microcontroller. On the receiving device, once you get the terminator character (e.g. "\r"), you can process the incoming buffer. In my case, I had to send a string array of numbers, but I often received one or two unknown characters at the start, and to avoid that I rejected them while filling the receive buffer (see: how to trim unknown first characters of string in CodeVision).
If you want it in online mode, i.e. your data must be transmitted and played concurrently, you must account for the delays and processing time of all the microcontrollers and devices involved (Bluetooth, EEPROM ICs, and so on).
I'm also working on an "a2dp source esp32" project.
I'm playing a WAV file from SPIFFS.
If the WAV file is 44100 Hz, 16-bit, stereo, then you can write the stream of bytes from the file directly into the data[] array.
When I tried to write less data than the len variable asks for and return a smaller value (for example 88), I got an error; now I'm trying to figure out how to reduce this buffer because of the big latency (len = 512).
Also, the data in the data[] array is stored as stereo.
Example: read data from file to data[ ]-array:
size_t read;
read = fread((void*) data, 1, len, fwave); // fwave is a file
if (read < len) { // If we hit EOF, go back to the beginning of the file
    fseek(fwave, 0x2C, SEEK_SET);          // skip the 44-byte wav header
    read = fread((void*) (&(data[read])), 1, len - read, fwave); // read the rest
}
If the file is mono, I convert it to stereo like this (I read half and then duplicate the data):
int32_t lenHalf = len / 2;
read = fread((void*) data, 1, lenHalf, fwave);
if (read < lenHalf) {
    fseek(fwave, 0x2C, SEEK_SET); // skip the 44-byte wav header
    read = fread((void*) (&(data[read])), 1, lenHalf - read, fwave); // read the rest
}
// copy to the second channel
uint16_t *data16 = (uint16_t*) data;
for (int i = lenHalf / 2 - 1; i >= 0; i--) {
    data16[(i << 1)] = data16[i];
    data16[(i << 1) + 1] = data16[i];
}
I think you are getting the sawtooth sound because:
your data is mono?
in your "return i2s_read_len;", i2s_read_len is less than len
you change the 16-bit input signal to 8-bit, while the data in the data[] array is expected as 16-bit stereo: 2 bytes left, 2 bytes right, 2 bytes left, 2 bytes right, ...
I'm not sure, it's a guess.
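To illustrate the last two points, here is a rough, untested sketch of how the callback could fill data[] with 16-bit interleaved stereo built from the 12-bit mono ADC words (the buffer names follow the question; the centering and scaling are just one plausible choice):

#include <stdint.h>

/* Sketch: convert 12-bit mono ADC samples (two bytes each, little-endian, as in
   the question) into 16-bit interleaved stereo in data[]. `len` is the byte count
   requested by the A2DP callback; i2s_read_buff must hold at least len/2 bytes. */
static int32_t fill_stereo_16bit(uint8_t *data, int32_t len, const uint8_t *i2s_read_buff)
{
    int16_t *out = (int16_t *)data;
    int32_t frames = len / 4;                    /* 2 channels * 2 bytes per sample */

    for (int32_t i = 0; i < frames; i++) {
        uint16_t raw = (uint16_t)(((i2s_read_buff[2 * i + 1] & 0x0F) << 8)
                                  | i2s_read_buff[2 * i]);
        int16_t sample = (int16_t)(((int32_t)raw - 2048) * 16);  /* center and scale 12-bit to 16-bit */

        out[2 * i]     = sample;                 /* left  */
        out[2 * i + 1] = sample;                 /* right */
    }
    return len;                                  /* return exactly what was asked for */
}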

How to lower the quality and specs of a wav file on linux

So to preface my problem, I'll give some context.
In SDL2 you can load WAV files, for example as shown on the wiki:
SDL_AudioSpec wav_spec;
Uint32 wav_length;
Uint8 *wav_buffer;

/* Load the WAV */
if (SDL_LoadWAV("test.wav", &wav_spec, &wav_buffer, &wav_length) == NULL) {
    fprintf(stderr, "Could not open test.wav: %s\n", SDL_GetError());
} else {
    /* Do stuff with the WAV data, and then... */
    SDL_FreeWAV(wav_buffer);
}
The error I'm getting from SDL_GetError() is: Complex WAVE files not supported.
Now the wav file I'm intending to open has the following properties:
Playing test.wav.
Detected file format: WAV / WAVE (Waveform Audio) (libavformat)
ID_AUDIO_ID=0
[lavf] stream 0: audio (pcm_s24le), -aid 0
Clip info:
encoded_by: Pro Tools
ID_CLIP_INFO_NAME0=encoded_by
ID_CLIP_INFO_VALUE0=Pro Tools
originator_reference:
ID_CLIP_INFO_NAME1=originator_reference
ID_CLIP_INFO_VALUE1=
date: 2016-05-1
ID_CLIP_INFO_NAME2=date
ID_CLIP_INFO_VALUE2=2016-05-1
creation_time: 20:13:34
ID_CLIP_INFO_NAME3=creation_time
ID_CLIP_INFO_VALUE3=20:13:34
time_reference:
ID_CLIP_INFO_NAME4=time_reference
ID_CLIP_INFO_VALUE4=
ID_CLIP_INFO_N=5
Load subtitles in dir/
ID_FILENAME=dir/test.wav
ID_DEMUXER=lavfpref
ID_AUDIO_FORMAT=1
ID_AUDIO_BITRATE=2304000
ID_AUDIO_RATE=48000
ID_AUDIO_NCH=2
ID_START_TIME=0.00
ID_LENGTH=135.53
ID_SEEKABLE=1
ID_CHAPTERS=0
Selected audio codec: Uncompressed PCM [pcm]
AUDIO: 48000 Hz, 2 ch, s24le, 2304.0 kbit/100.00% (ratio: 288000->288000)
ID_AUDIO_BITRATE=2304000
ID_AUDIO_RATE=48000
ID_AUDIO_NCH=2
AO: [pulse] 48000Hz 2ch s16le (2 bytes per sample)
ID_AUDIO_CODEC=pcm
From the wiki.libsdl.org/SDL_OpenAudioDevice page and the subsequent wiki.libsdl.org/SDL_AudioSpec#Remarks page I can at least surmise that a WAV file with the following spec should work:
freq = 48000;
format = AUDIO_F32;
channels = 2;
samples = 4096;
The main problem I can see is that my wav file has the s16le format whereas it's not listed on the SDL_AudioSpec page.
This leads me to believe I need to reduce the quality of test.wav so it does not appear as "complex" in SDL.
When I search for Complex WAVE files not supported, nothing helpful comes up, except that it appears in the SDL_Mixer library, which as far as I know I'm not using.
Can the format be changed via ffmpeg to work in SDL2?
Edit: This appears to be the actual code in SDL2 where it complains. I don't really know enough about C to dig all the way through the vast SDL2 library, but I thought it might help if someone notices something just from hints like the variable names:
/* Read the audio data format chunk */
chunk.data = NULL;
do {
    if ( chunk.data != NULL ) {
        SDL_free(chunk.data);
        chunk.data = NULL;
    }
    lenread = ReadChunk(src, &chunk);
    if ( lenread < 0 ) {
        was_error = 1;
        goto done;
    }
    /* 2 Uint32's for chunk header+len, plus the lenread */
    headerDiff += lenread + 2 * sizeof(Uint32);
} while ( (chunk.magic == FACT) || (chunk.magic == LIST) );

/* Decode the audio data format */
format = (WaveFMT *)chunk.data;
if ( chunk.magic != FMT ) {
    SDL_SetError("Complex WAVE files not supported");
    was_error = 1;
    goto done;
}
After a couple of hours of audio-converting fun I got it working; I will have to tweak it to try to get better sound quality.
To answer the question at hand, the conversion can be done with:
ffmpeg -i old.wav -acodec pcm_s16le -ac 1 -ar 16000 new.wav
To find codecs on your version of ffmpeg:
ffmpeg -codecs
This format works with SDL.
Next within SDL when setting the desired SDL_AudioSpec make sure to have the correct settings:
freq = 16000;
format = AUDIO_S16LSB;
channels = 2;
samples = 4096;
Finally, the main issue was most likely using the legacy SDL_MixAudio instead of the newer SDL_MixAudioFormat, with the following settings:
SDL_MixAudioFormat(stream, mixData, AUDIO_S16LSB, len, SDL_MIX_MAXVOLUME / 2);
as can be found on the wiki.
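For what it's worth, here is a minimal, untested sketch (SDL 2.0.4 or later) that loads the converted file and plays it with SDL_QueueAudio, so no mixing callback is needed at all; the filename new.wav is simply the output of the ffmpeg command above:

#include <stdio.h>
#include <SDL2/SDL.h>

int main(void)
{
    SDL_AudioSpec wav_spec, have;
    Uint32 wav_length;
    Uint8 *wav_buffer;

    if (SDL_Init(SDL_INIT_AUDIO) != 0)
        return 1;

    /* After the ffmpeg conversion this should be plain PCM S16LE */
    if (SDL_LoadWAV("new.wav", &wav_spec, &wav_buffer, &wav_length) == NULL) {
        fprintf(stderr, "Could not open new.wav: %s\n", SDL_GetError());
        SDL_Quit();
        return 1;
    }

    /* Open a device with the WAV's own spec and queue the whole buffer */
    SDL_AudioDeviceID dev = SDL_OpenAudioDevice(NULL, 0, &wav_spec, &have, 0);
    if (dev == 0) {
        fprintf(stderr, "SDL_OpenAudioDevice: %s\n", SDL_GetError());
    } else {
        SDL_QueueAudio(dev, wav_buffer, wav_length);
        SDL_PauseAudioDevice(dev, 0);            /* start playback */
        while (SDL_GetQueuedAudioSize(dev) > 0)  /* wait until everything has played */
            SDL_Delay(100);
        SDL_CloseAudioDevice(dev);
    }

    SDL_FreeWAV(wav_buffer);
    SDL_Quit();
    return 0;
}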

Noise when merging two PCM files

I am merging two PCM streams and the resulting PCM has an additional "grrrrrr" noise. My code is:
int main(void)
{
    FILE *rCAudio;
    FILE *raudio;
    FILE *wtest;

    rCAudio = fopen("Audio1.pcm", "rb"); // Reading first pcm file
    if (rCAudio == NULL)
        cout << "Errr";

    raudio = fopen("Audio2.pcm", "rb");  // Reading second pcm file
    if (raudio == NULL)
        cout << "Errr";

    fopen_s(&wtest, "AudioMerge.pcm", "ab"); // Writing final pcm file

    short* first  = new short[1792];
    short* second = new short[1792];
    short* merge  = new short[1792];
    short sample1, sample2;

    while (1)
    {
        fread(first, 2, 1792, rCAudio);
        fread(second, 2, 1792, raudio);

        for (int j = 0; j < 1792; j++)
        {
            sample1 = first[j];
            sample2 = second[j];

            int mixedi = (int)sample1 + (int)sample2;
            if (mixedi > 32767) mixedi = 32767;
            if (mixedi < -32768) mixedi = -32768;
            merge[j] = (short)mixedi;
        }

        fwrite(merge, 2, 1972, wtest);
    }
}
I found the solution. The problem was:
I had written Audio1.pcm 4096 bytes at a time as BYTEs and Audio2.pcm 4096 bytes at a time as BYTEs, but I was reading 1972 bytes at a time as shorts.
So I corrected it by reading 4096 bytes at a time as BYTEs and writing the third, merged file 4096 bytes at a time as BYTEs.
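For comparison, here is a small sketch of the same mixing loop in plain C with the read and write counts kept consistent; the chunk size is arbitrary, and only as many samples as both files actually delivered are mixed and written:

#include <stdio.h>
#include <stdint.h>

#define SAMPLES 2048   /* arbitrary chunk size, in 16-bit samples */

/* Mix two 16-bit PCM files sample by sample, with clipping, into `out`. */
static void mix_pcm(FILE *a, FILE *b, FILE *out)
{
    int16_t first[SAMPLES], second[SAMPLES], merged[SAMPLES];

    for (;;) {
        size_t n1 = fread(first,  sizeof(int16_t), SAMPLES, a);
        size_t n2 = fread(second, sizeof(int16_t), SAMPLES, b);
        size_t n  = n1 < n2 ? n1 : n2;           /* mix only what both files provided */
        if (n == 0)
            break;

        for (size_t j = 0; j < n; j++) {
            int32_t mixed = (int32_t)first[j] + (int32_t)second[j];
            if (mixed >  32767) mixed =  32767;  /* clip instead of wrapping around */
            if (mixed < -32768) mixed = -32768;
            merged[j] = (int16_t)mixed;
        }
        fwrite(merged, sizeof(int16_t), n, out); /* write exactly what was mixed */
    }
}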

What is the smallest audio buffer needed to produce a tone without distortions with the WaveOut API

Does the WaveOut API have some internal limitation on the size of the buffer currently being played? I mean, if I provide a very small buffer, does it somehow affect the sound sent to the speakers? I am experiencing very strange noise when I generate and play a sine wave with a small buffer, something like a peak, or a "BUMP".
The complete story:
I made a program that can generate a sine sound signal in real time.
The variable parameters are frequency and volume. The project requirement was a maximum latency of 50 ms, so the program must be able to produce sine signals with a manually adjustable frequency in real time.
I used the Windows WaveOut API, C# and P/Invoke to access the API.
Everything works fine when the sound buffer is 1000 ms large. If I shrink the buffer to 50 ms, per the latency requirement, then for certain frequencies I experience a noise or "BUMP" at the end of every buffer. I cannot tell whether the generated sound is malformed (I checked, and it is not), or something happens in the audio chip, or there is some delay in initializing and playing.
When I save the produced audio to a .wav file, everything is perfect.
This means there must be some bug in my code, or the audio subsystem has a limitation on the buffer chunks sent to it.
For those who don't know: WaveOut must first be initialized, and then an audio header must be prepared for each buffer; the header contains the number of bytes that need to be played and a pointer to the memory holding the audio to be played.
UPDATE
Noise happens with the following combination: 44100 Hz sampling rate, 16 bits, 2 channels, 50 ms buffer, and a generated sine signal of 201 Hz, 202 Hz, 203 Hz, 204 Hz, 205 Hz ... 219 Hz.
220 Hz and 240 Hz are OK.
Why there is this difference of 20 Hz, I do not know.
There are a few things to keep in mind when you need to output audio smoothly:
The waveOutXxxx API is a legacy/compatibility layer on top of lower-level APIs, and as such it has greater overhead and is not recommended when you need to reach minimal latency. Note that this is unlikely to be your primary problem, but it is a piece of general knowledge helpful for understanding.
Because Windows is not a real-time OS and its audio subsystem is not real-time either, you have no control over the random latency between the moment you queue audio data for output and the moment the data is actually played back. The key is to keep a certain level of buffer fullness, which protects you from playback underflows and delivers smooth playback.
With waveOutXxxx you are not limited to a single buffer; you can allocate multiple reusable buffers and recycle them.
All in all, the waveOutXxxx, DirectSound and DirectShow APIs work well with latencies of 50 ms and up. With WASAPI exclusive-mode streams you can get 5 ms latencies and even lower.
EDIT: I seem to have spoken too early about 20 ms latencies. To compensate for this, here is a simple tool, LowLatencyWaveOutPlay (Win32, x64), to estimate the latency you can achieve. With sufficient buffering playback is smooth; otherwise you hear stuttering.
My understanding is that buffers might be returned late, and the optimal design in terms of smallest latency lies along the line of having more, smaller buffers so that you get them back as early as possible. For example, 10 buffers of 3 ms each rather than 3 buffers of 10 ms each.
D:\>LowLatencyWaveOutPlay.exe 48000 10 3
Format: 48000 Hz, 1 channels, 16 bits per sample
Buffer Count: 10
Buffer Length: 3 ms (288 bytes)
Signal Frequency: 1000 Hz
^C
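The 288 bytes in that output line up with the usual buffer-size arithmetic, which may help when choosing buffer counts and lengths for other formats:

/* bytes per buffer = sample_rate * channels * (bits / 8) * buffer_ms / 1000 */
int tool_buffer  = 48000 * 1 * (16 / 8) *  3 / 1000;  /* = 288 bytes, as printed above */
int asked_buffer = 44100 * 2 * (16 / 8) * 50 / 1000;  /* = 8820 bytes for the question's 50 ms case */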
So I came here because I wanted to find the basic latency of waveOutWrite() as well. I got around 25 to 26 ms of latency before I got a smooth sine tone.
This is for:
AMD Phenom(tm) 9850 Quad-Core Processor 2.51 GHz
4.00 GB ram
64-bit operating system, x64-based processor
Windows 10 Enterprise N
The code follows. It is a modified version of Petzold's sine wave program, refactored to run on the command line. I also changed the polling of the buffers to a callback on buffer completion, with the idea that this would make the program more efficient, but it didn't make a difference.
It also has a setup for elapsed timing, which I used to probe various timings for operations on the buffers. Using those I get:
Sine wave output program
Channels: 2
Sample rate: 44100
Bytes per second: 176400
Block align: 4
Bits per sample: 16
Time per buffer: 0.025850
Total time prepare header: 87.5000000000 usec
Total time to fill: 327.9000000000 usec
Total time for waveOutWrite: 90.8000000000 usec
Program:
/*******************************************************************************
WaveOut example program
Based on C. Petzold's sine wave example, outputs a sine wave via the waveOut
API in Win32.
*******************************************************************************/
#include <stdio.h>
#include <windows.h>
#include <math.h>
#include <limits.h>
#include <unistd.h>
#define SAMPLE_RATE 44100
#define FREQ_INIT 440
#define OUT_BUFFER_SIZE 570*4
#define PI 3.14159
#define CHANNELS 2
#define BITS 16
#define MAXTIM 1000000000
double fAngle;
LARGE_INTEGER perffreq;
PWAVEHDR pWaveHdr1, pWaveHdr2;
int iFreq = FREQ_INIT;
VOID FillBuffer (short* pBuffer, int iFreq)
{
    int i;
    int c;

    for (i = 0 ; i < OUT_BUFFER_SIZE ; i += CHANNELS) {
        for (c = 0; c < CHANNELS; c++)
            pBuffer[i+c] = (short)(SHRT_MAX*sin (fAngle));
        fAngle += 2*PI*iFreq/SAMPLE_RATE;
        if (fAngle > 2 * PI) fAngle -= 2*PI;
    }
}

double elapsed(LARGE_INTEGER t)
{
    LARGE_INTEGER rt;
    long tt;

    QueryPerformanceCounter(&rt);
    tt = rt.QuadPart-t.QuadPart;
    return (tt*(1.0/(double)perffreq.QuadPart));
}

void CALLBACK waveOutProc(HWAVEOUT hwo, UINT uMsg, DWORD_PTR dwInstance, DWORD_PTR dwParam1, DWORD_PTR dwParam2)
{
    if (uMsg == WOM_DONE) {
        if (pWaveHdr1->dwFlags & WHDR_DONE) {
            FillBuffer((short*)pWaveHdr1->lpData, iFreq);
            waveOutWrite(hwo, pWaveHdr1, sizeof(WAVEHDR));
        }
        if (pWaveHdr2->dwFlags & WHDR_DONE) {
            FillBuffer((short*)pWaveHdr2->lpData, iFreq);
            waveOutWrite(hwo, pWaveHdr2, sizeof(WAVEHDR));
        }
    }
}
int main()
{
    HWAVEOUT hWaveOut ;
    short* pBuffer1;
    short* pBuffer2;
    short* pBuffer3;
    WAVEFORMATEX waveformat;
    UINT wReturn;
    int bytes;
    long t;
    LARGE_INTEGER rt;
    double timprep;
    double filtim;
    double waveouttim;

    printf("Sine wave output program\n");

    fAngle = 0; /* start sine angle */
    QueryPerformanceFrequency(&perffreq);

    pWaveHdr1 = malloc (sizeof (WAVEHDR));
    pWaveHdr2 = malloc (sizeof (WAVEHDR));
    pBuffer1 = malloc (OUT_BUFFER_SIZE*sizeof(short));
    pBuffer2 = malloc (OUT_BUFFER_SIZE*sizeof(short));
    pBuffer3 = malloc (OUT_BUFFER_SIZE*sizeof(short));
    if (!pWaveHdr1 || !pWaveHdr2 || !pBuffer1 || !pBuffer2) {
        if (pWaveHdr1) free (pWaveHdr1) ;
        if (pWaveHdr2) free (pWaveHdr2) ;
        if (pBuffer1) free (pBuffer1) ;
        if (pBuffer2) free (pBuffer2) ;
        fprintf(stderr, "*** Error: No memory\n");
        exit(1);
    }

    // Load prime parameters to format
    waveformat.wFormatTag = WAVE_FORMAT_PCM;
    waveformat.nChannels = CHANNELS;
    waveformat.nSamplesPerSec = SAMPLE_RATE;
    waveformat.wBitsPerSample = BITS;
    waveformat.cbSize = 0;

    // Calculate other parameters
    bytes = waveformat.wBitsPerSample/8;        /* find bytes per sample */
    if (waveformat.wBitsPerSample%8) bytes++;   /* round up */
    bytes *= waveformat.nChannels;              /* find total channels size */
    waveformat.nBlockAlign = bytes;             /* set block align */
    /* find average bytes/sec */
    waveformat.nAvgBytesPerSec = bytes*waveformat.nSamplesPerSec;

    printf("Channels: %d\n", waveformat.nChannels);
    printf("Sample rate: %d\n", waveformat.nSamplesPerSec);
    printf("Bytes per second: %d\n", waveformat.nAvgBytesPerSec);
    printf("Block align: %d\n", waveformat.nBlockAlign);
    printf("Bits per sample: %d\n", waveformat.wBitsPerSample);
    printf("Time per buffer: %f\n",
           OUT_BUFFER_SIZE*sizeof(short)/(double)waveformat.nAvgBytesPerSec);

    if (waveOutOpen (&hWaveOut, WAVE_MAPPER, &waveformat, (DWORD_PTR)waveOutProc, 0, CALLBACK_FUNCTION)
        != MMSYSERR_NOERROR) {
        free (pWaveHdr1) ;
        free (pWaveHdr2) ;
        free (pBuffer1) ;
        free (pBuffer2) ;
        hWaveOut = NULL ;
        fprintf(stderr, "*** Error: Cannot open wave output device\n");
        exit(1);
    }

    // Set up headers and prepare them
    pWaveHdr1->lpData = (LPSTR)pBuffer1;
    pWaveHdr1->dwBufferLength = OUT_BUFFER_SIZE*sizeof(short);
    pWaveHdr1->dwBytesRecorded = 0;
    pWaveHdr1->dwUser = 0;
    pWaveHdr1->dwFlags = WHDR_DONE;
    pWaveHdr1->dwLoops = 1;
    pWaveHdr1->lpNext = NULL;
    pWaveHdr1->reserved = 0;

    QueryPerformanceCounter(&rt);
    waveOutPrepareHeader(hWaveOut, pWaveHdr1, sizeof (WAVEHDR));
    timprep = elapsed(rt);

    pWaveHdr2->lpData = (LPSTR)pBuffer2;
    pWaveHdr2->dwBufferLength = OUT_BUFFER_SIZE*sizeof(short);
    pWaveHdr2->dwBytesRecorded = 0;
    pWaveHdr2->dwUser = 0;
    pWaveHdr2->dwFlags = WHDR_DONE;
    pWaveHdr2->dwLoops = 1;
    pWaveHdr2->lpNext = NULL;
    pWaveHdr2->reserved = 0;

    waveOutPrepareHeader(hWaveOut, pWaveHdr2, sizeof (WAVEHDR));

    // Send two buffers to waveform output device
    QueryPerformanceCounter(&rt);
    FillBuffer (pBuffer1, iFreq);
    filtim = elapsed(rt);

    QueryPerformanceCounter(&rt);
    waveOutWrite (hWaveOut, pWaveHdr1, sizeof (WAVEHDR));
    waveouttim = elapsed(rt);

    FillBuffer (pBuffer2, iFreq);
    waveOutWrite (hWaveOut, pWaveHdr2, sizeof (WAVEHDR));

    // Run waveform loop
    sleep(10);

    printf("Total time prepare header: %.10f usec\n", timprep*1000000);
    printf("Total time to fill: %.10f usec\n", filtim*1000000);
    printf("Total time for waveOutWrite: %.10f usec\n", waveouttim*1000000);

    waveOutUnprepareHeader(hWaveOut, pWaveHdr1, sizeof (WAVEHDR));
    waveOutUnprepareHeader(hWaveOut, pWaveHdr2, sizeof (WAVEHDR));

    // Close waveform file
    free (pWaveHdr1) ;
    free (pWaveHdr2) ;
    free (pBuffer1) ;
    free (pBuffer2) ;
}
