I have a rather different question. I'm using Matlab on a Gentoo Linux machine. I have a few Asus Xonar STX soundcards, and I'm trying to use them as a sensitive audio frequency analyzer using the PlayRec non-blocking audio I/O package.
Now, I know that if you try to use the audiorecorder function and specify 24 bits on Linux, Matlab will tell you that 24-bit is only supported on Windows. However, the ALSA literature does not suggest that this is a limitation of the operating system or of ALSA itself; as a matter of fact, ALSA seems to allow you to specify a 24-bit PCM device. And PlayRec uses PortAudio, which in turn uses ALSA on Linux systems.
Now, this is all well and good, but Playrec doesn't seem to have a means of specifying the bit depth, just the sample rate. I have run many tests and know the transfer function of my soundcard (the conversion ratio from floating-point return value to input voltage). I know my peak voltage is 3V and my noise is around 100uV, which gives me 20*log10(3/100e-6) ≈ 90dB. That is closer to what I expect to see from 16 bits, not from 24.
My real question is this: Is there some way of verifying that I am in fact getting 24 bits in my captured signal?
And if I am not, is there some inherent limitation of ALSA or Matlab that restricts me to 16-bit data from sound capture devices, even when using a third-party program to gather that data?
If you observe the data that Playrec is putting out through playrec('getRec', ...), you'll see that it is always single-precision floating point (tested on Windows with MATLAB R2013b and the most current Playrec). You can verify this yourself after recording a single page with Playrec by looking in the workspace window of the IDE, or by running whos('<variable_name_of_page>') at the command line.
If you look at Line 50 of pa_dll_playrec.h, you'll see that single-precision is chosen by definition:
/* Format to be used for samples with PortAudio = 32bit */
typedef float SAMPLE;
Unfortunately, this does not completely answer the question of exact sample precision, because the PortAudio lib converts samples from the APIs' varying formats into the defined one. So if you want to know what precision you're actually getting, I'd suggest a very pragmatic solution: look at the mantissa of the 32-bit floating-point sample values. A simple fprintf('%+.32f\n', data) should suffice to find out how many decimal places are actually used.
Edit: I just realized I got that wrong. But here's the trick: record audio off an empty channel of your audio device. Plot the recorded data and zoom into the noise floor. If you're just getting plain zeros, the device is probably not activated properly (or has too good a signal-to-noise ratio; try an external interface and/or turn up the gain a little). Depending on the actual bit resolution of the recorded data, you'll see quantization steps in the samples, and depending on the bit depth originally used by the quantizer, those steps are bigger or smaller. Below you'll see the comparison between 16-bit (left) and 24-bit (right) for two separately recorded blocks from the same audio device; the only difference is that I used PortAudio's WASAPI API (on Windows, obviously) on the left and ASIO on the right:
The difference is quite obvious: at these very low levels, 16-bit only allows three values, while 24-bit has much finer stepping. So this should be a sufficient answer to your question of how to determine the real bit depth and whether your signal is recorded at 24-bit: if there are sample steps smaller than 2^-15, the odds are pretty good.
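If you'd rather check this numerically than by eyeballing a plot, the smallest nonzero gap between distinct sample values tells you the same thing. Here is a minimal sketch in Python/numpy, assuming you can get the recorded page into a float array (for example by saving it to a file from Matlab); the names are placeholders:

import numpy as np

def estimate_bit_depth(samples):
    # Distinct sample values, sorted; the smallest nonzero gap between them
    # approximates the quantizer step size.
    values = np.unique(np.asarray(samples, dtype=np.float64))
    steps = np.diff(values)
    nonzero = steps[steps > 0]
    if nonzero.size == 0:
        return None, None                     # constant signal: nothing captured
    min_step = nonzero.min()
    # A full-scale 16-bit signal steps in increments of 2/2**16 = 2**-15,
    # so a clearly smaller step suggests more than 16 bits made it through.
    effective_bits = int(round(np.log2(2.0 / min_step)))
    return min_step, effective_bits

# min_step, bits = estimate_bit_depth(recorded_page[:, 0])
# print(min_step, bits)    # steps no smaller than 2**-15 point to 16-bit capture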
Looking into this topic made me realize that the bit depth at which the quantization actually happens depends very much on the API of the currently chosen recording device. ASIO seems to always use 24-bit, while, for example, WASAPI falls back to 16-bit.
If you can store that signal as a WAV file, run the file command on it from the command line in Linux. Something like:
file x.wav will give you the sampling rate and the bit depth that the file was encoded at. The output is usually something like: 16 bit, 16000 Hz, etc.
I have a vectorized WAV file with values between -1 and 1, 88,200 samples, and a 44.1 kHz sampling rate, so the audio lasts two seconds. I'd like to send the audio over Bluetooth to a Bluetooth module, an Arduino, a DAC, and a 3.5mm breakout board with earbuds.
I am getting crackly audio when I receive it at the end. I tried to recreate this in MATLAB, and it turns out to be a combination of the scaling (multiplying and shifting the values above 0) and the sampling rate change due to the receivers. Of course, I could be completely wrecking the sampling frequency with inefficient Arduino code, but since the initial scaling is also a factor, my guess is that I am misunderstanding something fundamental to audio processing.
What is the proper way to format and/or scale the values to the 0-4095 range (which is needed for the DAC input) so that the audio itself is not distorted by the scaling, sampling rate retention aside? Or is there something else I am missing in the big picture of this?
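For what it's worth, the usual mapping from normalized [-1, 1] samples to an unsigned 12-bit DAC range is just an offset and a scale; here is a small Python sketch (the function name is mine, and it assumes the input really is bounded by -1 and 1):

import numpy as np

def to_dac_codes(samples, bits=12):
    # Map floats in [-1.0, 1.0] to unsigned DAC codes:
    # -1.0 -> 0, 0.0 -> mid-scale (2048 for 12 bits), +1.0 -> 4095.
    full_scale = 2 ** bits - 1
    samples = np.clip(samples, -1.0, 1.0)          # guard against overshoot
    return np.round((samples + 1.0) * 0.5 * full_scale).astype(np.uint16)

# e.g. a 2-second 440 Hz test tone at 44.1 kHz:
# t = np.arange(88200) / 44100.0
# dac = to_dac_codes(0.5 * np.sin(2 * np.pi * 440 * t))

Applied consistently, the shift to mid-scale shouldn't by itself distort the audio; audible problems usually come from clipping, integer truncation, or the sample-rate issues mentioned above.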
Clarification: Currently I am using the Python sockets library to send an audio string array char by char into an Arduino array, reading the chars as integers, and then feeding them into the DAC. I'm not sure Python sockets is the best way to go; there may be something better or a more robust way to send the data over sockets.
UPDATE: I realized that the HC-05 uses the SPP Bluetooth profile, which seems to be way too low-bandwidth to send reliable audio. I will see if I can send a more compressed audio file, store it on the Arduino, and then output it to the DAC. That could provide more reliable audio.
Have you tried setting in and out points in your samples? I know that with video that includes audio, this is one thing that is often overlooked and can cause issues when uploading to YouTube. It seems similar here: if the receiver doesn't know where the samples begin and end, that can affect the audio too.
Another issue may be the format of the samples versus what the Bluetooth link will accept. AAC should probably be the format, but confirm this, because I am not 100% sure what it will accept.
The library has an example for bandwidth:
https://www.arduino.cc/en/Reference/AudioFrequencyMeter
But there are other functions, begin() and end(). You could declare them as variables for your start and end times within the samples, such that one will be the active track at a given time. You could also declare your frequency() as a constant value of 44.1, but you might have to escape the period for that (it otherwise reads 60 to 1500).
I have been tasked with mixing raw data from audio files. I am currently struggling to get a clean sound when mixing the data; I keep getting distortion or white noise.
Let's say that I have two byte arrays of data from two AudioInputStreams. Each AIS is used to stream a byte array from a given audio file. Here I can play back single audio files using SourceDataLine's write method. I want to play two audio files simultaneously, so I am aware that I need to perform some sort of PCM addition.
Can anyone recommend whether this addition should be done with float values or byte values? Also, when it comes to adding 3, 4, or more audio files, I am guessing my problem will be even harder! Do I need to divide by a certain amount to avoid overflow? Let's say I am adding two 16-bit audio files (min -32,768, max 32,767).
I admit, I have had some advice on this before but can't seem to get it working! I have code of what I have tried but not with me!
Any advice would be great.
Thanks
First off, I question whether you are actually working with fully decoded PCM data values. Directly adding bytes would only make sense if the sound was recorded at 8-bit resolution, which is done less and less. These days, audio is more commonly recorded as 16-bit values, or more. There are some situations that don't require as much frequency content, but with current systems the CPU savings aren't as critical, so people opt to keep at least "CD quality" (16-bit resolution, stereo, 44,100 frames per second).
So step one, you have to make sure that you are properly converting the byte streams to valid PCM. For example, if 16-bit encoding, the two bytes have to be appended in the correct order (may be either big-endian or little-endian), and the resulting value used.
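To make the decode step concrete, here is a small Python sketch for 16-bit signed little-endian PCM (in Java you would typically go through a ByteBuffer with the matching ByteOrder); the helper name is mine:

import struct

def decode_pcm16le(raw_bytes):
    # Interpret a byte string as 16-bit signed little-endian samples.
    count = len(raw_bytes) // 2
    return list(struct.unpack('<' + 'h' * count, raw_bytes[:count * 2]))

# b'\x00\x80' is -32768 and b'\xff\x7f' is 32767:
# print(decode_pcm16le(b'\x00\x80\xff\x7f'))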
Once that is properly handled, it is usually sufficient to simply add the values and maybe impose a min and max filter to ensure the signal doesn't go beyond the defined range. I can think of two reasons why this works: (a) audio is usually recorded at a low enough volume that summing will not cause overflow, (b) the signals are random enough, with both positive and negative values, that moments where all the contributors line up in either the positive or negative direction are rare and short-lived.
Using a min and max will "clip" the signals and can introduce some audible distortion, but it is a much less horrible sound than overflow! If your sources are routinely hitting the min and max, you can simply apply a volume factor (in the range 0 to 1) to one or more of the contributing signals as a whole, to bring the audio values down.
For 16-bit data, it works to perform operations directly on the signed integers that result from appending the two bytes together (-32768 to 32767). But it is a more common practice to "normalize" the values, i.e., convert the 16-bit integers to floats ranging from -1 to 1, perform operations at that level, and then convert back to integers in the range -32768 to 32767 and break those integers into byte pairs.
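Putting those pieces together, the mixing pass might look roughly like this (Python rather than Java, purely to show the arithmetic; the names are hypothetical):

def mix_pcm16(tracks, volume=1.0):
    # tracks: equal-length sequences of ints in [-32768, 32767].
    # volume: overall gain in [0.0, 1.0] applied before clipping.
    mixed = []
    for frame in zip(*tracks):
        total = sum(sample / 32768.0 for sample in frame) * volume   # normalize and sum
        total = max(-1.0, min(1.0, total))                           # clip instead of overflowing
        mixed.append(int(total * 32767))                             # back to the 16-bit range
    return mixed

# mixed = mix_pcm16([samples_a, samples_b], volume=0.8)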
There is a free book on digital signal processing that is well worth reading: Steven Smith's "The Scientist and Engineer's Guide to Digital Signal Processing." It will give much more detail and background.
I'm trying to find out if there's a way to determine if an AAC-encoded audio track is encoded with Dolby Pro Logic II data. Is there a way of examining the file such that you can see this information? I have for example encoded a media file in Handbrake with (truncated to audio options) -E av_aac -B 320 --mixdown dpl2 and this is the audio track output that mediainfo shows:
Audio #1
ID : 2
Format : AAC
Format/Info : Advanced Audio Codec
Format profile : LC
Codec ID : 40
Duration : 2h 5mn
Bit rate mode : Variable
Bit rate : 321 Kbps
Channel(s) : 2 channels
Channel positions : Front: L R
Sampling rate : 48.0 KHz
Compression mode : Lossy
Stream size : 288 MiB (3%)
Title : Stereo / Stereo
Language : English
Encoded date : UTC 2017-04-11 22:21:41
Tagged date : UTC 2017-04-11 22:21:41
but I can't tell if there's anything in this output that would suggest that it's encoded with DPL2 data.
tl;dr: it's probably possible; it may be easier if you're a programmer.
Because the information encoded is just a stereo analog pair, there is no guaranteed way of detecting a Dolby Pro Logic II (DPL2) signal therein, unless you specifically store your own metadata saying "this is a DPL2 file." But you can probably make a pretty good guess.
All of the old analog Dolby Surround formats, including DPL2, store surround information in two channels by inverting the phase of the surround or surrounds and then mixing them into the original left and right channels. Dolby Surround type decoders, including DPL2, attempt to recover this information by inverting the phase of one of the two channels and then looking for similarities in these signal pairs. This is either done trivially, as in Dolby Surround, or else these similarities are artificially biased to be pushed much further to the left or right, or the left or right surround, as in DPL2.
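For reference, the basic Dolby Surround matrix behind this looks roughly like the following (DPL2 additionally phase-shifts the surrounds and weights the left and right surround into the two channels asymmetrically, which is what lets the decoder steer them apart):

Lt = L + 0.707*C - 0.707*S
Rt = R + 0.707*C + 0.707*S

So the difference Lt - Rt carries the surround content (plus whatever already differs between L and R), which is what the inversion-and-sum steps below try to isolate.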
So the trick is to detect whether important data is being stored in the surround channel(s). I'll sketch out for you a method that might work, and I'll try to express it without writing code, but it's up to you to implement and refine it to your liking.
1. Crop the first N seconds or so of program content into a stereo file, where N is between one and thirty. Call this file Input.
2. Mix down the Input stereo channels to a new mono file at -3dB per channel. Call this file Center.
3. Split the left and right channels of Input into separate files. Call these Left and Right.
4. Invert the Right channel. Call this file RightInvert.
5. Mix down the Left and RightInvert channels to a new mono file at -3dB per channel. Call this file Surround.
6. Determine the RMS and peak dB of the Surround file.
7. If the RMS or peak dB of the Surround file is below "a tolerance", stop; the original file is either mono or center-panned and hence contains no surround information. You'll have to experiment with several DPL2 and non-DPL2 sources to see what this tolerance is, but after a dozen or so files the numbers should become clear. I'm guessing around -30 dB or so.
8. Invert the Center file into a new file. Call this file CenterInvert.
9. Mix the CenterInvert file into the Surround file at 0 dB (both CenterInvert and Surround should be mono). Call this new file SurroundInvert.
10. Determine the RMS and peak dB of the SurroundInvert file.
11. If either the RMS or peak dB of SurroundInvert is below "a tolerance", stop; your original source contains panned left or right front information, not surround information. Again, you'll have to experiment with several DPL2 and non-DPL2 sources to see what this tolerance is; I'm guessing around -35 dB or so.
12. If you've gotten this far, your original Input probably contains surround information, and hence is probably a member of the Dolby Surround family of encodings.
I've written this algorithm out such that you can do each of these steps with a specific command in sox. If you want to be fancier, instead of doing the RMS/peak value step in sox, you could run an ebur128 program and check your levels in LUFS against a tolerance. If you want to be even fancier, after you create the Surround and Center files, you could filter out all frequencies higher than 7kHz and do de-emphasis on them, just like a real DPL2 decoder would.
To keep this algorithm simple, I've sketched it out entirely in the amplitude domain. The calculation of the Surround level would probably be a lot more accurate in the frequency domain, if you know how to calculate the magnitude and angle of FFT bins and you use windows of 30 to 100 ms. But this cheapo version above should get you started.
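Purely as an illustration, here is a rough numpy version of the amplitude-domain check above (assuming the soundfile package to read a cropped stereo clip; the file name is a placeholder and the -30/-35 dB tolerances are the guesses from the steps above, to be tuned against your own DPL2 and non-DPL2 material):

import numpy as np
import soundfile as sf   # assumed available; any WAV reader will do

def rms_db(x):
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)) + 1e-12)

def peak_db(x):
    return 20 * np.log10(np.max(np.abs(x)) + 1e-12)

data, rate = sf.read('input_clip.wav')        # Input: first N seconds, stereo
left, right = data[:, 0], data[:, 1]

center = 0.707 * (left + right)               # L and R mixed at -3 dB each
surround = 0.707 * (left + (-right))          # Left plus RightInvert at -3 dB each

if max(rms_db(surround), peak_db(surround)) < -30.0:
    print('mono or center-panned content: no surround information')
else:
    surround_invert = surround + (-center)    # CenterInvert mixed into Surround at 0 dB
    if max(rms_db(surround_invert), peak_db(surround_invert)) < -35.0:
        print('front left/right panning, not surround information')
    else:
        print('probably carries Dolby Surround-family information')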
One last caution. AAC is a modern psychoacoustic codec, which means that it likes to play games with stereo phasing and imaging to achieve its compression. So I consider it likely that the mere act of encapsulating DPL2 into an AAC stream will likely hose some of the imaging present in DPL2. To be candid, neither DPL2 nor AAC belongs anywhere in this pipeline. If you must store an analog stream originally encoded with DPL2, do it in a lossless format like WAV or FLAC, not AAC.
As of this writing, operational concepts behind Dolby Pro Logic (I) are here. These basic concepts still apply to DPL2; operational concepts for DPL2 are here.
If the file has more than one channel, you can assume with some certainty that they are used for surround purposes, although they could just be multiple tracks.
In that case it falls to the playing system to do with the channels as it "thinks" best (if the file header doesn't say what to do with them).
But your file is stereo. If you want to know whether it is a virtual surround file, you can look in the header for an encoder field to see which encoder was used.
This may help somewhat, although not much. The encoder field is usually left empty, and the encoder doesn't have to be the same as the tool that mixed down the surround data.
That is, the mixdown tool will first create raw PCM data and then feed it to some encoder to produce the compressed file (AAC or whatever).
Also, there are many applications, their versions vary, and so may the encoder field, so tracking all of them would be nasty work.
However, you can, with over 60% certainty, deduce whether something is virtual surround or not by examining the data.
This would be advanced DSP and, for speed, even machine learning may be involved.
You would have to find out whether the stereo signals contain certain features of HRTF (head related transfer function).
This may be achieved by examining intensity-difference and delay features between the same sounds in the time domain, and harmonic features (characteristic frequency changes) in the frequency domain.
You would have to do both, because one without the other may just tell you that something is a very good stereo recording, not virtual surround.
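Just to make the two time-domain cues mentioned above concrete, here is a toy numpy sketch of an interchannel level-difference and delay estimate (the actual task, matching such cues against HRTF data per frequency band, is far more involved):

import numpy as np

def level_difference_db(left, right):
    # Crude interaural-level-difference style measure over a window.
    rms_l = np.sqrt(np.mean(left ** 2)) + 1e-12
    rms_r = np.sqrt(np.mean(right ** 2)) + 1e-12
    return 20 * np.log10(rms_l / rms_r)

def delay_samples(left, right):
    # Lag (in samples) at which the channels correlate best: a crude delay estimate.
    corr = np.correlate(left, right, mode='full')
    return int(np.argmax(corr)) - (len(right) - 1)

# Run both over short windows (a few milliseconds) of the stereo signal and look
# for patterns that fit HRTF cues rather than plain amplitude panning.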
I don't know whether there are HRTF specific features mapped somewhere already, or you would need to do it by yourself.
It's a very complicated solution that takes a lot of time to get right, and its performance would be problematic.
With this method you can also break the stereo mixdown back into something close to the original surround channels.
But for stereo-to-surround conversion other methods are used, and they sound good.
If you are determined to perform such a detection, dedicate half a year or more of hard work if no HRTF features are mapped (a few weeks if they are), brace yourself for a lot of stress, and I wish you luck. I have done something similar; it is a killer.
If you want an out-of-the-box solution, then the answer to your question is no, unless the header provides an encoder field and the encoder is distinctive and known to be used only for surround-to-stereo conversion.
I do not think anyone has done this from the actual data as I described, or if they have, it is part of a commercial product. Doing what you want is not usually needed, but it can be done.
Oh, and by the way, try googling "HRTF inversion"; it might give some help.
On a basic embedded system's speaker with a single output line, wiggling the output between 0 and 1 in a loop for given periods produces sound.
I'd like to do something similar on a modern Linux desktop. A brief look at PortAudio, OpenAL, and ALSA suggests to me that most people do things at a considerably higher level. That's OK, but not what I'm looking for.
(I've never worked with sounds on Linux before, so if a tutorial exists, I'd love to see it).
Actually, it... kinda is. While you can generate the waveform yourself, you still need to use an API to queue it and send it to the audio hardware; there no longer even exists a sane way to twiddle the audio line directly. Plus you get cross-platform compatibility for free.
[...] embedded systems speaker with a single line of output, wiggling the output as 0 or 1 in a for given periods produces sound.
Sounds a lot like the old PC speaker. You might still find code for it in the Linux kernel.
I'd like to do something similar on a modern Linux desktop.
Then, AFAIK, you need a driver for ALSA; the ALSA project documentation describes how to write an ALSA driver. Use PWM to produce the sound.
Since there are many different sound cards and audio interfaces produced by different companies, there is no uniform way to have a low level access to them. With most sound I/O APIs what you need to do is to generate the PCM data and send that to the driver. That's pretty much the lowest level you can go.
But PCM data is very similar to the 0-1 approach you describe. It's just that you have the in-between options too. 0-1 is 1-bit audio. 8-, 16-, 24-bit audio is what you'll find on a modern sound card. There are also 32- and 64-bit float formats. But they're still similar.
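To illustrate, here is a minimal sketch (using only Python's standard library) that generates a square wave, which is basically the 0/1 toggling you describe expressed as 16-bit PCM, and writes it to a WAV file; handing the same sample stream to ALSA or PortAudio instead is just a different final call:

import struct
import wave

RATE = 44100       # samples per second
FREQ = 440         # tone frequency in Hz
AMPLITUDE = 12000  # well below the 16-bit limit of 32767

frames = bytearray()
for n in range(RATE):                                    # one second of audio
    phase = (n * FREQ / RATE) % 1.0
    value = AMPLITUDE if phase < 0.5 else -AMPLITUDE     # square wave: high half, low half
    frames += struct.pack('<h', value)                   # 16-bit signed little-endian sample

with wave.open('square.wav', 'wb') as wav:
    wav.setnchannels(1)       # mono
    wav.setsampwidth(2)       # bytes per sample
    wav.setframerate(RATE)
    wav.writeframes(bytes(frames))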
This isn't exactly specifically a programming question (or is it?) but I was wondering:
How are graphics and sound processed from code and output by the PC?
My guess for graphics:
There is some reserved memory space somewhere that holds exactly enough room for a frame of graphics output for your monitor.
IE: 800 x 600, 24 bit color mode == 800x600x3 = ~1.4MB memory space
Between each refresh, the program writes video data to this space. This action is completed before the monitor refresh.
Assume a simple 2D game: the graphics data is stored in machine code as many bytes representing color values. Depending on what the program(s) being run instruct the PC, the processor reads the appropriate data and writes it to the memory space.
When it is time for the monitor to refresh, it reads from each memory space byte-for-byte and activates hardware depending on those values for each color element of each pixel.
All of this of course happens crazy-fast, and repeats x times a second, x being the monitor's refresh rate. I've simplified my own likely-incorrect explanation by avoiding talk of double buffering, etc
Here are my questions:
a) How close is the above guess (the three steps)?
b) How could one incorporate graphics in pure C++ code? I assume the practical thing that everyone does is use a graphics library (SDL, OpenGL, etc.), but, for example, how do these libraries accomplish what they do? Would manual inclusion of graphics in pure C++ code (say, a 2D sprite) involve creating a two-dimensional array of bit values (or three-dimensional to include multiple RGB values per pixel)? Is this how it would have been done way back in the day?
c) Also, continuing from above, do libraries such as SDL that use bitmaps actually just build the bitmap/etc. files into the machine code of the executable and use them as though they were built in the same manner mentioned in question b above?
d) In my hypothetical step 3 above, are there any registers involved? For example, could you write some byte value to some register to output a single color of one byte on the screen? Or is it purely dedicated memory space (=RAM) plus hardware interaction?
e) Finally, how is all of this done for sound? (I have no idea :) )
a. (Your step 1) A long time ago, that was the case, but it hasn't been for quite a while. Most hardware will still support that type of configuration, but mostly as a fall-back; it's not how they're really designed to work. Now most have a block of memory on the graphics card that's also mapped to be addressable by the CPU over the PCI/AGP/PCI-E bus. The size of that block is more or less independent of what's displayed on the screen, though.
(Your step 2) Again, at one time that's how it mostly worked, but it's mostly not the case anymore.
(Your step 3) Mostly right.
b. OpenGL normally comes in a few parts -- a core library that's part of the OS, and a driver that's supplied by the graphics chipset (or possibly card) vendor. The exact distribution of labor between the CPU and GPU varies somewhat though (between vendors, over time within products from a single vendor, etc.) SDL is built around the general idea of a simple frame-buffer like you've described.
c. You usually build bitmaps, textures, etc., into separate files in formats specifically for the purpose.
d. There are quite a few registers involved, though the main graphics chipset vendors (ATI/AMD and nVidia) tend to keep their register-level documentation more or less secret (though this could have changed -- there's constant pressure from open source developers for documentation, not just closed-source drivers). Most hardware has capabilities like dedicated line drawing, where you can put (for example) line parameters into specified registers, and it'll draw the line you've specified. Exact details vary widely though...
e. Sorry, but this is getting long already, and sound covers a pretty large area...
For graphics, Jerry Coffin's got a pretty good answer.
Sound is actually handled similarly to your (the OP's) description of how graphics is handled. At a very basic level, you have a "buffer" (some memory, somewhere).
Your software writes the sound you want to play into that buffer. It is basically an encoding of the position of the speaker cone at a given instant in time.
For "CD quality" audio, you have 44100 values per second (a "sample rate" of 44.1 kHz).
A little bit behind the write position, you have the audio subsystem reading from a read position in the buffer.
This read position will be a little bit behind the write position. The distance behind is known as the latency. A larger distance gives more of a delay, but also helps to avoid the case where the read position catches up to the write position, leaving the sound device with nothing to actually play!
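A toy sketch of that relationship (the names are mine, not from any particular API): the write position stays ahead of the read position, and the latency is just their distance divided by the sample rate.

class PlaybackBuffer:
    # Minimal circular buffer: the application writes ahead, the "hardware" reads behind.

    def __init__(self, capacity, sample_rate):
        self.samples = [0.0] * capacity
        self.sample_rate = sample_rate
        self.write_pos = 0    # next index the application writes to
        self.read_pos = 0     # next index the audio device reads from

    def write(self, chunk):
        for sample in chunk:
            self.samples[self.write_pos % len(self.samples)] = sample
            self.write_pos += 1

    def read(self, count):
        out = [self.samples[(self.read_pos + i) % len(self.samples)] for i in range(count)]
        self.read_pos += count
        return out

    def latency_ms(self):
        # Distance between write and read positions, expressed in milliseconds.
        return 1000.0 * (self.write_pos - self.read_pos) / self.sample_rate

# buf = PlaybackBuffer(capacity=8192, sample_rate=44100)
# buf.write([0.0] * 2048)
# print(buf.latency_ms())   # roughly 46 ms of queued audio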