Multichannel DAC configuration

We need 96 channels of analog output on our board. Each channel must be able to drive a separate waveform simultaneously. The DAC should support a 4 MSPS update rate per channel at 12-bit resolution. When we search for DACs with this specification, we only find 1- or 2-channel parts at 4 MSPS and 12 bits. There are also 4-channel parts, but they require around 40 I/O pins each, so the total I/O for 96 channels would be enormous.
Can anyone suggest a suitable part for us, or an alternate approach?
Requirements: 4 MSPS sample rate, 12-bit resolution, 1 Vpp output (more is also fine).
Total I/O for all 96 channels should be around 350, not more than that, so we can implement the interface in a single FPGA.
Please suggest.
I have already searched for suitable parts from multiple manufacturers.
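For scale, a quick back-of-the-envelope calculation of the bandwidth and pin counts involved (a sketch; the 600 Mbit/s serial lane rate is an assumed example figure, not any specific part's):

    using System;

    class DacBudget
    {
        static void Main()
        {
            const int channels = 96;
            const double updateRate = 4e6;   // 4 MSPS per channel
            const int bitsPerSample = 12;

            // Aggregate sample data that must reach the DACs every second.
            double aggregateBps = channels * updateRate * bitsPerSample;
            Console.WriteLine($"Aggregate: {aggregateBps / 1e9:F2} Gbit/s");  // 4.61

            // Parallel-bus parts (~40 I/O per 4-channel DAC) scale badly:
            Console.WriteLine($"Parallel I/O: {channels / 4 * 40} pins");     // 960

            // High-speed serial distribution at an assumed 600 Mbit/s/lane:
            Console.WriteLine($"Min. lanes: {Math.Ceiling(aggregateBps / 600e6)}"); // 8
        }
    }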

Related

Is interrupt jitter causing the annoying wobble in audio using the MCU's DAC?

I had an assignment for college where we needed to play a precompiled WAV, stored as an integer array, through the PWM and DAC. Now, I wanted more of a challenge, so I went out of my way and created an audio DAC over USB using the microcontroller in question: the STM32F051. It basically listens to my sound card output using a WASAPI loopback recorder, changes the resolution from 16 to 12 bits (since the DAC on the STM32 only has a 12-bit resolution) and sends it over USART using 10x the sample rate as the baud rate (in my case 960000). All done in C#.
On the microcontroller I simply use an interrupt for the USART and push the received data to the DAC.
It works pretty well, much better than PWM, and at a decent sample frequency of 48 kHz.
But... here it comes... When there is some (mostly) high-pitched symphonic melody, it starts to sound "wobbly".
Here is a video where you can hear it: https://youtu.be/xD3uTP9etuA?t=88
I read up on the internet a bit about DIY DACs, and someone somewhere (I don't remember where) mentioned that MCUs in general have interrupt jitter. So my basic question is: is interrupt jitter actually causing this? If so, are there ways to limit the jitter?
Or is this something entirely different?
I am thinking of trying to compact the PCM data sent over serial. As said before, the resolution is 12 bits, but the samples are sent as packets of two 8-bit bytes forming 16 bits, hence the doubled baud rate. My plan is to shift the 12 bits to the MSB and add four bits of the next 12-bit value to the current 16-bit variable, so that only 12 transfers are needed instead of 16 per 8 samples. (I might read up on more efficient ways of compacting data for transport.) I would then put the samples in a buffer and use another timer that triggers at 48 kHz to send the samples to the DAC. Would this concept work, or would I just be wasting time?
For code, here is the project: https://github.com/EldinZenderink/SoundOverSerial
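The 12-bit packing idea could look something like this on the sender side (a minimal C# sketch under stated assumptions: even sample count, no framing or resynchronization, and made-up names):

    using System;

    class Packing
    {
        // Pack pairs of 12-bit samples into 3 bytes (24 bits) instead of
        // 2 x 2 bytes, cutting serial traffic by 25%.
        static byte[] Pack12(ushort[] s)
        {
            var packed = new byte[s.Length / 2 * 3];   // assumes even count
            for (int i = 0, o = 0; i + 1 < s.Length; i += 2)
            {
                int a = s[i] & 0xFFF, b = s[i + 1] & 0xFFF;
                packed[o++] = (byte)(a >> 4);                      // a[11:4]
                packed[o++] = (byte)(((a & 0xF) << 4) | (b >> 8)); // a[3:0] b[11:8]
                packed[o++] = (byte)b;                             // b[7:0]
            }
            return packed;
        }

        static void Main()
        {
            var packed = Pack12(new ushort[] { 0xABC, 0x123 });
            Console.WriteLine(BitConverter.ToString(packed));      // AB-C1-23
        }
    }

Note that once bytes no longer align with sample boundaries, a single lost byte corrupts everything after it, so some framing or sync marker becomes more important.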

How to get amplitude of an audio stream in an AudioGraph to build a SoundWave using Universal Windows?

I want to build a SoundWave by sampling an audio stream.
I read that a good method is to get the amplitude of the audio stream and represent it with a Polygon. But suppose we have an AudioGraph with just a DeviceInputNode and a FileOutputNode (a simple recorder).
How can I get the amplitude from a node of the AudioGraph?
What is the best way to periodize this sampling? Is a DispatcherTimer good enough?
Any help will be appreciated.
First, everything you care about is kind of here:
uwp AudioGraph audio processing
But since you have a different starting point, I'll explain some more core things.
An AudioGraph node is already periodized for you -- it's generally how audio works. I think Win10 defaults to periods of 10 ms and/or 20 ms, but this can (theoretically) be set via the AudioGraphSettings.DesiredSamplesPerQuantum setting, with AudioGraphSettings.QuantumSizeSelectionMode = QuantumSizeSelectionMode.ClosestToDesired. I believe the success of this functionality actually depends on your audio hardware and not the OS specifically; my PC can only do 480 and 960. This number is how many samples of the audio signal to accumulate per channel (mono is one channel, stereo is two channels, etc.), and as a by-product it also sets the callback timing.
Win10 and most devices default to a 48000 Hz sample rate, which means they measure/output data that many times per second. So with my quantum size of 480, for every frame of audio I am getting 48000/480 = 100 frames every second, i.e. a frame every 10 milliseconds by default. If you set your quantum to 960 samples per frame, you get 50 frames every second, or a frame every 20 ms.
To get a callback into that frame of audio every quantum, you need to register an event into the AudioGraph.QuantumProcessed handler. You can directly reference the link above for how to do that.
So by default, a frame of data is stored in an array of 480 floats in [-1, +1]. To get the amplitude, you just average the absolute values of this data, as sketched below.
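A minimal sketch of that averaging (assuming the quantum has already been copied out of the AudioFrame into a float[]; the unsafe buffer access needed for that copy is shown in the post linked above):

    using System;

    class Amplitude
    {
        // Mean absolute value of one quantum of samples in [-1, +1].
        static float MeanAbs(float[] samples)
        {
            float sum = 0;
            foreach (float s in samples)
                sum += Math.Abs(s);
            return sum / samples.Length;
        }

        static void Main()
        {
            // 480 samples of a full-scale square wave -> amplitude 1.0
            var quantum = new float[480];
            for (int i = 0; i < quantum.Length; i++)
                quantum[i] = i % 2 == 0 ? 1f : -1f;
            Console.WriteLine(MeanAbs(quantum));
        }
    }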
This part, including handling multiple channels of audio, is explained more thoroughly in my other post.
Have fun!

Bluetooth SPP throughput

I am trying to figure out what the maximum throughput of a Bluetooth 2.1 SPP connection is.
I found two publications concerned with the topic (1, 2), and they both show diagrams of the throughput as a function of the signal-to-noise ratio (which I can assume to be perfect for my consideration) and the kind of ACL packet used. My problem is that I have no idea which ACL packets are used. How is this decision made? Is it made on the fly, as in "whatever is needed to transfer the current data is used"?
Furthermore, in the Serial Port Profile specification (chapter 2.3) I found this sentence:
This profile requires support for one-slot packets only. This means that this profile
ensures that data rates up to 128 kbps can be used. Support for higher rates is optional.
The last sentence really confuses me. How do I find out whether this "option" applies in my case?
This means that in SPP mode, all Bluetooth modules should work up to 128 kbps, and some modules may work even faster.
Under SPP sits RFCOMM, which tries to deliver the packets as quickly as possible. If only one packet is sent per timeslot, you get the 128 kbps. The firmware of the Bluetooth module or the HCI driver can, however, do things differently.
There are reports of SPP speeds up to 480 kbps; however, this requires that both SPP modules are from the same vendor (e.g. BlueGiga iWrap modules can do this speed).
At the other end, if you're connecting to an unknown device, for example a BT112 or an RN41 module talking to an Android device, the actual usable SPP bandwidth can be much lower than 128 kbps (I have measurements as low as 10 kbps).
In the case of some old-generation iPhones, the usable SPP bandwidth is around 8 kbps.
It is wise to treat "standards" and "datasheets" very conservatively, and to do your own measurements if actual net data bandwidth is critical.
Even though BT and BT+EDR have theoretical on-air bit rates of up to 3 Mbps, the actual usable net data bandwidth is way smaller.
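For a feel for where those numbers come from, the theoretical one-direction throughput of each basic-rate ACL packet type can be computed from its payload size and slot count (a sketch assuming a single link with one forward packet plus a 1-slot return packet per exchange; payload sizes are the standard basic-rate ACL figures):

    using System;

    class AclThroughput
    {
        const double SlotSeconds = 625e-6;  // Bluetooth baseband slot

        // (packet type, max payload bytes, slots occupied)
        static readonly (string Name, int Payload, int Slots)[] Types =
        {
            ("DM1", 17, 1), ("DH1", 27, 1),
            ("DM3", 121, 3), ("DH3", 183, 3),
            ("DM5", 224, 5), ("DH5", 339, 5),
        };

        static void Main()
        {
            foreach (var (name, payload, slots) in Types)
            {
                // one forward packet + one 1-slot return packet per exchange
                double kbps = payload * 8 / ((slots + 1) * SlotSeconds) / 1000;
                Console.WriteLine($"{name}: {kbps:F1} kbit/s");    // DH1 -> 172.8
            }
        }
    }

The 1-slot packets (DM1/DH1) land in the 100-170 kbit/s region, which is where the SPP spec's 128 kbps guarantee comes from; the multi-slot packets are the optional faster path.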

USB Audio confusion - What data rates are possible?

I'm new to USB development, and I'm quite confused about what data rates are realistic.
I'm trying to develop an external sound card connected to an AVR32 processor, which supports USB full speed (12 Mb/s). I'll use USB Audio Class 1 to send the audio data to a PC. I need to send 24-bit, 48 kHz, 2-channel audio as INput to the computer, but also 24-bit, 48 kHz, 1-channel OUTput from the computer. Streaming both ways.
That gives me a data rate of 24 bits x 48 kHz x 3 channels = 3.456 Mb/s, which should be possible using USB full speed?
I understand that the Audio Class sends data via isochronous transfers, but I'm confused about how many transactions (e.g. IN = 256 bytes) it is possible to make in one frame. According to the USB specification (http://www.usb.org/developers/docs/usb20_docs/#usb20spec -> table 5-4) it seems to be possible to send more than one transaction per frame?
Is it possible to send both IN and OUT packets within one frame?
Thanks in advance!
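For what it's worth, the per-frame byte counts implied by those numbers are small (a sketch of the arithmetic; the 1023-byte limit is the full-speed isochronous per-endpoint maximum, and IN and OUT are separate endpoints, so both can be serviced in the same frame):

    using System;

    class UsbAudioBudget
    {
        static void Main()
        {
            const int sampleRate = 48000, bytesPerSample = 3;  // 24-bit audio
            const int framesPerSecond = 1000;                  // full speed: 1 ms frames

            int inBytes  = sampleRate / framesPerSecond * bytesPerSample * 2; // stereo IN
            int outBytes = sampleRate / framesPerSecond * bytesPerSample * 1; // mono OUT

            Console.WriteLine($"IN per frame:  {inBytes} bytes");   // 288
            Console.WriteLine($"OUT per frame: {outBytes} bytes");  // 144
            // Both are far below the 1023-byte isochronous maximum per endpoint
            // per frame, so one IN and one OUT transaction per frame suffice.
        }
    }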

Modulate digital data into audio using AFSK

I want to modulate digital data into audio, communicate it through any audio channel, and demodulate it at the destination from audio back into data. To do this I hope to use a computer sound card and a software modem, without any hardware implementation. On the internet I found that this can be done with a technique called audio frequency-shift keying (AFSK). I want to know whether I can obtain a bit rate of more than 1200 bps from AFSK, and if not, the reason behind this limitation.
Is there any technique more efficient than AFSK for this purpose?
The most common currently-used form of AFSK is the Bell202 modem at 1200 baud. There are a few other standards that also use 1200 baud, and some that run at less than 1200 bits per second, but none that I know of that run faster.
However, as far as I know, there's no reason you couldn't write a software modem to transmit and receive at a higher baud rate. Bell202 uses bit stuffing (allowing the data stream to use the same tone no more than 5 bits in a row) to help keep the transmitter and receiver from falling out of sync with each other, so a higher baud rate might require bit stuffing at a lower threshold (every 4 or 3 bits).
Another consideration is that the sound cards you're using should run at a sampling rate that is a multiple of the baud rate you choose. This is one of the reasons 1200 baud is so common: standard audio sample rates such as 48000 Hz are exact multiples of 1200.
So 1200 baud isn't a limit. It's just a standard.
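To make that concrete, here is a minimal continuous-phase AFSK modulator along Bell202 lines (a sketch: the tone and baud constants are Bell202's, the rest is illustrative, and bit stuffing and demodulation are left out):

    using System;

    class Afsk
    {
        const int SampleRate = 48000;                  // multiple of the baud rate
        const int Baud = 1200;
        const double MarkHz = 1200, SpaceHz = 2200;    // Bell202 tones

        // Continuous-phase AFSK: the phase accumulator carries over between
        // bits, so switching tones never causes a waveform discontinuity.
        static double[] Modulate(byte[] data)
        {
            int samplesPerBit = SampleRate / Baud;     // 40 samples per bit
            var signal = new double[data.Length * 8 * samplesPerBit];
            double phase = 0;
            int n = 0;
            foreach (byte b in data)
                for (int bit = 0; bit < 8; bit++)      // LSB first
                {
                    double f = ((b >> bit) & 1) == 1 ? MarkHz : SpaceHz;
                    for (int i = 0; i < samplesPerBit; i++)
                    {
                        signal[n++] = Math.Sin(phase);
                        phase += 2 * Math.PI * f / SampleRate;
                    }
                }
            return signal;
        }

        static void Main()
        {
            double[] samples = Modulate(new byte[] { 0x55 });
            Console.WriteLine($"{samples.Length} samples for one byte");  // 320
        }
    }

Raising the baud rate just shrinks samplesPerBit (and widens the occupied audio bandwidth), which is why a higher-rate software modem is possible as long as the integer samples-per-bit relationship holds.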
