Using nidaqmx to trigger signal in line with behavioural task - python-3.x

I am using the Python nidaqmx API to instruct a USB-6009 DAQ to output an analog signal when a tone plays. I am trying to follow the API guidelines and also the previous Stack Overflow question (Triggering an output task with NIDAQmx), but still need help.
The timing of the tone is set using Psychopy, a python-based behavioural task package.
The general format of this would be:
if tone == on:
    trigger_digital_output()
I just cannot figure out from the nidaqmx documentation how to trigger the analog output. Additionally, will I need to specify a digital input (the USB-6009 will be connected to my computer by USB)?
Thank you.

According to its specifications, the USB-6009 does not have any hardware triggers for analog output, and only a digital edge trigger for analog input.
So for your analog output task, you would use the same approach as the topic you referenced: use stop() and start() to begin the output each time you want to generate it.
The Digital I/O on the USB-6009 is only software-timed: the input or output happens on-demand, each time you call the read_one_sample_one_line() or write_one_sample_one_line() functions.
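For example, a minimal on-demand sketch could look like the following. The device name "Dev1" and the tone_is_playing flag from your PsychoPy routine are assumptions here, not part of the nidaqmx API:

import nidaqmx

# Software-timed analog output: each write() takes effect immediately ("on demand").
with nidaqmx.Task() as ao_task:
    ao_task.ao_channels.add_ao_voltage_chan("Dev1/ao0", min_val=0.0, max_val=5.0)

    # ... inside the PsychoPy trial loop ...
    if tone_is_playing:           # your own flag derived from the PsychoPy sound object
        ao_task.write(5.0)        # drive AO0 to 5 V while the tone is on
    else:
        ao_task.write(0.0)        # return AO0 to 0 V afterwards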
To get started with the full set of DAQ commands, there are a few Python examples on GitHub:
analog output
digital output
digital input

Related

Sharing a microphone audio stream on Linux

For what it's worth, my scenario is an accessibility application, not any kind of malicious eavesdropping. The same scenario also implies various research and development use cases, all of which would benefit greatly from multiple simultaneously running, unrelated processes (such as recording tools and/or different versions of my own code) being able to read the microphone audio stream.
Problem Statement
I am reading a microphone input stream using a high level python API like follows:
import sounddevice

audio_stream = sounddevice.InputStream(
    device=self.microphone_device,
    channels=max(self.channels),
    samplerate=self.audio_props['sample_rate'],
    blocksize=int(self.audio_props['frame_elements_size']),
    callback=self.audio_callback)
I would like to learn whether it is possible (on Linux) to read the microphone audio stream at the same time as another program such as Google Meet / Zoom is reading it, i.e. to effectively share the audio stream.
Unsurprisingly, with the Python wrapper shown above, when this code is started while a video call is in progress it simply fails to open the stream:
Expression 'paInvalidSampleRate' failed in
'src/hostapi/alsa/pa_linux_alsa.c', line: 2043
Expression 'PaAlsaStreamComponent_InitialConfigure( &self->playback, outParams, self->primeBuffers, hwParamsPlayback, &realSr )'
failed in 'src/hostapi/alsa/pa_linux_alsa.c', line: 2716
Expression 'PaAlsaStream_Configure( stream, inputParameters, outputParameters, sampleRate, framesPerBuffer, &inputLatency, &outputLatency, &hostBufferSizeMode )'
failed in 'src/hostapi/alsa/pa_linux_alsa.c', line: 2837
Admittedly, I am not yet very well versed in ALSA terminology or the Linux sound stack in general.
My question is: can this be accomplished directly through the ALSA library API, or otherwise via another sound stack or sound-system configuration? Or, if none of that is meant to work, via a proxy program/driver that can expose an audio buffer to multiple consumers without noticeably degrading audio-stream latency?
You can do this directly with ALSA. Dsnoop should do the trick. It is a plugin included with ALSA that allows sharing input streams.
From the page I linked above:
dsnoop is the equivalent of the dmix plugin, but for recording sound. The dsnoop plugin allows several applications to record from the same device simultaneously.
From the ALSA docs:
If you want to use multiple input(capture) clients you need to use the dsnoop plugin:
You can poke around there for details on how to use it. This issue on GitHub will also help you get started; it details how to configure the dsnoop interface so you can read from it with pyaudio.
Update
To configure ALSA, edit /etc/asound.conf with something like this (from the ALSA docs on dsnoop):
pcm.mixin {
    type dsnoop
    ipc_key 5978293 # must be unique for all dmix plugins!!!!
    ipc_key_add_uid yes
    slave {
        pcm "hw:1,0"
        channels 2
        period_size 1024
        buffer_size 4096
        rate 44100
        periods 0
        period_time 0
    }
    bindings {
        1 1
        1 0
    }
}
You can test to see if your configuration works with something like this:
arecord -d 30 -f cd -t wav -D pcm.mixin test.wav
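If the dsnoop PCM is visible to PortAudio on your machine (not guaranteed; it depends on how your ALSA configuration is enumerated, so check with python -m sounddevice first), a rough sounddevice sketch for reading from it could look like this. The name "mixin", the two channels and the 44100 Hz rate simply mirror the configuration above:

import sounddevice

def audio_callback(indata, frames, time, status):
    pass  # replace with your own processing

audio_stream = sounddevice.InputStream(
    device="mixin",          # the pcm.mixin block from /etc/asound.conf
    channels=2,
    samplerate=44100,
    callback=audio_callback)
audio_stream.start()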
So, this is more an audio question than a python question I guess. :)
Depending on the API, streams can be device-exclusive or not. ASIO for professional audio, for example, is often device-exclusive, so only one application (like a DAW) has access to it. On Windows, for example, you can turn this on and off as described here:
https://help.ableton.com/hc/en-us/articles/209770485-Disabling-exclusive-mode-for-ASIO-interfaces
Most Python packages like pyaudio and so on just provide bindings for PortAudio, which does the heavy lifting, so also have a look at the PortAudio documentation. PortAudio "combines" all the different APIs like ASIO, ALSA, WASAPI, Core Audio, and so on.
For ALSA to allow more than one stream on the same device at the same time you may need dmix (or dsnoop on the capture side); have a look at this Stack Exchange question:
https://unix.stackexchange.com/questions/355662/alsa-doesnt-work-when-multiple-applications-are-opened
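As a quick way to see which backend your PortAudio build actually uses and which devices it exposes, sounddevice can report both directly:

import sounddevice

print(sounddevice.query_hostapis())  # e.g. ALSA on Linux, WASAPI/ASIO on Windows
print(sounddevice.query_devices())   # every device PortAudio can open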

C# Stream Program Output to an Audio Input Device

I have a program that generates a PCM Audio stream (as a MemoryStream). I want to send this to an audio input (recording) device in windows so that an application that is set to get input from that recording device will get the output generated by my program as input.
Any simple way to tackle this? If it is going to help, I can generate Waveform output stream from my program.
As an additional complication, any solution that depends on looping back the audio output to the input won't work, because I have another application that is already doing that to capture the default output.

Having FPGA to output sound on "line out" pin using verilog

I am trying to write Verilog code for an FPGA that will output sound on the board's "line out" pin. I use Quartus II and an Altera DE1.
I am new to hardware programming, so it is taking me a while to catch up with the basics. Apparently I need to initialize the Wolfson chip and write something to communicate with it, as discussed here:
http://www.alteraforum.com/forum/showthread.php?t=6005
It uses wolfson WM8731 codec, manual is in here:
http://www.rockbox.org/wiki/pub/Main/DataSheets/WM8731_8731L.pdf
and here is an example I found but could not figure out how to use:
https://code.google.com/p/vector06cc/wiki/SoundCodec
I have found a bunch of examples of how to generate sounds using GPIO pins, but barely anything about using the WM8731. I would really appreciate any guidance or experience you might share.
Assuming you're using Nios II and either SOPC Builder or Qsys, the Altera University Program offers an IP core to control the Audio CODEC on the DE-Series boards.
If you don't already have it, you can download it here (at the bottom of the page, listed as University Program Installer): https://www.altera.com/support/training/university/materials-ip-cores.html
Once you install it, check the <altera-directory>\University_Program\NiosII_Computer_Systems\DE1\DE1_Media_Computer directory. app_software and app_software_HAL both give example methods to write to the audio output (using C code running on the Nios II), and the verilog and vhdl folders show example systems that connect the core to a Nios II using your preferred HDL.
The core itself can be found in <altera-directory>\ip\University_Program\Audio_Video. See also ftp://ftp.altera.com/up/pub/Altera_Material/14.1/University_Program_IP_Cores/Audio_Video/Audio.pdf for some (potentially) helpful reading/reference.
Addendum:
All of the FPGA inputs and outputs use the "Digital Audio Interface" of the WM8731 chip. The pins available on the FPGA are as follows:
PIN_A6 : AUD_ADCLRCK
PIN_B6 : AUD_ADCDAT
PIN_A5 : AUD_DACLRCK
PIN_B5 : AUD_DACDAT
PIN_A4 : AUD_BCLK
PIN_B4 : AUD_XCK (MCLK on WM8731)
Output is sent to the CODEC on the AUD_DACDAT pin.
The chip is configured using the I2C_SDAT and I2C_SCLK pins, at I2C address 0x34 for write and 0x35 for read.
No other pins are available to the FPGA - some are used for external connections (like the mike or line-in), or are not connected at all.
For a full list of pin assignments for the DE1 (which can be directly imported to Quartus) see: ftp://ftp.altera.com/up/pub/Altera_Material/12.1/Boards/DE1/DE1.qsf

Adding audio effects (reverb etc..) to a BackgroundAudioPlayer driven streaming audio app

I have a windows phone 8 app which plays audio streams from a remote location or local files using the BackgroundAudioPlayer. I now want to be able to add audio effects, for example, reverb or echo, etc...
Please could you advise me on how to do this? I haven't been able to find a way of hooking extra audio-processing code into the audio pipeline, even though I've read a lot about WASAPI and XAudio2 and have looked at many code examples.
Note that the app is written in C# but, from my previous experience with writing audio processing code, I know that I should be writing the audio code in native C++. Roughly speaking, I need to find a point at which there is an audio buffer containing raw PCM data which I can use as an input for my audio processing code which will then write either back to the same buffer or to another buffer which is read by the next stage of audio processing. There need to be ways of synchronizing what happens in my code with the rest of the phone's audio processing mechanisms and, of course, the process needs to be very fast so as not to cause audio glitches. Or something like that; I'm used to how VST works, not how such things might work in the Windows Phone world.
Looking forward to seeing what you suggest...
Kind regards,
Matt Daley
I need to find a point at which there is an audio buffer containing raw PCM data
AFAIK there's no such point. This MSDN page hints that audio/video decoding is performed not by the OS, but by the Qualcomm chip itself.
You can use something like Mp3Sharp for decoding. This way the mp3 will be decoded on the CPU by your managed code, you can interfere / process however you like, then feed the PCM into the media stream source. Main downside - battery life: the hardware-provided codecs should be much more power-efficient.

LabVIEW string output

How do I send a string output from a DAQ Board (NI- USB 6259) using LabVIEW? I want to send commands such as "CELL 0" or "READ" to a potentiostat device using LabVIEW.
Thanks
The 6259 doesn't do string output. It's a data acquisition board that's intended for reading/sourcing analog voltages or sending/receiving individual digital signals. It's not a communications device.
If you're really trying to send strings to this device, you probably need something more like an RS-232 or GPIB connection.
As eaolson said, a DAQ is not intended to control devices. However, it is an interesting project to dig into the guts of the communication protocol. Doing it with a DAQ would require you to:
Identify the protocol (GPIB or RS-232)
Make your cable from the DAQ output connector
For each command, generate the waveform in LabVIEW from the letters' ASCII codes, start/stop bits, etc.; see the sketch after this list. This is the most fun part (IMHO, but I understand it's not for everybody!)
Send it (using DAQ analog write VIs, you should find many examples for this)
The oscilloscope will be your best friend
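Not LabVIEW, but to make the waveform-generation step concrete, here is a rough Python sketch of turning a command string into a bit-level waveform. The 8N1 framing, the 0/5 V levels and the samples-per-bit count are assumptions; real RS-232 uses inverted bipolar levels, so check what the potentiostat actually expects:

def serial_waveform(command, samples_per_bit=10, high=5.0, low=0.0):
    # Idle-high 8N1 framing: start bit (0), 8 data bits LSB first, stop bit (1).
    samples = []
    for byte in command.encode("ascii"):
        bits = [0] + [(byte >> i) & 1 for i in range(8)] + [1]
        for bit in bits:
            samples += [high if bit else low] * samples_per_bit
    return samples

waveform = serial_waveform("CELL 0\r")  # then write this array with the DAQ analog output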
