How to transmit and receive a baseband signal in UnetStack? (Groovy)

I am working on a project that implements acoustic propagation loss models for underwater communication (based on a research paper). We are trying to simulate this in UnetStack. The ultimate goal is to create a channel model class that has all the loss models implemented.
For now, we have started by trying to send a baseband signal from one node to another, capture the frequency at the receiver node, and compute the loss models for that frequency (the loss models are a function of the signal's frequency). I have tried to follow some documentation and blog posts, but I am not able to transmit and receive the signal.
For reference, I have already referred to these articles:
1.) svc-12-baseband
2.) basic-modem-operations-using-unetstack
This is the research paper I am following to calculate the loss based on the different loss models.
I have tried to write a Groovy simulation script, but it does not seem to work. If someone could have a look and point out the mistakes I have made, that would be a real help. We are quite new to UnetStack as well as to underwater signal processing, and this is our first attempt at implementing it in a simulator. We are using unetsim-1.3.
Any help is really appreciated! Thanks in advance
import org.arl.fjage.*
import org.arl.unet.*
import org.arl.unet.phy.*
import org.arl.unet.bb.*
import org.arl.unet.sim.*
import org.arl.unet.sim.channels.*
import static org.arl.unet.Services.*
import static org.arl.unet.phy.Physical.*
import java.lang.Math.*

platform = RealTimePlatform

simulate 3.minutes, {
  def n = []
  n << node('1', address: 1, location: [0,0,0])
  n << node('2', address: 2, location: [0,0,0])
  n.eachWithIndex { n2, i ->
    n2.startup = {
      def phy = agentForService PHYSICAL
      def node = agentForService NODE_INFO
      def bb = agentForService BASEBAND
      subscribe phy
      subscribe bb
      if (node.address == 1) {
        add new TickerBehavior(50000, {
          float freq = 5000
          float duration = 1000e-3
          int fd = 24000
          int fc = 24000
          int num = duration*fd
          def sig = []
          (0..num-1).each { t ->
            double a = 2*Math.PI*(freq-fc)*t/fd
            sig << (int)(Math.cos(a))
            sig << (int)(Math.sin(a))
          }
          bb << new TxBasebandSignalReq(signal: sig)
          println "sent"
        })
      }
      if (node.address == 2) {
        add new TickerBehavior(50000, {
          bb << new RecordBasebandSignalReq(recLen: 24000)
          def rxNtf = receive(RxBasebandSignalNtf, 25000)
          if (rxNtf) {
            println "Received"
          }
          println "Tried"
        })
      }
    }
  }
}
In some cases "Tried" is printed even before "sent", which shows that the (node.address == 2) block sometimes executes before the (node.address == 1) block.

The basic code you have for transmission (TxBasebandSignalReq) and reception (RecordBasebandSignalReq) of signals seems correct.
This should work well on modems, other than the fact that your signal generation is likely flawed for 2 reasons:
You are trying to generate a signal at 5 kHz in baseband representation using a carrier frequency of 24 kHz and a bandwidth of 24 kHz. This signal will be aliased, as this baseband representation can only represent signals at 24±12 kHz, i.e., 12-36 kHz (see the quick check sketched after this list). If you need to transmit a 5 kHz signal, your modem needs to operate at a much lower carrier frequency (easy in the simulator, but in practice you would need to check your modem's specifications).
You are typecasting the output of sin and cos to int. This is probably not what you intended to do, as the signal is an array of float scaled between -1 and 1. So just dropping the (int) would be advisable.
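To make the aliasing point concrete, here is a small illustrative Groovy check (my addition, not part of the original answer): a passband frequency f is stored in baseband as f - fc, so with baseband rate fd it is only representable if it lies within fc ± fd/2.
// Quick check: is a passband frequency representable at this carrier/baseband rate?
float fc = 24000      // carrier frequency (Hz)
float fd = 24000      // baseband sampling rate (Hz)
float freq = 5000     // desired signal frequency (Hz)
float lo = fc - fd/2  // lower edge: 12 kHz
float hi = fc + fd/2  // upper edge: 36 kHz
def ok = (freq >= lo && freq <= hi)
println "Representable band: ${lo} Hz to ${hi} Hz"
println(ok ? "OK: ${freq} Hz can be represented" : "Will alias: ${freq} Hz is outside the band")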
On a simulator, you need to ensure that the modem parameters are set up correctly to reflect your assumptions about the baseband carrier frequency, bandwidth and recording length:
modem.carrierFrequency = 24000
modem.basebandRate = 24000
modem.maxSignalLength = 24000
The default HalfDuplexModem parameters are different, and your current code would fail for RecordBasebandSignalReq with a REFUSE response (which your code is not checking).
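Checking the response is straightforward. A minimal sketch (my addition, assuming the << operator returns the response message, as it does in UnetStack agent/shell scripting):
// Check whether the recording request was accepted
def rsp = bb << new RecordBasebandSignalReq(recLen: 24000)
if (rsp == null || rsp.performative != Performative.AGREE) {
  println "Recording request refused or timed out: ${rsp}"
}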
The rest of your code looks okay, but I'd simplify it a bit to:
import org.arl.fjage.*
import org.arl.unet.bb.*
import org.arl.unet.Services

platform = RealTimePlatform

modem.carrierFrequency = 24000
modem.basebandRate = 24000
modem.maxSignalLength = 48000

simulate 3.minutes, {
  def n1 = node('1', address: 1, location: [0,0,0])
  def n2 = node('2', address: 2, location: [0,0,0])
  n1.startup = {
    def bb = agentForService Services.BASEBAND
    add new TickerBehavior(50000, {
      float freq = 25000       // pick a frequency in the 12-36 kHz range
      float duration = 1000e-3
      int fd = 24000
      int fc = 24000
      int num = duration*fd
      def sig = []
      (0..num-1).each { t ->
        double a = 2*Math.PI*(freq-fc)*t/fd
        sig << Math.cos(a)
        sig << Math.sin(a)
      }
      bb << new TxBasebandSignalReq(signal: sig)
      println "sent"
    })
  }
  n2.startup = {
    def bb = agentForService Services.BASEBAND
    add new TickerBehavior(50000, {
      bb << new RecordBasebandSignalReq(recLen: 24000)
      def rxNtf = receive(RxBasebandSignalNtf, 25000)
      if (rxNtf) {
        println "Received"
      }
      println "Tried"
    })
  }
}
This should work as expected!
However, there are a few more gotchas to bear in mind:
You are transmitting and recording on timers. In the simulator this should be okay, since both nodes share the same time origin and there is no propagation delay (you've placed the nodes at the same location). On a real modem, however, the recording may not coincide with the transmission.
Transmission and reception of signals works well with a real modem. The Unet simulator, however, is primarily a network simulator: it focuses on simulating the communication-system behavior of modems, not the acoustic propagation itself. While it supports the BASEBAND service, the channel physics of transmitting signals is not accurately modeled by the default HalfDuplexModem, so your mileage may vary when signal-processing the recording. This can be fixed by defining your own channel model that uses an appropriate acoustic propagation model, but that is a non-trivial undertaking.

Related

How can I get raspberry pi 4b to read a TSL237 LF sensor?

I have the sensor working on an Arduino UNO. I'm having trouble translating the code over.
I can get the Pi to receive input from the sensor, but I'm not sure how to get it to count the rising pulses for a specific amount of time (10 seconds) to give a pulse count.
In this code the Arduino keeps receiving pulses from the sensor during the 10-second delay, and the count varies based on how much light is getting through, which is what I want. I want the Pi to do the same thing.
Also please understand I'm very much a novice, so if you can explain in detail I'd appreciate it.
#define TSL237 2

volatile unsigned long pulse_cnt = 0;

void setup() {
  attachInterrupt(0, add_pulse, RISING);
  pinMode(TSL237, INPUT);
  Serial.begin(9600);
}

void add_pulse() {
  pulse_cnt++;
  return;
}

unsigned long Frequency() {
  pulse_cnt = 0;
  delay(10000); // this delay controls the pulse_cnt readout. longer delay == higher number
  // DO NOT change this delay; it will void calibration.
  unsigned long frequency = pulse_cnt;
  return (frequency);
  pulse_cnt = 0; // unreachable (after return), kept from the original
}

void loop() {
  unsigned long frequency = Frequency();
  Serial.println(frequency);
  delay(5000);
}
Here is what I have tried in Mu, but I'm not as skilled.
import threading
import time
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BCM)
TSL237 = 16
GPIO.setup(TSL237, GPIO.IN, pull_up_down=GPIO.PUD_OFF)

class TSL237():
    def __init__(self, TSL237):
        self.TSL237 = TSL237

    def Frequency(self):
        self.pulse = 0
        i = 0
        while i < 10:
            i += 1
            if GPIO.input(TSL237):
                pulse += 1
        return self.pulse

Reading = TSL237(TSL237)
GPIO.add_event_detect(TSL237, GPIO.RISING, Reading.Frequency())
#threading.Thread(target=TSL237(Frequency)).start()
You have mixed up the interrupt code with the readout code. The Frequency function in the Arduino code does nothing but wait a certain time (10 seconds) for the tick count to increase; the actual increase happens in the interrupt handler add_pulse.
In the Python code, the Frequency method is now the interrupt handler, and you're also waiting inside it. This doesn't work, because the interrupt handler won't fire again while it's still running.
Change your code to something like this (untested, since I don't have such a sensor)
import threading
import time
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BCM)
TSL237Pin = 16
GPIO.setup(TSL237Pin, GPIO.IN, pull_up_down=GPIO.PUD_OFF)

class TSL237():
    def __init__(self, pin):
        self.pin = pin
        self.pulses = 0

    def OnRisingEdge(self, channel):  # RPi.GPIO passes the channel number to the callback
        self.pulses += 1              # Increase the pulse count and immediately return

    def Frequency(self):
        self.pulses = 0               # Reset pulse count
        time.sleep(10)                # wait 10 seconds
        return self.pulses            # return new value of pulse count

Reading = TSL237(TSL237Pin)
GPIO.add_event_detect(TSL237Pin, GPIO.RISING, callback=Reading.OnRisingEdge)

result = Reading.Frequency()
print(result)
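If you want the Pi to behave like the Arduino sketch's loop(), a simple readout loop on top of the class above might look like this (my addition, untested for the same reason as above):
# Repeatedly measure for 10 s and print the count, mirroring the Arduino loop()
while True:
    count = Reading.Frequency()  # blocks for 10 seconds while pulses accumulate
    print(count)
    time.sleep(5)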

Audio Unit RemoteIO Setting interleaved float gives kAudioUnitErr_FormatNotSupported

I am working with the Audio Unit RemoteIO to obtain low-latency audio output. AFAIK, an audio unit only accepts certain audio formats depending on the hardware. My problem is that I have a C++ DSP sound engine that works with interleaved float PCM, and I do not want to implement a format converter since it could slow things down in the RemoteIO callback. I tried obtaining a low-latency audio unit with the following format:
AudioStreamBasicDescription const audioDescription = {
    .mSampleRate = defaultSampleRate,
    .mFormatID = kAudioFormatLinearPCM,
    .mFormatFlags = kAudioFormatFlagIsFloat,
    .mBytesPerPacket = defaultSampleRate * STEREO_CHANNEL,
    .mFramesPerPacket = 1,
    .mBytesPerFrame = STEREO_CHANNEL * sizeof(Float32),
    .mChannelsPerFrame = STEREO_CHANNEL,
    .mBitsPerChannel = 8 * sizeof(Float32),
    .mReserved = 0
};

status = AudioUnitSetProperty(audioUnit,
                              kAudioUnitProperty_StreamFormat,
                              kAudioUnitScope_Input,
                              kOutputBus,
                              &audioDescription,
                              sizeof(audioDescription));
This fails with the error code kAudioUnitErr_FormatNotSupported (-10868). If I instead request a non-interleaved float PCM stream with the following:
AudioStreamBasicDescription const audioDescription = {
    .mSampleRate = defaultSampleRate,
    .mFormatID = kAudioFormatLinearPCM,
    .mFormatFlags = kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked | kAudioFormatFlagIsNonInterleaved,
    .mBytesPerPacket = sizeof(float),
    .mFramesPerPacket = 1,
    .mBytesPerFrame = sizeof(float),
    .mChannelsPerFrame = STEREO_CHANNEL,
    .mBitsPerChannel = 8 * sizeof(float),
    .mReserved = 0
};

status = AudioUnitSetProperty(audioUnit,
                              kAudioUnitProperty_StreamFormat,
                              kAudioUnitScope_Input,
                              kOutputBus,
                              &audioDescription,
                              sizeof(audioDescription));
Everything works fine. However I want to obtain an interleaved audio stream for my DSP engine to work without format conversion. Is this possible at all?
PS. waiting for hotpaw2 to guide me :)
Your error is probably due to this line:
.mBytesPerPacket = defaultSampleRate * STEREO_CHANNEL,
For linear PCM, mFramesPerPacket is 1, so mBytesPerPacket must equal mBytesPerFrame, i.e., STEREO_CHANNEL * sizeof(Float32) for interleaved stereo float. It has nothing to do with the sample rate; setting it to defaultSampleRate * STEREO_CHANNEL describes a format the unit cannot support.
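For completeness, here is a sketch of what the interleaved stereo float description would typically look like (my reconstruction, not part of the original answer; whether RemoteIO actually accepts interleaved float still depends on the device and OS version):
// Interleaved stereo float LPCM: no kAudioFormatFlagIsNonInterleaved flag,
// and the per-packet/per-frame sizes cover both channels.
AudioStreamBasicDescription const audioDescription = {
    .mSampleRate       = defaultSampleRate,
    .mFormatID         = kAudioFormatLinearPCM,
    .mFormatFlags      = kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked,
    .mFramesPerPacket  = 1,
    .mChannelsPerFrame = STEREO_CHANNEL,
    .mBitsPerChannel   = 8 * sizeof(Float32),
    .mBytesPerFrame    = STEREO_CHANNEL * sizeof(Float32),
    .mBytesPerPacket   = STEREO_CHANNEL * sizeof(Float32),
    .mReserved         = 0
};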

How to connect GPIO in QEMU-emulated machine to an object in host?

I need to connect the GPIO pins in the ARM machine emulated in QEMU to the GUI objects in application working on the host machine.
For example, the level on the output GPIO should be reflected by a color of a rectangle. The input GPIO should be connected to a button. When the button in GUI is pressed, the input GPIO should be read as zero (otherwise as one) etc.
Of course, the input GPIOs should also be capable of generating interrupts.
In fact, it would be ideal to connect the emulated pin to a pipe or socket, so that a state change caused by QEMU produces a message to the host, and a corresponding message from the host triggers the appropriate change of the GPIO state in QEMU (and possibly generates an interrupt).
I have created a few peripherals of my own for QEMU (e.g., https://github.com/wzab/qemu/blob/ster3/hw/misc/wzab_sysbus_enc1.c), but implementing such a GPIO does not seem trivial.
So far I have found this material: https://sudonull.com/post/80905-Virtual-GPIO-driver-with-QEMU-ivshmem-interrupt-controller-for-Linux but it uses a relatively old QEMU. Additionally, the proposed solution is compatible only with the old sysfs-based method of handling GPIOs.
A newer solution based on the above concept is available in the https://github.com/maquefel/virtual_gpio_basic repository. However, it is not clear if it is libgpiod compatible.
Are there any existing solutions of that problem?
One possible solution
The application implementing the GUI could use the msgpack (https://msgpack.org/) protocol to communicate with QEMU via a socket
(msgpack makes it easy to implement the GUI in various languages, including Python or Lua).
So whenever QEMU changes the state of a pin, it sends a message containing two fields:
Direction: (In, Out)
State: (High, Low, High Impedance)
Whenever somebody changes the state of the pin in the GUI, a similar message is sent to QEMU, but it should contain only one field:
State: (High, Low)
I assume that the logic that resolves collisions and generates a random state when somebody reads a disconnected input should be implemented in the GUI application.
Is it a viable solution?
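To illustrate the message format I have in mind (purely illustrative; the field names are made up), the GUI side could pack and unpack such messages with msgpack-python like this:
import msgpack

# Message from QEMU to the GUI: direction + state of one pin
msg_from_qemu = msgpack.packb({"pin": 5, "direction": "out", "state": "high"})

# Message from the GUI back to QEMU: state only
msg_to_qemu = msgpack.packb({"pin": 5, "state": "low"})

# Decoding on the receiving side
print(msgpack.unpackb(msg_from_qemu))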
Another possible solution
In the Xilinx-modified version of QEMU I have found something that may either be a solution or at least provide the means to find one.
These are the files with names starting with "remote-port" in the https://github.com/Xilinx/qemu/tree/master/include/hw and https://github.com/Xilinx/qemu/tree/master/hw/core directories.
Unfortunately, it seems that the Xilinx solution is aimed at co-simulation with SystemC and can't easily be adapted for communication with a user GUI application.
I have managed to connect the GPIO to the GUI written in Python.
The communication is currently established via POSIX message queues.
I have modified the mpc8xxx.c model of GPIO available in QEMU 4.2.0, adding functions that receive the state of input lines and report the state of the output lines in messages.
I have modified the MPC8XXXGPIOState structure, adding the output message queue, the mutex, and the receiving thread:
typedef struct MPC8XXXGPIOState {
    SysBusDevice parent_obj;

    MemoryRegion iomem;
    qemu_irq irq;
    qemu_irq out[32];
    mqd_t mq;
    QemuThread thread;
    QemuMutex dat_lock;
    uint32_t dir;
    uint32_t odr;
    uint32_t dat;
    uint32_t ier;
    uint32_t imr;
    uint32_t icr;
} MPC8XXXGPIOState;
The changes of the pins are transmitted as structures:
typedef struct {
    uint8_t magick[2];
    uint8_t pin;
    uint8_t state;
} gpio_msg;
The original procedure that writes data to the pins has been modified to report all changed bits via the message queue:
static void mpc8xxx_write_data(MPC8XXXGPIOState *s, uint32_t new_data)
{
    uint32_t old_data = s->dat;
    uint32_t diff = old_data ^ new_data;
    int i;

    qemu_mutex_lock(&s->dat_lock);
    for (i = 0; i < 32; i++) {
        uint32_t mask = 0x80000000 >> i;
        if (!(diff & mask)) {
            continue;
        }
        if (s->dir & mask) {
            gpio_msg msg;
            msg.magick[0] = 0x69;
            msg.magick[1] = 0x10;
            msg.pin = i;
            msg.state = (new_data & mask) ? 1 : 0;
            /* Output */
            qemu_set_irq(s->out[i], (new_data & mask) != 0);
            /* Send the new value */
            mq_send(s->mq, (const char *)&msg, sizeof(msg), 0);
            /* Update the bit in the dat field */
            s->dat &= ~mask;
            if (new_data & mask) s->dat |= mask;
        }
    }
    qemu_mutex_unlock(&s->dat_lock);
}
Information about the pins modified by the GUI is received in a separate thread:
static void *remote_gpio_thread(void *arg)
{
    /* Here we receive the data from the queue */
    const int MSG_MAX = 8192;
    char buf[MSG_MAX];
    gpio_msg *mg = (gpio_msg *)&buf;
    mqd_t mq = mq_open("/to_qemu", O_CREAT | O_RDONLY, S_IRUSR | S_IWUSR, NULL);
    if (mq < 0) {
        perror("I can't open mq");
        exit(1);
    }
    while (1) {
        int res = mq_receive(mq, buf, MSG_MAX, NULL);
        if (res < 0) {
            perror("I can't receive");
            exit(1);
        }
        if (res != sizeof(gpio_msg))
            continue;
        if ((int)mg->magick[0] * 256 + mg->magick[1] != REMOTE_GPIO_MAGICK) {
            printf("Wrong message received");
        }
        if (mg->pin < 32) {
            qemu_mutex_lock_iothread();
            mpc8xxx_gpio_set_irq(arg, mg->pin, mg->state);
            qemu_mutex_unlock_iothread();
        }
    }
}
The receiving thread is started in the modified instance initialization procedure:
static void mpc8xxx_gpio_initfn(Object *obj)
{
    DeviceState *dev = DEVICE(obj);
    MPC8XXXGPIOState *s = MPC8XXX_GPIO(obj);
    SysBusDevice *sbd = SYS_BUS_DEVICE(obj);

    memory_region_init_io(&s->iomem, obj, &mpc8xxx_gpio_ops,
                          s, "mpc8xxx_gpio", 0x1000);
    sysbus_init_mmio(sbd, &s->iomem);
    sysbus_init_irq(sbd, &s->irq);
    qdev_init_gpio_in(dev, mpc8xxx_gpio_set_irq, 32);
    qdev_init_gpio_out(dev, s->out, 32);
    qemu_mutex_init(&s->dat_lock);
    s->mq = mq_open("/from_qemu", O_CREAT | O_WRONLY, S_IRUSR | S_IWUSR, NULL);
    qemu_thread_create(&s->thread, "remote_gpio", remote_gpio_thread, s,
                       QEMU_THREAD_JOINABLE);
}
The minimalistic GUI is written in Python and GTK:
#!/usr/bin/python3
# Sources:
# https://lazka.github.io/pgi-docs
# https://python-gtk-3-tutorial.readthedocs.io/en/latest/button_widgets.html
# https://developer.gnome.org/gtk3/stable/
# Threads: https://wiki.gnome.org/Projects/PyGObject/Threading
import gi
gi.require_version("Gtk", "3.0")
from gi.repository import Gtk, GLib, Gdk
import threading

# Communication part
import struct
pipc_magick = 0x6910
import posix_ipc as pipc

mq_to_qemu = pipc.MessageQueue("/to_qemu", flags=pipc.O_CREAT, read=False, write=True)
mq_from_qemu = pipc.MessageQueue("/from_qemu", flags=pipc.O_CREAT, read=True, write=False)

def send_change(nof_pin, state):
    s = struct.pack(">HBB", pipc_magick, nof_pin, state)
    mq_to_qemu.send(s)

def recv_change(msg):
    mg, pin, state = struct.unpack(">HBB", msg)
    print("mg=", mg, " pin=", pin, " state=", state)
    if mg != pipc_magick:
        raise Exception("Wrong magick number in GPIO IPC message")
    if state == 0:
        s = 0
    else:
        s = 1
    GLib.idle_add(MyLeds[pin-24].change_state, s)

def receiver():
    while True:
        msg = mq_from_qemu.receive()
        recv_change(msg[0])

class MySwitch(Gtk.Switch):
    def __init__(self, number):
        super().__init__()
        self.number = number

class MyButton(Gtk.Button):
    def __init__(self, number):
        super().__init__(label=str(number))
        self.number = number

class MyLed(Gtk.Label):
    color = Gdk.color_parse('gray')
    rgba0 = Gdk.RGBA.from_color(color)
    color = Gdk.color_parse('green')
    rgba1 = Gdk.RGBA.from_color(color)
    del color

    def __init__(self, number):
        super().__init__(label=str(number))
        self.number = number
        self.change_state(0)

    def change_state(self, state):
        if state == 1:
            self.override_background_color(0, self.rgba1)
        else:
            self.override_background_color(0, self.rgba0)

MyLeds = []

class SwitchBoardWindow(Gtk.Window):
    def __init__(self):
        Gtk.Window.__init__(self, title="Switch Demo")
        self.set_border_width(10)
        mainvbox = Gtk.Box(orientation=Gtk.Orientation.VERTICAL, spacing=6)
        self.add(mainvbox)
        # Create the switches
        label = Gtk.Label(label="Stable switches: left 0, right 1")
        mainvbox.pack_start(label, True, True, 0)
        hbox = Gtk.Box(spacing=6)
        for i in range(0, 12):
            vbox = Gtk.Box(orientation=Gtk.Orientation.VERTICAL, spacing=6)
            label = Gtk.Label(label=str(i))
            vbox.pack_start(label, True, True, 0)
            switch = MySwitch(i)
            switch.connect("notify::active", self.on_switch_activated)
            switch.set_active(False)
            vbox.pack_start(switch, True, True, 0)
            hbox.pack_start(vbox, True, True, 0)
        mainvbox.pack_start(hbox, True, True, 0)
        # Create the buttons
        label = Gtk.Label(label="Unstable buttons: pressed 0, released 1")
        mainvbox.pack_start(label, True, True, 0)
        hbox = Gtk.Box(spacing=6)
        for i in range(12, 24):
            button = MyButton(i)
            button.connect("button-press-event", self.on_button_clicked, 0)
            button.connect("button-release-event", self.on_button_clicked, 1)
            hbox.pack_start(button, True, True, 0)
        mainvbox.pack_start(hbox, True, True, 0)
        # Create the LEDs
        label = Gtk.Label(label="LEDs")
        mainvbox.pack_start(label, True, True, 0)
        hbox = Gtk.Box(spacing=6)
        for i in range(24, 32):
            led = MyLed(i)
            MyLeds.append(led)
            hbox.pack_start(led, True, True, 0)
        mainvbox.pack_start(hbox, True, True, 0)

    def on_switch_activated(self, switch, gparam):
        if switch.get_active():
            state = 0
        else:
            state = 1
        # MyLeds[switch.number].change_state(state)
        send_change(switch.number, state)
        print("Switch #" + str(switch.number) + " was turned", state)
        return True

    def on_button_clicked(self, button, gparam, state):
        print("pressed!")
        send_change(button.number, state)
        print("Button #" + str(button.number) + " was turned", state)
        return True

win = SwitchBoardWindow()
win.connect("destroy", Gtk.main_quit)
win.show_all()

thread = threading.Thread(target=receiver)
thread.daemon = True
thread.start()

Gtk.main()
The full project, integrating the modified MPC8XXX with the emulated Vexpress A9 machine, is available in the "gpio" branch of my repository https://github.com/wzab/BR_Internet_Radio

Getting Multiple Audio Inputs in Processing

I'm currently writing a Processing sketch that needs to access multiple audio inputs, but Processing only allows access to the default line in. I have tried getting Lines straight from the Java Mixer (accessed within Processing), but I still only get the signal from whichever line is currently set to default on my machine.
I've started looking at sending the sound via OSC from SuperCollider, as recommended here. However, since I'm very new to SuperCollider and its documentation and support are more focused on generating sound than on accessing inputs, my next step will probably be to play around with Beads and JACK, as suggested here.
Does anyone have (1) other suggestions, or (2) concrete examples of getting multiple inputs from either SuperCollider or Beads/Jack to Processing?
Thank you in advance!
Edit: The sound will be used to power custom music visualizations (think the iTunes visualizer, but much more song specific). We have this working with multiple mp3s; now what I need is to be able to get a float[] buffer from each mic. Hoping to have 9 different mics, though we'll settle for 4 if that is more doable.
For hardware, at this point, we are just using mics and XLR to USB cables. (Have considered a pre-amp, but so far this has been sufficient.) I am currently on Windows, but I think that we will ultimately switch to a Mac.
Here was my attempt with just Beads (it works fine for the laptop, since I do that one first, but the headset buffer has all 0's; if I switch them and put the headset first, the headset buffer will be correct, but the laptop will contain all 0's):
void setup() {
  size(512, 400);

  JavaSoundAudioIO headsetAudioIO = new JavaSoundAudioIO();
  JavaSoundAudioIO laptopAudioIO = new JavaSoundAudioIO();

  headsetAudioIO.selectMixer(5);
  headsetAudioCon = new AudioContext(headsetAudioIO);

  laptopAudioIO.selectMixer(4);
  laptopAudioCon = new AudioContext(laptopAudioIO);

  headsetMic = headsetAudioCon.getAudioInput();
  laptopMic = headsetAudioCon.getAudioInput();
} // setup()

void draw() {
  background(100, 0, 75);

  laptopMic.start();
  laptopMic.calculateBuffer();
  laptopBuffer = laptopMic.getOutBuffer(0);
  for (int j = 0; j < laptopBuffer.length - 1; j++)
  {
    println("laptop; " + j + ": " + laptopBuffer[j]);
    line(j, 200+laptopBuffer[j]*50, j+1, 200+laptopBuffer[j+1]*50);
  }
  laptopMic.kill();

  headsetMic.start();
  headsetMic.calculateBuffer();
  headsetBuffer = headsetMic.getOutBuffer(0);
  for (int j = 0; j < headsetBuffer.length - 1; j++)
  {
    println("headset; " + j + ": " + headsetBuffer[j]);
    line(j, 50+headsetBuffer[j]*50, j+1, 50+headsetBuffer[j+1]*50);
  }
  headsetMic.kill();
} // draw()
My attempt at adding Jack contains this line:
ac = new AudioContext(new AudioServerIO.Jack(), 44100, new IOAudioFormat(44100, 16, 4, 4));
but I get the error:
Jun 22, 2016 9:17:24 PM org.jaudiolibs.beads.AudioServerIO$1 run
SEVERE: null
org.jaudiolibs.jnajack.JackException: Can't find native library
at org.jaudiolibs.jnajack.Jack.getInstance(Jack.java:428)
at org.jaudiolibs.audioservers.jack.JackAudioServer.initialise(JackAudioServer.java:102)
at org.jaudiolibs.audioservers.jack.JackAudioServer.run(JackAudioServer.java:86)
at org.jaudiolibs.beads.AudioServerIO$1.run(Unknown Source)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.UnsatisfiedLinkError: Unable to load library 'jack': Native library (win32-x86-64/jack.dll) not found in resource path ([file:/C:/Users/...etc...)
And when I'm in Jack, I don't see my mic (which seems like a huge red flag to me, though I am completely new to Jack). Should this AudioContext show up as an Input in Jack? Or vice versa -- find my mic there first and then get it from Jack to Processing?
(Forgive my inexperience, and thank you again! My lack of knowledge in Jack makes me wonder if I should revisit SuperCollider instead...)
I had the same issue a few years ago and I used a combination of JACK, JNAJack and Beads. You can follow this Beads Google Group thread for more details.
At that time I had to use this version of Beads (2012-04-23), but I hope those changes have made it into the main project by now.
For reference, here is the basic class I used:
import java.util.Arrays;

import org.jaudiolibs.beads.AudioServerIO;

import net.beadsproject.beads.analysis.featureextractors.FFT;
import net.beadsproject.beads.analysis.featureextractors.PowerSpectrum;
import net.beadsproject.beads.analysis.segmenters.ShortFrameSegmenter;
import net.beadsproject.beads.core.AudioContext;
import net.beadsproject.beads.core.AudioIO;
import net.beadsproject.beads.core.UGen;
import net.beadsproject.beads.ugens.Gain;
import processing.core.PApplet;

public class BeadsJNA extends PApplet {

    AudioContext ac;
    ShortFrameSegmenter sfs;
    PowerSpectrum ps;

    public void setup() {
        // defining audio context with 6 inputs and 6 outputs - adjust this based on your sound card / JACK setup
        ac = new AudioContext(new AudioServerIO.Jack(), 512, AudioContext.defaultAudioFormat(6, 6));

        // getting 4 audio inputs (channels 1,2,3,4)
        UGen microphoneIn = ac.getAudioInput(new int[]{1, 2, 3, 4});

        Gain g = new Gain(ac, 1, 0.5f);
        g.addInput(microphoneIn);
        ac.out.addInput(g);

        println("no. of inputs: " + ac.getAudioInput().getOuts());

        // test: get some FFT power spectrum data from the output (ac.out)
        sfs = new ShortFrameSegmenter(ac);
        sfs.addInput(ac.out);
        FFT fft = new FFT();
        sfs.addListener(fft);
        ps = new PowerSpectrum();
        fft.addListener(ps);
        ac.out.addDependent(sfs);

        ac.start();
    }

    public void draw() {
        background(255);
        float[] features = ps.getFeatures();
        if (features != null) {
            for (int x = 0; x < width; x++) {
                int featureIndex = (x * features.length) / width;
                int barHeight = Math.min((int)(features[featureIndex] * height), height - 1);
                line(x, height, x, height - barHeight);
            }
        }
    }

    public static void main(String[] args) {
        PApplet.main(BeadsJNA.class.getSimpleName());
    }
}

Generate audio tone to sound card in C++ or C#

I am trying to generate a tone to the sound card (frequency: 1950 Hz, duration: 40 ms, level: -30 dB, right channel, on stream 1). Any recommendations on how to accomplish this using C++ or C#? Are there any libraries (C++ or C#) for generating such a precise tone?
David, playing audio to the speakers is built right into .NET (I think since the .NET 2.0 Framework). Using System.Media.SoundPlayer you can play a sound from a memory stream that you build (in WAV format). Here is a function I coded that plays a simple frequency for a certain duration. Regarding the decibels and sending the tone to the sound card, I don't really understand what specifics you are referring to: my understanding is that decibels measure how loud a sound is after it has been reproduced by the speakers, so the volume control on the speakers affects what decibel level your sounds produce, and sending a certain decibel level to the sound card makes no sense to me. Maybe you need something more detailed and this doesn't work for you, but maybe you can run with it and get it to do what you need; it may even be almost exactly what you are asking for.
The process I use in this code lets you build any audio you want and play it, so you can create two sine waves, or many more, or triangle waves, or even speech synthesis if you want. The method calculates sound samples and then plays them, so you need to code what each audio sample should be at each moment in time. WAV allows stereo sound too, but this code sample only uses mono; if you want stereo (e.g., a right-channel-only tone), it just needs to be modified to generate the bytes for a stereo WAV format instead (see the note after the code). I expect it would not be too difficult.
Happy coding!
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Windows.Forms;

public static void PlayBeep(UInt16 frequency, int msDuration, UInt16 volume = 16383)
{
    var mStrm = new MemoryStream();
    BinaryWriter writer = new BinaryWriter(mStrm);

    const double TAU = 2 * Math.PI;
    int formatChunkSize = 16;
    int headerSize = 8;
    short formatType = 1;
    short tracks = 1;
    int samplesPerSecond = 44100;
    short bitsPerSample = 16;
    short frameSize = (short)(tracks * ((bitsPerSample + 7) / 8));
    int bytesPerSecond = samplesPerSecond * frameSize;
    int waveSize = 4;
    int samples = (int)((decimal)samplesPerSecond * msDuration / 1000);
    int dataChunkSize = samples * frameSize;
    int fileSize = waveSize + headerSize + formatChunkSize + headerSize + dataChunkSize;

    // var encoding = new System.Text.UTF8Encoding();
    writer.Write(0x46464952);    // = encoding.GetBytes("RIFF")
    writer.Write(fileSize);
    writer.Write(0x45564157);    // = encoding.GetBytes("WAVE")
    writer.Write(0x20746D66);    // = encoding.GetBytes("fmt ")
    writer.Write(formatChunkSize);
    writer.Write(formatType);
    writer.Write(tracks);
    writer.Write(samplesPerSecond);
    writer.Write(bytesPerSecond);
    writer.Write(frameSize);
    writer.Write(bitsPerSample);
    writer.Write(0x61746164);    // = encoding.GetBytes("data")
    writer.Write(dataChunkSize);

    {
        double theta = frequency * TAU / (double)samplesPerSecond;
        // 'volume' is UInt16 with range 0 thru UInt16.MaxValue ( = 65 535)
        // we need 'amp' to stay within 0 thru Int16.MaxValue ( = 32 767)
        double amp = volume >> 2; // note: >> 2 divides by 4, leaving extra headroom below Int16.MaxValue
        for (int step = 0; step < samples; step++)
        {
            short s = (short)(amp * Math.Sin(theta * (double)step));
            writer.Write(s);
        }
    }

    mStrm.Seek(0, SeekOrigin.Begin);
    new System.Media.SoundPlayer(mStrm).Play();
    writer.Close();
    mStrm.Close();
} // public static void PlayBeep(UInt16 frequency, int msDuration, UInt16 volume = 16383)
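For the parameters in the question, a call might look like the following (my addition, not from the original answer). Note that the exact acoustic level still depends on the playback chain, and a right-channel-only tone would additionally require setting tracks = 2 and interleaving a zero sample for the left channel with each right-channel sample.
// 1950 Hz tone for 40 ms at reduced volume
// 2072 ≈ 65535 * 10^(-30/20): roughly 30 dB below this function's maximum output level
PlayBeep(1950, 40, 2072);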
NAudio provides a robust audio library for .NET.
NAudio is an open source .NET audio and MIDI library, containing dozens of useful audio related classes intended to speed development of audio related utilities in .NET. It has been in development since 2002 and has grown to include a wide variety of features. While some parts of the library are relatively new and incomplete, the more mature features have undergone extensive testing and can be quickly used to add audio capabilities to an existing .NET application. NAudio can be quickly added to your .NET application using NuGet.
Here's an article that walks step-by-step through using NAudio to create a sine wave. You can create the sine wave with any desired frequency, for any desired duration:
http://msdn.microsoft.com/en-us/magazine/ee309883.aspx
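As a rough sketch of the NAudio route (my addition, based on NAudio's SignalGenerator sample provider; check the API against the NAudio version you use), a 1950 Hz sine at roughly -30 dB relative to full scale for 40 ms could look like this; routing it to the right channel only would need an extra panning/multiplexing step:
using System;
using System.Threading;
using NAudio.Wave;
using NAudio.Wave.SampleProviders;

class TonePlayer
{
    static void Main()
    {
        // 1950 Hz sine at gain 10^(-30/20) ≈ 0.0316 (about -30 dB re full scale), 40 ms long
        var tone = new SignalGenerator()
        {
            Type = SignalGeneratorType.Sin,
            Frequency = 1950,
            Gain = Math.Pow(10, -30.0 / 20.0)
        }.Take(TimeSpan.FromMilliseconds(40));

        using (var output = new WaveOutEvent())
        {
            output.Init(tone);
            output.Play();
            while (output.PlaybackState == PlaybackState.Playing)
            {
                Thread.Sleep(10); // wait for the short tone to finish
            }
        }
    }
}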
