How to connect GPIO in a QEMU-emulated machine to an object on the host?

I need to connect the GPIO pins of an ARM machine emulated in QEMU to GUI objects in an application running on the host machine.
For example, the level on an output GPIO should be reflected by the color of a rectangle, and an input GPIO should be connected to a button: when the button in the GUI is pressed, the input GPIO should read as zero (otherwise as one), and so on.
Of course, the input GPIOs should also be capable of generating interrupts.
In fact, it would be perfect to connect the emulated pin to a pipe or socket, so that a change of state caused by QEMU produces a message sent to the host, and the appropriate message sent by the host triggers the corresponding change of the GPIO state in QEMU (and possibly generates an interrupt).
I have created a few peripherals of my own for QEMU (e.g., https://github.com/wzab/qemu/blob/ster3/hw/misc/wzab_sysbus_enc1.c ), but implementing such a GPIO does not seem trivial.
So far I have found this material: https://sudonull.com/post/80905-Virtual-GPIO-driver-with-QEMU-ivshmem-interrupt-controller-for-Linux but it is based on a relatively old version of QEMU. Additionally, the proposed solution is compatible only with the old sysfs-based method of handling GPIOs.
A newer solution based on the above concept is available in the https://github.com/maquefel/virtual_gpio_basic repository. However, it is not clear whether it is libgpiod-compatible.
Are there any existing solutions to that problem?
One possible solution
The application implementing the GUI could use the msgpack protocol ( https://msgpack.org/ ) to communicate with QEMU via a socket
(msgpack enables easy implementation of the GUI in various languages, including Python and Lua).
So whenever QEMU changes the state of a pin, it sends a message containing two fields:
Direction: (In, Out)
State: (High, Low, High Impedance)
Whenever somebody changes the state of a pin in the GUI, a similar message is sent to QEMU, but it contains only one field:
State: (High, Low)
I assume that the logic that resolves collisions and generates a random state when somebody tries to read an unconnected input should be implemented in the GUI application.
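For illustration, the GUI side of such an exchange could look like the rough Python sketch below. The socket path, the map keys, and the extra pin field (which any multi-pin device would need) are my assumptions, not an existing interface:

import socket
import msgpack

sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.connect("/tmp/qemu-gpio.sock")   # hypothetical socket exported by QEMU

# GUI -> QEMU: the button attached to pin 3 was pressed, drive the input low
sock.sendall(msgpack.packb({"pin": 3, "state": "Low"}))

# QEMU -> GUI: decode pin-state reports as they arrive
unpacker = msgpack.Unpacker(raw=False)
unpacker.feed(sock.recv(4096))
for msg in unpacker:
    print(msg["pin"], msg["direction"], msg["state"])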
Is it a viable solution?
Another possible solution
In a version of QEMU modified by Xilinx I have found something that either may be a solution itself or at least provides the means to find one.
These are the files with names starting with "remote-port" in the https://github.com/Xilinx/qemu/tree/master/include/hw and https://github.com/Xilinx/qemu/tree/master/hw/core directories.
Unfortunately, it seems that the Xilinx solution is aimed at cosimulation with System-C and can't be easily adapted for communication with the user GUI application.

I have managed to connect the GPIO to a GUI written in Python.
The communication is currently established via POSIX message queues.
I have modified the mpc8xxx.c GPIO model available in QEMU 4.2.0, adding functions that receive the state of the input lines and report the state of the output lines in messages.
I have modified MPC8XXXGPIOState, adding the output message queue, a mutex, and the receiving thread:
typedef struct MPC8XXXGPIOState {
    SysBusDevice parent_obj;

    MemoryRegion iomem;
    qemu_irq irq;
    qemu_irq out[32];
    mqd_t mq;
    QemuThread thread;
    QemuMutex dat_lock;
    uint32_t dir;
    uint32_t odr;
    uint32_t dat;
    uint32_t ier;
    uint32_t imr;
    uint32_t icr;
} MPC8XXXGPIOState;
The changes of the pins are transmitted as structures:
typedef struct {
    uint8_t magick[2];
    uint8_t pin;
    uint8_t state;
} gpio_msg;
The original procedure that writes data to the pins has been modified to report every modified bit via the message queue:
static void mpc8xxx_write_data(MPC8XXXGPIOState *s, uint32_t new_data)
{
    uint32_t old_data = s->dat;
    uint32_t diff = old_data ^ new_data;
    int i;

    qemu_mutex_lock(&s->dat_lock);
    for (i = 0; i < 32; i++) {
        uint32_t mask = 0x80000000 >> i;
        if (!(diff & mask)) {
            continue;
        }
        if (s->dir & mask) {
            gpio_msg msg;
            msg.magick[0] = 0x69;
            msg.magick[1] = 0x10;
            msg.pin = i;
            msg.state = (new_data & mask) ? 1 : 0;
            /* Output */
            qemu_set_irq(s->out[i], (new_data & mask) != 0);
            /* Send the new value */
            mq_send(s->mq, (const char *)&msg, sizeof(msg), 0);
            /* Update the bit in the dat field */
            s->dat &= ~mask;
            if (new_data & mask) s->dat |= mask;
        }
    }
    qemu_mutex_unlock(&s->dat_lock);
}
Information about the pins modified by the GUI is received in a separate thread:
static void *remote_gpio_thread(void *arg)
{
    // Here we receive the data from the queue
    const int MSG_MAX = 8192;
    char buf[MSG_MAX];
    gpio_msg *mg = (gpio_msg *)&buf;
    mqd_t mq = mq_open("/to_qemu", O_CREAT | O_RDONLY, S_IRUSR | S_IWUSR, NULL);
    if (mq < 0) {
        perror("I can't open mq");
        exit(1);
    }
    while (1) {
        int res = mq_receive(mq, buf, MSG_MAX, NULL);
        if (res < 0) {
            perror("I can't receive");
            exit(1);
        }
        if (res != sizeof(gpio_msg)) continue;
        if ((int)mg->magick[0] * 256 + mg->magick[1] != REMOTE_GPIO_MAGICK) {
            printf("Wrong message received");
        }
        if (mg->pin < 32) {
            qemu_mutex_lock_iothread();
            mpc8xxx_gpio_set_irq(arg, mg->pin, mg->state);
            qemu_mutex_unlock_iothread();
        }
    }
}
The receiving thread is started in the modified instance initialization procedure:
static void mpc8xxx_gpio_initfn(Object *obj)
{
    DeviceState *dev = DEVICE(obj);
    MPC8XXXGPIOState *s = MPC8XXX_GPIO(obj);
    SysBusDevice *sbd = SYS_BUS_DEVICE(obj);

    memory_region_init_io(&s->iomem, obj, &mpc8xxx_gpio_ops,
                          s, "mpc8xxx_gpio", 0x1000);
    sysbus_init_mmio(sbd, &s->iomem);
    sysbus_init_irq(sbd, &s->irq);
    qdev_init_gpio_in(dev, mpc8xxx_gpio_set_irq, 32);
    qdev_init_gpio_out(dev, s->out, 32);
    qemu_mutex_init(&s->dat_lock);
    s->mq = mq_open("/from_qemu", O_CREAT | O_WRONLY, S_IRUSR | S_IWUSR, NULL);
    qemu_thread_create(&s->thread, "remote_gpio", remote_gpio_thread, s,
                       QEMU_THREAD_JOINABLE);
}
The minimalistic GUI is written in Python and GTK:
#!/usr/bin/python3
# Sources:
# https://lazka.github.io/pgi-docs
# https://python-gtk-3-tutorial.readthedocs.io/en/latest/button_widgets.html
# https://developer.gnome.org/gtk3/stable/
# Threads: https://wiki.gnome.org/Projects/PyGObject/Threading
import gi
gi.require_version("Gtk", "3.0")
from gi.repository import Gtk, GLib, Gdk
import threading

# Communication part
import struct
pipc_magick = 0x6910
import posix_ipc as pipc
mq_to_qemu = pipc.MessageQueue("/to_qemu", flags=pipc.O_CREAT, read=False, write=True)
mq_from_qemu = pipc.MessageQueue("/from_qemu", flags=pipc.O_CREAT, read=True, write=False)

def send_change(nof_pin, state):
    s = struct.pack(">HBB", pipc_magick, nof_pin, state)
    mq_to_qemu.send(s)

def recv_change(msg):
    mg, pin, state = struct.unpack(">HBB", msg)
    print("mg=", mg, " pin=", pin, " state=", state)
    if mg != pipc_magick:
        raise Exception("Wrong magick number in GPIO IPC message")
    if state == 0:
        s = 0
    else:
        s = 1
    GLib.idle_add(MyLeds[pin - 24].change_state, s)

def receiver():
    while True:
        msg = mq_from_qemu.receive()
        recv_change(msg[0])

class MySwitch(Gtk.Switch):
    def __init__(self, number):
        super().__init__()
        self.number = number

class MyButton(Gtk.Button):
    def __init__(self, number):
        super().__init__(label=str(number))
        self.number = number

class MyLed(Gtk.Label):
    color = Gdk.color_parse('gray')
    rgba0 = Gdk.RGBA.from_color(color)
    color = Gdk.color_parse('green')
    rgba1 = Gdk.RGBA.from_color(color)
    del color

    def __init__(self, number):
        super().__init__(label=str(number))
        self.number = number
        self.change_state(0)

    def change_state(self, state):
        if state == 1:
            self.override_background_color(0, self.rgba1)
        else:
            self.override_background_color(0, self.rgba0)

MyLeds = []

class SwitchBoardWindow(Gtk.Window):
    def __init__(self):
        Gtk.Window.__init__(self, title="Switch Demo")
        self.set_border_width(10)
        mainvbox = Gtk.Box(orientation=Gtk.Orientation.VERTICAL, spacing=6)
        self.add(mainvbox)
        # Create the switches
        label = Gtk.Label(label="Stable switches: left 0, right 1")
        mainvbox.pack_start(label, True, True, 0)
        hbox = Gtk.Box(spacing=6)
        for i in range(0, 12):
            vbox = Gtk.Box(orientation=Gtk.Orientation.VERTICAL, spacing=6)
            label = Gtk.Label(label=str(i))
            vbox.pack_start(label, True, True, 0)
            switch = MySwitch(i)
            switch.connect("notify::active", self.on_switch_activated)
            switch.set_active(False)
            vbox.pack_start(switch, True, True, 0)
            hbox.pack_start(vbox, True, True, 0)
        mainvbox.pack_start(hbox, True, True, 0)
        # Create the buttons
        label = Gtk.Label(label="Unstable buttons: pressed 0, released 1")
        mainvbox.pack_start(label, True, True, 0)
        hbox = Gtk.Box(spacing=6)
        for i in range(12, 24):
            button = MyButton(i)
            button.connect("button-press-event", self.on_button_clicked, 0)
            button.connect("button-release-event", self.on_button_clicked, 1)
            hbox.pack_start(button, True, True, 0)
        mainvbox.pack_start(hbox, True, True, 0)
        # Create the LEDs
        label = Gtk.Label(label="LEDs")
        mainvbox.pack_start(label, True, True, 0)
        hbox = Gtk.Box(spacing=6)
        for i in range(24, 32):
            led = MyLed(i)
            MyLeds.append(led)
            hbox.pack_start(led, True, True, 0)
        mainvbox.pack_start(hbox, True, True, 0)

    def on_switch_activated(self, switch, gparam):
        if switch.get_active():
            state = 0
        else:
            state = 1
        #MyLeds[switch.number].change_state(state)
        send_change(switch.number, state)
        print("Switch #" + str(switch.number) + " was turned", state)
        return True

    def on_button_clicked(self, button, gparam, state):
        print("pressed!")
        send_change(button.number, state)
        print("Button #" + str(button.number) + " was turned", state)
        return True

win = SwitchBoardWindow()
win.connect("destroy", Gtk.main_quit)
win.show_all()
thread = threading.Thread(target=receiver)
thread.daemon = True
thread.start()
Gtk.main()
The full project, integrating the modified MPC8XXX with the emulated Vexpress A9 machine, is available in the "gpio" branch of my repository: https://github.com/wzab/BR_Internet_Radio

Related

How can I get a Raspberry Pi 4B to read a TSL237 LF sensor?

I have the sensor working on an Arduino UNO. I'm having trouble translating the code over.
I can get the Pi to receive input from the sensor, but I'm not sure how to get it to read the rising pulses for a specific amount of time (10 seconds) to give a certain number of pulses.
In this code the Arduino is still receiving pulses from the sensor during the 10-second delay, but the number will vary based on how much light is getting through, which is what I want. I want the Pi to do the same thing.
Also please understand I'm very much a novice, so if you can explain in detail I'd appreciate it.
#define TSL237 2

volatile unsigned long pulse_cnt = 0;

void setup() {
    attachInterrupt(0, add_pulse, RISING);
    pinMode(TSL237, INPUT);
    Serial.begin(9600);
}

void add_pulse() {
    pulse_cnt++;
    return;
}

unsigned long Frequency() {
    pulse_cnt = 0;
    delay(10000); // this delay controls pulse_cnt read out. longer delay == higher number
                  // DO NOT change this delay; it will void calibration.
    unsigned long frequency = pulse_cnt;
    return (frequency);
    pulse_cnt = 0;
}

void loop() {
    unsigned long frequency = Frequency();
    Serial.println(frequency);
    delay(5000);
}
Here is what I have tried in Mu, but I'm not as skilled.
import threading
import time
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BCM)
TSL237 = 16
GPIO.setup(TSL237, GPIO.IN, pull_up_down=GPIO.PUD_OFF)

class TSL237():
    def __init__(self, TSL237):
        self.TSL237 = TSL237

    def Frequency(self):
        self.pulse = 0
        i = 0
        while i < 10:
            i += 1
            if GPIO.input(TSL237):
                pulse += 1
        return self.pulse

Reading = TSL237(TSL237)
GPIO.add_event_detect(TSL237, GPIO.RISING, Reading.Frequency())
#threading.Thread(target=TSL237(Frequency)).start()
You have mixed up the interrupt code with the readout code. The Frequency method in the Arduino code does nothing but wait a certain time (10 seconds) for the ticks to increase; the actual increase of the ticks happens in the interrupt handler add_pulse.
In the Python code, the Frequency method is now the interrupt handler, and you're waiting inside it for some event as well. This doesn't work, because the interrupt handler won't fire again while it's active.
Change your code to something like this (untested, since I don't have such a sensor):
import threading
import time
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BCM)
TSL237Pin = 16
GPIO.setup(TSL237Pin, GPIO.IN, pull_up_down=GPIO.PUD_OFF)

class TSL237():
    def __init__(self, pin):
        self.pin = pin
        self.pulses = 0

    def OnRisingEdge(self, channel):  # RPi.GPIO passes the channel number to the callback
        self.pulses += 1              # Increase the pulse count and immediately return

    def Frequency(self):
        self.pulses = 0     # Reset pulse count
        time.sleep(10)      # wait 10 seconds
        return self.pulses  # return new value of pulse count

Reading = TSL237(TSL237Pin)
GPIO.add_event_detect(TSL237Pin, GPIO.RISING, callback=Reading.OnRisingEdge)
result = Reading.Frequency()
print(result)

C extensions - how to redirect printf to a Python logger?

I have a simple C extension (see example below) that sometimes prints using the printf function.
I'm looking for a way to wrap calls into that C extension so that all those printfs are redirected to my Python logger.
hello.c:
#include <Python.h>

static PyObject* hello(PyObject* self)
{
    printf("example print from a C code\n");
    return Py_BuildValue("");
}

static char helloworld_docs[] =
    "helloworld(): Any message you want to put here!!\n";

static PyMethodDef helloworld_funcs[] = {
    {"hello", (PyCFunction)hello,
     METH_NOARGS, helloworld_docs},
    {NULL}
};

static struct PyModuleDef cModPyDem =
{
    PyModuleDef_HEAD_INIT,
    "helloworld",
    "Extension module example!",
    -1,
    helloworld_funcs
};

PyMODINIT_FUNC PyInit_helloworld(void)
{
    return PyModule_Create(&cModPyDem);
};

setup.py:

from distutils.core import setup, Extension

setup(name='helloworld', version='1.0',
      ext_modules=[Extension('helloworld', ['hello.c'])])
To use it, first run
python3 setup.py install
and then:
import helloworld
helloworld.hello()
I want to be able to do something like this:
with redirect_to_logger(my_logger):
    helloworld.hello()
EDIT: I saw a number of posts showing how to silence prints coming from C, but I wasn't able to figure out from them how to capture the prints in Python instead.
Example of such post: Redirect stdout from python for C calls
I assume this question didn't get much traction because maybe I ask for too much, so I don't care about logging anymore... How can I capture the C prints in Python? Into a list or whatever.
EDIT
So I was able to achieve somewhat working code that does what I want - redirecting C printf to a Python logger:
import select
import threading
import time
import logging
import re
from contextlib import contextmanager
from wurlitzer import pipes
from helloworld import hello

logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
ch = logging.StreamHandler()
ch.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
ch.setFormatter(formatter)
logger.addHandler(ch)

class CPrintsHandler(threading.Thread):
    def __init__(self, std, poll_std, err, poll_err, logger):
        super(CPrintsHandler, self).__init__()
        self.std = std
        self.poll_std = poll_std
        self.err = err
        self.poll_err = poll_err
        self.logger = logger
        self.stop_event = threading.Event()

    def stop(self):
        self.stop_event.set()

    def run(self):
        while not self.stop_event.is_set():
            # How can I poll both std and err at the same time?
            if self.poll_std.poll(1):
                line = self.std.readline()
                if line:
                    self.logger.debug(line.strip())
            if self.poll_err.poll(1):
                line = self.err.readline()
                if line:
                    self.logger.debug(line.strip())

@contextmanager
def redirect_to_logger(some_logger):
    handler = None
    try:
        with pipes() as (std, err):
            poll_std = select.poll()
            poll_std.register(std, select.POLLIN)
            poll_err = select.poll()
            poll_err.register(err, select.POLLIN)
            handler = CPrintsHandler(std, poll_std, err, poll_err, some_logger)
            handler.start()
            yield
    finally:
        if handler:
            time.sleep(0.1)  # why do I have to sleep here for the foo prints to finish?
            handler.stop()
            handler.join()

def foo():
    logger.debug('logger print from foo()')
    hello()

def main():
    with redirect_to_logger(logger):
        # I don't want the logs from here to be redirected as well, only printf.
        logger.debug('logger print from main()')
        foo()

main()
But I have a couple of issues:
The Python logs are also being redirected and caught by the CPrintsHandler. Is there a way to avoid that?
The prints are not exactly in the correct order:
python3 redirect_c_example_for_stackoverflow.py
2020-08-18 19:50:47,732 - root - DEBUG - example print from a C code
2020-08-18 19:50:47,733 - root - DEBUG - 2020-08-18 19:50:47,731 - root - DEBUG - logger print from main()
2020-08-18 19:50:47,733 - root - DEBUG - 2020-08-18 19:50:47,731 - root - DEBUG - logger print from foo()
Also, the logger prints all go to err; perhaps the way I poll them causes this order.
I'm not that familiar with select in Python, and I am not sure whether there is a way to poll both std and err at the same time and print whichever has something first.
On Linux you could use wurlitzer, which captures the output from printf, e.g.:
from wurlitzer import pipes

with pipes() as (out, err):
    helloworld.hello()

out.read()
# 'example print from a C code\n'
wurlitzer is based on this article by Eli Bendersky; you can use the code from it if you don't want to depend on third-party libraries.
Sadly, wurlitzer and the code from the article work only on Linux (and possibly macOS).
Here is a prototype (an improved version of the prototype can be installed from my GitHub) for Windows, using Eli's approach as a Cython extension (which could probably be translated to ctypes if needed):
%%cython

import io
import os

cdef extern from *:
    """
    #include <windows.h>
    #include <io.h>
    #include <stdlib.h>
    #include <stdio.h>
    #include <fcntl.h>

    int open_temp_file() {
        TCHAR lpTempPathBuffer[MAX_PATH+1]; // path+NULL
        // Gets the temp path env string (no guarantee it's a valid path).
        DWORD dwRetVal = GetTempPath(MAX_PATH,          // length of the buffer
                                     lpTempPathBuffer); // buffer for path
        if (dwRetVal > MAX_PATH || (dwRetVal == 0))
        {
            return -1;
        }
        // Generates a temporary file name.
        TCHAR szTempFileName[MAX_PATH + 1]; // path+NULL
        DWORD uRetVal = GetTempFileName(lpTempPathBuffer, // directory for tmp files
                                        TEXT("tmp"),      // temp file name prefix
                                        0,                // create unique name
                                        szTempFileName);  // buffer for name
        if (uRetVal == 0)
        {
            return -1;
        }
        HANDLE tFile = CreateFile((LPTSTR)szTempFileName,       // file name
                                  GENERIC_READ | GENERIC_WRITE, // first we write, then we read
                                  0,                            // do not share
                                  NULL,                         // default security
                                  CREATE_ALWAYS,                // overwrite existing
                                  FILE_ATTRIBUTE_TEMPORARY | FILE_FLAG_DELETE_ON_CLOSE, // "temporary" temporary file, see https://learn.microsoft.com/en-us/archive/blogs/larryosterman/its-only-temporary
                                  NULL);                        // no template
        if (tFile == INVALID_HANDLE_VALUE) {
            return -1;
        }
        return _open_osfhandle((intptr_t)tFile, _O_APPEND | _O_TEXT);
    }

    int replace_stdout(int temp_fileno)
    {
        fflush(stdout);
        int old;
        int cstdout = _fileno(stdout);
        old = _dup(cstdout); // "old" now refers to "stdout"
        if (old == -1)
        {
            return -1;
        }
        if (-1 == _dup2(temp_fileno, cstdout))
        {
            return -1;
        }
        return old;
    }

    int restore_stdout(int old_stdout) {
        fflush(stdout);
        // Restore original stdout
        int cstdout = _fileno(stdout);
        return _dup2(old_stdout, cstdout);
    }

    void rewind_fd(int fd) {
        _lseek(fd, 0L, SEEK_SET);
    }
    """
    int open_temp_file()
    int replace_stdout(int temp_fileno)
    int restore_stdout(int old_stdout)
    void rewind_fd(int fd)
    void close_fd "_close" (int fd)

cdef class CStdOutCapture():
    cdef int tmpfile_fd
    cdef int old_stdout_fd

    def start(self):  # start capturing
        self.tmpfile_fd = open_temp_file()
        self.old_stdout_fd = replace_stdout(self.tmpfile_fd)

    def stop(self):  # stops capturing, frees resources and returns the content
        restore_stdout(self.old_stdout_fd)
        rewind_fd(self.tmpfile_fd)  # need to read from the beginning
        buffer = io.TextIOWrapper(os.fdopen(self.tmpfile_fd, 'rb'))
        result = buffer.read()
        close_fd(self.tmpfile_fd)
        return result
And now:
b = CStdOutCapture()
b.start()
helloworld.hello()
out = b.stop()
print("HERE WE GO:", out)
# HERE WE GO: example print from a C code
This is what I would do if I were free to edit the C code: open a memory map in C and write to its file descriptor using fprintf(). Expose the file descriptor to Python, either as an int (then use the mmap module to open it, or os.fdopen() to wrap it in a simpler file-like object), or wrap it in a file-like object in C and let Python use that.
Then I would create a class that exposes the usual sys.stdout interface, i.e. a write() method (for Python's side usage), and that uses the select module in a thread to poll the file that acts as C's stdout. Then I would swap sys.stdout with an object of this class. So, when Python calls sys.stdout.write(...), the string is passed through to the real stdout, and when the loop in the thread detects output in the file written by C, it writes it out the same way. So, everything will be written to the screen and be available to loggers as well.
In this model, the strictly C part never actually writes to the file descriptor connected to the terminal.
You can even do much of this in C itself and leave little for the Python side, but it's easier to influence the interpreter from the Python side, since the extension is a shared library, which involves some kind of, let's call it, IPC and the OS in the whole story. That's why stdout is not shared between the extension and Python in the first place.
If you want to keep using printf() on the C side, you can look up how to redirect it in C before programming this whole mess.
This answer is strictly theoretical because I have had no time to test it; but it should be doable according to my knowledge. If you try it, please let me know in a comment how it went. Perhaps I missed something, but I am certain the theory is sane.
The beauty of this idea is that it is OS-independent, although the part with shared memory, or connecting a file descriptor to allocated space in RAM, can sometimes be a PITA on Windows.
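As a minimal, Linux-only sketch of the fd-swapping part of this idea (a plain pipe stands in for the memory map, and logger is the one configured in the question; the ctypes fflush is needed because C stdio buffers independently of Python):

import ctypes
import os

import helloworld

libc = ctypes.CDLL(None)  # gives access to C's fflush() on Linux

r, w = os.pipe()
saved_fd = os.dup(1)      # keep a duplicate of the real stdout
os.dup2(w, 1)             # fd 1 now feeds the pipe, so C's printf() does too
try:
    helloworld.hello()
    libc.fflush(None)     # flush all C stdio buffers into the pipe
finally:
    os.dup2(saved_fd, 1)  # restore the terminal
    os.close(saved_fd)
    os.close(w)

captured = os.read(r, 65536).decode()
os.close(r)
logger.debug(captured.strip())  # hand the C output to the Python logger

A pipe only buffers about 64 KiB, so for longer output a reader thread is needed, which is essentially what wurlitzer implements.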
If you are not constrained to using printf in C, it is easier to use the print equivalent from the Python C API and pass the destination of the message as an argument.
For example, your hello.c would be:
#include <Python.h>

static PyObject* hello(PyObject* self, PyObject *args)
{
    PyObject *file = NULL;
    if (!PyArg_ParseTuple(args, "O", &file))
        return NULL;
    PyObject *pystr = PyUnicode_FromString("example print from a C code\n");
    PyFile_WriteObject(pystr, file, Py_PRINT_RAW);
    return Py_BuildValue("");
}

static char helloworld_docs[] =
    "helloworld(): Any message you want to put here!!\n";

static PyMethodDef helloworld_funcs[] = {
    {"hello", (PyCFunction)hello,
     METH_VARARGS, helloworld_docs},
    {NULL}
};

static struct PyModuleDef cModPyDem =
{
    PyModuleDef_HEAD_INIT,
    "helloworld",
    "Extension module example!",
    -1,
    helloworld_funcs
};

PyMODINIT_FUNC PyInit_helloworld(void)
{
    return PyModule_Create(&cModPyDem);
};
We can check if it is working with the program below:
import sys
import helloworld
helloworld.hello(sys.stdout)
helloworld.hello(sys.stdout)
helloworld.hello(sys.stderr)
In the command line we redirect each output separately:
python3 example.py 1> out.txt 2> err.txt
out.txt will contain two printed lines, while err.txt will contain only one, as expected from our Python script.
You can check python's print implementation to get some more ideas of what you can do.
cpython print source code

How to transmit and receive a baseband signal in UnetStack?

I am trying to work on a project involving the implementation of acoustic propagation loss models for underwater communication (based on a certain research paper). We are trying to simulate that in UnetStack. The ultimate goal is to create a channel model class that has all the loss models implemented.
For now, we have started by trying to send a baseband signal from one node to another; we then try to capture the frequency at the receiver node and calculate the loss models for that frequency (the loss models are a function of the signal's frequency). I have tried to follow some documentation and some blog posts, but I am not able to transmit and receive the signal.
For reference, I have already referred to these articles:
1.) svc-12-baseband
2.) basic-modem-operations-using-unetstack
This is the research paper that I am following to calculate the loss based on different loss models.
I have tried to write a Groovy file for the simulation, but it does not seem to work. If someone could have a look and point out the mistakes I have made, that would be a real help. We are quite new to UnetStack as well as to underwater signal processing, and this is our first attempt at implementing it in a simulator. We are using unetsim-1.3.
Any help is really appreciated! Thanks in advance.
import org.arl.fjage.*
import org.arl.unet.*
import org.arl.unet.phy.*
import org.arl.unet.bb.*
import org.arl.unet.sim.*
import org.arl.unet.sim.channels.*
import static org.arl.unet.Services.*
import static org.arl.unet.phy.Physical.*
import java.lang.Math.*

platform = RealTimePlatform

simulate 3.minutes, {
    def n = []
    n << node('1', address: 1, location: [0, 0, 0])
    n << node('2', address: 2, location: [0, 0, 0])
    n.eachWithIndex { n2, i ->
        n2.startup = {
            def phy = agentForService PHYSICAL
            def node = agentForService NODE_INFO
            def bb = agentForService BASEBAND
            subscribe phy
            subscribe bb
            if (node.address == 1) {
                add new TickerBehavior(50000, {
                    float freq = 5000
                    float duration = 1000e-3
                    int fd = 24000
                    int fc = 24000
                    int num = duration*fd
                    def sig = []
                    (0..num-1).each { t ->
                        double a = 2*Math.PI*(freq-fc)*t/fd
                        sig << (int)(Math.cos(a))
                        sig << (int)(Math.sin(a))
                    }
                    bb << new TxBasebandSignalReq(signal: sig)
                    println "sent"
                })
            }
            if (node.address == 2) {
                add new TickerBehavior(50000, {
                    bb << new RecordBasebandSignalReq(recLen: 24000)
                    def rxNtf = receive(RxBasebandSignalNtf, 25000)
                    if (rxNtf) {
                        println "Received"
                    }
                    println "Tried"
                })
            }
        }
    }
}
In some cases "Tried" is printed first even before "sent" is printed. This shows that (node.address == 2) code is executing first, before (node.address == 1) executes.
The basic code you have for transmission (TxBasebandSignalReq) and reception (RecordBasebandSignalReq) of signals seems correct.
This should work well on modems, other than the fact that your signal generation is likely flawed, for two reasons:
You are trying to generate a signal at 5 kHz in baseband representation using a carrier frequency of 24 kHz and a bandwidth of 24 kHz. This signal will be aliased, as this baseband representation can only represent signals of 24±12 kHz, i.e., 12-36 kHz. If you need to transmit a 5 kHz signal, you need your modem to be operating at much lower carrier frequency (easy in the simulator, but in practice you'd need to check your modem specifications).
You are typecasting the output of sin and cos to int. This is probably not what you intended, as the signal is an array of floats scaled between -1 and +1. So just dropping the (int) would be advisable.
On a simulator, you need to ensure that the modem parameters are setup correctly to reflect your assumptions of baseband carrier frequency, bandwidth and recording length:
modem.carrierFrequency = 24000
modem.basebandRate = 24000
modem.maxSignalLength = 24000
The default HalfDuplexModem parameters are different, and your current code would fail for RecordBasebandSignalReq with a REFUSE response (which your code is not checking).
The rest of your code looks okay, but I'd simplify it a bit to:
import org.arl.fjage.*
import org.arl.unet.bb.*
import org.arl.unet.Services

platform = RealTimePlatform

modem.carrierFrequency = 24000
modem.basebandRate = 24000
modem.maxSignalLength = 48000

simulate 3.minutes, {
    def n1 = node('1', address: 1, location: [0, 0, 0])
    def n2 = node('2', address: 2, location: [0, 0, 0])
    n1.startup = {
        def bb = agentForService Services.BASEBAND
        add new TickerBehavior(50000, {
            float freq = 25000   // pick a frequency in the 12-36 kHz range
            float duration = 1000e-3
            int fd = 24000
            int fc = 24000
            int num = duration*fd
            def sig = []
            (0..num-1).each { t ->
                double a = 2*Math.PI*(freq-fc)*t/fd
                sig << Math.cos(a)
                sig << Math.sin(a)
            }
            bb << new TxBasebandSignalReq(signal: sig)
            println "sent"
        })
    }
    n2.startup = {
        def bb = agentForService Services.BASEBAND
        add new TickerBehavior(50000, {
            bb << new RecordBasebandSignalReq(recLen: 24000)
            def rxNtf = receive(RxBasebandSignalNtf, 25000)
            if (rxNtf) {
                println "Received"
            }
            println "Tried"
        })
    }
}
This should work as expected!
However, there are a few more gotchas to bear in mind:
You are sending and recording on a timer. In the simulator this should be okay, as both nodes have the same time origin and there is no propagation delay (you've set up the nodes at the same location). However, on a real modem, the recording may not be happening when the transmission does.
Transmission and reception of signals with a real modem work well. The Unet simulator is primarily a network simulator and focuses on simulating the communication system behavior of modems, but not necessarily the acoustic propagation. While it supports the BASEBAND service, the channel physics of transmitting signals is not accurately modeled by the default HalfDuplexModem model. So your mileage on signal processing the recording may vary. This can be fixed by defining your own channel model that uses an appropriate acoustic propagation model, but that is a non-trivial undertaking.

Getting Multiple Audio Inputs in Processing

I'm currently writing a Processing sketch that needs to access multiple audio inputs, but Processing only allows access to the default line in. I have tried getting Lines straight from the Java Mixer (accessed within Processing), but I still only get the signal from whichever line is currently set to default on my machine.
I've started looking at sending the sound via OSC from SuperCollider, as recommended here. However, since I'm very new to SuperCollider, and its documentation and support are more focused on generating sound than on accessing inputs, my next step will probably be to play around with Beads and JACK, as suggested here.
Does anyone have (1) other suggestions, or (2) concrete examples of getting multiple inputs from either SuperCollider or Beads/Jack to Processing?
Thank you in advance!
Edit: The sound will be used to power custom music visualizations (think the iTunes visualizer, but much more song-specific). We have this working with multiple mp3s; now what I need is to be able to get a float[] buffer from each mic. We hope to have 9 different mics, though we'll settle for 4 if that is more doable.
For hardware, at this point we are just using mics and XLR-to-USB cables. (We have considered a preamp, but so far this has been sufficient.) I am currently on Windows, but I think that we will ultimately switch to a Mac.
Here was my attempt with just Beads (it works fine for the laptop, since I read that one first, but the headset buffer contains all 0's; if I switch them and read the headset first, the headset buffer is correct, but the laptop's contains all 0's):
void setup() {
    size(512, 400);
    JavaSoundAudioIO headsetAudioIO = new JavaSoundAudioIO();
    JavaSoundAudioIO laptopAudioIO = new JavaSoundAudioIO();
    headsetAudioIO.selectMixer(5);
    headsetAudioCon = new AudioContext(headsetAudioIO);
    laptopAudioIO.selectMixer(4);
    laptopAudioCon = new AudioContext(laptopAudioIO);
    headsetMic = headsetAudioCon.getAudioInput();
    laptopMic = headsetAudioCon.getAudioInput();
} // setup()

void draw() {
    background(100, 0, 75);

    laptopMic.start();
    laptopMic.calculateBuffer();
    laptopBuffer = laptopMic.getOutBuffer(0);
    for (int j = 0; j < laptopBuffer.length - 1; j++) {
        println("laptop; " + j + ": " + laptopBuffer[j]);
        line(j, 200 + laptopBuffer[j]*50, j + 1, 200 + laptopBuffer[j+1]*50);
    }
    laptopMic.kill();

    headsetMic.start();
    headsetMic.calculateBuffer();
    headsetBuffer = headsetMic.getOutBuffer(0);
    for (int j = 0; j < headsetBuffer.length - 1; j++) {
        println("headset; " + j + ": " + headsetBuffer[j]);
        line(j, 50 + headsetBuffer[j]*50, j + 1, 50 + headsetBuffer[j+1]*50);
    }
    headsetMic.kill();
} // draw()
My attempt at adding JACK contains this line:
ac = new AudioContext(new AudioServerIO.Jack(), 44100, new IOAudioFormat(44100, 16, 4, 4));
but I get the error:
Jun 22, 2016 9:17:24 PM org.jaudiolibs.beads.AudioServerIO$1 run
SEVERE: null
org.jaudiolibs.jnajack.JackException: Can't find native library
    at org.jaudiolibs.jnajack.Jack.getInstance(Jack.java:428)
    at org.jaudiolibs.audioservers.jack.JackAudioServer.initialise(JackAudioServer.java:102)
    at org.jaudiolibs.audioservers.jack.JackAudioServer.run(JackAudioServer.java:86)
    at org.jaudiolibs.beads.AudioServerIO$1.run(Unknown Source)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.UnsatisfiedLinkError: Unable to load library 'jack': Native library (win32-x86-64/jack.dll) not found in resource path ([file:/C:/Users/...etc...)
And when I'm in JACK, I don't see my mic (which seems like a huge red flag to me, though I am completely new to JACK). Should this AudioContext show up as an input in JACK? Or vice versa: should I find my mic there first and then get it from JACK to Processing?
(Forgive my inexperience, and thank you again! My lack of knowledge of JACK makes me wonder if I should revisit SuperCollider instead...)
I had the same issue a few years ago and I used a combination of JACK, JNAJack and Beads. You can follow this Beads Google Group thread for more details.
At that time I had to use this version of Beads (2012-04-23), but those changes have probably made it into the main project by now.
For reference, here is the basic class I used:
import java.util.Arrays;

import org.jaudiolibs.beads.AudioServerIO;

import net.beadsproject.beads.analysis.featureextractors.FFT;
import net.beadsproject.beads.analysis.featureextractors.PowerSpectrum;
import net.beadsproject.beads.analysis.segmenters.ShortFrameSegmenter;
import net.beadsproject.beads.core.AudioContext;
import net.beadsproject.beads.core.AudioIO;
import net.beadsproject.beads.core.UGen;
import net.beadsproject.beads.ugens.Gain;
import processing.core.PApplet;

public class BeadsJNA extends PApplet {

    AudioContext ac;
    ShortFrameSegmenter sfs;
    PowerSpectrum ps;

    public void setup() {
        // defining audio context with 6 inputs and 6 outputs - adjust this based on your sound card / JACK setup
        ac = new AudioContext(new AudioServerIO.Jack(), 512, AudioContext.defaultAudioFormat(6, 6));

        // getting 4 audio inputs (channels 1,2,3,4)
        UGen microphoneIn = ac.getAudioInput(new int[]{1, 2, 3, 4});

        Gain g = new Gain(ac, 1, 0.5f);
        g.addInput(microphoneIn);
        ac.out.addInput(g);

        println("no. of inputs: " + ac.getAudioInput().getOuts());

        // test: get some FFT power spectrum data from the
        sfs = new ShortFrameSegmenter(ac);
        sfs.addInput(ac.out);
        FFT fft = new FFT();
        sfs.addListener(fft);
        ps = new PowerSpectrum();
        fft.addListener(ps);
        ac.out.addDependent(sfs);

        ac.start();
    }

    public void draw() {
        background(255);
        float[] features = ps.getFeatures();
        if (features != null) {
            for (int x = 0; x < width; x++) {
                int featureIndex = (x * features.length) / width;
                int barHeight = Math.min((int)(features[featureIndex] * height), height - 1);
                line(x, height, x, height - barHeight);
            }
        }
    }

    public static void main(String[] args) {
        PApplet.main(BeadsJNA.class.getSimpleName());
    }
}

Linux DRM (DRI): cannot screen scrape /dev/fb0 as before with FBDEV

On other Linux machines using the FBDEV drivers (Raspberry Pi, etc.), I could mmap the /dev/fb0 device and directly create a BMP file that saved what was on the screen.
Now I am trying to do the same thing with DRM on a TI Sitara AM57XX (BeagleBoard X-15). The code that used to work with FBDEV is shown below.
This mmap no longer seems to work with DRM. I'm using a very simple Qt5 application with the Qt linuxfb platform plugin. It draws just fine into /dev/fb0 and shows on the screen properly; however, I cannot read back from /dev/fb0 with a memory-mapped pointer and save an image of the screen to a file. The result looks garbled.
Code:
#ifdef FRAMEBUFFER_CAPTURE
    repaint();
    QCoreApplication::processEvents();

    // Setup framebuffer to desired format
    struct fb_var_screeninfo var;
    struct fb_fix_screeninfo finfo;
    memset(&finfo, 0, sizeof(finfo));
    memset(&var, 0, sizeof(var));

    /* Get variable screen information. Variable screen information
     * gives information like size of the image, bits per pixel,
     * virtual size of the image etc. */
    int fbFd = open("/dev/fb0", O_RDWR);
    int fdRet = ioctl(fbFd, FBIOGET_VSCREENINFO, &var);
    if (fdRet < 0) {
        qDebug() << "Error opening /dev/fb0!";
        close(fbFd);
        return -1;
    }

    if (ioctl(fbFd, FBIOPUT_VSCREENINFO, &var) < 0) {
        qDebug() << "Error setting up framebuffer!";
        close(fbFd);
        return -1;
    } else {
        qDebug() << "Success setting up framebuffer!";
    }

    // Get fixed screen information
    if (ioctl(fbFd, FBIOGET_FSCREENINFO, &finfo) < 0) {
        qDebug() << "Error getting fixed screen information!";
        close(fbFd);
        return -1;
    } else {
        qDebug() << "Success getting fixed screen information!";
    }

    //int screensize = var.xres * var.yres * var.bits_per_pixel / 8;
    //int screensize = var.yres_virtual * finfo.line_length;
    //int screensize = finfo.smem_len;
    int screensize = finfo.line_length * var.yres_virtual;

    qDebug() << "Framebuffer size is: " << var.xres << var.yres << var.bits_per_pixel << screensize;

    int linuxFbWidth = var.xres;
    int linuxFbHeight = var.yres;
    int location = (var.xoffset) * (var.bits_per_pixel/8) +
                   (var.yoffset) * finfo.line_length;

    // Perform memory mapping of linux framebuffer
    char* frameBufferMmapPixels = (char *)mmap(0, screensize, PROT_READ | PROT_WRITE, MAP_SHARED, fbFd, 0);
    assert(frameBufferMmapPixels != MAP_FAILED);

    QImage toSave((uchar*)frameBufferMmapPixels, linuxFbWidth, linuxFbHeight, QImage::Format_ARGB32);
    toSave.save("/usr/bin/test.bmp");
    sync();
#endif
Here is the output of the code when it runs:
Success setting up framebuffer!
Success getting fixed screen information!
Framebuffer size is: 800 480 32 1966080
Here is the output of fbset showing the pixel format:
mode "800x480"
geometry 800 480 800 480 32
timings 0 0 0 0 0 0 0
accel true
rgba 8/16,8/8,8/0,8/24
endmode
root@am57xx-evm:~#
finfo.line_length gives the size of the actual physical scan line in bytes. It is not necessarily the same as the screen width multiplied by the pixel size, as scan lines may be padded.
However, the QImage constructor you are using assumes no padding.
If xoffset is zero, it should be possible to construct a QImage directly from the framebuffer data using a constructor with the bytesPerLine argument. Otherwise there are two options (see the sketch after this list):
allocate a separate buffer and copy only the visible portion of each scanline to it
create an image from the entire buffer (including the padding) and then crop it
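As a rough illustration of the first option, here is the per-scanline copy at the byte level, sketched in Python for brevity. The numbers follow from your own debug output: line_length = 1966080 / 480 = 4096 bytes, while the visible part of each line is only 800 * 4 = 3200 bytes, which by itself would explain a skewed, garbled image:

import mmap
import os

XRES, YRES, BYTES_PER_PIXEL = 800, 480, 4  # from FBIOGET_VSCREENINFO
LINE_LENGTH = 4096                         # finfo.line_length (only 3200 bytes visible)

fd = os.open("/dev/fb0", os.O_RDONLY)
fb = mmap.mmap(fd, LINE_LENGTH * YRES, prot=mmap.PROT_READ)

row = XRES * BYTES_PER_PIXEL
pixels = bytearray(row * YRES)
for y in range(YRES):
    # copy only the visible bytes of each padded scan line
    pixels[y * row:(y + 1) * row] = fb[y * LINE_LENGTH:y * LINE_LENGTH + row]

fb.close()
os.close(fd)
# 'pixels' is now densely packed and safe for an image constructor
# that assumes width * 4 bytes per line

The same arithmetic maps onto the QImage constructor that takes bytesPerLine: pass finfo.line_length there and no copy is needed.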
If you're using DRM, then /dev/fb0 might point to an entirely different buffer (not the currently visible one) or have a different format.
fbdev is really just for old legacy code that hasn't been ported to DRM/KMS yet,
and it only has very limited mode-setting capabilities.
BTW: which kernel are you using? Hopefully not that ancient and broken TI vendor kernel...
