Dynamic creation of instances of a component - redhawksdr

I am using REDHAWK 1.9 on CentOS 6.3 32 bit...
I have a REDHAWK component that takes in one data stream. Depending on the data, the waveform may need more than one instance of that component. Is it possible to do the following:
Create an instance of a component on the fly when the waveform is running?
Create dynamic connections between components when the waveform is running?

Jonathan, I'm not sure I totally understand your question but let me try an answer and you can clarify if I'm misunderstanding. It sounds like you want to have a waveform running, and depending on what the waveform does to the data flowing into it, launch more waveforms to perform other tasks on the data. Is that correct?
Dynamic launching of waveforms when certain conditions are met is not included natively in REDHAWK. However, it would be possible to create a component that does this and include it in one of your waveforms.
When stringing together multiple waveforms, make sure the connecting ports are configured as external ports.
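As a rough sketch of what such a launching component could do with REDHAWK's Python API (`ossie.utils.redhawk`): attach to the running domain and create another application in it. The domain name `REDHAWK_DEV` and waveform name `ExtraProcessing` are placeholders, and the exact argument `createApplication` expects (application name vs. full SAD file path) depends on your REDHAWK version — treat this as an outline, not a drop-in implementation.

```python
# Sketch: launch an extra waveform in a running domain when a condition
# is met. Guarded import so the sketch is readable outside a REDHAWK host.
try:
    from ossie.utils import redhawk
except ImportError:            # REDHAWK Python environment not installed
    redhawk = None

def launch_extra_waveform(domain_name='REDHAWK_DEV',
                          waveform='ExtraProcessing'):
    """Attach to a running domain and install another waveform in it."""
    if redhawk is None:
        raise RuntimeError('REDHAWK Python environment not available')
    dom = redhawk.attach(domain_name)
    # createApplication installs and creates the application; depending on
    # version this may need '/waveforms/ExtraProcessing/ExtraProcessing.sad.xml'
    app = dom.createApplication(waveform)
    app.start()
    return app
```

Connections between the new application's external ports and the existing one would then be made the same way as in the sandbox, via the ports' `connect` calls.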

Related

Overlapping one sound multiple times in processing

I'm working on the sound part of an interactive installation. An event needs to be triggered via OSC an undefined number of times, so that the sound linked to it overlaps with itself instead of being rewound and restarted.
Would it be possible to do that without making an array of loadings of the same sound?
I'm currently trying to do it with Processing and the Minim library.
Do you think it would be easier to achieve with another environment? I ran into the same difficulties trying to do it with Pure Data. Any tip or clue would be extremely welcome.
Thanks a lot.
You will need multiple readers ([tabread~] or [tabplay~] in Pd; I don't know about Processing/Minim, but the same principle applies) to read the table multiple times in parallel, where each one can be started separately.
However, you only need a single instance of your data array (e.g. [table]), as the various readers can access the same array independently.
Can you use Java libraries in Processing? Processing is built on Java, yes?
If you can, I have a library you can use, providing a class I call AudioCue, available via GitHub. It is modeled on a Java Clip but with additional capabilities: it allows multiple concurrent playbacks, and it has real-time controls for volume, panning, and playback speed, in case you want to add some more interactivity to your installation.
I would love to know if it can be used with Processing. Please follow up with me if you try this route. I'd like to see it done, and can possibly assist.
If Processing allows you to send PCM directly out for playback, then the basic algorithm is to store the audio data in an array and create pointers or cursors (depending on your preferred terminology) that independently iterate through that array. This is the main basis of the algorithm I use for AudioCue, with the PCM being routed out via a Java SourceDataLine.
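As a minimal illustration of that algorithm (plain Python, no actual audio output; the sample values are made up), several cursors read one shared array and their samples are summed per output frame, so overlapping playbacks mix instead of restarting:

```python
pcm = [0.5, 0.25, -0.25, -0.5]        # the single shared "table" of samples

class Cursor:
    """One independent playback head over the shared array."""
    def __init__(self, data):
        self.data, self.pos = data, 0

    def next_sample(self):
        if self.pos >= len(self.data):
            return 0.0                 # finished cursors contribute silence
        s = self.data[self.pos]
        self.pos += 1
        return s

def mix_frame(cursors):
    """Sum one sample from every cursor: overlapping playbacks mix."""
    return sum(c.next_sample() for c in cursors)

cursors = [Cursor(pcm)]                # first trigger starts playback
out = []
for frame in range(6):
    if frame == 2:                     # a second trigger arrives mid-playback
        cursors.append(Cursor(pcm))
    out.append(mix_frame(cursors))
# out[2] is pcm[2] + pcm[0]: both playbacks are audible at once
```

Only one copy of the audio data exists; each trigger just adds a new cursor.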

Dynamic Waveform Creation While Running in a Domain

I am trying to connect the USRP_UHD device to the FM_mono_demo waveform. The waveform's component that connects to the USRP_UHD is the TuneFilterDecimate. The USRP_UHD outputs 16-bit integers, while the TuneFilterDecimate component requires float values.
The fix is to add the DataConverter component at the beginning of the FM_mono_demo waveform.
I am going to be experimenting with a different SDR whose REDHAWK device outputs floats, which makes the original waveform correct.
Therefore I need to have two versions of the FM_mono_demo waveform, the original and one modified with the DataConverter component.
A better solution would be to launch the DataConverter component, if needed, using python and connect it to the first component of the waveform.
There is a method to launchComponent within the sandbox, but I cannot find a way to do so within a domain.
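For reference, the sandbox route mentioned above might be sketched as follows. This is a hedged outline: `sb.launch` and port `connect` are sandbox API calls, but the component name `rh.DataConverter` is an assumption (it may be plain `DataConverter` in REDHAWK 1.x), and none of this works inside a domain.

```python
# Sandbox sketch: insert a DataConverter in front of the waveform's first
# component only when the device emits 16-bit integers.
try:
    from ossie.utils import sb
except ImportError:            # REDHAWK sandbox not installed
    sb = None

def connect_device(device, first_component, needs_conversion):
    """Optionally place a DataConverter between device and waveform input."""
    if needs_conversion:
        if sb is None:
            raise RuntimeError('REDHAWK sandbox not available')
        # component name is an assumption; may be 'DataConverter' in 1.x
        conv = sb.launch('rh.DataConverter')
        device.connect(conv)
        conv.connect(first_component)
        return conv
    device.connect(first_component)
    return None
```

In a domain, the equivalent of this conditional insertion is what the two-waveform idea below would provide, with the DataConverter living in its own small waveform connected via external ports.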
An idea would be to create two waveforms. One would be the main waveform and the second would consist of components that could be accessed and connected to the main waveform.
This leads to the idea that multiple waveforms could be connected at run-time to allow for dynamic configuration. There is a lot going on with this question. Maybe I overlooked an obvious way to solve my original problem.
Your problem is rather broad, but I think I can point you in the right direction.
It seems as though you're dealing with two issues:
Moving from sandbox to domain
Dealing with multiple devices
For problem 1), I recommend you dig further into the manual, specifically this section: Redhawk Manual: The Runtime Environment.
For problem 2), you can find more information in the manual in this section: Redhawk Manual: Working with Devices. That section includes specifying a particular hardware device and running components with Redhawk devices (proxies to the actual hardware device).
I recommend you start with those steps and post specific questions as you run into issues. You didn't actually ask a question here, but I think your confusion lies in understanding the Redhawk architecture itself.

Use Threads for image analysis and microcontroller actuation or not

One question, please.
I have a program that analyses images coming from a camera (with OpenCV). It has to take the values from that analysis at a specific "sampling time" and send them to a microcontroller. The analysis part can't stop analysing, because that would leave the system "blind" (or so we think). I have read that when passing values from one module to another, using "return" stops that module, so one has to use threads. Is this the best way to address the problem, or is there another way to do it in just one thread?
(The overall idea is to control a machine that uses motors based on the analysis from a camera; the question is whether this should be done with threads or sequentially.)
Thanks in advance.
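One common structure for the problem as described, sketched in plain Python: a worker thread runs the analysis continuously and keeps only the latest result, while the main loop samples that result at the fixed period and sends it on. The counter stands in for the OpenCV processing and `send_to_micro` for the real serial write; both are assumptions for illustration.

```python
import threading
import time

latest = {'value': None}     # most recent analysis result
lock = threading.Lock()
stop = threading.Event()

def analysis_loop():
    """Continuously analyse frames; never blocks on the sender."""
    counter = 0
    while not stop.is_set():
        counter += 1                   # stand-in for OpenCV processing
        with lock:
            latest['value'] = counter
        time.sleep(0.001)

sent = []
def send_to_micro(value):
    sent.append(value)                 # stand-in for a pyserial write

worker = threading.Thread(target=analysis_loop, daemon=True)
worker.start()
for _ in range(3):                     # the fixed "sampling time" loop
    time.sleep(0.01)
    with lock:
        send_to_micro(latest['value'])
stop.set()
worker.join()
```

The analysis never waits on the microcontroller, and the sender always gets the freshest value rather than a backlog — which is usually what you want for motor control.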

The best way to load an openstreetmap .osm in a docker-container

My intentions:
I intend to:
implement vehicles as containers
simulate/move these containers on the .osm maps-based roads
My viewpoint about the problem:
I have loaded the XML-based .osm file and processed it in python using xml.dom. But I am not satisfied with the performance of loading the .osm file because later on, I will have to add/create more vehicles as containers that will be simulated onto the same road.
Suggestions needed:
This is my first time solving a problem related to maps. I need suggestions on how to proceed with this set of requirements, keeping performance and efficiency in mind. Suggestions in terms of implementation will be much appreciated. Thanks in advance!
Simulating lots of vehicles by running lots of Docker containers in parallel might work, I suppose. Maybe you're initialising the same image with different start locations etc. passed in as ENV vars? As a practical way of doing agent simulations this sounds a bit over-engineered to me, but as an interesting Docker experiment it might make sense.
Maybe you'll need a central thing for holding and sharing the state (positions of other vehicles) and serving that back to the multiple agents.
The challenge of loading an .osm file into some sort of database or internal map representation doesn't seem like the hardest part: it can be done once at initialisation, and I imagine it's not the most performance-critical part of this.
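As an illustration of a lighter-weight load than `xml.dom`, the stdlib's streaming `xml.etree.ElementTree.iterparse` can build just the node and way lookups without holding a full DOM in memory. The two-node OSM fragment below is made up for the example:

```python
import io
import xml.etree.ElementTree as ET

SAMPLE = io.BytesIO(b"""<osm>
  <node id="1" lat="52.0" lon="13.0"/>
  <node id="2" lat="52.1" lon="13.1"/>
  <way id="10"><nd ref="1"/><nd ref="2"/>
    <tag k="highway" v="residential"/></way>
</osm>""")

nodes, ways = {}, {}
for _, elem in ET.iterparse(SAMPLE, events=('end',)):
    if elem.tag == 'node':
        nodes[elem.get('id')] = (float(elem.get('lat')),
                                 float(elem.get('lon')))
        elem.clear()                 # release the element as we go
    elif elem.tag == 'way':
        ways[elem.get('id')] = [nd.get('ref') for nd in elem.findall('nd')]
        elem.clear()
```

For a real file you would pass the path instead of the `BytesIO` object; the same loop then streams through planet-scale extracts at roughly constant memory per element.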
I'm thinking you'll probably want to do "routing" through the road network (taking account of one ways etc?), giving your agents a purposeful path to follow to a destination. This will get more complicated if you want to model interactions with other agents e.g. you might want to model getting stuck in traffic because other agents are going the same way, and even decisions to re-route because of traffic, so you may want quite a flexible routing system, perhaps self-coded.
But there's lots of open source routing systems which work with OSM data, to at least draw inspiration from. See this list: https://wiki.openstreetmap.org/wiki/Routing#Developers
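If you do end up coding the routing yourself, the core is just a shortest-path search over the graph extracted from OSM ways. A toy Dijkstra sketch using the stdlib's `heapq` (the graph and weights are made up; real edge weights would be segment lengths or travel times):

```python
import heapq

graph = {                 # adjacency: node -> [(neighbour, cost), ...]
    'a': [('b', 1.0), ('c', 4.0)],
    'b': [('c', 1.0), ('d', 5.0)],
    'c': [('d', 1.0)],
    'd': [],
}

def route(start, goal):
    """Return the cheapest node sequence from start to goal, or None."""
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, weight in graph[node]:
            if nxt not in seen:
                heapq.heappush(queue, (cost + weight, nxt, path + [nxt]))
    return None
```

One-way streets fall out naturally from this representation: just don't add the reverse edge. Rerouting around traffic would mean re-running the search with updated edge weights.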
Popular choices like OSRM are designed to scale up to country-sized or even global OpenStreetMap data, but I imagine that's overkill for you (you're probably looking at simulating within a city road network?). Even so, it's probably easy enough to get working in a Docker container.
Or you might find something lightweight like the code of the JOSM routing plugin easier to embed in your Docker image and customize (although I see that's using a library called "JGraphT").
Then, working backwards from a calculated route, you can compute interpolated steps along that path, which will allow your simulated agents to take one step on each iteration (simulated movement).
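That interpolation step can be sketched as follows (straight-line interpolation only; real road segments would use proper geographic distance, and the waypoints below are made-up coordinates):

```python
def interpolate(p, q, n):
    """n evenly spaced points from p towards q, excluding q itself."""
    return [(p[0] + (q[0] - p[0]) * i / n,
             p[1] + (q[1] - p[1]) * i / n) for i in range(n)]

def steps_along(route, per_segment=4):
    """Expand a route's waypoints into per-tick agent positions."""
    pts = []
    for p, q in zip(route, route[1:]):
        pts.extend(interpolate(p, q, per_segment))
    pts.append(route[-1])              # finish exactly on the last waypoint
    return pts

path = steps_along([(0.0, 0.0), (1.0, 0.0)], per_segment=4)
# the agent advances in quarter steps: (0,0), (0.25,0), ..., (1,0)
```

Each simulation tick, an agent container simply moves to the next entry in its precomputed list.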

Enterprise Architect - Executable State Machine - Time based transitions

I have been working with Enterprise Architect 13.5 and simulating state machines for a while.
Until now I have managed transitions with simple triggers, which are available in the Simulation Events window.
I am looking for a way to make a time-based transition between two states, but I cannot figure out how to do it.
While the simulation is running, I can't find a way to implement a 30 s timeout between two states.
From https://sparxsystems.com/resources/user-guides/15.1/simulation/executable-state-machines.pdf, page 8:
"Trigger and Events -- An Executable StateMachine supports event handling for Signals only. To use Call, Timing or Change Event types you must define an outside mechanism
to generate signals based on these events."
Well, you have already answered your own question. When you open the properties of the transition, you can enter the trigger specification there. Once you save it, the trigger name appears along the transition.
Pretty straightforward, isn't it?
