Is there a way to use hierarchical blocks in REDHAWK?
For example, say I want to make a digital modulator that is a composition of filters, upsamplers, etc., and I want to use it as a single block in a waveform project that has other hierarchical components as well. How would I combine the already-made filter and upsampler blocks into the digital modulator block using REDHAWK?
You currently cannot create waveforms of waveforms. A waveform can, however, have external ports and external properties, allowing you to chain waveforms together dynamically and treat each one much like a component from a programmatic perspective. In the example below I launch two waveforms on the domain and connect them; these waveforms are the examples that come bundled with REDHAWK and have external ports and properties.
>>> from ossie.utils import redhawk
>>> dom = redhawk.attach()
>>> wf1 = dom.createApplication('/waveforms/rh/FM_mono_demo/FM_mono_demo.sad.xml')
>>> wf2 = dom.createApplication('/waveforms/rh/FM_mono_demo/FM_mono_demo.sad.xml')
>>> wf1.connect(wf2)
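Once connected, the chained waveforms can be controlled much like a single component. A minimal, illustrative continuation of the session above:
>>> wf1.start()
>>> wf2.start()
>>> # ... later, tear the applications down ...
>>> wf1.releaseObject()
>>> wf2.releaseObject()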
There isn't a construct for a component of components (other than a waveform). As of the REDHAWK 2.1 beta release there is a 'shared address' construct that allows you to do something similar to what you seem to be asking for. The 'shared address' BULKIO pattern was specifically developed to create high-speed connections between components and reduce the processing load caused by IO. Take a look at https://github.com/RedhawkSDR/core-framework/tree/develop-2.1/docs/shared-address and see if this is what you are looking for. It will allow you to launch 'N' components built according to the shared address pattern into a single component host while still retaining each individual component's property interfaces, etc.
If you are more specific about why you want to use a hierarchical block, a more targeted answer may be possible.
I have built a loop station in JSyn. It allows you to record and play back samples. By playing multiple samples you can layer up sounds (e.g. one percussion sample, one melody, etc.).
JSyn allows me to connect each of the sample players directly to my LineOut, where everything is mixed automatically. Now I would like to record the sound, just as the user hears it, to a .wav file, but I am not sure what I should connect the input port of the recorder to.
What is the smartest way to connect the audio output of all samples to the WaveRecorder?
In other words: the Programmer's Guide has an example for this, but I am not sure how to create the "finalMix" used there.
Rather than using multiple LineOuts, just use one LineOut.
You can mix all of your signals together using a chain of MultiplyAdd units.
http://www.softsynth.com/jsyn/docs/javadocs/com/jsyn/unitgen/MultiplyAdd.html
Or you can use a Mixer unit.
http://www.softsynth.com/jsyn/docs/javadocs/com/jsyn/unitgen/MixerStereoRamped.html
Then connect the mix to your WaveRecorder and to your single LineOut.
Origen has modes for the top-level DUT and IP. However, the mode API doesn't allow the flexibility to define attributes at will. There are pre-defined attributes, some of which (e.g. typ_voltage) look specific to a particular company or device.
In contrast, the Parameters module does allow flexible parameter/attribute definitions to be created within a 'context'. What is really the conceptual difference between a chip 'mode' and a parameter 'context'? They both require the user to set them.
add_mode :mymode do |m|
  m.typ_voltage = 1.0.V
  # I believe I am limited to what I can define here
end

define_params :mycontext do |params|
  params.i.can.put.whatever.i.want = 'bdedkje'
end
They both contain methods with_modes and with_params that look similar in function. Why not make the mode attributes work exactly like the more flexible params API?
Thanks!
Being able to arbitrarily add named attributes to a mode seems like a good idea to me, but you are right that it is not supported today.
No particular reason for that, other than nobody has seen a need for it until now, but there would be no problem accepting a PR to add it.
Ideally, when implementing it, it would be good to do it via a module which can then be included into other classes to provide the same functionality, e.g. to give pins, bits, etc. the same ability.
I am trying to extract some features from an audio sample using OpenSMILE, but I'm realizing how difficult it is to set up a config file.
The documentation is not very helpful. The best I could do was run some of the sample config files that are provided, see what came out, and then go into the config file and try to determine where the feature was specified. Here's what I did:
I used the default feature set from the INTERSPEECH 2010 Paralinguistic Challenge (IS10_paraling.conf).
I ran it over a sample audio file.
I looked at what came out. Then I read the config file in depth, trying to find out where the feature was specified.
Here's a little markdown table showing the results of my exploration:
| Feature generated | instruction in the conf file |
|-------------------|---------------------------------------------------------|
| pcm_loudness | I see: 'loudness=1' |
| mfcc | I see a section: [mfcc:cMfcc] |
| lspFreq | no matches for the text 'lspFreq' anywhere |
| F0finEnv          | I see F0finalEnv = 1 under [pitchSmooth:cPitchSmoother]  |
What I see is four different features, each generated by a different instruction in the config file. Well, for one of them there was no discernible instruction in the config file that I could find. With no pattern, intuitive syntax, or apparent system, I have no idea how I can eventually figure out how to specify the features I want to generate.
There are no tutorials, no YouTube videos, no Stack Overflow questions, and no blog posts out there talking about how this could be done, which is really surprising since this is obviously a huge part of using OpenSMILE.
If anyone finds this, please can you advise me on how to create custom config files for OpenSMILE? Thanks!
Thanks for your interest in openSMILE and your eagerness to build your own configuration files.
Most users in the scientific community actually use openSMILE for its pre-defined config files for the baseline feature sets, which in version 2.3 are even more flexible to use (more command-line options to output to different file formats, etc.).
I admit that the documentation provided is not as good as it could be. However, openSMILE is a very complex piece of software with a lot of functionality, of which only the most important parts are currently well documented.
The best starting point is to read the openSMILE book and the SIG'MM tutorials, all referenced at http://opensmile.audeering.com/. The book contains a section on how to write configuration files. The next important element is the online help of the binary:
SMILExtract -L lists the available components
SMILExtract -H cComponentName lists all options which a given component supports (and thus also features it can extract) with a short description for each
SMILExtract -configDflt cComponentName gives you a template configuration section for the component with all options listed and defaults set
Due to the architecture of openSMILE, which is centered on incremental processing of all audio features, there is no easy syntax (at least not yet) to define the features you want. Rather, you define the processing chain by adding components:
data sources will read in data (from audio files, csv files, or microphone, for example),
data processors will do signal processing and feature extraction in individual steps, with one data processor per step (for extracting MFCCs, for example: windowing, window function, FFT, magnitudes, mel-spectrum, cepstral coefficients),
data sinks will write data to output files or send results to a server etc.
You connect the components via the "reader.dmLevel" and "writer.dmLevel" options. These define a name of a data memory level that the components use to exchange data. Only one component may write to one level, i.e. writer.dmLevel=levelName defines the level and may appear only once. Multiple components can read from this level by setting reader.dmLevel=levelName.
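As an illustration, here is a minimal, untested sketch of such a chain, wired together purely through matching dmLevel names. The component and option names follow the patterns used in the bundled config files; treat it as a template rather than a ready-to-run configuration.
[componentInstances:cComponentManager]
instance[dataMemory].type = cDataMemory
instance[waveIn].type = cWaveSource
instance[framer].type = cFramer
instance[energy].type = cEnergy
instance[csvSink].type = cCsvSink
; print information about each data memory level (see below)
printLevelStats = 5

; the wave source creates the 'wave' level...
[waveIn:cWaveSource]
writer.dmLevel = wave
filename = input.wav

; ...which the framer reads from, writing 25 ms frames every 10 ms to 'frames'
[framer:cFramer]
reader.dmLevel = wave
writer.dmLevel = frames
frameSize = 0.025
frameStep = 0.010

; RMS energy per frame, written to the 'energy' level
[energy:cEnergy]
reader.dmLevel = frames
writer.dmLevel = energy
rms = 1

; the sink reads the 'energy' level and writes it to a CSV file
[csvSink:cCsvSink]
reader.dmLevel = energy
filename = output.csv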
In each component you then set the options to enable computation of features and set parameters for them. To answer your question about lspFreq: this is probably enabled by default in the cLsp component, so you don't see an explicit option for it. In future versions of openSMILE the practice of setting all options explicitly will be followed more strictly.
The names of the features in the output are automatically defined by the components. Often each component adds a part to the name, so you can infer the full chain of processing from the name. The options nameAppend and copyInputName (available to most data processors) control this behaviour, although some components might internally override them or change the behaviour a bit.
To see the names (and other info) for each data memory level, including e.g. which features a component in the configuration produces, you can set the option "printLevelStats=5" in the componentInstances:cComponentManager section.
As everything in openSMILE is built for real-time incremental processing, each data memory level has a buffer, which by default is a ring buffer to keep the memory footprint constant when the application runs for a long time.
Sometimes you might want to summarise features over a window of a given length (e.g. with the cFunctionals component). In this case you must ensure that the buffer size of the input level to this component is large enough to hold the full window. You do this via the following options:
writer.levelconf.isRb = 1/0 : sets the buffer type to ring buffer (1) or fixed-size buffer (0)
writer.levelconf.growDyn = 1/0 : sets the buffer to grow dynamically if more data is written to it (1)
writer.levelconf.nT = N : sets the size of the buffer in frames. Alternatively you can use bufferSizeSec=x to set the size in seconds and have it converted to frames automatically.
In most cases the sizes will be set correctly automatically, and subsequent levels inherit the configuration from the previous levels. One exception is when you set a cFunctionals component to read the full input (e.g. to produce only one feature vector at the end of the file); then you must set growDyn=1 on the level that the functionals component reads from. Another is when you use a variable framing mode (see below).
The cFunctionals component provides frameMode, frameSize, and frameStep options. frameMode can be full (one vector produced at the end of the input/file), list (you specify a list of frames), var (frames are defined on the fly by messages, e.g. from a cTurnDetector component), or fix (a fixed-length window). Only in the case of fix does frameSize set the size of this window and frameStep the rate at which the window is shifted forward. In the case of fix the buffer size of the input level is set correctly automatically; in the other cases you have to set it manually.
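Building on the sketch above, a full-input functionals setup could look roughly like this (again an untested template; which functionals to compute is selected via further cFunctionals options not shown here):
; let this level grow so cFunctionals can read the whole file at once
[energy:cEnergy]
reader.dmLevel = frames
writer.dmLevel = energy
writer.levelconf.isRb = 0
writer.levelconf.growDyn = 1

; summarise the whole 'energy' level into one vector at the end of the file
[functionals:cFunctionals]
reader.dmLevel = energy
writer.dmLevel = func
frameMode = full

[csvSink:cCsvSink]
reader.dmLevel = func
filename = output.csv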
I hope this helps you get started! With every new openSMILE release we at audEERING are trying to document things a bit better and to unify behaviour across the various components.
We also welcome contributions from the community (e.g. anybody willing to write a graphical configuration file editor where you drag/drop components and connect them graphically? ;)) - although we know that more documentation would make this easier. Until then, you always have the source code to read ;)
Cheers,
Florian
Many of my components will be dealing with external buffers from C libraries and I'm trying to avoid any extraneous copies. I see two signatures for pushPacket in the output port declaration and both take a vector type. I've searched for examples and the only one I've found in the provided components was in the USRP_UHD where a sequence was created using an existing buffer and a specialized pushPacket implementation was called. This required the author to implement and use a custom port with a specialized pushPacket call.
Is there a standard way of doing this so I don't have to create a special library of port wrappers and customize the ports for every component? Are there any plans to add a raw data version of pushPacket to the output ports like the example shown below?
Given a bulkio::OutLongPort:
void pushPacket(const CORBA::Long* items, size_t nitems, const BULKIO::PrecisionUTCTime& T, bool EOS, const std::string& streamID);
This question is in regards to Redhawk version 1.9.
There is a plan for something like this in 1.10. You can see the source for that on the develop-1.10 branch on GitHub. Check out bulkio_out_port.h.
I use a thin CQRS read layer to provide denormalized lists/reporting data for the UI.
In some parts of my application I want to provide a search box so the user can filter through the data.
Lucene.NET is my full text search engine of choice at the moment, as I've implemented it before and am very happy with it.
But where does the searching side of things fit in with CQRS?
I see two options but there are probably more.
1] My controller can pass the search string to a search layer (Lucene.NET), which returns a list of IDs that I can then pass to the CQRS read layer. The read layer will take these IDs and assemble them into a WHERE ID IN (1,2,3) clause, ultimately returning a DataTable or IEnumerable back to the controller.
List<int> ids = searchLayer.SearchCustomers("searchString");
result = readLayer.GetCustomers(ids);
2] My thin read layer can have searching coded directly into it, so I just call
readLayer.GetListOfCustomers("search string", page, pageSize);
Remember that using CQRS doesn't mean that you use it in every part of your application. Slicing an application into smaller components allows using various architectural principles and patterns as they make sense. A full text search API might be one of these components.
Without knowing all the details of your application, I would lean towards a single entry point from your controller into your search layer (your option #2 above). I think your controller shouldn't know that it needs to call layer #1 for full-text enabled searching and layer #2 for just regular WHERE clause type searching.
I would assume you would have two different 'contexts' (e.g. SQLContext and LuceneContext), and these would be dependencies injected into your ReadLayer. Your read layer logic should then make the decision on when to use LuceneContext and when to use SQLContext; your controller won't know and shouldn't know.
This also allows you to swap out LuceneContext for MongoContext if there's a compelling reason in the future without your controller knowing or having to change.
And if you use interfaces for everything (e.g. ISearchContext is implemented by LuceneContext and MongoContext), then really the only change for swapping out contexts is in your IoC container's initialization/rules.
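To make that shape concrete, here is a rough illustrative sketch; all names and signatures below are hypothetical, not an existing API:
using System.Collections.Generic;

// Hypothetical DTO for the denormalized list data.
public class CustomerListItem
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Full-text search abstraction: implemented by LuceneContext today,
// could be swapped for MongoContext later via the IoC container.
public interface ISearchContext
{
    IEnumerable<int> SearchCustomerIds(string searchText);
}

// Thin SQL read context that hydrates the denormalized list data by ID.
public interface ISqlContext
{
    IEnumerable<CustomerListItem> GetCustomers(IEnumerable<int> ids, int page, int pageSize);
}

public class CustomerReadLayer
{
    private readonly ISearchContext _search;  // injected by the IoC container
    private readonly ISqlContext _sql;        // injected by the IoC container

    public CustomerReadLayer(ISearchContext search, ISqlContext sql)
    {
        _search = search;
        _sql = sql;
    }

    // The controller only ever calls this (option #2); the read layer decides
    // when to use the full-text index and when to query SQL directly.
    public IEnumerable<CustomerListItem> GetListOfCustomers(string searchText, int page, int pageSize)
    {
        var ids = _search.SearchCustomerIds(searchText);
        return _sql.GetCustomers(ids, page, pageSize);
    }
}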
So, go with option #2, inject your dependencies, have your controller work only through your read layer, and you should be good to go. Hopefully this helps!