REDHAWK generates code for each port but puts all the In_i and Out_i classes into a single pair of files: port_impl.h and port_impl.cpp. Why are these generated classes put into a single file? For most components, one must add code to the port method bodies to implement the component's functionality. One can keep that functionality in functions in additional source files and simply call those functions from the port_impl methods, minimizing changes to the generated files, but one must still re-add those calls to each method whenever adding a port forces port_impl to be regenerated. If each port's generated code were put into a separate file, then adding a port would simply generate an additional file rather than overwrite an existing one. This would make adding ports much easier.
A better solution is to create child classes of the ports you would like to extend, each in its own file. Then, in the component's constructor, you can delete the pointers to the generated ports (removing the old implementation) and reassign them to instances of your new, extended implementations. I believe the USRP_UHD code has an example of this method.
In more recent versions of REDHAWK, this is the only way to add functionality like this, as all BULKIO port implementations have been moved into the framework and are no longer generated.
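The pointer-swap pattern described above can be sketched in a self-contained way. The class names below are illustrative stand-ins, not actual REDHAWK types; in a real component the base class would be the generated (or framework-provided) port class:

```cpp
#include <string>

// Stand-in for a generated port class: it exposes a virtual method that the
// generated code would normally implement with default behavior.
struct InPortBase {
    virtual ~InPortBase() {}
    virtual std::string pushPacket(const std::string& data) {
        return "generated: " + data;   // default generated behavior
    }
};

// Hand-written subclass in its own file; overrides only what needs to change.
struct MyInPort : InPortBase {
    std::string pushPacket(const std::string& data) override {
        return "custom: " + data;      // component-specific behavior
    }
};

struct Component {
    InPortBase* inPort;
    Component() {
        inPort = new InPortBase();     // what the generated constructor does
        delete inPort;                 // remove the generated implementation
        inPort = new MyInPort();       // install the extended one
    }
    ~Component() { delete inPort; }
};
```

Because the subclass lives in its own file, regenerating the component leaves it untouched; only the two lines in the constructor need to be re-applied.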
I've worked through the "Integrating Data" guide on the Spring website and have been trying to determine how to use configuration settings (substitution) in the integration.xml file rather than hard code various items. This is primarily driven by a desire to externalise some of the configuration from the XML and take advantage of Spring Boot's ability to allow for externalised configuration.
I've been trying to determine the solution for a while now and thought it's likely to be an easy answer (for those who know how).
In the snippet below (taken from the guide) I've used ${outputDir} as a placeholder for a configuration item I'll pass into the application:
<file:outbound-channel-adapter id="files"
mode="APPEND"
charset="UTF-8"
directory="${outputDir}"
filename-generator-expression="'HelloWorld'"/>
Essentially, I'm trying to determine what I need to do to get the ${outputDir} substitution working.
As part of working through the problem I reduced the code down to a demo that I've uploaded to BitBucket:
integration.xml will just copy files from a file:inbound-channel-adapter directory to a file:outbound-channel-adapter directory
The Application class uses Spring Boot to load the configuration into a DemoIntegration instance and it's the fields in that instance that I'd like to substitute into integration.xml at runtime.
Unless I'm mistaken (when I get this to work) I should be able to override the inputDir and outputDir items in integration.xml.
Your integration.xml references ${inputDir}, which is not there.
Just to make it work with the existing config, add or change the application.properties file on your classpath so it contains inputDir (e.g. inputDir=/tmp/in) and outputDir entries. This way it matches the variables used in your config file.
If you want to stick with your naming, then change the XML to use ${demo.inputDir}. These are the names you are using in your existing application.properties.
And if you want to stick with your @ConfigurationProperties, you can put #{demoConfiguration.inputDir} in the XML to access the bean where your config is stored. Note that your code currently fails (at least for me) because you basically define the bean twice (once via @EnableConfigurationProperties and once via @ComponentScan + @Component on the config class).
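For the first option, a minimal application.properties on the classpath could look like this (the paths are illustrative):

```properties
# Matches the ${inputDir} and ${outputDir} placeholders in integration.xml
inputDir=/tmp/in
outputDir=/tmp/out
```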
To remove the stale object issue (i.e., when we run the test script for multiple inputs, it fails on the second iteration because the object is not cleared at the end of each run) in my script, I have added the Always Search configuration in the designer file. After this, my script runs successfully on multiple inputs, but if I need to add new objects to the same designer file, the designer file will be regenerated and the Always Search configuration changes will be lost.
Is there any way to retain the Always Search configuration in the designer file even when the designer file is regenerated?
When you generate a UI map there are actually two files that come with it. Firstly, as you've discovered, there's a generated file with all the ugly code produced by the Coded UI Test Builder. Any change made to the map through the builder will regenerate this file. The second file is a partial class that accompanies the generated designer class. This file does NOT get regenerated, but as a partial it shares all the same object references and properties as the designer file (it just looks empty). You can reference the control you want to add this property to in that partial class, and your change will survive regeneration.
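As a sketch, the hand-written partial might apply the setting like this (the class name follows the default Coded UI layout; the control property UIMainWindow is a hypothetical example from the generated map):

```csharp
// UIMap.cs — the hand-written partial class; NOT regenerated by the builder.
public partial class UIMap
{
    public void ConfigureSearch()
    {
        // Apply AlwaysSearch to a control declared in the generated partial.
        this.UIMainWindow.SearchConfigurations.Add(SearchConfiguration.AlwaysSearch);
    }
}
```

Call ConfigureSearch() from your test initialization so the configuration is in place before playback begins.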
The other alternative to this, albeit probably not a good idea, is to put
Playback.PlaybackSettings.AlwaysSearchControls = true;
inside of your test method/class initialize/test initialize. This will force the test(s) to always search for each and every control. As you might imagine, this can have a significant performance impact when you're dealing with large UI maps or particularly long test methods.
You might also set the control object's search configuration to always search. Keep in mind that this applies to the control and all of its children, so I would not advise putting it on a parent with several children, such as the document.
aControl.SearchConfigurations.Add(SearchConfiguration.AlwaysSearch);
I have a number of XSDs that are part of the enterprise definitions for several services at the client.
I would like to be able to take a single XSD and generate a DDIC structure from it (without the use of PI!)
Seeing as you can generate proxies directly from a WSDL, and this also generates structures and data elements from the XSD definitions inside the WSDL, there is obviously already ABAP code that does this.
But do you know what classes/function modules to use to achieve this? Perhaps there is a convenient utility function or class method that takes the XSD as input and generates the relevant DDIC objects?
Some background on why I need this:
Some of the services include variable sections that include a piece of XML containing the data for one of the enterprise XSD entities; I am hoping to have a DDIC representation of these, which I can fill at runtime and then convert to XML to include in the message.
There is a program on the system called SPROX_XSD2PROXY with which you can upload one or more XSD files which will generate proxy objects for you.
You also end up with a service consumer with a corresponding class and what looks like a dummy operation.
The program is fairly short; it uploads the file(s) to an XSTRING, then converts the XSD(s) to WSDL(s), and finally the WSDL(s) to proxy objects using methods of the class CL_PROXY_TEST_UTILS.
However, the result is satisfactory as it does give me a structure I can work with. And by examining the contents of those methods, it may be possible to build a more fine-tuned tool if I need one.
I'm looking for something similar conceptually to a Windows DLL. As a concrete example, suppose I have a function encrypt that I would like to share across several unrelated projects. If I want to change the implementation ideally I can do so once and every project has access to the new implementation. Is there a mechanism for doing this in Node.js?
Have a look at this document, especially the section "Writing a Library":
If you are writing a program that is intended to be used by others,
then the most important thing to specify in your package.json file is
the main module. This is the module that is the entry point to your
program.
and
If you have a lot of JavaScript code, then the custom is to put it in
the ./lib folder in your project.
Specify a main module in your package.json file. This is the module
that your users will load when they do require('your-library'). This
module should ideally expose all of the functionality in your library.
If you want your users to be able to load sub-modules from the "guts"
of your library, then they'll need to specify the full path to them.
That is a lot of work to document! It's better and more future-proof
to simply specify a main module, and then, if necessary, have ways to
dynamically load what they need.
For example, you might have a flip library that is a collection of
widget objects, defined by files in the flip/lib/widgets/*.js files.
Rather than having your users do require('flip/lib/widgets/blerg.js')
to get the blerg widget, it's better to have something like:
require('flip').loadWidget('blerg').
When generating my DAL files with SubSonic, I'd like the names of the files to be [TableName].gen.cs. The main reason for this is that the files are partial classes, and I would like to add some additional implementation details into another source file for the table, called [TableName].cs. This is somewhat the standard pattern for generated source files, and I'm wondering if it's possible with SubSonic? I'm using SubSonic 2.2.
I thought you might be able to do this by using a set of custom templates, but the CS_ClassTemplate.aspx (or VB_ClassTemplate.aspx) doesn't control the file name of the class.
I don't think this is possible.
As an alternative, you can do what I do. I have a "generated" directory, such as \database\generated, and then I put my partial classes at \database\custom. As long as the namespaces of the files in the two different directories match (like .database or whatever), it works fine. By using two different directories, it's easier to find your custom files without looking at the generated ones.
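As a sketch of that layout (the namespace and class names are illustrative), the two partials only need to share a namespace for the compiler to merge them:

```csharp
// \database\generated\Product.gen.cs — regenerated by SubSonic
namespace MyApp.Database
{
    public partial class Product
    {
        // generated columns and properties live here
    }
}

// \database\custom\Product.cs — hand-written; never touched by regeneration
namespace MyApp.Database
{
    public partial class Product
    {
        public string DisplayName()
        {
            return "Product";
        }
    }
}
```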