How to model a file being compiled for different execution environments in UML?

I have a MATLAB function that is used in three different ways:
From within Matlab (.m)
As a .NET library (.dll)
As a standalone binary (.exe)
This makes three different artifacts deployed on three different execution environments (or nodes in general). From the .m file I create the .dll and the .exe using the MATLAB Compiler (mcc).
In my current model the files are left unrelated. How would I model that the .dll and .exe are compiled from .m using MCC?
Also, how should I relate the interfaces exposed by each? The environments have very different type systems.

I understand that you have a component made of a function (or a class):
The .m file is the source code of this function. It is therefore an artifact that manifests/embodies the abstract concept of your function in a digital format.
At the same time the .m is compiled into a .dll and a .exe, which both embody/manifest the same function, but in different forms. Hence, all three artifacts <<manifest>> the same function.
But the .dll and the .exe also depend on the .m. So you could add another dependency, which you could further clarify with an ad-hoc stereotype (e.g. <<generated from>>?).
The three artifacts could be deployed independently on nodes (including the .m file, which could be executed directly in a MATLAB execution environment nested in a node). If you want to show this on the same diagram, you could:
Show the deployment with the artifacts nested directly in the nodes, adding the dependencies to the diagram.
Or keep the artifacts apart and use the <<deploy>> dependency notation.

Create a reified Compilation class that has an association with Source File, an association with an abstract Compiler Output File, and an association with Compiler. Create two subclasses of Compiler Output File: one called Dynamic Linked Library File and one called Executable File. This pattern makes explicit how compilation happens.
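If it helps to see the relationships concretely, here is a minimal sketch of that pattern written as plain C# classes. The code is purely illustrative (in the actual model these would be UML classes and associations, not code), and the member names are made up for the example:

```csharp
// Illustrative sketch only: the reified Compilation pattern from the answer
// above, written as plain classes so the associations are explicit.
abstract class CompilerOutputFile { public string Path; }
class DynamicLinkedLibraryFile : CompilerOutputFile { }   // the .dll artifact
class ExecutableFile : CompilerOutputFile { }             // the .exe artifact

class SourceFile { public string Path; }                  // the .m file
class Compiler { public string Name = "MATLAB mcc"; }     // the tool doing the work

// Compilation is reified as a first-class object tying the three together.
class Compilation
{
    public SourceFile Source;           // association with Source File
    public Compiler Compiler;           // association with Compiler
    public CompilerOutputFile Output;   // association with the abstract output file
}
```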

Related

CMake: the right way to deal with static and shared libraries

Let's consider a Debian or Ubuntu distro where one can install some library package, say libfoobar, and a corresponding libfoobar-dev. The first one contains the shared library object. The latter usually contains the headers, the static library variant and the CMake exported targets (say, FoobarTargets.cmake) which allow CMake machinery like find_package to work flawlessly. For that to work, FoobarTargets.cmake has to contain both targets: the static and the shared variant.
On the other side, I've read many articles stating that I (as libfoobar's author) should refrain from declaring add_library(foobar SHARED ...) and add_library(foobar_static STATIC ...) in CMakeLists.txt in favour of building and installing the library with BUILD_SHARED_LIBS=ON and BUILD_SHARED_LIBS=OFF. But this approach will export only one variant of the library into FoobarTargets.cmake, because the second build will overwrite the first one.
So, the question is: how to do it the right way, so that on the one hand the package maintainer does not need to patch the library's CMakeLists.txt to export both variants properly, and on the other hand the project adheres to CMake's idea of the right way and avoids duplicate targets that differ only in being static or shared?
I wrote an entire blog post about this. A working example is available on GitHub.
The basic idea is that you need to write your FoobarConfig.cmake file in such a way that it loads one of FoobarSharedTargets.cmake or FoobarStaticTargets.cmake in a principled, user-controllable way that is also tolerant of only one or the other being present. I advocate for the following strategy:
If the find_package call lists exactly one of static or shared among the required components, then load the corresponding set of targets.
If the variable Foobar_SHARED_LIBS is defined, then load the corresponding set of targets.
Otherwise, honor the setting of BUILD_SHARED_LIBS for consistency with FetchContent users.
In all cases, your users will link to Foobar::Foobar.
Ultimately, you cannot have both static and shared imported in the same subdirectory while also providing a consistent build-tree (FetchContent) and install-tree (find-package) interface. But this is not a big deal since usually consumers want only one or the other, and it's totally illegal to link both to a single target.

What is necessary to build a CAP file that uses a proprietary package, besides the *.exp file?

I have a Java Card which supports some proprietary classes, say "ClassFoo" in package "packagename.foo". I have the documentation for these classes and the "foo.exp" file. But as I understand it, something else is necessary to build the CAP file, because without it the compiler reports errors like unknown import package and unknown class. Right? What is it?
The .exp file is used by the off-card Java Card converter. The converter converts standard Java class files to the .cap files which can be uploaded to the Java Card implementation (usually through GlobalPlatform).
To generate these class files you first need to compile the classes. For this the Java compiler needs the interface of the external classes on the class path, so vlp is correct that you need .class files to compile against. Usually those classes are packaged into a .jar file.
Now the actual implementation of these classes usually only runs on a Java Card runtime. The linking is performed using the .exp file, so the contents of the actual classes are not important during linking, and they are not used during execution either (unless you are running on jCardSim, of course).
So quite often you get a .jar that contains all the public interfaces, classes and methods but doesn't contain any implementation (other than return null or return 0 where required).
It may even be possible to generate your own stub and link against that if you know the full interface (from the Javadoc).

Find all classes involved in a method call

I have a .NET 4.0 C# Solution with a single .csproj (Library) having several thousand files.
I want to extract out a small subset of the functionality from the thousands of files.
e.g. I want to extract the functionality of the MyLibrary.RelevantMethod() method into another library.
The aim is to create a new .csproj with the bare minimum class files needed to achieve this functionality.
I have a Program.cs which invokes the functionality, and I can navigate through the flow to find all classes involved. It's just that there are too many (though still a small subset of all classes).
Solutions tried:
The usual brute force of going through the flow from the method (F12) and copying over every class file and associated files needed for it to compile. This is taking a lot of time, but I know that if I keep at it, it'll be done, so that is what I am doing right now.
The other option was to copy over the whole project and eliminate folders of classes based on instinct/namespace references, build to verify and keep at it. This got nasty because only a subset of the classes in a folder was needed.
The VS 2013 code-map graphs became unmanageable within three drill-downs. Sequence diagrams became too complex as well.
Call hierarchy seemed the most promising, showing all the classes involved visually, but there is still the manual task of drilling through and copying the classes.
While I manually continue extracting classes one by one using the call hierarchy, is there a faster or more automated way (semi-automated works as well) to determine all the classes involved in a method call in C#?
If I can get the list, I can do a search on the physical folders containing the .cs files (every class has an equivalent .cs file) and just copy them over.
You can find all classes involved in a method call with the Runtime Flow tool (developed by me). From the Runtime Summary window you can also copy these classes to the Clipboard for the selected module or a namespace.
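If a scriptable, semi-automated approach is acceptable, a rough sketch built on the Roslyn APIs (the Microsoft.CodeAnalysis.CSharp packages) could walk the call graph from the target method and list the containing types and their source files. The path and method name below are placeholders, and virtual dispatch, reflection and field/property types are deliberately ignored:

```csharp
// Sketch: walk the call graph from a given method and collect every type
// whose members it reaches, using Roslyn. Paths and names are placeholders.
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;

class CallGraphWalker
{
    static void Main()
    {
        // Parse every .cs file of the library into a single compilation.
        var trees = Directory.EnumerateFiles(@"C:\src\MyLibrary", "*.cs", SearchOption.AllDirectories)
                             .Select(f => CSharpSyntaxTree.ParseText(File.ReadAllText(f), path: f))
                             .ToList();
        // Add further metadata references as the project requires.
        var mscorlib = MetadataReference.CreateFromFile(typeof(object).Assembly.Location);
        var compilation = CSharpCompilation.Create("MyLibrary", trees, new[] { mscorlib });

        // Locate the entry method (placeholder name).
        var entry = compilation.GetSymbolsWithName("RelevantMethod", SymbolFilter.Member)
                               .OfType<IMethodSymbol>()
                               .First();

        var visited = new HashSet<IMethodSymbol>(SymbolEqualityComparer.Default);
        var types = new HashSet<INamedTypeSymbol>(SymbolEqualityComparer.Default);
        Walk(entry, compilation, visited, types);

        // Every collected type has a declaring source file; these are the .cs files to copy.
        foreach (var t in types)
            foreach (var loc in t.Locations.Where(l => l.IsInSource))
                Console.WriteLine($"{t} -> {loc.SourceTree.FilePath}");
    }

    static void Walk(IMethodSymbol method, Compilation compilation,
                     HashSet<IMethodSymbol> visited, HashSet<INamedTypeSymbol> types)
    {
        if (!visited.Add(method)) return;
        types.Add(method.ContainingType);

        foreach (var syntaxRef in method.DeclaringSyntaxReferences)
        {
            var node = syntaxRef.GetSyntax();
            var model = compilation.GetSemanticModel(node.SyntaxTree);

            // Follow method calls and constructor calls found in the body.
            foreach (var call in node.DescendantNodes()
                                     .Where(n => n is InvocationExpressionSyntax
                                              || n is ObjectCreationExpressionSyntax))
            {
                if (model.GetSymbolInfo(call).Symbol is IMethodSymbol callee
                    && callee.Locations.Any(l => l.IsInSource))
                    Walk(callee, compilation, visited, types);
            }
        }
    }
}
```

The printed file paths are candidate .cs files to copy into the new project; anything reached only through interfaces, base-class calls or reflection would still have to be tracked down by hand.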

How to generate a dependency diagram from a set of XSD files?

See the title: I have around 50 XSD files importing each other (with tags) and I need to analyze their dependencies.
Do you know any software (preferably free) to generate a dependency diagram automatically from these files?
I did not find any existing program to do that, so... I developed my own! It is called GraphVisu.
There is a first program to generate the graph structure from seed XSD files, and another one to visualise graphs. I also included a detection of clusters of interrelated nodes (called "strongly connected components" in graph theory).
Feel free to use it!
I am not aware of any free solution tailored specifically for XSD. If I had to build it from freely available components, I would probably consider GraphViz. You would need to write a module that parses the XSD files and generates the input data for GraphViz. The latter is fairly trivial, as long as you take into account how schemaLocation works and is resolved, and handle circular dependencies correctly. The good thing is that GraphViz is supported on a wide set of platforms, and as long as you can parse XML, you should be set.
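A rough sketch of such a module (assuming all schemas sit in one folder; the folder and output names are placeholders, and relative schemaLocation values are not resolved against the importing file's directory) could collect the xs:import/xs:include edges and emit a GraphViz dot file:

```csharp
// Sketch: scan a folder of XSD files, follow xs:import/xs:include
// schemaLocation attributes, and write a GraphViz dot file.
using System.IO;
using System.Linq;
using System.Text;
using System.Xml.Linq;

class XsdDependencyGraph
{
    static void Main()
    {
        XNamespace xs = "http://www.w3.org/2001/XMLSchema";
        var dot = new StringBuilder("digraph xsd_deps {\n");

        foreach (var file in Directory.EnumerateFiles(@"C:\schemas", "*.xsd"))
        {
            var fromFile = Path.GetFileName(file);
            var doc = XDocument.Load(file);

            // Both xs:import and xs:include carry an optional schemaLocation.
            var targets = doc.Root.Elements()
                .Where(e => e.Name == xs + "import" || e.Name == xs + "include")
                .Select(e => (string)e.Attribute("schemaLocation"))
                .Where(loc => !string.IsNullOrEmpty(loc));

            foreach (var target in targets)
                dot.AppendLine($"  \"{fromFile}\" -> \"{Path.GetFileName(target)}\";");
        }

        dot.AppendLine("}");
        File.WriteAllText("xsd_deps.dot", dot.ToString());
        // Render with: dot -Tsvg xsd_deps.dot -o xsd_deps.svg
    }
}
```

Rendering the dot file then gives the dependency diagram, and cycles between schemas show up directly in the drawing.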
I've also developed my own, in the form of an XML Schema Refactoring (XSR) add-on for QTAssistant. This particular feature set has been around since 2004, so it works really well, including with WSDL and XSD files.
I can interpret what you asked in a few different ways, so I'll describe what you could do with XSR:
XSD file dependencies:
This is a simple one, showing a hierarchical layout.
This is a more complex one, showing an organic layout.
Intra-XSD file schema component dependencies: these can be filtered on arbitrary criteria (not sure what you meant by "with tags").
XSD file set schema component dependencies (same as the above, but one can navigate across different files).
The tool comes with an automation library, where you can write a few lines of C# or Java script code which you can then invoke using the QTAssistant shell or a command-line shell to integrate it with an automated build process.
Other features include the ability to export the underlying data as GraphML, in case you wish to analyse or process the graph further (e.g. topological sorting, cycle detection, etc.).

VC++ merge multiple COM DLLs into one

Let's say we have multiple libraries (DLLs) whose features we want to use in an application, and we want to use them as a single DLL.
Is it possible to merge the DLLs into a single one, with all the features packed into it? I am not looking at the option to write a wrapper.
EDIT:
I've revisited the problem. Now all I want to do is bring all the projects under one solution and get a single DLL as the output, instead of each project having its own independent output. Is this possible?
You can't literally merge several compiled .dll files into one. Your best bet is to put all files into a single project and recompile as a single library. You will likely have conflicts you'll have to resolve manually.
If you really have several COM in-proc servers you will also have to merge the data that facilitates class factories and COM registration - you will have to do that manually.
