File structure for AUTOSAR SWC

Is there any option through which the DaVinci tool can generate multiple .c files for a single SWC?
I am planning to split the functionality of my SWC across several source files.

You would need to decompose the existing component first and distribute the functionality into different runnables and atomic SWCs. DaVinci generates only one file per SWC.

How to model a file being compiled for different execution environments in UML?

I have a Matlab function that is used in three different ways:
From within Matlab (.m)
As a .NET library (.dll)
As a standalone binary (.exe)
This makes three different artifacts deployed on three different execution environments (or nodes in general). From the .m-file I create the .dll and .exe using Matlab MCC (compiler).
In my current model the files are left unrelated. How would I model that the .dll and .exe are compiled from .m using MCC?
Also, how should I relate the interfaces exposed by each? The environments have very different type systems.
I understand that you have a component made of a function (or a class):
The .m file is the source code of this function. It is therefore an artifact that manifests/embodies the abstract concept of your function in a digital format.
At the same time, the .m is compiled into a .dll and an .exe, which both embody/manifest the same function in yet different forms. Hence, all three artifacts <<manifest>> the same function.
But the .dll and the .exe also depend on the .m. So you could add another dependency, which you could further clarify with an ad-hoc stereotype (e.g. <<generated from>>).
The three artifacts could be deployed independently on nodes (including the .m file, which could be executed directly on a Matlab execution environment nested in a node). If you want to show this on the same diagram you could:
Show the deployment with nested artifacts directly on the nodes, adding the dependencies in the diagram.
Or keep the artifacts apart and use the <<deploy>> dependency notation.
Create a reified Compilation class that has an association with Source File, an association with an abstract Compiler Output File, and an association with Compiler. Create two subclasses of Compiler Output File: one called Dynamic Linked Library File and one called Executable File. This pattern makes explicit how compilation happens.
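To make the reified-Compilation pattern concrete, here is a minimal sketch in TypeScript; the class and property names simply mirror the description above and are invented for illustration, not taken from any standard API:

```typescript
// Hypothetical sketch of the reified Compilation pattern described above.

class Compiler {
  constructor(public name: string) {} // e.g. "Matlab MCC"
}

class SourceFile {
  constructor(public path: string) {} // e.g. "myfunc.m"
}

abstract class CompilerOutputFile {
  constructor(public path: string) {}
}

class DynamicLinkedLibraryFile extends CompilerOutputFile {}
class ExecutableFile extends CompilerOutputFile {}

// The reified Compilation makes explicit which compiler turned
// which source into which outputs.
class Compilation {
  constructor(
    public source: SourceFile,
    public compiler: Compiler,
    public outputs: CompilerOutputFile[],
  ) {}
}

const build = new Compilation(
  new SourceFile("myfunc.m"),
  new Compiler("Matlab MCC"),
  [new DynamicLinkedLibraryFile("myfunc.dll"), new ExecutableFile("myfunc.exe")],
);
```

The point of reifying Compilation as its own class is that the compile step itself becomes a first-class element you can annotate (compiler used, flags, date) rather than a bare dependency arrow.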

What are the advantages of using the file system to organize our code?

It is 2017, and as far as I know, the way programmers organize their code has not changed. We distribute our code into files and organize them in a tree structure (nested directories and files). When a codebase is huge and the relations between classes/components are complex, this organization strikes me as inefficient: with more files, either one directory holds more files or the directory depth increases. And since we handle the directories directly, navigation costs time and effort without tools like search.
Figure: a complex UML diagram from https://github.com/CMPUT301W15T09/Team9Project/wiki/UML
We can use CAD to design/draw complex things; mind maps can be created in a similar manner. For these, we do not need to deal with the file system. Can't we have something similar for code and hide the file system in a black box? Why have the fundamental organization methods not evolved in such a long time?
So I wonder: what advantages keep us from adopting a new way? What are the inherent advantages of using the file system to organize our code?
Different on-disk representations of source code have been tried (e.g. how Flash stores ActionScript inside binary .fla files), and they're generally unpopular. No one likes proprietary file formats. They also mean you can't use text-based source-control systems like Git, which means you can't do a text merge to resolve change conflicts.
We store source code in files in a tree structure (e.g. one OOP class or procedural module per file), with nested namespaces represented by nested directories, because it's intuitive (and again, for better cohesion with source-control systems).
Some languages enforce this: Java, for example, requires the source file to be named after the class it contains and to live in a directory path matching its package. For other languages like C# and C++ it just makes sense, because otherwise it's confusing when someone new to your codebase sees class TurboEncabulator inside a file named PrefabulatedAmulite.cs.
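As a side illustration of the same convention outside Java: in a TypeScript/ES-module codebase the import path mirrors the directory tree, so the nesting is visible at every use site (file and class names below are hypothetical):

```typescript
// Hypothetical layout - the import path mirrors the directory tree,
// and each file is named after the class it exports:
//
//   src/
//     encabulators/
//       TurboEncabulator.ts
//     main.ts

// --- src/encabulators/TurboEncabulator.ts ---
export class TurboEncabulator {
  engage(): string {
    return "side fumbling effectively prevented";
  }
}

// --- src/main.ts ---
// The nested directory shows up directly in the module path:
// import { TurboEncabulator } from "./encabulators/TurboEncabulator";
// new TurboEncabulator().engage();
```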

SuiteScript - one big script file, or multiple smaller files?

From a performance/maintenance point of view, is it better to write my custom NetSuite modules as one big JS file, or as multiple smaller script files?
If you compare it with server-side JavaScript, say Node.js, the most popular platform, every module is written in a separate file.
I generally take an object-oriented JavaScript approach and put each class in a separate file, which helps organise the code.
One approach you can take is to keep separate files during development and merge them with a JS minifier such as the Google Closure Compiler when you deploy for production. That can give you the best of both worlds, if you are really concerned about every last fraction of a second of performance.
If you look at the SuiteScript 2.0 architecture, it encourages a modular design, which is easier to manage since you load only the modules you need, and multiple code files (one per module) are easier to maintain when it comes to future enhancements, bug fixes and code reuse.
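As an illustration, here is a minimal sketch of one such single-purpose module file. It is written in TypeScript on the assumption of a build step that emits the AMD define() format SuiteScript 2.x loads; the module and function names are invented:

```typescript
/**
 * stringUtils.ts - a hypothetical utility module, kept in its own file
 * so other scripts can load just this functionality when they need it.
 *
 * @NApiVersion 2.1
 * @NModuleScope SameAccount
 */
import * as log from 'N/log';

// Generic helper shared as a library across entry-point scripts.
export function formatEntityName(first: string, last: string): string {
  const name = `${first} ${last}`.trim();
  log.debug({ title: 'formatEntityName', details: name });
  return name;
}
```

An entry-point script then lists this file among its define() dependencies (or imports it, in TypeScript) and loads only what it needs.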
Performance can never be judged by the line count of your module. We generally split modules to maintain the readability and simplicity of the code. It is good practice to put all generic functionality into a utility script and use it as a library across all the modules. Again, it depends on your code logic and programming style. So if you want to split your JS file into multiple segments for readability, I don't think it's a bad idea.

How to generate a dependency diagram from a set of XSD files?

See the title: I have around 50 XSD files importing each other (with tags) and I need to analyze their dependencies.
Do you know any software (preferably free) to generate a dependency diagram automatically from these files?
I did not find any existing program to do that, so... I developed my own! It is called GraphVisu.
There is a first program to generate the graph structure from the seed XSD files, and another to visualise the graphs. I also included detection of clusters of interrelated nodes (called "strongly connected components" in graph theory).
Feel free to use it!
I am not aware of any free solution tailored specifically to XSD. If I had to build one from freely available components, I would probably consider GraphViz. You would need to write a module that parses the XSD files and generates the input data GraphViz needs. The latter is fairly straightforward, provided you take into account how schema locations are resolved and handle circular dependencies correctly. The good thing is that GraphViz is supported on a wide range of platforms, and as long as you can parse XML, you should be set.
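A rough sketch of that approach, as a Node script in TypeScript (the file handling and regex are simplified placeholders; a real tool should use an XML parser and resolve schemaLocation values properly):

```typescript
// Hypothetical sketch: scan a directory of XSD files, extract
// schemaLocation values from <xs:import>/<xs:include>, and emit
// GraphViz DOT on stdout.
import * as fs from "fs";
import * as path from "path";

const dir = process.argv[2] ?? ".";
const edges: string[] = [];

for (const file of fs.readdirSync(dir).filter(f => f.endsWith(".xsd"))) {
  const text = fs.readFileSync(path.join(dir, file), "utf8");
  // Matches schemaLocation="..." on import/include elements, prefixed or not.
  const re = /<\w*:?(?:import|include)[^>]*schemaLocation="([^"]+)"/g;
  for (const m of text.matchAll(re)) {
    // basename() collapses relative schema locations to plain file names.
    edges.push(`  "${file}" -> "${path.basename(m[1])}";`);
  }
}

// Render with: dot -Tsvg deps.dot -o deps.svg
console.log(`digraph xsd_deps {\n${edges.join("\n")}\n}`);
```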
I've also developed my own, in the form of an XML Schema Refactoring (XSR) add-on for QTAssistant. This particular feature set has been around since 2004, so it works really well, including with WSDL and XSD files.
Your question can be interpreted in different ways, so I'll describe what you can do with XSR:
XSD file dependencies
This is a simple one, showing a hierarchical layout.
This is a more complex one, showing an organic layout.
Intra-XSD-file schema component dependencies: these can be filtered on arbitrary criteria (not sure what you meant by "with tags").
XSD file-set schema component dependencies (the same as above, but one can navigate across different files).
The tool comes with an automation library, where you can write a few lines of C# or JavaScript code which you can then invoke using the QTAssistant shell or a command-line shell to integrate it into an automated build process.
Other features include the ability to export the underlying data as GraphML, in case you wish to analyse or process the graph further (e.g. topological sorting, cycle detection, etc.).
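As an example of such further processing, a topological sort over the dependency graph also doubles as cycle detection; a short sketch in TypeScript with invented file names:

```typescript
// Hypothetical sketch: Kahn's algorithm over a schema-dependency
// graph (e.g. rebuilt from a GraphML export).
type Graph = Map<string, string[]>; // file -> files it imports

function topoSort(g: Graph): string[] | null {
  const indegree = new Map<string, number>();
  for (const node of g.keys()) indegree.set(node, 0);
  for (const deps of g.values())
    for (const d of deps) indegree.set(d, (indegree.get(d) ?? 0) + 1);

  const queue = [...indegree].filter(([, n]) => n === 0).map(([k]) => k);
  const order: string[] = [];
  while (queue.length > 0) {
    const node = queue.shift()!;
    order.push(node);
    for (const d of g.get(node) ?? []) {
      const n = indegree.get(d)! - 1;
      indegree.set(d, n);
      if (n === 0) queue.push(d);
    }
  }
  // Fewer sorted nodes than graph nodes means a cycle exists.
  return order.length === indegree.size ? order : null;
}

const deps: Graph = new Map([
  ["a.xsd", ["common.xsd"]],
  ["b.xsd", ["common.xsd", "a.xsd"]],
  ["common.xsd", []],
]);
console.log(topoSort(deps)); // [ 'b.xsd', 'a.xsd', 'common.xsd' ]
```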

VC++ merge multiple COM DLLs into one

Let's say we have multiple libraries (DLLs) whose features we want to use in an application, and we want to consume them as a single DLL.
Is it possible to merge the DLLs into a single one, with all the features packed into it? I am not looking at the option of writing a wrapper.
EDIT:
I've revisited the problem. Now all I want to do is bring all the projects under one solution and get a single DLL as the output, instead of each project having its own independent output. Is this possible?
You can't literally merge several compiled .dll files into one. Your best bet is to put all the source files into a single project and recompile them as a single library. You will likely have conflicts that you'll have to resolve manually.
If you really have several COM in-process servers, you will also have to merge the code that backs the class factories and COM registration - and you will have to do that manually.
