What is the structure of the Vehicle Package Manifest and the Software Package Manifest inside it? - autosar

The SWS of UCM describes a Vehicle Package Manifest which contains Software Cluster info, but I cannot find the definition of the Vehicle Package Manifest in the TPS. I only found this in the EXP_PlatformDesign:
List of backend packages: a list of SWCL names
Dependencies: dependencies between Software Clusters that will overrule the dependencies already defined in the Software Package Manifest. Typically used by the vehicle systems integrator to add dependencies related to vehicle systems that the backend package supplier is not aware of.
Origin: URI, repository or diagnostic address, for history, tracking and security purposes
Version
Vehicle target: vehicle description
Campaign orchestration
Below is a model example.
Why is there no definition of the Vehicle Package Manifest in the TPS?
Why are there two SoftwarePackageManifests, one in the VehiclePackageManifest and another in the SoftwarePackage? Are they the same thing?
Update:
Why is there no definition of the Vehicle Package Manifest in the TPS?
I found the answer in R19-11 TPS_ManifestSpecification.
About the second question, I still didn't find a clear definition of SoftwarePackageManifest.arxml (the red one in the picture) in R19-11. I am trying to implement a Packager tool for both the Software Package and the Backend Package, so I need to figure out:
What is the usage of the SoftwarePackageManifest? Is it designed for the Backend or for the UCM Master?
Whether the SoftwarePackageManifest.arxml is generated before packaging, as an input to the tool, or during packaging, as an output.

D Packages distribution within vehicle (informative)
D.1 Overview
To prepare next releases of this specification, Update and Configuration Management
team appends to this specification its future image of packages distribution from a
backend into a vehicle and between different UCMs by sharing sequence diagrams.
Intention of this appendix is to gather comments from Autosar community to ensure
future API’s quality. All described methods have to be later specified.
You have to read the spec correctly:
This is not official (normative) spec content but informative, meaning it is unofficial,
open for discussion by the AUTOSAR community (AUTOSAR consortium and members),
and has to be specified later, after that discussion.

The Vehicle Package Manifest is represented by meta-class VehiclePackage in the TPS Manifest. Actually, the term “manifest” on the AUTOSAR adaptive platform describes a piece of ARXML that is supposed to be uploaded to the target and that conforms to the AUTOSAR schema.
This means that the AUTOSAR schema needs support for various manifest configurations and that is the reason why the TPS Manifest discusses the usage of various meta-classes within the AUTOSAR meta-model (from which the AUTOSAR XML schema is generated).
A VehiclePackage describes how specific SoftwarePackages shall be installed. A SoftwarePackage, in turn, is just a “logistics” wrapper around a SoftwareCluster.

How are dynamically loaded libraries represented in deployment diagrams?

My deployment diagram has a device with a Windows ExecutionEnvironment in it. The application uses several dynamically loaded libraries, some of which are deployed with the application, others into the system itself.
How are dynamically loaded libraries normally represented in deployment diagrams?
My current theory is that my application gets its very own execution environment within the Windows one, where I deploy my application-specific dynamically loaded libraries, and the system libraries are deployed outside it:
In the above diagram the system has v1 of libraryA and libraryB installed, and v2 of libraryA is deployed with the application, shadowing the system version.
Your approach makes perfect sense:
ExecutionEnvironments represent standard software systems that application components may require at execution time.
Moreover:
Artifacts elaborate and reify the abstract notion of DeployedArtifact. They represent concrete elements in the physical world, may have Properties representing their features and Operations that can be performed on their instances, and may be multiply-instantiated so that different instances may be deployed to various DeploymentTargets, each with separate property values.
This applies perfectly to dynamic libraries, where there is one library loaded by the OS and that may be used by multiple applications, each in its own address space.
Some hints:
You could use the «Library» and «Executable» stereotypes of the UML standard profile to better distinguish different kinds of artifacts.
You could add the dependencies from the executable to the required libraries.
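For illustration only, a rough PlantUML sketch of that structure might look like the following (node, artifact and version names are made up to match your description; the nesting simply mirrors the "own execution environment inside Windows" idea):
    @startuml
    node "PC" {
      node "Windows" <<executionEnvironment>> {
        artifact "libraryA v1" as libA1 <<Library>>
        artifact "libraryB v1" as libB1 <<Library>>
        node "Application environment" as appEnv <<executionEnvironment>> {
          artifact "application" as app <<Executable>>
          artifact "libraryA v2" as libA2 <<Library>>
        }
      }
    }
    ' the application uses its bundled libraryA v2 (shadowing v1) and the system-wide libraryB v1
    app ..> libA2 : uses
    app ..> libB1 : uses
    @enduml
The dashed "uses" dependencies make it explicit that the application resolves libraryA to the bundled v2 artifact rather than the system-wide v1.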

What are the advantages of using the tflog package over the log package for logging?

Context: I'm developing a TF Provider and I can see the latest "Writing Log Output" doc from HashiCorp, where they recommend using the tflog package for logging.
That said, I can see the TF Provider for GCP is still using the log package. What are the advantages of tflog over log?
The Structured Logging section of the documentation you linked describes the authors' justification for recommending this different logging strategy:
The tflog package uses structured logging, based on go-hclog. Rather than writing logs as sentences with embedded variables and values, tflog takes a sentence describing the logging event and a set of variables to log. When variables are separate from the log description, you can use them to programmatically parse, filter, and search log output. This separation also allows other parts of the system to associate variables with downstream log output.
Although not mentioned explicitly as an advantage in the documentation, it does also mention that tflog has a notion of log levels, and there's no corresponding concept in the standard library log package at the time of writing.
Given that, I would conclude that the two intended advantages of tflog over standard library log are:
tflog uses a structured logging approach where the separate variables in the result are machine-parsable and therefore amenable to automated filtering via scripts.
tflog associates a log level with each message, and the SDKs allow customizing the log level for a particular execution to control the amount of output.
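To make the difference concrete, here is a hedged Go sketch (the function, field names and values are invented for illustration; tflog.Debug comes from the terraform-plugin-log module and expects the context handed to you by the plugin framework, otherwise its output is not captured):
    package provider

    import (
        "context"
        "log"

        "github.com/hashicorp/terraform-plugin-log/tflog"
    )

    func logExample(ctx context.Context, orgID string, count int) {
        // Standard library log: the variables are embedded in the sentence, so any
        // downstream tooling has to parse free-form text, and there is no level concept.
        log.Printf("[DEBUG] fetched %d instances for org %s", count, orgID)

        // tflog: a constant message plus structured fields, so the variables can be
        // filtered and searched programmatically, and the entry carries a log level.
        tflog.Debug(ctx, "fetched instances", map[string]interface{}{
            "org_id":         orgID,
            "instance_count": count,
        })
    }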
I think getting any further context on this would require asking the authors of the SDKs, since this is a subjective design tradeoff rather than a situation where there is one clear correct answer.
I assume that some existing providers continue to use standard library log just because that code was written before tflog existed. tflog v0.2.0 (apparently the first publicly-published version) was released in December 2021, whereas big Terraform providers like the Google Cloud Platform provider have been under development for almost a decade before that.

How to read Config values in a Cross-Cutting project?

I have followed DDD guidelines to structure my project, I have Domain, Infrastructure, Application and Presentation layers.
I have also defined a cross-cutting project called Common. All other projects have dependency on Common.
One of the things that I need in the Common project is my config/setting values. I have defined all solution-wide settings in a DB table. The Common project reads the settings, and any other project can access these settings through the Common project.
How should the Common project access the DB? Anywhere else in the solution I use the Infrastructure layer to read from the DB, but if I reference Infrastructure in the Common project, I would get a circular dependency.
Should the Common project have its own DB reader? Or was putting all the config in the Common project not the correct design in the first place?
The common package could be organized by feature. Here the IConfigProvider implementations would live in the same package as the interface.
E.g.
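(a hypothetical layout; DbConfigProvider stands in for whatever implementation actually reads your settings table)
    Common/
      Configuration/
        IConfigProvider.cs      // the abstraction other projects program against
        DbConfigProvider.cs     // implementation living in the same feature package
      Logging/
        ...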
You could also consider global configuration as a supporting BC and implement an appropriate anti-corruption layer in each downstream context, where every context has its own view and interpretation of such configuration.
Dependencies are always an interesting thing.
Since you haven't specified which languages/environments you are using, and I have experience with C#, I will use techniques and terminology related to strongly typed OO languages like it.
Let's say that we separate interface and implementation. I will use the C# convention that an interface name begins with a capital 'I' to make it clearer what is an interface.
Your repositories are part of your domain. Let's say you have an Account entity and an AccountRepository for this entity.
What we will do is separate the interface for this repository from its implementation. We will have IAccountRepository and a concrete implementation (maybe more than one, but this is very rare) for it: AccountRepository.
If we want to use SQL database we may have SQLAccountRepository. If we want to use MongoDB we may have MongoDBRepository. Both of these concrete repositories will implement the interface IAccountRepository.
IAccountRepository is part of your Domain, but the implementations (SQL, MongoDB etc.) are part of your Infrastructure layer as they will access external things (SQL server or MongoDB server in this example).
Your dependencies in this case will be 'Infrastructure -> Domain', not 'Domain -> Infrastructure'. This isolates your Domain from the Infrastructure, as the Infrastructure has a reference to the Domain, not vice versa.
By using interfaces, your Domain only specifies what it needs, not how it is provided.
If you apply the same idea, you can define interfaces in your Common project for getting (and setting if necessary) settings (ISettingsProvider, IApplicationSettings etc.) and allow the Infrastructure that references Common to provide implementations for these interfaces (SQLSettingsProvider etc.).
You can use dependency injection, service locator or similar technique to bind implementations to interfaces.
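A minimal sketch of that wiring, assuming a simple key/value Settings table and made-up class names (SQLSettingsProvider to match the naming above; the query and the registration comment are illustrative, not prescriptive):
    using System.Data.SqlClient;   // only needed by the Infrastructure part below

    // --- Common project: just the abstraction, no dependency on Infrastructure ---
    public interface ISettingsProvider
    {
        string GetValue(string key);
    }

    // --- Infrastructure project: references Common, supplies the DB-backed implementation ---
    public class SQLSettingsProvider : ISettingsProvider
    {
        private readonly string _connectionString;

        public SQLSettingsProvider(string connectionString)
        {
            _connectionString = connectionString;
        }

        public string GetValue(string key)
        {
            using (var connection = new SqlConnection(_connectionString))
            using (var command = new SqlCommand(
                "SELECT [Value] FROM Settings WHERE [Key] = @key", connection))
            {
                command.Parameters.AddWithValue("@key", key);
                connection.Open();
                return (string)command.ExecuteScalar();
            }
        }
    }

    // Composition root: bind the implementation to the interface, e.g.
    // services.AddSingleton<ISettingsProvider>(new SQLSettingsProvider(connectionString));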

Puppet modules and the self-contained design

From Puppet Best Practices:
The Puppet Labs documentation describes modules as self-contained bundles of code and data.
Ok it's clear.
A single module can easily manage a single application.
So, puppetlabs-apache manages Apache only, puppetlabs-mysql manages MySQL only.
So, my module my_company-mediawiki manages MediaWiki only (I suppose... with database and virtual host... because a module is a self-contained bundle of code and data).
Modules are most effective when they serve a single purpose, limit dependencies, and concern themselves only with managing system state relating to their named purpose.
But my_company-mediawiki needs to depend on:
puppetlabs-mysql: to create database;
puppetlabs-apache: to manage a virtual host.
And... from a quick search I understand that many modules refer to other modules.
But...
They provide complete functionality without creating dependencies on any other modules, and can be combined as needed to build different application stacks.
Ok, a good module is self-contained and has no dependencies.
So do I necessarily have to use the roles and profiles pattern to follow these best practices? Or am I confused...
The Puppet documentation's description of modules as self-contained is more aspirational than definitive. Don't read too much into it, or into others' echoes of it. Modules are quite simply Puppet's next level of code organization above classes and defined types, incorporating also plug-ins and owned data.
Plenty of low-level modules indeed have no cross-module dependencies, but such dependencies inescapably arise when you start forming aggregations at a level between that and whole node configurations. There is nothing inherently wrong with that. The Roles & Profiles pattern is a good way to structure such aggregations, but it is not the only way, and in any case it does not avoid cross-module dependencies because role and profile classes, like any other, should themselves belong to modules.
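As a sketch only (profile::mediawiki and the mediawiki class with its db_host parameter are hypothetical; mysql::server and apache are the entry-point classes shipped by the puppetlabs modules), a profile that owns those cross-module dependencies could look like this:
    # site/profile/manifests/mediawiki.pp -- a hypothetical profile class, itself part of a module
    class profile::mediawiki {
      # component modules from the Forge; the profile, not my_company-mediawiki, owns these dependencies
      include mysql::server
      include apache

      # the single-purpose application module from the question (class name and parameter are hypothetical)
      class { 'mediawiki':
        db_host => 'localhost',
      }
    }
A role class would then simply include one or more such profiles; since role and profile classes live in ordinary modules themselves, the pattern structures cross-module dependencies rather than eliminating them.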

Confusion with MIDlet attributes in a JAD file

"MicroEdition-Profile" can have multiple values separated by space in the JAD file, where as "MicroEdtion-Configuration" can have only one value.
According to JTWI specifications,
Minimum requirement for MIDP is 2.0, so this gives us only one option (MIDP 2.0) to put in the JAD file
Minimum requirement for CLDC is 1.0, so this gives us two options to put in the JAD file, i.e. CLDC 1.0 and CLDC 1.1
I can create an application which is compatible with CLDC 1.0 and 1.1. Why are multiple values allowed for the Profile attribute, but only one value for the Configuration attribute?
MicroEdition-Configuration refers to the lowest-level part of the system - the JVM and so on.
MicroEdition-Profile gives a list (often of size one) of the additional software environments on top of the configuration (application lifecycle, UI etc.)
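For illustration (the MIDlet name, vendor and version are made up; the attribute names and the MIDP-2.0 / CLDC-1.1 values are the standard ones), the two attributes typically appear in a JAD file like this:
    MIDlet-Name: ExampleMIDlet
    MIDlet-Vendor: Example Vendor
    MIDlet-Version: 1.0.0
    MicroEdition-Configuration: CLDC-1.1
    MicroEdition-Profile: MIDP-2.0
If a suite supported more than one profile, the MicroEdition-Profile value would list them separated by spaces, while MicroEdition-Configuration always names exactly one configuration.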
For more info see A Survey of Java ME Today (actually from 2007, but a useful overview still):
A configuration, at the bottom of the Java ME organization stack, defines a basic lowest-common-denominator Java runtime environment. This includes the VM and a set of core classes derived primarily from the Java SE platform. Each configuration is geared for a broad family of constrained devices with some type of network connectivity.
And on profiles:
Configurations do not provide classes for managing the application life cycle, for driving the UI, for maintaining and updating persistent data locally in the device, or for accessing securely information that is stored on a network server. Instead, that type of functionality is provided by the profiles or by optional packages. A profile adds domain-specific classes to the core set of classes provided by the configuration, classes that are geared toward specific uses of devices and provide functionality missing from the underlying configuration.
MIDP is the most common profile for mobiles, but there are others: IMP - a kind-of headless version of MIDP (JSR-195); and DSTB for digital TV (JSR-242).
