Why does AUTOSAR define new types instead of using the standard ones provided by C++ itself? - autosar

Why does AUTOSAR define new types (for instance, ara::core::Future, ara::core::Vector, and so on) instead of using the standard ones (i.e. std::future, std::vector)?
What is the benefit?

You should read about the types in chapters 7 and 8 of AUTOSAR_SWS_AdaptivePlatformCore.pdf.
7.2.4.2 Types derived from the base C++ standard
In addition to AUTOSAR-devised data types, which are mentioned in the previous sections, the Adaptive Platform also contains a number of generic data types and helper
functions.
Some types are already contained in [4, the C++14 standard]; however, types with almost identical behavior are re-defined within the ara::core namespace. The reason
for this is that the memory allocation behavior of the std:: types is often unsuitable for automotive purposes. Thus, the ara::core ones define their own memory allocation behavior, and perform some other necessary adaptations as well, including with regard to the throwing of exceptions.
[SWS_CORE_00040] (DRAFT) Errors originating from C++ standard classes
For the classes in ara::core specified below in terms of the corresponding classes of the C++ standard, all functions that are specified by [4, the C++14 standard], [9, the C++17
standard], or [10, the draft C++20 standard] to throw any exceptions, are instead specified to be the cause of a Violation when they do so. (RS_AP_00130)
Examples for such data types are: Array, Vector, Map, and String.
The reasons for ara::core::Future are also described in chapter 8.1.6 (I will not cite that here).
So, in the end, ara::core is the place to define / configure the implementation-specific details, so that the same definitions can be used throughout the code base of AUTOSAR Adaptive software, no matter whether it is your own software on top of ara or the ara service implementation itself.
This is similar to how Std_Types.h / Compiler.h / Platform_Types.h are the place in AUTOSAR Classic to define / configure the basic primitive types uint8 / sint8 / ... instead of using uint8_t / int8_t / ... from stdint.h, which was introduced in C99 but was not available in C90.
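To make that concrete, here is a minimal, illustrative sketch of how such a type is used in application code. The header path and the exact allocation policy are assumptions of this sketch; in practice they come from the vendor's ara::core implementation and the SWS, not from this answer:

#include "ara/core/vector.h"   // assumed header per the SWS; the vendor toolchain provides it
#include <cstdint>

void CollectSamples() {
    // Same interface as std::vector, but the allocation behavior and the
    // "Violation instead of exception" error policy (SWS_CORE_00040) come from ara::core.
    ara::core::Vector<std::uint16_t> samples;
    samples.reserve(64U);    // typical automotive pattern: allocate up front, avoid re-allocation later
    samples.push_back(42U);
    // samples.at(100);      // with std::vector this would throw std::out_of_range;
                             // an ara::core type is specified to cause a Violation instead
}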

Related

std::result vs core::result::Result [duplicate]

These two traits (std::ops::Add, core::ops::Add) provide the same functionality, and they both use the same example (both utilize std::ops::Add). Their sets of implementors differ somewhat.
Should one default to using std::ops::Add? Why do both of them exist, as opposed to just one?
There aren't two traits. There is one trait which is exported under several interchangeable names. This is far from unique. Virtually everything in core is also exported from std, and virtually always under exactly the same path (i.e., you can just replace the "core" prefix with "std").
As for which one you should use: If you have a reason to not link to the standard library (#![no_std]), then the std::* one isn't available so obviously you use core::*. If on the other hand you do use the standard library, you should use the std::* re-export. It is more customary and requires less typing.
They're in fact exactly the same, despite the set of implementors being listed as slightly different.
The core library is designed for bare-metal/low-level tasks, and is thus more barebones than what std can provide by assuming an operating system exists. However, people using std will want the stuff that's in core too (e.g. Add or Option or whatever), and so to avoid having to load both std and core, std reexports everything from core, via pub use. That is, std provides aliases/import paths for the things in core.
There are some unfortunate error messages where the compiler points to the original source of an item, not the reexport, which might not be in a crate you're extern crateing.

How to use ApplicationDataTypes in C code

To my understanding, the ApplicationDataType was introduced in AUTOSAR Version 4 to design Software-Components that are independent of the underlying platform and are therefore re-usable in different projects and applications.
But what about making the implementation behind such a SW-C platform independent?
Use-case example: You want to design and implement a SW-C that works as a FIFO. You have one Port for Input-Data, an internal buffer and one Port for Output-Data. You could implement this without knowing about the data type of the data by using the “abstract” ApplicationDataType.
By using an ApplicationDataType for a variable as part of a PortInterface, sooner or later you have to map this ApplicationDataType to an ImplementationDataType for the RTE-Generator.
Finally, the code created by the RTE-Generator only uses the ImplementationDataType. The ApplicationDataType is nowhere to be found in the generated code.
Is this intended behavior or a bug of the RTE-Generator?
(Or maybe I'm missing something?)
It is intended that ApplicationDataTypes do not directly appear in code, they are represented by their ImplementationDataType counterparts.
The motivation for the definition of data types on different levels of abstraction is explained in the AUTOSAR specifications, namely the TPS Software Component Template.
You will never find an ApplicationDataType in the C code, because it's defined on a physical level with a physical unit and might have a (completely) different representation on the implementation level in C.
Imagine a battery control sensor that measures the voltage. The value can be in the range 0.0V to 14.0V with one digit after the decimal point (physical). You could map it to a float in C, but floating point operations are expensive. Instead, you use fixed-point arithmetic where you map the physical value 0.0 to 0, 0.1 to 1, 0.2 to 2 and so on. This mapping is described by a so-called compuMethod.
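A hedged sketch of what that mapping could look like on the implementation level (the type name, helper functions, and scaling here are made up for illustration; the real typedef and conversion come from the RTE generator and the compuMethod in the ARXML):

/* Hypothetical ImplementationDataType generated for the ApplicationDataType
   "BatteryVoltage": physical range 0.0 V .. 14.0 V, resolution 0.1 V,
   compuMethod: internal = physical * 10 (so 0..140 fits into a uint8). */
typedef unsigned char BatteryVoltage_T;

/* Illustrative conversion helpers implied by the compuMethod. */
static inline BatteryVoltage_T BatteryVoltage_FromPhys(double volts) {
    return (BatteryVoltage_T)(volts * 10.0 + 0.5);   /* 12.6 V -> 126 */
}
static inline double BatteryVoltage_ToPhys(BatteryVoltage_T raw) {
    return raw * 0.1;                                /* 126 -> 12.6 V */
}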
The software component will always use the internal representation. So, why do you need the ApplicationDataType then? There are many reasons to use them, some of them are:
Methodology: The software component designer doesn't need to worry about the implementation in C. Somebody else can define that in a later stage.
Measurement: If you measure the value, you have a well-defined compuMethod and know the physical interpretation of the value in C.
Data conversion: If you connect software components with different units, e.g. km/h vs mph, the Rte could automatically convert the internal representation between them.
Constant conversion: You can specify an initial value on the physical value (e.g. 10.6V) and the Rte will convert it to the internal representation.
Variable Size Arrays: Without dynamic memory allocation, you cannot have a variable size array in C. But you could reserve some (max) memory in an array and store the actual length in a separate field. On the implementation level you then have a struct with two members (value, length), but on the application level you just have an array (see the sketch below).
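For the variable-size-array case, a minimal illustrative struct (the name and the maximum size are invented for this sketch; the real layout is produced by the RTE generator from the data type definition):

/* Hypothetical ImplementationDataType for a variable-size array of up to 8 samples:
   on the implementation level it is a struct with a payload and a length field,
   while on the application level it is modelled simply as an array. */
typedef struct {
    unsigned short value[8];   /* reserved maximum capacity */
    unsigned char  length;     /* number of elements actually in use */
} SampleList_T;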
From AUTOSAR_TPS_SoftwareComponentTemplate.pdf:
ApplicationDataType defines a data type from the application point of view. Especially it should be used whenever something "physical" is at stake.
An ApplicationDataType represents a set of values as seen in the application model, such as measurement units. It does not consider implementation details such as bit-size, endianess, etc.
It should be possible to model the application level aspects of a VFB system by using ApplicationDataTypes only.

Is it possible, in any language, to implement rules that will affect every instance of an object?

For example, could I implement a rule that would change every string that followed the pattern '1..4' into the array [1,2,3,4]? In JavaScript:
//here you create a rule that changes every string that matches /^([0-9]+)\.\.([0-9]+)$/
//ever created into range($1,$2) (imagine $1 and $2 are the capture groups of the regexp)
var a = '1..4';
console.log(a);
>> output: [1,2,3,4];
Of course, I'm pretty confident that would be impossible in most languages. My question is: is there any language in which that would be possible? Or has anyone ever proposed something like that? Does this thing have a 'name' that I can google to read more about?
Modifying the language from within itself falls under the umbrella of reflection and metaprogramming. It is referred to as behavioral reflection. It differs from structural reflection, which operates at the level of the application (e.g. classes, methods) and not the language level. Support for behavioral reflection varies greatly across languages.
We can broadly categorize language changes in two categories:
changes that modify the semantics (i.e. the rules) of the language itself (e.g. redefine the method lookup algorithm),
changes that modify the syntax (e.g. your syntax '1..4' to create arrays).
For case 1, certain languages expose the structure of the application (structural reflection) and the inner workings of their implementation (behavioral reflection) to the application itself via special objects, called meta-objects. Meta-objects are reifications of otherwise implicit aspects, which then become explicitly manipulable: the application can modify the meta-objects to redefine part of its structure, or part of the language. When it comes to language changes, the focus is usually on modifying message sending / method invocation, since it is the core mechanism of object-oriented languages. But the same idea could be applied to expose other aspects of the language, e.g. field accesses, synchronization primitives, foreach enumeration, etc., depending on the language.
For case 2, the program must be represented in a suitable data structure to be modified. For languages of the Lisp family, the program manipulates lists, and the program itself can be represented as lists. This is called homoiconicity and is handy for metaprogramming, hence the flexibility of Lisp-like languages. For other languages, their representation is usually an AST. Transforming the representation of the program, or rewriting it, is possible with macros, preprocessors, or hooks during compilation or class loading.
The line between 1 and 2 is however blurry. Syntactic changes can appear to modify the semantics of the language. For instance, I can rewrite all field accesses into proper getters and setters and perform additional logic there, say to implement transactional memory. Did I perform a semantic change of what a field access is, or merely a syntax change?
Also, there are other constructs that fall between the lines. For instance, proxies and the #doesNotUnderstand trap are popular techniques to simulate the reification of message sends to some extent.
Lisp and Smalltalk have been very influential in the field of metaprogramming, and I think the two following projects/platforms are interesting to look at, one representative of each:
Racket, a Lisp-like language focused on growing languages from within the language
Helvetia, a Smalltalk extension to embed new languages into the host language by leveraging the AST of the host environment.
I hope you enjoyed this even if I did not really address your question ;)
Your desired change requires modifying the way literals are created. This is AFAIK not usually exposed to the application. The closest work that I can think of is Virtual Values for Language Extension, which tackled JavaScript.
Yes. Common Lisp (and certain other lisps) have "reader macros" which allow the user to reprogram (incrementally) the mapping between the input stream and the actual language construct as parsed.
See http://dorophone.blogspot.com/2008/03/common-lisp-reader-macros-simple.html
If you want to operate on the level of objects, you will want to use a debugging/memory management framework that keeps track of all objects and processes the rules on each evaluation step (nasty). This seems like the kind of thing you might be able to shoehorn into Smalltalk.
CLOS (Common Lisp Object System) allows redefinition of live objects.
Ultimately you need two things to implement this:
Access to the running system's AST (Abstract Syntax Tree), and
Access to the running system's objects.
You'll want to study meta-object protocols and the languages that use them, then the implementations of both the MOPs and the environment within which these programs are executed.
Image-based systems will be the easiest to modify (e.g., Lisp, potentially Smalltalk).
(Image-based systems store a snapshot of a running system, allowing complete shutdown and restarts, redefinitions, etc. of a complete environment, including existing objects, and their definitions.)
Ruby allows you to extend classes. For instance, this example adds functionality to the String class. But you can do more than add methods to classes. You can also overwrite methods, by defining a method that's already been defined. You may want to preserve access to the original method using alias_method.
Putting all this together, you can overload a constructor in Ruby, but in your case, there's a catch: It sounds like you want the constructor to return a different type. Constructors by definition return instances of their class. If you just want it to return the string "[1,2,3,4]", that's simple enough:
class String
  alias_method :old_constructor, :initialize
  def initialize(*args)
    old_constructor(*args)
    # code that applies your transformation
  end
end
But there's no way to make it return an Array if that's what you want.

Is C# 4.0 compile-time turing complete?

There is a well-known fact that C++ templates are Turing-complete, CSS is Turing-complete (!) and that C# overload resolution is NP-hard (even without generics).
But is C# 4.0 (with co/contravariance, generics etc.) compile-time Turing-complete?
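For context, a minimal sketch of the kind of compile-time computation the C++ claim above refers to: a classic template-recursion factorial evaluated entirely by the compiler (illustrative only, not related to C# itself):

template <unsigned N>
struct Factorial {                         // recursive case, expanded by the compiler
    static const unsigned value = N * Factorial<N - 1>::value;
};
template <>
struct Factorial<0> {                      // base case terminates the recursion
    static const unsigned value = 1;
};
static_assert(Factorial<5>::value == 120, "computed at compile time");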
Unlike templates in C++, generics in C# (and other .NET languages) are a runtime-generated feature. The compiler does do some checking to verify the types used, but the actual substitution happens at runtime. The same goes for co- and contravariance, if I'm not mistaken, as well as even the preprocessor directives. Lots of CLR magic.
(At the implementation level, the primary difference is that C# generic type substitutions are performed at runtime and generic type information is thereby preserved for instantiated objects)
See MSDN
http://msdn.microsoft.com/en-us/library/c6cyy67b(v=vs.110).aspx
Update:
The CLR does perform type checking via information stored in the metadata associated with the compiled assemblies (vis-à-vis JIT compilation). It does this as one of its many services (ShuggyCoUk's answer on this question explains it in detail); others include memory management and exception handling. So with that I would infer that the compiler has an understanding of state as progression and of the machine's internal state (being Turing-complete means, in part, being able to review data (symbols) with reference to previous data (symbols), conditionally, and evaluate). (I hesitated to state the exact definition of Turing-completeness, as I am not sure I have fully grasped it myself, so feel free to fill in the blanks and correct me where applicable.) So with that, I would say, with a bit of trepidation: yes, yes it can be.

Is D powerful enough for these features?

For the longest time I wanted to design a programming language that married extensibility with efficiency (and safety, ease-of-use, etc.) I recently rediscovered D and I am wondering if D 2.0 is pretty much the language I wanted to make myself. What I love most is the potential of metaprogramming; in theory, could D's traits system enable the following features at compile time?
Run-time reflection: Are the compile-time reflection features sufficient to build a run-time reflection system a la Java/.NET?
Code conversion: Using a metaprogram, create C#/C++/etc. versions of your D program every time you compile it (bonus point if doc comments can be propagated).
Traits. I don't mean the metaprogramming traits built into D, I mean object-oriented traits for class composition. A D program would indicate a set of traits to compose, and a metaprogram would compose them.
Unit inference engine: Given some notation for optionally indicating units, e.g. unit(value), could a D metaprogram examine the following code, infer the correct units, and issue an error message on the last line? (I wrote such a thing for boo so I can assure you this is possible in general, program-wide):
auto mass = kg(2.0);
auto accel = 1.0; // units are strictly optional
auto force = mass*accel;
accel += metresPerSecondSquared(9.81); // units of 'force' and 'accel' are now known
force += pounds(3.0); // unit mismatch detected
Run-time reflection: Are the compile-time reflection features sufficient to build a run-time reflection system a la Java/.NET?
Yes. You can get all the information you need at compile time using __traits and produce the runtime data structures you need for runtime reflection.
Code conversion: Using a metaprogram, create C#/C++/etc. versions of your D program every time you compile it (bonus point if doc comments can be propagated).
No, it simply isn't possible no matter how powerful D is. Some features simply do not transfer over. For example, D has an inline assembler, which is 100% impossible to convert to C#. No language can losslessly convert to all other languages.
Traits. I don't mean the metaprogramming traits built into D, I mean object-oriented traits for class composition. A D program would indicate a set of traits to compose, and a metaprogram would compose them.
You can use template mixins for this, although they don't provide method exclusion.
Unit inference engine: Given some notation for optionally indicating units, e.g. unit(value), could a D metaprogram examine the following code, infer the correct units, and issue an error message on the last line? (I wrote such a thing for boo so I can assure you this is possible in general, program-wide):
Yes, this is straightforward in D. There's at least one implementation already.
