Why is there a difference between pointer and input? - MRTK

While converting some code to the new MRTK RC1, I noticed two versions of, e.g., the Up and Down events: one for input and one for pointer. Now I am wondering: what is the difference, and why does it exist? Do I need to implement both if I want the same application to run on desktop (where the mouse uses the Input version) and on XR devices (the pointer version)?

Great question! My understanding is that the difference between InputUp/Down and PointerUp/Down is about where the events come from. In short, I would recommend listening for pointer events instead of the raw input events. You do not need to listen to both types of events.
InputDown/Up vs. PointerDown/Up
InputUp/Down are generated by controllers. For example, if you look at the classes that call MixedRealityInputSystem.RaiseOnInputDown, you will see the following files making calls to RaiseOnInputDown:
GenericJoystickController.cs
WindowsMixedRealityController.cs
MouseController.cs
In other words, these are 'raw inputs' that represent things like 'the select button was pressed on a controller'.
In contrast, OnPointerDown/Up are raised by pointers. For example, if you look for references to MixedRealityInputSystem.RaisePointerDown, you will see the following files:
GazeProvider.cs
BaseControllerPointer.cs
GGVPointer.cs
PokePointer.cs
In other words, these are higher level inputs that come from different kinds of pointer dispatchers -- e.g. from near interaction pointers (sphere pointers), far interaction pointers (line pointers), or touch pointers (poke pointers).
Why listen for PointerDown/Up instead of InputDown/Up
Listening for pointer down and up from pointers will allow you to distinguish between things like near and far interaction since you can look at the pointer that sent the event and check if the pointer implements IMixedRealityNearPointer.
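For example, a minimal sketch of such a handler could look like this (assuming MRTK v2's IMixedRealityPointerHandler interface; the class name and log messages are made up for illustration):

using Microsoft.MixedReality.Toolkit.Input;
using UnityEngine;

// Hypothetical component: distinguishes near from far interaction by
// inspecting the pointer that raised the event.
public class PointerKindLogger : MonoBehaviour, IMixedRealityPointerHandler
{
    public void OnPointerDown(MixedRealityPointerEventData eventData)
    {
        // Near pointers (sphere/poke pointers) implement IMixedRealityNearPointer.
        if (eventData.Pointer is IMixedRealityNearPointer)
        {
            Debug.Log("Pointer down from a near interaction pointer");
        }
        else
        {
            Debug.Log("Pointer down from a far interaction pointer (line, GGV, ...)");
        }
    }

    public void OnPointerUp(MixedRealityPointerEventData eventData) { }
    public void OnPointerDragged(MixedRealityPointerEventData eventData) { }
    public void OnPointerClicked(MixedRealityPointerEventData eventData) { }
}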

Related

Activity Diagram: Reusing Activity/Action With different inherited type of object flow as output

I have a question regarding modeling on an Activity Diagram that has been bothering me for some time, and I have not been able to find any answer or convention anywhere.
Here is an example to better understand my question:
Let's say that I have two classes named "flat" and "house"; both are specializations of the class "housing".
housing contains an attribute "residents" for the people living in it.
flat contains an attribute "floor" that says on which floor the flat is.
Here is the class diagram:
In an activity diagram, I want to represent the action of giving persons a housing.
This action can take either a house or a flat as input (so the use of the "housing" type for the input pin is correct, I think) as well as an undefined number of people.
I want this action to give an updated house or flat as output (not an updated housing, as this would mean that information specific to the house or flat would be lost).
I don't really know if I must create two actions (one for houses and another for flats) or if there is a way to reuse the action for both classes and get a correct output out of it.
Here is the activity Diagram:
My question is: how do you represent, in an activity diagram, an action that is the same for different types of object flow as input and that gives the updated object flow as output (which may therefore be of a different type)?
nb:
All the types of object flow are classes and inherit from the same parent class.
I'm representing this in Modelio but first had this issue in Cameo.
I'm trying to fit as best as I can within the rules of the UML language.
Cameo is right in rejecting this model. Give Flat Floor expects a Flat and will not work with a House, but Assign Resident to Housing could return a House. I know, in your context it can only return a Flat, but how should the tool know that?
The correct way to capture this fact would be to add a postcondition to Activity Assign Resident to Housing that states that the type of the input and output pin will be the same.
However, it would be really hard to define a complete set of compatibility rules that takes into account all the global and local pre- and postconditions, and the tools would also be hard pressed to validate a model according to these rules. Therefore the UML specification chose the easy road and simply doesn't allow connecting the pins.
The solution is to use the transformation property of the ObjectFlow. Just assign an OpaqueBehavior that casts the Type House to the Type Flat. Cameo will then accept the model. It is the modeler's responsibility to ensure that this cast will always work, since no exception handling can be defined here. Maybe this should be documented with a local postcondition.
In your specific example there is an even easier solution: simply fork the ObjectFlow of Type Flat and omit the OutputPin of Assign Resident to Housing.
As a side note: Due to a bug in Cameo, you can change the type of the OutputPin to a more specific Type than that of the ActivityParameter. This is correct for InputPins, but should be the opposite for OutputPins. You could use this to let the Parameter be of type House, but the OutputPin-Type would be Flat.
The two flows (top object and lower control) in the blue frame could stay as they are. Give flat floor would commence only when it receives a Flat object and the control token is sent. In order to make the right action sort of optional I would just use the object flow, thus only triggering when a Flat object is passed. That would just be enough and no additional control flow is needed.
To make things clear I would also add a guarded flow from the Assign action to an exit reading [ house was assigned ] or the like.

How to use ApplicationDataTypes in C code

To my understanding, the ApplicationDataType was introduced in AUTOSAR version 4 to design Software-Components that are independent of the underlying platform and are therefore re-usable in different projects and applications.
But what about making the implementation behind such an SW-C platform independent?
Use-case example: You want to design and implement an SW-C that works as a FIFO. You have one Port for input data, an internal buffer and one Port for output data. You could implement this without knowing the data type of the data by using the "abstract" ApplicationDataType.
By using an ApplicationDataType for a variable as part of a PortInterface, sooner or later you have to map this ApplicationDataType to an ImplementationDataType for the RTE-Generator.
Finally, the code created by the RTE-Generator only uses the ImplementationDataType. The ApplicationDataType is nowhere to be found in the generated code.
Is this intended behavior or a bug of the RTE-Generator?
(Or maybe I'm missing something?)
It is intended that ApplicationDataTypes do not directly appear in code; they are represented by their ImplementationDataType counterparts.
The motivation for the definition of data types on different levels of abstraction is explained in the AUTOSAR specifications, namely the TPS Software Component Template.
You will never find an ApplicationDataType in the C code, because it's defined on a physical level with a physical unit and might have a (completely) different representation on the implementation level in C.
Imagine a battery control sensor that measures the voltage. The value can be in the range 0.0 V to 14.0 V with one digit after the decimal point (physical). You could map it to a float in C, but floating point operations are expensive. Instead, you use fixed point arithmetic where you map the physical value 0.0 to 0, 0.1 to 1, 0.2 to 2 and so on. This mapping is described by a so-called compuMethod.
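To make that concrete, here is a minimal C sketch of what the implementation side of the battery-voltage example could look like (all names and the exact scaling are invented for illustration; the real ImplementationDataType and compuMethod come from your configuration and RTE tooling):

#include <stdint.h>

/* Hypothetical ImplementationDataType: physical 0.0 V .. 14.0 V with a
 * resolution of 0.1 V is stored as a plain uint8 in the range 0 .. 140. */
typedef uint8_t BatVoltage;

#define BATVOLTAGE_LSB  (0.1f)   /* linear factor of the compuMethod */
#define BATVOLTAGE_MAX  (140u)   /* internal value for physical 14.0 V */

/* Conversion helpers mirroring the compuMethod. The SW-C itself only works
 * on the raw BatVoltage; the physical interpretation is used by measurement
 * and calibration tooling. */
static inline float BatVoltage_ToPhysical(BatVoltage raw)
{
    return (float)raw * BATVOLTAGE_LSB;
}

static inline BatVoltage BatVoltage_FromPhysical(float volts)
{
    return (BatVoltage)((volts / BATVOLTAGE_LSB) + 0.5f);
}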
The software component will always use the internal representation. So, why do you need the ApplicationDataType then? There are many reasons to use them; some of them are:
Methodology: The software component designer doesn't need to worry about the implementation in C. Somebody else can define that at a later stage.
Measurement: If you measure the value, you have a well-defined compuMethod and know the physical interpretation of the value in C.
Data conversion: If you connect software components with different units, e.g. km/h vs mph, the Rte could automatically convert the internal representation between them.
Constant conversion: You can specify an initial value as a physical value (e.g. 10.6 V) and the Rte will convert it to the internal representation.
Variable Size Arrays: Without dynamic memory allocation, you cannot have a variable size array in C. But you can reserve some (maximum) memory in an array and store the actual length in a separate field. On the implementation level you then have a struct with two members (value, length), but on the application level you just have an array.
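To illustrate that last point, an application-level "array of up to 8 samples" might map to something like the following struct on the implementation level (names invented; the actual layout is defined by the configured ImplementationDataType):

#include <stdint.h>

#define SAMPLES_MAX_LENGTH 8u

/* Fixed-size buffer plus a length field: the application model just sees
 * "an array of up to 8 samples", the C code sees this struct. */
typedef struct
{
    uint8_t  length;                      /* number of valid entries */
    uint16_t values[SAMPLES_MAX_LENGTH];  /* reserved (maximum) storage */
} Samples_VarArray;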
From AUTOSAR_TPS_SoftwareComponentTemplate.pdf:
ApplicationDataType defines a data type from the application point of view. Especially it should be used whenever something "physical" is at stake.
An ApplicationDataType represents a set of values as seen in the application model, such as measurement units. It does not consider implementation details such as bit-size, endianess, etc.
It should be possible to model the application level aspects of a VFB system by using ApplicationDataTypes only.

FRP - Event streams and Signals - what is lost in using just signals?

In recent implementations of Classic FRP, for instance reactive-banana, there are event streams and signals, which are step functions (reactive-banana calls them behaviours, but they are nevertheless step functions). I've noticed that Elm only uses signals and doesn't differentiate between signals and event streams. Also, reactive-banana allows going from event streams to signals (edited: and it's sort of possible to act on behaviours using reactimate', although it is not considered good practice), which kind of means that in theory we could apply all the event stream combinators on signals/behaviours by first converting the signal to an event stream, applying, and then converting again. So, given that it's in general easier to use and learn just one abstraction, what is the advantage of having separate signals and event streams? Is anything lost in using just signals and converting all the event stream combinators to operate on signals?
edit: The discussion has been very interesting. The main conclusion that I took from it is that behaviours/event sources are both needed for mutually recursive definitions (feedback) and for having an output depend on two inputs (a behaviour and an event source) but only cause an action when one of them changes (<@>).
(Clarification: In reactive-banana, it is not possible to convert a Behavior back to an Event. The stepper function is a one-way ticket. There is a changes function, but its type indicates that it is "impure" and it comes with a warning that it does not preserve the semantics.)
I believe that having two separates concepts makes the API more elegant. In other words, it boils down to a question of API usability. I think that the two concepts behave sufficiently differently that things flow better if you have two separate types.
For example, the direct product for each type is different. A pair of Behaviors is equivalent to a Behavior of pairs
(Behavior a, Behavior b) ~ Behavior (a,b)
whereas a pair of Events is equivalent to an Event of a direct sum:
(Event a, Event b) ~ Event (EitherOrBoth a b)
If you merge both types into one, then neither of these equivalences will hold anymore.
However, one of the main reasons for the separation of Event and Behavior is that the latter does not have a notion of changes or "updates". This may seem like an omission at first, but it is extremely useful in practice, because it leads to simpler code. For instance, consider a monadic function newInput that creates an input GUI widget that displays the text indicated in the argument Behavior,
input <- newInput (bText :: Behavior String)
The key point now is that the text displayed does not depend on how often the Behavior bText may have been updated (to the same or a different value), only on the actual value itself. This is a lot easier to reason about than the other case, where you would have to think about what happens when two successive event occurrences have the same value. Do you redraw the text while the user edits it?
(Of course, in order to actually draw the text, the library has to interface with the GUI framework and does keep track of changes in the Behavior. This is what the changes combinator is for. However, this can be seen as an optimization and is not available from "within FRP".)
The other main reason for the separation is recursion. Most Events that recursively depend on themselves are ill-defined. However, recursion is always allowed if you have mutual recursion between an Event and a Behavior
e = ((+) <$> b) <@> einput
b = stepper 0 e
There is no need to introduce delays by hand, it just works out of the box.
Something critically important to me is lost, namely the essence of behaviors, which is (possibly continuous) variation over continuous time.
Precise, simple, useful semantics (independent of a particular implementation or execution) is often lost as well.
Check out my answer to "Specification for a Functional Reactive Programming language", and follow the links there.
Whether in time or in space, premature discretization thwarts composability and complicates semantics.
Consider vector graphics (and other spatially continuous models like Pan's). The situation is just as with the premature finitization of data structures explained in Why Functional Programming Matters.
I don't think there's any benefit to using the signals/behaviors abstraction over elm-style signals. As you point out, it's possible to create a signal-only API on top of the signal/behavior API (not at all ready for use, but see https://github.com/JohnLato/impulse/blob/dyn2/src/Reactive/Impulse/Syntax2.hs for an example). I'm pretty sure it's also possible to write a signal/behavior API on top of an elm-style API. That would make the two APIs functionally equivalent.
WRT efficiency, with a signals-only API the system should have a mechanism where only signals that have updated values will cause recomputations (e.g. if you don't move the mouse, the FRP network won't re-calculate the pointer coordinates and redraw the screen). Provided this is done, I don't think there's any loss of efficiency compared to a signals-and-streams approach. I'm pretty sure Elm works this way.
I don't think the continuous-behavior issue makes any difference here (or really at all). What people mean by saying behaviors are continuous over time is that they are defined at all times (i.e. they're functions over a continuous domain); the behavior itself isn't a continuous function. But we don't actually have a way to sample a behavior at any time; they can only be sampled at times corresponding to events, so we can't use the full power of this definition!
Semantically, starting from these definitions:
Event == for some t ∈ T: [(t,a)]
Behavior == ∀ t ∈ T: t -> b
since behaviors can only be sampled at times where events are defined, we can create a new domain TX where TX is the set of all times t at which Events are defined. Now we can loosen the Behavior definition to
Behavior == ∀ t ∈ TX: t -> b
without losing any power (i.e. this is equivalent to the original definition within the confines of our FRP system). Now we can enumerate all times in TX to transform this to
Behavior == ∀ t ∈ TX: [(t,b)]
which is identical to the original Event definition except for the domain and quantification. Now we can change the domain of Event to TX (by the definition of TX), and the quantification of Behavior (from forall to for some) and we get
Event == for some t ∈ TX: [(t,a)]
Behavior == for some t ∈ TX: [(t,b)]
and now Event and Behavior are semantically identical, so they could obviously be represented using the same structure in an FRP system. We do lose a bit of information at this step; if we don't differentiate between Event and Behavior, we don't know that a Behavior is defined at every time t, but in practice I don't think this really matters much. What Elm does, IIRC, is require both Events and Behaviors to have values at all times and just use the previous value for an Event if it hasn't changed (i.e. change the quantification of Event to forall instead of changing the quantification of Behavior). This means you can treat everything as a signal and it all Just Works; it's just implemented so that the signal domain is exactly the subset of time that the system actually uses.
I think this idea was presented in a paper (which I can't find now, anyone else have a link?) about implementing FRP in Java, perhaps from POPL '14? Working from memory, so my outline isn't as rigorous as the original proof.
There's nothing to stop you from creating a more-defined Behavior by e.g. pure someFunction; this just means that within an FRP system you can't make use of that extra defined-ness, so nothing is lost by a more restricted implementation.
As for notional signals such as time, note that it's impossible to implement an actual continuous-time signal using typical programming languages. Since the implementation will necessarily be discrete, converting that to an event stream is trivial.
In short, I don't think anything is lost by using just signals.
I unfortunately have no references in mind, but I distinctly remember different reactive authors claiming this choice is just for efficiency. You expose both to give the programmer a choice in which implementation of the same idea is more efficient for your problem.
I might be lying now, but I believe Elm implements everything as event streams under the hood. Things like time wouldn't be so nice as event streams though, since there is an infinite amount of events during any time frame. I'm not sure how Elm solves this, but I think it's a good example of something that makes more sense as a signal, both conceptually and in implementation.

What is the fastcall keyword used for in Visual C?

I have seen the fastcall notation prepended to many functions. Why is it used?
That notation before the function is called the "calling convention." It specifies how (at a low level) the compiler will pass input parameters to the function and retrieve its results once it's been executed.
There are many different calling conventions, the most popular being stdcall and cdecl.
You might think there's only one way of doing it, but in reality, there are dozens of ways you could call a function and pass variables in and out. You could place the input parameters on a stack (push, push, push to call; pop, pop, pop to read input parameters). Or perhaps you would rather stick them in registers (this is fastcall - it tries to fit some of the input params in registers for speed).
But then what about the order? Do you push them from left to right or right to left? What about the result - there's always only one (assuming no reference parameters), so do you place the result on the stack, in a register, at a certain memory address?
Also, let's assume you're using the stack for communication - whose job is it to actually clear the stack after the function is called - the caller or the callee?
What about backing up and then restoring the contents of (certain) CPU registers - should the caller do it, or will the callee guarantee that it'll return everything the way it was?
The most popular calling convention (by far) is cdecl, which is the standard calling convention in both C and C++. The WIN32 API uses stdcall, which means any code that calls the WIN32 API needs to use stdcall for those function calls (making it another popular choice).
fastcall is a bit of an oddball - people realized that for many functions with only one in/out parameter, pushing to and popping from a memory-based stack is quite a bit of overhead and makes function calls a little heavy, so different compilers introduced (different) calling conventions that place one or more parameters in registers before placing the rest on the stack, for better performance. The problem is that not all compilers used the same rules for what goes where and who does what with fastcall, and as a result you have to be careful when using it because you'll never know who does what. Finally, see Is fastcall really faster? for info on fastcall performance benefits.
Complicated stuff.
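As a rough illustration, here is what the declarations look like with MSVC's keywords - a minimal sketch for 32-bit x86; on x64 these keywords are accepted but ignored, because there is a single native convention:

/* The call sites look identical in C; only the generated call/cleanup code differs. */
int __cdecl    add_cdecl(int a, int b)    { return a + b; }  /* caller cleans the stack */
int __stdcall  add_stdcall(int a, int b)  { return a + b; }  /* callee cleans the stack */
int __fastcall add_fastcall(int a, int b) { return a + b; }  /* a and b passed in ECX and EDX */

int main(void)
{
    return add_cdecl(1, 2) + add_stdcall(3, 4) + add_fastcall(5, 6);
}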
Something important to keep in mind: don't add or change calling conventions if you don't know exactly what you're doing, because if the caller and the callee do not agree on the calling convention, you'll likely end up with stack corruption and a segfault. This usually happens when the function being called lives in a DLL/shared library and a program is written that depends on the DLL/SO/dylib using a certain calling convention (say, cdecl), and then the library is recompiled with a different calling convention (say, fastcall). Now the old program can no longer communicate with the new library.
Wikipedia states that
Conventions entitled fastcall have not been standardized, and have been implemented differently, depending on the compiler vendor. Typically fastcall calling conventions pass one or more arguments in registers which reduces the number of memory accesses required for the call.

statically/dynamically typed vs static/dynamic binding

Everyone, what is the difference between these four terms? Can you please give examples?
Static and dynamic are jargon words that refer to the point in time at which some programming element is resolved. Static indicates that resolution takes place at the time a program is constructed. Dynamic indicates that resolution takes place at the time a program is run.
Static and Dynamic Typing
Typing refers to changes in program structure that are due to the differences between data values: integers, characters, floating point numbers, strings, objects and so on. These differences can have many effects, for example:
memory layout (e.g. 4 bytes for an int, 8 bytes for a double, more for an object)
instructions executed (e.g. primitive operations to add small integers, library calls to add large ones)
program flow (simple subroutine calling conventions versus hash-dispatch for multi-methods)
Static typing means that the executable form of a program generated at build time will vary depending upon the types of data values found in the program. Dynamic typing means that the generated code will always be the same, irrespective of type -- any differences in execution will be determined at run-time.
Note that few real systems are purely one or the other; it is just a question of which is the preferred strategy.
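As a rough sketch of the contrast in C (names invented; the "dynamic" side is hand-rolled here, since C itself is statically typed): with static typing the compiler picks the integer or floating-point addition at build time, whereas a dynamically typed system carries a type tag with each value and decides at run time.

#include <stdio.h>

/* Static typing: the types are known at build time, so the compiler emits
 * an integer add in one function and a floating-point add in the other. */
static int    add_ints(int a, int b)          { return a + b; }
static double add_doubles(double a, double b) { return a + b; }

/* Dynamic typing, hand-rolled: every value carries a tag, and the choice of
 * operation is made while the program runs. */
typedef enum { TAG_INT, TAG_DOUBLE } tag_t;
typedef struct { tag_t tag; union { int i; double d; } as; } value_t;

static value_t add_values(value_t a, value_t b)
{
    value_t r;
    if (a.tag == TAG_INT && b.tag == TAG_INT) {
        r.tag = TAG_INT;
        r.as.i = a.as.i + b.as.i;
    } else {
        double x = (a.tag == TAG_INT) ? a.as.i : a.as.d;
        double y = (b.tag == TAG_INT) ? b.as.i : b.as.d;
        r.tag = TAG_DOUBLE;
        r.as.d = x + y;
    }
    return r;
}

int main(void)
{
    value_t a = { TAG_INT,    { .i = 2   } };
    value_t b = { TAG_DOUBLE, { .d = 0.5 } };
    value_t c = add_values(a, b);  /* type dispatch happens at run time */
    printf("%d %f %f\n", add_ints(1, 2), add_doubles(1.5, 2.5), c.as.d);
    return 0;
}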
Static and Dynamic Binding
Binding refers to the association of names in program text to the storage locations to which they refer. In static binding, this association is predetermined at build time. With dynamic binding, this association is not determined until run-time.
Truly static binding is almost extinct. Earlier assemblers and FORTRAN, for example, would completely precompute the exact memory location of all variables and subroutine locations. This situation did not last long, with the introduction of stack and heap allocation for variables and dynamically-loaded libraries for subroutines.
So one must take some liberty with the definitions. It is the spirit of the concept that counts here: statically bound programs precompute as much about storage layout as is practical in a modern virtual memory, garbage collected, separately compiled application. Dynamically bound programs wait as late as possible.
An example might help. If I attempt to invoke a method MyClass.foo(), a static-binding system will verify at build time that there is a class called MyClass and that class has a method called foo. A dynamic-binding system will wait until run-time to see whether either exists.
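In the same spirit, here is a minimal C sketch (names invented) contrasting a call resolved at build time with a crude form of dynamic binding through a run-time name lookup:

#include <stdio.h>
#include <string.h>

static void foo(void) { puts("foo called"); }

/* Static binding: the symbol foo must exist when the program is built,
 * otherwise compilation or linking fails. */
static void call_statically(void) { foo(); }

/* Dynamic binding (crude): the name is only looked up while the program runs. */
typedef void (*method_fn)(void);
struct method { const char *name; method_fn fn; };
static const struct method method_table[] = { { "foo", foo } };

static void call_dynamically(const char *name)
{
    size_t i;
    for (i = 0; i < sizeof method_table / sizeof method_table[0]; ++i) {
        if (strcmp(method_table[i].name, name) == 0) { method_table[i].fn(); return; }
    }
    puts("no such method");  /* the failure only shows up at run time */
}

int main(void)
{
    call_statically();
    call_dynamically("foo");
    call_dynamically("bar");  /* a static-binding system would reject this at build time */
    return 0;
}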
Contrasts
The main strength of static strategies is that the program translator is much more aware of the programmer's intent. This makes it easier to:
catch many common errors early, during the build phase
build refactoring tools
incur a significant amount of the computational cost required to determine the executable form of the program only once, at build time
The main strength of dynamic strategies is that they are much easier to implement, meaning that:
a working dynamic environment can be created at a fraction of the cost of a static one
it is easier to add language features that might be very challenging to check statically
it is easier to handle situations that require self-modifying code
Typing - refers to variable types and whether variables are allowed to change type during program execution
http://en.wikipedia.org/wiki/Type_system#Type_checking
Binding - this, as you can read below, can refer to variable binding or library binding
http://en.wikipedia.org/wiki/Binding_%28computer_science%29#Language_or_Name_binding
