CAPL - How to use a sysvar struct element with testWaitForSignalMatch

In my CANoe CAPL script I have defined a system variable MyVariable of a custom struct type with Field1, Field2, and Field3 as members.
Access to the sysvar works like this:
@sysvar::Data::MyVariable.Field1 = 3;
In my test I want to wait until a specific value is written to Field1.
I tried the following:
testWaitForSignalMatch(sysvar::Data::MyVariable.Field1, 0, 1000); // Wait 1000ms for Field1 set to 0
Using this I get a compilation error:
Error 1002 at (200,100): parse error.
Does anybody know how to use this correctly?
I am using CANoe version 10.0 (SP7). According to the Availability note in the help page, this should be supported.
The help says:
long TestWaitForSignalMatch (sysvar aSysVar, int64 aCompareValue, dword aTimeout); // form 4
aSysVar
System variable to be queried.
May also be a specific element of a variable of type struct or generic array.
Using testWaitForSignalMatch with non-struct-elements works fine.

The correct way to use testWaitForSignalMatch with struct members is:
testWaitForSignalMatch(sysvarMember::Data::MyVariable.Field1, 0, 1000); // Wait 1000ms for Field1 set to 0
You have to use sysvarMember instead of sysvar when referencing a struct member.
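For completeness, here is a minimal hedged sketch of how the call could sit inside a CAPL test case. The pass/fail handling assumes the usual CAPL convention that testWaitForSignalMatch returns 1 when the expected value was seen; verify against your CANoe help.

testcase CheckField1()
{
  long result;

  // Wait up to 1000 ms for MyVariable.Field1 to be set to 0
  result = testWaitForSignalMatch(sysvarMember::Data::MyVariable.Field1, 0, 1000);

  if (result == 1)
    testStepPass("Field1", "Field1 was set to 0 within 1000 ms");
  else
    testStepFail("Field1", "Timeout waiting for Field1 == 0");
}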

Related

How to pass parameters loaded from configuration file to a procedural macro function?

Here is the problem I am trying to solve. I have multiple procedural macro functions that generate tables of pre-computed values. Currently my procedural macros take parameters in the form of literal integers. I would like to pass these parameters from a configuration file instead. I could rewrite my functions to load the parameters from the macros themselves; however, I want to keep the configuration in a top-level crate, like in this example:
top-level-crate/
config/
params.yaml
macro1-crate/
macro2-crate/
Since the input to a macro is syntax tokens, not run-time values, I am not able to load a file from top-level-crate and pass the params.
use macro1_crate::gen_table1;
use macro2_crate::gen_table2;

const TABLE1: [f32; 100] = gen_table1!(500, 123, 499);
const TABLE2: [f32; 100] = gen_table2!(1, 3);

fn main() {
    // use TABLE1 and TABLE2 to do further computation.
}
I would like to be able to pass params to gen_table1 and gen_table2 from a configuration file like this:
use macro1_crate::gen_table1;
use macro2_crate::gen_table2;

// Load values PARAM1, PARAM2, PARAM3, PARAM4, PARAM5
const TABLE1: [f32; 100] = gen_table1!(PARAM1, PARAM2, PARAM3);
const TABLE2: [f32; 100] = gen_table2!(PARAM4, PARAM5);

fn main() {
    // use TABLE1 and TABLE2 to do further computation.
}
The obvious problem is that PARAM1, PARAM2, PARAM3, PARAM4, PARAM5 are runtime values, and proc macros rely on build time information to generate tables.
One option I am considering is to create yet another proc macro specifically to load the configuration into some sort of data structure built from quote! tokens, and then feed this into the other macros. However, this feels hackish, the configuration file needs to be loaded several times, and the params data structure needs to be tightly coupled across the macros. The code might look like this:
use macro1_crate::gen_table1;
use macro2_crate::gen_table2;

const TABLE1: [f32; 100] = gen_table1!(myparams!());
const TABLE2: [f32; 100] = gen_table2!(myparams!());

fn main() {
    // use TABLE1 and TABLE2 to do further computation.
}
Any improvements or further suggestions?
gen_table1!(myparams!()); won't work: macros are not expanded from the inside out the way function calls are evaluated. Your gen_table1 macro will receive the literal token stream myparams ! () and won't be able to evaluate that macro, so it never has access to the "return value" of myparams.
Right now, I only see one real way to do what you want: load the parameters from the file in gen_table1 and gen_table2, and just pass the filename of the file containing the parameters. For example:
const TABLE1: [f32; 100] = gen_table1!("../config/params.yaml");
const TABLE2: [f32; 100] = gen_table2!("../config/params.yaml");
Of course, this could lead to duplicate code in these two macros. But that should be solvable with the usual tools: extract that parameter loading into a function (in case both macros live in the same crate) or into an additional utility crate (in case the two macros live in different crates).
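A hedged sketch of what that file-loading variant could look like inside macro1-crate. Everything here beyond the gen_table1 name is an assumption: serde_yaml as the parser, a params.yaml that is a flat YAML list of numbers, and path resolution via CARGO_MANIFEST_DIR (so for the layout above the invocation would be gen_table1!("config/params.yaml") rather than a source-relative "../config/params.yaml"):

// macro1-crate/src/lib.rs -- illustrative sketch only
use proc_macro::TokenStream;
use quote::quote;
use syn::{parse_macro_input, LitStr};

#[proc_macro]
pub fn gen_table1(input: TokenStream) -> TokenStream {
    // The macro receives the path as a string literal token, not a runtime value.
    let rel_path = parse_macro_input!(input as LitStr).value();

    // Resolve the path relative to the crate that invokes the macro.
    let base = std::env::var("CARGO_MANIFEST_DIR").expect("CARGO_MANIFEST_DIR not set");
    let text = std::fs::read_to_string(std::path::Path::new(&base).join(rel_path))
        .expect("cannot read params file");

    // Assumed config shape: a plain YAML list of numbers.
    let params: Vec<f32> = serde_yaml::from_str(&text).expect("malformed params file");

    // Emit the pre-computed table as an array literal.
    quote!([ #( #params ),* ]).into()
}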
You also keep mentioning the term "runtime values". I think you mean "a const value, not a literal" and that you are referring to something like this:
const PARAM1: u32 = load_param!();
const TABLE1: [f32; 100] = gen_table1!(PARAM1); // <-- this does not work as expected!
Because here, again, your macro receives the literal token stream PARAM1 and not the value of said parameter.
So yes, I think that's what you mean by "runtime value". Granted, I don't have a better term for this right now, but "runtime value" is misleading/wrong because the value is available at compile time. If you were talking about an actual runtime value, i.e. a value that is ONLY knowable at runtime AFTER compilation is already done, then it would be impossible to do what you want. That's because proc macros run once at compile time, and never at runtime.

My segmented picker has normal Int values as tags; how is this passed to and from Core Data?

My SwiftUI segmented control picker uses plain Int values (".tag(1)" etc.) for its selection.
Core Data only has Int16, Int32 & Int64 options to choose from, and with any of those options my picker selection and Core Data refuse to talk to each other.
How is this (??simple??) task achieved, please?
I've tried every numeric option within Core Data, including Int16 through Int64, doubles and floats; all of them break my code or simply don't work.
Picker(selection: $addDogVM.gender, label: Text("Gender?")) {
    Text("Boy ♂").tag(1)
    Text("?").tag(2)
    Text("Girl ♀").tag(3)
}
I expected any of the 3 Core Data Int options to work out of the box and to be compatible with the (standard) Int used by the picker.
Each element of a segmented control is represented by an index of type Int, and this index commences at 0.
So using your example of a segmented control with three segments (Boy ♂, ?, Girl ♀), the segments are represented by the three indexes 0, 1 & 2.
If the user selects the segmented control that represents Girl ♀, then...
segmentedControl.selectedSegmentIndex = 2
So when storing a value using the Core Data framework that is to be represented as a segmented control index in the UI, I always commence with 0.
Everything you read from this point onwards is programmer preference; to be clear, there are a number of ways to achieve the same outcome, and you should choose the one that best suits you and your coding style. Note also that this can be confusing for a newcomer, so I would encourage patience. My only advice: keep things as simple as possible until you've tested and debugged enough to understand the differences.
So to continue:
The Apple Documentation states that...
...on 64-bit platforms, Int is the same size as Int64.
So in the Core Data model editor (.xcdatamodeld file), I choose to apply an Integer 64 attribute type for any value that will be used as an Int in my code.
Also, some time ago I read that if there is no reason to use Integer 16 or Integer 32, you should default to Integer 64 in the object model graph. (I assume Integer 16 and Integer 32 are kept for backward compatibility.) If I find that reference I'll link it here.
I could write here about scalar attribute types and manually writing your managed object subclass/es (by selecting Codegen = Manual/None in the attribute inspector), but honestly such added detail would only complicate matters.
So your "automatically generated by Core Data" managed object subclass/es (NSManagedObject) will use the optional NSNumber? wrapper...
You will therefore need to convert your persisted/saved data in your code.
I do this in two places... when I access the data and when I persist the data.
(Note: I assume your entity is of type Dog and that an instance dog exists, i.e. let dog = Dog())
// access
tempGender = dog.gender as? Int
// save
dog.gender = tempGender as NSNumber?
In between, I use a "temp" var property of type Int to work with the segmented control.
// temporary property to use with segmented control
private var tempGender: Int?
UPDATE
I do the last part a little differently now...
Rather than convert the data in code, I made a simple extension to my managed object subclass to execute the conversion. So rather than accessing the Core Data attribute directly and manipulating the data in code, now I instead use this convenience var.
extension Dog {
    var genderAsInt: Int {
        get {
            guard let gender = self.gender else { return 0 }
            return Int(truncating: gender)
        }
        set {
            self.gender = NSNumber(value: newValue)
        }
    }
}
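A brief hedged usage sketch (viewContext stands in for whatever NSManagedObjectContext you use; the names mirror the extension above):

// create or fetch a Dog instance, then read/write via the convenience var
let dog = Dog(context: viewContext)
dog.genderAsInt = 2        // stored as NSNumber(value: 2)
print(dog.genderAsInt)     // prints 2; a nil gender attribute reads back as 0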
Your picker code...
Picker(selection: $addDogVM.genderAsInt, label: Text("Gender?")) {
    Text("Boy ♂").tag(0)
    Text("?").tag(1)
    Text("Girl ♀").tag(2)
}
Any questions, ask in the comments.

Perl 6 - Is it possible to create an attribute trait that sets a meta-attribute?

I am trying to create an attribute trait. The use case is to mark some attributes of a class as "crudable" in the context of an objects-to-documents mapping, while others are not.
role crud {
    has Bool $.crud is default(True);
}

multi trait_mod:<is>(Attribute $a, crud, $arg) {
    $a.container.VAR does crud($arg);
}
class Foo {
    has $.bar is rw;

    # Provide extra nested information
    has $.baz is rw is crud(True);
}
By reading and adapting some example code, I managed to get something that seems to do what I want. Here is a snippet with test case.
When I instantiate a new Foo object and set the $.bar attribute (that is not crud), it looks like that:
.Foo #0
├ $.bar is rw = 123456789
└ $.baz is rw = .Scalar+{crud} #1
└ $.crud +{crud} = True
What I understand from this is that the $.baz attribute got what I call a meta-attribute, independent of its potential value.
It looks good to me (if I understood what I did correctly and my use of traits is not a dirty hack). It is possible to reach $foo.baz.crud, which is True. Though, I don't understand very well what .Scalar+{crud} means, nor whether I can set something there, and how.
When I try to set the $.baz instance attribute, this error is returned:
Cannot modify an immutable Scalar+{crud} (Scalar+{crud}.new(crud => Bool::True))
in block <unit> at t/08-attribute-trait.t line 30
Note: this is the closest thing to a working solution I managed to get. I don't need different crud settings for different instances of Foo.
I never want to change the value of the boolean; in fact, once the object is instantiated, I just want to provide it to attributes with is crud. I am not even interested in passing a True or False argument: if it were possible to just set the boolean trait attribute to True by default, that would be enough. I didn't manage to do this, though, with something like:
multi trait_mod:<is>(Attribute $a, :$crud!) {
    # Something like this
    $a.container.VAR does set-crud;
}
class Foo {
    has $.bar is rw;
    has $.baz is rw is crud;
}
Am I trying to do something impossible? How could I adapt this code to achieve this use case?
There are several things going on here. First of all, the signature of the trait_mod looks to be wrong. Secondly, there appears to be a bad interaction when the name of a trait is the same as an existing role. I believe this should be an NYI exception, but apparently it either goes wrong in parsing, or it goes wrong in trying to produce the error message.
Anyways, I think this is what you want:
role CRUD {}  # since CRUD is used as an acronym, I chose to use uppercase here

multi trait_mod:<is>(Attribute:D $a, :$crud!) {  # note the required named argument!
    $a.^mixin: CRUD if $crud;  # mix in the CRUD role if a True value was given
}

class A {
    has $.a is crud(False);  # too bad "is !crud" is invalid syntax
    has $.b is crud;
}

say "$_.name(): { $_ ~~ CRUD }" for A.^attributes;  # $!a: False, $!b: True
Hope this helps.

How to get protobuf.js to output enum strings instead of integers

I'm using the latest protobuf.js with Node.js 4.4.5.
I'm currently struggling to get protobuf.js to output the string names of enums instead of integers. I tried several suggestions, but none of them worked:
https://github.com/dcodeIO/ProtoBuf.js/issues/97
https://github.com/dcodeIO/protobuf.js/issues/349
I guess the first one fails because of API changes in protobuf.js. For the second one, I can use the suggested solution partially, but if the message is nested within other messages, the builder seems to fall back to the integer values, although the string values have been explicitly set.
Ideally, I'd like to override the function that produces the enum values, but I have a hard time finding the correct one with the debugger. Or is there a better way to achieve this for deeply nested objects?
The generated JS code from protoc has a map in one direction only, e.g.
proto.foo.Bar.Myenum = {
    HEY: 0,
    HO: 1
};
The rationale for this is here, but you have to do the reverse lookup in your own JS code. There are lots of easy solutions for this. I used the one at https://stackoverflow.com/a/59360329/449347, i.e.
Generic reverse mapper function ...
export function getKey(map, val) {
    return Object.keys(map).find(key => map[key] === val);
}
UT ...
import { Bar } from "js/proto/bar_pb";
expect(getKey(proto.foo.Bar.Myenum, 0)).toEqual("HEY");
expect(getKey(proto.foo.Bar.Myenum, 1)).toEqual("HO");
expect(getKey(proto.foo.Bar.Myenum, 99)).toBeUndefined();
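If you are using protobuf.js 6.x itself (as in the question) rather than protoc-generated code, the library's conversion options can do this directly; a hedged sketch, assuming a foo.Bar message type and an encoded buffer at hand:

const protobuf = require("protobufjs");

protobuf.load("bar.proto").then(root => {
    const Bar = root.lookupType("foo.Bar");
    const message = Bar.decode(buffer); // buffer: your encoded payload

    // enums: String makes toObject emit enum names instead of integers,
    // for nested messages as well.
    const obj = Bar.toObject(message, { enums: String });
    console.log(obj);
});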

NgAttr value initially empty when containing mustache directive in AngularDart

Chapter 3 of the AngularDart tutorial defines a rating @NgComponent (see excerpt below), which is used in index.html like this:
<rating max-rating="5" rating="ctrl.selectedRecipe.rating"></rating>
In that chapter it is also suggested that the max-rating @NgAttr can be set via a {{...}} expression like this:
<rating max-rating="{{ctrl.max}}" rating="ctrl.selectedRecipe.rating"></rating>
In the RecipeController I have simply declared:
int max = 5;
If I add print("maxRating('$value')") at the top of the component's maxRating() setter body (see below), then when running the app I get the following output:
maxRating('') // printed 7 times
maxRating('5') // printed 7 times
Questions: Why is the value initially empty? I assume it is because the interpolation has not been done yet, but then why is the setter called at all before the value is "ready"?
Excerpt of RatingComponent class definition:
@NgComponent(
  selector: 'rating', ...
  publishAs: 'cmp'
)
class RatingComponent {
  ...

  @NgTwoWay('rating')
  int rating;

  @NgAttr('max-rating')
  set maxRating(String value) {
    var count = value == null ? 5 : int.parse(value);
    stars = new List.generate(count, (i) => i+1);
  }
As far as I know, AngularDart is very eager about applying values. As soon as Angular is running, it starts applying values, I guess to provide a feeling of responsiveness.
I've been bitten by this too and had to write more than one workaround for not yet initialized values.
The setter and getters are called by the binding mechanism to stabilize the values, as some values may depend on each other and the mechanism "brute forces" this by just setting and getting values multiple times (7 by default, IIRC).
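A common workaround, shown here as a hedged sketch against the excerpt above, is to let the setter ignore the initial, not-yet-interpolated value:

@NgAttr('max-rating')
set maxRating(String value) {
  // The binding fires with '' before {{ctrl.max}} has been interpolated;
  // skip that round instead of parsing the empty string.
  if (value == null || value.isEmpty) return;
  stars = new List.generate(int.parse(value), (i) => i + 1);
}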
