This is in reference to this question. I checked our test interface and we are only passing the V93k primary params to the test_suites.add method.
V93K_PRIMARIES = [:lev_equ_set, :lev_spec_set, :timset, :tim_equ_set, :tim_spec_set, :seqlbl, :levset]

primary_tm_params = {}.tap do |primary_hash|
  V93K_PRIMARIES.each do |param|
    primary_hash[param] = tm_params.delete(param) unless tm_params[param].nil?
  end
end

# Create the test suite
t = test_suites.add(test_name, primary_tm_params)
t.test_method = test_methods.amd93k.send(options[:tm].to_sym, tm_params)
V93K_PRIMARIES.each do |primary|
  t.send("#{primary}=", primary_tm_params[primary]) unless primary_tm_params[primary].nil?
end

# Insert the test into the flow
test(t, tm_params)
When I set a breakpoint, I can indeed see that those options were missing. Here they are after updating the code:
:ip=>:L2,
:testmode=>:speed,
:cond=>:pmax,
:if_failed=>:cpu_pmin,
:testtype=>:cpu,
:test_ip=>:bist,
:tm=>"Bist"}
And here is the .tf file generated from the original two tests in the original question:
run_and_branch(cpu_L2_speed_pmin_965EA18)
then
{
}
else
{
#CPU_PMIN_965EA18_FAILED = 1;
}
if #CPU_PMIN_965EA18_FAILED == 1 then
{
run(cpu_L2_speed_pmax_965EA18);
}
else
{
}
I think we have it figured out, thanks very much!
The normal approach to this is just to pass everything to flow.test, rather than a subset of the options passed from the flow.
It will only act on the options it recognizes, which are essentially the flow control parameters (:id, :if_failed, :unless_enabled, etc.) and the test and bin number parameters; everything else is simply ignored.
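For illustration, the interface method from the question then collapses to something like this (a sketch reusing the question's own names, not an exact Origen API reference):

# Keep the suite/test-method setup from the question, but hand flow.test the
# full, unfiltered options hash; flow control keys such as :id, :if_failed and
# :unless_enabled are picked up automatically and everything else is ignored.
t = test_suites.add(test_name, primary_tm_params)
t.test_method = test_methods.amd93k.send(options[:tm].to_sym, tm_params)
test(t, tm_params)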
Related
When trying to use in each statements like the following, I get an unknown identifier error.
dml 1.4;

param MACRO = true;

#if (MACRO) {
    in each bank {
        in each register {
            param something = 1;
        }
    }
}
At compile time this errors out with the following message:
/modules/test-device/test-device.dml:179:6: error: unknown identifier: 'MACRO'
This is despite the MACRO value being defined in the same file.
I know conditional in each statements are not allowed in DML, and there is even a specific error for it: "conditional 'in each' is not allowed [ECONDINEACH]"
But I am getting a different error and the following snippet works with no problem:
dml 1.4;

#if (dml_1_2) {
    in each bank {
        in each register {
            param something = 1;
        }
    }
}
So why am I getting a different error, and is there a way to get around this?
As you mentioned, some statements like in each, but also others like typedef, template, import, etc., are generally disallowed directly inside an #if. There is a long-standing DML feature request to soften this restriction; in particular, this was critically needed during the DML 1.2 to DML 1.4 migration. The restriction was partially softened by adding a hack that permits top-level #if statements containing forbidden statements, as long as the condition only refers to some known constants (true, false and dml_1_2).
Technically, this workaround is implemented by treating top-level #if statements as completely separate constructs depending on whether the body contains forbidden statements. If it does, the condition is evaluated in a special variable scope that contains only the three symbols true, false and dml_1_2. This explains why the error message changes from "conditional 'in each' is not allowed" to "unknown identifier".
In your concrete #if (MACRO) example, I don't know a valid way to express that; however, in similar situations you can often solve the problem by making sure the in each statement appears in a subobject of the #if statement; e.g., if you have:
bank regs {
    #if (MACRO) {
        // compile error: 'in each' directly inside '#if'
        in each register {
            param something = 1;
        }
    }
}
then you can change it to:
#if (MACRO) {
    bank regs {
        // ok: 'in each' in a subobject of the '#if'
        in each register {
            param something = 1;
        }
    }
}
Another approach is sometimes applicable if the MACRO param relates to the choice of code generator for bank skeletons. For example, if you generate DML code for bank skeletons from IPXACT using two different frameworks, say X and Y, and MACRO determines which of these frameworks was used, then chances are that each framework instantiates a common template, say x_register vs y_register, on all generated registers, or a common template x_bank vs y_bank on all banks. If you can identify such a template, then you can write:
in each (x_register, register) {
    // applied to all registers generated by the X framework
    param something = 1;
}
or:
in each x_bank {
    in each register {
        param something = 1;
    }
}
I have a proto message:
syntax = "proto3";
import "google/protobuf/any.proto";
message Task {
repeated google.protobuf.Any targets = 1;
// ...
}
message Target {
string name = 1;
// ...
}
How should I add Target messages into Task.targets?
In the official docs I found info on how to assign a value to a single Any field; however, in my case I have a repeated Any field.
Edit: Task.targets may contain different types of targets, which is why Any is used. A single Target message is used here just for a minimal reproducible example.
Thanks @Justin Schoen. According to https://developers.google.com/protocol-buffers/docs/reference/java/com/google/protobuf/Any, you first need to create an Any object, then Pack a Target (or any other message type) into it before appending it to the repeated list.
from google.protobuf.any_pb2 import Any

task = Task()
packed = Any()
packed.Pack(Target())        # wrap the Target inside an Any
task.targets.append(packed)  # append the packed Any to the repeated field
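For reading the values back out, something like this should work (a sketch using the standard Any Is()/Unpack() API):

for packed in task.targets:
    # each element is an Any; check its type before unpacking
    if packed.Is(Target.DESCRIPTOR):
        target = Target()
        packed.Unpack(target)
        print(target.name)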
I have limited knowledge of the any type, but I would think it could be treated as if it were a repeated list of Target messages.
Python Code:
task_targets = []
task_targets.append(<insert_pb2_import>.Target(name='test'))
return <insert_pb2_import>.Task(targets=task_targets)
After searching for an answer myself, I found this thread to be the most relevant so I'll post my solution here if it helps anyone (but in Java/Scala).
If you want
repeated google.protobuf.Any targets = 1;
and targets can be any value (string, bool, int, etc.), this is how I did it in Scala/Java:
val task = Task.newBuilder()
  .addTargets(Any.pack(StringValue.of("iss")))
  .addTargets(Any.pack(Int32Value.of(25544)))
  .addTargets(Any.pack(DoubleValue.of(1004.882447947814)))
  .addTargets(Any.pack(DoubleValue.of(84.90917890132)))
  .addTargets(Any.pack(DoubleValue.of(14.620929684)))
  .addTargets(Any.pack(StringValue.of("kilometers")))
  .build()
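Reading the values back out works with Any.is()/Any.unpack(); here is a rough sketch in the same style (the variable names are mine, and it assumes Scala 2.13 for CollectionConverters):

import com.google.protobuf.{DoubleValue, Int32Value, StringValue}
import scala.jdk.CollectionConverters._

// dispatch on the packed type before unpacking each element
for (any <- task.getTargetsList.asScala) {
  if (any.is(classOf[StringValue]))      println(any.unpack(classOf[StringValue]).getValue)
  else if (any.is(classOf[Int32Value]))  println(any.unpack(classOf[Int32Value]).getValue)
  else if (any.is(classOf[DoubleValue])) println(any.unpack(classOf[DoubleValue]).getValue)
}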
After playing around for some time I decided to revise the solution that uses repeated Any. Here is some advice for those who get stuck in the same place:
try to use specific types instead of Any.
A workaround for my situation is to create messages of types SpecificTargetSet1, SpecificTargetSet2, etc., that contain specific targets. The Task proto file would look like:
message Task {
  google.protobuf.Any target_set = 1;
}
Target set proto file:
message SpecificTargetSet1 {
  repeated SpecificTarget1 targets = 1;
}
And now a task can be created like this:
target = Target()
target.name = "Some name"
target_set = SpecificTargetSet1()
target_set.targets.append(target)
task = Task()
task.target_set.Pack(target_set)
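Reading a task back then becomes a matter of checking which set type was packed; a short sketch:

if task.target_set.Is(SpecificTargetSet1.DESCRIPTOR):
    target_set = SpecificTargetSet1()
    task.target_set.Unpack(target_set)
    for t in target_set.targets:
        print(t.name)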
I do not mark my answer as correct, as it is just a workaround.
I would like to know if it is possible to retrieve the name of a variable.
For example if I have a method:
def printSomething(def something) {
    // instead of having the literal String something, I want to be able to use the name of the variable that was passed
    println('something is: ' + something)
}
If I call this method as follows:
def ordinary = 58
printSomething(ordinary)
I want to get:
ordinary is 58
On the other hand if I call this method like this:
def extraOrdinary = 67
printSomething(extraOrdinary)
I want to get:
extraOrdinary is 67
Edit
I need the variable name because I have this snippet of code which runs before each TestSuite in Katalon Studio; basically, it gives you the flexibility of passing GlobalVariables using a katalon.features file. The idea is from: kazurayam/KatalonPropertiesDemo
@BeforeTestSuite
def sampleBeforeTestSuite(TestSuiteContext testSuiteContext) {
    KatalonProperties props = new KatalonProperties()

    // get appropriate value for GlobalVariable.hostname loaded from katalon.properties files
    WebUI.comment(">>> GlobalVariable.G_Url default value: \'${GlobalVariable.G_Url}\'");

    // gets the internal value of GlobalVariable.G_Url; if it's empty then use the one from katalon.features file
    String preferedHostname = props.getProperty('GlobalVariable.G_Url')
    if (preferedHostname != null) {
        GlobalVariable.G_Url = preferedHostname;
        WebUI.comment(">>> GlobalVariable.G_Url new value: \'${preferedHostname}\'");
    } else {
        WebUI.comment(">>> GlobalVariable.G_Url stays unchanged");
    }

    // doing the same for other variables is a lot of duplicate code
}
Now this only handles 1 variable value, if I do this for say 20 variables, that is a lot of duplicate code, so I wanted to create a helper function:
def setProperty(KatalonProperties props, GlobalVariable var) {
    WebUI.comment(">>> " + var.getName() + " default value: \'${var}\'");
    // gets the internal value of var; if it's null then use the one from katalon.features file
    GlobalVariable preferedVar = props.getProperty(var.getName())
    if (preferedVar != null) {
        var = preferedVar;
        WebUI.comment(">>> " + var.getName() + " new value: \'${preferedVar}\'");
    } else {
        WebUI.comment(">>> " + var.getName() + " stays unchanged");
    }
}
Here I just put var.getName() to explain what I am looking for; that is just a method I assume exists.
Yes, this is possible with ASTTransformations or with Macros (Groovy 2.5+).
I currently don't have a proper dev environment, but here are some pointers:
Note that both options are not trivial and are not something I would recommend to a Groovy novice; you'll have to do some research. If I remember correctly, either option requires a separate build/project from your calling code to work reliably. Also, either of them might give you obscure and hard-to-debug compile time errors, for example when your code expects a variable as a parameter but a literal or a method call is passed. So: there be dragons. That being said: I have worked a lot with these things and they can be really fun ;)
Groovy Documentation for Macros
If you are on Groovy 2.5+ you can use macros. For your use case, take a look at the @Macro methods section. Your method will have two parameters, MacroContext macroContext and MethodCallExpression callExpression, the latter being the interesting one. The MethodCallExpression has the getArguments() method, which allows you to access the abstract syntax tree nodes that were passed to the method as parameters. In your case that should be a VariableExpression, which has the getName() method to give you the name you're looking for.
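To make that a bit more concrete, here is a rough, untested sketch of such a macro method (Groovy 2.5+; the class and method names are mine, the argument is declared as a plain Expression so both variables and other expressions match, and the class has to live in a separate module and be registered as an extension module, see the macro documentation):

import org.codehaus.groovy.ast.expr.Expression
import org.codehaus.groovy.ast.expr.VariableExpression
import org.codehaus.groovy.macro.runtime.Macro
import org.codehaus.groovy.macro.runtime.MacroContext

import static org.codehaus.groovy.ast.tools.GeneralUtils.*

class PrintSomethingMacro {
    // replaces each printSomething(x) call at compile time with
    // println('x is: ' + x), using the source-level name of the argument
    @Macro
    static Expression printSomething(MacroContext ctx, Expression expr) {
        String name = expr instanceof VariableExpression ? expr.name : expr.text
        return callThisX('println', args(plusX(constX(name + ' is: '), expr)))
    }
}

With that in place, printSomething(extraOrdinary) should expand to println('extraOrdinary is: ' + extraOrdinary).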
Developing AST transformations
This is the more complicated version. You'll still get to the same VariableExpression as with the macro method, but it will be more tedious to get there, as you'll have to identify the correct MethodCallExpression yourself: you start from a ClassNode and work your way down to the VariableExpression. I would recommend using a local transformation and creating an annotation. But identifying the correct MethodCallExpression is not trivial.
No, it's not possible.
However, think about using a map as a parameter and passing the name and value of the property:
def printSomething(Map m) {
    println m
}
printSomething(ordinary:58)
printSomething(extraOrdinary:67)
printSomething(ordinary:11,extraOrdinary:22)
This will output:
[ordinary:58]
[extraOrdinary:67]
[ordinary:11, extraOrdinary:22]
I am trying to remove the PreSearch filter and my code is below. How can I achieve this?
Xrm.Page.getControl("productid").removePreSearch(function () {
Object
});
Xrm.Page.getControl("productid").addPreSearch(function () {
fetchxml2();
});
function fetchxml2() {
var fetchXml1 = "<filter type='and'>"
fetchXml1 += "<condition attribute='productid' operator='in' >";
for (var i = 0; i < Itemid.length; i++) {
fetchXml1 += "<value>" + Itemid[i] + "</value>";
}
fetchXml1 += "</condition>";
fetchXml1 += "</filter>";
Xrm.Page.getControl("productid").addCustomFilter(fetchXml1);
//Xrm.Page.getControl("productid").removePreSearch(fetchXml1);
};
In order to be able to remove the handler via removePreSearch, avoid using an anonymous function by creating a named function and using that in both addPreSearch and removePreSearch:
function preSearchHandler() {
    fetchxml2();
}

Xrm.Page.getControl("productid").removePreSearch(preSearchHandler);
Xrm.Page.getControl("productid").addPreSearch(preSearchHandler);
Just wanted to add this to the discussion:
If you, say, have three different custom filters on a lookup field, the functionality will stack when you apply a new filter.
For example, if you have an option set that calls addPreSearch() on the field and you select all three different options, you will have all three filters applied to the field simultaneously.
Say the option set has three options, [option A, option B, option C], and the corresponding functions are, for simplicity, [filterA, filterB, filterC]. On the change event of the option set, for each filter that you apply, simply remove the other two (in this case):
if (optionSet == 810500000) { // option A
    Xrm.Page.getControl('lookup').addPreSearch(filterA);
    Xrm.Page.getControl('lookup').removePreSearch(filterB);
    Xrm.Page.getControl('lookup').removePreSearch(filterC);
}
else if (optionSet == 810500001) { // option B
    Xrm.Page.getControl('lookup').addPreSearch(filterB);
    Xrm.Page.getControl('lookup').removePreSearch(filterA);
    Xrm.Page.getControl('lookup').removePreSearch(filterC);
} // so on and so forth
I hope this helps someone out. I was able to apply custom filters to a lookup based on four distinct selections and remove the "stackable" filters by addition and removal in this manner. It's a little ugly, but, hey, it works. At the end of the day, sometimes the most elegant solution is to just win, win win win win.
If you need more context (fetchXml) and such, I can post that, too...but it doesn't really go along with the point I was trying to make. These filters can be applied simultaneously! That's the main idea I wanted to convey here.
I have records that have an index attribute to maintain their position in relation to each other.
I have a plugin that performs a renumbering operation on these records when the index is changed or a new one is created. There are specific rules that apply to items at the first and last positions in the list.
If a new (or changed existing) item is inserted into the middle (not technically the middle, just somewhere between start and end) of the list, a renumbering kicks off to make room for the record.
This renumbering process fires in a new execution pipeline: we are updating record D, and when I tell record E to change (to make room for D), that of course fires the plugin on the Update message.
This renumbering is fine until we reach the end of the list where the plugin then gets into a loop with the first business rule that maintains the first and last record differently.
So I am trying to think of ways to pass a flag to the execution context spawned by the renumbering process so the recursion skips the boundary edge business rules if IsRenumbering == true.
My thoughts / ideas:
I have thought of using the Depth check (> 1), but that isn't a reliable value as I can't explicitly turn it on or off; it may happen to work, but that is not engineering a solid solution, just hoping nothing goes bump. Further, a colleague far more knowledgeable than I said that when a workflow calls a plugin the depth value is off and can't be trusted.
All my variables are scoped at the Execute level so as to avoid variable pollution at the class level. However, if I had a dictionary, tuple, or something similar at the class level, with one value being the thread id and the other the flag value, then perhaps my subsequent execution context could check whether the same owning thread id had any values entered.
Any thoughts or other ideas on how to pass context information to a new pipeline would be greatly appreciated.
Per Nicknow's suggestion I tried SharedVariables, but they seem to be going out of scope:
First time firing post op:
if (base.Stage == EXrmPluginStepStage.PostOperation)
{
    ...snip...
    foreach (var item in RenumberSet)
    {
        Context.ParentContext.SharedVariables[recordrenumbering] = "googly";
        Entity renumrec = new Entity("abcd") { Id = item.Id };

        #region We either add or subtract indexes based upon sortdir
        ...snip...
        renumrec["abc_indexfield"] = TmpIdx + 1;
        break;
        .....snip.....
        #endregion

        OrganizationService.Update(renumrec);
    }
}
Now we come into the pre-op of the recursion process kicked off by the above post-op OrganizationService.Update(renumrec), and based upon this check it seems the shared variable didn't carry over:
if (!Context.SharedVariables.Contains(recordrenumbering))
{
    //Trace.Trace("Null Set");
    //Context.SharedVariables[recordrenumbering] = IsRenumbering;
    Context.SharedVariables[recordrenumbering] = "Null Set";
}
Throwing an InvalidPluginExecutionException reveals:
Sanity Checks:
Depth : 2
Entity: ...
Message: Update
Stage: PreOperation [20]
User: 065507fe-86df-e311-95fe-00155d050605
Initiating User: 065507fe-86df-e311-95fe-00155d050605
ContextEntityName: ....
ContextParentEntityName: ....
....
IsRenumbering: Null Set
What you are looking for is IExecutionContext.SharedVariables. Whatever you add here is available throughout the entire transaction. Since you'll have child pipelines, you'll want to look at the ParentContext for the value. This can all get a little tricky, so be sure to do a lot of testing; I've run into many issues with SharedVariables and looping operations in Dynamics CRM.
Here is some sample (very untested) code to get you started.
public static bool GetIsRenumbering(IPluginExecutionContext pluginContext)
{
    var keyName = "IsRenumbering";
    var ctx = pluginContext;
    while (ctx != null)
    {
        if (ctx.SharedVariables.Contains(keyName))
        {
            return (bool)ctx.SharedVariables[keyName];
        }
        else ctx = ctx.ParentContext;
    }
    return false;
}

public static void SetIsRenumbering(IPluginExecutionContext pluginContext)
{
    var keyName = "IsRenumbering";
    var ctx = pluginContext;
    ctx.SharedVariables.Add(keyName, true);
}
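A possible way to wire those helpers into the renumbering scenario (a sketch; where exactly the checks go depends on where your boundary rules fire):

// in Execute(), before applying the first/last boundary business rules:
if (GetIsRenumbering(context))
    return; // this update was triggered by a parent renumbering pipeline

// before issuing the child updates that renumber the neighbouring records:
SetIsRenumbering(context);
OrganizationService.Update(renumrec);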
A very simple solution: add a bit field to the entity called "DisableIndexRecalculation." When your first plugin runs, make sure to set that field to true for all of your updates. In the same plugin, check to see if "DisableIndexRecalculation" is set to true: if so, set it to null (by removing it from the TargetEntity entirely) and stop executing the plugin. If it is null, do your index recalculation.
Because you are immediately removing the field from the TargetEntity if it is true, the value will never be persisted to the database, so there will be no performance penalty.
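A hedged sketch of that pattern; the field name (new_disableindexrecalculation) and the surrounding variable names are assumptions, not an existing schema:

// inside the Update plugin, before any renumbering logic
var target = (Entity)context.InputParameters["Target"];

if (target.GetAttributeValue<bool>("new_disableindexrecalculation"))
{
    // a renumbering update set the flag: strip it so it never reaches the
    // database, then skip the recalculation entirely
    target.Attributes.Remove("new_disableindexrecalculation");
    return;
}

// normal path: renumber the neighbours, flagging each update we issue so the
// child pipelines skip their own recalculation
renumrec["new_disableindexrecalculation"] = true;
OrganizationService.Update(renumrec);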