element => command in Terraform

I see the code below in https://github.com/terraform-aws-modules/terraform-aws-efs/blob/master/examples/complete/main.tf#L58:
# Mount targets / security group
mount_targets = { for k, v in toset(range(length(local.azs))) :
  element(local.azs, k) => { subnet_id = element(module.vpc.private_subnets, k) }
}
I am trying to understand what => means here, and how it works together with the for loop and element.
Could anyone explain, please?

In this case the => symbol isn't an independent language feature but is instead just one part of the for expression syntax when the result will be a mapping.
A for expression which produces a sequence (a tuple, to be specific) has the following general shape:
[
  for KEY_SYMBOL, VALUE_SYMBOL in SOURCE_COLLECTION : RESULT
  if CONDITION
]
(The KEY_SYMBOL, portion and the if CONDITION portion are both optional.)
The result is a sequence of values that resulted from evaluating RESULT (an expression) for each element of SOURCE_COLLECTION for which CONDITION (another expression) evaluated to true.
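For example, here is a small sequence-form expression with made-up values (not taken from the module you linked):
[for s in ["a", "b", "c"] : upper(s) if s != "b"]
This evaluates to ["A", "C"]: upper(s) plays the role of RESULT and s != "b" is the CONDITION.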
When the result is a sequence we only need to specify one result expression, but when the result is a mapping (specifically an object) we need to specify both the keys and the values, and so the mapping form has that additional portion including the => symbol you're asking about:
{
  for KEY_SYMBOL, VALUE_SYMBOL in SOURCE_COLLECTION : KEY_RESULT => VALUE_RESULT
  if CONDITION
}
The principle is the same here except that for each source element Terraform will evaluate both KEY_RESULT and VALUE_RESULT in order to produce a key/value pair to insert into the resulting mapping.
The => marker here is just some punctuation so that Terraform can unambiguously recognize where the KEY_RESULT ends and where the VALUE_RESULT begins. It has no special meaning aside from being a delimiter inside a mapping-result for expression. You could think of it as serving a similar purpose as the comma between KEY_SYMBOL and VALUE_SYMBOL; it has no meaning of its own, and is only there to mark the boundary between two clauses of the overall expression.
When I read a for expression out loud, I typically pronounce => as "maps to". So with my example above, I might pronounce it as "for each key and value in source collection, key result maps to value result if the condition is true".
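To connect that back to the snippet you quoted, here is a self-contained variant with made-up values (the real module takes them from local.azs and module.vpc.private_subnets):
locals {
  azs        = ["eu-west-1a", "eu-west-1b"]
  subnet_ids = ["subnet-aaaa", "subnet-bbbb"]

  mount_targets = { for k, v in toset(range(length(local.azs))) :
    element(local.azs, k) => { subnet_id = element(local.subnet_ids, k) }
  }
}
Here range(length(local.azs)) produces [0, 1], so for each index k the key is the availability zone name at position k and the value is an object holding the subnet ID at the same position:
{
  "eu-west-1a" = { subnet_id = "subnet-aaaa" }
  "eu-west-1b" = { subnet_id = "subnet-bbbb" }
}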

Lambda expressions use the operator symbol =>, which reads as "goes to." Input parameters are specified on the operator's left side, and statements/expressions on the right. Generally, lambda expressions are not used directly in query syntax but are often used in method calls, and query expressions may contain method calls.
Lambda expression syntax features are as follows:
It is a function without a name.
There are no modifiers, such as overloads and overrides.
The body of the function should contain an expression, rather than a statement.
May contain a call to a function procedure but cannot contain a call to a subprocedure.
The return statement does not exist.
The value returned by the function is only the value of the expression contained in the function body.
The End Function statement does not exist.
The parameters must have specified data types or be inferred.
Does not allow generic parameters.
Does not allow optional and ParamArray parameters.
Lambda expressions provide shorthand for the compiler, allowing it to emit methods assigned to delegates.
The compiler performs automatic type inference on the lambda arguments, which is a key advantage.
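For instance, a minimal C# sketch of a lambda assigned to a delegate (purely illustrative, unrelated to the Terraform snippet above):
using System;

class LambdaDemo
{
    static void Main()
    {
        // x => x * x reads as "x goes to x times x".
        Func<int, int> square = x => x * x;
        Console.WriteLine(square(5)); // prints 25
    }
}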

How can i specify parameter dependent on index value?

I'm trying to port code from DML 1.2 to DML 1.4. Here is part of the code that I ported:
group rx_queue[i < NQUEUES] {
    <...>
    param desctype = i < 64 #? regs.SRRCTL12[i].DESCTYPE.val #: regs.SRRCTL2[i - 64].DESCTYPE.val; // error occurs here
    <...>
}
Error says:
error: non-constant expression: cast(i, int64 ) < 64
How can i specify parameter dependent on index value?
I tried to use if...else instead of the ternary operator, but it says that conditional parameters are not allowed in DML.
Index parameters in DML are slightly magical expressions; when used from within parameters, they can evaluate to either a constant or a variable depending on where the parameter is used from. Consider the following example:
group g[i < 5] {
    param x = i * 4;
    method m() {
        log info: "%d", x;
        log info: "%d", g[4 - i].x;
        log info: "%d", g[2].x;
    }
}
i becomes an implicit local variable within the m method, and in params, indices are a bit like implicit macro parameters. When the compiler encounters x in the first log statement, the param will expand to i * 4 right away. In the second log statement, the x param is taken from an object indexed with the expression 4 - i, so param expansion will instead insert (4 - i) * 4. In the third log statement, the x param is taken from a constant-indexed object, so it expands to 2 * 4, which collapses into the constant 8.
Most uses of desctype will likely happen from contexts where indices are variable, and the #? expression requires a constant boolean as condition, so this will likely give an error as soon as anyone tries to use it.
I would normally advise you to switch from #? to ? in the definition of the desctype param, but that fails in this particular case: DMLC will report error: array index out of bounds on the i - 64 expression. This error is much more confusing, and happens because DMLC automatically evaluates every parameter once with all zero indices, to smoke out misspelled identifiers; this will include evaluation of SRRCTL2[i-64] which collapses into SRRCTL2[-64] which annoys DMLC.
This is arguably a compiler bug; DMLC should probably be more tolerant in this corner. (Note that even if we would remove the zero-indexed validation step from the compiler, your parameter would still give the same error message if it ever would be explicitly referenced with a constant index, like log info: "%d", rx_queue[0].desctype).
The reason why you didn't get an error in DML 1.2 is that DML 1.2 had a single ternary operator ? that unified 1.4's ? and #?; when evaluated with a constant condition the dead branch would be disregarded without checking for errors. This had some strange effects in other situations, but made your particular use case work.
My concrete advice would be to replace the param with a method; this makes all index variables unconditionally non-constant, which avoids the problem:
method desctype() -> (uint64) {
    return i < 64 ? regs.SRRCTL12[i].DESCTYPE.val : regs.SRRCTL2[i - 64].DESCTYPE.val;
}

Is it possible to pattern match arguments in a function in haskell? [duplicate]

I want the function isValid to return True if the first and second arguments match desPlanet and currPlanet from the 3rd argument. Is there a way I can do this?
Not like that.
Patterns can only consist of 3 things:
A constructor, applied to as many sub-patterns as the constructor has arguments. Examples are Nothing, Just (Just x), or [desPlanet, _, currPlanet]:_, etc. The pattern will match values that were constructed with the same constructor, if all of the sub-patterns match the corresponding arguments in the value.
A variable, which will simply match anything and make the matched value available under that variable name. (The variable _ is special, since it doesn't actually make the matched value available, and can thus be used multiple times across the pattern.)
A literal like 123, "foo", 'c', etc., which will be matched by equality checking. (If the literal is polymorphic, like numeric literals with the Num class, then you may need an Eq constraint.)
Note that there is no option to match anything against an existing variable (or against one bound elsewhere in the same pattern). The template you're trying to check against must be statically known, it can't be referred to by a variable. You either match against a specific concrete literal, or you match against a specific concrete constructor, or you just accept anything and bind it to a variable name.
However, guards do allow you to check arbitrary conditions, and if the guard condition fails then the function will fall through to its next equation (just as if a pattern failed to match). So you can still do exactly what you want ("I want the function isValid to return True if the first and second arguments match desPlanet and currPlanet from the 3rd argument"); you just don't do it solely with pattern matching.
For example:
isValid currPlanet desPlanet ([desPlanet', _, currPlanet'] : _) solarSys
  | currPlanet == currPlanet' && desPlanet == desPlanet'
  = True
isValid ... -- other equations
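A self-contained sketch under assumed types (the original question doesn't show its data types, so Planet and the [destination, _, current] row layout below are guesses that just follow the example above):
type Planet = String

isValid :: Planet -> Planet -> [[Planet]] -> [Planet] -> Bool
isValid currPlanet desPlanet ([desPlanet', _, currPlanet'] : _) _solarSys
  | currPlanet == currPlanet' && desPlanet == desPlanet' = True
isValid _ _ _ _ = False

main :: IO ()
main = do
  print (isValid "Mars" "Venus" [["Venus", "Earth", "Mars"]] [])  -- True
  print (isValid "Mars" "Venus" [["Venus", "Earth", "Pluto"]] []) -- False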

ANTLR4 Semantic Predicates that is Context Dependent Does Not Work

I am parsing a C++-like declaration with this scaled-down grammar (many details removed to make it a fully working example). It fails to work, mysteriously (at least to me). Is it related to the use of a context-dependent predicate? If yes, what is the proper way to implement the "counting the number of child nodes" logic?
grammar CPPProcessor;
cppCompilationUnit : decl_specifier_seq? init_declarator* ';' EOF;
init_declarator: declarator initializer?;
declarator: identifier;
initializer: '=0';
decl_specifier_seq
locals [int cnt=0]
@init { $cnt=0; }
: decl_specifier+ ;
decl_specifier
@init { System.out.println($decl_specifier_seq::cnt); }
: 'const'
| {$decl_specifier_seq::cnt < 1}? type_specifier {$decl_specifier_seq::cnt += 1;} ;
type_specifier: identifier ;
identifier: IDENTIFIER;
CRLF: '\r'? '\n' -> channel(2);
WS: [ \t\f]+ -> channel(1);
IDENTIFIER: [_a-zA-Z] [0-9_a-zA-Z]* ;
I need to implement the standard C++ rule that no more than one type_specifier is allowed under a decl_specifier_seq.
A semantic predicate before type_specifier seems to be the solution, and the count is naturally declared as a local variable in decl_specifier_seq since nested decl_specifier_seqs are possible.
But it seems that a context-dependent semantic predicate like the one I used (i.e. a semantic predicate that references $attributes) produces incorrect parsing. First, an input file with the correct result (to illustrate what a normal parse tree looks like):
int t=0;
and the parse tree comes out as expected.
But for an input without the '=0' to aid the parsing:
int t;
0
1
line 1:4 no viable alternative at input 't'
1
the parsing fails with the 'no viable alternative' error (the numbers printed to the console are debug prints of the $decl_specifier_seq::cnt value, as a verification of the test condition). That is, the semantic predicate cannot prevent t from being parsed as a type_specifier, and t is no longer considered an init_declarator. What is the problem here? Is it because a context-dependent predicate referencing $decl_specifier_seq::cnt is used?
Does that mean a context-dependent predicate cannot be used to implement "counting the number of child nodes" logic?
EDIT
I tried new versions whose predicate uses a member variable instead of $decl_specifier_seq::cnt, and surprisingly the grammar now works, proving that the context-dependent predicate did cause the previous grammar to fail:
....
@parser::members {
public int cnt=0;
}
decl_specifier
@init { System.out.println("cnt:"+cnt); }
:
'const'
| {cnt<1}? type_specifier {cnt++;} ;
A normal parse tree results.
This gives rise to the question of how to support nested rules if we must use member variables instead of local variables to avoid context-dependent predicates.
And a weird result is that if I add a /*$ctx*/ inside the predicate, it fails again:
decl_specifier
@init { System.out.println("cnt:"+cnt); }
:
'const'
| {cnt<1 /*$ctx*/ }? type_specifier {cnt++;} ;
line 1:4 no viable alternative at input 't'
The parsing fails with 'no viable alternative'. Why does the /*$ctx*/ cause the parsing to fail, just as when $decl_specifier_seq::cnt is used, even though the actual logic only uses a member variable?
And, without the /*$ctx*/, another issue appears, related to the predicate being called before the @init block (described here).
ANTLR 4 evaluates semantic predicates in two cases.
The generated code evaluates a semantic predicate during parsing, and throws an exception if the evaluation returns false. All predicates traversed during parsing are evaluated in this way, including context-dependent predicates and predicates which do not appear at the left side of a decision.
The prediction method evaluates predicates in order to make correct decisions during parsing. In this case, predicates which appear anywhere other than the left edge of the decision being evaluated are assumed to return true (i.e. they are ignored). In addition, context-dependent predicates are only evaluated if the context data is available. The prediction algorithm will not create context structures that were not already provided by the parsing code. If a context-dependent predicate is encountered during prediction and no context is available, the predicate is assumed to return true (i.e. it is ignored for that decision).
The code generator does not evaluate the semantics of the target language, so it has no way to know that $ctx is semantically irrelevant when it appears in /*$ctx*/. Both cases result in the predicate being treated as context-dependent.

can someone further explain this C# code

I am using the PredicateBuilder class from http://www.albahari.com/nutshell/predicatebuilder.aspx
public static Expression<Func<T, bool>> Or<T> (this Expression<Func<T, bool>> expr1,
                                               Expression<Func<T, bool>> expr2)
{
    var invokedExpr = Expression.Invoke (expr2, expr1.Parameters.Cast<Expression> ());
    return Expression.Lambda<Func<T, bool>>
        (Expression.OrElse (expr1.Body, invokedExpr), expr1.Parameters);
}
This extension method chains predicates with the OR operator. On the page, the explanation says:
We start by invoking the second expression with the first expression’s parameters. An Invoke expression calls another lambda expression using the given expressions as arguments. We can create the conditional expression from the body of the first expression and the invoked version of the second. The final step is to wrap this in a new lambda expression.
So if, for example, I have
Expression<Func<Book, bool>> p1 = b => b.Title.Contains("economy");
Expression<Func<Book, bool>> p2 = b => b.PublicationYear > 2001;
var chain = p1.And(p2);
I didn't quite get the explanation. Can someone please explain how the code of the extension method above works?
Thanks.
Let's rewrite the method body like this:
return Expression.Lambda<Func<T, bool>>(
Expression.OrElse(
expr1.Body,
Expression.Invoke(expr2, expr1.Parameters.Cast<Expression>())
),
expr1.Parameters);
We should also keep in mind that expr1 is your existing expression that takes a T and returns a bool. Additionally, while the expression trees we are working with do not actually do anything (they represent something instead), I am going to use "do" henceforth because it makes for much easier reading. Technically for an expression tree to actually do something you have to compile it first and then invoke the resulting delegate.
Ok, so what do we have here? This is a lambda expression that takes whatever parameters expr1 takes (see last line) and whose body, as per the documentation, is
a BinaryExpression that represents a conditional OR operation that
evaluates the second operand only if the first operand evaluates to
false.
The first operand is expr1.Body, which means the resulting function (not actually a function, see note above) evaluates expr1. If the result is true it returns true on the spot. Otherwise, it invokes expr2 with the same parameters as those passed to expr1 (this means the single T parameter) and returns the result of that.
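To see this in action, here is a hedged usage sketch (Book is a stand-in type; it assumes the Or method quoted above is in scope as an extension method, e.g. via the PredicateBuilder class from that page):
using System;
using System.Linq.Expressions;

class Book
{
    public string Title { get; set; }
    public int PublicationYear { get; set; }
}

class Demo
{
    static void Main()
    {
        Expression<Func<Book, bool>> p1 = b => b.Title.Contains("economy");
        Expression<Func<Book, bool>> p2 = b => b.PublicationYear > 2001;

        // Or builds the combined expression tree; Compile turns it into a callable delegate.
        Func<Book, bool> either = p1.Or(p2).Compile();

        Console.WriteLine(either(new Book { Title = "World economy", PublicationYear = 1999 })); // True
        Console.WriteLine(either(new Book { Title = "Gardening", PublicationYear = 2010 }));     // True
        Console.WriteLine(either(new Book { Title = "Gardening", PublicationYear = 1999 }));     // False
    }
}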

Update2, Haskell

Atom: The Atom is the datatype used to describe Atomic Sentences or propositions. These are basically
represented as a string.
Literal: Literals correspond to either atoms or negations of atoms. In this implementation each literal
is represented as a pair consisting of a boolean value, indicating the polarity of the Atom, and the
actual Atom. Thus, the literal ‘P’ is represented as (True,"P") whereas its negation ‘-P’ as
(False,"P").
Clause: A Clause is a disjunction of literals, for example P v Q v R v -S. In this implementation this
is represented as a list of Literals. So the last clause would be [(True,"P"), (True,"Q"),
(True,"R"), (False,"S")].
Formula: A Formula is a conjunction of clauses, for example (P v Q) ^ (R v P v -Q) ^ (-P v -R).
This is the CNF form of a propositional formula. In this implementation this is represented as a list of
Clauses, so it is a list of lists of Literals. Our above example formula would be [[(True,"P"),
(True,"Q")], [(True,"R"), (True,"P"), (False,"Q")], [(False,"P"),
(False,"R")]].
Model: A (partial) Model is a (partial) assignment of truth values to the Atoms in a Formula. In this
implementation this is a list of (Atom, Bool) pairs, i.e. the Atoms with their assignments. So in the
above example of type Formula if we assigned true to P and false to Q then our model would be
[("P", True),("Q", False)]
Ok so I wrote an update function
update :: Node -> [Node]
It takes in a Node and returns a list of the Nodes
that result from assigning True to an unassigned atom in one case and False in the other (i.e. a case
split). The list returned has two nodes as elements. One node contains the formula
with an atom assigned True and the model updated with this assignment, and the other contains
the formula with the atom assigned False and the model updated to show this. The lists of unassigned
atoms of each node are also updated accordingly. This function makes use of an
assign function to make the assignments. It also uses the chooseAtom function to
select the literal to assign.
update :: Node -> [Node]
update (formula, (atoms, model)) =
  [ (assign (atom, True)  formula, (remove atom atoms, (atom, True)  `insert` model))
  , (assign (atom, False) formula, (remove atom atoms, (atom, False) `insert` model))
  ]
  where atom = chooseAtom atoms
Now I have to do the same thing, but this time I must implement a variable selection heuristic. This should replace chooseAtom, and I'm supposed to write a function update2 using it:
type Atom = String
type Literal = (Bool,Atom)
type Clause = [Literal]
type Formula = [Clause]
type Model = [(Atom, Bool)]
type Node = (Formula, ([Atom], Model))
update2 :: Node -> [Node]
update2 = undefined
So my question is: how can I create a heuristic and implement it in the update2 function, which should behave like the update function?
If I understand the question correctly, you're asking how to implement additional selection rules in resolution systems for propositional logic. Presumably, you're constructing a tree of formulas gotten by assigning truth-values to literals until either (a) all possible combinations of assignments to literals have been tried or (b) box (the empty clause) has been derived.
Assuming the function chooseAtom implements a selection rule, you can parameterize the function update over an arbitrary selection rule r by giving update an additional parameter and replacing the occurrence of chooseAtom in update by r. Since chooseAtom implements a selection rule, passing that selection rule to the parameter r gives the desired result. If you provide an implementation of chooseAtom and the function you intend to replace it, it would be easier to verify that your implementation is correct.
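A concrete sketch of that parameterization, reusing the type synonyms from the question (assign and remove are the helpers your update already uses, insert is the same one as in your code, and mostFrequent is one made-up heuristic; note that the selection rule here also receives the formula, unlike chooseAtom):
updateWith :: (Formula -> [Atom] -> Atom) -> Node -> [Node]
updateWith pick (formula, (atoms, model)) =
  [ (assign (atom, True)  formula, (remove atom atoms, (atom, True)  `insert` model))
  , (assign (atom, False) formula, (remove atom atoms, (atom, False) `insert` model))
  ]
  where atom = pick formula atoms

-- A crude "most constrained variable" rule: pick the unassigned atom
-- that occurs in the largest number of clauses.
mostFrequent :: Formula -> [Atom] -> Atom
mostFrequent formula atoms = snd (maximum [ (occurrences a, a) | a <- atoms ])
  where occurrences a = length [ () | clause <- formula, (_, a') <- clause, a' == a ]

update2 :: Node -> [Node]
update2 = updateWith mostFrequent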
Hopefully this is helpful. However, it's unclear exactly what's being asked. In particular, you're asking for a "variable selection rule," but it looks like you're implementing a resolution system for propositional logic; in general, selection rules and variables are associated with resolution for predicate logic.
