Parameterized Spock tests and pipe usage - groovy

I have trouble writing a parameterized test with Spock when one of the parameters needs the pipe character, for instance because it's a flag computation.
def "verify inferInputType()"() {
    expect:
    inputType == mPresenter.inferInputType(opt)

    where:
    opt                    | inputType
    0                      | 0
    EDITTEXT_TYPE_ALPHANUM | InputType.TYPE_CLASS_TEXT
    EDITTEXT_TYPE_NUM      | InputType.TYPE_CLASS_NUMBER
    EDITTEXT_TYPE_FLOAT    | InputType.TYPE_CLASS_NUMBER | InputType.TYPE_NUMBER_FLAG_DECIMAL
}
The test fails with the following error message:
Row in data table has wrong number of elements (3 instead of 2) # line 25, column 9.
EDITTEXT_TYPE_FLOAT | InputType.TYPE_CLASS_NUMBER | InputType.TYPE_NUMBER_FLAG_DECIMAL
^
The only way I found to make it work is to wrap the parameter inside a closure, like this:
EDITTEXT_TYPE_FLOAT | {InputType.TYPE_CLASS_NUMBER | InputType.TYPE_NUMBER_FLAG_DECIMAL}()
But it's ugly; if someone has a better solution, please tell me.

You should be able to do:
InputType.TYPE_CLASS_NUMBER.or(InputType.TYPE_NUMBER_FLAG_DECIMAL)
Not sure if that is better ;-)
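For what it's worth, the same equivalence holds in Python, where `a | b` on ints is just sugar for a method call, mirroring Groovy's `.or()` (the flag values below are invented for illustration, not the real Android constants):

```python
TYPE_CLASS_NUMBER = 0x2          # hypothetical flag value
TYPE_NUMBER_FLAG_DECIMAL = 0x2000  # hypothetical flag value

# The pipe operator and the explicit method call produce the same combined flags
combined = TYPE_CLASS_NUMBER | TYPE_NUMBER_FLAG_DECIMAL
assert combined == TYPE_CLASS_NUMBER.__or__(TYPE_NUMBER_FLAG_DECIMAL)
print(hex(combined))  # 0x2002
```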


Expected type 'Type[Add | Sub | Mult | Div | Pow | BitXor | USub]', got 'Type[operator]' instead

I use a nifty class that evaluates string expressions and converts them to their math equivalent:
# Taken out of context for MVCE
import ast
import operator as op

OPERATORS = {
    ast.Add: op.add,
    ast.Sub: op.sub,
    ast.Mult: op.mul,
    ast.Div: op.truediv,
    ast.Pow: op.pow,
    ast.BitXor: op.xor,
    ast.USub: op.neg
}

def eval_expr(expr):
    return eval_(ast.parse(expr, mode='eval').body)

def eval_(node):
    if isinstance(node, ast.Num):  # <number>
        value = node.n
    elif isinstance(node, ast.BinOp):  # <left> <operator> <right>
        value = OPERATORS[type(node.op)](eval_(node.left), eval_(node.right))
    elif isinstance(node, ast.UnaryOp):  # <operator> <operand> e.g., -1
        value = OPERATORS[type(node.op)](eval_(node.operand))
    else:
        raise TypeError(node)
    return value

x = eval_expr("1 + 2")
print(x)
PyCharm code inspection highlights the instances of type(node.op) as problematic:
Expected type 'Type[Add | Sub | Mult | Div | Pow | BitXor | USub]' (matched generic type '_KT'), got 'Type[operator]' instead
Expected type 'Type[Add | Sub | Mult | Div | Pow | BitXor | USub]' (matched generic type '_KT'), got 'Type[unaryop]' instead
The class seems to function just fine, but my OCD wants to know how this could be refactored to avoid the inspection warnings. Or is this a PyCharm inspection gremlin?
The type checker is warning you that your dictionary that maps AST node types for operators to their implementations is incomplete. The type checker knows all of the possible types of node.op (which it seems to be describing as subtypes of the ast.operator and ast.unaryop parent types), and has noticed that your dictionary doesn't handle them all.
Since there are operators that you haven't included, it's possible for a parsable expression (like, say "2 << 5" which does a left shift, or "~31" which does a bitwise inversion) to fail to be handled by your code.
While I don't use PyCharm and thus can't test it for myself, you can probably satisfy its type checker by adding some error handling to your code, so that operator types you don't support will still be dealt with appropriately, rather than causing an uncaught exception (such as a KeyError from the dictionary) to leak out. For instance, you could use OPERATORS.get(type(node.op)) and then test for None before calling the result. If the operator type isn't in the dictionary, you'd raise an exception of your own.
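A minimal sketch of that `.get`-plus-raise approach (trimmed to two operators, and using `ast.Constant`, the modern replacement for `ast.Num`):

```python
import ast
import operator as op

OPERATORS = {
    ast.Add: op.add,
    ast.Sub: op.sub,
}

def eval_(node):
    if isinstance(node, ast.BinOp):
        # Look up the handler without assuming the operator is supported
        handler = OPERATORS.get(type(node.op))
        if handler is None:
            raise ValueError(f"Unsupported operator: {type(node.op).__name__}")
        return handler(eval_(node.left), eval_(node.right))
    if isinstance(node, ast.Constant):
        return node.value
    raise TypeError(node)

print(eval_(ast.parse("1 + 2", mode="eval").body))  # 3
```

An unsupported expression such as "2 << 5" now raises a deliberate ValueError instead of an uncaught KeyError.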

Haskell define multiple functions using tuples

I've just come across some Haskell code that looks something like this:
(functionOne, functionTwo)
  | someCondition = (10, "Ten")
  | otherwise     = (20, "Twenty")
From the way the code is used, I think I understand its intent, i.e. it is just a more concise way of writing this:
functionOne
  | someCondition = 10
  | otherwise     = 20

functionTwo
  | someCondition = "Ten"
  | otherwise     = "Twenty"
However, I can't recall ever seeing functions written this way before, and I have no idea what this technique is called, so I can't search for any additional information about it.
So my questions are:
Is my understanding of what is going on here correct?
Does this technique have a name?
These aren't functions, just variable bindings. You correctly understand how it works. It doesn't have any particular name, because it's just another application of pattern matching. Anytime you could declare a variable, you can declare a more complex pattern in that same position.
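The same pattern-matching-on-a-tuple idea can be mimicked in Python with tuple unpacking (an analogy only; Python evaluates the tuple eagerly, whereas Haskell's bindings are lazy):

```python
some_condition = True

# Bind two names at once by destructuring the tuple chosen by the condition
(value_one, value_two) = (10, "Ten") if some_condition else (20, "Twenty")

print(value_one, value_two)  # 10 Ten
```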

Optional parameter / null handling [duplicate]

I have a function which takes 4 nullable BigUint parameters and returns a tuple of Option type. I am calling this function iteratively, and I am trying to figure out how to handle None values, as I want to treat them explicitly. I have the following uncompilable code:
use num_bigint::BigUint; // 0.4.0

fn add_one(
    px: &Option<BigUint>,
    py: &Option<BigUint>,
    qx: &Option<BigUint>,
    qy: &Option<BigUint>,
) -> (Option<BigUint>, Option<BigUint>) {
    if px.is_none() && py.is_none() {
        (*qx, *qy)
    } else if px == qx {
        (None, None)
    } else {
        (px + 1u32, py + 1u32)
    }
}
With the error:
error[E0369]: cannot add `u32` to `&Option<BigUint>`
--> src/lib.rs:14:13
|
14 | (px + 1u32, py + 1u32)
| -- ^ ---- u32
| |
| &Option<BigUint>
error[E0369]: cannot add `u32` to `&Option<BigUint>`
--> src/lib.rs:14:24
|
14 | (px + 1u32, py + 1u32)
| -- ^ ---- u32
| |
| &Option<BigUint>
How do I evaluate Option to its corresponding type?
First, it's important to remember that an Option<T> is not a T. Even if it contains one, it's not itself one. It's a box which may or may not be empty. This is in contrast to a language like Kotlin where T? is actually a T or possibly null.
With that out of the way, it sounds like you want to take px and py and, if they contain values, add one to them, and if they don't then leave them empty. That's a perfect use case for map.
(px.map(|x| x+1u32), py.map(|x| x+1u32))
As a general piece of advice, in Java and Kotlin you're going to spend a lot of time using if statements and imperative logic to suss out your null values. In Rust, you're primarily going to do that using the standard library methods on Option. Get to know them; they're really quite helpful and encompass a lot of patterns you use frequently that, in other languages, might just be done with copious explicit null-checks.
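The shape of `Option::map` can be sketched in Python terms, with `None` playing the role of Rust's `None` (`map_option` is a hypothetical helper, not a standard function):

```python
def map_option(value, f):
    """Apply f only when value is present, mirroring Rust's Option::map."""
    return None if value is None else f(value)

px, py = 5, None
print(map_option(px, lambda x: x + 1))  # 6
print(map_option(py, lambda x: x + 1))  # None
```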

How can the scope of jq 'as' variables be extended to pass down inside functions?

With jq, I'm trying to use entries in one array to index into a separate array. A simple JSON input would look like this:
{
  "myArray": [ "AA", "BB", "CC", "DD", "EE" ],
  "myFlags": [ 4, 3, 2, 1, 0 ]
}
jq's nifty 'as' operator is then able to bring the myArray array into scope and the indexing works fine:
.myArray as $Array | .myFlags | .[] | $Array[.] ====> yields "EE","DD","CC","BB","AA"
So far so jq-manual. However, if I try and move the $Array array access down into a function, the as-variable scope disappears:
def myFun: $Array[.]; .myArray as $Array | .myFlags | .[] | myFun
jq: error: $Array is not defined at <top-level>, line 1:
def myFun: $Array[.]; .myArray as $Array | .myFlags | .[] | myFun
To get around this, I currently pass down a temporary JSON object containing both the index and the array:
def myFun: .a[.b]; .myArray as $Array | .myFlags | .[] | { a: $Array, b: . } | myFun
Though this works, I have to say I'm not hugely comfortable with it.
Really, this doesn't feel like proper jq language behaviour to me. It seems to me that the 'as'-scope ought to persist down into invoked def-functions. :-(
Is there a better way of extending as-scope down into def-functions? Am I missing some jq subtlety?
It actually is possible to make an "as" variable visible inside a function without passing it in as a parameter (see "Lexical Scoping" below), but it is usually unnecessary, and in fact using "as" variables is often unnecessary as well.
Avoiding the "as" variable
You can simplify your original query to:
.myArray[.myFlags[]]
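In Python terms, that simplified query is plain index-based lookup (an analogy only; jq streams each result rather than building a list):

```python
data = {
    "myArray": ["AA", "BB", "CC", "DD", "EE"],
    "myFlags": [4, 3, 2, 1, 0],
}

# Index one array with the entries of the other, like .myArray[.myFlags[]]
result = [data["myArray"][i] for i in data["myFlags"]]
print(result)  # ['EE', 'DD', 'CC', 'BB', 'AA']
```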
Using function arguments
You can write jq functions with one or more arguments. This is the appropriate way to parameterize filters.
The syntax is quite conventional except that for functions with more than one argument, the semicolon (";") is used as the argument separator, both in the function definitions and invocations.
Note also that jq function arguments can themselves be filters, e.g. you could write:
def myFun(array; $ix): array | .[$ix];
myFun(.myArray; .myFlags[])
Lexical scoping
Here's an example showing how an 'as' variable can be made visible inside a function:
[1,2] as $array | def myFun: $array[1]; myFun

the following sets of rules are mutually left-recursive

The sumScalarOperator rule gives me this error; it seems that ANTLR sees it as a possible infinite recursion loop. How can I avoid it?
sumScalarOperator: function SUM_TOKEN function;

function :
    | INTEGER_TOKEN
    | NUMERIC_TOKEN
    | sumScalarOperator
    | ID;

ID: [A-Za-z_-] [a-zA-Z0-9_-]*;
INTEGER_TOKEN: [0-9]+;
NUMERIC_TOKEN: [0-9]+ '.' [0-9]+;
ANTLR4 can't cope with mutually left-recursive rules, but it can rewrite single left-recursive rules automatically to eliminate the left-recursion, so you can just feed it with something like:
function : function SUM_TOKEN function # sumScalarOperator
| INTEGER_TOKEN # value
| NUMERIC_TOKEN # value
| ID # value
;
Replace the value label with anything you need.
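The underlying problem can be illustrated with a hand-written recursive-descent parser in Python (a sketch of the general technique, not ANTLR's actual algorithm): a rule that begins by calling itself would recurse forever, while the standard rewrite consumes an operand first and then loops over the operator, here using "+" in place of SUM_TOKEN:

```python
def tokenize(text):
    # Tokens for an input like "1 + 2 + 3"
    return text.split()

# A naive translation of the left-recursive rule
# `function : function SUM_TOKEN function | INTEGER_TOKEN`
# would be `def function(): function(); ...` -> infinite recursion.
# The rewrite parses one operand, then iterates on the operator:
def parse_function(tokens, pos=0):
    value = int(tokens[pos])           # first operand
    pos += 1
    while pos < len(tokens) and tokens[pos] == "+":
        value += int(tokens[pos + 1])  # fold in each further operand
        pos += 2
    return value

print(parse_function(tokenize("1 + 2 + 3")))  # 6
```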
