Can I treat int and float the same in my Cucumber Steps?

I'm trying to write tests for a Tuple class that I'm writing in Ruby (this is an exercise to learn both Ruby and Gherkin). So one of my Scenarios creates a Tuple with float values:
Scenario: A tuple with w=1.0 is a point
  Given a ← tuple[4.3, -4.2, 3.1, 1.0]
  Then a.x = 4.3
  And ...
For the Given step, cucumber suggests the following:
Given("a ← tuple[{float}, {float}, {float}, {float}]") do |float, float2, float3, float4|
pending # Write code here that turns the phrase above into concrete actions
end
which I implemented as:
Given("a ← tuple[{float}, {float}, {float}, {float}]") do |float, float2, float3, float4|
tuple_a = Tuple.new(float, float2, float3, float4)
end
Great. Now I want another scenario which happens to pass integers to the Tuple:
Scenario: Adding two tuples
  Given a ← tuple[3, -2, 5, 1]
...
And Cucumber suggests:
Given("a ← tuple[{int}, {int}, {int}, {int}]") do |int, int2, int3, int4|
pending # Write code here that turns the phrase above into concrete actions
end
But my implementation is in Ruby; I don't really care whether I'm passing ints or floats to Tuple.new(). The Given step I implemented first, which expects floats, would work just as well for ints, but Cucumber won't use it; it wants me to implement the step again with int params. I could just write float arguments everywhere, e.g. Given a ← tuple[3.0, -2.0, 5.0, 1.0], but that's kind of annoying. Is my only option to define a custom ParameterType? That would entail a regexp that matches both integers and floats; will that take priority over the existing int and float types?
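For reference, here is roughly what I have in mind. A minimal sketch assuming the standard cucumber-ruby ParameterType API; the type name "number" and the instance variable @tuple_a are my own illustrative choices:
# In features/support/parameter_types.rb: one type whose regexp matches
# both "3" and "-4.2"; steps written with {number} never consult the
# built-in {int}/{float} types at all.
ParameterType(
  name:        'number',
  regexp:      /-?\d+(?:\.\d+)?/,
  transformer: ->(s) { s.to_f }
)

Given("a ← tuple[{number}, {number}, {number}, {number}]") do |x, y, z, w|
  @tuple_a = Tuple.new(x, y, z, w)
end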

I would suggest using unit test tools for this sort of thing, e.g. RSpec, Minitest, etc.; they are a much better fit.
Scenarios are useful when you can express things in a language that is not technical and abstract. Your scenarios are technical and concrete, and much harder to read and write.
An analogy is trying to write mathematical expressions in natural language:
(3+5)^3 is much simpler and more precise than "add 5 to 3 and then cube the total".
The art of learning Gherkin is how to write simple, clear scenarios that describe a particular behaviour. It is not about using multiple params, complex regexes, large tables and multiple examples. You are learning the wrong things if you want to learn how to Cuke and do BDD. You are using the wrong tool if you want to learn Ruby and write tests for things like a Tuple class.

Related

Is there any way to specify e.g. (car|cars) in a cucumber step definition?

So I have 2 scenarios. One starts out
Given I have 1 car
The other starts out
Given I have 2 cars
I'd like them to use the same step definition, i.e. something like this:
Given('I have {int} (car|cars)',
I know it's possible to specify 2 possible values (i.e. car or cars), but I can't for the life of me remember how. Does anyone know? I'm using TypeScript, Protractor, Angular, Selenium.
Have also tried
Given(/^I have {int} (car|cars)$
Within Cucumber Expressions, text wrapped in () becomes optional. That is what you want.
So your expression would be:
Given('I have {int} car(s)')
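In the asker's TypeScript setup that might be wired up as follows. A minimal sketch assuming the @cucumber/cucumber API; createCars is a hypothetical helper invented for this example:
import { Given } from '@cucumber/cucumber';

// Hypothetical helper for this sketch: replace with your real setup code.
function createCars(count: number): string[] {
    return Array.from({ length: count }, (_, i) => `car-${i + 1}`);
}

// "I have 1 car" and "I have 2 cars" both match: (s) makes the suffix optional.
Given('I have {int} car(s)', function (this: any, count: number) {
    this.cars = createCars(count);
});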
Happy to help - More information can be found here: https://cucumber.io/docs/cucumber/cucumber-expressions/ - Switch to JS code at the top.
Luke - Cucumber contributor.
Luke's answer is great and is definitely standard practice when cuking.
I would (and do) take a different approach. I would strongly argue that the complexity of even a single expression like the one he uses isn't worth the step duplication. Let me explain and illustrate.
The fundamental idea behind this approach is that the internals of each step definition must be a single call to a helper method. When you do this you no longer need expressions or regex's.
I would prefer, and use in my projects:
module CarStepHelper
  def create_car(amount: 1)
    # lots of stuff to create cars
  end
end
World(CarStepHelper)
Given 'I have one car' do
  create_car
end
Given 'I have two cars' do
  create_car(amount: 2)
end
to
Given('I have {int} car(s)') do |count|
  # lots of stuff to create cars
end
because
the step definitions are simpler (no regexes, no Cucumber Expressions)
the stuff that creates the cars is slightly simpler (no processing of the regex or expression)
the helper method supports and encourages a wider range of expressions, e.g.
Given Fred has a car
Given there is a blue car and a red car
the helper method encourages better communication between steps, because you can assign its results to names that fit each step definition, e.g.
Given 'Fred has a car' do
  @freds_car = create_car
end
Given 'there are two cars' do
  @car1, @car2 = create_car(amount: 2)
end
Cucumber Expressions and Cucumber's regexes are very powerful and quite easy to use, but you can Cuke very effectively without ever using them. Step definition efficiency is a myth and often an anti-pattern; if you ensure each step def is just a single call you no longer have to worry about it, and you will avoid the mistake many cukers fall into, which is writing over-complicated step definitions with lots of parameters, regexes and/or expressions.
As far as I know, your step definition should be as below for it to work.
Given(/^I have (\d+) (car|cars)$/, (number, item) => {
You can simplify the regular expression further.
Cheers!

How to define a strategy in hypothesis to generate a pair of similar recursive objects

I am new to hypothesis and I am looking for a way to generate a pair of similar recursive objects.
My strategy for a single object is similar to this example in the hypothesis documentation.
I want to test a function which takes a pair of recursive objects A and B and the side effect of this function should be that A==B.
My first approach would be to write a test which gets two independent objects, like:
@given(my_objects(), my_objects())
def test_is_equal(a, b):
    my_function(a, b)
    assert a == b
But the downside is that hypothesis does not know that there is a dependency between these two objects, so they can be completely different. That is a valid test and I want to test that too.
But I also want to test complex recursive objects which are only slightly different.
And maybe hypothesis is able to shrink a pair of very different objects where the test fails down to a pair of only slightly different objects where the test fails in the same way.
This one is tricky - to be honest I'd start by writing the same test you already have above, and just turn up the max_examples setting a long way. Then I'd probably write some traditional unit tests, because getting specific data distributions out of Hypothesis is explicitly unsupported (i.e. we try to break everything that assumes a particular distribution, using some combination of heuristics and a bit of feedback).
How would I actually generate similar recursive structures though? I'd use a @composite strategy to build them at the same time, and for each element or subtree I'd draw a boolean and if True draw a different element or subtree to use in the second object. Note that this will give you a strategy for a tuple of two objects and you'll need to unpack it inside the test; that's unavoidable if you want them to be related.
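Concretely, that could look something like the sketch below. It assumes a toy recursive structure (nested lists of integers); the names trees and similar_tree_pairs are illustrative, and my_function stands in for the function under test from the question:
from hypothesis import given, strategies as st

# A toy recursive structure: either a leaf integer or a list of subtrees.
trees = st.recursive(st.integers(), st.lists, max_leaves=10)

@st.composite
def similar_tree_pairs(draw):
    # Draw one tree, then derive a second that mostly mirrors it: at each
    # node a drawn boolean decides between "keep" and "redraw".
    def mutate(node):
        if draw(st.booleans()):                  # redraw this subtree
            return draw(trees)
        if isinstance(node, list):               # recurse into children
            return [mutate(child) for child in node]
        return node                              # keep the leaf
    a = draw(trees)
    return a, mutate(a)

@given(similar_tree_pairs())
def test_is_equal(pair):
    a, b = pair
    my_function(a, b)
    assert a == b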
Seriously, do try just cranking up max_examples on the naive approach first though; running Hypothesis for ~an hour is amazingly effective and I would even expect it to shrink the output fairly well.

Using a regular string as code for compiling

I need to make a function that receives a string such as:
int *ptr[20], *p, p2, p3[3];
and the function need to print:
ptr requires 80 bytes.
p requires 4 bytes.
p2 requires 4 bytes.
p3 requires 12 bytes.
To simplify the task, I would like to treat the "fake" code in the string as "real" code, and then just print sizeof(variable) to answer the question. I think that is the simplest way.
But how do I do it?
What you describe is the ability to "evaluate" dynamically generated code.
Some languages -- usually they are evaluated (non-compiled) ones -- have such features, but C++ does not.
Even if it did, it wouldn't be a good solution here. You need a parser. For a formal approach, you may research lexers and context-free parsers. For an ad hoc approach...well...do whatever string manipulation you would like.
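For the ad hoc route, something like the following might do. This is a rough sketch, not a real parser: it assumes a single base type (int), the 4-byte int and 4-byte pointer sizes implied by the question's expected output (a 32-bit target), and no nested declarators:
#include <iostream>
#include <sstream>
#include <string>

int main() {
    std::string decl = "int *ptr[20], *p, p2, p3[3];";
    const int kIntSize = 4, kPtrSize = 4;  // assumed 32-bit sizes

    // Drop the base type ("int ") and the trailing ';'.
    std::string body = decl.substr(decl.find(' ') + 1);
    body.pop_back();

    std::stringstream ss(body);
    std::string tok;
    while (std::getline(ss, tok, ',')) {
        tok = tok.substr(tok.find_first_not_of(' '));   // trim leading spaces

        bool isPointer = (tok[0] == '*');
        if (isPointer) tok = tok.substr(1);

        int count = 1;
        std::size_t bracket = tok.find('[');
        if (bracket != std::string::npos) {             // array declarator
            count = std::stoi(tok.substr(bracket + 1)); // stoi stops at ']'
            tok = tok.substr(0, bracket);
        }
        std::cout << tok << " requires "
                  << count * (isPointer ? kPtrSize : kIntSize)
                  << " bytes.\n";
    }
}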

A language in which everything compiles

I'm trying to do some research for a new project, and I need to create objects dynamically from random data.
For this to work, I need a language / compiler that doesn't have problems with weird uncompilable code lying around.
Basically, I need the random code to compile (or be interpreted) as much as possible - meaning that the uncompilable parts will be ignored, and only the compilable parts will create the objects (which could be run).
Object Oriented-ness is not a must, but is a very strong advantage.
I thought of ASM, but it's very messy, and I'd probably need more readable code.
Thanks!
It sounds like you're doing something very much like genetic programming; even if you aren't, GP has to solve some of the same problems—using randomness to generate valid programs. The approach to this that is typically used is to work with a syntax tree: rather than storing x + y * 3 - 2, you store something like the following:
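        (-)
       /   \
     (+)    2
    /   \
   x    (*)
       /   \
      y     3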
Then, instead of randomly changing the raw syntax, one can randomly change nodes in the tree. And if x should randomly change to, say, +, you can statically know that this means you need to insert two children (or not, depending on how you define +).
A good choice for a language to work with for this would be any Lisp dialect. In a Lisp, the above program would be written (- (+ x (* y 3)) 2), which is just a linearization of the syntax tree using parentheses to show depth. And in fact, Lisps expose this feature: you can just as easily work with the object '(- (+ x (* y 3)) 2) (note the leading quote). This is a three-element list, whose first element is -, second element is another list, and third element is 2. And, though you might or might not want it for your particular application, there's an eval function, such that (eval '(- (+ x (* y 3)) 2)) will take in the given list, treat it as a Lisp syntax tree/program, and evaluate it. This is what makes Lisps so attractive for doing this sort of work; Lisp syntax is basically a reification of the syntax-tree, and if you operate at the syntax-tree level, you can work on code as though it was a value. Lisp won't help you read /dev/random as a program directly, but with a little interpretation layered on top, you should be able to get what you want.
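To make that concrete, here is a minimal Common Lisp sketch of the idea, assuming programs are trees built from +, -, * and numeric leaves (random-mutate is an illustrative name, not a library function):
;; Randomly swap operators in a quoted expression tree, recursing into
;; subexpressions; numeric leaves are left alone.
(defun random-mutate (expr)
  (cond ((not (consp expr)) expr)
        ((zerop (random 3))          ; 1-in-3 chance: replace the operator
         (cons (nth (random 3) '(+ - *))
               (mapcar #'random-mutate (cdr expr))))
        (t (cons (car expr)
                 (mapcar #'random-mutate (cdr expr))))))

;; Mutate and evaluate the tree from above, with x=1 and y=2 substituted
;; so that EVAL needs no surrounding environment.
(eval (random-mutate '(- (+ 1 (* 2 3)) 2)))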
I should also mention, though I don't know anything about it (not that I know much about ordinary genetic programming either), the existence of linear genetic programming. This is sort of like the assembly model that you mentioned -- a linear stream of very, very simple instructions. The advantage here would seem to be that if you are working with /dev/random or something like it, the amount of interpretation needed is very small; the disadvantage would be, as you mentioned, the low-level nature of the code.
I'm not sure if this is what you're looking for, but any programming language can be made to function this way. For any programming language P, define the language P_always as follows:
If p is a valid program in P, then p is a valid program in P_always whose meaning is the same as its meaning in P.
If p is not a valid program in P, then p is a valid program in P_always whose meaning is the same as a program that immediately terminates.
For example, I could make the language C++_always so that this program:
#include <iostream>
using namespace std;
int main() {
    cout << "Hello, world!" << endl;
}
would compile and behave exactly as it does in C++, printing "Hello, world!", while this program:
Hahaha! This isn't legal C++ code!
would be a legal program that just does absolutely nothing.
To solve your original problem, just take any OOP language like Java, Smalltalk, etc. and construct the appropriate Java_always, Smalltalk_always, etc. language from it. Again, I'm not sure if this is at all what you're looking for, but it could be done very easily.
Alternatively, consider finding a grammar for any OOP language and then using that grammar to produce random syntactically valid programs. You could then filter those programs down by using the P_always programming language for that language to eliminate programs that are syntactically but not semantically valid.
Divide the ASCII byte values into 9 classes (division modulo 9 would help). Then assign them to Brainfuck codewords (see http://en.wikipedia.org/wiki/Brainfuck). Then interpret the result as Brainfuck.
There you go: any sequence of ASCII characters is a program. Not that it's going to do anything sensible... This approach has a much better chance, compared to templatetypedef's answer, of getting a nontrivial program from a random byte sequence.
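Here is a minimal sketch of that mapping (in Python, as an illustration). Since Brainfuck has 8 commands, one of the 9 classes is treated as a no-op here, which is my own choice; unbalanced brackets would still need to be handled by the interpreter:
COMMANDS = "><+-.,[]"  # the 8 Brainfuck commands

def bytes_to_brainfuck(data: bytes) -> str:
    out = []
    for b in data:
        c = b % 9
        if c < 8:              # classes 0-7 map to a command
            out.append(COMMANDS[c])
        # class 8 maps to nothing (no-op)
    return "".join(out)

print(bytes_to_brainfuck(b"any ASCII text at all"))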
Text Editors
You could try feeding random character strings to an editor like Emacs or VI. Many (most?) characters will perform an editing action but some will do nothing (other than beep, perhaps). You would have to ensure that the random code mutator never generates the character sequence that exits the editor. However, this experience would be much like programming a Turing machine -- the code is not too readable.
Mathematica
In Mathematica, undefined symbols and other expressions evaluate to themselves, without error. So, that language might be a viable choice if you can arrange for the random code mutator to always generate well-formed expressions. This would be readily achievable since the basic Mathematica syntax is trivial, making it easy to operate on syntactic units rather than at the character level. It would be even easier if the mutator were written in Mathematica itself since expression-munging is Mathematica's forte. You could define a mini-language of valid operations within a Mathematica package that does not import the system-defined symbols. This would allow you to generate well-formed expressions to your heart's content without fear of generating a dangerous expression, like DeleteFile[FileNames["*.*", "/", Infinity]].
I believe Common Lisp should suit your needs. I always have some code in my SLIME/Emacs session that wouldn't compile. You can always tweak things and redefine functions at run-time. It is actually very good for prototyping.
A few years ago it took me quite a while to learn. But nowadays we have Quicklisp and everything is so much easier.
Here I describe my development environment:
Install lisp on my linux machine
PS: I want to give an example where Common Lisp was useful for me:
Up to maybe 2004 I used to write small programs in C (the keep-it-simple Unix way).
In the last 3 years I have had to get lots of different hardware running: motorized stages, scientific cameras, IO cards.
The cameras turned out to be quite annoying. Usually you have to cool them down to -50 degrees Celsius or so, and (in some SDKs) they don't like it when you close them. But that is exactly how my C development cycle worked: write (30s), compile (1s), run (0.1s), repeat.
Eventually I decided to just use Common Lisp. Often it is straightforward to define the foreign function interfaces to talk to the SDKs, and I can do this without ever leaving the running Lisp image. I start the editor in the morning, define the open-device function to talk to the device, and after 3 hours I have enough functions implemented to set gain, temperature and region of interest, and to obtain video.
Then I can often put the SDK manual away and just use the camera.
I used the same interactive programming approach when I had to parse some webpage or some weird XML.

Mathematica: what is symbolic programming?

I am a big fan of Stephen Wolfram, but he is definitely not shy about tooting his own horn. In many references, he extols Mathematica as a different symbolic programming paradigm. I am not a Mathematica user.
My questions are: what is this symbolic programming? And how does it compare to functional languages (such as Haskell)?
When I hear the phrase "symbolic programming", LISP, Prolog and (yes) Mathematica immediately leap to mind. I would characterize a symbolic programming environment as one in which the expressions used to represent program text also happen to be the primary data structure. As a result, it becomes very easy to build abstractions upon abstractions since data can easily be transformed into code and vice versa.
Mathematica exploits this capability heavily. Even more heavily than LISP and Prolog (IMHO).
As an example of symbolic programming, consider the following sequence of events. I have a CSV file that looks like this:
r,1,2
g,3,4
I read that file in:
Import["somefile.csv"]
--> {{r,1,2},{g,3,4}}
Is the result data or code? It is both. It is the data that results from reading the file, but it also happens to be the expression that will construct that data. As code goes, however, this expression is inert since the result of evaluating it is simply itself.
So now I apply a transformation to the result:
% /. {c_, x_, y_} :> {c, Disk[{x, y}]}
--> {{r,Disk[{1,2}]},{g,Disk[{3,4}]}}
Without dwelling on the details, all that has happened is that Disk[{...}] has been wrapped around the last two numbers from each input line. The result is still data/code, but still inert. Another transformation:
% /. {"r" -> Red, "g" -> Green}
--> {{Red,Disk[{1,2}]},{Green,Disk[{3,4}]}}
Yes, still inert. However, by a remarkable coincidence this last result just happens to be a list of valid directives in Mathematica's built-in domain-specific language for graphics. One last transformation, and things start to happen:
% /. x_ :> Graphics[x]
--> Graphics[{{Red,Disk[{1,2}]},{Green,Disk[{3,4}]}}]
Actually, you would not see that last result. In an epic display of syntactic sugar, Mathematica would render it as a picture of a red disk and a green disk.
But the fun doesn't stop there. Underneath all that syntactic sugar we still have a symbolic expression. I can apply another transformation rule:
% /. Red -> Black
Presto! The red circle became black.
It is this kind of "symbol pushing" that characterizes symbolic programming. A great majority of Mathematica programming is of this nature.
Functional vs. Symbolic
I won't address the differences between symbolic and functional programming in detail, but I will contribute a few remarks.
One could view symbolic programming as an answer to the question: "What would happen if I tried to model everything using only expression transformations?" Functional programming, by contrast, can be seen as an answer to: "What would happen if I tried to model everything using only functions?" Just like symbolic programming, functional programming makes it easy to quickly build up layers of abstractions. The example I gave here could easily be reproduced in, say, Haskell using a functional reactive animation approach. Functional programming is all about function composition, higher-level functions, combinators -- all the nifty things that you can do with functions.
Mathematica is clearly optimized for symbolic programming. It is possible to write code in functional style, but the functional features in Mathematica are really just a thin veneer over transformations (and a leaky abstraction at that, see the footnote below).
Haskell is clearly optimized for functional programming. It is possible to write code in symbolic style, but I would argue that the syntactic representations of programs and data are quite distinct, making the experience suboptimal.
Concluding Remarks
In conclusion, I maintain that there is a distinction between functional programming (as epitomized by Haskell) and symbolic programming (as epitomized by Mathematica). I think that if one studies both, then one will learn substantially more than by studying just one -- the ultimate test of distinctness.
Leaky Functional Abstraction in Mathematica?
Yup, leaky. Try this, for example:
f[x_] := g[Function[a, x]];
g[fn_] := Module[{h}, h[a_] := fn[a]; h[0]];
f[999]
Duly reported to, and acknowledged by, WRI. The response: avoid the use of Function[var, body] (Function[body] is okay).
You can think of Mathematica's symbolic programming as a search-and-replace system where you program by specifying search-and-replace rules.
For instance you could specify the following rule
area := Pi*radius^2;
The next time you use area, it'll be replaced with Pi*radius^2. Now suppose you define a new rule:
radius:=5
Now, whenever you use radius, it'll get rewritten into 5. If you evaluate area, it'll get rewritten into Pi*radius^2, which triggers the rewriting rule for radius, and you'll get Pi*5^2 as an intermediate result. This new form triggers a built-in rewriting rule for the ^ operation, so the expression gets further rewritten into Pi*25. At this point rewriting stops because there are no applicable rules.
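In a fresh kernel the whole chain looks like this (a short sketch; Mathematica displays Pi*25 as 25 Pi):
(* Define the two rewrite rules, then evaluate area. *)
area := Pi*radius^2;
radius := 5;
area
(* area -> Pi*radius^2 -> Pi*5^2 -> 25 Pi *)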
You can emulate functional programming by using your replacement rules as functions. For instance, if you want to define a function that adds, you could do
add[a_,b_]:=a+b
Now add[x,y] gets rewritten into x+y. If you want add to only apply for numeric a,b, you could instead do
add[a_?NumericQ, b_?NumericQ] := a + b
Now, add[2,3] gets rewritten into 2+3 using your rule, and then into 5 using the built-in rule for +, whereas add[test1,test2] remains unchanged.
Here's an example of an interactive replacement rule
a := ChoiceDialog["Pick one", {1, 2, 3, 4}]
a+1
Here, a gets replaced with the ChoiceDialog expression, which then gets replaced with the number the user chose in the dialog that popped up, which makes both quantities numeric and triggers the replacement rule for +. Here, ChoiceDialog has a built-in replacement rule along the lines of "replace ChoiceDialog[some stuff] with the value of the button the user clicked".
Rules can be defined using conditions, which themselves need to go through rule-rewriting in order to produce True or False. For instance, suppose you invented a new equation-solving method, but you think it only works when the final result of your method is positive. You could write the following rule:
solve[x + 5 == b_] := (result = b - 5; result /; result > 0)
Here, solve[x+5==20] gets replaced with 15, but solve[x+5==-20] is unchanged because there's no rule that applies. The condition that prevents this rule from applying is /;result>0. The evaluator essentially looks at the potential output of the rule application to decide whether to go ahead with it.
Mathematica's evaluator greedily rewrites every pattern with one of the rules that apply to that symbol. Sometimes you want finer control, and in such cases you can define your own rules and apply them manually, like this:
myrules = {area -> Pi radius^2, radius -> 5}
area //. myrules
This will apply rules defined in myrules until result stops changing. This is pretty similar to the default evaluator, but now you could have several sets of rules and apply them selectively. A more advanced example shows how to make a Prolog-like evaluator that searches over sequences of rule applications.
One drawback of the current Mathematica version comes up when you need to use Mathematica's default evaluator (to make use of Integrate, Solve, etc.) and want to change the default sequence of evaluation. That is possible but complicated, and I like to think that some future implementation of symbolic programming will have a more elegant way of controlling the evaluation sequence.
As others here have already mentioned, Mathematica does a lot of term rewriting. Maybe Haskell isn't the best comparison, but Pure is a nice functional term-rewriting language (that should feel familiar to people with a Haskell background). Maybe reading their wiki page on term rewriting will clear up a few things for you:
http://code.google.com/p/pure-lang/wiki/Rewriting
Mathematica uses term rewriting heavily. The language provides special syntax for various forms of rewriting, and special support for rules and strategies. The paradigm is not that "new" and of course it's not unique, but they're definitely on the bleeding edge of this "symbolic programming" thing, alongside other strong players such as Axiom.
As for the comparison to Haskell, well, you could do rewriting there, with a bit of help from the Scrap Your Boilerplate library, but it's not nearly as easy as in dynamically typed Mathematica.
Symbolic shouldn't be contrasted with functional; it should be contrasted with numerical programming. Consider as an example MatLab vs Mathematica. Suppose I want the characteristic polynomial of a matrix. If I wanted to do that in Mathematica, I could get an identity matrix (I) and the matrix (A) itself into Mathematica, then do this:
Det[A-lambda*I]
And I would get the characteristic polynomial (never mind that there's probably a built-in characteristic polynomial function; a concrete sketch follows at the end of this answer). On the other hand, if I were in MatLab I couldn't do it with base MatLab, because base MatLab is only good at calculating finite-precision numbers, not expressions with stray symbols like our lambda in there. What you'd have to do is buy the symbolic add-on (the Symbolic Math Toolbox), define lambda as its own line of code, and then write this out (wherein it would convert your A matrix to a matrix of rational numbers rather than finite-precision decimals). The performance difference would probably be unnoticeable for a small case like this, but MatLab would probably do it much slower than Mathematica in terms of relative speed.
So that's the difference: symbolic languages are interested in doing calculations with perfect accuracy (often using rational numbers rather than numerical approximations), while numerical programming languages are very good at the vast majority of calculations you would need to do. Numerical languages tend to be faster at the numerical operations they're meant for (MatLab is nearly unmatched in this regard among higher-level languages -- excluding C++, etc.) and piss-poor at symbolic operations.
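To make the Mathematica side concrete, here is a small sketch for a hypothetical 2x2 matrix. (In real Mathematica, I is reserved for the imaginary unit, so IdentityMatrix[2] stands in for the identity matrix.)
A = {{2, 1}, {1, 2}};
Det[A - lambda*IdentityMatrix[2]]
(* -> -1 + (2 - lambda)^2, i.e. lambda^2 - 4 lambda + 3 *)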
