What does it mean to "run" a function in Alloy?

My understanding is that functions in Alloy return a value. However, I noticed that you can run a function using a run command the same way you would run a predicate. What does running a function mean and how is this functionality used in Alloy?

You can think of a function as being just like a predicate: it's a constraint, and when you run it, Alloy finds an instance that makes the constraint true. In this case, the instance will comprise a set of arguments for the function, the values of signatures and fields, and the function's result.
Running a function, like running a predicate, gives you a better understanding by showing you sample executions. Think of it as running test cases, but without having to write the tests :-)
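For example, with a minimal model invented here for illustration:

sig Person { parent: lone Person }

-- the ancestors of p: everyone reachable via one or more parent steps
fun ancestors[p: Person]: set Person { p.^parent }

-- asks the Analyzer for an instance: a binding for p, the Person atoms
-- and the parent field, and the value of the function's result
run ancestors for 4 Person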

Related

Is there a way to test termination in Haskell using existing libraries?

There are some libraries in Haskell for testing properties or for unit tests, etc. But I have yet to find one that allows for testing termination/nontermination of a function.
Of course you can't simply test whether an expression terminates, because of the Halting Problem. But I could guess an upper bound on how long the function should take to evaluate, and if it doesn't finish within that time, the test would fail on the grounds that the function was (likely) never going to terminate.
Is there a library that allows for testing termination like this?
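The timeout idea from the question can be sketched directly with System.Timeout from base and force from the deepseq package (the helper name evaluatesWithin is made up):

import Control.DeepSeq (NFData, force)
import Control.Exception (evaluate)
import System.Timeout (timeout)

-- Just x if the value was fully evaluated within the given number of
-- microseconds; Nothing if the deadline passed first. (GHC's timeout
-- relies on asynchronous exceptions, which can usually, but not
-- always, interrupt a pure loop.)
evaluatesWithin :: NFData a => Int -> a -> IO (Maybe a)
evaluatesWithin micros x = timeout micros (evaluate (force x))

A test harness could then fail on a Nothing result, e.g. evaluatesWithin 1000000 (sum [1 .. 10 ^ 6]).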

Mockito: Difference between doThrow() and thenThrow()

What's the difference between doThrow() and thenThrow()?
Let's say, we want to mock an authentication service to validate the login credentials of a user. What's the difference between the following two lines if we were to mock an exception?
doThrow(new BadCredentialsException("Wrong username/password!")).when(authenticationService).login("user1", "pass1");
vs
when(authenticationService.login("user1", "pass1")).thenThrow(new BadCredentialsException("Wrong username/password!"));
Almost nothing: in simple cases they behave exactly the same. The when syntax reads more like a grammatical sentence in English.
Why "almost"? Note that the when style actually contains a call to authenticationService.login. That's the first expression evaluated in that line, so whatever behavior you have stubbed will happen during the call to when. Most of the time, there's no problem here: the method call has no stubbed behavior, so Mockito only returns a dummy value and the two calls are exactly equivalent. However, this might not be the case if either of the following are true:
you're overriding behavior you already stubbed, particularly to run an Answer or throw an Exception
you're working with a spy with a non-trivial implementation
In those cases, doThrow(...).when(authenticationService) puts the mock into stubbing mode before login is ever called, so the dangerous behavior is deactivated; when().thenThrow() instead invokes the dangerous method for real and can throw off your test.
(Of course, for void methods, you'll also need to use doThrow—the when syntax won't compile without a return value. There's no choice there.)
Thus, doThrow is always a little safer as a rule, but when().thenThrow() is slightly more readable and usually equivalent.
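To make the spy pitfall concrete, here's a hypothetical illustration in the style of Mockito's own documentation (assuming the usual static imports from org.mockito.Mockito):

List<String> spyList = spy(new ArrayList<String>());

// Dangerous: when(...) evaluates spyList.get(0) first, which runs the
// real method on an empty list and throws IndexOutOfBoundsException
// before the stub is ever registered.
when(spyList.get(0)).thenThrow(new RuntimeException());

// Safe: the spy is put into stubbing mode first, so get(0) is
// intercepted rather than actually executed.
doThrow(new RuntimeException()).when(spyList).get(0);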

Difference in testing between strongly-typed and weakly-typed languages

I come from the JS world and am used to doing thorough testing of all the cases that can result from weak typing. To that end, inside a function I check that all the incoming parameters conform to some criteria.
As an example, in the function createUser(username, id, enabled, role){} I would check that username is a string, id is a UUID, enabled is a boolean, and role is a string that must be 'admin', 'user' or 'system'.
I write tests for these cases to make sure that passing wrong parameters makes the tests fail and points me at the bug that caused it. In the end, I have quite a lot of tests, many of which are purely type-checking tests.
Now I am playing with Swift, which is strongly-typed. I use it to create a client app that consumes data from a NodeJS server side. If I want to create a similar createUser() function in Swift, it seems I need far fewer tests because type checking is built into the language itself.
Is it right to think that, basically, a strongly-typed language needs fewer tests than a weakly-typed one? Some tests simply seem unnecessary in Swift, and the whole testing process seems more lightweight.
Are there things I can do to write even fewer tests by using language constructs in some specific manner, and still be sure the code is correct and would pass the tests by definition?
The use of optionals and non-optionals, guard, and if let may save you some nil checks. For example, the guard statement:
A guard statement is used to transfer program control out of a scope if one or more conditions aren’t met.
A guard statement has the following form:
guard condition else {
    statements
}
More generally, read the Statements chapter of The Swift Programming Language:
https://developer.apple.com/library/ios/documentation/Swift/Conceptual/Swift_Programming_Language/Statements.html
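To tie this back to the createUser example, here is a hedged sketch of how Swift's types can make the JS-style checks unrepresentable rather than tested for (all names are invented for illustration):

import Foundation

// role can only ever be one of the three allowed values, so no test
// for an invalid role string is needed.
enum Role: String {
    case admin, user, system
}

struct User {
    let username: String
    let id: UUID
    let enabled: Bool
    let role: Role
}

// The compiler guarantees the types of all arguments; the only runtime
// validation left is at the boundary where untyped server data
// arrives, handled in one place with guard.
func makeUser(username: String?, id: String, enabled: Bool, rawRole: String) -> User? {
    guard let name = username,
          let uuid = UUID(uuidString: id),
          let role = Role(rawValue: rawRole)
    else { return nil }
    return User(username: name, id: uuid, enabled: enabled, role: role)
}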

How to find the optimal processing order?

I have an interesting question, but I'm not sure exactly how to phrase it...
Consider the lambda calculus. For a given lambda expression, there are several possible reduction orders. But some of these don't terminate, while others do.
In the lambda calculus, it turns out that there is one particular reduction order which is guaranteed to always terminate with an irreducible solution if one actually exists. It's called Normal Order.
I've written a simple logic solver. But the trouble is, the order in which it processes the constraints seems to have a huge effect on whether it finds any solutions or not. Basically, I'm wondering whether something like a normal order exists for my logic programming language. (Or whether it's impossible for a mere machine to deterministically solve this problem.)
So that's what I'm after. Presumably the answer critically depends on exactly what the "simple logic solver" is. So I will attempt to briefly describe it.
My program is closely based on the system of combinators in chapter 9 of The Fun of Programming (Jeremy Gibbons & Oege de Moor). The language has the following structure:
The input to the solver is a single predicate. Predicates may involve variables. The output from the solver is zero or more solutions. A solution is a set of variable assignments which make the predicate become true.
Variables hold expressions. An expression is an integer, a variable name, or a tuple of subexpressions.
There is an equality predicate, which compares expressions (not predicates) for equality. It is satisfied if substituting every (bound) variable with its value makes the two expressions identical. (In particular, every variable equals itself, bound or not.) This predicate is solved using unification.
There are also operators for AND and OR, which work in the obvious way. There is no NOT operator.
There is an "exists" operator, which essentially creates local variables.
The facility to define named predicates enables recursive looping.
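A rough Haskell transcription of that structure, to make the description concrete (the constructor names are invented; the actual combinators in The Fun of Programming differ):

-- Expressions: integers, variable names, or tuples of subexpressions.
data Expr = IntE Integer | Var String | Tuple [Expr]

-- Predicates: equality on expressions (solved by unification),
-- AND, OR (no NOT), and exists, which introduces a local variable.
-- Named predicates are ordinary Haskell definitions, which is what
-- enables recursive looping.
data Pred
  = Equal Expr Expr
  | And Pred Pred
  | Or Pred Pred
  | Exists (Expr -> Pred)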
One of the "interesting things" about logic programming is that once you write a named predicate, it typically works forwards and backwards (and sometimes even sideways). Canonical example: a predicate to concatenate two lists can also be used to split a list into all possible pairs, as sketched below.
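In list-of-successes style, that backwards reading looks roughly like this (the function name is invented):

-- "append run backwards": enumerate every (prefix, suffix) pair whose
-- concatenation is the input list.
splits :: [a] -> [([a], [a])]
splits []     = [([], [])]
splits (x:xs) = ([], x:xs) : [ (x:p, s) | (p, s) <- splits xs ]

-- splits [1,2] == [([],[1,2]), ([1],[2]), ([1,2],[])]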
But sometimes running a predicate backwards results in an infinite search, unless you rearrange the order of the terms. (E.g., swap the LHS and RHS of an AND or an OR somewhere.) I'm wondering whether there's some automated way to detect the best order to run the predicates in, to ensure prompt termination in all cases where the solution set is actually finite.
Any suggestions?
Relevant paper, I think: http://www.cs.technion.ac.il/~shaulm/papers/abstracts/Ledeniov-1998-DCS.html
Also take a look at this: http://en.wikipedia.org/wiki/Constraint_logic_programming#Bottom-up_evaluation

Ordering of parameters to make use of currying

I have twice recently refactored code to change the order of parameters, because there was too much code where hacks like flip or \x -> foo bar x 42 were happening.
When designing a function signature what principles will help me to make the best use of currying?
For languages that support currying and partial-application easily, there is one compelling series of arguments, originally from Chris Okasaki:
Put the data structure as the last argument
Why? You can then compose operations on the data nicely. E.g. insert 1 $ insert 2 $ insert 3 $ s. This also helps for functions on state.
Standard libraries such as "containers" follow this convention.
Arguments are sometimes made for putting the data structure first instead, so it can be closed over, yielding functions on a static structure (e.g. lookup) that are a bit more concise. However, the broad consensus seems to be that this is less of a win, especially since it pushes you towards heavily parenthesized code.
Put the most varying argument last
For recursive functions, it is common to put the argument that varies the most (e.g. an accumulator) last, and the argument that varies the least (e.g. a function argument) first. This composes well with the data-structure-last style.
A summary of the Okasaki view is given in his Edison library (again, another data structure library):
Partial application: arguments more likely to be static usually appear before other arguments in order to facilitate partial application.
Collection appears last: in all cases where an operation queries a single collection or modifies an existing collection, the collection argument will appear last. This is something of a de facto standard for Haskell datastructure libraries and lends a degree of consistency to the API.
Most usual order: where an operation represents a well-known mathematical function on more than one datastructure, the arguments are chosen to match the most usual argument order for the function.
Place the arguments that you are most likely to reuse first. Function arguments are a great example of this. You are much more likely to want to map f over two different lists, than you are to want to map many different functions over the same list.
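A small sketch of both conventions, using Data.Map from the containers package (the helper names are invented):

import qualified Data.Map as Map

-- Collection last: Map.insert takes the map as its final argument,
-- so operations compose into a pipeline.
withDefaults :: Map.Map String Int -> Map.Map String Int
withDefaults = Map.insert "retries" 3 . Map.insert "timeout" 30

-- Most-reused argument first: partially applying map to a function
-- yields something usable on many different lists.
doubleAll :: [Int] -> [Int]
doubleAll = map (* 2)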
I tend to do what you did, pick some order that seems good and then refactor if it turns out that another order is better. The order depends a lot on how you are going to use the function (naturally).
