I found myself trying to sum a set of naturals. I was puzzled by the following behavior when running a simple model.
(assume the following code is in a copy of util/natural, so ord is imported)
// sums the values in a set of naturals
fun setsum[nums : set Natural] : lone Natural {
  {n : Natural | #ord/prevs[n] = (sum x : nums | #ord/prevs[x])}
}
then, in a module importing my copy of util/natural:
private open mynatural as nat
let two = nat/add[nat/One, nat/One]
let three = nat/add[two, nat/One]
let four = nat/add[two, two]
let five = nat/add[four, nat/One]
pred showExpectSum10 {
some x : Natural | x in setsum[{n : Natural | nat/lt[n, five]}]
}
//run showExpectSum10 for 15 //result is 10, as expected
//run showExpectSum10 for 1 but 20 Natural //result is 10 as expected
run showExpectSum10 for 1 but 40 Natural //result is 26 somehow.
Why does changing the scope of Natural affect the result this way?
It seems you just need to disable overflows ("Options -> Forbid Overflows: Yes"), and then it should work as expected. Every time integer arithmetic is used and overflows are allowed (which is the default setting), it is possible to get spurious counterexamples (i.e., invalid instances) due to the default "wraparound" semantics of arithmetic operations in Alloy.
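To see the wraparound concretely, here is a minimal sketch (my own example, not the poster's model): with the default 4-bit width, Int covers -8..7, so adding 1 to 7 wraps around to -8 unless overflows are forbidden.
// minimal sketch (not from the original post): with overflows allowed this
// finds an instance, because plus[7, 1] wraps around to -8; with
// "Forbid Overflows: Yes" it finds none
open util/integer
run { plus[7, 1] = -8 } for 4 Int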
Below is an Alloy model representing this set of integers: {0, 2, 4, 6}
As you know, the plus symbol (+) denotes set union. How can 0 be unioned with 2? 0 and 2 are not sets. I thought the union operator applied only to sets. Isn't this violating a basic notion of set union?
Second question: Is there a better way to model this, one that is less cognitively jarring?
one sig List {
numbers: set Int
} {
numbers = 0 + 2 + 4 + 6
}
In Alloy, everything you work with is a set of tuples: none is the empty set, and many values are relations whose tuples have arity > 1. Each integer, when you use it, is likewise a relation of arity 1 and cardinality 1. That is, when you write 1 in Alloy it is really {(1)}, a singleton set containing the atom 1. So the definition is in effect like:
enum Int {-8,-7,-6,-5,-4,-3,-2,-1,0,1,2,3,4,5,6,7}
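To see this concretely, here is a small sketch you can run (mine, not part of the original answer):
// each integer literal is a singleton unary relation, so + between them is
// plain set union, and the result below is a four-element set of Int atoms
run { #(0 + 2 + 4 + 6) = 4 } for 4 Int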
Ints in Alloy are just not very good integers :-( The finite set of atoms is normally not a problem, but with Ints there are just too few of them to be really useful. Worse, they quickly overflow, and Alloy is not good at handling this at all.
But I do agree it looks ugly. I have an even worse problem with seq.
0->A + 1->B + 2->C + 3->C
I already experimented with adding literal seq to Alloy and got an experimental version running. Maybe sets could also be implemented this way:
// does not work in Alloy 4
seq [ A, B, C, C ] = 0->A + 1->B + 2->C + 3->C
set [ 1, 2, 3, 3 ] = 1+2+3
Today you could do this:
let x[a, b] = { a + b }
run {
  x[1, x[2, x[3, 4]]] = 1+2+3+4
} for 4 int
But I am not sure I like this any better. If macros had meta fields, or made their arguments available as a sequence (as most interpreters do), then we could do this:
// does not work in Alloy 4
let list[ args ... ] = Int.args // args = seq univ
run {
range[ list[1,2,3,4,4] ] = 1+2+3+4
}
If you like the seq [ A, B, C, C ] syntax or the varargs then start a thread on the AlloyTools list. As said, I got the seq [ A, B, C, C ] working in a prototype.
/*
sig a {
}
sig b {
}
*/
pred rel_test(r : univ -> univ) {
# r = 1
}
run {
some r : univ -> univ {
rel_test [r]
}
} for 2
Running this small test, $r contains one tuple in every generated instance. When sig a and sig b are uncommented, however, the first instance produced is one in which $r has 9 tuples, and still the predicate, which asks for a one-tuple relation, succeeds. Where am I wrong?
An auxiliary question: are these two declarations equivalent?
pred rel_test(r : univ -> univ)
pred rel_test(r : set univ -> univ)
The problem is that with the Forbid Overflow option set to No, the integer semantics in Alloy is wraparound, and with the default scope of 3 (bits) we indeed have 9 = 1, as you can confirm in the evaluator.
With the signatures a and b commented out, the biggest relation that can be generated with scope 2 has 4 tuples (since the max size of univ is 2), so the problem does not occur.
It also does not occur in the latest build, because I believe it comes with the Forbid Overflow option set to Yes by default; with that option, the semantics of integers rules out instances where overflows occur, which is precisely what happens when you compute the size of the relation with 9 tuples. More details about this alternative integer semantics can be found in the paper "Preventing Arithmetic Overflows in Alloy" by Aleksandar Milicevic and Daniel Jackson.
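For reference, here is a small sketch (mine, not from the original exchange) that reproduces the effect with an explicit 3-bit integer scope:
// with integers 3 bits wide and overflows allowed, a 9-tuple relation
// satisfies # r = 1 because 9 wraps around to 1; with "Forbid Overflows: Yes"
// the same run finds no instance
sig A {}
run {
  some r : A -> A | r = A -> A and #A = 3 and #r = 1
} for 3 but 3 Int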
On the main question: what version of Alloy are you using? I'm unable to replicate the behavior you describe (using Alloy 4.2 of 22 Feb 2015 on OS X 10.6.8).
On the auxiliary question: it appears so. (The language reference is not quite as explicit as one might wish, but it begins one part of its discussion of multiplicities with "If the right-hand expression denotes a unary relation ..." and says (in what I take to be the context so defined) that "the default multiplicity is one"; the conditional would make no sense if the default multiplicity were always one.)
On the other hand, the same interpretive logic would lead to the conclusion that the language reference believes that unary multiplicity keywords are only allowed before expressions denoting unary relations (which would appear to make r: set univ -> univ ungrammatical). But Alloy accepts the expression and parses it as set (univ -> univ). (The alternative parse, (set univ) -> univ, would be very hard to assign a meaning to.)
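As a quick sanity check (my own sketch, not part of the original answer), both forms of the declaration compile and accept the same binary relation:
// both parameter declarations are accepted and behave identically when
// applied to a concrete binary relation such as univ -> univ
pred rel_test_a[r : univ -> univ] { some r }
pred rel_test_b[r : set univ -> univ] { some r }
run { rel_test_a[univ -> univ] and rel_test_b[univ -> univ] } for 2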
I was inspired by the slides http://alloy.mit.edu/alloy/tutorials/day-course/s4_dynamic.pdf by Greg Dennis and Rob Seater to model an automaton whose transitions are defined by rules and invariants, but I cannot understand why the constraints are satisfied even in the presence of invariants and contradictory rules.
sig State {value : one Int}
sig System { trans : State -> State }
pred i1[s : State] { s.value < 4 }
pred i2[s : State] { s.value > 0 }
pred r1[s, s' : State, m, m' : System] {
s.value = -1 and s'.value = 0 and change[s, s', m, m']
}
pred r2[s, s' : State, m, m' : System] {
s.value = 1 and s'.value = 2 and change[s, s', m, m']
}
pred change[s, s' : State, m, m' : System] {
m'.trans = m.trans + s -> s'
}
assert ruleSafe {
all s, s' : State, m,m' : System |
i1[s] and i2[s] and r1[s,s',m,m'] and r2[s,s',m,m'] =>
i1[s'] and i2[s']
}
check ruleSafe
First, a side note: it's not clear what the value relation is intended to be doing, but do remember that integers in Alloy tend to be very narrow, and overflow behavior can make rules like s.value < 4 produce unexpected results. Whenever integers are used only to ensure a nice supply of distinct or ordered values, it is almost always better to use an uninterpreted type in Alloy, with some relations to ensure the necessary constraints.
However, that's not the main source of your confusion. I think you believe that since r1 requires its first argument to have a value of -1 and r2 requires the same argument to have a value of 1, any condition involving both r1 and r2 with the same first argument cannot possibly be satisfied, and there can be no instances of a model which is required to satisfy such a predicate. Almost true!
I think you're correct that r1 and r2 cannot both be true for the same set of arguments (or any sets of arguments with the same first argument s). But the constraint you specify uses r1[s,s',m,m'] and r2[s,s',m,m'] in the antecedent of a conditional. Before you go any further, stop and think: What is the truth value of a conditional if the antecedent is false?
If you are having trouble seeing what is going on, try examining the instances of your model in which the antecedent of your conditional is true for all pairs of States and Systems. Concretely, add the following to your model and look at the instances it produces.
pred antecedent {
all s, s' : State, m, m' : System
| i1[s] and i2[s] and r1[s,s',m,m'] and r2[s,s',m,m']
}
run antecedent
It took me a minute or two before I understood what I was seeing; if it takes you longer than a few minutes, you might try the following variant definition of antecedent and see if it enlightens you.
pred antecedent {
all s, s' : State, m, m' : System
| i1[s] and i2[s] and r1[s,s',m,m'] and r2[s,s',m,m']
some State
some System
}
You are not the first user of first-order logic to have been taken by surprise when a conditional statement in your model turned out to be vacuously true when you expected it to be false. You won't be the last.
I'm trying to write a function in Haskell that calculates all factors of a given number except itself.
The result should look something like this:
factorlist 15 => [1,3,5]
I'm new to Haskell and the whole recursion subject, which I'm pretty sure I'm supposed to apply in this example, but I don't know where or how.
My idea was to compare the given number with the elements of a list from 1 to n `div` 2 using the mod function, somehow recursively, and if the result is 0 then add the number to a new list. (I hope this makes sense.)
I would appreciate any help on this matter.
Here is my code so far (it doesn't work, but it should illustrate my idea):
factorList :: Int -> [Int]
factorList n |n `mod` head [1..n`div`2] == 0 = x:[]
There are several ways to handle this. But first of all, let's write a small helper:
isFactorOf :: Integral a => a -> a -> Bool
isFactorOf x n = n `mod` x == 0
That way we can write 12 `isFactorOf` 24 and get either True or False. For the recursive part, let's assume that we use a function with two arguments: the first being the number we want to factorize, the second being the factor we're currently testing. We're only testing factors less than or equal to n `div` 2, and this leads to:
createList n f | f <= n `div` 2 = if f `isFactorOf` n
then f : next
else next
| otherwise = []
where next = createList n (f + 1)
So if the second parameter is a factor of n, we add it onto the list and proceed, otherwise we just proceed. We do this only as long as f <= n `div` 2. Now in order to create factorList, we can simply use createList with a sufficient second parameter:
factorList n = createList n 1
The recursion is hidden in createList. As such, createList is a worker, and you could hide it in a where inside of factorList.
Note that one could easily define factorList with filter or list comprehensions:
factorList' n = filter (`isFactorOf` n) [1 .. n `div` 2]
factorList'' n = [ x | x <- [1 .. n `div` 2], x `isFactorOf` n ]
But in this case you wouldn't have written the recursion yourself.
Further exercises:
Try to implement the filter function yourself.
Create another function which returns only the prime factors. You can either use your previous result and write a prime filter, or write a recursive function which generates them directly (the latter is faster).
@Zeta's answer is interesting. But if you're new to Haskell like I am, you may want a "simple" answer to start with. (Just to get the basic recursion pattern... and to understand the indenting, and things like that.)
I'm not going to divide anything by 2 and I will include the number itself. So factorlist 15 => [1,3,5,15] in my example:
factorList :: Int -> [Int]
factorList value = factorsGreaterOrEqual 1
where
factorsGreaterOrEqual test
| (test == value) = [value]
| (value `mod` test == 0) = test : restOfFactors
| otherwise = restOfFactors
where restOfFactors = factorsGreaterOrEqual (test + 1)
The first line is the type signature, which you already knew about. The type signature doesn't have to live right next to the list of pattern definitions for a function (though the patterns themselves need to be all together on sequential lines).
Then factorList is defined in terms of a helper function. This helper function is defined in a where clause...that means it is local and has access to the value parameter. Were we to define factorsGreaterOrEqual globally, then it would need two parameters as value would not be in scope, e.g.
factorsGreaterOrEqual 4 15 => [5,15]
You might argue that factorsGreaterOrEqual is a useful function in its own right. Maybe it is, maybe it isn't. But in this case we're going to say it isn't of general use besides to help us define factorList...so using the where clause and picking up value implicitly is cleaner.
The indentation rules of Haskell are (to my tastes) weird, but here they are summarized. I'm indenting with two spaces here because the code drifts too far to the right if you use 4.
The list of boolean tests with the pipe character in front of each is what Haskell calls "guards". I simply establish the terminal condition as being when the test hits the value; so factorsGreaterOrEqual N = [N] if we were doing a call to factorList N. Then we decide whether to add the test number to the front of the list by whether dividing the value by it leaves no remainder. (otherwise is a Haskell keyword, kind of like default in C-like switch statements for the fall-through case.)
Showing another level of nesting and another implicit parameter demonstration, I added a where clause to locally define a function called restOfFactors. There is no need to pass test as a parameter to restOfFactors because it lives "in the scope" of factorsGreaterOrEqual...and as that lives in the scope of factorList then value is available as well.
I was experimenting with Alloy and wrote this code.
one sig s1{
vals: some Int
}{
#vals = 4
}
one sig s2{
vals: some Int
}{
#vals = 4
}
fact {
all a : s1.vals | a > 2
all i : s2.vals | i < 15
s1.vals = s2.vals
}
pred p{}
run p
It seems to me that {3,4,5,6} at least is a solution, yet Alloy says no instance is found. When I comment out s1.vals = s2.vals or change i < 15 to i > 2, it finds instances.
Can anyone please explain why? Thanks.
Alloy's relationship with integers is sometimes mildly strained; it's not designed for heavily numeric applications, and many uses of integers in conventional programming are better handled in Alloy by other signatures.
The default bit width for integers is 4 bits, and Alloy uses two's-complement integers, so your run p is asking for a world in which integers range in value from -8 to 7. In that world, the constraint i < 15 is subject to integer overflow, and turns out to mean, in effect, i < -1. (To see this, comment out both of your constraints so that you get some instances. Then (a) leaf through the instances produced by the Analyzer and look at the integers that appear in them; you'll see their range is as I describe. Also, (b) open the Evaluator and type the numeral "15"; you'll see that its value in this universe is -1.)
If you change your run command to provide an appropriate bit width for integers (e.g. run p for 5 int), you'll get instances which are probably more like what you were expecting.
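For instance, a sketch of that change applied to the model above:
// a 5-bit width gives the range -16..15, so the literal 15 means what it
// appears to mean and the facts become satisfiable
run p for 5 Int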
An alternative change, however, which leads to a more idiomatic Alloy model, is to abstract away from the specific kind of value by defining a sig for values:
sig value {}
Then change the declaration for vals in s1 and s2 from some Int to some value, and comment out the numeric constraints on them (or substitute some other interesting constraints for them). And then run p in a suitable scope (e.g. run p for 8 value).
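Putting those suggestions together, the revised model might look like this (a sketch of my reading of the advice, not the answerer's exact code):
sig value {}

one sig s1 {
  vals: some value
} {
  #vals = 4
}

one sig s2 {
  vals: some value
} {
  #vals = 4
}

fact {
  s1.vals = s2.vals
}

pred p {}
run p for 8 value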