No 3-element counterexample found - Alloy

I am learning Alloy and was trying to make it find two binary relations r and s, such that s equals the transitive closure of r, and such that s is not equal to r. I suppose I can ask Alloy to do this by executing the following:
sig V
{
  r : V,
  s : V
}
assert F { not ( some (s-r) and s=^r ) }
check F
Now Alloy 4.2 cannot find a counterexample, although there is an obvious 3-element structure: r = {(V0,V1), (V1,V2)} and s = r + {(V0,V2)}.
Can someone explain what is going on?

Translating your requirement directly:
// find two binary relations r and s such that
// s equals the transitive closure of r and s is not equal to r
run {some r, s: univ -> univ | s != r and s = ^r}
This gives an instance as expected. The mistake in your spec is that your declarations restrict the relations to be functions; you should instead have
sig V {
  r: set V,
  s: set V
}
or
sig V {r, s: set V}
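Putting the fix together, a minimal sketch of the corrected spec; with set-valued fields, the check should now report a counterexample such as the 3-element instance described above:
sig V {
  r: set V,
  s: set V
}
assert F { not ( some (s-r) and s = ^r ) }
check F for 3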

Understanding implementation of Extended Euclidean algorithm

After some experimentation and search, I came up with the following definition:
emcd' :: Integer -> Integer -> (Integer, Integer, Integer)
emcd' a 0 = (a, 1, 0)
emcd' a b =
    let (g, t, s) = emcd' b r
    in (g, s, t - (q * s))
  where
    (q, r) = divMod a b
What's the meaning behind the expression t - (q * s)?
I've tried to evaluate it by hand; even though I arrived at the correct result (1, -4, 15), I can't see why that expression returns the value of t.
There is a well-known method for calculating s and t in as + bt = gcd(a, b). In the process of finding the gcd, I get several equations.
By reversing the steps of the Euclidean Algorithm, it is possible to find these integers s and t. The resulting equations look like the expression t - (q * s); however, I can't figure out the exact process.
Since (q, r) = divMod a b, we have the equation
a = qb + r
and because of the recursive call, we have:
tb + sr = g
Substituting a-qb for r in the second equation, that means
tb + s(a-qb) = g
tb + sa - qsb = g
sa + (t-qs)b = g
This explains why s and t - q*s are good choices to return.
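To convince yourself, you can check the Bézout identity s*a + t*b == g on a concrete input. The definition is repeated from above so the snippet runs standalone; 56 and 15 are sample inputs of my choosing that happen to reproduce the (1, -4, 15) triple from the question:
emcd' :: Integer -> Integer -> (Integer, Integer, Integer)
emcd' a 0 = (a, 1, 0)
emcd' a b =
    let (g, t, s) = emcd' b r
    in (g, s, t - (q * s))
  where
    (q, r) = divMod a b

main :: IO ()
main = do
  print (emcd' 56 15)           -- (1,-4,15)
  let (g, s, t) = emcd' 56 15
  print (s * 56 + t * 15 == g)  -- True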

Understanding modify function from THIH (Typing Haskell In Haskell)

I am reading Typing Haskell in Haskell and have a hard time understanding this line of code on p. 13:
modify ce i c = ce{classes = \j -> if i==j then Just c else classes ce j}
Where is j coming from? There is brief coverage of modify, but no mention of j at all on p. 13.
On p. 14, a call return (modify ce i (is, [])) is made inside addClass. This is what I could not figure out: how can modify ce i (is, []) be called if no j is provided? Thanks for any help.
j is still part of the lambda expression
\j -> if i==j then Just c else classes ce j
that defines the value of the classes field. This function is a closure over the values i, c, and ce that modify itself receives as arguments.
It's like a recursive function: the result of modify ce i c is a value where (for any ce, i, c, and any x different from i)
classes (modify ce i c) i == Just c
and
classes (modify ce i c) x == classes ce x.
Except instead of classes actually calling itself, modify creates a new value of type ClassEnv that wraps a "smaller" value of the same type. The classes function unwraps that environment one layer at a time until it either finds a matching value for the original argument, or it reaches the initialEnv value, for which classes initialEnv _ == Nothing.
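To make the shape concrete, here is a stripped-down sketch (my simplification: the paper's ClassEnv also carries a defaults field, and Class is richer than this stand-in):
type Id = String
type Class = ([Id], [String])  -- stand-in for the paper's ([Id], [Inst])

data ClassEnv = ClassEnv { classes :: Id -> Maybe Class }

-- The empty environment: no class is known yet.
initialEnv :: ClassEnv
initialEnv = ClassEnv { classes = \_ -> Nothing }

-- Wrap one more layer around the old lookup function.
modify :: ClassEnv -> Id -> Class -> ClassEnv
modify ce i c = ce { classes = \j -> if i == j then Just c else classes ce j }

-- classes (modify initialEnv "Eq" ([], [])) "Eq"   ==  Just ([], [])
-- classes (modify initialEnv "Eq" ([], [])) "Ord"  ==  Nothing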
j is a lambda parameter. You can give a parameter any name you like. What does it represent? The type signature of modify tells us the first parameter is of type ClassEnv, so you can go read its definition (page 12) to see what type its classes field has.
To add to the other answers, what the author of that paper does is define a very simple map or dictionary using only functions. Normally you might write Map Id Class where Map is from the containers package, but you can also use the type Id -> Maybe Class which then basically is the lookup function for your map type. Then some simple functions can be implemented like this:
import Prelude hiding (lookup)  -- this map defines its own lookup

type Map k v = k -> Maybe v

singleton :: Eq k => k -> v -> Map k v
singleton k v = \k' -> if k == k' then Just v else Nothing

insert :: Eq k => k -> v -> Map k v -> Map k v
insert k v lookup = \k' -> if k' == k then Just v else lookup k'

union :: Map k v -> Map k v -> Map k v
union lookup1 lookup2 = \k -> case lookup1 k of
  Nothing -> lookup2 k
  v -> v

delete :: Eq k => k -> Map k v -> Map k v
delete k lookup = \k' -> if k == k' then Nothing else lookup k'

lookup :: Map k v -> k -> Maybe v
lookup = id
So, instead of defining the map as a collection of values, you define the map as the lookup function.
An advantage of this approach is that it is simple because it doesn't rely on external dependencies. But it is not so flexible: you cannot, for example, list all the keys and values in the map; and it is slow: lookups need to do a linear number of equality tests.
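A short usage sketch, assuming the definitions above (including the Prelude-hiding import) are in scope; the example keys and values are mine:
main :: IO ()
main = do
  let m = insert "b" 2 (singleton "a" 1)
  print (lookup m "a")                            -- Just 1
  print (lookup (delete "a" m) "a")               -- Nothing
  print (lookup (union m (singleton "c" 3)) "c")  -- Just 3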

Transitive closure with quantifiers in Alloy

For the setup, I have a set of theorems, each with a set of values as a premise, a label, and a value it proves.
1. A /\ B => C
2. C => D
2. A => D // note same label as C => D
3. B /\ C => V
4. ...
I also have the following constraints:
Labels and premises are fixed for each theorem. One theorem always belongs to label 1 and always has A /\ B as a premise, theorems C => ? and A => ? both always belong to label 2, etc. There may be distinct theorems with the same premise belonging to different labels. For example, it's possible that I could have 4. C => A, even though we already have 2. C => D.
All premises are of the form A /\ B /\ ... /\ N. I will never have A /\ (B \/ C) as a premise, nor will I have ~A. But I could have two theorems that share a label, one with A /\ B as a premise and one with A /\ C.
The value each theorem proves is variable, and in fact it is the only thing that varies. Each theorem may prove at most one value.
All theorems with the same label must prove the same value. If 2. C => proves nothing, then 2. A => must prove nothing as well. At most one label can prove a given value. This means it makes sense to write this example as 1. C 2. D 3. V ...
A value is "free" if no theorem proves it. V is never free.
A value is "provable" if it is A) free, B) belongs to a theorem where the premise is satisfiable with provable values.
A model is valid if V is provable. In this case it is, since A and B are free, which gets us C, which gets us V. However, 1. A 2. C 3. V is invalid. What I'm trying to do is figure out which additional facts are required to make all possible models valid. For example, that counterexample disappears if we add a fact that says "A proved value can't be its own premise.
Here's an Alloy model representing this:
abstract sig Label {
  proves: disj lone Value
}
one sig L1, L2, LV extends Label {}
abstract sig Value {}
one sig A, B, C, D, V extends Value {}
sig Theorem {
  premise: Value set -> Label
}
fun free: set Value {
  Value - Label.proves
}
pred solvable(v: Value) {
  v in free or // ???
}
pred Valid {
  solvable[V]
}
pred DefaultTheorems {
  one disj T1, T2, T3, T4: Theorem | {
    #Theorem = 4
    T1.premise = (A + B) -> L1
    T2.premise = C -> L2
    T3.premise = A -> L2
    T4.premise = (B + C) -> LV
  }
  LV.proves = V
}
check { DefaultTheorems => Valid } for 7
The problem I have is in the solvable predicate. I need it to obey the conjunctions and work for an arbitrary depth. One possible solution would be to use the transitive closure. But if I do v in free or v in free.*(Theorem.(premise.proves)), the model becomes too permissive. It would say that if C is free, then A /\ C -> A is provable. This is because Alloy does not permit sets inside sets, so it collapses {A C} -> A into A -> A and C -> A.
On the other hand, I could write it as
pred solvable(v: Value) {
  v in free or some t: Theorem |
    let premise' = (t.premise).(proves.v) |
      some premise' and all v': premise' | solvable[v']
}
But this is very slow and also has a maximum recursion depth of 3. Is there a way to get the speed and arbitrary depth of using a closure with the accuracy of using a quantifier? I suppose I could add a trace, where each step successively proves more values, but it seems odd to enforce an ordering on a system that doesn't need it.
After a lot of experimentation, I've decided that the only good way to do this is with a trace. Here's the final spec if anybody's interested:
open util/ordering[Step]
sig Step {}
abstract sig Label {
  proves: disj lone Value
}
one sig L1, L2, LV extends Label {}
abstract sig Value {
  proven_at: set Step
}
one sig A, B, C, D, V extends Value {}
sig Theorem {
  premise: Value set -> Label
}
fun free: set Value {
  Value - Label.proves
}
pred solvable(v: Value, s: Step) {
  v in proven_at.s or
  some t: Theorem |
    let premise' = (t.premise).(proves.v) |
      some premise' and all v': premise' |
        v' in proven_at.s
}
pred Valid {
  solvable[V, last]
}
fact Trace {
  free = proven_at.first
  all s: Step - last |
    let s' = s.next |
      proven_at.s' = proven_at.s + {v: Value | solvable[v, s]}
}
pred DefaultTheorems {
  one disj T1, T2, T3, T4: Theorem | {
    #Theorem = 4
    T1.premise = (A + B) -> L1
    T2.premise = C -> L2
    T3.premise = A -> L2
    T4.premise = (B + C) -> LV
  }
  LV.proves = V
}
check { DefaultTheorems => Valid } for 8 but 4 Step

Is this an accurate example of a Haskell Pullback?

I'm still trying to grasp an intuition for pullbacks (from category theory), limits, and universal properties, and I'm not quite catching their usefulness, so maybe you could help shed some insight on that as well as verify my trivial example?
The following is intentionally verbose: the pullback should be (p, p1, p2), and (q, q1, q2) is one example of a non-universal object to "test" the pullback against, to see if things commute properly.
-- MY DIAGRAM, A -> B <- C
type A = Int
type C = Bool
type B = (A, C)
f :: A -> B
f x = (x, True)
g :: C -> B
g x = (1, x)
-- PULLBACK, (p, p1, p2)
type PL = Int
type PR = Bool
type P = (PL, PR)
p = (1, True) :: P
p1 = fst
p2 = snd
-- (g . p2) p == (f . p1) p
-- TEST CASE
type QL = Int
type QR = Bool
type Q = (QL, QR)
q = (152, False) :: Q
q1 :: Q -> A
q1 = ((+) 1) . fst
q2 :: Q -> C
q2 = ((||) True) . snd
u :: Q -> P
u (_, _) = (1, True)
-- (p2 . u == q2) && (p1 . u == q1)
I was just trying to come up with an example that fit the definition, but it doesn't seem particularly useful. When would I "look for" a pull back, or use one?
I'm not sure Haskell functions are the best context in which to talk about pull-backs.
The pull-back of A -> B and C -> B can be identified with a subset of A x C, and subset relationships are not directly expressible in Haskell's type system. In your specific example the pull-back would be the single element (1, True), because x = 1 and b = True are the only values for which f(x) = g(b).
Some good "practical" examples of pull-backs may be found starting on page 41 of Category Theory for Scientists by David I. Spivak.
Relational joins are the archetypal example of pull-backs which occur in computer science. The query:
SELECT ...
FROM A, B
WHERE A.x = B.y
selects pairs of rows (a, b) where a is a row from table A and b is a row from table B, and where some function of a equals some other function of b. In this case the functions being pulled back are f(a) = a.x and g(b) = b.y.
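As a rough Haskell sketch of the same idea, with finite lists standing in for the sets (the name pullback and the list-based encoding are mine):
-- The pullback of f and g: the subset of pairs on which they agree.
pullback :: Eq b => (a -> b) -> (c -> b) -> [a] -> [c] -> [(a, c)]
pullback f g as cs = [ (a, c) | a <- as, c <- cs, f a == g c ]

-- With the question's f and g, this recovers the single element:
-- pullback f g [0..2] [False, True]  ==  [(1, True)]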
Another interesting example of a pullback is type unification in type inference. You get type constraints from several places where a variable is used, and you want to find the tightest unifying constraint. I mention this example in my blog.

Code that exercises type inference

I'm working on an experimental programming language that has global polymorphic type inference.
I recently got the algorithm working sufficiently well to correctly type the bits of sample code I'm throwing at it. I'm now looking for something more complex that will exercise the edge cases.
Can anyone point me at a source of really gnarly and horrible code fragments that I can use for this? I'm sure the functional programming world has plenty. I'm particularly looking for examples that do evil things with function recursion, as I need to check to make sure that function expansion terminates correctly, but anything's good --- I need to build a test suite. Any suggestions?
My language is largely imperative, but any ML-style code ought to be easy to convert.
My general strategy is actually to approach it from the opposite direction -- ensure that it rejects incorrect things!
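For example, self-application is a classic negative test: it should be rejected because typing it fails the occurs check (the function name is mine):
(* Should be REJECTED: typing x x forces 'a to unify with 'a -> 'b. *)
fun selfApply x = x x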
That said, here are some standard "confirmation" tests I usually use:
The eager fix point combinator (unashamedly stolen from here):
datatype 'a t = T of 'a t -> 'a
val y = fn f => (fn (T x) => (f (fn a => x (T x) a)))
                (T (fn (T x) => (f (fn a => x (T x) a))))
Obvious mutual recursion:
fun f x = g (f x)
and g x = f (g x)
Check out those deeply nested let expressions too:
val a = let
  val b = let
    val c = let
      val d = let
        val e = let
          val f = let
            val g = let
              val h = fn x => x + 1
            in h end
          in g end
        in f end
      in e end
    in d end
  in c end
in b end
Deeply nested higher order functions!
fun f g h i j k l m n =
  fn x => fn y => fn z => x o g o h o i o j o k o l o m o n o x o y o z
I don't know if you have to have the value restriction in order to incorporate mutable references. If so, see what happens:
fun map' f [] = []
  | map' f (h::t) = f h :: map' f t
fun rev' [] = []
  | rev' (h::t) = rev' t @ [h]
val x = map' rev'
You might need to implement map and rev in the standard way :)
Then with actual references lying around (stolen from here):
val stack =
  let val stk = ref [] in
    {push = fn x => stk := x :: !stk,
     pop = fn () => stk := tl (!stk),
     top = fn () => hd (!stk)}
  end
Hope these help in some way. Make sure to try to build a set of regression tests you can re-run in some automatic fashion to ensure that all of your type inference behaves correctly through all changes you make :)
