Comparing elements of sequence and set in TLA+

Given a sequence S = <<1,2,3,4>> and a set S' = {1,2,3,4,5,6}, how do we check whether both of them contain the same values in TLA+?

Define Range(f) == {f[x] : x \in DOMAIN f}. Since all sequences are functions, Range(S) gives us the set of values of the sequence S. Then we check that both have the same elements with Range(S) = S_prime.
(We can't call the set S' because in TLA+ that means "the next-state value of S".)
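A minimal sketch of how this might look in a module (the module and operator names are illustrative):

---- MODULE SeqSetCompare ----
EXTENDS Integers, Sequences

Range(f) == {f[x] : x \in DOMAIN f}

MySeq == <<1, 2, 3, 4>>
MySet == {1, 2, 3, 4, 5, 6}

\* TRUE iff the sequence and the set contain exactly the same values;
\* for these particular values it is FALSE, since 5 and 6 never occur in MySeq.
SameValues == Range(MySeq) = MySet
====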

Related

How can I generate a graph by constraining it to be subisomorphic to a given graph, while not subisomorphic to another?

TL;DR: How can I generate a graph while constraining it to be subisomorphic to every graph in a positive list and non-subisomorphic to every graph in a negative list?
I have a list of directed heterogeneous attributed graphs labeled as positive or negative. I would like to find the smallest list of patterns (graphs with special values) such that:
Every input graph has a pattern that matches (= 'P is subisomorphic to G, and the mapped nodes have the same attribute values')
A positive pattern can only match a positive graph
A positive pattern does not match any negative graph
A negative pattern can only match a negative graph
A negative pattern does not match any positive graph
Example:
Input: g1(+), g2(-), g3(+), g4(+), g5(-), g6(+)
Acceptable solution: p1(+), p2(+), p3(-), where p1(+) matches g1(+) and g4(+); p2(+) matches g3(+) and g6(+); and p3(-) matches g2(-) and g5(-)
Unacceptable solution: p1(+), p2(-), where p1(+) matches g1(+), g2(-), g3(+); p2(-) matches g4(+), g5(-), g6(+)
Currently, I'm able to generate graphs matching every graph in a list, but I can't manage to enforce the constraint 'A positive pattern does not match any negative graph'. I made a predicate 'matches', which takes a pattern and a graph as input and uses a local array of variables 'mapping' to try to map nodes together. But when I try to use that predicate in a negative context, the following error is returned: MiniZinc: flattening error: free variable in non-positive context.
How can I bypass that limitation? I tried to code the opposite predicate 'not_matches', but I've not yet found how to specify 'for all node mappings, the isomorphism is invalid'. I also can't define the mapping outside the predicate, because a pattern can match a graph more than once and I need to be able to get all mappings.
Here is a reproducible example:
include "globals.mzn";
predicate p(array [1..5] of var 0..10:arr1, array [1..5] of 1..10:arr2)=
let{array [1..5] of var 1..5: mapping; constraint all_different(mapping)} in (forall(i in 1..5)(arr1[i]=0\/arr1[i]=arr2[mapping[i]]));
array [1..5] of var 0..10:arr;
constraint p(arr,[1,2,3,4,5]);
constraint p(arr,[1,2,3,4,6]);
constraint not p(arr,[1,2,3,5,6]);
solve satisfy;
For that example, the decision variable is an array, and the predicate p is true if a mapping exists such that the values of the arrays are mapped together. One or more elements of the array can also be 0, used here as a wildcard.
[1,2,3,4,0] is an acceptable solution.
[0,0,0,0,0] is not acceptable: it matches anything, and the solution must not match [1,2,3,5,6].
[1,2,3,4,7] is not acceptable: it doesn't match anything (as there is no 7 in the parameter arrays).
Thanks in advance! =)
Edit: Added non-acceptable solutions
It is probably good to note that MiniZinc's limitation is not coincidental. When the creation of a free variable is negated, then rather than finding a valid assignment for the variable, the model would have to prove that no such valid assignment exists. This is a much harder problem, and it would bring MiniZinc into the field of quantified constraint programming. The only general solution (that still produces the same flattened constraint model) would be to iterate over all possible values of each variable and enforce the negated constraints. Since the number of possibilities quickly explodes and the chance of getting a good model is small, MiniZinc does not do this automatically and throws this error instead.
This technique would work in your case as well: in the not_matches version of your predicate, you could iterate over all possible permutations (the possible mappings) and enforce that none of them is a correct (partial) mapping. This would be a correct way to enforce the constraint, but it would quickly explode. I believe, however, that there is a different way to enforce this constraint that will work better.
My idea stems from the fact that, although the most natural way to describe a permutation from one array to another is to actually create the assignment from the first to the second, when dealing with discrete variables you can instead enforce that each array contains exactly the same number of occurrences of each possible value. As such, a predicate that enforces that X is a permutation of Y might be written as:
predicate is_perm(array [int] of var $$E: X, array [int] of var $$E: Y) =
  let {
    array [int] of int: vals = [i | i in (dom_array(X) union dom_array(Y))]
  } in global_cardinality(X, vals) = global_cardinality(Y, vals);
Notably this predicate can be negated because it doesn't contain any free variables. All new variables (the resulting values of global_cardinality) are functionally defined. When negated, only the relation = has to be changed to !=.
In your model we are not considering full permutations but rather partial permutations, with a dummy value (0) for the unmatched positions. As such, the p predicate might also be written:
predicate p(array [int] of var 0..10: X, array [int] of var 1..10: Y) =
  let {
    set of int: vals = lb_array(Y)..ub_array(Y); % must not include the dummy value
    array [vals] of var int: countY = global_cardinality(Y, [i | i in vals]);
    array [vals] of var int: countX = global_cardinality(X, [i | i in vals]);
  } in forall (i in vals) (countX[i] <= countY[i]);
Again this predicate does not contain any free variables and can be negated: in this case, the forall is changed into an exists with a negated body, as sketched below.
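For illustration, a hand-negated version might look like the following sketch (not_p is a hypothetical name mirroring p with the comparison flipped; it assumes globals.mzn is included, as in the question's model):

predicate not_p(array [int] of var 0..10: X, array [int] of var 1..10: Y) =
  let {
    array [int] of int: vals = [i | i in lb_array(Y)..ub_array(Y)];
    array [int] of var int: countY = global_cardinality(Y, vals);
    array [int] of var int: countX = global_cardinality(X, vals);
  } in exists (i in index_set(vals)) (countX[i] > countY[i]);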
There are a few things that we can still do to optimise p for this use case. First, it seems that global_cardinality is only defined for variables, but since Y is guaranteed to be par, we can rewrite it and have the correct counts computed during MiniZinc's compilation. Second, lb_array(Y)..ub_array(Y) gives the tightest possible set, but in your example it means that slightly different versions of the global cardinality function are evaluated for each call, which could otherwise have been shared. Using the declared domain of Y instead allows common subexpression elimination (CSE) to reuse the results:
predicate p(array [1..5] of var 0..10: X, array [1..5] of 1..10: Y) =
  let {
    % CHANGE: use declared values of Y to ensure CSE will reuse `global_cardinality` result values
    set of int: vals = 1..10; % do not include the dummy value
    % CHANGE: parameter evaluation of global_cardinality
    array [vals] of int: countY = [count(j in index_set(Y)) (i = Y[j]) | i in vals];
    array [vals] of var int: countX = global_cardinality(X, [i | i in 1..10]);
  } in forall (i in vals) (countX[i] <= countY[i]);
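As a usage sketch, assuming this optimised p is the version in scope, the constraints from the question can then be posted directly, including the negated one:

array [1..5] of var 0..10: arr;

constraint p(arr, [1, 2, 3, 4, 5]);
constraint p(arr, [1, 2, 3, 4, 6]);
% negation now flattens: p introduces no free variables
constraint not p(arr, [1, 2, 3, 5, 6]);

solve satisfy;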
Regarding the example: one approach might be to rewrite the not p(...) constraint into a specific not_p(...) constraint, but I'm not sure how that would be formulated.
Here's an example but it's probably not correct:
predicate not_p(array [1..5] of var 0..10: arr1, array [1..5] of 1..10: arr2) =
  let {
    array [1..5] of var 1..5: mapping;
    constraint all_different(mapping);
  } in
  exists (i in 1..5) (
    arr1[i] != 0
    /\
    arr1[i] != arr2[mapping[i]]
  );
This gives 500 solutions, such as
arr = [1, 0, 0, 0, 0];
----------
arr = [2, 0, 0, 0, 0];
----------
arr = [3, 0, 0, 0, 0];
...
----------
arr = [2, 0, 0, 3, 4];
----------
arr = [2, 0, 1, 3, 4];
----------
arr = [2, 1, 0, 3, 4];
Update: I added not before the exists loop.

List comprehension in Haskell with let and show, what is it for?

I'm studying Project Euler solutions and this is the solution of problem 4, which asks to
Find the largest palindrome made from the product of two 3-digit numbers
problem_4 =
  maximum [x | y <- [100..999], z <- [y..999], let x = y * z, let s = show x, s == reverse s]
I understand that this code creates a list such that x is the product of all possible y and z.
However, I'm having a problem understanding what s does here. It looks like everything after | is going to be executed every time a new element from this list is needed, right?
I don't think I understand what's happening here. Shouldn't everything to the right of | be constraints?
A list comprehension is a rather thin wrapper around a do expression:
import Control.Monad (guard)

problem_4 = maximum $ do
  y <- [100..999]
  z <- [y..999]
  let x = y * z
  let s = show x
  guard $ s == reverse s
  return x
Most pieces translate directly; pieces that aren't iterators (<-) or let expressions are treated as arguments to the guard function found in Control.Monad. The effect of guard is to short-circuit the evaluation; for the list monad, this means not executing return x for the particular value of x that led to the false argument.
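To see concretely how guard short-circuits in the list monad, here is a sketch that reimplements it at the list type (the names guard' and evens are illustrative; the real guard from Control.Monad works for any Alternative):

-- guard at the list type: a False condition produces the empty list,
-- so nothing downstream of it is ever returned.
guard' :: Bool -> [()]
guard' True  = [()]
guard' False = []

-- evens == [2,4]
evens :: [Int]
evens = [1..4] >>= \x -> guard' (even x) >> return x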
I don't think I understand what's happening here. Shouldn't everything to the right of | be constraints?
No, at the right part you see a comma-separated (,) list of "parts", and every part is one of the following three:
a "generator" of the form somevar <- somelist;
a let statement, which can for instance introduce a variable that stores a subresult; and
boolean expressions that act like a filter.
So it is not some sort of "constraint programming" where one can simply list some constraints and hope that Haskell figures it out. (In fact, personally, that is the difference between a "programming language" and a "specification language": in a programming language you control how the data flows; in a specification language, that is handled by a system that reads your specifications.)
Basically an iterator can be compared to a "foreach" loop in many imperative programming languages. A "let" statement can be seen as introducing a temporary variable (but note that in Haskell you do not assign variables, you declare them, so you cannot reassign values). The filter can be seen as an if statement.
So the list comprehension would be equivalent to something in Python like:
for y in range(100, 1000):
for z in range(y, 1000):
x = y * z
s = str(x)
if x == x[::-1]:
yield x
We thus first iterate over two ranges in a nested way, then we declare x to be the product of y and z. With let s = show x we convert a number (for example 15129) to its string counterpart (for example "15129"). Finally, we use s == reverse s to reverse the string and check whether it is equal to the original string.
Note that there are more efficient ways to test Palindromes, especially for multiplications of two numbers.

Is "set" the default multiplicity?

Are these two equivalent:
r: A -> B
r: A set -> set B
That is, is set the default multiplicity?
If yes, then I will quibble with the definition of the arrow operator in the Software Abstractions book. The book says on page 55:
The arrow product (or just product) p->q of two relations p and q is the relation you get by taking every combination of a tuple from p and a tuple from q and concatenating them.
I interpret that definition to mean the only valid instance for p->q is one that has every possible combination of tuples from p with tuples from q. But that's not right (I think). Any instance containing mappings between p and q is valid. For example, on page 56 is this example,
Name = {(N0), (N1)}
Addr = {(D0), (D1)}
The book says this is a valid relation for Name->Addr
{(N0, D0), (N0, D1), (N1, D0), (N1, D1)}
But that's not the only valid relation, right? For example, this is a valid relation:
{(N0, D0), (N1, D1)}
Is that right?
The declaration r : A->B means r is a subset of A->B. The expression A->B has just one value, which is the cross product of A and B. The declaration results in a set of possible values for r, which would include both the example given in the book that you cite, and the example that you ask about.
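To see this in the Alloy Analyzer, a small sketch (the sig and field names are hypothetical, echoing the book's example) that asks for an instance where r is a non-empty proper subset of the product, such as {(N0, D0), (N1, D1)}:

sig Name {}
sig Addr {}
one sig Book { r: Name -> Addr }

// ask for an r that is neither empty nor the full product
run { some Book.r and Book.r != Name -> Addr } for exactly 2 Name, exactly 2 Addr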

Letter substitutions termination

Given:
A character string S of length l containing only characters from 'a' to 'z'
A set of ordered substitution rules R (in the form X->Y) where X, Y are single letters from 'a' to 'z' (e.g., 'a'->'e' could be a valid rule but 'ce'->'abc' would never be a valid rule)
When a rule r in R is applied to S, all letters of S which are equal to the left side of the rule r are replaced by the letter on the right side of r. If the rule r causes any replacement in S, r is called a triggered rule.
Flowchart (Algorithm):
(1) Apply all the rules in R, one after another (following the order of the rules in R), on S.
(2) While (there exists any 'triggered rule' DURING (1) ) : repeat (1)
(3) Terminate
The question is: is there any way to determine whether, with a given string S and set R, the algorithm will terminate or not (run forever)?
Example 1 (manually executed):
S = 'abcdef' R = { 'a'->'b' , 'b' -> 'c' }
(the rule order is the order of appearance, from left to right)
After running the algorithm on S and R:
(1.1): 'abcdef' --> 'bbcdef' --> 'cccdef'
(2.1): repeat (1) because there are 2 replacements during the (1.1)
(1.2): 'cccdef'
(2.2): continue to (3) because there is no replacement during the (1.2)
(3) : terminate the algorithm
=> The algorithm terminates with the given S and R.
Example 2:
S = 'abcdef' R = { 'a'->'b' , 'b' -> 'a' }
(the rule order is the order of appearance, from left to right)
After running the algorithm on S and R:
(1.1): 'abcdef' --> 'bbcdef' --> 'aacdef'
(2.1): repeat (1) because there are 2 replacements during (1.1)
(1.2): 'aacdef' --> 'bbcdef' --> 'aacdef'
(2.2): repeat (1) because there are 2 replacements during (1.2)
(1.3): ...... and so on, just like (1.2), forever....
The step (3) (terminate) is never reached.
=> The algorithm does not terminate with the given S and R.
I worked on this and found no efficient algorithm for the question "does the algorithm halt?".
My first idea was to "find cycles" of letters occurring in triggered rules, but the number of rules may be too large for this idea to be practical.
The second was to propose a "threshold" for the number of repeats; if the threshold is exceeded, we conclude the algorithm will not terminate. The threshold could be chosen arbitrarily (as long as it is big enough), but this approach is not really compelling.
I am wondering whether there is an upper bound for the "threshold" which ensures that we always get the right answer. I came up with threshold = 26, where 26 is the number of letters from 'a' to 'z', but I can't prove whether that is true (or not). (I hope there is something like the Bellman-Ford algorithm, which detects a negative cycle in a fixed number of steps.)
How about you? Please help me find the answer (this is not homework).
Thank you for reading.
One simple way to think about solving this is to consider a string of length 1 and see whether the process can loop for any given starting letter. Since the string's length never changes, and a rule applies to each character of S independently, it suffices to consider just a string of length 1.
Now, start with a state diagram with 26 states, one for each letter of the alphabet. For your state transitions, consider this process:
Apply the transitions from R one at a time, in order, until you reach the end of R. If, from a particular state (letter), you never move to a new letter (no rule ever triggers), that letter is terminal: once a character reaches it, it stays there and that position terminates. Otherwise, after applying the entire sequence of R, you end up with some letter (possibly the one you started from); this is your new state.
Note that all state transitions are deterministic because we apply the entire sequence of R, not just the individual transitions. If we applied the individual transitions, we might get confused, because we might have a -> b, b->a, a->c. When looking at the individual operations, we might think there are two possible transitions from a (either to b or to c), but really, considering the entire sequence, we see definitively that a transitions to c.
You will be done creating your state diagram after considering the next state of each starting letter. Creating the entire state diagram in this manner requires 26 * |R| operations. If the state diagram contains a loop, then S fails to halt whenever it contains a letter in the loop (or a letter whose path leads into the loop); otherwise it will halt.
Alternatively, you can simply run the process and treat 26 iterations through the entire sequence of R as the threshold: a letter's trajectory must either settle or start repeating within 26 full passes.
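As a sketch of that idea (the function names passOne, halts, and terminates are purely illustrative), in Haskell: since every character of S evolves independently, we can iterate the full rule sequence on each letter and detect whether it settles on a letter that triggers no rule, or revisits a letter while rules still trigger.

import qualified Data.Set as Set

-- Apply the ordered rules once to a single letter,
-- recording whether any rule triggered a replacement.
passOne :: [(Char, Char)] -> Char -> (Char, Bool)
passOne rules c0 = foldl apply (c0, False) rules
  where
    apply (c, trig) (lhs, rhs)
      | c == lhs  = (rhs, True)
      | otherwise = (c, trig)

-- A letter halts iff repeated passes reach a letter on which
-- no rule triggers, before revisiting any earlier letter.
halts :: [(Char, Char)] -> Char -> Bool
halts rules = go Set.empty
  where
    go seen c
      | not triggered       = True   -- a quiet pass: this position is done
      | c `Set.member` seen = False  -- deterministic cycle: triggers forever
      | otherwise           = go (Set.insert c seen) c'
      where (c', triggered) = passOne rules c

-- The whole algorithm terminates iff every letter of S halts;
-- at most 26 passes per letter are ever needed.
terminates :: [(Char, Char)] -> String -> Bool
terminates rules s = all (halts rules) s

On the examples above, terminates [('a','b'),('b','c')] "abcdef" is True, while terminates [('a','b'),('b','a')] "abcdef" is False.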

Haskell and Lambda-Calculus: Implementing Alpha-Congruence (Alpha-Equivalence)

I am implementing an impure untyped lambda-calculus interpreter in Haskell.
I'm presently stuck on implementing "alpha-congruence" (also called "alpha-equivalence" or "alpha-equality" in some textbooks). I want to be able to check whether two lambda-expressions are equal or not equal to each other. For example, if I enter the following expression into the interpreter it should yield True (\ is used to indicate the lambda symbol):
>\x.x == \y.y
True
The problem is understanding whether the following lambda-expressions are considered alpha-equivalent or not:
>\x.xy == \y.yx
???
>\x.yxy == \z.wzw
???
In the case of \x.xy == \y.yx I would guess that the answer is True. This is because \x.xy => \z.zy and \y.yx => \z.zy and the right-hand sides of both are equal (where the symbol => is used to denote alpha-reduction).
In the case of \x.yxy == \z.wzw I would likewise guess that the answer is True. This is because \x.yxy => \a.yay and \z.wzw => \a.waw, which (I think) are equal.
The trouble is that all of my textbooks' definitions state that only the names of the bound variables need to be changed for two lambda-expressions to be considered equal. They say nothing about the free variables in an expression needing to be renamed uniformly as well. So even though y and w are both in their correct places in the lambda-expressions, how would the program "know" that the first y represents the first w and the second y represents the second w? I would need to be consistent about this in an implementation.
In short, how would I go about implementing an error-free version of a function isAlphaCongruent? What are the exact rules that I need to follow in order for this to work?
I would prefer to do this without using de Bruijn indices.
You are misunderstanding: different free variables are not alpha equivalent. So y /= x, and \w.wy /= \w.wx, and \x.xy /= \y.yx. Similarly, \x.yxy /= \z.wzw because y /= w.
Your book says nothing about free variables being allowed to be uniformly renamed because they are not allowed to be uniformly renamed.
(Think of it this way: if I haven't yet told you the definition of not and id, would you expect \x. not x and \x. id x to be equivalent? I sure hope not!)
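For concreteness, here is a minimal sketch of isAlphaCongruent implementing exactly that rule, without de Bruijn indices (the Term type and names are illustrative, not taken from the question's code): bound variables are paired up in an environment as we descend, while free variables must be literally identical.

data Term = Var String | App Term Term | Lam String Term

isAlphaCongruent :: Term -> Term -> Bool
isAlphaCongruent = go []
  where
    -- env pairs each bound variable of the left term with its
    -- counterpart in the right term, innermost binder first.
    go env (Var x) (Var y)     = vars env
      where
        vars []                = x == y   -- both free: names must coincide
        vars ((a, b) : rest)
          | a == x || b == y   = a == x && b == y  -- innermost binding must match on both sides
          | otherwise          = vars rest
    go env (App f a) (App g b) = go env f g && go env a b
    go env (Lam x s) (Lam y t) = go ((x, y) : env) s t
    go _   _         _         = False

With this, isAlphaCongruent (Lam "x" (Var "x")) (Lam "y" (Var "y")) is True, while \x.xy and \y.yx come out unequal, as required: the free y on one side fails the literal-equality check against the free x on the other.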
