NuSMV - AND model - model-checking

I'm trying to write the following model in NuSMV. In other words, z becomes bad only when x AND y are both in the bad state. This is the code I've written:
MODULE singleton
    VAR state : {good, bad};
    INIT state = good
    TRANS (state = good) -> next(state) = bad
    TRANS (state = bad) -> next(state) = bad

MODULE effect(cond)
    VAR state : {good, bad};
    ASSIGN
        init(state) := good;
        next(state) := case
            (state = bad)         : bad;
            (state = good & cond) : bad;
            (!cond)               : good;
            TRUE                  : state;
        esac;

MODULE main
    VAR x : singleton;
    VAR y : singleton;
    VAR z : effect((x.state = bad) & (y.state = bad));
But I got only these reachable states
NuSMV > print_reachable_states -v
######################################################################
system diameter: 3
reachable states: 3 (2^1.58496) out of 8 (2^3)
------- State 1 ------
x.state = good
y.state = good
z.state = good
------- State 2 ------
x.state = bad
y.state = bad
z.state = bad
------- State 3 ------
x.state = bad
y.state = bad
z.state = good
-------------------------
######################################################################
How can I modify my code so that the following states also become reachable?

x.state = good
y.state = bad
z.state = good

x.state = bad
y.state = good
z.state = good
In addition, I'm not sure whether I need to add the red arrow drawn in the model picture: if x and y are in state bad, I want z to become bad too, for sure, sooner or later.
Thank you so much for helping!

The states
x.state = good
y.state = bad
z.state = good
x.state = bad
y.state = good
z.state = good
are not reachable because every sub-module of main performs a transition at the same time as the others, and because you force a deterministic transition for your state variables; that is, in your model both x and y change from good to bad at the same time. Moreover, in contrast with your nice picture, your smv code does not allow any self-loop except the one on the final state.
To fix your model, you only need to state that, when x (resp. y) is good, next(x) (resp. next(y)) may be either good or bad, without forcing either decision, e.g.:
MODULE singleton
    VAR
        state : { good, bad };
    ASSIGN
        init(state) := good;
        next(state) := case
            state = good : { good, bad };
            TRUE         : bad;
        esac;

MODULE effect(cond)
    VAR
        state : { good, bad };
    ASSIGN
        init(state) := good;
        next(state) := case
            (state = bad | cond) : bad;
            TRUE                 : state;
        esac;

MODULE main
    VAR
        x : singleton;
        y : singleton;
        z : effect((x.state = bad) & (y.state = bad));
Note: I also simplified the rules of module effect, though this was not strictly necessary.
You can test the model as follows:
nuXmv > reset; read_model -i test.smv ; go; print_reachable_states -v
######################################################################
system diameter: 3
reachable states: 5 (2^2.32193) out of 8 (2^3)
------- State 1 ------
x.state = good
y.state = bad
z.state = good
------- State 2 ------
x.state = good
y.state = good
z.state = good
------- State 3 ------
x.state = bad
y.state = good
z.state = good
------- State 4 ------
x.state = bad
y.state = bad
z.state = bad
------- State 5 ------
x.state = bad
y.state = bad
z.state = good
-------------------------
######################################################################
Regarding your second question, the code sample I provided you guarantees the property you want to verify:
nuXmv > check_ltlspec -p "G ((x.state = bad & y.state = bad) -> F z.state = bad)"
-- specification G ((x.state = bad & y.state = bad) -> F z.state = bad) is true
This is obviously the case because the self-loop outlined by the red edge in your picture is not present. If you think about it, that transition would allow for at least one execution in which the current state remains equal to
x.state = bad
y.state = bad
z.state = good
indefinitely, and this would be a counter-example to your specification.
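For illustration only, here is a minimal sketch (my own variant, not part of the model above) of how that red self-loop could be encoded; with this version of effect, the LTL check above would be expected to fail, because z may keep choosing to stay good forever:

-- hypothetical variant of effect encoding the red self-loop:
-- when cond holds, z may either turn bad or stay good
MODULE effect(cond)
    VAR
        state : { good, bad };
    ASSIGN
        init(state) := good;
        next(state) := case
            state = bad : bad;
            cond        : { good, bad };  -- non-deterministic choice: the red edge
            TRUE        : state;
        esac;

Running check_ltlspec on that variant should produce a lasso-shaped counter-example that loops through x.state = bad, y.state = bad, z.state = good forever.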
EDIT:
You can also fix the code by simply writing this:
MODULE singleton
VAR state: {good, bad};
INIT state = good
TRANS (state = bad) -> next(state) = bad
Removing the line TRANS (state = good) -> next(state) = bad leaves x and y unconstrained when state = good, which means they can non-deterministically remain good or become bad. This is completely equivalent to the code I provided, although less clear at first glance, since it hides the non-determinism under the hood instead of making it explicit.
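If you want to double-check either fix interactively, reachability of the previously missing states can also be queried with CTL formulas (a sketch; both should be reported true once the model above has been loaded and go has been run):

check_ctlspec -p "EF (x.state = good & y.state = bad & z.state = good)"
check_ctlspec -p "EF (x.state = bad & y.state = good & z.state = good)"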

Related

inconsistent behavior of equivalence when prevent overflows is set to 'yes'

The following example shows two checks that appear equivalent, yet the second finds a counterexample while the first does not. When setting 'prevent overflow' to 'No', both return the same result:
sig A {
x : Int,
y : Int
}
pred f1[a:A] {
y = x ++ (a -> ((a.x).add[1]))
}
pred f2[a:A] {
a.y = (a.x).add[1]
y = x ++ (a -> a.y)
}
check {
all a : A | {
f1[a] => f2[a]
f2[a] => f1[a]
}
}
check {
all a:A | {
f1[a] <=> f2[a]
}
}
I am using Alloy 4.2_2015-02_22 on Ubuntu Linux with sat4j.
I am sorry not to be able to offer a more useful answer than the following. With luck, someone else will do better.
It is possible, I think, that your model exhibits a bug in Alloy. But perhaps instead the model exhibits a counter-intuitive consequence of the semantics of integers in Alloy when the Prevent Overflow flag is set. Those semantics are described in the paper Preventing arithmetic overflows in Alloy by Aleksandar Milicevic and Daniel Jackson. You may have an easier time following the details than I am having.
The following expressions seem to suggest (a) that the unintuitive results have to do with the fact that the law of excluded middle does not hold when negation is applied to undefined values, and (b) that in the case where a.x = 7 (or whatever the maximum value of Int is, in a particular scope), the predicates f1 and f2 behave differently:
pred xm1a[ a : A] { (f1[a] or not(f1[a])) }
pred xm2a[ a : A] { (f2[a] or not(f2[a])) }
pred xm1b[ a : A] { (not (f1[a] and not(f1[a]))) }
pred xm2b[ a : A] { (not (f2[a] and not(f2[a]))) }
// some sensible assertions: no counterexamples
check x1 { all a : A | a.x = 7 implies xm1a[a] }
check x2 { all a : A | a.x = 7 implies xm2a[a] }
check x3 { all a : A | a.x = 7 implies xm1b[a] }
check x4 { all a : A | a.x = 7 implies xm2b[a] }
// some odd assertions: counterexamples for y2 and y4
check y1 { all a : A | a.x = 7 implies not xm1a[a] }
check y2 { all a : A | a.x = 7 implies not xm2a[a] }
check y3 { all a : A | a.x = 7 implies not xm1b[a] }
check y4 { all a : A | a.x = 7 implies not xm2b[a] }
It's not clear (to me) whether the relevant difference between f1 and f2 is that f2 refers explicitly to a.y, or that f2 has two clauses with an implicit and between them.
If the arithmetic relation between a.x and a.y is important to the model, then figuring out exactly how overflow cases are handled will be essential. If all that matters is that a.x != a.y and y = x ++ (a -> a.y), then weakening the condition will have the nice side effect that the reader need not understand Alloy's overflow semantics. (I expect you realize that already; I mention it for the benefit of later readers.)
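As a minimal sketch of that weakening (the predicate name f3 is mine, not part of the original model), one could avoid the arithmetic entirely:

// hypothetical weakened predicate: no arithmetic, so Alloy's overflow
// semantics never come into play
pred f3[a : A] {
    a.y != a.x
    y = x ++ (a -> a.y)
}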

Use topological sort with alloy 4.2

I'm trying to use topological sort to find two different sequential schedules that follow their prereqs. When I execute the code no instances are found and I'm not sure why. Here's my code:
open util/relation
abstract sig Course {
prereq: set Course, -- c->d in prereq if c is a prerequisite of d
s1, s2: set Course -- two sequential course schedules
}
one sig cs1121, cs1122, cs1141, cs2311, cs2321,
cs3000, cs3141, cs3311, cs3331, cs3411, cs3421, cs3425 extends Course { }
fact {
no prereq.cs1121
prereq.cs1122 = cs1121
prereq.cs1141 = cs1122
prereq.cs2311 = cs1121
prereq.cs2321 = cs1122
prereq.cs3000 = cs3141
prereq.cs3141 = cs2311
prereq.cs3141 = cs2321
prereq.cs3311 = cs2311
prereq.cs3331 = cs1141
prereq.cs3331 = cs2311
prereq.cs3331 = cs2321
prereq.cs3411 = cs1141
prereq.cs3411 = cs3421
prereq.cs3421 = cs1122
prereq.cs3425 = cs2311
prereq.cs3425 = cs2321
}
-- is the given schedule a topological sort of the prereq relation?
pred topoSort [schedule: Course->Course] {
(all c: Course | lone c.schedule and lone schedule.c) -- no branching in the schedule
and totalOrder[*schedule, Course] -- and it's a total order
and prereq in ^schedule -- and it obeys the prerequisite graph
}
pred show {
s1.irreflexive and s2.irreflexive -- no retaking courses!
s1.topoSort and s2.topoSort -- both schedules are topological sorts of the prereq relation
s1 != s2 -- the schedules are different
}
run show
Switch the solver (under the Options menu) to MiniSAT with Unsat Core, then look at the core. You'll see it highlights
prereq.cs3141 = cs2311
prereq.cs3141 = cs2321
which contradict each other: each of these = facts pins prereq.cs3141 to a different single course (the same problem recurs for cs3331, cs3411 and cs3425), so the facts alone are unsatisfiable and no instance can be found.
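One plausible fix, assuming the intent is that a course can have several prerequisites, is to state each course's prerequisite set once, using a union, instead of repeating = facts (a sketch showing only the affected constraints; the single-prerequisite facts stay as they are):

fact {
    -- sketch: the previously conflicting constraints, rewritten with unions
    prereq.cs3141 = cs2311 + cs2321
    prereq.cs3331 = cs1141 + cs2311 + cs2321
    prereq.cs3411 = cs1141 + cs3421
    prereq.cs3425 = cs2311 + cs2321
}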

how to create a Kripke structure in NuSMV?

I have to create a Kripke structure in NuSMV and check some properties on it.
Can anybody help me? The structure and the properties (LTL, CTL and CTL*) are in the picture below.
Here are the structure and the properties:
http://cl.ly/image/1x0b1v3E0P0D/Screen%20Shot%202014-10-16%20at%2016.52.34.png
I found simpler and seemingly more reliable NuSMV code for your Kripke structure. Thanks to dejvuth for his answer to my question. The code is as follows:
MODULE main
    VAR
        state : {s0, s1, s2, s3, s4};
    ASSIGN
        init(state) := s0;
        next(state) := case
            state = s0 : {s1, s2};
            state = s1 : {s1, s2};
            state = s2 : {s1, s2, s3};
            state = s3 : {s1, s4};
            state = s4 : {s4};
        esac;
    DEFINE
        p := state = s1 | state = s2 | state = s3 | state = s4;
        q := state = s1 | state = s2;
        r := state = s3;
    SPEC
        EG p;
    SPEC
        AG p;
    SPEC
        EF (AG p);
As far as I know, NuSMV handles only LTL and CTL formulas (see NuSMV in Wikipedia). The formulas in problems 1-3 are CTL formulas, hence they can be model-checked by NuSMV. However, the formulas in problems 4 & 5 are CTL* formulas, and thus we cannot straightforwardly use them as input to NuSMV. You also need to understand that the set of all CTL* formulas is a proper superset of the union of all LTL and CTL formulas. This implies that some CTL* formulas have no equivalent LTL or CTL formula (see CTL* in Wikipedia). Your Kripke structure can be defined in NuSMV with the following code:
MODULE main
    VAR
        p : boolean;
        q : boolean;
        r : boolean;
        state : {s0, s1, s2, s3, s4};
    ASSIGN
        init(state) := s0;
        next(state) := case
            state = s0 : {s1, s2};
            state = s1 : {s1, s2};
            state = s2 : {s1, s2, s3};
            state = s3 : {s1, s4};
            state = s4 : {s4};
            TRUE       : state;
        esac;

        init(p) := FALSE;
        init(q) := FALSE;
        init(r) := FALSE;

        next(p) := case
            state = s1 | state = s2 | state = s3 | state = s4 : TRUE;
            TRUE : p;
        esac;
        next(q) := case
            state = s1 | state = s2 : TRUE;
            state = s3 | state = s4 : FALSE;
            TRUE : q;
        esac;
        next(r) := case
            state = s3 : TRUE;
            state = s1 | state = s2 | state = s4 : FALSE;
            TRUE : r;
        esac;

    SPEC
        EG p;
    SPEC
        AG p;
    SPEC
        EF (AG p);
Of course, there is another way to define your Kripke structure in NuSMV, but I think this is one of the easiest. (Anyway, thanks for helping me with my problem).
As for the formulas in problem 4 & 5, here is my answer.
The formula AF [p U EG (p -> q)] is of the form AF [\phi], where \phi is the path formula p U EG (p -> q). Since a path formula \phi is satisfied in a Kripke model if it holds along every path starting at s0, we translate AF [p U EG (p -> q)] into the CTL formula AF A[p U EG (p -> q)].
By a similar argument, we translate EG [((p & q) | r) U (r U AG p)] into EG A[((p & q) | r) U A[r U AG p]].
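Assuming these translations are what you want to check, the resulting CTL formulas could be added to the model above as ordinary SPEC declarations, for example:

SPEC
    AF ( A [ p U EG (p -> q) ] );
SPEC
    EG ( A [ ((p & q) | r) U A [ r U AG p ] ] );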

Memory Issue in Alloy

I am new to Alloy. I am trying to find a solution for a model with 512 states, but it runs out of memory. I set the memory and stack to their maximum levels, but it is not enough. Is there any other way to increase the memory Alloy uses?
I appreciate your time and help.
Thanks a lot,
Fathiyeh
Hard to know where to start. Looks as if you're writing an Alloy model as if you're expecting it to be a model checker. But the point of Alloy is to allow you to analyze systems whose states have complex structure, with constraints written in a relational logic. You won't get very far doing a direct encoding of a low-level model into Alloy; for that kind of thing you'd do better to use a model checker.
module threeprocesses
abstract sig boolean {
}
one sig true,false extends boolean{}
sig state {
e1: boolean,
t1: boolean,
ready1: boolean,
e2: boolean,
t2: boolean,
ready2: boolean,
e3: boolean,
t3: boolean,
ready3: boolean
}
sig relation {
lambda : state -> one Int,
next1 : state -> state
}
pred LS (s : state) {
(((s.t1 =s.t3) and (s.t2 =s.t1) and (s.t3 =s.t2))
or ((s.t1 != s.t3) and (s.t2 !=s.t1) and (s.t3 =s.t2))
or ((s.t1 != s.t3) and (s.t2 =s.t1) and (s.t3 !=s.t2))) and
((s.e1 =s.e3) or (s.e2 !=s.e1) or (s.e3 !=s.e2))
}
pred show (r : relation) {
    all s : state |
        LS[s] implies LS[s.(r.next1)]
    all s : state |
        (not LS[s]) implies not (s = s.(r.next1))
    all s : state |
        (not LS[s]) implies (all s2 : s.(r.next1) | s2.(r.lambda) > s.(r.lambda))
    all s : state, s2 : state |
        ((s.t1 = s2.t1) and (s.e1 = s2.e1) and (s.ready1 = s2.ready1) and
         (s.e3 = s2.e3) and (s.t3 = s2.t3)) implies
            ((((s2.(r.next1)).ready1) = ((s.(r.next1)).ready1)) and
             (((s2.(r.next1)).e1) = ((s.(r.next1)).e1)) and
             (((s2.(r.next1)).t1) = ((s.(r.next1)).t1)))
    all s : state, s2 : state |
        ((s.t2 = s2.t2) and (s.e2 = s2.e2) and (s.ready2 = s2.ready2) and
         (s.e1 = s2.e1) and (s.t1 = s2.t1)) implies
            ((((s2.(r.next1)).ready2) = ((s.(r.next1)).ready2)) and
             (((s2.(r.next1)).e2) = ((s.(r.next1)).e2)) and
             (((s2.(r.next1)).t2) = ((s.(r.next1)).t2)))
    all s : state, s2 : state |
        ((s.t3 = s2.t3) and (s.e3 = s2.e3) and (s.ready3 = s2.ready3) and
         (s.e2 = s2.e2) and (s.t2 = s2.t2)) implies
            ((((s2.(r.next1)).ready3) = ((s.(r.next1)).ready3)) and
             (((s2.(r.next1)).e3) = ((s.(r.next1)).e3)) and
             (((s2.(r.next1)).t3) = ((s.(r.next1)).t3)))
    all s : state |
        (not ((s.e1 = ((s.(r.next1)).e1)) and (s.t1 = ((s.(r.next1)).t1)) and
              (s.ready1 = ((s.(r.next1)).ready1)))) implies
            (s.e1 = s.e3)
    all s : state |
        (not ((s.e2 = ((s.(r.next1)).e2)) and (s.t2 = ((s.(r.next1)).t2)) and
              (s.ready2 = ((s.(r.next1)).ready2)))) implies
            (not (s.e2 = s.e1))
    all s : state |
        (not ((s.e3 = ((s.(r.next1)).e3)) and (s.t3 = ((s.(r.next1)).t3)) and
              (s.ready3 = ((s.(r.next1)).ready3)))) implies
            (not (s.e3 = s.e2))
    all s : state, s2 : state |
        (s != s2) implies
            (not ((s.e1 = s2.e1) and (s.e2 = s2.e2) and (s.e3 = s2.e3) and
                  (s.t1 = s2.t1) and (s.t2 = s2.t2) and (s.t3 = s2.t3) and
                  (s.ready1 = s2.ready1) and (s.ready2 = s2.ready2) and (s.ready3 = s2.ready3)))
}
run show for 3 but 1 relation, exactly 512 state

Truly declarative language?

Does anyone know of a truly declarative language? The behavior I'm looking for is kind of what Excel does, where I can define variables and formulas, and have the formula's result change when the input changes (without having to set the answer again myself).
The behavior I'm looking for is best shown with this pseudo code:
X = 10 // define and assign two variables
Y = 20;
Z = X + Y // declare a formula that uses these two variables
X = 50 // change one of the input variables
?Z // asking for Z should now give 70 (50 + 20)
I've tried this in a lot of languages like F#, python, matlab etc, but every time I tried this they come up with 30 instead of 70. Which is correct from an imperative point of view, but I'm looking for a more declarative behavior if you know what I mean.
And this is just a very simple calculation. When things get more difficult it should handle stuff like recursion and memoization automagically.
The code below would obviously work in C#, but it's just so much code for the job; I'm looking for something a bit more to the point, without all that 'technical noise':
class BlaBla{
public int X {get;set;} // this used to be even worse before 3.0
public int Y {get;set;}
public int Z {get{return X + Y;}}
}
static void Main(){
BlaBla bla = new BlaBla();
bla.X = 10;
bla.Y = 20;
// can't define anything here
bla.X = 50; // bit pointless here but I'll do it anyway.
Console.WriteLine(bla.Z); // 70, hurray!
}
This just seems like so much code, curly braces and semicolons that add nothing.
Is there a language/application (apart from Excel) that does this? Maybe I'm not doing it right in the mentioned languages, or I've completely missed an app that does just this.
I prototyped a language/ application that does this (along with some other stuff) and am thinking of productizing it. I just can't believe it's not there yet. Don't want to waste my time.
Any Constraint Programming system will do that for you.
Examples of CP systems that have an associated language are ECLiPSe, SICSTUS Prolog / CP package, Comet, MiniZinc, ...
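As a rough illustration (a MiniZinc sketch of my own, not taken from any of these systems' tutorials), the relationship is stated once as a constraint; editing the value pinned for x and re-solving yields the new z:

% z is related to x and y by a constraint, not computed by an assignment
var 0..1000: x;
var 0..1000: y;
var 0..2000: z;
constraint z = x + y;
constraint x = 50;   % change this line and re-solve to get a new z
constraint y = 20;
solve satisfy;
output ["z = ", show(z), "\n"];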
It looks like you just want to make Z store a function instead of a value. In C#:
var X = 10; // define and assign two variables
var Y = 20;
Func<int> Z = () => X + Y; // declare a formula that uses these two variables
Console.WriteLine(Z());
X = 50; // change one of the input variables
Console.WriteLine(Z());
So the equivalent of your ?-prefix syntax is a ()-suffix, but otherwise it's identical. A lambda is a "formula" in your terminology.
Behind the scenes, the C# compiler builds almost exactly what you presented in your C# conceptual example: it makes X into a field in a compiler-generated class, and allocates an instance of that class when the code block is entered. So congratulations, you have re-discovered lambdas! :)
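A hand-written approximation of what the compiler generates might look like this (names are illustrative; the real class gets a compiler-mangled name):

using System;

// rough sketch of the compiler-generated closure class for the captured locals
class Closure
{
    public int X;
    public int Y;
    public int Evaluate() { return X + Y; }
}

class Demo
{
    static void Main()
    {
        var env = new Closure { X = 10, Y = 20 };
        Func<int> Z = env.Evaluate;
        Console.WriteLine(Z()); // 30
        env.X = 50;
        Console.WriteLine(Z()); // 70
    }
}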
In Mathematica, you can do this:
x = 10; (* assign 10 to the variable x *)
y = 20; (* assign 20 to the variable y *)
z := x + y; (* assign the expression x+y to the variable z *)
Print[z];
(* prints 30 *)
x = 50;
Print[z];
(* prints 70 *)
The operator := (SetDelayed) is different from = (Set). The former binds an unevaluated expression to a variable, the latter binds an evaluated expression.
Wanting to have two definitions of X is inherently imperative. In a truly declarative language you have a single definition of a variable in a single scope. The behavior you want from Excel corresponds to editing the program.
Have you seen Resolver One? It's like Excel with a real programming language behind it.
Here is Daniel's example in Python, since I noticed you said you tried it in Python.
x = 10
y = 10
z = lambda: x + y
# Output: 20
print z()
x = 20
# Output: 30
print z()
Two things you can look at are the cells lisp library, and the Modelica dynamic modelling language, both of which have relation/equation capabilities.
There is a Lisp library with this sort of behaviour:
http://common-lisp.net/project/cells/
JavaFX will do that for you if you use bind instead of = for Z
react is an OCaml FRP library. Unlike naive emulations with closures, it recalculates values only when needed:
Objective Caml version 3.11.2
# #use "topfind";;
# #require "react";;
# open React;;
# let (x,setx) = S.create 10;;
val x : int React.signal = <abstr>
val setx : int -> unit = <fun>
# let (y,sety) = S.create 20;;
val y : int React.signal = <abstr>
val sety : int -> unit = <fun>
# let z = S.Int.(+) x y;;
val z : int React.signal = <abstr>
# S.value z;;
- : int = 30
# setx 50;;
- : unit = ()
# S.value z;;
- : int = 70
You can do this in Tcl, somewhat. In tcl you can set a trace on a variable such that whenever it is accessed a procedure can be invoked. That procedure can recalculate the value on the fly.
Following is a working example that does more or less what you ask:
proc main {} {
set x 10
set y 20
define z {$x + $y}
puts "z (x=$x): $z"
set x 50
puts "z (x=$x): $z"
}
proc define {name formula} {
global cache
set cache($name) $formula
uplevel trace add variable $name read compute
}
proc compute {name _ op} {
global cache
upvar $name var
if {[info exists cache($name)]} {
set expr $cache($name)
} else {
set expr $var
}
set var [uplevel expr $expr]
}
main
Groovy and the magic of closures.
def (x, y) = [ 10, 20 ]
def z = { x + y }
assert 30 == z()
x = 50
assert 70 == z()
def f = { n -> n + 1 } // define another closure
def g = { x + f(x) } // ref that closure in another
assert 101 == g() // x=50, x + (x + 1)
f = { n -> n + 5 } // redefine f()
assert 105 == g() // x=50, x + (x + 5)
It's possible to add automagic memoization to functions too but it's a lot more complex than just one or two lines. http://blog.dinkla.net/?p=10
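For what it's worth, if you are on a newer Groovy (1.8 or later, an assumption about your setup), closures also ship with a built-in memoize(); a minimal, self-contained sketch:

// memoized recursive closure: sub-results are cached automatically
def fib
fib = { n -> n < 2 ? n : fib(n - 1) + fib(n - 2) }.memoize()
assert fib(30) == 832040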
In F#, a little verbosely:
let x = ref 10
let y = ref 20
let z () = !x + !y
z();;
y := 40
z();;
You can mimic it in Ruby:
x = 10
y = 20
z = lambda { x + y }
z.call # => 30
x = 50
z.call # => 70
Not quite the same as what you want, but pretty close.
Not sure how well metapost (1) would work for your application, but it is declarative.
Lua 5.1.4 Copyright (C) 1994-2008 Lua.org, PUC-Rio
x = 10
y = 20
z = function() return x + y; end
x = 50
= z()
70
It's not what you're looking for, but Hardware Description Languages are, by definition, "declarative".
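For instance, a continuous assignment in Verilog has exactly this always-up-to-date flavour (a minimal sketch):

// z continuously tracks x + y: whenever x or y changes, z is recomputed
module sum(input [7:0] x, input [7:0] y, output [7:0] z);
  assign z = x + y;
endmodule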
This F# code should do the trick. You can use lazy evaluation (System.Lazy object) to ensure your expression will be evaluated when actually needed, not sooner.
let mutable x = 10;
let y = 20;
let z = lazy (x + y);
x <- 30;
printf "%d" z.Value
