Memory Issue in Alloy

I am new to Alloy. I am trying to find a solution for a model with 512 states, but it runs out of memory. I set the memory and stack to their maximum levels, but it is not enough. Is there any other way to increase the memory Alloy uses?
I appreciate your time and help.
Thanks a lot,
Fathiyeh

Hard to know where to start. Looks as if you're writing an Alloy model as if you're expecting it to be a model checker. But the point of Alloy is to allow you to analyze systems whose states have complex structure, with constraints written in a relational logic. You won't get very far doing a direct encoding of a low-level model into Alloy; for that kind of thing you'd do better to use a model checker.
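(On the literal question of memory: the analyzer runs on the JVM, so the heap is governed by the usual JVM flags. If the GUI's memory setting tops out, you can launch the distribution jar yourself with a larger heap. A sketch; the jar name varies by release:)

  java -Xmx4g -jar alloy4.2.jar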

module threeprocesses

abstract sig boolean {}
one sig true, false extends boolean {}

sig state {
  e1: boolean,
  t1: boolean,
  ready1: boolean,
  e2: boolean,
  t2: boolean,
  ready2: boolean,
  e3: boolean,
  t3: boolean,
  ready3: boolean
}

sig relation {
  lambda : state -> one Int,
  next1 : state -> state
}

pred LS (s : state) {
  (((s.t1 = s.t3) and (s.t2 = s.t1) and (s.t3 = s.t2))
    or ((s.t1 != s.t3) and (s.t2 != s.t1) and (s.t3 = s.t2))
    or ((s.t1 != s.t3) and (s.t2 = s.t1) and (s.t3 != s.t2))) and
  ((s.e1 = s.e3) or (s.e2 != s.e1) or (s.e3 != s.e2))
}

pred show (r : relation) {
  all s : state |
    LS[s] implies LS[s.(r.next1)]
  all s : state |
    (not LS[s]) implies not (s = s.(r.next1))
  all s : state |
    (not LS[s]) implies (all s2 : s.(r.next1) | s2.(r.lambda) > s.(r.lambda))
  all s, s2 : state |
    ((s.t1 = s2.t1) and (s.e1 = s2.e1) and (s.ready1 = s2.ready1) and
     (s.e3 = s2.e3) and (s.t3 = s2.t3)) implies
      (((s2.(r.next1)).ready1 = (s.(r.next1)).ready1) and
       ((s2.(r.next1)).e1 = (s.(r.next1)).e1) and
       ((s2.(r.next1)).t1 = (s.(r.next1)).t1))
  all s, s2 : state |
    ((s.t2 = s2.t2) and (s.e2 = s2.e2) and (s.ready2 = s2.ready2) and
     (s.e1 = s2.e1) and (s.t1 = s2.t1)) implies
      (((s2.(r.next1)).ready2 = (s.(r.next1)).ready2) and
       ((s2.(r.next1)).e2 = (s.(r.next1)).e2) and
       ((s2.(r.next1)).t2 = (s.(r.next1)).t2))
  all s, s2 : state |
    ((s.t3 = s2.t3) and (s.e3 = s2.e3) and (s.ready3 = s2.ready3) and
     (s.e2 = s2.e2) and (s.t2 = s2.t2)) implies
      (((s2.(r.next1)).ready3 = (s.(r.next1)).ready3) and
       ((s2.(r.next1)).e3 = (s.(r.next1)).e3) and
       ((s2.(r.next1)).t3 = (s.(r.next1)).t3))
  all s : state |
    (not ((s.e1 = (s.(r.next1)).e1) and (s.t1 = (s.(r.next1)).t1) and
          (s.ready1 = (s.(r.next1)).ready1))) implies
      (s.e1 = s.e3)
  all s : state |
    (not ((s.e2 = (s.(r.next1)).e2) and (s.t2 = (s.(r.next1)).t2) and
          (s.ready2 = (s.(r.next1)).ready2))) implies
      (not (s.e2 = s.e1))
  all s : state |
    (not ((s.e3 = (s.(r.next1)).e3) and (s.t3 = (s.(r.next1)).t3) and
          (s.ready3 = (s.(r.next1)).ready3))) implies
      (not (s.e3 = s.e2))
  all s, s2 : state |
    (s != s2) implies (not ((s.e1 = s2.e1) and (s.e2 = s2.e2) and (s.e3 = s2.e3) and
      (s.t1 = s2.t1) and (s.t2 = s2.t2) and (s.t3 = s2.t3) and
      (s.ready1 = s2.ready1) and (s.ready2 = s2.ready2) and (s.ready3 = s2.ready3)))
}

run show for 3 but 1 relation, exactly 512 state

Related

When are constraints implicitly AND'ed versus when must constraints be explicitly AND'ed?

This signature contains two fields, each holding an integer:
sig Test {
  a: Int,
  b: Int
}
This predicate contains a series of constraints:
pred Show (t: Test) {
  t.a = 0
  t.b = 1
}
Those constraints are implicitly AND'ed together. So, that predicate is equivalent to this predicate:
pred Show (t: Test) {
  t.a = 0 and
  t.b = 1
}
This assertion contains a series of constraints followed by an implication operator:
assert ImplicationTest {
  all t: Test {
    t.a = 0
    t.b = 1 => plus[t.a, t.b] = t.b
  }
}
But in this case the constraints are not implicitly AND'ed together. If I want them AND'ed together, I must explicitly AND them:
assert ImplicationTest {
  all t: Test {
    t.a = 0 and
    t.b = 1 => plus[t.a, t.b] = t.b
  }
}
Why is this? Why is it that sometimes a series of constraints are implicitly AND'ed together, whereas other times I must explicitly AND the constraints?
I looked at the parser and, as far as I can see, it treats the left and right sides of a newline/space as parenthesized expressions:
expr exprs -> expr and exprs
Thus:
t.a = 0 t.b = 1 t.c = 2 => plus[t.a, t.b] = t.b
is equivalent to:
(t.a = 0) and ((t.b = 1) and ((t.c = 2) => plus[t.a, t.b] = t.b))
The following model seems to demonstrate that these expressions are equivalent:
sig Test {
  a: Int,
  b: Int,
  c: Int
}
pred simple (t: Test) {
  t.a = 0 t.b = 1 t.c = 2 => plus[t.a, t.b] = t.b
}
pred full (t: Test) {
  (t.a = 0) and ((t.b = 1) and (t.c = 2 => plus[t.a, t.b] = t.b))
}
assert Equivalent {
  all t : Test {
    simple[t] <=> full[t]
  }
}
check Equivalent for 10
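So, if the intent is for the whole preceding conjunction to be the antecedent, the grouping has to be made explicit. One way (parenthesizing the intended antecedent, so no precedence question arises at all):

assert ImplicationTest {
  all t: Test {
    (t.a = 0 and t.b = 1) => plus[t.a, t.b] = t.b
  }
}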

How to handle optional db step in slick 3?

I'm sure I'm simply facing a mental block with the functional model of Slick 3, but I cannot discern how to transactionally sequence an optional dependent db step in Slick 3. Specifically, I have a table with an optional (nullable) foreign key and I want it to be set to the ID of the inserted dependent record (if any, else null). That is, roughly:
if (x is non-null)
  start transaction
    id = insert x
    insert y(x = id)
  commit
else
  start transaction
    insert y(x = null)
  commit
Of course, I'd rather not have the big if around the choice. Dependencies without the Option[] seem (relatively) straightforward, but the option is throwing me.
Precise example code (sans imports) follows. In this example, the question is how to save both x (an A) and y (a B) in the same transaction, whether y is None or not. Saving Y itself seems straightforward enough, as every related C has a non-optional B reference, but addressing the optional reference in A is unclear (to me).
object test {
  implicit val db = Database.forURL("jdbc:h2:mem:DataTableTypesTest;DB_CLOSE_DELAY=-1", driver = "org.h2.Driver")

  /* Data model */
  case class A(id: Long, b: Option[Long], s: String)
  class As(tag: Tag) extends Table[A](tag, "As") {
    def id = column[Long]("ID", O.PrimaryKey, O.AutoInc)
    def b = column[Option[Long]]("B")
    def s = column[String]("S")
    def * = (id, b, s) <> (A.tupled, A.unapply)
  }
  val as = TableQuery[As]

  case class B(id: Long, s: String)
  class Bs(tag: Tag) extends Table[B](tag, "Bs") {
    def id = column[Long]("ID", O.PrimaryKey, O.AutoInc)
    def s = column[String]("S")
    def * = (id, s) <> (B.tupled, B.unapply)
  }
  val bs = TableQuery[Bs]

  case class C(id: Long, b: Long, s: String)
  class Cs(tag: Tag) extends Table[C](tag, "Cs") {
    def id = column[Long]("ID", O.PrimaryKey, O.AutoInc)
    def b = column[Long]("B")
    def s = column[String]("S")
    def * = (id, b, s) <> (C.tupled, C.unapply)
  }
  val cs = TableQuery[Cs]

  /* Object model */
  case class X(id: Long, s: String, y: Option[Y])
  case class Y(id: Long, s: String, z: Set[Z])
  case class Z(id: Long, s: String)

  /* Mappers */
  def xToA(x: X, bId: Option[Long]): A = A(x.id, bId, x.s)
  def yToB(y: Y): B = B(y.id, y.s)
  def zToC(z: Z, bId: Long): C = C(z.id, bId, z.s)

  /* Given */
  val example1 = X(0, "X1", Some(Y(0, "Y1", Set(Z(0, "Z11"), Z(0, "Z12")))))
  val example2 = X(0, "X2", Some(Y(0, "Y2", Set())))
  val example3 = X(0, "X3", None)

  Await.result(db.run((as.schema ++ bs.schema ++ cs.schema).create), 10.seconds)

  val examples = Seq(example1, example2, example3)
  for (example <- examples) {
    val saveY = for { y <- example.y } yield
      (for {
        id <- (bs returning bs.map(_.id)) += yToB(y)
        _ <- cs ++= y.z.map(zToC(_, id))
      } yield id).transactionally
    if (saveY.isDefined) Await.result(db.run(saveY.get), 10.seconds)
  }

  println(Await.result(db.run((for { a <- as } yield a).result), 10.seconds))
  println(Await.result(db.run((for { b <- bs } yield b).result), 10.seconds))
  println(Await.result(db.run((for { c <- cs } yield c).result), 10.seconds))
}
This is fairly straightforward; just use the monadic-ness of DBIO:
// Input B value; this is your `x` in the question.
val x: Option[B] = ??? // placeholder for the caller's input
// Assume `y` is fully initialized with a `None` `b` value.
val y: A = ??? // placeholder
// DBIO wrapping the newly-inserted ID, if `x` is set.
// (The ID columns in the question's schema are Long, so the IDs here are Long too.)
val maybeInsertX: DBIO[Option[Long]] = x match {
  case Some(xToInsert) =>
    // Insert and return the new ID.
    val newId: DBIO[Long] = bs.returning(bs.map(_.id)) += xToInsert
    // Map to the expected Option.
    newId.map(Some(_))
  case None =>
    // No x means no ID.
    DBIO.successful(None)
}
// Now perform your insert, copying in the newly-generated ID.
val insertA: DBIO[Int] = maybeInsertX.flatMap(bIdOption =>
  as += y.copy(b = bIdOption)
)
// Run transactionally.
db.run(insertA.transactionally)
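The same pipeline can also be written as a single for-comprehension, which some find easier to read. A sketch under the same assumptions as above (note that DBIO's map and flatMap need an implicit ExecutionContext in scope):

// Compose both steps into one DBIO action.
val insertBoth: DBIO[Int] = for {
  bIdOption <- maybeInsertX                   // the new B id, if x was set
  rowCount <- as += y.copy(b = bIdOption)     // insert the A row pointing at it
} yield rowCount

// Both inserts commit or roll back together.
db.run(insertBoth.transactionally)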

How to define Heap datastructure in Alloy (Homework)

As a homework assignment, I have to define a heap data structure in Alloy.
I have come up with these rules:
1. A node can have up to 1 father, left-son, right-son, left-brother and right-brother. It also has exactly 1 value and 1 level (as in how deep in the heap it is).
2. A node can have a right-son only if it has a left-son.
3. A node cannot be in the transitive closure over any of its relations (father, left-son, right-son, left-brother, right-brother).
4. The relations have to point to distinct nodes.
5. A node has a value, and every value must belong to a node.
6. A node's value must be less than its sons' values.
7. If a node has a left-son and no left-brother, then the rightmost brother of its father has a right-son.
8. A node is its left-brother's right-brother, and so on for all its relations.
9. A node's father's level is one less than the node's level.
10. If there is a node with a level two less, then all the nodes with that level must have both sons.
11. For any 2 nodes m, n that have the same level, m must be in the transitive closure over left-brother of n, or in the transitive closure over right-brother.
The question is twofold:
A) I am not sure whether these rules are sufficient, or whether there is a better way to solve this altogether. (I think I could resolve this all by having a node consist of an index and a value and transcribing the heap-in-array algorithm into Alloy, but that seems rather inelegant.)
B) I have trouble implementing some of these rules.
I have implemented rules 1, 2, 3, 4, 5, 6, 7, 8 and 9. At least I think I did, and the generated graph does not contradict my expectations.
I do not know how to implement the last 2.
Also, the code I have so far:
open util/ordering [Key] as KO
open util/ordering [Level] as LO

sig Key {}
sig Level {}

sig Node {
  key: one Key,
  level: one Level,
  father: lone Node,
  left_brother: lone Node,
  right_brother: lone Node,
  left_son: lone Node,
  right_son: lone Node
}

// There's exactly one root
fact {
  one n : Node | n.father = none
}

// Every key has to belong to some node
fact {
  all k : Key | some n : Node | n.key = k
}

// Rule 6: both sons' keys must come after the father's key in the key ordering
fact {
  all n : Node | (n.left_son != none && n.right_son != none) =>
    #(KO/nexts[n.key] & (n.left_son.key + n.right_son.key)) = 2
}

// Son's father's son shall be Son etc
fact {
  all n : Node | all m : Node | (n.left_son = m) => m.father = n
}
fact {
  all n : Node | all m : Node | (n.right_son = m) => m.father = n
}
fact {
  all n : Node | all m : Node | (m.father = n) => (n.left_son = m || n.right_son = m)
}

// Is this redundant?
fact {
  all n : Node | all m : Node | (n.left_brother = m) => (m.right_brother = n)
}
fact {
  all n : Node | all m : Node | (n.right_brother = m) => (m.left_brother = n)
}

// If a node has a right-son, it must have a left-son.
fact {
  all n : Node | (n.right_son != none) => (n.left_son != none)
}

// A node having a left son and a left brother means its left brother has a right son
fact {
  all n : Node | (n.left_son != none && n.left_brother != none) => (n.left_brother.right_son != none)
}

// A node's father must be one level higher.
fact {
  all n : Node | (n.father != none) => (LO/prev[n.level] = n.father.level)
}

// FIXME: this is wrong: there needs to be a difference of 2 levels, not just a single level.
fact {
  all n : Node | all m : Node | (LO/prevs[m.level] & n.level = n.level) => (n.left_son != none && n.right_son != none)
}

// TODO: If 2 nodes are in the same level, then they must be in left-brother* or right-brother* relation
// ????

// No node can be its own father
fact {
  all n : Node | n.father != n
}
// No node can be in the transitive closure over its ancestors
fact {
  no n : Node | n in n.^father
}

// No node can be its own brother, son, etc.
fact {
  all n : Node | n.left_brother != n
}
// Nor in its transitive closure
fact {
  no n : Node | n in n.^left_brother
}
fact {
  all n : Node | n.right_brother != n
}
fact {
  no n : Node | n in n.^right_brother
}
fact {
  all n : Node | n.right_son != n
}
fact {
  no n : Node | n in n.^right_son
}
fact {
  all n : Node | n.left_son != n
}
fact {
  no n : Node | n in n.^left_son
}

// All node relatives have to be distinct
fact {
  all n : Node | n.left_son & n.right_son = none
    && n.left_brother & n.right_brother = none
    && (n.left_brother + n.right_brother) & (n.left_son + n.right_son) = none
    && (n.right_son + n.left_son + n.left_brother + n.right_brother) & n.father = none
}

run {}
For 10., something along the lines of
all m : Node | some father.father.m implies (some m.left_son and some m.right_son)
would work, which, written out against the fields above, becomes:
fact {
  all m, n : Node | (n.father.father != none && n.father.father.level = m.level) => (m.left_son != none && m.right_son != none)
}
For 11., you can formulate it quite straightforwardly from the textual definition (of course, using the appropriate operators, namely transitive closure).
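For instance, a sketch of the shape it could take (unchecked; note the reflexive-transitive closure *, so that a node trivially relates to itself):

fact {
  all m, n : Node | m.level = n.level implies
    (m in n.*left_brother or m in n.*right_brother)
}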
As a general suggestion, try not to formulate such direct questions about homework problems (see this discussion). Since this answer comes pretty late, I think it's fine to try to give you some hints.

How to create a Kripke structure in NuSMV?

I must create a Kripke structure in NuSMV and check some properties on it.
Can anybody help me? The structure and the properties (LTL, CTL and CTL*) are in the picture here:
http://cl.ly/image/1x0b1v3E0P0D/Screen%20Shot%202014-10-16%20at%2016.52.34.png
I found simpler and seemingly more reliable NuSMV code for your Kripke structure. Thanks to dejvuth for his answer to my question. The code is as follows:
MODULE main
VAR
  state : {s0, s1, s2, s3, s4};
ASSIGN
  init(state) := s0;
  next(state) :=
    case
      state = s0 : {s1, s2};
      state = s1 : {s1, s2};
      state = s2 : {s1, s2, s3};
      state = s3 : {s1, s4};
      state = s4 : {s4};
    esac;
DEFINE
  p := state = s1 | state = s2 | state = s3 | state = s4;
  q := state = s1 | state = s2;
  r := state = s3;
SPEC
  EG p;
SPEC
  AG p;
SPEC
  EF (AG p);
As far as I know, NuSMV only handles LTL and CTL formulas (see NuSMV in Wikipedia). The formulas in problems 1-3 are CTL formulas, hence they can be model-checked by NuSMV. However, the formulas in problems 4 & 5 are CTL* formulas, and thus we cannot straightforwardly use them as input to NuSMV. You also need to understand that the set of all CTL* formulas is a proper superset of the union of all LTL and CTL formulas. This implies that some CTL* formulas have no equivalent LTL or CTL formula (see CTL* in Wikipedia). Your Kripke structure can be defined in NuSMV by the following code:
MODULE main
VAR
  p : boolean;
  q : boolean;
  r : boolean;
  state : {s0, s1, s2, s3, s4};
ASSIGN
  init(state) := s0;
  next(state) :=
    case
      state = s0 : {s1, s2};
      state = s1 : {s1, s2};
      state = s2 : {s1, s2, s3};
      state = s3 : {s1, s4};
      state = s4 : {s4};
      TRUE : state;
    esac;
  init(p) := FALSE;
  init(q) := FALSE;
  init(r) := FALSE;
  next(p) :=
    case
      state = s1 | state = s2 | state = s3 | state = s4 : TRUE;
      TRUE : p;
    esac;
  next(q) :=
    case
      state = s1 | state = s2 : TRUE;
      state = s3 | state = s4 : FALSE;
      TRUE : q;
    esac;
  next(r) :=
    case
      state = s3 : TRUE;
      state = s1 | state = s2 | state = s4 : FALSE;
      TRUE : r;
    esac;
SPEC
  EG p;
SPEC
  AG p;
SPEC
  EF (AG p);
Of course, there is another way to define your Kripke structure in NuSMV, but I think this is one of the easiest. (Anyway, thanks for helping me with my problem).
As for the formulas in problem 4 & 5, here is my answer.
The formula AF [p U EG (p -> q)] is of the form AF φ, where φ is the LTL formula p U EG (p -> q). Since an LTL formula φ is satisfied in a Kripke model if φ holds on every path starting at s0, we translate AF [p U EG (p -> q)] into the CTL formula AF A[p U EG (p -> q)].
By a similar argument, we translate EG [((p & q) | r) U (r U AG p)] into EG A[((p & q) | r) U A[r U AG p]].
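If you want NuSMV to check the translated formulas, they could be appended to the module above along these lines (a sketch, using NuSMV's A[... U ...] bracketing for the until operator):

SPEC
  AF A[p U EG (p -> q)];
SPEC
  EG A[((p & q) | r) U A[r U AG p]];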

Optimization of F# string manipulation

I am just learning F# and have been converting a library of C# extension methods to F#. I am currently working on implementing a function called ConvertFirstLetterToUppercase based on the C# implementation below:
public static string ConvertFirstLetterToUppercase(this string value) {
    if (string.IsNullOrEmpty(value)) return value;
    if (value.Length == 1) return value.ToUpper();
    return value.Substring(0, 1).ToUpper() + value.Substring(1);
}
The F# implementation
[<System.Runtime.CompilerServices.ExtensionAttribute>]
module public StringHelper

open System
open System.Collections.Generic
open System.Linq

let ConvertHelper (x : char[]) =
    match x with
    | [| |] | null -> ""
    | [| head |] -> Char.ToUpper(head).ToString()
    // General case: any array of two or more chars.
    // Note the String constructor; the built-in string function on a char[]
    // would just produce "System.Char[]".
    | _ -> Char.ToUpper(x.[0]).ToString() + String(x.Skip(1).ToArray())

[<System.Runtime.CompilerServices.ExtensionAttribute>]
let ConvertFirstLetterToUppercase (_this : string) =
    match _this with
    | "" | null -> _this
    | _ -> ConvertHelper (_this.ToCharArray())
Can someone show me a more concise implementation utilizing more natural F# syntax?
open System

type System.String with
    member this.ConvertFirstLetterToUpperCase() =
        match this with
        | null -> null
        | "" -> ""
        | s -> s.[0..0].ToUpper() + s.[1..]
Usage:
> "juliet".ConvertFirstLetterToUpperCase();;
val it : string = "Juliet"
Something like this?
[<System.Runtime.CompilerServices.ExtensionAttribute>]
module public StringHelper =
    open System

    [<System.Runtime.CompilerServices.ExtensionAttribute>]
    let ConvertFirstLetterToUppercase (t : string) =
        match t with
        | null | "" -> t // guard first: t.ToCharArray() would throw on null
        | _ ->
            let x = t.ToCharArray()
            x.[0] <- Char.ToUpper(x.[0])
            System.String(x)
Try the following
[<System.Runtime.CompilerServices.ExtensionAttribute>]
module StringExtensions =
    let ConvertFirstLetterToUpperCase (data : string) =
        match Seq.tryFind (fun _ -> true) data with
        | None -> data
        | Some(c) -> System.Char.ToUpper(c).ToString() + data.Substring(1)
The tryFind function will return the first element for which the lambda returns true. Since it always returns true, it will simply return the first element, or None if the sequence is empty. Once you've established there is at least one element, you know data is not null and hence can call Substring.
There's nothing wrong with using .NET library functions from a .NET language. Maybe a direct translation of your C# extension method is most appropriate, particularly for such a simple function. Although I'd be tempted to use the slicing syntax like Juliet does, just because it's cool.
open System
open System.Runtime.CompilerServices

[<Extension>]
module public StringHelper =
    [<Extension>]
    let ConvertFirstLetterToUpperCase(this: string) =
        if String.IsNullOrEmpty this then this
        elif this.Length = 1 then this.ToUpper()
        else this.[0..0].ToUpper() + this.[1..]
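For reference, a quick check in F# Interactive (assuming the module above is loaded; the empty-string case just round-trips):

> StringHelper.ConvertFirstLetterToUpperCase "juliet";;
val it : string = "Juliet"
> StringHelper.ConvertFirstLetterToUpperCase "";;
val it : string = ""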
