Is it possible to model random failures in Alloy?
For instance, I currently have a connected graph whose nodes pass data to their neighbors at various time steps. What I am trying to do is find some way for the model to randomly kill links while still fulfilling its goal of ensuring that all nodes eventually have their data state set to On.
open util/ordering[Time]

enum Datum {Off, On} // A simple representation of the state of each node

sig Time {state: Node -> one Datum} // at each time we have a network state

abstract sig Node {
  neighbours: set Node
}

fact {
  neighbours = ~neighbours -- symmetric
  no iden & neighbours -- no loops
  all n : Node | Node in n.*neighbours -- connected
  -- all n : Node | (Node - n) in n.neighbours -- comp. connected
}

fact start { // At the start exactly one node has the datum
  one n: Node | first.state[n] = On
}

fact simple_change { // in one time step all neighbours of On nodes become On
  all t: Time - last |
    let t_on = t.state.On |
      next[t].state.On = t_on + t_on.neighbours
}

run {} for 5 Time, 10 Node
The software I'm attempting to model deals in uncertainty. Basically, links between nodes can fail, and the software reroutes along another path. What I'd like to do in Alloy is to have some facility for links to 'die' at certain time steps (preferably randomly). In the top-most fact, I have the capability for the graph to be completely connected, so it's possible that, if a link dies, another can pick up the slack (as simple_change switches the state of the Datum to On for all connected neighbors).
Edit:
So, I did as was suggested and ran into an error. I am confused: aren't neighbours and Node still sets?
Here is my updated code:
open util/ordering[Time]
open util/relation

enum Datum {Off, On} // A simple representation of the state of each node

sig Time {
  neighbours : Node -> Node,
  state: Node -> one Datum // at each time we have a network state
} {
  symmetric[neighbours, Node]
}

abstract sig Node {
  neighbours: set Node
}

fact {
  neighbours = ~neighbours -- symmetric
  no iden & neighbours -- no loops
  -- all n : Node | (Node - n) in n.neighbours -- comp. connected
  all n : Node | Node in n.*neighbours -- connected
}

// At the start exactly one node has the datum
fact start {
  one n: Node | first.state[n] = On
}

// in one time step all neighbours of On nodes become On
fact simple_change {
  all t: Time - last |
    let t_on = t.state.On |
      next[t].state.On = t_on + t_on.neighbours
  all t: Time - last | next[t].neighbours in t.neighbours
  all t: Time - last | lone t.neighbours - next[t].neighbours
}

run {} for 10 Time, 3 Node
Move the definition of neighbours into Time:
sig Time {neighbours : Node->Node, ....}
You will need to re-express the facts about symmetry etc. of neighbours relative to each time point. This is most easily done in the invariant part of the Time signature:
sig Time {
  neighbours : Node->Node,
  ...
}{
  symmetric[neighbours, Node],
  ....
}
(I do recommend the use of open util/relation to load useful definitions such as symmetric.)
Then the time-step fact simple_change can be complicated by adding a constraint such as
next[t].neighbours in t.neighbours
which can throw away arbitrarily many arcs.
If you want to restrict how many arcs are thrown away in each step, you can add a further constraint such as
lone t.neighbours - next[t].neighbours
which restricts disposal to at most one arc.
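Putting these suggestions together, a minimal sketch of the revised model might look like the following. This is only an illustration under some assumptions of my own: the static neighbours field on Node is dropped, symmetry is written out directly rather than via util/relation, connectivity is only required at the first time step, and the run goal and scopes are arbitrary.

open util/ordering[Time]

enum Datum {Off, On}

sig Time {
  neighbours: Node -> Node,  -- the links still alive at this time
  state: Node -> one Datum   -- the network state at this time
} {
  neighbours = ~neighbours   -- links are undirected
  no iden & neighbours       -- no self-loops
}

sig Node {}

fact start {
  one n: Node | first.state[n] = On             -- exactly one node starts On
  all n: Node | Node in n.*(first.neighbours)   -- initially connected
}

fact simple_change {
  all t: Time - last | let t_on = t.state.On |
    next[t].state.On = t_on + t_on.(t.neighbours)               -- On spreads over live links
  all t: Time - last | next[t].neighbours in t.neighbours       -- links may only die, never appear
  all t: Time - last | lone t.neighbours - next[t].neighbours   -- at most one link dies per step
}

-- ask for a trace in which the goal is still reached despite failures
run { last.state.On = Node } for 5 Time, 4 Node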
Related
I've been learning Haskell as I find the language expressive, and to practice, a friend has been giving me problems from Codeforces. The current problem I've been working on is implementing Dijkstra's algorithm.
Below is a snippet of the algorithm (and here is the full code):
type Edge = (Node, Distance)
type Route = [Node]
type Graph = Map Node [Edge]

-- tracking which nodes we've gotten to
type VisitedNodes = Set Node
-- set will be used as a priority queue, along with prev/curr nodes
type PriorityQueue = Set (Distance, (Node, Maybe Node))
-- on the optimal path from start to end, what's the preceding node for a given node?
type PreviousMap = Map Node Node
-- to declutter the function types
type DijkstraStructs = (VisitedNodes, PriorityQueue, PreviousMap)

dijkstra :: Graph -> Node -> DijkstraStructs -> Maybe PreviousMap
dijkstra graph target (visitedNodes, pq, prev)
  | emptyPrioQueue = Nothing
  | alreadyVisited = dijkstra graph target (visitedNodes, nextPq, prev)
  | reachedTarget  = Just nextPrevMap
  | otherwise      = dijkstra graph target (updatedVisitedNodes, neighborPq, nextPrevMap)
  where
    -- we've exhausted the search along the nodes we can reach when this is true
    emptyPrioQueue = Set.null pq
    -- greedy: find the edge leading to the tentatively closest node, and remove it
    ((distance, (nearestNode, maybePrevNode)), nextPq) = Set.deleteFindMin pq
    updatedVisitedNodes = Set.insert nearestNode visitedNodes
    -- if the current node has been visited already, we will skip it
    alreadyVisited = nearestNode `Set.member` visitedNodes
    -- for path-tracking
    nextPrevMap = case maybePrevNode of
      Nothing       -> prev
      Just prevNode -> Map.insert nearestNode prevNode prev
    -- if the nearest node is the target node, then we're done; the path is encoded in the PreviousMap
    reachedTarget = nearestNode == target
    -- otherwise, keep searching: add all outgoing edges from the current node into the priority queue
    neighbors = (Map.!) graph nearestNode
    neighborPq = foldr (\(toNode, w) -> Set.insert (distance + w, (toNode, Just nearestNode))) nextPq neighbors
I believe my implementation of the algorithm is correct, but I suspect it's memory-inefficient, as my submissions to Codeforces exceed the memory limit for large inputs (e.g., 50k nodes / 100k edges; my algorithm uses more than 64 MB on such a case).
While my immediate goal is to iterate on my algorithm in order to successfully submit it, my longer term goal is to learn how to reason about the memory usage of Haskell code in general.
I suspect a large portion of memory might be attributed to "versioning" of the intermediate Sets and Maps, but I am not sure how to think about the impact of "mutating" (i.e., creating new versions) of immutable data structures in Haskell.
In an attempt to profile my code, I followed a procedure I found on this site, which helped me detect and fix a stack overflow from using foldr for large inputs, but sadly I haven't been able to use this approach to measure the memory usage of the algorithm itself.
I would love to learn how to optimize the memory usage of this code, as well as learn how to profile/measure and reason about memory usage in Haskell. Improvements to this code, as well as general stylistic feedback is welcome.
The main problem to worry about is space leaks. When is each argument of your dijkstra function forced?
graph and target are constant. visitedNodes is forced by the alreadyVisited guard. pq is forced by the emptyPrioQueue guard. But there is nothing to force prev, so it gets thunked: the case expression in nextPrevMap is delayed until the very end of the whole execution, when the final PreviousMap is evaluated. So you have a chain of thunks that is about as long as the number of visited nodes. Forcing prev before each recursive call (for example with seq or a bang pattern) would break that chain.
I have a model (see below) with two signatures: Data and Node. I have defined some predicates which characterise inhabitants of Node, namely: Orphan, Terminal, and Isolated.
What I want to do - but have not yet achieved - is to define a predicate Link which models the linking of two Nodes such that one Node becomes the successor (succ) of the other. Moreover, I would like to restrict the operation so that links can only be made to Isolated Nodes. Furthermore, I would like the restriction - if it is possible - to somehow be internal to the Link predicate.
Here is my latest attempt:
sig Data {}
sig Node {
  data: Data,
  succ: lone Node
}

// Node characterisation
pred Isolated (n: Node) { Orphan[n] and Terminal[n] }
pred Orphan (n: Node) { no m: Node | m.succ = n }
pred Terminal (n: Node) { no n.succ }

/*
 * Link
 *
 * May only Link n to an m, when:
 *  - n differs from m
 *  - m is an Isolated Node (DOES NOT WORK)
 *
 * After the operation:
 *  - m is the successor of n
 */
pred Link (n, m: Node) {
  n != m
  Isolated[m] /* Not satisfiable */
  m = succ[n]
}

pred LinkFeasible { some n, m: Node | Link[n, m] }

run LinkFeasible
Including the conjunct Isolated[m] renders the model unsatisfiable. I think I understand why: there can be no Node which is both Isolated and a successor of another. I include it only in the hope that it might reveal my intentions.
My question: how do I define the Link predicate to link two Nodes such that only Isolated nodes may be linked-to?
The problem is that you want Link to be an operation which changes the value of succ. In order to model change in Alloy, you need to add an ordered signature which represents the distinct states of your system. So your signature would look something like this:
open util/ordering[Time]

sig Time {}
sig Data {}
sig Node {
  data: Data,
  succ: Node lone -> Time
}
But this also changes all of your predicates. You can't say that a node is isolated or terminal; you can only say that a node is isolated at time T.
If you have Software Abstractions, this is all covered in section 6.1. I'm not familiar with any good guides to modeling time in Alloy outside of that book.
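For concreteness, here is a minimal sketch of how the predicates and Link might be re-expressed against such a Time signature. It is only one of several possible formulations; the frame condition, the quantification over Time - last, and the run scope are my own choices, not part of the original question.

open util/ordering[Time]

sig Time {}
sig Data {}
sig Node {
  data: Data,
  succ: Node lone -> Time
}

// Node characterisation, now relative to a time t
pred Orphan (n: Node, t: Time) { no m: Node | m.succ.t = n }
pred Terminal (n: Node, t: Time) { no n.succ.t }
pred Isolated (n: Node, t: Time) { Orphan[n, t] and Terminal[n, t] }

// Link n to m in the step from t to next[t]:
// allowed only when m is Isolated at t; afterwards m is the successor of n
pred Link (n, m: Node, t: Time) {
  n != m
  Isolated[m, t]
  let t' = next[t] {
    n.succ.t' = m
    all o: Node - n | o.succ.t' = o.succ.t  -- frame condition: nothing else changes
  }
}

pred LinkFeasible { some n, m: Node, t: Time - last | Link[n, m, t] }

run LinkFeasible for 3 but 2 Time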
There seems to be something I don't understand about the first branch of the ordering predicate in ff_next of this alloy model.
open util/ordering[Exposure]
open util/ordering[Tile]
open util/ordering[Point]

sig Exposure {}
sig Tile {}

sig Point {
  ex: one Exposure,
  tl: one Tile
}

fact {
  // Uncommenting the line below makes the model unsatisfiable
  // Point.ex = Exposure
  Point.tl = Tile
}

pred ff_next[p, p': Point] {
  (p.tl = last) => (p'.ex = next[p.ex] and p'.tl = first)
  else (p'.ex = p.ex and p'.tl = next[p.tl])
}

fact ff_ordering {
  first.ex = first
  first.tl = first
  all p: Point - last | ff_next[p, next[p]]
}

run {}
The intuition here is that I have a number of exposures, each of which I want to perform at a number of tile positions. Think doing panorama images and then stitching them together, but doing this multiple times with different camera settings.
With the noted line commented out, the first instance I get is equivalent to one pass over the panorama with exposure one, with the other exposures dropped on the floor.
The issue seems to be the first branch after the => in ff_next, but I don't understand what's wrong. That branch, which would move on to the next exposure and back to the start of the panorama, is never taken. If I uncomment the line Point.ex = Exposure, the model becomes unsatisfiable, because it requires that branch.
Can anyone help me see why that branch is never satisfiable?
It looks like you're trying to express "every tile must correspond to a point with the current exposure before we move to the next exposure." The problem is a major pitfall with ordering: it forces the ordered signature to be exact. If you write
run {} for 6 but 3 Tile, 2 Exposure
then it works as expected. Instances exist only when #Point = #Exposure * #Tile. You can write your own reduced version of ordering if this exactness is an issue for you.
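For illustration, a hand-rolled linear order (the name and formulation are mine, not a library module) avoids the exact-scope behaviour of util/ordering:

// a reduced replacement for util/ordering[Tile]: a linear order given by
// `first` and `next`, without forcing the scope of Tile to be exact
one sig TileOrder {
  first: Tile,
  next: Tile -> lone Tile
} {
  all t: Tile | t in first.*next  -- every tile is reachable from first
  no t: Tile | t in t.^next       -- the order is acyclic
  all t: Tile | lone next.t       -- each tile has at most one predecessor
}

You would then write TileOrder.first and t.(TileOrder.next) where the model currently uses first and next[t] for tiles.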
I'm using Alloy* hola-0.2.jar to represent and study higher-order problems.
The following check command should fail with only one counterexample:
check isAR for 1 but exactly 1 Node, exactly 1 Edge
Alloy* does find that counterexample quickly. However, when I click "Next" to try to find another counterexample, the solver never finishes. (I ran it for at least 3 hours on my MacBook Pro.)
In theory, no further counterexamples exist, so Alloy* should state "No counterexample found. Assertion may be valid." But that message never pops up.
I'm aware that solving higher-order problems requires more computational effort, but this problem of mine is very small, so I suspect my code. What's the problem?
// a Heyting Algebra of subgraphs of a given graph
sig Node {}
sig Edge {
  source: Node,
  target: Node
}

fun Edge.proj : set Node { this.source + this.target }

pred isGraph[ns: set Node, es: set Edge] { es.proj in ns }

// Cmpl[sns,ses,cns,ces] means: a pseudo-complement of a subgraph s is a subgraph c.
pred Cmpl[sns: set Node, ses: set Edge, cns: set Node, ces: set Edge] {
  !((cns != none or ces != none) and Node in sns and Edge in ses)
  Node in sns + cns
  Edge in ses + ces
  all ns: set Node | all es: set Edge when isGraph[ns,es] and (Node in sns + ns) and (Edge in ses + es) |
    (cns in ns and ces in es)
}

/* An element x of a Heyting algebra H is called regular
 * if x = ¬y for some y in H.
 */
pred isRegular [xns: set Node, xes: set Edge] {
  some yns: set Node | some yes: set Edge when isGraph[yns,yes] |
    one cyns: set Node | one cyes: set Edge |
      isGraph[cyns,cyes] and Cmpl[yns,yes,cyns,cyes] and (cyns = xns and cyes = xes)
}

assert isAR { // is always regular?
  all subns: set Node, subes: set Edge when isGraph[subns,subes] |
    isRegular[subns,subes]
}

check isAR for 1 but exactly 1 Node, exactly 1 Edge
// this should fail with 1 counterexample (by theory)
The following model produces instances with exactly 2 address relations when the number of Books is limited to 1. However, if more Books are allowed, it creates instances with 0-3 address relations. Is this a misunderstanding on my part of how Alloy works?
sig Name {}
sig Addr {}
sig Book { addr: Name -> lone Addr }

pred show (b: Book) { #b.addr = 2 }

// nr. of address relations in every Book should be 2
run show for 3 but 2 Book
// works as expected with 1 Book
// works as expected with 1 Book
Each instance of show should include one Book, labeled as being the b of show, which has two address pairs. But show does not say that every book must have two address pairs, only that at least one must have two address pairs.
[Postscript]
When you ask Alloy to show you an instance of a predicate, for example by the command run show, Alloy should show you an instance: that is (quoting section 5.2.1 of Software Abstractions, which you already have open) "an assignment of values to the variables of the constraint for which the constraint evaluates to true." In any given universe, there may be many other possible assignments of values to the variables for which the constraint evaluates to false; the existence of such unsuitable assignments is unavoidable in any universe with more than one atom.
Informally, we can think of a run command for a predicate P with arguments X, Y, Z as requesting that Alloy show us universes which satisfy the expression
some X, Y, Z : univ | P[X, Y, Z]
The run command does not amount to the expression
all X, Y, Z : univ | P[X, Y, Z]
If you want to see only universes in which every book has two pairs in its addr relation, then say so:
pred all_books_have_2 { all b : Book | #b.addr = 2 }
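Then run that predicate instead of show; the scope below is just an illustration:

run all_books_have_2 for 3 but 2 Book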
I think it's better that run have implicit existential quantification, rather than implicit universal quantification. One way to see why is to imagine a model that defines trees, such as:
sig Node { parent : lone Node }
fact parent_acyclic { no n : Node | n in n.^parent }
Suppose we get tired of seeing universes in which every tree is trivial and contains a single node. I'd like to be able to define a predicate that guarantees at least one tree with depth greater than 1, by writing
pred nontrivial[n : Node]{ some n.parent }
Given the constraint that trees be acyclic, there can never be a non-empty universe in which the predicate nontrivial holds for all nodes. So if run and pred had the semantics you have been supposing, we could not use nontrivial to find universes containing non-trivial trees.
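With the existential semantics that run actually has, a command like the following (the scope is arbitrary) does produce universes containing a non-trivial tree:

run nontrivial for 3 Node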
I hope this helps.