Logical evaluation of "When A and B ... " - model-checking

Given a statement: "When the CM is idle and receives an update request from the WCP, it will set ....".
Some context: there can only be one type of message in the channel, i.e. it will only contain update requests from the WCP.
I can think of two possible implementations, but I'm not sure which is the correct way.
1st Way:
do
:: CM_STATUS == IDLE && nempty(wcpOut) -> // remove msg from channel and set something;
od;
2nd Way:
mtype receivedMessage;
do
:: CM_STATUS == IDLE ->
if
:: wcpOut?receivedMessage -> // set something;
fi;
od;

The two examples are slightly different.
do
:: CM_STATUS == IDLE && nempty(wcpOut) -> // read msg
od;
Here, you commit to reading the message if your state is idle and the channel wcpOut is not empty. However, what happens if the process is preempted right after the evaluation of nempty(wcpOut), and the message is read by someone else? In that case, the process could end up being blocked.
mtype receivedMessage;
do
:: CM_STATUS == IDLE ->
if
:: wcpOut?receivedMessage -> // set something;
fi;
od;
Here, you commit to reading a message when the state is idle, so you are unable to react to a state change until a message is read.
I wouldn't use either approach, except for simple examples.
The flaw of the first method is that it does not perform the two operations atomically. The flaw of the second method is that it makes it hard to extend the behavior of your controller in the idle state by adding more conditions. (For instance, you would get the "dubious use of 'else' combined with i/o" error message if you tried to add an else branch).
IMHO, a better option is
do
:: atomic{ nempty(my_channel) -> my_channel?receivedMessage; } ->
...
:: empty(my_channel) -> // do something else
od;
When you want to filter messages, you can use message polling instead:
do
:: atomic{ my_channel?[MESSAGE_TYPE] -> my_channel?MESSAGE_TYPE } ->
...
:: else -> // do something else
od;
Whether you join these conditions with CM_STATUS == IDLE or prefer the nested approach is entirely up to you, unless you have reason to believe that the CM_STATUS variable can be changed by some other process. I would almost always use the second style when it improves readability.
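For concreteness, here is a minimal, self-contained sketch that combines the recommended atomic receive with the idle-state guard. The BUSY state and the toy WCP writer are assumptions added for illustration, not part of the question's model:
mtype = { UPDATE_REQ, IDLE, BUSY };

chan wcpOut = [2] of { mtype };
mtype CM_STATUS = IDLE;

active proctype WCP() {
    do
    :: wcpOut!UPDATE_REQ                 // keep producing update requests
    od
}

active proctype CM() {
    mtype receivedMessage;
    do
    :: atomic { CM_STATUS == IDLE && nempty(wcpOut) ->
           wcpOut?receivedMessage        // test and receive in one atomic step
       } ->
       CM_STATUS = BUSY                  // "set something"
    :: CM_STATUS == BUSY ->
       CM_STATUS = IDLE                  // finish the work, become idle again
    od
}
Because the emptiness test and the receive happen inside a single atomic step, no other process can steal the message in between, which is exactly the race that makes the first way unsafe.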

Related

Promela model with spin - duplicate message and corrupt message

I have this Promela code, and I need to model message duplication and message corruption. I also need to add mechanisms to detect and cope with corrupted and duplicated messages.
From my reading, I found that I need to add new processes: one to duplicate messages and another one to corrupt them. Also, to detect duplicates I need to add a sequence number, and to detect corruption I need to add a checksum. My Promela skills aren't enough to turn this into code. If you can help, or know a useful website, I would appreciate it.
chan linkA = [1] of {byte};
chan linkB = [1] of {byte};
proctype sender(chan link; byte first)
{
    byte n = first;
    do
    :: n <= 10 -> link!n; n = n + 2
    :: else -> break
    od
}

proctype receiver()
{
    byte m, totaleven, totalodd;
    do
    :: linkA?m -> totaleven = totaleven + m
    :: linkB?m -> totalodd = totalodd + m
    od
}

init
{
    run sender(linkA, 2);
    run sender(linkB, 1);
    run receiver()
}
Since this looks like an exercise, instead of providing you with the correct solution, I'll try to just give you some hints.
message duplication: instead of sending the message once, add a branching option for sending the message twice, similar to this:
if
:: channel!MSG(value, seq_no, checksum)
:: channel!MSG(value, seq_no, checksum);
channel!MSG(value, seq_no, checksum); // msg. duplication
:: true; // msg. loss
fi
At the receiver side, you cope with duplication and loss by checking the seq_no value. When a message is duplicated, you discard the extra copy. When a (previous) message is missing, you discard the current message (for simplicity) and ask for the message with the desired seq_no to be sent again, e.g.:
channel!NACK(seq_no)
message corruption: compute the checksum first, then (possibly) alter the content of value:
checksum = (seq_no ^ value) % 2
select(value : 1 .. 100)
...
channel!MSG(value, seq_no, checksum);
The checksum is very simple, but I don't think that matters for the purposes of the exercise. At the receiver side, you compute the checksum on the data you receive and compare it with the checksum field in the message. If they do not match, you ask for the message to be sent again.
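To make the receiver-side hints concrete, here is one possible skeleton. It is a sketch only: the MSG/NACK tags, the separate ctrlChan used to send NACKs back, and the field layout are all assumptions, not the answer to the exercise:
mtype = { MSG, NACK };
chan dataChan = [1] of { mtype, byte, byte, byte };  // tag, value, seq_no, checksum
chan ctrlChan = [1] of { mtype, byte };              // tag, seq_no

proctype checkedReceiver()
{
    byte value, seq_no, checksum;
    byte expected = 0;
    byte total = 0;
    do
    :: dataChan?MSG(value, seq_no, checksum) ->
       if
       :: checksum != ((seq_no ^ value) % 2) ->
          ctrlChan!NACK(expected)              // corrupted: ask for it again
       :: else ->
          if
          :: seq_no < expected -> skip         // duplicate: discard it
          :: seq_no > expected ->
             ctrlChan!NACK(expected)           // a message is missing: re-request it
          :: seq_no == expected ->
             total = total + value;
             expected++                        // in order: consume it
          fi
       fi
    od
}
The sender side would have to listen on ctrlChan and retransmit; that part is left out here.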

EAFP in Haskell

I have a doubt about the Maybe and Either types and their hypothetical relation to EAFP (Easier to Ask for Forgiveness than Permission). I've worked with Python and got used to working with the EAFP paradigm in the world of exceptions.
The classical example: Division by zero
def func(x, y):
    if not y:
        print "ERROR."
    else:
        return x / y
and Python's style:
def func(x, y):
    try:
        return x / y
    except:
        return None
In Haskell, the first function would be
func :: (Eq a, Fractional a) => a -> a -> a
func x y = if y==0 then error "ERROR." else x/y
and with Maybe:
func :: (Eq a, Fractional a) => a -> a -> Maybe a
func x y = if y==0 then Nothing else Just (x/y)
In Python's version, you run func without checking y. With Haskell, the story is the opposite: y is checked.
My question:
Formally, does Haskell support the EAFP paradigm, or does it "prefer" LBYL while admitting a semi-bizarre EAFP approximation?
P.S.: I called it "semi-bizarre" because, even if it is intuitively readable, it looks (at least to me) like it violates EAFP.
The Haskell style with Maybe and Either forces you to check for the error at some point, but it does not have to be right away. If you don't want to deal with the error now, you can just propagate it on through the rest of your computation.
Taking your hypothetical safe divide-by-0 example, you could use it in a broader computation without an explicit check:
do result <- func a b
   let x = result * 10
   return x
Here, you don't have to match on the Maybe returned by func: you just extract it into the result variable using do-notation, which automatically propagates failure throughout. The consequence is that you don't need to deal with the potential error immediately, but the final result of the computation is wrapped in Maybe itself.
This means that you can easily combine (compose) functions that might result in an error without having to check for the error at each step.
In a sense, this gives you the best of both worlds. You still only have to check for errors in one place, at the very end, but you're explicit about it. You have to use something like do-notation to take care of the actual propagation and you can't ignore the final error by accident: if you don't want to handle it, you have to turn it into a runtime error explicitly.
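For instance, here is a small, self-contained sketch; safeDiv and compute are hypothetical stand-ins for your func:
safeDiv :: (Eq a, Fractional a) => a -> a -> Maybe a
safeDiv _ 0 = Nothing
safeDiv x y = Just (x / y)

-- Failure propagates through the whole computation automatically...
compute :: Double -> Double -> Double -> Maybe Double
compute a b c = do
  r1 <- safeDiv a b
  r2 <- safeDiv r1 c
  return (r2 * 10)

-- ...and is handled explicitly, exactly once, at the very end.
main :: IO ()
main = putStrLn (maybe "division by zero" show (compute 100 5 2))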
Isn't explicit better than implicit?
Now, Haskell also has a system of exceptions for working with runtime errors that you do not have to check at all. This is useful occasionally, but not too often. In Haskell, we only use it for errors that we do not expect to ever catch—truly exceptional situations. The rule of thumb is that a runtime exception represents a bug in your program, while an improper input or merely an uncommon case should be represented with Maybe or Either.
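When you also want to know why a computation failed, the same style works with Either, which carries an error value instead of Nothing; a minimal sketch (safeDiv' is again hypothetical):
safeDiv' :: (Eq a, Fractional a) => a -> a -> Either String a
safeDiv' _ 0 = Left "division by zero"
safeDiv' x y = Right (x / y)

-- Either is a monad too: the first Left short-circuits the rest.
compute' :: Double -> Double -> Either String Double
compute' a b = do
  r <- safeDiv' a b
  return (r * 10)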

Performance of a function that changes functions

Let's suppose we have a function
type Func = Bool -> SophisticatedData
fun1 :: Func
And we'd like to change this function on some input:
change :: SophisticatedData -> Func -> Func
change sd func = \input -> if input == False then sd else func input
Am I right that after several calls of change (endFunc = change data1 $ change data2 $ startFunc) the resulting function would call all the intermediate ones each time? Am I right that the GC wouldn't be able to delete the unused data? What is the Haskell way to cope with this task?
Thanks.
Well, let's start by cleaning up change to be a bit more legible:
change sd f input = if input then f input else sd
So when we compose these
change d1 $ change d2 $ change d3
GHC starts by storing a thunk for each of them. Remember that $ is a function too, so the whole change d* thing is going to be a thunk at first. Thunks are relatively cheap, and unless you're creating 10k or so of them at once you'll be just fine :) so no worries there.
Now the question is: what happens when you start evaluating? The answer is that it still won't evaluate the complex data, so it's still quite memory efficient; it only needs to force input to determine which branch it's taking. Because of this, SophisticatedData is never fully evaluated until after the composed function has run and returned one to you, and then it will be evaluated as needed when you use it.
Furthermore, at each step GHC can garbage collect the unneeded thunks, since they can no longer be referenced.
In conclusion, you should be just fine. Trust in the laziness.
You are correct: if foo is a chain of O(n) calls to change, there will be O(n) overhead on every call to foo. The way to deal with this is to memoize foo:
memoize :: Func -> Func
memoize f = \x -> if x then fTrue else fFalse
  where fTrue  = f True
        fFalse = f False
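A usage sketch under assumed definitions (the list-based SophisticatedData and the sample values are stand-ins, not from the question). Wrapping the chain in memoize means the O(n) chain is walked at most once per Bool input, and the results are shared afterwards:
type SophisticatedData = [Int]   -- stand-in, assumption for illustration
type Func = Bool -> SophisticatedData

change :: SophisticatedData -> Func -> Func
change sd f input = if input then f input else sd

memoize :: Func -> Func
memoize f = \x -> if x then fTrue else fFalse
  where fTrue  = f True
        fFalse = f False

startFunc :: Func
startFunc _ = [0]

endFunc :: Func
endFunc = memoize (change [1] (change [2] startFunc))

main :: IO ()
main = print (endFunc True) >> print (endFunc False)  -- each chain walk happens at most once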

Simple Generators

This code comes from a paper called "Lazy v. Yield". It's about a way to decouple producers and consumers of streams of data. I understand the Haskell portion of the code, but the OCaml/F# eludes me. I don't understand this code, for the following reasons:
What kind of behavior can I expect from a function that takes as argument an exception and returns unit?
How does the consumer project into a specific exception? (what does that mean?)
What would be an example of a consumer?
module SimpleGenerators

type 'a gen = unit -> 'a
type producer = unit gen
type consumer = exn -> unit   (* consumer will project into specific exception *)
type 'a transducer = 'a gen -> 'a gen

let yield_handler : (exn -> unit) ref =
  ref (fun _ -> failwith "yield handler is not set")

let iterate (gen : producer) (consumer : consumer) : unit =
  let oldh = !yield_handler in
  let rec newh x =
    try
      yield_handler := oldh;
      consumer x;
      yield_handler := newh
    with e -> yield_handler := newh; raise e
  in
  try
    yield_handler := newh;
    let r = gen () in
    yield_handler := oldh;
    r
  with e -> yield_handler := oldh; raise e
I'm not familiar with the paper, so others will probably be more enlightening. Here are some quick answers/guesses in the meantime.
A function of type exn -> unit is basically an exception handler.
Exceptions can contain data. They're quite similar to polymorphic variants that way--i.e., you can add a new exception whenever you want, and it can act as a data constructor.
It looks like the consumer is going to look for a particular exception (or exceptions) that gives it the data it wants; others it will just re-raise. So it's only looking at a projection of the space of possible exceptions (I guess).
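To give a concrete (hypothetical) example answering the third question, building on the snippet above: a consumer is just a handler that pattern-matches the one exception it understands and re-raises everything else. The Yielded exception and these names are assumptions, not from the paper:
exception Yielded of int

(* A producer "yields" by passing an exception to the current handler: *)
let emit n = !yield_handler (Yielded n)

let producer () = for i = 1 to 3 do emit i done

(* The consumer projects out the Yielded case and re-raises the rest: *)
let consumer = function
  | Yielded n -> Printf.printf "consuming: %d\n" n
  | e -> raise e

let () = iterate producer consumer
Running this prints consuming: 1 through consuming: 3, with iterate swapping the handler back and forth around each consumer call.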
I think the OCaml sample is using a few constructs and design patterns that you would not typically use in F#, so it is quite OCaml-specific. As Jeffrey says, OCaml programs often use exceptions for control flow (while in F# they are only used for exceptional situations).
Also, F# has a really powerful sequence-expression mechanism that can be used quite nicely to separate producers of data from consumers of data. I did not read the paper in detail, so maybe they have something more complicated, but a simple example in F# could look like this:
// Generator: Produces infinite sequence of numbers from 'start'
// and prints the numbers as they are being generated (to show I/O behaviour)
let rec numbers start = seq {
printfn "generating: %d" start
yield start
yield! numbers (start + 1) }
A simple consumer can be implemented using a for loop, but since the sequence is infinite we need to say how many elements to consume, using Seq.take:
// Consumer: takes a sequence of numbers generated by the
// producer and consumes first 100 elements
let consumer nums =
for n in nums |> Seq.take 100 do
printfn "consuming: %d" n
When you run consumer (numbers 0) the code starts printing:
generating: 0
consuming: 0
generating: 1
consuming: 1
generating: 2
consuming: 2
So you can see that the effects of producers and consumers are interleaved. I think this is quite simple & powerful mechanism, but maybe I'm missing the point of the paper and they have something even more interesting. If so, please let me know! Although I think the idiomatic F# solution will probably look quite similar to the above.

How to add a finalizer on a TVar

Background
In response to a question, I built and uploaded a bounded-tchan (it wouldn't have been right for me to upload jnb's version). If the name isn't enough: a bounded-tchan (BTChan) is an STM channel that has a maximum capacity (writes block if the channel is at capacity).
Recently, I've received a request to add a dup feature like the one in the regular TChan. And thus begins the problem.
How the BTChan looks
A simplified (and actually non-functional) view of BTChan is below.
data BTChan a = BTChan
{ max :: Int
, count :: TVar Int
, channel :: TVar [(Int, a)]
, nrDups :: TVar Int
}
Every time you write to the channel, you include the number of dups (nrDups) in the tuple - this is an 'individual element counter' which indicates how many readers have yet to receive this element.
Every reader will decrement the counter for the element it reads, then move its read-pointer to the next element in the list. If the reader decrements the counter to zero, then the value of count is decremented to properly reflect the available capacity of the channel.
To be clear on the desired semantics: A channel capacity indicates the maximum number of elements queued in the channel. Any given element is queued until a reader of each dup has received the element. No elements should remain queued for a GCed dup (this is the main problem).
For example, let there be three dups of a channel (c1, c2, c3) with a capacity of 2, where 2 items were written into the channel and then all items were read out of c1 and c2. The channel is still full (0 remaining capacity) because c3 hasn't consumed its copies. If at any point all references to c3 are dropped (so c3 is GCed), then the capacity should be freed (restored to 2 in this case).
Here's the issue: let's say I have the following code
c <- newBTChan 1
_ <- dupBTChan c -- This represents what would probably be a pathological bug or terminated reader
writeBTChan c "hello"
_ <- readBTChan c
Causing the BTChan to look like:
BTChan 1 (TVar 0) (TVar []) (TVar 1) -->             -- newBTChan
BTChan 1 (TVar 0) (TVar []) (TVar 2) -->             -- dupBTChan
BTChan 1 (TVar 1) (TVar [(2, "hello")]) (TVar 2) --> -- writeBTChan c "hello"
BTChan 1 (TVar 1) (TVar [(1, "hello")]) (TVar 2)     -- readBTChan c -- OH NO!
Notice at the end the read count for "hello" is still 1? That means the message is not considered gone (even though it will get GCed in the real implementation) and our count will never decrement. Because the channel is at capacity (1 element maximum) the writers will always block.
I want a finalizer created each time dupBTChan is called. When a dupped (or original) channel is collected, all elements remaining to be read on that channel will get their per-element count decremented, and the nrDups variable will be decremented as well. As a result, future writes will have the correct count (a count that doesn't reserve space for elements not read by GCed channels).
Solution 1 - Manual Resource Management (what I want to avoid)
JNB's bounded-tchan actually has manual resource management for this reason. See the cancelBTChan. I'm going for something harder for the user to get wrong (not that manual management isn't the right way to go in many cases).
Solution 2 - Use exceptions by blocking on TVars (GHC can't do this how I want)
EDIT: this solution, and solution 3 (which is just a spin-off), do not work! Due to bug 5055 (WONTFIX), the GHC runtime sends exceptions to both blocked threads, even though one is sufficient (which is theoretically determinable, but not practical with the GHC GC).
If all the ways to get a BTChan are IO, we can forkIO a thread that reads/retries on an extra (dummy) TVar field unique to the given BTChan. The new thread will catch an exception when all other references to the TVar are dropped, so it will know when to decrement the nrDups and individual element counters. This should work but forces all my users to use IO to get their BTChans:
data BTChan a = BTChan { ... as before ..., dummyTV :: TVar () }

dupBTChan :: BTChan a -> IO (BTChan a)
dupBTChan c = do
    ... as before ...
    d <- newTVarIO ()
    let chan = BTChan ... d
    forkIO $ watchBTChan chan
    return chan
watchBTChan :: BTChan a -> IO ()
watchBTChan b =
    catch (atomically (readTVar (dummyTV b) >> retry)) $ \e ->
        case fromException e of
            Just BlockedIndefinitelyOnSTM -> atomically $ do
                -- the BTChan must have gotten collected
                ls <- readTVar (channel b)
                writeTVar (channel b) (map (\(n, x) -> (n - 1, x)) ls)
                readTVar (nrDups b) >>= writeTVar (nrDups b) . subtract 1
            _ -> watchBTChan b
EDIT: Yes, this is a poor man's finalizer, and I don't have any particular reason to avoid using addFinalizer. That would be the same solution, still forcing the use of IO AFAICT.
Solution 3: A cleaner API than solution 2, but GHC still doesn't support it
Users start a manager thread by calling initBTChanCollector, which will monitor a set of these dummy TVars (from solution 2) and do the needed clean-up. Basically, it shoves the IO into another thread that knows what to do via a global (unsafePerformIOed) TVar. Things work basically like solution 2, but the creation of BTChan's can still be STM. Failure to run initBTChanCollector would result in an ever-growing list (space leak) of tasks as the process runs.
Solution 4: Never allow discarding BTChans
This is akin to ignoring the problem. If the user never drops a dupped BTChan then the issue disappears.
Solution 5
I see ezyang's answer (totally valid and appreciated), but I would really like to keep the current API, just with a 'dup' function.
Solution 6
Please tell me there's a better option.
EDIT:
I implemented solution 3 (a totally untested alpha release) and handled the potential space leak by making the global itself a BTChan - that chan should probably have a capacity of 1 so that forgetting to run init shows up really quickly, but that's a minor change. This works in GHCi (7.0.3), but that seems to be incidental: GHC throws exceptions to both blocked threads (the valid one reading the BTChan and the watching thread), so if you are blocked reading a BTChan when another thread discards its reference, then you die.
Here is another solution: require all accesses to the bounded channel duplicate to be bracketed by a function that releases its resources on exit (by an exception or normally). You can use a monad with a rank-2 runner to prevent duplicated channels from leaking out. It's still manual, but the type system makes it a lot harder to do naughty things.
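A minimal sketch of that idea, with hypothetical names (withDup, Dup, and the stubbed primitives are not the bounded-tchan API): the phantom type variable s plays the same role as in runST, so the duplicate cannot escape the bracket that releases it:
{-# LANGUAGE RankNTypes #-}
import Control.Exception (bracket)

-- Stand-ins for the real bounded-tchan structure and primitives:
data BTChan a = BTChan

dupBTChan :: BTChan a -> IO (BTChan a)
dupBTChan = undefined     -- assumed primitive

cancelBTChan :: BTChan a -> IO ()
cancelBTChan = undefined  -- assumed primitive, releases the dup's capacity

-- The duplicate is tagged with a phantom region variable 's':
newtype Dup s a = Dup (BTChan a)

readDup :: Dup s a -> IO a
readDup (Dup c) = undefined  -- would delegate to readBTChan

-- The rank-2 runner: the body must work for *any* 's', so it cannot
-- return the Dup (or anything mentioning 's') in its result 'r'.
withDup :: BTChan a -> (forall s. Dup s a -> IO r) -> IO r
withDup c body = bracket (dupBTChan c) cancelBTChan (\d -> body (Dup d))
By the time withDup returns, cancelBTChan has already run (normally or via an exception), so the capacity is restored without relying on the GC.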
You really don't want to rely on true IO finalizers, because GHC gives no guarantees about when a finalizer may be run: for all you know it may wait until the end of the program before running the finalizer, which means you're deadlocked until then.
