Barber Paradox: why is this model inconsistent? - alloy

I would like to know why this model is inconsistent. We can have the following tuple in shaves:
shaves = {(man,man)}
sig Man {shaves: set Man}
one sig Barber extends Man {}
fact {
  Barber.shaves = {m: Man | m not in m.shaves}
}
Barber.shaves will produce 0 tuples. In this case, the fact should hold. So why does Alloy tell me that my model is inconsistent?
Would really appreciate some advice on this.

There is a simple explanation using only English.
The Alloy fact above says that the set of men the barber shaves (there is only one barber, since Barber is declared one sig) is exactly the set of all men who don't shave themselves. At first glance this statement makes sense, because every man either shaves himself or is shaved by the barber.
The trick question is then: who shaves the barber? If the barber doesn't shave himself, the fact forces the barber to shave the barber, which contradicts the assumption that the barber doesn't shave himself. If the barber does shave himself, the fact forces the barber not to shave the barber, which is again a contradiction, so no instance can be found.

Okay, here is the solution. The idea is really simple, and I think the authors really ought to explain it in their book.
The crux of the confusion is in the fact statement.
Barber.shaves = {m:Man | m not in m.shaves}
Basically, what the above statement says is that any man in Barber.shaves must not shave himself. For example, if Barber.shaves = {(m)}, then (m, m) must never occur in the shaves relation. Conversely, m: Man means that every man who does not shave himself must also be in Barber.shaves.
The problem occurs when we come to the Barber atom itself. If the barber is not in Barber.shaves, then he does not shave himself, which violates m: Man (he ought to be in the set). If the barber is included, then he violates m not in m.shaves, since shaves would contain (barber, barber).
I hope this clears up the confusion for beginners in Alloy like myself. So, given the model in my question, Alloy cannot create an instance that satisfies this fact.
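To see concretely that no instance exists, here is a tiny brute-force sketch in Haskell (my own illustration, not how Alloy actually searches): it enumerates every possible shaves relation on a two-man universe and keeps the ones satisfying the fact. The result is the empty list.
import Data.List (subsequences)

type Man = Int            -- 0 is the barber, 1 is the other man

universe :: [Man]
universe = [0, 1]

-- every candidate relation: each subset of Man x Man
candidates :: [[(Man, Man)]]
candidates = subsequences [(a, b) | a <- universe, b <- universe]

-- the fact: the barber shaves m iff m does not shave himself
factHolds :: [(Man, Man)] -> Bool
factHolds shaves =
  and [ ((0, m) `elem` shaves) == ((m, m) `notElem` shaves)
      | m <- universe ]

main :: IO ()
main = print (filter factHolds candidates)   -- prints []: no instance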

Related

Is there a name for this fallacy?

From time to time I've come across (or have been briefly tempted by) something like the following absurd example:
"40% of Coloradans are Republican, therefore, 40% of Republicans are Coloradan."
Closer to the bone would be, "Most NBA players are Black, so most Blacks are highly athletic."
Is there a term for this sort of gaffe, of not just misapplying observations from a non-random sample to the population, but of confusing 'A implies B' with 'B implies A' as well? It feels more complex than just taking an inductive conclusion as a deductive truth.
Thanks.
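For what it's worth, a back-of-the-envelope calculation (with rough population figures I am assuming purely for illustration) shows how far off the converted statement can be:
-- rough figures, assumed for illustration only
coloradans, republicans :: Double
coloradans  = 5.8e6    -- approximate population of Colorado
republicans = 50e6     -- approximate number of Republicans in the US

-- given: 40% of Coloradans are Republican
coloradanRepublicans :: Double
coloradanRepublicans = 0.4 * coloradans

-- the converted claim would need this to be 40%; it is under 5%
main :: IO ()
main = print (coloradanRepublicans / republicans)   -- ~0.046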

An assertion with no counterexample represents a conclusion?

So, I create a bunch of Alloy signatures, predicates, and facts. I express a lot of relationships. I write in English some statements representing the Alloy model:
A is a ...
B is ...
Then I create an Alloy assertion. No counterexamples found. What is the appropriate English statement for the assertion? I believe the assertion represents a conclusion:
A is a ...
B is ...
Therefore, ...
Do you agree? Does an Alloy assertion with no counterexample represent a conclusion ("therefore" statement)?
There are two points that would discourage me from calling an assertion for which no counterexamples were found a "conclusion":
The fact that no counterexamples have been found must be put in perspective with the fact that the search for counterexamples was performed within a limited scope. In general, I would say that the lack of counterexamples doesn't really prove anything outside the scope in which the search was performed. At most it might give a certain degree of confidence that the assertion could also hold in a broader scope (see the sketch below).
From your definition of conclusion, "A therefore B", it is somehow implied that there is a logical connection between A and B, which is not always the case between facts and assertions. What if my assertion is a tautology? Wouldn't it feel weird to claim: "I specified those facts in my model, therefore 1=1"? :-)
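To illustrate the first point with something other than Alloy, here is a small Haskell sketch (my own addition; the polynomial is Euler's classic example): the property has no counterexample in the scope 0..39, yet fails immediately outside it.
-- n^2 + n + 41 is prime for every n in 0..39, but not for n = 40
isPrime :: Integer -> Bool
isPrime n = n > 1 && all (\d -> n `mod` d /= 0) [2 .. isqrt n]
  where isqrt = floor . sqrt . (fromIntegral :: Integer -> Double)

prop :: Integer -> Bool
prop n = isPrime (n * n + n + 41)

main :: IO ()
main = do
  print (all prop [0 .. 39])   -- True: no counterexample in this scope
  print (prop 40)              -- False: 40^2 + 40 + 41 = 41 * 41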

Immutability invariants in Haskell's GC, considering mutual recursion

I was reading this article: https://wiki.haskell.org/GHC/Memory_Management
and found it a bit confusing. I do not understand the part:
The trick is that immutable data NEVER points to younger values. Indeed, younger values don't yet exist at the time when an old value is created, so it cannot be pointed to from scratch. And since values are never modified, neither can it be pointed to later. This is the key property of immutable data.
I think it would be true if we forgot about recursion and mutual recursion, but what about
let x = 1:y
    y = 2:x
which is proper Haskell code and makes x == [1,2,1,...]. Here x points to y and y points to x, so in this situation we cannot say that no value points to a younger one (one of them must actually be older).
Who is mistaken - me, or the article's author? Why? How does immutability help the GC's analysis in this case?
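For reference, a self-contained version of the binding (my addition, just to confirm it runs and that the lists really do reference each other):
-- the mutually recursive bindings from above, runnable as-is;
-- x and y each point at the other, so the lists are cyclic
main :: IO ()
main =
  let x = 1 : y
      y = 2 : x
  in print (take 6 x)   -- [1,2,1,2,1,2]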

Are datatypes the enemy of abstraction?

Page 137 of the book Software Abstractions has these remarkable statements:
Integers are actually not very useful. If you think you need them,
think again; there is often a more abstract description that matches
the properties better. Just because integers appear in the problem
domain does not mean that they should be modeled as such. To figure
out whether integers are necessary, ask yourself what properties are
actually relied upon. For example, a communication protocol that
numbers its messages may rely only on the numbers being distinct; or
it may rely on them increasing; or perhaps even being totally ordered.
In none of these cases should integers be used.
Wow!
This is important. I want to deeply understand this.
To help me understand, I created two versions of the communication protocol.
The first version uses the Int datatype:
sig Message {
  number: Int
}
The second version does not:
sig Message {
  number: Number
}
sig Number {}
Is the second version more abstract? How is it more abstract? Is it more abstract because Number is not tied to a datatype? The first version is less abstract because it specifies a datatype (Int)?
Are datatypes the enemy of abstraction?
No, the second is no better. Suppose your messages aren't totally ordered, but just partially ordered. Then the point is that rather than assigning an index to each message, you'd do better to make the partial ordering on messages explicit:
sig Message {follows: set Message, ...}
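A rough Haskell analogue of the same idea (my own sketch; the follows name mirrors the Alloy field above): the ordering is explicit data, and nothing relies on integer indices.
import qualified Data.Map as Map

newtype MsgId = MsgId String deriving (Eq, Ord, Show)

-- explicit partial order: each message lists the messages it
-- directly follows; no numbers anywhere
type Follows = Map.Map MsgId [MsgId]

-- does b follow a, directly or transitively? (terminates because
-- a partial order is acyclic)
followsTrans :: Follows -> MsgId -> MsgId -> Bool
followsTrans rel a b =
  let direct = Map.findWithDefault [] b rel
  in a `elem` direct || any (followsTrans rel a) direct

-- m3 follows m2 follows m1; m2' also follows m1 but is
-- incomparable to m2 and m3
example :: Follows
example = Map.fromList
  [ (MsgId "m2" , [MsgId "m1"])
  , (MsgId "m2'", [MsgId "m1"])
  , (MsgId "m3" , [MsgId "m2"])
  ]

main :: IO ()
main = do
  print (followsTrans example (MsgId "m1") (MsgId "m3"))    -- True
  print (followsTrans example (MsgId "m2") (MsgId "m2'"))   -- False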

Maintaining complex state in Haskell

Suppose you're building a fairly large simulation in Haskell. There are many different types of entities whose attributes update as the simulation progresses. Let's say, for the sake of example, that your entities are called Monkeys, Elephants, Bears, etc.
What is your preferred method for maintaining these entities' states?
The first and most obvious approach I thought of was this:
mainLoop :: [Monkey] -> [Elephant] -> [Bear] -> String
mainLoop monkeys elephants bears =
  let monkeys'   = updateMonkeys monkeys
      elephants' = updateElephants elephants
      bears'     = updateBears bears
  in
    if shouldExit monkeys elephants bears then "Done" else
      mainLoop monkeys' elephants' bears'
It's already ugly having each type of entity explicitly mentioned in the mainLoop function signature. You can imagine how it would get absolutely awful if you had, say, 20 types of entities. (20 is not unreasonable for complex simulations.) So I think this is an unacceptable approach. But its saving grace is that functions like updateMonkeys are very explicit in what they do: They take a list of Monkeys and return a new one.
So then the next thought would be to roll everything into one big data structure that holds all state, thus cleaning up the signature of mainLoop:
mainLoop :: GameState -> String
mainLoop gs0 =
  let gs1 = updateMonkeys gs0
      gs2 = updateElephants gs1
      gs3 = updateBears gs2
  in
    if shouldExit gs0 then "Done" else
      mainLoop gs3
Some would suggest that we wrap GameState up in a State Monad and call updateMonkeys etc. in a do. That's fine. Some would rather suggest we clean it up with function composition. Also fine, I think. (BTW, I'm a novice with Haskell, so maybe I'm wrong about some of this.)
But then the problem is, functions like updateMonkeys don't give you useful information from their type signature. You can't really be sure what they do. Sure, updateMonkeys is a descriptive name, but that's little consolation. When I pass in a god object and say "please update my global state," I feel like we're back in the imperative world. It feels like global variables by another name: You have a function that does something to the global state, you call it, and you hope for the best. (I suppose you still avoid some concurrency problems that would be present with global variables in an imperative program. But meh, concurrency isn't nearly the only thing wrong with global variables.)
A further problem is this: Suppose the objects need to interact. For example, we have a function like this:
stomp :: Elephant -> Monkey -> (Elephant, Monkey)
stomp elephant monkey =
  (elongateEvilGrin elephant, decrementHealth monkey)
Say this gets called in updateElephants, because that's where we check to see if any of the elephants are in stomping range of any monkeys. How do you elegantly propagate the changes to both the monkeys and elephants in this scenario? In our second example, updateElephants takes and returns a god object, so it could effect both changes. But this just muddies the waters further and reinforces my point: With the god object, you're effectively just mutating global variables. And if you're not using the god object, I'm not sure how you'd propagate those types of changes.
What to do? Surely many programs need to manage complex state, so I'm guessing there are some well-known approaches to this problem.
Just for the sake of comparison, here's how I might solve the problem in the OOP world. There would be Monkey, Elephant, etc. objects. I'd probably have class methods to do lookups in the set of all live animals. Maybe you could look up by location, by ID, whatever. Thanks to the data structures underlying the lookup functions, they'd stay allocated on the heap. (I'm assuming GC or reference counting.) Their member variables would get mutated all the time. Any method of any class would be able to mutate any live animal of any other class. E.g. an Elephant could have a stomp method that would decrement the health of a passed-in Monkey object, and there would be no need to pass that change back up the call stack.
Likewise, in an Erlang or other actor-oriented design, you could solve these problems fairly elegantly: Each actor maintains its own loop and thus its own state, so you never need a god object. And message passing allows one object's activities to trigger changes in other objects without passing a bunch of stuff all the way back up the call stack. Yet I have heard it said that actors in Haskell are frowned upon.
The answer is functional reactive programming (FRP). It is a hybrid of two coding styles: component state management and time-dependent values. Since FRP is actually a whole family of design patterns, I want to be more specific: I recommend Netwire.
The underlying idea is very simple: You write many small, self-contained components each with their own local state. This is practically equivalent to time-dependent values, because each time you query such a component you may get a different answer and cause a local state update. Then you combine those components to form your actual program.
While this sounds complicated and inefficient, it's actually just a very thin layer around regular functions. The design pattern implemented by Netwire is inspired by AFRP (Arrowized Functional Reactive Programming). It's probably different enough to deserve its own name (WFRP?). You may want to read the tutorial.
In any case a small demo follows. Your building blocks are wires:
myWire :: WireP A B
Think of this as a component. It is a time-varying value of type B that depends on a time-varying value of type A, for example a particle in a simulator:
particle :: WireP [Particle] Particle
It depends on a list of particles (for example all currently existing particles) and is itself a particle. Let's use a predefined wire (with a simplified type):
time :: WireP a Time
This is a time-varying value of type Time (= Double). Well, it's time itself (starting at 0 counted from whenever the wire network was started). Since it doesn't depend on another time-varying value you can feed it whatever you want, hence the polymorphic input type. There are also constant wires (time-varying values that don't change over time):
pure 15 :: WireP a Integer
-- or even:
15 :: WireP a Integer
To connect two wires you simply use categorical composition:
integral_ 3 . 15
This gives you a clock at 15x real time speed (the integral of 15 over time) starting at 3 (the integration constant). Thanks to various class instances wires are very handy to combine. You can use your regular operators as well as applicative style or arrow style. Want a clock that starts at 10 and is twice the real time speed?
10 + 2*time
Want a particle that starts at (0, 0) with (0, 0) velocity and accelerates at (2, 1) per second per second?
integral_ (0, 0) . integral_ (0, 0) . pure (2, 1)
Want to display statistics while the user presses the spacebar?
stats . keyDown Spacebar <|> "stats currently disabled"
This is just a small fraction of what Netwire can do for you.
I know this is an old topic, but I am facing the same problem right now while trying to implement the Rail Fence cipher exercise from exercism.io. It is quite disappointing to see such a common problem get so little attention in Haskell. I don't accept that to do something as simple as maintaining state I need to learn FRP. So I continued googling and found a solution that looks more straightforward - the State monad: https://en.wikibooks.org/wiki/Haskell/Understanding_monads/State
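For completeness, here is a minimal sketch of that State-monad approach applied to this question's setting (GameState and its fields are toy stand-ins I made up, not a full design): the update functions thread the state implicitly, so mainLoop's signature no longer names every entity type.
import Control.Monad (replicateM_)
import Control.Monad.State

-- toy stand-ins for the question's entities
data GameState = GameState { monkeyHealth :: Int, elephantGrin :: Int }
  deriving Show

-- each update reads and writes only through the State monad
updateMonkeys, updateElephants :: State GameState ()
updateMonkeys   = modify (\gs -> gs { monkeyHealth = monkeyHealth gs - 1 })
updateElephants = modify (\gs -> gs { elephantGrin = elephantGrin gs + 1 })

step :: State GameState ()
step = updateMonkeys >> updateElephants

main :: IO ()
main = print (execState (replicateM_ 3 step) (GameState 10 0))
-- GameState {monkeyHealth = 7, elephantGrin = 3}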
