What is the interpretation of <>P -> (!P U R) - model-checking

What is the interpretation of <>P -> (!P U R)? This seems to be a contradiction, since the premise expects P to hold at some point in the future, while the conclusion asserts the absence of P until R.
A model-checking tool accepts this property using both BDD-based and BMC techniques.

I don't see any contradiction.
The property is made true by any Büchi automaton such that either
P is always false (so the premise of the implication is vacuously false), or
P eventually becomes true, but P stays false until R becomes true, i.e. R fires before the first occurrence of P.
In natural language, one would express the property as follows: "If, by any chance, P sooner or later becomes true, then it must be the case that the 'event' R fired first."
For instance, you can imagine P to be "an answer message is sent" and R to be "an input message was received", so that the whole property reads "no unsolicited answer message is ever sent".
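To make the reading concrete, here is a minimal sketch that checks the formula over a finite trace (a rough approximation of the LTL semantics over infinite words; the State type and all names are made up for illustration):

type State = (Bool, Bool)                 -- (P, R) at each step

eventually :: (State -> Bool) -> [State] -> Bool
eventually = any

untilOp :: (State -> Bool) -> (State -> Bool) -> [State] -> Bool
untilOp _ _ []       = False              -- strong until: R must actually fire
untilOp f g (s:rest) = g s || (f s && untilOp f g rest)

-- <>P -> (!P U R)
holds :: [State] -> Bool
holds trace = not (eventually fst trace) || untilOp (not . fst) snd trace

-- holds [(False,False),(False,True),(True,False)]  ==  True   (R fires before P)
-- holds [(True,False)]                             ==  False  (P fires, R never did)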

Related

In what logical sense is Haskell referentially transparent?

Haskell is sometimes said to let you "replace equals by equals". The following code shows this isn't true under every interpretation of that sentence. Wikipedia follows the claim by saying f(x)=f(x) for every x, but that doesn't seem to carry any logical content one can test: it is true by the reflexive law, a tautology.
I think the phrasing needed to make a logical claim like this is more like Leibniz's law (the indiscernibility of identicals):
x=y implies for every f, f(x)=f(y). That claim fails in the illustration below within Haskell. (We override == to make a partition type, but our function definition can freely ignore this, and does.)
My question is, can one actually state referential transparency in a way that can be logically tested, and does Haskell actually uphold that logical claim?
module Main (main) where

data Floop = One | Two | Three

instance Eq Floop where
  One   == One   = True
  One   == Two   = False
  One   == Three = False
  Two   == One   = False
  Two   == Two   = True
  Two   == Three = True  --- 2=3
  Three == One   = False
  Three == Two   = True  --- 3=2
  Three == Three = True

shuffle :: Floop -> Floop
shuffle One   = Two
shuffle Two   = Two  --- fix 2
shuffle Three = One  --- move 3

main = print ((Two == Three) && (shuffle Two /= shuffle Three))
--- prints "True" proving Haskell violates Leibniz Law
Expanding slightly on what I already said in my comment (thanks #FyodorSolkin for the prod):
You haven't violated referential transparency there, you've just made a pathological Eq instance.
As you've observed, the language doesn't forbid you from doing this, just as it doesn't forbid you from writing unlawful Functor or Monad instances. (It would be totally infeasible to try to check these laws in practice.) But just because something doesn't cause a compiler error doesn't necessarily mean it's the right thing to do.
So the problem with your example is that while, semantically, (==) in Haskell indeed means "equal", it's just a function, in fact a method of a typeclass - which you can therefore implement however you want. Nothing stops me from defining, for example:
instance Eq (a -> b) where
  _ == _ = True
and suddenly all functions will be considered "equal" under this definition. Clearly referential transparency would be violated if we regarded this as a true definition of equality. My point is that it's not. In fact, it's quite obvious what "equality" means for any type which isn't a function type and doesn't otherwise depend on or "contain" function types. (It's actually obvious what equality of functions should mean too; there just cannot be a general algorithm for deciding whether two arbitrary functions are equal.)
[EDIT: I just remembered it also doesn't make much sense to talk about equality of IO actions. There might be some other abstract types like that where there's no clear definition of what equality would mean.]
To stray into abstract mathematics for a minute: your Eq instance certainly defines an equivalence relation, which is considered a sort of "generalised equality" - and indeed becomes genuine equality if you use the relation to form equivalence classes. But then it makes no sense to define, on such a domain/type, a function which differs on different elements of the same equivalence class. Such a thing - as in your example - fundamentally fails to be a well-defined mathematical function, because you're defining it on the individual elements in a way that fails to respect the equivalence relation.
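For contrast, here is a sketch (reusing the Floop type from the code above; the function is mine, not the poster's) of a definition that does respect that equivalence relation, because it sends Two and Three, which the instance identifies, to results that are themselves equal:

respectfulShuffle :: Floop -> Floop
respectfulShuffle One   = Two
respectfulShuffle Two   = One
respectfulShuffle Three = One  -- agrees with the Two case up to (==), so x == y implies equal results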
f(x)=f(x) for every x
is by no means a tautology. In many popular languages, this property does not hold. Consider Java, for instance:
import java.util.*;

public class Transparency {
    static int f(List<Object> xs) {
        xs.add(xs.size());
        return xs.size();
    }

    public static void main(String[] args) {
        List<Object> x = new ArrayList<>();
        System.out.println("Is java referentially transparent? " + (f(x) == f(x)));
    }
}
$ javac Transparency.java
$ java Transparency
Is java referentially transparent? false
Here, because f mutates its input x, substituting x's definition into f(x) == f(x) changes the behaviour: f(new ArrayList<>()) == f(new ArrayList<>()) is in fact true, but when a variable is used to avoid the duplication, the expression evaluates to false. In Haskell, such a substitution is always valid (disregarding cheats like unsafePerformIO).
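For comparison, a small Haskell sketch of the same substitution (the function f here is made up for illustration): since f cannot mutate its argument, naming the expression and inlining it give the same result.

f :: [Int] -> Int
f xs = length (0 : xs)   -- pure: builds a new list, never mutates xs

main :: IO ()
main = do
  let x = []
  print (f x == f x)     -- True
  print (f [] == f [])   -- still True after substituting x's definition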

how do I show a regular language is equal to another?

So this is related to regular languages, their quotients, and closure properties. I am supposed to first say whether the statement is true or false and then explain why. I don't know how to answer the following question.
1.)
Which of the following identities are true?
(L/a)a = L (the left side represents the concatenation of the languages L/a and {a}).
If they are not equal, this is most easily shown by a counterexample. Equality is usually shown by proving the inclusion in both directions.
Here, take for example L = {b}. Then L/a is the empty language (no word followed by a yields b), and thus (L/a)a is also empty, which is not equal to L. Note that the inclusion (L/a)a ⊆ L always holds; it is the reverse inclusion that fails whenever L contains a word that does not end in a.
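As a quick sanity check, here is a sketch of both operations for finite languages (the Lang representation and names are mine, not part of the question):

import qualified Data.Set as Set

type Lang = Set.Set String

-- L/a = { w | w ++ [a] is in L }
quotient :: Lang -> Char -> Lang
quotient l a = Set.fromList [init w | w <- Set.toList l, not (null w), last w == a]

-- (L/a)a
appendA :: Lang -> Char -> Lang
appendA l a = Set.map (++ [a]) l

-- appendA (quotient (Set.fromList ["b"]) 'a') 'a'       ==> fromList []          (/= fromList ["b"])
-- appendA (quotient (Set.fromList ["a","ba"]) 'a') 'a'  ==> fromList ["a","ba"]  (the inclusion direction that always holds)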

Under what circumstances could Common Subexpression Elimination affect the laziness of a Haskell program?

From wiki.haskell.org:
First of all, common subexpression elimination (CSE) means that if an expression appears in several places, the code is rearranged so that the value of that expression is computed only once. For example:
foo x = (bar x) * (bar x)
might be transformed into
foo x = let x' = bar x in x' * x'
thus, the bar function is only called once. (And if bar is a particularly expensive function, this might save quite a lot of work.)
GHC doesn't actually perform CSE as often as you might expect. The trouble is, performing CSE can affect the strictness/laziness of the program. So GHC does do CSE, but only in specific circumstances --- see the GHC manual. (Section??)
Long story short: "If you care about CSE, do it by hand."
I'm wondering under what circumstances CSE "affects" the strictness/laziness of the program and what kind of effect that could be.
The naive CSE rule would be
e'[e, e] ~> let x = e in e'[x, x].
That is, whenever a subexpression e occurs twice in the expression e', we use a let-binding to compute e once. This however leads itself to some trivial space leaks. For example
sum [1..n] + product [1..n]
typically runs in O(1) space in a lazy functional programming language like Haskell: sum and product each consume the lazily generated list as it is produced, so it never exists in memory all at once. With the naive CSE rule applied, the list is shared, and all of it must be retained while the first traversal runs, so space usage becomes O(n). This can be terrible when n is large!
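Concretely, the naive rule would perform a transformation like this (an illustrative sketch, not what GHC emits):

-- before: [1..n] is produced lazily and consumed twice, and can be collected as it goes
before :: Integer -> Integer
before n = sum [1..n] + product [1..n]

-- after naive CSE: the shared list stays live until the second traversal begins,
-- so the whole list is retained in memory at once
after :: Integer -> Integer
after n = let xs = [1..n] in sum xs + product xs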
The approach is then to make the rule more specific, restricting it to a small set of cases that we know won't have this problem. We can begin by enumerating the problems with the naive rule more precisely, which gives us a set of priorities for developing a better CSE:
The two occurrences of e might be far apart in e', leading to a long lifetime for the let x = e binding.
The let-binding must always allocate a closure where previously there might not have been one.
This can create an unbounded number of closures.
There are cases where the closure might never be deallocated.
Something better
let x = e in e'[e] ~> let x = e in e'[x]
This is a more conservative rule but much safer. Here we recognise that e appears twice, but the first occurrence syntactically dominates the second, meaning the programmer has already introduced a let-binding for it. We can safely reuse that binding and replace the second occurrence of e with x. No new closures are allocated.
Another example of syntactic domination:
case e of { x -> e'[e] } ~> case e of { x -> e'[x] }
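In concrete Haskell, that rule looks something like this (names made up):

-- before: the scrutinee expression is written out twice
f :: [a] -> String
f xs = case length xs of
         n -> replicate (length xs) '*' ++ show n

-- after: the case binder n already names that value, so reuse it
f' :: [a] -> String
f' xs = case length xs of
          n -> replicate n '*' ++ show n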
And yet another:
case e of {
  Constructor x0 x1 ... xn ->
    e'[e]
}
~>
case e of {
  Constructor x0 x1 ... xn ->
    e'[Constructor x0 x1 ... xn]
}
These rules all take advantage of existing structure in the program to ensure that its space behaviour remains the same before and after the transformation. They are much more conservative than the original CSE rule, but they are also much safer.
See also
For a full discussion of CSE in a lazy FPL, read Chitil's (very accessible) 1997 paper. For a full treatment of how CSE works in a production compiler, see GHC's CSE.hs module, which is documented very thoroughly thanks to GHC's tradition of writing long footnotes. The comment-to-code ratio in that module is off the charts. Also note how old that file is (1993)!

G-machine, (non-)strict contexts - why case expressions need special treatment

I'm currently reading Implementing functional languages: a tutorial by SPJ and the (sub)chapter I'll be referring to in this question is 3.8.7 (page 136).
The first remark there is that a reader following the tutorial has not yet implemented C scheme compilation (that is, compilation of expressions appearing in non-strict contexts) of ECase expressions.
The solution proposed is to transform the Core program so that ECase expressions simply never appear in non-strict contexts. Specifically, each such occurrence gives rise to a new supercombinator with exactly one variable, whose body is the original ECase expression, and the occurrence itself is replaced by a call to that supercombinator.
Below I present a (slightly modified) example of such a transformation from the tutorial:
t a b = Pack{2,1} ;
f x = Pack{2,2} (case t x 7 6 of
                   <1> -> 1;
                   <2> -> 2) Pack{1,0} ;
main = f 3

== transformed into ==>

t a b = Pack{2,1} ;
f x = Pack{2,2} ($Case1 (t x 7 6)) Pack{1,0} ;
$Case1 x = case x of
             <1> -> 1;
             <2> -> 2 ;
main = f 3
I implemented this solution and it works like a charm; that is, the output is Pack{2,2} 2 Pack{1,0}.
However, what I don't understand is: why all that trouble? I hope it's not just me, but my first thought for solving the problem was simply to implement compilation of ECase expressions in the C scheme. I did it by mimicking the rule for compilation in the E scheme (page 134 of the tutorial, reproduced here for completeness): so I used
E[[case e of alts]] p = E[[e]] p ++ [Casejump D[[alts]] p]
and wrote
C[[case e of alts]] p = C[[e]] p ++ [Eval] ++ [Casejump D[[alts]] p]
I added [Eval] because Casejump needs an argument on top of the stack in weak head normal form (WHNF) and C scheme doesn't guarantee that, as opposed to E scheme.
But then the output changes to the enigmatic Pack{2,2} 2 6.
The same applies when I use the same rule as for E scheme, i.e.
C[[case e of alts]] p = E[[e]] p ++ [Casejump D[[alts]] p]
So I guess that my "obvious" solution is inherently wrong - I can see that from the outputs. But I'm having trouble stating a formal argument as to why that approach was bound to fail.
Can someone provide me with such argument/proof or some intuition as to why the naive approach doesn't work?
The purpose of the C scheme is not to perform any computation, but to delay everything until an EVAL happens (which may or may not ever happen). What are you doing in your proposed code generation for case? You're calling EVAL! The whole purpose of C is to not call EVAL on anything, so you've now evaluated something prematurely.
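A rough analogy at the Haskell source level (not in G-code): forcing an expression that a non-strict context never demands can change the meaning of the program.

lazyConst :: a -> b -> a
lazyConst x _ = x

ok :: Int
ok = lazyConst 1 (case (undefined :: Bool) of   -- never demanded, so never evaluated
                    True  -> 1
                    False -> 2)
-- ok evaluates to 1.  Eagerly evaluating the case expression, as the naive
-- C-scheme rule effectively does, would amount to forcing `undefined` here:
-- a crash that the original program does not have.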
The only way you could generate code directly for case in the C scheme would be to add some new instruction to perform the case analysis once it's evaluated.
But we (Thomas Johnsson and I) decided it was simpler to just lift out such expressions. The exact historical details are lost in time though. :)

Haskell and Lambda-Calculus: Implementing Alpha-Congruence (Alpha-Equivalence)

I am implementing an impure untyped lambda-calculus interpreter in Haskell.
I'm presently stuck on implementing "alpha-congruence" (also called "alpha-equivalence" or "alpha-equality" in some textbooks). I want to be able to check whether two lambda-expressions are equal or not equal to each other. For example, if I enter the following expression into the interpreter it should yield True (\ is used to indicate the lambda symbol):
>\x.x == \y.y
True
The problem is understanding whether the following lambda-expressions are considered alpha-equivalent or not:
>\x.xy == \y.yx
???
>\x.yxy == \z.wzw
???
In the case of \x.xy == \y.yx I would guess that the answer is True. This is because \x.xy => \z.zy and \y.yx => \z.zy and the right-hand sides of both are equal (where the symbol => is used to denote alpha-reduction).
In the case of \x.yxy == \z.wzw I would likewise guess that the answer is True. This is because \x.yxy => \a.yay and \z.wzw => \a.waw, which (I think) are equal.
The trouble is that all of my textbooks' definitions state that only the names of the bound variables need to be changed for two lambda-expressions to be considered equal. They say nothing about the free variables in an expression also needing to be renamed uniformly. So even though y and w are both in their corresponding places in the two lambda-expressions, how would the program "know" that the first y represents the first w and the second y represents the second w? I would need to be consistent about this in an implementation.
In short, how would I go about implementing an error-free version of a function isAlphaCongruent? What are the exact rules that I need to follow in order for this to work?
I would prefer to do this without using de Bruijn indices.
You are misunderstanding: different free variables are not alpha equivalent. So y /= x, and \w.wy /= \w.wx, and \x.xy /= \y.yx. Similarly, \x.yxy /= \z.wzw because y /= w.
Your book says nothing about free variables being allowed to be uniformly renamed because they are not allowed to be uniformly renamed.
(Think of it this way: if I haven't yet told you the definition of not and id, would you expect \x. not x and \x. id x to be equivalent? I sure hope not!)
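As for the implementation question, here is a minimal sketch of one common approach that works directly on named terms (the Term type and every name below are assumptions, not the poster's actual code): walk both terms in parallel, carrying one map per side from bound-variable names to the numeric level of the binder that introduced them; free variables must be literally identical.

import qualified Data.Map as Map

data Term = Var String | App Term Term | Lam String Term

isAlphaCongruent :: Term -> Term -> Bool
isAlphaCongruent = go 0 Map.empty Map.empty
  where
    go _ envL envR (Var x) (Var y) =
      case (Map.lookup x envL, Map.lookup y envR) of
        (Just i, Just j)   -> i == j    -- both bound: must point at corresponding binders
        (Nothing, Nothing) -> x == y    -- both free: names must match exactly
        _                  -> False     -- one bound, one free
    go d envL envR (App f a) (App g b) =
      go d envL envR f g && go d envL envR a b
    go d envL envR (Lam x s) (Lam y t) =
      go (d + 1) (Map.insert x d envL) (Map.insert y d envR) s t
    go _ _ _ _ _ = False

-- isAlphaCongruent (Lam "x" (Var "x")) (Lam "y" (Var "y"))  == True
-- isAlphaCongruent (Lam "x" (App (Var "x") (Var "y")))
--                  (Lam "y" (App (Var "y") (Var "x")))      == False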

Resources