I'm using SBV (with the Z3 backend) in Haskell to create some theorem provers. I want to check whether, for all x and y satisfying given constraints (like x + y = y + x, where + is a "plus operator", not addition), some other terms are valid. I want to define axioms about the + expression (like associativity, identity, etc.) and then check further equalities, for example whether a + (b + c) == (a + c) + b is valid for all a, b and c.
I was trying to accomplish it using something like:
main = do
  let x = forall "x"
  let y = forall "y"
  out <- prove $ (x .== x)
  print "end"
But it seems we cannot use the .== operator on symbolic values. Is this a missing feature or wrong usage? Are we able to do it somehow using SBV?
That sort of reasoning is indeed possible, through the use of uninterpreted sorts and functions. Be warned, however, that reasoning about such structures typically requires quantified axioms, and SMT-solvers are usually not terribly good at reasoning with quantifiers.
Having said that, here's how I would go about it, using SBV.
First, some boiler-plate code to get an uninterpreted type T:
{-# LANGUAGE DeriveDataTypeable #-}
import Data.Generics
import Data.SBV
-- Uninterpreted type T
data T = TBase () deriving (Eq, Ord, Data, Typeable, Read, Show)
instance SymWord T
instance HasKind T
type ST = SBV T
Once you do this, you'll have access to an uninterpreted type T and its symbolic counterpart ST. Let's declare plus and zero, again just uninterpreted constants with the right types:
-- Uninterpreted addition
plus :: ST -> ST -> ST
plus = uninterpret "plus"
-- Uninterpreted zero
zero :: ST
zero = uninterpret "zero"
So far, all we told SBV is that there exists a type T, and a function plus, and a constant zero; expressly being uninterpreted. That is, the SMT solver makes no assumptions other than the fact that they have the given types.
Let's first try to prove that 0+x = x:
bad = prove $ \x -> zero `plus` x .== x
If you try this, you'll get the following response:
*Main> bad
Falsifiable. Counter-example:
s0 = T!val!0 :: T
What the SMT solver is telling you is that the property does not hold, and here's a value where it doesn't hold. The value T!val!0 is a Z3-specific response; other solvers can return other things. It's essentially an internal identifier for an inhabitant of the type T; other than that we know nothing about it. This isn't terribly useful of course, as you don't really know what associations it made for plus and zero, but it is to be expected.
To prove the property, let's tell the SMT solver two more things. First, that plus is commutative. And second, that zero added on the right doesn't do anything. These are done via addAxiom calls. Unfortunately, you have to write your axioms in the SMTLib syntax, as SBV doesn't (at least yet) support axioms written using Haskell. Note also we switch to using the Symbolic monad here:
good = prove $ do
         addAxiom "plus-zero-axioms"
                  [ "(assert (forall ((x T) (y T)) (= (plus x y) (plus y x))))"
                  , "(assert (forall ((x T)) (= (plus x zero) x)))"
                  ]
         x <- free "x"
         return $ zero `plus` x .== x
Note how we told the solver x+y = y+x and x+0 = x, and asked it to prove 0+x = x. Writing axioms this way looks really ugly since you have to use the SMTLib syntax, but that's the current state of affairs. Now we have:
*Main> good
Q.E.D.
Quantified axioms and uninterpreted-types/functions are not the easiest things to use via the SBV interface, but you can get some mileage out of it this way. If you have heavy use of quantifiers in your axioms, it's unlikely that the solver will be able to answer your queries; and will likely respond unknown. It all depends on the solver you use, and how hard the properties to prove are.
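For instance, if you want to go after the a + (b + c) == (a + c) + b query from the original question, the same pattern applies. Here's a sketch (assuming the T/ST/plus boiler-plate above is in scope; commutativity and associativity are given as axioms, and the solver may well answer unknown given the quantifiers; the name assocQuery is mine):

-- A sketch only: reuses T, ST and plus from above.
assocQuery :: IO ThmResult
assocQuery = prove $ do
        addAxiom "plus-axioms"
          [ "(assert (forall ((x T) (y T)) (= (plus x y) (plus y x))))"
          , "(assert (forall ((x T) (y T) (z T)) (= (plus x (plus y z)) (plus (plus x y) z))))"
          ]
        a <- free "a"
        b <- free "b"
        c <- free "c"
        return $ a `plus` (b `plus` c) .== (a `plus` c) `plus` b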
Your use of the API isn't quite right. The simplest way to prove mathematical equalities would be to use simple functions. For instance, associativity over unbounded Integers can be expressed this way:
prove $ \x y z -> x + (y + z) .== (x + y) + (z :: SInteger)
If you need a more programmatic interface (and sometimes you will), then you can use the Symbolic monad, thusly:
plusAssoc = prove $ do x <- sInteger "x"
                       y <- sInteger "y"
                       z <- sInteger "z"
                       return $ x + (y + z) .== (x + y) + z
I'd recommend browsing through many of the examples provided in the hackage site to get familiar with the API: https://hackage.haskell.org/package/sbv
Graham Hutton, in the 2nd edition of Programming in Haskell, spends the last 2 chapters on the topic of stack machine based implementation of an AST.
And he finishes by showing how to derive the correct implementation of that machine from the semantic model of the AST.
I'm trying to enlist the help of Data.SBV in that derivation, and failing.
And I'm hoping that someone can help me understand whether I'm:
Asking for something that Data.SBV can't do, or
Asking Data.SBV for something it can do, but asking incorrectly.
-- test/sbv-stack.lhs - Data.SBV assisted stack machine implementation derivation.
{-# LANGUAGE OverloadedLists #-}
{-# LANGUAGE ScopedTypeVariables #-}

import Data.SBV
import qualified Data.SBV.List as L
import Data.SBV.List ((.:), (.++))  -- Since they don't collide w/ any existing list functions.

-- AST Definition
data Exp = Val SWord8
         | Sum Exp Exp

-- Our "Meaning" Function
eval :: Exp -> SWord8
eval (Val x)   = x
eval (Sum x y) = eval x + eval y

type Stack = SList Word8

-- Our "Operational" Definition.
--
-- This function attempts to implement the *specification* provided by our
-- "meaning" function, above, in a way that is more conducive to
-- implementation in our available (and, perhaps, quite primitive)
-- computational machinery.
--
-- Note that we've (temporarily) assumed that this machinery will consist
-- of some form of *stack-based computation engine* (because we're
-- following Hutton's example).
--
-- Note that we give the *specification* of the function in the first
-- (commented out) line of the definition. The derivation of the actual
-- correct definition from this specification is detailed in Ch. 17 of
-- Hutton's book.
eval' :: Exp -> Stack -> Stack
-- eval' e s = eval e : s  -- our "specification"
eval' (Val n) s = push n s  -- We're defining this one manually.
  where
    push :: SWord8 -> Stack -> Stack
    push n s = n .: s
eval' (Sum x y) s = add (eval' y (eval' x s))
  where
    add :: Stack -> Stack
    add = uninterpret "add"  -- This is the function we're asking to be derived.
-- Now, let's just ask SBV to "solve" our specification of `eval'`:
spec :: Goal
spec = do x :: SWord8 <- forall "x"
          y :: SWord8 <- forall "y"
          -- Our spec., from above, specialized to the `Sum` case:
          constrain $ eval' (Sum (Val x) (Val y)) L.nil .== eval (Sum (Val x) (Val y)) .: L.nil
We get:
λ> :l test/sbv-stack.lhs
[1 of 1] Compiling Main ( test/sbv-stack.lhs, interpreted )
Ok, one module loaded.
Collecting type info for 1 module(s) ...
λ> sat spec
Unknown.
Reason: smt tactic failed to show goal to be sat/unsat (incomplete quantifiers)
What happened?!
Well, maybe, asking SBV to solve for anything other than a predicate (i.e. - a -> Bool) doesn't work?
The fundamental issue here is that you are mixing SMTLib's sequence logic and quantifiers. And the problem turns out to be too difficult for an SMT solver to handle. This sort of synthesis of functions is indeed possible if you restrict yourself to basic logics. (Bitvectors, Integers, Reals.) But adding sequences to the mix puts it into the undecidable fragment.
This doesn't mean z3 cannot synthesize your add function. Perhaps a future version might be able to handle it. But at this point you're at the mercy of heuristics. To see why, note that you're asking the solver to synthesize the following definition:
add :: Stack -> Stack
add s = v .: s''
  where (a, s')  = L.uncons s
        (b, s'') = L.uncons s'
        v        = a + b
While this looks rather innocent and simple, it requires capabilities beyond the current abilities of z3. In general, z3 can currently synthesize functions that make only a finite number of choices on concrete elements. But it is unable to do so if the output depends on the input for every choice of input. (Think of it as a case-analysis producing engine: it can conjure up a function that maps certain inputs to others, but cannot figure out if something should be incremented or two things must be added. This follows from the work in finite-model finding theory, and is way beyond the scope of this answer! See here for details: https://arxiv.org/abs/1706.00096)
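To get a feel for what does fall within those abilities, here is a hypothetical sketch (the function h and the name finiteChoices are made up for illustration): if an uninterpreted function is constrained only at a few concrete points, the solver merely has to make finitely many choices and readily finds a model:

import Data.SBV

-- Uninterpreted function, pinned down at just two concrete inputs.
h :: SWord8 -> SWord8
h = uninterpret "h"

finiteChoices :: IO SatResult
finiteChoices = sat $ do
  constrain $ h 0 .== 1
  constrain $ h 1 .== 2
  return sTrue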
A better use case for SBV and SMT solving for this sort of problem is to actually tell it what the add function is, and then prove some given program is correctly "compiled" using Hutton's strategy. Note that I'm explicitly saying a "given" program: It would also be very difficult to model and prove this for an arbitrary program, but you can do this rather easily for a given fixed program. If you are interested in proving the correspondence for arbitrary programs, you really should be looking at theorem provers such as Isabelle, Coq, ACL2, etc.; which can deal with induction, a proof technique you will no doubt need for this sort of problem. Note that SMT solvers cannot perform induction in general. (You can use e-matching to simulate some induction like proofs, but it's a kludge at best and in general unmaintainable.)
Here's your example, coded to prove the \x -> \y -> x + y program is "correctly" compiled and executed with respect to reference semantics:
{-# LANGUAGE ScopedTypeVariables #-}

import Data.SBV
import qualified Data.SBV.List as L
import Data.SBV.List ((.:))

-- AST Definition
data Exp = Val SWord8
         | Sum Exp Exp

-- Our "Meaning" Function
eval :: Exp -> SWord8
eval (Val x)   = x
eval (Sum x y) = eval x + eval y

-- Evaluation by "execution"
type Stack = SList Word8

run :: Exp -> SWord8
run e = L.head (eval' e L.nil)
  where eval' :: Exp -> Stack -> Stack
        eval' (Val n)   s = n .: s
        eval' (Sum x y) s = add (eval' y (eval' x s))

        add :: Stack -> Stack
        add s = v .: s''
          where (a, s')  = L.uncons s
                (b, s'') = L.uncons s'
                v        = a + b

correct :: IO ThmResult
correct = prove $ do x :: SWord8 <- forall "x"
                     y :: SWord8 <- forall "y"

                     let pgm     = Sum (Val x) (Val y)
                         spec    = eval pgm
                         machine = run pgm

                     return $ spec .== machine
When I run this, I get:
*Main> correct
Q.E.D.
And the proof takes almost no time. You can easily extend this by adding other operators, binding forms, function calls, the whole works if you like. So long as you stick to a fixed "program" for verification, it should work out just fine.
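For instance, one hypothetical way to extend it with a multiplication node might look like this (a sketch in the same style as the program above; the names binop and correctExt are mine):

{-# LANGUAGE ScopedTypeVariables #-}
import Data.SBV
import qualified Data.SBV.List as L
import Data.SBV.List ((.:))

data Exp = Val SWord8
         | Sum Exp Exp
         | Mul Exp Exp          -- new operator

eval :: Exp -> SWord8
eval (Val x)   = x
eval (Sum x y) = eval x + eval y
eval (Mul x y) = eval x * eval y

type Stack = SList Word8

run :: Exp -> SWord8
run e = L.head (eval' e L.nil)
  where eval' (Val n)   s = n .: s
        eval' (Sum x y) s = binop (+) (eval' y (eval' x s))
        eval' (Mul x y) s = binop (*) (eval' y (eval' x s))

        -- pop two operands, combine, push the result
        binop :: (SWord8 -> SWord8 -> SWord8) -> Stack -> Stack
        binop op s = (a `op` b) .: s''
          where (a, s')  = L.uncons s
                (b, s'') = L.uncons s'

-- Correspondence for one more fixed program.
correctExt :: IO ThmResult
correctExt = prove $ do x :: SWord8 <- forall "x"
                        y :: SWord8 <- forall "y"
                        let pgm = Mul (Val x) (Sum (Val y) (Val x))
                        return $ eval pgm .== run pgm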
If you make a mistake, let's say define add by subtraction (modify the last line of it to read v = a - b), you get:
*Main> correct
Falsifiable. Counter-example:
x = 32 :: Word8
y = 0 :: Word8
I hope this gives an idea of what the current capabilities of SMT solvers are and how you can put them to use in Haskell via SBV.
Program synthesis is an active research area with many custom techniques and tools. An out of the box use of an SMT-solver will not get you there. But if you do build such a custom system in Haskell, you can use SBV to access an underlying SMT solver to solve many constraints you'll have to handle during the process.
(Aside: An extended example, similar in spirit but with different goals, is shipped with the SBV package: https://hackage.haskell.org/package/sbv-8.5/docs/Documentation-SBV-Examples-Strings-SQLInjection.html. This program shows how to use SBV and SMT solvers to find SQL injection vulnerabilities in an idealized SQL implementation. That might be of some interest here, and would be more aligned with how SMT solvers are typically used in practice.)
Does filling the hole in the following program necessarily require non-constructive means? If yes, is it still the case if x :~: y is decidable?
More generally, how do I use a refutation to guide the type checker?
(I am aware that I can work around the problem by defining Choose as a GADT, I'm asking specifically for type families)
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE TypeFamilies #-}
{-# LANGUAGE TypeOperators #-}
module PropositionalDisequality where
import Data.Type.Equality
import Data.Void
type family Choose x y where
  Choose x x = 1
  Choose x _ = 2
lem :: (x :~: y -> Void) -> Choose x y :~: 2
lem refutation = _
If you try hard enough to implement a function and keep failing, you can convince yourself
that it is not possible. If you're not convinced, the argument can be made
more formal: we enumerate programs exhaustively to find that none is possible. It turns out there are only half a dozen meaningful cases to consider.
I wonder why this argument is not made more often.
Totally not accurate summary:
Act I: proof search is easy.
Act II: dependent types too.
Act III: Haskell is still fine for writing dependently typed programs.
I. The proof search game
First we define the search space.
We can reduce any Haskell definition to one of the form
lem = (exp)
for some expression (exp). Now we only need to find a single expression.
Look at all possible ways of making an expression in Haskell:
https://www.haskell.org/onlinereport/haskell2010/haskellch3.html#x8-220003
(this doesn't account for extensions, exercise for the reader).
It fits a page in a single column, so it's not that big to start with.
Moreover, most of them are sugar for some form of function application or
pattern-matching; we can also desugar away type classes with dictionary
passing, so we're left with a ridiculously small lambda calculus:
lambdas, \x -> ...
pattern-matching, case ... of ...
function application, f x
constructors, C (including integer literals)
constants, c (for primitives that cannot be written in terms of the constructs above, so various built-ins (seq) and maybe FFI if that counts)
variables (bound by lambdas and cases)
We can exclude every constant on the grounds that I think the question is
really about pure lambda calculus (or the reader can enumerate the constants,
to exclude black-magic constants like undefined, unsafeCoerce,
unsafePerformIO that make everything collapse (any type is inhabited and, for
some of those, the type system is unsound), and to be left with white-magic
constants to which the present theoretical argument can be generalized via a
well-funded thesis).
We can also reasonably assume that we want a solution with no recursion involved
(to get rid of noise like lem = lem, and fix if you felt like you couldn't
part with it before), and which actually has a normal form, or preferably, a
canonical form with respect to βη-equivalence. In other words, we refine and
examine the set of possible solutions as follows.
lem :: _ -> _ has a function type, so we can assume WLOG that its definition starts with a lambda:
-- Any solution
lem = (exp)
-- is η-equivalent to a lambda
lem = \refutation -> (exp) refutation
-- so let's assume we do have a lambda
lem = \refutation -> _hole
Now enumerate what could be under the lambda.
It could be a constructor,
which then has to be Refl, but there is no proof that Choose x y ~ 2 in
the context (here we could formalize and enumerate the type equalities the
typechecker knows about and can derive, or make the syntax of coercions
(proofs of equalities) explicit and keep playing this proof search game
with them), so this doesn't type check:
lem = \refutation -> Refl
Maybe there is some way of constructing that equality proof, but then the
expression would start with something else, which is going to be another
case of the proof.
It could be some application of a constructor C x1 x2 ..., or the
variable refutation (applied or not); but there's no possible way that's
well-typed, it has to somehow produce a (:~:), and Refl is really the
only way.
Or it could be a case. WLOG, there is no nested case on the left, nor any
constructor, because the expression could be simplified in both cases:
-- Any left-nested case expression
case (case (e) of { C x1 x2 -> (f) }) of { D y1 y2 -> (g) }
-- is equivalent to a right-nested case
case (e) of { C x1 x2 -> case (f) of { D y1 y2 -> (g) } }
-- Any case expression with a nested constructor
case (C u v) of { C x1 x2 -> f x1 x2 }
-- reduces to
f u v
So the last subcase is the variable case:
lem = \refutation -> case refutation (_hole :: x :~: y) of {}
and we have to construct a x :~: y. We enumerate ways of filling the
_hole again. It's either Refl, but no proof is available, or
(skipping some steps) case refutation (_anotherHole :: x :~: y) of {},
and we have an infinite descent on our hands, which is also absurd.
A different possible argument here is that we can pull out the case
from the application, to remove this case from consideration WLOG.
-- Any application to a case
f (case e of C x1 x2 -> g x1 x2)
-- is equivalent to a case with the application inside
case e of C x1 x2 -> f (g x1 x2)
There are no more cases. The search is complete, and we didn't find an
implementation of (x :~: y -> Void) -> Choose x y :~: 2. QED.
To read more on this topic, I guess a course/book about lambda calculus up
until the normalization proof of the simply-typed lambda calculus should give
you the basic tools to start with. The following thesis contains an
introduction on the subject in its first part, but admittedly I'm a poor judge
of the difficulty of such material: Which types have a unique inhabitant?
Focusing on pure program equivalence,
by Gabriel Scherer.
Feel free to suggest more adequate resources and literature.
II. Fixing the proposition and proving it with dependent types
Your initial intuition that this should encode a valid proposition
is definitely valid. How might we fix it to make it provable?
Technically, the type we are looking at is quantified with forall:
forall x y. (x :~: y -> Void) -> Choose x y :~: 2
An important feature of forall is that it is an irrelevant quantifier.
The variables it introduces cannot be used "directly" in a term of this type. Although that aspect becomes more prominent in the presence of dependent types,
it still pervades Haskell today, providing another intuition for why this (and many other examples) is not "provable" in Haskell: if you think about why you
think that proposition is valid, you will naturally start with a case split about whether x is equal to y, but to even do such a case split you need a way to
decide which side you're on, which will of course have to look at x and y,
so they cannot be irrelevant. forall in Haskell is not at all like what most people mean with "for all".
Some discussion on the matter of relevance can be found in the thesis Dependent Types in Haskell, by Richard Eisenberg (in particular, Section 3.1.1.5 for an initial example, Section 4.3 for relevance in Dependent Haskell and Section 8.7 for comparison with other languages with dependent types).
Dependent Haskell will need a relevant quantifier to complement forall, and
which would get us closer to proving this:
foreach x y. (x :~: y -> Void) -> Choose x y :~: 2
Then we could probably write this:
lem :: foreach x y. (x :~: y -> Void) -> Choose x y :~: 2
lem x y p = case x ==? y of
  Left r       -> absurd (p r)  -- x ~ y, that's absurd!
  Right Irrefl -> Refl          -- x /~ y, so Choose x y = 2
That also assumes a first-class notion of disequality /~, complementing ~,
to help Choose reduce when it is in the context and a decision function
(==?) :: foreach x y. Either (x :~: y) (x :/~: y).
Actually, that machinery isn't necessary, that just makes for a shorter
answer.
At this point I'm making stuff up because Dependent Haskell does not exist yet,
but that is easily doable in related dependently typed languages (Coq, Agda,
Idris, Lean), modulo an adequate replacement of the type family Choose
(type families are in some sense too powerful to be translated as mere
functions, so may be cheating, but I digress).
Here is a comparable program in Coq, showing also that lem applied to 1 and 2
and a suitable proof does reduce to a proof by reflexivity of choose 1 2 = 2.
https://gist.github.com/Lysxia/5a9b6996a3aae741688e7bf83903887b
III. Without dependent types
A critical source of difficulty here is that Choose is a closed type
family with overlapping instances. It is problematic because there is
no proper way to express the fact that x and y are not equal in Haskell,
to know that the first clause Choose x x does not apply.
A more fruitful avenue if you're into Pseudo-Dependent Haskell is to use a
boolean type equality:
-- In the base library
import Data.Type.Bool (If)
import Data.Type.Equality (type (==))
type Choose x y = If (x == y) 1 2
An alternative encoding of equality constraints becomes useful for this style:
type x ~~ y = ((x == y) ~ 'True)
type x /~ y = ((x == y) ~ 'False)
with that, we can get another version of the type-proposition above,
expressible in current Haskell (where SBool is the singleton type of Bool),
which essentially can be read as adding the assumption that the equality of x
and y is decidable. This does not contradict the earlier claim about "irrelevance" of forall, the function is inspecting a boolean (or rather an SBool), which postpones the inspection of x and y to whoever calls lem.
lem :: forall x y. SBool (x == y) -> ((x ~~ y) => Void) -> Choose x y :~: 2
lem decideEq p = case decideEq of
  STrue  -> absurd p
  SFalse -> Refl
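(The SBool above is assumed rather than imported; it is not in base. If you want to try this, a minimal hand-rolled singleton along these lines would do:)

{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}
import Data.Kind (Type)

-- One possible stand-in for the singleton type of Bool used above.
data SBool :: Bool -> Type where
  STrue  :: SBool 'True
  SFalse :: SBool 'False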
Take this OCaml code:
let silly (g : (int -> int) -> int) (f : int -> int -> int) =
  g (f (print_endline "evaluated"; 0))

silly (fun _ -> 0) (fun x -> fun y -> x + y)
It prints evaluated and returns 0. But if I eta-expand f to get g (fun x -> f (print_endline "evaluated"; 0) x), evaluated is no longer printed.
Same holds for this SML code:
fun silly (g : (int -> int) -> int, f : int -> int -> int) : int =
  g (f (print "evaluated" ; 0));

silly ((fn _ => 0), fn x => fn y => x + y);
On the other hand, this Haskell code doesn't print evaluated even with the strict pragma:
{-# LANGUAGE Strict #-}
import Debug.Trace
silly :: ((Int -> Int) -> Int) -> (Int -> Int -> Int) -> Int
silly g f = g (f (trace "evaluated" 0))
main = print $ silly (const 0) (+)
(I can make it print, though, by using seq, which makes perfect sense to me.)
While I understand that OCaml and SML do the right thing theoretically, is there any practical reason to prefer this behaviour to the "lazier" one? Eta-contraction is a common refactoring tool and I'm totally scared of using it in a strict language. I feel like I should paranoidly eta-expand everything, just because otherwise arguments to partially applied functions can be evaluated when they're not supposed to be. When is the "strict" behaviour useful?
Why and how does Haskell behave differently under the Strict pragma? Are there any references I can familiarize myself with to better understand the design space and pros and cons of the existing approaches?
To address the technical part of your question: eta-conversion also changes the meaning of expressions in lazy languages; you just need to consider the eta-rule of a different type constructor, e.g., + instead of ->.
This is the eta-rule for binary sums:
(case e of Lft y -> f (Lft y) | Rgt y -> f (Rgt y)) = f e (eta-+)
This equation holds under eager evaluation, because e will always be reduced on both sides. Under lazy evaluation, however, the r.h.s. only reduces e if f also forces it. That might make the l.h.s. diverge where the r.h.s. would not. So the equation does not hold in a lazy language.
To make it concrete in Haskell:
f x = 0
lhs = case undefined of Left y -> f (Left y); Right y -> f (Right y)
rhs = f undefined
Here, trying to print lhs will diverge, whereas rhs yields 0.
There is more that could be said about this, but the essence is that the equational theories of both evaluation regimes are sort of dual.
The underlying problem is that under a lazy regime, every type is inhabited by _|_ (non-termination), whereas under eager it is not. That has severe semantic consequences. In particular, there are no inductive types in Haskell, and you cannot prove termination of a structural recursive function, e.g., a list traversal.
There is a line of research in type theory distinguishing data types (strict) from codata types (non-strict) and providing both in a dual manner, thus giving the best of both worlds.
Edit: As for the question why a compiler should not eta-expand functions: that would utterly break every language. In a strict language with effects that's most obvious, because the ability to stage effects via multiple function abstractions is a feature. The simplest example perhaps is this:
let make_counter () =
  let x = ref 0 in
  fun () -> x := !x + 1; !x

let tick = make_counter ()
let n1 = tick ()
let n2 = tick ()
let n3 = tick ()
But effects are not the only reason. Eta-expansion can also drastically change the performance of a program! In the same way you sometimes want to stage effects you sometimes also want to stage work:
match :: String -> String -> Bool
match regex = \s -> run fsm s
  where fsm = ...expensive transformation of regex...
matchFloat = match "[0-9]+(\.[0-9]*)?((e|E)(+|-)?[0-9]+)?"
Note that I used Haskell here, because this example shows that implicit eta-expansion is not desirable in either eager or lazy languages!
With respect to your final question (why does Haskell do this), the reason "Strict Haskell" behaves differently from a truly strict language is that the Strict extension doesn't really change the evaluation model from lazy to strict. It just makes a subset of bindings into "strict" bindings by default, and only in the limited Haskell sense of forcing evaluation to weak head normal form. Also, it only affects bindings made in the module with the extension turned on; it doesn't retroactively affect bindings made elsewhere. (Moreover, as described below, the strictness doesn't take effect in partial function application. The function needs to be fully applied before any arguments are forced.)
In your particular Haskell example, I believe the only effect of the Strict extension is as if you had explicitly written the following bang patterns in the definition of silly:
silly !g !f = g (f (trace "evaluated" 0))
It has no other effect. In particular, it doesn't make const or (+) strict in their arguments, nor does it generally change the semantics of function applications to make them eager.
So, when the term silly (const 0) (+) is forced by print, the only effect is to evaluate its arguments to WHNF as part of the function application of silly. The effect is similar to writing (in non-Strict Haskell):
let { g = const 0; f = (+) } in g `seq` f `seq` silly g f
Obviously, forcing g and f to their WHNFs (which are lambdas) isn't going to have any side effect, and when silly is applied, const 0 is still lazy in its remaining argument, so the resulting term is something like:
(\x -> 0) ((\x y -> <defn of plus>) (trace "evaluated" 0))
(which should be interpreted without the Strict extension -- these are all lazy bindings here), and there's nothing here that will force the side effect.
As noted above, there's another subtle issue that this example glosses over. Even if you had made everything in sight strict:
{-# LANGUAGE Strict #-}
import Debug.Trace
myConst :: a -> b -> a
myConst x y = x
myPlus :: Int -> Int -> Int
myPlus x y = x + y
silly :: ((Int -> Int) -> Int) -> (Int -> Int -> Int) -> Int
silly g f = g (f (trace "evaluated" 0))
main = print $ silly (myConst 0) myPlus
this still wouldn't have printed "evaluated". This is because, in the evaluation of silly when the strict version of myConst forces its second argument, that argument is a partial application of the strict version of myPlus, and myPlus won't force any of its arguments until it's been fully applied.
This also means that if you change the definition of myPlus to:
myPlus x = \y -> x + y -- now it will print "evaluated"
then you'll be able to largely reproduce the ML behavior. Because myPlus is now fully applied, it will force its argument, and this will print "evaluated". You can suppress it again eta-expanding f in the definition of silly:
silly g f = g (\x -> f (trace "evaluated" 0) x) -- now it won't
because now when myConst forces its second argument, that argument is already in WHNF (because it's a lambda), and we never get to the application of f, full or not.
In the end, I guess I wouldn't take "Haskell plus the Strict extension and unsafe side effects like trace" too seriously as a good point in the design space. Its semantics may be (barely) coherent, but they sure are weird. I think the only serious use case is when you have some code whose semantics "obviously" don't depend on lazy versus strict evaluation but where performance would be improved by a lot of forcing. Then, you can just turn on Strict for a performance boost without having to think too hard.
Is the following code the right way to think about currying in Haskell? The following is an example of addition in Haskell:
f = \x -> \y -> x + y
In general, is currying realized using lambdas in functional programming?
Currying is:
In mathematics and computer science, currying is the technique of translating the evaluation of a function that takes multiple arguments (or a tuple of arguments) into evaluating a sequence of functions, each with a single argument. It was introduced by Gottlob Frege, developed by Moses Schönfinkel, and further developed by Haskell Curry.
source Wikipedia
Now you could argue that in Haskell there is never more than one argument to a function (you can of course have tuples; see below), so in a sense all functions in Haskell are already curried (or can only be defined in such a way).
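For example, partial application falls out of this view for free:

add :: Int -> Int -> Int   -- really Int -> (Int -> Int)
add x y = x + y

addFive :: Int -> Int      -- partial application, no lambda needed
addFive = add 5
-- addFive 3 evaluates to 8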
Of course there are curry and uncurry - but those act on tuples:
curry :: ((a, b) -> c) -> a -> b -> c
curry f x y = f (x, y)
and I could argue that a tuple is just one argument too ;)
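(For completeness, uncurry goes the other way, also acting on tuples:)

uncurry :: (a -> b -> c) -> (a, b) -> c
uncurry f (x, y) = f x y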
On a conceptual level you are of course right as augustss pointed out!
But sadly there are some problems (see Monomorphism Restriction for example) where this equality does not hold (if you don't add a type signature):
add x y = x + y === add = \x -> \y -> x + y
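A small sketch of the kind of surprise meant here (no type signatures, in a plain module where the restriction is on; the exact defaulted type depends on your defaulting rules):

addLambda = \x -> \y -> x + y   -- simple binding: MR applies, typically defaults to Integer -> Integer -> Integer
addEqn x y = x + y              -- function binding: MR doesn't apply, stays Num a => a -> a -> a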
How should one reason about function evaluation in examples like the following in Haskell:
let f x = ...
    x   = ...
in map (g (f x)) xs
In GHC, sometimes (f x) is evaluated only once, and sometimes once for each element in xs, depending on what exactly f and g are. This can be important when f x is an expensive computation. It has just tripped up a Haskell beginner I was helping, and I didn't know what to tell him other than that it is up to the compiler. Is there a better story?
Update
In the following example (f x) will be evaluated 4 times:
let f x = trace "!" $ zip x x
    x   = "abc"
in map (\i -> lookup i (f x)) "abcd"
With language extensions, we can create situations where f x must be evaluated repeatedly:
{-# LANGUAGE GADTs, Rank2Types #-}
module MultiEvG where
data BI where
  B :: (Bounded b, Integral b) => b -> BI

foo :: [BI] -> [Integer]
foo xs = let f :: (Integral c, Bounded c) => c -> c
             f x = maxBound - x
             g :: (forall a. (Integral a, Bounded a) => a) -> BI -> Integer
             g m (B y) = toInteger (m + y)
             x :: (Integral i) => i
             x = 3
         in map (g (f x)) xs
The crux is to have f x polymorphic even as the argument of g, and we must create a situation where the type(s) at which it is needed can't be predicted (my first stab used an Either a b instead of BI, but when optimising, that of course led to only two evaluations of f x at most).
A polymorphic expression must be evaluated at least once for each type it is used at. That's one reason for the monomorphism restriction. However, when the range of types it can be needed at is restricted, it is possible to memoise the values at each type, and in some circumstances GHC does that (needs optimising, and I expect the number of types involved mustn't be too large). Here we confront it with what is basically an inhomogeneous list, so in each invocation of g (f x), it can be needed at an arbitrary type satisfying the constraints, so the computation cannot be lifted outside the map (technically, the compiler could still build a cache of the values at each used type, so it would be evaluated only once per type, but GHC doesn't, in all likelihood it wouldn't be worth the trouble).
Monomorphic expressions need only be evaluated once, they can be shared. Whether they are is up to the implementation; by purity, it doesn't change the semantics of the programme. If the expression is bound to a name, in practice you can rely on it being shared, since it's easy and obviously what the programmer wants. If it isn't bound to a name, it's a question of optimisation. With the bytecode generator or without optimisations, the expression will often be evaluated repeatedly, but with optimisations repeated evaluation would indicate a compiler bug.
Polymorphic expressions must be evaluated at least once for every type they're used at, but with optimisations, when GHC can see that it may be used multiple times at the same type, it will (usually) still be shared for that type during a larger computation.
Bottom line: Always compile with optimisations, help the compiler by binding expressions you want shared to a name, and give monomorphic type signatures where possible.
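As a concrete illustration of the last point, here is a sketch of the "bind it to a name" advice (sharing of a named binding is what GHC does in practice, not a language-level guarantee):

-- The possibly-expensive (f x) is bound to a name, so it is computed
-- once and the result is shared across the map.
shared :: [Maybe Char]
shared = let fx = f x
         in map (\i -> lookup i fx) "abcd"
  where
    f ys = zip ys ys
    x    = "abc"

main :: IO ()
main = print shared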
Your examples are indeed quite different.
In the first example, the argument to map is g (f x), and it is passed once to map, most likely as a partially applied function.
Should g (f x), when applied to an argument within map, evaluate its first argument, then this will be done only once, and the thunk (f x) will be updated with the result.
Hence, in your first example, f x will be evaluated at most once.
Your second example requires a deeper analysis before the compiler can conclude that (f x) is always constant in the lambda expression. Perhaps it will never optimize it at all, because it may have knowledge that trace is not quite kosher. So this may evaluate 4 times when tracing, and 4 times or 1 time when not tracing.
This is really dependent on GHC's optimizations, as you've been able to tell.
The best thing to do is to study the GHC core that you get after optimizing the program. I would look at the generated Core and examine whether f x had its own let statement outside the map or not.
If you want to be sure, then you should factor f x out into its own variable assigned in a let, but there's not really a guaranteed way to figure it out other than reading through Core.
All that said, with the exception of things like trace that use unsafePerformIO, this will never change the semantics of your program: how it actually behaves.
In GHC without optimizations, the body of a function is evaluated every time the function is called. (A "call" means the function is applied to arguments and the result is evaluated.) In the following example, f x is inside a function, so it will execute each time the function is called.
(GHC may optimize this expression as discussed in the FAQ [1].)
let f x = trace "!" $ zip x x
    x   = "abc"
in map (\i -> lookup i (f x)) "abcd"
However, if we move f x out of the function, it will execute only once.
let f x = trace "!" $ zip x x
    x   = "abc"
in map ((\f_x i -> lookup i f_x) (f x)) "abcd"
This can be rewritten more readably as
let f x = trace "!" $ zip x x
    x       = "abc"
    g f_x i = lookup i f_x
in map (g (f x)) "abcd"
The general rule is that, each time a function is applied to an argument, a new "copy" of the function body is created. Function application is the only thing that may cause an expression to re-execute. However, be warned that some functions and function calls do not look like functions syntactically.
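One hedged illustration of something that does not look like a function syntactically: a binding with class constraints is compiled to a function of a dictionary, so, much like the polymorphic x = 3 in the GADTs example earlier, it can be recomputed at each use site:

-- Looks like a constant, but the (Num a, Enum a) constraints make it a
-- function of a dictionary under the hood; without optimisations each use
-- at a concrete type can recompute the sum.
notReallyAConstant :: (Num a, Enum a) => a
notReallyAConstant = sum [1 .. 10000]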
[1] http://www.haskell.org/haskellwiki/GHC/FAQ#Subexpression_Elimination