Pattern match over non-constructor functions - haskell

One of the most powerful ways pattern matching and lazy evaluation can come together is to bypass expensive computation. However, I am still shocked that Haskell only permits pattern matching on constructors, which is barely pattern matching at all!
Is there some way to implement the following functionality in Haskell:
exp :: Double -> Double
exp 0 = 1
exp (log a) = a
--...
log :: Double -> Double
log 1 = 0
log (exp a) = a
--...
The original problem I found this useful in was writing an associativity preference / rule in a Monoid class:
class Monoid m where
  iden :: m
  (+) :: m -> m -> m
  (+) iden a = a
  (+) a iden = a
  --Line with issue
  (+) ((+) a b) c = (+) a ((+) b c)

There's no reason to be shocked about this. How would it be even remotely feasible to pattern match on arbitrary functions? Most functions aren't invertible, and even for those that are it is typically nontrivial to actually compute the inverses.
Of course the compiler could in principle handle trivial examples like replacing literal exp (log x) with x, but that would be almost completely useless in practice (in the unlikely event somebody were to literally write that, they could as well reduce it right there in the source), and would generally lead to very weird unpredictable behaviour if inlining order changes whether or not the compiler can see that a match applies.
(There is however a thing called rewrite rules, which is similar to what you proposed but is seen as only an optimisation tool.)
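For illustration, a rewrite rule mirroring the question's exp/log case would look roughly like this in GHC (a sketch only: rules are optimisation hints with no semantic guarantee, and this particular one would silently change floating-point results, so take it purely as an example of the syntax):
{-# RULES "exp/log" forall a. exp (log a) = a #-}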
Even the two lines from the Monoid class that don't error don't make sense, but for different reasons. First, when you write
(+) iden a = a
(+) a iden = a
this doesn't do what you seem to think. These are actually two redundant catch-all clauses, equivalent to
(+) x y = y
(+) x y = x
...which is an utterly nonsensical thing to write. What you want to state could in fact be written as
default (+) :: Eq a => a -> a -> a
x + y
  | x == iden = y
  | y == iden = x
  | otherwise = ...
...but this still doesn't accomplish anything useful, because this is never going to be a full definition of +. And as soon as a concrete instance even begins to define its own + operator, the complete default one is going to be ignored.
Moreover, if you were to have these kinds of clauses all over your Haskell project, in practice it would just mean you're performing a lot of unnecessary, redundant extra checks. A law-abiding Monoid instance needs to fulfill mempty <> a ≡ a anyway, so there is no point in explicitly special-casing it.
I think what you really want is tests. It would make sense to specify laws right in a class declaration in a way that they could automatically be checked, but standard Haskell has no syntax for this. Most projects just do it in a separate test suite, using QuickCheck to generate example inputs. I think there's also a tool that allows you to put the test cases right in your source file, but I forgot what it's called.
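As a rough sketch of that test-suite approach (assuming the QuickCheck package and using the standard list monoid; the property names are my own):
import Test.QuickCheck

prop_leftIdentity :: [Int] -> Bool
prop_leftIdentity a = mempty <> a == a

prop_rightIdentity :: [Int] -> Bool
prop_rightIdentity a = a <> mempty == a

prop_assoc :: [Int] -> [Int] -> [Int] -> Bool
prop_assoc a b c = (a <> b) <> c == a <> (b <> c)

main :: IO ()
main = do
  quickCheck prop_leftIdentity
  quickCheck prop_rightIdentity
  quickCheck prop_assoc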

Related

Why does the pointfree style not cause a problem?

I read about The Monomorphism Restriction from the page https://www.haskell.org/tutorial/pitfalls.html and could not understand the last point:
A common violation of the restriction happens with functions defined in a higher-order manner, as in this definition of sum from the Standard Prelude:
sum = foldl (+) 0
As is, this would cause a static type error. We can fix the problem by adding the type signature:
sum :: (Num a) => [a] -> a
Also note that this problem would not have arisen if we had written:
sum xs = foldl (+) 0 xs
because the restriction only applies to pattern bindings.
Why does the last point not cause any error?
because the restriction only applies to pattern bindings.
Essentially, the MR does not apply when we are defining a function using a function binding of the form
f arg1 ... argN = ...
with N > 0.
The intuition is as follows. The purpose of the MR is to avoid turning Haskell non-functions into lower-level functions accidentally. For instance,
x = 3 + 4
is not a function. However, its type is Num a => a, which is usually implemented as a function from a Num dictionary to the result of 3+4, where + is a function defined by the dictionary. This can lead to bad performance, since every time we use x the sum will need to be recomputed from scratch. This is unavoidable if we want to compute print (x :: Int) >> print (x :: Double), for instance. But actually using x at different types is rather uncommon.
So, the MR makes x monomorphic, preventing us from using it at more than a single type. That way, recomputation can be avoided.
However, if x is already a function, there is no harm in keeping it polymorphic, since we are "recomputing" function calls anyway. So, the MR does not apply to function bindings.
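To make this concrete, here is a small sketch (my own example, not from the answer): the pattern binding x needs a signature to stay usable at several types under the MR, while the function binding sumXs stays polymorphic without one.
x :: Num a => a -- without this signature, the MR would default x to a single type
x = 3 + 4

sumXs :: Num a => [a] -> a
sumXs xs = foldl (+) 0 xs -- function binding: the MR does not apply

main :: IO ()
main = do
  print (x :: Int)    -- 7
  print (x :: Double) -- 7.0; the sum is recomputed at this second type
  print (sumXs [1, 2, 3 :: Int])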

Can any recursive definition be rewritten using foldr?

Say I have a general recursive definition in haskell like this:
foo a0 a1 ... = base_case
foo b0 b1 ...
| cond1 = recursive_case_1
| cond2 = recursive_case_2
...
Can it always rewritten using foldr? Can it be proved?
If we interpret your question literally, we can write const value foldr to achieve any value, as @DanielWagner pointed out in a comment.
A more interesting question is whether we can instead forbid general recursion from Haskell, and "recurse" only through the eliminators/catamorphisms associated to each user-defined data type, which are the natural generalization of foldr to inductively defined data types. This is, essentially, (higher-order) primitive recursion.
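For a concrete feel of that style, here is a small sketch (my own examples): two ordinary recursive list functions re-expressed through foldr alone, with no explicit recursion.
mapViaFoldr :: (a -> b) -> [a] -> [b]
mapViaFoldr f = foldr (\x acc -> f x : acc) []

filterViaFoldr :: (a -> Bool) -> [a] -> [a]
filterViaFoldr p = foldr (\x acc -> if p x then x : acc else acc) []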
With this restriction in place, we can only compose terminating functions (the eliminators) together. This means that we can no longer define non-terminating functions.
As a first example, we lose the trivial recursion
f x = f x
-- or even
a = a
since, as said, the language becomes total.
More interestingly, the general fixed point operator is lost.
fix :: (a -> a) -> a
fix f = f (fix f)
A more intriguing question is: what about the total functions we can express in Haskell? We do lose all the non-total functions, but do we lose any of the total ones?
Computability theory states that, since the language becomes total (no more non-termination), we lose expressiveness even on the total fragment.
The proof is a standard diagonalization argument. Fix any enumeration of programs in the total fragment so that we can speak of "the i-th program".
Then, let eval i x be the result of running the i-th program on the natural number x as input (for simplicity, assume this is well typed, and that the result is a natural number). Note that, since the language is total, a result must exist. Moreover, eval can be implemented in the unrestricted Haskell language, since we can write an interpreter of Haskell in Haskell (left as an exercise :-P), and that would work just as well for the fragment. Then, we simply take
f n = succ $ eval n n
The above is a total function (a composition of total functions) which can be expressed in Haskell, but not in the fragment. Indeed, otherwise there would be a program to compute it, say the i-th program. In such case we would have
eval i x = f x
for all x. But then,
eval i i = f i = succ $ eval i i
which is impossible -- contradiction. QED.
In type theory, it is indeed the case that you can elaborate all definitions by dependent pattern-matching into ones only using eliminators (a more strongly-typed version of folds, the generalisation of lists' foldr).
See e.g. Eliminating Dependent Pattern Matching (pdf)

What are the benefits of currying?

I don't think I quite understand currying, since I'm unable to see any massive benefit it could provide. Perhaps someone could enlighten me with an example demonstrating why it is so useful. Does it truly have benefits and applications, or is it just an over-appreciated concept?
(There is a slight difference between currying and partial application, although they're closely related; since they're often mixed together, I'll deal with both terms.)
The place where I realized the benefits first was when I saw operator sections:
incElems = map (+1)
--non-curried equivalent: incElems = (\elems -> map (\i -> (+) 1 i) elems)
IMO, this is totally easy to read. Now, if the type of (+) were (Int,Int) -> Int *, which is the uncurried version, it would (counter-intuitively) result in an error -- but curried, it works as expected, and has type [Int] -> [Int].
You mentioned C# lambdas in a comment. In C#, you could have written incElems like so, given a function plus:
var incElems = xs => xs.Select(x => plus(1,x))
If you're used to point-free style, you'll see that the x here is redundant. Logically, that code could be reduced to
var incElems = xs => xs.Select(curry(plus)(1))
which is awful due to the lack of automatic partial application with C# lambdas. And that's the crucial point to decide where currying is actually useful: mostly when it happens implicitly. For me, map (+1) is the easiest to read, then comes .Select(x => plus(1,x)), and the version with curry should probably be avoided, if there is no really good reason.
Now, if kept readable, the benefits sum up to shorter, more readable and less cluttered code -- unless some abuse of point-free style is done with it (I do love (.).(.), but it is... special)
Also, lambda calculus would be impossible without curried functions, since it has only single-argument (but therefore higher-order) functions.
* Of course the real type involves Num, but it's more readable like this for the moment.
Update: how currying actually works.
Look at the type of plus in C#:
int plus(int a, int b) {..}
You have to give it a tuple of values -- not in C# terms, but mathematically speaking; you can't just leave out the second value. In Haskell terms, that's
plus :: (Int,Int) -> Int,
which could be used like
incElem = map (\x -> plus (1, x)) -- equal to .Select (x => plus (1, x))
That's way too many characters to type. Suppose you'd want to do this more often in the future. Here's a little helper:
curry f = \x -> (\y -> f (x,y))
plus' = curry plus
which gives
incElem = map (plus' 1)
Let's apply this to a concrete value.
incElem [1]
= (map (plus' 1)) [1]
= [plus' 1 1]
= [(curry plus) 1 1]
= [(\x -> (\y -> plus (x,y))) 1 1]
= [plus (1,1)]
= [2]
Here you can see curry at work. It turns a standard Haskell-style function application (plus' 1 1) into a call to a "tupled" function -- or, viewed at a higher level, transforms the "tupled" version into the "untupled" one.
Fortunately, most of the time, you don't have to worry about this, as there is automatic partial application.
It's not the best thing since sliced bread, but if you're using lambdas anyway, it's easier to use higher-order functions without using lambda syntax. Compare:
map (max 4) [0,6,9,3] --[4,6,9,4]
map (\i -> max 4 i) [0,6,9,3] --[4,6,9,4]
These kinds of constructs come up often enough when you're using functional programming, that it's a nice shortcut to have and lets you think about the problem from a slightly higher level--you're mapping against the "max 4" function, not some random function that happens to be defined as (\i -> max 4 i). It lets you start to think in higher levels of indirection more easily:
let numOr4 = map $ max 4
let numOr4' = (\xs -> map (\i -> max 4 i) xs)
numOr4 [0,6,9,3] --ends up being [4,6,9,4] either way;
--which do you think is easier to understand?
That said, it's not a panacea; sometimes your function's parameters will be in the wrong order for what you're trying to do with currying, so you'll have to resort to a lambda anyway. However, once you get used to this style, you start to learn how to design your functions to work well with it, and once those neurons start to connect inside your brain, previously complicated constructs can start to seem obvious in comparison.
One benefit of currying is that it allows partial application of functions without the need of any special syntax/operator. A simple example:
mapLength = map length
mapLength ["ab", "cde", "f"]
>>> [2, 3, 1]
mapLength ["x", "yz", "www"]
>>> [1, 2, 3]
map :: (a -> b) -> [a] -> [b]
length :: [a] -> Int
mapLength :: [[a]] -> [Int]
The map function can be considered to have type (a -> b) -> ([a] -> [b]) because of currying, so when length is applied as its first argument, it yields the function mapLength of type [[a]] -> [Int].
Currying has the convenience features mentioned in other answers, but it also often serves to simplify reasoning about the language, or to implement some code much more easily than would otherwise be possible. For example, currying means that any function at all has a type that's compatible with a -> b. If you write some code whose type involves a -> b, that code can be made to work with any function at all, no matter how many arguments it takes.
The best known example of this is the Applicative class:
class Functor f => Applicative f where
  pure :: a -> f a
  (<*>) :: f (a -> b) -> f a -> f b
And an example use:
-- All possible products of numbers taken from [1..5] and [1..10]
example = pure (*) <*> [1..5] <*> [1..10]
In this context, pure and <*> adapt any function of type a -> b to work with lists of type [a]. Because of partial application, this means you can also adapt functions of type a -> b -> c to work with [a] and [b], or a -> b -> c -> d with [a], [b] and [c], and so on.
The reason this works is because a -> b -> c is the same thing as a -> (b -> c):
(+) :: Num a => a -> a -> a
pure (+) :: (Applicative f, Num a) => f (a -> a -> a)
[1..5], [1..10] :: Num a => [a]
pure (+) <*> [1..5] :: Num a => [a -> a]
pure (+) <*> [1..5] <*> [1..10] :: Num a => [a]
Another, different use of currying is that Haskell allows you to partially apply type constructors. E.g., if you have this type:
data Foo a b = Foo a b
...it actually makes sense to write Foo a in many contexts, for example:
instance Functor (Foo a) where
  fmap f (Foo a b) = Foo a (f b)
I.e., Foo is a two-parameter type constructor with kind * -> * -> *; Foo a, the partial application of Foo to just one type, is a type constructor with kind * -> *. Functor is a type class that can only be instantiated for type constructors of kind * -> *. Since Foo a is of this kind, you can make a Functor instance for it.
The "no-currying" form of partial application works like this:
We have a function f : (A ✕ B) → C
We'd like to apply it partially to some a : A
To do this, we build a closure out of a and f (we don't evaluate f at all, for the time being)
Then some time later, we receive the second argument b : B
Now that we have both the A and B argument, we can evaluate f in its original form...
So we recall a from the closure, and evaluate f(a,b).
A bit complicated, isn't it?
When f is curried in the first place, it's rather simpler:
We have a function f : A → B → C
We'd like to apply it partially to some a : A – which we can just do: f a
Then some time later, we receive the second argument b : B
We apply the already evaluated f a to b.
So far so nice, but more important than being simple, this also gives us extra possibilities for implementing our function: we may be able to do some calculations as soon as the a argument is received, and these calculations won't need to be done later, even if the function is evaluated with multiple different b arguments!
To give an example, consider this audio filter, an infinite impulse response filter. It works like this: for each audio sample, you feed an "accumulator function" (f) with some state parameter (in this case, a simple number, 0 at the beginning) and the audio sample. The function then does some magic, and spits out the new internal state1 and the output sample.
Now here's the crucial bit – what kind of magic the function does depends on the coefficient2 λ, which is not quite a constant: it depends both on what cutoff frequency we'd like the filter to have (this governs "how the filter will sound") and on what sample rate we're processing in. Unfortunately, the calculation of λ is a bit more complicated (lp1stCoeff $ 2*pi * (νᵥ ~*% δs)) than the rest of the magic, so we wouldn't like having to do it for every single sample, all over again. Quite annoying, because νᵥ and δs are almost constant: they change very seldom, certainly not at each audio sample.
But currying saves the day! We simply calculate λ as soon as we have the necessary parameters. Then, at each of the many, many audio samples to come, we only need to perform the remaining, very easy magic: yⱼ = yⱼ₋₁ + λ ⋅ (xⱼ − yⱼ₋₁). So we're being efficient, and still keeping a nice, safe, referentially transparent purely-functional interface.
1 Note that this kind of state-passing can generally be done more nicely with the State or ST monad; that's just not particularly beneficial in this example.
2 Yes, this is a lambda symbol. I hope I'm not confusing anybody – fortunately, in Haskell it's clear that lambda functions are written with \, not with λ.
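To make the idea tangible, here is a much-simplified sketch (the names lp1Coeff and lp1Step are my own, and the coefficient formula is only an approximation of a real one-pole low-pass design, not the library code quoted above):
lp1Coeff :: Double -> Double -> Double
lp1Coeff cutoff sampleRate = 1 - exp (-2 * pi * cutoff / sampleRate)

-- The expensive part runs once, when cutoff and sample rate are supplied;
-- the partially applied result is the cheap per-sample step.
lp1Step :: Double -> Double -> (Double -> Double -> (Double, Double))
lp1Step cutoff sampleRate =
  let lambda = lp1Coeff cutoff sampleRate -- computed once, here
  in \state x ->
       let y = state + lambda * (x - state)
       in (y, y) -- (new state, output sample)
Something like Data.List.mapAccumL (lp1Step 1000 44100) 0 samples would then filter a whole block of samples while threading the state.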
It's somewhat dubious to ask what the benefits of currying are without specifying the context in which you're asking the question:
In some cases, like functional languages, currying will merely be seen as something with a fairly local effect, since you could always replace it with explicitly tupled domains. However, this isn't to say that currying is useless in these languages. In some sense, programming with curried functions makes you "feel" like you're programming in a more functional style, because you more typically face situations where you're dealing with higher-order functions. Certainly, most of the time, you will "fill in" all of the arguments to a function, but in the cases where you want to use the function in its partially applied form, this is a bit simpler to do in curried form. We typically tell our beginning programmers to use this when learning a functional language just because it feels like better style and reminds them they're programming in more than just C. Having things like curry and uncurry also helps with certain conveniences within functional programming languages; I can think of arrows within Haskell as a specific example of where you would use curry and uncurry a bit to apply things to different pieces of an arrow, etc...
In some cases, you want to think about more than just functional programs; currying / uncurrying can then be presented as a way to state the elimination and introduction rules for conjunction (∧) in constructive logic, which provides a connection to a more elegant motivation for why it exists.
In some cases, for example, in Coq, using curried functions versus tupled functions can produce different induction schemes, which may be easier or harder to work with, depending on your applications.
I used to think that currying was simple syntax sugar that saves you a bit of typing. For example, instead of writing
(\ x -> x + 1)
I can merely write
(+1)
The latter is instantly more readable, and less typing to boot.
So if it's just a convenient short cut, why all the fuss?
Well, it turns out that because function types are curried, you can write code which is polymorphic in the number of arguments a function has.
For example, the QuickCheck framework lets you test functions by feeding them randomly generated test data. It works on any function whose input type can be auto-generated. But, because of currying, the authors were able to rig it so this works with any number of arguments. Were functions not curried, there would be a different testing function for each number of arguments - and that would just be tedious.
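As a small sketch of this (assuming the QuickCheck package; the property names are my own), the same quickCheck entry point handles properties of different arities precisely because every function is curried:
import Test.QuickCheck

prop_revRev :: [Int] -> Bool
prop_revRev xs = reverse (reverse xs) == xs

prop_plusComm :: Int -> Int -> Bool
prop_plusComm x y = x + y == y + x

main :: IO ()
main = do
  quickCheck prop_revRev   -- one argument
  quickCheck prop_plusComm -- two arguments, same entry point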

Removing common haskell piping boilerplate

I have some pretty common Haskell boilerplate that shows up in a lot of places. It looks something like this (when instantiating classes):
a <= b = (modify a) <= (modify b)
like this (with normal functions):
fn x y z = fn (foo x) (foo y) (foo z)
and sometimes even with tuples, as in:
mod (x, y) = (alt x, alt y)
It seems like there should be an easy way to reduce all of this boilerplate and not have to repeat myself quite so much. (These are simple examples, but it does get annoying.) I imagine that there are abstractions created for removing such boilerplate, but I'm not sure what they're called nor where to look. Can any Haskellites point me in the right direction?
For the (<=) case, consider defining compare instead; you can then use Data.Ord.comparing, like so:
instance Ord Foo where
  compare = comparing modify
Note that comparing can simply be defined as comparing f = compare `on` f, using Data.Function.on.
For your fn example, it's not clear. There's no way to simplify this type of definition in general. However, I don't think the boilerplate is too bad in this instance.
For mod:
mod = alt *** alt
using Control.Arrow.(***) — read a b c as b -> c in the type signature; arrows are just a general abstraction (like functors or monads) of which functions are an instance. You might like to define both = join (***) (which is itself shorthand for both f = f *** f); I know at least one other person who uses this alias, and I think it should be in Control.Arrow proper.
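A small self-contained sketch of that alias and its use (the name both follows the alias mentioned above; it is not actually exported by Control.Arrow):
import Control.Arrow ((***))
import Control.Monad (join)

both :: (a -> b) -> (a, a) -> (b, b)
both = join (***) -- same as: both f = f *** f

-- e.g. with alt = (+1): both (+1) (3, 4) == (4, 5)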
So, in general, the answer is: combinators, combinators, combinators! This ties directly in with point-free style. It can be overdone, but when the combinators exist for your situation, such code can not only be cleaner and shorter, it can be easier to read: you only have to learn an abstraction once, and can then apply it everywhere when reading code.
I suggest using Hoogle to find these combinators; when you think you see a general pattern underlying a definition, try abstracting out what you think the common parts are, taking the type of the result, and searching for it on Hoogle. You might find a combinator that does just what you want.
So, for instance, for your mod case, you could abstract out alt, yielding \f (a,b) -> (f a, f b), then search for its type, (a -> b) -> (a, a) -> (b, b) — there's an exact match, but it's in the fgl graph library, which you don't want to depend on. Still, you can see how the ability to search by type can be very valuable indeed!
There's also a command-line version of Hoogle with GHCi integration; see its HaskellWiki page for more information.
(There's also Hayoo, which searches the entirety of Hackage, but is slightly less clever with types; which one you use is up to personal preference.)
For some of the boilerplate, Data.Function.on can be helpful, although in these examples, it doesn't gain much
instance Ord Foo where
  (<=) = (<=) `on` modify -- maybe

mod = uncurry ((,) `on` alt) -- Not really

Is Milner let polymorphism a rank 2 feature?

let a = b in c can be thought of as syntactic sugar for (\a -> c) b, but in a typed setting this is in general not the case. For example, in the Milner calculus let a = \x -> x in (a True, a 1) is typable, but the seemingly equivalent (\a -> (a True, a 1)) (\x -> x) is not.
However, the latter is typable in System F with a rank 2 type for the first lambda.
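A quick way to see both points in GHC (a sketch of my own, not part of the original question): the let-bound identity is generalised, while the lambda-bound one needs an explicit rank-2 annotation.
{-# LANGUAGE RankNTypes #-}

okLet :: (Bool, Integer)
okLet = let a = \x -> x in (a True, a 1) -- accepted: a is generalised

-- notOk = (\a -> (a True, a 1)) (\x -> x) -- rejected by the type checker

apply2 :: (forall t. t -> t) -> (Bool, Integer) -- a rank-2 type
apply2 a = (a True, a 1)

okRank2 :: (Bool, Integer)
okRank2 = apply2 id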
My questions are:
Is let polymorphism a rank-2 feature that sneaked secretly into the otherwise rank-1 world of the Milner calculus?
The purpose of having a separate let construct seems to be to specify which types should be generalized by the type checker, and which should not. Does it serve any other purposes? Are there any reasons to extend more powerful systems, e.g. System F, with a separate let which is not sugar? Are there any papers on the rationale behind the design of the Milner calculus, which no longer seems obvious to me?
Is there the most general type for \a -> (a True, a 1) in System F?
Are there type systems closed under beta expansion? I.e. if P is typable and M N = P then M is typable?
Some clarifications:
By equivalence I mean equivalence modulo type annotations. Is 'System F a la Church' the correct term for that?
I know that in general the principal typing property doesn't hold in F, but a principal type could exist for my particular term.
By let I mean the non-recursive flavour of let. Extension of system F with recursive let is obviously useful as it allows for non-termination.
With respect to the four questions asked:
A key insight in this matter is that rather than just typing a lambda-abstraction with a potentially polymorphic argument type, we are typing a (sugared) abstraction that is (1) applied exactly once and, moreover, (2) applied to a statically known argument. That is, we can first subject the "argument" (i.e. the definiens of the local definition) to type reconstruction to find its (polymorphic) type; then assign the found type to the "parameter" (the definiendum); and then, finally, type the body in the extended type context.
Note that this is considerably easier than general rank-2 type inference.
Note that, strictly speaking, let .. = .. in .. is only syntactic sugar in System F if you demand that the definiendum carries a type annotation: let .. : .. = .. in .. .
Here are two solutions for T in (\a :: T -> (a True, a 1)) in System F: forall b. (forall a. a -> b) -> (b, b) and forall c d. (forall a b. a -> b) -> (c, d). Note that neither one of them is more general than the other. In general, System F does not admit principal types.
I suppose this holds for the simply typed lambda-calculus?
Types are not preserved under beta-expansion in any calculus that can express the concept of "dead code". You can probably figure out how to write something similar to this in any usable language:
if True then something typable else utter nonsense
For example, let M = (\x y -> x) (something typable) and N = (utter nonsense) and P = (something typable), so that M N = P, and P is typable, but M N isn't.
...rereading your question, I see that you only demand that M be typable, but that seems like a very strange meaning to give to "preserved under beta-expansion" to me. Anyway, I don't see why some argument like the above couldn't apply: simply let M have some untypable dead code in it.
You could type (\a -> (a True, a 1)) (\x -> x) if, instead of generalizing only let expressions, you generalized all lambda abstractions. Having done so, one also needs to instantiate type schemas at every use point, not simply at the point where the binder which refers to them is actually used. I don't think there's any problem with this actually, other than the fact that it's vastly less efficient. I recall some discussion of this in TAPL, in fact, making similar points.
I recall many years ago seeing in a book about lambda calculus (possibly Barendregt) a type system preserved by beta expansion. It had no quantification, but it had conjunction (intersection types) to express that a term needed to be of more than one type simultaneously, as well as a special type omega which every term inhabits. As I recall, the latter avoids Daniel Wagner's dead-code objection. While every expression was well-typed, restricting the position of omega in the type allowed you to characterize which expressions had (weak?) head normal forms.
Also if I recall correctly, fully normal form expressions had principal types, which did not contain omega.
For example the principal type of \f x -> f (f x) (the Church numeral 2) would be something like ((A -> B) /\ (B -> C)) -> A -> C
Not able to answer all your very specialized questions, but no, it's not a rank-2 feature. As you write, it's just that let definitions are being quantified, which yields a fully polymorphic rank-1 type unless the definition depends on some monomorphic value in an outer scope.
Please also note that Haskell's let is known as let rec in other languages and allows the definition of mutually recursive functions and values. This is something you would not want to code manually with lambda expressions and Y combinators.
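A tiny illustration of that recursive flavour (my own example): two locally defined functions calling each other inside a single let.
parity :: Integer -> String
parity n =
  let ev k = k == 0 || od (k - 1) -- ev and od refer to each other
      od k = k /= 0 && ev (k - 1)
  in if ev n then "even" else "odd" -- assumes n >= 0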

Resources