I've got this piece of code that is supposed to check whether or not the first element of an array can be divided by the second one. I have been struggling for some time now to find the solution, as after compiling I get this error:
Parse error (line 2, column 22): parse error (possibly incorrect indentation or mismatched brackets)
Indentation is quite confusing to me as I'm quite new to Haskell; any help with determining what I should do here would be appreciated.
fst2Div:: [a] -> Bool
fst2Div (x : y : _) =
case theList of (x:_:_) -> x
(_:y:_) -> y
if (y `mod` x)==0 then True
else False
x and y are already in scope through the (x : y : _) pattern. There is no need to use an extra case … of … expression. That will not work here anyway, since the case … of … expression will return either x or y. Furthermore, theList is not defined.
The signature is furthermore too generic: a can be of any type, but you can only calculate mod :: Integral a => a -> a -> a on an Integral type. You can thus work with x and y directly:
fst2Div:: Integral a => [a] -> Bool
fst2Div (x : y : _) = y `mod` x == 0
fst2Div _ = False
The second clause matches lists with fewer than two elements; in that case we return False.
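For example, with this definition loaded in GHCi (the example lists are my own):
> fst2Div [2, 6, 7]
True
> fst2Div [4, 6]
False
> fst2Div [5]
False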
f x zero = Nothing
f x y = Just $ x / y
  where zero = 0
The identifier zero in the pattern simply matches everything, which is what the warning Pattern match(es) are overlapped is telling you.
That's how Haskell's syntax works; every lowercase-initial variable name in a pattern (re)binds that name. Any existing binding will be shadowed.
But even if that weren't the case, the binding for zero would not be visible to the first alternative, because of how Haskell's syntax works. A similar thing happens in the following version:
f = \v1 v2 -> case (v1, v2) of
  (x, zero) -> Nothing
  (x, y) -> Just $ x / y
    where zero = 0
The where clause only applies to the one alternative that it's part of, not to the whole list of alternatives. That code is pretty much the same thing as
f = \v1 v2 -> case (v1, v2) of
  (x, zero) -> Nothing
  (x, y) -> let zero = 0 in Just $ x / y
If bound identifiers had different semantics than unbound identifiers in a pattern match, that could be quite error-prone, as binding a new identifier could mess up pattern matches anywhere that identifier is in scope.
For example, let's say you're importing some module Foo (unqualified), and the module Foo is changed to add the binding x = 42 for some reason. Now in your pattern match you'd suddenly be comparing the first argument against 42 rather than binding it to x. That's a pretty hard-to-find bug.
So to avoid this kind of scenario, identifier patterns have the same semantics regardless of whether they're already bound somewhere.
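As a small illustration of that rule (the names here are made up for the example): even with a top-level x in scope, the x in the pattern below introduces a fresh variable that matches any argument.
x :: Int
x = 42

f :: Int -> Int
f x = x + 1   -- this x shadows the top-level x; f 7 is 8, not a comparison against 42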
Because such pattern matches would be very fragile. What does this compute?
f x y z = 2*x + 3*y + z
Would you expect this to be equal to
f x 3 z = 2*x + 9 + z
f _ _ _ = error "non-exhaustive patterns!"
only because there's a y = 3 defined somewhere in the same 1000+ line module?
Also consider this:
import SomeLibrary
f x y z = 2*x + 3*y + z
What if in a future release SomeLibrary defines y? We don't want that to suddenly stop working.
Finally, what if there is no Eq instance for the type of y?
y :: a -> a
y = id
f :: a -> (a -> a) -> a
f x y = y x
f x w = w (w x)
Sure, it is a contrived example, but there's no way the runtime can compare the input function to check whether it is equal to y or not.
To disambiguate this, some newer languages like Swift use two different syntaxes, e.g. (pseudo-code):
switch someValue {
case .a(x) : ... // compare by equality using the outer x
case .b(let x) : ... // redefine x as a new local variable, shadowing the outer one
}
zero is just a variable that occurs inside a pattern, just like y does in the second line. There is no difference between the two. When a variable occurs inside a pattern, it introduces a new variable; if there was already a binding for that name, the new variable shadows the old one.
So you cannot use an already-bound variable inside a pattern. Instead, you should do something like this:
f x y | y == zero = Nothing
  where zero = 0
f x y = Just $ x / y
Notice that I also moved the where clause so that zero is in scope for the first equation's guard.
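A quick check in GHCi (assuming the guarded definition above; the numeric literals default to Double):
> f 1 0
Nothing
> f 4 2
Just 2.0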
I'm trying out an exercise where I have to print out the root node of a BST. They are telling me to use data BinSearch x y = Empty | Node x y (BinSearch x y) (BinSearch x y) (meaning a binary search tree either has nothing in it or has a node) and the signature result :: BinSearch x y -> Maybe y. I need to run the program like:
> result None
Nothing
> result (Node 0 44 None None)
Just 44
I'm a bit confused on how to do this. If I had control over the signature, it'd be easy, but I don't understand how to go about this.
The only thing I've come up with is
data BinSearch x y = None | Node x y (BinSearch x y) (BinSearch x y)
result :: BinSearch x y -> Maybe y
result (Node a b None None) = b
but the error I get there is that it couldn't match v with BinSearch x0 y0.
EDIT: I have fixed all transcription errors, apologies. The problem is now stated EXACTLY as it was in the book.
Two small hints: first, it must be BinSearch with a capital B. I suspect this was a transcription error on your part. Second is a syntax hint; your pattern needs parentheses, like this:
result (Node a b None None) = ...
This is not a complete solution, but it will get you over this hurdle and into the next (more interesting) error.
I got it. I'm dumb. You just have to put "Just 44" at the end. My problem was that I didn't know that "Just" was built in.
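For reference, here is a minimal sketch of the complete function the hints point toward (using the None constructor from the question's own code). Matching any Node, not just one with empty subtrees, and wrapping the stored value in Just is what fixes the type error:
result :: BinSearch x y -> Maybe y
result None = Nothing
result (Node _ v _ _) = Just v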
I'm new to Haskell. I understand that functions are curried to become functions that take one parameter. What I don't understand is how pattern matching against multiple values can be achieved when this is the case. For example:
Suppose we have the following completely arbitrary function definition:
myFunc :: Int -> Int -> Int
myFunc 0 0 = 0
myFunc 1 1 = 1
myFunc x y = x `someoperation` y
Is the partially applied function returned by myFunc 0 essentially:
partiallyAppliedMyFunc :: Int -> Int
partiallyAppliedMyFunc 0 = 0
partiallyAppliedMyFunc y = 0 `someoperation` y
Thus removing the extraneous pattern that can't possibly match? Or.... what's going on here?
Actually, this question is more subtle than it may appear on the surface, and involves learning a little bit about compiler internals to really answer properly. The reason is that we sort of take for granted that we can have nested patterns and patterns over more than one term, when in reality for the purposes of a compiler the only thing you can do is branch on the top-level constructor of a single value. So the first stage of the compiler is to turn nested patterns (and patterns over more than one value) into simpler patterns. For example, a naive algorithm might transform your function into something like this:
myFunc = \x y -> case x of
  0 -> case y of
         0 -> 0
         _ -> x `someoperation` y
  1 -> case y of
         1 -> 1
         _ -> x `someoperation` y
  _ -> x `someoperation` y
You can already see there are lots of suboptimal things here: the someoperation term is repeated a lot, and the function expects both arguments before it will even start to do a case at all; see A Term Pattern-Match Compiler Inspired by Finite Automata Theory for a discussion of how you might improve on this.
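For instance, one way a smarter match compiler could avoid duplicating the fall-through case is to bind it once and reference it from every branch. This is only an illustrative sketch (with someoperation stubbed out so it compiles), not what GHC literally produces:
myFunc :: Int -> Int -> Int
myFunc = \x y ->
  let def = x `someoperation` y   -- the shared fall-through branch
  in case x of
       0 -> case y of
              0 -> 0
              _ -> def
       1 -> case y of
              1 -> 1
              _ -> def
       _ -> def
  where
    someoperation = (+)           -- stand-in for the question's arbitrary operation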
Anyway, in the naively expanded form above, it should actually be a bit clearer how the currying step happens. We can directly substitute for x in this expression to look at what myFunc 0 does:
myFunc 0 = \y -> case 0 of
  0 -> case y of
         0 -> 0
         _ -> 0 `someoperation` y
  1 -> case y of
         1 -> 1
         _ -> 0 `someoperation` y
  _ -> 0 `someoperation` y
Since this is still a lambda, no further reduction is done. You might hope that a sufficiently smart compiler could do a bit more, but GHC explicitly does not do more; if you want more computation to be done after supplying only one argument, you have to change your definition. (There's a time/space tradeoff here, and choosing correctly is too hard to do reliably. So GHC leaves it in the programmer's hands to make this choice.) For example, you could explicitly write
myFunc 0 = \y -> case y of
  0 -> 0
  _ -> 0 `someoperation` y
myFunc 1 = \y -> case y of
  1 -> 1
  _ -> 1 `someoperation` y
myFunc x = \y -> x `someoperation` y
and then myFunc 0 would reduce to a much smaller expression.
I was a bit confused by the documentation for fix (although I think I understand what it's supposed to do now), so I looked at the source code. That left me more confused:
fix :: (a -> a) -> a
fix f = let x = f x in x
How exactly does this return a fixed point?
I decided to try it out at the command line:
Prelude Data.Function> fix id
...
And it hangs there. Now to be fair, this is on my old macbook which is kind of slow. However, this function can't be too computationally expensive since anything passed in to id gives that same thing back (not to mention that it's eating up no CPU time). What am I doing wrong?
You are doing nothing wrong. fix id is an infinite loop.
When we say that fix returns the least fixed point of a function, we mean that in the domain theory sense. So fix (\x -> 2*x-1) is not going to return 1, because although 1 is a fixed point of that function, it is not the least one in the domain ordering.
I can't describe the domain ordering in a mere paragraph or two, so I will refer you to the domain theory link above. It is an excellent tutorial, easy to read, and quite enlightening. I highly recommend it.
For the view from 10,000 feet, fix is a higher-order function which encodes the idea of recursion. If you have the expression:
let x = 1:x in x
Which results in the infinite list [1,1..], you could say the same thing using fix:
fix (\x -> 1:x)
(Or simply fix (1:)), which says find me a fixed point of the (1:) function, IOW a value x such that x = 1:x... just like we defined above. As you can see from the definition, fix is nothing more than this idea -- recursion encapsulated into a function.
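Laziness is what makes such a fixed point usable: you can consume a finite prefix of it without ever forcing the whole thing. For example, in GHCi:
Prelude Data.Function> take 5 (fix (1:))
[1,1,1,1,1]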
It is a truly general concept of recursion as well -- you can write any recursive function this way, including functions that use polymorphic recursion. So, for example, the typical Fibonacci function:
fib n = if n < 2 then n else fib (n-1) + fib (n-2)
Can be written using fix this way:
fib = fix (\f -> \n -> if n < 2 then n else f (n-1) + f (n-2))
Exercise: expand the definition of fix to show that these two definitions of fib are equivalent.
But for a full understanding, read about domain theory. It's really cool stuff.
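As a quick sanity check (in GHCi, with either definition of fib above in scope), both versions produce the same results:
Prelude Data.Function> map fib [0..10]
[0,1,1,2,3,5,8,13,21,34,55]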
I don't claim to understand this at all, but if this helps anyone...then yippee.
Consider the definition of fix. fix f = let x = f x in x. The mind-boggling part is that x is defined as f x. But think about it for a minute.
x = f x
Since x = f x, we can substitute the value of x on the right-hand side of that, right? So therefore...
x = f . f $ x -- or x = f (f x)
x = f . f . f $ x -- or x = f (f (f x))
x = f . f . f . f . f . f . f . f . f . f . f $ x -- etc.
So the trick is, in order to terminate, f has to generate some sort of structure, so that a later f can pattern match that structure and terminate the recursion, without actually caring about the full "value" of its parameter (?)
Unless, of course, you want to do something like create an infinite list, as luqui illustrated.
TomMD's factorial explanation is good. Fix's type signature is (a -> a) -> a. The type signature for (\recurse d -> if d > 0 then d * (recurse (d-1)) else 1) is (b -> b) -> b -> b, in other words, (b -> b) -> (b -> b). So we can say that a = (b -> b). That way, fix takes our function, which is a -> a, or really, (b -> b) -> (b -> b), and will return a result of type a, in other words, b -> b, in other words, another function!
Wait, I thought it was supposed to return a fixed point...not a function. Well it does, sort of (since functions are data). You can imagine that it gave us the definitive function for finding a factorial. We gave it a function that didn't know how to recurse (hence one of the parameters to it is a function used to recurse), and fix taught it how to recurse.
Remember how I said that f has to generate some sort of structure so that a later f can pattern match and terminate? Well that's not exactly right, I guess. TomMD illustrated how we can expand x to apply the function and step towards the base case. For his function, he used an if/then, and that is what causes termination. After repeated replacements, the in part of the whole definition of fix eventually stops being defined in terms of x and that is when it is computable and complete.
You need a way for the fixpoint to terminate. Expanding your example, it's obvious it won't finish:
fix id
--> let x = id x in x
--> id x
--> id (id x)
--> id (id (id x))
--> ...
Here is a real example of me using fix (note I don't use fix often and was probably tired / not worrying about readable code when I wrote this):
(fix (\f h -> if (pred h) then f (mutate h) else h)) q
WTF, you say! Well, yes, but there are a few really useful points here. First of all, the first argument of the function you pass to fix is usually the 'recurse' function, and the second argument is the data on which to act. Here is the same code as a named function:
getQ h
  | pred h = getQ (mutate h)
  | otherwise = h
If you're still confused then perhaps factorial will be an easier example:
fix (\recurse d -> if d > 0 then d * (recurse (d-1)) else 1) 5 -->* 120
Notice the evaluation:
fix (\recurse d -> if d > 0 then d * (recurse (d-1)) else 1) 3 -->
let x = (\recurse d -> if d > 0 then d * (recurse (d-1)) else 1) x in x 3 -->
let x = ... in (\recurse d -> if d > 0 then d * (recurse (d-1)) else 1) x 3 -->
let x = ... in (\d -> if d > 0 then d * (x (d-1)) else 1) 3
Oh, did you just see that? That x became a function inside our then branch.
let x = ... in (if 3 > 0 then 3 * (x (3 - 1)) else 1) -->
let x = ... in 3 * x 2 -->
let x = ... in 3 * (\recurse d -> if d > 0 then d * (recurse (d-1)) else 1) x 2 -->
In the above you need to remember x = f x, hence the two arguments of x 2 at the end instead of just 2.
let x = ... in 3 * (\d -> if d > 0 then d * (x (d-1)) else 1) 2 -->
And I'll stop here!
The way I understand it: fix finds a value for the function such that the function outputs the same thing you give it. The catch is, it will always choose undefined (or an infinite loop; in Haskell, undefined and infinite loops are the same), or whatever has the most undefineds in it. For example, with id:
λ <*Main Data.Function>: id undefined
*** Exception: Prelude.undefined
As you can see, undefined is a fixed point, so fix will pick that. If you instead use (\x -> 1:x):
λ <*Main Data.Function>: undefined
*** Exception: Prelude.undefined
λ <*Main Data.Function>: (\x->1:x) undefined
[1*** Exception: Prelude.undefined
So fix can't pick undefined. To make the connection to infinite loops a bit clearer:
λ <*Main Data.Function>: let y=y in y
^CInterrupted.
λ <*Main Data.Function>: (\x->1:x) (let y=y in y)
[1^CInterrupted.
Again, a slight difference. So what is the fixed point? Let us try repeat 1.
λ <*Main Data.Function>: repeat 1
[1,1,1,1,1,1, and so on
λ <*Main Data.Function>: (\x->1:x) $ repeat 1
[1,1,1,1,1,1, and so on
It is the same! Since this is the only fixed point, fix must settle on it. Sorry fix, no infinite loops or undefined for you.
As others pointed out, fix somehow captures the essence of recursion. Other answers already do a great job at explaining how fix works. So let's take a look from another angle and see how fix can be derived by generalising, starting from a specific problem: we want to implement the factorial function.
Let's first define a non-recursive factorial function. We will use it later to "bootstrap" our recursive implementation.
factorial n = product [1..n]
We approximate the factorial function by a sequence of functions: for each natural number n, factorial_n coincides with factorial up to and including n. Clearly factorial_n converges towards factorial as n goes towards infinity.
factorial_0 n = if n == 0 then 1 else undefined
factorial_1 n = if n == 0 then 1 else n * factorial_0 (n - 1)
factorial_2 n = if n == 0 then 1 else n * factorial_1 (n - 1)
factorial_3 n = if n == 0 then 1 else n * factorial_2 (n - 1)
...
Instead of writing factorial_n out by hand for every n, we implement a factory function that creates these for us. We do this by factoring the commonalities out and abstracting over the calls to factorial_[n - 1] by making them a parameter to the factory function.
factorialMaker f n = if n == 0 then 1 else n * f (n - 1)
Using this factory, we can create the same converging sequence of functions as above. For each factorial_n we need to pass a function that calculates the factorials up to n - 1. We simply use the factorial_[n - 1] from the previous step.
factorial_0 = factorialMaker undefined
factorial_1 = factorialMaker factorial_0
factorial_2 = factorialMaker factorial_1
factorial_3 = factorialMaker factorial_2
...
If we pass our real factorial function instead, we materialize the limit of the sequence.
factorial_inf = factorialMaker factorial
But since that limit is the real factorial function we have factorial = factorial_inf and thus
factorial = factorialMaker factorial
Which means that factorial is a fixed-point of factorialMaker.
Finally we derive the function fix, which returns factorial given factorialMaker. We do this by abstracting over factorialMaker and making it an argument to fix (i.e. f corresponds to factorialMaker and fix f to factorial):
fix f = f (fix f)
Now we find factorial by calculating the fixed-point of factorialMaker.
factorial = fix factorialMaker
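And a quick check in GHCi (using the definitions above; fix can be the one-liner we derived or Data.Function.fix):
Prelude Data.Function> fix factorialMaker 5
120
Prelude Data.Function> factorial 5
120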