Getting rid of unnecessary parentheses - Haskell

I wrote a function for evaluating a polynomial at a given number. The polynomial is represented as a list of coefficients (e.g. [1,2,3] corresponds to x^2+2x+3).
polyEval x p = sum (zipWith (*) (iterate (*x) 1) (reverse p))
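For instance, evaluating x^2+2x+3 at x = 2 should give 4 + 4 + 3 = 11, and it does (a worked check added here for illustration):
polyEval 2 [1,2,3]   -- zipWith (*) [1,2,4,...] [3,2,1] = [3,4,4]; sum = 11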
As you can see, I first used a lot of parentheses to group which expressions should be evaluated together. For better readability I tried to eliminate as many parentheses as possible using . and $. (In my opinion, more than two pairs of nested parentheses make the code increasingly difficult to read.) I know that function application has the highest priority and is left-associative. The . and $ are both right-associative, but . has priority 9, while $ has priority 0.
So it seemed to me that the following expression cannot be written with even fewer parentheses:
polyEval x p = sum $ zipWith (*) (iterate (*x) 1) $ reverse p
I know that we need the parentheses around (*) and (*x) to turn them into prefix functions, but is it possible to somehow remove the parentheses around iterate (*x) 1?
Also what version would you prefer for readability?
I know that there are many other ways to achieve the same thing, but I'd like to discuss my particular example, as it has a function applied to two arguments (iterate (*x) 1) as the middle argument of another function that takes three arguments.

As usual with this sort of question I prefer the OP's version to any of the alternatives that have been proposed so far. I would write
polyEval x p = sum $ zipWith (*) (iterate (* x) 1) (reverse p)
and leave it at that. The two arguments of zipWith (*) play symmetric roles in the same way that the two arguments of * do, so eta-reducing is just obfuscation.
The value of $ is that it makes the outermost structure of the computation clear: the evaluation of a polynomial at a point is the sum of something. Eliminating parentheses should not be a goal in itself.

So it might be a little puerile, but I actually really like to think of Haskell's rules in terms of food. I think of Haskell's left-associative function application f x y = (f x) y as a sort of aggressive or greedy nom, in that the function f refuses to wait for the y to come around and immediately eats the x, unless you take the time to put things in parentheses to make a sort of "argument sandwich" f (x y) (at which point the x, being uneaten, becomes hungry and eats the y). The only boundaries are the operators and the special forms.
Then within the boundaries of the special forms, the operators consume whatever is around them; finally the special forms take their time to digest the expressions around them. This is the only reason that . and $ are able to save some parentheses.
Finally, we can see that iterate (* x) 1 is probably going to need to be in a sandwich, because we don't want something to just eat iterate and stop. So there is no great way to avoid that without changing the code, unless we can somehow do away with the third argument to zipWith; but that argument contains a p, so that requires writing things in a more point-free style.
So, one solution is to change your approach! It makes a little more sense to store a polynomial as a list of coefficients in the already-reversed direction, so that your x^2 + 2 * x + 3 example is stored as [3, 2, 1]. Then we don't need to perform this complicated reverse operation. It also makes the mathematics a little simpler, as the product of two polynomials can be computed recursively by expanding (a + x * P(x)) * (b + x * Q(x)), which gives the straightforward algorithm:
newtype Poly f = Poly [f] deriving (Eq, Show)

instance Num f => Num (Poly f) where
  -- abs and signum have no sensible meaning here and are left undefined
  fromInteger n = Poly [fromInteger n]
  negate (Poly ps) = Poly (map negate ps)
  Poly f + Poly g = Poly $ summing f g where
    summing [] g = g
    summing f [] = f
    summing (x:xs) (y:ys) = (x + y) : summing xs ys
  Poly [] * _ = Poly []   -- base cases, so the recursion in r below terminates
  _ * Poly [] = Poly []
  Poly (x : xs) * Poly (y : ys) = prefix (x*y) (y_p + x_q) + r where
    y_p = Poly $ map (y *) xs
    x_q = Poly $ map (x *) ys
    prefix n (Poly m) = Poly (n : m)
    r = prefix 0 . prefix 0 $ Poly xs * Poly ys
Then your function
evaluatePoly :: Num f => Poly f -> f -> f
evaluatePoly (Poly p) x = eval p where
  eval = (sum .) . zipWith (*) $ iterate (x *) 1
lacks parentheses around iterate because eval is written in point-free style, so $ can be used to consume the rest of the expression. As you can see, it unfortunately introduces new parentheses around (sum .) to do this, so it might not be totally worth your while. I find it less readable than, say,
evaluatePoly (Poly coeffs) x = sum $ zipWith (*) powersOfX coeffs where
  powersOfX = iterate (x *) 1
I might even prefer to write powersOfX, if performance on high powers is not super-critical, as powersOfX = [x^n | n <- [0..]] or powersOfX = map (x^) [0..], but I think iterate is not too hard to understand in general.
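As a quick sanity check of the definitions above (my illustration, not part of the original answer), the x^2 + 2x + 3 example from the question becomes:
evaluatePoly (Poly [3, 2, 1]) 2    -- 3 + 2*2 + 1*2^2 = 11
Poly [3, 2, 1] * Poly [1, 1]       -- (3 + 2x + x^2) * (1 + x) = Poly [3,5,3,1]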

Perhaps breaking it down into more elementary functions will simplify things further. First define a dot product function that multiplies two lists element-wise and sums the results (an inner product):
dot x y = sum $ zipWith (*) x y
and change the order of terms in polyEval to minimize the parentheses:
polyEval x p = dot (reverse p) $ iterate (* x) 1
This reduces it to three pairs of parentheses.
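For example (an illustrative check, not part of the answer):
dot [1,2,3] [4,5,6]   -- 4 + 10 + 18 = 32
polyEval 2 [1,2,3]    -- x^2 + 2x + 3 at x = 2, i.e. 11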

Related

Adding two numbers together without using the + operator in Haskell

I want to add two positive numbers together without the use of any basic operators like + for addition. I've already worked my way around that (in the add''' function), I think; it may not be efficient, but that's not the point right now. However, I am getting lots of type errors which I have no idea how to handle, and this is very confusing for me, as it works on paper and I've come from Python.
-- intended usage: add 1245 7489

--add :: Int -> Int -> Int
add x y = add'' (zip (add' x) (add' y))
  where
    add' :: Int -> [Int]
    add' 0 = []
    add' x = add' (x `div` 10) ++ [x `mod` 10]
    -- conversion: [1,2,4,5] and [7,4,8,9], then zipping them together: [(1,7),(2,4),...]
    add'' :: [(Int,Int)] -> [Int]
    add'' (x:xs) = [(add''' (head x) (last x))] ++ add'' xs
    -- summary: [8,6,...]; what happens when the sum reaches 10 is not implemented yet
      where
        --add''' :: (Int,Int) -> Int
        add''' x y = last (take (succ y) $ iterate succ x)
-- adding two numbers together
You can't use head and last on tuples. (Frankly, you should never use these functions at all, because they're unsafe (partial), but at least they can be used on lists.) In Haskell, lists are something completely different from tuples. To get at the elements of a tuple, use pattern matching.
add'' ((x,y):xs) = [add''' x y] ++ add'' xs
(To get at the elements of a list, pattern matching is very often the best tool, too.) Alternatively, you can use fst and snd; these do on 2-tuples what you apparently thought head and last would do.
Be clear about which functions are curried and which aren't. The way you write add''', its type signature is actually Int -> Int -> Int. That is isomorphic to (Int, Int) -> Int, but it's still not the same type to the type checker.
The result of add'' is [Int], but you're trying to use it as the Int result of add. That can't work; you need to translate from digits back to a number again.
add'' doesn't handle the empty-list case. That's fixed easily enough, but better than doing this recursion by hand is to use standard combinators. In your case, this only needs to work element-wise anyway, so you can simply use map, or do it right in the zipping, with zipWith. Then you don't need to unwrap any tuples at all, because zipWith works with a curried function.
A clean version of your attempt:
add :: Int -> Int -> Int
add x y = fromDigits 0 $ zipWith addDigits (toDigits x []) (toDigits y [])
  where
    fromDigits :: Int -> [Int] -> Int
    fromDigits acc [] = acc
    fromDigits acc (d:ds)
       = acc `seq`            -- strict accumulator, to avoid thunking
         fromDigits (acc*10 + d) ds

    toDigits :: Int -> [Int] -> [Int]                 -- yield a difference list,
    toDigits 0 = id                                   -- because we're consing
    toDigits x = toDigits (x`div`10) . ((x`mod`10):)  -- left-associatively.

    addDigits :: Int -> Int -> Int
    addDigits x y = last $ take (succ x) $ iterate succ y
Note that zipWith requires both numbers to have the same number of digits (as does zip).
Also, yes, I'm using + in fromDigits, making this whole thing pretty futile. In practice you would of course use binary, then it's just a bitwise-or and the multiplication is a left shift. What you actually don't need to do here is take special care with 10-overflow, but that's just because of the cheat of using + in fromDigits.
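To make the binary remark concrete, here is a small sketch of my own (the name fromBits is mine, not from the answer): reassembling a most-significant-bit-first list of bits without (+), using only a left shift and a bitwise or.
import Data.Bits (shiftL, (.|.))

fromBits :: Int -> [Int] -> Int
fromBits acc []     = acc
fromBits acc (b:bs) = fromBits ((acc `shiftL` 1) .|. b) bs
-- e.g. fromBits 0 [1,0,1,1] == 11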
By head and last you meant fst and snd, but you don't need them at all, the components are right there:
add'' :: [(Int, Int)] -> [Int]
add'' (pair : pairs) = [(add''' pair)] ++ add'' pairs
  where
    add''' :: (Int, Int) -> Int
    add''' (x, y) = last (take (succ y) $ iterate succ x)
    --            = iterate succ x !! y
    --            = [x ..] !! y          -- nice idea for an exercise!
Now the big question that remains is what to do with those big scary 10-and-over numbers. Here's a thought: produce a digit and a carry with
= ([(d, 0) | d <- [x .. 9]] ++ [(d, 1) | d <- [0 ..]]) !! y
Can you take it from here? Hint: reverse order of digits is your friend!
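In case it helps to see where the hint leads, here is one possible continuation (a sketch of my own; the names sumDigits and addRev are not from the answer). It works on digit lists given least-significant-digit first and threads the carry through the recursion:
sumDigits :: Int -> Int -> (Int, Int)    -- (digit, carry), no (+) involved
sumDigits x y = ([(d, 0) | d <- [x .. 9]] ++ [(d, 1) | d <- [0 ..]]) !! y

addRev :: Int -> [Int] -> [Int] -> [Int] -- carry, then both digit lists reversed
addRev 0 []     []     = []
addRev c []     []     = [c]
addRev c []     ys     = addRev c [0] ys
addRev c xs     []     = addRev c xs  [0]
addRev c (x:xs) (y:ys) =
  let (d, c') = sumDigits (if c == 1 then succ x else x) y
  in  d : addRev c' xs ys
-- e.g. addRev 0 [8,5] [7,6] == [5,2,1]   (58 + 67 = 125)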
The official answer my professor gave (it works on positive and negative numbers too):
add 0 y = y
add x y
  | x > 0     = add (pred x) (succ y)
  | otherwise = add (succ x) (pred y)
The other answers cover what's gone wrong in your approach. From a theoretical perspective, though, they each have some drawbacks: they either land you at [Int] and not Int, or they use (+) in the conversion back from [Int] to Int. What's more, they use mod and div as subroutines in defining addition -- which would be okay, but then to be theoretically sound you would want to make sure that you could define mod and div themselves without using addition as a subroutine!
Since you say efficiency is no concern, I propose using the usual definition of addition that mathematicians give, namely: 0 + y = y, and (x+1) + y = (x + y)+1. Here you should read +1 as an operation separate from addition, a more primitive one: the one that just increments a number. We spell it succ in Haskell (and its "inverse" is pred). With this theoretical definition in mind, the Haskell almost writes itself:
add :: Int -> Int -> Int
add 0 y = y
add x y = succ (add (pred x) y)
So: compared to other answers, we can take an Int and return an Int, and the only subroutines we use are ones that "feel" more primitive: succ, pred, and checking whether a number is zero or nonzero. (And we land at only three short lines of code... about a third as long as the shortest proposed alternative.) Of course the price we pay is very bad performance... try add (2^32) 0!
Like the other answers, this only works for positive numbers. When you are ready for handling negative numbers, we should chat again -- there's some fascinating mathematical tricks to pull.
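For what it's worth, one simple way to extend this to a negative first argument (my own sketch, not the "fascinating tricks" the answer alludes to):
add :: Int -> Int -> Int
add 0 y = y
add x y
  | x > 0     = succ (add (pred x) y)
  | otherwise = pred (add (succ x) y)
-- e.g. add (-2) 5 == 3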

Is it possible to define patterns of composition in functions or functors?

Consider the following situation. I define a function that processes a list of elements in the typical way: it does an operation on the head and calls itself recursively on the rest of the list. But under a certain condition on the element (being negative, being a special character, ...) it changes the sign of the rest of the list before continuing. Like this:
f [] = []
f (x : xs)
  | x >= 0    = g x : f xs
  | otherwise = h x : f (opposite xs)

opposite [] = []
opposite (y : ys) = negate y : opposite ys
Since opposite (opposite xs) = xs, I end up in a situation with redundant opposite operations, accumulating opposite . opposite . opposite ....
It happens with other operations instead of opposite too: any operation whose composition with itself is the identity, like reverse.
Is it possible to overcome this situation using functors / monads / applicatives / arrows? (I don't understand those concepts well.) What I would like is to be able to define a property, or a composition pattern, like this:
opposite . opposite = id -- or, opposite (opposite y) = y
so that the compiler or interpreter avoids calculating the opposite of the opposite (this is possible, and natively simple, in some concatenative languages).
You can solve this without any monads, since the logic is quite simple:
f g h = go False where
  go _ [] = []
  go b (x':xs)
    | x >= 0    = g x : go b xs
    | otherwise = h x : go (not b) xs
    where x = (if b then negate else id) x'
The body of the go function is almost identical to that of your original f function. The only difference is that go decides whether the element should be negated or not based on the boolean value passed to it from previous calls.
Sure, just keep a bit of state telling whether to apply negate to the current element or not. Thus:
-- This runs in the State Bool monad (Control.Monad.State); run it with,
-- e.g., evalState (f xs) True, where True means "no pending negation".
f = mapM $ \x_ -> do
  x <- gets (\b -> if b then x_ else negate x_)
  if x >= 0
    then return (g x)
    else modify not >> return (h x)
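For reference, a self-contained version of the above (the wiring and the concrete g and h are placeholders of my own, purely for illustration):
import Control.Monad.State (evalState, gets, modify)

-- hypothetical stand-ins for the question's g and h
g, h :: Int -> Int
g = (* 2)
h = abs

f :: [Int] -> [Int]
f xs = evalState (mapM step xs) True   -- True = no pending negation
  where
    step x_ = do
      x <- gets (\b -> if b then x_ else negate x_)
      if x >= 0
        then return (g x)
        else modify not >> return (h x)
-- e.g. f [1, -2, 3] == [2, 2, 3], matching the question's recursive definition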

Partial application of functions and currying, how to make a better code instead of a lot of maps?

I am a beginner at Haskell and I am trying to grasp it.
I am having the following problem:
I have a function that takes 5 parameters, let's say
f x y w z a = x - y - w - z - a
And I would like to apply it while only changing the variable x from 1 to 10, whereas y, w, z and a will always be the same. The implementation I came up with is the following, but I think there must be a better way.
Let's say I would like to use:
x from 1 to 10
y = 1
w = 2
z = 3
a = 4
Accordingly to this I managed to apply the function as following:
map ($ 4) $ map ($ 3) $ map ($ 2) $ map ($ 1) (map f [1..10])
I think there must be a better way to apply a lot of missing parameters to partially applied functions without having to use too many maps.
All the suggestions so far are good. Here's another, which might seem a bit weird at first, but turns out to be quite handy in lots of other situations.
Some type-forming operators, like [] (the operator that maps a type of elements, e.g. Int, to the type of lists of those elements, [Int]), have the property of being Applicative. For lists, that means there is some way, denoted by the operator <*>, pronounced "apply", to turn lists of functions and lists of arguments into lists of results.
(<*>) :: [s -> t] -> [s] -> [t] -- one instance of the general type of <*>
rather than your ordinary application, given by a blank space, or a $
($) :: (s -> t) -> s -> t
The upshot is that we can do ordinary functional programming with lists of things instead of things: we sometimes call it "programming in the list idiom". The only other ingredient is that, to cope with the situation when some of our components are individual things, we need an extra gadget
pure :: x -> [x] -- again, one instance of the general scheme
which wraps a thing up as a list, to be compatible with <*>. That is pure moves an ordinary value into the applicative idiom.
For lists, pure just makes a singleton list and <*> produces the result of every pairwise application of one of the functions to one of the arguments. In particular
pure f <*> [1..10] :: [Int -> Int -> Int -> Int -> Int]
is a list of functions (just like map f [1..10]) which can be used with <*> again. The rest of your arguments for f are not listy, so you need to pure them.
pure f <*> [1..10] <*> pure 1 <*> pure 2 <*> pure 3 <*> pure 4
For lists, this gives
[f] <*> [1..10] <*> [1] <*> [2] <*> [3] <*> [4]
i.e. the list of ways to make an application from the f, one of the [1..10], the 1, the 2, the 3 and the 4.
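For instance, on smaller lists (an illustration of my own):
pure 3 :: [Int]              -- [3]
[(+1), (*2)] <*> [10, 20]    -- [11, 21, 20, 40]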
The opening pure f <*> s is so common, it's abbreviated f <$> s, so
f <$> [1..10] <*> [1] <*> [2] <*> [3] <*> [4]
is what would typically be written. If you can filter out the <$>, pure and <*> noise, it kind of looks like the application you had in mind. The extra punctuation is only necessary because Haskell can't tell the difference between a listy computation of a bunch of functions or arguments and a non-listy computation of what's intended as a single value but happens to be a list. At least, however, the components are in the order you started with, so you see more easily what's going on.
Esoterica. (1) in my (not very) private dialect of Haskell, the above would be
(|f [1..10] (|1|) (|2|) (|3|) (|4|)|)
where each idiom bracket, (|f a1 a2 ... an|) represents the application of a pure function to zero or more arguments which live in the idiom. It's just a way to write
pure f <*> a1 <*> a2 ... <*> an
Idris has idiom brackets, but Haskell hasn't added them. Yet.
(2) In languages with algebraic effects, the idiom of nondeterministic computation is not the same thing (to the typechecker) as the data type of lists, although you can easily convert between the two. The program becomes
f (range 1 10) 2 3 4
where range nondeterministically chooses a value between the given lower and upper bounds. So, nondeterminism is treated as a local side-effect, not a data structure, enabling operations for failure and choice. You can wrap nondeterministic computations in a handler which gives meanings to those operations, and one such handler might generate the list of all solutions. That's to say, the extra notation to explain what's going on is pushed to the boundary, rather than peppered through the entire interior, like those <*> and pure.
Managing the boundaries of things rather than their interiors is one of the few good ideas our species has managed to have. But at least we can have it over and over again. It's why we farm instead of hunting. It's why we prefer static type checking to dynamic tag checking. And so on...
Others have shown ways you can do it, but I think it's useful to show how to transform your version into something a little nicer. You wrote
map ($ 4) $ map ($ 3) $ map ($ 2) $ map ($ 1) (map f [1..10])
map obeys two fundamental laws:
map id = id. That is, if you map the identity function over any list, you'll get back the same list.
For any f and g, map f . map g = map (f . g). That is, mapping over a list with one function and then another one is the same as mapping over it with the composition of those two functions.
The second map law is the one we want to apply here.
map ($ 4) $ map ($ 3) $ map ($ 2) $ map ($ 1) (map f [1..10])
=
map ($ 4) . map ($ 3) . map ($ 2) . map ($ 1) . map f $ [1..10]
=
map (($ 4) . ($ 3) . ($ 2) . ($ 1) . f) [1..10]
What does ($ a) . ($ b) do? \x -> ($ a) $ ($ b) x = \x -> ($ a) $ x b = \x -> x b a. What about ($ a) . ($ b) . ($ c)? That's (\x -> x b a) . ($ c) = \y -> (\x -> x b a) $ ($ c) y = \y -> y c b a. The pattern now should be clear: ($ a) . ($ b) ... ($ y) = \z -> z y x ... c b a. So ultimately, we get
map ((\z -> z 1 2 3 4) . f) [1..10]
=
map (\w -> (\z -> z 1 2 3 4) (f w)) [1..10]
=
map (\w -> f w 1 2 3 4) [1..10]
=
map (\x -> ($ 4) $ ($ 3) $ ($ 2) $ ($ 1) $ f x) [1..10]
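As a final numeric check (my own, with f x y w z a = x - y - w - z - a):
map (\x -> f x 1 2 3 4) [1..10]
-- [-9,-8,-7,-6,-5,-4,-3,-2,-1,0], the same list the original expression produces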
In addition to what the other answers say, it might be a good idea to reorder the parameters of your function, especially since x is the parameter that you vary over like that:
f y w z a x = x - y - w - z - a
If you make it so that the x parameter comes last, you can just write
map (f 1 2 3 4) [1..10]
This won't work in all circumstances of course, but it is good to see which parameters are more likely to vary over a series of calls and put them towards the end of the argument list, and which tend to stay the same and put those towards the start. When you do this, currying and partial application will usually help you out more than they would otherwise.
Assuming you don't mind introducing a name, you can simply define a new function that takes x and calls f. If a function definition doesn't fit where you need it (though you can generally use let or where), you can use a lambda instead.
f' x = f x 1 2 3 4
Or with a lambda
\x -> f x 1 2 3 4
Currying won't do you any good here, because the argument you want to vary (enumerate) isn't the last one. Instead, try something like this.
map (\x -> f x 1 2 3 4) [1..10]
The general solution in this situation is a lambda:
\x -> f x 1 2 3 4
however, if you find yourself doing this very often with the same fixed arguments, it would make sense to make the varying argument the last parameter instead:
\x -> f 1 2 3 4 x
in which case currying applies perfectly well and you can just replace the above expression with
f 1 2 3 4
so in turn you could write:
map (f 1 2 3 4) [1..10]

Define function in Haskell using foldr

I'm trying to define a function in Haskell using the foldr function:
fromDigits :: [Int] -> Int
This function takes a list of Ints (each one ranging from 0 to 9) and converts them to a single Int. For example:
fromDigits [0,1] = 10
fromDigits [4,3,2,1] = 1234
fromDigits [2,3,9] = 932
fromDigits [2,3,9,0,1] = 10932
Anyway, I have no trouble defining this using explicit recursion or even using zipWith:
fromDigits n = sum (zipWith (*) n (map ((^)10) [0..]))
But now I have to define it using foldr, and I don't know how to get the powers of 10. What I have is:
fromDigits xs = foldr (\x acc -> (x*10^(???)) + acc) 0 xs
How can I get them to decrease? I know I can start with (length xs - 1) but what then?
Best Regards
You were almost there:
your
fromDigits xs = foldr (\x acc -> (x*10^(???)) + acc) 0 xs
is the solution with 2 little changes:
fromDigits = foldr (\x acc -> acc*10 + x) 0
(By the way, I left out the xs on both sides; that's not necessary.)
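A quick check against the examples in the question (my addition):
fromDigits [0,1]        -- 10
fromDigits [4,3,2,1]    -- 1234
fromDigits [2,3,9,0,1]  -- 10932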
Another option would be
fromDigits = foldl (\x acc -> read $ (show x) ++ (show acc)) 0
(note that this variant assembles the digits in the given order, i.e. it treats the first list element as the most significant digit, so it does not match the reversed-digit examples in the question).
The nice thing about foldr is that it's so extremely easy to visualise!
foldr f init [a, b, ..., z]
  ≡ foldr f init $ a : b : ... : z : []
  ≡ a `f` (b `f` (... (z `f` init)...))
  ≡ f a (f b (... (f z init)...))
so, as you see, the j-th element from the end of the list ends up nested inside j applications of f, while the head element is merely passed once, to the left of the outermost call. For your application, the head element is the last digit (the units digit) of the number. How should that influence the outcome? Well, it's just added to the result, isn't it?
15 = 10 + 5
623987236705839 = 623987236705830 + 9
– obvious. Then the question is, how do you take care of the other digits? Well, to employ the above trick, you first need to make sure there's a 0 in the last place of the carried subresult. A 0 that does not come from the supplied digits! How do you add such a zero?
That should really be enough hint given now.
The trick is, you don't need to compute the power of 10 each time from scratch, you just need to compute it based on the previous power of ten (i.e. multiply by 10). Well, assuming you can reverse the input list.
(But the lists you give above are already in reverse order, so arguably you should be able to re-reverse them and just say that your function takes a list of digits in the correct order. If not, then just divide by 10 instead of multiplying by 10.)
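One way to read this hint as actual code (a sketch of my own, not the answer's): reverse the list so that foldr reaches the units digit first, and thread the current power of ten through the fold as extra state.
fromDigits :: [Int] -> Int
fromDigits = fst . foldr step (0, 1) . reverse
  where
    step d (acc, p) = (acc + d * p, p * 10)
-- e.g. fromDigits [2,3,9,0,1] == 10932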

Confusion about function composition in Haskell

Consider the following function definition in GHCi.
let myF = sin . cos . sum
where . stands for the composition of two functions (right-associative). I can then call
myF [3.14, 3.14]
and it gives me the desired result. Apparently, it passes the list [3.14, 3.14] to the function sum, and its result is passed on to cos, and so on. However, if I do this in the interpreter
let myF y = sin . cos . sum y
or
let myF y = sin . cos (sum y)
then I run into trouble. Modifying it into the following gives me the desired result:
let myF y = sin . cos $ sum y
or
let myF y = sin . cos . sum $ y
The type of (.) suggests that there should not be a problem with the following form, since sum y is also a function (isn't it? After all, everything is a function in Haskell?)
let myF y = sin . cos . sum y -- this should work?
What is more interesting is that to make it work with two (or more) arguments (think of passing the list [3.14, 3.14] as two arguments x and y), I have to write the following:
let (myF x) y = (sin . cos . (+ x)) y
myF 3.14 3.14 -- it works!
let myF = sin . cos . (+)
myF 3.14 3.14 -- doesn't work!
There is some discussion on the HaskellWiki regarding this form, which they call 'pointfree' style: http://www.haskell.org/haskellwiki/Pointfree . After reading this article, I suspect that this form is different from the composition of two lambda expressions. I get confused when I try to draw a line separating these two styles.
Let's look at the types. For sin and cos we have:
cos, sin :: Floating a => a -> a
For sum:
sum :: Num a => [a] -> a
Now, sum y turns that into a
sum y :: Num a => a
which is a value, not a function (you could call it a function with no arguments, but that is very misleading; you would then also need a separate name for () -> a functions. There was a discussion somewhere about this, but I cannot find the link now; Conal spoke about it).
Anyway, trying cos . sum y won't work, because . expects both its arguments to have function types, a -> b and b -> c (its signature is (b -> c) -> (a -> b) -> (a -> c)), and sum y cannot be given such a type. That's why you need to include parentheses or use $.
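To make the parse explicit (my annotation, not part of the answer):
sin . cos . sum y     -- parses as sin . cos . (sum y); (sum y) is a value, so (.) rejects it
sin . cos . sum $ y   -- parses as (sin . cos . sum) $ y; compose first, then apply to y
sin . cos $ sum y     -- parses as (sin . cos) $ (sum y); also fine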
As for point-free style, the simplest translation recipe is this:
take your function and move its last argument to the end of the expression, so that the function is applied to it explicitly. For example, in the case of mysum x y = x + y we have y at the end, but we cannot remove it right away. However, rewriting it as mysum x y = (x +) y makes it work.
remove said argument. In our case mysum x = (x +)
repeat until you have no more arguments. Here mysum = (+)
(I chose a simple example, for more convoluted cases you'll have to use flip and others)
No, sum y is not a function. It's a number, just like sum [1, 2, 3] is. It therefore makes complete sense that you cannot use the function composition operator (.) with it.
Not everything in Haskell is a function.
The obligatory cryptic answer is this: (space) binds more tightly than .
Most whitespace in Haskell can be thought of as a very high-fixity $ (the "apply" function). w x . y z is basically the same as (w $ x) . (y $ z)
When you are first learning about $ and . you should also make sure you learn about (space) as well, and make sure you understand how the language semantics implicitly parenthesize things in ways that may not (at first blush) appear intuitive.
