Square of the sum minus sum of the squares in J (or how to take the train?)

Still in the learning process of J... The problem to solve now is to express the square of the sum minus the sum of the squares of the first 100 natural numbers.
The naive solution is
(*:+/>:i.100) - (+/*:>:i.100)
Now, I want to use a fork so that I write the list >:i.100 only once. My fork should look like this:
h
/ \
f g
| |
x x
where f is the square of the sum, g is the sum of the squares, and h is minus. So, naively, I wrote:
((*:+/) - (+/*:)) >:i.100
but it gives me a domain error. Why? I also tried:
(+/ (*: - +/) *:) >: i.100
and this time, it gives me a long list of numbers... I guess it has something to do with the @: conjunction, but I still don't figure out what At does... Continuing my quest, I finally got
((+/ * +/) - +/ @: *:) >: i.100
but I don't like the fact that I manually compute the squares instead of using the *: operator, and I don't really understand why I need the @: conjunction. Could somebody give me some light on this problem?

(+/*:) and (*:+/) don't do what you think they do.
Actually, your f is Q (S x) (square of sum of x) and your g is S (Q x) (sum of squares of x). You can see that for any f, g it holds that f (g y) = (f @: g) y.
So, you can write
(Q (S x)) h (S (Q x))
as
((Q @: S) x) h ((S @: Q) x)
which is now equivalent to
(f x) h (g x)
or
(f h g) x
Thus,
((*: @: (+/)) - (+/ @: *:)) >: i.1000
Note also that *: @: (+/) is not the same as *: @: +/, since +/ is not a single verb (like *:) but a verb built from a verb (+) and an adverb (/).
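Since the rest of this page is Haskell, a rough Haskell analogue of the corrected train may help: (.) plays the role of J's @: (At), and the fork's middle verb - becomes pointwise subtraction. This is a sketch with my own names, not J:

```haskell
-- Haskell sketch of ((*: @: (+/)) - (+/ @: *:)) applied to 1..100.
-- (.) corresponds to J's @:; the fork's subtraction is done pointwise.
squareOfSum, sumOfSquares :: [Integer] -> Integer
squareOfSum  = (^ 2) . sum        -- Q @: S
sumOfSquares = sum . map (^ 2)    -- S @: Q

difference :: [Integer] -> Integer
difference xs = squareOfSum xs - sumOfSquares xs

main :: IO ()
main = print (difference [1 .. 100])  -- 25164150
```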

Related

Find Haskell functions f, g such that f g = f . g

While learning Haskell, I came across a challenge to find two functions f and g, such that f g and f . g are equivalent (and total, so things like f = undefined or f = (.) f don't count). The given solution is that f and g are both equal to \x -> x . x (or join (.)).
(I note that this isn't Haskell-specific; it can be expressed in pure combinatory logic as "find f and g such that f g = B f g", and the given solution would then translate to f = g = W B.)
I understand why the given solution works when I expand it out, but I don't understand how you'd ever find it if you didn't already know it. Here's how far I can get:
f g = f . g (given)
f g z = (f . g) z (eta-expansion of both sides)
f g z = f (g z) (simplify the RHS)
And I don't know how to proceed from there. What would I do next in trying to find a solution?
I discovered that it's possible to find a family of solutions by considering Church numeral calculation. In the Church encoding, multiplication is performed by composing the Church numerals, and exponentiation is performed by applying the base to the exponent. Thus, if f is the Church encoding of some number x, and g is the Church encoding of some number y, then f g = f . g implies y^x = x*y. Any nonnegative integer solutions to this equation translate to solutions to the original problem. Examples:
x=1, y=0, f=id, g=const id
x=1, y=1, f=id, g=id
x=1, y=2, f=id, g=join (.)
Since y^1 = y = 1*y for all y, it makes sense that f=id works for all Church numerals g. This is indeed the case, and in fact, as Rein Henrichs pointed out, it's true for all g, and this is easily verifiable by inspection.
x=2, y=0, f=join (.), g=const id
x=2, y=2, f=join (.), g=join (.)
x=3, y=0, f=(.) <*> join (.), g=const id
Since 0^x = 0 = x*0 for all positive x, it makes sense that g=const id works for all positive Church numerals f. (It does not work for f=const id, Church numeral 0, which makes sense since 0^0 is an indeterminate form.)
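The x=2, y=2 solution can be sanity-checked directly by applying f g and f . g to a sample function and argument (the name two below is mine):

```haskell
import Control.Monad (join)

-- Church numeral 2 in both roles: two = \x -> x . x
two :: (a -> a) -> (a -> a)
two = join (.)

main :: IO ()
main = do
  print (two two succ (0 :: Int))      -- f g: application, 2^2 = 4
  print ((two . two) succ (0 :: Int))  -- f . g: composition, 2*2 = 4
```

Both lines print 4, matching y^x = x*y for x = y = 2.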

Getting rid of unnecessary parentheses

I wrote a function for evaluating a polynomial at a given number. The polynomial is represented as a list of coefficients (e.g. [1,2,3] corresponds to x^2+2x+3).
polyEval x p = sum (zipWith (*) (iterate (*x) 1) (reverse p))
As you can see, I first used a lot of parentheses to group the expressions to be evaluated. For better readability I tried to eliminate as many parentheses as possible using . and $. (In my opinion, more than two pairs of nested parentheses make code more and more difficult to read.) I know that function application has the highest priority and is left-associative. The . and $ are both right-associative, but . has priority 9 while $ has priority 0.
So it seemed to me that the following expression cannot be written with even fewer parentheses:
polyEval x p = sum $ zipWith (*) (iterate (*x) 1) $ reverse p
I know that we need parentheses for (*) and (*x) to turn them into prefix functions, but is it possible to somehow remove the parentheses around iterate (*x) 1?
Also what version would you prefer for readability?
I know that there are many other ways to achieve the same, but I'd like to discuss my particular example, as it has a function applied to two arguments (iterate (*x) 1) as the middle argument of another function that takes three arguments.
As usual with this sort of question I prefer the OP's version to any of the alternatives that have been proposed so far. I would write
polyEval x p = sum $ zipWith (*) (iterate (* x) 1) (reverse p)
and leave it at that. The two arguments of zipWith (*) play symmetric roles in the same way that the two arguments of * do, so eta-reducing is just obfuscation.
The value of $ is that it makes the outermost structure of the computation clear: the evaluation of a polynomial at a point is the sum of something. Eliminating parentheses should not be a goal in itself.
So it might be a little puerile, but I actually really like to think of Haskell's rules in terms of food. I think of Haskell's left-associative function application f x y = (f x) y as a sort of aggressive or greedy nom, in that the function f refuses to wait for the y to come around and immediately eats the x, unless you take the time to put these things in parentheses to make a sort of "argument sandwich" f (x y) (at which point the x, being uneaten, becomes hungry and eats the y). The only boundaries are the operators and the special forms.
Then within the boundaries of the special forms, the operators consume whatever is around them; finally the special forms take their time to digest the expressions around them. This is the only reason that . and $ are able to save some parentheses.
Finally, we can see that iterate (* x) 1 is probably going to need to be in a sandwich because we don't want something to just eat iterate and stop. So there is no great way to avoid that without changing the code, unless we can somehow do away with the third argument to zipWith -- but that argument contains a p, so that requires making the whole thing more point-free.
So, one solution is to change your approach! It makes a little more sense to store a polynomial as a list of coefficients in the already-reversed direction, so that your x^2 + 2 * x + 3 example is stored as [3, 2, 1]. Then we don't need to perform this complicated reverse operation. It also makes the mathematics a little simpler as the product of two polynomials can be rewritten recursively as (a + x * P(x)) * (b + x * Q(x)) which gives the straightforward algorithm:
newtype Poly f = Poly [f] deriving (Eq, Show)

instance Num f => Num (Poly f) where
    fromInteger n = Poly [fromInteger n]
    negate (Poly ps) = Poly (map negate ps)
    Poly f + Poly g = Poly $ summing f g where
        summing [] g = g
        summing f [] = f
        summing (x : xs) (y : ys) = (x + y) : summing xs ys
    Poly (x : xs) * Poly (y : ys) = prefix (x * y) (y_p + x_q) + r where
        y_p = Poly $ map (y *) xs
        x_q = Poly $ map (x *) ys
        prefix n (Poly m) = Poly (n : m)
        r = prefix 0 . prefix 0 $ Poly xs * Poly ys
Then your function
evaluatePoly :: Num f => Poly f -> f -> f
evaluatePoly (Poly p) x = eval p where
    eval = (sum .) . zipWith (*) $ iterate (x *) 1
lacks parentheses around iterate because the eval is written in pointfree style, so $ can be used to consume the rest of the expression. As you can see it unfortunately leaves some new parentheses around (sum .) to do this, though, so it might not be totally worth your while. I find the latter less readable than, say,
evaluatePoly (Poly coeffs) x = sum $ zipWith (*) powersOfX coeffs where
    powersOfX = iterate (x *) 1
I might even prefer to write the latter, if performance on high powers is not super-critical, as powersOfX = [x^n | n <- [0..]] or powersOfX = map (x^) [0..], but I think iterate is not too hard to understand in general.
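A quick check (my own snippet) that the three proposed definitions of powersOfX agree on their first few terms:

```haskell
main :: IO ()
main = do
  let x = 3 :: Integer
      a = take 5 (iterate (x *) 1)
      b = take 5 [x ^ n | n <- [0 ..]]
      c = take 5 (map (x ^) [0 ..])
  -- all three enumerate 1, x, x^2, x^3, ...
  print (a == b && b == c)  -- True
```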
Perhaps breaking it down to more elementary functions will simplify further. First define a dot product function to multiply two arrays (inner product).
dot x y = sum $ zipWith (*) x y
and change the order of terms in polyEval to minimize the parentheses:
polyEval x p = dot (reverse p) $ iterate (* x) 1
This reduces it to three pairs of parentheses.
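To be sure the refactoring preserves behaviour, both versions can be compared on the example polynomial x^2 + 2x + 3 at x = 2, which should give 11 either way:

```haskell
-- original version
polyEval :: Num a => a -> [a] -> a
polyEval x p = sum (zipWith (*) (iterate (* x) 1) (reverse p))

-- dot-product version
dot :: Num a => [a] -> [a] -> a
dot a b = sum $ zipWith (*) a b

polyEval' :: Num a => a -> [a] -> a
polyEval' x p = dot (reverse p) $ iterate (* x) 1

main :: IO ()
main = print (polyEval 2 [1, 2, 3], polyEval' 2 [1, 2, 3])  -- (11,11)
```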

Haskell dot operator: what difference does it make exactly?

I'm confused about a thing with the Haskell dot operator. What I have read about it is that it basically creates a new function, composed of 2 other functions. E.g.:
f(g x) = f . g
(Omitting the parameter)
However, what would be the difference if I just omitted the dot as well? Like:
f . g =? f g
Because in both cases g will be applied to the argument(s) passed in, and then f will be applied to the result.
So I don't see the difference between those two, but maybe there's a difference or there would be one when it's more complex? But I don't see it right now so if anyone could help me out on this it'd be much appreciated!
The expression
h = f . g
creates a new function h(...) which computes f(g(...)). The composition can be built without even calling f. However,
h = f g
passes g to f and assigns the result of that to h. In this case, f is called when h is evaluated.
Here is a proof of them being different:
Prelude> (const . id) True False
True
Prelude> (const id) True False
False
You just need to be careful about reducing these definitions.
(f . g) x = f (g x)
this is how it is defined; nothing else. In particular, it is not f g x which is the same as (f g) x by definition.
In f (g x), g expects an argument and produces a value; f expects an argument and gets the value that (g x) produced; all is well.
But if with the same functions we write (f g) x then f receives g - a function - as the value of its parameter. Presumably it expected something else, a number say. And then the value it returns will be used as a function, and called with x as an argument! A total mismatch.
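A concrete illustration of the mismatch (the names double and addThree are my own):

```haskell
double, addThree :: Int -> Int
double = (* 2)
addThree = (+ 3)

-- composition builds a new function; nothing is applied yet
composed :: Int -> Int
composed = double . addThree

main :: IO ()
main = print (composed 4)  -- double (addThree 4) = 2 * 7 = 14

-- 'double addThree' would be rejected by the type checker:
-- double wants an Int, but addThree is a function.
```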

What is this pattern of folding and iteration?

Imagine you need to fold over a sequence, and want to know also the intermediate values at several points along the range. This is what I've used for this:
[a,b,c] = map fst . tail $ chain [g i, g j, g k] (zero, sequence)
g :: Integer -> (a,b) -> (a,b)
chain (f:fs) x = x : chain fs (f x)
chain [] x = [x]
The function g consumes a specified portion of an input sequence (of length i, j, etc.), starting with some initial value and producing a result of the same type, to be fed into the next invocation. Consuming the sequence several times for different lengths, each time starting over from the beginning with the same initial value, would of course be inefficient, both time- and space-wise.
So on the one hand we fold over this sequence of integers - interim points on the sequence; on the other hand we iterate this function, g. What is it? Am I missing something basic here? Can this be somehow expressed with the regular repertoire of folds, etc.?
EDIT: Resolved: the above is simply
[a,b,c] = map fst . tail $ scanl (flip g) (zero, sequence) [i, j, k]
Interesting how a modifiable iteration actually turns out to be a fold over the list of modifiers.
Try scanl: http://www.haskell.org/hoogle/?hoogle=scanl
scanl is similar to foldl, but returns a list of successive reduced values from the left:
scanl f z [x1, x2, ...] == [z, z `f` x1, (z `f` x1) `f` x2, ...]
Note that
last (scanl f z xs) == foldl f z xs
To elaborate on Marcin's comment, you basically want:
intermediates = scanl step zero sequence
map (\n -> intermediates !! n) [i, j, k]
step is not g, but rather just the part of g that consumes a single element of the list sequence.
Also, accept Marcin's as the correct answer.
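A minimal sketch of the idea, using running sums in place of g and reading off the intermediates after 3, 5, and 10 elements:

```haskell
main :: IO ()
main = do
  -- scanl keeps every partial fold, so interim values are just lookups
  let intermediates = scanl (+) 0 [1 .. 10 :: Int]
  print (map (intermediates !!) [3, 5, 10])  -- [6,15,55]
```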

Confusion about function composition in Haskell

Consider the following function definition in ghci.
let myF = sin . cos . sum
where . stands for the composition of two functions (right-associative). This I can call as
myF [3.14, 3.14]
and it gives me the desired result. Apparently, it passes the list [3.14, 3.14] to the function sum, whose result is passed to cos, and so on. However, if I do this in the interpreter
let myF y = sin . cos . sum y
or
let myF y = sin . cos (sum y)
then I run into trouble. Modifying this into either of the following gives me the desired result.
let myF y = sin . cos $ sum y
or
let myF y = sin . cos . sum $ y
The type of (.) suggests that there should not be a problem with the following form, since 'sum y' is also a function (isn't it? After all, everything is a function in Haskell?)
let myF y = sin . cos . sum y -- this should work?
What is more interesting is that to make it work with two (or many) arguments (think of passing the list [3.14, 3.14] as two arguments x and y), I have to write the following:
let (myF x) y = (sin . cos . (+ x)) y
myF 3.14 3.14 -- it works!
let myF = sin . cos . (+)
myF 3.14 3.14 -- -- Doesn't work!
There is some discussion on HaskellWiki regarding this form, which they call 'Pointfree' style: http://www.haskell.org/haskellwiki/Pointfree . Reading this article, I suspect that this form is different from the composition of two lambda expressions. I get confused when I try to draw a line separating these two styles.
Let's look at the types. For sin and cos we have:
cos, sin :: Floating a => a -> a
For sum:
sum :: Num a => [a] -> a
Now, sum y turns that into
sum y :: Num a => a
which is a value, not a function (you could call it a function with no arguments, but this is very tricky, and you would also need names for () -> a functions - there was a discussion somewhere about this but I cannot find the link now - Conal spoke about it).
Anyway, trying cos . sum y won't work because . expects both sides to have types a -> b and b -> c (the signature is (b -> c) -> (a -> b) -> (a -> c)), and sum y cannot be given such a type. That's why you need to include parentheses or $.
As for point-free style, the simplest translation recipe is this:
Take your function and move its last argument to the end of the expression, separated by a function application. For example, in the case of mysum x y = x + y we have y at the end but we cannot remove it right away. Rewritten as mysum x y = (x +) y, it works.
Remove said argument. In our case: mysum x = (x +)
Repeat until you have no more arguments. Here: mysum = (+)
(I chose a simple example, for more convoluted cases you'll have to use flip and others)
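The recipe can be followed step by step on mysum; every intermediate stage is a working definition (the names mysum0 .. mysum3 are mine):

```haskell
mysum0, mysum1, mysum2, mysum3 :: Int -> Int -> Int
mysum0 x y = x + y     -- starting point
mysum1 x y = (x +) y   -- last argument moved out via a section
mysum2 x = (x +)       -- said argument removed
mysum3 = (+)           -- repeated for x

main :: IO ()
main = print (map (\f -> f 2 3) [mysum0, mysum1, mysum2, mysum3])  -- [5,5,5,5]
```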
No, sum y is not a function. It's a number, just like sum [1, 2, 3] is. It therefore makes complete sense that you cannot use the function composition operator (.) with it.
Not everything in Haskell is a function.
The obligatory cryptic answer is this: (space) binds more tightly than .
Most whitespace in Haskell can be thought of as a very high-fixity $ (the "apply" function). w x . y z is basically the same as (w $ x) . (y $ z)
When you are first learning about $ and . you should also make sure you learn about (space) as well, and make sure you understand how the language semantics implicitly parenthesize things in ways that may not (at first blush) appear intuitive.
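Putting it together for the original example: application binds tighter than (.), so only the fully point-free form or an explicit $ yields the intended parse. A sketch:

```haskell
myF :: [Double] -> Double
myF = sin . cos . sum        -- fine: composing three functions

myF' :: [Double] -> Double
myF' y = sin . cos $ sum y   -- fine: $ delays the application

-- myBad y = sin . cos . sum y  -- rejected: sum y is a value, not a function

main :: IO ()
main = print (myF [3.14, 3.14] == myF' [3.14, 3.14])  -- True
```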
