Is a train in J associative?

In the programming language J, is a train of verbs always associative? If it is, are there any proofs?

No, a train of verbs is not associative, and this follows from the definitions. For example, a fork is
(f g h) y = (f y) g (h y)
but, since a two-verb train is a hook ((f g) y = y f (g y), and dyadically x (f g) y = x f (g y)),
(f (g h)) y = y f ((g h) y) = y f (y g (h y))
which can also be written as y f y g h y. And
((f g) h) y = y (f g) (h y) = y f (g (h y))
which can also be written as y f g h y.
Those three are completely different things.

A train in J is right-associative: it groups from the right, the minimal group is a fork, and only when a fork cannot be made does it form a hook. So
v v v v v = (v v (v v v))
and
v v v v = (v (v v v)).
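If it helps to see this outside J, here is a small Haskell sketch (my own illustration, not part of the answer): a J-like ambivalent verb is modelled as a pair of a monadic and a dyadic function, and the three groupings above are evaluated on the same verbs, giving three different results. The names Verb, fork, rightHook and leftHook are made up for this example.

-- A J-ish "ambivalent verb": a monadic and a dyadic meaning bundled together.
data Verb = Verb { mo :: Int -> Int, dy :: Int -> Int -> Int }

-- The three groupings of a three-verb train, following the expansions above.
fork, rightHook, leftHook :: Verb -> Verb -> Verb -> Int -> Int
fork      f g h y = dy g (mo f y) (mo h y)      -- (f g h)   y = (f y) g (h y)
rightHook f g h y = dy f y (dy g y (mo h y))    -- (f (g h)) y = y f (y g (h y))
leftHook  f g h y = dy f y (mo g (mo h y))      -- ((f g) h) y = y f (g (h y))

minus, plus, times :: Verb
minus = Verb negate (-)     -- J's - : negate / subtract
plus  = Verb id (+)         -- J's + : conjugate (id on integers) / plus
times = Verb signum (*)     -- J's * : signum / times

main :: IO ()
main = mapM_ (\grouping -> print (grouping minus plus times 5))
             [fork, rightHook, leftHook]
-- prints -4, -1 and 4: the three groupings really are different verbs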

Related

Lambda Calculus change of variable and application question

I am studying Haskell and learning what abstraction, substitution (beta equivalence), application, and free and bound variables (alpha equivalence) are, but I have some doubts about these exercises; I don't know if my solutions are correct.
Make the following substitutions
1. (λ x → y x x) [x:= f z]
Sol. (\x -> y x x) =>α (\w -> y w w) =>α (\w -> x w w) =>β (\w -> f z w w)
2. ((λ x → y x x) x) [y:= x]
Sol. ((\x -> y x x)x) =>α (\w -> y w w)[y:= x] = (\w -> x w w)
3. ((λ x → y x) (λ y → y x) y) [x:= f y]
Sol. approximation, I don't know how to do it: ((\x -> y x)(\y -> y x) y) =>β
(\x -> y x)y x)[x:= f y] =>β y x [x:= f y] = y f y
4. ((λ x → λ y → y x x) y) [y:= f z]
Sol. approximation: ((\x -> (\y -> (y x x))) y) =>β ((\y -> (y x x)) y) =>α ((\y -> (y x x)) f z)
Another doubt I have is whether I can run these expressions on this website. It is a Lambda Calculus Calculator, but I do not know how to run these tests.
1. (λ x → y x x) [x:= f z]
Sol. (\x -> y x x) =>α (\w -> y w w) =>α (\w -> x w w) =>β (\w -> f z w w)
No, you can't rename y, it's free in (λ x → y x x). Only bound variables can be (consistently) α-renamed. But only free variables can be substituted, and there's no free x in that lambda term.
2. ((λ x → y x x) x) [y:= x]
Sol. ((\x -> y x x)x) =>α (\w -> y w w)[y:= x] = (\w -> x w w)
Yes, substituting x for y would allow it to be captured by the λ x, so you indeed must α-rename the x in (λ x → y x x) first to some new unique name as you did, but you've dropped the application to the free x for some reason. You can't just omit parts of a term, so it's ((\w -> y w w) x)[y:= x]. Now perform the substitution. Note you're not asked to perform the β-reduction of the resulting term, just the substitution.
I'll leave the other two out. Just follow the rules carefully. It's easy if you rename all bound names to unique names first, even if the renaming is not strictly required, for instance
((λ x → y x) (λ y → y x) y) [x:= f y] -->
((λ w → y w) (λ z → z x) y) [x:= f y]
The "unique" part also includes the free variables used in the substitution terms, which might otherwise get captured after being substituted (i.e. without the renaming being performed first in the terms into which they are being substituted). That's why we had to rename the bound y in the above term: because y appears free in the substitution term. We didn't have to rename the bound x, but it made it easier that way.
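If you want to check exercises like these mechanically, here is a small, self-contained Haskell sketch of capture-avoiding substitution (my own illustration, not from the answer; Term, subst and fresh are made-up names). It α-renames a bound variable exactly when it would capture a free variable of the term being substituted in, which is the rule applied above.

import Data.List (delete, union)

data Term = Var String | App Term Term | Lam String Term
  deriving Show

freeVars :: Term -> [String]
freeVars (Var x)   = [x]
freeVars (App a b) = freeVars a `union` freeVars b
freeVars (Lam x b) = delete x (freeVars b)

allVars :: Term -> [String]
allVars (Var x)   = [x]
allVars (App a b) = allVars a `union` allVars b
allVars (Lam x b) = [x] `union` allVars b

-- A name based on x that clashes with nothing in the given list.
fresh :: String -> [String] -> String
fresh x used = head [x ++ show n | n <- [1 :: Int ..], (x ++ show n) `notElem` used]

-- subst x s t computes t[x := s], α-renaming bound variables when needed.
subst :: String -> Term -> Term -> Term
subst x s (Var y)
  | y == x    = s
  | otherwise = Var y
subst x s (App a b) = App (subst x s a) (subst x s b)
subst x s (Lam y b)
  | y == x              = Lam y b               -- x is bound here, nothing to do
  | y `elem` freeVars s =                       -- would capture: rename y first
      let y' = fresh y (allVars s `union` allVars b)
      in  Lam y' (subst x s (subst y (Var y') b))
  | otherwise           = Lam y (subst x s b)

-- Exercise 2:  ((λ x → y x x) x) [y := x]
main :: IO ()
main = print (subst "y" (Var "x")
               (App (Lam "x" (App (App (Var "y") (Var "x")) (Var "x"))) (Var "x")))
-- App (Lam "x1" (App (App (Var "x") (Var "x1")) (Var "x1"))) (Var "x")
--   i.e.  (λ x1 → x x1 x1) x,  matching the corrected solution above.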

Explain (.)(.) to me

Diving into Haskell, and while I am enjoying the language, I'm finding the pointfree style completely illegible. I've come across this function, which consists only of these ASCII boobies, as seen below.
f = (.)(.)
And while I understand its type signature and what it does, I can't for the life of me understand why it does it. So could someone please write out the de-pointfreed version of it for me, and maybe step by step work back to the pointfree version sorta like this:
f g x y = (g x) + y
f g x = (+) (g x)
f g = (+) . g
f = (.) (+)
Generally (?) (where ? stands for an arbitrary infix operator) is the same as \x y -> x ? y. So we can rewrite f as:
f = (\a b -> a . b) (\c d -> c . d)
Now if we apply the argument to the function, we get:
f = (\b -> (\c d -> c . d) . b)
Now b is just an argument to f, so we can rewrite this as:
f b = (\c d -> c . d) . b
The definition of . is f . g = \x -> f (g x). If we replace the outer . with its definition, we get:
f b = \x -> (\c d -> c . d) (b x)
Again we can turn x into a regular parameter:
f b x = (\c d -> c . d) (b x)
Now let's replace the other .:
f b x = (\c d y -> c (d y)) (b x)
Now let's apply the argument:
f b x = \d y -> (b x) (d y)
Now let's move the parameters again:
f b x d y = (b x) (d y)
Done.
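A quick way to convince yourself of the result (my own check, not part of the answer): give (.)(.) and the expanded form the same type and compare them on concrete arguments.

-- f = (.)(.) should satisfy  f b x d y = (b x) (d y)
f :: (a -> b -> c) -> a -> (d -> b) -> d -> c
f = (.) (.)

fExpanded :: (a -> b -> c) -> a -> (d -> b) -> d -> c
fExpanded b x d y = (b x) (d y)

main :: IO ()
main = do
  print (f         (+) 1 (* 10) 4)   -- 1 + (4 * 10) = 41
  print (fExpanded (+) 1 (* 10) 4)   -- 41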
You can also gradually append arguments to f:
f = ((.) . )
f x = (.) . x
f x y = ((.) . x) y
      = (.) (x y)
      = ((x y) . )
f x y z = (x y) . z
f x y z t = ((x y) . z) t
          = (x y) (z t)
          = x y (z t)
          = x y $ z t
The result reveals that x and z are actually (binary and unary, respectively) functions, so I'll use different identifiers:
f g x h y = g x (h y)
We can work backwards by "pattern matching" over the combinators' definitions. Given
f a b c d = a b (c d)
          = (a b) (c d)
we proceed
          = B (a b) c d
          = B B a b c d   -- writing B for (.)
so by eta-contraction
f = B B
because
a (b c) = B a b c -- bidirectional equation
by definition. Haskell's (.) is actually the B combinator (see BCKW combinators).
edit: Potentially, many combinators can match the same code. That's why there are many possible combinatory encodings for the same piece of code. For example, (ab)(cd) = (ab)(I(cd)) is a valid transformation, which might lead to some other combinator definition matching that. Choosing the "most appropriate" one is an art (or a search in a search space with somewhat high branching factor).
That's about going backwards, as you asked. But if you want to go "forward", personally, I like the combinatory approach much better over the lambda notation fidgeting. I would even just write many arguments right away, and get rid of the extra ones in the end:
BBabcdefg = B(ab)cdefg = (ab)(cd)efg
hence,
BBabcd = B(ab)cd = (ab)(cd)
is all there is to it.
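If you want to play with the combinatory reading in Haskell, here is a tiny sketch (mine, not from the answer; the lowercase name b stands in for B, since Haskell values must start with a lower-case letter):

-- B combinator: B a b c = a (b c); for functions this is exactly (.)
b :: (y -> z) -> (x -> y) -> x -> z
b a c x = a (c x)

main :: IO ()
main = do
  -- both compute  g x (h y)  with g = (+), x = 1, h = (* 10), y = 4
  print (b b     (+) 1 (* 10) 4)   -- 41
  print ((.) (.) (+) 1 (* 10) 4)   -- 41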

An intuitive idea of the arrow operator applicative

I had a couple of hours of fun today trying to understand what the arrow operator applicative does in Haskell. I am now trying to verify whether my understanding is correct. In short, I found that for the arrow operator applicative
(f <*> g <*> h <*> v) z = f z (g z) (h z) (v z)
Before I proceed, I am aware of this discussion but found it to be very convoluted and much more complicated than what I hope I derived today.
In order to understand what the applicative does I started from the definition of the arrow applicative in base
instance Applicative ((->) a) where
  pure = const
  (<*>) f g x = f x (g x)
and then proceeded to explore what the expressions
(f <*> g <*> h) z
and
(f <*> g <*> h <*> v) z
yield when expanded.
From the definition we get that
f <*> g = \x -> f x (g x)
Because (<*>) is left associative, it follows that
f <*> g <*> h = (f <*> g) <*> h
              = (\x -> f x (g x)) <*> h
              = \y -> (\x -> f x (g x)) y (h y)
Therefore
(f <*> g <*> h) z = (\y -> (\x -> f x (g x)) y (h y)) z
                  = (\x -> f x (g x)) z (h z)
                  = (f z (g z)) (h z)
                  = f z (g z) (h z)
The last step is due to the fact that function application is left associative. Similarly
(f <*> g <*> h <*> v) z = f z (g z) (h z) (v z)
This, to me, provides a very clear intuitive idea of what the arrow applicative does. But is this correct?
To test the result I ran, for example, the following,
λ> ((\z g h v -> [z, g, h, v]) <*> (1+) <*> (2+) <*> (3+)) 4
[4,5,6,7]
which conforms to the result derived above.
Before doing the expansion above I found this applicative very difficult to understand, since extremely complicated behaviour can result from its use because of currying. In particular, in
(f <*> g <*> h <*> v) z = f z (g z) (h z) (v z)
functions can return other functions. Here is an example:
λ> ((\z g -> g) <*> pure (++) <*> pure "foo" <*> pure "bar") undefined
"foobar"
In this case z=undefined is ignored by all functions, because pure x z = x and the first function ignores z by construction. Furthermore, the first function takes only two arguments but returns a function taking two arguments.
Yes, your calculations are correct.
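A quick sanity check of the expansion (my own, not part of the answer), comparing the applicative chain from the question against its hand-expanded form on a range of inputs:

-- Left: the function ("reader") applicative chain from the question.
lhs :: Int -> [Int]
lhs = (\z a b c -> [z, a, b, c]) <*> (1 +) <*> (2 +) <*> (3 +)

-- Right: the expansion  (f <*> g <*> h <*> v) z = f z (g z) (h z) (v z).
rhs :: Int -> [Int]
rhs z = (\z' a b c -> [z', a, b, c]) z ((1 +) z) ((2 +) z) ((3 +) z)

main :: IO ()
main = print (all (\z -> lhs z == rhs z) [-5 .. 5])   -- True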

Write f in pointfree-style?

Say I have functions
g :: a -> b, h :: a -> c
and
f :: b -> c -> d.
Is it possible to write the function
f' :: a -> a -> d
given by
f' x y = f (g x) (h y)
in point-free style?
One can write the function
f' :: a -> d, f' x = f (g x) (h x)
in point free style by setting
f' = (f <$> g) <*> h
but I couldn't figure out how to do the more general case.
We have:
k x y = (f (g x)) (h y)
and we wish to write k in point-free style.
The first argument passed to k is x. What do we need to do with x? Well, first we need to call g on it, and then f, and then do something fancy to apply this to (h y).
k = fancy . f . g
What is this fancy? Well:
k x y = (fancy . f . g) x y
      = fancy (f (g x)) y
      = f (g x) (h y)
So we desire fancy z y = z (h y). Eta-reducing, we get fancy z = z . h, or fancy = (. h).
k = (. h) . f . g
A more natural way to think about it might be
          ┌───┐               ┌───┐
 x ───────│ g │─── g x ───────│   │
  /       └───┘               │   │
(x, y)                        │ f │─── f (g x) (h y)
  \       ┌───┐               │   │
 y ───────│ h │─── h y ───────│   │
          └───┘               └───┘
└─────────────────────────────────┘
                k
Enter Control.Arrow:
k = curry ((g *** h) >>> uncurry f)
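Here is a self-contained check (my own; the concrete g, h and f below are arbitrary stand-ins for the abstract ones in the question) that the derived point-free form and the Control.Arrow form both agree with the original pointful definition:

import Control.Arrow ((***), (>>>))

g :: Int -> String          -- plays the role of g :: a -> b
g = show

h :: Int -> Int             -- plays the role of h :: a -> c
h = (* 2)

f :: String -> Int -> (String, Int)   -- plays the role of f :: b -> c -> d
f = (,)

k1, k2 :: Int -> Int -> (String, Int)
k1 = (. h) . f . g                      -- the derived point-free form
k2 = curry ((g *** h) >>> uncurry f)    -- the Control.Arrow form

main :: IO ()
main = do
  print (k1 3 4)           -- ("3",8)
  print (k2 3 4)           -- ("3",8)
  print (f (g 3) (h 4))    -- ("3",8), the original pointful definition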
Take a look at the online converter.
It converted
f' x y = f (g x) (h y)
into
f' = (. h) . f . g
with the flow of transformations
f' = id (fix (const (flip ((.) . f . g) h)))
f' = fix (const (flip ((.) . f . g) h))
f' = fix (const ((. h) . f . g))
f' = (. h) . f . g
This is slightly longer, but a little easier to follow, than (. h) . f . g.
First, rewrite f' slightly to take a tuple instead of two arguments. (In other words, we're uncurrying your original f'.)
f' (x, y) = f (g x) (h y)
You can pull a tuple apart with fst and snd instead of pattern matching on it:
f' t = f (g (fst t)) (h (snd t))
Using function composition, the above becomes
f' t = f ((g . fst) t) ((h . snd) t)
which, hey, looks a lot like the version you could make point-free using applicative style:
f' = let g' = g . fst
         h' = h . snd
     in (f <$> g') <*> h'
The only problem left is that f' :: (a, a) -> d. You can fix this by explicitly currying it:
f' :: a -> a -> d
f' = let g' = g . fst
         h' = h . snd
     in curry $ (f <$> g') <*> h'
(This is very similar, by the way, to the Control.Arrow solution added by Lynn.)
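With the same kind of arbitrary stand-ins as in the earlier check (my own example, not from the answer), the uncurried intermediate step and the final curried version look like this:

g :: Int -> String
g = show

h :: Int -> Int
h = (* 2)

f :: String -> Int -> (String, Int)
f = (,)

fTupled :: (Int, Int) -> (String, Int)      -- the uncurried intermediate step
fTupled = (f <$> (g . fst)) <*> (h . snd)

f' :: Int -> Int -> (String, Int)
f' = curry $ (f <$> (g . fst)) <*> (h . snd)

main :: IO ()
main = do
  print (fTupled (3, 4))   -- ("3",8)
  print (f' 3 4)           -- ("3",8)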
Using the "three rules of operator sections" as applied to the (.) function composition operator,
(.) f g  =  (f . g)  =  (f .) g  =  (. g) f   -- the argument goes into the free slot
--  1                       2           3
this is derivable by a few straightforward mechanical steps:
k x y = (f (g x)) (h y)          -- a (b c) === (a . b) c
      = (f (g x) . h) y
      = (. h) (f (g x)) y
      = (. h) ((f . g) x) y
      = ((. h) . (f . g)) x y
Lastly, (.) is associative, so the inner parens may be dropped.
The general procedure is to strive to reach a situation where eta-reduction can be performed, i.e. we can get rid of the arguments if they are in the same order and are outside any parentheses:
k x y = (......) y
=>
k x = (......)
Lather, rinse, repeat.
Another trick is to turn two arguments into one, or vice versa, with the equation
curry f x y = f (x,y)
so, your
f (g x) (h y) = (f.g) x (h y)                     -- by B-combinator rule
              = (f.g.fst) (x,y) ((h.snd) (x,y))
              = (f.g.fst <*> h.snd) (x,y)         -- by S-combinator rule
              = curry (f.g.fst <*> h.snd) x y
This is the same as the answer by chepner, but presented more concisely.
So, you see, your (f.g <*> h) x ¹ just becomes (f.g.fst <*> h.snd) (x,y). Same difference.
¹ (because, for functions, (<$>) = (.))
Control.Compose
(g ~> h ~> id) f
Data.Function.Meld
f $* g $$ h *$ id
Data.Function.Tacit
lurryA #N2 (f <$> (g <$> _1) <*> (h <$> _2))
lurryA #N5 (_1 <*> (_2 <*> _4) <*> (_3 <*> _5)) f g h
Related articles
Semantic Editor Combinators, Conal Elliott, 2008/11/24
Pointless fun, Matt Hellige, 2008/12/03

Fusion law for foldr1?

For foldr we have the fusion law: if f is strict, f a = b, and
f (g x y) = h x (f y) for all x, y, then f . foldr g a = foldr h b.
How can one discover/derive a similar law for foldr1? (It clearly can't even take the same form - consider the case when both sides act on [x].)
You can use free theorems to derive statements like the fusion law. The "Automatic generation of free theorems" tool does this work for you: it automatically derives the following statement if you enter foldr1 or the type (a -> a -> a) -> [a] -> a.
If f is strict and f (p x y) = q (f x) (f y) for all x and y, you have f (foldr1 p z) = foldr1 q (map f z). That is, in contrast to your statement about foldr, you get an additional map f on the right-hand side.
Also note that the free theorem for foldr is slightly more general than your fusion law and therefore looks quite similar to the law for foldr1. Namely, for strict functions g and f, if g (p x y) = q (f x) (g y) for all x and y, then g (foldr p z v) = foldr q (g z) (map f v).
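As a concrete instance (my own example, not from the answer): negate is strict and satisfies negate (x + y) = negate x + negate y, i.e. f (p x y) = q (f x) (f y) with f = negate and p = q = (+), so the derived law predicts negate . foldr1 (+) = foldr1 (+) . map negate:

main :: IO ()
main = do
  let xs = [3, 1, 4, 1, 5, 9 :: Integer]
  print (negate (foldr1 (+) xs))        -- -23
  print (foldr1 (+) (map negate xs))    -- -23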
I don't know if there's going to be anything satisfying for foldr1. I think it's just defined as
foldr1 f (x:xs) = foldr f x xs
Let's first expand what you have above to work on the entire list,
f (foldr g x xs) = foldr h (f x) xs
For foldr1, you could say
f (foldr1 g (x:xs)) = f (foldr g x xs)
                    = foldr h (f x) xs
To recondense into foldr1, you can create some imaginary function that maps f over the leftmost element, for a result of
f . foldr1 g = foldr1 h . mapfst
  where mapfst (x:xs) = f x : xs
