Write `4 : '$ x (<") y'` tacitly - j

How can I write the frame function tacitly? (From "Learning J" Ch 7)
I'm using the f x g y = x f@:g y composition scheme from Ch 8, but it's not working. My guess is that it's because <" has no natural rank?
x=.1
y=.i.2 3 4
f=.$
g=.<"
frame_e=.4 :'f x g y'
frame_t=.f@:g
x frame_e y NB. -> 2 3, which is the x-frame of y
x frame_t y NB. -> domain error
NB. No natural rank
g b.0 NB. -> syntax error
0 g b.0 NB. -> 0 0 0
I confirmed the pattern works as I expected with other functions.
x=.1
y=.i.2 3 4
f=.+/
g=.*
f x g y NB. -> equiv of 12+2*i.3 4
x f@:g y NB. -> same

Tacitly, I would do it using
framet=. {. $
2 framet i. 2 3 4
2 3
but that does not really get to the root of your question, does it?
The issue is really the way that g is defined:
g=.<"
This does not make g a verb but an adverb. In the explicit definition, the x is used first to build a verb (<"x), and only then is that verb applied to y. As far as I know, J does not let you stage those two steps within a tacit composition. As you have seen, the pattern does work when f and g are both actually verbs.
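For what it's worth, a small sketch of that point (my example; g1 is my name): if the rank is fixed up front (1 here, standing in for the x of the explicit version), g becomes an ordinary verb and the composition goes through, though of course it no longer takes the rank as an argument.
g1=. <"1 NB. a verb now, not an adverb
($@:g1) i.2 3 4
2 3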
I find tacit programming elegant, but it can be slower at some things and there are areas where it is limited.
I am hoping that someone can provide a better answer, so that I may learn as well.

Related

Argument Usage: ti=.{.(*i.)}.

I'm trying to get my head around J. In the easy-j.pdf (available here; page 19) introduction there is this hook:
ti=.{.(*i.)}. NB. ti=times index generator
ti 2 5 NB. Usage
I understand the previous term: 2(*i.)5 NB. 2 times 0 1 2 3 4
I can understand/imagine that }. takes the last element from the argument-list (above 2 5) to create (*i.)5. But what makes it clear/obvious that somehow the argument-list is also passed to {. to retrieve the 2 (in my current understanding the argument is already used by }.)?
I hope this question is understandable to J experts.
ti is actually a monadic fork with three tines, all of which are verbs. It executes by applying the two outside tines {. and }. to the argument 2 5 and feeding the results in as the left and right arguments of the middle tine (* i.), which is itself a hook.
In J, fork operations are often symbolized with f, g and h standing for verbs and x and y representing the left and right arguments. Forks are evaluated like this:
(f h g) y <-> (f y) h (g y) NB. <-> is a meta symbol for equivalence, not a J symbol
In this case f y is {. 2 5 and g y is }. 2 5
{. 2 5
2
}. 2 5
5
The middle tine of a fork is always applied dyadically, because it is fed by the two outside tines. The construct for the dyadic hook (* i.) in the centre is
x (f g) y <-> x f (g y)
2 (* i.) 5 NB. 2 * (i. 5)
0 2 4 6 8
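Putting the pieces together (my recap of the steps above):
ti=.{.(*i.)}.
ti 2 5 NB. ({. 2 5) (* i.) (}. 2 5)
0 2 4 6 8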

Arithmetic mean forwards vs backwards?

I'm familiar with this way of doing an arithmetic mean in J:
+/ % #
But it's also shown here as
# %~ +/
Are these two versions interchangeable, and if not when should I use one versus the other?
Dyadic ~ reverses the arguments of a verb. x f~ y is equivalent to y f x. You use ~ when you, um, want to reverse the arguments of a verb.
One of its most common uses is in fork and hook composition. For example, because y f (g y) is (f g) y you can use ((f~) g) y when you need (g y) f y.
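For instance (my example, not from the original answer), the classic duplicate-removal hook has exactly this ((f~) g) shape, with the nub sieve ~: as g and copy # as f:
(#~ ~:) 3 1 4 1 5 9 2 6 5 3 NB. (~: y) # y, i.e. keep first occurrences
3 1 4 5 9 2 6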
In the reverse mean example I don't really see a reason that one way would be more effective than the other (V V V form of fork), but because forks in J can be non-symmetric (in the N V V form) I can see some reasons that reversing the middle tine of the fork would be an advantage. Take for example:
(5 # $) 1 2 3 NB. (N V V) form
3 3 3 3 3
(5 #~ $) 1 2 3 NB. (N V~ V) becomes effectively (V V N)
5 5 5
($ # 5) 1 2 3 NB. (V V N) is a syntax error
|syntax error
| ($#5)1 2 3
Dyadic ~ is the "Passive" adverb, which swaps the left and right arguments. Thus x f~ y is the same as y f x. +/ % # and # %~ +/ are equivalent. 2 % 5 gives you 0.4, but 2 %~ 5 gives 2.5.
Among the places this can be handy is checking the results of a line you are working with. While you would presumably be testing something more complicated, you can check yourself by repeating your last line and just adding to the left without rearranging anything or adding parentheses.
string =. 'J is beyond awesome.'
'e' = string
0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 1 0
string #~ 'e' = string
eee
The monadic ~ is the "Reflex" adverb, which causes the modified verb to operate as a dyad, duplicating the single argument for both left and right. While this is another shortcut to arranging your arguments, it is quite different from the dyadic ~. *~ 4 is 16, because you are multiplying y by itself (y * y).
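A quick sketch of the contrast (my examples):
*~ 4 NB. Reflex: 4 * 4
16
2 *~ 4 NB. Passive: 4 * 2
8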

How should I indent nested case expressions?

How do I correctly indent a nested case expression in Haskell that acts like a nested loop would in imperative programming?
f x y = case x of
    1 -> case y of
        1 -> ...
        2 -> ...
        ...
    2 -> case y of
        ...
The compiler gives me an indentation error at the start of the second x case, so I'm guessing it doesn't understand that the first x case is over.
Not directly an answer, but it may be helpful nevertheless:
In this particular case, you could also write:
f 1 1 = ...
f 1 2 = ...
f 2 1 = ...
or, as a case expression:
f x y = case (x, y) of
    (1,1) -> ...
    (1,2) -> ...
    (2,1) -> ...
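For completeness, a runnable sketch of that shape (the result values and the final catch-all are mine, added so the function compiles and the match is total):
f :: Int -> Int -> Int
f x y = case (x, y) of
    (1, 1) -> 11
    (1, 2) -> 12
    (2, 1) -> 21
    _      -> 0   -- catch-all so the match is exhaustive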
Your code seems OK. Haskell has a very simple rule of indentation, as explained on wikibooks:
Code which is part of some expression should be indented further in
than the beginning of that expression.
This works for me:
f x y = case x of
    1 -> case y of
        1 -> undefined
        2 -> undefined
    2 -> case y of
        1 -> undefined
You may want to check your editor to see if it is doing proper indentation. As @Tarmil suggested, always use spaces for indentation. More details on that here.
I had the same problem, and it was because I was using tabs for indentation. When I indented the code with spaces, it worked!

How do Haskell currying and pattern matching work together?

I'm new to Haskell. I understand that functions are curried to become functions that take one parameter. What I don't understand is how pattern matching against multiple values can be achieved when this is the case. For example:
Suppose we have the following completely arbitrary function definition:
myFunc :: Int -> Int -> Int
myFunc 0 0 = 0
myFunc 1 1 = 1
myFunc x y = x `someoperation` y
Is the partially applied function returned by myFunc 0 essentially:
partiallyAppliedMyFunc :: Int -> Int
partiallyAppliedMyFunc 0 = 0
partiallyAppliedMyFunc y = 0 `someoperation` y
Thus removing the extraneous pattern that can't possibly match? Or.... what's going on here?
Actually, this question is more subtle than it may appear on the surface, and involves learning a little bit about compiler internals to really answer properly. The reason is that we sort of take for granted that we can have nested patterns and patterns over more than one term, when in reality for the purposes of a compiler the only thing you can do is branch on the top-level constructor of a single value. So the first stage of the compiler is to turn nested patterns (and patterns over more than one value) into simpler patterns. For example, a naive algorithm might transform your function into something like this:
myFunc = \x y -> case x of
    0 -> case y of
        0 -> 0
        _ -> x `someoperation` y
    1 -> case y of
        1 -> 1
        _ -> x `someoperation` y
    _ -> x `someoperation` y
You can already see there are lots of suboptimal things here: the someoperation term is repeated a lot, and the function expects both arguments before it will even start to do a case at all; see A Term Pattern-Match Compiler Inspired by Finite Automata Theory for a discussion of how you might improve on this.
Anyway, in this form, it should actually be a bit more clear how the currying step happens. We can directly substitute for x in this expression to look at what myFunc 0 does:
myFunc 0 = \y -> case 0 of
    0 -> case y of
        0 -> 0
        _ -> 0 `someoperation` y
    1 -> case y of
        1 -> 1
        _ -> 0 `someoperation` y
    _ -> 0 `someoperation` y
Since this is still a lambda, no further reduction is done. You might hope that a sufficiently smart compiler could do a bit more, but GHC explicitly does not do more; if you want more computation to be done after supplying only one argument, you have to change your definition. (There's a time/space tradeoff here, and choosing correctly is too hard to do reliably. So GHC leaves it in the programmer's hands to make this choice.) For example, you could explicitly write
myFunc 0 = \y -> case y of
    0 -> 0
    _ -> 0 `someoperation` y
myFunc 1 = \y -> case y of
    1 -> 1
    _ -> 1 `someoperation` y
myFunc x = \y -> x `someoperation` y
and then myFunc 0 would reduce to a much smaller expression.
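For instance (my sketch of that reduction, still assuming some someoperation in scope), with the rewritten definition the first clause matches as soon as the 0 is supplied, so myFunc 0 is simply:
\y -> case y of
    0 -> 0
    _ -> 0 `someoperation` y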

How do I use fix, and how does it work?

I was a bit confused by the documentation for fix (although I think I understand what it's supposed to do now), so I looked at the source code. That left me more confused:
fix :: (a -> a) -> a
fix f = let x = f x in x
How exactly does this return a fixed point?
I decided to try it out at the command line:
Prelude Data.Function> fix id
...
And it hangs there. Now to be fair, this is on my old macbook which is kind of slow. However, this function can't be too computationally expensive since anything passed in to id gives that same thing back (not to mention that it's eating up no CPU time). What am I doing wrong?
You are doing nothing wrong. fix id is an infinite loop.
When we say that fix returns the least fixed point of a function, we mean that in the domain theory sense. So fix (\x -> 2*x-1) is not going to return 1, because although 1 is a fixed point of that function, it is not the least one in the domain ordering.
I can't describe the domain ordering in a mere paragraph or two, so I will refer you to the domain theory link above. It is an excellent tutorial, easy to read, and quite enlightening. I highly recommend it.
For the view from 10,000 feet, fix is a higher-order function which encodes the idea of recursion. If you have the expression:
let x = 1:x in x
Which results in the infinite list [1,1..], you could say the same thing using fix:
fix (\x -> 1:x)
(Or simply fix (1:)), which says find me a fixed point of the (1:) function, IOW a value x such that x = 1:x... just like we defined above. As you can see from the definition, fix is nothing more than this idea -- recursion encapsulated into a function.
It is a truly general concept of recursion as well -- you can write any recursive function this way, including functions that use polymorphic recursion. So for example the typical fibonacci function:
fib n = if n < 2 then n else fib (n-1) + fib (n-2)
Can be written using fix this way:
fib = fix (\f -> \n -> if n < 2 then n else f (n-1) + f (n-2))
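A quick check in GHCi (my session; fix comes from Data.Function):
Prelude Data.Function> fix (\f n -> if n < 2 then n else f (n-1) + f (n-2)) 10
55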
Exercise: expand the definition of fix to show that these two definitions of fib are equivalent.
But for a full understanding, read about domain theory. It's really cool stuff.
I don't claim to understand this at all, but if this helps anyone...then yippee.
Consider the definition of fix. fix f = let x = f x in x. The mind-boggling part is that x is defined as f x. But think about it for a minute.
x = f x
Since x = f x, then we can substitute the value of x on the right hand side of that, right? So therefore...
x = f . f $ x -- or x = f (f x)
x = f . f . f $ x -- or x = f (f (f x))
x = f . f . f . f . f . f . f . f . f . f . f $ x -- etc.
So the trick is, in order to terminate, f has to generate some sort of structure, so that a later f can pattern match that structure and terminate the recursion, without actually caring about the full "value" of its parameter (?)
Unless, of course, you want to do something like create an infinite list, as luqui illustrated.
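That intuition is easy to check in GHCi (my example): each application of (1:) contributes one cons cell of structure, which take can consume without forcing the rest of the fixed point:
Prelude Data.Function> take 5 (fix (1:))
[1,1,1,1,1]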
TomMD's factorial explanation is good. Fix's type signature is (a -> a) -> a. The type signature for (\recurse d -> if d > 0 then d * (recurse (d-1)) else 1) is (b -> b) -> b -> b, in other words, (b -> b) -> (b -> b). So we can say that a = (b -> b). That way, fix takes our function, which is a -> a, or really, (b -> b) -> (b -> b), and will return a result of type a, in other words, b -> b, in other words, another function!
Wait, I thought it was supposed to return a fixed point...not a function. Well it does, sort of (since functions are data). You can imagine that it gave us the definitive function for finding a factorial. We gave it a function that didn't know how to recurse (hence one of the parameters to it is a function used to recurse), and fix taught it how to recurse.
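Spelling that out with explicit types (a sketch; fac is my name for it, and fix is assumed to come from Data.Function):
fac :: Integer -> Integer   -- here a = Integer -> Integer
fac = fix (\recurse d -> if d > 0 then d * recurse (d - 1) else 1)
-- fac 5 == 120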
Remember how I said that f has to generate some sort of structure so that a later f can pattern match and terminate? Well that's not exactly right, I guess. TomMD illustrated how we can expand x to apply the function and step towards the base case. For his function, he used an if/then, and that is what causes termination. After repeated replacements, the in part of the whole definition of fix eventually stops being defined in terms of x and that is when it is computable and complete.
You need a way for the fixpoint to terminate. Expanding your example makes it obvious that it won't finish:
fix id
--> let x = id x in x
--> id x
--> id (id x)
--> id (id (id x))
--> ...
Here is a real example of me using fix (note I don't use fix often and was probably tired / not worrying about readable code when I wrote this):
(fix (\f h -> if (pred h) then f (mutate h) else h)) q
WTF, you say! Well, yes, but there are a few really useful points here. First of all, your first fix argument should usually be a function which is the 'recurse' case and the second argument is the data on which to act. Here is the same code as a named function:
getQ h
| pred h = getQ (mutate h)
| otherwise = h
If you're still confused then perhaps factorial will be an easier example:
fix (\recurse d -> if d > 0 then d * (recurse (d-1)) else 1) 5 -->* 120
Notice the evaluation:
fix (\recurse d -> if d > 0 then d * (recurse (d-1)) else 1) 3 -->
let x = (\recurse d -> if d > 0 then d * (recurse (d-1)) else 1) x in x 3 -->
let x = ... in (\recurse d -> if d > 0 then d * (recurse (d-1)) else 1) x 3 -->
let x = ... in (\d -> if d > 0 then d * (x (d-1)) else 1) 3
Oh, did you just see that? That x became a function inside our then branch.
let x = ... in if 3 > 0 then 3 * (x (3 - 1)) else 1 -->
let x = ... in 3 * x 2 -->
let x = ... in 3 * (\recurse d -> if d > 0 then d * (recurse (d-1)) else 1) x 2 -->
In the above you need to remember x = f x; hence x 2 becomes (f x) 2, which is why the lambda is applied to the two arguments x and 2 at the end instead of just 2.
let x = ... in 3 * (\d -> if d > 0 then d * (x (d-1)) else 1) 2 -->
And I'll stop here!
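For completeness, here is my continuation of the expansion down to the base case:
let x = ... in 3 * (if 2 > 0 then 2 * (x (2-1)) else 1) -->
let x = ... in 3 * (2 * (if 1 > 0 then 1 * (x (1-1)) else 1)) -->
let x = ... in 3 * (2 * (1 * (if 0 > 0 then 0 * (x (0-1)) else 1))) -->
3 * (2 * (1 * 1)) --> 6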
The way I understand it: fix finds a value for the function such that it outputs the same thing you give it. The catch is that it will always choose undefined, or an infinite loop (in Haskell, undefined and infinite loops are both bottom, i.e. the same thing), or whatever has the most undefineds in it. For example, with id,
λ <*Main Data.Function>: id undefined
*** Exception: Prelude.undefined
As you can see, undefined is a fixed point, so fix will pick that. If you instead try (\x->1:x):
λ <*Main Data.Function>: undefined
*** Exception: Prelude.undefined
λ <*Main Data.Function>: (\x->1:x) undefined
[1*** Exception: Prelude.undefined
So fix can't pick undefined. To make it a bit more connected to infinite loops:
λ <*Main Data.Function>: let y=y in y
^CInterrupted.
λ <*Main Data.Function>: (\x->1:x) (let y=y in y)
[1^CInterrupted.
Again, a slight difference. So what is the fixed point? Let us try repeat 1.
λ <*Main Data.Function>: repeat 1
[1,1,1,1,1,1, and so on
λ <*Main Data.Function>: (\x->1:x) $ repeat 1
[1,1,1,1,1,1, and so on
It is the same! Since this is the only fixed point, fix must settle on it. Sorry fix, no infinite loops or undefined for you.
As others pointed out, fix somehow captures the essence of recursion. Other answers already do a great job at explaining how fix works. So let's take a look from another angle and see how fix can be derived by generalising, starting from a specific problem: we want to implement the factorial function.
Let's first define a non-recursive factorial function. We will use it later to "bootstrap" our recursive implementation.
factorial n = product [1..n]
We approximate the factorial function by a sequence of functions: for each natural number n, factorial_n coincides with factorial up to and including n. Clearly factorial_n converges towards factorial as n goes towards infinity.
factorial_0 n = if n == 0 then 1 else undefined
factorial_1 n = if n == 0 then 1 else n * factorial_0 (n - 1)
factorial_2 n = if n == 0 then 1 else n * factorial_1 (n - 1)
factorial_3 n = if n == 0 then 1 else n * factorial_2 (n - 1)
...
Instead of writing factorial_n out by hand for every n, we implement a factory function that creates these for us. We do this by factoring the commonalities out and abstracting over the calls to factorial_[n - 1] by making them a parameter to the factory function.
factorialMaker f n = if n == 0 then 1 else n * f (n - 1)
Using this factory, we can create the same converging sequence of functions as above. For each factorial_n we need to pass a function that calculates the factorials up to n - 1. We simply use the factorial_[n - 1] from the previous step.
factorial_0 = factorialMaker undefined
factorial_1 = factorialMaker factorial_0
factorial_2 = factorialMaker factorial_1
factorial_3 = factorialMaker factorial_2
...
If we pass our real factorial function instead, we materialize the limit of the series.
factorial_inf = factorialMaker factorial
But since that limit is the real factorial function we have factorial = factorial_inf and thus
factorial = factorialMaker factorial
Which means that factorial is a fixed-point of factorialMaker.
Finally we derive the function fix, which returns factorial given factorialMaker. We do this by abstracting over factorialMaker, making it an argument to fix (i.e. f corresponds to factorialMaker, and fix f to factorial):
fix f = f (fix f)
Now we find factorial by calculating the fixed-point of factorialMaker.
factorial = fix factorialMaker
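Putting the whole derivation into one self-contained file (my arrangement of the code above; fix can equally well be imported from Data.Function instead of defined by hand):
import Data.Function (fix)

factorialMaker :: (Integer -> Integer) -> Integer -> Integer
factorialMaker f n = if n == 0 then 1 else n * f (n - 1)

factorial :: Integer -> Integer
factorial = fix factorialMaker

main :: IO ()
main = print (factorial 5)   -- prints 120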
