I want to put the operation which takes all the items in a list which are greater than 2 into a pointless (as in not explicitly capturing the argument in a variable) function in J. I wanted to do this by using ~ with a hook, like f =: ((> & 2) #)~ but it seems like neither that nor ((> & 2) #~) works.
My reasoning was that my function has the form (f y) g y where y is the list, f is (> & 2), and g is #. I would appreciate any help!
Everything is OK except that you mixed up the order of the hook. It's y f (g y), so you want
(#~ (>&2)) y
Hooks have the form f g and the interpretation, when applied to a single argument (i.e. monadically), is (unaltered input) f (g input). So, as Eelvex noted, you'd phrase this as a hook like hook =: #~ >&2 . Also, as kaledic noted, the idiom (#~ filter) is extremely common in J, so much so that it's usually read as a cohesive whole: keep-items-matching-filter.*
If you wanted a point-free phrasing of the operation which looks similar, notationally, to the original noun-phrase (y > 2) # y , you might like to use the fork (>&2) # ] where ] means "the unaltered input" (i.e. the identity function), or even (] > 2:) # ] or some variation.
(*) In fact, the pattern (f~ predicate) defines an entire class of idioms, like (<;.1~ frets) for cutting an array into partitions and (</.~ categories) for classifying the items of an array into buckets.
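For comparison with the Haskell questions later in this thread (the analogy is mine, not part of the J answer), the same keep-items-matching-filter idiom reads point-free in Haskell as:

```haskell
-- Point-free Haskell analogue of the J hook (#~ (>&2)):
-- keep the items of a list that are greater than 2.
keepGT2 :: [Int] -> [Int]
keepGT2 = filter (> 2)
```

Here filter plays the role of #~ and the section (> 2) plays the role of >&2, with the argument never named.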
Related
So I'm looking for a way to force some operators which are defined as right-associative to be applied in a left-associative manner. I've seen that Haskell has the $ operator, which changes the precedence but sadly not the associativity of function application.
To be exact, I specifically look for the way to make a long concatenation chain to evaluate arguments in left associative manner.
Does Haskell have something like this?
Actually, $ does change the associativity of function application.
Normal function application associates to the left (and this behavior cannot be changed), so the expression f x y associates as:
f x y = (f x) y
On the other hand, $ associates to the right (and this behavior is part of the library definition of $ and could be changed by redefining your own version of $ or some other operator), so:
f $ x $ y = f $ (x $ y) = f (x y)
with the first equality following from the associativity of $ and the last equality following from its definition.
It's possible that what you're really talking about is not the associativity but rather the order in which the argument and function are combined. Normal function application is func arg, but if you want arg func, you can do it with an operator. The (&) operator in Data.Function does this. It's a reversed version of $, which means that it associates to the left:
y & f & g = (y & f) & g
but it ALSO has a different definition, so y & f applies the function f to the argument y, instead of applying y to f.
So, you can, as in @Iceland_jack's example, write:
"Hello world" & -- start with a string
words & -- break it into words
map length & -- get the length of each word
sum -- sum the lengths
If this is what you mean by "a long concatenation chain to evaluate arguments in left associative manner", then you've got your answer. Note that the definition of & isn't too complicated. It looks like this. The infixl declaration sets both the precedence and the (left) associativity of the operator:
infixl 1 &
(&) :: a -> (a -> b) -> b
x & f = f x
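To sanity-check the definition, here is the pipeline from above made runnable with that definition of (&) inlined (the name wordLengthSum is mine, for illustration):

```haskell
infixl 1 &
(&) :: a -> (a -> b) -> b
x & f = f x

-- "Hello world" & words      = ["Hello","world"]
--              ... & map length = [5, 5]
--              ... & sum        = 10
wordLengthSum :: Int
wordLengthSum = "Hello world" & words & map length & sum
```

Because & associates to the left, the chain applies each stage to the result of the previous one, reading top to bottom like a pipeline.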
If you are instead talking about an operator that applies a single function to multiple arguments but is written with the arguments first (in reverse order), like:
"string" ?? 2 ?? take = take 2 "string"
for some operator ??, then I don't think there's any built-in operator like that, but you can do it yourself by defining an operator that is right associative like $ but has the same core definition as &:
infixr 0 ??
(??) :: a -> (a -> b) -> b
x ?? f = f x
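A quick check that the right-associative version really does reverse the argument order as claimed (demo is my name for the example expression):

```haskell
infixr 0 ??
(??) :: a -> (a -> b) -> b
x ?? f = f x

-- Associates as "string" ?? (2 ?? take)
--             = "string" ?? take 2
--             = take 2 "string"
--             = "st"
demo :: String
demo = "string" ?? 2 ?? take
```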
Say I have a Haskell function f n l = filter (n <) l, which takes an integer n and a list l and returns all of the integers in l greater than n.
I'm trying to figure out how to best write this function in a language like Joy. I've generally had good luck with converting the haskell function to pointfree form f = filter . (<) and then trying to rewrite it in Joy from there. But I can't figure out how to simulate partial function application in a concatenative language.
So far, I've tried to do something like swap [[>] dip] filter, but it seems like there must be a better/cleaner way to write this.
Also, I'm experimenting with writing my own concatenative language and was wondering if lazy-evaluation could be compatible with concatenative languages.
swap [[>] dip] filter won’t work because it assumes n is accessible for each call to the quotation by which you’re filtering; that implies filter can’t leave any intermediate values on the stack while it’s operating, and > doesn’t consume n. You need to capture the value of n in that quotation.
First “eta”-reduce the list parameter:
l n f = l [ n > ] filter
n f = [ n > ] filter
Then capture n by explicitly quoting it and composing it with >:
n f = n quote [ > ] compose filter
(Assuming quote : a -> (-> a), a.k.a. unit, which takes a value and wraps it in a quotation, and compose : (A -> B) (B -> C) -> (A -> C), a.k.a. cat, which concatenates two quotations.)
Then just “eta”-reduce n:
f = quote [ > ] compose filter
I put “eta” in scare quotes because it’s a little more general than in lambda calculus, working for any number of values on the stack, not just one.
You can of course factor out partial application into its own definition, e.g. the papply combinator in Cat, which is already defined as swons (swap cons) in Joy, but can also be defined like so:
DEFINE
papply (* x [F] -- [x F] *)
== [unit] dip concat ;
f (* xs n -- xs[>n] *)
== [>] papply filter .
In Kitten this could be written in a few different ways, according to preference:
// Point-free
function \> compose filter
// Local variable and postfix
-> n; { n (>) } filter
// Local variable and operator section
-> n; \(n <) filter
Any evaluation strategy compatible with functional programming is also compatible with concatenative programming—popr is a lazy concatenative language.
I'm a Haskell newbie and I'm reading:
http://www.seas.upenn.edu/~cis194/spring13/lectures/01-intro.html
It states "In Haskell one can always “replace equals by equals”, just like you learned in algebra class." What is meant by this, and what are its advantages?
I don't recall learning this in algebra but perhaps I do not recognise the terminology.
It means that if you know that A (an expression) is equal to B (another expression), then you may always replace A with B in any expression involving A, and vice versa.
For instance, we know that even = not . odd. Therefore
filter even
=
filter (not . odd)
On the other hand, we know that odd satisfies the following equation
odd = (1 ==) . (`mod` 2)
As such, we also know that
filter even
=
filter (not . odd)
=
filter (not . (1 ==) . (`mod` 2))
Moreover, you know that (`mod` 2) always returns 0 or 1. So, by case analysis on those two possible results, the following is valid in that context.
not . (1 ==)
=
(0 ==)
Therefore, we can also say
filter even
=
filter ((0 ==) . (`mod` 2))
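These equalities can be spot-checked on a sample input (a check on a finite list, not a proof, and the names evens1/evens2/evens3 are mine):

```haskell
-- Three point-for-point equal ways of filtering for even numbers,
-- following the chain of equations above.
evens1, evens2, evens3 :: [Int]
evens1 = filter even [0..10]
evens2 = filter (not . odd) [0..10]
evens3 = filter ((0 ==) . (`mod` 2)) [0..10]
-- all three are [0,2,4,6,8,10]
```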
The advantage of being able to replace equals by equals is to design a program by massaging equation after equation until a suitable definition is found, like in typical solve for x kind of problems of Algebra.
In its simplest form, substituting "equals by equals" means replacing a defined identifier with its definition. For instance
let x = f 1 in x + x
can be equivalently written as
f 1 + f 1
in the sense that the result will be the same. In GHC, you can expect the second one to compute f 1 twice, possibly degrading performance, but the result of the sum is the same.
In impure languages, such as OCaml, the two snippets above are not equivalent. This is because side effects are allowed: evaluating f 1 can have observable effects. For instance, f could be defined as follows:
(* OCaml code *)
let f =
  let r = ref 0 in
  fun x -> r := !r + x; !r
Using the above definition, f has an internal mutable state, which gets incremented by its argument every time it is called, before the new state is returned. Because of this,
f 1 + f 1
would evaluate to 1 + 2 since the state is incremented twice, while
let x = f 1 in x + x
would evaluate to 1 + 1, since only one increment of the state is performed.
The consequence is that, in OCaml, replacing x with its definition would not be a semantics-preserving program transformation. Of course, the same would hold in imperative languages, which allow side effects. Only in pure languages (Haskell, Agda, Coq, ...) is the transformation safe.
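In Haskell the two forms really are interchangeable; here is a small check (the pure function g and its body are mine, chosen only for illustration):

```haskell
-- Any pure function will do; its result depends only on its argument.
g :: Int -> Int
g x = x * 10

-- The two snippets from the discussion, side by side.
a, b :: Int
a = let x = g 1 in x + x
b = g 1 + g 1
-- For a pure g, a == b always holds (here both are 20).
```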
I've been asking a few questions about strictness, but I think I've missed the mark before. Hopefully this is more precise.
Lets say we have:
n = 1000000
f z = foldl' (\(x1, x2) y -> (x1 + y, y - x2)) z [1..n]
Without changing f, what should I set
z = ...
So that f z does not overflow the stack? (i.e. runs in constant space regardless of the size of n)
Its okay if the answer requires GHC extensions.
My first thought is to define:
g (a1, a2) = (!a1, !a2)
and then
z = g (0, 0)
But I don't think g is valid Haskell.
So your strict foldl' is only going to evaluate the result of your lambda at each step of the fold to Weak Head Normal Form, i.e. it is only strict in the outermost constructor. Thus the tuple will be evaluated, however those additions inside the tuple may build up as thunks. This in-depth answer actually seems to address your exact situation here.
W/R/T your g: You are thinking of BangPatterns extension, which would look like
g (!a1, !a2) = (a1, a2)
and which evaluates a1 and a2 to WHNF before returning them in the tuple.
What you want to be concerned about is not your initial accumulator, but rather your lambda expression. This would be a nice solution:
f z = foldl' (\(!x1, !x2) y -> (x1 + y, y - x2)) z [1..n]
EDIT: After noticing your other questions I see I didn't read this one very carefully. Your goal is to have "strict data" so to speak. Your other option, then, is to make a new tuple type that has strictness tags on its fields:
data Tuple a b = Tuple !a !b
Then when you pattern match on Tuple a b, a and b will be evaluated.
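Putting that together, a sketch of the whole program with the strict tuple (note that, as discussed, this changes f's type):

```haskell
import Data.List (foldl')

-- A pair whose fields are forced whenever a Tuple is constructed.
data Tuple a b = Tuple !a !b
  deriving Show

n :: Integer
n = 1000000

f :: Tuple Integer Integer -> Tuple Integer Integer
f z = foldl' (\(Tuple x1 x2) y -> Tuple (x1 + y) (y - x2)) z [1..n]
-- foldl' forces each intermediate Tuple to WHNF, and the strict
-- fields then force both sums, so no chain of + thunks builds up.
```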
You'll need to change your function regardless.
There is nothing you can do without changing f. If f were overloaded in the type of the pair you could use strict pairs, but as it stands you're locked in to what f does. There's some small hope that the compiler (strictness analysis and transformations) can avoid the stack growth, but nothing you can count on.
When I put the following lambda expression in ghci I get 1:
ghci> (\x -> x+1) 0
1
But when I use that function with iterate I get
ghci> take 10 (iterate (\x -> x+1) 0)
[0,1,2,3,4,5,6,7,8,9]
I expected to get a list equal to [1..10]. Why not?
The first result of iterate is the original input without the function applied, i.e. the function is called 0 times. That's why the result is one off from what you expect.
More specifically, iterate is implemented like this:
iterate f v = v : iterate f (f v)
Just remember that the start value you give to iterate will appear first in the list - that's it.
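If you do want the list to start with one application of the function (still assuming the (+1) step from the question), just drop the seed with tail:

```haskell
-- iterate (+1) 0 is [0,1,2,...]; tail skips the unmodified seed 0.
oneToTen :: [Int]
oneToTen = take 10 (tail (iterate (+1) 0))
-- oneToTen == [1,2,3,4,5,6,7,8,9,10]
```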
Stop...Hoogle time!
http://haskell.org/hoogle/?hoogle=iterate
click iterate
http://hackage.haskell.org/packages/archive/base/latest/doc/html/Prelude.html#v:iterate
iterate f x returns an infinite list of repeated applications of f to x:
iterate f x == [x, f x, f (f x), ...]
There you go. It works that way because that's how it says it works. I'm not trying to be flippant, just hoping to illustrate the usefulness of Hoogle and the docs. (Sounds like a good name for a Haskell band: "Hoogle and the Docs")