I'm learning about CPS with Racket, and I've managed to write up these functions:
;lift a regular single-arg function into CPS
(define (lift/k f)
  (lambda (x k)
    (k (f x))))
;compose two CPS functions
(define (compose/k f g)
  (lambda (x k)
    (g x (lambda (y)
           (f y k)))))
They seem to work correctly:
(define (is-two/k x k)
  (k (= x 2)))
(define is-not-two/k (compose/k (lift/k not) is-two/k))
(is-not-two/k 3 display)
(is-not-two/k 2 display)
#t#f
I'm wondering if these functions are still "true CPS". Have I messed up "true" continuation-passing with these functions? Is it kosher to use function composition techniques in CPS? Is it encouraged? Or would it be considered a "compromise" to do this? Is there a more CPS-y way to do it?
Yes I know I just asked 5 questions, but the basic idea behind them all (which I'm not sure I understand correctly) is the same. Explanations in other Lisps, Haskell, Erlang, or other functional languages are fine.
The continuation-passing-style transform can be partial or complete. You're usually working with a system where certain primitives (+, -, etc.) are stuck in non-CPS-land. Fortunately, CPS works fine either way.
The steps in CPSing:
Pick which functions are going to be primitive.
CPS-transform so that all non-primitive functions (including continuations) are called only in tail position.
So, in your code, your 'lift/k' is essentially treating its given function as being primitive: note that the body of lift/k calls 'f' in non-tail position. If you want not to treat the lifted function as a primitive, you must go in and rewrite it explicitly.
Your 'compose' function composes two CPSed functions, but is not itself in CPS (that is, you're assuming that 'compose' is primitive). You probably want to CPS it. Note that since it just returns a value straight off, this is simple:
;compose two CPS functions
(define (compose/k f g k)
  (k (lambda (x k)
       (g x (lambda (y)
              (f y k))))))
Is the function I give to foldl applied in an infix way?
Example
foldl (-) 0 [1,2,3]
= 0-1-2-3
= -6
so more generally:
foldl f x [a,b,c]
is applied as:
(((x `f` a) `f` b) `f` c)
I know it's recursive, but can I think about it that way?
The only difference between infix function application and prefix function application is syntax, so your question does not make very much sense. Outside of referring to the syntax of a particular expression, applying a function “in an infix way” doesn’t mean anything.
In Haskell, when you write x + y, it is precisely equivalent to writing (+) x y. Likewise, x `op` y is precisely equivalent to writing op x y. Put another way, application of an infix operator is still just plain old function application where the function is applied to two arguments.
If it helps you to visualize foldl via an expression like ((a `f` b) `f` c) `f` d instead of one like f (f (f a b) c) d, that’s certainly within your right, since the two expressions are equivalent. Indeed, the documentation for foldl uses infix notation to help explain the function’s behavior, since it is a useful representation that helps get the point across. But be careful not to confuse notation (aka syntax) with denotation (aka meaning). Many programs can be notationally distinct but denotationally equivalent.
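As a quick sanity check that the two pictures denote the same thing, all three of these print -6:

main :: IO ()
main = do
  print (foldl (-) 0 [1, 2, 3])    -- the fold itself
  print (((0 - 1) - 2) - 3)        -- the expanded infix picture
  print ((-) ((-) ((-) 0 1) 2) 3)  -- the same thing written prefix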
Is the following code the right way to think about currying in Haskell? The following is an example of addition in Haskell:
f = \x -> \y -> x + y
In general, is currying realized using lambdas in functional programming?
Currying is:
In mathematics and computer science, currying is the technique of translating the evaluation of a function that takes multiple arguments (or a tuple of arguments) into evaluating a sequence of functions, each with a single argument. It was introduced by Gottlob Frege, developed by Moses Schönfinkel, and further developed by Haskell Curry.
source Wikipedia
Now you could argue that in Haskell there is never more than one argument to a function (you can of course have tuples - see below), so in a sense all functions in Haskell are already curried (or can only be defined in such a way).
Of course there are curry and uncurry - but those act on tuples:
curry :: ((a, b) -> c) -> a -> b -> c
curry f x y = f (x, y)
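and its dual uncurry goes the other way (this is equivalent to the Prelude's definition):

uncurry :: (a -> b -> c) -> (a, b) -> c
uncurry f (x, y) = f x y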
and I could argue that a tuple is just one argument too ;)
On a conceptual level you are of course right as augustss pointed out!
But sadly there are some problems (see Monomorphism Restriction for example) where this equality does not hold (if you don't add a type signature):
add x y = x + y === add = \x -> \y -> x + y
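A small sketch of that caveat (the names add1 and add2 are mine):

-- With explicit signatures the two forms are interchangeable:
add1 :: Num a => a -> a -> a
add1 x y = x + y

add2 :: Num a => a -> a -> a
add2 = \x -> \y -> x + y

-- Without the signatures (in a compiled module), add1 is a function
-- binding and stays polymorphic, but add2 is a simple pattern binding,
-- falls under the monomorphism restriction, and gets defaulted to
-- something like Integer -> Integer -> Integer.
main :: IO ()
main = print (add1 1 2 + add2 3 4)  -- 10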
Today, I was going through the source code of Jane Street's Core_kernel module and I came across the compose function:
(* The typical use case for these functions is to pass in functional arguments
and get functions as a result. For this reason, we tell the compiler where
to insert breakpoints in the argument-passing scheme. *)
let compose f g = (); fun x -> f (g x)
I would have defined the compose function as:
let compose f g x = f (g x)
The reason they give is that since compose takes the functions f and g as arguments and returns the function fun x -> f (g x) as a result, defining it this way tells the compiler to insert a breakpoint after f and g but before x in the argument-passing scheme.
So I have two questions:
Why do we need breakpoints in the argument-passing scheme?
What difference would it make if we defined compose the normal way?
Coming from Haskell, this convention doesn't make any sense to me.
This is an efficiency hack to avoid the cost of a partial application in the expected use case indicated in the comment.
OCaml compiles curried functions into fixed-arity constructs, using a closure to partially apply them where necessary. This means that calls of that arity are efficient - there's no closure construction, just a function call.
There will be a closure construction within compose for fun x -> f (g x), but this will be more efficient than the partial application. Closures generated by partial application go through a wrapper caml_curryN which exists to ensure that effects occur at the correct time (if that closure is itself partially applied).
The fixed arity that the compiler chooses is based on a simple syntactic analysis - essentially, how many arguments are taken in a row without anything in between. The Jane St. programmers have used this to select the arity that they desire by injecting () "in between" arguments.
In short, let compose f g x = f (g x) is a less desirable definition because it would result in the common two-argument case of compose f g being a more expensive partial application.
Semantically, of course, there is no difference at all.
It's worth noting that compilation of partial application has improved in OCaml, and this performance hack is no longer necessary.
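For comparison, here are the two shapes written in Haskell, where they really are interchangeable (GHC does its own arity analysis, so the () trick above is OCaml-specific):

-- "all arguments up front":
compose1 :: (b -> c) -> (a -> b) -> a -> c
compose1 f g x = f (g x)

-- "take two arguments, return a closure":
compose2 :: (b -> c) -> (a -> b) -> (a -> c)
compose2 f g = \x -> f (g x)

main :: IO ()
main = do
  print (compose1 (+ 1) (* 2) 5)  -- 11
  print (compose2 (+ 1) (* 2) 5)  -- 11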
I looked at the GHC.Prim module and found that all the data types in it seem to be declared like data Float#, without constructors like = A | B, and all the functions in GHC.Prim are defined like gtFloat# = let x = x in x.
My question is whether these definitions make sense and what they mean.
I checked the header of GHC.Prim like below
{-
This is a generated file (generated by genprimopcode).
It is not code to actually be used. Its only purpose is to be
consumed by haddock.
-}
I guess this may be related to my question. Could someone please explain it to me?
It's magic :)
These are the "primitive operators and operations". They are hardwired into the compiler, hence there are no data constructors for primitives, and all the functions are bottom, since they are necessarily not expressible in pure Haskell.
(Bottom represents a "hole" in a haskell program, an infinite loop or undefined are examples of bottom)
To put it another way
These data declarations/functions exist to provide access to the raw compiler internals. GHC.Prim exists to export these primitives; it doesn't actually implement them or anything (e.g. its code isn't actually used). All of that is done in the compiler.
It's meant for code that needs to be extremely optimized. If you think you might need it, the GHC documentation on primitives is useful reading.
A brief expansion of jozefg's answer ...
Primops are precisely those operations that are supplied by the runtime because they can't be defined within the language (or shouldn't be, for reasons of efficiency, say). The true purpose of GHC.Prim is not to define anything, but merely to export some operations so that Haddock can document their existence.
The construction let x = x in x is used at this point in GHC's codebase because the value undefined has not yet been, um, "defined". (That waits until the Prelude.) But notice that the circular let construction, just like undefined, is both syntactically correct and can have any type. That is, it's an infinite loop with the semantics of ⊥, just as undefined is.
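A minimal sketch of both points, assuming GHC and base's Data.Function:

import Data.Function (fix)

-- Like undefined, this typechecks at every type; forcing it loops forever.
bottom :: a
bottom = let x = x in x

-- fix itself uses the same circular let: fix f = let x = f x in x,
-- so `let x = x in x` is just fix id.
main :: IO ()
main = print (take 5 (fix (1 :)))  -- [1,1,1,1,1]: fix also builds productive values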
... and an aside
Also note that in general the Haskell expression let x = z in y means "replace the variable x with the expression z wherever x occurs in the expression y". If you're familiar with the lambda calculus, you should recognize this as the reduction rule for the application of the lambda abstraction \x -> y to the term z. So is the Haskell expression let x = x in x nothing more than some syntax on top of the pure lambda calculus? Let's take a look.
First, we need to account for the recursiveness of Haskell's let expressions. The lambda calculus does not admit recursive definitions, but given a primitive fixed-point operator fix, we can encode recursiveness explicitly. For example, the Haskell expression let x = x in x has the same meaning as (fix \r x -> r x) z. (I've renamed the x on the right side of the application to z to emphasize that it has no implicit relation to the x inside the lambda).
Applying the usual definition of a fixed-point operator, fix f = f (fix f), our translation of let x = x in x reduces (or rather doesn't) like this:
(fix \r x -> r x) z ==>
(\s y -> s y) (fix \r x -> r x) z ==>
(\y -> (fix \r x -> r x) y) z ==>
(fix \r x -> r x) z ==> ...
So at this point in the development of the language, we've introduced the semantics of ⊥ from the foundation of the (typed) lambda calculus with a built-in fixed-point operator. Lovely!
We need a primitive fixed-point operation (that is, one that is built into the language) because it's impossible to define a fixed-point combinator in the simply typed lambda calculus and its close cousins. (The definition of fix in Haskell's Data.Function doesn't contradict this: it's defined recursively, but we need a fixed-point operator to implement recursion in the first place.)
If you haven't seen this before, you should read up on fixed-point recursion in the lambda calculus. A text on the lambda calculus is best (there are some free ones online), but some Googling should get you going. The basic idea is that we can convert a recursive definition into a non-recursive one by abstracting over the recursive call, then use a fixed-point combinator to pass our function (lambda abstraction) to itself. The base-case of a well-defined recursive definition corresponds to a fixed point of our function, so the function executes, calling itself over and over again until it hits a fixed point, at which point the function returns its result. Pretty damn neat, huh?
There are tons of tutorials on how to curry functions, and as many questions here at stackoverflow. However, after reading The Little Schemer, several books, tutorials, blog posts, and stackoverflow threads I still don't know the answer to the simple question: "What's the point of currying?" I do understand how to curry a function, just not the "why?" behind it.
Could someone please explain to me the practical uses of curried functions (outside of languages that only allow one argument per function, where the necessity of using currying is of course quite evident.)
edit: Taking into account some examples from TLS, what's the benefit of
(define (action kind)
  (lambda (a b)
    (kind a b)))
as opposed to
(define (action kind a b)
  (kind a b))
I can only see more code and no added flexibility...
One effective use of curried functions is reducing the amount of code.
Consider three functions, two of which are almost identical:
(define (add a b)
  (action + a b))
(define (mul a b)
  (action * a b))
(define (action kind a b)
  (kind a b))
If your code invokes add, it in turn calls action with kind +. The same with mul.
You defined these functions the way you would in many popular imperative languages (some of which have been adding lambdas, currying, and other features usually found in the functional world, because all of it is terribly handy).
All add and mul do is wrap the call to action with the appropriate kind. Now, consider curried definitions of these functions:
(define add-curried
  ((curry action) +))
(define mul-curried
  ((curry action) *))
They've become considerably shorter. We just curried the function action by passing it only one argument, the kind, and got back a curried function that accepts the remaining two arguments.
This approach allows you to write less code, with a higher level of maintainability.
Just imagine that the function action were rewritten to accept 3 more arguments. Without currying you would have to rewrite your implementations of add and mul:
(define (action kind a b c d e)
  (kind a b c d e))
(define (add a b c d e)
  (action + a b c d e))
(define (mul a b c d e)
  (action * a b c d e))
But currying saved you from that nasty and error-prone work; you don't have to change a single symbol in add-curried and mul-curried, because the caller provides the necessary number of arguments to action.
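For contrast, in a language where every function is curried by default, the same specialization is just partial application. A minimal Haskell sketch:

action :: (a -> b -> c) -> a -> b -> c
action kind a b = kind a b

-- No wrapper lambdas and no explicit curry call: applying action to
-- one argument yields a function waiting for the remaining two.
add, mul :: Int -> Int -> Int
add = action (+)
mul = action (*)

main :: IO ()
main = print (add 2 3, mul 2 3)  -- (5,6)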
They can make code easier to read. Consider the following two Haskell snippets:
lengths :: [[a]] -> [Int]
lengths xs = map length xs
lengths' :: [[a]] -> [Int]
lengths' = map length
Why give a name to a variable you're not going to use?
Curried functions also help in situations like this:
doubleAndSum ys = map (\xs -> sum (map (*2) xs)) ys
doubleAndSum' = map (sum . map (*2))
Removing those extra variables makes the code easier to read, and there's no need for you to mentally keep track of what xs is and what ys is.
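For example, doubleAndSum [[1,2],[3,4]] and doubleAndSum' [[1,2],[3,4]] both evaluate to [6,14].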
HTH.
You can see currying as a specialization. Pick some defaults and leave the user (maybe yourself) with a specialized, more expressive, function.
I think that currying is a traditional way to handle general n-ary functions provided that the only ones you can define are unary.
For example, in lambda calculus (from which functional programming languages stem), there are only one-variable abstractions (which translates to unary functions in FPLs). Regarding lambda calculus, I think it's easier to prove things about such a formalism since you don't actually need to handle the case of n-ary functions (since you can represent any n-ary function with a number of unary ones through currying).
(Others have already covered some of the practical implications of this decision so I'll stop here.)
Using all :: (a -> Bool) -> [a] -> Bool with a curried predicate.
all (`elem` [1,2,3]) [0,3,4,5]
Haskell infix operators can be curried on either side, so you can easily curry the needle or the container side of the elem function (is-element-of).
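A small sketch of both sections:

main :: IO ()
main = do
  print (all (`elem` [1,2,3]) [0,3,4,5])  -- False: container side fixed, 0 fails
  print (all (3 `elem`) [[1,2,3],[3,4]])  -- True: needle side fixed instead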
We cannot directly compose functions that take multiple parameters, yet function composition is one of the key concepts in functional programming. By using the currying technique we can compose functions that take multiple parameters.
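A small sketch of that point: a two-argument function can't go straight into (.), but a partial application of it can.

add :: Int -> Int -> Int
add x y = x + y

double :: Int -> Int
double = (* 2)

-- add cannot be composed with double directly, but `add x` is a
-- one-argument function and composes fine:
addThenDouble :: Int -> Int -> Int
addThenDouble x = double . add x

main :: IO ()
main = print (addThenDouble 1 2)  -- 6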
I would like to add an example to #Francesco's answer, so you don't have to add boilerplate in the form of a little lambda.
It is very easy to create closures. From time to time I use SRFI-26. It is really cute.
In and of itself, currying is syntactic sugar. Syntactic sugar is all about what you want to make easy. C, for example, wants to make instructions that are "cheap" in assembly language, like incrementing, easy, so it has syntactic sugar for incrementation: the ++ notation.
t = x + y
x = x + 1
is replaced by t = x++ + y
Functional languages could just as easily have stuff like:
f(x,y,z) = abc
g(r,s)(z) = f(r,s,z)
h(r)(s)(z) = f(r,s,z)
but instead it's all automatic. And that allows a g bound to particular values r0, s0 to be passed around as a one-variable function, as in the sketch below.
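In Haskell, for instance, the sugar is automatic (the names below are mine):

f :: Int -> Int -> Int -> Int
f x y z = x * 100 + y * 10 + z

-- g is f bound at the particular values r0 = 1, s0 = 2, and can be
-- passed around as an ordinary one-variable function:
g :: Int -> Int
g = f 1 2

main :: IO ()
main = print (map g [3, 4, 5])  -- [123,124,125]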
Take for example Perl's sort function, which takes
sort sub list
where sub is a function of two variables that evaluates to a boolean and
list is an arbitrary list.
You would naturally want to use the comparison operator (<=>) in Perl and have
sortordinal = sort (<=>)
where sortordinal works on lists. To do this you would need sort to be a curried function.
And in fact, sort on a list is defined in precisely this way in Perl.
In short: currying is sugar to make first class functions more natural.