Common Lisp the Language: "dynamic shadowing cannot occur" - scope

Near the end of chapter 3 of Common Lisp the Language, Steele writes "Constructs that use lexical scope effectively generate a new name for each established entity on each execution. Therefore dynamic shadowing cannot occur (though lexical shadowing may)". I am confused as to what exactly he means by "dynamic shadowing cannot occur". What would an example of "dynamic shadowing" look like?

Here is an example of what he might have meant:
(defun f (g)
  (let ((a 2))
    (funcall g a)))

(let ((a 1))
  (f (lambda (x) (- x a))))
This returns 1 in Common Lisp: the lexical binding of a inside f does not affect the binding of a established by the top-level let, so when f calls g, the lambda subtracts the top-level a (which is 1) from its argument x (which is 2).
Contrast this with the dynamic binding in Emacs Lisp, where the return value is 0: there, the binding of a inside f dynamically shadows the top-level binding of a for as long as g is running.
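You can reproduce this dynamic shadowing in Common Lisp itself by opting into dynamic scope with a special variable; a minimal sketch (my example, not Steele's):
(defvar *a* 1)             ; DEFVAR proclaims *A* special, i.e. dynamically scoped

(defun f (g)
  (let ((*a* 2))           ; this binding dynamically shadows the outer one
    (funcall g *a*)))

(f (lambda (x) (- x *a*))) ; => 0: inside the lambda, *A* is 2, as in the Emacs Lisp case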
You might also find it instructive to work through the contorted-example from that chapter and try it in both CL and Emacs Lisp.

Related

In Common Lisp, when are objects referenced and when are they directly accessed by value?

I was reading this question to try and get some insight into the answer. It specifically asks about pass-by-reference, and all of the answers seem to indicate there is no support for pass-by-reference. However, this answer would imply that, while pass-by-reference may not be supported, some values are indeed accessed by reference. A simpler example would involve cons cells; I can pass a cons cell to a function and change its cdr or car to whatever I please.
Ultimately I'd like to know if there is some clear delineation between (to use C# parlance) value-types and reference-types, and if there's any way (more convenient than the answer referenced above) to treat a value as a reference-type.
There is no distinction: all objects are passed by value in Lisp (at least in all Lisps I know of). However, some objects are mutable, and conses are one such type. So you can pass a cons cell to a procedure and mutate it in that procedure. Thus the important consideration is whether objects are mutable or not.
In particular this (Common Lisp) function always returns T as its first value, even though its second value may not have 0 as its car or cdr.
(defun cbv (&optional (f #'identity))
  (let ((c (cons 0 0)))
    (let ((cc c))
      (funcall f c)
      (values (eq c cc) c))))

> (cbv (lambda (c)
         (setf (car c) 1
               (cdr c) 2)))
t
(1 . 2)
However, since Common Lisp has lexical scope, first-class functions, and macros, you can do some trickery which makes it look a bit as if call-by-reference is happening:
(defmacro capture-binding (var)
  ;; Construct an object which captures a binding
  `(lambda (&optional (new-val nil new-val-p))
     (when new-val-p
       (setf ,var new-val))
     ,var))

(defun captured-binding-value (cb)
  ;; value of a captured binding
  (funcall cb))

(defun (setf captured-binding-value) (new cb)
  ;; change the value of a captured binding
  (funcall cb new))

(defun cbd (&optional (f #'identity))
  (let ((c (cons 0 0)))
    (let ((cc c))
      (funcall f (capture-binding c))
      (values (eq c cc) c cc))))
And now:
> (cbd (lambda (b)
         (setf (captured-binding-value b) 3)))
nil
3
(0 . 0)
If you understand how this works you probably understand quite a lot of how scope & macros work in Lisp.
There is an exception to the universality of passing objects by value in Common Lisp, which is mentioned by Rainer in a comment below: instances of some primitive types may be copied in some circumstances, for efficiency. This only ever happens for instances of specific types, and the objects for which it happens are always immutable. To deal with this case, CL provides an equality predicate, eql, which does the same thing as eq except that it knows about objects which may secretly be copied in this way and compares them properly.
So the safe thing to do is to use eql instead of eq: since objects which may be copied are always immutable, you will never get tripped up by this.
Here's an example where objects which you would naturally think of as identical turn out not to be. Given this definition:
(defun compare (a b)
  (values (eq a b)
          (eql a b)))
Then in the implementation I'm using I find that:
> (compare 1.0d0 1.0d0)
nil
t
So two double-precision floats with an identical written representation need not be eq, but they are always eql. And trying something which seems like it should be the same:
> (let ((x 1.0d0)) (compare x x))
t
t
So in this case it looks like the function call is not copying objects; rather, I started off with two different objects coming from the reader. However, the implementation is always allowed to copy numbers at will, and it might well do so under different optimisation settings.

Is there a programming language where a variable is evaluated when it is accessed?

Below is pseudo-code for what I want to explain, in JavaScript-like syntax.
const func1 = (x => x * x); // JavaScript Arrow Function syntax.
const func2 = (x => {log("I'm func2!"); return x + 1;});
var a;
var b <= func1(a); // `<=` is a *Binding* operator.
var c <= func2(b);
a = 1;
log(c); // Logs "I'm func2!", and then logs 2, which is ((a * a) + 1).
log(b); // Logs 1, which is (a * a).
a = 2;
log(c); // Logs "I'm func2!", and then logs 5, which is ((a * a) + 1).
log(b); // Logs 4, which is (a * a).
// When b or c is accessed, the function bound to it is called to produce the value.
Is there a programming language that has the concept explained above?
The main evaluation strategies in programming languages are lazy (for example, call-by-name) and strict (for example, call-by-value). Lazy languages evaluate an expression only when it is needed; strict languages evaluate an expression eagerly. To be concrete, the difference usually arises in function calls: lazy languages pass the argument to the function unevaluated, and strict languages evaluate the argument and then call the function. There are lots of tradeoffs that you can read about, e.g. https://en.wikipedia.org/wiki/Evaluation_strategy.
But your code example raises a question: how does the binding to b work? Regardless of whether the language is strict or lazy, you have a problem because you're going to want to use the value of a, which you haven't bound yet. This brings up a second parameter in programming language design: static versus dynamic scope. In a statically-scoped language, a variable's scope is determined statically. That is, the "a" passed to func1 is exactly the a that is in scope at the time func1 is called. In a dynamically-scoped language, a variable's scope is determined at runtime, so it would be possible to imagine that func1(a) is evaluated lazily, in which case the a in scope at that time is the one after the assignment to a.
The question of static vs. dynamic scope, however, is mostly a resolved one: static scope is almost always the right answer. Some languages tried using dynamic scope (e.g. older LISPs) but switched to static scope because people found it too confusing.
So, in summary, I could imagine a language that works the way you describe, but it's likely that most people would find it very confusing to use.
I don't know of such a language, but I know a different strategy for that: you can wrap the call in a lambda. It would have the same effect as you have described. E.g. in Lisp you can do:
(defun some-func (x) (* x x))
(defvar *a* (lambda () (some-func 4)))

*a*            ;; => the function
(funcall *a*)  ;; => 16
That is possible in almost all of today's languages, even in C, though there it gets pretty hard.
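Common Lisp can also hide the explicit call, so that a plain variable reference triggers the re-evaluation, using symbol-macrolet. A minimal sketch (my example, not part of the original answer):
;; B and C look like variables, but every access expands to an
;; expression that is re-evaluated on the spot.
(let ((a 1))
  (symbol-macrolet ((b (* a a))
                    (c (+ b 1)))
    (print (list b c))     ; prints (1 2)
    (setf a 2)
    (print (list b c))))   ; prints (4 5)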
What a funny coincidence! I happen to be aware of one language which has this property; it has existed since Monday: https://github.com/drcz/ZJEB
It was supposed to be partly a joke and partly an experiment in expressiveness; here is a REPL session with your example:
(-- ALGORITHMIC LANGUAGE ZJEB v0.1 --)
copyleft 2016/08/08 by Scislav Dercz
type (halt) to quit
READY.
>(def func1 (bind (x) (* x x)))
(new shorthand func1 memoized)
>(def func2 (bind (x) (+ x 1)))
(new shorthand func2 memoized)
>(def a 1)
(new shorthand a memoized)
>(def b (func1 a))
(new shorthand b memoized)
>(def c (func2 b))
(new shorthand c memoized)
>c
2
>b
1
>(def a 2)
(new shorthand a memoized)
>c
5
>b
4
And here's a simpler one:
>(def x 5)
(new shorthand x memoized)
>(def y (* x x))
(new shorthand y memoized)
>y
25
>(def x 3)
(new shorthand x memoized)
>y
9
The thing is, unlike in Scheme (or any other Lisp I know), the "def" form does not evaluate its second operand; it just creates a syntactic "shorthand" for the given expression. So any time the evaluator finds a symbol, it tries to look it up in the environment (i.e. the variable-value bindings created with the "bind" form, which is a multi-case pattern-matching variant of Lisp's "lambda" form), and if that fails, it also checks the list of definitions; when it succeeds there, it immediately evaluates the corresponding expression.
It was not meant to do the things you describe (and in general it is not useful at all, it only forces the evaluator to do a little more work in the current implementation), as the language is purely functional; actually, the only reason re-defining a shorthand is possible at all is the convenience of working with the REPL. This "feature" is in there because I wanted to think about recursive definitions like
(def fact (bind (0) 1
                (n) (* n (fact (- n 1)))))
to actually mean an infinite expression
(bind (0) 1
      (n) (* n ((bind (0) 1
                      (n) (* n ...))
                (- n 1))))
in the spirit of Dana Scott's "Lattice of flow diagrams".
Note that infinite data structures like
(def evil `(ha ,evil))
do break the interpreter when you try to evaluate them, though I should probably make them legit [by means of partly-lazy evaluation or something] as well...
Back to your question, you might take a look at various Functional Reactive Programming languages/frameworks, as they provide the behaviors you ask for, at least for certain datatypes.
Svik's comment is also very nice, spreadsheet formulas work this way (and some people claim them to be an FRP language).
(Lazy evaluation by itself is rather not the answer, because variable scope in any "lazy" language I happen to know is lexical -- laziness plus dynamic scoping would probably be too confusing, as mcoblenz stated above.)
Lazy evaluation is a method to evaluate a Haskell program. It means that expressions are not evaluated when they are bound to variables, but their evaluation is deferred until their results are needed by other computations.
https://wiki.haskell.org/Lazy_evaluation
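A tiny illustration (my sketch, not from the wiki): binding an expression to a variable does not evaluate it, so the error below is never raised.
-- x is bound to an unevaluated thunk; const ignores it,
-- so (error "boom") is never forced.
main :: IO ()
main = let x = error "boom"
       in print (const 42 x)  -- prints 42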
What you need is call-by-name and mutable variables. One of the earliest languages with (optional) call-by-name was Algol 60.
A more modern language with (optional) call-by-name is Scala. Here is a port of your sample code to Scala:
def log(a: Any) = println(a)

def func1(x: => Int) = x * x

def func2(x: => Int) = {
  log("I'm func2!")
  x + 1
}

var a = 0

def rest1(b: => Int) = {
  def rest2(c: => Int) = {
    a = 1
    log(c) // Logs "I'm func2!", and then logs 2, which is ((a * a) + 1).
    log(b) // Logs 1, which is (a * a).
    a = 2
    log(c) // Logs "I'm func2!", and then logs 5, which is ((a * a) + 1).
    log(b) // Logs 4, which is (a * a).
  }
  rest2(func2(b))
}

rest1(func1(a))
Parameters that are declared such as b: => Int are call-by-name (whereas b: Int would be call-by-value).

Representing undefined result in MIT Scheme

Imagine I have a function with a domain of all integers bigger than 0. I want the result of other inputs to be undefined. For the sake of simplicity, let's say this is the increment function. In Haskell, I could achieve this with something like
f :: Integer -> Integer
f x
  | x > 0     = x + 1
  | otherwise = undefined
Of course, the example is quite gimped, but it should be clear what I want to achieve. I'm not sure how to achieve something similar in Scheme.
(define (f x)
  (if (> x 0)
      (+ x 1)
      (?????)))
My idea is to just stick an error in there but is there any way to replicate the Haskell behaviour more closely?
Your question is related to this one, which has answers pointing out that in R5RS (which I guess MIT Scheme partially supports?) an if with one branch returns an "unspecified value". So the equivalent of the Haskell code would be:
(define (f x)
  (if (> x 0)
      (+ x 1)))
You probably already know this: in Haskell, undefined is defined in terms of error and is primarily used in development as a placeholder to be removed later. The proper way to define your Haskell function would be to give it a type like Integer -> Maybe Integer.
A common undefined value is void, defined as (define void (if #f #f)).
Notice that not all Scheme implementations allow an if without the alternative part (as suggested in the other answers) - for instance, Racket will flag this situation as an error.
In Racket you can explicitly write (void) to specify that a procedure returns no useful result (check if this is available in MIT Scheme). From the documentation:
The constant #<void> is returned by most forms and procedures that have a side-effect and no useful result. The constant #<undefined> is used as the initial value for letrec bindings. The #<void> value is always eq? to itself, and the #<undefined> value is also eq? to itself.
(void v ...) → void?
Returns the constant #<void>. Each v argument is ignored.
That is, the example in the question would look like this:
(define (f x)
  (if (> x 0)
      (+ x 1)
      (void)))
Speaking specifically to MIT Scheme, I believe #!unspecific is the constant that is returned from an if without an alternative.
(eq? (if (= 1 2) 3) #!unspecific) => #t

Continuation passing style - function composition

I'm learning about CPS with Racket, and I've managed to write up these functions:
; lift a regular single-arg function into CPS
(define (lift/k f)
  (lambda (x k)
    (k (f x))))

; compose two CPS functions
(define (compose/k f g)
  (lambda (x k)
    (g x (lambda (y)
           (f y k)))))
They seem to work correctly:
(define (is-two/k x k)
  (k (= x 2)))

(define is-not-two/k (compose/k (lift/k not) is-two/k))

(is-not-two/k 3 display)
(is-not-two/k 2 display)
#t#f
I'm wondering if these functions are still "true CPS". Have I messed up "true" continuation-passing with these functions? Is it kosher to use function composition techniques in CPS? Is it encouraged? Or would it be considered a "compromise" to do this? Is there a more CPS-y way to do it?
Yes I know I just asked 5 questions, but the basic idea behind them all (which I'm not sure I understand correctly) is the same. Explanations in other Lisps, Haskell, Erlang, or other functional languages are fine.
The continuation-passing-style transform can be partial, or complete. You're usually working with a system where certain primitives (+, -, etc.) are stuck in non-cps-land. Fortunately, CPS works fine either way.
The steps in CPSing:
1. Pick which functions are going to be primitive.
2. CPS-transform so that all non-primitive functions (including continuations) are called only in tail position.
So, in your code, your 'lift/k' is essentially treating its given function as being primitive: note that the body of lift/k calls 'f' in non-tail position. If you want not to treat the lifted function as a primitive, you must go in and rewrite it explicitly.
Your 'compose' function composes two CPSed functions, but is not itself in CPS (that is, you're assuming that 'compose' is primitive). You probably want to CPS it. Note that since it just returns a value straight off, this is simple:
; compose two CPS functions
(define (compose/k f g k)
  (k (lambda (x k)
       (g x (lambda (y)
              (f y k))))))
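To illustrate (my usage sketch, not part of the original answer): with this version the composed function arrives via the continuation rather than as a return value.
(compose/k (lift/k not) is-two/k
           (lambda (is-not-two/k)
             ; the composed function is delivered to the continuation
             (is-not-two/k 3 display)))  ; prints #t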

Practical use of curried functions?

There are tons of tutorials on how to curry functions, and as many questions here at stackoverflow. However, after reading The Little Schemer, several books, tutorials, blog posts, and stackoverflow threads I still don't know the answer to the simple question: "What's the point of currying?" I do understand how to curry a function, just not the "why?" behind it.
Could someone please explain to me the practical uses of curried functions (outside of languages that only allow one argument per function, where the necessity of currying is of course quite evident).
edit: Taking into account some examples from TLS, what's the benefit of
(define (action kind)
  (lambda (a b)
    (kind a b)))
as opposed to
(define (action kind a b)
  (kind a b))
I can only see more code and no added flexibility...
One effective use of curried functions is decreasing the amount of code.
Consider three functions, two of which are almost identical:
(define (add a b)
  (action + a b))

(define (mul a b)
  (action * a b))

(define (action kind a b)
  (kind a b))
If your code invokes add, it in turn calls action with + as its kind. The same goes for mul.
You defined these functions the way you would in many popular imperative languages (some of which have been adding lambdas, currying, and other features usually found in the functional world, because all of it is terribly handy).
All add and mul do is wrap the call to action with the appropriate kind. Now consider curried definitions of these functions:
(define add-curried
  ((curry action) +))

(define mul-curried
  ((curry action) *))
They've become considerably shorter. We just curried the function action by passing it only one argument, the kind, and got back a curried function that accepts the remaining two arguments.
This approach allows you to write less code, with a higher level of maintainability.
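For instance, here is a hypothetical check, assuming curry is Racket's (from racket/function):
; the curried versions accept the remaining two arguments as usual
(add-curried 2 3)  ; => 5, i.e. (action + 2 3)
(mul-curried 2 3)  ; => 6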
Just imagine that action were at some point rewritten to accept 3 more arguments. Without currying, you would have to rewrite your implementations of add and mul:
(define (action kind a b c d e)
  (kind a b c d e))

(define (add a b c d e)
  (action + a b c d e))

(define (mul a b c d e)
  (action * a b c d e))
But currying saves you from that nasty and error-prone work; you don't have to rewrite so much as a symbol in add-curried and mul-curried, because the calling code simply provides however many arguments action needs.
They can make code easier to read. Consider the following two Haskell snippets:
lengths :: [[a]] -> [Int]
lengths xs = map length xs

lengths' :: [[a]] -> [Int]
lengths' = map length
Why give a name to a variable you're not going to use?
Curried functions also help in situations like this:
doubleAndSum ys = map (\xs -> sum (map (*2) xs)) ys

doubleAndSum' = map (sum . map (*2))
Removing those extra variables makes the code easier to read, and there's no need to mentally keep track of what xs is and what ys is.
HTH.
You can see currying as a specialization. Pick some defaults and leave the user (maybe yourself) with a specialized, more expressive, function.
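For example, a small hypothetical sketch (again assuming Racket's curry; the name pow2 is mine):
(define pow2 (curry expt 2))  ; expt specialized to base 2
(pow2 10)                     ; => 1024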
I think that currying is a traditional way to handle general n-ary functions provided that the only ones you can define are unary.
For example, in lambda calculus (from which functional programming languages stem), there are only one-variable abstractions (which translates to unary functions in FPLs). Regarding lambda calculus, I think it's easier to prove things about such a formalism since you don't actually need to handle the case of n-ary functions (since you can represent any n-ary function with a number of unary ones through currying).
(Others have already covered some of the practical implications of this decision so I'll stop here.)
Using all :: (a -> Bool) -> [a] -> Bool with a curried predicate.
all (`elem` [1,2,3]) [0,3,4,5]
Haskell infix operators can be curried on either side, so you can easily curry the needle or the container side of the elem function (is-element-of).
We cannot directly compose functions that take multiple parameters, yet function composition is one of the key concepts in functional programming. By using currying, we can compose functions that take multiple parameters.
I would like to add an example to @Francesco's answer, so you don't have to increase boilerplate with a little lambda.
It is very easy to create closures. From time to time I use SRFI-26. It is really cute.
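A tiny sketch, assuming an implementation that provides SRFI-26 (the name inc is mine):
(define inc (cut + 1 <>))  ; <> marks the argument slot: (lambda (x) (+ 1 x))
(inc 41)                   ; => 42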
In and of itself, currying is syntactic sugar. Syntactic sugar is all about what you want to make easy. C, for example, wants to make instructions that are "cheap" in assembly language, like incrementing, easy, and so it has syntactic sugar for incrementation: the ++ notation.
t = x + y
x = x + 1
is replaced by
t = x++ + y
Functional languages could just as easily have stuff like:
f(x,y,z) = abc
g(r,s)(z) = f(r,s,z)
h(r)(s)(z) = f(r,s,z)
but instead it's all automatic. And that allows for a g bound to a particular r0, s0 (i.e. specific values) to be passed as a one-variable function.
Take, for example, Perl's sort function, which takes
sort sub list
where sub is a function of two variables that evaluates to a boolean and list is an arbitrary list.
You would naturally want to use the comparison operator (<=>) in Perl and have
sortordinal = sort (<=>)
where sortordinal works on lists. To do this you would need sort to be a curried function.
And in fact, sort of a list is defined in precisely this way in Perl.
In short: currying is sugar to make first-class functions more natural.
