I was wondering how the following code evaluates to 3.
(define (foo y) ((lambda (x) y) ((lambda (y)(* y y)) y)))
(foo 3)
I have been looking at it for a while and cannot seem to understand why the evaluation does not result in 9. Could someone provide a detailed step-by-step explanation of how this evaluates to 3?
Let's start by indenting the code so that it's easier to understand:
(define (foo y)
  ((lambda (x) y)
   ((lambda (y) (* y y))
    y)))
Now let's evaluate it, from the inside-out:
(define (foo y)
  ((lambda (x) y)
   ((lambda (y) (* y y))
    3)))                 ; pass the parameter

(define (foo y)
  ((lambda (x) y)
   (* 3 3)))             ; evaluate innermost lambda

(define (foo y)
  ((lambda (x) y) 9))    ; pass the result of evaluation
Aha! That's where we get the 3. Even though we're passing 9 as a parameter (bound to x), we're simply returning the value of the outermost y parameter, which was 3 all along:
=> 3
With let re-writing,
(define (foo y) ((lambda (x) y) ((lambda (y)(* y y)) y)))
(foo 3)
=
(let ([y 3])                                     ; by application
  ((lambda (x) y) ((lambda (y) (* y y)) y)))
=
(let ([y 3])
  (let ([x ((lambda (y) (* y y)) y)])            ; by application
    y))
=
(let ([y 3])
  (let ([x ((lambda (z) (* z z)) y)])            ; by alpha-renaming
    y))
=
(let ([y 3])
  (let ([x (let ([z y])                          ; by application
             (* z z))])
    y))
=
(let ([y 3])
  (let ([x (* y y)])                             ; by let-elimination
    y))
=
(let ([y 3])
  (let ([x 9])                                   ; by expression evaluation
    y))
=
(let ([y 3])
  y)                                             ; by let-elimination
=
3 ; by let-elimination
As you can see, the nested (lambda (y) (* y y)) sits in a nested scope and does not affect the y that is finally returned; it only affects x, whose value, 9, is computed and then discarded.
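For comparison, here is a minimal Haskell transliteration of foo (purely illustrative); lexical scoping plays out the same way there:

-- foo transliterated into Haskell; the inner \y shadows the outer y
-- only inside its own body, so the square is bound to x and then ignored
foo :: Int -> Int
foo y = (\x -> y) ((\y -> y * y) y)

main :: IO ()
main = print (foo 3)   -- prints 3, not 9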
I am starting to learn Haskell, and today in class our teacher solved an exercise in which we have to perform the given substitutions. One of the expressions the teacher solved was the following:
(\ x. x) (\ y. y + x) [x: = z + z]
(\ x. x) (\ y. y + (z + z))
It seems the teacher only replaced the free x on the right with the requested value, but did not reduce the expression. What I see in the expression (\ x. x) (\ y. y + x) is that it is a redex, so what I would do is first reduce and then substitute.
So in my notebook I wrote these steps to get the solution:
(\ x. x) (\ y. y + x) [x: = z + z]
(\ y. y + x) [x: = z + z]
(\ y. y + (z + z))
My doubt: is my solution correct? Can I first reduce the expression and then substitute? Thanks.
I am studying Haskell and learning what abstraction, substitution (beta equivalence), application, and free and bound variables (alpha equivalence) are, but I have some doubts about these exercises and don't know if my solutions are correct.
Make the following substitutions
1. (λ x → y x x) [x:= f z]
Sol. (\x -> y x x) =>α (\w -> y w w) =>α (\w -> x w w) =>β (\w -> f z w w)
2. ((λ x → y x x) x) [y:= x]
Sol. ((\x -> y x x)x) =>α (\w -> y w w)[y:= x] = (\w -> x w w)
3. ((λ x → y x) (λ y → y x) y) [x:= f y]
Sol. (approximation, I don't know how to do it): ((\x -> y x)(\y -> y x) y) =>β
(\x -> y x)y x)[x:= f y] =>β y x [x:= f y] = y f y
4. ((λ x → λ y → y x x) y) [y:= f z]
Sol. (approximation): ((\x -> (\y -> (y x x))) y) =>β ((\y -> (y x x)) y) =>α ((\y -> (y x x)) f z)
Another doubt I have: can I run these expressions on this website? It is a lambda calculus calculator, but I do not know how to run these tests.
1. (λ x → y x x) [x:= f z]
Sol. (\x -> y x x) =>α (\w -> y w w) =>α (\w -> x w w) =>β (\w -> f z w w)
No, you can't rename y, it's free in (λ x → y x x). Only bound variables can be (consistently) α-renamed. But only free variables can be substituted, and there's no free x in that lambda term.
2. ((λ x → y x x) x) [y:= x]
Sol. ((\x -> y x x)x) =>α (\w -> y w w)[y:= x] = (\w -> x w w)
Yes; substituting x for y would allow it to be captured by the λ x, so you indeed must α-rename the x in (λ x → y x x) first to some new unique name, as you did. But you've dropped the application to the free x for some reason; you can't just omit parts of a term, so it's ((\w -> y w w) x)[y:= x]. Now perform the substitution. Note that you're not asked to perform the β-reduction of the resulting term, just the substitution.
I'll leave the other two out. Just follow the rules carefully. It's easy if you rename all bound names to unique names first, even if the renaming is not strictly required, for instance
((λ x → y x) (λ y → y x) y) [x:= f y] -->
((λ w → y w) (λ z → z x) y) [x:= f y]
The "unique" part includes also the free variables used in the substitution terms, that might get captured after being substituted otherwise (i.e. without the renaming being performed first, in the terms in which they are being substituted). That's why we had to rename the bound y in the above term, -- because y appears free in the substitution term. We didn't have to rename the bound x but it made it easier that way.
I'm currently trying to implement beta reduction in Haskell, and I'm having a small problem. I've managed to figure out the majority of it; however, as it is now, I'm getting one small error when I test, and I can't figure out how to fix it.
The code uses a custom datatype, Term, and a substitution function which I defined beforehand; both of these are below.
-- Term datatype
data Term = Variable Var | Lambda Var Term | Apply Term Term

-- Substitution function
substitute :: Var -> Term -> Term -> Term
substitute x n (Variable m)
  | m == x    = n
  | otherwise = Variable m
substitute x n (Lambda m y)
  | m == x    = Lambda m y
  | otherwise = Lambda z (substitute x n (rename m z y))
  where z = fresh (merge (merge (used y) (used n)) [x])
substitute x n (Apply m y) = Apply (substitute x n m) (substitute x n y)
-- Beta reduction
beta :: Term -> [Term]
beta (Variable x) = []
beta (Lambda x y) = map (Lambda x) (beta y)
beta (Apply (Lambda x m) n) =
  [(substitute x n m)]
    ++ [(Apply (Lambda x n) m) | m <- beta m]
    ++ [(Apply (Lambda x z) m) | z <- beta n]
beta (Apply x y) = [Apply x' y | x' <- beta x] ++ (map (Apply x) (beta y))
The expected outcome is as follows:
*Main> Apply example (numeral 1)
(\a. \x. (\y. a) x b) (\f. \x. \f. x)
*Main> beta it
[\c. (\b. \f. \x. \f. x) c b,(\a. \x. a b) (\f. \x. f x)]
However this is my outcome:
*Main> Apply example (numeral 1)
(\a. \x. (\y. a) x b) (\f. \x. \f. x)
*Main> beta it
[\c. (\b. \f. \x. \f. x) c b,(\a. \f. \x. \f. x) (\x. a b)]
Any help would be much appreciated.
I think you've also got your Church numeral encoded wrong: numeral 1 should return
\f. \x. f x
rather than
\f. \x. \f. x.
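The remaining difference, the second element of your output, looks like it comes from the Apply (Lambda x m) n case of beta: when you rebuild the application after reducing inside the body or inside the argument, the body and argument end up swapped. A sketch of how that clause could look, keeping your names (untested against your helper functions):

beta (Apply (Lambda x m) n) =
  [substitute x n m]                            -- contract the redex itself
    ++ [Apply (Lambda x m') n | m' <- beta m]   -- reduce inside the body
    ++ [Apply (Lambda x m) n' | n' <- beta n]   -- reduce inside the argument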
I'm starting to learn Emacs Lisp and as an exercise I'm trying to implement map using foldr. The code is the following:
(defun foldr (f z l)
  (if (null l)
      z
    (funcall f (car l)
             (foldr f z
                    (cdr l)))))

(defun map (f l)
  (foldr (lambda (x z)
           (cons (funcall f x) z))
         nil
         l))
However, when I try to evaluate, for example,
(map (lambda (x) (+ x 1)) '(1 2 3 4 5))
in Emacs with eval-last-sexp (after evaluating both foldr and map), the result is the following:
Debugger entered--Lisp error: (wrong-number-of-arguments (lambda (x z) (cons (funcall f x) z)) 1)
(lambda (x z) (cons (funcall f x) z))(5)
funcall((lambda (x z) (cons (funcall f x) z)) 5)
(cons (funcall f x) z)
(lambda (x z) (cons (funcall f x) z))(5 nil)
funcall((lambda (x z) (cons (funcall f x) z)) 5 nil)
(if (null l) z (funcall f (car l) (foldr f z (cdr l))))
foldr((lambda (x z) (cons (funcall f x) z)) nil (5))
(funcall f (car l) (foldr f z (cdr l)))
(if (null l) z (funcall f (car l) (foldr f z (cdr l))))
foldr((lambda (x z) (cons (funcall f x) z)) nil (4 5))
(funcall f (car l) (foldr f z (cdr l)))
(if (null l) z (funcall f (car l) (foldr f z (cdr l))))
foldr((lambda (x z) (cons (funcall f x) z)) nil (3 4 5))
(funcall f (car l) (foldr f z (cdr l)))
(if (null l) z (funcall f (car l) (foldr f z (cdr l))))
foldr((lambda (x z) (cons (funcall f x) z)) nil (2 3 4 5))
(funcall f (car l) (foldr f z (cdr l)))
(if (null l) z (funcall f (car l) (foldr f z (cdr l))))
foldr((lambda (x z) (cons (funcall f x) z)) nil (1 2 3 4 5))
map((lambda (x) (+ x 1)) (1 2 3 4 5))
eval((map (function (lambda (x) (+ x 1))) (quote (1 2 3 4 5))) nil)
eval-last-sexp-1(nil)
#[257 "\204\303!\207 \303!\n)B\211A =\204\211A\211#\207" [eval-expression-debug-on-error eval-last-sexp-fake-value debug-on-error eval-last-sexp-1] 4 2471606 "P"](nil)
ad-Advice-eval-last-sexp(#[257 "\204\303!\207 \303!\n)B\211A =\204\211A\211#\207" [eval-expression-debug-on-error eval-last-sexp-fake-value debug-on-error eval-last-sexp-1] 4 2471606 "P"] nil)
apply(ad-Advice-eval-last-sexp #[257 "\204\303!\207 \303!\n)B\211A =\204\211A\211#\207" [eval-expression-debug-on-error eval-last-sexp-fake-value debug-on-error eval-last-sexp-1] 4 2471606 "P"] nil)
eval-last-sexp(nil)
call-interactively(eval-last-sexp nil nil)
command-execute(eval-last-sexp)
I don't understand why this doesn't work, when the following super similar implementation in Haskell works fine:
foldr f z l = if (null l)
              then z
              else f (head l) (foldr f z (tail l))

map f l = foldr (\ x z -> (:) (f x) z) [] l
So why aren't the Lisp and Haskell programs equivalent? What's the nature of the problem in the Lisp implementation, i.e., why isn't map working?
(setq lexical-binding t)

f is a free variable in the lambda form you pass to foldr. Under Emacs Lisp's default dynamic binding, that lambda does not close over map's f; when foldr calls it, the innermost dynamic binding of f is foldr's own f parameter, i.e. the lambda itself, so (funcall f x) calls the two-argument lambda with a single argument and you get the wrong-number-of-arguments error shown in your backtrace. Haskell, by contrast, is always lexically scoped, so its lambda captures the f you expect.
With lexical binding enabled (for example by evaluating the form above in the same buffer before evaluating your definitions, or by putting ;; -*- lexical-binding: t; -*- on the file's first line), your example works fine:
(map (lambda (x) (+ x 1)) '(1 2 3 4 5)) ; ==> (2 3 4 5 6)
I am given the following two versions of a tree-flattening program and am asked to prove identical behaviour.
flatten :: Tree -> [Int]
flatten (Leaf z)   = [z]                               F.1
flatten (Node x y) = concat (flatten x) (flatten y)    F.2

flatten' :: Tree -> [Int] -> [Int]
flatten' (Leaf z)   a = concat [z] a                   P.1
flatten' (Node x y) a = flatten' x (flatten' y a)      P.2

concat :: [a] -> [a] -> [a]
concat []      a = a                                   C.1
concat (h : t) a = h : concat t a                      C.2
Prove that:
flatten' z a = concat (flatten z) a
Base Case:
LHS:
flatten' (Leaf z) a = concat [z] a By P.1
RHS:
concat (flatten z) a = concat (flatten (Leaf z)) a
= concat [z] a By F.1 and C.1
LHS=RHS, hence base case holds
Inductive Case:
(Only possible thanks to the guy below who explained how induction on binary trees works!)
Assume that:
flatten' x a = concat (flatten x) a Ind. Hyp1
flatten' y a = concat (flatten y) a Ind. Hyp2
Show that:
flatten' (Node x y) a = concat (flatten (Node x y)) a
LHS:
flatten' (Node x y) a
  = flatten' x (flatten' y a)                    By P.2
  = flatten' x (concat (flatten y) a)            By Ind. Hyp2
  = concat (flatten x) (concat (flatten y) a)    By Ind. Hyp1
RHS:
concat (flatten (Node x y)) a
  = concat (concat (flatten x) (flatten y)) a    By F.2
  = concat (flatten x) (concat (flatten y) a)    By associativity of concat (provable from C.1 and C.2)
LHS = RHS, hence inductive step holds. End of proof.
When inducting on lists, your induction hypothesis is that the wanted property holds on the list tail, and you have to prove that it also holds on the whole list.
On trees, it's only slightly different: your induction hypothesis is that the wanted property holds on both subtrees, and you have to prove that it also holds on the whole tree.
Assume that:
forall a, flatten' x a = concat (flatten x) a Ind. Hyp. 1
forall a, flatten' y a = concat (flatten y) a Ind. Hyp. 2
Show that:
forall a, flatten' (Node x y) a = concat (flatten (Node x y)) a
I think you can now guess how to proceed from here, so I won't spoil the fun. You might need to rely on some basic property of concat for some sub-step.
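For reference, the basic property of concat that the inductive step leans on is associativity; a small, self-contained sketch that checks it on one example, using the exercise's own definitions (hiding the Prelude's concat):

import Prelude hiding (concat)

concat :: [a] -> [a] -> [a]
concat []      a = a              -- C.1
concat (h : t) a = h : concat t a -- C.2

-- associativity: concat (concat x y) z == concat x (concat y z),
-- provable by induction on x using only C.1 and C.2
main :: IO ()
main = print (concat (concat [1, 2] [3]) [4] == concat [1, 2 :: Int] (concat [3] [4]))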
Final note: in your base case, you mentioned C.1 as a justification -- are you sure you actually used that?