Alpha equivalence between variables in lambda calculus - haskell

Just a fairly simple question (so it seems to me): if the two variables in (x)(x) are alpha-equivalent, is (x1x2)(x2x1) alpha-equivalent?

Two terms are alpha-equivalent iff one can be converted into the other purely by renaming bound variables.
A variable is considered to be a bound variable if it matches the parameter name of some enclosing lambda. Otherwise it's a free variable. Here are a few examples:
λx. x -- x is bound
λx. y -- y is free
λf. λx. f x y -- f and x are bound, y is free
f (λf. f x) -- the first f is free; the second is bound. x is free
z -- z is free
Basically, "bound" and "free" roughly correspond to the notions of "in scope" and "out of scope" in procedural languages.
Alpha-equivalence basically captures the idea that it's safe to rename a variable in a program if you also fix all the references to that variable. That is, when you change the parameter of a lambda term, you also have to go into the lambda's body and change the usages of that variable. (If the name is re-bound by another lambda inside the first lambda, you'd better make sure not to perform the renaming inside the inner lambda.)
Here are some examples of alpha-equivalent terms:
λx. x <-> λy. y <-> λberp. berp
λx. λf. f x <-> λx. λg. g x <-> λf. λx. x f <-> λx1. λx2. x2 x1
λf. λf. f f <-> λg. λf. f f <-> λf. λg. g g
So is x x alpha-equivalent to x1x2 x1x2? No! x is free in the first term, because it's not bound by an enclosing lambda. (Perhaps it's a reference to a global variable.) So it's not safe to rename it to x1x2.
I suspect your tutor really meant to say that λx. x x is alpha-equivalent to λx1x2. x1x2 x1x2. Here the x is bound by the lambda, so you can safely rename it.
Is x1 x2 alpha-equivalent to x2 x1? For the same reason, no.
And is λx1. λx2. x1 x2 equivalent to λx1. λx2. x2 x1? Again, no, because this isn't just a renaming - the x1 and x2 variables moved around.
However, λx1. λx2. x1 x2 is alpha-equivalent to λx2. λx1. x2 x1:
rename x1 to some temporary name like z: λz. λx2. z x2
rename x2 to x1: λz. λx1. z x1
rename z back to x2: λx2. λx1. x2 x1
Getting renaming right in a language implementation is a fiddly enough problem that many compiler writers opt for a nameless representation of terms called de Bruijn indices. Rather than using text, variables are represented as a number measuring how many lambdas away the variable was bound. A nameless representation of λx2. λx1. x2 x1 would look like λ. λ. 2 1. Note that that's exactly the same as the de Bruijn representation of λx1. λx2. x1 x2. de Bruijn indices thoroughly solve the problem of alpha-equivalence (although they are quite hard to read).
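A minimal sketch of this idea in Haskell (all type and function names here are my own, and indices are 0-based rather than the 1-based counting used above): converting both terms to de Bruijn form and comparing for equality decides alpha-equivalence.

```haskell
import Data.List (elemIndex)

-- Ordinary named lambda terms
data Term = Var String | Lam String Term | App Term Term

-- Nameless terms: bound variables become indices (0 = nearest lambda),
-- free variables keep their names
data DB = Bound Int | Free String | DLam DB | DApp DB DB
  deriving (Eq, Show)

-- Convert to de Bruijn form, tracking the stack of enclosing binders
toDB :: [String] -> Term -> DB
toDB env (Var v)   = maybe (Free v) Bound (elemIndex v env)
toDB env (Lam v b) = DLam (toDB (v : env) b)
toDB env (App f a) = DApp (toDB env f) (toDB env a)

-- Alpha-equivalence: the de Bruijn forms must be identical
alphaEq :: Term -> Term -> Bool
alphaEq s t = toDB [] s == toDB [] t

main :: IO ()
main = do
  -- λx. x  vs  λberp. berp: True
  print (alphaEq (Lam "x" (Var "x")) (Lam "berp" (Var "berp")))
  -- x x  vs  x1 x2: False, because the free variables differ
  print (alphaEq (App (Var "x") (Var "x")) (App (Var "x1") (Var "x2")))
```

Note how `elemIndex` automatically handles shadowing: the nearest binder wins, so λf. λf. f f and λg. λf. f f come out equal.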

Related

Pattern-matching syntax not Constructor application syntax

Given a two-place data constructor, I can partially apply it to one argument then apply that to the second. Why can't I use the same syntax for pattern matching?
data Point = MkPoint Float Float
x = 1.0 :: Float; y = 2.0 :: Float
thisPoint = ((MkPoint x) y) -- partially apply the constructor
(MkPoint x1 y1) = thisPoint -- pattern match OK
((MkPoint x2) y2) = thisPoint -- 'partially apply' the pattern, but rejected: parse error at y2
((MkPoint x3 y3)) = thisPoint -- this accepted, with double-parens
Why do I want to do that? I want to grab the constructor and first arg as an as-pattern, so I can apply it to a different second arg. (Yes the work-round in this example is easy. Realistically I have a much more complex pattern, with several args of which I want to split out the last.):
(mkPx@(MkPoint x4) y4) = thisPoint -- also parse error
thatPoint = mkPx (y4 * 2)
I think there's no fundamental reason to prevent this kind of match.
Certainly it wouldn't do to allow you to write
f (MkPoint x1) = x1
and have that match a partially-applied constructor, i.e. a function. So, one reason to specify it the way it was specified is simplicity: the RHS of an @ has to be a pattern. Simple, easy to parse, easy to understand. (Remember, the very first origins of the language were to serve as a testbed for PL researchers to tinker. Simple and uniform is the word of the day for that purpose.) MkPoint x1 isn't a pattern, therefore mkPx@(MkPoint x1) isn't allowed.
I suspect that if you did the work needed to carefully specify what is and isn't allowed, wrote up a proposal, and volunteered to hack on the parser and desugarer as needed, the GHC folks would be amenable to adding a language extension. Seems like a lot of work for not much benefit, though.
Perhaps record update syntax will scratch your itch with much less effort.
data Point = MkPoint {x, y :: Float}
m@(MkPoint { x = x5 }) = m { x = x5 + 1 }
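A runnable sketch of that record-update workaround, with the as-pattern wrapped in a function (the name bump is mine, not from the question):

```haskell
data Point = MkPoint { x, y :: Float } deriving (Eq, Show)

-- Bind the whole value with an as-pattern, pull out one field,
-- and rebuild via record update (y is carried over unchanged)
bump :: Point -> Point
bump m@(MkPoint { x = x5 }) = m { x = x5 + 1 }

main :: IO ()
main = print (bump (MkPoint 1.0 2.0))  -- MkPoint {x = 2.0, y = 2.0}
```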
You also indicate that, aside from the motivation, you wonder what part of the Report says that the pattern you want can't happen. The relevant grammar productions from the Report are here:
pat → lpat
lpat → apat
| gcon apat1 … apatk (arity gcon = k, k ≥ 1)
apat → var [ @ apat ] (as pattern)
| gcon (arity gcon = 0)
| ( pat ) (parenthesized pattern)
(I have elided some productions that don't really change any of the following discussion.)
Notice that as-patterns must have an apat on their right-hand side. apats are 0-arity constructors (in which case partial application isn't possible) or parenthesized pats. The lpat production shown above indicates that for a constructor of arity k, there must be exactly k apat fields. Since MkPoint has arity 2, MkPoint x is therefore not an lpat, and so (MkPoint x) is not an apat, and so m@(MkPoint x) is not an apat (and so not produced by pat → lpat → apat).
I can partially apply [a constructor] to one argument then apply that to the second.
thisPoint = ((MkPoint x) y) -- partially apply the constructor
There are other ways to achieve that without parens; I can even permute the arguments:
thisPoint = MkPoint x $ y
thisPoint = flip MkPoint y $ x
Do I expect I could pattern match on that? No, because flip, ($) are just arbitrary functions/operators.
I want to grab the constructor and first arg as an as-pattern, ...
What's special about the first arg? Or the all-but-last arg (since you indicate your real application is more complex)? Do you expect you could grab the constructor + third and fourth args as an as-pattern?
Haskell's pattern matching wants to keep it simple. If you want a binding to the constructor applied to an arbitrary subset of arguments, use a lambda expression mentioning your previously-bound var(s):
mkPy = \ x5 -> MkPoint x5 y1 -- y1 bound as per your q
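The easy work-round acknowledged in the question, made runnable: match every field, then re-apply the constructor to the bound arguments by hand (variable names follow the question; the case wrapper is a sketch):

```haskell
data Point = MkPoint Float Float deriving (Eq, Show)

thisPoint :: Point
thisPoint = MkPoint 1.0 2.0

-- Full pattern match, then rebuild the "partial application" explicitly
thatPoint :: Point
thatPoint = case thisPoint of
  MkPoint x4 y4 -> MkPoint x4 (y4 * 2)

main :: IO ()
main = print thatPoint  -- MkPoint 1.0 4.0
```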

What can I do with a super combinator or combinator?

I am reading on the page https://wiki.haskell.org/Super_combinator and would like to know, what is the purpose of the super combinator?
And I would like to understand the following context:
Any lambda expression is of the form \x1 x2 .. xn -> E, where E is not
a lambda abstraction and n≥0. (Note that if the expression is not a
lambda abstraction, n=0.) This is a supercombinator if and only if:
the only free variables in E are x1..xn, and
every lambda abstraction in E is a supercombinator.
The first point confuses me, because in
\x y -> x + y
x and y are bound variables and therefore they are not free.

Scope of variables in lambda calculus / haskell

Given an expression like s := (λx, y, z.x y z) λx, y.x λy, z.z, is the scope of the bound variables
s := (λx, y, z.x y z) (λx, y.x) (λy, z.z)
or
s := (λx, y, z.x y z) (λx, y.x (λy, z.z))
I am guessing it is the 2nd option.
This question is basically unanswerable. It would be easy to define a concrete syntax with either proposed abstract syntax tree, and I don't really think there's such a strong convention one way or the other that you would want to assume one without seeing explicit text in the syntax definition for the document you were reading.
Since this is explicitly tagged Haskell: in this specific case it is simply a parse error, and you are required to add parentheses to disambiguate. Without the first \x y z -> x y z function, though, the default would be to parse as \x y -> (x (\y z -> z)), and you would need to add parentheses to get the other option of (\x y -> x) (\y z -> z). (The parentheses around \y z -> z are not optional.)
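Written out with explicit parentheses in Haskell syntax (the names s1 and s2 are mine), the two bracketings behave quite differently:

```haskell
-- Option 1: the second lambda is a separate argument,
-- corresponding to (\x y -> x) (\y z -> z)
s1 :: a -> b -> c -> c
s1 = (\x y -> x) (\y z -> z)

-- Option 2 (the default reading): the second lambda sits inside
-- the first lambda's body, i.e. \x y -> x (\y z -> z)
s2 :: ((a -> b -> b) -> r) -> c -> r
s2 = \x y -> x (\y z -> z)

main :: IO ()
main = do
  print (s1 'a' 'b' 'c')              -- 'c'
  print (s2 (\k -> k (1 :: Int) 2) 'y')  -- 2
```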

Why is this symmetry assertion wrong?

I am really confused over why there is always a counter example to my following assertion.
// assertions must NEVER be wrong
assert Symmetric {
    all r: univ -> univ | some ~r iff (some x, y: univ | x not in y and y not in x and
        (x->y in r) and (y->x in r))
}
check Symmetric
The counterexample always shows 1 element in the univ set. However, this should not be the case, since I specified that there will be some ~r iff x not in y and y not in x. The only element should not satisfy this statement.
Yet why does the model keep showing a counterexample to my assertion?
---INSTANCE---
integers={}
univ={Univ$0}
Int={}
seq/Int={}
String={}
none={}
this/Univ={Univ$0}
skolem $Symmetric_r={Univ$0->Univ$0}
Would really appreciate some guidance!
In Alloy, assertions are used to check the correctness of logic sentences (properties of your model), not to specify properties that should always hold in your model. So you didn't specify that
there will be some ~r iff x not in y and y not in x
you instead asked Alloy whether it is true that for all binary relations r, some ~r iff x not in y and y not in x [...], and Alloy answered that it is not true, and gave you a concrete example (counterexample) in which that property doesn't hold.
A couple of other points:
some ~r doesn't mean "r is symmetric"; it simply means that the transpose of r is non-empty, which is not the same. A binary relation is symmetric if it is equal to its transpose, so you can write r = ~r to express that;
instead of some x, y: univ | x not in y and y not in x and [...] you can equivalently write some disj x, y: univ | [...];
however, that some expression doesn't really express the symmetry property, because all it says is that "there are some x, y such that both x->y and y->x are in r"; instead, you want to say something like "for all x, y, if x->y is in r, then y->x is in r too".
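Since the gap between `some ~r` and `r = ~r` is the crux here, the same distinction can be sketched in Haskell over finite relations represented as pair lists (all names below are mine, not Alloy):

```haskell
-- A finite binary relation as a list of pairs
type Rel a = [(a, a)]

-- The analogue of ~r in Alloy: the transpose
converse :: Rel a -> Rel a
converse r = [ (b, a) | (a, b) <- r ]

-- Analogue of "some ~r": the transpose is non-empty,
-- which is true for ANY non-empty relation
someConverse :: Rel a -> Bool
someConverse = not . null . converse

-- Analogue of "r = ~r": genuine symmetry, i.e. every
-- a->b in r has a matching b->a in r
symmetric :: Eq a => Rel a -> Bool
symmetric r = all (`elem` r) (converse r)

main :: IO ()
main = do
  print (someConverse [(1, 2)])          -- True, but...
  print (symmetric    [(1, 2)])          -- ...False: (2,1) is missing
  print (symmetric    [(1, 2), (2, 1)])  -- True
  print (symmetric    [(0, 0)])          -- True: the Univ$0->Univ$0 case
```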

Haskell: foldl' accumulator parameter

I've been asking a few questions about strictness, but I think I've missed the mark before. Hopefully this is more precise.
Let's say we have:
n = 1000000
f z = foldl' (\(x1, x2) y -> (x1 + y, y - x2)) z [1..n]
Without changing f, what should I set
z = ...
So that f z does not overflow the stack? (i.e. runs in constant space regardless of the size of n)
It's okay if the answer requires GHC extensions.
My first thought is to define:
g (a1, a2) = (!a1, !a2)
and then
z = g (0, 0)
But I don't think g is valid Haskell.
So your strict foldl' is only going to evaluate the result of your lambda at each step of the fold to Weak Head Normal Form, i.e. it is only strict in the outermost constructor. Thus the tuple will be evaluated, however those additions inside the tuple may build up as thunks. This in-depth answer actually seems to address your exact situation here.
W/R/T your g: you are thinking of the BangPatterns extension, which would look like
g (!a1, !a2) = (a1, a2)
and which evaluates a1 and a2 to WHNF before returning them in the tuple.
What you want to be concerned about is not your initial accumulator, but rather your lambda expression. This would be a nice solution:
f z = foldl' (\(!x1, !x2) y -> (x1 + y, y - x2)) z [1..n]
EDIT: After noticing your other questions I see I didn't read this one very carefully. Your goal is to have "strict data" so to speak. Your other option, then, is to make a new tuple type that has strictness tags on its fields:
data Tuple a b = Tuple !a !b
Then when you pattern match on Tuple a b, a and b will be evaluated.
You'll need to change your function regardless.
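A runnable sketch of the strict-field approach (I've used Integer fields so the example is about space, not numeric overflow):

```haskell
import Data.List (foldl')

-- Strict fields: both components are forced whenever a Tuple is built,
-- so no chain of (+)/(-) thunks can accumulate across fold steps
data Tuple = Tuple !Integer !Integer deriving (Eq, Show)

n :: Integer
n = 1000000

f' :: Tuple -> Tuple
f' z = foldl' (\(Tuple x1 x2) y -> Tuple (x1 + y) (y - x2)) z [1 .. n]

main :: IO ()
main = print (f' (Tuple 0 0))  -- Tuple 500000500000 500000
```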
There is nothing you can do without changing f. If f were overloaded in the type of the pair you could use strict pairs, but as it stands you're locked in to what f does. There's some small hope that the compiler (strictness analysis and transformations) can avoid the stack growth, but nothing you can count on.
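For completeness, here is the bang-pattern version from the earlier answer as a runnable sketch; it does modify f, in line with this answer's point that f itself must change:

```haskell
{-# LANGUAGE BangPatterns #-}
import Data.List (foldl')

n :: Integer
n = 1000000

-- The bangs force both components at every step of the fold,
-- so thunks never build up inside the accumulator pair
f :: (Integer, Integer) -> (Integer, Integer)
f z = foldl' (\(!x1, !x2) y -> (x1 + y, y - x2)) z [1 .. n]

main :: IO ()
main = print (f (0, 0))  -- (500000500000,500000)
```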
