I have defined the following functions for beta reduction, but I'm not sure how to handle the case where free variables get captured (bound) during substitution.
data Term = Variable Char | Lambda Char Term | Pair Term Term deriving (Show,Eq)
-- substitution: subst body m x replaces free occurrences of x in body with m
subst :: Term -> Term -> Char -> Term
subst (Variable s) m x = if s == x then m else Variable s
subst (Pair a b)   m x = Pair (subst a m x) (subst b m x)
subst (Lambda y p) m x
  | y == x    = Lambda y p              -- x is re-bound here, so stop
  | otherwise = Lambda y (subst p m x)  -- free variables of m may get captured here

-- beta reduction
reduce :: Term -> Term
reduce (Variable s)          = Variable s
reduce (Pair (Lambda x b) m) = subst b m x
reduce (Pair l1 l2)          = Pair (reduce l1) (reduce l2)
reduce (Lambda x b)          = Lambda x b   -- no reduction under a lambda
The link hammar gave in the comment describes the solution in detail.
I'd just like to offer a different solution. Nicolaas Govert de Bruijn, a Dutch mathematician, invented an alternative notation for lambda terms. The idea is that instead of using symbols for variables, we use numbers. We replace each variable by the number of lambdas we need to cross, counting outward, until we find the abstraction that binds it. Abstractions then don't need to carry any variable name at all. For example:
λx. λy. x
is converted to
λ λ 2
or
λx. λy. λz. (x z) (y z)
is converted to
λ λ λ 3 1 (2 1)
This notation has several considerable advantages. Notably, since there are no variable names, there is no renaming of variables and no α-conversion. Although we have to renumber the indices accordingly when substituting, we don't have to check for name conflicts or do any renaming. The Wikipedia article on De Bruijn indices gives an example of how a β-reduction works with this notation.
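For concreteness, here is a minimal sketch of that representation (the DB type and function names are my own, not part of the question's Term type), using the 1-based indices from the examples above: shift renumbers free indices when a term is moved under a binder, substDB replaces a given index, and betaStep performs a single β-reduction at the top of the term.
data DB = Var Int | Lam DB | App DB DB deriving (Show, Eq)

-- Add d to every variable whose index is above the cutoff c (i.e. free here).
shift :: Int -> Int -> DB -> DB
shift d c (Var k)   = Var (if k > c then k + d else k)
shift d c (Lam b)   = Lam (shift d (c + 1) b)
shift d c (App f a) = App (shift d c f) (shift d c a)

-- Replace index n by s, adjusting both as we go under binders.
substDB :: Int -> DB -> DB -> DB
substDB n s (Var k)   = if k == n then s else Var k
substDB n s (Lam b)   = Lam (substDB (n + 1) (shift 1 0 s) b)
substDB n s (App f a) = App (substDB n s f) (substDB n s a)

-- One β-step: (λ b) a reduces to b with index 1 replaced by a (suitably shifted).
betaStep :: DB -> DB
betaStep (App (Lam b) a) = shift (-1) 0 (substDB 1 (shift 1 0 a) b)
betaStep t               = t

-- For example, betaStep (App (Lam (Lam (Var 2))) (Lam (Var 1)))
-- evaluates to Lam (Lam (Var 1)), i.e. (λ λ 2) (λ 1) reduces to λ λ 1.
No capture check is needed anywhere, which is the point of the notation.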
Given a two-place data constructor, I can partially apply it to one argument then apply that to the second. Why can't I use the same syntax for pattern matching?
data Point = MkPoint Float Float
x = 1.0 :: Float; y = 2.0 :: Float
thisPoint = ((MkPoint x) y) -- partially apply the constructor
(MkPoint x1 y1) = thisPoint -- pattern match OK
((MkPoint x2) y2) = thisPoint -- 'partially apply' the pattern, but rejected: parse error at y2
((MkPoint x3 y3)) = thisPoint -- this accepted, with double-parens
Why do I want to do that? I want to grab the constructor and first arg as an as-pattern, so I can apply it to a different second arg. (Yes, the work-around in this example is easy. Realistically I have a much more complex pattern, with several args, of which I want to split out the last.):
(mkPx@(MkPoint x4) y4) = thisPoint -- also parse error
thatPoint = mkPx (y4 * 2)
I think there's no fundamental reason to prevent this kind of match.
Certainly it wouldn't do to allow you to write
f (MkPoint x1) = x1
and have that match a partially-applied constructor, i.e. a function. So, one reason to specify it as it was specified here is for simplicity: the RHS of an @ has to be a pattern. Simple, easy to parse, easy to understand. (Remember, the very first origins of the language were to serve as a testbed for PL researchers to tinker. Simple and uniform is the word of the day for that purpose.) MkPoint x1 isn't a pattern, therefore mkPx@(MkPoint x1) isn't allowed.
I suspect that if you did the work needed to carefully specify what is and isn't allowed, wrote up a proposal, and volunteered to hack on the parser and desugarer as needed, the GHC folks would be amenable to adding a language extension. Seems like a lot of work for not much benefit, though.
Perhaps record update syntax will scratch your itch with much less effort.
data Point = MkPoint {x, y :: Float}
m@(MkPoint { x = x5 }) = m { x = x5 + 1 }
You also indicate that, aside from the motivation, you wonder what part of the Report says that the pattern you want can't happen. The relevant grammar productions from the Report are here:
pat → lpat
lpat → apat
| gcon apat1 … apatk (arity gcon = k, k ≥ 1)
apat → var [ @ apat ] (as pattern)
| gcon (arity gcon = 0)
| ( pat ) (parenthesized pattern)
(I have elided some productions that don't really change any of the following discussion.)
Notice that as-patterns must have an apat on their right-hand side. apats are 0-arity constructors (in which case it's not possible to partially apply them) or parenthesized lpats. The lpat production shown above indicates that for constructors of arity k, there must be exactly k apat fields. Since MkPoint has arity 2, MkPoint x is therefore not an lpat, and so (MkPoint x) is not an apat, and so m@(MkPoint x) is not an apat (and so not produced by pat → lpat → apat).
I can partially apply [a constructor] to one argument then apply that to the second.
thisPoint = ((MkPoint x) y) -- partially apply the constructor
There's another way to achieve that without parens; I can also permute the arguments:
thisPoint = MkPoint x $ y
thisPoint = flip MkPoint y $ x
Do I expect I could pattern match on that? No, because flip, ($) are just arbitrary functions/operators.
I want to grab the constructor and first arg as an as-pattern, ...
What's special about the first arg? Or the all-but-last arg (since you indicate your real application is more complex)? Do you expect you could grab the constructor + third and fourth args as an as-pattern?
Haskell's pattern matching wants to keep it simple. If you want a binding to the constructor applied to an arbitrary subset of arguments, use a lambda expression mentioning your previously-bound var(s):
mkPy = \ x5 -> MkPoint x5 y1 -- y1 bound as per your question
I want to define a function that considers its equally-typed arguments without considering their order. For example:
weirdCommutative :: Int -> Int -> Int
weirdCommutative 0 1 = 999
weirdCommutative x y = x + y
I would like this function to actually be commutative.
One option is adding the pattern:
weirdCommutative 1 0 = 999
or even better:
weirdCommutative 1 0 = weirdCommutative 0 1
Now let's look at the general case: there could be more than two arguments and/or values that need to be considered without order, so covering all possible cases becomes tricky.
Does anyone know of a clean, natural way to define commutative functions in Haskell?
I want to emphasize that the solution I am looking for is general and cannot assume anything about the type (nor its deconstruction type-set) except that values can be compared using == (meaning that the type is in the Eq typeclass but not necessarily in the Ord typeclass).
There is actually a package that provides a monad and some scaffolding for defining and using commutative functions. Also see this blog post.
In a case like Int, you can simply order the arguments and feed them to a (local) partial function that only accepts the arguments in that canonically ordered form:
weirdCommutative x y
| x > y = f y x
| otherwise = f x y
where f 0 1 = 999
f x' y' = x' + y'
Now, obviously most types aren't in the Ord class – but if you're deconstructing the arguments by pattern-matching, chances are you can define at least some partial ordering. It doesn't really need to be >.
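If all you have is Eq, one workable sketch (the helper names commutatively and special are my own, not from any library) is to keep the special cases in a Maybe-returning function and try them with the arguments in both orders, falling back to a default that is itself commutative:
-- Try the special cases in both argument orders; fall back otherwise.
commutatively :: (a -> a -> Maybe b)  -- special cases, tried both ways
              -> (a -> a -> b)        -- fallback, assumed commutative
              -> a -> a -> b
commutatively special fallback x y =
  case special x y of
    Just r  -> r
    Nothing -> case special y x of
      Just r  -> r
      Nothing -> fallback x y

weirdCommutative :: Int -> Int -> Int
weirdCommutative = commutatively special (+)
  where special 0 1 = Just 999
        special _ _ = Nothing
This only guarantees commutativity if the fallback is commutative and the special cases are mutually consistent, but it avoids needing Ord.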
Here's the code.
largestDivisible :: (Integral a) => a
largestDivisible = head (filter p [100000,99999..])
where p x = x `mod` 3829 == 0
I am a little bit confused. What is p in this case? Also, I do not understand the where clause in this particular example, because we have p and x on the left-hand side and a single expression on the right, which is actually a Boolean.
I would appreciate it if someone could explain the above code to me.
p is a function that accepts an argument x and returns True only if x is divisible by 3829. You can use where to define local functions just like you define local "values", using the same f x = y syntax you use to define top-level functions.
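To make that concrete, here is an equivalent version (the names divisibleBy3829 and largestDivisible' are my own) with the local helper lifted out to the top level, just to show that the where clause defines an ordinary function:
-- Same predicate, written as a top-level function instead of in a where clause.
divisibleBy3829 :: (Integral a) => a -> Bool
divisibleBy3829 x = x `mod` 3829 == 0

largestDivisible' :: (Integral a) => a
largestDivisible' = head (filter divisibleBy3829 [100000,99999..])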
I am trying to understand currying by reading various blogs and Stack Overflow answers, and I think I understand it somewhat. In Haskell, every function is curried; that means, when you have a function like f x y = x + y
it really is ((f x) y)
Here, f initially takes the first parameter 'x'; applying f to 'x' partially applies it and returns a function expecting 'y', which then takes just the single parameter 'y' and produces the result. In both steps the function takes only one parameter, and the process of reducing a function to take a single parameter at a time is called 'currying'. Correct me if my understanding is wrong here.
So if it is correct, could you please tell me if the functions 'two' and 'same' are curried functions?
three x y z = x + y + z
two = three 1
same = two 1
In this case, I have two specialized functions, 'two' and 'same', which are reduced to take fewer parameters, so are they curried?
Let's look at two first.
It has a signature of
two :: Num a => a -> a -> a
forget the Num a for now (it's only a constraint on a - you can read Int here).
Surely this too is a curried function.
The next one is more interesting:
same :: Num a => a -> a
(btw: nice name - it's the same but not exactly id ^^)
TBH: I don't know for sure.
The best definition I know of a curried function is this:
A curried function is a function of N arguments returning another function of (N-1) arguments.
(if you want, you can extend this to fully curried functions of course)
This will only fit if you define constants as functions with 0 parameters - which you surely can.
So I would say yes(?) this too is a curried function but only in a mathy borderline way (like the sum of 0 numbers is defined to be 0)
Best just think about this equationally. The following are all equivalent definitions:
f x y z = x+y+z
f x y = \z -> x+y+z
f x = \y -> (\z -> x+y+z)
f = \x -> (\y -> (\z -> x+y+z))
Partial application is only tangentially relevant here. Most often you don't want the actual partial application to be performed and the actual lambda object to be created in memory - hoping instead that the compiler will employ - and optimize better - the full definition at the final point of full application.
The presence of the functions curry/uncurry is yet another confusing issue. Both f (x,y) = ... and f x y = ... are curried in Haskell, of course, but in our heads we tend to think about the first as a function of two arguments, so the functions translating between the two forms are named curry and uncurry, as a mnemonic.
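For reference, here is how the Prelude's curry and uncurry relate the two forms (the addPair/addCurried/addUncurried names are mine):
-- curry   :: ((a, b) -> c) -> a -> b -> c
-- uncurry :: (a -> b -> c) -> (a, b) -> c
addPair :: (Int, Int) -> Int
addPair (x, y) = x + y

addCurried :: Int -> Int -> Int
addCurried = curry addPair       -- addCurried 1 2 == 3

addUncurried :: (Int, Int) -> Int
addUncurried = uncurry (+)       -- addUncurried (1, 2) == 3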
You could think of the three function, written with anonymous functions, as:
three = \x -> (\y -> (\z -> x + y + z))
I've been asking a few questions about strictness, but I think I've missed the mark before. Hopefully this is more precise.
Let's say we have:
n = 1000000
f z = foldl' (\(x1, x2) y -> (x1 + y, y - x2)) z [1..n]
Without changing f, what should I set
z = ...
So that f z does not overflow the stack? (i.e. runs in constant space regardless of the size of n)
It's okay if the answer requires GHC extensions.
My first thought is to define:
g (a1, a2) = (!a1, !a2)
and then
z = g (0, 0)
But I don't think g is valid Haskell.
So your strict foldl' is only going to evaluate the result of your lambda at each step of the fold to Weak Head Normal Form, i.e. it is only strict in the outermost constructor. Thus the tuple will be evaluated, but the additions inside the tuple may still build up as thunks. This in-depth answer actually seems to address your exact situation here.
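To make that concrete, here is a small self-contained illustration of the problem (my own example, reusing the question's fold with n = 1000000):
import Data.List (foldl')

-- Each step forces the pair to WHNF, but not its components, so the
-- components accumulate as chains of pending (+)/(-) thunks.  Demanding
-- `fst leaky` then forces a ~1000000-deep chain in one go, which is the
-- kind of deep evaluation that can blow the stack despite foldl'.
leaky :: (Integer, Integer)
leaky = foldl' (\(x1, x2) y -> (x1 + y, y - x2)) (0, 0) [1 .. 1000000]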
W/R/T your g: you are thinking of the BangPatterns extension, which would look like
g (!a1, !a2) = (a1, a2)
and which evaluates a1 and a2 to WHNF before returning them in the tuple.
What you want to be concerned about is not your initial accumulator, but rather your lambda expression. This would be a nice solution:
f z = foldl' (\(!x1, !x2) y -> (x1 + y, y - x2)) z [1..n]
EDIT: After noticing your other questions I see I didn't read this one very carefully. Your goal is to have "strict data" so to speak. Your other option, then, is to make a new tuple type that has strictness tags on its fields:
data Tuple a b = Tuple !a !b
Then whenever a Tuple value is forced to WHNF (for example by pattern matching on Tuple a b), its fields a and b are evaluated as well.
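Using that type, a sketch of the fold might look like this (the name f' is mine; this does mean changing f, as noted below):
import Data.List (foldl')

f' :: Tuple Integer Integer -> Tuple Integer Integer
f' z = foldl' (\(Tuple x1 x2) y -> Tuple (x1 + y) (y - x2)) z [1 .. 1000000]
-- f' (Tuple 0 0) runs in constant space: forcing each intermediate Tuple to
-- WHNF (which foldl' does) also forces both strict fields.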
You'll need to change your function regardless.
There is nothing you can do without changing f. If f were overloaded in the type of the pair you could use strict pairs, but as it stands you're locked in to what f does. There's some small hope that the compiler (strictness analysis and transformations) can avoid the stack growth, but nothing you can count on.