Is True = False? Understanding binding and pattern matching [duplicate] - haskell

This question already has answers here:
What does let 5 = 10 do? Is it not an assignment operation?
A friend of ours is teaching us the basics of functional programming in Haskell, and he started by writing one of the strangest things I have ever seen:
He started with something not so amazing, but pretty cool:
(x,y) = (10,20)
(z:zs) = 0 : [1..]
and showed in GHCi:
Prelude> x
10
Prelude> z
0
Prelude> take 3 zs
[1,2,3]
So far, so good... I didn't know you could bind values like that. But then he extended the file:
(x,y) = (10,20)
(z:zs) = 0 : [1..]
True = False -- HERE
What!? Everyone in the class thought, OK, something must go wrong... but not only did the code compile, it also ran:
Prelude> x
10
Prelude> 4
4
Prelude> True
True
(I read the question What does `let 5 = 10` do? Is it not an assignment operation?, but I'm not using any let here; in my example I write the code in a file and then execute it, so my question is not answered yet: none of those answers are useful for me.)

Both where and let introduce defining equations using lazy patterns.
In any module, all the top-level definitions are under a where.
module Main where
-- ^^^^^
x, y :: Int
(x, y) = undefined
main :: IO ()
main = putStrLn "hello!"
The above program will print "hello!", as intended. The pattern match against (x, y) would diverge if it were strict, but since it is lazy it does not: the undefined expression never gets evaluated.
Definitions typed in GHCi are also under an implicit let.
Knowing this, the issue mentioned in the question is exactly the one in the let 5 = 10 question.

In Haskell, a let "binding" is doing pattern matching. There are precisely two forms of pattern-matching binding in Haskell: you can write P = x, where P is a pattern, or you can write v1 p1 p2 ... pn = x, where each pi is a pattern; the second form defines (part of) a function v1. So what is a pattern?
A pattern is one of three things. It can be a variable v, which causes v to be bound to whatever is being matched once the match succeeds. It can be a constant like 7, in which case the match succeeds only if the thing being matched equals the constant. Or it can be a variant of a data type: if Foo is a variant (constructor) of the type Bar taking n parameters, then Foo p1 p2 ... pn, where each pi is a pattern, is a pattern that examines a value of type Bar and matches successfully if the value was built with Foo and each pi matches successfully.
Because Haskell is lazy, the only way to force a pattern match to happen is to use a variable that the pattern binds. So one may force the binding of x in let (x,5) = (6,6) by using x, and this causes a matching failure. If a pattern binds no variables, there is no way for the match to be forced, so there is no way for the match to fail. Thus let True = False would fail if you ever got the match to happen, but because there is no way to make the match happen, there is no error. Note that you are not rebinding True; you are using it as a pattern with no arguments.
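To see this concretely, here is a minimal sketch of my own, in the spirit of the question's file-based example: the (x, 5) match fails only once x is demanded, while True = False can never be forced at all.
-- Sketch: save as Demo.hs and run with runghc (the file name is mine).
(x, 5) = (6, 6)   -- accepted: a lazy top-level pattern binding
True = False      -- accepted: a pattern binding with no variables to demand

main :: IO ()
main = print x    -- forcing x triggers the (x, 5) match, which fails at
                  -- runtime ("Irrefutable pattern failed"); the True = False
                  -- line never causes any error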

Related

How to define multiple patterns in Frege?

I'm having some trouble defining a function in Frege that uses multiple patterns. Basically, I'm defining a mapping by iterating through a list of tuples. I've simplified it down to the following:
foo :: a -> [(a, b)] -> b
foo _ [] = []  -- nothing found
foo bar (baz, zab):foobar
    | bar == baz = zab
    | otherwise  = foo bar foobar
I get the following error:
E morse.fr:3: redefinition of `foo` introduced line 2
I've seen other examples like this that do use multiple patterns in a function definition, so I don't know what I'm doing wrong. Why am I getting an error here? I'm new to Frege (and new to Haskell), so there may be something simple I'm missing, but I really don't think this should be a problem.
I'm compiling with version 3.24-7.100.
This is a purely syntactical problem that affects newcomers to languages of the Haskell family. It won't take long until you internalize the rule that function application has higher precedence than infix operators.
This has consequences:
Complex arguments of function application need parentheses.
In infix expressions, function applications on either side of the operator do not need parentheses (however, individual components of function application may still need them).
In Frege, in addition, the following rule holds:
The syntax of function application and infix expressions on the left hand side of a definition is identical to the one on the right hand side, as far as lexemes allowed on both sides are concerned. (This holds in Haskell only when @ and ~ are not used.)
This is so you can define an addition function like this:
data Number = Z | Succ Number
a + Z = a
a + Succ b = Succ a + b
Hence, when you apply this to your example, you see that syntactically, you're going to redefine the : operator. To achieve what you want, you need to write it thus:
foo bar ((baz, zab):foobar) = ....
-- ^ ^
This corresponds to the situation where you apply foo to a list you are constructing:
foo 42 (x:xs)
When you write
foo 42 x:xs
this means
(foo 42 x):xs
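For completeness, here is a runnable Haskell version with the pattern parenthesized. Note that the original's foo _ [] = [] clause can't have the declared result type b, so this sketch (my adaptation, essentially Prelude's lookup) returns Maybe b instead:
foo :: Eq a => a -> [(a, b)] -> Maybe b
foo _ [] = Nothing                  -- nothing found
foo bar ((baz, zab):foobar)         -- parentheses around the whole pattern
    | bar == baz = Just zab
    | otherwise  = foo bar foobar

main :: IO ()
main = print (foo 2 [(1, "one"), (2, "two")])  -- prints Just "two"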

What does immutable variable in Haskell mean?

I am quite confused by the concept of immutable variables in Haskell. It seems that we can't change the value of variables in Haskell. But when I tried the following code in GHCi, it seemed like the value of variables did change:
Prelude> foo x=x+1
Prelude> a=1
Prelude> a
1
Prelude> foo a
2
Prelude> a=2
Prelude> a
2
Prelude> foo a
3
Does this conflict with the idea of immutable variables?
Many thanks!
Haskell doesn't allow you to modify existing variables. It does, however, allow you to re-use variable names, and that's all that's happening here. One way to see this is to ask GHCi, using the :i[nfo] directive, where the variable was declared:
Prelude> let a = 1
Prelude> :i a
a :: Num a => a -- Defined at <interactive>:2:5
Prelude> let a = 2
Prelude> :i a
a :: Num a => a -- Defined at <interactive>:4:5
These are actually two entirely separate, different variables that just happen to have the same name! If you just ask for a, the newer definition is "preferred", but the old one is still there; one way to see this, as remarked by chi in the comments, is to use a in a function:
Prelude> let a = 2
Prelude> :i a
a :: Num a => a -- Defined at <interactive>:4:5
Prelude> let f x = a + x
Prelude> let a = 3
Prelude> f (-2)
0
f never needs to care that you've defined a new variable that's also called a; from its perspective a was one immutable variable that always stays as it is.
It's worth talking a bit about why GHCi prefers the later definition. This does not otherwise happen in Haskell code; in particular if you try to compile the following module, it simply gives an error concerning duplicate definition:
a = 1
a = 2
main :: IO ()
main = print a
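(For reference, with GHC the rejection looks roughly like this; the exact wording varies by version:)
Main.hs:2:1: error:
    Multiple declarations of ‘a’
    Declared at: Main.hs:1:1
                 Main.hs:2:1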
The reason something like this is allowed in GHCi is that GHCi works differently from Haskell modules. The sequence of GHCi commands in fact forms a sequence of actions in the IO monad†; i.e. the program would have to be
main :: IO ()
main = do
   let a = 1
   let a = 2
   print a
Now, if you've learned about monads you'll know that this is just syntactic sugar for
main =
   let a = 1 in (let a = 2 in (print a))
and this is really the crucial bit for why you can re-use the name a: the second one, a = 2, lives in a narrower scope than the first. So it is more local, and local definitions have priority. Whether this is a good idea is a bit debatable; a good argument for it is that you can have a function like
greet :: String -> IO ()
greet name = putStrLn $ "Hello, "++name++"!"
and it won't stop working just because somebody defines elsewhere
name :: Car -> String
name car | rollsOverAtRightTurn car = "Reliant Robin"
         | fuelConsumption car > 50*litrePer100km
                                    = "Hummer"
         | ...                      = ...
Besides, it's really quite useful that you can “redefine” variables while fooling around in GHCi, even though it's not such a good idea to redefine stuff in a proper program, which is supposed to show consistent behaviour.
†As dfeuer remarks, this is not the whole truth. You can do some things in GHCi that aren't allowed in an IO do-block; in particular you can define data types and classes. But any normal statement or variable definition acts as if it were in the IO monad.
(The other answer using GHCi is fine but to clarify it is not specific to GHCi or monads...)
As you can see from the fact that the following Haskell program
main =
  let x = 1 in
  let f y = x + y in
  let x = 2 in
  print (x * f 3)
prints 8 rather than 10, variables can only be "bound", not "mutated", in Haskell. In other words, the program above is equivalent (called α-equivalent, meaning a consistent change of the names of bound variables only; see https://wiki.haskell.org/Alpha_conversion for more details) to
main =
  let x1 = 1 in
  let f y = x1 + y in
  let x2 = 2 in
  print (x2 * f 3)
where it is clear that x1 and x2 are different variables.

Does a function in Haskell always evaluate its return value?

I'm trying to better understand Haskell's laziness, such as when it evaluates an argument to a function.
From this source:
But when a call to const is evaluated (that’s the situation we are interested in, here, after all), its return value is evaluated too ... This is a good general principle: a function obviously is strict in its return value, because when a function application needs to be evaluated, it needs to evaluate, in the body of the function, what gets returned. Starting from there, you can know what must be evaluated by looking at what the return value depends on invariably. Your function will be strict in these arguments, and lazy in the others.
So a function in Haskell always evaluates its own return value? If I have:
foo :: Num a => [a] -> [a]
foo [] = []
foo (_:xs) = map (* 2) xs
head (foo [1..]) -- = 4
According to the above paragraph, map (* 2) xs must be evaluated. Intuitively, I would think that means applying the map to the entire list, resulting in an infinite loop.
But, I can successfully take the head of the result. I know that : is lazy in Haskell, so does this mean that evaluating map (* 2) xs just means constructing something else that isn't fully evaluated yet?
What does it mean to evaluate a function applied to an infinite list? If the return value of a function is always evaluated when the function is evaluated, can a function ever actually return a thunk?
Edit:
bar x y = x
var = bar (product [1..]) 1
This code doesn't hang. When I create var, does it not evaluate its body? Or does it set bar to product [1..] and not evaluate that? If the latter, bar is not returning its body in WHNF, right, so did it really 'evaluate' x? How could bar be strict in x if it doesn't hang on computing product [1..]?
First of all, Haskell does not specify when evaluation happens so the question can only be given a definite answer for specific implementations.
The following is true for all non-parallel implementations that I know of, like ghc, hbc, nhc, hugs, etc (all G-machine based, btw).
BTW, something to remember is that when you hear "evaluate" for Haskell it normally means "evaluate to WHNF".
Unlike strict languages you have to distinguish between two "callers" of a function, the first is where the call occurs lexically, and the second is where the value is demanded. For a strict language these two always coincide, but not for a lazy language.
Let's take your example and complicate it a little:
foo [] = []
foo (_:xs) = map (* 2) xs
bar x = (foo [1..], x)
main = print (head (fst (bar 42)))
The foo function occurs in bar. Evaluating bar will return a pair, and the first component of the pair is a thunk corresponding to foo [1..]. So bar is what would be the caller in a strict language, but in the case of a lazy language it doesn't call foo at all, instead it just builds the closure.
Now, in the main function we actually need the value of head (fst (bar 42)), since we have to print it. So the head function will actually be called. The head function is defined by pattern matching, so it needs the value of its argument, and so fst is called. It too is defined by pattern matching and needs its argument, so bar is called; bar returns a pair, and fst will evaluate and return its first component.
And now finally foo is "called"; and by called I mean that the thunk is evaluated (entered, as it's sometimes called in TIM terminology), because the value is needed. The only reason the actual code for foo is called is that we want a value. So foo had better return a value (i.e., a WHNF). The foo function will evaluate its argument and end up in the second branch. Here it will tail call into the code for map.
The map function is defined by pattern matching and it will evaluate its argument, which is a cons. So map will return the following: {(*2) y} : {map (*2) ys}, where I have used {} to indicate a closure being built. So as you can see, map just returns a cons cell with the head being a closure and the tail being a closure.
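You can watch this closure building in GHCi with the :sprint command, which prints how much of a value has been forced, showing _ for unevaluated thunks (a small illustration of my own):
Prelude> let xs = map (*2) [1..10] :: [Int]
Prelude> :sprint xs
xs = _
Prelude> head xs
2
Prelude> :sprint xs
xs = 2 : _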
To understand the operational semantics of Haskell better I suggest you look at some paper describing how to translate Haskell to some abstract machine, like the G-machine.
I always found that the term "evaluate," which I had learned in other contexts (e.g., Scheme programming), always got me all confused when I tried to apply it to Haskell, and that I made a breakthrough when I started to think of Haskell in terms of forcing expressions instead of "evaluating" them. Some key differences:
"Evaluation," as I learned the term before, strongly connotes mapping expressions to values that are themselves not expressions. (One common technical term here is "denotations.")
In Haskell, the process of forcing is IMHO most easily understood as expression rewriting. You start with an expression, and you repeatedly rewrite it according to certain rules until you get an equivalent expression that satisfies a certain property.
In Haskell the "certain property" has the unfriendly name weak head normal form ("WHNF"), which really just means that the expression is either a nullary data constructor or an application of a data constructor.
Let's translate that to a very rough set of informal rules. To force an expression expr:
If expr is a nullary constructor or a constructor application, the result of forcing it is expr itself. (It's already in WHNF.)
If expr is a function application f arg, then the result of forcing it is obtained this way:
Find the definition of f.
Can you pattern match this definition against the expression arg? If not, then force arg and try again with the result of that.
Substitute the pattern match variables in the body of f with the parts of (the possibly rewritten) arg that correspond to them, and force the resulting expression.
One way of thinking of this is that when you force an expression, you're trying to rewrite it minimally to reduce it to an equivalent expression in WHNF.
Let's apply this to your example:
foo :: Num a => [a] -> [a]
foo [] = []
foo (_:xs) = map (* 2) xs
-- We want to force this expression:
head (foo [1..])
We will need definitions for head and map:
head [] = undefined
head (x:_) = x

map _ [] = []
map f (x:xs) = f x : map f xs

-- Not real code, but a rule we'll be using for forcing infinite ranges.
[n..] ==> n : [(n+1)..]
So now:
head (foo [1..]) ==> head (map (*2) [1..])        -- using the definition of foo
                 ==> head (map (*2) (1 : [2..]))  -- using the forcing rule for [n..]
                 ==> head (1*2 : map (*2) [2..])  -- using the definition of map
                 ==> 1*2                          -- using the definition of head
                 ==> 2                            -- using the definition of *
I believe the idea must be that in a lazy language if you're evaluating a function application, it must be because you need the result of the application for something. So whatever reason caused the function application to be reduced in the first place is going to continue to need to reduce the returned result. If we didn't need the function's result we wouldn't be evaluating the call in the first place, the whole application would be left as a thunk.
A key point is that the standard "lazy evaluation" order is demand-driven. You only evaluate what you need. Evaluating more risks violating the language spec's definition of "non-strict semantics" and looping or failing for some programs that should be able to terminate; lazy evaluation has the interesting property that if any evaluation order can cause a particular program to terminate, so can lazy evaluation.¹
But if we only evaluate what we need, what does "need" mean? Generally it means either
a pattern match needs to know what constructor a particular value is (e.g. I can't know what branch to take in your definition of foo without knowing whether the argument is [] or _:xs)
a primitive operation needs to know the entire value (e.g. the arithmetic circuits in the CPU can't add or compare thunks; I need to fully evaluate two Int values to call such operations)
the outer driver that executes the main IO action needs to know what the next thing to execute is
So say we've got this program:
foo :: Num a => [a] -> [a]
foo [] = []
foo (_:xs) = map (* 2) xs
main :: IO ()
main = print (head (foo [1..]))
To execute main, the IO driver has to evaluate the thunk print (head (foo [1..])) to work out that it's print applied to the thunk head (foo [1..]). print needs to evaluate its argument in order to print it, so now we need to evaluate that thunk.
head starts by pattern matching its argument, so now we need to evaluate foo [1..], but only to WHNF - just enough to tell whether the outermost list constructor is [] or :.
foo starts by pattern matching on its argument. So we need to evaluate [1..], also only to WHNF. That's basically 1 : [2..], which is enough to see which branch to take in foo.²
The : case of foo (with xs bound to the thunk [2..]) evaluates to the thunk map (*2) [2..].
So foo is evaluated, and didn't evaluate its body. However, we only did that because head was pattern matching to see if we had [] or x : _. We still don't know that, so we must immediately continue to evaluate the result of foo.
This is what the article means when it says functions are strict in their result. Given that a call to foo is evaluated at all, its result will also be evaluated (and so, anything needed to evaluate the result will also be evaluated).
But how far it needs to be evaluated depends on the calling context. head is only pattern matching on the result of foo, so it only needs a result to WHNF. We can get an infinite list to WHNF (we already did so, with 1 : [2..]), so we don't necessarily get in an infinite loop when evaluating a call to foo. But if head were some sort of primitive operation implemented outside of Haskell that needed to be passed a completely evaluated list, then we'd be evaluating foo [1..] completely, and thus would never finish in order to come back to head.
So, just to complete my example, we're evaluating map (* 2) [2..].
map pattern matches its second argument, so we need to evaluate [2..] as far as 2 : [3..]. That's enough for map to return the thunk (* 2) 2 : map (* 2) [3..], which is in WHNF. And so it's done, we can finally return to head.
head ((* 2) 2 : map (* 2) [3..]) doesn't need to inspect either side of the :, it just needs to know that there is one so it can return the left side. So it just returns the unevaluated thunk (* 2) 2.
Again though, we only evaluated the call to head this far because print needed to know what its result is, so although head doesn't evaluate its result, its result is always evaluated whenever the call to head is.
(* 2) 2 evaluates to 4, print converts that into the string "4" (via show), and the line gets printed to the output. That was the entire main IO action, so the program is done.
¹ Implementations of Haskell, such as GHC, do not always use "standard lazy evaluation", and the language spec does not require it. If the compiler can prove that something will always be needed, or cannot loop/error, then it's safe to evaluate it even when lazy evaluation wouldn't (yet) do so. This can often be faster, so GHC's optimizations do actually do this.
² I'm skipping over a few details here, like that print does have some non-primitive implementation we could step inside and lazily evaluate, and that [1..] could be further expanded to the functions that actually implement that syntax.
Not necessarily. Haskell is lazy, meaning that it only evaluates when it needs to. This has some interesting effects. If we take the below code, for example:
-- File: lazinessTest.hs
(>?) :: a -> b -> b
a >? b = b
main = (putStrLn "Something") >? (putStrLn "Something else")
This is the output of the program:
$ ./lazinessTest
Something else
This indicates that putStrLn "Something" is never evaluated. But it's still being passed to the function, in the form of a 'thunk'. These 'thunks' are unevaluated values that, rather than being concrete values, are like a breadcrumb-trail of how to compute the value. This is how Haskell laziness works.
In our case, two 'thunks' are passed to >?, but only one is passed out, meaning that only one is evaluated in the end. This also applies to const, where the second argument can be safely ignored and therefore is never computed. As for map, laziness means the end of the list is never demanded, so only what is needed gets computed; in your case, the second element of the original list.
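You can check both claims directly in GHCi (my illustration): neither const's second argument nor the unused tail of a list is ever forced, so even undefined there is harmless:
Prelude> const 1 undefined
1
Prelude> head (map (*2) (1 : undefined))
2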
However, it's best to leave the thinking about laziness to the compiler and keep coding, unless you're dealing with IO, in which case you really, really should think about laziness, because you can easily go wrong, as I've just demonstrated.
There are lots and lots of online articles on the Haskell wiki to look at, if you want more detail.
A function may evaluate to an ordinary return value:
head (x:_) = x
or to an exception/error:
head _ = error "Head: List is empty!"
or to bottom (⊥), i.e. a non-terminating computation:
a = a
b = last [1 ..]

In Haskell, when do we use in with let?

In the following code, I can put an in in front of the last line. Will it change anything?
Another question: If I decide to put in in front of the last phrase, do I need to indent it?
I tried without indenting and hugs complains
Last generator in do {...} must be an expression
import Data.Char

groupsOf _ [] = []
groupsOf n xs =
    take n xs : groupsOf n (tail xs)

problem_8 x = maximum . map product . groupsOf 5 $ x

main = do t <- readFile "p8.log"
          let digits = map digitToInt $ concat $ lines t
          print $ problem_8 digits
Edit
Ok, so people don't seem to understand what I'm saying. Let me rephrase:
are the following two the same, given the context above?
1.
let digits = map digitToInt $ concat $ lines t
print $ problem_8 digits
2.
let digits = map digitToInt $ concat $ lines t
in print $ problem_8 digits
Another question concerning the scope of bindings declared in let: I read here that:
where Clauses.
Sometimes it is convenient to scope bindings over several guarded equations, which requires a where clause:
f x y | y > z  = ...
      | y == z = ...
      | y < z  = ...
  where z = x*x
Note that this cannot be done with a let expression, which only scopes over the expression which it encloses.
My question: by that reasoning, the variable digits shouldn't be visible to the final print line. Am I missing something here?
Short answer: Use let without in in the body of a do-block, and in the part after the | in a list comprehension. Anywhere else, use let ... in ....
The keyword let is used in three ways in Haskell.
The first form is a let-expression.
let variable = expression in expression
This can be used wherever an expression is allowed, e.g.
> (let x = 2 in x*2) + 3
7
The second is a let-statement. This form is only used inside of do-notation, and does not use in.
do statements
   let variable = expression
   statements
The third is similar to number 2 and is used inside of list comprehensions. Again, no in.
> [(x, y) | x <- [1..3], let y = 2*x]
[(1,2),(2,4),(3,6)]
This form binds a variable which is in scope in subsequent generators and in the expression before the |.
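As an illustration of that scoping (my own example), the bound y is available in a later guard:
> [(x, y) | x <- [1..3], let y = 2*x, y > 2]
[(2,4),(3,6)]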
The reason for your confusion here is that expressions (of the correct type) can be used as statements within a do-block, and let .. in .. is just an expression.
Because of the indentation rules of Haskell, a line indented further than the previous one means it's a continuation of the previous line, so this
do let x = 42 in
     foo
gets parsed as
do (let x = 42 in foo)
Without the indentation, you get a parse error:
do let x = 42 in
foo
In conclusion, never use in in a list comprehension or a do-block. It is unnecessary and confusing, as those constructs already have their own form of let.
First off, why hugs? The Haskell Platform is generally the recommended way to go for newbies, which comes with GHC.
Now then, on to the let keyword. The simplest form of this keyword is meant to always be used with in.
let {assignments} in {expression}
For example,
let two = 2; three = 3 in two * three
The {assignments} are only in scope in the corresponding {expression}. Regular layout rules apply, meaning that in must be indented at least as much as the let that it corresponds to, and any sub-expressions pertaining to the let expression must likewise be indented at least as much. This isn't actually 100% true, but is a good rule of thumb; Haskell layout rules are something you will just get used to over time as you read and write Haskell code. Just keep in mind that the amount of indentation is the main way to indicate which code pertains to what expression.
Haskell provides two convenience cases where you don't have to write in: do notation and list comprehensions (actually, monad comprehensions). The scope of the assignments for these convenience cases is predefined.
do foo
   let {assignments}
   bar
   baz
For do notation, the {assignments} are in scope for any statements that follow, in this case, bar and baz, but not foo. It is as if we had written
do foo
   let {assignments}
    in do bar
          baz
List comprehensions (or really, any monad comprehension) desugar into do notation, so they provide a similar facility.
[ baz | foo, let {assignments}, bar ]
The {assignments} are in scope for the expressions bar and baz, but not for foo.
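A concrete instance of that schematic comprehension (my own example), where the let-bound y is used both in a later guard and in the result expression:
> [ x + y | x <- [1, 2], let y = 10 * x, y > 10 ]
[22]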
where is somewhat different. If I'm not mistaken, the scope of where lines up with a particular function definition. So
someFunc x y | guard1 = blah1
             | guard2 = blah2
  where {assignments}
the {assignments} in this where clause have access to x and y. guard1, guard2, blah1, and blah2 all have access to the {assignments} of this where clause. As is mentioned in the tutorial you linked, this can be helpful if multiple guards reuse the same expressions.
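To make that schematic example runnable (a sketch of my own, not from the original answer):
classify :: Int -> Int -> String
classify x y | y > z     = "above"   -- all guards can use z
             | y == z    = "on"
             | otherwise = "below"
  where z = x * x                    -- z sees the arguments x and y

main :: IO ()
main = putStrLn (classify 3 10)      -- prints "above", since 10 > 9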
In do notation, you can indeed use let with and without in. For it to be equivalent (in your case, I'll later show an example where you need to add a second do and thus more indentation), you need to indent it as you discovered (if you're using layout - if you use explicit braces and semicolons, they're exactly equivalent).
To understand why it's equivalent, you have to actually grok monads (at least to some degree) and look at the desugaring rules for do notation. In particular, code like this:
do let x = ...
   stmts -- the rest of the do block
is translated to let x = ... in do { stmts }. In your case, stmts = print (problem_8 digits). Evaluating the whole desugared let binding results in an IO action (from print $ ...). And here, you need understanding of monads to intuitively agree that there's no difference between do notations and "regular" language elements describing a computation resulting in monadic values.
As for both why are possible: Well, let ... in ... has a broad range of applications (most of which have nothing to do with monads in particular), and a long history to boot. let without in for do notation, on the other hand, seems to be nothing but a small piece of syntactic sugar. The advantage is obvious: You can bind the results of pure (as in, not monadic) computations to a name without resorting to a pointless val <- return $ ... and without splitting up the do block in two:
do stuff
   let val = ...
    in do more
          stuff $ using val
The reason you don't need an extra do block for what follows the let is that you only have a single line. Remember, do e is just e.
Regarding your edit: digits being visible in the next line is the whole point, and there's no exception for it or anything. The do notation becomes one single expression, and let works just fine within a single expression. where is only needed for things which aren't expressions.
For the sake of demonstration, I'll show the desugared version of your do block. If you aren't too familiar with monads yet (something you should change soon IMHO), ignore the >>= operator and focus on the let. Also note that indentation doesn't matter any more.
main = readFile "p8.log" >>= (\t ->
    let digits = map digitToInt $ concat $ lines t
    in print (problem_8 digits))
Some beginner notes about "are the following two the same".
For example, add1 is a function that adds 1 to a number:
add1 :: Int -> Int
add1 x =
    let inc = 1
    in x + inc
So it's like add1 x = x + inc, with inc substituted by 1 from the let binding.
When you try to omit the in keyword
add1 :: Int -> Int
add1 x =
    let inc = 1
    x + inc
you get a parse error.
From documentation:
Within do-blocks or list comprehensions
let { d1 ; ... ; dn }
without `in` serves to introduce local bindings.
Btw, there is a nice explanation with many examples of what the where and in keywords actually do.

What type of scope does Haskell use?

I'm trying to figure out if Haskell uses dynamic or static scoping.
I realize that, for example, if you define:
let x = 10
then define the function
let square x = x*x
You have two different "x"s. Does that mean it is dynamically scoped? If not, what scoping does it use, and why?
Also, can Haskell variables have aliases (a different name for the same memory location/value)?
Thanks.
Haskell uses (broadly speaking) exactly the same lexical scoping as most other languages.
eg.
x = 10
Results in a value referenced through x in the global scope, whereas
square x = x * x
will result in x being lexically scoped to the function square. It may help to think of the above form as a syntactic nicety for:
square = \ x -> x * x
As to your other question, I'm not sure what you mean by aliasing.
Answering only the second part of the question:
You can have several aliases for the same "memory location", but since they are all immutable, it does not matter most of the time.
Dumb example:
foo x y = x * y
bar z = foo z z
Within foo, when called from bar, both x and y are clearly the same value. But since you cannot modify either x or y, you will not even notice.
There are some things wrong in your statements...
There are no mutable variables in Haskell, just definitions (or immutable variables).
A variable's memory location is a concept that does not exist in Haskell.
In your example, x is not 10 inside the function; it is just an argument to square, which can take any value (you can specify the type later). In this call it happens to be 10, but only in this call.
Curt Sampson's answer below gives a concrete example of aliasing using IORefs.
As the first part of the question is already answered by others, here is the second part:
I assume by aliasing you mean one name for another. As Haskell is a functional language, and functions behave as normal identifiers in any case, you can do that like this:
y = x
which would define an alias y for the function x. Note that everything is a function. Even if it looks like a "variable", it's just a nullary function taking no arguments. Aliases for types look like this:
type Function = Double -> Double
which would define an alias Function for the type Double -> Double
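A small usage sketch for such an alias (my own example): the alias and the type it abbreviates are completely interchangeable:
type Function = Double -> Double

applyTwice :: Function -> Function   -- same as (Double -> Double) -> Double -> Double
applyTwice f = f . f

main :: IO ()
main = print (applyTwice (+ 1) 0)    -- prints 2.0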
Haskell uses static nested scopes. What is a bit confusing compared with other languages that have static nested scopes is that the scope of a name is a block which includes text preceding its definition. For example
evens = 0 : map (+1) odds
odds = map (+1) evens
here the name 'odds' is in scope in the definition of 'evens', despite the surprising fact that 'odds' has not yet been defined. (The example defines two infinite lists of even and odd numbers.)
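Loading those two definitions in GHCi and sampling them shows the mutual recursion working (my illustration):
Prelude> take 5 evens
[0,2,4,6,8]
Prelude> take 5 odds
[1,3,5,7,9]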
A dead language with a similar scoping rule was Modula-3. But Haskell is a bit trickier in that you can attempt to 'redefine' a variable within the same scope but instead you just introduce another recursion equation. This is a pitfall for people who learned ML or Scheme first:
let x = 2 * n
x = x + 1 -- watch out!
This is perfectly good ML or Scheme let*, but Haskell has Scheme letrec semantics, without the restriction to lambda values. No wonder this is tricky stuff!
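A concrete way to hit this pitfall in GHCi (my illustration; forcing the self-referential x diverges, and GHC's runtime may or may not report it as <<loop>>):
Prelude> let x = 5
Prelude> let x = x + 1   -- the x on the right is this new x, not the old 5
Prelude> x
-- hangs (or aborts with <<loop>>), because x is defined in terms of itself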
In your example, the global definition of x is shadowed by the local definition of x. In Haskell, a variable's scope is determined by a static reading of the source code; this is called lexical scope. You can get something similar to dynamic scoping with implicit parameters, but that can lead to some unexpected behavior (so I've read; I've never tried them myself).
To sum up the other answers concisely:
lexical scope
aliasing is as easy as x = 1; y = x but doesn't usually matter because things are immutable.
The let syntax you use in your example looks like it's at the interactive ghci> prompt. Everything in interactive mode occurs within the IO monad, so things may appear more mutable there than normal.
Well, as I think people have said already, Haskell doesn't have any variables as found in most other languages, it only has expressions. In your example let x = 10, x is an expression that always evaluates to 10. You can't actually change the value of x later on, though you can use the scoping rules to hide it by defining x to be another expression.
Yes, Haskell has aliases. Try out this little program:
import Data.IORef
main :: IO ()
main = do x <- newIORef 0        -- write 0 into x
          readIORef x >>= print  -- x contains 0
          let y = x
          readIORef y >>= print  -- y contains 0
          writeIORef x 42        -- write 42 into x
          readIORef y >>= print  -- y contains 42
