How to implement static scope, dynamic scope, and lazy evaluation in ML and Haskell?

I understand conceptually what all of these are, I'm just hoping for some code examples of how to implement them in ML and Haskell.

Haskell variables (top-level definitions, variables in patterns, etc.) are all statically scoped. For example, the program:
y = "global value"
f = print y
g = let y = "local value" in f
main = g
will print "global value". Even though, in the definition of g, the function f is used after "redefining" y, this redefinition doesn't affect f, which uses the statically (AKA lexically) scoped definition of y in force where f was defined.
If you want to "implement" dynamic scope, you have to be more specific about what you really mean. If you're wondering if you can write a function in plain Haskell, like:
addY :: Int -> Int
addY x = x + y
such that y might refer to a different variable from one call to the next, then the answer is no. In this definition, y always refers to the same variable (which, in Haskell, means the same, immutable value) which can be determined by static analysis of the program and cannot be dynamically redefined.
[[Edit: As @Jon Purdy points out, though, there's a Haskell extension that supports a form of dynamic scope, such that the following prints various dynamically scoped local values with the same function.
{-# LANGUAGE ImplicitParams #-}
f :: (?y :: String) => IO ()
f = print ?y
g = let ?y = "g's local value" in f
h = let ?y = "h's local value" in f
main = do
  g                                   -- prints g's local value
  h                                   -- prints h's local value
  let ?y = "main's local value" in f  -- prints main's local value
--end of edit--]]
For lazy evaluation, there are many examples, such as the following entered into an interactive GHCi session:
take 3 [1,2..] -- gives [1,2,3]
let x = (15^2, 6 `div` 0)
fst x -- gives 225
let y = snd x
y -- *** Exception: divide by zero
In the first line, if evaluation were strict, the attempt to fully evaluate the infinite list [1,2..] (which can also be written [1..]; it just counts up 1,2,3,... forever) would go into an infinite loop, and the take function would never be called. In the second example, if evaluation were strict, the divide-by-zero error would occur when x was defined, not only after we tried to print its second component.
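Another handy illustration (my own sketch, pasteable into GHCi) is a self-referential list that laziness materializes only as far as it is demanded:
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)  -- an infinite list of Fibonacci numbers

take 8 fibs -- gives [0,1,1,2,3,5,8,13]
Under strict evaluation, the definition of fibs would have to be computed in full before take could run, and would never terminate.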

Related

What does immutable variable in Haskell mean?

I am quite confused by the concept of immutable variables in Haskell. It seems that we can't change the value of a variable in Haskell. But when I tried the following code in GHCi, it seemed like the value of a variable did change:
Prelude> foo x=x+1
Prelude> a=1
Prelude> a
1
Prelude> foo a
2
Prelude> a=2
Prelude> a
2
Prelude> foo a
3
Does this conflict with the idea of immutable variables?
Many thanks!
Haskell doesn't allow you to modify existing variables. It does, however, allow you to re-use variable names, and that's all that's happening here. One way to see this is to ask GHCi, using the :i[nfo] directive, where the variable was declared:
Prelude> let a = 1
Prelude> :i a
a :: Num a => a -- Defined at <interactive>:2:5
Prelude> let a = 2
Prelude> :i a
a :: Num a => a -- Defined at <interactive>:4:5
These are actually two entirely separate variables which just happen to have the same name! If you just ask for a, the newer definition will be "preferred", but the old one is still there; one way to see this, as remarked by chi in the comments, is to use a in a function:
Prelude> let a = 2
Prelude> :i a
a :: Num a => a -- Defined at <interactive>:4:5
Prelude> let f x = a + x
Prelude> let a = 3
Prelude> f (-2)
0
f never needs to care that you've defined a new variable that's also called a; from its perspective a was one immutable variable that always stays as it is.
It's worth talking a bit about why GHCi prefers the later definition. This does not otherwise happen in Haskell code; in particular if you try to compile the following module, it simply gives an error concerning duplicate definition:
a = 1
a = 2
main :: IO ()
main = print a
The reason that something like this is allowed in GHCi is that GHCi works differently from Haskell modules. The sequence of GHCi commands in fact forms a sequence of actions in the IO monad†; i.e. the program would have to be
main :: IO ()
main = do
  let a = 1
  let a = 2
  print a
Now, if you've learned about monads you'll know that this is just syntactic sugar for
main =
  let a = 1 in (let a = 2 in (print a))
and this is really the crucial bit for why you can re-use the name a: the second one, a = 2, lives in a narrower scope than the first. So it is more local, and local definitions have priority. Whether this is a good idea is a bit debatable; a good argument for it is that you can have a function like
greet :: String -> IO ()
greet name = putStrLn $ "Hello, "++name++"!"
and it won't stop working just because somebody defines elsewhere
name :: Car -> String
name car | rollsOverAtRightTurn car = "Reliant Robin"
         | fuelConsumption car > 50*litrePer100km = "Hummer"
         | ... = ...
Besides, it's really quite useful that you can “redefine” variables while fooling around in GHCi, even though it's not such a good idea to redefine stuff in a proper program, which is supposed to show consistent behaviour.
†As dfeuer remarks, this is not the whole truth. You can do some things in GHCi that aren't allowed in an IO do-block; in particular, you can define data types and classes. But any normal statement or variable definition acts as if it were in the IO monad.
(The other answer using GHCi is fine but to clarify it is not specific to GHCi or monads...)
As you can see from the fact that the following Haskell program
main =
  let x = 1 in
  let f y = x + y in
  let x = 2 in
  print (x * f 3)
prints 8 rather than 10, variables can only be "bound", not "mutated", in Haskell. In other words, the program above is equivalent (called α-equivalent, meaning a consistent change of the names of bound variables only; see https://wiki.haskell.org/Alpha_conversion for more details) to
main =
  let x1 = 1 in
  let f y = x1 + y in
  let x2 = 2 in
  print (x2 * f 3)
where it is clear that x1 and x2 are different variables.

How to evaluate a Haskell expression immediately?

When I use JS, I have two options to handle a function.
var a = function() {};
var b = a; // b is the function a itself.
var c = a(); // c is result of the evaluation of function a.
AFAIK, Haskell is lazy by default, so I always get b by default. But if I want to get c, how can I do that?
Update
I think I should explain more explicitly what I meant.
I was doing something like this in GHCi:
let a = getLine
a
I wanted to put the result of getLine into a.
Update2
I'm noting this code here for later reference, for people like me.
I arrived at the correct translation to Haskell with @Ankur's help.
The above code example is not a good one because function a doesn't return anything.
If I change it like this:
var a = function(x,y) { return x * y; };
var b = a; // b is the function a itself.
var c = a(); // c is result of the evaluation of function a.
The translation into Haskell then becomes:
let a = \x y -> x * y -- anonymous lambda function
let b = a
let c = a 100 200
Your JS code would translate to Haskell as:
Prelude> let a = (\() -> ())
Prelude> let b = a
Prelude> let c = a ()
Your JS function was taking nothing (which you can model as the () type) and returning nothing, i.e. again ().
getLine is a value of type IO String, so if you say let a = getLine, a becomes a value of type IO String. What you want is to extract the String from this IO String, which can be done as:
a <- getLine
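For instance, a minimal complete program using this (my own sketch; the prompt text is made up) would be:
main :: IO ()
main = do
  a <- getLine                     -- runs the action and binds the String it produced
  putStrLn ("You entered: " ++ a)  -- a is a plain String here, not an IO String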
Note that the parallel to Javascript isn't quite correct -- for instance, assuming a returns a number, b + b makes sense in Haskell but not in your example Javascript. In principle functions in Haskell are functions of exactly one argument -- what appears to be a function of two arguments is a function of one argument, which returns a function of one argument, which returns a value. b in Haskell would not be an unevaluated "zero-argument function", but an unevaluated value.
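To make the currying point concrete, here is a small sketch (the names are mine):
add :: Int -> Int -> Int  -- really Int -> (Int -> Int)
add x y = x + y

addTen :: Int -> Int      -- partial application yields a new function
addTen = add 10

-- addTen 5 gives 15, and so does (add 10) 5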
To introduce strictness you can use seq, which takes two arguments; the first is evaluated strictly (to weak head normal form) and the second is returned.
Here is an example where seq is used to force immediate evaluation of z':
foldl' :: (a -> b -> a) -> a -> [b] -> a
foldl' _ z [] = z
foldl' f z (x:xs) = let z' = f z x in z' `seq` foldl' f z' xs
Note the way z' is used again later, as the second argument to foldl': seq evaluates its first argument but returns only its second, so the evaluated value must be passed along explicitly.
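For instance (a sketch; exact memory behaviour depends on optimisation flags), compare the two folds on a long list, using the library version of foldl' from Data.List:
import Data.List (foldl')

lazySum, strictSum :: Integer
lazySum   = foldl  (+) 0 [1..10000000]  -- builds a long chain of thunks first
strictSum = foldl' (+) 0 [1..10000000]  -- forces each partial sum; constant space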
Haskell is non-strict, not lazy.
Many expressions will be evaluated strictly anyway, so you can often force strictness simply with the structure of your code.
In Haskell, if c has a type which matches the return type - the value - of a() then that is what will be assigned to it (never the function itself).
Haskell may put off the calculation until your code actually needs the value of c but in most cases you should not care.
Why do you want to force the evaluation early? Usually, the only reason to do this in Haskell is performance. In less pure languages, you might be depending on a side effect but that will not be the case in Haskell - unless you're working with, say, the IO monad and that gives you all you need to force sequential evaluation.
UPDATE Ah, so you are working with IO.
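If you need to force a value at a particular point in an IO sequence, one standard tool is Control.Exception.evaluate; here is a minimal sketch of my own:
import Control.Exception (evaluate)

main :: IO ()
main = do
  let x = 6 `div` 0 :: Int
  _ <- evaluate x           -- forces x here, so the divide-by-zero is raised now
  putStrLn "never reached"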

Evaluation strategy

How should one reason about function evaluation in examples like the following in Haskell:
let f x = ...
    x = ...
in map (g (f x)) xs
In GHC, sometimes (f x) is evaluated only once, and sometimes once for each element in xs, depending on what exactly f and g are. This can be important when f x is an expensive computation. It has just tripped a Haskell beginner I was helping and I didn't know what to tell him other than that it is up to the compiler. Is there a better story?
Update
In the following example (f x) will be evaluated 4 times:
let f x = trace "!" $ zip x x
    x = "abc"
in map (\i -> lookup i (f x)) "abcd"
With language extensions, we can create situations where f x must be evaluated repeatedly:
{-# LANGUAGE GADTs, Rank2Types #-}
module MultiEvG where
data BI where
  B :: (Bounded b, Integral b) => b -> BI

foo :: [BI] -> [Integer]
foo xs = let f :: (Integral c, Bounded c) => c -> c
             f x = maxBound - x
             g :: (forall a. (Integral a, Bounded a) => a) -> BI -> Integer
             g m (B y) = toInteger (m + y)
             x :: (Integral i) => i
             x = 3
         in map (g (f x)) xs
The crux is to have f x polymorphic even as the argument of g, and we must create a situation where the type(s) at which it is needed can't be predicted (my first stab used an Either a b instead of BI, but when optimising, that of course led to only two evaluations of f x at most).
A polymorphic expression must be evaluated at least once for each type it is used at. That's one reason for the monomorphism restriction. However, when the range of types it can be needed at is restricted, it is possible to memoise the values at each type, and in some circumstances GHC does that (needs optimising, and I expect the number of types involved mustn't be too large). Here we confront it with what is basically an inhomogeneous list, so in each invocation of g (f x), it can be needed at an arbitrary type satisfying the constraints, so the computation cannot be lifted outside the map (technically, the compiler could still build a cache of the values at each used type, so it would be evaluated only once per type, but GHC doesn't, in all likelihood it wouldn't be worth the trouble).
Monomorphic expressions need only be evaluated once, they can be shared. Whether they are is up to the implementation; by purity, it doesn't change the semantics of the programme. If the expression is bound to a name, in practice you can rely on it being shared, since it's easy and obviously what the programmer wants. If it isn't bound to a name, it's a question of optimisation. With the bytecode generator or without optimisations, the expression will often be evaluated repeatedly, but with optimisations repeated evaluation would indicate a compiler bug.
Polymorphic expressions must be evaluated at least once for every type they're used at, but with optimisations, when GHC can see that it may be used multiple times at the same type, it will (usually) still be shared for that type during a larger computation.
Bottom line: Always compile with optimisations, help the compiler by binding expressions you want shared to a name, and give monomorphic type signatures where possible.
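As a sketch of that advice (the names expensive and results are mine):
expensive :: Integer  -- monomorphic signature, bound to a name: reliably shared
expensive = sum [1..10000000]

results :: [Integer]
results = map (+ expensive) [1, 2, 3]  -- expensive is computed once, not three times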
Your examples are indeed quite different.
In the first example, the argument to map is g (f x), and it is passed once to map, most likely as a partially applied function.
Should g (f x), when applied to an argument within map, evaluate its first argument, then this will be done only once, and the thunk (f x) will be updated with the result.
Hence, in your first example, f x will be evaluated at most once.
Your second example requires a deeper analysis before the compiler can arrive at the conclusion that (f x) is always constant in the lambda expression. Perhaps it will never optimize it at all, because it may have knowledge that trace is not quite kosher. So, this may evaluate 4 times when tracing, and 4 times or 1 time when not tracing.
This is really dependent on GHC's optimizations, as you've been able to tell.
The best thing to do is to study the GHC core that you get after optimizing the program. I would look at the generated Core and examine whether f x had its own let statement outside the map or not.
If you want to be sure, then you should factor f x out into its own variable assigned in a let, but there's not really a guaranteed way to figure it out other than reading through Core.
All that said, with the exception of things like trace that use unsafePerformIO, this will never change the semantics of your program: how it actually behaves.
In GHC without optimizations, the body of a function is evaluated every time the function is called. (A "call" means the function is applied to arguments and the result is evaluated.) In the following example, f x is inside a function, so it will execute each time the function is called.
(GHC may optimize this expression as discussed in the FAQ [1].)
let f x = trace "!" $ zip x x
    x = "abc"
in map (\i -> lookup i (f x)) "abcd"
However, if we move f x out of the function, it will execute only once.
let f x = trace "!" $ zip x x
    x = "abc"
in map ((\f_x i -> lookup i f_x) (f x)) "abcd"
This can be rewritten more readably as
let f x = trace "!" $ zip x x
    x = "abc"
    g f_x i = lookup i f_x
in map (g (f x)) "abcd"
The general rule is that, each time a function is applied to an argument, a new "copy" of the function body is created. Function application is the only thing that may cause an expression to re-execute. However, be warned that some functions and function calls do not look like functions syntactically.
[1] http://www.haskell.org/haskellwiki/GHC/FAQ#Subexpression_Elimination

Loop through a set of functions with Haskell

Here's a simple, barebones example of how the code that I'm trying to do would look in C++.
while (state == true) {
    a = function1();
    b = function2();
    state = function3();
}
In the program I'm working on, I have some functions that I need to loop through until bool state equals false (or until one of the variables, let's say variable b, equals 0).
How would this code be done in Haskell? I've searched through here, Google, and even Bing, and haven't been able to find any clear, straightforward explanations of how to do repetitive actions with functions.
Any help would be appreciated.
Taking Daniel's comment into account, it could look something like this:
f = loop init_a init_b True
  where
    loop a b True = loop a' b' (fun3 a' b')
      where
        a' = fun1 ....
        b' = fun2 .....
    loop a b False = (a, b)
Well, here's a suggestion of how to map the concepts here:
A C++ loop is some form of list operation in Haskell.
One iteration of the loop = handling one element of the list.
Looping until a certain condition becomes true = base case of a function that recurses on a list.
But there is something that is critically different between imperative loops and functional list functions: loops describe how to iterate; higher-order list functions describe the structure of the computation. So for example, map f [a0, a1, ..., an] can be described by this diagram:
  [a0,   a1,   ..., an]
   |     |          |
   f     f          f
   |     |          |
   v     v          v
[f a0, f a1, ..., f an]
Note that this describes how the result is related to the arguments f and [a0, a1, ..., an], not how the iteration is performed step by step.
Likewise, foldr f z [a0, a1, ..., an] corresponds to this:
f a0 (f a1 (... (f an z)))
filter doesn't quite lend itself to diagramming, but it's easy to state many rules that it satisfies:
length (filter pred xs) <= length xs
For every element x of filter pred xs, pred x is True.
If x is an element of filter pred xs, then x is an element of xs
If x is not an element of xs, then x is not an element of filter pred xs
If x appears before x' in filter pred xs, then x appears before x' in xs
If x appears before x' in xs, and both x and x' appear in filter pred xs, then x appears before x' in filter pred xs
In a classic imperative program, all three of these cases are written as loops, and the difference between them comes down to what the loop body does. Functional programming, on the contrary, insists that this sort of structural pattern does not belong in "loop bodies" (the functions f and pred in these examples); rather, these patterns are best abstracted out into higher-order functions like map, foldr and filter. Thus, every time you see one of these list functions you instantly know some important facts about how the arguments and the result are related, without having to read any code; whereas in a typical imperative program, you must read the bodies of loops to figure this stuff out.
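As a tiny illustration of the contrast (my own example), a C-style loop that sums the squares of the even elements becomes a pipeline of exactly these functions, readable at a glance:
sumEvenSquares :: [Int] -> Int
sumEvenSquares xs = foldr (+) 0 (map (^ 2) (filter even xs))

-- sumEvenSquares [1..10] gives 220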
So the real answer to your question is that it's impossible to offer an idiomatic translation of an imperative loop into functional terms without knowing what the loop body is doing: what the preconditions are supposed to be before the loop runs, and what the postconditions are supposed to be when the loop finishes. The loop body that you only described vaguely is going to determine the structure of the computation, and different structures will call for different higher-order functions in Haskell.
First of all, let's think about a few things.
Does function1 have side effects?
Does function2 have side effects?
Does function3 have side effects?
The answer to all of these is a resoundingly obvious YES, because they take no inputs, and presumably there are circumstances which cause you to go around the while loop more than once (rather than def function3(): return false). Now let's remodel these functions with explicit state.
s = initialState
sentinel = true
while (sentinel):
    a, b, s, sentinel = function1(a, b, s, sentinel)
    a, b, s, sentinel = function2(a, b, s, sentinel)
    a, b, s, sentinel = function3(a, b, s, sentinel)
return a, b, s
Well that's rather ugly. We know absolutely nothing about what inputs each function draws from, nor do we know anything about how these functions might affect the variables a, b, and sentinel, nor "any other state" which I have simply modeled as s.
So let's make a few assumptions. Firstly, I am going to assume that these functions do not directly depend on nor affect in any way the values of a, b, and sentinel. They might, however, change the "other state". So here's what we get:
s = initState
sentinel = true
while (sentinel):
    a, s2 = function1(s)
    b, s3 = function2(s2)
    sentinel, s4 = function3(s3)
    s = s4
return a, b, s
Notice I've used temporary variables s2, s3, and s4 to indicate the changes that the "other state" goes through. Haskell time. We need a control function to behave like a while loop.
myWhile :: s                    -- an initial state
        -> (s -> (Bool, a, s))  -- given a state, produces a sentinel, a current result, and the next state
        -> (a, s)               -- the result, plus the resultant state
myWhile s f = case f s of
    (False, a, s') -> (a, s')
    (True, _, s')  -> myWhile s' f
Now how would one use such a function? Well, given we have the functions:
function1 :: MyState -> (AType, MyState)
function2 :: MyState -> (BType, MyState)
function3 :: MyState -> (Bool, MyState)
We would construct the desired code as follows:
thatCodeBlockWeAreTryingToSimulate :: MyState -> ((AType, BType), MyState)
thatCodeBlockWeAreTryingToSimulate initState = myWhile initState f
  where f :: MyState -> (Bool, (AType, BType), MyState)
        f s = let (a, s2)        = function1 s
                  (b, s3)        = function2 s2
                  (sentinel, s4) = function3 s3
              in (sentinel, (a, b), s4)
Notice how similar this is to the non-ugly python-like code given above.
You can verify that the code I have presented is well-typed by adding function1 = undefined etc for the three functions, as well as the following at the top of the file:
{-# LANGUAGE EmptyDataDecls #-}
data MyState
data AType
data BType
So the takeaway message is this: in Haskell, you must explicitly model the changes in state. You can use the "State Monad" to make things a little prettier, but you should first understand the idea of passing state around.
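For completeness, here is roughly what the same loop might look like using the State monad from the mtl package; this is a sketch under the same assumed types (MyState, AType, BType, function1 through function3), not part of the code above:
import Control.Monad.State (State, state)

loopS :: State MyState (AType, BType)
loopS = do
  a        <- state function1  -- state threads MyState through for us
  b        <- state function2
  sentinel <- state function3
  if sentinel then loopS else return (a, b)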
Let's take a look at your C++ loop:
while (state == true) {
    a = function1();
    b = function2();
    state = function3();
}
Haskell is a pure functional language, so it won't fight us as much (and the resulting code will be more useful, both in itself and as an exercise to learn Haskell) if we try to do this without side effects, and without using monads to make it look like we're using side effects either.
Let's start with this structure:
while (state == true) {
    <<do stuff that updates state>>
}
In Haskell we're obviously not going to be checking a variable against true as the loop condition, because it can't change its value[1] and we'd either evaluate the loop body forever or never. So instead, we'll want to be evaluating a function that returns a boolean value on some argument:
while (check something == True) {
    <<do stuff that updates state>>
}
Well, now we don't have a state variable, so that "do stuff that updates state" is looking pretty pointless. And we don't have a something to pass to check. Let's think about this a bit more. We want the something that gets checked to depend on what the "do stuff" bit is doing. We don't have side effects, so that means something has to be (or be derived from) what the "do stuff" returns. "Do stuff" also needs to take something that varies as an argument, or it'll just keep returning the same thing forever, which is also pointless. We also need to return a value out of all this, otherwise we're just burning CPU cycles (again, with no side effects there's no point running a function if we don't use its output in some way, and there's even less point running a function repeatedly if we never use its output).
So how about something like this:
while check func state =
  let next_state = func state
  in if check next_state
       then while check func next_state
       else next_state
Let's try it in GHCi:
*Main> while (<20) (+1) 0
20
This is the result of applying (+1) repeatedly while the result is less than 20, starting from 0.
*Main> while ((<20) . length) (++ "zob") ""
"zobzobzobzobzobzobzob"
This is the result of concatenating "zob" repeatedly while the result's length is less than 20, starting from the empty string.
So you can see I've defined a function that is (sort of a bit) analogous to a while loop from imperative languages. We didn't even need dedicated loop syntax for it! (which is the real reason Haskell has no such syntax; if you need this kind of thing you can express it as a function). It's not the only way to do so, and experienced Haskell programmers would probably use other standard library functions to do this kind of job, rather than writing while.
But I think it's useful to see how you can express this kind of thing in Haskell. It does show that you can't translate things like imperative loops directly into Haskell; I didn't end up translating your loop in terms of my while because it ends up pretty pointless: you never use the result of function1 or function2; they're called with no arguments, so they'd always return the same thing in every iteration; and function3 likewise always returns the same thing, and can only return true or false to either cause while to keep looping or stop, with no information resulting.
Presumably in the C++ program they're all using side effects to actually get some work done. If they operate on in-memory things then you need to translate a bigger chunk of your program at once to Haskell for the translation of this loop to make any sense. If those functions are doing IO then you'll need to do this in the IO monad in Haskell, for which my while function doesn't work, but you can do something similar.
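For what it's worth, an IO flavour of the same idea might look like this (my sketch, not part of the answer above):
whileIO :: (a -> Bool) -> (a -> IO a) -> a -> IO a
whileIO check step s = do
  s' <- step s                    -- run one iteration's effects
  if check s'
    then whileIO check step s'    -- keep looping on the new state
    else return s'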
[1] As an aside, it's worth trying to understand that "you can't change variables" in Haskell isn't just an arbitrary restriction, nor is it just an acceptable trade off for the benefits of purity, it is a concept that doesn't make sense the way Haskell wants you to think about Haskell code. You're writing down expressions that result from evaluating functions on certain arguments: in f x = x + 1 you're saying that f x is x + 1. If you really think of it that way rather than thinking "f takes x, then adds one to it, then returns the result" then the concept of "having side effects" doesn't even apply; how could something existing and being equal to something else somehow change a variable, or have some other side effect?
You should write a solution to your problem in a more functional style.
However, some code in Haskell works a lot like imperative looping; take for example state monads, tail recursion, until, foldr, etc.
A simple example is the factorial. In C you would write a loop, whereas in Haskell you can, for example, write fact n = foldr (*) 1 [2..n].
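For instance (a sketch of the alternatives just mentioned):
fact :: Integer -> Integer
fact n = foldr (*) 1 [2..n]  -- the fold version

-- the same computation with until, mirroring an imperative counter loop
factLoop :: Integer -> Integer
factLoop n = snd (until (\(i, _) -> i > n) (\(i, acc) -> (i + 1, acc * i)) (2, 1))

-- both fact 5 and factLoop 5 give 120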
If you have two functions f :: a -> b and g :: b -> c, where a, b, and c are types like String or [Int], then you can compose them simply by writing g . f.
If you wish to map the composition over a list or vector, you could write map (g . f) or V.map (g . f), assuming you've done import qualified Data.Vector as V.
Example: I wish to print a list of markdown headings like ## <number>. <heading> ##, but I need Roman numerals numbered from 1, and my list of headings has type [(String,Double)], where the Double is irrelevant.
import Data.List
import Text.Numeral.Roman

let fun = zipWith (\a b -> "## " ++ a ++ ". " ++ b ++ " ##\n") (map toRoman [1..]) . map fst
fun [("Foo",3.5),("Bar",7.1)]
What the hell does this do?
toRoman turns a number into a string containing the Roman numeral. map toRoman does this to every element of a list. map toRoman [1..] does it to every element of the lazy infinite list [1,2,3,4,..], yielding a lazy infinite list of Roman numeral strings.
fst :: (a,b) -> a simply extracts the first element of a tuple. map fst throws away our irrelevant Double information along the entire list.
\a b -> "## " ++ a ++ ". " ++ b ++ " ##\n" is a lambda expression that takes two strings and concatenates them within the desired formatting strings.
zipWith :: (a -> b -> c) -> [a] -> [b] -> [c] takes a two-argument function like our lambda expression and feeds it pairs of elements from its own second and third arguments.
You'll observe that zip, zipWith, etc. only read as much of the lazy infinite list of Roman numerals as is needed for the list of headings, meaning I've numbered my headings without maintaining any counter variable.
Finally, I have declared fun without naming its argument because the compiler can figure it out from the fact that map fst requires one argument. You'll notice that I put a . before my second map too. I could've written (map fst h) or $ map fst h instead if I'd written fun h = ..., but leaving the argument off fun meant I needed to compose map fst with zipWith after applying zipWith to two of the three arguments it wants.
I'd hope the compiler combines the zipWith and maps into one single loop via inlining.

What type of scope does Haskell use?

I'm trying to figure out if Haskell uses dynamic or static scoping.
I realize that, for example, if you define:
let x = 10
then define the function
let square x = x*x
You have 2 different "x's", and does that mean it is dynamically scoped? If not, what scoping does it use, and why?
Also, can Haskell variables have aliases (a different name for the same memory location/value)?
Thanks.
Haskell uses (broadly speaking) exactly the same lexical scoping as most other languages.
eg.
x = 10
Results in a value referenced through x in the global scope, whereas
square x = x * x
will result in x being lexically scoped to the function square. It may help to think of the above form as a syntactic nicety for:
square = \x -> x * x
As to your other question, I'm not sure what you mean by aliasing.
Answering only the second part of the question:
You can have several aliases for the same "memory location", but since they are all immutable, it does not matter most of the time.
Dumb example:
foo x y = x * y
bar z = foo z z
When foo is called from bar, x and y are clearly the same value. But since you cannot modify either x or y, you will not even notice.
There are some things wrong in your statements...
There are no mutable variables in Haskell, just definitions (or immutable variables).
A variable's memory location is a concept that does not exist in Haskell.
In your example, x is not 10 inside the function; it is just an argument to square, which can take any value (you can specify the type later). In this case it happens to be 10, but only in this case.
Here is an example of aliases provided by Curt Sampson:
import Data.IORef
main :: IO ()
main = do x <- newIORef 0        -- write 0 into x
          readIORef x >>= print  -- x contains 0
          let y = x
          readIORef y >>= print  -- y contains 0
          writeIORef x 42        -- write 42 into x
          readIORef y >>= print  -- y contains 42
As the first part of the question is already answered by others, here is the second part:
I assume by aliasing you mean one name for another. As Haskell is a functional language, and functions behave as normal identifiers in any case, you can do that like this:
y = x
which would define an alias y for the function x. Note that this works for anything with a name: even something that looks like a "variable" is just a definition that takes no arguments, so it can be aliased the same way. Aliases for types look like this:
type Function = Double -> Double
which would define an alias Function for the type Double -> Double
Haskell uses static nested scopes. What is a bit confusing compared with other languages that have static nested scopes is that the scope of a name is a block which includes text preceding its definition. For example
evens = 0 : map (+1) odds
odds  = map (+1) evens
here the name 'odds' is in scope in the definition of 'evens', despite the surprising fact that 'odds' has not yet been defined. (The example defines two infinite lists of even and odd numbers.)
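If you load these two definitions into GHCi, you should see something like this (my own check of the example):
*Main> take 5 evens
[0,2,4,6,8]
*Main> take 5 odds
[1,3,5,7,9]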
A dead language with a similar scoping rule was Modula-3. But Haskell is a bit trickier in that when you attempt to 'redefine' a variable within the same scope, you instead just introduce another recursion equation. This is a pitfall for people who learned ML or Scheme first:
let x = 2 * n
    x = x + 1 -- watch out!
This is perfectly good ML or Scheme let*, but Haskell has Scheme's letrec semantics, without the restriction to lambda values. No wonder this is tricky stuff!
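To see the letrec behaviour in isolation, here is a minimal sketch of my own; the inner binding shadows the outer one and refers to itself, so it diverges instead of incrementing 5:
x :: Int
x = 5

main :: IO ()
main = print (let x = x + 1 in x)
-- hangs (GHC's runtime typically reports <<loop>>): the let is recursive,
-- so the x on the right-hand side is the new x, not the outer 5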
In your example, the global definition of x is shadowed by the local definition of x. In Haskell, a variable's scope is determined by a static reading of the source code; this is called lexical scope. You can get something similar to dynamic scoping with implicit parameters, but that can lead to some unexpected behavior (so I've read; I've never tried them myself).
To sum up the other answers concisely:
lexical scope
aliasing is as easy as x = 1; y = x but doesn't usually matter because things are immutable.
The let syntax you use in your example looks like it's at the interactive ghci> prompt. Everything in interactive mode occurs within the IO monad so things may appear more mutable there than normal.
Well, as I think people have said already, Haskell doesn't have variables as found in most other languages; it only has expressions. In your example let x = 10, x is an expression that always evaluates to 10. You can't actually change the value of x later on, though you can use the scoping rules to hide it by defining x to be another expression.
Yes, Haskell has aliases. Try out this little program:
import Data.IORef
main :: IO ()
main = do x <- newIORef 0        -- write 0 into x
          readIORef x >>= print  -- x contains 0
          let y = x
          readIORef y >>= print  -- y contains 0
          writeIORef x 42        -- write 42 into x
          readIORef y >>= print  -- y contains 42
