I'm wondering whether we can use where outside a function. For example:
fun :: Int -> Int
fun n = n + 1

main = do
  fun x where x = 30
Obviously it doesn't work when I compile it; I want to declare x as a local variable for fun only.
Your function has the wrong type to be used as the final expression in a do block. It needs to return a Monad m => m Int value, not simply an Int. Since main (in its usual use) is required to be an IO action, this means m should be IO.
fun :: Int -> IO Int
fun n = return (n + 1)
A let would be more appropriate than a where in this case, though.
main = do
  let x = 30 in fun x
Now, x is in scope only for the call to fun. If you wrote
main = do
  let x = 30
  fun x
then x is technically in scope for the rest of the do block, not just the call to fun. Despite sharing the same keyword let, there is a distinct difference between a let in a do block and a regular let expression. (The relationship is that
do
  let name = value
  foo

is equivalent to

let name = value
in do
  foo
)
Note that do itself does not create a monadic value; it is simply syntactic sugar for various operators which assume monadic properties. A quick overview:
1. do { x <- y; foo x; } becomes y >>= (\x -> foo x).
2. do { foo; bar; } becomes foo >> bar.
3. do { let x = y; foo; } becomes let x = y in do foo.
4. do foo becomes foo.
Most relevant to your code is rule 4; a single expression in a do block is equivalent to the expression by itself, meaning you can strip the do. Only after the do block is desugared does Haskell begin to type-check the result.
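Putting the pieces together, a minimal working version of the original program might look like the sketch below (the final print is just my addition so that the result is visible):

fun :: Int -> IO Int
fun n = return (n + 1)

main :: IO ()
main = do
  let x = 30        -- x is in scope for the rest of the do block
  result <- fun x
  print result      -- prints 31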
I understand conceptually what static scoping, dynamic scoping, and lazy evaluation are; I'm just hoping for some code examples of how to implement them in ML and Haskell.
Haskell variables (top level definitions, variables in patterns, etc.) are all statically scoped. For example, the program:
y = "global value"
f = print y
g = let y = "local value" in f
main = g
will print "global value". Even though, in the definition of g, the function f is used after "redefining" y, this redefinition doesn't affect f, which uses the statically (a.k.a. lexically) scoped definition of y that was in force where f was defined.
If you want to "implement" dynamic scope, you have to be more specific about what you really mean. If you're wondering if you can write a function in plain Haskell, like:
addY :: Int -> Int
addY x = x + y
such that y might refer to a different variable from one call to the next, then the answer is no. In this definition, y always refers to the same variable (which, in Haskell, means the same, immutable value) which can be determined by static analysis of the program and cannot be dynamically redefined.
[[Edit: As @Jon Purdy points out, though, there's a Haskell extension that supports a form of dynamic scope, such that the following prints various dynamically scoped local values with the same function.
{-# LANGUAGE ImplicitParams #-}
f :: (?y :: String) => IO ()
f = print ?y
g = let ?y = "g's local value" in f
h = let ?y = "h's local value" in f
main = do
  g                                   -- prints g's local value
  h                                   -- prints h's local value
  let ?y = "main's local value" in f  -- prints main's value
--end of edit--]]
For lazy evaluation, there are many examples, such as the following entered into an interactive GHCi session:
take 3 [1,2..] -- gives [1,2,3]
let x = (15^2, 6 `div` 0)
fst x -- gives 225
let y = snd x
y -- *** Exception: divide by zero
In the first line, if the evaluation were strict, the attempt to fully evaluate the infinite list [1,2..] (which can also be written [1..] -- it just counts up 1,2,3,.. forever) would go into an infinite loop, and the take function would never be called. In the second example, if evaluation were strict, the division-by-zero error would occur when x was defined, not only after we tried to print its second component.
Example of broken code:
data Foo = Foo {
  bar :: (Int -> Int)
}

baz = Foo { bar i = i*3 }
Why isn't this possible?
It's just a syntactic limitation - I suspect that if this feature was ever considered, it was rejected because there are straightforward alternatives. Also, if it were supported, the next question would be why not pattern matching with multiple clauses, and overall it would just make the language bigger for not all that much gain.
You can use baz = Foo { bar = \x -> x*3 } instead for the specific case you've given, or define an auxiliary function.
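For instance, the auxiliary-function version might look like this sketch (tripled is just an illustrative name; Foo is the type from the question):

data Foo = Foo {
  bar :: (Int -> Int)
}

tripled :: Int -> Int
tripled i = i * 3

baz :: Foo
baz = Foo { bar = tripled }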
This should work:
baz = Foo { bar = (\x -> x*3) }
When I use JS, I have two options to handle a function.
var a = function() {};
var b = a; // b is the function a itself.
var c = a(); // c is result of the evaluation of function a.
AFAIK, Haskell is lazy by default, so I always get b by default. But if I want to get c, how can I do that?
Update
I think I should explain more explicitly what I meant.
I was doing something like this in ghci.
let a = getLine
a
I wanted to put the result of getLine into a.
Update2
I'm noting this code here for later reference, for people like me.
I was able to correct the Haskell translation with @Ankur's help.
The above code example is not a good one, because the function a doesn't return anything.
If I change it like this:
var a = function(x,y) { return x * y; };
var b = a; // b is the function a itself.
var c = a(100, 200); // c is the result of evaluating the function a.
Translation into Haskell will become like this.
let a = \x y -> x * y  -- anonymous lambda function
let b = a
let c = a 100 200
Your JS code would translate to Haskell as:
Prelude> let a = (\() -> ())
Prelude> let b = a
Prelude> let c = a()
Your JS function was taking nothing (which you can model as the () type) and returning nothing, i.e., again ().
getLine is a value of type IO String so if you say let a = getLine, a becomes value of type IO String. What you want is extract String from this IO String, which can be done as:
a <- getLine
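For example, in a full program this might look like the following sketch (the prompt string is just for illustration):

main :: IO ()
main = do
  a <- getLine                     -- a :: String, the result of running getLine
  putStrLn ("You typed: " ++ a)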
Note that the parallel to Javascript isn't quite correct -- for instance, assuming a returns a number, b + b makes sense in Haskell but not in your example Javascript. In principle functions in Haskell are functions of exactly one argument -- what appears to be a function of two arguments is a function of one argument, which returns a function of one argument, which returns a value. b in Haskell would not be an unevaluated "zero-argument function", but an unevaluated value.
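A small sketch of that point (the names mul, double, and twenty are mine):

mul :: Int -> Int -> Int   -- really Int -> (Int -> Int)
mul x y = x * y

double :: Int -> Int
double = mul 2             -- partial application yields another function

twenty :: Int
twenty = double 10         -- an unevaluated value (a thunk) until it is demanded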
To introduce strictness you can use seq, which takes two arguments, the first of which is strictly evaluated and the second of which is returned. Read more.
Here is an example from that page where seq is used to force immediate evaluation of z':
foldl' :: (a -> b -> a) -> a -> [b] -> a
foldl' _ z [] = z
foldl' f z (x:xs) = let z' = f z x in z' `seq` foldl' f z' xs
Note the way z' is used again later as an argument to the recursive foldl' call, since seq just evaluates its first argument and returns its second, discarding the result of the first.
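As a rough illustration of why forcing z' matters (the numbers are arbitrary): with the strict foldl' above, the accumulator is evaluated at each step, so something like

total :: Integer
total = foldl' (+) 0 [1..1000000]   -- runs in constant space with this foldl'

sums a large list without building up a chain of unevaluated (+) thunks, which is what the ordinary lazy foldl would do.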
Haskell is non-strict, not lazy.
Many expressions will be evaluated strictly (see here), so you can often force strictness simply through the structure of your code.
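For example, pattern matching evaluates its scrutinee to the outermost constructor, so a definition like the following sketch (describe is just an illustrative name) forces the list's outer structure as soon as its result is needed:

describe :: [Int] -> String
describe xs = case xs of
  []      -> "empty"
  (_ : _) -> "non-empty"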
In Haskell, if c has a type that matches the result type (the value) of a, then that value is what c will be bound to, never the function itself.
Haskell may put off the calculation until your code actually needs the value of c, but in most cases you should not care.
Why do you want to force the evaluation early? Usually, the only reason to do this in Haskell is performance. In less pure languages, you might be depending on a side effect, but that will not be the case in Haskell - unless you're working with, say, the IO monad, and that gives you all you need to force sequential evaluation.
UPDATE: Ah, so you are working with IO.
In Haskell, why do you not use 'in' with 'let' inside of a do-block, but you must otherwise?
For example, in the somewhat contrived examples below:
afunc :: Int -> Int
afunc a =
  let x = 9 in
  a * x

amfunc :: IO Int -> IO Int
amfunc a = do
  let x = 9
  a' <- a
  return (a' * x)
It's an easy enough rule to remember, but I just don't understand the reason for it.
You are providing expressions to define both afunc and amfunc. Let-expressions and do-blocks are both expressions. However, while a let-expression introduces a new binding that scopes around the expression given after the 'in' keyword, a do-block isn't made of expressions: it is a sequence of statements. There are three forms of statements in a do-block:
a computation whose result is bound to some variable x, as in
  x <- getChar
a computation whose result is ignored, as in
  putStrLn "hello"
a let-statement, as in
  let x = 3 + 5
A let-statement introduces a new binding, just as let-expressions do. The scope of this new binding extends over all the remaining statements in the do-block.
In short, what comes after the 'in' in a let-expression is a single expression, whereas what comes after a let-statement is a sequence of statements. I can of course write a particular statement using a let-expression, but then the scope of the binding would not extend beyond that statement to the statements that follow. Consider:
do putStrLn "hello"
   let x = 3 + 5 in putStrLn "eight"
   putStrLn (show x)
The above code causes the following error message in GHC:
Not in scope: `x'
whereas
do putStrLn "hello"
   let x = 3 + 5
   putStrLn "eight"
   putStrLn (show x)
works fine.
You can indeed use let .. in in do-notation. In fact, according to the Haskell Report, the following
do{let decls; stmts}
desugars into
let decls in do {stmts}
I imagine this is useful because you might otherwise need deep indentation, or explicit delimiting of the "in" block, stretching from your in .. to the very end of the do-block.
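For example, if you insisted on the explicit in, the rest of the block would have to be nested under it, roughly like this sketch (the strings are just placeholders):

main :: IO ()
main =
  do putStrLn "first"
     -- everything after the binding has to live under the in:
     let x = 3 + 5 in
       do putStrLn "second"
          print x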
The short answer is that Haskell do blocks are funny. Haskell is an expression-based language—except in do blocks, because the point of do blocks is to provide for a "statement" syntax of sorts. Most "statements" are just expressions of type Monad m => m a, but there are two syntaxes that don't correspond to anything else in the language:
Binding the result of an action with <-: x <- action is a "statement" but not an expression. This syntax requires x :: a and action :: Monad m => m a.
The in-less variant of let, which is like an assignment statement (but for pure code on the right hand side). In let x = expr, it must be the case that x :: a and expr :: a.
Note that just like uses of <- can be desugared (in that case, into >>= and lambda), the in-less let can always be desugared into the regular let ... in ...:
do { let x = blah; ... }
=> let x = blah in do { ... }
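A concrete instance of that desugaring (greet is just an illustrative name):

greet :: IO ()
greet = do
  let name = "world"
  putStrLn ("hello, " ++ name)

-- is equivalent to:
greetDesugared :: IO ()
greetDesugared = let name = "world" in do putStrLn ("hello, " ++ name)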
Many functional programming languages have support for curried parameters.
To support currying, the parameters of a function essentially form a tuple in which trailing parameters can be omitted, yielding a new function that requires a smaller tuple.
I'm thinking of designing a language that always uses records (aka named parameters) for function parameters.
Thus simple math functions in my make believe language would be:
add { left : num, right : num } = ...
minus { left : num, right : num } = ...
You can pass in any record to those functions so long as it has those two named parameters (it can have more than just "left" and "right").
If the record supplies only one of the named parameters, it creates a new function:
minus5 :: { left : num } -> num
minus5 = minus { right : 5 }
I borrowed some of Haskell's notation above.
Has anyone seen a language that does this?
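In plain Haskell, the closest I can get is to pass the record explicitly and fill it in two steps, which is not quite the same thing, but roughly (only a sketch, the names mirror my pseudo-code above):

data Args = Args { left :: Int, right :: Int }

minus :: Args -> Int
minus args = left args - right args

-- "minus5" fixes the right field; the caller still supplies left
minus5 :: Int -> Int
minus5 l = minus Args { left = l, right = 5 }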
OCaml has named parameters, and currying is automatic (though a type annotation is sometimes required when dealing with optional parameters), but they are not tupled:
Objective Caml version 3.11.2
# let f ~x ~y = x + y;;
val f : x:int -> y:int -> int = <fun>
# f ~y:5;;
- : x:int -> int = <fun>
# let g = f ~y:5;;
val g : x:int -> int = <fun>
# g ~x:3;;
- : int = 8
Sure, Mathematica can do that sort of thing.