I'm learning about optimization in Composing Programs. I have three functions: memo applies the memoization technique, count is a wrapper so that every time fib is called a counter is incremented, and fib is just a recursive implementation of Fibonacci.
def memo(f):
    cache = {}
    def memoized(n):
        if n not in cache:
            cache[n] = f(n)
        return cache[n]
    return memoized
def count(f):
    def counted(*args):
        counted.call_count += 1
        return f(*args)
    counted.call_count = 0
    return counted
def fib(n):
    if n == 0:
        return 0
    if n == 1:
        return 1
    return fib(n - 2) + fib(n - 1)
To call the functions, I assign count(fib) to counted_fib, so every call is recorded. And the fib variable is reassigned (now fib is memo(counted_fib), not the original function):
counted_fib = count(fib)
fib = memo(counted_fib)
My problem starts when the function fib is called (e.g. fib(19)). Everything works as it should, but the recursion inside the original function (return fib(n - 2) + fib(n - 1)) triggers fib as memo(counted_fib).
This is not a case of something not working. I'm just trying to understand why this happens when one thing is declared before the other.
If what you are confused about is the way the fib name behaves when it is reassigned from the original fib() function to memo(counted_fib), I think this article about Python's namespace behavior will help.
Python statements run in order, and a def statement simply binds a function object to the name fib in the global namespace. The recursive call inside the function body looks that name up in the global namespace at call time, not at definition time. So when you later rebind fib to memo(counted_fib), every subsequent lookup of fib, including the one inside the original function body, finds the memoized wrapper.
Hope it helps!
In Python, everything is an object, functions included.
When you write def fib it creates a function object, binds it to the name fib, and this object is callable.
The recursive call inside it refers to itself by name, not by a direct reference to the object.
When you rebind fib = memo(counted_fib), you point the name fib at a different object (in this case the wrapper returned by memo), so the recursive calls now resolve to that wrapper.
The count() function keeps track of each time the fib function is called by incrementing a call_count attribute, which can be checked afterwards. Then memo(counted_fib) provides caching for the counted_fib function. It does so by maintaining a cache dictionary of already computed values, ensuring that expensive recursive calls to counted_fib are not duplicated when it is called multiple times with the same inputs. Since every call to fib inside counted_fib gets counted, memo(counted_fib) ensures that each distinct counted_fib call runs only once and its result is cached for future access. Thus, after assigning counted_fib, it is possible to make successive calls to fib without overwriting its original definition or spending more time in expensive recursive calls.
I have been learning Haskell for a few days, and laziness comes up as something of a buzzword. Because I am not familiar with laziness (I have mainly worked with non-functional languages), it is not an easy concept for me.
So I am asking for any exercise/example that shows me what laziness is in practice.
Thanks in advance ;)
In Haskell you can create an infinite list. For instance, all natural numbers:
[1,2..]
If Haskell loaded all the items into memory at once, that wouldn't be possible; you would need infinite memory.
Laziness allows you to get the numbers as you need them.
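For example, here is a small sketch you can load into GHCi; take and takeWhile only force as many elements of the infinite list as they need:
main :: IO ()
main = do
  print (take 10 [1,2..])           -- [1,2,3,4,5,6,7,8,9,10]
  print (takeWhile (< 20) [1,2..])  -- [1..19]; the rest of the list is never produced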
Here's something interesting: dynamic programming, the bane of every intro. algorithms student, becomes simple and natural when written in a lazy and functional language. Take the example of string edit distance. This is the problem of measuring how similar two DNA strands are or how many bytes changed between two releases of a binary executable or just how 'different' two strings are. The dynamic programming algorithm, expressed mathematically, is simple:
let:
  • d_{i,j} be the edit distance between
      the first i characters of the first string, which has length m,
      and the first j characters of the second string, which has length n
  • a_i be the i-th character of the first string
  • b_j be the j-th character of the second string
define:
  d_{i,0} = i                      (0 <= i <= m)
  d_{0,j} = j                      (0 <= j <= n)
  d_{i,j} = d_{i-1,j-1}            if a_i == b_j
  d_{i,j} = min { d_{i-1,j}   + 1  (delete)
                , d_{i,j-1}   + 1  (insert)
                , d_{i-1,j-1} + 1  (modify)
                }                  if a_i != b_j
return d_{m,n}
And expressed in Haskell, the algorithm follows the same shape:
import Data.Array ((!))
import qualified Data.Array as Array

distance a b = d m n
  where (m, n) = (length a, length b)
        a'     = Array.listArray (1, m) a
        b'     = Array.listArray (1, n) b

        d i 0 = i
        d 0 j = j
        d i j
          | a' ! i == b' ! j = ds ! (i - 1, j - 1)
          | otherwise        = minimum [ ds ! (i - 1, j) + 1
                                       , ds ! (i, j - 1) + 1
                                       , ds ! (i - 1, j - 1) + 1
                                       ]

        ds = Array.listArray bounds
               [d i j | (i, j) <- Array.range bounds]
        bounds = ((0, 0), (m, n))
In a strict language we wouldn't be able to define it so straightforwardly because the cells of the array would be strictly evaluated. In Haskell we're able to have the definition of each cell reference the definitions of other cells because Haskell is lazy – the definitions are only evaluated at the very end when d m n asks the array for the value of the last cell. A lazy language lets us set up a graph of standing dominoes; it's only when we ask for a value that we need to compute the value, which topples the first domino, which topples all the other dominoes. (In a strict language, we would have to set up an array of closures, doing the work that the Haskell compiler does for us automatically. It's easy to transform implementations between strict and lazy languages; it's all a matter of which language expresses which idea better.)
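As a quick check, assuming the definition above is compiled together with its Data.Array imports, one might run:
main :: IO ()
main = do
  print (distance "kitten" "sitting")  -- 3 (substitute k/s, substitute e/i, insert g)
  print (distance "flaw" "lawn")       -- 2 (delete f, insert n)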
The blog post does a much better job of explaining all this.
So I am asking for any exercise/example that shows me what laziness is in practice.
Click on Lazy on haskell.org to get the canonical example. There are many other examples like it that illustrate the concept of delayed evaluation, which benefits from not executing some parts of the program logic. Lazy certainly does not mean slow; it is the opposite of the eager evaluation common to most imperative programming languages.
Laziness is a consequence of non-strict function evaluation. Consider the "infinite" list of 1s:
ones = 1:ones
At the time of definition, the (:) function isn't evaluated; ones is just a promise to do so when it is necessary. Such a time would be when you pattern match:
myHead :: [a] -> a
myHead (x:rest) = x
When myHead ones is called, x and rest are needed, but the pattern match against 1:ones simply binds x to 1 and rest to ones; we don't need to evaluate ones any further at this time, so we don't.
The syntax for infinite lists, using the .. "operator" for arithmetic sequences, is sugar for calls to enumFrom and enumFromThen. That is
-- An infinite list of ones
ones = [1,1..] -- enumFromThen 1 1
-- The natural numbers
nats = [1..] -- enumFrom 1
so again, laziness just comes from the non-strict evaluation of enumFrom.
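To see this in action, here is a small self-contained sketch based on the ones and myHead definitions above:
ones :: [Int]
ones = 1 : ones

myHead :: [a] -> a
myHead (x:rest) = x

main :: IO ()
main = do
  print (myHead ones)     -- forces only the first (:) cell, prints 1
  print (take 5 ones)     -- forces five cells, prints [1,1,1,1,1]
  -- print (length ones)  -- would never finish: length needs the whole spine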
Unlike most other languages, Haskell decouples the definition of an object from its creation (evaluation).... You can easily watch this in action using Debug.Trace.
You can define a variable like this
aValue = 100
(the value on the right hand side could include a complicated evaluation, but let's keep it simple)
To see if this code ever gets called, you can wrap the expression in Debug.Trace.trace like this
import Debug.Trace
aValue = trace "evaluating aValue" 100
Note that this doesn't change the definition of aValue; it just forces the program to output "evaluating aValue" whenever this expression is actually created at runtime.
(Also note that trace is considered unsafe for production code and should only be used for debugging.)
Now, try two experiments.... Write two different mains
main = putStrLn $ "The value of aValue is " ++ show aValue
and
main = putStrLn "'sup"
When run, you will see that the first program actually creates aValue (you will see the "evaluating aValue" message), while the second does not.
This is the idea of laziness.... You can put as many definitions in a program as you want, but only those that are used will be actually created at runtime.
The real use of this can be seen with objects of infinite size. Many lists, trees, etc. have an infinite number of elements. Your program will use only some finite number of values, but you don't want to muddy the definition of the object with this messy fact. Take for instance the infinite lists given in other answers here....
[1..] -- = [1,2,3,4,....]
You can again see laziness in action here using trace, although you will have to write out a variant of [1..] in an expanded form to do this.
f :: Int -> [Int]
f x = trace ("creating " ++ show x) (x : f (x + 1))  -- remember, the trace part doesn't change the expression, it is just used for debugging
Now you will see that only the elements you use are created.
main = putStrLn $ "the list is " ++ show (take 4 $ f 1)
yields
creating 1
creating 2
creating 3
creating 4
the list is [1,2,3,4]
and
main = putStrLn "yo"
will not show any item being created.
I'm trying to write a function in Haskell that calculates all factors of a given number except itself.
The result should look something like this:
factorlist 15 => [1,3,5]
I'm new to Haskell and the whole recursion subject, which I'm pretty sure I'm supposed to apply in this example, but I don't know where or how.
My idea was to compare the given number with the first element of a list from 1 to n `div` 2
using the mod function, but somehow recursively, and if the result is 0 then I add the number to a new list. (I hope this makes sense.)
I would appreciate any help on this matter
Here is my code so far (it doesn't work, but it somehow illustrates my idea):
factorList :: Int -> [Int]
factorList n |n `mod` head [1..n`div`2] == 0 = x:[]
There are several ways to handle this. But first of all, let's write a small helper:
isFactorOf :: Integral a => a -> a -> Bool
isFactorOf x n = n `mod` x == 0
That way we can write 12 `isFactorOf` 24 and get either True or False. For the recursive part, let's assume we use a function with two arguments: the first being the number we want to factorize, the second being the factor we're currently testing. We only test factors less than or equal to n `div` 2, and this leads to:
createList n f | f <= n `div` 2 = if f `isFactorOf` n
                                    then f : next
                                    else next
               | otherwise      = []
  where next = createList n (f + 1)
So if the second parameter is a factor of n, we prepend it to the list and proceed; otherwise we just proceed. We do this only as long as f <= n `div` 2. Now, in order to create factorList, we can simply use createList with an appropriate starting value for the second parameter:
factorList n = createList n 1
The recursion is hidden in createList. As such, createList is a worker, and you could hide it in a where inside of factorList.
Note that one could easily define factorList with filter or list comprehensions:
factorList' n = filter (`isFactorOf` n) [1 .. n `div` 2]
factorList'' n = [ x | x <- [1 .. n `div` 2], x `isFactorOf` n ]
But in this case you wouldn't have written the recursion yourself.
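For reference, loading the definitions above (together with isFactorOf) and running a small main, one would expect:
main :: IO ()
main = do
  print (factorList 15)    -- [1,3,5]
  print (factorList' 15)   -- [1,3,5]
  print (factorList'' 15)  -- [1,3,5]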
Further exercises:
Try to implement the filter function yourself.
Create another function which returns only prime factors. You can either use your previous result and write a prime filter, or write a recursive function which generates them directly (the latter is faster).
@Zeta's answer is interesting. But if you're new to Haskell like I am, you may want a "simple" answer to start with. (Just to get the basic recursion pattern... and to understand the indenting, and things like that.)
I'm not going to divide anything by 2 and I will include the number itself. So factorlist 15 => [1,3,5,15] in my example:
factorList :: Int -> [Int]
factorList value = factorsGreaterOrEqual 1
  where
    factorsGreaterOrEqual test
      | (test == value) = [value]
      | (value `mod` test == 0) = test : restOfFactors
      | otherwise = restOfFactors
      where restOfFactors = factorsGreaterOrEqual (test + 1)
The first line is the type signature, which you already knew about. The type signature doesn't have to live right next to the list of pattern definitions for a function (though the patterns themselves need to be all together on sequential lines).
Then factorList is defined in terms of a helper function. This helper function is defined in a where clause...that means it is local and has access to the value parameter. Were we to define factorsGreaterOrEqual globally, then it would need two parameters as value would not be in scope, e.g.
factorsGreaterOrEqual 4 15 => [5,15]
You might argue that factorsGreaterOrEqual is a useful function in its own right. Maybe it is, maybe it isn't. But in this case we're going to say it isn't of general use besides to help us define factorList...so using the where clause and picking up value implicitly is cleaner.
The indentation rules of Haskell are (to my tastes) weird, but here they are summarized. I'm indenting with two spaces here because it grows too far right if you use 4.
A list of boolean tests with that pipe character in front of them is called a set of "guards" in Haskell. I simply establish the terminal condition as being when the test hits the value; so factorsGreaterOrEqual N = [N] if we were doing a call to factorList N. Then we decide whether to prepend the test number to the list by checking whether dividing the value by it leaves no remainder. (otherwise is not special syntax, just a Prelude value equal to True, used for the fall-through case kind of like default in C-like switch statements.)
Showing another level of nesting and another implicit parameter demonstration, I added a where clause to locally define a function called restOfFactors. There is no need to pass test as a parameter to restOfFactors because it lives "in the scope" of factorsGreaterOrEqual...and as that lives in the scope of factorList then value is available as well.
I want to know what call-by-need is.
I searched Wikipedia and found it here: http://en.wikipedia.org/wiki/Evaluation_strategy,
but I could not understand it properly.
If anyone can explain with an example and point out the difference with call-by-value, it would be a great help.
Suppose we have the function
square(x) = x * x
and we want to evaluate square(1+2).
In call-by-value, we do
square(1+2)
square(3)
3*3
9
In call-by-name, we do
square(1+2)
(1+2)*(1+2)
3*(1+2)
3*3
9
Notice that since we use the argument twice, we evaluate it twice. That would be wasteful if the argument evaluation took a long time. That's the issue that call-by-need fixes.
In call-by-need, we do something like the following:
square(1+2)
let x = 1+2 in x*x
let x = 3 in x*x
3*3
9
In step 2, instead of copying the argument (like in call-by-name), we give it a name. Then in step 3, when we notice that we need the value of x, we evaluate the expression for x. Only then do we substitute.
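In Haskell, which is a call-by-need language, you can observe this sharing with Debug.Trace (a debugging-only tool); a small sketch:
import Debug.Trace

square :: Int -> Int
square x = x * x

main :: IO ()
main = print (square (trace "evaluating the argument" (1 + 2)))
-- The message is printed once, then 9: x is bound to a single shared thunk,
-- so even though x is used twice, the argument 1 + 2 is evaluated only once.
-- Under call-by-name the message would appear twice.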
BTW, if the argument expression produced something more complicated, like a closure, there might be more shuffling of lets around to eliminate the possibility of copying. The formal rules are somewhat complicated to write down.
Notice that we "need" values for the arguments to primitive operations like + and *, but for other functions we take the "name, wait, and see" approach. We would say that the primitive arithmetic operations are "strict". It depends on the language, but usually most primitive operations are strict.
Notice also that "evaluation" still means to reduce to a value. A function call always returns a value, not an expression. (One of the other answers got this wrong.) OTOH, lazy languages usually have lazy data constructors, which can have components that are evaluated on need, i.e., when extracted. That's how you can have an "infinite" list---the value you return is a lazy data structure. But call-by-need vs call-by-value is a separate issue from lazy vs strict data structures. Scheme has lazy data constructors (streams), although since Scheme is call-by-value, the constructors are syntactic forms, not ordinary functions. And Haskell is non-strict (in practice, call-by-need), but it has ways of defining strict data types.
If it helps to think about implementations, then one implementation of call-by-name is to wrap every argument in a thunk; when the argument is needed, you call the thunk and use the value. One implementation of call-by-need is similar, but the thunk is memoizing; it only runs the computation once, then it saves it and just returns the saved answer after that.
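Here is a rough sketch of those two thunk implementations in Haskell, using IORef for the memoization; the names NameThunk, NeedThunk, forceName, forceNeed and so on are made up for illustration:
import Data.IORef

-- Call-by-name: a thunk is just a suspended computation;
-- forcing it re-runs the computation every time.
newtype NameThunk a = NameThunk (IO a)

forceName :: NameThunk a -> IO a
forceName (NameThunk run) = run

-- Call-by-need: the thunk memoizes its result after the first force.
newtype NeedThunk a = NeedThunk (IORef (Either (IO a) a))

mkNeed :: IO a -> IO (NeedThunk a)
mkNeed run = NeedThunk <$> newIORef (Left run)

forceNeed :: NeedThunk a -> IO a
forceNeed (NeedThunk ref) = do
  state <- readIORef ref
  case state of
    Right v  -> return v          -- already evaluated once: reuse the saved answer
    Left run -> do
      v <- run                    -- evaluate the suspended computation...
      writeIORef ref (Right v)    -- ...and remember the result
      return v

main :: IO ()
main = do
  let expensive = putStrLn "evaluating the argument" >> return (1 + 2 :: Int)
  let byName = NameThunk expensive
  _ <- forceName byName           -- prints the message
  _ <- forceName byName           -- prints it again: call-by-name re-evaluates
  byNeed <- mkNeed expensive
  a <- forceNeed byNeed           -- prints the message once
  b <- forceNeed byNeed           -- prints nothing: the value is cached
  print (a * b)                   -- 9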
Imagine a function:
fun add(a, b) {
return a + b
}
And then we call it:
add(3 * 2, 4 / 2)
In a call-by-value (eager) language this will be evaluated like so:
a = 3 * 2 = 6
b = 4 / 2 = 2
return a + b = 6 + 2 = 8
The function will return the value 8.
In a call-by-need language (also called a lazy language) this is evaluated like so:
a = 3 * 2
b = 4 / 2
return a + b = 3 * 2 + 4 / 2
The function will return a thunk standing for the expression 3 * 2 + 4 / 2. So far almost no computational resources have been spent. The whole expression will be computed only if its value is needed; say we wanted to print the result.
Why is this useful? Two reasons. First, if you accidentally include dead code, it doesn't weigh your program down, so the program can be a lot more efficient. Second, it allows you to do very cool things like efficiently calculating with infinite lists:
fun takeFirstThree(list) {
return [list[0], list[1], list[2]]
}
takeFirstThree([0 ... infinity])
A call-by-value language would hang there trying to create a list from 0 to infinity. A lazy language will simply return [0,1,2].
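In Haskell this works directly; a small sketch:
takeFirstThree :: [a] -> [a]
takeFirstThree list = [list !! 0, list !! 1, list !! 2]

main :: IO ()
main = print (takeFirstThree [0 ..])  -- prints [0,1,2]; the rest of the infinite list is never built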
A simple, yet illustrative example:
function choose(cond, arg1, arg2) {
if (cond)
do_something(arg1);
else
do_something(arg2);
}
choose(true, 7*0, 7/0);
Now let's say we're using the eager evaluation strategy: it would calculate both 7*0 and 7/0 eagerly. If it is a lazy evaluation strategy (call-by-need), it would just send the expressions 7*0 and 7/0 through to the function without evaluating them.
The difference? You would expect to execute do_something(0) because the first argument gets used, but it actually depends on the evaluation strategy:
If the language evaluates eagerly, then it will, as stated, evaluate 7*0 and 7/0 first, and what's 7/0? Divide-by-zero error.
But if the evaluation strategy is lazy, it will see that it doesn't need to calculate the division, and it will call do_something(0) as we were expecting, with no errors.
In this example, the lazy evaluation strategy can save the execution from producing errors. In a similar manner, it can save the execution from performing unnecessary evaluation that it won't use (the same way it didn't use 7/0 here).
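For comparison, here is a sketch of the same idea in Haskell, where arguments are only evaluated if they are used:
choose :: Bool -> Int -> Int -> Int
choose cond arg1 arg2 = if cond then arg1 else arg2

main :: IO ()
main = print (choose True (7 * 0) (7 `div` 0))
-- prints 0; the 7 `div` 0 thunk is never forced, so no divide-by-zero exception is raised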
Here's a concrete example for a bunch of different evaluation strategies written in C. I'll specifically go over the difference between call-by-name, call-by-value, and call-by-need, which is kind of a combination of the previous two, as suggested by Ryan's answer.
#include <stdio.h>

int x = 1;
int y[3] = {1, 2, 3};
int i = 0;
int k = 0;
int j = 0;

int foo(int a, int b, int c) {
    i = i + 1;
    // 2 for call-by-name
    // 1 for call-by-value, call-by-value-result, and call-by-reference
    // unsure what call-by-need will do here; will likely be 2, but could have evaluated earlier than needed
    printf("a is %i\n", a);
    b = 2;
    // 1 for call-by-value and call-by-value-result
    // 2 for call-by-reference, call-by-need, and call-by-name
    printf("x is %i\n", x);
    // this triggers multiple increments of k for call-by-name
    j = c + c;
    // we don't actually care what j is, we just don't want it to be optimized out by the compiler
    printf("j is %i\n", j);
    // 2 for call-by-name
    // 1 for call-by-need, call-by-value, call-by-value-result, and call-by-reference
    printf("k is %i\n", k);
    return 0;  // the return value itself isn't used in this example
}

int main() {
    int ans = foo(y[i], x, k++);
    // 2 for call-by-value-result, call-by-name, call-by-reference, and call-by-need
    // 1 for call-by-value
    printf("x is %i\n", x);
    return 0;
}
The part we're most interested in is the fact that foo is called with k++ as the actual parameter for the formal parameter c.
Note how the ++ postfix operator works: k++ returns the current value of k, and only after that result is returned is k incremented by 1. That is, the result of k++ is just k.
We can ignore all of the code inside foo up until the line j = c + c (the second section).
Here's what happens for this line under call-by-value:
When the function is first called, before it encounters the line j = c + c, because we're doing call-by-value, c will have the value of evaluating k++. Since evaluating k++ returns k, and k is 0 (from the top of the program), c will be 0. However, we did evaluate k++ once, which will set k to 1.
The line becomes j = 0 + 0, which behaves exactly like how you'd expect, by setting j to 0 and leaving c at 0.
Then, when we run printf("k is %i\n", k); we get that k is 1, because we evaluated k++ once.
Here's what happens for the line under call-by-name:
Since the line contains c and we're using call-by-name, we replace the text c with the text of the actual argument, k++. Thus, the line becomes j = (k++) + (k++).
We then run j = (k++) + (k++). One of the (k++)s will be evaluated first, returning 0 and setting k to 1. Then, the second (k++) will be evaluated, returning 1 (because k was set to 1 by the first evaluation of k++), and setting k to 2. Thus, we end up with j = 0 + 1 and k set to 2.
Then, when we run printf("k is %i\n", k);, we get that k is 2 because we evaluated k++ twice.
Finally, here's what happens for the line under call-by-need:
When we encounter j = c + c; we recognize that this is the first time the parameter c is evaluated. Thus we need to evaluate its actual argument (once) and store that value to be the evaluation of c. Thus, we evaluate the actual argument k++, which will return k, which is 0, and therefore the evaluation of c will be 0. Then, since we evaluated k++, k will be set to 1. We then use this stored evaluation as the evaluation for the second c. That is, unlike call-by-name, we do not re-evaluate k++. Instead, we reuse the previously evaluated initial value for c, which is 0. Thus, we get j = 0 + 0; just as if c was pass-by-value. And, since we only evaluated k++ once, k is 1.
As explained in the previous step, j = c + c is j = 0 + 0 under call-by-need, and it runs exactly as you'd expect.
When we run printf("k is %i\n", k);, we get that k is 1 because we only evaluated k++ once.
Hopefully this helps to differentiate how call-by-value, call-by-name, and call-by-need work. If it would be helpful to differentiate call-by-value and call-by-need more clearly, let me know in a comment and I'll explain the code earlier on in foo and why it works the way it does.
I think this line from Wikipedia sums things up nicely:
Call by need is a memoized variant of call by name, where, if the function argument is evaluated, that value is stored for subsequent use. If the argument is pure (i.e., free of side effects), this produces the same results as call by name, saving the cost of recomputing the argument.
In Haskell in 5 steps the factorial function is defined as follows:
let fac n = if n == 0 then 1 else n * fac (n-1)
But for Hugs, it says that fac needs to be in fac.h. Can anyone explain why this is the case? Missing the ability to define named functions seems like a massive limitation for an interpreter.
The basic answer as far as I can tell is that the Hugs interactive toplevel is essentially an expression parser, and function/data definitions are not expressions. Your example actually would work if you made it an expression and wrote let fac n = if n == 0 then 1 else n * fac (n-1) in fac 19. Adding support for this would be a pretty big effort, and apparently the Hugs implementors thought that it was good enough to require function/data definitions to be in files.
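For example (the file name Fac.hs is just an illustration):
-- At the Hugs prompt only expressions are accepted, so this line works as-is:
--   let fac n = if n == 0 then 1 else n * fac (n - 1) in fac 19

-- Fac.hs: a top-level declaration has to live in a file that you then :load
fac :: Integer -> Integer
fac n = if n == 0 then 1 else n * fac (n - 1)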
At its interactive prompt, Hugs lacks the ability to define any named functions (recursive or not). It also lacks the ability to define data types.