Suppose I have this in a long-running .ml file:
let l = [1;2;3]
let l = [1;2;3;4]
let _ = ...
Will the first l = [1;2;3] be GCed at some point?
What if the code is like this:
let l = [1;2;3]
let l = [1;2;3;4]
let l = [1;2;3]
let _ = ...
There are three bindings of l: the first is shadowed by the second, which is in turn shadowed by the third.
Since the GC's schedule is not deterministic, are the following situations possible?
By the time the third l is defined, the GC has not yet collected the first [1;2;3], so the same memory is reused or re-referenced.
Immediately after the second l, the GC collects the first [1;2;3], and the third l then allocates fresh memory for [1;2;3].
No. In the OCaml toplevel, defining a new value l does not free the previous l, which (as far as I remember the implementation) lives forever. It doesn't matter much, because the list is a constant and only takes space proportional to the source code that engendered it, much like binary code does.
$ rlwrap ocaml
OCaml version 4.00.1
# let l = [ 1 ] ;;
val l : int list = [1]
# let w = Weak.create 1 ;;
val w : '_a Weak.t = <abstr>
# Weak.set w 0 (Some l) ;;
- : unit = ()
# Gc.full_major () ;;
- : unit = ()
# Weak.check w 0 ;;
- : bool = true
#
This true means that l still lives in memory.
# let l = [ 2 ] ;;
val l : int list = [2]
# Weak.check w 0 ;;
- : bool = true
# Gc.full_major () ;;
- : unit = ()
# Weak.check w 0 ;;
- : bool = true
#
And it still has not been freed, although it is no longer “reachable” for a fine definition of reachable (just not the definition the GC uses).
Neither compiler frees the original l either:
$ cat t.ml
let l = [ 1 ] ;;
let w = Weak.create 1 ;;
Weak.set w 0 (Some l) ;;
Gc.full_major () ;;
Printf.printf "%B\n" (Weak.check w 0) ;;
let l = [ 2 ] ;;
Printf.printf "%B\n" (Weak.check w 0) ;;
Gc.full_major () ;;
Printf.printf "%B\n" (Weak.check w 0) ;;
$ ocamlc t.ml
$ ./a.out
true
true
true
$ ocamlopt t.ml
$ ./a.out
true
true
true
Another example of the GC's definition of “reachability” being more approximative than the definition one might like is:
let g () = Gc.full_major ()
let f () = let l = [ 1 ] in (* do something with l; *) g(); 1
At the point when g is executed (called from f), the value l is no longer reachable (for a fine definition of reachable) and could be garbage-collected. It won't be because it is still referenced from the stack. The GC, with its coarse notion of reachable, will only be able to free it after f has terminated.
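To make this concrete, here is a sketch of that situation using the same Weak-pointer trick as above. The result shown is the typical one, not a guarantee: a compiler that tracks stack-slot liveness more precisely could in principle let l be collected earlier.

```ocaml
let w = Weak.create 1

let g () =
  Gc.full_major ();
  (* l is dead from f's point of view here, yet the GC's coarse,
     stack-based notion of reachability typically still sees it,
     so this usually prints true. *)
  Printf.printf "%B\n" (Weak.check w 0)

let f () =
  let l = [ 1 ] in
  Weak.set w 0 (Some l);
  g ();
  1

let () = ignore (f ())
```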
That depends on whether you still have references to it elsewhere: if l is the only reference, then yes, that reference is released on the second binding, and the list will get GCed when appropriate.
Lists are immutable in OCaml; list literals produce new list instances.
From a programming standpoint, you can treat your "third" list as an entirely new one (even though it contains the same elements as the first).
From an under-the-covers implementation standpoint, the system might be smart enough to reuse the memory from the first list on the third binding; I can't immediately think of an efficient way that could work, though, with the code as it's currently written.
So, at the third binding:
the first list might still be around
from a programming point of view, it will still be presented to you as an entirely new list
it's possible, but very unlikely, that the system under the covers can optimize memory use well enough to reuse the memory from the first binding. Most likely, unless it is referenced elsewhere, that first list has been or will be garbage collected.
NOTE
Based on @PascalCuoq's answer below, and some experimentation:
The behavior is quite different if you're defining the variables at the top level. The OCaml garbage collector treats top-level declarations as (permanent?) GC roots, so they won't be GCed even if they are no longer reachable from running code.
So, if the above example is executed at the top level, there will be three different l's in memory, none of which will be garbage collected.
Related
I've seen many OCaml programs that have all their functions at the top and then a unit definition at the end, like:
let rec factorial num =
  if num = 0 then 1
  else num * factorial (num - 1)

let () =
  let num2 = read_int () in
  print_int (factorial num2)
Why is this? Does it act like a main function? If so, you shouldn't be able to use several of them, right?
What is the best way to handle several inputs, for example? Writing several unit definitions?
Yes, a unit expression at the top level of a module acts like the main function of the module. I.e., it gets executed at the time the program is started.
You can have many unit expressions anywhere you can have one unit expression. The ; operator is specifically intended for such cases:
let () =
  Printf.printf "hello\n";
  Printf.printf "world\n"
As a side comment, I often write a main function in my main module:
let main () =
  (* main calculation of program *)

let () = main ()
This is possibly a holdover from all the years I wrote C code.
I have also seen this in other people's code (possibly there are a lot of us who used to write C code).
I really like Jeffrey's answer, but in case you want extra details and want to know what let () = foo () means, here is some extracurricular reading.
Abstractly speaking the operation of OCaml programs could be defined as a machine that reduces expressions until they become irreducible. And an irreducible expression is called a value. For example, 5 + 3 is reduced to 8 and there is no other way to reduce 8 so 8 is a value. A more complex example of a value is (fun x -> x + 1). And a more complex example of expression would be
(fun x -> x + 1) 5
Which is reduced to 6.
The whole semantics of the language is defined as a set of such reduction rules. And a program in OCaml is an ordered list of definitions of the form,
let <pattern> = <expression>
So when an OCaml program is evaluated (executed), it reduces the <expression> part of each definition and matches the result against the <pattern> on the left-hand side, e.g.,
let 5 = 2 + 3
is a valid definition in OCaml. It will reduce the 2 + 3 expression to 5 and then try to match the resulting value with the left-hand side. If it matches, then the next definition is evaluated, and so on. If it doesn't, the program is terminated (with a Match_failure exception).
Here 5 is a very simple value that matches only with 5 and, in general, your values will be more complex. However, there is a value that is even more primitive than 5. It is a value of type unit that has only one inhabitant, denoted as (). And this is also the value, to which colloquially expressions with side effects are reduced. Since in OCaml every expression must reduce to a value, we need a value that represents no value, and that is unit. For example print_endline "foo" reduces to () with a side effect of emitting string foo to the standard output.
Therefore, when we write
let foo () = print_endline "foo"
let () = foo ()
We evaluate (reduce) the application foo () until it reaches the () value, which indicates that we have fully reduced foo ().
We could also use a wildcard matcher and write
let _ = foo ()
or bind the result to a variable, e.g.,
let bar = foo ()
But it is considered good style to use () on the left-hand side of an expression that evaluates to (), to indicate that the right-hand side doesn't produce any interesting value. It also prevents common errors, e.g.,
let () = foo
will yield an error saying that foo has type unit -> unit, which can't be matched with unit, and it even provides a hint: Did you forget to provide `()' as argument?
Suppose we have the following toy binary tree structure:
datatype Tree = Leaf | Branch of Tree * Tree
fun left(Branch(l,r))= l
fun right(Branch(l,r))= r
And suppose that we have some large and expensive to compute tree
val c: Tree= …
val d: Tree= Branch(c,c)
Can we verify in the SML/NJ interpreter that left(d) and right(d) indeed refer to the same place in memory?
(This question was borne out of working with lazy streams which may possibly contain cycles, and trying to debug whether the memoization is working correctly.)
I think we can do this by casting both values to word using Unsafe.cast, which reinterprets the pointers as numbers that can be compared with =. Here is an is function that implements this idea:
infix 4 is (* = > < >= ... *)
fun op is(a: 'a, b: 'a) = (Unsafe.cast a: word) = Unsafe.cast b
Note that:
I needed to annotate (a, b) to make sure the type checker restricts the arguments to the same type, because the type signature of is would otherwise be 'a * 'b -> bool
I needed to annotate the first Unsafe.cast application to prevent SML/NJ from having to use polyEqual and thus avoid emitting Warning: calling polyEqual
Type inference can figure out the rest just fine
Here's an example that illustrates structural sharing in vectors:
local
fun const a _ = a
val v = Vector.tabulate(1000, const #[1,2,3])
val v = Vector.update(v, 230, #[1,2,3]) (* same value, but new allocation *)
in
val test1 = Vector.sub(v, 0) is Vector.sub(v, 999)
val test2 = Vector.sub(v, 0) is Vector.sub(v, 230)
end
And it does work as expected. The repl answers with
(* val test1 = true : bool
val test2 = false : bool *)
Now here's the tree example from your question:
local
datatype Tree
= Leaf
| Branch of Tree * Tree
fun left (Branch(l,r)) = l
fun right (Branch(l,r)) = r
val c = Branch(Leaf,Leaf) (* imagine it being something complex *)
val d = Branch(c, c)
in
val test3 = left d is right d
end
When we try it, it answers correctly:
(* val test3 = true : bool *)
I think this answers your question. Below this point I talk about the choice of word, and a little about what might be happening internally when we convert to it.
As far as I'm aware, SML/NJ does pointer tagging like many Lisps, V8, OCaml, etc., which means we want to cast specifically to a type that isn't heap-allocated, because we want to read the pointer value, not misinterpret heap objects.
I think word works fine for that purpose; it's immediate like an int, and unsigned unlike it, so it should correspond to the memory address (don't hold me to that).
There seems to be a bug* that prevents you from inspecting the word value directly in the REPL; it may be the pointer tagging at play.
* at least the compiler reports it as one, as of v110.99
One workaround is to immediately convert the value to a different representation (perhaps being boxed is required?), like a string, or Word64.word
fun addrOf x = Word.toString (Unsafe.cast x)
Indeed, when we use our newly defined addrOf function to compare the cast value against its stringified form, we can observe the effects of pointer tagging:
(* We'll need these definitions onwards, might as well have them here: *)
infix 5 >> <<
val op >> = Word.>>
val op << = Word.<<
val unsafeWord = Option.valOf o Word.fromString
local
val x = SOME 31 (* dummy boxed value *)
val addr = unsafeWord (addrOf x)
in
val test4 = Unsafe.cast x = addr
val test5 = Unsafe.cast x >> 0w1 = addr >> 0w1 (* get rid of lowest bit *)
end
(* val test4 = false : bool
val test5 = true : bool *)
So then, if the tag is just the lowest bit of a machine word in SML/NJ, as in many tagged-pointer implementations, then the pointer value should be, accurately, the cast value shifted right once and then left once again.
fun addrOf x = Unsafe.cast x >> 0w1 << 0w1
The reason we do this seemingly no-op conversion (remember, all pointers are even) is that it properly tags the cast word value in the process.
If we shifted left first and then right, the tag itself would find its way into the value with the first operation as the coerced pointer turns into a proper word; that's why we shift right first instead. Shifting left afterwards zero-fills the lowest bit, so no information about the address is lost, but an immediate-value tag ends up properly present internally.
local
fun strAddrOf x = Word.toString (Unsafe.cast x)
fun isEven x = Word.andb (x, 0w1) = 0w0
val x = SOME 42
val ogAddr = unsafeWord (strAddrOf x) (* a known-correct conversion: no shifting takes place *)
val badAddr = Unsafe.cast x << 0w1 >> 0w1
val goodAddr = Unsafe.cast x >> 0w1 << 0w1
in
val test6 = ogAddr = badAddr
val test7 = ogAddr = goodAddr
val test8 = isEven ogAddr
end
(* val test6 = false : bool
val test7 = true : bool
val test8 = true : bool *)
This shifting in addrOf allows you to get the pointer value directly, without intermediate conversions (and boxing) to string or Word64.word. Of course, this solution breaks down with actual unboxed values, so it's good to test whether the object is boxed in the first place (Unsafe.boxed) in your addrOf definition, and return 0wx0 when you are working with immediates.
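For completeness, a guarded addrOf along those lines might look like this. This is a sketch: it reuses the >> and << fixity declarations above and assumes Unsafe.boxed behaves as described.

```sml
fun addrOf x =
  if Unsafe.boxed x
  then Unsafe.cast x >> 0w1 << 0w1 (* clear the tag bit, leaving a tagged word *)
  else 0wx0                        (* immediate value: it has no address *)
```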
Hope this works for your purposes. It certainly did for mine so far!
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
I have been learning Haskell for a few days and laziness is something of a buzzword. Because I am not familiar with laziness (I have been working mainly with non-functional languages), it is not an easy concept for me.
So, I am asking for any exercise/example that shows me what laziness is in fact.
Thanks in advance ;)
In Haskell you can create an infinite list. For instance, all natural numbers:
[1,2..]
If Haskell loaded all the items in memory at once that wouldn't be possible. To do so you would need infinite memory.
Laziness allows you to get the numbers as you need them.
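A minimal sketch of this in action: take consumes only as much of the infinite list as it needs, so the program terminates.

```haskell
main :: IO ()
main = print (take 5 [1,2..])
-- prints [1,2,3,4,5]; the remaining numbers are never constructed
```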
Here's something interesting: dynamic programming, the bane of every intro. algorithms student, becomes simple and natural when written in a lazy and functional language. Take the example of string edit distance. This is the problem of measuring how similar two DNA strands are or how many bytes changed between two releases of a binary executable or just how 'different' two strings are. The dynamic programming algorithm, expressed mathematically, is simple:
let:
• d_{i,j} be the edit distance of
the first string at index i, which has length m
and the second string at index j, which has length n
• let a_i be the i^th character of the first string
• let b_j be the j^th character of the second string
define:
d_{i,0} = i (0 <= i <= m)
d_{0,j} = j (0 <= j <= n)
d_{i,j} = d_{i - 1, j - 1} if a_i == b_j
d_{i,j} = min { if a_i != b_j
d_{i - 1, j} + 1 (delete)
d_{i, j - 1} + 1 (insert)
d_{i - 1, j - 1} + 1 (modify)
}
return d_{m, n}
And the algorithm, expressed in Haskell, follows the same shape of the algorithm:
distance a b = d m n
  where
    (m, n) = (length a, length b)
    a' = Array.listArray (1, m) a
    b' = Array.listArray (1, n) b
    d i 0 = i
    d 0 j = j
    d i j
      | a' ! i == b' ! j = ds ! (i - 1, j - 1)
      | otherwise = minimum [ ds ! (i - 1, j) + 1
                            , ds ! (i, j - 1) + 1
                            , ds ! (i - 1, j - 1) + 1
                            ]
    ds = Array.listArray bounds
           [d i j | (i, j) <- Array.range bounds]
    bounds = ((0, 0), (m, n))
In a strict language we wouldn't be able to define it so straightforwardly because the cells of the array would be strictly evaluated. In Haskell we're able to have the definition of each cell reference the definitions of other cells because Haskell is lazy – the definitions are only evaluated at the very end when d m n asks the array for the value of the last cell. A lazy language lets us set up a graph of standing dominoes; it's only when we ask for a value that we need to compute the value, which topples the first domino, which topples all the other dominoes. (In a strict language, we would have to set up an array of closures, doing the work that the Haskell compiler does for us automatically. It's easy to transform implementations between strict and lazy languages; it's all a matter of which language expresses which idea better.)
The blog post does a much better job of explaining all this.
So, I am asking for any exercise/example that shows me what laziness is in fact.
Click on Lazy on haskell.org to get the canonical example. There are many other examples like it that illustrate the concept of delayed evaluation, which benefits from not executing some parts of the program logic. Lazy is certainly not slow; it is simply the opposite of the eager evaluation common to most imperative programming languages.
Laziness is a consequence of non-strict function evaluation. Consider the "infinite" list of 1s:
ones = 1:ones
At the time of definition, the (:) function isn't evaluated; ones is just a promise to do so when it is necessary. Such a time would be when you pattern match:
myHead :: [a] -> a
myHead (x:rest) = x
When myHead ones is called, x and rest are needed, but the pattern match against 1:ones simply binds x to 1 and rest to ones; we don't need to evaluate ones any further at this time, so we don't.
The syntax for infinite lists, using the .. "operator" for arithmetic sequences, is sugar for calls to enumFrom and enumFromThen. That is
-- An infinite list of ones
ones = [1,1..] -- enumFromThen 1 1
-- The natural numbers
nats = [1..] -- enumFrom 1
so again, laziness just comes from the non-strict evaluation of enumFrom.
Unlike many other languages, Haskell decouples the definition of a value from its evaluation.... You can easily watch this in action using Debug.Trace.
You can define a variable like this
aValue = 100
(the value on the right hand side could include a complicated evaluation, but let's keep it simple)
To see if this code ever gets called, you can wrap the expression in Debug.Trace.trace like this
import Debug.Trace
aValue = trace "evaluating aValue" 100
Note that this doesn't change the value of aValue; it just forces the program to output "evaluating aValue" whenever this expression is actually evaluated at runtime.
(Also note that trace is considered unsafe for production code and should only be used for debugging.)
Now, try two experiments.... Write two different mains
main = putStrLn $ "The value of aValue is " ++ show aValue
and
main = putStrLn "'sup"
When run, you will see that the first program actually evaluates aValue (you will see the "evaluating aValue" message), while the second does not.
This is the idea of laziness.... You can put as many definitions in a program as you want, but only those that are used will actually be evaluated at runtime.
The real use of this can be seen with objects of infinite size. Many lists, trees, etc. have an infinite number of elements. Your program will use only some finite number of values, but you don't want to muddy the definition of the object with this messy fact. Take for instance the infinite lists given in other answers here....
[1..] -- = [1,2,3,4,....]
You can again see laziness in action here using trace, although you will have to write out a variant of [1..] in an expanded form to do this.
f::Int->[Int]
f x = trace ("creating " ++ show x) (x:f (x+1)) --remember, the trace part doesn't change the expression, it is just used for debugging
Now you will see that only the elements you use are created.
main = putStrLn $ "the list is " ++ show (take 4 $ f 1)
yields
creating 1
creating 2
creating 3
creating 4
the list is [1,2,3,4]
and
main = putStrLn "yo"
will not show any item being created.
I'm currently reading Implementing functional languages: a tutorial by SPJ and the (sub)chapter I'll be referring to in this question is 3.8.7 (page 136).
The first remark there is that a reader following the tutorial has not yet implemented C scheme compilation (that is, of expressions appearing in non-strict contexts) of ECase expressions.
The solution proposed is to transform a Core program so that ECase expressions simply never appear in non-strict contexts. Specifically, each such occurrence creates a new supercombinator with exactly one variable, whose body corresponds to the original ECase expression, and the occurrence itself is replaced with a call to that supercombinator.
Below I present a (slightly modified) example of such a transformation from [1]:
t a b = Pack{2,1} ;
f x = Pack{2,2} (case t x 7 6 of
<1> -> 1;
<2> -> 2) Pack{1,0} ;
main = f 3
== transformed into ==>
t a b = Pack{2,1} ;
f x = Pack{2,2} ($Case1 (t x 7 6)) Pack{1,0} ;
$Case1 x = case x of
<1> -> 1;
<2> -> 2 ;
main = f 3
I implemented this solution and it works like a charm; that is, the output is Pack{2,2} 2 Pack{1,0}.
However, what I don't understand is: why all that trouble? I hope it's not just me, but my first thought for solving the problem was to simply implement compilation of ECase expressions in the C scheme. And I did it by mimicking the rule for compilation in the E scheme (page 134 in [1], but I present that rule here for completeness): so I used
E[[case e of alts]] p = E[[e]] p ++ [Casejump D[[alts]] p]
and wrote
C[[case e of alts]] p = C[[e]] p ++ [Eval] ++ [Casejump D[[alts]] p]
I added [Eval] because Casejump needs an argument on top of the stack in weak head normal form (WHNF) and C scheme doesn't guarantee that, as opposed to E scheme.
But then the output changes to the enigmatic Pack{2,2} 2 6.
The same applies when I use the same rule as for E scheme, i.e.
C[[case e of alts]] p = E[[e]] p ++ [Casejump D[[alts]] p]
So I guess that my "obvious" solution is inherently wrong - and I can see that from outputs. But I'm having trouble stating formal arguments as to why that approach was bound to fail.
Can someone provide me with such argument/proof or some intuition as to why the naive approach doesn't work?
The purpose of the C scheme is to not perform any computation, but just delay everything until an EVAL happens (which it might or might not). What are you doing in your proposed code generation for case? You're calling EVAL! And the whole purpose of C is to not call EVAL on anything, so you've now evaluated something prematurely.
The only way you could generate code directly for case in the C scheme would be to add some new instruction to perform the case analysis once it's evaluated.
But we (Thomas Johnsson and I) decided it was simpler to just lift out such expressions. The exact historical details are lost in time though. :)
I need an example of how to program a parallel iter function using OCaml threads. My first idea was to have a function similar to this:
let procs = 4

let rec _part part i lst =
  match lst with
  | [] -> ()
  | hd :: tl ->
      let idx = i mod procs in
      (* Printf.printf "part idx=%i\n" idx; *)
      let accu = part.(idx) in
      part.(idx) <- hd :: accu;
      _part part (i + 1) tl
Then a parallel iter could look like this (here as process-based variant):
let iter f lst =
  let part = Array.make procs [] in
  _part part 0 lst;
  let rec _do i =
    (* Printf.printf "do idx=%i\n" i; *)
    match Unix.fork () with
    | 0 -> (* Code of child *)
        if i < procs then begin
          (* Printf.printf "child %i\n" i; *)
          List.iter f part.(i)
        end
    | pid -> (* Code of father *)
        (* Printf.printf "father %i\n" i; *)
        if i >= procs then ignore (Unix.waitpid [] pid)
        else _do (i + 1)
  in
  _do 0
Because the usage of the Thread module is a little bit different, how would I code this using OCaml's Thread module?
And another question: the _part function must scan the whole list to split it into n parts, and each part is then processed by its own process (here). Is there a solution that does not require splitting the list first?
If you have a function that processes a list, and you want to run it on several lists independently, you can call Thread.create with that function and each list. If you store your lists in an array part, then:
let threads = Array.map (Thread.create (List.iter f)) part in
Array.iter Thread.join threads
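Putting this together with the _part splitter from the question, a thread-based parallel iter might look like the sketch below (compile with the threads library linked, e.g. via ocamlfind with the threads.posix package). Output order is nondeterministic, and, as explained below, the runtime lock means these threads do not actually run OCaml code in parallel.

```ocaml
let procs = 4

(* Distribute the list's elements round-robin into [procs] buckets. *)
let partition lst =
  let part = Array.make procs [] in
  List.iteri (fun i x -> part.(i mod procs) <- x :: part.(i mod procs)) lst;
  part

(* Run [f] over each bucket in its own thread, then wait for all of them. *)
let parallel_iter f lst =
  let threads = Array.map (Thread.create (List.iter f)) (partition lst) in
  Array.iter Thread.join threads

let () = parallel_iter (fun x -> Printf.printf "%d\n" x) [1; 2; 3; 4; 5; 6]
```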
INRIA OCaml threads are not actual threads: only one thread executes at any given time, which means if you have four processors and four threads, all four threads will use the same processor and the other three will remain unused.
Where threads are useful is that they still allow asynchronous programming: some Thread module primitives can wait for an external resource to become available. This can reduce the time your software spends blocked by an unavailable resource, because you can have another thread do something else in the mean time. You can also use this to concurrently start several external asynchronous processes (like querying several web servers through HTTP). If you don't have a lot of resource-related blocking, this is not going to help you.
As for your list-splitting question: to access an element of a list, you must traverse all previous elements. While this traversal could theoretically be split across several threads or processes, the communication overhead would likely make it a lot slower than just splitting things ahead of time in one process. Or using arrays.
Answer to a question from the comments. The answer does not quite fit in a comment itself.
There is a lock on the OCaml runtime. The lock is released when an OCaml thread is about to enter a C function that
may block;
may take a long time.
So you can only have one OCaml thread using the heap, but you can sometimes have non-heap-using C functions working in parallel with it.
See for instance the file ocaml-3.12.0/otherlibs/unix/write.c
memmove (iobuf, &Byte(buf, ofs), numbytes); // if we kept the data in the heap
// the GC might move it from
// under our feet.
enter_blocking_section(); // release lock.
// Another OCaml thread may
// start in parallel of this one now.
ret = write(Int_val(fd), iobuf, numbytes);
leave_blocking_section(); // take lock again to continue
// with Ocaml code.