Why is garbage collection required for tail call optimization? Is it because, if you allocate memory in a function that then makes a tail call, there would be no way to perform the tail call and still reclaim that memory? (So the stack frame would have to be kept around so that, after the tail call, the memory could be reclaimed.)
Like most myths, there may be a grain of truth to this one. While GC isn't required for tail call optimization, it certainly helps in a few cases. Let's say you have something like this in C++:
#include <vector>

int foo(int arg) {
    if (arg <= 0) return 0;  // base case
    std::vector<double> bar(10);
    // Populate bar, do other stuff.
    int someNumber = arg - 1;
    return foo(someNumber);
}
In this case, return foo(someNumber); looks like a tail call, but because you're using RAII-style memory management and bar has to be destroyed, this line translates at a lower level to something like the following (informal pseudocode):
ret = foo(someNumber);
free(bar);
return ret;
If you have GC, it is not necessary for the compiler to insert instructions to free bar. Therefore, this function can be optimized to a tail call.
Where did you hear that?
Even C compilers without any kind of garbage collector are able to optimize tail recursive calls to their iterative equivalent.
Garbage collection is not required for tail-call optimization.
Any variables allocated on the call stack will be reused in the recursive call, so there's no memory leak there.
Any local variables allocated on the heap and not freed before a tail call will leak memory whether or not tail-call optimization is used. Local variables allocated on the heap and freed before the tail call will not leak memory regardless of whether or not tail-call optimization is used.
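To make the "freed before the tail call" case concrete, here is a minimal C++ sketch (count_down and the sizes are made up for illustration): because the explicit free happens before the recursive call, nothing remains to be done after it, so the call stays in tail position and no memory leaks either way.
#include <cstdlib>

int count_down(int arg) {
    if (arg <= 0) return 0;  // base case
    double* scratch = static_cast<double*>(std::malloc(10 * sizeof(double)));
    // ... use scratch ...
    std::free(scratch);          // released before the recursive call
    return count_down(arg - 1);  // a genuine tail call: nothing left to do afterwards
}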
It's true, garbage collection is not really required for tail-call optimization.
However, let's say you have 1 GB of RAM and you want to filter a 900MB list of integers to keep only the positive ones. Assume about half are positive, half are negative.
In a language with GC, you just write the function. GC will occur a bunch of times, and you'll end up with a 450 MB list. The code will look like this:
list *result = filter(make900MBlist(), funcptr);
The list built by make900MBlist will be incrementally GC'd, because the parts that filter has already gone through are no longer referenced by anything.
In a language without GC, to preserve tail-recursion, you'd have to do something like this:
list *srclist = make900MBlist();
list *result = filter(srclist, funcptr);
freelist(srclist);
This will end up having to use 900MB + 450MB before finally freeing the srclist, so the program will run out of memory and fail.
If you write your own filter_reclaim, which frees the input list as it's no longer needed:
list *result = filter_reclaim(make900MBlist(), funcptr);
It will no longer be tail-recursive, and you'll likely overflow your stack.
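For illustration, here is a hedged C++ sketch of what such a filter_reclaim might look like; the list node layout and the cons helper are assumptions of mine, not code from the answer. The free of each input node can only happen after the recursive call has produced the filtered tail, so the recursive call is not in tail position and each element costs a stack frame.
#include <cstdlib>

struct list { int value; list* next; };

// Assumed helper: allocates a fresh output node.
static list* cons(int value, list* next) {
    list* node = static_cast<list*>(std::malloc(sizeof(list)));
    node->value = value;
    node->next = next;
    return node;
}

list* filter_reclaim(list* src, bool (*keep)(int)) {
    if (src == nullptr) return nullptr;
    list* rest = filter_reclaim(src->next, keep);  // not a tail call
    list* out = keep(src->value) ? cons(src->value, rest) : rest;
    std::free(src);  // the input node can only be reclaimed here, after the call
    return out;
}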
In C#, there are structs and classes. Structs are usually (though there are exceptions) stack allocated, and class instances are always heap allocated. Class instances, therefore, put pressure on the GC and are considered "slower" than structs. Microsoft has a best practice guide on when to use structs over classes. This says to consider a struct if:
It logically represents a single value, similar to primitive types (int, double, etc.).
It has an instance size under 16 bytes.
It is immutable.
It will not have to be boxed frequently.
In C#, using struct instances that are larger than 16 bytes is generally said to perform worse than garbage collected class instances (dynamically allocated).
When does a boxed instance (which is heap-allocated) perform better, in terms of speed, than a non-boxed equivalent instance (which is stack-allocated)? Is there any best practice about when we should dynamically allocate (on the heap) instead of sticking to the default stack allocation?
TL;DR: start with no boxing, then profile.
Stack Allocation vs Boxed Allocation
This is perhaps more clear cut:
Stick to the stack,
Unless the value is big enough that it would blow it up.
While semantically writing fn foo() -> Bar implies moving Bar from the callee frame to the caller frame, in practice you are more likely to end up with the equivalent of a fn foo(__result: *mut Bar) signature, where the caller allocates space on its stack and passes a pointer to the callee.
This may not always be sufficient to avoid copying, as some patterns may prevent writing directly in the return slot:
fn defeat_copy_elision() -> WithDrop {
let one = side_effectful();
if side_effectful_too() {
one
} else {
side_effects_hurt()
}
}
Here, there is no magic:
if the compiler uses the return slot for one, then when the branch evaluates to false it has to move one out, instantiate the new WithDrop in its place, and finally destroy one,
if the compiler instantiates one on the current stack frame and then has to return it, it has to perform a copy.
If the type didn't need Drop, there would be no issue.
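For readers more at home in C++, here is a loosely analogous case I'm adding for illustration (it is not from the answer above): returning different named locals on different paths typically defeats named return value optimization, so the chosen local is usually moved into the caller's return slot rather than constructed there directly.
#include <string>

std::string defeat_nrvo(bool flag) {
    std::string one = "built first";
    std::string two = "built second";
    if (flag) {
        return one;  // usually a move, not elided
    }
    return two;      // usually a move, not elided
}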
Despite these oddball cases, I advise sticking to the stack if possible unless profiling reveals a place where it'd be beneficial to box.
Inline Member or Boxed Member
This case is much more complicated:
the size of the struct/enum is affected, thus CPU cache behavior is affected:
less frequently used big variants are a good candidate for boxing (or boxing parts of them),
less frequently accessed big members are a good candidate for boxing.
at the same time, there are costs for boxing:
it's incompatible with Copy types, and implicitly implements Drop (which, as seen above, disables some optimizations),
allocating/freeing memory has unbounded latency1,
accessing boxed memory introduces data-dependency: you cannot know which cache line to request before knowing the address.
As a result, this is a very fine balancing act. Boxing or unboxing a member may improve the performance of some parts of the codebase while decreasing the performance of others.
There is definitely no one-size-fits-all answer.
Thus, once again, I advise avoiding boxing until profiling reveals a place where it'd be beneficial to box.
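To make the inline-versus-boxed layout trade-off concrete, here is a small C++ sketch I'm adding for illustration (std::unique_ptr standing in for a box; the type names and sizes are made up): the boxed variant keeps the outer struct small and cache-friendly, at the price of a pointer indirection on every access.
#include <cstdio>
#include <memory>

struct BigPayload { double samples[64]; };  // 512 bytes of data

struct InlineMember {
    int tag;
    BigPayload payload;  // stored inline: big outer struct, no indirection
};

struct BoxedMember {
    int tag;
    std::unique_ptr<BigPayload> payload;  // behind a pointer: small outer
                                          // struct, but every access chases it
};

int main() {
    std::printf("inline: %zu bytes, boxed: %zu bytes\n",
                sizeof(InlineMember), sizeof(BoxedMember));
}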
1 Consider that on Linux, any memory allocation for which there is no spare memory in the process may require a system call, which if there is no spare memory in the OS may trigger the OOM killer to kill a process, at which point its memory is salvaged and made available. A simple malloc(1) may easily require milliseconds.
The following implementation from Wikipedia:
volatile unsigned int produceCount = 0, consumeCount = 0;
TokenType buffer[BUFFER_SIZE];
void producer(void) {
while (1) {
while (produceCount - consumeCount == BUFFER_SIZE)
sched_yield(); // buffer is full
buffer[produceCount % BUFFER_SIZE] = produceToken();
// a memory_barrier should go here, see the explanation above
++produceCount;
}
}
void consumer(void) {
while (1) {
while (produceCount - consumeCount == 0)
sched_yield(); // buffer is empty
consumeToken(buffer[consumeCount % BUFFER_SIZE]);
// a memory_barrier should go here, the explanation above still applies
++consumeCount;
}
}
says that a memory barrier must be used between the line that accesses the buffer and the line that updates the Count variable.
This is done to prevent the CPU from reordering the instructions above the fence with those below it. The Count variable shouldn't be incremented before it is used to index into the buffer.
If a fence is not used, won't this kind of reordering violate the correctness of code? The CPU shouldn't perform increment of Count before it is used to index into buffer. Does the CPU not take care of data dependency while instruction reordering?
Thanks
If a fence is not used, won't this kind of reordering violate the correctness of code? The CPU shouldn't perform increment of Count before it is used to index into buffer. Does the CPU not take care of data dependency while instruction reordering?
Good question.
In C++, unless some form of memory barrier is used (atomics, a mutex, etc.), the compiler assumes that the code is single-threaded. In that case, the as-if rule says that the compiler may emit whatever code it likes, provided that the overall observable effect is 'as if' your code was executed sequentially.
As mentioned in the comments, volatile does not necessarily alter this, being merely an implementation-defined hint that the variable may change between accesses (this is not the same as being modified by another thread).
So if you write multi-threaded code without memory barriers, you get no guarantees that changes to a variable in one thread will even be observed by another thread, because as far as the compiler is concerned that other thread should not be touching the same memory, ever.
What you will actually observe is undefined behaviour.
It seems that your question is "can incrementing Count and the assignment to buffer be reordered without changing the code's behavior?".
Consider the following code transformation:
int count1 = produceCount++;
buffer[count1 % BUFFER_SIZE] = produceToken();
Notice that the code behaves exactly like the original: one read from the volatile variable, one write to a volatile, and the read happens before the write; the state of the program is the same. However, other threads will see a different picture regarding the order of the produceCount increment and the buffer modification.
Both the compiler and the CPU can do that transformation without memory fences, so you need a fence to force those two operations into the correct order.
If a fence is not used, won't this kind of reordering violate the correctness of code?
Nope. Can you construct any portable code that can tell the difference?
The CPU shouldn't perform increment of Count before it is used to index into buffer. Does the CPU not take care of data dependency while instruction reordering?
Why shouldn't it? What would the payoff be for the costs incurred? Things like write combining and speculative fetching are huge optimizations and disabling them is a non-starter.
If you're thinking that volatile alone should do it, that's simply not true. The volatile keyword has no defined thread synchronization semantics in C or C++. It might happen to work on some platforms and it might happen not to work on others. In Java, volatile does have defined thread synchronization semantics, but they don't include providing ordering for accesses to non-volatiles.
However, memory barriers do have well-defined thread synchronization semantics. We need to make sure that no thread can see that data is available before it sees that data. And we need to make sure that a thread that marks data as able to be overwritten is not seen before the thread is finished with that data.
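For comparison, here is a hedged C++11 sketch of the same single-producer/single-consumer ring buffer using atomics instead of volatile (TokenType, BUFFER_SIZE, produceToken and consumeToken stand in for the ones in the original snippet): the release increment publishes the buffer access, and the matching acquire load on the other side guarantees it is seen before the slot is reused or read.
#include <atomic>
#include <sched.h>   // sched_yield(), as in the original snippet

// Stand-ins for the definitions assumed by the original example.
using TokenType = int;
constexpr unsigned int BUFFER_SIZE = 64;
TokenType produceToken();
void consumeToken(TokenType);

std::atomic<unsigned int> produceCount{0}, consumeCount{0};
TokenType buffer[BUFFER_SIZE];

void producer() {
    while (true) {
        // Acquire: don't overwrite a slot before seeing that the consumer is done with it.
        while (produceCount.load(std::memory_order_relaxed) -
               consumeCount.load(std::memory_order_acquire) == BUFFER_SIZE)
            sched_yield();  // buffer is full
        buffer[produceCount.load(std::memory_order_relaxed) % BUFFER_SIZE] = produceToken();
        // Release: publish the buffer write before the new count becomes visible.
        produceCount.fetch_add(1, std::memory_order_release);
    }
}

void consumer() {
    while (true) {
        // Acquire: don't read a slot before seeing the producer's write to it.
        while (produceCount.load(std::memory_order_acquire) -
               consumeCount.load(std::memory_order_relaxed) == 0)
            sched_yield();  // buffer is empty
        consumeToken(buffer[consumeCount.load(std::memory_order_relaxed) % BUFFER_SIZE]);
        // Release: mark the slot reusable only after we're done reading it.
        consumeCount.fetch_add(1, std::memory_order_release);
    }
}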
When exporting a Haskell function to be called from C, when does Haskell's garbage get collected? If C owns main, then there is no way to predict the next call into Haskell. This question is especially pertinent when running single-threaded Haskell or without parallel GC.
When you initialize the ghc runtime, you can pass rts flags to it via argc and argv like so:
RtsConfig conf = defaultRtsConfig;
conf.rts_opts_enabled = RtsOptsAll;
hs_init_ghc(&argc, &argv, conf);
This lets you set options to, for example, fix a smaller maximum heap size or use a compaction algorithm on the nursery to further reduce allocation. Further, note there is an idle GC whose interval can be set (or disabled), and if you link the threaded runtime, that should run whether or not you ever yield back to a Haskell call.
Edit: I haven't actually performed experimentation to verify the following, but if we look at the source of hs_init_ghc we see that it initializes signal handlers, which should include the timer handlers that respond to SIGVTALRM, and indeed it also starts the timer, which calls (on POSIX) timer_create, which should throw those signals at regular intervals. In turn, this should periodically "wake up" the RTS whether or not anything is happening, which in turn should mean that it will run idle GC whether or not the system yields back to Haskell from C. But again, I have only read the code and commentary, not tested this myself.
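As a hypothetical sketch of the options described above (the flag spellings come from the GHC User's Guide and should be verified against your GHC version; the function name and sizes here are made up): an embedding program could cap the Haskell heap with -M and set the idle-GC interval with -I when it initializes the runtime.
#include "HsFFI.h"
#include "Rts.h"   // headers shipped with GHC; adjust include paths for your installation

void start_haskell(void) {
    static char arg0[] = "embedded";
    static char arg1[] = "+RTS";
    static char arg2[] = "-M64m";   // cap the Haskell heap at 64 MB
    static char arg3[] = "-I0.3";   // run idle GC after 0.3 s of inactivity
    static char arg4[] = "-RTS";
    static char *rts_argv[] = { arg0, arg1, arg2, arg3, arg4, NULL };
    int argc = 5;
    char **argv = rts_argv;

    RtsConfig conf = defaultRtsConfig;
    conf.rts_opts_enabled = RtsOptsAll;
    hs_init_ghc(&argc, &argv, conf);
}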
I've been looking at D today and on the surface it looks quite amazing. I like how it includes many higher-level constructs directly in the language, so silly hacks or terse methods don't have to be used. One thing that really worries me is the GC. I know this is a big issue and have read many discussions about it.
My own simple tests, sprouting from a question here, show that the GC is extremely slow: over 10 times slower than straight C++ doing the same thing. (Obviously the test does not directly translate to the real world, but the performance hit is extreme and would slow down real-world applications that behave similarly, i.e. allocate many small objects quickly.)
I'm looking into writing a real-time, low-latency audio application, and it is possible that the GC will hurt its performance enough to make it nearly useless. In a sense, if it has any issues it will ruin the real-time audio aspect, which is much more crucial since, unlike graphics, audio runs at a much higher frame rate (44000+ vs 30-60). (Due to its low latency, it is more demanding than a standard audio player, which can buffer significant amounts of data.)
Disabling the GC improved the results to within about 20% of the C++ code. This is significant. I'll give the code at the end for analysis.
My questions are:
How difficult is it to replace D's GC with a standard smart-pointer implementation, so that libraries that rely on the GC can still be used? If I remove the GC completely I'll lose a lot of grunt work, as D already has limited libraries compared to C++.
Does GC.disable only halt garbage collection temporarily (preventing the GC thread from running), and does GC.enable pick back up where it left off? That way I could potentially disable the GC during high-CPU-usage moments to prevent latency issues.
Is there any way to enforce a pattern of not using the GC consistently? (This is because I've not programmed in D before, and when I start writing my classes that do not use the GC I would like to be sure I don't forget to implement their own cleanup.)
Is it possible to replace the GC in D easily? (not that I want to but it might be fun to play around with different methods of GC one day... this is similar to 1 I suppose)
What I'd like to do is trade memory for speed. I do not need the GC to run every few seconds. In fact, if I can properly implement my own memory management for my data structures then chances are it will not need to run very often at all. I might need to run it only when memory becomes scarce. From what I've read, though, the longer you wait to call it the slower it will be. Since there generally will be times in my application where I can get away with calling it without issues this will help alleviate some of the pressure(but then again, there might be hours when I won't be able to call it).
I am not worried about memory constraints as much. I'd prefer to "waste" memory over speed(up to a point, of course). First and foremost is the latency issues.
From what I've read, I can, at the very least, go the route of C/C++ as long as I don't use any libraries or language constructs that rely on the GC. The problem is, I do not know which ones do. I've seen string, new, etc. mentioned, but does that mean I can't use the built-in strings if I don't enable the GC?
I've read in some bug reports that the GC might be really buggy and that could explain its performance problems?
Also, D uses a bit more memory; in fact, D runs out of memory before the C++ program does. I guess it is about 15% more or so in this case. I suppose that is due to the GC.
I realize the following code is not representative of your average program, but what it shows is that when programs instantiate a lot of objects (say, at startup), they will be much slower (10 times is a large factor). If the GC could be "paused" at startup, then it wouldn't necessarily be an issue.
What would really be nice is if I could somehow have the compiler automatically GC a local object if I do not specifically deallocate it. This would almost give the best of both worlds.
e.g.,
{
Foo f = new Foo();
....
dispose f; // Causes f to be disposed of immediately and treats f outside the GC
// If left out then f is passed to the GC.
// I suppose this might actually end up creating two kinds of Foo
// behind the scenes.
Foo g = new manualGC!Foo(); // Maybe something like this will keep GC's hands off
// g and allow it to be manually disposed of.
}
In fact, it might be nice to actually be able to associate different types of GC's with different types of data with each GC being completely self contained. This way I could tailor the performance of the GC to my types.
Code:
module main;
import std.stdio, std.conv, core.memory;
import core.stdc.time;
class Foo{
int x;
this(int _x){x=_x;}
}
void main(string[] args)
{
clock_t start, end;
double cpu_time_used;
//GC.disable();
start = clock();
//int n = to!int(args[1]);
int n = 10000000;
Foo[] m = new Foo[n];
foreach(i; 0..n)
//for(int i = 0; i<n; i++)
{
m[i] = new Foo(i);
}
end = clock();
cpu_time_used = (end - start);
cpu_time_used = cpu_time_used / 1000.0;
writeln(cpu_time_used);
getchar();
}
C++ code
#include <cstdlib>
#include <iostream>
#include <time.h>
#include <math.h>
#include <stdio.h>
using namespace std;
class Foo{
public:
int x;
Foo(int _x);
};
Foo::Foo(int _x){
x = _x;
}
int main(int argc, char** argv) {
int n = 120000000;
clock_t start, end;
double cpu_time_used;
start = clock();
Foo** gx = new Foo*[n];
for(int i=0;i<n;i++){
gx[i] = new Foo(i);
}
end = clock();
cpu_time_used = (end - start);
cpu_time_used = cpu_time_used / 1000.0;
cout << cpu_time_used;
std::cin.get();
return 0;
}
D can use pretty much any C library, just define the functions needed. D can also use C++ libraries, but D does not understand certain C++ constructs. So... D can use almost as many libraries as C++. They just aren't native D libs.
From D's Library reference.
Core.memory:
static nothrow void disable();
Disables automatic garbage collections performed to minimize the process footprint. Collections may continue to occur in instances where the implementation deems necessary for correct program behavior, such as during an out of memory condition. This function is reentrant, but enable must be called once for each call to disable.
static pure nothrow void free(void* p);
Deallocates the memory referenced by p. If p is null, no action occurs. If p references memory not originally allocated by this garbage collector, or if it points to the interior of a memory block, no action will be taken. The block will not be finalized regardless of whether the FINALIZE attribute is set. If finalization is desired, use delete instead.
static pure nothrow void* malloc(size_t sz, uint ba = 0);
Requests an aligned block of managed memory from the garbage collector. This memory may be deleted at will with a call to free, or it may be discarded and cleaned up automatically during a collection run. If allocation fails, this function will call onOutOfMemory which is expected to throw an OutOfMemoryError.
So yes. Read more here: http://dlang.org/garbage.html
And here: http://dlang.org/memory.html
If you really need classes, look at this: http://dlang.org/memory.html#newdelete
delete has been deprecated, but I believe you can still free() it.
Don't use classes, use structs. Structs are stack allocated, classes are heap. Unless you need polymorphism or other things classes support, they are overhead for what you are doing. You can use malloc and free if you want to.
More or less... fill out the function definitions here: https://github.com/D-Programming-Language/druntime/blob/master/src/gcstub/gc.d . There's a GC proxy system set up to allow you to customize the GC. So it's not like it is something that the designers do not want you to do.
Little GC knowledge here:
The garbage collector is not guaranteed to run the destructor for all unreferenced objects. Furthermore, the order in which the garbage collector calls destructors for unreferenced objects is not specified. This means that when the garbage collector calls a destructor for an object of a class that has members that are references to garbage collected objects, those references may no longer be valid. This means that destructors cannot reference sub objects. This rule does not apply to auto objects or objects deleted with the DeleteExpression, as the destructor is not being run by the garbage collector, meaning all references are valid.
import std.c.stdlib; that should have malloc and free.
import core.memory; this has GC.malloc, GC.free, GC.addroots, //add external memory to GC...
strings require the GC because they are dynamic arrays of immutable chars. ( immutable(char)[] ) Dynamic arrays require GC, static do not.
If you want manual management, go ahead.
import std.c.stdlib;
import core.memory;
char* one = cast(char*) GC.malloc(char.sizeof * 8);
GC.free(one);//pardon me, I'm not used to manual memory management.
//I am *asking* you to edit this to fix it, if it needs it.
Why create a wrapper class for an int? You are doing nothing more than slowing things down and wasting memory.
class Foo { int n; this(int _n){ n = _n; } }
writeln(Foo.sizeof); //it's 8 bytes, btw
writeln(int.sizeof); //Its *half* the size of Foo; 4 bytes.
Foo[] m;// = new Foo[n]; //8 sec
m.length=n; //7 sec minor optimization. at least on my machine.
foreach(i; 0..n)
m[i] = new Foo(i);
int[] m;
m.length=n; //nice formatting. and default initialized to 0
//Ooops! forgot this...
foreach(i; 0..n)
m[i] = i;//.145 sec
If you really need to, then write the Time-sensitive function in C, and call it from D.
Heck, if time is really that big of a deal, use D's inline assembly to optimize everything.
I suggest you read this article: http://3d.benjamin-thaut.de/?p=20
There you will find a version of the standard library that does own memory management and completely avoids garbage collection.
D's GC simply isn't as sophisticated as others like Java's. It's open-source so anyone can try to improve it.
There is an experimental concurrent GC named CDGC and there is a current GSoC project to remove the global lock: http://www.google-melange.com/gsoc/project/google/gsoc2012/avtuunainen/17001
Make sure to use LDC or GDC for compilation to get better optimized code.
The XomB project also uses a custom runtime but it's D version 1 I think.
http://wiki.xomb.org/index.php?title=Main_Page
You can also just allocate all memory blocks you need then use a memory pool to get blocks without the GC.
And by the way, it's not as slow as you mentioned. And GC.disable() doesn't really disable it.
We might look at the problem from a bit different view. Suboptimal performance of allocating many little objects, which you mention as a rationale for the question, has little to do with GC alone. Rather, it's a matter of balance between general-purpose (but suboptimal) and highly-performant (but task-specialised) memory management tools. The idea is: presence of GC doesn't prevent you from writing a real-time app, you just have to use more specific tools (say, object pools) for special cases.
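The object-pool idea is language-agnostic; here is a minimal C++ sketch I'm adding for illustration (names and sizes are made up): everything is allocated up front, so acquiring and releasing an object never touches the general-purpose allocator (or a GC) on the hot path.
#include <cstddef>
#include <vector>

template <typename T>
class Pool {
public:
    explicit Pool(std::size_t capacity) : objects_(capacity) {
        free_.reserve(capacity);
        for (std::size_t i = 0; i < capacity; ++i)
            free_.push_back(&objects_[i]);
    }

    T* acquire() {                  // O(1); returns nullptr when exhausted
        if (free_.empty()) return nullptr;
        T* p = free_.back();
        free_.pop_back();
        return p;
    }

    void release(T* p) { free_.push_back(p); }  // O(1)

private:
    std::vector<T> objects_;  // storage, allocated once up front
    std::vector<T*> free_;    // stack of available slots
};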
Since this hasn't been closed yet, recent versions of D have the std.container library which contains an Array data structure that is significantly more efficient with respect to memory than the built-in arrays. I can't confirm that the other data structures in the library are also efficient, but it may be worth looking into if you need to be more memory conscious without having to resort to manually creating data structures that don't require garbage collection.
D is constantly evolving. Most of the answers here are 9+ years old, so I figured I'd answer these questions again for anyone curious what the current situation is.
(...) replace D's GC with a standard smart pointers implementation so that libraries that rely on the GC can still be used. (...)
Replacing the GC itself with smart pointers is not something I've looked into (i.e. where new creates a smart pointer). There are several D libraries that add smart pointers. You can interface with any C library. Interfacing with C++ and even Objective-C is also supported to some degree, so that should cover you pretty well.
Does GC.disable only halt the garbage collection temporarily (preventing the GC thread from running) and GC.enable pick back up where it left off. (...)
"Collections may continue to occur in instances where the implementation deems necessary for correct program behaviour, such as during an out of memory condition."
[source]
So mostly, yes. You can also manually invoke collection during down-time.
Is there any way to enforce a pattern to not use GC consistently. (...) when I start writing my classes that do not use the GC I would like to (...)
Classes are always allocated on the GC heap and are reference types. Structs should be used instead. However, keep in mind that structs are value types, so by default they're copied when being moved. You can @disable the copy constructor if you don't like this behaviour, but then your struct won't be POD.
What you're probably looking for is @nogc, which is a function attribute that stops a function from using the GC. You can't mark a struct type as @nogc, but you can mark each of its methods as @nogc. Just keep in mind that @nogc code can't call GC code. There's also nothrow.
If you intend to never use GC, you ought to look into Better C. It's a D language setting that removes all of D's runtime, standard library (Phobos), GC and all GC-reliant features (namely associative arrays and exceptions) in favour of using C's runtime and the C Standard Library.
Is it possible to replace the GC in D (...)
Yes it is: https://dlang.org/spec/garbage.html#gc_registry
And you can configure the pre-existing GC to better suit your needs if you don't want to make your own GC.
Is the "Stack Overflow" exception only linked to the use of recursion? In what other contexts can this exception occur?
allocating local variables that are too large to fit on the stack, for example an array with a million elements on a 64K stack.
too deep a call stack, even without recursion, if each routine has many local variables. example:
a() calls b() calls c()...calls z() calls a1()... calls z99().
Each routine's local variables, as well as a return address for each function (and maybe a stack-smashing protector), stay on the stack until the stack unwinds as each function exits.
If you allocate huge buffers on the stack using C99's variable-length arrays, e.g.:
#include <stdio.h>

void stackkiller(int size) {
    char toohuge[size];
    printf("Don't call this with too big an argument!\n");
}
Since any recursive algorithm can be turned into an iterative one and vice versa, here's the algorithm for a stack overflow in C99:
#include <stddef.h>

void stackoverflow(void)
{
    for (size_t n = 1; ; n *= 2) {
        char a[n];     // variable-length array that doubles every iteration
        a[n - 1] = 1;  // touch it so it really occupies stack space
    }
}
(Turn compiler optimization off.)
If you have too much memory allocated on the stack (the maximum stack size is configurable, for example, in the VS compiler).
One important thing to note is that deep recursion and deep call chains do not by themselves cause the stack to overflow. What causes the overflow is that every call allocates a new stack frame, thus adding to the stack space usage.
Many languages allow arbitrarily deep recursion/calls by using tail-call optimization. Some languages, such as ML and Haskell, will internally convert certain function/recursive calls (those made at the very end of the calling function) so that they avoid using additional stack space, thus allowing effectively infinite recursion. The idea here is that if the call is at the very end of the calling function, the calling function's stack space is no longer needed and can be reclaimed for use by the called function.
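As a hedged C++ illustration of that idea (whether the compiler actually reuses the frame depends on the compiler and optimization settings, so this is a sketch, not a guarantee): the first version still has an addition to perform after the recursive call returns, so every level needs its own frame, while the second passes the running total along and leaves nothing to do after the call, letting the compiler turn it into a loop.
// Not a tail call: the addition happens after sum_to() returns,
// so each level of recursion keeps its stack frame alive.
long sum_to(long n) {
    if (n == 0) return 0;
    return n + sum_to(n - 1);
}

// Tail call: the running total travels in the accumulator, nothing
// remains to do after the recursive call, so the frame can be reused.
long sum_to_acc(long n, long acc = 0) {
    if (n == 0) return acc;
    return sum_to_acc(n - 1, acc + n);
}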