Can I realloc an OCaml GC block?

There is no realloc in OCaml's memory.h or alloc.h (byterun/caml). Does that mean it is not possible to realloc an OCaml GC block (or value)? The use case I'm thinking of is string concatenation, where the following can be optimized using realloc:
a = a ^ b
In my benchmarks, string concat is actually faster in PHP (5.5) than native-compiled OCaml (4.02.1).
Edit: In another benchmark, Buffer is much faster than PHP concat, and still faster than OCaml concat even when converting the buffer to string in every loop iteration. Code.

It's true, there's no realloc()-style primitive for OCaml memory.
The efficiency of realloc() isn't a given; it depends on the pattern of calls. Generally speaking it only saves time if there is free space after the reallocated object.
In OCaml this is very unlikely. In the young generation, no object has free space after it, because blocks are allocated in decreasing memory order. In the old generation you can have the odd hole here and there, but if compaction has been done recently, only the single object at the end can have free space after it.
For these reasons, I suspect that realloc() would have almost no benefit in the OCaml GC environment.
I would actually tend to doubt that realloc() is the reason you're getting faster times for PHP than for OCaml. I don't think it's a real timesaver except in unusual circumstances. But maybe you have some measurements that support this theory.
Update
You might want to experiment with the OCaml Buffer module, which lets you concatenate strings imperatively. A Buffer is likely to be faster than the ^ operator in many cases (but has the usual disadvantages of a mutable value).
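As a minimal sketch of the Buffer approach (the function name and initial size below are just placeholders): repeated a ^ b concatenation copies the whole accumulated string on each step, whereas a Buffer grows in place and is converted to a string once at the end.

let concat_all pieces =
  let buf = Buffer.create 256 in             (* initial capacity; the buffer grows as needed *)
  List.iter (Buffer.add_string buf) pieces;  (* amortized O(1) appends, no intermediate strings *)
  Buffer.contents buf                        (* one final copy out to an immutable string *)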

Related

What memory leaks can occur outside the view of GHC's heap profiler

I have a program that exhibits the behavior of a memory leak. It gradually takes up all of the system's memory until it fills all swap space, and then the operating system kills it. This happens once every several days.
I have extensively profiled the heap in a number of ways (-hy, -hm, -hc), tried limiting the heap size (-M128M), and tweaked the number of generations (-G1), but no matter what I do the heap size appears roughly constant and low (measured in kB, not MB or GB). Yet when I observe the program in htop, its resident memory steadily climbs.
What this indicates to me is that the memory leak is coming from somewhere besides the GHC heap. My program makes use of dependencies, specifically Haskell's yaml library, which wraps the C library libyaml; it is possible that the leak comes from the foreign pointers it holds to objects allocated by libyaml.
My question is threefold:
What places besides the GHC heap can memory leak from in a Haskell program?
What tools can I use to track these down?
What changes to my source code need to be made to avoid these types of leaks, as they seem to differ from the more commonly experienced space leaks in Haskell?
This certainly sounds like foreign pointers aren't being finalized properly. There are several possible reasons for this:
The underlying C library doesn't free memory properly.
The Haskell library doesn't set up finalization properly.
The ForeignPtr objects aren't being freed.
I think there's actually a decent chance that it's option 3. If the RTS consistently finds enough memory in the first GC generation, then it just won't bother running a major collection. Fortunately, this is the easiest to diagnose. Just have your program run System.Mem.performGC every so often. If that fixes it, you've found the bug and can tweak how often you want to do that.
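A minimal sketch of that suggestion (the helper name and the one-minute interval are just placeholders to tune):

import Control.Concurrent (forkIO, threadDelay)
import Control.Monad (forever, void)
import System.Mem (performGC)

-- Fork a background thread that forces a major collection periodically,
-- giving ForeignPtr finalizers a chance to run.
startGcTicker :: IO ()
startGcTicker = void . forkIO . forever $ do
  threadDelay (60 * 1000 * 1000)  -- 60 seconds, in microseconds
  performGC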
Another possible issue is that you could have foreign pointers lying around in long-lived thunks or other closures. Make sure you don't.
One particularly strong possibility when working with a wrapped C library is that the wrapper functions will return ByteStrings whose underlying arrays were allocated by C code. So any ByteStrings you get back from yaml could potentially be off-heap.

In Rust, is there a way to directly read the content of a file into the given uninitialized byte array?

I am looking for a way to directly read the content of a file into the provided uninitialized byte array.
Currently, I have code like the following:
use std::fs::File;
use std::io::Read; // needed for the read() method
use std::mem::MaybeUninit;
let mut buf: MaybeUninit<[u8; 4096]> = MaybeUninit::zeroed();
let mut f = File::open("some_file")?;
// The unsafe block is sound only because zeroed() already initialized the array.
f.read(unsafe { buf.as_mut_ptr().as_mut().unwrap() })?;
The code does work, except that it unnecessarily initializes the byte array with 0. I would like to replace MaybeUninit::zeroed() with MaybeUninit::uninit(), but doing so would trigger undefined behavior according to the documentation of MaybeUninit. Is there a way to initialize an uninitialized memory region with the content of the file without first reading the data to somewhere else, by only using the standard library? Or do we need to go for the OS-specific API?
The previous shot at the answer is kept below for posterity. Let's deal with the actual elephant in the room:
Is there a way to initialize an uninitialized memory region with the content of the file without first reading the data to somewhere else, by only using the standard library? Or do we need to go for the OS-specific API?
There is: Read::read_to_end(&mut self, &mut Vec<u8>)
This function will drain your impl Read object, and depending on the underlying implementation will do one or more reads, extending the Vec provided as it goes and appending all bytes to it.
It then returns the number of bytes read. It can also be interrupted, and this error needs to be handled.
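A minimal sketch of that approach (the file name is carried over from the question; everything else here is just illustrative):

use std::fs::File;
use std::io::Read;

fn main() -> std::io::Result<()> {
    let mut f = File::open("some_file")?;
    let mut buf = Vec::new();          // starts empty; nothing to pre-zero by hand
    let n = f.read_to_end(&mut buf)?;  // extends the Vec as bytes arrive
    println!("read {} bytes", n);
    Ok(())
}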
You are trying to micro-optimize something based on assumptions that turn out not to hold.
The initialization of the array is done in one go, as low-level as it gets, with memset, all in one chunk. Both calloc and malloc+memset are highly optimized, and calloc relies on a trick or two to make it even more performant. Somebody on Code Review pitted "highly optimized" code against a naive implementation and lost to it as a result.
The takeaway is that second-guessing the compiler is typically fraught with issues and, overall, not worth micro-optimizing for unless you can put some real numbers on the issues.
The second takeaway is one of memory logic. As I am sure you are aware, allocating memory is dramatically faster in some cases, depending on the position of the memory you are allocating and the size of the contiguous chunk, because of how memory is laid out in atomic units (pages). This is a much more impactful factor, to the point that, under the hood, the allocator will often align your memory request to an entire page to avoid having to fragment it, particularly as it moves through the L1/L2 caches.
If anything isn't clear, let me know and I'll generate some small benchmarks for you.
Finally, MaybeUninit is not at all the tool you want for the job in any case. The point of MaybeUninit isn't to skip a memset or two, since you will be performing those memsets yourself by having to guarantee (by contract due to assume_init) that those types are sane. There are cases for this, but they're rare.
In larger cases
There is an impact on performance from uninitialized vs. initialized memory, and we're going to show it by taking an absolutely perfect scenario: we make ourselves a 64M buffer in memory and wrap it in a Cursor so we get a Read type. This Read type has far, far lower latency than most I/O operations you will encounter in the wild, since during the benchmark cycle it is almost guaranteed to reside entirely in L2 cache (due to its size) or L3 cache (because we're single-threaded). This should let us notice the performance loss from memsetting.
We're going to run three versions for each case (the code):
One where we define our buffer as [MaybeUninit::uninit().assume_init(); N], i.e. we're taking N chunks of MaybeUninit<u8>
One where our MaybeUninit is a contiguous N-element long chunk
One where we're just mapping straight into an initialized buffer
The results (on a core i9-9900HK laptop):
large reads/one uninit              time: [1.6720 us 1.7314 us 1.7848 us]
large reads/small uninit elements   time: [2.1539 us 2.1597 us 2.1656 us]
large reads/safe                    time: [2.0627 us 2.0697 us 2.0771 us]
small reads/one uninit              time: [4.5579 us 4.5722 us 4.5893 us]
small reads/small uninit elements   time: [5.1050 us 5.1219 us 5.1383 us]
small reads/safe                    time: [7.9654 us 7.9782 us 7.9889 us]
The results are as expected:
Allocating N MaybeUninit is slower than one huge chunk; this is completely expected and should not come as a surprise.
Small, iterative 4096-byte reads are slower than a huge, single, 128M read even when the buffer only contains 64M
There is a small performance loss in reading using initialized memory, of about 30%
Opening anything else on the laptop while testing causes a 50%+ increase in benchmarked time
The last point is particularly important, and it becomes even more important when dealing with real I/O as opposed to a buffer in memory. The more layers of cache you have to traverse, the more side-effects you get from other processes impacting your own processing. If you are reading a file, you will typically encounter:
The filesystem cache (may or may not be swapped)
L3 cache (if on the same core)
L2 cache
L1 cache
Depending on the level of the cache that produces a cache miss, you're more or less likely to have your performance gain from using uninitialized memory dwarfed by the performance loss in having a cache miss.
So, the (unexpected TL;DR):
Small, iterative reads are slower
There is a performance gain in using MaybeUninit, but it is typically an order of magnitude smaller than the cost of the I/O itself

1GB Vector, will Vector.Unboxed give trouble, will Vector.Storable give trouble?

We need to store a large (1GB) block of contiguous bytes in memory for long periods of time (weeks to months), and are trying to choose a Vector/Array library. I have two concerns that I can't find the answer to.
Vector.Unboxed seems to store the underlying bytes on the Haskell heap, where the GC can move them around at will. Periodically moving 1GB of data is something I would like to avoid.
Vector.Storable solves this problem by storing the underlying bytes in the C heap. But everything I've read seems to indicate that this is really only meant for communicating with other languages (primarily C). Is there some reason I should avoid using Vector.Storable for internal Haskell usage?
I'm open to a third option if it makes sense!
My first thought was the mmap package, which allows you to "memory-map" a file into memory, using the virtual memory system to manage paging. I don't know if this is appropriate for your use case (in particular, I don't know if you're loading or computing this 1GB of data), but it may be worth looking at.
In particular, I think this prevents the GC moving the data around (since it's not on the Haskell heap, it's managed by the OS virtual memory subsystem). On the other hand, this interface handles only raw bytes; you couldn't have, say, an array of Customer objects or something.
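As a minimal sketch, assuming the mmap package's System.IO.MMap.mmapFileByteString (the file name here is just a placeholder):

import qualified Data.ByteString as BS
import System.IO.MMap (mmapFileByteString)

main :: IO ()
main = do
  -- Nothing maps the whole file; pass Just (offset, size) for a slice.
  bytes <- mmapFileByteString "large-data.bin" Nothing
  -- The bytes live in OS-managed mapped memory, not on the GHC heap,
  -- so the garbage collector never copies them.
  print (BS.length bytes)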

Is g_slice really faster than malloc

The GLib docs recommend use of the GLib Slice Allocator over malloc:
"For newly written code it is recommended to use the new g_slice API instead of g_malloc() and friends, as long as objects are not resized during their lifetime and the object size used at allocation time is still available when freeing."
-- http://developer.gnome.org/glib/unstable/glib-Memory-Slices.html
But in practice, is g_slice significantly faster than Windows/Linux malloc (enough faster to warrant the extra trouble of handling sizes and GLib's preprocessor hacks like g_slice_new)? I'm planning to use GLib in my C++ program to handle INI-ish configuration (GKeyFile) and to get access to data structures not available in C++, like GHashTable, so the GLib dependency doesn't matter anyway.
Whether it is enough faster to be worth it depends on your app, but it should be faster.
There is another issue besides speed: memory fragmentation and per-block overhead. GSlice leaves malloc to deal with large or variable-size allocations while handling small, known-size objects more space-efficiently.
The slice API heavily borrows from research conducted at Sun Microsystems in the early 1990s, where it was called slab allocation. I could not find the original research paper, but there is a Wikipedia page about it, or you can just google "slab allocation".
Essentially it eliminates expensive allocation/deallocation operations by facilitating reuse of memory blocks. It also reduces or eliminates memory fragmentation. So it is not all about speed, even though it should improve it as well.
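For reference, a minimal C sketch of what the slice API looks like next to plain malloc (the Point struct is just an illustration; build against GLib):

#include <glib.h>
#include <stdlib.h>

typedef struct {
  double x, y, z;
} Point;

int main (void)
{
  /* Slice allocator: the size comes from the type at both allocation and free time. */
  Point *p = g_slice_new (Point);
  p->x = 1.0; p->y = 2.0; p->z = 3.0;
  g_slice_free (Point, p);

  /* Plain malloc for comparison: free() does not need to know the size. */
  Point *q = (Point *) malloc (sizeof *q);
  q->x = 1.0;
  free (q);

  return 0;
}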
Whether you should use it or not depends... Look at Havoc's answer; he summarized it pretty well.
Update 1:
Note that modern Linux kernels include the SLAB allocator as one of the options, and it is often the default. So the difference between g_slice() and malloc() may be unnoticeable in that case. However, the purpose of GLib is cross-platform compatibility, so using the slice API may somewhat guarantee consistent performance across different platforms.
Update 2:
As a commenter pointed out, my first update is incorrect: SLAB allocation is used by the kernel for its own internal allocations, while malloc() uses an unrelated mechanism, so the claim that malloc() is equivalent to g_slice() on Linux is invalid. Also see this answer for more details.

Writing to adjacent array elements from different threads?

Are there any modern, common CPUs where it is unsafe to write to adjacent elements of an array concurrently from different threads? I'm especially interested in x86. You may assume that the compiler doesn't do anything obviously ridiculous to increase memory granularity, even if it's technically within the standard.
I'm interested in the case of writing arbitrarily large structs, not just native types.
Note:
Please don't mention the performance issues with regard to false sharing. I'm well aware of these, but they're of no practical importance for my use cases. I'm also aware of visibility issues with regard to data written from threads other than the reader. This is addressed in my code.
Clarification: This issue came up because on some processors (for example, old DEC Alphas) memory could only be addressed at word level. Therefore, writing to memory in non-word size increments (for example, single bytes) actually involved read-modify-write of the byte to be written plus some adjacent bytes under the hood. To visualize this, think about what's involved in writing to a single bit. You read the byte or word in, perform a bitwise operation on the whole thing, then write the whole thing back. Therefore, you can't safely write to adjacent bits concurrently from different threads.
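To make that read-modify-write concrete, here is roughly what "writing one bit" expands to (a hypothetical C++ helper, not code from my program):

#include <cstdint>

// Setting a single bit really rewrites the whole containing byte, which is why
// adjacent *bits* cannot safely be written from different threads without synchronization.
void set_bit(std::uint8_t& byte, int bit) {
    std::uint8_t tmp = byte;                     // read the whole byte
    tmp |= static_cast<std::uint8_t>(1u << bit); // modify the one bit
    byte = tmp;                                  // write the whole byte back, clobbering neighbours
}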
It's also theoretically possible, though utterly silly, for a compiler to implement memory writes this way when the hardware doesn't require it. x86 can address single bytes, so it's mostly not an issue, but I'm trying to figure out if there's any weird corner case where it is. More generally, I want to know if writing to adjacent elements of an array from different threads is still a practical issue or mostly just a theoretical one that only applies to obscure/ancient hardware and/or really strange compilers.
Yet another edit: Here's a good reference that describes the issue I'm talking about:
http://my.safaribooksonline.com/book/programming/java/0321246780/threads-and-locks/ch17lev1sec6
Writing a naturally aligned, native-sized value (i.e. 1, 2, 4, or 8 bytes) is atomic (well, 8 bytes is only atomic on 64-bit machines). So, no: writing a native type will always write as expected.
If you're writing multiple native types (i.e. looping to write an array) then it's possible to have an error if there's a bug in the operating system kernel or an interrupt handler that doesn't preserve the required registers.
Yes, definitely: writing a misaligned word that straddles a CPU cache-line boundary is not atomic.
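As a small illustration of the scenario in the question (a sketch, not taken from the asker's code), the following is data-race free under the C++11 memory model, because each thread writes only its own, naturally aligned element:

#include <array>
#include <cstdint>
#include <thread>

int main() {
    std::array<std::uint32_t, 2> a{};  // two adjacent, naturally aligned 4-byte elements

    // Each thread writes a distinct element; these are distinct memory locations,
    // so no synchronization is required for the writes themselves (false sharing
    // only affects performance, not correctness).
    std::thread t0([&a] { a[0] = 0xAAAAAAAAu; });
    std::thread t1([&a] { a[1] = 0xBBBBBBBBu; });

    t0.join();
    t1.join();
    return 0;
}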

Resources