How can Erlang atoms be garbage collected?

It is said that atoms are not garbage collected. Once you've created an atom, it remains in the atom table, which can eventually lead to a memory leak.
I'm fairly new to Erlang, and my question is: how can atoms be garbage collected? And if that isn't possible, how can I minimize the effect?

Atoms aren't an issue unless you are creating them dynamically. If you do that, you are on your way to crashing the Erlang VM.
How would you create atoms dynamically?
For example, by calling list_to_atom/1 inside a loop.
If you are interested in Erlang garbage collection, then read this paper by Joe Armstrong: One Pass Real-Time Generational Mark-Sweep Garbage Collection (1995).
Always keep in mind: don't create atoms dynamically!
Sometimes you might genuinely need to create an atom at run time, but don't overdo it.

While I'm not sure whether atoms are ever garbage collected, you can easily avoid having to worry about blowing up the system's memory. As @Chiron said, as long as all your atoms are known at compile time you should be OK.
What if I really need to use list_to_atom/1 somehow? Well, you may be able to work around the issue with this kind of function:
atom("apple") -> apple;
atom("orange") -> orange;
atom("banana") -> banana.
Another workaround is list_to_existing_atom/1, which only succeeds if the atom is already present in the atom table.
But the VM can still consume more and more RAM: other connected Erlang nodes may register atoms globally, that is, allocate atoms at run time.

From Learn You Some Erlang:
Atoms are really nice and a great way to send messages or represent constants. However there are pitfalls to using atoms for too many things: an atom is referred to in an "atom table" which consumes memory (4 bytes/atom in a 32-bit system, 8 bytes/atom in a 64-bit system). The atom table is not garbage collected, and so atoms will accumulate until the system tips over, either from memory usage or because 1048577 atoms were declared.

This means atoms should not be generated dynamically for whatever reason; if your system has to be reliable and user input lets someone crash it at will by telling it to create atoms, you're in serious trouble. Atoms should be seen as tools for the developer because honestly, it's what they are.

Related

What is the difference between using the box keyword and Box::new?

Is box just syntactic sugar, or can it be applied to use cases where Box::new is not sufficient? I read somewhere that box is unstable; does that mean I can only use it with nightly Rust?
Box::new is just a function, like any other function. It is not special in any way whatsoever. It is grubby and smells faintly of very-close-to-the-expiration date cheese.
box is magic and made of ground-up pixies and the dreams of little children. It is dressed in the finest, swankiest clothes and carries with it the faint aroma of freshly cut pine.
When you execute Box::new(e), because it is a function, e must be completely evaluated and constructed before it can start the call. If this means allocating and filling a 500kB structure on the stack, then it has to allocate and fill a 500kB structure on the stack, and then pass that to Box::new, which only then can allocate the space on the heap (which might fail), and then copy that 500kB into the heap.
When you execute box e, because it is wonderful like a cool breeze on a hot summer's day, the compiler can reorder things such that it begins by allocating the 500kB on the heap, and then filling the 500kB structure directly on the heap. And then it's done. No extra copying, no chewing through stack space. No wasted effort if that "allocate on the heap" thing fails to work out.
box is life, box is love; all hail box!
(And yes, as of writing, it's still unstable which means you need a nightly compiler to bask in its radiance. But soon, the dawn will come. Get it? Dawn? Nightly? ... I'll show myself out...)

Suitable Haskell type for large, frequently changing sequence of floats

I have to pick a type for a sequence of floats with 16K elements. The values will be updated frequently, potentially many times a second.
I've read the wiki page on arrays. Here are the conclusions I've drawn so far. (Please correct me if any of them are mistaken.)
IArrays would be unacceptably slow in this case, because they'd be copied on every change. With 16K floats in the array, that's 64KB of memory copied each time.
IOArrays could do the trick, as they can be modified without copying all the data. In my particular use case, doing all updates in the IO monad isn't a problem at all. But they're boxed, which means extra overhead, and that could add up with 16K elements.
IOUArrays seem like the perfect fit. Like IOArrays, they don't require a full copy on each change. But unlike IOArrays, they're unboxed, meaning they're basically the Haskell equivalent of a C array of floats. I realize they're strict. But I don't see that being an issue, because my application would never need to access anything less than the entire array.
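For concreteness, this is the kind of usage I have in mind, as a minimal sketch using IOUArray from Data.Array.IO in the array package (sizes and names are just illustrative):
import Data.Array.IO (IOUArray, newArray, readArray, writeArray)

main :: IO ()
main = do
  -- 16K unboxed Floats; writes mutate in place instead of copying the array
  arr <- newArray (0, 16383) 0 :: IO (IOUArray Int Float)
  writeArray arr 0 1.5
  x <- readArray arr 0
  print x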
Am I right to look to IOUArrays for this?
Also, suppose I later want to read or write the array from multiple threads. Will I have backed myself into a corner with IOUArrays? Or is the choice of IOUArrays totally orthogonal to the problem of concurrency? (I'm not yet familiar with the concurrency primitives in Haskell and how they interact with the IO monad.)
A good rule of thumb is that you should almost always use the vector library instead of arrays. In this case, you can use mutable vectors from the Data.Vector.Mutable module.
The key operations you'll want are read and write which let you mutably read from and write to the mutable vector.
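For example, here is a minimal sketch using the unboxed mutable variant (Data.Vector.Unboxed.Mutable, since the elements are plain Floats; Data.Vector.Mutable has the same read/write interface, and the names below are illustrative):
import qualified Data.Vector.Unboxed.Mutable as VUM

main :: IO ()
main = do
  -- 16K unboxed Floats, all initialised to 0
  v <- VUM.replicate 16384 (0 :: Float)
  -- O(1) in-place update and read, no copying of the whole vector
  VUM.write v 0 1.5
  x <- VUM.read v 0
  print x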
You'll want to benchmark of course (with criterion) or you might be interested in browsing some benchmarks I did e.g. here (if that link works for you; broken for me).
The vector library is a nice interface (crazy understatement) over GHC's more primitive array types, which you can get at more directly in the primitive package. The same goes for the types in the standard array package; an IOUArray, for instance, is essentially a MutableByteArray# under the hood.
Unboxed mutable arrays are usually going to be the fastest, but you should compare them in your application to IOArray or the vector equivalent.
My advice would be:
if you probably don't need concurrency, first try a mutable unboxed Vector, as Gabriel suggests
if you know you will want concurrent updates (and feel a little brave), first try a MutableArray and then do atomic updates with these functions from the atomic-primops library. If you want fine-grained locking, this is your best choice. Of course, concurrent reads will work fine on whatever array you choose.
It should also be theoretically possible to do concurrent updates on a MutableByteArray (equivalent to IOUArray) with those atomic-primops functions too, since a Float should always fit into a word (I think), but you'd have to do some research (or bug Ryan).
Also be aware of potential memory reordering issues when doing concurrency with the atomic-primops stuff, and help convince yourself with lots of tests; this is somewhat uncharted territory.

Any way to manually indicate element of a MutableArray# safe to GC?

In my application I'm working with MutableArrays (via the primitive package) shared across threads. I know when individual elements are no longer used and I'd like some way (unsafeMarkGarbage or something) to indicate to the runtime that they can be collected. At least I'd like to experiment with that if such a function or equivalent technique exists.
EDIT, to add a bit more detail: I've got a conceptual "infinite tape" implemented as a linked list of short MutableArray segments, something like:
data Seg a = Seg (MutableArray RealWorld a) (IORef (Maybe (Seg a)))
I access the tape using a concurrent counter and always know when an element of the tape will no longer be accessed. In certain cases when a thread is descheduled it's possible that entire array segments (both the array and its elements) which could have been GC'd will stick around as their references will persist.
An ideal solution would avoid an additional write (maybe that's silly), avoid another layer of indirection in the array, and allow entire MutableArrays to be collected when all their elements expire.
Weak references do seem to be the most promising sort of mechanism I've seen, but I can't yet see how they can help me here.
I would suggest you store undefined in the positions that you would like to garbage collect.
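A minimal sketch of that suggestion, assuming the MutableArray type from the primitive package (the clearSlot name is made up for illustration):
import Control.Monad.Primitive (RealWorld)
import Data.Primitive.Array (MutableArray, writeArray)

-- Overwriting a slot drops the array's reference to the old element,
-- so the element itself becomes eligible for collection.
clearSlot :: MutableArray RealWorld a -> Int -> IO ()
clearSlot arr i = writeArray arr i undefined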

Space leaks with Haskell's cereal library?

As a hobby project called 'beercan', I'm reverse-engineering the resource files of the Torchlight games. Using an okay-ish hex editor, I try to guess the structure of the files, and then I model my ideas, use cereal to write Getters (and later some Putters), and try to decode every file in an application of the library.
I've just started on Torchlight's compiled layout files (*.LAYOUT in TL1, *.LAYOUT.cmp in TL2). The format turns out to be a little trickier than the dat files, but I think I've figured out the basic structure and how they are encoded in the TL2 files, so I'm trying to make a map of file versions, tag numbers, and guessed data types.
To do so, I wrote an application that flattens the data structure, leaving only the guessed type of the values of the leaves, each annotated with the file version and the node and leaf tag numbers. I turn this into a map from the file version and tag numbers to a set of the guessed types. For every file, I'd expect this Map to maybe take twice the file size in memory. (Not sure, though.) Then, I merge these maps, and I print the map.
For some reason, even if I only take 20MB worth of files (100 files), memory usage increases linearly to about 200MB, then decreases to the final size of the resulting map, and then deflates rapidly as I print it.
I wouldn't expect this memory usage. Does anyone know how I could fix it? I've tried to force values after decoding them (using deepseq), I've tried adding bangs to data types, but this hasn't really helped. I've tried copying all bytestrings I keep in the file structure, which brought down the memory usage a bit, but it's still unacceptably high, especially when I want to analyze the entire dataset (200MB+ of original files).
Edit: I've pushed a (not very S)SCCE to demonstrate the performance issue, (accidentally) along with my profiling results.
Clone the repository.
cabal configure, with flags to enable profiling (is it normal to need --enable-library-profiling --enable-executable-profiling --ghc-options="-rtsopts -prof"?)
cabal build
cd test, and run StressTest.sh.
This script tries to load a regular TL2 layout file 100 times. On my machine, top says it takes about 500MB of memory, and the profiling results are consistent with my description above.
I totally agree with @petrpudlak that we would need actual code to make any meaningful comments on the question "why does my code use so much memory?" :) (sorry, you did offer code). However, some of the patterns you describe are pretty typical in Haskell, and some generic discussion is possible.
First of all, note that native Haskell types use a lot more memory than you might guess. Take a look at the GHC memory footprint page at http://www.haskell.org/haskellwiki/GHC/Memory_Footprint. Note that even a simple Char takes a full 16 bytes of memory! Add to that the pointers for the linked-list cells in a String, and you will easily use more than an order of magnitude more memory than you might have guessed. If memory is important, you should use another data type, like Data.Text or Data.ByteString, which store strings internally more like C would (as a block of bytes in memory, with 1-4 bytes per character, depending on the encoding and the character used). If data other than Strings are the problem, you can use unboxed arrays for arbitrary data types.
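As a rough illustration of the String versus ByteString difference (the file name below is just a placeholder):
import qualified Data.ByteString as BS

main :: IO ()
main = do
  -- A strict ByteString is one compact buffer, roughly one byte per octet,
  -- whereas reading the same file as a String builds a linked list of boxed
  -- Chars costing dozens of bytes per character.
  bytes <- BS.readFile "example.LAYOUT"
  print (BS.length bytes)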
Second of all, if possible, you can cut down memory usage by processing items in series (so that memory can be garbage collected right away). Haskell's laziness often does this for you automatically; for instance, try running the following program:
import Data.Char
main = interact $ map toUpper
As you type, the output will appear continuously (your OS, not Haskell, may buffer full lines, so you may need to hit Enter before seeing anything, but you will see the output update line by line). Rather than loading the whole input into memory and then processing it all at once, Char memory is created and garbage collected Char by Char.
Of course this isn't always possible (i.e. if you have to process the data in a very nonlocal way), but most of the time at least parts of the code can be refactored this way to cut down total memory usage.
Edit- Sorry, I just realized that you did post a link to the code, and you are using ByteString..... So some of what I wrote isn't valid. But I do still see boxed lists and unpacking of the ByteString, so I will leave the answer as it is.
The memory usage pattern sounds like your application is building up a lot of unnecessary thunks and then memory consumption starts going down when those thunks get evaluated. I only glanced at your code quickly but one simple change you could try is to replace all imports of Data.Map with Data.Map.Strict. This is especially important if you are doing a lot of updates on the values inside a Map without forcing evaluation in between.
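As a small sketch of the kind of change meant here (countTag is an invented example; only the import changes in your code):
import qualified Data.Map.Strict as Map  -- was: import qualified Data.Map as Map

-- insertWith in the strict API forces the combined value right away,
-- so repeated updates don't accumulate chains of unevaluated (+) thunks.
countTag :: Int -> Map.Map Int Int -> Map.Map Int Int
countTag tag = Map.insertWith (+) tag 1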
Another thing you should be aware of is that replicateM is quite inefficient with larger counts in a strict monad (see e.g. this answer). I'm not sure what kinds of counts you are usually dealing with in your application, but it's good to keep in mind.
It might also help to use strict fields in simple container data types like your LeafValue type and compile with -funbox-strict-fields (and -O2 of course).
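A sketch of what that looks like (the constructors here are invented; the real LeafValue type in the repository may differ):
{-# OPTIONS_GHC -funbox-strict-fields #-}

import Data.ByteString (ByteString)

-- The '!' marks make each field strict, so values are evaluated when the
-- constructor is built; with -funbox-strict-fields GHC can store fields such
-- as the Int and Float directly in the constructor rather than behind a pointer.
data LeafValue
  = LeafInt    !Int
  | LeafFloat  !Float
  | LeafBytes  !ByteString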

How can I leak memory in Clojure?

For a presentation at the Bay Area Clojure Meetup on Thursday I am compiling a list of ways to leak memory in Clojure.
So far I have:
hold onto the head of an infinite sequence
creating lots of generic classes by calling lambda in a loop (is this still a problem?)
holding a reference to unused data
...
What else?
By keeping a reference to a seq on a large collection, e.g.:
(drop 999990 (vec (range 1000000)))
returns a seq of ten elements that holds a reference to the whole vector!
Another obvious way is to use any Java library that leaks memory. (e.g. Qt Jambi)
About lambdas, read here and here and here. I think this is fixed in the latest versions of Clojure.
There is the intern call as well.
Note that your examples are not leaking memory in the common sense of the word. You can still access the objects (not sure about the classes; I assume one can re-find them via some API), i.e. they haven't been lost. With certain things like the classes and interned strings it is just impossible to forget the data, so the effect is the same.
Clojure memory leaks will usually be very similar to Java memory leaks. However, the fact that collections are "persistent" means that if you add something to a collection without realizing that you have retained a reference to the old version as well as the new one, memory is consumed to keep the old version hanging around.
