Is there any way to ensure that key material is securely erased from memory after the program exits? Being able to erase it manually while the program keeps running would be even better. Since Haskell uses automatic garbage collection (which may not happen at all if there is plenty of free memory?), I assume the second task is impossible. Could something that serves the purpose be implemented using the FFI?
GHC can return memory to the OS when it is no longer needed, so merely blanking the memory on exit won't achieve your goal. Garbage collection is a complicated business, but there is in general no way to ensure that old copies of your secure data are not returned to the OS memory pool.
However the OS will blank your memory before allocating it to another process. If you don't trust the OS to keep your memory secure then you have a much bigger problem.
I'm not sure what you mean by "unreliable"; Haskell GC is reliable, but the program has comparatively little visibility of what is happening.
However, if you are concerned merely with a cryptographic key rather than a big, complicated data structure, then life gets a bit better. You can use a ForeignPtr to point to a memory location for your key, and then make blanking that bit of memory part of its finaliser. You can even write a bit of code that allocates a block of memory, mlocks it, and then hands off foreign pointers to key-sized chunks of that memory on request, with finalisers that wipe the key. That would probably do what you want.
The point of a ForeignPtr is that it is guaranteed not to be moved or re-interpreted by the GC.
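Since the question asks about the FFI: the C side of such a helper might look roughly like the sketch below. All names (pool_init, alloc_key, wipe_key) and sizes are made up for illustration, and the free-list bookkeeping needed to reuse chunks is elided; the Haskell side would wrap each returned pointer in a ForeignPtr whose finaliser calls wipe_key.

    #include <stddef.h>
    #include <sys/mman.h>

    #define KEY_SIZE   32          /* e.g. a 256-bit key */
    #define POOL_KEYS  128

    static unsigned char *pool;    /* mlocked backing store */
    static int next_free;

    /* Allocate the pool outside the GC heap and lock it into RAM so
     * the key material can never be written out to swap. */
    int pool_init(void)
    {
        pool = mmap(NULL, KEY_SIZE * POOL_KEYS, PROT_READ | PROT_WRITE,
                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (pool == MAP_FAILED)
            return -1;
        return mlock(pool, KEY_SIZE * POOL_KEYS);
    }

    /* Hand out one key-sized chunk; the Haskell side wraps this
     * pointer in a ForeignPtr whose finaliser is wipe_key. */
    unsigned char *alloc_key(void)
    {
        if (next_free == POOL_KEYS)
            return NULL;
        return pool + KEY_SIZE * next_free++;
    }

    /* Finaliser: blank the chunk.  Writing through a volatile pointer
     * keeps the compiler from optimising the "dead" stores away. */
    void wipe_key(unsigned char *key)
    {
        volatile unsigned char *p = key;
        for (int i = 0; i < KEY_SIZE; i++)
            p[i] = 0;
    }

On the Haskell side, Foreign.Concurrent.newForeignPtr lets you attach an arbitrary IO action as the finaliser, and finalizeForeignPtr lets you run it early, which addresses the "erase it manually and keep the program running" part of the question.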
I have a program that exhibits the behavior of a memory leak. It gradually takes up all of the system's memory until it fills all swap space, and then the operating system kills it. This happens once every several days.
I have extensively profiled the heap in a number of ways (-hy, -hm, -hc) and tried limiting the heap size (-M128M) and tweaking the number of generations (-G1), but no matter what I do, the heap size appears roughly constant and low (measured in kB, not MB or GB). Yet when I observe the program in htop, its resident memory steadily climbs.
What this indicates to me is that the memory leak is coming from somewhere besides the GHC heap. My program makes use of dependencies, specifically Haskell's yaml library, which wraps the C library libyaml; it is possible that the leak is in the foreign pointers it holds to objects allocated by libyaml.
My question is threefold:
What places besides the GHC heap can memory leak from in a Haskell program?
What tools can I use to track these down?
What changes to my source code need to be made to avoid these types of leaks, as they seem to differ from the more commonly experienced space leaks in Haskell?
This certainly sounds like foreign pointers aren't being finalized properly. There are several possible reasons for this:
The underlying C library doesn't free memory properly.
The Haskell library doesn't set up finalization properly.
The ForeignPtr objects aren't being freed.
I think there's actually a decent chance that it's option 3. If the RTS consistently finds enough memory in the first GC generation, then it just won't bother running a major collection. Fortunately, this is the easiest to diagnose. Just have your program run System.Mem.performGC every so often. If that fixes it, you've found the bug and can tweak just how often you want to do that.
Another possible issue is that you could have foreign pointers lying around in long-lived thunks or other closures. Make sure you don't.
One particularly strong possibility when working with a wrapped C library is that the wrapper functions will return ByteStrings whose underlying arrays were allocated by C code. So any ByteStrings you get back from yaml could potentially be off-heap.
A few minutes ago my friends and I were solving some algorithmic problems on leetcode.com and sharing our solutions. We used high-level languages, and when new memory is allocated by Array.new(128) in Ruby or int[] map = new int[128]; in Java, it is already filled with zero-like values (nil or 0 respectively).
So it's guaranteed that a high-level program gets a cleared chunk of memory.
And here I have a question: in a C or assembler program, could it happen that a newly allocated chunk of memory still contains unchanged data from another process?
And thus one process could get data from another process, maybe even data from another user who worked on the system some time ago. Could this be a way for information to leak?
Does the OS clear memory before sharing it among processes? And if so, isn't it very expensive to run so many zeroing iterations?
Thank you.
UPD: http://www.cplusplus.com/articles/ETqpX9L8/ — it looks like valuable data needs to be cleared manually in "lower-level" languages to prevent it leaking to other processes.
Yes, in lower-level languages where memory is not initialized, it can contain leftover valuable data: always from earlier allocations in your own process, and, on systems that do not zero pages before handing them out, even from other processes. There have been encryption-key leakage attacks done exactly this way, by continually allocating memory and scanning it for what looks like useful information.
Security-sensitive programs that store passwords, crypto keys, etc. should always clear that memory as soon as possible after use. It's not only to prevent leaks through re-allocated memory; there are also other attack vectors, like RAM dumps, that could be used to extract secrets. Always zero or randomize your memory when you are done with it.
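One subtlety when doing this in C: a plain memset just before free can be removed by the optimiser as a dead store. A minimal sketch of one common workaround, writing through a volatile function pointer (where available, explicit_bzero(3) or C11 Annex K's memset_s do the same job and are preferable):

    #include <stddef.h>
    #include <string.h>

    /* The compiler may not assume anything about a call made through
     * a volatile pointer, so these stores cannot be optimised away
     * even if the buffer is never read again. */
    static void *(*const volatile memset_v)(void *, int, size_t) = memset;

    void secure_wipe(void *buf, size_t len)
    {
        memset_v(buf, 0, len);
    }

    /* usage:
     *   unsigned char key[32];
     *   ... derive and use the key ...
     *   secure_wipe(key, sizeof key);
     */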
I am confused about using Buffer.allocUnsafe() and Buffer.alloc(). I know that Buffer.allocUnsafe() creates a buffer containing whatever old data happens to be in memory, but why do I need such a thing if Buffer.alloc() creates a buffer with zero-filled data?
In Node.js a Buffer is an abstraction over raw memory, so if you allocate one in an unsafe way there is a real risk of finding leftover data, even fragments of source code, in the buffer instance. Try running console.log(Buffer.allocUnsafe(10000).toString('utf-8')) and I guarantee that you will see some code in your stdout.
Allocation is a synchronous operation, and we know that single-threaded Node.js doesn't really feel good about synchronous stuff. Unsafe allocation is much faster than safe allocation because the buffer sanitization (zero-filling) step takes time. Safe allocation is, well, safe, but there is a performance trade-off.
I'd suggest sticking to safe allocation first; if you end up with low performance, you can think of ways to use unsafe allocation without exposing private data. Just keep in mind that the allocUnsafe method has the word "unsafe" in it for a reason. E.g., if you are going to pursue a compliance certification like PCI DSS, I'm pretty sure the QSA will notice it and have a lot of questions.
Buffer.alloc(size, fill, encoding) -> returns a new initialized Buffer of the specified size. This method is slower than Buffer.allocUnsafe(size) but guarantees that newly created Buffer instances never contain old data that is potentially sensitive.

Buffer.allocUnsafe(size) -> the Buffer is uninitialized; the allocated segment of memory might contain old data that is potentially sensitive. Using a Buffer created by Buffer.allocUnsafe() without completely overwriting the memory can allow this old data to be leaked when the Buffer memory is read.

Note: While there are clear performance advantages to using Buffer.allocUnsafe(), extra care must be taken in order to avoid introducing security vulnerabilities into an application.
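The same trade-off exists one level down in C, which is roughly where these Buffer behaviours come from (the analogy is loose, since allocUnsafe actually slices from a pre-allocated internal pool): malloc returns uninitialized memory, calloc zero-fills it.

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* Like Buffer.allocUnsafe(): contents are indeterminate, and
         * reading them before writing is undefined behaviour in C. */
        unsigned char *fast = malloc(64);

        /* Like Buffer.alloc(): guaranteed zero-filled, at the cost of
         * an extra pass over the memory (or a pre-zeroed page). */
        unsigned char *safe = calloc(64, 1);

        if (!fast || !safe)
            return 1;

        printf("safe[0] = %u (always 0)\n", safe[0]);

        free(fast);
        free(safe);
        return 0;
    }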
We need to store about 1GB of contiguous bytes in memory for long periods of time (weeks to months), and we are trying to choose a Vector/Array library. I have two concerns that I can't find the answer to.
Vector.Unboxed seems to store the underlying bytes on the GHC heap, where they can be moved around at will by the GC. Periodically moving 1GB of data is something I would like to avoid.
Vector.Storable solves this problem by storing the underlying bytes on the C heap. But everything I've read seems to indicate that it is really only meant for communicating with other languages (primarily C). Is there some reason I should avoid using Vector.Storable for purely internal Haskell usage?
I'm open to a third option if it makes sense!
My first thought was the mmap package, which allows you to "memory-map" a file into memory, using the virtual memory system to manage paging. I don't know if this is appropriate for your use case (in particular, I don't know if you're loading or computing this 1GB of data), but it may be worth looking at.
In particular, I think this prevents the GC from moving the data around (since it's not on the Haskell heap, it's managed by the OS virtual memory subsystem). On the other hand, this interface handles only raw bytes; you couldn't have, say, an array of Customer objects or something.
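For reference, what the mmap package arranges on your behalf is essentially the POSIX mmap(2) call. A C sketch of the OS-level operation (not the package's actual code; "data.bin" is a hypothetical file holding your 1GB of bytes):

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("data.bin", O_RDONLY);
        if (fd == -1) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) == -1) { perror("fstat"); return 1; }

        /* Map the file into the address space.  Pages are faulted in
         * on demand and can be evicted by the OS under memory
         * pressure; nothing lives on the GC-managed heap. */
        unsigned char *p = mmap(NULL, st.st_size, PROT_READ,
                                MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        printf("first byte: %u\n", p[0]);

        munmap(p, st.st_size);
        close(fd);
        return 0;
    }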
I'm writing a memory allocation routine, and it's currently running smoothly. I get my memory from the OS with mmap() in 4096-byte pages. When I start my memory allocator I allocate 1gig of virtual address space with mmap(), and then as allocations are made I divide it up into hunks according to the specifics of my allocation algorithm.
I feel safe allocating a full gig of memory on a whim because I know mmap() doesn't actually put pages into physical memory until I actually write to them.
Now, the program using my allocator might have a spurt where it needs a lot of memory, in which case the OS would eventually have to put a whole gig's worth of pages into physical RAM. The trouble is that the program might then go into a dormant period where it frees most of that gig and uses only minimal amounts of memory. Yet all I really do inside my allocator's MyFree() function is flip a few bits of bookkeeping data to mark the previously used gig as free, and I know this doesn't cause the OS to remove those pages from physical memory.
I can't use something like munmap() to fix this problem, because the nature of the allocation algorithm is such that it requires a contiguous region of memory without any holes in it. Basically I need a way to tell the OS "Listen, you can take these pages out of physical memory and clear them to 0, but please remap them on the fly when I need them again, as if they were freshly mmap()'d"
What would be the best way to go about this?
Actually, after writing this all up, I just realized that I can probably do an munmap() followed immediately by a fresh mmap(). Would that be the correct way to go about it? I get the sense that there's probably some more efficient way to do this.
You are looking for madvise(addr, length, MADV_DONTNEED). From the manpage:
MADV_DONTNEED: Do not expect access in the near future. (For the time being, the application is finished with the given range, so the kernel can free resources associated with it.) Subsequent accesses of pages in this range will succeed, but will result either in reloading of the memory contents from the underlying mapped file (see mmap(2)) or zero-fill-on-demand pages for mappings without an underlying file.
Note especially the language about how subsequent accesses will succeed but revert to zero-fill-on-demand (for mappings without an underlying file).
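For example (a sketch; madvise requires addr and length to be page-aligned, which they will be if your allocator works in whole 4096-byte pages as described):

    #include <stdio.h>
    #include <sys/mman.h>

    /* Called after MyFree() has marked a run of pages free in the
     * bookkeeping.  The mapping stays intact: the kernel may reclaim
     * the physical pages, and the next access to them simply faults
     * in fresh zero-filled pages, as if freshly mmap()'d. */
    static void release_pages(void *addr, size_t length)
    {
        if (madvise(addr, length, MADV_DONTNEED) == -1)
            perror("madvise(MADV_DONTNEED)");
    }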
Your thinking-out-loud alternative of an munmap followed immediately by another mmap would also work, but it risks kernel-side inefficiencies, because the kernel is no longer tracking the allocation as a single contiguous region; if there are many such unmap-and-remap events, the kernel-side data structures might wind up quite bloated.
By the way, with this kind of allocator it's very important that you use MAP_NORESERVE for the initial allocation, and then touch each page as you allocate it, and trap any resulting SIGSEGV and fail the allocation. (And you'll need to document that your allocator installs a handler for SIGSEGV.) If you don't do this your application will not work on systems that have disabled memory overcommit. See the mmap manpage for more detail.
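A sketch of that reservation pattern (Linux-flavoured; the names reserve_pool and commit_page are made up for illustration, and real code would need to think about threads and handler re-entrancy):

    #include <setjmp.h>
    #include <signal.h>
    #include <string.h>
    #include <sys/mman.h>

    #define POOL_SIZE (1UL << 30)      /* 1 gig of address space */

    static sigjmp_buf touch_env;

    static void segv_handler(int sig)
    {
        (void)sig;
        siglongjmp(touch_env, 1);      /* unwind out of the faulting write */
    }

    /* Reserve address space only: with MAP_NORESERVE no swap space is
     * reserved up front, so the mapping succeeds even under strict
     * overcommit settings. */
    void *reserve_pool(void)
    {
        return mmap(NULL, POOL_SIZE, PROT_READ | PROT_WRITE,
                    MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    }

    /* Touch one page so it is actually backed by memory; return 0 if
     * the kernel refuses (SIGSEGV), so the allocation can fail
     * cleanly instead of the process being killed. */
    int commit_page(void *page)
    {
        struct sigaction sa, old;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = segv_handler;
        sa.sa_flags = SA_NODEFER;  /* don't leave SIGSEGV blocked after the jump */
        sigaction(SIGSEGV, &sa, &old);

        int ok = 1;
        if (sigsetjmp(touch_env, 1) == 0)
            *(volatile char *)page = 0;   /* first write commits the page */
        else
            ok = 0;

        sigaction(SIGSEGV, &old, NULL);
        return ok;
    }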