In Node.js, when is data stored on the heap?

In C you explicitly ask for and manage memory on the heap, so interaction with the heap is well defined/apparent. How do you reason about this in Node.js?
Sub-questions:
where/how are functions stored?
are there certain objects/primitives that always get stored on the heap? (e.g. buffers)
does data migrate from the stack over to the heap? when?
References to good resources on this subject would also be appreciated, thanks.

You don't need to care about stack vs. heap, nor about freeing memory; it happens automatically, because Node.js (via V8) uses a precise tracing garbage collector. Some data is stored on the GC heap and some on the stack, but you generally can't tell which, because it depends on optimisations performed by the JIT compiler at runtime. Profiling tools might provide application-specific insight.
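If you do want to observe the GC heap from inside a program, the built-in process.memoryUsage() call reports its current size; here is a minimal sketch (the allocation is just an arbitrary example):

// Watch V8 heap usage around an allocation.
const before = process.memoryUsage().heapUsed;

const big = new Array(1e6).fill('x');   // ordinary objects and arrays live on the GC heap

const after = process.memoryUsage().heapUsed;
console.log(`heapUsed grew by ~${((after - before) / 1024 / 1024).toFixed(1)} MB`);
console.log(big.length);                // keep a live reference so the array isn't collected early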
As for resources other than memory (e.g. files and sockets), use finally:
const fs = require('fs');

const file = fs.openSync(…);      // acquire the resource
try {
  …                               // use it
} finally {
  fs.closeSync(file);             // always runs, even if an error was thrown
}

Related

How to measure the amount of static memory in a Linux executable?

I'm a compiler researcher and I'm currently studying the impact of different compilation strategies on the final size of an executable. For instance, if a strategy generates multiple versions of the same function that bake in some of the parameters (constant propagation), the amount of code can be expected to increase; dead code elimination, on the other hand, decreases it. Similarly, if some optimization relies on a static memory pool whose size is determined at compile time, I'd like to know how much space it takes.
Most tools I've looked into (e.g. valgrind) seem to focus on stack and heap usage, but don't say much about static memory. Is there some other tool I'm missing?
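As an aside, here is a tiny hedged illustration (my own, not from the post) of the function-cloning effect of constant propagation described above; every specialised clone adds to the executable's text segment, which is exactly the static size effect being measured:

/* One generic function versus the specialised clones a compiler might emit
   when the `factor` parameter is a known constant. */
#include <stdio.h>

int scale(int x, int factor) { return x * factor; }   /* generic: one copy      */
int scale_by_2(int x)        { return x * 2; }        /* clone for factor == 2  */
int scale_by_10(int x)       { return x * 10; }       /* clone for factor == 10 */

int main(void) {
    printf("%d %d %d\n", scale(21, 2), scale_by_2(21), scale_by_10(4));
    return 0;
}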
Try the New Relic monitoring tool; it can help you collect garbage-collection data and see which API is doing the allocating. On Linux there is only physical memory and swap; at the application level, programs use heap memory, and New Relic can be helpful for observing that.
New Relic link to get the stack details:
https://newrelic.com/ (a 100 GB free trial is available so you can see whether it meets your requirements).

What memory leaks can occur outside the view of GHC's heap profiler

I have a program that exhibits the behavior of a memory leak. It gradually takes up all of the system's memory until it fills all swap space, and then the operating system kills it. This happens once every several days.
I have profiled the heap extensively in a number of ways (-hy, -hm, -hc), tried limiting the heap size (-M128M), and tweaked the number of generations (-G1), but no matter what I do the heap size appears roughly constant and always low (measured in kB, not MB or GB). Yet when I observe the program in htop, its resident memory climbs steadily.
What this indicates to me is that the memory leak is coming from somewhere other than the GHC heap. My program makes use of dependencies, specifically Haskell's yaml library, which wraps the C library libyaml; it is possible that the leak is in the foreign pointers it holds to objects allocated by libyaml.
My question is threefold:
What places besides the GHC heap can memory leak from in a Haskell program?
What tools can I use to track these down?
What changes to my source code need to be made to avoid these types of leaks, as they seem to differ from the more commonly experienced space leaks in Haskell?
This certainly sounds like foreign pointers aren't being finalized properly. There are several possible reasons for this:
The underlying C library doesn't free memory properly.
The Haskell library doesn't set up finalization properly.
The ForeignPtr objects aren't being freed.
I think there's actually a decent chance that it's option 3. If the RTS consistently finds enough memory in the first GC generation, it just won't bother running a major collection. Fortunately, this is the easiest to diagnose: just have your program run System.Mem.performGC every so often. If that fixes it, you've found the bug and can tweak just how often you want to do that.
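A minimal sketch of that idea (the helper name and the one-minute interval are my own choices, not from the answer):

-- Force a major GC periodically from a background thread so that
-- ForeignPtr finalizers get a chance to run.
import Control.Concurrent (forkIO, threadDelay)
import Control.Monad (forever, void)
import System.Mem (performGC)

periodicGC :: Int -> IO ()           -- interval in seconds
periodicGC secs = void . forkIO . forever $ do
  threadDelay (secs * 1000000)       -- threadDelay takes microseconds
  performGC

main :: IO ()
main = do
  periodicGC 60
  -- ... the long-running program goes here ...
  return ()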
Another possible issue is that you could have foreign pointers lying around in long-lived thunks or other closures. Make sure you don't.
One particularly strong possibility when working with a wrapped C library is that the wrapper functions will return ByteStrings whose underlying arrays were allocated by C code. So any ByteStrings you get back from yaml could potentially be off-heap.
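If that turns out to be the cause, one hedged workaround is to copy such strings into fresh, RTS-managed storage as soon as they arrive; Data.ByteString.copy does that (the detach name is my own):

import qualified Data.ByteString as B

-- Make a copy with its own freshly allocated storage, so the result is no
-- longer tied to whatever buffer the original ByteString pointed into.
detach :: B.ByteString -> B.ByteString
detach = B.copy

main :: IO ()
main = do
  let fromC = B.pack [104, 105]      -- stand-in for a string returned by the C wrapper
  print (detach fromC)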

Data security during dynamic memory allocation

A few minutes ago, my friends and I were solving some algorithmic problems on leetcode.com and sharing our solutions. We used high-level languages, and when new memory is allocated, by Array.new(128) in Ruby or int[] map = new int[128]; in Java, it is already filled with zero-like values: nil or 0, respectively.
So a high-level program is guaranteed to get cleared memory.
And here I have a question: in a C or assembler program, could it happen that a newly allocated chunk of memory still holds another process's data, unchanged?
Could one process thereby obtain the data of another process, perhaps even data from another user who worked on the system some time ago? Could this be a way for information to leak?
Does the OS clear memory before sharing it among processes? And if so, isn't it very expensive to run so many iterations?
Thank you.
UPD: judging by http://www.cplusplus.com/articles/ETqpX9L8/, it looks like valuable data in "lower-level" languages needs to be cleared manually to prevent it leaking to other processes.
Yes, in lower-level languages where memory is not initialized, it could contain valuable leftovers from other processes. There have been encryption-key leakage attacks carried out this way, by continually allocating memory and scanning it for what looks like useful information.
Security-sensitive programs that store passwords, crypto keys, etc. should always clear that memory as soon as possible after use. This is not only to prevent leaks through re-allocated memory; there are also other attack vectors, such as RAM dumps, that could be used to extract secrets. Always zero or randomize your memory when you are done with it.
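A minimal sketch of that last point, assuming glibc 2.25+ or a BSD where explicit_bzero is available (memset_s from C11 Annex K is an alternative on other platforms):

/* Wipe a secret before its memory is freed and possibly recycled.
   Unlike plain memset(), explicit_bzero() is not removed by the
   compiler as a "dead store". */
#include <stdlib.h>
#include <string.h>

int main(void) {
    size_t len = 32;
    unsigned char *key = malloc(len);
    if (!key) return 1;

    /* ... fill `key` with secret material and use it ... */

    explicit_bzero(key, len);   /* zero the secret */
    free(key);                  /* only then release the memory */
    return 0;
}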

1GB Vector, will Vector.Unboxed give trouble, will Vector.Storable give trouble?

We need to store a large block of contiguous bytes, about 1 GB, in memory for long periods of time (weeks to months), and are trying to choose a Vector/Array library. I have two concerns that I can't find answers to.
Vector.Unboxed seems to store the underlying bytes on the heap, where they can be moved around at will by the GC. Periodically moving 1 GB of data is something I would like to avoid.
Vector.Storable solves this problem by storing the underlying bytes in the C heap. But everything I've read seems to indicate that this is really only meant for communicating with other languages (primarily C). Is there some reason I should avoid using Vector.Storable for internal Haskell usage?
I'm open to a third option if it makes sense!
My first thought was the mmap package, which allows you to "memory-map" a file into memory, using the virtual memory system to manage paging. I don't know if this is appropriate for your use case (in particular, I don't know if you're loading or computing this 1GB of data), but it may be worth looking at.
In particular, I think this prevents the GC moving the data around (since it's not on the Haskell heap, it's managed by the OS virtual memory subsystem). On the other hand, this interface handles only raw bytes; you couldn't have, say, an array of Customer objects or something.
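For completeness, here is a minimal sketch of the Vector.Storable route from the question; as far as I know the payload lives behind a ForeignPtr in pinned memory, so the GC will not move it, and it still behaves like an ordinary vector (the 64 MiB size is just to keep the example light):

import qualified Data.Vector.Storable as VS
import Data.Word (Word8)

main :: IO ()
main = do
  -- one contiguous, pinned block of bytes
  let bytes = VS.replicate (64 * 1024 * 1024) (0 :: Word8)
  print (VS.length bytes, VS.sum bytes)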

Detect and remove Memory Leak in Linux Application

We have a very large project, basically an application that uses Linux application programming and runs on a PowerPC processor. The project was initially developed by another company; we acquired it from them and are now maintaining it.
The application is reported to have a lot of memory leak issues. Since this is a large project, it is not possible to go through each source file and find the leaks by hand. We have used Valgrind, mpatrol and other memory leak detection tools. These tools did not help much, and the memory leaks have not decreased by a significant percentage.
In this situation, how do we go about reducing the memory leaks by a significant amount? Is there a general method people use in these cases, other than memory leak detection tools like the ones mentioned above?
Valgrind is usually among the best tools for this task. If it does not work well for you, there are only a couple of things you can still do.
First question: what language is the application written in? Valgrind is very good for C and C++, but will not help you with garbage-collected or scripting languages. So check the language first. There might be something similar for Java, but I have not used Java much, so you would have to ask someone else.
Play around a lot with Valgrind's settings. One example is using --leak-check=full or similar options. There are also plugins for Valgrind that can enhance its detection capabilities.
You say the application was reported to have a memory leak. How was this detected? Did the application detect it by itself? If it was detected by the application on its own, without any external tools, this probably means someone has added their own memory tracker inside the application. Custom memory trackers, memory pools, etc. confuse Valgrind and any other leak detection system very badly. So if any custom memory handling is present in the application, your only choice is either to deactivate it (if possible) or to hook into that custom mechanism. How that could be done depends entirely on your application.
Add your own memory tracker. For example, in C++ it is possible to hook into new/delete calls and have them track the memory. There are a couple of libraries you can use for this, or you can write your own new/delete replacement in about 500 LOC. If you decide to use this method, be sure to read a lot of tutorials on replacing new/delete, since there are several things about this task that are unusual in the C++ world.
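A hedged sketch of such a tracker (a real replacement also has to cover the aligned and nothrow overloads, and report somewhere more useful than stdout):

// Global operator new/delete replacement that keeps a running byte count.
#include <atomic>
#include <cstddef>
#include <cstdio>
#include <cstdlib>
#include <new>

static std::atomic<long long> g_live_bytes{0};

// Header big enough to keep the returned pointer maximally aligned.
static constexpr std::size_t kHeader = alignof(std::max_align_t);

void* operator new(std::size_t size) {
    void* base = std::malloc(size + kHeader);
    if (!base) throw std::bad_alloc();
    *static_cast<std::size_t*>(base) = size;        // remember the block size
    g_live_bytes += static_cast<long long>(size);
    return static_cast<char*>(base) + kHeader;
}

void operator delete(void* ptr) noexcept {
    if (!ptr) return;
    char* base = static_cast<char*>(ptr) - kHeader;
    g_live_bytes -= static_cast<long long>(*reinterpret_cast<std::size_t*>(base));
    std::free(base);
}

int main() {
    int* xs = new int[1000];   // the array forms forward to the replacements above
    std::printf("live: %lld bytes\n", g_live_bytes.load());
    delete[] xs;
    std::printf("live: %lld bytes\n", g_live_bytes.load());
}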
What makes you so sure there is a memory leak in the application (i.e. how was this detected)? If a tool merely reported huge amounts of allocated memory, that does not necessarily mean there is an actual memory leak. A memory leak means that the handles to the memory are lost, so it becomes impossible to ever reach and free that memory again. If your application simply acquires a lot of memory and keeps it reachable, you probably have a completely different problem. For example, you might be using an algorithm with bad space complexity at one point or another, leading to many allocations. In that case you do not need a leak detector, but rather a memory profiler, which gives you a more detailed overview of the memory footprint of individual parts of the code. However, I have never used a profiler for this kind of task, so I cannot give you any more hints on this.
You could replace all memory allocation calls with calls to your own allocation methods, which call the original methods and at the same time record how much memory is used and where it was allocated. This will allow you to find the leaks and eliminate them by hand.
There might also be automated tools that allow you to do this - not sure, haven't used any. But this method works.
Perhaps you might also consider using Boehm's garbage collector (that is, using GC_malloc instead of malloc, etc., and not bothering to free data).
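A hedged sketch of what that looks like (assumes the bdwgc library and headers are installed; link with -lgc):

/* Allocate through the Boehm collector and never call free();
   unreachable blocks are reclaimed automatically. */
#include <gc.h>
#include <stdio.h>

int main(void) {
    GC_INIT();
    for (int i = 0; i < 100000; i++) {
        int *p = GC_MALLOC(100 * sizeof(int));   /* note: no matching free() */
        p[0] = i;
    }
    printf("GC heap size: %zu bytes\n", GC_get_heap_size());
    return 0;
}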
