Size of native memory in Android Profiler

In Android Studio Profiler, there are two places which display the size of native memory occupied by an app.
The first place is in the horizontal bar. Profiler documentation describes this as: "Native: Memory from objects allocated from C or C++ code".
The second place is the Native Size column in the app's heap dump. The documentation describes it as: "Native Size: Total amount of native memory used by this object type (in bytes)".
In my case, the horizontal bar displays 30.12 MB, while the heap dump's Native Size column adds up to around 9.28 MB (summing over all objects in the app heap with a non-zero Native Size).
Why are these two sizes different?

For the "horizontal bar" if you look closely you'll realize that the size of the memory used by graphics is 0. reason for this is that on some devices, bitmaps and other graphical assets are handled in the native memory. So extra memory other than the 9.28mb is most likely these graphical assets.

Related

Peak heap memory usage of an OCaml program

I would like to compute the peak memory usage of my OCaml program when it is running in compiled form as native code. I considered using the stats API in the Gc module, but it seems to return a snapshot at the time it is called. Is there some information in the Gc module or some other module that I can use to get peak heap usage just before my program terminates?
You can get the current size of the major heap using Gc.stat: see the live_words field and multiply it by the word size in bytes (8 on a 64-bit system) to get the size in bytes. It doesn't matter much, but you can also add the size of the minor heap to the calculation, which is available via Gc.get (): see the minor_heap_size field (again in words).
You can create an alarm with Gc.create_alarm to check the size of the heap after each major collection to get the maximum size ever used.

Memory leak in Chrome and difference between snapshot size and memory allocation

I am currently trying to find a memory leak on a page that receives some updates over WebSockets. So the first thing I did was check the Chrome Task Manager, which shows that the memory allocated for the tab keeps growing. After that I checked with the Timeline tool (and forced GC a few times), and the memory seems to behave quite normally.
There are some HTML nodes being added (green line), so I am assuming some nodes are still referenced from JS code. But then, when I get to the profiler (Record Heap Allocation), I see some strange behaviour: the snapshot itself is 109 MB.
But after I stop profiling, memory jumps up, and it is nowhere near the 109 MB of the snapshot. Examples of what I've seen:
before snapshot: 361 MB, after snapshot: 723 MB, snapshot: 89 MB
before snapshot: 329 MB, after snapshot: 612 MB, snapshot: 54.4 MB
before snapshot: 450 MB, after snapshot: 773 MB, snapshot: 109 MB
I see a few nodes that are still referenced, but their retained size is a lot smaller than the size of the snapshot.
So what I want to know is: why does the Chrome profiler behave this way (the difference between snapshot size and memory consumption), and how do I find what is consuming the memory?
That snapshot size only includes the used JS heap size. The Task Manager shows you the total process size, which is a lot more than just the JS heap. Also, a big chunk of memory might be occupied by typed arrays, which have their buffers allocated off the JS heap. Can you switch from the "Summary" view to "Statistics" in the heap snapshot and see what the breakdown of the JS heap looks like?

node.js RSS memory grows over time despite fairly consistent heap sizes

I've got a node.js application where the RSS memory usage seems to keep growing despite the heapUsed/heapTotal staying relatively constant.
Here's a graph of the three memory measurements taken over a week (from process.memoryUsage()):
You may note that there's a somewhat cyclical pattern - this corresponds with the application's activity throughout each day.
There actually does seem to be a slight growth in the heap, although it's nowhere near that of the RSS growth. So I've been taking heap dumps every now and then (using node-heapdump), and using Chrome's heap compare feature to find leaks.
One such comparison might look like the following (sorted by size delta in descending order):
What actually shows up does depend on when the snapshot was taken (eg sometimes more Buffer objects are allocated etc) - here I've tried to take a sample which demonstrates the issue best.
The first thing to note is that the sizes on the left side (203 MB vs 345 MB) are much higher than the heap sizes shown in the graph. Secondly, the size deltas clearly don't match up with the 142 MB difference. In fact, sorting by size delta in ascending order, many objects have been deallocated, which means the heap should be smaller!
Does anyone have any idea on:
why is this the case? (RSS constantly growing with stable heap size)
how can I stop this from happening, short of restarting the server every now and then?
Other details:
Node version: 0.10.28
OS: Ubuntu 12.04, 64-bit
Update: list of modules being used:
async v0.2.6
log4js v0.6.2
mysql v2.0.0-alpha7
nodemailer v0.4.4
node-time v0.9.2 (for timezone info, not to be confused with nodetime)
sockjs v0.3.8
underscore v1.4.4
usage v0.3.9 (for CPU stats, not used for memory usage)
webkit-devtools-agent v0.2.3 (loaded but not activated)
heapdump v0.2.0 is loaded when a dump is made.
Thanks for reading.
The difference you see between RSS usage and heap usage is Buffers.
"A Buffer is similar to an array of integers but corresponds to a raw memory allocation outside the V8 heap"
https://nodejs.org/api/buffer.html#buffer_buffer

how to display top heap allocations grouped by stack?

I have a dump of a process that was running with the user-stack-traces flag on. I am trying to analyze a leak from WinDbg. Using the instructions here, I am able to see the top allocations grouped by allocation size, list all allocations of a specific size, and display the stack of an allocation given its address.
Is there a way to display the top allocations grouped by stack? (By 'top' I mean the highest contributors to total heap size or total allocation count.) All the information is already in the dump; I just need the right WinDbg extension. I would be surprised if no one has written such an extension so far.

How is total memory in Java calculated

If I have 8GB RAM and I use the following on a 64-bit JVM
max heap size 6144MB
max perm gen space 2048MB
stack size 2MB
Q1: Is perm gen space allocated from the max heap, or is it separate?
Q2: If it is separate, will the JVM start with the above settings, or will it give an error because heap + perm gen + stack + program data would exceed the total RAM?
First of all remember that the parameter you set with -Xmx (since that's the way I suppose you are setting your heap size) is the size of heap available to your Java code, not the amount of memory the JVM will consume. The difference comes from housekeeping structures that the JVM keeps (garbage collector structures, JIT overhead etc.), sometimes memory allocated by native code, buffers, and so on. The size of this additional memory depends on JVM version, the app you are running, and other factors, but I've seen JVMs allocate twice as much RAM as the heap size visible to the application. For the average case, I usually consider 50% to be a safe margin, with 20-30% acceptable. If you set your heap size to be close to amount of RAM in your machine, you will hit the swap and performance will suffer.
Now for the enumerated questions:
Perm gen is a separate space from the heap, at least in Oracle's JDK 6. It is separate because it follows completely different memory management rules than the regular heap. By the way, 2 GB of perm gen space is huge - are you sure you really need it?
Regarding the second question, see above. If this is Oracle's JDK, you are likely to run into trouble: perm gen and heap add up, and on top of that there will be additional memory, usually on the order of 20-50% of your 6 GB heap, so together with the heap and perm space this will be more than your RAM. At first this setup may appear to work, but once both the heap and the perm gen space come close to their configured limits, you could run out of memory.
Heap and perm gen are different memory areas of the JVM, so with these settings you would be consuming virtually all the memory on the system. It is always better to leave about 20% of RAM free so the OS and other tasks can run properly.
Also, 2 GB for perm space is a huge figure. Have you looked at optimising your JARs so that only the relevant classes are present on the classpath?
This depends on the JVM and the version of the JVM.
In HotSpot Java 6, the PermGen space is independent of the max heap size arguments (-Xmx and -Xms control only the Young/Old Gen sizes). The PermGen size is controlled by -XX:PermSize and -XX:MaxPermSize. See Java SE 6 HotSpot[tm] Virtual Machine Garbage Collection Tuning.
UPDATE: In HotSpot Java 8 there is no PermGen space anymore; class metadata is stored in a native-memory Metaspace, while interned strings and class statics live in the regular heap.
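To see for yourself that the heap (bounded by -Xmx) and the non-heap areas (perm gen, or metaspace on Java 8+, plus code cache and so on) are tracked separately, here is a minimal sketch using the standard java.lang.management API; the MemoryReport class name is just for illustration:

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;
    import java.lang.management.MemoryUsage;

    public class MemoryReport {
        public static void main(String[] args) {
            MemoryMXBean memoryBean = ManagementFactory.getMemoryMXBean();

            // Heap: the area bounded by -Xmx (young + old generations).
            MemoryUsage heap = memoryBean.getHeapMemoryUsage();

            // Non-heap: perm gen (or metaspace on Java 8+), code cache and other
            // JVM-managed areas accounted for outside the -Xmx limit.
            MemoryUsage nonHeap = memoryBean.getNonHeapMemoryUsage();

            System.out.printf("heap:     used=%s, committed=%s, max=%s%n",
                    mb(heap.getUsed()), mb(heap.getCommitted()), mb(heap.getMax()));
            System.out.printf("non-heap: used=%s, committed=%s, max=%s%n",
                    mb(nonHeap.getUsed()), mb(nonHeap.getCommitted()), mb(nonHeap.getMax()));
        }

        // MemoryUsage.getMax() returns -1 when no limit is configured.
        private static String mb(long bytes) {
            return bytes < 0 ? "undefined" : (bytes / (1024 * 1024)) + " MB";
        }
    }

Run it with, for example, -Xmx6144m -XX:MaxPermSize=2048m on a JDK 6 JVM and the non-heap usage reported here is not subtracted from the 6 GB heap maximum.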
