Xamarin monodroid - how to track .NET memory leaks

I have lots of memory allocated in my monodroid program which doesn't appear to belong directly to either the Dalvik or the Mono heap. Also, I can't figure out how to track .NET memory leaks.
When I call
adb shell dumpsys meminfo MyProgram.Droid
This is the output:
** MEMINFO in pid 1364 [MyProgram.Droid] **
                             Shared  Private     Heap     Heap     Heap
                      Pss    Dirty    Dirty     Size    Alloc     Free
                   ------   ------   ------   ------   ------   ------
    Native       36       24       36    38080    37775      124
    Dalvik     6934    15164     6572    16839    15384     1455
    Cursor        0        0        0
    Ashmem        0        0        0
 Other dev        4       36        0
  .so mmap    12029     2416     9068
 .jar mmap        0        0        0
 .apk mmap    16920        0        0
 .ttf mmap        3        0        0
 .dex mmap     2299      296        8
Other mmap       64       24       36
   Unknown    28920     8216    28728
     TOTAL    67209    26176    44448    54919    53159     1579
I assume that the "Unknown" section is the Mono framework, including the .NET heap. However, when I call
GC.GetTotalMemory(true)
it tells me that only 5 MB of memory is allocated. That leaves 23 MB which I cannot trace (and there are 38 MB of allocated native heap).
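For comparison, both numbers can be logged from inside the app at runtime. The sketch below is only illustrative: it assumes a Xamarin.Android project and assumes that the binding exposes android.os.Debug's native-heap getters as the static properties Android.OS.Debug.NativeHeapSize and NativeHeapAllocatedSize.
using System;
using Android.OS;
using Android.Util;

static class HeapLogger
{
    // Hypothetical helper: call it periodically, or after a heavy operation,
    // to compare the Mono heap with Android's native heap counters.
    public static void Dump(string tag = "HeapLogger")
    {
        // Managed (.NET/Mono) heap, after forcing a full collection.
        long managed = GC.GetTotalMemory(forceFullCollection: true);

        // Native heap counters from android.os.Debug (assumed binding names).
        long nativeSize  = Debug.NativeHeapSize;
        long nativeAlloc = Debug.NativeHeapAllocatedSize;

        Log.Info(tag, string.Format(
            "managed={0:N0} B  native size={1:N0} B  native allocated={2:N0} B",
            managed, nativeSize, nativeAlloc));
    }
}
Watching how these two figures move relative to the meminfo totals helps tell a managed leak from a native one.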
Additionally, I don't see that Xamarin has any tools for tracking .NET memory leaks. I've enabled garbage-collection logging with
adb shell setprop debug.mono.log gc,gref
but the output is incredibly verbose, hard to read, and doesn't even include allocation sizes.
At this point I don't know what to do to track down the resulting leaks. Since the allocations appear to be on the native heap, do I need to use the NDK in order to track down what's going on? Are there any tools I can use on the C# side to track the .NET leaks?
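One low-tech check that does work from plain C#, without extra tooling, is to hold a WeakReference to an object you suspect is leaking, force a collection, and see whether it survives. This is only a sketch of that idea; the LeakProbe helper is hypothetical, not a Xamarin API.
using System;
using System.Collections.Generic;

// Hypothetical leak probe: register an object when you expect it to become
// garbage, then call Check() later (e.g. after navigating away from a page).
static class LeakProbe
{
    static readonly List<KeyValuePair<string, WeakReference>> tracked =
        new List<KeyValuePair<string, WeakReference>>();

    public static void Track(string name, object suspect)
    {
        tracked.Add(new KeyValuePair<string, WeakReference>(
            name, new WeakReference(suspect)));
    }

    public static void Check()
    {
        // Force a full collection so anything unreachable really is freed.
        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();

        foreach (var entry in tracked)
            Console.WriteLine("{0}: {1}", entry.Key,
                entry.Value.IsAlive ? "still alive (possible leak)" : "collected");
    }
}
If a tracked object refuses to die, the usual suspects on Xamarin.Android are event handlers that were never unsubscribed, statics, and managed objects kept alive from the Java side via GREFs, which is where the gref log becomes useful despite its verbosity.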
Thanks!

Related

How do you fix memory leak in node.js

I have been struggling for the past day with this particular case in my app.
Throughout the day, the application fetches data from another database (around 400k entries) and processes it. After processing, it maps the items to collections inside this application's own database and saves them.
Everything goes fine until the end of the function. Currently I invoke it through an HTTP request, but it is going to be a cron job. Basically, after the response is sent, CPU usage does not drop and the heap memory is not freed. I am just wondering what could be happening.
I read a little about identifying memory leaks and found this. [2] is the process that was used to process the data.
[ 1] App   Mem:  68 MB   CPU:   1 %   online
[ 2] App   Mem: 182 MB   CPU: 135 %   online
[ 3] App   Mem:  60 MB   CPU:   0 %   online
[ 4] App   Mem:  60 MB   CPU:   0 %   online
[ 5] App   Mem:  59 MB   CPU:   0 %   online
As you can clearly see, memory and CPU usage are way higher on that instance than on the other node instances. I thought that finishing the response would end the work, release the memory and bring CPU usage down, but it doesn't.
I use Express.js, follow the MVC pattern and keep all the logic in classes. Is there any way to free up this memory? I have tried to apply some sort of clean-up function, but it did not work :(.
I am running out of ideas on how to identify the problem. Maybe there is something obvious that I am not paying attention to, but to be honest I don't have much experience with Node to find it.

Memory overflow! in Linux

My embedded system runs Linux 3.10.14.
While running, my application prints out this message.
ERR: Memory overflow! free bytes=56000, bytes used=4040000, bytes to allocate=84000
But when I do "free", it seems I have enough free memory.
/ # free
              total       used       free     shared    buffers
Mem:          27652      20788       6864          0          0
-/+ buffers:             20788       6864
Swap:             0          0          0
What could the root cause of the error message be?
Or, how can I use the free memory down to the last byte?
Please comment if I am missing any information.
Thank you!
According to the output of free, there are 27652 total, 20788 used and 6864 free.
The message from your application says it is trying to allocate 84000 bytes but only 56000 bytes are available.
So the question is how much memory your system actually has: the 27652 reported by free, or the roughly 4096000 bytes implied by your application's message (4040000 used + 56000 free)?
Is that error message printed by the system, or by your application's own allocator?

Excessive Gen 2 Free Blocks in Crash Dump

On inspecting a crash dump file for an out of memory exception reported by a client the results of !DumpHeap -stat showed that 575MB of memory is being taken up by 45,000 objects of type "Free" most of which I assume would have to reside in Gen 2 due to the size.
The first places I looked for problems were the large object heap (LOH) and pinned objects. The large object heap with free space included was only 70MB so that wasn't the issue and running !gchandles showed:
GC Handle Statistics:
Strong Handles: 155
Pinned Handles: 265
Async Pinned Handles: 8
Ref Count Handles: 163
Weak Long Handles: 0
Weak Short Handles: 0
Other Handles: 0
which is a very small number of handles (around 600) compared to the number of free objects (45,000). To me this rules out the free blocks being caused by pinning.
I also looked into the free blocks themselves to see if maybe they had a consistent size, but on inspection the sizes varied widely, from just short of 5 MB down to only around 12 bytes or so.
Any help would be appreciated! I am at a loss, since there is fragmentation but no sign of it being caused by the two places I know to look: the large object heap (LOH) and pinned handles.
In which generation are the free objects?
You wrote "I assume [the free objects] would have to reside in Gen 2 due to the size", but size is not related to generations. To find out in which generation the free blocks reside, you can follow these steps:
From !dumpheap -stat -type Free get the method table:
0:003> !dumpheap -stat -type Free
total 7 objects
Statistics:
MT Count TotalSize Class Name
00723538 7 100 Free
Total 7 objects
From !eeheap -gc, get the start addresses of the generations.
0:003> !eeheap -gc
Number of GC Heaps: 1
generation 0 starts at 0x026a1018
generation 1 starts at 0x026a100c
generation 2 starts at 0x026a1000
ephemeral segment allocation context: none
segment begin allocated size
026a0000 026a1000 02731ff4 0x00090ff4(593908)
Large object heap starts at 0x036a1000
segment begin allocated size
036a0000 036a1000 036a3250 0x00002250(8784)
Total Size 0x93244(602692)
------------------------------
GC Heap Size 0x93244(602692)
Then dump only the free objects of a specific generation by passing that generation's start and end addresses (here generation 2, which runs from the start of generation 2 up to the start of generation 1):
0:003> !dumpheap -mt 00723538 0x026a1000 0x026a100c
Address MT Size
026a1000 00723538 12 Free
026a100c 00723538 12 Free
total 2 objects
Statistics:
MT Count TotalSize Class Name
00723538 2 24 Free
Total 2 objects
So in my simple case, there are 2 free objects in generation 2.
Pinned handles
600 pinned objects should not cause 45,000 free memory blocks. Still, from my experience, 600 pinned handles is a lot. But first, check in which generation the free memory blocks reside.
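For reference, pinning-induced fragmentation is easy to reproduce, which can help when comparing against a real dump. The following is just an illustrative sketch, not related to the dump above: it pins every other small buffer so the collector cannot compact the gaps left when the unpinned ones are collected, and those gaps then show up as Free blocks.
using System;
using System.Collections.Generic;
using System.Runtime.InteropServices;

class PinningFragmentationDemo
{
    static void Main()
    {
        var pins = new List<GCHandle>();
        var temp = new List<byte[]>();

        for (int i = 0; i < 10000; i++)
        {
            var buffer = new byte[1024];                 // small-object-heap allocation
            if (i % 2 == 0)
                pins.Add(GCHandle.Alloc(buffer, GCHandleType.Pinned)); // cannot be moved
            else
                temp.Add(buffer);                        // referenced only temporarily
        }

        temp.Clear();       // drop the unpinned buffers
        GC.Collect();       // the gaps they leave cannot be compacted away
        GC.Collect();       // because the pinned neighbours cannot move

        Console.WriteLine("Attach the debugger now and run !dumpheap -stat -type Free");
        Console.ReadLine(); // keep the process alive for inspection

        foreach (var h in pins)
            h.Free();       // release the pinned handles
    }
}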

Ada Task Declaration Causes Memory Leak

I have the following little Ada program:
procedure Leaky_Main is
   task Beer;
   task body Beer is
   begin
      null;
   end Beer;
begin
   null;
end Leaky_Main;
All fairly basic, but when I compile it like this:
gnatmake -g -gnatwI leaky_main.adb
and run it through Valgrind like this:
valgrind --tool=memcheck -v --leak-check=full --read-var-info=yes --leak-check=full --show-reachable=yes ./leaky_main
I get the following error summary:
==2882== 2,104 bytes in 1 blocks are still reachable in loss record 1 of 1
==2882== at 0x4028876: malloc (vg_replace_malloc.c:236)
==2882== by 0x42AD3B8: __gnat_malloc (in /usr/lib/i386-linux-gnu/libgnat-4.4.so.1)
==2882== by 0x40615FF: system__task_primitives__operations__new_atcb (in /usr/lib/i386-linux-gnu/libgnarl-4.4.so.1)
==2882== by 0x406433C: system__tasking__initialize (in /usr/lib/i386-linux-gnu/libgnarl-4.4.so.1)
==2882== by 0x4063C86: system__tasking__initialization__init_rts (in /usr/lib/i386-linux-gnu/libgnarl-4.4.so.1)
==2882== by 0x4063DA6: system__tasking__initialization___elabb (in /usr/lib/i386-linux-gnu/libgnarl-4.4.so.1)
==2882== by 0x8049ADA: adainit (b~leaky_main.adb:142)
==2882== by 0x8049B7C: main (b~leaky_main.adb:189)
==2882==
==2882== LEAK SUMMARY:
==2882== definitely lost: 0 bytes in 0 blocks
==2882== indirectly lost: 0 bytes in 0 blocks
==2882== possibly lost: 0 bytes in 0 blocks
==2882== still reachable: 2,104 bytes in 1 blocks
==2882== suppressed: 0 bytes in 0 blocks
==2882==
==2882== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 21 from 6)
--2882--
--2882-- used_suppression: 21 U1004-ARM-_dl_relocate_object
==2882==
==2882== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 21 from 6)
Does anyone know why this is being reported as an error? I'm fairly sure there is no actual leak, but I would like to know why/how it happens.
Thanks,
The problematic allocation looks to be the task control block (TCB). This has to be retained after the task has completed so that you can say
if Beer'Terminated then
   ...
so I think it's probably an artefact of when Valgrind does the check.
I've only come across this where the task was allocated; it was necessary to wait until 'Terminated was True before deallocating the task, or GNAT happily deallocated the stack but silently didn't deallocate the TCB, leading to a real leak. AdaCore recently fixed this (I don't have the reference; it was on their developer log).
You could use Deleaker for debugging; I prefer it.

How to check usage of different parts of memory?

I have a computer with 2 Intel Xeon CPUs and 48 GB of RAM. The RAM is divided between the CPUs into two parts of 24 GB each. How can I check how much of each specific part is used?
So I need something like htop, which shows how heavily each core is used (see this example), but for memory rather than for cores. Or something that would show which parts (addresses) of memory are used and which are not.
The information is in /proc/zoneinfo; it contains very similar information to /proc/vmstat, except broken down by "Node" (NUMA ID). I don't have a NUMA system here to test and provide a sample output for a multi-node config, but it looks like this on a one-node machine:
Node 0, zone      DMA
  pages free     2122
        min      16
        low      20
        high     24
        scanned  0
        spanned  4096
        present  3963
  [ ... followed by /proc/vmstat-like nr_* values ]
Node 0, zone   Normal
  pages free     17899
        min      932
        low      1165
        high     1398
        scanned  0
        spanned  223230
        present  221486
    nr_free_pages     17899
    nr_inactive_anon  3028
    nr_active_anon    0
    nr_inactive_file  48744
    nr_active_file    118142
    nr_unevictable    0
    nr_mlock          0
    nr_anon_pages     2956
    nr_mapped         96
    nr_file_pages     166957
  [ ... more of those ... ]
Node 0, zone  HighMem
  pages free     5177
        min      128
        low      435
        high     743
        scanned  0
        spanned  294547
        present  292245
  [ ... ]
That is, a small summary of total usage/availability followed by the same nr_* values that /proc/vmstat reports at system-global level (which then allow a further breakdown of what exactly the memory is used for).
If you have more than one memory node, aka NUMA, you'll see these zones for all nodes.
edit
I'm not aware of a frontend for this (i.e. a per-NUMA-node vmstat, in the way that htop is a per-core top), but please comment if anyone knows of one!
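In the absence of a ready-made frontend it is easy enough to roll a small one. Here is a rough sketch (written in C# purely for illustration; any scripting language does the job equally well) that sums the "pages free" counters per NUMA node from /proc/zoneinfo:
using System;
using System.Collections.Generic;
using System.IO;
using System.Text.RegularExpressions;

// Rough sketch: report free memory per NUMA node based on /proc/zoneinfo.
class ZoneInfoFree
{
    static void Main()
    {
        var freePages = new Dictionary<string, long>();
        string node = null;
        long pageSize = Environment.SystemPageSize;      // usually 4096 bytes

        foreach (var line in File.ReadLines("/proc/zoneinfo"))
        {
            var zone = Regex.Match(line, @"^Node\s+(\d+),\s+zone");
            if (zone.Success) { node = zone.Groups[1].Value; continue; }

            var free = Regex.Match(line, @"^\s*pages free\s+(\d+)");
            if (free.Success && node != null)
            {
                freePages.TryGetValue(node, out long sum);
                freePages[node] = sum + long.Parse(free.Groups[1].Value);
            }
        }

        foreach (var kv in freePages)
            Console.WriteLine("node {0}: {1} MB free",
                kv.Key, kv.Value * pageSize / (1024 * 1024));
    }
}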
The numactl --hardware command will give you a short answer like this:
node 0 cpus: 0 1 2 3 4 5
node 0 size: 49140 MB
node 0 free: 25293 MB
node 1 cpus: 6 7 8 9 10 11
node 1 size: 49152 MB
node 1 free: 20758 MB
node distances:
node   0   1
  0:  10  21
  1:  21  10
