PerfView - Memory Dump - NoPtrs - memory leaks

I need to investigate a memory leak, or at least a constant increase in memory usage on the server. I took multiple memory dumps, and among the Gen1 objects I see a very large object that is reported as NoPtrs.
Does that mean it is unreferenced but still in memory, and could it be the source of my memory leak? Do you have any advice on how I could continue and identify what is creating those byte[] instances?
The website is built with .NET 6.
Here is a screenshot of it.
Regards,

Related

Nodejs process consumes more memory than actual memory dump size

I've just noticed that a pm2 process consumes a ton of memory, but when I took a heap snapshot to figure it out, the snapshot was about 20x smaller.
Then I read some articles covering how to debug heap snapshots, and all of them were in vitro experiments; I doubt anyone codes like that. All of the heap snapshots I took looked healthy, just like any other process with low memory consumption. Does Node.js keep something like a runtime cache, or memoized function results in a weak map, that is detached from the heap snapshot?
Is there any way to restrict Node.js memory usage?
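One thing worth checking in this situation: a heap snapshot only covers the V8 heap, while the process RSS also includes Buffers, ArrayBuffer backing stores and native addon allocations, which can explain a large gap between the two. Below is a minimal sketch that compares the process-level numbers with the V8 heap numbers via process.memoryUsage(); the Buffer allocation loop is a purely hypothetical workload added for illustration.

```typescript
// Minimal sketch (assumes Node.js >= 14): compare the V8 heap with the
// process-level numbers. A heap snapshot only captures heapUsed/heapTotal;
// rss, external and arrayBuffers live outside the V8 heap and therefore
// do not show up in the snapshot.
const toMB = (bytes: number): string => (bytes / 1024 / 1024).toFixed(1) + " MB";

function logMemory(label: string): void {
  const m = process.memoryUsage();
  console.log(label, {
    rss: toMB(m.rss),                  // total memory the OS has given the process
    heapTotal: toMB(m.heapTotal),      // V8 heap reserved
    heapUsed: toMB(m.heapUsed),        // V8 heap actually used (what a snapshot sees)
    external: toMB(m.external),        // C++ objects bound to JS objects (e.g. Buffers)
    arrayBuffers: toMB(m.arrayBuffers) // ArrayBuffer/Buffer backing stores
  });
}

logMemory("baseline");

// Hypothetical illustration: allocate ~200 MB outside the V8 heap.
// heapUsed barely moves, but rss/external/arrayBuffers jump.
const nativeChunks: Buffer[] = [];
for (let i = 0; i < 200; i++) {
  nativeChunks.push(Buffer.alloc(1024 * 1024));
}
logMemory("after Buffer allocations");
```

If most of the "missing" memory shows up in external or arrayBuffers, it never belonged to the V8 heap in the first place, which is why the snapshot looks healthy.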

When does Node garbage collect?

I have a NodeJS server running on a small VM with 256 MB of RAM, and I notice the memory usage keeps growing as the server receives new requests. I read that an issue in small environments is that Node doesn't know about the memory constraints and therefore doesn't try to garbage collect until much later (for instance, it might only start garbage collecting once it reaches 512 MB of used RAM). Is that really the case?
I also tried various flags such as --max-old-space-size but didn't see much change, so I'm not sure whether I have an actual memory leak or whether Node just doesn't GC as soon as it could.
This might not be a complete answer, but it comes from experience and might provide some pointers. A memory leak in NodeJS is one of the most challenging bugs a developer can face.
But before we talk about memory leaks, to answer your question: unless you explicitly configure --max-old-space-size, default memory limits take over. Since certain phases of garbage collection in Node are expensive (and sometimes blocking) steps, Node delays some of the expensive GC cycles (e.g. mark-sweep collection) depending on how much memory is available to it. I have seen that on a machine with 16 GB of memory it would easily let usage climb as high as 800 MB before significant garbage collections happened. But that doesn't make ~800 MB any kind of special limit; it really depends on how much memory is available and what kind of application you are running. For example, it is entirely possible that complex computations, caches (e.g. big DB connection pools) or buggy logging libraries will themselves always consume a lot of memory.
If you are monitoring your NodeJS process's memory footprint, then for some time after the server starts up, everything warms up (Express loads all the modules and creates some startup objects, caches warm up, and all of your high-memory-consuming modules become active), and it might look as if there is a memory leak because the memory keeps climbing, sometimes as high as ~1 GB. Then you will see that it stabilizes (this limit used to be lower in Node versions before v8).
But sometimes there are actual memory leaks (which may be hard to spot if there is no specific pattern to them).
In your case, 256 MB seems to meet only the minimum RAM requirements for Node.js and might not really be enough. Before you start worrying about a memory leak, you might want to bump it up to 1.5 GB and then monitor everything. A quick way to see whether the old-space limit is being respected is sketched below.
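As a rough experiment, you can start the process with an explicit old-space limit and log the heap periodically. The sketch below uses a hypothetical file name (monitor.js) and a deliberately leaky allocation loop as a stand-in workload, not a real server; the point is to watch heapUsed relative to the configured limit, and to see the process abort with an out-of-memory error near that limit instead of growing indefinitely.

```typescript
// Minimal sketch: run the compiled script with an explicit old-space limit, e.g.
//   node --max-old-space-size=200 monitor.js
// and watch how heapUsed behaves relative to that limit over time.
// The allocation loop below is a hypothetical workload that retains memory on purpose.

const retained: string[] = [];

setInterval(() => {
  // Allocate roughly 1 MB and keep a reference so the heap keeps growing.
  retained.push("x".repeat(1024 * 1024));
}, 100);

setInterval(() => {
  const { heapUsed, heapTotal, rss } = process.memoryUsage();
  const mb = (n: number) => Math.round(n / 1024 / 1024);
  console.log(`heapUsed=${mb(heapUsed)}MB heapTotal=${mb(heapTotal)}MB rss=${mb(rss)}MB`);
}, 1000);
```

If the process keeps climbing well past the configured limit without crashing, the growth is probably coming from memory outside the V8 heap (Buffers, native addons), which --max-old-space-size does not cap.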
Some good resources on NodeJS's memory model and memory leaks:
Node.js Under the Hood
Memory Leaks in NodeJS
Can garbage collection happen while the main thread is busy?
Understanding and Debugging Memory Leaks in Your Node.js Applications
Some debugging tools to help spot memory leaks:
Node inspector | Chrome
llnode
gcore

Nodejs memory leak - memory allocation decreased after taking snapshot in chrome debug

I'm investigating a memory leak in my Node.js script. Checking process.memoryUsage().heapUsed, the usage is around 3000 MB.
chrome://inspect also shows memory usage of around 3000 MB. However, every time I take a heap snapshot, the saved snapshot shrinks to around 73 MB, and process.memoryUsage().heapUsed drops to that figure as well.
Does anyone have a theory on how this is happening?
It sounds like the garbage collector is running after you check the usage; taking a heap snapshot in DevTools also triggers a full garbage collection first, which is why the numbers drop right after you capture one. Basically, every once in a while the collector checks whether anything is no longer referenced by anything and removes it, freeing up space. See this article for more details:
https://blog.sessionstack.com/how-javascript-works-memory-management-how-to-handle-4-common-memory-leaks-3f28b94cfbec
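You can reproduce the same effect without DevTools by forcing a collection yourself. A minimal sketch, assuming you start Node with the real --expose-gc flag (the file name gc-demo.js and the throw-away allocation are just illustrations):

```typescript
// Minimal sketch: run with `node --expose-gc gc-demo.js`.
// Shows heapUsed dropping once unreferenced objects are collected,
// which is the same thing that happens when DevTools takes a snapshot.

const mb = (n: number) => Math.round(n / 1024 / 1024) + " MB";

// Create a lot of garbage that is no longer referenced afterwards.
(() => {
  let junk: number[][] = [];
  for (let i = 0; i < 1000; i++) {
    junk.push(new Array(10_000).fill(i));
  }
  junk = []; // drop all references
})();

console.log("before GC:", mb(process.memoryUsage().heapUsed));

// global.gc is only defined when Node was started with --expose-gc.
const forceGc = (globalThis as { gc?: () => void }).gc;
if (forceGc) {
  forceGc();
  console.log("after GC: ", mb(process.memoryUsage().heapUsed));
} else {
  console.log("start Node with --expose-gc to force a collection");
}
```

If heapUsed drops sharply after the forced collection, the large number you saw earlier was mostly collectible garbage, not a leak.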

.NET Core memory not released

Recently we had a memory spike issue in one of our .NET Core 2.0 applications. When we analyzed the memory dump, we found around 2 GB of free space that was not released.
Following are the statistics from the windbg tool:
Statistics:
MT Count TotalSize Class Name
000000e32eaf4ca0 15404 2584481422 Free
Can you please let me know what could be the reason for the memory not being released?
At least for the following reasons:
1. .NET thinks that your program will need that memory again soon.
2. The free parts of the memory do not form a complete block that can be given back to the OS.
Regarding 1), .NET might keep the memory it got from the operating system, because giving it back to the OS and then reclaiming it later has a performance cost. If .NET thinks your program will request memory in the near future, it can simply "cache" it for you.
Regarding 2), that is best explained with an example. Assume that your application requested 10 MB. .NET might take a 50 MB block from the OS. The application then requests 1 MB, which .NET puts into the existing 50 MB block. The 10 MB then get freed. However, .NET cannot give the 49 MB back to the OS, because memory can only be returned in the same blocks in which it was obtained.
.NET operates directly on VirtualAlloc() and VirtualFree(), so the possibilities are somewhat limited.
If you look at the Count column, you'll see that your 2.5 GB is not contiguous: it is split into roughly 15,000 fragments. Your program might be suffering from memory fragmentation. Fragmentation can occur on the small object heap due to pinned handles, and on the large object heap because it does not get compacted.
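To make the "whole block or nothing" constraint from the example above concrete, here is a toy simulation. It is written in TypeScript purely as an illustration of the idea and is not how the CLR actually manages its segments: the simulated "OS" hands out fixed 50 MB regions, allocations land inside them, and a region can only be handed back once every allocation in it has been freed.

```typescript
// Toy illustration of the 50 MB / 10 MB / 1 MB example above.
// Not how the CLR works internally; it only demonstrates that a region can
// be returned to the OS only when it is completely empty.

interface Region {
  sizeMB: number;
  liveAllocations: Set<string>; // entries of the form "name:sizeMB"
}

class ToyRuntime {
  private regions: Region[] = [];
  private readonly regionSizeMB = 50; // size of each block requested from the "OS"

  allocate(name: string, sizeMB: number): void {
    // Reuse an existing region if it has room, otherwise "ask the OS" for a new one.
    let region = this.regions.find(r => this.usedMB(r) + sizeMB <= r.sizeMB);
    if (!region) {
      region = { sizeMB: this.regionSizeMB, liveAllocations: new Set() };
      this.regions.push(region);
      console.log(`OS: handed out a ${this.regionSizeMB} MB region`);
    }
    region.liveAllocations.add(`${name}:${sizeMB}`);
  }

  free(name: string): void {
    for (const region of this.regions) {
      for (const alloc of region.liveAllocations) {
        if (alloc.startsWith(name + ":")) region.liveAllocations.delete(alloc);
      }
    }
    // A region can only go back to the OS if nothing in it is still alive.
    this.regions = this.regions.filter(r => {
      if (r.liveAllocations.size === 0) {
        console.log(`OS: region of ${r.sizeMB} MB released`);
        return false;
      }
      return true;
    });
  }

  private usedMB(region: Region): number {
    let used = 0;
    for (const alloc of region.liveAllocations) used += Number(alloc.split(":")[1]);
    return used;
  }
}

const runtime = new ToyRuntime();
runtime.allocate("a", 10); // OS hands out a 50 MB region
runtime.allocate("b", 1);  // fits into the same region
runtime.free("a");         // 49 MB of the region is now free...
// ...but no "region released" message appears: the remaining 1 MB pins the whole block.
```

The same reasoning explains why thousands of small "Free" fragments, as in the windbg statistics above, can add up to gigabytes that never go back to the OS.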

What is Hazelcast HD Memory? - on/off heap?

I have read this official post on Hazelcast High-Density Memory.
Am I right in assuming that this HD memory still consumes memory from the JVM process in which the application is running, rather than creating another JVM on the server and using it solely for the Hazelcast instance?
And that the only difference with this native memory configuration is that the memory is allocated off-heap rather than via the default on-heap allocation?
HDMS, or Hazelcast High-Density Memory Store, allocates memory in the same process space as the Java heap. That means the process still owns all the memory, but the Java heap is otherwise independent, and the Hazelcast-allocated space (off-heap / non-Java-heap) is not subject to garbage collection. Values are serialized and the resulting byte stream is copied into native memory; when reading, it is copied back into the Java heap area and sent to the requester.
Imagine HDMS as a fancy malloc implementation :)
HDMS, or High-Density Memory Store, is part of the Hazelcast Enterprise HD offering. HDMS is a way for Java software to access multiple terabytes of memory per node without struggling with long and unpredictable garbage collection pauses. This memory store provides the benefits of "off-heap" memory using many high-performance memory management techniques. HDMS works around garbage collection limitations so that applications can utilize hardware memory more efficiently without needing extra clusters. It is designed as a pluggable memory manager that enables multiple memory stores for different data structures such as IMap and JCache.
