How do I access memory locations inside the heap in CircuitPython and save them to NVM? - garbage-collection

I know the heap is managed by the garbage collector in CircuitPython, but how do I access the locations where the dynamically allocated data is stored? I'm trying to save the entire heap to non-volatile storage.
I tried the built-in functions of the gc module, which usually work in CPython, but they do not work in CircuitPython.
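CircuitPython does not expose raw heap addresses; the gc module there only offers statistics such as gc.mem_free() and gc.mem_alloc(). The supported route is to serialize the specific objects you care about and write the bytes into microcontroller.nvm, which behaves like a small byte-addressable buffer. A minimal sketch of that pattern, with nvm simulated as a plain bytearray so it also runs off-device (the 256-byte size and the state dict are invented for illustration):

```python
import json

# On a real board you would use:  from microcontroller import nvm
# Here nvm is simulated with a bytearray so the sketch runs anywhere.
nvm = bytearray(256)

state = {"count": 42, "name": "sensor-1"}      # the objects worth keeping
payload = json.dumps(state).encode("utf-8")    # serialize to bytes

nvm[0:2] = len(payload).to_bytes(2, "little")  # 2-byte length header
nvm[2:2 + len(payload)] = payload              # write the data

# ...after a reset, read it back:
n = int.from_bytes(nvm[0:2], "little")
restored = json.loads(nvm[2:2 + n].decode("utf-8"))
```

Note that saving "the entire heap" this way is not feasible: the heap also contains GC bookkeeping and objects whose raw addresses are meaningless after a reset, so persisting a serialized snapshot of your own data is the practical alternative.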

Related

Nodejs process consumes more memory than actual memory dump size

I've just noticed that a pm2 process consumes a ton of memory, but when I tried to take a heap snapshot to figure it out, the snapshot was about 20 times smaller.
This is what was inspected:
Then I read some articles covering how to debug heap snapshots, but all of them were in-vitro experiments; I doubt anyone codes like that. All of the heap snapshots I took looked healthy, just like any other process with low memory consumption. Does Node.js produce something like a runtime cache, or functions' calculation results in a form of weak map, that is detached from the heap snapshot?
Is there any way to restrict Node.js memory usage?

Perfview - Memory Dump - NoPtrs

I need to investigate a memory leak, or at least a constant increase of memory on the server. So I took multiple memory dumps, and I see a very large object among the Gen1 objects, but it tells me NoPtrs.
Does that mean it is unreferenced but still in memory, and could it be the source of my memory leak? Do you have any advice on how I could continue and identify what is creating those byte[] arrays?
The website is built on .NET 6.
Here is a screenshot of it.
Regards,

Local Memory vs Global Memory

Just for clarity.
Does local memory refer to the memory allocated to a certain program?
And does global memory refer to the main memory?
I am reading about Uniform Memory Access (UMA) time and Non-Uniform Memory Access (NUMA) time. They say a multiprocessor computer has uniform memory access time if the time it takes to access memory data locally is the same as the time it takes to access data globally.
I thought by "locally" they are referring to a cache, but in the preceding statements they clarify that a local memory is not a cache.

How does Nodejs/V8 know how big a native object is?

Specifically, in node-opencv, opencv Matrix objects are represented as a javascript object wrapping a c++ opencv Matrix.
However, if you don't .release() them manually, the V8 engine does not seem to know how big they are, and the Node.js memory footprint can grow far beyond any limits you set on the command line. That is, V8 only seems to run the GC when it approaches the set memory limits, but because it does not see these objects as large, that does not happen until it's too late.
Is there something we can add to the objects which will allow V8 to see them as large objects?
Illustrating this: you can create and 'forget' large 1 MB buffers all day in a Node.js process limited to 256 MB of memory.
But if you do the same with 1 MB OpenCV matrices, Node.js will quickly use much more than the 256 MB limit, unless you either run the GC manually or release the matrices manually.
(Caveat: a C++ OpenCV matrix is a reference to memory, i.e. more than one Matrix object can point to the same data. But it would be a start to have V8 see ALL references to the same memory as being the size of that memory for the purposes of GC; that is safer than seeing them all as very small.)
Circumstances: on an RPi3 we have a limited memory footprint, and processing live video (using about 4 MB of Mat objects per frame) can soon exhaust all memory.
Also, the environment I'm working in (a Node-RED node) is designed for 'public' use, so it is difficult to ensure that all users completely understand the need to manually .release() images; hence this question is about how to bring this large data under the GC's control.
You can inform v8 about your external memory usage with AdjustAmountOfExternalAllocatedMemory(int64_t delta). There are wrappers for this function in n-api and NAN.
By the way, "large objects" has a special meaning in V8: objects large enough to be created in large-object space and never moved. External memory is off-heap memory, which is, I think, what you're referring to.

What is Hazelcast HD Memory? - on/off heap?

I have read this official post on the Hazelcast High Density Memory.
Am I right in assuming that this HD memory still consumes memory from the JVM process in which the application is running, rather than creating another JVM on the server used solely for the Hazelcast instance?
And is the only difference in this native-memory configuration that the memory is allocated off-heap rather than with the default on-heap allocation?
HDMS, the Hazelcast High-Density Memory Store, allocates memory in the same process space as the Java heap. That means the process still owns all the memory, but the Java heap is otherwise independent, and the space Hazelcast allocates (off-heap, i.e. outside the Java heap) is not subject to garbage collection. Values are serialized and the resulting byte stream is copied into the native memory; when reading, it is copied back into the Java heap area and sent to the requestor.
Imagine HDMS as a fancy malloc implementation :)
HDMS, or High-Density Memory Store, is part of the Hazelcast Enterprise HD offering. HDMS is a way for Java software to access multiple terabytes of memory per node without struggling with long and unpredictable garbage-collection pauses. This memory store provides the benefits of "off-heap" memory using many high-performance memory-management techniques. HDMS solves problems related to garbage-collection limitations so that applications can utilize hardware memory more efficiently without needing extra clusters. It is designed as a pluggable memory manager that enables multiple memory stores for different data structures such as IMap and JCache.
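The serialize-on-write / copy-on-read behaviour described in the answers above can be sketched with a toy key-value store. This is illustrative Python, not Hazelcast's actual implementation; the OffHeapStore class and its names are invented for this example:

```python
import pickle

class OffHeapStore:
    """Toy model of HDMS-style copy semantics: values live as serialized
    bytes outside the managed object graph, so the GC never traces them."""

    def __init__(self):
        self._native = {}  # stands in for malloc'd native memory

    def put(self, key, value):
        # Serialize the value and copy the bytes out of the "heap".
        self._native[key] = pickle.dumps(value)

    def get(self, key):
        # Copy the bytes back and deserialize: callers get a fresh copy.
        return pickle.loads(self._native[key])

store = OffHeapStore()
store.put("k", {"a": [1, 2, 3]})
copy = store.get("k")
copy["a"].append(4)  # mutating the returned copy...
# ...does not touch the stored bytes: store.get("k") still yields the original.
```

The design point this illustrates is why HDMS values escape GC pressure: the stored representation is an opaque byte stream, and every read materializes a new object on the Java heap rather than handing out a reference into native memory.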