Freeing up memory of a Buffer instance in Node.js

Node comes with an abundance of methods to create Buffers, but I haven't found one that deallocates the allocated piece of memory.
Do I just set buffer to null when I am done using it and let garbage collection kick in?
var buffer = new Buffer("pls dont null me");
buffer = null;

You should not need to worry about it.
When you stop referencing the buffer, the garbage collector will collect it.
Just in case, it's fine to set it to null.
See the Buffer documentation on the Node.js site.
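As a minimal sketch of that advice (run with node --expose-gc so that global.gc exists; the forced collection is only there to make the effect observable and is not needed in normal code):

// Run with: node --expose-gc example.js
let buffer = Buffer.alloc(100 * 1024 * 1024); // 100 MB, allocated outside the V8 heap

console.log('external before:', process.memoryUsage().external);

buffer = null; // drop the only reference
if (global.gc) {
  global.gc(); // force a collection so the effect shows up in the next reading
}

console.log('external after:', process.memoryUsage().external);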

Related

Any way to hint/force Garbage Collection on Node Google Cloud Function

I've got a Google Cloud Function that is running out of memory even though it shouldn't need to.
The function compiles info from a number of spreadsheets; the spreadsheets are large but are handled sequentially.
Essentially the function does:
spreadsheets.forEach(spreadsheet => {
  const data = spreadsheet.loadData();
  mainSpreadsheet.saveData(data);
});
The data is discarded on each loop, so the garbage collector could clean up the memory, but in practice that doesn't seem to be happening and the process is crashing close to the end.
I can see from other answers that it is possible to force garbage collection or even prevent Node from over-allocating memory.
However, both of these involve command-line arguments, which I can't control with a cloud function. Is there any workaround, or am I stuck with this as an issue when using Google Cloud Functions?
A colleague tipped me off that changing the code to
spreadsheets.forEach(spreadsheet => {
  let data = spreadsheet.loadData();
  mainSpreadsheet.saveData(data);
  data = null;
});
might be enough to hint to the GC that it can clean up that structure.
I was skeptical, but the function now runs to completion. It turns out you can hint to the GC in Node.
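As a minimal sketch of the pattern (loadData and saveData are the question's own placeholder names; global.gc only exists when node is started with --expose-gc, which may not be possible on Cloud Functions, so the null assignment is the portable part):

function processAll(spreadsheets, mainSpreadsheet) {
  for (const spreadsheet of spreadsheets) {
    let data = spreadsheet.loadData(); // large, short-lived structure
    mainSpreadsheet.saveData(data);
    data = null; // drop the only reference before the next iteration
    if (global.gc) {
      global.gc(); // optional forced collection, only defined with --expose-gc
    }
  }
}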

Strange memory usage when using CacheEntryProcessor and modifying cache entry

I wonder if anybody can explain what is going wrong with my code.
I have an IgniteCache of Long->Object[] which is a kind of batching mechanism.
The cache is on-heap, partitioned, and has one backup configured.
I want to modify some of the objects within the cache entry value array.
So I wrote an implementation of CacheEntryProcessor:
@Override
public Object process(MutableEntry<Long, Object[]> entry, Object... arguments)
        throws EntryProcessorException {
    boolean updated = false;
    int key = (int) arguments[0];
    Set<Long> someIds = Ignition.ignite().cluster().nodeLocalMap().get(key);
    Object[] values = entry.getValue();
    for (int i = 0; i < values.length; i++) {
        Person p = (Person) values[i];
        if (someIds.contains(p.getId())) {
            p.modify();
            if (!updated) {
                updated = true;
            }
        }
    }
    if (updated) {
        entry.setValue(values);
    }
    return null;
}
When the cluster is loaded with data each node consumes around 20GB of heap.
When I run the processor with cache.invokeAll on a multi-node cluster, I see crazy memory behavior: while the processor runs, memory usage climbs to 48 GB or higher, eventually leading to node separation from the cluster because GC took too long.
However, if I remove the entry.setValue(values) line, which stores the modified array back into the cache, everything is OK, apart from the fact that the data will not be replicated since the cache is not aware of the change: the update is only visible on the primary node :(
Can anybody tell me how to make it work? What is wrong with this approach?
First of all, I would not recommend allocating such large heaps. This is very likely to cause long GC pauses even when everything is working properly: the JVM will not clean up memory until it reaches a certain threshold, and when it does, there is too much garbage to collect at once. Try switching to off-heap storage or starting more Ignite nodes.
The fact that more garbage is generated when you update the entry makes perfect sense: each time you update, you replace the old value with a new one, and the old value becomes garbage.
If none of this helps, grab a heap dump and check what is occupying the memory.

Windows UMDF CComPtr IWDFMemory does not get freed

In my UMDF driver I have an IWDFMemory wrapped inside a CComPtr:
CComPtr<IWDFMemory> memory;
The documentation for CComPtr says that if a CComPtr object goes out of scope, it gets automatically freed. That means this code should not create any memory leaks:
void main()
{
    CComPtr<IWDFDriver> driver = /*driver*/;
    /*
       driver initialisation
    */
    {
        // new scope starts here
        CComPtr<IWDFMemory> memory = NULL;
        driver->CreateWdfMemory(0x1000000, NULL, NULL, &memory);
        // At this point 16 MB of memory has been allocated.
        // I can verify this in the task manager.
        // scope ends here
    }
    // If I understand correctly, the memory I allocated in the previous scope
    // should already be freed at this point. But in the task manager I can
    // still see the 16 MB used by the process.
}
Also, if I manually assign NULL to memory or call memory.Release() before the scope ends, the memory does not get freed. I am wondering what is happening here.
According to MSDN:
If NULL is specified in the pParentObject parameter, the driver object
becomes the default parent object for the newly created memory object.
If a UMDF driver creates a memory object that the driver uses with a
specific device object, request object, or other framework object, the
driver should set the memory object's parent object appropriately.
When the parent object is deleted, the memory object and its buffer
are deleted.
Since you do indeed pass NULL, the memory won't be released until the CComPtr<IWDFDriver> object is released.

How to track object inside heap in node.js to find memory leak?

I have a memory leak, and I know where it is (I think), but I don't know why it is happening.
The leak occurs while load-testing the following endpoint (using a restify.js server):
server.get('/test', function (req, res, next) {
  fetchSomeDataFromDB().done(function (err, rows) {
    res.json({ items: rows })
    next()
  })
})
I am pretty sure that the res object is not being disposed of by the garbage collector. On every request the memory used by the app grows. I have done an additional test:
var data = {}
for (var i = 0; i < 500; ++i) {
  data['key' + i] = 'abcdefghijklmnoprstuwxyz1234567890_' + i
}

server.get('/test', function (req, res, next) {
  fetchSomeDataFromDB().done(function (err, rows) {
    res._someVar = _.extend({}, data)
    res.json({ items: rows })
    next()
  })
})
So on each request I am assigning a big object to the res object as an attribute. I observed that with this additional attribute, memory grows much faster: about 100 MB per 1000 requests made over 60 seconds. After the next identical test, memory grows another 100 MB, and so on. Now that I know the res object is not being released, how can I track down what is still keeping a reference to it? Say I take a heap snapshot: how can I find what is referencing res?
Screenshot of a heap comparison between 10 requests:
Actually it seems that Instance.DAO is leaking? This class belongs to the ORM I am using to query the DB... What do you think?
One more screenshot of the same comparison, sorted by #delta:
It seems more likely that the GC hasn't collected the object yet, since you are not leaking res anywhere in this code. Try running your script with the --expose-gc node argument and then set up an interval that periodically calls gc(). This will force the GC to run instead of being lazy.
If after that you find that you are definitely leaking memory, you could use a tool like the heapdump module together with the Chrome developer tools heap inspector to see which objects are taking up space.
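As a minimal sketch of that workflow (the interval length, signal, and snapshot path are arbitrary choices here, not part of the original answer):

// Run with: node --expose-gc app.js   (heapdump installed via npm install heapdump)
var heapdump = require('heapdump');

// Force a collection periodically so lazy GC is not mistaken for a leak.
setInterval(function () {
  if (global.gc) global.gc(); // global.gc only exists with --expose-gc
}, 30 * 1000);

// Write a snapshot on demand; take one before and one after a load-test run,
// then diff the two files in the Chrome DevTools Memory tab to see which
// objects retain `res`.
process.on('SIGUSR2', function () {
  heapdump.writeSnapshot('/tmp/' + Date.now() + '.heapsnapshot');
});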

Any memory leak from deleting and creating the same object several times?

This may be just a memory leak question. For those not familiar with WinSCP, skip ahead to the question.
I am using the .NET assembly of WinSCP in a C++/CLI program. My program reads in a schedule file that instructs it to transfer files from various locations. Some transfers could come from the same server, so the program should close the existing connection only when the next transfer targets a new server; if the server is the same, it keeps the connection open for reuse.
As there is no Session::Close(), the documentation recommends using Session::Dispose() (refer to the Session.Dispose() documentation). Yet when I compile, I see an error message saying:
'Dispose' is not a member of 'WinSCP::Session'
Eventually I used delete session. Part of my program then looks roughly like:
void Transfer(String ^ sAction, String ^ sMode,
              String ^ sSource_Server, String ^ sSource_Path,
              String ^ sDest_Server, String ^ sDest_Path,
              bool bDelDir, bool bDelFile)
{
    if ((GlobalClass::g_sFtp_Server != sSource_Server && sAction == "G")
        || (GlobalClass::g_sFtp_Server != sDest_Server && sAction == "P"))
    {
        // Close existing connection first.
        if (GlobalClass::g_sftpSession != nullptr)
            delete GlobalClass::g_sftpSession;
        if (GlobalClass::g_sftpSessionOptions != nullptr)
            // Reuse the object
            GlobalClass::g_sftpSessionOptions->HostName = sSource_Server;
        else
        {
            // Recreate the object and fill in details
            GlobalClass::g_sftpSessionOptions = gcnew WinSCP::SessionOptions();
            GlobalClass::g_sftpSessionOptions->Protocol ....
            GlobalClass::g_sftpSessionOptions->HostName ....
        }
        // Create new session
        GlobalClass::g_sftpSession = gcnew WinSCP::Session();
        GlobalClass::g_sftpSession->Open(GlobalClass::g_sftpSessionOptions);
        // Set GlobalClass::g_sFtp_Server
    }
    // Transfer files accordingly...
}
Question: Will there be any memory leak from deleting the object (delete GlobalClass::g_sftpSession) and creating it again (GlobalClass::g_sftpSession = gcnew WinSCP::Session()) many times per minute?
From several .NET resources I have read, deleting the object marks it for garbage collection; when that actually happens is entirely up to the GC mechanism. So if my program has to connect to several sites, it has to delete and create sessions several times. By the time the program finishes (usually in less than 1 minute), can I count on the garbage collection mechanism to clean out all the memory? The reason I ask is that my program runs every minute; if it leaked memory on each run, my machine would be out of memory very soon.
The WinSCP .NET assembly Session class has the Dispose method, though it's probably hidden by C++/CLI. You call Dispose indirectly via delete. See How do you dispose of an IDisposable in Managed C++ and Calling C++/CLI delete on C# object.
Generally, even if you do not, the garbage collector will do this for you (at an unpredictable moment), as you do not keep references to old sessions. It definitely won't let your machine run out of memory.
On the other hand, you NEED to call Dispose (the delete) to close unused sessions anyway; otherwise you may run out of allowed connections to the servers (or even exhaust the servers' resources).
If you want to check if and when a session is disposed, set the Session.DebugLogPath and search the log for an entry like:
[2014-04-23 08:08:50.756Z] [000a] Session.Dispose entering
Your question about whether there is a chance of a memory leak when the program finishes is moot: any memory allocated by a process is released by the operating system when the process exits, no matter what leaks or bugs are in the program itself. See also Does the heap get freed when the program exits? Anyway, I believe your code does not leak memory.
