Running tests with Mocha results in a memory leak and large string structures

I am trying to set up an environment for detecting memory leaks in my application.
App setup: Angular + Electron
Simulating app use with: Mocha + Spectron + Webdriverio
I have tests for different user scenarios that I run on freshly setup app and periodically collect memory usage of each process.
When the app is idle, memory usage is as expected. But I have run into a problem with other test cases: it seems that when running tests with Mocha, I get unexpected, unidentified structures in memory, which results in a memory leak.
I have attached a screenshot below (Memory tab on dev tools), that best describes my confusion.
Snapshot 1: Taken after the app is set up (81.8 MB)
Snapshot 2: Taken after a group of tests have completed (~ 10 minutes of normal use) and the app has returned to starting state (109 MB)
Snapshot 3: Taken after I have forced GC (via "Collect Garbage" button) (108 MB)
Comparing snapshot 1 and 2, I can see where most of the memory is (~19 MB): in strings.
Inspecting the retainers tells me that those strings are linked to (Global handlers)>(GC roots). Selecting one of the strings and executing $0 in the console gives the same output for every string: <body>...</body>. When I hover over the element, it is linked to the body of my app (for every string).
"Expanding string structure" gives me the feeling that this is caused by some module being loaded multiple times and its references never being destroyed (my guess is that it is loaded via Module() in internal/modules/cjs/loader.js:136)?
When examining memory with "Allocation timelines", I don't find these "large string objects" under unreleased memory for the same action that produces a new "large string object" under "heap snapshot > comparison".
When I simulate a test scenario by hand or I simulate clicks via function in console, there is no memory leak.
All of that makes me think I am doing or using something wrong (regarding Mocha).
My questions:
Is Mocha not suitable for this kind of setup (i.e. does it hold some references until the app is closed)?
If a structure is retained only by (Global handlers)>(GC roots), when will it be released? I read here that they are not something you need to worry about, but in my case, they are. :/
How are there multiple strings (multiple references?) that, when inspected via $0, all reference the same DOM element (<body>)?
How come these string objects are not visible in "Allocation timelines"?
What can be the cause of this type of memory leak?

No, I don't think it is a Mocha-related thing.
The trick is that Mocha runs on the Node.js side and controls the browser through chromedriver using the WebDriver protocol (HTTP).
From the strings in your snapshot, what I can see is actually some code that is sent from chromedriver into your app.
I believe this is an issue with chromedriver; it might be injecting code into the page when it tries to execute some commands.
You can try to clean up cookies, local and session storage between tests, or do a hard reload with https://webdriver.io/docs/api/browser/reloadSession.html - but a full reload is a pretty slow thing...
Or reload just the current context with https://webdriver.io/docs/api/webdriver.html#refresh
You can also try to manually execute some cleanup JS code on the app side with
https://webdriver.io/docs/api/browser/execute.html
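A minimal sketch of that per-test cleanup, assuming the standard WebdriverIO async `browser` object (the helper name and the storage keys cleared here are illustrative, not taken from the original app):

```javascript
// Clean up page state between tests so injected script state, cookies
// and storage do not accumulate across a long Mocha run.
async function cleanupBetweenTests(browser) {
  // Run cleanup inside the page context via the WebDriver "execute" command.
  await browser.execute(() => {
    window.localStorage.clear();
    window.sessionStorage.clear();
  });
  await browser.deleteCookies(); // WebDriver "Delete All Cookies"
  await browser.refresh();       // reload current context; much cheaper than reloadSession
}
```

Calling this from an afterEach hook keeps state from building up without paying the cost of a full reloadSession on every test.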

Related

Nodejs | Chrome Memory Debugging

Context:
I have a Node.js application whose memory usage seems very high. I don't know whether that is a memory leak or not, because usage is reduced after a certain period of time, but sometimes, under heavy load, it keeps increasing and takes much longer to come back down.
So, going through articles and a couple of videos, I figured that I have to take heap snapshots and analyse what is causing the memory leak.
Steps:
I have taken 4 snapshots so far on my local machine to reproduce the memory leak.
Snapshot 1: 800MB
Snapshot 2: 1400MB
Snapshot 3: 1600MB
Snapshot 4: 2000+MB
When I uploaded the heapdump files to Chrome DevTools, I saw a lot of information, but I don't know how to proceed from there.
Please check the screenshot below: it says there is a constructor [array] which has 687206 as Shallow Size and 721414 as Retained Size in the columns. When I expanded that constructor, I could see there are 4097716 constructors created (refer to the second screenshot attached below).
Question
What does the internal array [] mean? Why are there 4097716 of them?
How can I filter out the constructors created by my app and have DevTools show those instead of system/V8 engine constructors?
In the same screenshot, one of the constructors uses a global variable called tenantRequire. This is a custom global function which is used internally in some places instead of the normal Node.js require; I see the variable across all the constructors like "Array" and "Object". (This is the global tenantRequire code, for reference. It is just a patched require function wrapped in try/catch.) Is this causing the memory leak somehow?
Refer to screenshot 3: the [string] constructor has 270303848 as shallow size. When I expanded it, it shows modules loaded by Node.js. Why is this taking that much size? And why are my lodash modules repeated in that string constructor?
Without knowing much about your app and the actions that cause the high memory usage, it's hard to tell what could be the issue. Which tool did you use to record the heap snapshot? What is the sequence of operations you did when you recorded the snapshot? Could you add this information to your question?
A couple of remarks
You tagged the question with node.js and showed Chrome DevTools. That's ok. You can totally take a heap snapshot of a Node.js application and analyze it in Chrome DevTools. But since both Node.js and Chrome use the same JS engine (V8) and the same garbage collector (Orinoco), it might be a bit confusing for someone who reads the question. Just to make sure I understand it correctly: the issue is in a Node.js app, not in a browser app. And you are using Chrome just to analyze the heap snapshot. Right?
Also, you wrote that you took the snapshots to reproduce the memory leak. That's not correct. You performed some action which you thought would cause a high memory usage, recorded a heap snapshot, and later loaded the snapshot in Chrome DevTools to observe the supposed memory leak.
Trace first, profile second
Every time you suspect a performance issue, you should first use tracing to understand which functions in your applications are problematic (i.e. slow, create a lot of objects that have to be garbage-collected, etc).
Then, when you know which functions to focus on, you can profile them.
Try these visual tools
There are a few tools that can help you with tracing/profiling your app. Have a look at FlameScope (a web app) and node-clinic (a suite of tools). There is also Perfetto, but I think it's for Chrome apps, not Node.js apps.
I also highly recommend the V8 blog.

NodeJS discrepancy between heapUsed and chrome inspector

I am currently working on reducing the memory consumption of a NodeJS command line application.
Using the Chrome inspector I was able to locate a huge object that was being retained due to a coding error in the application.
After fixing the error I can no longer see the object (LargeObject below) in the inspector, and it seems the memory reduction effort was a success; however, I am puzzled by the following behaviour.
When executing nodejs using --expose-gc and observing the heap memory usage by running
<code that created an object 'LargeObject' containing very large Maps, ~400Mb)>
global.gc()
const used = process.memoryUsage().heapUsed / 1024 / 1024;
console.log(`The script uses approximately ${Math.round(used * 100) / 100} MB`);
I'm still seeing essentially the same number as prior to the bugfix. If I stop the program after the GC, in the inspector there are no references to LargeObject and the total memory under Retained Size is what I expect (i.e., reduced by around 400MB).
If I however explicitly remove the references to the large Maps,
LargeObject.Map1 = null;
LargeObject.Map2 = null; // This is where LargeObject is in scope. It is *not* in scope in the above code
The process heapUsed print shows a drastically smaller number (dropped by around ~400MB, as I would expect).
I have quite a lot of trust in the Chrome inspector and I feel confident that the object is no longer being retained. That said, I'd love to understand why the heapUsed value only drops in the second case. Is there a risk that something is still retaining the object but the Chrome inspector fails to detect it?
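For reference, the measurement described above can be sketched end to end like this (run with `node --expose-gc` so `global.gc()` is available; the Map contents are an illustrative stand-in for the real ~400MB object):

```javascript
// Compare heapUsed before and after explicitly dropping a reference.
function heapUsedMB() {
  if (global.gc) global.gc(); // force a full GC when --expose-gc is set
  return process.memoryUsage().heapUsed / 1024 / 1024;
}

// Illustrative "large" object; the real one in the question held ~400MB of Maps.
const LargeObject = { Map1: new Map() };
for (let i = 0; i < 100000; i++) LargeObject.Map1.set(i, 'x'.repeat(100));

const before = heapUsedMB();
LargeObject.Map1 = null; // explicitly drop the reference, as in the question
const after = heapUsedMB();
console.log(`before=${before.toFixed(1)}MB after=${after.toFixed(1)}MB`);
```

Without `--expose-gc` the second number may not drop immediately, since V8 is free to defer collection.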

Memory Leak examples written in 4D

What are some examples of developer created memory leaks written in the 4D programming language?
By a developer-created memory leak, I am referring to a memory leak created by bad programming that could have been avoided with better programming.
32 bit
When run in a 32 bit application it should eventually crash once it attempts to allocate more than 2^32 bytes (4 GB) of memory. On the Mac OS X platform, the bottom of the crash report beneath the VM Region Summary should show a memory value around 3.7 GB:
TOTAL               3.7G
64 bit
When run in a 64 bit application the code will keep increasing the amount of memory allocated and will not plateau; in that situation the OS will eventually complain that it has run out of memory:
Overview
There are many ways that a developer can create their own memory leaks. Most of what you want to avoid is listed here:
use CLEAR VARIABLE when done using a variable
use CLEAR SET when done using a set
use CLEAR NAMED SELECTION when done using a named selection
use CLEAR LIST when done using a list
re-size your BLOBs to 0 with SET BLOB SIZE when done using the BLOB or use CLEAR VARIABLE
re-size your arrays to 0 when done using the array or use CLEAR VARIABLE
don't forget to close any open XML trees such as XML, DOM, SVG, etc (DOM CLOSE XML, SVG_CLEAR)
if using ODBC always remember to free the connection using ODBC_SQLFreeConnect
make sure to cleanup any offscreen areas used
Examples
Here are two specific examples of developer created memory leaks:
Forgetting to close XML
Bad code:
Repeat
$xmlRef:=DOM Create XML Ref("root")
Until (<>crashed_or_quit)
The code snippet above will leak memory because each call to DOM CREATE XML REF will create a new reference to a memory location, while the developer of this code has neglected to include a call to free the memory. Running this in a loop in a 32 bit host application will eventually cause a crash.
Fixed code:
This code can be easily fixed by calling DOM CLOSE XML when finished with the XML reference.
Repeat
$xmlRef:=DOM Create XML Ref("root")
DOM CLOSE XML($xmlRef)
Until (<>crashed_or_quit)
Forgetting to clear a list
Bad code:
Repeat
$listRef:=New list
Until (<>crashed_or_quit)
The code snippet above will leak memory because each time New list is called, a reference to a new location in memory is returned. The developer is supposed to clear the memory at the referenced location by using the CLEAR LIST($listRef) command. As a bonus, if the list has any sublists attached, the sublists can be cleared by passing the * parameter, as in CLEAR LIST($listRef;*).
Fixed code:
This can be easily fixed by calling CLEAR LIST($listRef;*) as seen in the following fixed example:
Repeat
$listRef:=New list
CLEAR LIST($listRef;*)
Until (<>crashed_or_quit)

Crash in ID3DXConstantTable SetFloat/SetVector

We have an application with a render engine developed in Direct3D/C++. Recently we have come across a crash (access violation) involving ID3DXConstantTable SetFloat/SetVector; it shows inside D3dx9_42.dll when we attach a debugger to release binaries with PDBs. The crash vanishes when we reduce the number of D3DPOOL rendertarget textures used, but estimating the GPU memory load, it is nowhere close to even half of the total available, as we are using 3GB NVIDIA cards.
Suspecting heap corruption due to memory overwrites, we went about code checking, and following that we used Application Verifier along with a debugger to root out memory overwrites that might crash at a later stage of running. We came across a few issues, which we ironed out. But that crash still remains at the very first frame render, in ID3DXConstantTable SetFloat/SetVector. More info: this is a 32-bit application running with the LARGEADDRESSAWARE flag. Any pointers?
Well, a moment later I found out the issue. I executed the application with the registry switch MEM_TOP_DOWN (AllocationPreference=0x100000) and it instantly crashed at the first SetFloat() location. I then got to know that the constant table had to be retrieved using D3DXGetShaderConstantTableEx() with the D3DXCONSTTABLE_LARGEADDRESSAWARE flag :) Thanks

Node JS, Highcharts Memory usage keeps climbing

I am looking after an app built with Node JS that's producing some interesting issues. It was originally running on Node JS v0.3.0 and I've since upgraded to v0.10.12. We're using Node JS to render charts on the server and we've noticed the memory usage keeps climbing chart after chart.
Q1: I've been monitoring the RES column in top for the Node JS process, is this correct or should I be monitoring something else?
I've been setting variables to null to try and reallocate memory back to the system resources (I read this somewhere as a solution) and it makes only a slight difference.
I've pushed the app all the way to 1.5GB, and it then ceases to function, yet the process doesn't appear to die. No error messages, which I found odd.
Q2: Is there anything else I can do?
Thanks
Steve
That is a massive jump in versions. You may want to share what code changes you made to get it working on the latest stable. The API is not the same as back in v0.3, so that may be part of the problem.
If not, then the issue you see is more likely from heap fragmentation than from an actual leak. In later V8 versions garbage collection is more liberal with cleanup to improve performance (see http://code.google.com/p/chromium/issues/detail?id=112386 for some discussion on this).
You may try running the application with --max_old_space_size=32, which will limit the amount of memory V8 can use to around 32MB. Note the docs say "max size of the old generation", so it won't be exactly 32MB, just around it, for lack of a better technical explanation.
Also you can track the amount of external memory usage with --trace_external_memory. This will let you know whether external memory (i.e. Buffers) is being retained in your application.
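A minimal sketch of such tracking from inside the process itself, using `process.memoryUsage()` (the `renderChart` call is a hypothetical stand-in for the real rendering code):

```javascript
// Log RSS, heap and external memory around each chart render so heap
// growth can be told apart from external (Buffer) growth.
function memorySample(tag) {
  const m = process.memoryUsage();
  const mb = (n) => +(n / 1024 / 1024).toFixed(1);
  return { tag, rss: mb(m.rss), heapUsed: mb(m.heapUsed), external: mb(m.external) };
}

console.log(memorySample('before render'));
// renderChart(options); // hypothetical rendering call
console.log(memorySample('after render'));
```

If `rss` climbs while `heapUsed` stays flat, the growth is outside the V8 heap (Buffers, fragmentation) rather than a JS-level leak.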
Your note on the application hanging around 1.5GB tells me you're probably on a 64-bit system. You only mentioned that it ceases to function, but didn't note whether the CPU is spinning during that time. Also, since I don't have example code, I'm not sure what might be causing this to happen.
I'd try running on latest development (v0.11.3 at the time of this writing) and see if the issue is fixed. A lot of performance/memory enhancements are being worked on that may help your issue.
I guess you have a memory leak somewhere (in the form of a closure?) that keeps the (no longer used?) diagrams in memory.
V8 sometimes needs a bit of tweaking when it comes to >1 GB of memory. Try out --noincremental_marking and/or --max_old_space_size=8192 (if you have 8 GB available; the value is in MB).
Check for more options with node --v8-options and go through the --trace*-parameters to find out what slows down/stops node.
