IndexedDB seems to not free memory

I'm using localforage with its IndexedDB driver for storing simple key-value pairs, but I encountered a problem when trying to remove or update values.
Whenever I call localforage.removeItem("existingKey") or localforage.setItem("existingKey", "some value") with the key of an existing entry, no memory is freed.
When using WebSQL instead of IndexedDB, the behavior does not occur.
Example on JsBin (Only works in Chrome due to calls to navigator.storage)
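For reference, a minimal sketch of what the reproduction does; it assumes localforage is loaded, and the key and value are illustrative rather than the original JsBin code:

```js
// Store a value, remove it, and compare the reported storage usage.
// navigator.storage.estimate() is Chrome-only, hence the caveat above.
localforage.setDriver(localforage.INDEXEDDB)
  .then(() => localforage.setItem('existingKey', 'x'.repeat(1024 * 1024)))
  .then(() => navigator.storage.estimate())
  .then(({ usage }) => {
    console.log('usage after setItem:', usage);
    return localforage.removeItem('existingKey');
  })
  .then(() => navigator.storage.estimate())
  // With the IndexedDB driver the reported usage stays the same here.
  .then(({ usage }) => console.log('usage after removeItem:', usage));
```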

Related

Node.js | Chrome Memory Debugging

Context:
I have a Node.js application whose memory usage seems very high. I don't know whether it is a memory leak or not, because the usage drops after a certain period of time, but sometimes under heavy load it keeps increasing and takes much longer to come back down.
So, going through articles and a couple of videos, I figured out that I have to take a heap snapshot and analyse what is causing the memory leak.
Steps:
I have taken 4 snapshots so far on my local machine to reproduce the memory leak.
Snapshot 1: 800MB
Snapshot 2: 1400MB
Snapshot 3: 1600MB
Snapshot 4: 2000+MB
When I uploaded the heapdump files to Chrome DevTools, I see a lot of information there, but I don't know how to proceed.
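For reference, heap snapshot files like these can be produced without third-party tooling using Node's built-in v8 module; a minimal sketch, assuming Node.js >= 11.13 and a POSIX system where the process can receive signals:

```js
// Write a heap snapshot whenever the process receives SIGUSR2, so a
// snapshot can be captured under load with `kill -USR2 <pid>`.
const v8 = require('v8');

process.on('SIGUSR2', () => {
  // writeHeapSnapshot blocks the event loop while serializing the heap;
  // the resulting file can be loaded in Chrome DevTools > Memory.
  const file = v8.writeHeapSnapshot(`heap-${Date.now()}.heapsnapshot`);
  console.log(`Heap snapshot written to ${file}`);
});
```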
Please check the screenshot below: it shows a constructor [array] with 687206 as Shallow Size and 721414 as Retained Size in the columns. When I expanded that constructor, I could see that 4097716 constructors were created (refer to the second screenshot attached below).
Question
What does the internal array [] mean? Why are there 4097716 of them?
How can I filter out the constructors created by my app, and have the tool show those instead of system/V8 engine constructors?
In the same screenshot, one of the constructors uses a global variable called tenantRequire. This is a custom global function used internally in some places instead of the normal Node.js require, and I see the variable across all the constructors like "Array" and "Object". It is just a require function patched with a try/catch (a hedged sketch is shown after these questions). Is this causing the memory leak somehow?
Refer to screenshot 3: the [string] constructor has 270303848 as shallow size. When I expanded it, it shows modules loaded by Node.js. Why is this taking that much space, and why are my lodash modules repeated in that string constructor?
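The actual tenantRequire source is not reproduced here; what follows is a hedged reconstruction of what a try/catch-patched require typically looks like, with all details assumed rather than taken from the app:

```js
// Hypothetical reconstruction: the real tenantRequire was only described
// as a require patched with try/catch, so everything here is an assumption.
global.tenantRequire = function (modulePath) {
  try {
    return require(modulePath);
  } catch (err) {
    console.error('tenantRequire failed for', modulePath, err.message);
    return null;
  }
};
// Because it hangs off `global`, a wrapper like this shows up as a
// retainer in heap snapshots, but by itself it is unlikely to leak.
```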
Without knowing much about your app and the actions that cause the high memory usage, it's hard to tell what could be the issue. Which tool did you use to record the heap snapshot? What is the sequence of operations you did when you recorded the snapshot? Could you add this information to your question?
A couple of remarks
You tagged the question with node.js and showed Chrome DevTools. That's ok. You can totally take a heap snapshot of a Node.js application and analyze it in Chrome DevTools. But since both Node.js and Chrome use the same JS engine (V8) and the same garbage collector (Orinoco), it might be a bit confusing for someone who reads the question. Just to make sure I understand it correctly: the issue is in a Node.js app, not in a browser app. And you are using Chrome just to analyze the heap snapshot. Right?
Also, you wrote that you took the snapshots to reproduce the memory leak. That's not correct. You performed some action which you thought would cause a high memory usage, recorded a heap snapshot, and later loaded the snapshot in Chrome DevTools to observe the supposed memory leak.
Trace first, profile second
Every time you suspect a performance issue, you should first use tracing to understand which functions in your application are problematic (i.e. slow, creating a lot of objects that have to be garbage-collected, etc.).
Then, when you know which functions to focus on, you can profile them.
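Before reaching for visual tools, Node can report GC activity by itself: running with node --trace-gc logs each collection, and the built-in perf_hooks module can observe GC pauses programmatically. A rough sketch (note that on Node >= 16 the GC type moved into entry.detail):

```js
// Observe garbage-collection pauses to see how often and how long GC
// runs under load; a cheap first tracing step before full profiling.
const { PerformanceObserver } = require('perf_hooks');

const obs = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log(`GC took ${entry.duration.toFixed(2)} ms`);
  }
});
obs.observe({ entryTypes: ['gc'] });
```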
Try these visual tools
There are a few tools that can help you with tracing/profiling your app. Have a look at FlameScope (a web app) and node-clinic (a suite of tools). There is also Perfetto, but I think it's for Chrome apps, not Node.js apps.
I also highly recommend the V8 blog.

Running tests with mocha results in memory leak and large string structures

I am trying to set up an environment for detecting memory leaks in my application.
App setup: Angular + Electron
Simulating app use with: Mocha + Spectron + Webdriverio
I have tests for different user scenarios that I run on a freshly set up app, periodically collecting the memory usage of each process.
When the app is idle, memory usage is as expected. But I have run into a problem with other test cases: it seems that when running tests with mocha, I get unexpected and unknown structures in memory, which results in a memory leak.
I have attached a screenshot below (Memory tab in DevTools) that best describes my confusion.
Snapshot 1: Taken after the app is set up (81.8 MB)
Snapshot 2: Taken after a group of tests have completed (~ 10 minutes of normal use) and the app has returned to starting state (109 MB)
Snapshot 3: Taken after I have forced GC (via "Collect Garbage" button) (108 MB)
Comparing snapshots 1 and 2, I can see where most of the memory (~19 MB) is: in strings.
Inspection of retainers tells me that those strings are linked to (Global handlers)>(GC roots). Selecting one of the strings and executing $0 in the console results in the same output for all strings: <body>...</body>. When I hover over the element, it is linked to the body of my app (for every string).
"Expanding string structure" gives me the feeling that this is caused by some module being loaded multiple times and its references never being destroyed (my guess is that it is loaded via Module() in internal/modules/cjs/loader.js:136)?
Expanding string structure
When examining memory with "Allocation timelines", I don't find these "large string objects" under unreleased memory for the same action that produces a new "large string object" under "heap snapshot > comparison".
When I simulate a test scenario by hand, or simulate clicks via a function in the console, there is no memory leak.
All of that makes me think I am doing or using something wrong (regarding mocha).
My questions:
Is mocha not suitable for this kind of setup (i.e. does it hold some references until the app is closed)?
If a structure is retained only by (Global handlers)>(GC roots), when will it be released? I read here that they are not something you need to worry about, but in my case they are :/
How are there multiple strings (multiple references?) that, when inspected via $0, all reference the same DOM element (<body>)?
How come these string objects are not visible in "Allocation timelines"?
What can be the cause of this type of memory leak?
No, I don't think it is a mocha-related thing.
The trick is that mocha runs on the Node.js side and controls the browser through chromedriver using the WebDriver protocol (HTTP).
From the strings in your snapshot, I can see it is actually some code that is sent from chromedriver into your app.
I believe this is some issue with chromedriver.
These might be injections into the page made when chromedriver tries to execute some commands.
You can try to clean up cookies, local storage and session storage between tests, or do a hard reload with https://webdriver.io/docs/api/browser/reloadSession.html - but a reload is a pretty slow thing...
Or reload just the current context with https://webdriver.io/docs/api/webdriver.html#refresh
Also, you can try to manually execute some cleanup JS code on the app side with https://webdriver.io/docs/api/browser/execute.html.
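A rough sketch of such per-test cleanup, assuming the global browser object provided by the wdio test runner and its async API:

```js
// Cleanup between mocha tests; `browser` is the WebdriverIO global.
afterEach(async () => {
  // Clear storage inside the app's page context.
  await browser.execute(() => {
    localStorage.clear();
    sessionStorage.clear();
  });
  await browser.deleteCookies();
  // Heavier options if state still leaks across tests:
  // await browser.refresh();        // reload just the current context
  // await browser.reloadSession();  // full session restart (slow)
});
```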

KnockoutJS Memory Leak

I'm fairly certain I'm having memory leaks using KO version 2.0. I have an observable array that is populated with the result of an AJAX call. This collection is data-bound with a foreach to a DIV container. Each object in the array has one single observable value that is bound to a checkbox. I've examined the heap using Chrome, and my conclusion is the following:
If the AJAX call returns 3 elements, they are rendered properly in the DOM. If I take a snapshot of the heap at this point, there are three SearchResult objects in there. If I trigger the AJAX call again and it returns 5 elements, all 5 are correctly rendered to the DOM. However, if I take another heap snapshot in Chrome and compare the two, there are 8 elements listed as still being on the heap, all of them listed as "added" and none listed as "deleted". The DOM display is always correct, but the memory use just keeps climbing and climbing because the old search results are never deallocated.
Can anyone help me or give me pointers for diagnosing the memory leak?
UPDATE
I've created a jsFiddle to show the gist of what I'm doing. I've stripped out EVERYTHING but the core functionality and I can still duplicate the memory leak when running on my local machine. Obviously the code won't work as posted, because it needs to hit my local server to run the search.
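For readers without access to the fiddle, a minimal sketch of the setup described above; the endpoint, names, and the use of jQuery for the AJAX call are placeholders, not the original code:

```js
// One SearchResult per row; its single observable is bound to a checkbox.
function SearchResult(data) {
  this.name = data.name;
  this.isSelected = ko.observable(false);
}

function ViewModel() {
  var self = this;
  self.results = ko.observableArray([]);
  self.search = function () {
    $.getJSON('/api/search', function (data) {
      // Replace the array contents; under KO 2.0 the old SearchResult
      // objects appeared to stay on the heap after this call.
      self.results(data.map(function (d) { return new SearchResult(d); }));
    });
  };
}

ko.applyBindings(new ViewModel());
```

The corresponding markup would bind the results with foreach on the DIV container, e.g. data-bind="foreach: results".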
UPDATE 2
I pulled the newest 2.1.0.0 Beta version and the leak disappeared. I'm not a huge fan of using beta versions of things, or of the classic "just upgrade to the new version" fix. I am still very interested in knowing what changed, or what I was doing wrong that was creating the leak.
You're not doing anything wrong; it looks like ko.cleanNode was ignoring foreach bindings and not properly disposing of the outdated objects within the updated observableArray.
https://github.com/SteveSanderson/knockout/issues/271
This has been fixed in 2.1.0beta

What causes a memory leak in Java

I have a web application deployed on Oracle iPlanet Web Server 7. The website is actively used on the Internet.
After deploying, the heap size keeps growing, and after 2 or 3 weeks an OutOfMemory error is thrown.
So I began to use a profiling tool. I am not familiar with heap dumps. All I noticed is that char[], HashMap and String objects occupy too much of the heap. How can I tell from the heap dump what causes the memory leak? My assumptions about my memory leak:
I do a lot of logging in code using log4j, writing to a log.txt file. Is there a problem with that?
maybe an error removing inactive sessions?
some static values like cities and gender types stored in a static HashMap?
I have a login mechanism but no logout mechanism; when the site is opened again, a new login is needed (silly, but not implemented yet)?
All of the above?
Do you have an idea about these, or can you add other assumptions about the memory leak?
Since Java has garbage collection, a "memory leak" would usually be the result of keeping references to some objects when they shouldn't be kept alive.
You might be able to see just from the age of the objects which ones are potentially old and being kept around when they shouldn't.
log4j shouldn't cause any problems.
The HashMap should be okay, since you actually want to keep those values around.
Inactive sessions might be the problem if they're stored in memory and if something keeps references to them.
There is one more thing you can try: a new project, Plumbr, which aims to find memory leaks in Java applications. It is in beta stage, but should be stable enough to give it a try.
As a side note, Strings and char[] are almost always at the top of the profiler's data. This rarely means there is any real problem.

J2ME application shows out of memory exception in JBLEND

My J2ME application shows an out of memory exception in JBLEND. It works fine in JBED. By monitoring the memory, I realized that the document.parse(xmlParser) method consumes a lot of memory. I think the reason for the exception is that memory is not freed after parsing the XML. Is that right? How can I solve the problem?
Whatever document.parse(xmlParser) returns, you should dereference it as soon as you don't need it anymore, i.e. you should set fields pointing to the returned object to null (or unset indirect references).
I've never used JBLEND or JBED, but the Wireless Toolkit and the Java ME SDK also have a nice memory profiler which helps you track down memory and object reference problems.
