RazorEngine Memory Usage

I have created a Windows service to build and send emails. I am using RazorEngine to parse the email templates, and a dynamic ExpandoObject to create the model.
My problem is that as each email is created and sent, the memory keeps increasing and is never released. I have profiled the service with ANTS Memory Profiler (I haven't used this before) and it shows the following results:
With RazorEngine (parsing 200 emails with Razor.Parse(text, model)):
Generation 1: 12.9 KB
Generation 2: 15.88 MB
Large Object Heap: 290.9 KB
Unused memory allocated to .NET: 3.375 MB
Unmanaged: 69.51 MB
Total number of memory fragments: 197
Without RazorEngine (returning the 200 emails as unparsed text):
Generation 1: 13.87 KB
Generation 2: 3.798 MB
Large Object Heap: 95.58 KB
Unused memory allocated to .NET: 4.583 MB
Unmanaged: 44.58 MB
Total number of memory fragments: 7
With Razor, the biggest Generation 2 instances are:
System.Reflection.Emit __FixUpData[] - 2,447,640 live bytes, 3,138 instances
Has anyone any idea why the objects aren't being released and Generation 2 keeps growing? Is there a way to create a new instance of RazorEngine each time I want to parse a template, so that when it's finished it is no longer referenced and can be collected by the GC?
I've tried creating a new instance of TemplateService each time I parse a template, but this hasn't made a difference:
using (ITemplateService templateService = new TemplateService())
{
    result = templateService.Parse<ExpandoObject>(text, model);
}

Each time you parse a template, RazorEngine compiles an in-memory assembly. That can get expensive, so you should re-use your templates as much as possible.
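As a rough sketch of what that reuse can look like (assuming the RazorEngine 3.x static API; the exact Parse overload varies between 3.x releases, and templatePath, models and SendEmail are hypothetical):

using RazorEngine;   // legacy 3.x static API
using System.IO;

// Parse the same template under a fixed cache name so the compiled
// assembly is generated once and reused for every email.
string template = File.ReadAllText(templatePath);   // hypothetical path
foreach (var model in models)                       // hypothetical model list
{
    string body = Razor.Parse(template, (object)model, "email-template");
    SendEmail(body);                                // hypothetical send method
}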

Old question, but to activate the template cache you must supply the "cache" argument to the Parse method (which can/should be the path to your template):
return RazorViewService.Parse(File.ReadAllText(path), model, null, cache);

If you're using the same template repeatedly, call RazorEngine.Compile once and then Razor.Run thereafter, to ensure the template is compiled only once.
Also, RazorEngine seems to leak memory when DEBUG is enabled in a build. Make sure your production code is built with the Release profile, i.e. without the DEBUG compiler constant.
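A sketch of that compile-once pattern (again assuming the RazorEngine 3.x static API, whose exact signatures differ between versions; EmailModel and the surrounding names are hypothetical):

using RazorEngine;
using System.IO;

// Compile once under a cache name...
Razor.Compile(File.ReadAllText(templatePath), typeof(EmailModel), "email-template");

// ...then run the already-compiled template for each email.
foreach (EmailModel model in models)
{
    string body = Razor.Run("email-template", model);
    SendEmail(body); // hypothetical send method
}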

When you compile a template, the dynamic assemblies that are created are loaded into the current appdomain. There is no facility to unload them so as you compile more templates, the memory keeps growing.
You can use the IsolatedTemplateService in RazorEngine 3.x to get around this. What it does is load the compiled template into a new appdomain; when that appdomain is unloaded, the template assemblies loaded into it can be collected as well. There are some limitations, though: you cannot use dynamic models (ExpandoObject) or anonymous models, and the model needs to be serializable.
Check this out from the author of RazorEngine: http://www.fidelitydesign.net/?p=473
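A sketch of the IsolatedTemplateService usage (RazorEngine 3.x assumed; the Parse signature may differ by version, and EmailModel is a hypothetical serializable model):

using System;
using RazorEngine.Templating;

[Serializable] // the model must cross the appdomain boundary
public class EmailModel
{
    public string Name { get; set; }
}

// Templates compiled inside the using block are loaded into a separate
// appdomain; disposing the service unloads that appdomain, so the
// dynamic template assemblies can finally be reclaimed.
using (var service = new IsolatedTemplateService())
{
    string body = service.Parse(template, new EmailModel { Name = "Bob" });
}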

Related

Nodejs | Chrome Memory Debugging

Context:
I have a Node.js application whose memory usage seems very high. I don't know whether it is a memory leak, because the usage comes down after a certain period of time, but sometimes under heavy load it keeps increasing and takes much longer to come down.
Going through articles and a couple of videos, I figured I had to take heap snapshots and analyse what is causing the memory leak.
Steps:
I have taken 4 snapshots so far on my local machine to reproduce the memory leak.
Snapshot 1: 800MB
Snapshot 2: 1400MB
Snapshot 3: 1600MB
Snapshot 4: 2000+MB
When I uploaded the heap-dump files to Chrome DevTools I saw a lot of information, but I don't know how to proceed from there.
Please check the screenshot below: it shows a constructor [array] with a shallow size of 687,206 and a retained size of 721,414. When I expanded that constructor, I could see 4,097,716 constructors created (refer to the second screenshot attached below).
Question
What does the internal array [] mean? Why are there 4,097,716 of them?
How can I filter out the constructors created by my app, so I see those instead of system/V8-engine constructors?
In the same screenshot, one of the constructors uses a global variable called tenantRequire. This is a custom global function used internally in some places instead of the normal Node.js require; it is just the require function patched with a try/catch. I see the variable across all the constructors like "Array" and "Object". Is this somehow causing the memory leak?
Refer to screenshot 3: the [string] constructor has a shallow size of 270,303,848. When I expanded it, it shows modules loaded by Node.js. Why is this taking that much space? And why are my lodash modules repeated in that string constructor?
Without knowing much about your app and the actions that cause the high memory usage, it's hard to tell what the issue could be. Which tool did you use to record the heap snapshot? What sequence of operations did you perform when you recorded it? Could you add this information to your question?
A couple of remarks
You tagged the question with node.js and showed Chrome DevTools. That's ok. You can totally take a heap snapshot of a Node.js application and analyze it in Chrome DevTools. But since both Node.js and Chrome use the same JS engine (V8) and the same garbage collector (Orinoco), it might be a bit confusing for someone who reads the question. Just to make sure I understand it correctly: the issue is in a Node.js app, not in a browser app. And you are using Chrome just to analyze the heap snapshot. Right?
Also, you wrote that you took the snapshots to reproduce the memory leak. That's not correct. You performed some action which you thought would cause a high memory usage, recorded a heap snapshot, and later loaded the snapshot in Chrome DevTools to observe the supposed memory leak.
Trace first, profile second
Every time you suspect a performance issue, you should first use tracing to understand which functions in your application are problematic (i.e. slow, create a lot of objects that have to be garbage collected, etc.).
Then, when you know which functions to focus on, you can profile them.
Try these visual tools
There are a few tools that can help you with tracing/profiling your app. Have a look at FlameScope (a web app) and node-clinic (a suite of tools). There is also Perfetto, but I think it's for Chrome apps, not Node.js apps.
I also highly recommend the V8 blog.
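If you want reproducible snapshots rather than clicking around in DevTools, Node can also write them from inside the process (v8.writeHeapSnapshot has been available since Node 11.13; runSuspectWorkload below is a hypothetical stand-in for the action you think leaks):

const v8 = require('v8');

// Write a snapshot before and after the suspect operation so the two
// files can be diffed in Chrome DevTools ("Comparison" view).
function snapshot(label) {
  const file = v8.writeHeapSnapshot(`${label}-${Date.now()}.heapsnapshot`);
  console.log(`heap snapshot written to ${file}`);
}

async function main() {
  snapshot('before');
  await runSuspectWorkload(); // hypothetical workload
  snapshot('after');
}

main();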

Running tests with mocha results in memory leak and large string structures

I am trying to set up an environment for detecting memory leaks in my application.
App setup: Angular + Electron
Simulating app use with: Mocha + Spectron + Webdriverio
I have tests for different user scenarios that I run on a freshly set-up app, and I periodically collect the memory usage of each process.
When the app is idle, memory usage is as expected. But I have run into a problem with the other test cases: it seems that when running tests with Mocha, I get unexpected and unknown structures in memory, and that results in a memory leak.
I have attached a screenshot below (Memory tab on dev tools), that best describes my confusion.
Snapshot 1: Taken after the app is set up (81.8 MB)
Snapshot 2: Taken after a group of tests have completed (~ 10 minutes of normal use) and the app has returned to starting state (109 MB)
Snapshot 3: Taken after I have forced GC (via "Collect Garbage" button) (108 MB)
Comparing snapshots 1 and 2, I can see where most of the memory (~19 MB) is: in strings.
Inspection of retainers tells me that those strings are linked to (Global handlers)>(GC roots). Selecting one of the strings and executing $0 in the console gives the same output for all of them: <body>...</body>. When I hover over the element, it is linked to the body of my app (for every string).
"Expanding string structure" gives me the feeling that this is caused by some module being loaded multiple times and its references never being destroyed (my guess is that it is loaded via Module() in internal/modules/cjs/loader.js:136)?
Expanding string structure
When examining memory with "Allocation timelines", I don't find these "large string objects" under unreleased memory for the same action that produces a new "large string object" under "heap snapshot > comparison".
When I simulate a test scenario by hand, or simulate clicks via a function in the console, there is no memory leak.
All of that makes me think I am doing or using something wrong (regarding Mocha).
My questions:
Is Mocha not suitable for this kind of setup (i.e. does it hold some references until the app is closed)?
If a structure is retained only by (Global handlers)>(GC roots), when will it be released? I read here that they are not something you need to worry about, but in my case they are :/
How are there multiple strings (multiple references?) that, when called via $0, all reference the same DOM element (<body>)?
How come these string objects are not visible in "Allocation timelines"?
What can be the cause of this type of memory leak?
No, I don't think it is a Mocha-related thing.
The trick is that Mocha runs on the Node.js side and controls the browser through chromedriver using the WebDriver protocol (HTTP).
From the strings in your snapshot, I can see that it is actually code sent from chromedriver into your app.
I believe this is an issue with chromedriver; it might be injecting code into the page when it tries to execute some commands.
You can try to clean up cookies, local storage and session storage between tests, or do a hard reload with https://webdriver.io/docs/api/browser/reloadSession.html - but a session reload is pretty slow...
Or reload just the current context with https://webdriver.io/docs/api/webdriver.html#refresh
You can also try to manually execute some cleanup JS code on the app side with
https://webdriver.io/docs/api/browser/execute.html
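A hedged sketch of that cleanup in Mocha hooks (WebdriverIO v5+ API assumed):

// Runs after each test: clear storage and cookies in the app's context
// so state accumulated by one test cannot retain memory into the next.
afterEach(async () => {
  await browser.execute(() => {
    localStorage.clear();
    sessionStorage.clear();
  });
  await browser.deleteCookies();
});

// Heavier option between test groups: tear the whole session down.
after(async () => {
  await browser.reloadSession(); // slow, but gives a truly fresh session
});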

Memory Leak examples written in 4D

What are some examples of developer created memory leaks written in the 4D programming language?
By "developer created memory leak", I am referring to a memory leak created by bad programming that could have been avoided with better programming.
32 bit
When run in a 32-bit application, leaking code like the examples below should eventually crash once it attempts to allocate more than 2^32 bytes (4 GB) of memory. On the Mac OS X platform, the bottom of the crash report beneath the VM Region Summary will show a memory value around 3.7 GB:
TOTAL               3.7G
64 bit
When run in a 64-bit application, the code will keep increasing the amount of memory allocated and will not plateau; in that situation, the OS will eventually complain that it has run out of memory:
Overview
There are many ways that a developer can create their own memory leaks. Most of what you want to avoid is listed here:
use CLEAR VARIABLE when done using a variable
use CLEAR SET when done using a set
use CLEAR NAMED SELECTION when done using a named selection
use CLEAR LIST when done using a list
re-size your BLOBs to 0 with SET BLOB SIZE when done using the BLOB or use CLEAR VARIABLE
re-size your arrays to 0 when done using the array or use CLEAR VARIABLE
don't forget to close any open XML trees such as XML, DOM, SVG, etc (DOM CLOSE XML, SVG_CLEAR)
if using ODBC always remember to free the connection using ODBC_SQLFreeConnect
make sure to clean up any offscreen areas used
Examples
Here are two specific examples of developer created memory leaks:
Forgetting to close XML
Bad code:
Repeat
$xmlRef:=DOM Create XML Ref("root")
Until (<>crashed_or_quit)
The code snippet above will leak memory because each call to DOM Create XML Ref creates a new reference to a memory location, and the developer of this code has neglected to include a call to free that memory. Running this loop in a 32-bit host application will eventually cause a crash.
Fixed code:
This code can be easily fixed by calling DOM CLOSE XML when finished with the XML reference.
Repeat
$xmlRef:=DOM Create XML Ref("root")
DOM CLOSE XML($xmlRef)
Until (<>crashed_or_quit)
Forgetting to clear a list
Bad code:
Repeat
$listRef:=New list
Until (<>crashed_or_quit)
The code snippet above will leak memory because each time New list is called, a reference to a new location in memory is returned. The developer is supposed to clear the memory at the referenced location by using the CLEAR LIST($listRef) command. As a bonus, if the list has any sublists attached, the sublists can be cleared by passing the * parameter, as in CLEAR LIST($listRef;*).
Fixed code:
This can be easily fixed by calling CLEAR LIST($listRef;*) as seen in the following fixed example:
Repeat
$listRef:=New list
CLEAR LIST($listRef;*)
Until (<>crashed_or_quit)

KnockoutJS Memory Leak

I'm fairly certain I'm having memory leaks using KO version 2.0. I have an observable array that is populated with the result of an AJAX call. This collection is data-bound with a foreach binding to a DIV container. Each object in the array has one single observable value that is bound to a checkbox. I've examined the heap using Chrome, and my conclusion is the following:
If the AJAX call returns 3 elements, they are rendered properly in the DOM. If I take a snapshot of the heap at this point, there are three SearchResult objects in it. If I trigger the AJAX call again and it returns 5 elements, all 5 are correctly rendered in the DOM. However, if I take another heap snapshot in Chrome and compare the two, there are 8 elements listed as still being on the heap, all of them listed as "added" and none as "deleted". The DOM display is always correct, but memory use just keeps climbing and climbing because the old search results are never deallocated.
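For concreteness, a minimal sketch of the setup described (names are hypothetical, and jQuery is assumed for the AJAX call):

function SearchResult(data) {
    this.title = data.title;
    this.isChecked = ko.observable(false); // bound to a checkbox in the template
}

var viewModel = {
    // Bound in markup with: <div data-bind="foreach: results"> ... </div>
    results: ko.observableArray([])
};

function search() {
    $.getJSON('/api/search', function (items) {
        // Replace the whole collection; under KO 2.0 the previous
        // SearchResult instances stayed reachable after this call.
        viewModel.results(items.map(function (i) { return new SearchResult(i); }));
    });
}

ko.applyBindings(viewModel);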
Can anyone help me or give me pointers for diagnosing the memory leak?
UPDATE
I've created a jsFiddle to show the gist of what I'm doing. I've stripped down EVERYTHING but the core functionality, and I can still duplicate the memory leak when running on my local machine. Obviously the code won't work as posted, because it needs to hit my local server to run the search.
UPDATE 2
I pulled the newest 2.1.0.0 beta version and the leak disappeared. I'm not a huge fan of using beta versions, or of the classic "just upgrade to the new version" fix. I am still very interested in knowing what changed, or what I was doing wrong, that was creating the leak.
You're not doing anything wrong; it looks like ko.cleanNode was ignoring foreach bindings and not properly disposing of the outdated objects within the updated observableArray.
https://github.com/SteveSanderson/knockout/issues/271
This has been fixed in the 2.1.0 beta.

What is the HostCodeHeap and why are they leaking?

We have a .NET application (actually an IronPython app). We noticed that over time the app grows bigger in memory and becomes sluggish.
Using WinDbg (!eeheap -loader), we noticed that the LoaderHeap is getting bigger (a 150 MB increase per day). From the !eeheap output, it seems that the increase is due to HostCodeHeap (objects?).
I'd like to know what these objects are and how I can prevent them from growing to infinity.
Thanks!
They are likely objects created for dynamically emitted code. Several components in the framework do this, and it may well be that IronPython emits some of its own.
I've heard of similar issues with LINQ to SQL, XML serialization, compiled XSLT transforms and other dynamically generated code.
See also "Leaking Unmanaged Heap Memory" near figure 2 in this MSDN magazine article.
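To make that concrete (my illustration, not from the linked article): the XmlSerializer constructors that take more than a Type are a classic source of this, because each call emits a new dynamic assembly that is never unloaded. Caching the serializer stops the growth:

using System;
using System.Collections.Generic;
using System.Xml.Serialization;

static class SerializerCache
{
    private static readonly Dictionary<string, XmlSerializer> cache =
        new Dictionary<string, XmlSerializer>();

    public static XmlSerializer Get(Type type, XmlRootAttribute root)
    {
        string key = type.FullName + ":" + root.ElementName;
        lock (cache)
        {
            XmlSerializer serializer;
            if (!cache.TryGetValue(key, out serializer))
            {
                // Only the (Type) and (Type, string) constructors cache
                // internally; this overload emits new code on every call.
                serializer = new XmlSerializer(type, root);
                cache[key] = serializer;
            }
            return serializer;
        }
    }
}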
