ServiceStack Funq Container WeakReference proliferation - memory-leaks

I recently wrote a small service that handles a high amount of throughput (on the order of 60+ million requests per day) and it is encountering memory issues. At first, I looked through all of the usual suspects, convinced that it had to be something I wrote as opposed to something in the very useful, very performance-oriented ServiceStack libraries. Upon using WinDbg to run !dumpheap -stat on the production server, however, I discovered to my surprise that the vast majority of objects in memory were System.WeakReference instances, with !gcroot pointing to ServiceStack's Funq container.
I do not even use an IoC'ed data structure in my service, so I am wondering why this is happening. Am I initializing something incorrectly? My AppHost initialization class just calls the base constructor with the assembly and name information; I do not override the Configure method at all.
public SvcName() : base("SvcName", typeof(SvcName).Assembly) { }
I read elsewhere that System.WeakReference objects are sometimes inserted by .NET in rare instances when the binaries were compiled in Visual Studio with the "Edit and Continue" debugging option enabled, but turning it off in my VS has no effect (presumably because the ServiceStack binaries are already compiled and merely referenced in my project).
Has anyone else ever had this issue?

WeakReference is used in Funq to track IDisposables, which are stored in a Stack of WeakReferences as seen here. Basically Funq tracks a WeakReference for every IDisposable created so they can all be disposed of when the Container is disposed.
I would first look at whether you could reduce your use of IDisposable instances (e.g. by using more singletons). Otherwise, try modifying the Funq source code to use a Stack<IDisposable> instead of a Stack<WeakReference> and let me know if this resolves your issue; if it does, I can include an opt-in option in ServiceStack to use Stack<IDisposable> instead of Stack<WeakReference>.
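Funq's actual implementation differs in detail, but a minimal sketch of the tracking pattern described above (the class and member names here are illustrative, not Funq's real ones) shows why one WeakReference is allocated per resolved IDisposable:

```csharp
using System;
using System.Collections.Generic;

// Illustrative sketch: the container keeps a WeakReference for every
// IDisposable it resolves, so under high throughput the stack fills with
// WeakReference objects far faster than the container is ever disposed.
public class DisposableTracker : IDisposable
{
    private readonly Stack<WeakReference> disposables = new Stack<WeakReference>();

    public void Track(IDisposable instance)
    {
        // One WeakReference allocated per tracked instance
        disposables.Push(new WeakReference(instance));
    }

    public void Dispose()
    {
        // Walk the stack and dispose any targets still alive
        while (disposables.Count > 0)
        {
            var disposable = disposables.Pop().Target as IDisposable;
            if (disposable != null)
                disposable.Dispose();
        }
    }
}
```

The suggested change above amounts to swapping the `Stack<WeakReference>` for a `Stack<IDisposable>`, trading the per-instance WeakReference allocation for strong references held until the container is disposed.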

Related

Best way to inject Principal info into n-tier ASP MVC 5 using Unity DI and not using OWIN

I'm writing an n-tier ASP MVC 5 app with Entity Framework in the data access layer. Bottom line is that I want an easy way to get the current user information, which is available in the Presentation Layer's HttpContext, into my Data Access Layer (repository), where it is not. Obviously, this sounds like a job for my DI container (Unity 3). My first reaction was to try creating a child container, which would be request-specific, in the controller action. Because I went with the config and bootstrapping generously supplied by the good folks who support MvcSiteMapProvider, I ran into trouble because they have abstracted the DI container to handle several flavors, and the container is read-only. Then I stumbled across the Unity.Mvc PerRequestLifetimeManager, which superficially sounds like a solution, but there's kind of a dearth of information about it. Before I get lost for days down the various rabbit holes, has anyone else been down this path and found a clean, semi-elegant solution?
Forgive my possible lack of understanding of the complexities of your project; but from a simpleton perspective, some options for persisting entity audit info (last update datetime, user, etc.):
1. Have a base class for your entities, define the audit info properties there, and set them in the constructor or expose them for the app to set/override.
2. Have a base class for your context, override SaveChanges, examine the entities being persisted, and update their audit info (http://msdn.microsoft.com/en-ca/library/vstudio/cc716714(v=vs.100).aspx).
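Combining both options, a hedged sketch of what the SaveChanges override might look like (class, property, and context names here are hypothetical; EF 6-style APIs assumed):

```csharp
using System;
using System.Data.Entity; // EF 6; install the EntityFramework NuGet package

// Option 1: a base class carrying the audit properties.
public abstract class AuditableEntity
{
    public DateTime LastUpdated { get; set; }
    public string LastUpdatedBy { get; set; }
}

// Option 2: a context base class that stamps audit info on save.
public class AuditingContext : DbContext
{
    // Set from the presentation layer (e.g. per-request via your DI container),
    // so the data access layer never has to touch HttpContext itself.
    public string CurrentUserName { get; set; }

    public override int SaveChanges()
    {
        foreach (var entry in ChangeTracker.Entries<AuditableEntity>())
        {
            if (entry.State == EntityState.Added || entry.State == EntityState.Modified)
            {
                entry.Entity.LastUpdated = DateTime.UtcNow;
                entry.Entity.LastUpdatedBy = CurrentUserName;
            }
        }
        return base.SaveChanges();
    }
}
```

With a per-request lifetime manager, the container can inject the user name into the context once per request, keeping the repository layer ignorant of ASP.NET.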
Looks like the PerRequestLifetimeManager is working well. I expected a lot of trouble mixing the MvcSiteMapProvider's bootstrapper and other infrastructure with the Unity.Mvc NuGet package but so far it looks like they are very happy together.

Domino Database connection for a Java bean architecture

We are moving our multi-database web application from LotusScript to a Java beans architecture, but are struggling to decide how best to handle database connections and what scope to use for them.
If we use sessionScope, then connections to 5-6 databases will be created per call for each user. If we use an applicationScope bean for the database connection, then it will remain open until the server is restarted, causing memory leaks. I understand that certain values, such as System Configuration values that rarely change, can be cached at applicationScope level, but I am concerned about the rest of the connections.
My question really is: what's the best way to handle Domino database connections (Domino objects are not serializable) without hurting performance or causing memory leaks or automatic GC issues?
This is a tough one because it deals with architecting a specific solution vs just some generic "this works better than that" advice. We have had great success architecting a consumer XPage application so that data is retrieved from additional databases. Sort of a front end with database backends but with Domino.
We use no applicationScope anything, because there is nothing global to the application; but even if there were, there is enough chatter out there to indicate that applicationScope is perhaps not as ubiquitous as it sounds, and therefore you have to monitor your objects closely.
You already figured out the Domino object issue so that has to be done no matter which approach you choose.
Depending on your application you may be staring down some major rearchitecting, but my recommendation is to try it with sessionScope first and see how it performs. Do some benchmarking. If it works fast enough, go with that; but as you develop your beans, pay VERY close attention to performance optimization. The multiple database calls could be an issue, but you really won't know until you play with it a little bit.
One thing that will help: if you build your bean classes using a more detailed architecture than you think you need at first (don't try to pile everything into a single class or bean), not only will it be easier to adapt your architecture if needed, but you will also start to see design patterns that you may not have even known were possibilities.
As Russell mentions, there is no one way to do this and each will have their pros/cons.
There is a wrapped document class, DominoDocument, you can use to store document information:
public static DominoDocument wrap(java.lang.String database,
                                  lotus.domino.Database db,
                                  java.lang.String parentId,
                                  java.lang.String form,
                                  java.lang.String computeWithForm,
                                  java.lang.String concurrencyMode,
                                  boolean allowDeletedDocs,
                                  java.lang.String saveLinksAs)
Javadoc is here:
http://public.dhe.ibm.com/software/dw/lotus/Domino-Designer/JavaDocs/XPagesExtAPI/8.5.2/com/ibm/xsp/model/domino/wrapped/DominoDocument.html
However, this just handles some of the recycle() work in the background, so you still have the same overheads from creating and recycle()-ing the database objects.
The main overhead you will find is creating the connection to the database in your Java code. Once that connection is made, everything else is relatively fast.
I would recommend when testing this for performance that you use the XPages Toolkit. Videos on how to use it are part of the XPages Masterclass on openNTF.
http://www.openntf.org/internal/home.nsf/project.xsp?action=openDocument&name=XPages%20Masterclass

Any reason not to stick my CLLocationManager in a global variable?

Pretty much all viewcontrollers in the app I'm building need a CLLocationManager. Is there any reason not to put it into a global variable (by way of a static class)? The alternatives seem to be to set it up separately for every viewcontroller (wasteful) or to pass it along to every viewcontroller (messy).
I usually set up a shared instance and call it... "LocationManager". You can check out an old revision here:
https://gist.github.com/1603316
Xamarin Mobile API is also another good project to get synced up with. The goal is to create a shared library that abstracts away the common interfaces to things like GPS, Accelerometer, Contacts, etc:
http://blog.xamarin.com/2011/11/22/introducing-the-xamarin-mobile-api/
Update: to answer your question, the only reason I can think of NOT to create a shared-instance implementation is if you plan on accessing it from a bunch of different threads. To solve for this in my implementation, I would simply create thread-safe members with thread-safe access patterns to those members.
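A minimal thread-safe shared-instance sketch of that idea (the class and member names are illustrative; in a Xamarin.iOS app the wrapped object would be the CoreLocation CLLocationManager rather than the plain fields used here):

```csharp
using System;

public sealed class LocationService
{
    // Lazy<T> with thread safety gives a race-free singleton initialization.
    private static readonly Lazy<LocationService> instance =
        new Lazy<LocationService>(() => new LocationService(), isThreadSafe: true);

    public static LocationService Instance { get { return instance.Value; } }

    private readonly object sync = new object();
    private double lastLatitude, lastLongitude;

    private LocationService() { }

    // Thread-safe members with thread-safe access patterns, as suggested above
    public void UpdateLocation(double latitude, double longitude)
    {
        lock (sync) { lastLatitude = latitude; lastLongitude = longitude; }
    }

    public Tuple<double, double> LastLocation
    {
        get { lock (sync) { return Tuple.Create(lastLatitude, lastLongitude); } }
    }
}
```

Every viewcontroller then reads `LocationService.Instance` instead of constructing or receiving its own manager.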

Entity Framework Code First 4.1 IIS worker process memory footprint grows continually

I have a small website (MVC 3) that does some basic data collection. I have some seemingly random timeouts and upon investigation I have noticed that whenever a page that contains CRUD operations is executed, the IIS worker process memory usage grows a little bit but never reduces.
The site uses EF Code First. This is my first attempt at EFCF so I wouldn't be surprised if I created a problem. Any suggestion on what I should check or best practices for handling the objects to ensure that they are properly disposed of when the view completes would be greatly appreciated.
I can provide sample code if necessary.
Make sure your code is not holding on to a reference to your DbContext. Your DbContext instance should be as short-lived as possible. Also, check whether you have disabled object tracking. If object tracking is enabled and you are keeping an instance of your DbContext as a session/application/static variable, then your memory usage will grow.
To disable object tracking, construct your queries in the following manner:
from e in mycontext.Entities.AsNoTracking()
where (condition)
select e
This will stop the DbContext from caching your entities.
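Putting both suggestions together, a sketch of a short-lived context combined with a no-tracking query (the MyContext type and its Entities/IsActive members are hypothetical, for illustration only):

```csharp
// Hypothetical context and entity names; assumes EF 4.1+ DbContext API.
using (var context = new MyContext())
{
    var active = (from e in context.Entities.AsNoTracking() // no change-tracking cache
                  where e.IsActive
                  select e).ToList(); // materialize before the context goes away
} // context and any tracked state are disposed here
```

Because the context lives only for the span of the using block, nothing it allocates can accumulate across requests.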

Is there any way to create dynamic types at runtime without having them permanently in the app domain?

My current understanding of dynamically generated types is this:
If you generate a type via CodeDom and load it into an AppDomain, there is no way to unload that type (i.e. Assembly.Unload(...) doesn't exist) without destroying the AppDomain as a whole.
Are there any other ideas on how to create custom types at runtime?
Can the C# 4.0 dynamic keyword be used somehow magically? Could the .NET 4 ExpandoObject be utilised in some lovely way?
Could anonymous types and the dynamic keyword be combined with some technical wizardry?! It feels like we have scattered tools that might achieve something useful. But I could be wrong.
Once an assembly or type has been loaded into the AppDomain, it's there until the AppDomain is torn down, period, no exceptions.
That's why CodeDom is pure evil when used in any kind of bulk: it's a guaranteed memory leak and performance problem, because EVERY compile with CodeDom generates a new assembly. I think you have a few options:
Run a sandboxed AppDomain for your dynamic types.
Run your primary AppDomain in an environment where recycles and pooling are acceptable. Obviously this is not possible in a client application, but if you're running in ASP.NET you can add code that monitors the number of assemblies loaded in your AppDomain and requests a recycle when that number reaches a critical point. Then just have IIS pool your web application and you still have high availability, since you have multiple AppDomains running at once.
Use TypeBuilder and Reflection.Emit. This lets you use one dynamic assembly for all of your dynamically generated types.
If you want to dynamically generate C#-style code like you can with CodeDom, you can still do so in conjunction with TypeBuilder, so your dynamic C# code gets compiled into a TypeBuilder in a dynamic assembly instead of into a new assembly every time. To do this you can use MCS (Mono Compiler Service). You can pass it C#-formatted classes and, with a little tweaking, have it compile your code to a single dynamic assembly. See Mono Compiler as a Service (MCS).
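A minimal, self-contained sketch of the TypeBuilder option: all generated types go into one reusable dynamic assembly rather than a new assembly per compile. The "Calculator"/"Add" names are illustrative; on older .NET Framework versions, AppDomain.CurrentDomain.DefineDynamicAssembly plays the role of AssemblyBuilder.DefineDynamicAssembly.

```csharp
using System;
using System.Reflection;
using System.Reflection.Emit;

public static class DynamicTypeDemo
{
    // Emits a public "Calculator" type with an int Add(int, int) method
    // into a single in-memory dynamic assembly.
    public static Type BuildCalculatorType()
    {
        var asmBuilder = AssemblyBuilder.DefineDynamicAssembly(
            new AssemblyName("DynamicTypes"), AssemblyBuilderAccess.Run);
        var module = asmBuilder.DefineDynamicModule("MainModule");

        var typeBuilder = module.DefineType("Calculator", TypeAttributes.Public);
        var method = typeBuilder.DefineMethod("Add",
            MethodAttributes.Public, typeof(int), new[] { typeof(int), typeof(int) });

        var il = method.GetILGenerator();
        il.Emit(OpCodes.Ldarg_1); // arg 0 is 'this' on instance methods
        il.Emit(OpCodes.Ldarg_2);
        il.Emit(OpCodes.Add);
        il.Emit(OpCodes.Ret);

        return typeBuilder.CreateType();
    }
}
```

Further calls to DefineType on the same module keep adding types to that one assembly, which is exactly what avoids the per-compile assembly growth described above.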
