How to reduce Enterprise Library 5.0 Logging memory usage? - c#-4.0

I am using Enterprise Library 5.0 logging within a .NET 4.0 console application, and I noticed very high memory usage in my application. I was able to determine that the cause was the following call:
var logWriter = EnterpriseLibraryContainer.Current.GetInstance<LogWriter>();
After some profiling and manual testing with a simple console application, I found that memory usage dropped from 45 MB to 10 MB when the following DLLs were removed from the execution folder:
Microsoft.Practices.EnterpriseLibrary.Validation.dll
Microsoft.Practices.EnterpriseLibrary.Data.dll
The log initialization is my first call to the Enterprise Library APIs. My console application does not make any calls to Data.dll or Validation.dll; they exist in my execution folder only because they are referenced by other class libraries and by our deployment setup.
I am assuming that EnterpriseLibraryContainer.Current initializes itself based on what is found in the execution folder. I tried creating my LogWriter with the following instead, but I got the same result:
var configSource = new FileConfigurationSource(configPath);
var logWriterFactory = new LogWriterFactory(configSource);
var logWriter = logWriterFactory.Create();
Is it possible to initialize a LogWriter without the memory usage increasing when the Validation and Data DLLs are present in the execution folder?
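For reference, the numbers above came from comparing the process working set before and after resolving the LogWriter; a minimal sketch of that kind of measurement (illustrative only, not my exact profiling code):
using System;
using System.Diagnostics;
using Microsoft.Practices.EnterpriseLibrary.Common.Configuration;
using Microsoft.Practices.EnterpriseLibrary.Logging;

internal static class MemoryProbe
{
    private static void Main()
    {
        Console.WriteLine("Working set before: {0:N0} bytes", Process.GetCurrentProcess().WorkingSet64);

        // The call under investigation.
        var logWriter = EnterpriseLibraryContainer.Current.GetInstance<LogWriter>();
        logWriter.Write("warm-up entry");

        GC.Collect();
        GC.WaitForPendingFinalizers();
        Console.WriteLine("Working set after:  {0:N0} bytes", Process.GetCurrentProcess().WorkingSet64);
    }
}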
UPDATE:
After some debugging within the EntLib source, I believe the following code is what finds and instantiates Validation.dll and Data.dll, despite their not being referenced anywhere in the project.
From EntLib50Src\Blocks\Common\Src\Configuration\ContainerModel\TypeLoadingLocator.cs
private IEnumerable<TypeRegistration> GetRegistrationsInternal(IConfigurationSource configurationSource,
    Func<ITypeRegistrationsProvider, IConfigurationSource, IEnumerable<TypeRegistration>> registrationAccessor)
{
    Type providerType = Type.GetType(Name);
    if (providerType == null) return new TypeRegistration[0];

    var provider = (ITypeRegistrationsProvider)Activator.CreateInstance(providerType);
    return registrationAccessor(provider, configurationSource);
}
The call to Type.GetType(Name) looks in the executing assembly's location, which seems to be the reason why the EntLib data access block gets registered.
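To illustrate the mechanism, the locator essentially does the following for each well-known block. The type name below is my assumption of one of those well-known provider names; the point is that Type.GetType with an assembly-qualified name probes the application base directory and only returns null when the assembly is absent:
using System;

internal static class ProbeCheck
{
    private static void Main()
    {
        // Resolves only if Microsoft.Practices.EnterpriseLibrary.Data.dll can be found
        // in the application base directory (or elsewhere on the probing path).
        var dataProvider = Type.GetType(
            "Microsoft.Practices.EnterpriseLibrary.Data.Configuration.DatabaseSyntheticConfigSettings, " +
            "Microsoft.Practices.EnterpriseLibrary.Data");

        Console.WriteLine(dataProvider == null
            ? "Data.dll absent - no data access registrations"
            : "Data.dll present - default data access registrations will be added");
    }
}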
Debugging further into my original application, which contains connection strings that use the Oracle ODP.NET provider (something I failed to mention from the start): my current application makes no calls to, and has no references to, data access. The connection strings are defined because the application makes dynamic calls into other DLLs which need them, but for this test I am not invoking any of those calls.
Since Microsoft.Practices.EnterpriseLibrary.Data.dll is found, Enterprise Library continues with its default registration of the data access types, and I found that the following call is the cause of the huge memory spike:
\EntLib50Src\Blocks\Data\Src\Data\Configuration\DatabaseSyntheticConfigSettings.cs
private static DbProviderMapping GetDefaultMapping(string dbProviderName)
{
    // try to short circuit by default name
    if (DbProviderMapping.DefaultSqlProviderName.Equals(dbProviderName))
        return defaultSqlMapping;
    if (DbProviderMapping.DefaultOracleProviderName.Equals(dbProviderName))
        return defaultOracleMapping;

    // get the default based on type
    var providerFactory = DbProviderFactories.GetFactory(dbProviderName);
    if (SqlClientFactory.Instance == providerFactory)
        return defaultSqlMapping;
    if (OracleClientFactory.Instance == providerFactory)
        return defaultOracleMapping;

    return null;
}
The DbProviderFactories.GetFactory(dbProviderName) call, when dbProviderName is "Oracle.DataAccess.Client" (the ODP.NET provider), causes the huge memory spike.
So it looks like the reason for the huge memory spike was ODP.NET and the fact that it registers itself with DbProviderFactories.
It seems I cannot create a logger without registering everything present in the executing assembly's location. Ideally, I would like data access not to be registered unless I explicitly ask for it.

The fact that ODP.NET is being used as the underlying provider is the main reason for the memory spike, and GetDefaultMapping will return null for it anyway. The following change could be made:
private static DbProviderMapping GetDefaultMapping(string dbProviderName)
{
    // try to short circuit by default name
    if (DbProviderMapping.DefaultSqlProviderName.Equals(dbProviderName))
        return defaultSqlMapping;
    if (DbProviderMapping.DefaultOracleProviderName.Equals(dbProviderName))
        return defaultOracleMapping;

    if (dbProviderName != "Oracle.DataAccess.Client")
    {
        // get the default based on type
        var providerFactory = DbProviderFactories.GetFactory(dbProviderName);
        if (SqlClientFactory.Instance == providerFactory)
            return defaultSqlMapping;
        if (OracleClientFactory.Instance == providerFactory)
            return defaultOracleMapping;
    }

    return null;
}
This still doesn't explain why Oracle.DataAccess.Client uses so much memory for:
DbProviderFactories.GetFactory(dbProviderName);
nor why the data access block gets registered with the container in the first place.
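For anyone chasing the same spike, one way to see what DbProviderFactories.GetFactory has to load is to dump the provider factories registered in machine.config/app.config; a small diagnostic sketch (not a fix):
using System;
using System.Data;
using System.Data.Common;

internal static class ProviderDump
{
    private static void Main()
    {
        // Every row here comes from <system.data><DbProviderFactories>; the
        // Oracle.DataAccess.Client entry is the one EntLib ends up instantiating.
        foreach (DataRow row in DbProviderFactories.GetFactoryClasses().Rows)
        {
            Console.WriteLine("{0} -> {1}", row["InvariantName"], row["AssemblyQualifiedName"]);
        }
    }
}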

Related

Azure Logic App, Can't get data from Create File function

So I've noticed some strange behavior which I would like to share and see if anyone has had a similar problem.
We are using an on-prem solution where we pick up a file or an HTTP event request, map it to an outgoing XML XSD/schema, and then create the file on-prem later.
The problem was that the system where we save the file does not cooperate well with the Logic App; the Logic App sometimes fails because the system takes the file before the Logic App can finish writing the full content.
The system receiving the files only reads .xml files, so we thought we would first create the files under a temporary name, let the Logic App finish writing them, and then rename them.
This solution sounded quite simple before we actually started applying it to the Logic App.
If we take the File System connector, which has a Rename File action, and use the "Name" parameter from the on-prem Create File action, we get:
{
    "statusCode": 404,
    "message": "Resource not found"
}
We get a 404 saying the resource is not found, which complicates a lot of things. I've checked the privileges on the account, and they should not be an issue.
What we also tried is listing all files in the folder, adding a foreach with a condition, and then the Rename File action. This makes it work, but the Logic App does not cope well with receiving a lot of files at once with that solution.
So Rename File does work when it's inside a foreach loop and we extract the file names from a listing of the root folder or a normal folder.
But why does it not work when just using the Rename File action directly? Is this perhaps a bug in the Logic App Rename File action?
So after discussing with Microsoft support for Azure, they have actually confirmed that there is a bug with the "Create File" action.
It looks like all the data and information is actually lost during that action; the support technicians do not know why this happens, but they have had similar cases reported by other people.
I have not stumbled across any of those posts, but I will post how we solved the problem with a workaround.
FYI, the support team has escalated the case so that the Azure developers can look into it, because it's not just the "Name" tag that is lost from Create File; all of its valuable outputs are actually lost.
So first we initialize a variable and then actually set the variable's value in two steps before we create the file:
The name is set to a temp name plus a GUID.
The next step is creating the file with the temp name set in the "Set Variable Temp FileName" step.
On the Rename File action we use the path where we store the temp file and append \"FILENAME",
and add the "New Name" we want to use.
This proved to work, but it is a workaround; support confirmed that you should be able to just use Rename File after creating the file with a temp name and then change it to the desired name.
But since Create File does not send or pass along any information at all from its output list, we have to initialize variables to make it work.
If anyone has stumbled on the same problem, where the backend system reads the files before the Logic App has managed to finish creating them, and you need a workaround, this worked well for me.
Hope it helps!
We recently had the same issue, and the workaround of renaming the file also failed.
The cause seems to be that the Azure on-premises gateway creates a file (or renames a file), then releases its lock, before checking that the file exists. In the gap between releasing the lock and checking that the file exists, the file may be picked up (deleted), causing Logic Apps to think the step failed (reporting a 404 error), and hence the confusion.
Our workaround was to create a Windows service which we host on the file servers (so it can respond to file changes before anything else on the network). This service has a configuration file which accepts a list of paths and file filters, and it uses a FileSystemWatcher to monitor for new or renamed files. When it detects a match, it takes out a read lock on the file. This ensures it's not blocked by anything writing to the file (i.e. it doesn't have to wait for the gateway's write action to complete before obtaining its own lock), but whilst our service holds its lock the file can't be deleted, so the consumer can't remove the file, buying time for the gateway to perform its post-write read and report success. Our service releases its own lock after a defined period (we've gone with 30 seconds, though you could likely get away with much less). At that point, the consumer can successfully consume the file.
Basic code for the file watch & locking logic below:
using System;
using System.IO;
using System.Diagnostics;
using System.Threading.Tasks;

namespace AzureFileGatewayHelper
{
    public class Interceptor : IDisposable
    {
        object lockable = new object();
        bool disposed = false;
        readonly FileSystemWatcher watcher;
        readonly int lockTimeInMS;

        public Interceptor(string path, string filter, int lockTimeInSeconds)
        {
            lockTimeInMS = lockTimeInSeconds * 1000;
            watcher = new FileSystemWatcher();
            watcher.Path = path;
            watcher.Filter = filter;
            watcher.NotifyFilter = NotifyFilters.LastAccess
                                 | NotifyFilters.LastWrite
                                 | NotifyFilters.FileName
                                 | NotifyFilters.DirectoryName;
            watcher.Created += OnIntercept;
            watcher.Renamed += OnIntercept;
        }

        public Interceptor(InterceptorConfigElement config) : this(config.Path, config.Filter, config.TimeToLockInSeconds)
        {
            Debug.WriteLine($"Loaded config {config.Key}: Path: '{config.Path}'; Filter: '{config.Filter}'; LockTime: '{config.TimeToLockInSeconds}'.");
        }

        public void Start()
        {
            watcher.EnableRaisingEvents = true;
        }

        public void Stop()
        {
            if (watcher != null)
                watcher.EnableRaisingEvents = false;
        }

        private async void OnIntercept(object source, FileSystemEventArgs e)
        {
            // FileShare.ReadWrite lets the gateway keep writing, but the file
            // cannot be deleted while we hold this handle.
            using (var fs = new FileStream(e.FullPath, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
            {
                Debug.WriteLine($"Locked: {e.FullPath} {e.ChangeType}");
                await Task.Delay(lockTimeInMS);
            }
            Debug.WriteLine($"Unlocked {e.FullPath} {e.ChangeType}");
        }

        public void Dispose()
        {
            if (disposed) return;
            lock (lockable)
            {
                if (disposed) return;
                Stop();
                watcher?.Dispose();
                disposed = true;
            }
        }
    }
}
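A minimal host for the class above might look like this (the path and filter are placeholders for your environment):
using System;

internal static class Program
{
    private static void Main()
    {
        // Watch the gateway's target folder and hold each new .xml file for 30 seconds.
        using (var interceptor = new AzureFileGatewayHelper.Interceptor(@"\\fileserver\inbound", "*.xml", 30))
        {
            interceptor.Start();
            Console.WriteLine("Watching for files. Press Enter to stop.");
            Console.ReadLine();
        }
    }
}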

Azure Storage Tables - Update Condition Not Satisfied

I'm getting a random Exception when I try to update an entity on a storage table. The exception I get is
System.Data.Services.Client.DataServiceRequestException: An error occurred while processing this request. ---> System.Data.Services.Client.DataServiceClientException: {"odata.error":{"code":"UpdateConditionNotSatisfied","message":{"lang":"en-US","value":"The update condition specified in the request was not satisfied.\nRequestId:2a205f10-0002-013b-028d-0bbec8000000\nTime:2015-10-20T23:17:16.5436755Z"}}} ---
I know that this might be a concurrency issue, but the thing is that there's no other process accessing that entity.
From time to time I get dozens of these exceptions; after I restart the server, it starts working fine again.
public static class StorageHelper
{
    static TableServiceContext tableContext;
    static CloudStorageAccount storageAccount;
    static CloudTableClient CloudClient;

    static StorageHelper()
    {
        storageAccount = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("StorageConnectionString"));
        CloudClient = storageAccount.CreateCloudTableClient();
        tableContext = CloudClient.GetTableServiceContext();
        tableContext.IgnoreResourceNotFoundException = true;
    }

    public static void Save(int myId, string newProperty, string myPartitionKey, string myRowKey)
    {
        var entity = (from j in tableContext.CreateQuery<MyEntity>("MyTable")
                      where j.PartitionKey == myPartitionKey
                      select j).FirstOrDefault();

        if (entity != null)
        {
            entity.MyProperty = newProperty;
            tableContext.UpdateObject(entity);
            tableContext.SaveChanges();
        }
        else
        {
            entity = new MyEntity();
            entity.PartitionKey = myPartitionKey;
            entity.RowKey = myRowKey;
            entity.MyProperty = newProperty;
            tableContext.AddObject("MyTable", entity);
            tableContext.SaveChanges();
        }
    }
}
The code you've posted uses the very old table layer, which is now obsolete. We strongly recommend you update to a newer version of the storage library and use the new table layer. See this StackOverflow question for more information. Also note that if you're using a very old version of the storage library, it will eventually stop working, as the service version it targets is going to be deprecated service side.
We do not recommend that customers reuse TableServiceContext objects as has been done here. They contain a variety of tracking state that can cause performance issues as well as other adverse effects. These kinds of limitations are part of the reason we recommend (as described above) moving to the newer table layer. See the how-to for more information.
On table entity update operations you must send an If-Match header indicating an ETag. The library will set this for you if you set the entity's ETag value. To update regardless of the ETag of the entity on the service, use "*".
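As a rough sketch of what that looks like in the newer table layer (assuming MyEntity derives from TableEntity, reusing the names from the question, and using the Microsoft.WindowsAzure.Storage and Microsoft.WindowsAzure.Storage.Table namespaces):
var account = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("StorageConnectionString"));
var table = account.CreateCloudTableClient().GetTableReference("MyTable");

var retrieved = table.Execute(TableOperation.Retrieve<MyEntity>(myPartitionKey, myRowKey));
var entity = retrieved.Result as MyEntity;

if (entity != null)
{
    entity.MyProperty = newProperty;
    entity.ETag = "*";                                  // overwrite regardless of the service-side ETag
    table.Execute(TableOperation.Replace(entity));
}
else
{
    table.Execute(TableOperation.Insert(new MyEntity
    {
        PartitionKey = myPartitionKey,
        RowKey = myRowKey,
        MyProperty = newProperty
    }));
}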
I suggest you consider using the Transient Fault Handling Application Block from Microsoft's Enterprise Library to retry when your application encounters such transient faults in Azure, to minimize having to restart the server every time the same exception occurs.
https://msdn.microsoft.com/en-us/library/hh680934(v=pandp.50).aspx
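A rough sketch of what that could look like with the block's RetryPolicy (the detection strategy below is a hand-rolled placeholder, the block also ships ready-made strategies for Azure Storage, and the namespace may differ between block versions):
using System;
using Microsoft.Practices.TransientFaultHandling;

// Treats the "UpdateConditionNotSatisfied" error as retryable for this sketch.
public class UpdateConditionDetectionStrategy : ITransientErrorDetectionStrategy
{
    public bool IsTransient(Exception ex)
    {
        return ex != null && ex.ToString().Contains("UpdateConditionNotSatisfied");
    }
}

public static class RetryingSave
{
    public static void Save(int myId, string newProperty, string myPartitionKey, string myRowKey)
    {
        var retryPolicy = new RetryPolicy<UpdateConditionDetectionStrategy>(
            new ExponentialBackoff(5, TimeSpan.FromSeconds(1), TimeSpan.FromSeconds(30), TimeSpan.FromSeconds(2)));

        // Re-runs the original Save if the update condition error surfaces.
        retryPolicy.ExecuteAction(() => StorageHelper.Save(myId, newProperty, myPartitionKey, myRowKey));
    }
}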
While updating your entity, set ETag = "*".
Your modified code should look something like this -
if (entity != null)
{
    entity.MyProperty = "newProperty";
    // Re-attach with a "*" ETag so the update is sent unconditionally (If-Match: *).
    tableContext.Detach(entity);
    tableContext.AttachTo("MyTable", entity, "*");
    tableContext.UpdateObject(entity);
    tableContext.SaveChanges();
}

GCHandle, AppDomains managed code and 3rd party dll

I have been looking at many threads about the exception "cannot pass a GCHandle across AppDomains", but I still don't get it.
I'm working with an RFID reader which is driven by a DLL. I don't have the source code for this DLL, only a sample showing how to use it.
The sample works great, but I have to copy some of its code into another project to add the reader to the Microsoft BizTalk middleware.
The problem is that the Microsoft BizTalk process runs in another AppDomain. The reader raises events when a tag is read, but when I run it under Microsoft BizTalk I get this annoying exception.
I can't see any solution for how to make it work.
Here is some code that may be interesting:
// Let's connect the result handlers.
// The reader calls a command-specific result handler when a command is done and the answer is ready to send.
// So let's tell the reader which functions should be called when a result is ready.

// result handler for reading EPCs synchronously
Reader.KSRWSetResultHandlerSyncGetEPCs(ResultHandlerSyncGetEPCs);

[...]

var readerErrorCode = Reader.KSRWSyncGetEPCs();
if (readerErrorCode == tKSRWReaderErrorCode.KSRW_REC_NoError)
{
    // No error occurred while sending the command to the reader. Let's wait until the result handler is called.
    if (ResultHandlerEvent.WaitOne(TimeSpan.FromSeconds(10)))
    {
        // The reader's work is done and the result handler was called. Let's check the result flag to make sure everything is ok.
        if (_readerResultFlag == tKSRWResultFlag.KSRW_RF_NoError)
        {
            // The command was successfully processed by the reader.
            // We'll display the result in the result handler.
        }
        else
        {
            // The command can't be processed by the reader. To know why, check the result flag.
            logger.error("Command \"KSRWSyncGetEPCs\" returns with error {0}", _readerResultFlag);
        }
    }
    else
    {
        // We got no answer from the reader within 10 seconds.
        logger.error("Command \"KSRWSyncGetEPCs\" timed out");
    }
}
[...]
private static void ResultHandlerSyncGetEPCs(object sender, tKSRWResultFlag resultFlag, tKSRWExtendedResultFlag extendedResultFlag, tKSRWEPCListEntry[] epcList)
{
    if (Reader == sender)
    {
        // Let's store the result flag in a global variable to get access from everywhere.
        _readerResultFlag = resultFlag;

        // Display all available epcs in the antenna field.
        Console.ForegroundColor = ConsoleColor.White;
        foreach (var resultListEntry in epcList)
        {
            handleTagEvent(resultListEntry);
        }

        // Let's set the event so that the calling process knows the command was processed by reader and the result is ready to get processed.
        ResultHandlerEvent.Set();
    }
}
You are having a problem with the gcroot<> helper class. It is used in the code that nobody can see, inside that DLL. It is frequently used by C++ code that was designed to interop with managed code: gcroot<> stores a reference to a managed object. The class uses the GCHandle type to add the reference. The GCHandle.ToIntPtr() method returns a pointer that the C++ code can store. The operation that fails is GCHandle.FromIntPtr(), used by the C++ code to recover the reference to the object.
There are two basic explanations for getting this exception:
It can be accurate. This will happen when you initialize the code in the DLL from one AppDomain and use it in another. It isn't clear from the snippet where the Reader class object gets initialized, so there are non-zero odds that this is the explanation. Be sure to keep the initialization close to the code that uses the Reader class.
It can be caused by another bug, present in the C++ code inside the DLL. Unmanaged code often suffers from pointer bugs, the kind that can accidentally overwrite memory. If that happens to the field that stores the gcroot<> object, then nothing goes wrong for a while, until the code tries to recover the object reference again. At that point the CLR notices that the corrupted pointer value no longer matches an actual object handle and generates this exception. This is certainly the harder kind of bug to solve, since it happens in code you cannot fix, and giving the programmer who worked on it a repro for the bug is very difficult; such memory corruption problems never repro well.
Chase bullet #1 first. There are decent odds that BizTalk runs your C# code in a separate AppDomain, and that the DLL gets loaded too soon, before or while that AppDomain is created; something you can see with SysInternals' ProcMon. Create a repro of this by writing a little test program that creates an AppDomain and runs the test code, as sketched below. If that reproduces the crash then you'll have a very good way to demonstrate the issue to the RFID vendor and some hope that they'll use it and work on a fix.
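A minimal repro along those lines could look like this (the type and method names are placeholders; the Run body is where you would put the same initialization and KSRWSyncGetEPCs sequence as the vendor sample):
using System;

// Must be MarshalByRefObject so it can be created and called across the AppDomain boundary.
public class ReaderProbe : MarshalByRefObject
{
    public void Run()
    {
        // Initialize the reader DLL and run the KSRWSyncGetEPCs sequence here,
        // exactly as in the vendor sample.
    }
}

internal static class Program
{
    private static void Main()
    {
        var domain = AppDomain.CreateDomain("ReaderTestDomain");
        try
        {
            var probe = (ReaderProbe)domain.CreateInstanceAndUnwrap(
                typeof(ReaderProbe).Assembly.FullName,
                typeof(ReaderProbe).FullName);

            // If the GCHandle exception reproduces here, you have a standalone repro to show the vendor.
            probe.Run();
        }
        finally
        {
            AppDomain.Unload(domain);
        }
    }
}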
Having a good working relationship with the RFID reader vendor is going to be very important for getting to a resolution. If you don't have one, that is a problem in itself, and always a good reason to go shopping elsewhere.

Azure caching and entity framework deserialization issue

I have a web project deployed in Azure using co-located caching, with two instances of this web role.
I am using Entity Framework 5, and upon fetching some entities from the DB, I cache them using the co-located cache.
My entities are defined in a class library called Drt.BusinessLayer.Entities.
However, when I visit my web app, I get the error:
The deserializer cannot load the type to deserialize because type 'System.Data.Entity.DynamicProxies.Country_4C17F5A60A033813EC420C752F1026C02FA5FC07D491A3190ED09E0B7509DD85' could not be found in assembly 'EntityFrameworkDynamicProxies-Drt.BusinessLayer.Entities, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null'. Check that the type being serialized has the same contract as the type being deserialized and the same assembly is used.
Sometimes I also get this:
Assembly 'EntityFrameworkDynamicProxies-Drt.BusinessLayer.Entities, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null' is not found.
It appears that there is an error getting the entities out of the cache and deserialized. Since there are two instances of my web role, instance1 might place some entity objects in the cache and instance2 might get them out. I was expecting this to work, but I am unsure why I am getting this error.
Can anyone help or advise?
I ran into the same issue. At least in my case, the problem was the DynamicProxies with which the EF wraps all the model classes. In other words, you might think you're retrieving a Country class, but under the hood, EF is actually dynamically generating a class that's called something like Country_4C17F5A60A033813EC420C752F1026C02FA5FC07D491A3190ED09E0B7509DD85. The last part of the name is obviously generated at run-time, and it can be expected to remain static throughout the life of your application - but (and this is the key) only on the same instance of the app domain. If you've got two machines accessing the same out-of-process cache, one will be storing an object of the type Country_4C17F5A60A033813EC420C752F1026C02FA5FC07D491A3190ED09E0B7509DD85, but that type simply won't exist on the other machine. Its dynamic Country class will be something like Country_JF7ASDF8ASDF8ADSF88989ASDF8778802348JKOJASDLKJQAWPEORIU7879243AS, and so there won't be any type into which it can deserialize the serialized object. The same thing will happen if you restart the app domain your web app is running in.
I'm sure the big brains at MS could come up with a better solution, but the one I've been using is to do a "shallow clone" of my EF objects before I cache them. The C# method I'm using looks like this:
public static class TypeHelper
{
    public static T ShallowClone<T>(this T obj) where T : class
    {
        if (obj == null) return null;

        var newObj = Activator.CreateInstance<T>();

        var fields = typeof(T).GetFields();
        foreach (var field in fields)
        {
            if (field.IsPublic && (field.FieldType.IsValueType || field.FieldType == typeof(string)))
            {
                field.SetValue(newObj, field.GetValue(obj));
            }
        }

        var properties = typeof(T).GetProperties();
        foreach (var property in properties)
        {
            if ((property.CanRead && property.CanWrite) &&
                (property.PropertyType.IsValueType || property.PropertyType == typeof(string)))
            {
                property.SetValue(newObj, property.GetValue(obj, null), null);
            }
        }

        return newObj;
    }
}
This takes care of two problems at once: (1) It ensures that only the EF object I'm specifically interested in gets cached, and not the entire object graph - sometimes huge - to which it's attached; and (2) The object that it caches is of a common type, and not the dynamically generated type: Country and not Country_4C17F5A60A033813EC420C752F1026C02FA5FC07D491A3190ED09E0B7509DD85.
It's certainly not perfect, but it does seem a reasonable workaround for many scenarios.
It would in fact be nice, though, if the good folks at MS were to come up with a way to cache EF objects without this.
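For example, the caching call then becomes something like this (assuming a DataCache instance named cache and the Country entity from the error message; the names here are illustrative):
// Cache the detached, proxy-free copy instead of the EF dynamic proxy.
var country = dbContext.Countries.Find(countryId);
cache.Put("country:" + countryId, country.ShallowClone());

// Later, possibly on the other role instance or after an app domain recycle:
var cached = (Country)cache.Get("country:" + countryId);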
I'm not familiar with azure-caching in particular, but I'm guessing you need to hydrate your entities completely before passing them to anything that does serialization, which is something a distributed or out-of-process cache would do.
So, just call .Include() on all relationships when you're fetching an entity, or disable lazy loading and proxy creation, and you should be fine. For example:
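A minimal sketch of both options with an EF 5 DbContext (MyDbContext, Countries, and Regions are placeholder names):
using (var db = new MyDbContext())
{
    // Option 1: don't create dynamic proxies at all (this also disables lazy loading),
    // so what gets cached is a plain Drt.BusinessLayer.Entities type rather than a proxy.
    db.Configuration.ProxyCreationEnabled = false;
    db.Configuration.LazyLoadingEnabled = false;

    // Option 2: eagerly load the relationships you need before caching.
    // (The lambda Include() overload needs a "using System.Data.Entity;" directive.)
    var countries = db.Countries.Include(c => c.Regions).ToList();

    cache.Put("countries", countries);
}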

Accessing global variable in multithreaded Tomcat server

Edit: I've figured out that the constructor for the singleton is getting called multiple times, so it appears the classes are getting loaded more than once by separate class loaders. How can I make a global singleton in Tomcat? I've been googling, but no luck so far.
I have a singleton object that I construct like this:
private static volatile KeyMapper mapper = null;

public static KeyMapper getMapper()
{
    if (mapper == null)
    {
        synchronized (Utils.class)
        {
            if (mapper == null)
            {
                mapper = new LocalMemoryMapper();
            }
        }
    }
    return mapper;
}
The class KeyMapper is basically a synchronized wrapper around a HashMap with only two functions, one to add a mapping and one to remove a mapping. When running in Tomcat 6.24 on my 32-bit Windows machine everything works fine. However, when running on a 64-bit Linux machine (CentOS 5.4 with OpenJDK 1.6.0-b09), I add one mapping and print out the size of the HashMap used by KeyMapper to verify the mapping got added (i.e. verify size == 1). Then I try to retrieve the mapping with another request and I keep getting null, and when I check the size of the HashMap it is 0. I'm confident the mapping isn't accidentally being removed, since I've commented out all calls to remove (and I don't use clear or any other mutators, just get and put).
The requests go through Tomcat 6.24 (configured to use 200 threads with a minimum of 4 threads), and I passed -Xnoclassgc to the JVM to ensure the class isn't inadvertently getting garbage collected (the JVM is also running in -server mode). I also added a finalize method to KeyMapper that prints to stderr if it ever gets garbage collected, to verify that it wasn't.
I'm at my wits' end, and I can't figure out why one minute the entry in the HashMap is there and the next it isn't :(
Another wild guess: is it possible the two requests are being served by different copies of your web app? Each would be in its own ClassLoader and thus have a different copy of the singleton.
Have you tried removing the outer check
if(mapper == null)
{
so that you always hit the synchronized block? It's subtle stuff, but possibly you're hitting the double-checked locking idiom problem, described here and in many other articles.
I must admit I've never seen the problem actually bite someone before, but this sure sounds like it.
With this solution, the JVM guarantees that there is only one mapper and that it is initialized before use.
public enum KeyMapperFactory {
    ;

    private static KeyMapper mapper = new LocalMemoryMapper();

    public static KeyMapper getMapper() {
        return mapper;
    }
}
This may not be the cause of your problem, but you are using the faulty double-checked locking idiom. See this:
http://en.wikipedia.org/wiki/Double-checked_locking#Usage_in_Java
I found a rather poor fix: I exported my code as a JAR and put it in $TOMCAT/lib, and that worked. This is clearly a class loader issue.
Edit: Figured out the solution
OK, I finally figured out the problem.
I had made my application the default application for the server by adding a <Context> entry to server.xml and setting its path to "". However, I was accessing it through the URL http://localhost/somepage.jsp for some things, but also through the URL http://localhost/appname/anotherpage.jsp for other things.
Once I changed all the URLs to use http://localhost/ instead of http://localhost/appname, the problem was fixed.
