I'm trying to retrieve as much information as I can about .NET caching.
With the Cache object we can read three properties:
Count
EffectivePercentagePhysicalMemoryLimit
EffectivePrivateBytesLimit
But what about all the rest?
Where can I get information such as "available memory on the server", "used cache memory", and so on?
There was an old project on ASP Alliance called Cache Manager, but it's no longer available and all I could find was an image of it, which displays exactly this kind of information:
I was looking at the docs and reading about the new .NET 4 types in System.Runtime.Caching, like CacheMemoryLimit and PhysicalMemoryLimit, but I can't find real examples of how to use them.
Does anyone have a wrapper for cache info around, or any idea how to use these new members?
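For illustration, here is a minimal sketch, assuming the .NET 4 System.Runtime.Caching API and the standard Windows "Memory" performance counter; note that MemoryCache.Default is a separate cache from HttpRuntime.Cache:
using System;
using System.Diagnostics;
using System.Runtime.Caching;

public static class CacheInfo
{
    // Sketch only: reads the configured limits from MemoryCache and the
    // machine-wide available memory from a performance counter.
    public static void Dump()
    {
        MemoryCache cache = MemoryCache.Default;

        Console.WriteLine("CacheMemoryLimit (bytes): {0}", cache.CacheMemoryLimit);
        Console.WriteLine("PhysicalMemoryLimit (%): {0}", cache.PhysicalMemoryLimit);
        Console.WriteLine("PollingInterval: {0}", cache.PollingInterval);
        Console.WriteLine("Item count: {0}", cache.GetCount());

        // Counter names assume an English Windows installation.
        using (var available = new PerformanceCounter("Memory", "Available MBytes"))
        {
            Console.WriteLine("Available memory on the server (MB): {0}", available.NextValue());
        }
    }
}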
My current cache implementation is:
public class InMemoryCache : ICacheService
{
    private int minutes = 15;

    public T Get<T>(string cacheID, Func<T> getItemCallback) where T : class
    {
        T item = HttpRuntime.Cache.Get(cacheID) as T;
        if (item == null)
        {
            item = getItemCallback();
            HttpRuntime.Cache.Insert(
                cacheID,
                item,
                null,
                DateTime.Now.AddMinutes(minutes),
                System.Web.Caching.Cache.NoSlidingExpiration);
        }
        return item;
    }

    public void Clear()
    {
        IDictionaryEnumerator enumerator = HttpRuntime.Cache.GetEnumerator();
        while (enumerator.MoveNext())
            HttpRuntime.Cache.Remove(enumerator.Key.ToString());
    }

    public Dictionary<string, string> Stats()
    {
        var cache = HttpRuntime.Cache;
        var r = new Dictionary<string, string>();
        r.Add("Count", cache.Count.ToString());
        r.Add("EffectivePercentagePhysicalMemoryLimit", cache.EffectivePercentagePhysicalMemoryLimit.ToString());
        r.Add("EffectivePrivateBytesLimit", cache.EffectivePrivateBytesLimit.ToString());
        return r;
    }
}
Take a look at this:
https://www.youtube.com/watch?v=Dz_7hukyejQ
This is based on a 100% managed custom memory manager that stores cached items in byte[] segments of roughly 256 MB each. This makes it possible to store hundreds of millions of objects without slowing anything down, because the GC does not see the "objects": they live inside the byte[] buffers.
The video shows the cache in action and how you can see how many objects, pages, priorities, etc. it holds.
Here is the code:
https://github.com/aumcode/nfx/tree/master/Source/NFX/ApplicationModel/Pile
Namely the main interface of cache:
https://github.com/aumcode/nfx/blob/master/Source/NFX/ApplicationModel/Pile/ICache.cs
You can have named tables, age-based or absolute-timestamp expiration, memory limits, and entry priorities.
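To illustrate the general idea only (this is a toy sketch of the technique, not the NFX Pile API): serialize each cached value into a slice of a pre-allocated byte[] segment and keep a small index of offsets, so the GC sees a handful of large arrays rather than millions of individual object references.
using System;
using System.Collections.Generic;
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;

// Toy illustration of the "big byte[] segment" idea; no bounds checking,
// eviction, or thread safety, and values must be [Serializable].
public class ByteSegmentCache
{
    private readonly byte[] _segment = new byte[256 * 1024 * 1024]; // one 256 MB segment
    private readonly Dictionary<string, Tuple<int, int>> _index =
        new Dictionary<string, Tuple<int, int>>();                  // key -> (offset, length)
    private int _free;                                              // next free offset

    public void Put(string key, object value)
    {
        using (var ms = new MemoryStream())
        {
            new BinaryFormatter().Serialize(ms, value);
            byte[] payload = ms.ToArray();
            Buffer.BlockCopy(payload, 0, _segment, _free, payload.Length);
            _index[key] = Tuple.Create(_free, payload.Length);
            _free += payload.Length;
        }
    }

    public object Get(string key)
    {
        Tuple<int, int> slot;
        if (!_index.TryGetValue(key, out slot)) return null;
        using (var ms = new MemoryStream(_segment, slot.Item1, slot.Item2))
        {
            return new BinaryFormatter().Deserialize(ms);
        }
    }
}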
I've installed MvcSiteMapProvider for MVC 5 and just used everything out of the box. It works fine. Now I want to implement role-based menu trimming, so assume my controller is:
public class Home : Controller
{
    [Authorize(Roles = "Admin")]
    public ActionResult Index()
    {
        return View();
    }
}
Now basically only users in the Admin role can see that menu item. Perfect, works fine.
To implement this I also added this line to my web.config:
<add key="MvcSiteMapProvider_SecurityTrimmingEnabled" value="true" />
The problem is that it works but it's slow. It takes about 7 seconds for the page to load. If I remove the web.config line, effectively turning off role-based menu trimming, the page loads in about 300 ms. Something is wrong here.
Any ideas why my menu trimming based on roles is slow? I haven't done any customizations.
The security trimming feature relies on creating a controller instance for every node in order to determine if the current user context has access.
The most likely cause of this slowness is that your controllers (or their base class) have too much heavy processing happening in the constructor.
public class HomeController : Controller
{
    public HomeController()
    {
        // Lots of heavy processing
        System.Threading.Thread.Sleep(300);
    }
}
The above example adds 300 ms to the page load time for every node that represents an action method of HomeController; with a couple of dozen such nodes in the sitemap, that alone would account for roughly the 7 seconds you are seeing. If your other controllers also do heavy processing during instantiation, they will add further time to each page load.
When following DI best practices, this is not an issue because heavy processing takes place in external services after the controller instance is created.
public interface IHeavyProcessingService
{
    IProcessingResult DoSomethingExpensive();
}

public class HeavyProcessingService : IHeavyProcessingService
{
    public HeavyProcessingService()
    {
    }

    public IProcessingResult DoSomethingExpensive()
    {
        // Lots of heavy processing
        System.Threading.Thread.Sleep(300);
        return null; // placeholder result for the example
    }
}
public class HomeController : Controller
{
    private readonly IHeavyProcessingService heavyProcessingService;

    // The constructor does no heavy processing. The expensive work is
    // deferred to HeavyProcessingService and happens after the controller
    // instance is created. The only thing happening here is assignment
    // of dependencies.
    public HomeController(IHeavyProcessingService heavyProcessingService)
    {
        if (heavyProcessingService == null)
            throw new ArgumentNullException("heavyProcessingService");

        this.heavyProcessingService = heavyProcessingService;
    }

    public ActionResult Index()
    {
        var result = this.heavyProcessingService.DoSomethingExpensive();
        // Do something with the result of the heavy processing
        return View();
    }

    public ActionResult About()
    {
        return View();
    }

    public ActionResult Contact()
    {
        return View();
    }
}
Notice that in the above example no heavy processing happens in the constructor. This means that creating an instance of HomeController is very cheap. It also means that action methods that don't need the heavy processing (such as About() and Contact() in the example) don't take the hit required by Index().
If you are not using DI, MVC still creates a new controller instance for each request (controller instances are never shared between users or action methods). In that case the cost is just less noticeable, because only one instance is created per request. Basically, MvcSiteMapProvider is slowing down because of a pre-existing issue in your application (which you can now fix).
Even if you are not using DI, it is still a best practice to defer heavy processing until after the controller instance is created.
public class HomeController : Controller
{
    private readonly IHeavyProcessingService heavyProcessingService;

    public HomeController()
    {
        this.heavyProcessingService = new HeavyProcessingService();
    }

    public ActionResult Index()
    {
        var result = this.heavyProcessingService.DoSomethingExpensive();
        // Do something with the result of the heavy processing
        return View();
    }
}
But if moving the heavy processing into external services is not an option in your application, you can still defer it until it's needed by moving it into another method, so that creating controller instances stays cheap.
public class HomeController : Controller
{
    public HomeController()
    {
    }

    private IProcessingResult DoSomethingExpensive()
    {
        // Lots of heavy processing
        System.Threading.Thread.Sleep(300);
        return null; // placeholder result for the example
    }

    public ActionResult Index()
    {
        var result = this.DoSomethingExpensive();
        // Do something with the result of the heavy processing
        return View();
    }
}
There is also a bug posted, "Route values not preserved correctly in v4?", but it looks like it was fixed for the next v4 release.
Another workaround for this problem is caching; here is a related article:
MVC SiteMap provider cache
When I bind a "back button" to the router in ReactiveUI, my ViewModel is no longer garbage collected (nor is my view). Is this a bug, or am I doing something dumb?
Here is my MeetingPageViewModel:
public class MeetingPageViewModel : ReactiveObject, IRoutableViewModel
{
    public MeetingPageViewModel(IScreen hs, IMeetingRef mRef)
    {
        HostScreen = hs;
    }

    public IScreen HostScreen { get; private set; }

    public string UrlPathSegment
    {
        get { return "/meeting"; }
    }
}
Here is my MeetingPage.xaml.cs file:
public sealed partial class MeetingPage : Page, IViewFor<MeetingPageViewModel>
{
    public MeetingPage()
    {
        this.InitializeComponent();

        // ** Comment this out and both the View and VM will get garbage collected.
        this.BindCommand(ViewModel, x => x.HostScreen.Router.NavigateBack, y => y.backButton);

        // Test that goes back right away to make sure the Execute
        // wasn't what was causing the problem.
        this.Loaded += (s, a) => ViewModel.HostScreen.Router.NavigateBack.Execute(null);
    }

    public MeetingPageViewModel ViewModel
    {
        get { return (MeetingPageViewModel)GetValue(ViewModelProperty); }
        set { SetValue(ViewModelProperty, value); }
    }

    public static readonly DependencyProperty ViewModelProperty =
        DependencyProperty.Register("ViewModel", typeof(MeetingPageViewModel), typeof(MeetingPage), new PropertyMetadata(null));

    object IViewFor.ViewModel
    {
        get { return ViewModel; }
        set { ViewModel = (MeetingPageViewModel)value; }
    }
}
I then run the app and, to see what is going on, use the memory analyzer in VS 2013 Pro. As a test I also force a GC collection of all generations and wait for finalizers. With the BindCommand line left in, when all is done there are three instances of MeetingPage and MeetingPageViewModel. If I remove the BindCommand line, there are no instances.
I was under the impression that these would go away on their own. Is the problem that the HostScreen object, or its Router, lives longer than this VM and therefore pins it down?
If so, what is the recommended way of hooking up the back button? Using Splat and DI? Many thanks!
Following up on the idea I had at the end, I can solve this in the following way. In my App.xaml.cs, I make sure to register the RoutingState with the dependency injector:
var r = new RoutingState();
Locator.CurrentMutable.RegisterConstant(r, typeof(RoutingState));
Then, in the constructor of each view (the .xaml.cs file) that has a back button in my Windows Store app, I no longer use the code above, but replace it with:
var router = Locator.Current.GetService<RoutingState>();
backButton.Click += (s, args) => router.NavigateBack.Execute(null);
After doing that I can visit the page as many times as I want and never do I see the instances remaining in the analyzer.
I'll wait to mark this as an answer to give real experts some time to suggest another (better?) approach.
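Another option that may work (a hedged sketch only; it assumes BindCommand returns an IDisposable, as it does in the ReactiveUI builds I've looked at) is to keep the binding's IDisposable and dispose it when the view unloads, so the long-lived Router no longer roots the view and ViewModel:
public sealed partial class MeetingPage : Page, IViewFor<MeetingPageViewModel>
{
    private IDisposable _backBinding;

    public MeetingPage()
    {
        this.InitializeComponent();

        // Keep the binding so it can be torn down later.
        _backBinding = this.BindCommand(ViewModel,
            x => x.HostScreen.Router.NavigateBack,
            y => y.backButton);

        this.Unloaded += (s, e) =>
        {
            if (_backBinding != null)
            {
                _backBinding.Dispose();
                _backBinding = null;
            }
        };
    }

    // ... ViewModel dependency property members unchanged from above ...
}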
I am using the latest version of AutoMapper (v3.0.0.0-ci1036), and when it maps an object containing binary data it uses crazy amounts of memory (200 MB for a 10 MB file). Here is an example of such a "file" being converted:
class Program
{
    static void Main(string[] args)
    {
        convertObject();
    }

    private static void convertObject()
    {
        var rnd = new Random();
        var fileContents = new Byte[1024 * 1024 * 10];
        rnd.NextBytes(fileContents);

        var attachment = new Attachment { Content = fileContents };

        Mapper.CreateMap<Attachment, AttachmentDTO>();

        Console.WriteLine("Press enter to convert");
        Console.ReadLine();

        var dto = Mapper.Map<Attachment, AttachmentDTO>(attachment);

        Console.WriteLine(dto.Content.Length + " bytes");
        Console.ReadLine();
    }
}

public class Attachment
{
    public byte[] Content { get; set; }
}

public class AttachmentDTO
{
    public byte[] Content { get; set; }
}
Is there something wrong with my code, or do I have to stop using automapper for objects that contain binary data?
I am not sure, but the reason might be the following:
Your C# application runs on the .NET runtime, which reclaims heap memory when possible using the garbage collector.
This has the side effect of fragmenting the heap. So, for example, you might have 100 MB allocated with 40% of it free for new variables, but fragmented into smaller chunks of at most 5 MB each.
In that situation, when you allocate a new 10 MB array, the .NET runtime has no contiguous room for it, even though 40 MB is free.
To solve the problem it grows your heap to 110 MB (in the best case) and allocates the new 10 MB for your byte array there.
Also see:
http://msdn.microsoft.com/en-us/magazine/dd882521.aspx#id0400035
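If the extra memory turns out to come from AutoMapper rebuilding the byte[] during the map rather than from fragmentation alone, one hedged workaround worth trying (assuming AutoMapper 3.x's static configuration API) is to register a pass-through converter for byte[], so the array reference is assigned as-is instead of being reconstructed:
// Hedged workaround sketch: a pass-through type converter for byte[].
// Whether this avoids the copy in your AutoMapper build is an assumption.
Mapper.CreateMap<byte[], byte[]>().ConvertUsing(bytes => bytes);
Mapper.CreateMap<Attachment, AttachmentDTO>();

var dto = Mapper.Map<Attachment, AttachmentDTO>(attachment);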
I'm new to WCF RIA Services, and have been working with LightSwitch for 4 or so months now.
I created a generic screen to be used for editing lookup tables all over my LightSwitch application, mostly to learn how to create a generic screen that can be used with different entity sets on a dynamic basis.
The screen is pretty simple:
It is opened with arguments similar to this:
Application.ShowLookupTypesList("StatusTypes", "StatusTypeId");
These correspond to the entity set (and its key property) for the lookup table in the database.
Here's my WCF RIA service code:
using System.Data.Objects.DataClasses;
using System.Diagnostics;
using System.Reflection;
using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;
using System.Data;
using System.Linq;
using System.ServiceModel.DomainServices.EntityFramework;
using System.ServiceModel.DomainServices.Server;

namespace WCF_RIA_Project
{
    public class LookupType
    {
        [Key]
        public int TypeId { get; set; }
        public string Name { get; set; }
    }

    public static class EntityInfo
    {
        public static Type Type;
        public static PropertyInfo Key;
        public static PropertyInfo Set;
    }

    public class WCF_RIA_Service : LinqToEntitiesDomainService<WCSEntities>
    {
        public IQueryable<LookupType> GetLookupTypesByEntitySet(string EntitySetName, string KeyName)
        {
            EntityInfo.Set = ObjectContext.GetType().GetProperty(EntitySetName);
            EntityInfo.Type = EntityInfo.Set.PropertyType.GetGenericArguments().First();
            EntityInfo.Key = EntityInfo.Type.GetProperty(KeyName);
            return GetTypes();
        }

        [Query(IsDefault = true)]
        public IQueryable<LookupType> GetTypes()
        {
            var set = (IEnumerable<EntityObject>)EntityInfo.Set.GetValue(ObjectContext, null);
            var types = from e in set
                        select new LookupType
                        {
                            TypeId = (int)EntityInfo.Key.GetValue(e, null),
                            Name = (string)EntityInfo.Type.GetProperty("Name").GetValue(e, null)
                        };
            return types.AsQueryable();
        }

        public void InsertLookupType(LookupType lookupType)
        {
            dynamic e = Activator.CreateInstance(EntityInfo.Type);
            EntityInfo.Key.SetValue(e, lookupType.TypeId, null);
            e.Name = lookupType.Name;
            dynamic set = EntityInfo.Set.GetValue(ObjectContext, null);
            set.AddObject(e);
        }

        public void UpdateLookupType(LookupType currentLookupType)
        {
            var set = (IEnumerable<EntityObject>)EntityInfo.Set.GetValue(ObjectContext, null);
            dynamic modified = set.FirstOrDefault(t => (int)EntityInfo.Key.GetValue(t, null) == currentLookupType.TypeId);
            modified.Name = currentLookupType.Name;
        }

        public void DeleteLookupType(LookupType lookupType)
        {
            var set = (IEnumerable<EntityObject>)EntityInfo.Set.GetValue(ObjectContext, null);
            var e = set.FirstOrDefault(t => (int)EntityInfo.Key.GetValue(t, null) == lookupType.TypeId);
            Debug.Assert(e.EntityState != EntityState.Detached, "Entity was in a detached state.");
            ObjectContext.ObjectStateManager.ChangeObjectState(e, EntityState.Deleted);
        }
    }
}
When I add an item to the list from the running screen, save it, then edit it and resave, I receive a data conflict: "Another user has deleted this record."
I can work around this by reloading the query after saving, but it's awkward.
If I remove an item, save, then re-add an item with the same name and save, I get "Unable to save data. The context is already tracking a different entity with the same resource Uri."
Both of these problems only affect my generic screen using WCF RIA Services. When I build a ListDetail screen for a specific database entity there are no problems. It seems I'm missing some logic, any ideas?
I've learned that this is the wrong approach to using LightSwitch.
There are several behind-the-scenes things this generic screen won't fully emulate, and they may not be doable without quite a bit of work. The errors I've received are just one example; LightSwitch's built-in conflict resolution will also fail.
LightSwitch's RAD design means the way to go is simply to create a bunch of similar screens, with some shared methods. If the actual layout needs to be changed across many identical screens at once, you can always find and replace in the .lsml files, provided you're careful and make backups first. Note that modifying these files directly isn't supported.
I got that error recently. In my case I generate a unique ID in my WCF RIA service, but in my screen code-behind I must explicitly set a unique ID when I create the object that is later passed to the WCF RIA service's insert method (that value is then overwritten with the unique counter ID in the table of the underlying database).
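For illustration only (a sketch; the screen collection and command names here are hypothetical and depend on how your screen is defined), the screen code-behind might assign a temporary client-side unique ID like this:
// Hypothetical LightSwitch screen code-behind sketch: give the new item a
// unique TypeId on the client before it reaches the RIA service's insert
// method; the database later replaces it with the real counter value.
partial void AddLookupType_Execute()
{
    var newType = this.LookupTypes.AddNew();        // "LookupTypes" is the screen collection (hypothetical name)
    newType.TypeId = this.LookupTypes.Any()
        ? this.LookupTypes.Max(t => t.TypeId) + 1   // temporary unique value
        : 1;
    newType.Name = "New Type";
}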
See the sample code for this project:
http://lightswitchhelpwebsite.com/Blog/tabid/61/EntryId/157/A-Visual-Studio-LightSwitch-Picture-File-Manager.aspx
Sorry for the big chunk of code, I couldn't explain it with less. Basically I'm trying to write to a file from many tasks.
Can you guys please tell me what I'm doing wrong? _streamWriter.WriteLine() throws an ArgumentOutOfRangeException.
using System;
using System.Collections.Generic;
using System.IO;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

class Program
{
    private static LogBuilder _log = new LogBuilder();

    static void Main(string[] args)
    {
        var acts = new List<Func<string>>();
        var rnd = new Random();
        for (int i = 0; i < 10000; i++)
        {
            acts.Add(() =>
            {
                var delay = rnd.Next(300);
                Thread.Sleep(delay);
                return "act that lasted " + delay;
            });
        }

        Parallel.ForEach(acts, act =>
        {
            _log.Log.AppendLine(act.Invoke());
            _log.Write();
        });
    }
}

public class LogBuilder : IDisposable
{
    public StringBuilder Log = new StringBuilder();
    private FileStream _fileStream;
    private StreamWriter _streamWriter;

    public LogBuilder()
    {
        _fileStream = new FileStream("log.txt", FileMode.Create, FileAccess.ReadWrite, FileShare.ReadWrite);
        _streamWriter = new StreamWriter(_fileStream) { AutoFlush = true };
    }

    public void Write()
    {
        lock (Log)
        {
            if (Log.Length <= 0) return;
            _streamWriter.WriteLine(Log.ToString()); // throws here, although Log.Length is greater than zero
            Log.Clear();
        }
    }

    public void Dispose()
    {
        _streamWriter.Close();
        _streamWriter.Dispose();
        _fileStream.Close();
        _fileStream.Dispose();
    }
}
This is not a bug in StringBuilder, it's a bug in your code. And the modification you showed in your follow-up answer (where you replace Log.ToString() with a loop that extracts characters one at a time) doesn't fix it. It won't throw an exception any more, but it won't work properly either.
The problem is that you're using the StringBuilder in two places in your multithreaded code, and one of them does not attempt to lock it, meaning that reading can occur on one thread simultaneously with writing occurring on another. In particular, the problem is this line:
_log.Log.AppendLine(act.Invoke());
You're doing that inside your Parallel.ForEach. You are not making any attempt at synchronization here, even though this will run on multiple threads at once. So you've got two problems:
Multiple calls to AppendLine may be in progress simultaneously on multiple threads
One thread may attempt to be calling Log.ToString at the same time as one or more other threads are calling AppendLine
You'll only get one read at a time because you are using the lock keyword to synchronize those. The problem is that you're not also acquiring the same lock when calling AppendLine.
Your 'fix' isn't really a fix. You've succeeded only in making the problem harder to see. It will now merely go wrong in different and more subtle ways. For example, I'm assuming that your Write method still goes on to call Log.Clear after your for loop completes its final iteration. Well, in between completing that final iteration and making the call to Log.Clear, it's possible that some other thread will have gotten in another call to AppendLine, because there's no synchronization on those calls to AppendLine.
The upshot is that you will sometimes miss stuff. Code will write things into the string builder that then get cleared out without ever being written to the stream writer.
Also, there's a pretty good chance of concurrent AppendLine calls causing problems. If you're lucky they will crash from time to time. (That's good because it makes it clear you have a problem to fix.) If you're unlucky, you'll just get data corruption from time to time - two threads may end up writing into the same place in the StringBuilder resulting either in a mess, or completely lost data.
Again, this is not a bug in StringBuilder. It is not designed to support being used simultaneously from multiple threads. It's your job to make sure that only one thread at a time does anything to any particular instance of StringBuilder. As the documentation for that class says, "Any instance members are not guaranteed to be thread safe."
Obviously you don't want to hold the lock while you call act.Invoke() because that's presumably the very work you want to parallelize. So I'd guess something like this might work better:
string result = act();
lock (_log.Log)
{
    _log.Log.AppendLine(result);
}
However, if I left it there, I wouldn't really be helping you, because this looks very wrong to me.
If you ever find yourself locking a field in someone else's object, it's a sign of a design problem in your code. It would probably make more sense to modify the design, so that the LogBuilder.Write method accepts a string. To be honest, I'm not even sure why you're using a StringBuilder here at all, as you seem to use it just as a holding area for a string that you immediately write to a stream writer. What were you hoping the StringBuilder would add here? The following would be simpler and doesn't seem to lose anything (other than the original concurrency bugs):
public class LogBuilder : IDisposable
{
    private readonly object _lock = new object();
    private FileStream _fileStream;
    private StreamWriter _streamWriter;

    public LogBuilder()
    {
        _fileStream = new FileStream("log.txt", FileMode.Create, FileAccess.ReadWrite, FileShare.ReadWrite);
        _streamWriter = new StreamWriter(_fileStream) { AutoFlush = true };
    }

    public void Write(string logLine)
    {
        lock (_lock)
        {
            _streamWriter.WriteLine(logLine);
        }
    }

    public void Dispose()
    {
        _streamWriter.Dispose();
        _fileStream.Dispose();
    }
}
I think the cause is that you are accessing the StringBuilder inside the Parallel.ForEach body:
_log.Log.AppendLine(act.Invoke());
_log.Write();
while inside LogBuilder you only take the lock() around the write to the StringBuilder. Switching the StreamWriter to write the log one character at a time only changes how the parallel threads interleave their access to the StringBuilder; it doesn't actually protect it.
Separating the parallel work from the logging into a distinct action would likely reduce the problem:
Parallel.ForEach(acts, act =>
{
    _log.Write(act.Invoke());
});
And in the LogBuilder class:
private readonly object _lock = new object();

public void Write(string logLines)
{
    lock (_lock)
    {
        //_wr.WriteLine(logLines);
        Console.WriteLine(logLines);
    }
}
An alternate approach is to use TextWriter.Synchronized to wrap StreamWriter.
static void Main(string[] args)
{
    var rnd = new Random();

    var writer = new StreamWriter(@"C:\temp\foo.txt");
    var syncedWriter = TextWriter.Synchronized(writer);

    var tasks = new List<Func<string>>();
    for (int i = 0; i < 1000; i++)
    {
        int local_i = i; // get a local value, not a closure reference to i
        tasks.Add(() =>
        {
            var delay = rnd.Next(5);
            Thread.Sleep(delay);
            return local_i.ToString() + " act that lasted " + delay.ToString();
        });
    }

    Parallel.ForEach(tasks, task =>
    {
        var value = task();
        syncedWriter.WriteLine(value);
    });

    writer.Dispose();
}
Here are some of the synchronization helper classes
http://referencesource.microsoft.com/#q=Synchronized
System.Collections
    static ArrayList Synchronized(ArrayList list)
    static IList Synchronized(IList list)
    static Hashtable Synchronized(Hashtable table)
    static Queue Synchronized(Queue queue)
    static SortedList Synchronized(SortedList list)
    static Stack Synchronized(Stack stack)
System.Collections.Generic
    static IList<T> Synchronized(List<T> list)
System.IO
    static Stream Synchronized(Stream stream)
    static TextReader Synchronized(TextReader reader)
    static TextWriter Synchronized(TextWriter writer)
System.Text.RegularExpressions
    static Match Synchronized(Match inner)
    static Group Synchronized(Group inner)
It seems that it isn't a problem with parallelism; it's a StringBuilder problem.
I have replaced:
_streamWriter.WriteLine(Log.ToString());
with:
for (int i = 0; i < Log.Length; i++)
{
    _streamWriter.Write(Log[i]);
}
And it worked.
For future reference: http://msdn.microsoft.com/en-us/library/system.text.stringbuilder(v=VS.100).aspx (see the Memory Allocation section).