Invalid Object Name Exception Thrown on Update - subsonic

I have a project using Simple Repository which was working before I rebuilt my dev machine. This may just be coincidence, but I am now developing against SQL Server 2008 Express rather than 2005, and when I run my project I get the exception "Invalid object name 'TableName'". The table exists, and records are inserted fine, but the exception is thrown when it comes to updating a record.
In case it helps, this is an example of the code where the error is thrown:
/// <summary>
/// Updates the specified entity.
/// </summary>
/// <param name="entity">The entity.</param>
public void Update(IList<Result> entity)
{
using (TransactionScope ts = new TransactionScope())
{
using (SharedDbConnectionScope scs = new SharedDbConnectionScope())
{
foreach (Result result in entity)
{
Update(result);
}
ts.Complete();
}
}
}
public void Update(Result entity)
{
repo.Update(entity);
}
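For reference, repo itself is not shown above; with SubSonic's Simple Repository it is typically constructed along these lines. This is a sketch only, and "MyConnectionString" is a placeholder rather than the project's actual connection string name.
// Sketch: typical Simple Repository construction. The connection string name
// below is a placeholder; the question does not show the real one.
private readonly SimpleRepository repo =
    new SimpleRepository("MyConnectionString", SimpleRepositoryOptions.RunMigrations);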

Related

OnDisconnected not called when published to IIS server; works fine in Visual Studio

Please forgive me if this has been asked before. I looked around, but my situation didn't match any answered question I came across.
I'm using SignalR 2.2.0.
Setup: I have a Web API 2 web application (call it API for short) that holds my hub, called ChatHub. I have an MVC site (MVC) that calls the hub on the API site.
Both of these sites are on the same server, just on different ports. I am using VS 2013, and when I test locally my local system also uses the same ports. The URL to the API site is loaded from the web config, which is different for Release, Debug and local (so the URL is correct and the ports on the server are fine; really the only thing wrong is OnDisconnected not getting fired).
After a lot of trial and error and searching, I finally got a test application up. Everything was working perfectly. Then I had to modify my hub to fit the business model, i.e. take the in-memory lists of users and messages and record them in the database (among other things). Everything works perfectly when running locally; however, once the sites are published to IIS on the server, OnDisconnected is never called. In the test app, and when running locally, it is hit almost instantly by most browsers, but even after waiting 15+ minutes the method is still not fired.
Here is my hub (shortened for clarity):
/// <summary>
/// Chat Hub Class
/// </summary>
public class ChatHubAsync : Hub
{
#region Data Members
IDemographicsProvider DemographicsProvider { get; set;}
IChatRepository Repo { get; set; }
#endregion
#region CTOR
/// <summary>
/// Unity Constructor
/// </summary>
[InjectionConstructor]
public ChatHubAsync()
: this(new ChatRepositoryEF(), DataProviders.Demographics)
{
}
/// <summary>
/// Constructor for Chat Hub
/// </summary>
/// <param name="Repository"></param>
/// <param name="Demographics"></param>
public ChatHubAsync(IChatRepository Repository, IDemographicsProvider Demographics)
{
Repo = Repository;
DemographicsProvider = Demographics;
}
#endregion
#region Methods
/// <summary>
/// On Connected method to call base class
/// </summary>
/// <returns></returns>
public override async Task OnConnected()
{
await base.OnConnected();
}
/// <summary>
/// Connect to Hub
/// </summary>
/// <returns>Void</returns>
public async Task Connect()
{
if (await Repo.GetUser(Context.ConnectionId) == null)
{
await RemoveDuplicates(getGuidFromQueryString("userID"), getGuidFromQueryString("groupID"));
var user = await CreateUser();
await Repo.Connect(user);
await Clients.Caller.onConnected(user);
}
}
/// <summary>
/// Add User To Group
/// </summary>
/// <returns></returns>
public async Task AddToGroup()
{
Guid id = getGroupFromQueryString();
if (id != Guid.Empty)
{
string groupID = id.ToString();
var user = await Repo.GetUser(Context.ConnectionId);
try
{
if(user == null)
{
await Connect();
user = await Repo.GetUser(Context.ConnectionId);
}
await Groups.Add(Context.ConnectionId, groupID);
var users = await Repo.OnlineUsers(id);
var messages = await Repo.RetrieveMessages(id, 20);
var status = await Repo.AddOnlineUserToGroup(Context.ConnectionId, id);
await Clients.Caller.onGroupJoined(user, users, messages, status);
Clients.Group(groupID, Context.ConnectionId).onNewUserConnected(user);
}
catch(Exception E)
{
Console.WriteLine(E.Message);
}
}
}
/// .....More Methods that are irrelevant....
/// <summary>
/// Disconnect from Hub
/// </summary>
/// <param name="stopCalled"></param>
/// <returns></returns>
public override async Task OnDisconnected(bool stopCalled)
{
try
{
var item = await Repo.GetUser(Context.ConnectionId);
if (item != null)
{
if (item.GroupID != null && item.GroupID != Guid.Empty)
{
var id = item.GroupID.ToString();
Repo.Disconnect(Context.ConnectionId);
Clients.OthersInGroup(id).onUserDisconnected(Context.ConnectionId, item.UserName);
Groups.Remove(Context.ConnectionId, id);
}
}
}
catch (Exception E)
{
Console.WriteLine(E.Message);
}
await base.OnDisconnected(stopCalled);
}
#endregion
#region private Messages
private async Task<IOnlineUser> CreateUser()
{
///Code removed
}
private Guid getGroupFromQueryString()
{
return getGuidFromQueryString("groupID");
}
private Guid getGuidFromQueryString(string name)
{
Guid id;
try
{
var item = getItemFromQueryString(name);
if (Guid.TryParse(item, out id))
{
return id;
}
throw new Exception("Not a Valid Guid");
}
catch(Exception E)
{
Console.WriteLine(E.Message);
return Guid.Empty;
}
}
private async Task RemoveDuplicates(Guid User, Guid Group)
{
///Code removed
}
#endregion
}
UPDATE:
I have no idea why, but once I removed the calls to the database (ALL calls to the database) and went back to in-memory lists, OnDisconnected started getting called again.
As soon as I added back any call to the database, using either straight SQL or Entity Framework, OnDisconnected stopped getting called.
Any ideas why adding database calls would cause OnDisconnected to stop getting called?
In case anyone else is having this issue: the problem was caused by the application not impersonating the correct user. It worked fine in Visual Studio because, when impersonation failed to use the provided user, it fell back to me; outside of VS it didn't have that option and tried using the machine account, which failed to log in. This was fixed by configuring impersonation on the application pool instead of in the web config. I was having issues with this earlier in the project when using async calls, but I thought I had fixed those with a few changes to the web config. That worked for all the async calls I was making, but it was still failing in a few select areas, which caused OnDisconnected to not fire on the server.
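If anyone needs to confirm which identity the hub is actually running under once deployed, a small diagnostic sketch (not part of the original fix, just an illustration) is to log the current Windows identity from inside OnConnected:
// Hypothetical diagnostic, not part of the original fix: log the Windows identity
// the hub executes under, to spot impersonation falling back to the machine account.
public override async Task OnConnected()
{
    System.Diagnostics.Trace.TraceInformation(
        "Hub running as: " + System.Security.Principal.WindowsIdentity.GetCurrent().Name);
    await base.OnConnected();
}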

Factories, services, repository in DDD

I have some questions regarding factories, repositories and services in DDD. I have the following entities: Folder, File, FileData.
In my opinion the "Folder" is an aggregate root and should have the responsibility of creating the File and FileData objects.
So my first question is: should I use a factory to create this aggregate, or is that up to the repository? At this time I have two repositories, one for Folder and another for File, but it seems to me I should merge them. The following code snippet shows my Folder repository, which is located in my infrastructure layer:
public class FolderRepository : IFolderRepository
{
#region Fields
private readonly IFolderContext _context;
private readonly IUnitOfWork _unitOfWork;
#endregion
#region Constructor
public FolderRepository(IUnitOfWork unitOfWork)
{
_unitOfWork = unitOfWork;
_context = _unitOfWork.Context as IFolderContext;
}
#endregion
public IUnitOfWork UnitOfWork
{
get { return _unitOfWork; }
}
public IQueryable<Folder> All
{
get { return _context.Folders; }
}
public Folder Find(Guid id)
{
return _context.Folders.Find(id);
}
public void InsertGraph(Folder entity)
{
_context.Folders.Add(entity);
}
public void InsertOrUpdate(Folder entity)
{
if (entity.Id == Guid.Empty)
{
_context.SetAdd(entity);
}
else
{
_context.SetModified(entity);
}
}
public bool Delete(Guid id)
{
var folder = this.Find(id) ?? _context.Folders.Find(id);
_context.Folders.Remove(folder);
return folder == null;
}
public int AmountOfFilesIncluded(Folder folder)
{
throw new NotImplementedException();
//return folder.Files.Count();
}
public void Dispose()
{
_context.Dispose();
}
}
Next I have created a service in my application layer called "IoService". I have my doubts about the location of this service; should it be moved to the domain layer?
public class IoService : IIoService
{
#region Fields
private readonly IFolderRepository _folderRepository;
private readonly IFileRepository _fileRepository;
private readonly IUserReferenceRepository _userReferenceRepository;
#endregion
#region Constructor
public IoService(IFolderRepository folderRepository, IFileRepository fileRepository, IUserReferenceRepository userReferenceRepository)
{
if(folderRepository == null)
throw new NullReferenceException("folderRepository");
if(fileRepository == null)
throw new NullReferenceException("fileRepository");
if (userReferenceRepository == null)
throw new NullReferenceException("userReferenceRepository");
_folderRepository = folderRepository;
_fileRepository = fileRepository;
_userReferenceRepository = userReferenceRepository;
}
#endregion
#region Folder Methods
/// <summary>
/// Create a new 'Folder'
/// </summary>
/// <param name="userReference"></param>
/// <param name="name"></param>
/// <param name="parentFolder"></param>
/// <param name="userIds">The given users represent who have access to the folder</param>
/// <param name="keywords"></param>
/// <param name="share"></param>
public void AddFolder(UserReference userReference, string name, Folder parentFolder = null, IList<Guid> userIds = null, IEnumerable<string> keywords = null, bool share = false)
{
var userReferenceList = new List<UserReference> { userReference };
if (userIds != null && userIds.Any())
{
userReferenceList.AddRange(userIds.Select(id => _userReferenceRepository.Find(id)));
}
var folder = new Folder
{
Name = name,
ParentFolder = parentFolder,
Shared = share,
Deleted = false,
CreatedBy = userReference,
UserReferences = userReferenceList
};
if (keywords != null)
{
folder.Keywords = keywords.Select(keyword =>
new Keyword
{
Folder = folder,
Type = "web",
Value = keyword,
}).ToList();
}
//insert into repository
_folderRepository.InsertOrUpdate(folder);
//save
_folderRepository.UnitOfWork.Save();
}
/// <summary>
/// Get 'Folder' by it's id
/// </summary>
/// <param name="id"></param>
/// <returns></returns>
public Folder GetFolder(Guid id)
{
return _folderRepository.Find(id);
}
#endregion
#region File Methods
/// <summary>
/// Add a new 'File'
/// </summary>
/// <param name="userReference"></param>
/// <param name="folder"></param>
/// <param name="data"></param>
/// <param name="name"></param>
/// <param name="title"></param>
/// <param name="keywords"></param>
/// <param name="shared"></param>
public void AddFile(UserReference userReference, Folder folder, FileData data, string name, string title = "", IEnumerable<string> keywords = null, bool shared = false)
{
var file = new File
{
Name = name,
Folder = folder,
FileData = data,
CreatedBy = userReference,
Type = data.Type
};
if (keywords != null)
{
file.Keywords = keywords.Select(keyword =>
new Keyword
{
File = file,
Type = "web",
Value = keyword,
}).ToList();
}
folder.Files.Add(file);
folder.Updated = DateTime.UtcNow;
_folderRepository.InsertOrUpdate(folder);
//save
_folderRepository.UnitOfWork.Save();
}
/// <summary>
/// Get 'File' by it's id
/// </summary>
/// <param name="id"></param>
/// <returns></returns>
public File GetFile(Guid id)
{
return _fileRepository.Find(id);
}
#endregion
}
To summarize:
Should I use the service for creating the Folder object, or should the service just use a factory, which has the responsibility of creating the object and sending it to the repository? What about dependency injection in the service: should I inject my services from the UI layer with an IoC container like Unity, or should I just hardcode the dependencies in the service?
Thanks
So my first question is should I use a factory to create this aggregate
or is it up to the repository?
A factory is responsible for creation while a repository is responsible for persistence. Upon reconstitution, the repository will effectively create instances. However, this creation process is often done with reflection and doesn't go through a factory, to prevent initialization that should only occur at creation time.
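As a rough illustration of that split, using the question's types (FolderFactory is a name invented here, not something from the question's code): creation-time rules live in the factory, while the repository only persists and reconstitutes.
// Sketch: a factory owns creation-time invariants; the repository only persists.
// FolderFactory is a hypothetical name for illustration.
public class FolderFactory
{
    public Folder CreateFolder(UserReference createdBy, string name, Folder parent = null)
    {
        return new Folder
        {
            Name = name,
            ParentFolder = parent,
            CreatedBy = createdBy,
            Deleted = false,
            UserReferences = new List<UserReference> { createdBy }
        };
    }
}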
At this time I have 2 repositories, one for Folder and another for
File, but it seems to me I should merge them together.
In DDD, you'd have a repository for each aggregate. This repository would be responsible for persisting all entities and value objects that are part of the aggregate.
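In code terms, merging the two repositories would leave a single contract for the Folder aggregate, with File reached only through its Folder. This is a sketch built from the members already present in the question's FolderRepository:
// Sketch: one repository per aggregate. File and FileData are persisted as part
// of the Folder aggregate rather than through a separate FileRepository.
public interface IFolderRepository
{
    Folder Find(Guid id);                  // loads the Folder together with its Files
    void InsertOrUpdate(Folder aggregate); // persists Folder, Files and FileData in one go
    IUnitOfWork UnitOfWork { get; }
}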
I have my doubts about the location of the service. Should it be moved
to the domain layer?
IMO, an application service can be placed into the domain layer since it already serves as a facade and keeping them together would bring the benefits of cohesion. One thought about the IoService is that methods such as AddFile would usually be parameterized by aggregate identities as opposed to instances. Since the application service already references a repository, it can load the appropriate aggregates as needed. Otherwise, calling code would be responsible for calling the repository.
Should I use the service for creating the folder object. Or should the
service just use a factory, which have the responsibility of creating
the object and send the created object to the repository?
The IoService looks good as is except for the previous comment about being parameterized by identities rather than instances.
What about dependency injection in the service, should I inject my
services from the UI layer with IOC containers like Unity or should I
just hardcode the dependencies in the service?
This is a matter of preference. If you can benefit from using an IoC container then use it. However, don't use it just to use it. You are already doing dependency injection, just without a fancy IoC container.
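If you do go the container route, a minimal sketch with Unity would look roughly like this. The concrete FileRepository and UserReferenceRepository class names are assumptions here; only FolderRepository is shown in the question.
// Rough sketch, assuming Unity (Microsoft.Practices.Unity): compose the service at
// the application root instead of hardcoding the dependencies inside IoService.
var container = new UnityContainer();
container.RegisterType<IFolderRepository, FolderRepository>();
container.RegisterType<IFileRepository, FileRepository>();                  // assumed concrete type
container.RegisterType<IUserReferenceRepository, UserReferenceRepository>(); // assumed concrete type
container.RegisterType<IIoService, IoService>();

var ioService = container.Resolve<IIoService>();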
SAMPLE
class File
{
public File(string name, Folder folder, FileData data, UserReference createdBy, IEnumerable<string> keywords = null)
{
//...
}
}
...
class Folder
{
public File AddFile(string name, FileData data, UserReference createdBy, IEnumerable<string> keywords = null)
{
var file = new File(name, this, data, createdBy, keywords);
this.Files.Add(file);
this.Updated = DateTime.UtcNow;
return file;
}
}
...
public void AddFile(UserReference userReference, Guid folderId, FileData data, string name, string title = "", IEnumerable<string> keywords = null, bool shared = false)
{
var folder = _folderRepository.Find(folderId);
if (folder == null)
throw new Exception();
folder.AddFile(name, data, userReference, keywords);
_folderRepository.InsertOrUpdate(folder);
_folderRepository.UnitOfWork.Save();
}
In this example, more of the behavior is delegated to the Folder aggregate and the File entity. The application service simply calls the appropriate methods on the aggregate.

Tracking WeakReference to objects from multiple threads

I am designing a static message bus that would allow subscribing to and publishing messages of an arbitrary type. To avoid requiring observers to unsubscribe explicitly, I would like to keep track of WeakReference objects that point to delegates instead of tracking the delegates themselves. I ended up coding something similar to what Paul Stovell described in his blog: http://www.paulstovell.com/weakevents.
My problem is this: as opposed to Paul's code, my observers subscribe to messages on one thread, but messages may be published on another. In this case, I observe that by the time I need to notify observers, my WeakReference.Target values are null, indicating that the targets have been collected, even though I know for certain they haven't been. The problem persists for both short and long weak references.
Conversely, when subscribing and publishing are done from the same thread, the code works fine. The latter is true even if I end up enumerating over targets on a new thread from the ThreadPool, as long as the request initially comes from the same thread I subscribe to messages on.
I understand that this is a very specific case, so any help is greatly appreciated.
My question is: should I not be able to reliably access WeakReference objects from multiple threads provided proper thread synchronization is in place? It appears that I cannot, which does not make much sense to me. So, what am I not doing right?
It looks like, after reducing my code to a simpler form (see below), it now works fine. This means the problem that caused the weak reference targets to be collected too early must reside elsewhere in my code. So, to answer my own question, it appears that weak references can be safely accessed from multiple threads.
Here is my test code:
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;
using System.Threading;
using System.Threading.Tasks;
namespace Test
{
class Program
{
static void Main(string[] args)
{
Console.WriteLine("Starting the app");
Test test = new Test();
// uncomment these lines to cause automatic unsubscription from Message1
// test = null;
// GC.Collect();
// GC.WaitForPendingFinalizers();
// publish Message1 on this thread
// MessageBus.Publish<Message1>(new Message1());
// publish Message1 on another thread
ThreadPool.QueueUserWorkItem(delegate
{
MessageBus.Publish<Message1>(new Message1());
});
while (!MessageBus.IamDone)
{
Thread.Sleep(100);
}
Console.WriteLine("Exiting the app");
Console.WriteLine("Press <ENTER> to terminate program.");
Console.WriteLine();
Console.ReadLine();
}
}
public class Test
{
public Test()
{
Console.WriteLine("Subscribing to message 1.");
MessageBus.Subscribe<Message1>(OnMessage1);
Console.WriteLine("Subscribing to message 2.");
MessageBus.Subscribe<Message2>(OnMessage2);
}
public void OnMessage1(Message1 message)
{
Console.WriteLine("Got message 1. Publishing message 2");
MessageBus.Publish<Message2>(new Message2());
}
public void OnMessage2(Message2 message)
{
Console.WriteLine("Got message 2. Closing the app");
MessageBus.IamDone = true;
}
}
public abstract class MessageBase
{
public string Message;
}
public class Message1 : MessageBase
{
}
public class Message2 : MessageBase
{
}
public static class MessageBus
{
// This is here purely for this test
public static bool IamDone = false;
/////////////////////////////////////
/// <summary>
/// A dictionary of lists of handlers of messages by message type
/// </summary>
private static ConcurrentDictionary<string, List<WeakReference>> handlersDict = new ConcurrentDictionary<string, List<WeakReference>>();
/// <summary>
/// Thread synchronization object to use with Publish calls
/// </summary>
private static object _lockPublishing = new object();
/// <summary>
/// Thread synchronization object to use with Subscribe calls
/// </summary>
private static object _lockSubscribing = new object();
/// <summary>
/// Creates a work queue item that encapsulates the provided parameterized message
/// and dispatches it.
/// </summary>
/// <typeparam name="TMessage">Message argument type</typeparam>
/// <param name="message">Message argument</param>
public static void Publish<TMessage>(TMessage message)
where TMessage : MessageBase
{
// create the dictionary key
string key = String.Empty;
key = typeof(TMessage).ToString();
// initialize a queue work item argument as a tuple of the dictionary type key and the message argument
Tuple<string, TMessage, Exception> argument = new Tuple<string, TMessage, Exception>(key, message, null);
// push the message on the worker queue
ThreadPool.QueueUserWorkItem(new WaitCallback(_PublishMessage<TMessage>), argument);
}
/// <summary>
/// Publishes a message to the bus, causing observers to be invoked if appropriate.
/// </summary>
/// <typeparam name="TArg">Message argument type</typeparam>
/// <param name="stateInfo">Queue work item argument</param>
private static void _PublishMessage<TArg>(Object stateInfo)
where TArg : class
{
try
{
// translate the queue work item argument to extract the message type info and
// any arguments
Tuple<string, TArg, Exception> arg = (Tuple<string, TArg, Exception>)stateInfo;
// call all observers that have registered to receive this message type in parallel
Parallel.ForEach(handlersDict.Keys
// find the right dictionary list entry by message type identifier
.Where(handlerKey => handlerKey == arg.Item1)
// dereference the list entry by message type identifier to get a reference to the observer
.Select(handlerKey => handlersDict[handlerKey]), (handlerList, state) =>
{
lock (_lockPublishing)
{
List<int> descopedRefIndexes = new List<int>(handlerList.Count);
// search the list of references and invoke registered observers
foreach (WeakReference weakRef in handlerList)
{
// try to obtain a strong reference to the target
Delegate dlgRef = (weakRef.Target as Delegate);
// check if the underlying delegate reference is still valid
if (dlgRef != null)
{
// yes it is, get the delegate reference via Target property, convert it to Action and invoke the observer
try
{
(dlgRef as Action<TArg>).Invoke(arg.Item2);
}
catch (Exception e)
{
// trouble invoking the target observer's reference, mark it for deletion
descopedRefIndexes.Add(handlerList.IndexOf(weakRef));
Console.WriteLine(String.Format("Error looking up target reference: {0}", e.Message));
}
}
else
{
// the target observer's reference has been descoped, mark it for deletion
descopedRefIndexes.Add(handlerList.IndexOf(weakRef));
Console.WriteLine(String.Format("Message type \"{0}\" has been unsubscribed from.", arg.Item1));
MessageBus.IamDone = true;
}
}
// remove any descoped references
descopedRefIndexes.ForEach(index => handlerList.RemoveAt(index));
}
});
}
// catch all Exceptions
catch (AggregateException e)
{
Console.WriteLine(String.Format("Error dispatching messages: {0}", e.Message));
}
}
/// <summary>
/// Subscribes the specified delegate to handle messages of type TMessage
/// </summary>
/// <typeparam name="TArg">Message argument type</typeparam>
/// <param name="action">WeakReference that represents the handler for this message type to be registered with the bus</param>
public static void Subscribe<TArg>(Action<TArg> action)
where TArg : class
{
// validate input
if (action == null)
throw new ArgumentNullException(String.Format("Error subscribing to message type \"{0}\": Specified action reference is null.", typeof(TArg)));
// build the queue work item key identifier
string key = typeof(TArg).ToString();
// check if a message of this type was already added to the bus
if (!handlersDict.ContainsKey(key))
{
// no, it was not, create a new dictionary entry and add the new observer's reference to it
List<WeakReference> newHandlerList = new List<WeakReference>();
handlersDict.TryAdd(key, newHandlerList);
}
lock (_lockSubscribing)
{
// append this new observer's reference to the list, if it does not exist already
if (!handlersDict[key].Any(existing => (existing.Target as Delegate) != null && (existing.Target as Delegate).Equals(action)))
{
// append the new reference
handlersDict[key].Add(new WeakReference(action, true));
}
}
}
}
}
This is an amendment to my previous answer. I have discovered why my original code did not work, and this information may be useful for others. In my original code, MessageBus was declared as a singleton:
public class MessageBus : Singleton<MessageBus> // Singleton<> is my library class
In the example above, it was declared as static:
public static class MessageBus
Once I converted my code to use a static class, things started working. Having said that, I have not yet been able to figure out why the singleton did not work.

Azure/web-farm ready SecurityTokenCache

Our site uses ADFS for auth. To reduce the cookie payload on every request we're turning IsSessionMode on (see Your fedauth cookies on a diet).
The last thing we need to do to get this working in our load-balanced environment is to implement a farm-ready SecurityTokenCache. The implementation seems pretty straightforward; I'm mainly interested in finding out whether there are any gotchas we should consider when dealing with SecurityTokenCacheKey and the TryGetAllEntries and TryRemoveAllEntries methods (SecurityTokenCacheKey has a custom implementation of the Equals and GetHashCode methods).
Does anyone have an example of this? We're planning on using AppFabric as the backing store, but an example using any persistent store would be helpful: a database table, Azure table storage, etc.
Here are some places I've searched:
In Hervey Wilson's PDC09 session he uses a DatabaseSecurityTokenCache. I haven't been able to find the sample code for his session.
On page 192 of Vittorio Bertocci's excellent book, "Programming Windows Identity Foundation", he mentions uploading a sample implementation of an Azure-ready SecurityTokenCache to the book's website. I haven't been able to find this sample either.
Thanks!
jd
3/16/2012 UPDATE
Vittorio's blog links to a sample using the new .NET 4.5 stuff:
ClaimsAwareWebFarm
This sample is an answer to the feedback we got from many of you guys: you wanted a sample showing a farm ready session cache (as opposed to a tokenreplycache) so that you can use sessions by reference instead of exchanging big cookies; and you asked for an easier way of securing cookies in a farm.
To come up with a working implementation we ultimately had to use Reflector to analyze the different SessionSecurityToken-related classes in Microsoft.IdentityModel. Below is what we came up with. This implementation is deployed on our dev and QA environments and seems to be working fine; it's resilient to app pool recycles, etc.
In global.asax:
protected void Application_Start(object sender, EventArgs e)
{
FederatedAuthentication.ServiceConfigurationCreated += this.OnServiceConfigurationCreated;
}
private void OnServiceConfigurationCreated(object sender, ServiceConfigurationCreatedEventArgs e)
{
var sessionTransforms = new List<CookieTransform>(new CookieTransform[]
{
new DeflateCookieTransform(),
new RsaEncryptionCookieTransform(
e.ServiceConfiguration.ServiceCertificate),
new RsaSignatureCookieTransform(
e.ServiceConfiguration.ServiceCertificate)
});
// following line is pseudo code. use your own durable cache implementation.
var durableCache = new AppFabricCacheWrapper();
var tokenCache = new DurableSecurityTokenCache(durableCache, 5000);
var sessionHandler = new SessionSecurityTokenHandler(sessionTransforms.AsReadOnly(),
tokenCache,
TimeSpan.FromDays(1));
e.ServiceConfiguration.SecurityTokenHandlers.AddOrReplace(sessionHandler);
}
private void WSFederationAuthenticationModule_SecurityTokenValidated(object sender, SecurityTokenValidatedEventArgs e)
{
FederatedAuthentication.SessionAuthenticationModule.IsSessionMode = true;
}
DurableSecurityTokenCache.cs:
/// <summary>
/// Two level durable security token cache (level 1: in memory MRU, level 2: out of process cache).
/// </summary>
public class DurableSecurityTokenCache : SecurityTokenCache
{
private ICache<string, byte[]> durableCache;
private readonly MruCache<SecurityTokenCacheKey, SecurityToken> mruCache;
/// <summary>
/// The constructor.
/// </summary>
/// <param name="durableCache">The durable second level cache (should be out of process ie sql server, azure table, app fabric, etc).</param>
/// <param name="mruCapacity">Capacity of the internal first level cache (in-memory MRU cache).</param>
public DurableSecurityTokenCache(ICache<string, byte[]> durableCache, int mruCapacity)
{
this.durableCache = durableCache;
this.mruCache = new MruCache<SecurityTokenCacheKey, SecurityToken>(mruCapacity, mruCapacity / 4);
}
public override bool TryAddEntry(object key, SecurityToken value)
{
var cacheKey = (SecurityTokenCacheKey)key;
// add the entry to the mru cache.
this.mruCache.Add(cacheKey, value);
// add the entry to the durable cache.
var keyString = GetKeyString(cacheKey);
var buffer = this.GetSerializer().Serialize((SessionSecurityToken)value);
this.durableCache.Add(keyString, buffer);
return true;
}
public override bool TryGetEntry(object key, out SecurityToken value)
{
var cacheKey = (SecurityTokenCacheKey)key;
// attempt to retrieve the entry from the mru cache.
value = this.mruCache.Get(cacheKey);
if (value != null)
return true;
// entry wasn't in the mru cache, retrieve it from the app fabric cache.
var keyString = GetKeyString(cacheKey);
var buffer = this.durableCache.Get(keyString);
var result = buffer != null;
if (result)
{
// we had a cache miss in the mru cache but found the item in the durable cache...
// deserialize the value retrieved from the durable cache.
value = this.GetSerializer().Deserialize(buffer);
// push this item into the mru cache.
this.mruCache.Add(cacheKey, value);
}
return result;
}
public override bool TryRemoveEntry(object key)
{
var cacheKey = (SecurityTokenCacheKey)key;
// remove the entry from the mru cache.
this.mruCache.Remove(cacheKey);
// remove the entry from the durable cache.
var keyString = GetKeyString(cacheKey);
this.durableCache.Remove(keyString);
return true;
}
public override bool TryReplaceEntry(object key, SecurityToken newValue)
{
var cacheKey = (SecurityTokenCacheKey)key;
// remove the entry in the mru cache.
this.mruCache.Remove(cacheKey);
// remove the entry in the durable cache.
var keyString = GetKeyString(cacheKey);
// add the new value.
return this.TryAddEntry(key, newValue);
}
public override bool TryGetAllEntries(object key, out IList<SecurityToken> tokens)
{
// not implemented... haven't been able to find how/when this method is used.
tokens = new List<SecurityToken>();
return true;
//throw new NotImplementedException();
}
public override bool TryRemoveAllEntries(object key)
{
// not implemented... haven't been able to find how/when this method is used.
return true;
//throw new NotImplementedException();
}
public override void ClearEntries()
{
// not implemented... haven't been able to find how/when this method is used.
//throw new NotImplementedException();
}
/// <summary>
/// Gets the string representation of the specified SecurityTokenCacheKey.
/// </summary>
private string GetKeyString(SecurityTokenCacheKey key)
{
return string.Format("{0}; {1}; {2}", key.ContextId, key.KeyGeneration, key.EndpointId);
}
/// <summary>
/// Gets a new instance of the token serializer.
/// </summary>
private SessionSecurityTokenCookieSerializer GetSerializer()
{
return new SessionSecurityTokenCookieSerializer(); // may need to do something about handling bootstrap tokens.
}
}
MruCache.cs:
/// <summary>
/// Most recently used (MRU) cache.
/// </summary>
/// <typeparam name="TKey">The key type.</typeparam>
/// <typeparam name="TValue">The value type.</typeparam>
public class MruCache<TKey, TValue> : ICache<TKey, TValue>
{
private Dictionary<TKey, TValue> mruCache;
private LinkedList<TKey> mruList;
private object syncRoot;
private int capacity;
private int sizeAfterPurge;
/// <summary>
/// The constructor.
/// </summary>
/// <param name="capacity">The capacity.</param>
/// <param name="sizeAfterPurge">Size to make the cache after purging because it's reached capacity.</param>
public MruCache(int capacity, int sizeAfterPurge)
{
this.mruList = new LinkedList<TKey>();
this.mruCache = new Dictionary<TKey, TValue>(capacity);
this.capacity = capacity;
this.sizeAfterPurge = sizeAfterPurge;
this.syncRoot = new object();
}
/// <summary>
/// Adds an item if it doesn't already exist.
/// </summary>
public void Add(TKey key, TValue value)
{
lock (this.syncRoot)
{
if (mruCache.ContainsKey(key))
return;
if (mruCache.Count + 1 >= this.capacity)
{
while (mruCache.Count > this.sizeAfterPurge)
{
var lru = mruList.Last.Value;
mruCache.Remove(lru);
mruList.RemoveLast();
}
}
mruCache.Add(key, value);
mruList.AddFirst(key);
}
}
/// <summary>
/// Removes an item if it exists.
/// </summary>
public void Remove(TKey key)
{
lock (this.syncRoot)
{
if (!mruCache.ContainsKey(key))
return;
mruCache.Remove(key);
mruList.Remove(key);
}
}
/// <summary>
/// Gets an item. If a matching item doesn't exist null is returned.
/// </summary>
public TValue Get(TKey key)
{
lock (this.syncRoot)
{
if (!mruCache.ContainsKey(key))
return default(TValue);
mruList.Remove(key);
mruList.AddFirst(key);
return mruCache[key];
}
}
/// <summary>
/// Gets whether a key is contained in the cache.
/// </summary>
public bool ContainsKey(TKey key)
{
lock (this.syncRoot)
return mruCache.ContainsKey(key);
}
}
ICache.cs:
/// <summary>
/// A cache.
/// </summary>
/// <typeparam name="TKey">The key type.</typeparam>
/// <typeparam name="TValue">The value type.</typeparam>
public interface ICache<TKey, TValue>
{
void Add(TKey key, TValue value);
void Remove(TKey key);
TValue Get(TKey key);
}
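For completeness, a throwaway in-memory ICache<string, byte[]> can stand in for the durable cache while testing locally. This is a sketch only; it is not farm-ready, and in a web farm you would plug in AppFabric, SQL, or Azure table storage as described above.
// Sketch only: an in-memory stand-in for the durable (level 2) cache, useful for
// local testing. A real deployment needs an out-of-process store.
public class InMemoryCache : ICache<string, byte[]>
{
    private readonly System.Collections.Concurrent.ConcurrentDictionary<string, byte[]> store =
        new System.Collections.Concurrent.ConcurrentDictionary<string, byte[]>();

    public void Add(string key, byte[] value)
    {
        store[key] = value; // add or overwrite
    }

    public void Remove(string key)
    {
        byte[] ignored;
        store.TryRemove(key, out ignored);
    }

    public byte[] Get(string key)
    {
        byte[] value;
        return store.TryGetValue(key, out value) ? value : null;
    }
}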
Here is a sample that I wrote. I use Windows Azure to store the tokens forever, defeating any possible replay.
http://tokenreplaycache.codeplex.com/releases/view/76652
You will need to place this in your web.config:
<service>
<securityTokenHandlers>
<securityTokenHandlerConfiguration saveBootstrapTokens="true">
<tokenReplayDetection enabled="true" expirationPeriod="50" purgeInterval="1">
<replayCache type="LC.Security.AzureTokenReplayCache.ACSTokenReplayCache,LC.Security.AzureTokenReplayCache, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
</tokenReplayDetection>
</securityTokenHandlerConfiguration>
</securityTokenHandlers>
</service>

EF4.1 hangs for 120 seconds when using Microsoft.ApplicationServer.Caching.DataCacheFactory

I have a consistent, repeatable 120 second hang whenever the application calls
this.cacheProvider.Add(new CacheItem(cacheKey, data, this.regionName), cachePolicy);
at line 60 of CachedDataSource.cs in the sample. The .Add method is internal to Microsoft's DLL and I don't have the code for it. Here are my parameters:
cacheKey = "listofCompanies"
data = // this is an EF 4.0 database first model class with 70 entries... result from IQueryable
this.regionName = "companies"
Reproducing the error:
I have a database-first EF4.0 project that I recently upgraded to 4.1 by adding the "EntityFramework" reference and a ContextGenerator to my DAL.
If I undo these changes, then my application is instantly performant.
My DAL and repository are stored in a separate DLL from my MVC application. Not sure if this is playing a part in the issue.
About my repository
/// Sample repository. Note that I return List<T> as IEnumerable,
/// and I use IDisposable
///
public class CompanyRepository : DisposableBase, ICompanyRepository
{
public IEnumerable<CompanyDetail> GetOneCompany(int? CompanyID)
{
var t = from c in _entities.CompanyDetail
where c.CompanyID == CompanyID.Value
select c;
return t.ToList();
}
}
/// <summary>
/// Disposable implementation based on advice from this link:
/// from Http://www.asp.net/entity-framework/tutorials/implementing-the-repository-and-unit-of-work-patterns-in-an-asp-net-mvc-application
/// </summary>
public class DisposableBase : IDisposable
{
protected TLSAdminEntities1 _entities;
public DisposableBase()
{
_entities = new TLSAdminEntities1();
disposed = false;
}
private bool disposed ;
protected virtual void Dispose(bool disposing)
{
if (!this.disposed)
{
if (disposing)
{
_entities.Dispose();
}
}
this.disposed = true;
}
public void Dispose()
{
Dispose(true);
GC.SuppressFinalize(this);
}
}
Question
Is this a bug, or am I using EF 4.1 or the caching layer incorrectly?
You mention that data is the result of an IQueryable. Have you tried calling .ToList() on the data before sending it over to the cache?
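In other words, materialize the query before handing it to the cache so the cached item is a plain list rather than a live IQueryable bound to the EF context. A sketch based on the parameters shown in the question (companiesQuery stands in for whatever IQueryable produced data; the other names follow the question's snippet):
// Sketch: force the query to execute before caching.
List<CompanyDetail> data = companiesQuery.ToList();
this.cacheProvider.Add(new CacheItem(cacheKey, data, this.regionName), cachePolicy);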
