Refreshing in-memory cache across multiple instances in Azure

I have an Azure web app which uses in-memory caching, adding keys as follows:

using System.Runtime.Caching;

public static void Add(object item, string key)
{
    var wrapper = new CacheItemWrapper()
    {
        InsertedAt = DateTime.Now,
        Item = item
    };
    MemoryCache.Default.Add(key, wrapper, ObjectCache.InfiniteAbsoluteExpiration);
}
Occasionally a user of our application will make a change which requires the cache to refresh. I can call a method which clears the cache; the problem is that it only works on the instance which picks up the request. Other instances still have the old values in memory.
Is there any way I can do either of these things:
a) run a method across multiple instances, or
b) raise an event which all instances listen for?
The code above could be changed to expire entries after a short time so that all instances would pick up the change. However, rebuilding the cache is quite a long process, and doing it frequently might hurt performance. Given that the application knows when the cache needs to refresh, it would be much better and more responsive to trigger the refresh programmatically.
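One common approach to (b) is to broadcast an invalidation message that every instance subscribes to, for example over Redis pub/sub. Below is a minimal sketch assuming the StackExchange.Redis package; the Redis address, channel name and method names are illustrative, not part of the original code:

using System.Runtime.Caching;
using StackExchange.Redis;

public static class CacheInvalidator
{
    // Hypothetical Redis endpoint; any shared message broker would do.
    private static readonly ConnectionMultiplexer Redis =
        ConnectionMultiplexer.Connect("my-redis:6379");

    private const string Channel = "cache-invalidation"; // illustrative name

    // Call once at startup on every instance.
    public static void StartListening()
    {
        Redis.GetSubscriber().Subscribe(Channel, (channel, key) =>
        {
            // Runs on every instance, including the one that published.
            MemoryCache.Default.Remove((string)key);
        });
    }

    // Call from whichever instance handled the user's change.
    public static void Invalidate(string key)
    {
        Redis.GetSubscriber().Publish(Channel, key);
    }
}

Each instance removes only the stale entry and repopulates it on the next read, so the long cache-rebuild runs lazily per key instead of on every instance at once.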

Related

Best practice for connections to CRM in web application

I'm sorry if this question is a bit too broad, but if it were about a normal ASP.NET MVC 5 OWIN-based application with a default connection to an MSSQL server I wouldn't be having such a hard time; we, however, use CRM as our database.
As I mentioned, I'm working on an ASP.NET MVC 5 application and I'm having a hard time finding the best practice for creating, keeping open, and closing a connection to Dynamics CRM 365.
I found many posts and blogs, but everyone pulls toward their own side of the road.
Some say it's better to open a new connection in a using statement for every request so it can be closed right away (that sounds good, but requests may be slow because every request needs to open a new connection to CRM).
Others say it's better to make a singleton object at application scope, keep it open for the application lifetime, and reuse it on every request.
Normally I would use OrganizationServiceProxy in a simple console app, but in this case I'm not sure whether I should use OrganizationServiceProxy, CrmServiceClient, or something else.
If anyone has, or has had, a similar problem, any hint would be great.
UPDATE:
@Nicknow
I downloaded the SDK from SDK 365 and am using these DLLs: Microsoft.Xrm.Sdk.dll, Microsoft.Crm.Sdk.Proxy.dll, Microsoft.Xrm.Tooling.Connector.dll and Microsoft.IdentityModel.Clients.ActiveDirectory.dll.
You mention Microsoft.CrmSdk.XrmTooling.CoreAssembly 8.2.0.5. If I understand correctly, this NuGet package uses the official assemblies that I downloaded, or are there some modifications in this package?
About that proof test: if I got it right, no matter whether I use a using statement, implement Dispose(), or just use a static class at application scope for the lifetime of the application, I will always get the same instance (if I use the default setting RequireNewInstance=false)?
For code simplicity, I usually create a static class (a singleton could be used too, but would usually be overkill) to return a CrmServiceClient object. That way my code is not littered with new CrmServiceClient calls should I want to change anything about how the connection is being made.
So it would be good practice to create a static class at application scope that lives for the application lifetime? That means every user that makes a request would use the same instance? Wouldn't that be a performance issue for that one connection?
All of your method calls will execute to completion or throw an exception, so even if the GC takes a while, there is no open connection sitting out there eating up resources and/or blocking other activity.
This takes me back to the part where I always get the same instance of CrmServiceClient. I get that Xrm.Tooling handles the cached connection on the CRM side, but what happens on this side (the web application)?
Isn't the connection to CRM (i.e. CrmServiceClient) an unmanaged resource; shouldn't I Dispose() of it explicitly?
I found some examples with CrmServiceClient, and in pretty much all of them CrmServiceClient is cast to IOrganizationService using CrmServiceClient.OrganizationWebProxyClient or CrmServiceClient.OrganizationServiceProxy.
Why is that, and what are the benefits of doing so?
I have so many more questions, but this is already a lot to ask; is there any online documentation you could point me to?
First, I'm assuming you are using the latest SDK DLLs from Nuget: Microsoft.CrmSdk.XrmTooling.CoreAssembly 8.2.0.5.
I never wrap the connection in a using statement, and I don't think I've ever seen an example where that is done. There are examples from the "old days", before we had the tooling library, where calls to create OrganizationServiceProxy were wrapped in a using statement, which caused a lot of inexperienced developers to release code with connection performance issues.
Luckily most of this has been fixed for us through the Xrm.Tooling library.
Create your connection object using CrmServiceClient:
CrmServiceClient crmSvc = new CrmServiceClient(@"...connection string goes here...");
Now, if I create an OrganizationServiceContext (or an early-bound equivalent) object, I do wrap that in a using so that it is deterministically disposed when I've completed my unit of work.
using (var ctx = new OrganizationServiceContext(crmSvc))
{
    var accounts = from a in ctx.CreateQuery("account")
                   select a["name"];
    Console.WriteLine(accounts.ToList().Count());
}
The Xrm.Tooling library handles everything else for you as far as the connection channel and authentication are concerned. Unless you specify that a new channel should be created each time (by adding 'RequireNewInstance=true' to the connection string or setting useUniqueInstance to true when calling new CrmServiceClient), the library will reuse the existing authenticated channel.
I used the following code to do a quick proof test:
void Main()
{
    var sw = new Stopwatch();
    sw.Start();

    var crmSvc = GetCrmClient();
    Console.WriteLine($"Time to get Client # 1: {sw.ElapsedMilliseconds}");
    crmSvc.Execute(new WhoAmIRequest());
    Console.WriteLine($"Time to WhoAmI # 1: {sw.ElapsedMilliseconds}");

    var crmSvc2 = GetCrmClient();
    Console.WriteLine($"Time to get Client # 2: {sw.ElapsedMilliseconds}");
    crmSvc2.Execute(new WhoAmIRequest());
    Console.WriteLine($"Time to WhoAmI # 2: {sw.ElapsedMilliseconds}");
}

public CrmServiceClient GetCrmClient()
{
    return new CrmServiceClient("...connection string goes here...");
}
When I run this with RequireNewInstance=true I get the following console output:
Time to get Client # 1: 2216
Time to WhoAmI # 1: 2394
Time to get Client # 2: 4603
Time to WhoAmI # 2: 4780
Clearly it was taking about the same amount of time to create each connection.
Now, if I change it to RequireNewInstance=false (which is the default) I get the following:
Time to get Client # 1: 3761
Time to WhoAmI # 1: 3960
Time to get Client # 2: 3961
Time to WhoAmI # 2: 4145
Wow, that's a big difference. What is going on? On the second call, the Xrm.Tooling library reuses the existing service channel and authentication (which it cached).
You can take this one step further and wrap your new CrmServiceClient calls in a using and you'll get the same behavior, because disposing of the returned instance does not destroy the cache.
So this will return times similar to above:
using (var crmSvc = GetCrmClient())
{
    Console.WriteLine($"Time to get Client # 1: {sw.ElapsedMilliseconds}");
    crmSvc.Execute(new WhoAmIRequest());
    Console.WriteLine($"Time to WhoAmI # 1: {sw.ElapsedMilliseconds}");
}

using (var crmSvc2 = GetCrmClient())
{
    Console.WriteLine($"Time to get Client # 2: {sw.ElapsedMilliseconds}");
    crmSvc2.Execute(new WhoAmIRequest());
    Console.WriteLine($"Time to WhoAmI # 2: {sw.ElapsedMilliseconds}");
}
For code simplicity, I usually create a static class (a singleton could be used too, but would usually be overkill) to return a CrmServiceClient object. That way my code is not littered with new CrmServiceClient calls should I want to change anything about how the connection is being made.
To fundamentally answer the question about using: we don't need it because there is nothing to be released. All of your method calls will execute to completion or throw an exception, so even if the GC takes a while, there is no open connection sitting out there eating up resources and/or blocking other activity.
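As a minimal sketch of that static-class approach (the class name and the assumption that the connection string lives in app settings are mine, not from the original answer):

using System.Configuration;
using Microsoft.Xrm.Tooling.Connector;

public static class CrmConnection
{
    // Cheap after the first call: Xrm.Tooling reuses the cached,
    // authenticated channel (RequireNewInstance defaults to false).
    public static CrmServiceClient Get() =>
        new CrmServiceClient(ConfigurationManager.ConnectionStrings["Crm"].ConnectionString);
}

Callers then write CrmConnection.Get().Execute(new WhoAmIRequest()) and the connection details stay in one place.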

Service Fabric reverse proxy port configurability

I'm trying to write an encapsulation that gets the URI of the local reverse proxy for Service Fabric, and I'm having a hard time deciding how to approach configurability for the port (known as "HttpApplicationGatewayEndpoint" in the service manifest or "reverseProxyEndpointPort" in the ARM template). The best way I've thought of is to call "GetClusterManifestAsync" from the fabric client and parse the result, but I'm not a fan of that for a few reasons. For one, the call returns a string XML blob, which isn't guarded against changes to the manifest schema. I've also not yet found a way to query the cluster manager to find out which node type I'm currently on, so if for some silly reason the cluster has multiple node types and each one has a different reverse proxy port (just being a defensive coder here), that could potentially fail. It seems like an awful lot of effort to go through to dynamically discover that port number, and I've definitely missed things in the fabric API before, so any suggestions on how to approach this issue?
Edit:
I'm seeing from the example project that it gets the port number from a config package in the service. I would rather not do it that way, as I would then have to write a ton of boilerplate for every service that needs this, just to read configs and pass the value around. Since this is more or less a constant at runtime, it seems like it could be treated as such and fetched from the fabric client somewhere?
After some time spent in the object browser, I was able to find the various pieces I needed to put this together properly.
using System;
using System.Fabric;
using System.Fabric.Management.ServiceModel;
using System.IO;
using System.Linq;
using System.Xml.Serialization;

public class ReverseProxyPortResolver
{
    /// <summary>
    /// Represents the port that the current fabric node is configured
    /// to use when using a reverse proxy on localhost
    /// </summary>
    // AsyncLazy<T> is an async-friendly Lazy<T> (available in e.g. Nito.AsyncEx).
    public static AsyncLazy<int> ReverseProxyPort = new AsyncLazy<int>(async () =>
    {
        // Get the cluster manifest from the fabric client & deserialize it
        // into a hardened object
        ClusterManifestType deserializedManifest;
        using (var cl = new FabricClient())
        {
            var manifestStr = await cl.ClusterManager.GetClusterManifestAsync().ConfigureAwait(false);
            var serializer = new XmlSerializer(typeof(ClusterManifestType));
            using (var reader = new StringReader(manifestStr))
            {
                deserializedManifest = (ClusterManifestType)serializer.Deserialize(reader);
            }
        }

        // Fetch the setting from the correct node type
        var nodeType = GetNodeType();
        var nodeTypeSettings = deserializedManifest.NodeTypes.Single(x => x.Name.Equals(nodeType));
        return int.Parse(nodeTypeSettings.Endpoints.HttpApplicationGatewayEndpoint.Port);
    });

    private static string GetNodeType()
    {
        try
        {
            return FabricRuntime.GetNodeContext().NodeType;
        }
        catch (FabricConnectionDeniedException)
        {
            // This code was invoked from a non-fabric started application,
            // likely a unit test
            return "NodeType0";
        }
    }
}
News to me in this investigation was that all of the schemas for any of the Service Fabric XML are squirreled away in an assembly named System.Fabric.Management.ServiceModel.
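For completeness, consumption of the resolver might look something like this; the helper class and its URI format are illustrative, and it assumes an awaitable AsyncLazy<T> such as the one in Nito.AsyncEx:

using System;
using System.Threading.Tasks;

public static class ReverseProxyUriBuilder
{
    // Hypothetical helper that builds a localhost reverse-proxy URI
    // for a given application/service pair.
    public static async Task<Uri> Build(string appName, string serviceName)
    {
        int port = await ReverseProxyPortResolver.ReverseProxyPort;
        return new Uri($"http://localhost:{port}/{appName}/{serviceName}");
    }
}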

Refresh Cache in Spring

How can I refresh a Spring cache both when I insert data into the database through my services and when I add data directly to the database? Can this be achieved?
Note:
I am using the following libraries:
1) net.sf.json-lib
2) spring-support-context
through my services
This is typically achieved in your application's services (e.g. @Service application components) using Spring's @Cacheable and @CachePut annotations, for example...
@Service
class BookService {

    @Cacheable("Books")
    Book findBook(ISBN isbn) {
        ...
        return booksRepository.find(isbn);
    }

    @CachePut(cacheNames = "Books", key = "#book.isbn")
    Book update(Book book) {
        ...
        return booksRepository.save(book);
    }
}
Both @Cacheable and @CachePut will update the cache provider, as the underlying method may call through to the underlying database.
and when I added data directly into database
This is typically achieved by the underlying cache store. For example, in GemFire, you can use a CacheLoader to "read-through" (to your underlying database, perhaps) on "cache misses". See GemFire's user guide documentation on "How Data Loaders Work" for an example and more details.
So, back to our example: if the "Book (Store)" database is updated independently of the application (which uses Spring's caching annotation support and infrastructure), then a developer just needs to define a strategy for cache misses. In GemFire that could be a CacheLoader, and when...
bookService.find(<some isbn>);
is called resulting in a "cache miss", GemFire's CacheLoader will kick in, load the cache with that book and Spring will see it as a "cache hit".
Of course, our implementation of bookService.find(..) goes to the underlying database anyway, but it only retrieves a single book. A loader could be implemented to populate an entire set (or range) of books based on some criteria (such as popularity), where the application expects that particular set of books to be searched for first by potential customers using the application, and pre-caches them.
So, while Spring's cache annotations typically work per entry, a cache-store-specific strategy can be used to prefetch and, in a way, "refresh" the cache lazily on the first cache miss.
In summary, while the former can be handled by Spring, the "refresh" per se is typically handled by the caching provider (e.g. GemFire).
Hopefully this gives you some ideas.
Cheers,
John

Handling large number of same requests in Azure/IIS WebRole

I have an Azure Cloud Service-based HTTP API which currently serves its data out of an Azure SQL database. We also have an in-role cache on the WebRole side.
Generally this model works fine for us, but sometimes we get a large number of requests for the same resource within a short time span, and if that resource is not in the cache, all of those requests go directly to our DB. That is a problem for us, as the DB is often not able to take that much load.
Looking at the nature of the problem, it seems like a pretty common one that most people building APIs would face (sometimes called a "cache stampede"). I was thinking that I could somehow send only the first request to the DB and hold all the remaining ones until the first completes, to control the load going to the DB, but I haven't found a good way of doing it. Is there any standard/recommended way of doing this in Azure/IIS?
The way we're handling this kind of scenario is by putting calls to the DB in a lock statement. That way only one caller will hit the DB. Here's pseudo code that you can try:
private static readonly object CacheLock = new object();

var cachedItem = ReadFromCache();
if (cachedItem != null)
{
    return cachedItem;
}
lock (CacheLock)
{
    // Re-check inside the lock: another caller may have populated
    // the cache while we were waiting for it.
    cachedItem = ReadFromCache();
    if (cachedItem != null)
    {
        return cachedItem;
    }
    var itemsFromDB = ReadFromDB();
    PutItemsInCache(itemsFromDB);
    return itemsFromDB;
}
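If a single global lock becomes a bottleneck (every cache miss for any key serializes on it), a common variation is to cache a Lazy<T> so that MemoryCache itself guarantees one DB call per key. Here is a sketch under that assumption; the helper name and TTL parameter are illustrative:

using System;
using System.Runtime.Caching;

public static class StampedeSafeCache
{
    public static T GetOrAdd<T>(string key, Func<T> readFromDb, TimeSpan ttl)
    {
        var lazy = new Lazy<T>(readFromDb, isThreadSafe: true);
        // AddOrGetExisting returns null when our Lazy was inserted,
        // or the existing entry when another caller won the race.
        var existing = (Lazy<T>)MemoryCache.Default.AddOrGetExisting(
            key, lazy, DateTimeOffset.Now.Add(ttl));
        // Note: if readFromDb throws, this Lazy caches the exception
        // until the entry expires.
        return (existing ?? lazy).Value; // only one factory runs per key
    }
}

Concurrent callers for the same key all receive the same Lazy<T> and block only until the first DB read completes, while requests for other keys proceed untouched.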

Uploading photos using Grails Services

I would like to ask: what would be the most suitable scope for my photo-upload service in Grails? I created this PhotoService in my Grails 2.3.4 web app; all it does is take request.getFile("myfile") and perform the necessary steps to save the file to the hard drive whenever a user uploads an image. To illustrate, here is a skeleton of these classes.
class PhotoPageController {
    def photoService

    def upload() {
        ...
        photoService.upload(request.getFile("myfile"))
        ...
    }
}

class PhotoService {
    static scope = "request"

    def upload(def myFile) {
        ...
        // I do a bunch of tasks to save the photo
        ...
    }
}
The code above isn't the exact code; I just wanted to show the flow. But my question is:
Question:
I couldn't find an exact definition of the different Grails scopes; the docs give a one-line explanation of each, but I couldn't figure out whether request scope means one bean is created for every request to the controller, or only for each request that reaches the upload action of the controller.
Thoughts:
Basically, since many users might upload at the same time, it's not a good idea to use singleton scope, so my options would be prototype or request, I guess. Which of them works well, and which one is only created when PhotoService is actually accessed?
I'm trying to minimize the number of services that are injected into the application context and stay alive as long as the web app does; basically, I want the service instance to die or get garbage-collected at some point during the web app's lifetime rather than hanging around in memory while there is no use for it. I was thinking about making it session-scoped so that when the user's session is terminated the service is cleaned up too, but in some cases a user might not upload any photo and the service would be created for no reason.
P.S.: If I move the "def photoService" inside upload(), does that make it get injected only when the request to upload is invoked? I assume that might throw an exception, because there would be a delay until Spring injects the service, and until then the reference to def photoService would be null.
I figured out that singleton scope would be fine, since I'm not maintaining state for each request/user. Only if the service is supposed to maintain state should we go ahead and use prototype or another suitable scope. Using prototype is safer if you think the singleton might cause unexpected behavior, but that is left to testing.
