I am having a problem related to cache disposal in an Azure cloud application.
I am using the MVC3 structure, with 2 instances.
As we know, Microsoft Azure automatically allocates a web role to serve each web request based on load balancing.
The problem is that when I remove a cache entry with HttpRuntime.Cache.Remove("CacheName"), it is only removed from the web role instance that Microsoft has currently allotted to me, and is not removed from the other instance.
Please help me: can I remove a cache entry from both instances at the same time, using C# code?
This is a good reason to use a distributed cache. Synchronizing cache adds and removes individually across many instances and caches is hard to do well. Any code or solution that attempts to solve the issue will be pretty hacky. Moving the caching out to a distributed cache will solve the problem for you correctly.
Have you looked at the Windows Azure AppFabric Caching solution?
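For illustration only, here is a minimal sketch of evicting a cache entry through the Windows Azure AppFabric Caching client (Microsoft.ApplicationServer.Caching) instead of HttpRuntime.Cache. It assumes a dataCacheClient section is already configured; the class name and cache key are placeholders:

using Microsoft.ApplicationServer.Caching;

public static class CacheEviction
{
    // Create the factory once per application - it is expensive to construct.
    private static readonly DataCacheFactory Factory = new DataCacheFactory();

    public static void Evict(string key)
    {
        // The distributed cache is shared by all role instances, so a single
        // Remove is visible everywhere - no per-instance invalidation is needed.
        DataCache cache = Factory.GetDefaultCache();
        cache.Remove(key);
    }
}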
I need to cache objects to improve the performance of my Azure Function. I tried the .NET ObjectCache (System.Runtime.Caching) and it worked well in my testing (tested with up to a 10-minute cache retention period).
In order to take this solution forward, I have a few quick questions:
What is the recycling policy of an Azure Function? Is there a default? Can it be configured?
What are the cost implications?
Is my approach right, or are there better solutions?
If you know the answer to any of these questions, please help.
Thank you.
Javed,
An out-of-process solution such as Redis (or even using Table storage, depending on the workload) would be recommended.
As a rule of thumb, functions should be stateless, particularly if you're running in the dynamic runtime, where scaling operations (up and down) could happen at any time and your host is not guaranteed to stay up.
If you opt to use classic hosting, you do have a little more flexibility, as you can enable the "always on" feature, but I'd still recommend the out-of-process approach. Running in classic mode has a cost implication as well, since you're no longer taking advantage of the consumption-based billing model offered by dynamic hosting.
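For illustration only, a minimal sketch of the out-of-process approach using Redis via the StackExchange.Redis client; the connection string, key names, and 10-minute expiry are placeholders:

using System;
using StackExchange.Redis;

public static class RedisCacheSketch
{
    // Reuse one multiplexer across function invocations - it is thread-safe
    // and expensive to create.
    private static readonly Lazy<ConnectionMultiplexer> Connection =
        new Lazy<ConnectionMultiplexer>(() =>
            ConnectionMultiplexer.Connect("mycache.redis.cache.windows.net:6380,password=<key>,ssl=True"));

    public static string GetOrAdd(string key, Func<string> produce)
    {
        IDatabase db = Connection.Value.GetDatabase();

        string cached = db.StringGet(key);
        if (cached != null)
            return cached;

        string value = produce();
        // Let Redis expire the entry after 10 minutes.
        db.StringSet(key, value, TimeSpan.FromMinutes(10));
        return value;
    }
}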
I hope this helps!
If you just need a smallish key-value cache, you could use the file system. D:\HOME (also found in the environment variable %HOME%) is shared across all instances. I'm not sure if the capacities are any different for Azure Functions, but for Sites and WebJobs, Free and Shared sites get 1GB of space, Basic sites get 10GB, and Standard sites get 50GB.
Alternatively, you could try running the .NET ObjectCache in production. It may survive multiple calls to the same instance (file system or static in-memory property). Note that this will not be shared across instances, though, so only use it as a best-effort cache.
Note that both of these approaches pose problems for multi-tenant products, as they could be an avenue for unintended cross-tenant data sharing, or even more malicious activities like DNS cache poisoning. You'd want to implement authorization controls for these things just as if they came from a database.
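For illustration only, a minimal sketch of a naive file-based key-value cache under %HOME%; the subfolder is arbitrary, and there is no locking, expiry, or key sanitization:

using System;
using System.IO;

public static class FileCacheSketch
{
    // %HOME% maps to the content store that is shared across all instances.
    private static readonly string CacheDir =
        Path.Combine(Environment.GetEnvironmentVariable("HOME") ?? ".", "data", "cache");

    public static void Set(string key, string value)
    {
        Directory.CreateDirectory(CacheDir);
        File.WriteAllText(Path.Combine(CacheDir, key + ".txt"), value);
    }

    public static string Get(string key)
    {
        string path = Path.Combine(CacheDir, key + ".txt");
        return File.Exists(path) ? File.ReadAllText(path) : null;
    }
}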
As others have suggested, Functions should ideally be stateless, and an out-of-process solution is probably best. I use DocumentDB because it has time-to-live functionality, which is ideal for a cache. Redis is likely to be more performant, especially if you don't need persistence across a stop/restart.
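For illustration only, a minimal sketch of writing a cache entry with a per-document TTL using the DocumentDB .NET SDK; the account endpoint, auth key, and database/collection names are placeholders, and the collection is assumed to have DefaultTimeToLive enabled:

using System;
using System.Threading.Tasks;
using Microsoft.Azure.Documents.Client;

public static class DocDbCacheSketch
{
    // Placeholder endpoint and auth key - substitute your own account values.
    private static readonly DocumentClient Client = new DocumentClient(
        new Uri("https://myaccount.documents.azure.com:443/"), "<auth-key>");

    public static Task CacheAsync(string id, string payload)
    {
        Uri collection = UriFactory.CreateDocumentCollectionUri("CacheDb", "CacheItems");
        // The "ttl" property (in seconds) makes DocumentDB expire the document
        // automatically once the collection has a DefaultTimeToLive set.
        return Client.CreateDocumentAsync(collection, new { id, payload, ttl = 300 });
    }
}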
I'm curious how auto scaling works with an Azure web application, specifically how each new instance gets a copy of the source code for the application.
Is a full copy deployed to each new instance, or is the source stored somewhere central that each new instance points to?
The entire virtual machine is duplicated. So usually you might have just one database but multiple apps receiving and processing the requests. If you need an "autoscaling" database too, there are database solutions that handle synchronization across multiple machines, but in that case you're probably better off using Azure's native database, which takes care of that for you.
I have an existing MVC web app running on Azure Websites.
I have set up Application Insights on it whilst it is live.
Is this recommended practice? Will it have any impact on my site whilst it is running? Is it okay to leave it running, given that it provides very useful data?
Many thanks
Absolutely, Application Insights is intended to be able to run in production alongside your applications.
One of its purposes is to help you track the health of your application and assist you when something goes wrong.
While it is optimized to operate as silently as possible, some impact on your performance is inevitable. Unless you're tracing a lot of custom data or your application is very performance-sensitive, you should be fine.
I would like to find a best practice for debugging an existing ASP.NET MVC application. It's a Web Role already hosted in Azure.
The application is using Windows Azure Caching. The configuration file has been defined with the settings of an Azure account:
<dataCacheClient name="default">
  <hosts>
    <host name="xxxx.cache.windows.net" cachePort="22233" />
  </hosts>
</dataCacheClient>
I would like to debug the code. What's the best approach in this case?
I have already tried changing the host to localhost, but it's not working.
Thank you,
PS: I have installed the new SDK 1.8
There is no locally-installed equivalent to Azure Shared Caching. Windows Server AppFabric Caching is somewhat close, but not exactly the same.
You could try to get IT to unblock the ports so you can use Azure. Though if you have multiple developers on a project, each dev would need their own cache instance to avoid stepping on each other's data.
Another option is to completely encapsulate your caching in interfaces. Then you can use something completely different to develop on. In the past I've used the MemoryCache in-memory store for development. You could also use AppFabric Caching, or memcached, or really anything else. You just need to be aware of the differences between your dev and production systems.
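For illustration only, a minimal sketch of such an encapsulation, with a development implementation backed by MemoryCache from System.Runtime.Caching; the interface and class names are placeholders:

using System;
using System.Runtime.Caching;

// Narrow abstraction so the dev and production cache implementations can be swapped.
public interface ICacheProvider
{
    T Get<T>(string key) where T : class;
    void Set(string key, object value, TimeSpan lifetime);
    void Remove(string key);
}

// In-memory implementation for local development.
public class MemoryCacheProvider : ICacheProvider
{
    private readonly MemoryCache cache = MemoryCache.Default;

    public T Get<T>(string key) where T : class
    {
        return cache.Get(key) as T;
    }

    public void Set(string key, object value, TimeSpan lifetime)
    {
        cache.Set(key, value, DateTimeOffset.UtcNow.Add(lifetime));
    }

    public void Remove(string key)
    {
        cache.Remove(key);
    }
}

In production you would register an implementation that talks to the real cache service instead, keeping the calling code unchanged.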
Edit: Another option would be switching from Shared Caching to caching in your roles (I'm not sure what the official name for this is these days). I believe this works locally too. The major drawback is that it's only visible within one Hosted Service. If you only have one Hosted Service anyway, that's no problem. If you have several Hosted Services that need to share data, then it could be an issue.
I believe that the MVC mini profiler is a bit of a godsend.
I have incorporated it in a new MVC project which is targeting the Azure platform.
My question is - how to handle profiling across server (role instance) barriers?
Is this even possible?
I don't understand why you would need to profile these apps any differently. You want to profile how your app behaves on the production server - go ahead and do it.
A single request will still be executed on a single instance, and you'll get the data from that same instance. If you want to profile services located on a different physical tier as well, that would require a different approach, involving communication through internal endpoints, which I'm sure the mini profiler doesn't support out of the box. However, the modification shouldn't be that complicated.
That said, if you wanted to profile physically separated tiers, I would go about it differently: profile each tier independently, because that's how I would go about optimizing it. If you wrap the call to your other tier in a profiler statement, you can see where the problem lies and still be able to solve it.
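For illustration only, a minimal sketch of wrapping a cross-tier call in a profiler step; the controller and service call are placeholders:

using System.Web.Mvc;
using MvcMiniProfiler;

public class OrdersController : Controller
{
    public ActionResult Index()
    {
        var profiler = MiniProfiler.Current; // null when profiling is disabled

        using (profiler.Step("Call order service on other tier"))
        {
            // Placeholder for the actual call into the physically separated tier.
            var orders = CallOrderService();
            return View(orders);
        }
    }

    private object CallOrderService() { return null; }
}

The Step extension method is null-safe, so the code runs unchanged when the profiler is off.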
By default the mvc-mini-profiler stores and delivers its results using HttpRuntime.Cache. This is going to cause some problems in a multi-instance environment.
If you are using multiple instances, then some ways you might be able to make this work are:
to change the HttpRuntime.Cache storage to an AppFabric Cache implementation (or some memcached implementation)
to use an alternative storage strategy for your profile results (the code includes SqlServerStorage as an example; see the sketch below)
Obviously, whichever strategy you choose will require more time/resources than just the single instance implementation.
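For illustration only, a minimal sketch of the second option, swapping the default HttpRuntime.Cache storage for the bundled SqlServerStorage in Application_Start; the connection string is a placeholder:

using MvcMiniProfiler;
using MvcMiniProfiler.Storage;

public class MvcApplication : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        // Persist profiler results to a shared SQL database so that any
        // instance can serve a result back, regardless of which instance
        // recorded it.
        MiniProfiler.Settings.Storage = new SqlServerStorage("<connection string>");
    }
}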