Sharing an HttpRuntime.Cache across two IIS applications

I have two ASP.NET 2.0 applications in IIS; a public booking system and an admin system to manage prices. There is a shared DLL project that accesses the database, used by both applications.
To improve performance, the prices are cached in the DLL code to avoid hitting the database on every request. However, when the administrator changes the prices, the cache is refreshed only in the admin application (and obviously it isn't refreshed in the public application).
So, to the question. Is it possible to configure IIS so that these two applications share the HttpRuntime.Cache? If so, how should it be set up?

That would defeat the point of having two applications - they should not share the same DLL memory heap, which is what would be needed. What you need is a communication channel between the two: have the admin web pages notify the public application about changes to the cache, which would cause a refresh.
It may be something simple: a page on the public application that you post to, which causes the cache to check for updates? Or have the application check for updates now and again based on a timestamp.
(Another option is to create a service where the cache resides, but I think that is outside the scope of a simple solution)
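To illustrate the "page you post to" idea, here is a minimal sketch of an invalidation handler on the public application. The cache key "PriceList" and the handler name are assumptions, not taken from the question:

```csharp
// Hypothetical HTTP handler on the PUBLIC application. The admin app
// posts to it after saving new prices; the handler evicts the cached
// price list so the next request reloads it from the database.
using System.Web;

public class RefreshPricesHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // Remove the cached entry; the shared DLL re-populates it lazily.
        context.Cache.Remove("PriceList");
        context.Response.Write("OK");
    }

    public bool IsReusable
    {
        get { return true; }
    }
}
```

The handler would be registered in the public application's web.config, and the admin application would issue a request to it (e.g. via `HttpWebRequest`) whenever prices are saved. In practice you would also restrict access to it, for example with an IP restriction or a shared secret.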

No; in my experience this does not work (.NET 4.6, IIS 8.5, two applications with a common DLL in the same application pool). Documentation is very hard to come by (beyond "cache items are stored in memory") - in fact the only descriptive account was what @Thies stated above - but as he stated, I believe this is because the cache is stored in the DLL's allocated memory. So even though we have one process, we have two app domains: the common DLL is loaded separately into each application domain, and the DLL's memory is not shared.
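This per-app-domain isolation of static state can be demonstrated outside IIS with a small console program (a sketch on .NET Framework, not the poster's setup):

```csharp
// Demonstrates that a static field in a shared type is NOT shared
// across application domains: each domain gets its own copy.
using System;

public class Counter : MarshalByRefObject
{
    public static int Value; // one copy of this static PER app domain

    public int Increment() { return ++Value; }
}

class Program
{
    static void Main()
    {
        Counter local = new Counter();
        local.Increment();
        local.Increment(); // Counter.Value == 2 in the default domain

        AppDomain other = AppDomain.CreateDomain("other");
        Counter remote = (Counter)other.CreateInstanceAndUnwrap(
            typeof(Counter).Assembly.FullName, typeof(Counter).FullName);

        // Runs in the second domain, which has its own static starting at 0.
        Console.WriteLine(remote.Increment()); // prints 1, not 3
    }
}
```

The same thing happens inside one IIS worker process hosting two applications: same assembly, two app domains, two independent copies of every static field (including the cache).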

Related

How to config the IIS to allow a site to create/read from Named Shared Memory?

I am having strange problems with an application I need to use on an ASP.NET web site.
This application implements a DB in shared memory.
Now, I assume IIS would not allow just any web site to manipulate shared memory.
So, how do I configure IIS to allow this web site's unique operations - what permissions to set, etc.?
You are right, there are several permission restrictions!
I suggest you read some articles on the subject and come back once you have a more specific question about specific permissions.
You can start with articles like this one:
Guidelines for Resolving IIS Permissions Problems -
http://msdn.microsoft.com/en-us/library/aa954062.aspx

Understanding Azure Caching Service

By caching we basically mean replicating data for faster access. For example:
Store frequently used data from the DB in memory.
Store static contents of a web page in the client browser.
Cloud hosting already uses the closest datacenter (CDN) to serve contents to the user. My question is, how does the Caching Service make it faster?
CDN is used to improve the delivery performance between your service datacenter and your customer, by introducing a transparent proxy datacenter that is nearer your customer. The CDN typically is set up to cache - such that requests from different customers can be serviced by the same "CDN answer" without calling the origin service datacenter. This configuration is predominantly used to offload requests for shared assets such as jpegs, javascript etc.
Azure Caching Service is employed behind your service, within your service datacenter. Unlike the built-in ASP.NET cache, Azure Cache runs as a separate service, and can be shared between servers/services. Generally your service would use this to store cross-session or expensive-to-create information - e.g. query results from a database. You're trading:
value of memory to cache the item (time/money)
cost (time/money) of creation of the item
number of times you'd expect to reuse the item.
"freshness" of information
For example, you might use the memory cache to reduce the number of times that you query Azure Table, because you expect to reuse the same information multiple times, the latency to perform the query is high, and you can live with the information potentially being "stale". Doing so would save you money and improve the overall performance of your system.
You'd typically "layer" the out-of-process Azure Cache with on-machine/in-process cache, such that for frequent queries you pull information as follows:
best - look first in local/on-box cache
better - look in off-box Azure Service Cache, then load local cache with result
good - make a call/query to expensive resource, load Azure Cache and local cache with result
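The layering above can be sketched as a single read path. The `distributedCache` client (AppFabric-style `Get`/`Put`) and the `LoadProductFromTableStorage` helper are assumptions for illustration:

```csharp
// Layered lookup: local in-process cache -> shared Azure cache -> origin.
// `distributedCache` and LoadProductFromTableStorage are assumed helpers.
public Product GetProduct(string id)
{
    string key = "product:" + id;

    // best: in-process cache on this web server
    Product p = HttpRuntime.Cache[key] as Product;
    if (p != null) return p;

    // better: shared, out-of-process Azure cache
    p = distributedCache.Get(key) as Product;
    if (p == null)
    {
        // good: the expensive call (e.g. an Azure Table query)
        p = LoadProductFromTableStorage(id);
        distributedCache.Put(key, p);
    }

    // re-populate the local layer with a short expiry to bound staleness
    HttpRuntime.Cache.Insert(key, p, null,
        DateTime.UtcNow.AddMinutes(1), Cache.NoSlidingExpiration);
    return p;
}
```

The short local expiry is the knob that trades "freshness" against the cost of going back to the shared cache.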
Before saying anything I wanted to point you to this (very similar discussion):
Is it better to use Cache or CDN?
Having said that, this is how CDN and Caching can improve your website's performance.
CDN: This service helps you stay "close" to your end user. With CDN, your website's content will be spread over a system of servers, each in its own location. Every server holds a redundant copy of your site. When a visitor arrives, the CDN system identifies his/her location and serves the content from the closest server (also called a POP or proxy).
For example: when visited from Australia you'll be served by an Australian server; when visited from the US you'll be served by a US server, etc.
CDN will be most useful if your website operates outside of its immediate locale.
(i.e. CDN will not help you if your website promotes a local locksmith service that only has visitors from your city, as long as your origin servers are sitting nearby...)
Also, the overall coverage is unimportant.
You just need to make sure that the network covers all locations relevant to your day-to-day operations.
Cache: Provides faster access to your static and/or commonly used content objects. For example, if you have an image on your home page, and that image is downloaded again and again (and again) by all visitors, you should cache it, so that a returning visitor will already have it stored on his/her PC (in the browser cache). This will save time, because local resources load faster, and it will also save you bandwidth - because the image will load from the visitor's computer and not from your server.
CDN and caching are often combined, because this setup allows you to store cache on the CDN network.
This dual setup can also help improve caching efficiency - for example, it can help with dynamic caching by introducing smart algorithms into the "top" CDN layer.
Here is more information about Dynamic Caching (also good introduction to HTTP Caching directives)
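As a concrete (assumed) example of such directives: an ASP.NET handler serving that home-page image could mark the response as publicly cacheable, so both browsers and any CDN proxy in front can reuse it:

```csharp
// Mark the response as cacheable by browsers AND shared proxies/CDN
// for one day (emits: Cache-Control: public, max-age=86400).
// `context` is the current HttpContext; `imagePath` is assumed.
context.Response.Cache.SetCacheability(HttpCacheability.Public);
context.Response.Cache.SetMaxAge(TimeSpan.FromDays(1));
context.Response.Cache.SetLastModified(
    System.IO.File.GetLastWriteTimeUtc(imagePath)); // enables revalidation
```

`Public` (rather than `Private`) is what permits the CDN layer, not just the end user's browser, to store the copy.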
As you might already know from reading the above-mentioned post, no one method is better; they are at their best when combined.
Hope this answers it
GL

IIS: multiple web application vs single web application root

We have a legacy system which is built in classic ASP. As we move to ASP.NET, we find ourselves creating web applications as we migrate old stuff to .NET and add new functionality to the system. I would say maybe 30% of them share the same library, loading the same DLLs. (All applications share the same app pool.)
My question is, what are the pros and cons of this approach?
Would it be better to have one application root?
I am not really looking for a specific answer, just curious what you people usually do and why.
Thanks a lot.
I would place things that can be logically grouped together into its own app pool.
Example: Components needed for a website or webapp under IIS could be considered a single logical group, therefore it needs its own app pool.
Anything else that is separate should have its own domain with own app pool.
But, IMHO, I think it's a judgment call based on the nature of the app and whether it has any dependencies, etc. You know the system better than anybody, so from a 20k-foot view of it all, how should things be logically separated?
Example scenario:
If you have an app that needs to be reset via IIS, will it affect others (will others go down due to the one app that requires an IIS reset)? If it's not a big deal, then why not lump it together with the others. If it is a big deal, then keep it separate so the others aren't dependent on it.
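For example, isolating one fragile application into its own pool can be scripted with AppCmd (IIS 7+); the site and application names below are placeholders:

```shell
REM Create a dedicated pool and move the legacy app into it, so that
REM recycling this pool does not take the other applications down.
%windir%\system32\inetsrv\appcmd add apppool /name:LegacyReportsPool
%windir%\system32\inetsrv\appcmd set app "Default Web Site/reports" /applicationPool:LegacyReportsPool
```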

Sitecollection Overview Page

I have the following situation:
MOSS 2007 Server Environment A -> Intranet
MOSS 2007 Server Environment B -> Collaboration Environment (approx. 150 site collections for various issues)
Both environments are on different infrastructures but we use the same Active Directory and the same groups. Now we would like to implement the following 2 things:
An overview page within the intranet with all available site collections on environment b.
An overview page within the intranet with only those site collections the user has access on.
Now I'm searching for some good ideas about the best way to realise something like this.
Thanks in advance for any response.
The main thing to be careful of in a solution like this is performance, particularly for your second requirement. That would require looping through every site collection and retrieving permission data, either using the web services or the object model.
I would recommend writing a custom timer job (or two for each requirement if that makes more sense) to execute at a low-traffic time and aggregate this information for storage in a custom SQL database. If there is never low traffic then delay your requests to reduce impact on the server.
A custom web part (or again, two if more appropriate) can then be deployed to both environments. The web part would query the database for the required information and display it to the user.
If the timer job needs to update this data more frequently then you would need to implement some sort of in-memory caching. Depending on your requirements this may need a lot of memory.
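A rough sketch of the suggested timer job, using the MOSS 2007 object model; the job name and the `SaveToDatabase` helper (the write into the custom SQL database) are assumptions:

```csharp
// Custom timer job that aggregates site collection data at low-traffic
// times, so the overview web parts never loop over 150 site collections
// on a user request. SaveToDatabase is a hypothetical helper.
using System;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Administration;

public class SiteCollectionAggregatorJob : SPJobDefinition
{
    public SiteCollectionAggregatorJob() : base() { }

    public SiteCollectionAggregatorJob(string name, SPWebApplication webApp)
        : base(name, webApp, null, SPJobLockType.Job) { }

    public override void Execute(Guid targetInstanceId)
    {
        foreach (SPSite site in this.WebApplication.Sites)
        {
            try
            {
                // For requirement 2, permission data per AD group would
                // also be collected and written here.
                SaveToDatabase(site.Url, site.RootWeb.Title);
            }
            finally
            {
                site.Dispose(); // SPSite objects must be disposed explicitly
            }
        }
    }

    private void SaveToDatabase(string url, string title)
    {
        /* hypothetical helper: INSERT into the aggregation table */
    }
}
```

The web parts then only query that table, filtered by the current user's groups for the permission-aware view.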

App Domain per User Session in IIS

This question is about app domains and sessions. Is it possible to have IIS run each user session in a separate app domain? If yes, could you please tell me the settings in the config file that affect this?
Regards,
Anil.
This is not possible under Windows 2000 or Windows 2003 running ASP.NET 1.x or ASP.NET 2.0 (even with .NET Framework 3.5 installed). The same applies to ASP.NET running on IIS7.
A possible workaround (which certainly wouldn't scale) would be to create a windows service to manage the starting and stopping of new processes and communicate with that service from the web application using WCF.
I don't know of a way in which this could be easily done. Please explain your problem further as to why an AppDomain is necessary - it may be easier to just move Application collection use to Session.
Update: The correct solution to your problem is, unfortunately, to re-architect the library for a server-based session solution. What you could do, and I strongly do not recommend this, is create an AppDomain per Session, storing a reference to it in Session, and then relying on calls such as CreateInstanceAndUnwrap, magic strings, and reflection (with no real compile time checking) to load an instance of the library per user. I imagine that if you pursue this solution you will spend much more time in total debugging it and dealing with errors than you would if you did a re-architecture time investment upfront.
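For completeness, the discouraged workaround would look roughly like this. The assembly and type names are placeholder "magic strings" with no compile-time checking, which is exactly why the approach is fragile:

```csharp
// NOT recommended: one AppDomain per session, created on demand.
// `Session` is the current HttpSessionState; names below are assumed.
AppDomain domain = Session["UserDomain"] as AppDomain;
if (domain == null)
{
    domain = AppDomain.CreateDomain("session-" + Session.SessionID);
    Session["UserDomain"] = domain; // must be unloaded when the session ends
}

// Load the library into that domain via reflection and magic strings:
object engine = domain.CreateInstanceAndUnwrap(
    "Legacy.Library",                  // assumed assembly name
    "Legacy.Library.StatefulEngine");  // assumed type name
```

The returned object must derive from `MarshalByRefObject` to actually live in the other domain, and you would need a `Session_End` handler calling `AppDomain.Unload`, or the worker process will leak domains - further reasons to prefer the re-architecture.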
