We were using the default session state (in-proc) in our application, which we've built on top of Orchard. Now management has decided to install a load balancer in between. To keep our sessions working, I thought of going with out-of-process session state. However, I am a bit confused about whether I should enable it in the 'Orchard.Web' module or in the specific modules where I've used sessions.
I've been trying to find out whether Orchard supports out-of-process sessions in some other way, or whether it is configured the same way as a normal ASP.NET application.
Any help would be appreciated.
First off - I'm pretty certain that the Orchard team recommends avoiding session state at all costs. Anything that has session state is (by definition) stateful and that makes scaling outwards harder. However, assuming that you cannot avoid it:
1) It's just an ASP.NET application, so the normal rules apply. Ensure the same machine key is set in the application's config on every server, configure the session state mechanism of your choice (SQL Server / state server), and set the appropriate values in web.config (see the config sketch after point 2 below).
however
2) The standard ASP.NET session state implementation has really poor locking. This can lead to bad responsiveness issues for your pages. Check out this excellent question (and the linked posts) on session state performance. You should evaluate for yourself whether you have any need for locked session state. We had to remove session state entirely in order to provide acceptable performance for our applications (and we've never looked back or found a reasonable argument for session over caching since).
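If some of your pages only ever read session state, one partial mitigation for the locking described above is to mark those controllers as read-only with respect to session. This is a sketch under the assumption you're on ASP.NET MVC; the controller name is made up:

    using System.Web.Mvc;
    using System.Web.SessionState;

    // Hypothetical controller: requests to it take only a read lock, so they no
    // longer queue behind requests holding the exclusive session write lock.
    [SessionState(SessionStateBehavior.ReadOnly)]
    public class CatalogController : Controller
    {
        public ActionResult Index()
        {
            // Session can be read here, but writes will not be persisted reliably.
            var userName = Session["UserName"] as string;
            return View((object)userName);
        }
    }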
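And, coming back to point 1, a rough web.config sketch of the settings involved. Every value below is a placeholder, and SQL Server mode is only one of the out-of-process options:

    <!-- Hypothetical web.config fragment: the keys and connection string are placeholders. -->
    <system.web>
      <!-- The same machineKey must be deployed to every server behind the load balancer,
           otherwise auth tickets and view state cannot be decrypted across instances. -->
      <machineKey validationKey="A0B1...PLACEHOLDER"
                  decryptionKey="C2D3...PLACEHOLDER"
                  validation="SHA1" decryption="AES" />

      <!-- Out-of-process session state; SQLServer mode shown, StateServer is the other common choice. -->
      <sessionState mode="SQLServer"
                    sqlConnectionString="Data Source=mySqlServer;Integrated Security=True"
                    cookieless="false"
                    timeout="20" />
    </system.web>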
The classic solution to scaling is to use sticky sessions. Most load balancers have this setting, and it allows you to keep using in-proc session state. If you don't plan on auto-scaling, so you will always have a fixed number of servers behind your LB, then it is a solution you should carefully consider.
Going out of proc can give you some headaches, like having to mark every class you put in session as Serializable.
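For example, anything placed in an out-of-proc session has to survive serialization; a minimal sketch (the ShoppingCart type here is made up for illustration):

    using System;
    using System.Collections.Generic;

    // Hypothetical session payload: out-of-proc session state (StateServer/SQLServer)
    // serializes stored objects, so the type must be marked [Serializable].
    [Serializable]
    public class ShoppingCart
    {
        public List<string> ProductIds { get; set; }
        public decimal Total { get; set; }
    }

    // Usage in a page or controller:
    //   Session["Cart"] = new ShoppingCart { ProductIds = new List<string>(), Total = 0m };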
There is a need to cache objects to improve the performance of my Azure Function. I tried the .NET ObjectCache (System.Runtime.Caching) and it worked well in my testing (tested with up to a 10-minute cache retention period).
In order to take this solution forward, I have a few quick questions:
What is the recycling policy of an Azure Function? Is there a default? Can it be configured?
What are the cost implications?
Is my approach right, or are there better solutions?
If you know the answers to any of these questions, please help.
Thank you.
Javed,
An out-of-process solution such as Redis (or even using Table storage, depending on the workload) would be recommended.
As a rule of thumb, functions should be stateless, particularly if you're running in the dynamic runtime, where scaling operations (up and down) could happen at any time and your host is not guaranteed to stay up.
If you opt to use the classic hosting, you do have a little more flexibility, as you can enable the "always on" feature, but I'd still recommend the out-of-process approach. Running in the classic mode does have a cost implication as well, since you're no longer taking advantage of the consumption based billing model offered by the dynamic hosting.
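A hedged sketch of what the out-of-process (Redis) route can look like from a function, using the StackExchange.Redis client; the connection string and key names are placeholders:

    using System;
    using StackExchange.Redis;

    public static class RedisCacheSketch
    {
        // Reuse one multiplexer across invocations; creating it per call is expensive.
        private static readonly Lazy<ConnectionMultiplexer> Connection =
            new Lazy<ConnectionMultiplexer>(() =>
                ConnectionMultiplexer.Connect("mycache.redis.cache.windows.net:6380,password=...,ssl=True"));

        public static string GetOrAdd(string key, Func<string> factory, TimeSpan ttl)
        {
            IDatabase db = Connection.Value.GetDatabase();
            string cached = db.StringGet(key);
            if (cached != null)
                return cached;

            string value = factory();
            db.StringSet(key, value, ttl);   // the expiry keeps the cache from growing unbounded
            return value;
        }
    }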
I hope this helps!
If you just need a smallish key-value cache, you could use the file system. D:\HOME (also found in the environment variable %HOME%) is shared across all instances. I'm not sure if the capacities are any different for Azure Functions, but for Sites and WebJobs, Free and Shared sites get 1GB of space, Basic sites get 10GB, and Standard sites get 50GB.
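A rough sketch of that file-based approach (the folder name is made up; %HOME% resolves to the content share that all instances of the app see):

    using System;
    using System.IO;

    public static class FileCacheSketch
    {
        // %HOME% points at the shared content directory on the App Service plan.
        private static readonly string CacheDir =
            Path.Combine(Environment.GetEnvironmentVariable("HOME") ?? ".", "data", "mycache");

        public static void Put(string key, string value)
        {
            Directory.CreateDirectory(CacheDir);
            File.WriteAllText(Path.Combine(CacheDir, key + ".txt"), value);
        }

        public static string Get(string key)
        {
            var path = Path.Combine(CacheDir, key + ".txt");
            return File.Exists(path) ? File.ReadAllText(path) : null;
        }
    }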
Alternatively, you could try running the .NET ObjectCache in production. It may survive multiple calls to the same instance (via the file system or a static in-memory property). Note that this will not be shared across instances, though, so only use it as a best-effort cache.
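As a rough sketch of that per-instance, in-memory approach (the helper name is made up; this is the same System.Runtime.Caching ObjectCache API the question mentions):

    using System;
    using System.Runtime.Caching;

    public static class InstanceCache
    {
        // MemoryCache.Default is per-process: it survives warm invocations on the same
        // instance but is lost on recycle or scale-out, so treat it as best effort only.
        private static readonly ObjectCache Cache = MemoryCache.Default;

        public static T GetOrAdd<T>(string key, Func<T> factory, TimeSpan ttl) where T : class
        {
            var cached = Cache.Get(key) as T;
            if (cached != null)
                return cached;

            var value = factory();
            Cache.Set(key, value, DateTimeOffset.UtcNow.Add(ttl));
            return value;
        }
    }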
Note that both of these approaches pose problems for multi-tenant products, as they could be an avenue for unintended cross-tenant data sharing or even more malicious activities like DNS cache poisoning. You'd want to implement authorization controls for these things just as if they came from a database.
As others have suggested, Functions ideally should be stateless and an out of process solution is probably best. I use DocumentDB because it has time-to-live functionality which is ideal for a cache. Redis is likely to be more performant especially if you don't need persistence across stop/restart.
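For what it's worth, the TTL approach looks roughly like this with the DocumentDB .NET SDK; the database, collection, and property names below are assumptions, and per-document TTL only takes effect once DefaultTimeToLive is enabled on the collection:

    using Microsoft.Azure.Documents.Client;
    using Newtonsoft.Json;

    public class CacheEntry
    {
        [JsonProperty("id")]
        public string Id { get; set; }

        [JsonProperty("value")]
        public string Value { get; set; }

        // Per-document TTL in seconds; DocumentDB deletes the document after it expires.
        [JsonProperty("ttl")]
        public int TimeToLive { get; set; }
    }

    public static class DocDbCacheSketch
    {
        public static async System.Threading.Tasks.Task PutAsync(DocumentClient client, string key, string value)
        {
            var uri = UriFactory.CreateDocumentCollectionUri("cacheDb", "cacheColl");
            await client.UpsertDocumentAsync(uri, new CacheEntry { Id = key, Value = value, TimeToLive = 600 });
        }
    }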
I have a naïve version of a PokerApp running as an Azure Website.
The server stores the state of the tables in its memory (whose turn it is, blind values, cards…), etc.
The problem here is that I don't know how much I can rely on the WebServer's memory to be "permanent". A simple restart of the server would cause that memory to be lost and therefore all the games in progress before the restart would get lost / cause trouble.
I've read about using Table Storage to keep session data and share it between instances, but in my case it's not just a string of text that I want to share but, for example, a Lobby object which contains all the info associated with the games.
This is very roughly the structure of the object I have in memory
After some of your comments, you can see the object that needs to be stored is quite big and is being updated almost constantly. I don't know how well serializing and deserializing is going to work for me here...
Should I consider an Azure VM, which I'm hoping will have persistent memory, instead of a Website?
Or is there a better approach to achieve something like this?
Thanks all for the answers and comments, you've made it clear that one can't rely on local memory when working on the cloud.
I'm going to do some refactoring and optimize the "state" object and then use a caching service.
Two questions come to mind though, and once you throw some light on these I promise I will shut up and accept #astaykov's great answer.
CONCURRENCY AT INSTANCE LEVEL - I have classic thread locks in my app to avoid concurrency problems, so I'm hoping there is something equivalent for those caching services you guys propose?
Also, I have a few timeouts per table (increase blinds, number of seconds the players have to act…). Let's say a user has just folded a hand, he's finished interacting with the state object so I update the cache. While that state object (to which the timers belong) is cached, my timers will stop ticking…
I know I'm not explaining myself very well here but I hope you guys see my point.
I'd suggest using the Azure Redis Cache.
Here is a nice sample of how to build an MVC app with Redis Cache in 15 minutes.
You can, of course, use the Azure Managed Cache instead. Or end up with Azure Tables. And Azure Tables can hold much more than just a string. But I believe the caching solutions would have lower latency in communication.
Either way, your objects have to be serializable. And yes - the objects will get serialized/deserialized on every access. You can do it manually, or let the framework do it for you. From what I've read, Newtonsoft.Json is quite a good and optimized JSON serializer/deserializer.
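A minimal sketch of what that serialize-on-every-access pattern looks like with StackExchange.Redis and Json.NET; the Lobby type and the key naming are just placeholders for your own state object:

    using Newtonsoft.Json;
    using StackExchange.Redis;

    public class LobbyStore
    {
        private readonly IDatabase _db;

        public LobbyStore(ConnectionMultiplexer connection)
        {
            _db = connection.GetDatabase();
        }

        // Every write serializes the whole object; keep the state object as lean as possible.
        public void Save(string lobbyId, Lobby lobby)
        {
            _db.StringSet("lobby:" + lobbyId, JsonConvert.SerializeObject(lobby));
        }

        // Every read deserializes it again - that is the price of moving state out of local memory.
        public Lobby Load(string lobbyId)
        {
            string json = _db.StringGet("lobby:" + lobbyId);
            return json == null ? null : JsonConvert.DeserializeObject<Lobby>(json);
        }
    }

    // Placeholder for the poster's state object.
    public class Lobby { /* tables, players, blinds, ... */ }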
UPDATE
As for a VM running in the cloud - a VM will be restarted sooner or later! The application pool will recycle, planned maintenance will occur, unplanned maintenance will occur, a hard disk will fail, a memory module will fail, an unforeseen disaster will happen.
Only one thing is for sure - if you want your data to survive server crashes, change the way you think about and design software, and take data out of (local) memory. Or just live with the fact that the application may lose state at some point.
Second update - for the clocks
Well, you have to use your imagination and experience here. I would question whether your clocks work reliably in the context of an ASP.NET app anyway (unless all of them are static properties of a static type, which would be a little hell). My approach would be to extend the app heavily to the client side (JavaScript). There are a lot of great frameworks out there - SignalR, AngularJS, KnockoutJS - none of them to be underestimated! By extending your object model to the client, you can maintain the player objects' lifetime on the client (keeping the clock ticking) and send updates from the client to the server for all those events. If you take a look at SignalR, you can keep real-time communication between multiple clients (say, players) and the server. And the server side of SignalR scales out nicely with Azure Service Bus and even Redis.
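A tiny ASP.NET SignalR sketch of that idea (the hub and method names are made up): the server pushes state changes to the clients at a table, while the visible per-table clocks keep running in the browser.

    using System.Threading.Tasks;
    using Microsoft.AspNet.SignalR;

    // Hypothetical hub: each poker table maps to a SignalR group.
    public class TableHub : Hub
    {
        public Task JoinTable(string tableId)
        {
            return Groups.Add(Context.ConnectionId, tableId);
        }

        // Called by a client when a player acts; the server updates the cached state
        // and broadcasts the change, while client-side JavaScript keeps the clock ticking.
        public void Fold(string tableId, string playerId)
        {
            // ... update the Lobby object in the cache here ...
            Clients.Group(tableId).playerFolded(playerId);
        }
    }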
I'd been using RestKit for the last two years, but recently I've started thinking about moving away from this monolithic framework, as it seems to be real overkill.
Here's my pros for moving forward:
There is a big need to use NSURLSession for background fetches, and RestKit has only an experimental branch for the transition to AFNetworking 2.0, with no actual date for when the transition will be finished. (Main reason.)
No need for Core Data support in the networking library, as there is no need for fully functional offline data storage.
Headaches with the new concept of response/request descriptors, as they don't support different parameters in path patterns (e.g., an access token parameter), and there is no way to create an object request operation in one line with a custom descriptor. Here I am losing the features of the object manager as a facade.
I. The biggest loss in leaving RestKit, for me, is the object mapping process.
Could you recommend standalone libraries that you use and have found flexible and stable?
II. As I said, I don't need fully functional storage, but I still need some caching support in places.
I've heard that NSURLCache has become useful in the last OS release.
Did you use it and what's the strategy?
Does it return cached API responses when network connection is down?
III. Does anybody face the same problems?
What solutions have you applied?
Maybe someone could share some advice about the architecture they use across multiple apps with pure AFNetworking?
I. In agreement with others who have commented, AFNetworking + Mantle is a simple and effective way to interact with a Restful API and to replace RestKit's object mapping process that you miss.
II. Answering your caching-support requirement is highly dependent on the context. However, I have found for my recent functional requirements that caching a view model for a particular controller's screen, and only caching reference data returned by APIs, allows me to keep the application logic relatively simple whilst giving the user some continuity. A simple error notification for connectivity issues can be dealt with in a cross-cutting manner.
III. One thought on the architecture relevant to this aspect is to ensure that the APIs the app depends on provide data according to the app experience. This allows your app to focus on what it is good at (a very slick user experience) and moves logic into the APIs, closer to their dependencies such as data. This has the further benefit of reducing the chattiness of the app.
In my XPages web app (XPages is a Lotus Notes technology based on JSF), I need a dynamic map to store session IDs and last-accessed times (in milliseconds). This is implemented as a TreeMap inside an application-scoped bean. Each initial access to the app registers the current session in the TreeMap in this bean. Only a limited number of session entries are permitted in this map, and excess sessions are not registered. The map is also cleared once in a while of old session entries so that new sessions can be registered. I need to know if this is an acceptable approach/use of an application bean. I know I could store the session entries temporarily in an external (non-Lotus Notes) DB, but the company I'm working for doesn't allow me to do so. Will this approach lead to potential problems? If yes, is there another way for me to do this?
This sounds like a perfectly valid use of an application bean, but I'd offer two suggestions. The first is to use a ConcurrentSkipListMap instead of a TreeMap. The former is thread-safe, while the latter is not. When interacting with the lower scopes, thread safety is typically not crucial, as each user can only write to their own session, view, and request scopes, but all users can write to the application scope, so it's conceivable that concurrent writes could occur, especially in applications with heavy user load. The second suggestion is to urge caution with how much information about each session is stored in an application bean. Since the bean will be accessible to all users, it is theoretically possible to inadvertently expose too much information about a user to other users. If you're only storing session name or ID in addition to the last access time, you'll be fine. But if you're actually storing a pointer to each user's session scope, you may accidentally provide a window into data a user has cached that other users shouldn't have access to. I've never actually seen someone get bitten by this, but it's always important to keep this in mind when storing any user-specific information in the application scope.
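As a rough Java sketch of that first suggestion (the bean name and the session limit are illustrative, not from your code):

    import java.io.Serializable;
    import java.util.Iterator;
    import java.util.Map;
    import java.util.concurrent.ConcurrentSkipListMap;

    // Hypothetical application-scoped bean: it stores only session IDs and
    // last-access times, never references to the sessions themselves.
    public class SessionTracker implements Serializable {
        private static final long serialVersionUID = 1L;
        private static final int MAX_SESSIONS = 100; // illustrative limit

        // Thread-safe sorted map, so concurrent requests can register and prune safely.
        private final ConcurrentSkipListMap<String, Long> sessions =
                new ConcurrentSkipListMap<String, Long>();

        public boolean register(String sessionId) {
            if (!sessions.containsKey(sessionId) && sessions.size() >= MAX_SESSIONS) {
                return false; // map is full; excess sessions are not registered
            }
            sessions.put(sessionId, Long.valueOf(System.currentTimeMillis()));
            return true;
        }

        public void pruneOlderThan(long maxAgeMillis) {
            long cutoff = System.currentTimeMillis() - maxAgeMillis;
            for (Iterator<Map.Entry<String, Long>> it = sessions.entrySet().iterator(); it.hasNext();) {
                if (it.next().getValue().longValue() < cutoff) {
                    it.remove();
                }
            }
        }
    }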
Indeed, this is a good use of the application scope. Still, the TreeMap collection isn't the best approach for your situation; there are some problems with it:
Concurrency problems when 2 requests want to modify the data in your container.
If your application must scale horizontally, each node will end up with its own separate TreeMap in its own managed bean.
A good approach would be to use a caching system. There are good cache libraries that will meet these requirements. I've tested Ehcache, and it provides both concurrent data handling and support for deployments with 2 or more nodes; you can also configure an eviction algorithm such as LRU (least recently used) or FIFO (first in, first out) to clear the cache.
Using an external database to handle the session IDs could consume some time getting/setting the data (it could be very little, but it is still a disk I/O operation). For that problem, you can use BigMemory as an external store that lives in RAM, or a NoSQL database like BigTable.
Note: I do not work for Ehcache, nor am I associated with it commercially; I've just tested it and it fulfills my needs. There are other caching libraries, like JBoss Cache, that you can evaluate and use.
I believe that the MVC Mini Profiler is a bit of a godsend.
I have incorporated it in a new MVC project which is targeting the Azure platform.
My question is - how to handle profiling across server (role instance) barriers?
Is this is even possible?
I don't understand why you would need to profile these apps any differently. You want to profile how your app behaves on the production server - go ahead and do it.
A single request will still be executed on a single instance, and you'll get the data from that same instance. If you want to profile services located on a different physical tier as well, that would require different approaches; involving communication through internal endpoints which I'm sure the mini profiler doesn't support out of the box. However, the modification shouldn't be that complicated.
However, if you want to profile physically separated tiers, I would go about it in a different way: profile each tier independently, because that's how I would go about optimizing it. If you wrap the call to your other tier in a profiler statement, you can see where the problem lies and still be able to solve it.
By default the mvc-mini-profiler stores and delivers its results using HttpRuntime.Cache. This is going to cause some problems in a multi-instance environment.
If you are using multiple instances, then some ways you might be able to make this work are:
to change the Http Cache to an AppFabric Cache implementation (or some MemCached implementation)
to use an alternative storage strategy for your profile results (the code includes SqlServerStorage as an example; see the sketch below)
Obviously, whichever strategy you choose will require more time/resources than just the single instance implementation.
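For example, a hedged sketch of switching the profiler's storage in Global.asax; this is from memory of the MiniProfiler v2-era API, so treat the exact namespaces as an assumption, and the connection string is a placeholder:

    using System;
    using System.Web;
    using StackExchange.Profiling;
    using StackExchange.Profiling.Storage;

    public class MvcApplication : HttpApplication
    {
        protected void Application_Start()
        {
            // Store results in a shared SQL database instead of HttpRuntime.Cache,
            // so any instance behind the load balancer can serve a profile it didn't record.
            MiniProfiler.Settings.Storage =
                new SqlServerStorage("Server=tcp:myserver.database.windows.net;Database=Profiler;...");
        }

        protected void Application_BeginRequest()
        {
            MiniProfiler.Start();
        }

        protected void Application_EndRequest()
        {
            MiniProfiler.Stop();
        }
    }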