A large e-commerce site is looking to switch its session cache from the Shared Cache service to a dedicated cache.
It usually runs on 5-6 medium-sized servers. During busy times it runs on 20 medium servers, and during the very busy times it is not unreasonable to see 2000+ requests per second to the site.
Is a co-located cache good enough here, or must the cache be in a dedicated worker role?
Also, must high availability be enabled for the session data? The site relies on session data being present for a good user experience. But the cache is persisted to Azure blob storage, so I'm not sure I fully understand the high-availability option.
Whether to use dedicated roles depends on how many roles you want to run, and on whether it is the memory usage of your web roles that determines when they scale. For example, if your web roles are always pushing memory usage, and it is memory and not CPU that is the trigger for scaling out, then consider using dedicated roles for the cache, as your web roles can then handle the load for longer. If your web roles are CPU intensive, then dedicating memory on each role to the cache may be preferred. You also need to consider that when running dedicated roles, you need more than one role to handle the load and provide availability, so even during non-busy times you will have at least three roles running the cache (but possibly fewer web roles). You may also want to use a dedicated cache if you do lots of deployments or frequent scale-downs, where roles are shut down intentionally and often.
One consideration with co-located caching is that if you had sticky sessions the latency would be lower, as the item would be on the same machine. Unfortunately, the Azure load balancer is round-robin and not sticky at all, so the chance that a request gets back to the machine holding its session is low (1/5 of the time for 5 roles). This means that most of the time the cached item will be fetched from another role in the cluster, so the latency benefit of co-location is lost.
The cache is distributed and in-memory - there is no blob storage involved that I am aware of (except for the 'cluster's runtime state', whatever that is). An item loaded into the cache is made available to the other machines in the cluster from the machine on which it is stored (in memory); when machine A reads an item held on machine B, the item is not also stored on machine A. Cached items are always in memory only, and the cache size is limited by the available memory.
The high availability option copies the item to a separate machine (not to storage), so if one machine fails there is still a copy somewhere. High availability also uses more memory, as each item takes up memory in two different places. The chance of failure may be low enough for your e-commerce app - if an item is not cached (either through failure or expiry) it may be reconstructed from persisted data. If you are, for example, keeping the basket in the cache and not persisting it to storage, you don't want it lost if a role recycles - in which case high availability may be the best option.
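As a rough illustration of that fallback pattern, here is a minimal sketch using the Windows Azure Caching DataCache API; the Basket type, the BasketStore repository and the key format are hypothetical, not part of the original question:

```csharp
using System;
using Microsoft.ApplicationServer.Caching;

// Hypothetical basket type and persisted store, included only to make the sketch self-contained.
public class Basket
{
    public string SessionId { get; }
    public Basket(string sessionId) { SessionId = sessionId; }
}

public class BasketStore
{
    public Basket Load(string sessionId) { /* read from SQL / table storage */ return null; }
}

public class BasketCache
{
    // DataCacheFactory is expensive to create; keep a single instance per process.
    private static readonly DataCacheFactory Factory = new DataCacheFactory();
    private readonly DataCache _cache = Factory.GetDefaultCache();
    private readonly BasketStore _store;

    public BasketCache(BasketStore store) { _store = store; }

    public Basket GetBasket(string sessionId)
    {
        string key = "basket:" + sessionId;

        // Try the distributed in-memory cache first.
        var basket = _cache.Get(key) as Basket;
        if (basket != null) return basket;

        // Cache miss: the item expired, or the role holding it was lost.
        // Rebuild from persisted data; high availability makes this path rarer.
        basket = _store.Load(sessionId) ?? new Basket(sessionId);
        _cache.Put(key, basket, TimeSpan.FromMinutes(30));
        return basket;
    }
}
```

If the basket exists only in the cache (nothing persisted to fall back on), that rebuild step is not available, which is exactly the case where high availability earns its extra memory cost.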
Great answer from @SimonMunro, however in my experience the Azure co-located cache is not fit for production. Our load testing has shown that when a server is recycled it takes an exceptionally long time for the cache to recover. We have coded around this by fetching the data from our database, but then our site grinds to a halt due to the stress on the database. This happens not only when a node is recycled, but also if you scale your cloud services up or down, and even when you perform a VIP swap.
We have performed the same tests using the Azure dedicated cache and have found it to handle a cache worker role recycling with little to no effect on the performance of the site. My recommendation is to use the Azure dedicated cache in all cases if you want your site to perform.
Tech stack:
Azure WebApp
.NET Core 2.1 WebApi
We have around 4k reference data records which are used during auto-suggest lookups, so I was wondering whether I should cache this data on the WebApp or always get it from the database / 3rd-party API.
I know I can use Redis Cache to solve this issue, but I would like to know how an Azure WebApp behaves when it comes to in-memory caching. Will it come under memory pressure, and when? If so, is scaling up the only solution?
We are using IMemoryCache in .NET Core to store the reference data; it expires on a daily basis or when the Azure WebApp is restarted (so the first user gets a delay until the cache is repopulated).
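For reference, the lookup path is roughly like the sketch below; the ReferenceDataService name and the loader delegate are placeholders, not our real code:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Memory;

public class ReferenceDataService
{
    private readonly IMemoryCache _cache;
    private readonly Func<Task<List<string>>> _loadFromDbOrApi; // placeholder loader (DB / 3rd-party API)

    public ReferenceDataService(IMemoryCache cache, Func<Task<List<string>>> loader)
    {
        _cache = cache;
        _loadFromDbOrApi = loader;
    }

    public Task<List<string>> GetReferenceDataAsync()
    {
        // The first caller after a restart (or after expiry) pays the load cost;
        // everyone else reads from the in-process cache.
        return _cache.GetOrCreateAsync("reference-data", entry =>
        {
            entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromDays(1); // daily refresh
            return _loadFromDbOrApi();
        });
    }
}
```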
The data size is in the range of 500 KB - 1 MB and sometimes goes up to 3 MB+.
What is the best approach?
IMemoryCache is not recommended when using WebApps because it is tightly bound to your application instance, so if you try to scale out your app (in case of load surges during the day) your caching mechanism will be broken.
Redis Cache is pretty much a dictionary of key-value pairs.
It is very fast on lookups, but it can be very slow for some other operations, like a get-all-keys that has to run through the whole cache. That can bring your cache server to its knees, so it needs to be handled carefully.
It will not put any significant pressure on the memory consumption of your app; you only need to have a static client. The rest is handled by the Redis server.
If you plan to scale up your application (give more RAM and CPU resources to your one running instance), the IMemoryCache is probably fine.
If you plan to scale out (create multiple instances of your application), which is strongly recommended for stateless applications, then Redis Cache (or any other distributed cache) is the way to go if you need a caching mechanism.
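For example, in .NET Core 2.1 the switch to a distributed cache can be a one-line registration. This is a sketch using the Microsoft.Extensions.Caching.Redis package; the connection string and instance name are placeholders:

```csharp
using Microsoft.Extensions.DependencyInjection;

public partial class Startup
{
    // Registers IDistributedCache backed by Azure Redis.
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddDistributedRedisCache(options =>
        {
            // In a real app, read this from configuration / Key Vault rather than hard-coding it.
            options.Configuration =
                "mycache.redis.cache.windows.net:6380,password=<secret>,ssl=True,abortConnect=False";
            options.InstanceName = "refdata:";
        });

        services.AddMvc();
    }
}
```

Code that consumes IDistributedCache does not change when you later move from one instance to many, which is the main point of scaling out.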
The maximum size for both keys and values is 512 MB, so you are on the safe side regarding your value data size.
Attention
Be sure to use the ConnectionMultiplexer as suggested in the official documentation, because it automatically re-establishes the connection in case it is lost. There was a bug earlier: when the Redis cache server went into maintenance, your calls were redirected to the failover instance but the connection kept failing, so you had to restart your application.
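If you talk to Redis directly with StackExchange.Redis, the usual pattern is a single lazily-created multiplexer shared by the whole app. A minimal sketch follows; the connection string is a placeholder, and abortConnect=False is what lets the client keep retrying instead of failing hard during a failover:

```csharp
using System;
using StackExchange.Redis;

public static class RedisConnection
{
    // One multiplexer per application; it is thread-safe and designed to be shared.
    private static readonly Lazy<ConnectionMultiplexer> LazyConnection =
        new Lazy<ConnectionMultiplexer>(() =>
            ConnectionMultiplexer.Connect(
                // Placeholder connection string.
                "mycache.redis.cache.windows.net:6380,password=<secret>,ssl=True,abortConnect=False"));

    public static ConnectionMultiplexer Connection => LazyConnection.Value;
}

// Usage:
// IDatabase db = RedisConnection.Connection.GetDatabase();
// db.StringSet("ref:country:GR", "Greece", TimeSpan.FromDays(1));
// string value = db.StringGet("ref:country:GR");
```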
So I'm trying to migrate a legacy website from an AWS VM to an Azure VM, and we're trying to get the same level of performance. The problem is that I'm pretty new to setting up sites on IIS.
The authors of the application are long gone and we struggle with the application for many reasons. One of the problems with the site is that when it's "warming up" it pulls back a ton of data to store in memory for the entire day. This involves executing long-running stored procedures and in-memory processing, which means the first load of certain pages takes up to 7 minutes. It then uses a combination of in-memory data and output caching to deliver the pages.
Sessions do seem to be in use. Although the site is capable of recovering session data from the database, that involves some more relatively long-running database operations, so it is better to stick with sessions where possible, which is why I'm avoiding a web garden.
That's a little bit of background; however, my question is really about upping the performance on IIS. When I went through the settings on the AWS box, they had something called NUMA enabled with what appears to be the default settings, and the maximum worker processes set to 0, which seems to enable NUMA. I don't know why they enabled NUMA or whether it was necessary, but I am trying to get as close to a like-for-like transition as possible, and if it gives extra performance in this application we'll probably need it!
On the Azure box I can see the option to set the maximum worker processes to 0, but no NUMA options. My question is whether NUMA is enabled with those default options, or whether there is something further I need to do to enable it.
Both are production-sized VMs, but the one on Azure I'm working with is a Standard D16s_v3 with 16 vCores and 64 GB RAM. We are load balancing across a few of them.
If you don't see the option on the Azure VM, it's because the server is using symmetric multiprocessing and isn't NUMA-aware.
Now to optimize your loading a bit (a rough configuration sketch follows these steps):
HUGE CAVEAT: if you have memory-leak-type issues, don't do this! To make sure you don't, set a private bytes limit of roughly 70% of the memory on the server. If you see that limit get hit and trigger an IIS recycle (that event is logged by default), then you may want to skip the remaining steps. Either that, or dig in with perfmon (or, more easily, iteratively check peak private bytes in Task Manager, where you'll have to add that column in the Details pane).
Change your app pool startup mode to: AlwaysRunning
Change your web app to preloadEnabled=true.
Set an initialization page in your web.config (so that preloading knows what to load).
Edit: I forgot some steps. Make sure your idle timeout is cleared, or set it to 00:00:00 (i.e. disabled).
Make sure you don't have the default recycle time enabled; clear that out.
If you want to get fancy, you can add a loading page and set an HTTP refresh, or do further customizations as described here:
https://learn.microsoft.com/en-us/iis/get-started/whats-new-in-iis-8/iis-80-application-initialization
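Roughly, the steps above end up as the following configuration. This is a hedged sketch of the relevant applicationHost.config and web.config fragments; the pool name, site name, warm-up URL and the ~45 GB private bytes figure (70% of a 64 GB box) are assumptions, not values from your environment:

```xml
<!-- applicationHost.config: keep the pool warm and avoid surprise recycles -->
<applicationPools>
  <add name="LegacyAppPool" startMode="AlwaysRunning">
    <processModel idleTimeout="00:00:00" />   <!-- never idle out -->
    <recycling>
      <!-- clear the default timed recycle; cap private bytes at ~70% of RAM
           (~45 GB on a 64 GB box) as a leak safety net - value is in KB -->
      <periodicRestart time="00:00:00" privateMemory="47000000" />
    </recycling>
  </add>
</applicationPools>

<sites>
  <site name="LegacySite">
    <application path="/" applicationPool="LegacyAppPool" preloadEnabled="true" />
  </site>
</sites>

<!-- web.config: tell Application Initialization which URL(s) to hit on startup -->
<system.webServer>
  <applicationInitialization doAppInitAfterRestart="true">
    <add initializationPage="/warmup" />   <!-- hypothetical warm-up endpoint -->
  </applicationInitialization>
</system.webServer>
```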
According to https://slurm.schedmd.com/quickstart_admin.html#HA high availability of SLURM is achieved by deploying a second BackupController which takes over when the primary fails and retrieves the current state from a shared file system (probably NFS).
In my opinion this has a number of drawbacks. For example, it limits the total number of servers to two, and the second server is probably barely used.
Is this the only way to get a highly available head node with SLURM?
What I would like to have is a classic 3-tiered setup: a load balancer in the first tier which spreads all requests evenly across the nodes in the second tier. This requires the head node(s) to be stateless. The third tier is the database tier where all information is stored or read. I don't know anything about the internals of SLURM, and I'm not sure whether this is even remotely possible.
In the current design, the controller's internal state is held in memory, and Slurm regularly saves it to a set of files in the directory pointed to by the StateSaveLocation configuration parameter. Only one instance of slurmctld can write to that directory at a time.
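For reference, the active/passive pair described in the quickstart guide boils down to a few slurm.conf lines. This is a sketch with placeholder hostnames and paths, using the SlurmctldHost syntax of recent Slurm releases (older releases used ControlMachine/BackupController for the same thing):

```
# slurm.conf (identical on both controllers and on all nodes)
ClusterName=mycluster

# The first SlurmctldHost is the primary controller; the second is the backup
# that takes over when the primary stops responding.
SlurmctldHost=ctl1
SlurmctldHost=ctl2

# State files live on a file system shared by both controllers (e.g. NFS),
# so the backup can pick up the state the primary last saved.
StateSaveLocation=/shared/slurm/state

# How long to wait before assuming the primary controller is dead (seconds).
SlurmctldTimeout=120
```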
One problem with storing the state in a database would be terrible latency in resource allocation, with a lot of synchronisation needed, because optimal resource allocation can only be done with full information. The infrastructure needed to support the same level of throughput Slurm handles now with in-memory state would be very costly compared with the current solution, which involves only bitwise operations on arrays in memory.
Is this the only way to get a highly available head node with SLURM?
You can also have a single MasterController managed with Corosync. But indeed Slurm only has active/passive options available for HA.
In my opinion this has a number of drawbacks. E.g. this limits the total number of servers to two and the second server is probably barely used.
The load on the controller is often very reasonable with respect to current processing power, and the resource allocation problem cannot be trivially parallelised (or made stateless). Often, the backup controller is co-located on a machine running another service. For instance, in small deployments, one machine runs the Slurm primary controller along with other services (NFS, LDAP, etc.), while another is the user login node, which also acts as the secondary Slurm controller.
Recently, on January 3rd, we observed interesting behavior with Redis Cache in Azure. It happened just once, and I'm trying to make sense of it.
We got an alert that CPU went above 80% on the Redis Cache service. Looking closely, we discovered that used memory had dropped from the typical 100 MB to almost 0. Then it was quickly populated back to normal, I assume by normal usage of the application. While it was being repopulated, there was this CPU spike.
It looked as if the cache had been reset. However, this is a production environment with a very limited number of people having access to it, and we are 100% sure that nobody reset it. There were no deployments around that time. I couldn't find anything in the diagnostic logs.
Questions:
1. Any ideas what could have happened?
2. Where can I look, and what should I look for?
Update: We are on the Standard (C1) tier.
No customers reported any problems; I just hate it when I don't understand what is going on.
It depends on which cache tier you are using.
The Basic tier has only one node, with the cache data stored in memory. Any loss of memory on that node will cause the cache data to be lost.
If you are using the Standard tier then there are two nodes, a primary and a secondary, with cached data being asynchronously replicated from primary to secondary. If the primary is offline then client requests are sent to the secondary. In this scenario the chance of cache data loss is low, since it basically requires both nodes to be offline at the same time, which should only happen during hardware failure (Azure ensures that routine maintenance such as OS updates is not performed on both nodes at the same time).
If you are using the Premium tier then the cache data can additionally be backed by persistent storage, so you should not experience cache data loss.
https://azure.microsoft.com/en-us/documentation/articles/cache-faq/#what-redis-cache-offering-and-size-should-i-use has some more information about this.
I would like to create an application that holds a large amount of volatile data in memory. Only a small part of this data needs to be persisted when the host machine shuts down, or in case of maintenance. Outages should be rare; this in-memory data needs to be accessible most of the time, but rare restarts of the service are bearable.
If I were developing for a server, I would create a Windows Service, which runs reliably while the machine is up, and I would persist the small fraction of data in the OnStop() method.
I'm thinking of moving this whole thing to the cloud. The question is whether a Worker Role is similar to a Windows Service from this point of view. Does it run most of the time with rare outages, or is it recycled/restarted from time to time or when it is idle?
Like a Windows Service, a Worker Role is meant for processing background tasks. However, one thing you need to keep in mind is that your worker role can go down at any time, whether because of hardware failure or software updates, so you can't assume it is always available. That's why Windows Azure recommends deploying multiple instances of your application.
What you could do is run multiple instances of your worker role, all of them sharing a common cache where you put the volatile data. Take a look at Windows Azure Caching (http://msdn.microsoft.com/en-us/library/windowsazure/gg278356.aspx), where you can either dedicate some of the memory of a VM (i.e. an instance) for caching or dedicate a full VM to caching. That way your volatile data lives somewhere outside of your worker roles and is available to all instances.
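As a rough sketch of that setup, using the Windows Azure Caching DataCache API against a co-located or dedicated cache; the named cache "volatile", the key scheme and the string payload are placeholders:

```csharp
using System;
using Microsoft.ApplicationServer.Caching;

public class VolatileState
{
    // The factory reads the cache client settings from the role configuration;
    // create it once, as it is relatively expensive.
    private static readonly DataCacheFactory Factory = new DataCacheFactory();

    // "volatile" is a placeholder named cache configured on the cache role(s).
    private readonly DataCache _cache = Factory.GetCache("volatile");

    public void Save(string key, string value)
    {
        // Stored in memory on the cache cluster, visible to every role instance.
        _cache.Put(key, value, TimeSpan.FromHours(1));
    }

    public string Load(string key)
    {
        // Null means the item expired or the instance holding it went away;
        // rebuild it (or fall back to the small persisted subset) in that case.
        return _cache.Get(key) as string;
    }
}
```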