What is an Azure "role"?

I'm reading this article on distributed caching in Azure. Being new to Azure I'm trying to understand what they mean when they use the term "role" in the following context:
In-Role Cache You can deploy an in-role cache on a co-located or
dedicated role in Azure. Co-located means your application is also
running on that VM and dedicated means it’s running only the cache.
Although a good distributed cache provides elasticity and high
availability, there’s overhead associated with adding or removing
cache servers from the cache cluster. Your preference should be to
have a stable cache cluster. You should add or remove cache servers
only when you want to scale or reduce your cache capacity or when you
have a cache server down.
The in-role cache is more volatile than other deployment options
because Azure can easily start and stop roles. In a co-located role,
the cache is also sharing CPU and memory resources with your
applications. For one or two instances, it’s OK to use this deployment
option. It’s not suitable for larger deployments, though, because of
the negative performance impact.
You can also consider using a dedicated in-role cache. Bear in mind
this cache is deployed as part of your cloud service and is only
visible within that service. You can’t share this cache across
multiple apps. Also, the cache runs only as long as your service is
running. So, if you need to have the cache running even when you stop
your application, don’t use this option.
Microsoft Azure Cache and NCache for Azure both offer the in-role
deployment option. You can make Memcached run this configuration with
some tweaking, but you lose data if a role is recycled because
Memcached doesn’t replicate data.
They talk about In-Role cache, cache service, cache VMs and multi-region cache VMs.
I understand cache services to be "server-less", meaning you don't manage the server or cluster (Azure does all of that), as opposed to cache VMs, where you handle deployment of the server and of the cache solution on that server.
How does In-Role cache differ, and what is a "role"? I usually think of a role as the definition of how a user participates in a given system: it establishes the capabilities or permissions that members of that role need to fulfill their duties. This seems different from that.

It's legacy. In Azure Cloud Services, a "role" isn't a security role; it's a unit of deployment: a web role (a VM running IIS to serve your front end) or a worker role (a VM for background processing). You define the roles in your service model and Azure spins up the requested number of VM instances for each one. In the past there were Azure In-Role Cache (a cache hosted inside those role instances) and the Azure Managed Cache Service. The recommendation now is to use Azure Redis Cache:
https://azure.microsoft.com/en-us/blog/azure-managed-cache-and-in-role-cache-services-to-be-retired-on-11-30-2016/
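To make the term concrete, here is a rough sketch of the ServiceDefinition.csdef file a Cloud Service project uses to declare its roles. The role names and sizes are hypothetical; the point is that "role" here names a deployable VM type, not a permission set:

```xml
<!-- Sketch of a Cloud Service model with one web role and one
     worker role. A dedicated in-role cache would be hosted on a
     worker role like "CacheWorker" below. -->
<ServiceDefinition name="MyCloudService"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="WebFrontEnd" vmsize="Small">
    <!-- serves HTTP traffic -->
  </WebRole>
  <WorkerRole name="CacheWorker" vmsize="Medium">
    <!-- background processing, or a dedicated cache host -->
  </WorkerRole>
</ServiceDefinition>
```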

Related

How to handle in memory cache with Azure App service auto scale

I have a web app hosted in Azure App Service. The application has an in-memory cache. I'm planning to enable auto-scale in App Service when server traffic is high.
What will happen to the in-memory cache?
What is the best way to handle this?
Well, you will then have n independent in-memory cache instances. That might be OK, but you might want to look at a distributed cache such as Azure Redis Cache, or another ready-made distributed cache implementation (IDistributedCache) as found here (assuming you use .NET):
Distributed SQL Server cache
Distributed Redis cache
Distributed NCache cache
If you keep using the in-memory cache, each new web app instance will start with an empty cache and it will fill based on the requests to that particular instance.
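A minimal sketch of why this matters, with a toy TTL cache standing in for whatever in-memory cache the app uses (class and key names are illustrative):

```typescript
// Each App Service instance holds its own private in-memory cache.
// After scale-out, a new instance starts cold and misses until it
// has been warmed by its own traffic.
class InMemoryCache {
  private store = new Map<string, { value: string; expires: number }>();

  set(key: string, value: string, ttlMs: number): void {
    this.store.set(key, { value, expires: Date.now() + ttlMs });
  }

  get(key: string): string | undefined {
    const entry = this.store.get(key);
    if (!entry || entry.expires < Date.now()) {
      this.store.delete(key); // expired or absent: a miss
      return undefined;
    }
    return entry.value;
  }
}

// Instance A has served traffic and cached a value.
const instanceA = new InMemoryCache();
instanceA.set("user:42", "Alice", 60_000);

// Auto-scale adds instance B: it starts with an empty cache, so the
// same request is a miss there until B fills its own copy.
const instanceB = new InMemoryCache();
```

A distributed cache replaces the two private `Map`s with one shared store, so instance B would see instance A's writes immediately.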

Alternative solution for Azure Service Fabric distributed cache

I have an application running in Service Fabric with multiple nodes. All the running nodes share some cache data using distributed caching available in Service Fabric.
Now I am looking to move away from Service Fabric due to cost.
What would be a good alternative that also maintains a cache shared between multiple instances (like the distributed cache in Service Fabric)?
I need to host it in an Azure environment.
If you're looking for an alternative managed by Microsoft, the best choice would be Azure Redis Cache.
More info:
https://learn.microsoft.com/en-us/azure/azure-cache-for-redis/
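A hedged sketch of the cache-aside pattern such an app would typically use against Azure Cache for Redis. The Redis client is replaced here by an in-memory stub so the example is self-contained; in production you would swap in a real client exposing the same get/set shape:

```typescript
// Generic key/value interface: a real Redis client would satisfy this.
interface KeyValueStore {
  get(key: string): Promise<string | null>;
  set(key: string, value: string): Promise<void>;
}

// Stand-in for the shared cache, so the sketch runs anywhere.
class InMemoryStub implements KeyValueStore {
  private data = new Map<string, string>();
  async get(key: string) { return this.data.get(key) ?? null; }
  async set(key: string, value: string) { this.data.set(key, value); }
}

// Cache-aside: try the shared cache first; on a miss, load from the
// source of truth and populate the cache for every other instance.
async function getWithCacheAside(
  cache: KeyValueStore,
  key: string,
  loadFromSource: () => Promise<string>
): Promise<string> {
  const cached = await cache.get(key);
  if (cached !== null) return cached;
  const fresh = await loadFromSource();
  await cache.set(key, fresh);
  return fresh;
}
```

Because every instance talks to the same store, a value loaded by one instance is a cache hit for all the others.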

Why wouldn't I disable ARR Affinity on Azure web sites

By default, when deploying a new website, ARR Affinity is enabled so that a given client reaches the same web server instance over and over again. I'm wondering why this is enabled by default and whether I ever need this feature. As I understand it, session storage and the like aren't available on Azure; if you want that kind of behavior, Microsoft recommends using Redis as shared storage. My question is: what are the benefits of using ARR Affinity, and are there any reasons not to disable it? Running without it would also make load balancing more evenly distributed.
As I understand it, session storage and similar, aren't available on Azure
In-memory session storage is just code running in ASP.NET and is available on Azure Websites / Web Apps. If you rely on that functionality, you need to leave the option enabled; otherwise you'd get a different session whenever you hit a different server.
Additionally, if you're using some form of in-memory cache then you'd want the same user to come back to the same server to improve cache hits.
What are the benefits of using ARR Affinity and any reasons not to disable it?
In the PaaS world, where your PaaS VM instances can be restarted for various reasons, it's not a good idea to store session information in memory. However, ARR Affinity is a way to support (with some limitations) those applications that were designed as session-sensitive (a.k.a stateful) apps.
You are right:
Running without it, would make load balancing more evenly distributed as well.
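The contrast above can be sketched as a toy load balancer. The cookie name mirrors the real ARRAffinity cookie, but the routing logic is a simplification for illustration, not Azure's actual implementation:

```typescript
interface Request { cookies: Record<string, string>; }

class Balancer {
  private next = 0;
  constructor(private instances: string[]) {}

  route(req: Request, affinityEnabled: boolean): string {
    const pinned = req.cookies["ARRAffinity"];
    if (affinityEnabled && pinned && this.instances.includes(pinned)) {
      return pinned; // sticky: same instance every time
    }
    // No affinity (or first request): distribute round-robin.
    const instance = this.instances[this.next];
    this.next = (this.next + 1) % this.instances.length;
    req.cookies["ARRAffinity"] = instance; // cookie set on first response
    return instance;
  }
}
```

With affinity on, a client keeps hitting the instance named in its cookie (good for in-memory sessions, worse for load spread); with it off, the cookie is ignored and requests rotate across instances.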
HTH :)

Maintaining Node.js sessions between multiple instances on Azure

I have 3 instances of a Node.js worker role running on Windows Azure. I'm trying to maintain sessions between all the instances.
Azure Queues seem like the recommended approach, but how do I ensure all the instances receive the session, given that the queue deletes a message once a single instance has dequeued it?
Azure table isn't really suitable for my application as the sessions are too frequent and need not be stored for more than 10 seconds.
A queue isn't a good mechanism for session state; it's for message-passing. Once one instance reads a queue message, it's invisible to the other instances while that instance is processing it. Also: what would you do with the message when done with it? Update it and make it visible again? The core issue is that you cannot choose which "session" to read: a queue is almost-FIFO (messages that aren't processed properly reappear), not a key/value store.
To create an accessible session repository, you can take advantage of Azure's in-role (or dedicated role) caching, which is a distributed cache across your role instances. You can use Table Storage too - just simple key/value type of reads/writes. And Table Storage is included in the node.js Azure SDK.
That said: let's go the cache route here. Since your sessions are short-lived, and (I'm guessing) don't take up too much memory, you can start with an in-role cache (the cache shares the worker role RAM with your node code, taking a percentage of memory). The cache is also memcache-compatible, which is easy to access from a node application.
If you take a look at this answer, I show where caching is accessed. You'll need to set up the cache this way, but also set up the memcache server gateway by adding an internal endpoint called memcache_default. Then, point your memcache client class to the internal endpoint. Done.
The full instructions (and details around the memcache gateway vs. client shim, which you'd use when setting up a dedicated cache role) are here. You'll see that the instructions are slightly different if using a dedicated cache, as it's then recommended to use a client shim in your node app's worker role.
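A sketch of the shared, short-lived session store described above, with an in-memory map standing in for the memcache-compatible in-role cache so the example runs anywhere. The class name and ten-second TTL are illustrative (the TTL matches the question's requirement):

```typescript
// Shared session store with expiry. Every role instance querying the
// same store sees a session until its TTL elapses. The clock is
// injectable so the behavior is easy to verify deterministically.
class SessionStore {
  private sessions = new Map<string, { data: string; expires: number }>();
  constructor(private ttlMs: number = 10_000) {}

  put(sessionId: string, data: string, now: number = Date.now()): void {
    this.sessions.set(sessionId, { data, expires: now + this.ttlMs });
  }

  get(sessionId: string, now: number = Date.now()): string | undefined {
    const s = this.sessions.get(sessionId);
    if (!s || s.expires <= now) {
      this.sessions.delete(sessionId); // expired: behaves like a miss
      return undefined;
    }
    return s.data;
  }
}
```

Against the real cache you would issue memcache get/set calls with an expiry instead of touching a local map, but the access pattern is the same.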

Who is faster? AppFabric Cache or Memcached?

We use the MS AppFabric Cache to store the central session state, but we want to know how fast this store is. Does anyone have real-world experience comparing it against a virtual machine running Memcached?
thanks a lot!
I'm not sure I have seen any benchmarks on this, so it's a hard one to answer, but the new Windows Azure Caching service released on June 7th addresses a lot of the performance issues of the distributed, multi-tenanted Azure Caching Service.
Below are extracts from Haishi's blog on Azure Caching; note that Windows Azure Caching now supports Memcached interoperability too:
"The cluster utilizes free memory spaces on your Cloud Service host machines, and the cluster service process can either share the host machines of your web/worker roles, or run on dedicated virtual machines when you use dedicated caching worker roles"
http://haishibai.blogspot.com.au/2012/06/windows-azure-caching-service-memcached.html
Fast performance. The cluster either collocates with your roles, or runs in close proximity based on affinity groups. This helps to reduce latency to minimum. In addition, you can also enable local cache on your web/worker roles so that they get fastest in-memory accesses to cached items as well as notification support.
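The "local cache" idea in that quote can be sketched as a two-tier lookup: a per-role in-memory layer in front of the shared cluster. The cluster is a plain `Map` here purely for illustration; a real deployment would call the caching service over the wire:

```typescript
// Two-tier cache: check in-process memory first, fall through to the
// (simulated) shared cluster on a miss, and keep a local copy.
class TwoTierCache {
  private local = new Map<string, string>();
  constructor(private cluster: Map<string, string>) {}

  get(key: string): string | undefined {
    const hit = this.local.get(key);
    if (hit !== undefined) return hit;      // fastest path: no network hop
    const remote = this.cluster.get(key);   // simulated cluster round trip
    if (remote !== undefined) this.local.set(key, remote);
    return remote;
  }
}
```

This is why the quote pairs local cache with notification support: once a value is copied locally, the role needs change notifications (or a TTL) to learn that the cluster copy has been updated or evicted.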