By default when deploying a new website, ARR Affinity is enabled so that clients keep reaching the same web server instance. I'm wondering why this is enabled by default and whether I ever need this feature. As I understand it, session storage and the like aren't available on Azure; if you want that kind of behavior, Microsoft recommends using Redis as shared storage. My question is: what are the benefits of using ARR Affinity, and are there any reasons not to disable it? Running without it would also make load balancing more evenly distributed.
As I understand it, session storage and the like aren't available on Azure
In-memory session storage is just code running in ASP.NET, and it is available on Azure Websites / Web Apps. If you are relying on this functionality, you need to leave the option enabled; otherwise you'd get a different session whenever a request hit a different server.
Additionally, if you're using some form of in-memory cache, you'd want the same user to come back to the same server to improve cache hit rates.
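For example, an app that keeps per-user state in InProc session only behaves correctly behind ARR Affinity, because each instance holds its own private copy of that state. A minimal illustrative sketch (the controller and session key are made up):

using System.Collections.Generic;
using System.Web.Mvc;

// Relies on InProc session state. With ARR Affinity on, the same instance keeps
// serving this user and the cart survives between requests; with affinity off,
// the next request may land on another instance whose session is empty.
public class CartController : Controller
{
    public ActionResult Add(string productId)
    {
        var cart = Session["Cart"] as List<string> ?? new List<string>();
        cart.Add(productId);
        Session["Cart"] = cart; // stored only in this instance's memory
        return Json(cart.Count, JsonRequestBehavior.AllowGet);
    }
}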
What are the benefits of using ARR Affinity, and are there any reasons not to disable it?
In the PaaS world, where your PaaS VM instances can be restarted for various reasons, it's not a good idea to store session information in memory. However, ARR Affinity is a way to support (with some limitations) applications that were designed as session-sensitive (a.k.a. stateful) apps.
You are right:
Running without it would also make load balancing more evenly distributed.
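For completeness, if you decide your app is genuinely stateless and you don't need stickiness, one way to opt out is to return the Arr-Disable-Session-Affinity header so the front end stops issuing its affinity cookie. A rough global.asax sketch (assumes an ASP.NET app running on Azure Web Apps):

using System;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        // Tells ARR not to set its affinity cookie, so requests are load balanced freely.
        HttpContext.Current.Response.Headers["Arr-Disable-Session-Affinity"] = "true";
    }
}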
HTH :)
We have been developing a RESTful web api using node and MongoDB. For hosting options, we decided to use Azure through BizSpark. We used DocumentDB with protocol support for MongoDB.
The problem now is DocumentDB is consuming all the credit causing a downtime and we haven't started making money yet. We are now considering switching from DocumentDB to MongoDB. The question now becomes, what is the cheapest way to host MongoDB on Azure?
So far on our research, we have found two options:
Using a VM (Linux or Windows)
Using a worker role
Please advise if there are other options, and how easy it would be to switch between these options at a later stage.
You can use the Azure pricing calculator to compare estimates for DocumentDB and a VM with the settings your company needs, and see which one is cheaper.
If you are using BizSpark, remember that you have five accounts across which you can distribute your costs to optimize your spend.
Personal recommendations (subjective view):
With the PaaS solution (DocumentDB) you get full functionality out of the box: there is nothing to set up, it scales very easily, and it plugs into very powerful tools like Power BI out of the box.
With the IaaS solution (VMs) you have to install, maintain, and create all the connection settings yourself. If you want to scale, you have to be more hands-on, since you scale through more VMs, traffic managers, and a more robust architecture. If this is the path you are taking, I would recommend running something like Docker containers inside the VM and using them to manage all of this.
I'm thinking about setting up 2 web VMs with a load balancer and availability set, and another VM for SQL Server (I'm not sure if I can set up an availability set for SQL Server as well - SQL Server Express / Standard?).
My main problem is how to keep both web servers in sync (I'd prefer not to use DFS or keep the files in more than one location)...
Another issue is user-uploaded content that I want available on both web servers (I also wonder if I can direct cache objects to be saved to a specific storage disk).
So I was thinking of setting up a storage account and attaching it to both web VMs for user-uploaded content and images, while each server still serves its own separate web application with shared access to the content files...
Is that a good idea? I understand that Azure Storage is a virtual disk that is supposed to be highly available and fast - is that true?
Do I take a major performance hit if I use the same storage disk from three different VMs (is that even possible)?
UPDATE:
I found out that because I'm using the BizSpark program I can't really connect more than one server and share resources between them (unless I pay extra for it), so this has become irrelevant for now.
Also, I'm talking about ASP.NET, but that shouldn't matter.
Azure Files enables you to run multiple IIS instances against a single file share, so you don't have to worry about replicating files across multiple servers - this is definitely an option. See Getting Started with File Storage for more information.
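As a rough sketch of what the shared-content piece could look like from code, using the WindowsAzure.Storage client library (the connection string, share name, and file names below are placeholders, and the exact upload overloads vary by SDK version):

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.File;

class SharedUploads
{
    static void Main()
    {
        // Both web servers talk to the same Azure Files share, so neither VM
        // needs a local copy of the user-uploaded content.
        var account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>");
        var share = account.CreateCloudFileClient().GetShareReference("usercontent");
        share.CreateIfNotExists(); // one-time setup

        var root = share.GetRootDirectoryReference();
        var file = root.GetFileReference("avatar-123.png");
        file.UploadFromFile(@"C:\temp\avatar-123.png"); // visible to every VM using the share
    }
}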
We need to enable 25+ performance counters in Windows Azure web roles. I'm thinking of RDP'ing in and enabling them one by one, but this could take a long time and also isn't guaranteed to persist if we scale up.
Could someone please tell me whether it's possible to automate this process? Preferably PowerShell, but other solutions are OK as well.
You don't need to enable performance counters by RDP'ing into Windows Azure machines, because the counters are published by Windows regardless.
However, I think what you're really asking is how to capture the 25+ performance counters into the Azure Diagnostics store?
If that is the case, you will need to:
1) Enable Azure Diagnostics on your web roles. This must be done before deployment. It is a best practice and nearly everyone does it (I wish Microsoft would just do this for every role without an explicit configuration setting).
2) There are multiple ways to configure the capture of performance counters into the diagnostics store:
a) using diagnostics.wadcfg file http://msdn.microsoft.com/en-us/library/gg604918.aspx (you will need to redeploy your app with that file)
b) using PowerShell (although I've never done it myself) http://michaelwasham.com/2011/09/19/windows-azure-diagnostics-and-powershell-performance-counters/ or http://www.davidaiken.com/2011/10/18/how-to-easily-enable-windows-azure-diagnostics-remotely/
c) using in-code instrumentation (you'll need to re-upload your app every time you change which counters you want enabled) http://www.codeproject.com/Articles/303686/Windows-Azure-Diagnostics-Performance-Counters-In (I don't recommend in-code configuration, because it is too brittle; a minimal sketch appears below)
d) using 3rd party tools like Cerebrata Diagnostics Manager or AzureWatch
e) using the Azure Service Management API in conjunction with the Azure Diagnostics API to get at the individual instance configuration and update it (this is how the third-party tools and PowerShell do it)
If you use PowerShell, the management API directly, or a tool like Cerebrata, your configuration will "stick" for the life of the deployment; once you upload a new version of the app, the configuration will be lost.
Using diagnostics.wadcfg, in-code instrumentation, or AzureWatch, your configuration will persist across re-uploads of the app.
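To make option (c) concrete, here is a minimal sketch of the classic in-code approach from a role's OnStart; the counter name, sample rate, and connection-string setting name are just examples, and as noted above this means redeploying whenever the counter list changes:

using System;
using Microsoft.WindowsAzure.Diagnostics;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        var config = DiagnosticMonitor.GetDefaultInitialConfiguration();

        // Repeat this block (or loop over a list) for each of the 25+ counters.
        config.PerformanceCounters.DataSources.Add(new PerformanceCounterConfiguration
        {
            CounterSpecifier = @"\Processor(_Total)\% Processor Time",
            SampleRate = TimeSpan.FromSeconds(30)
        });

        // Push captured samples to the diagnostics storage account every minute.
        config.PerformanceCounters.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);

        DiagnosticMonitor.Start(
            "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);
        return base.OnStart();
    }
}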
HTH
I want to know how Windows Azure Websites manages session state across multiple instances. There's a lot of content on the Internet about how to share session state across multiple instances using cloud services, but for Websites I couldn't find a definitive answer.
The question "How does Windows Azure Websites handle session?" doesn't have an objective answer yet. The accepted answer has a good suggestion, but you have to watch a video that is more than an hour long.
Do you know how to do it? Can I just use InProc session state and have Windows Azure manage it across all instances automatically?
Thank you.
Looks like the options are:
Windows Azure Cache Service [1]. This is the new caching service that Microsoft is offering. Pricing starts at $12.50/month for 128 MB of cache (this is preview pricing with a 50% discount). According to [2], the latency for accessing the cache is around 1 ms.
SQL Azure. According to Angshuman Nayak [3], this is relatively cheap, especially if you are already using SQL Azure for something else. As a drawback he mentions potential performance issues, since you are normally using a shared database. You also need to take care of cleaning up expired sessions.
Table Storage. Angshuman [3] also provides instructions on how to use Table Storage to store sessions. He mentions that the performance is not as good as the other solutions, but does not provide any numbers. The good thing about Table Storage is the pricing: since it is pay-as-you-go, there is no fixed monthly fee.
Based on this, it seems that the Azure Cache Service is the way to go, unless the price is an issue.
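If you do go with the Cache Service, the session integration itself is wired up through its session state provider in web.config; the sketch below only shows direct cache access with the Microsoft.ApplicationServer.Caching client so you can see the API shape (the key and value are placeholders, and the endpoint and access key come from the dataCacheClients config section):

using Microsoft.ApplicationServer.Caching;

class CacheSketch
{
    static void Main()
    {
        // The parameterless factory reads the dataCacheClients section from config.
        var cache = new DataCacheFactory().GetDefaultCache();

        cache.Put("user:42:profile", "serialized profile goes here");
        var profile = (string)cache.Get("user:42:profile");
    }
}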
[1] http://www.windowsazure.com/en-us/pricing/details/cache/
[2] http://weblogs.asp.net/scottgu/archive/2013/09/03/windows-azure-new-distributed-dedicated-high-performance-cache-service-more-cool-improvements.aspx
[3] http://blogs.msdn.com/b/cie/archive/2013/05/17/session-state-management-in-windows-azure-web-roles.aspx
InProc SessionState is not supported on Azure websites. You will have to use an external session state provider. This article shows external options and this article shows how to use SQL Azure as a session state provider.
In Azure you can make use of the Redis Cache to handle session state.
Install the NuGet package RedisSessionStateProvider.
Set your sessionState mode to Custom in web.config and point customProvider at the Redis session state provider.
You can find more information here.
We use the MS AppFabric Cache to store centralized session state, but we want to know how fast this store is. Has anyone compared it in practice with running Memcached on a virtual machine?
thanks a lot!
I'm not sure I have seen any benchmarks on this, so it's a hard one to answer, but the new Windows Azure Caching service released on June 7th addresses a lot of the performance issues of the distributed multi-tenant Azure Caching Service.
Below are extracts from Haishi's blog on Azure Caching; note that Windows Azure Caching now supports Memcached interoperability too...
"The cluster utilizes free memory spaces on your Cloud Service host machines, and the cluster service process can either share the host machines of your web/worker roles, or run on dedicated virtual machines when you use dedicated caching worker roles"
http://haishibai.blogspot.com.au/2012/06/windows-azure-caching-service-memcached.html
Fast performance. The cluster either collocates with your roles, or runs in close proximity based on affinity groups. This helps to reduce latency to minimum. In addition, you can also enable local cache on your web/worker roles so that they get fastest in-memory accesses to cached items as well as notification support.