Windows Azure Websites session state - Azure

I want to know how Windows Azure Websites manages session state across multiple instances. There's a lot of content on the Internet about how to share session state across multiple instances using Cloud Services, but for Websites I couldn't find a definitive answer.
The question "How does windows azure websites handle session?" does not have an objective answer yet. The accepted answer has a good suggestion, but it requires watching a video that is more than an hour long.
Do you know how to do it? Can I just use InProc session state and have Windows Azure manage it across all instances automatically?
Thank you.

Looks like the options are:
Windows Azure Cache Service [1]. This is the new caching service that Microsoft is offering. Pricing starts at $12.50/month for 128 MB of cache (this is preview pricing with a 50% discount). According to [2], the latency for accessing the cache is around 1 ms.
SQL Azure. According to Angshuman Nayak [3], this is relatively cheap, especially if you are already using SQL Azure for something else. As a drawback he mentions potential performance issues, since you are normally using a shared database. You also need to take care of cleaning up expired sessions.
Table Storage. Angshuman [3] also provides instructions on how to use Table Storage to store sessions. He mentions that the performance is not as good as with the other solutions, but does not provide any numbers. The good thing about Table Storage is the pricing: since it is pay-as-you-go, there is no fixed monthly fee.
Based on this, it seems that the Azure Cache Service is the way to go, unless price is an issue.
[1] http://www.windowsazure.com/en-us/pricing/details/cache/
[2] http://weblogs.asp.net/scottgu/archive/2013/09/03/windows-azure-new-distributed-dedicated-high-performance-cache-service-more-cool-improvements.aspx
[3] http://blogs.msdn.com/b/cie/archive/2013/05/17/session-state-management-in-windows-azure-web-roles.aspx

InProc session state is not supported on Azure Websites; you will have to use an external session state provider. This article shows the external options, and this article shows how to use SQL Azure as a session state provider.
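For the SQL Azure route, a minimal web.config sketch, assuming the ASP.NET Universal Providers (the System.Web.Providers NuGet package, which works against SQL Azure) and a connection string named DefaultConnection pointing at your database:

```xml
<!-- Assumes the System.Web.Providers (ASP.NET Universal Providers) NuGet package
     and a "DefaultConnection" connection string that points at SQL Azure. -->
<sessionState mode="Custom" customProvider="DefaultSessionProvider">
  <providers>
    <add name="DefaultSessionProvider"
         type="System.Web.Providers.DefaultSessionStateProvider, System.Web.Providers"
         connectionStringName="DefaultConnection" />
  </providers>
</sessionState>
```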

In Azure you can make use of the Redis Cache to handle session state:
1. Install the NuGet package RedisSessionStateProvider.
2. Set your sessionState mode to Custom in web.config, with the custom provider pointing at the Redis provider (see the sketch below).
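A minimal web.config sketch putting both steps together; the provider type comes from the Microsoft.Web.RedisSessionStateProvider NuGet package, and the host name and access key are placeholders for your own cache:

```xml
<!-- Provider type from the Microsoft.Web.RedisSessionStateProvider NuGet package.
     host and accessKey are placeholders for your own Azure Redis Cache. -->
<sessionState mode="Custom" customProvider="RedisSessionProvider">
  <providers>
    <add name="RedisSessionProvider"
         type="Microsoft.Web.Redis.RedisSessionStateProvider"
         host="mycache.redis.cache.windows.net"
         port="6380"
         accessKey="your-access-key"
         ssl="true" />
  </providers>
</sessionState>
```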
You can find more information here.

Related

Dedicated or shared Storage Account for Azure Function Apps with names shorter than 32 characters

Short Version
We want to migrate to v4 and our app names are shorter than 32 characters.
Should we migrate to dedicated Storage Accounts or not?
Long Version
We use Azure Functions v3. From the start, one Storage Account was shared between 10+ Azure Function Apps. It may just have been luck, but the names are shorter than 32 characters and that is not going to change. We are not using slots, as they were initially not recommended and were then made generally available with no adoption time or recommendation.
Pre-question research revealed this question, but it looks more related to Durable Functions. Another question looks more on point but is outdated, and its accepted answer states that one Storage Account can be used.
Firstly, the official documentation has a page with storage considerations which states (props to ijabit for pointing to it):
It's possible for multiple function apps to share the same storage account without any issues. For example, in Visual Studio you can develop multiple apps using the Azure Storage Emulator. In this case, the emulator acts like a single storage account. The same storage account used by your function app can also be used to store your application data. However, this approach isn't always a good idea in a production environment.
Unfortunately it does not elaborate further on the rationale behind the last sentence.
The page with best practices for Azure Functions mentions:
To improve performance in production, use a separate storage account for each function app. This is especially true with Durable Functions and Event Hub triggered functions.
To my greater confusion, there was a subsection on this page that said "Avoid sharing storage accounts", but it was later removed.
This issue is somewhat superficially related to the question, as it mentions the recommendation in the thread.
Secondly, we had contacted Azure Support about issues unrelated to this question, and two different support engineers shared different opinions on this one: one said that we can share a Storage Account among Function Apps, the other said that we should not. So the guidance from support was mixed.
Thirdly, we want to migrate to v4 and in the migration notes it is stated:
Function apps that share storage accounts will fail to start if their computed hostnames are the same. Use a separate storage account for each function app. (#2049)
Digging deeper into the topic, the only issue is the collision of the function host names that are used to obtain the lock, which was known as early as Oct 2017. One can follow the thread and see how in Jan 2020 the recommendation was made to update the official Azure naming recommendation, but that only happened in late Nov 2021. I also see that a non-intrusive solution, i.e. one without renaming, is to manually set the host ID. The two arguments raised by balag0 are: single point of failure and better isolation. They sound good from the perspective of cleaner architecture, but pragmatically I personally find Storage Accounts reliable, especially if you read about redundancy or consider that MS is dog-fooding them for other services. So to me it looks more like a backbone of Azure.
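For reference, the explicit host ID is set through an app setting. A sketch with the Azure CLI; the app name, resource group, and host ID below are placeholders (host IDs must be 32 characters or fewer and unique within the storage account):

```
# Hypothetical names; AzureFunctionsWebHost__hostid overrides the computed host ID.
az functionapp config appsettings set \
  --name my-function-app \
  --resource-group my-resource-group \
  --settings AzureFunctionsWebHost__hostid=myapp-prod-01
```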
Finally, as we want to migrate to v4, should we migrate to dedicated Storage Accounts or not?
For the large project with 30+ Azure Functions that I work on, we have gone with dedicated Storage Accounts. The reason is Azure Storage account service limits. As the docs mention, this really comes into play with Durable Functions, but it can also come into play in other high-volume scenarios. There's a hard limit of 20k requests per second for a Storage Account. Hit that limit and requests will fail, returning HTTP 429 responses, which means your Azure Function invocation will fail too. We're running some high-volume scenarios and ran into this.
It can also cause problems with Durable Functions if two function apps have the same task hub ID in host.json. This causes a collision when the Durable Task Framework does its internal bookkeeping using Storage Queues and Table Storage, and there's lots of pain and agony as things fail in spectacular fashion.
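If apps must share an account, one mitigation is to give each app its own task hub name in host.json; a sketch with the v2 schema, where the hub name is a placeholder that just has to be unique per app:

```json
{
  "version": "2.0",
  "extensions": {
    "durableTask": {
      "hubName": "OrdersAppTaskHub"
    }
  }
}
```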
Note that the 20k requests per second service limit can be raised with a support ticket to Azure. If approved, the max they'll raise it to is 50k requests/second.
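Separate accounts are the real fix, but if you want the SDK to ride out occasional throttling, here is a sketch of configuring exponential backoff with the Azure.Storage.Blobs client (the connection string is a placeholder):

```csharp
using System;
using Azure.Core;
using Azure.Storage.Blobs;

// Exponential backoff so transient 429 throttling responses are retried
// instead of immediately failing the function invocation.
var options = new BlobClientOptions();
options.Retry.Mode = RetryMode.Exponential;
options.Retry.MaxRetries = 5;
options.Retry.Delay = TimeSpan.FromSeconds(1);
options.Retry.MaxDelay = TimeSpan.FromSeconds(30);

var client = new BlobServiceClient("<storage-connection-string>", options);
```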
So avoid the potential headaches and go with a Storage Account per Function.

Azure Split/Merge Service, is it still relevant?

I have managed to get the C# and DB setup working using ListMappings. However, when I try to deploy the split/merge tool to Azure Cloud Services (classic), it states 'The requested VM tier is currently not available in East US for this subscription. Please try another tier or deploy to a different location.' We tried a few other regions with the same result. Do you know if there is a workaround or an updated version? Is the split/merge service even still relevant? Has anyone got this service to run on Azure lately?
https://learn.microsoft.com/en-us/azure/azure-sql/database/elastic-scale-overview-split-and-merge
The answer to the question of whether it is still relevant is, in my opinion... no. Split/merge is no longer relevant with the maturation of elastic pools. Elastic pools with one database per tenant seem the sustainable way to implement multi-tenancy with legacy code. The initial plan was to add keys to each of our tables to have multiple tenants per database. Elastic pools give us the same flexibility without having to make breaking changes to our existing code.
Late post here, but we are implementing Elastic Scale for a client to split ~50 clients into a database-per-tenant model. I don't think the split/merge tool will be used over the long term, just for the initial data migration from one database to many shards, but it has been handy for that purpose. We are using the Elastic Scale SDK to allow a single API to route queries to the appropriate shard(s) based on the sharding key. Happy to compare notes with you if you are still working on this.
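For anyone evaluating the same approach, the data-dependent routing piece looks roughly like this with the Elastic Database client library (Microsoft.Azure.SqlDatabase.ElasticScale.ClientLibrary); the map name, tenant key, and connection strings below are placeholders:

```csharp
using System.Data.SqlClient;
using Microsoft.Azure.SqlDatabase.ElasticScale.ShardManagement;

// Load the shard map from the shard map manager database.
ShardMapManager smm = ShardMapManagerFactory.GetSqlShardMapManager(
    "<shard-map-manager-connection-string>",
    ShardMapManagerLoadPolicy.Lazy);

ListShardMap<int> shardMap = smm.GetListShardMap<int>("TenantShardMap");

// Route a query to whichever shard owns tenant 42.
using (SqlConnection conn = shardMap.OpenConnectionForKey(
    42, "<shard-user-connection-string>", ConnectionOptions.Validate))
{
    var cmd = new SqlCommand("SELECT COUNT(*) FROM Orders", conn);
    int orderCount = (int)cmd.ExecuteScalar();
}
```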

Azure and VS2017 subscription

I found that a $50/month Azure credit is available with VS2017. I have never worked with Azure before, but I have a little experience with Cloud Foundry. It seems a good chance for me to try Azure now. So, my question is: what can I get with $50/month? Is it enough for a small ASP.NET Core website, a database, and some services in Docker containers, just to play with?
Thanks
Yes, you can play around with these services.
https://azure.microsoft.com/en-us/pricing/member-offers/credit-for-visual-studio-subscribers/
Credit for use on a wide range of Azure services. Virtual Machines, Storage, SQL Databases, Containers, Cognitive Services, Functions, Data Lake, and much more.
Azure dev/test pricing helps your credit last longer.
Exclusive access to Windows 10 virtual machine images.
Easy sign-up. No credit card required.
No surprises. A spending limit protects you from overage charges. You can remove the spending limit when you’re ready.

Alternative to running a Windows service in the Azure cloud

We currently have a Windows service which sends notification emails to users after doing some processing on a SQL database. It runs once a day.
We want to move this to the Azure cloud. One option is to put it on an Azure VM as-is, but I am looking for a better solution.
I have read about recurring and on-demand WebJobs, but I am not sure whether they are the best solution.
Also, is there any possibility of updating the service configuration in App.config without re-deploying the service code to the cloud? I mean, can we manage configuration from the Azure portal?
Thanks in advance.
Update 11/4/2016
Since this was written, there are 2 additional features available in Azure that are both excellent choices depending on what functionality you need:
Azure Functions (which were based on the WebJobs described below): serverless code that can be triggered/invoked in various ways, and has scaling support (see the timer-trigger sketch after this list).
Azure Service Fabric: Microservice platform, with support for actor model, stateful and stateless services.
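For the daily-email scenario in this question, a timer-triggered Azure Function is a natural fit. A minimal sketch using the in-process model; the class and function names are placeholders, and the CRON expression fires daily at 08:00 UTC:

```csharp
using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class DailyNotifications
{
    // Runs once a day at 08:00 UTC; do the database work and send emails here.
    [FunctionName("DailyNotifications")]
    public static void Run([TimerTrigger("0 0 8 * * *")] TimerInfo timer, ILogger log)
    {
        log.LogInformation($"Daily notification run started at {DateTime.UtcNow:O}");
        // ... query the SQL database, compose and send notification emails ...
    }
}
```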
You've got 3 basic options:
Windows service running on VM
WebJob
Cloud service
There's a lot of information out there on the tradeoffs between these choices, but here's a brief summary.
VM - Advantages: you can move your service basically as-is without having to change much or any of your code. VMs also have the easiest connectivity with other resources in Azure (blob storage, virtual networks, etc). The disadvantage is you're giving up all of the PaaS advantages and are still stuck managing your own VM infrastructure.
WebJob - Advantages: multiple invocation options (queues, blobs, manual, queue receive loops, continuous while-loop style, etc), including scheduled runs (which would cover your case). Easy to deploy (can go with the website, as a console app, automatically through Kudu), has some built-in logging in the Azure portal - and yes, to answer your question, you can alter the configuration in the portal itself for connection strings and app settings.
Disadvantages - you'll need to update code, you don't have access to underlying resources (if you need that), and - more something to keep in mind than a disadvantage - it uses the same resources as the web app it's deployed with.
WebJobs are the newest of these options, but appear to have active development going on to increase their functionality and usefulness.
Cloud Service - like a managed VM, has some deployment options, and access to the underlying VM if needed. Would require some code changes from your existing service.
There's nothing you've mentioned in your use case that makes me think a WebJob shouldn't be the first thing you try.
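To make the WebJob option concrete, a minimal sketch of a scheduled (triggered) WebJob for the daily-email case - just a console app deployed with a settings.job file; the schedule and processing logic are stand-ins:

```csharp
using System;

// Deployed as a triggered WebJob. A settings.job file next to the exe,
// e.g. { "schedule": "0 0 8 * * *" }, runs it daily at 08:00 (UTC by default).
// Connection strings and app settings can be changed in the Azure portal
// without redeploying, which answers the App.config part of the question.
public static class Program
{
    public static void Main()
    {
        Console.WriteLine($"Notification run started at {DateTime.UtcNow:O}");
        // ... read configuration, process the SQL database, send emails ...
    }
}
```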
(Edit: Troy Hunt has a great and relatively recent blog post illustrating most of the points I've mentioned about Web Jobs above: http://www.troyhunt.com/2015/01/azure-webjobs-are-awesome-and-you.html)

Is Azure Redis Cache used automatically by some other service? I have a charge and am not sure why

I am using ASP.NET 4.5, MVC 3 on Azure Websites. I do use Session Variables and some TempData variables.
I note that I am being charged for Redis Cache - not a huge amount. However, since I never set it up, I am puzzled. Is it used to underpin other mechanisms? I use "inproc" session variables, and I understand that the load balancer will implement sticky sessions if multiple instances exist?
I have raised a ticket for MS, but decided to ask the question here as well.
Thanks.
Did you create the Redis Cache account? It doesn't matter how much you use it; if you've created the account, you're on the hook to pay for it.
Azure Redis Cache is supported via the new Azure portal - http://portal.azure.com
Do you see any Redis caches listed there?
Redis is not created automatically by any other service, so you should not be charged unless you manually created the cache.
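If you prefer the command line to the portal, the modern Azure CLI (which postdates this answer) can also list every cache in the subscription:

```
az redis list --output table
```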
