I host a web application on Azure on the Basic plan, where only one instance runs. After publishing my app to the cloud, I noticed an automatically generated cookie called ARRAffinity. Is the ARRAffinity cookie always generated, even with a single instance of the hosted web application? Or does Azure always generate the cookie regardless of how many instances there are?
It's always there by default.
There are ways to Disable Session affinity cookie (ARR cookie) for Azure web apps
Here's an explanation: Azure: ARRAffinity makes affinity cookies!
Affinity cookies are used to help clients stay with a particular instance of a web app or web site in Azure. They exist because we strive for statelessness but do not always achieve it: a user must stay on the particular instance they are using until they break state, at which point things are saved.
This can be disabled without adding the Arr-Disable-Session-Affinity header that was mentioned in the blog post in the other answer. Azure has an option to turn this off under Settings > Configuration > General Settings > Platform settings.
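The same toggle can be scripted. A minimal sketch with the Azure CLI, assuming placeholder app and resource group names:

```shell
# Turn off the ARR affinity cookie for an App Service.
# <my-app> and <my-rg> are placeholders for your app name and resource group.
az webapp update \
  --name <my-app> \
  --resource-group <my-rg> \
  --client-affinity-enabled false
```

After this, new responses should no longer carry a `Set-Cookie: ARRAffinity=...` header.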
Related
I was reading about ARR affinity in Azure and saw that stateful apps should have ARR affinity enabled:
I assume this is so the same VM is used for future requests for that client.
But is it even possible to store state on VM?
I thought we don't have access to those low level details in our app service.
What would be an example of a stateful app in Azure where you'd be able to store state on the VM?
A stateful service makes use of some existing data, which means the service has a memory: its output is not purely a function of its input. In order to function, the service needs to retain and access some set of stored information over a period of time.
Azure App Service enables you to easily create enterprise-ready web and mobile apps for any platform or device and deploy them on a scalable cloud infrastructure.
I suggest you refer to the articles below for a more detailed understanding:
Common web application architectures
Scalable web application
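To make the "state on the VM" idea concrete, here is a minimal ASP.NET Core sketch (modern minimal-hosting style, not taken from any particular app) that keeps session state in the instance's own memory. With two instances and no ARR affinity, the `/cart` request can land on an instance that never saw the `/add` request:

```csharp
// Sketch: in-memory (per-instance) session state.
// AddDistributedMemoryCache stores session data in this VM's RAM only,
// so a second scaled-out instance has no copy of it.
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddDistributedMemoryCache();
builder.Services.AddSession();

var app = builder.Build();
app.UseSession();

app.MapGet("/add", (HttpContext ctx) =>
{
    ctx.Session.SetString("cart", "1 widget"); // state lives on this VM
    return "added";
});
app.MapGet("/cart", (HttpContext ctx) =>
    ctx.Session.GetString("cart") ?? "(empty)"); // "(empty)" on the other VM

app.Run();
```

Swapping `AddDistributedMemoryCache` for a Redis-backed `IDistributedCache` is the usual way to make such an app safe to run without affinity.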
What is ARR affinity:
ARR identifies the user by assigning them a special cookie (called an affinity cookie), which allows the service to route subsequent requests from that user to the same instance they were using. This means that with ARR affinity enabled, a client is tied to a specific web worker until the session finishes.
Azure Web Apps have the ARR affinity cookie enabled by default; it pairs a client request with a specific server. However, Azure Web Apps is a stateless platform, and in an environment where the website is scaled across multiple instances, the ARR affinity cookie binds each client to a specific server. It's advisable to avoid ARR cookies in a scaled environment where multiple instances serve your application's requests.
Disabling ARR cookies is a sustainable resolution for affinity-related issues in scaled environments, because these cookies depend on the relationship with the specific worker machine they are paired with.
Also, this is good for legacy application compatibility as they may not have been designed with load balancing in mind.
But if your app has been designed for load balancing (i.e. your app is stateless, session state stored elsewhere), you should disable this.
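For apps where the portal toggle isn't an option, the `Arr-Disable-Session-Affinity` response header does the same job. A sketch for an IIS-hosted app's web.config:

```xml
<!-- Tell ARR not to set (or honor) the affinity cookie for this app's responses -->
<configuration>
  <system.webServer>
    <httpProtocol>
      <customHeaders>
        <add name="Arr-Disable-Session-Affinity" value="true" />
      </customHeaders>
    </httpProtocol>
  </system.webServer>
</configuration>
```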
I have a website which is developed using Kentico 10 and hosted on Azure.
It has Azure Application gateway, scaled out to 2 instances and webfarms are also enabled. Its using Azure Redis cache as well
Today I disabled ARRAffinity in the app service (via Azure Portal) and kept Cookies based affinity enabled in the application gateway.
Still, I can see the app is using the Redis cache; however, when I add an item to the shopping cart and then click "view cart", it (randomly) shows an empty shopping cart, even though our shopping cart is stored in session.
So I believe this is something related to sticky session issue even with the Redis cache.
Since I've disabled ARRAffinity in the app service, is it also required to disable cookie-based affinity in the application gateway's HTTP settings?
If not, anything I've missed?
If you are using an App Service scaled out to 2 instances, the Application Gateway does not provide load balancing across those instances.
The load balancing is handled by the App Service itself. If you want to control that, a Traffic Manager profile might help.
So you would still need to leave ARR on in the app service.
I have a MS Azure ASP.NET Core 2.x WebApp that uses standard Identity based authentication.
I would like to restrict access to the website to only a certain set of people.
I need the entire website to be hidden unless you pass some kind of authorization gate. The requirement, however, is that the gate should not interfere with the standard ASP.NET Core Identity authentication/authorization mechanisms.
Essentially, we have a website that needs testing by a distributed team, but we don't want the site to be visible to non-members.
TIA!
Your best bet is going to be using the network-level control that comes with an ILB App Service Environment. If that's too pricey, try the IP restrictions feature on the public app service.
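The IP restrictions route can be scripted as well. A sketch with the Azure CLI; the rule name, CIDR range, and app/resource-group names are placeholders:

```shell
# Allow only the test team's address range. Once at least one Allow rule
# exists, all other traffic is denied by the implicit deny-all rule.
az webapp config access-restriction add \
  --name <my-app> \
  --resource-group <my-rg> \
  --rule-name allow-test-team \
  --action Allow \
  --ip-address 203.0.113.0/24 \
  --priority 100
```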
How can I make sure users get redirected to the same server in a load-balanced web app?
Your answer will be very helpful
Traffic Manager directs the user to the appropriate region, but assuming you have your web app scaled out to at least two instances, ARR (Application Request Routing) is what directs each request to a specific instance of the app.
ARR has a feature called Session Affinity which is enabled by default. It uses an ARRAffinity cookie to attempt to route all requests from a client to the same instance of your application. I say "attempt" because, the cloud being what it is, instances of your app can come and go due to autoscaling or maintenance activities.
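You can observe the cookie directly by inspecting the response headers of a deployed app (the hostname is a placeholder; some apps may not answer HEAD requests, in which case drop the `-I`):

```shell
# Look for a Set-Cookie: ARRAffinity=... header in the response.
curl -sI https://<my-app>.azurewebsites.net | grep -i 'set-cookie'
```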
Your best bet is to use Azure Traffic Manager. When set up correctly, it will route users to the proper cloud service in their region and provide them with the best possible experience.
More information can be found here
Azure Traffic Manager
Azure Traffic Manager Documentation
Traffic Routing Methods
I'm just starting to explore Azure and am still a bit confused about the purposes of web roles vs. worker roles. In the solution I'm working on, mobile apps (iPhone, Android, Windows, etc.) will access our server product via a REST API. So there is really no public-facing web site for our service (as in web pages).
This made me think that I don't need a web role, but should instead have one or more worker roles listening on our HTTP endpoints. I have created a prototype along these lines. When I do an HTTP POST to the endpoint from a mobile device, I get no response back, and I see nothing in the Azure logs to indicate that my worker role was started, is running, or is responding.
Is this an appropriate approach? Is there something I need to do in setup code because I don't have a web role? I read in another thread that web roles run in IIS but worker roles don't.
Thanks for bearing with me. I am still getting to grips with Azure and so have a little difficulty formulating the right question.
You don't need a web role in your Azure deployment. As you read, a web role has IIS, and your web site is hosted in it. A worker role is basically a plain old W2K8 server without IIS. Honestly, I haven't RDP'd into a worker role instance, so I'm not 100% sure whether IIS is present or not.
But you don't need a web role in order to expose a WCF service. Here's a nice example (although the background color needs some work) that shows you how to do this.
Good luck! I hope this helps.
Adding to what David Hoerster said: You can host up to 25 externally-facing endpoints (each with its own port number) on any role type, with each endpoint being http, https, or tcp. With a Web Role and IIS, a web application typically grabs an endpoint mapped to port 80. In your case, you'll be creating your own endpoints on your specific ports. You're responsible for creating your ServiceHost (or whatever you're using to host your service) and binding it to one of your endpoints. To do this, you'll need to either map each endpoint explicitly to a specific internally-facing port, or inspect the endpoint's properties to discover which port has been dynamically assigned to it, for you to bind to (might this be the issue you're running into with your prototype code?).
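A sketch of that endpoint lookup in a worker role's OnStart, assuming an input endpoint named "HttpIn" declared in ServiceDefinition.csdef (the endpoint name and the service/contract types are hypothetical):

```csharp
// Discover which address/port Azure assigned to the declared endpoint,
// then bind a self-hosted WCF ServiceHost to it.
var ip = RoleEnvironment.CurrentRoleInstance
    .InstanceEndpoints["HttpIn"].IPEndpoint;

var baseUri = new Uri(string.Format("http://{0}:{1}/MyService",
    ip.Address, ip.Port));

var host = new ServiceHost(typeof(MyService), baseUri);
host.AddServiceEndpoint(typeof(IMyService), new BasicHttpBinding(), string.Empty);
host.Open(); // now listening on the role's externally-mapped endpoint
```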
If you're looking for the benefits IIS offers when hosting your endpoint, you're better off with a Web Role, as it's going to be much easier for you to do this since a Web Role enables IIS by default (and it's easy to add WCF services to a Web Role from Visual Studio).
Even if you were to self-host your endpoints, you could still use a Web Role, but now you'd be carrying the extra memory baggage of a running, yet unused, IIS service.