Azure App Service gives a different response on different instances of the app service - azure

I am setting up a web app on Azure, for which I am using an Azure App Service. At the moment, the App Service scales down to 1 instance at night and scales up again in the morning.
When a request is sent to the App Service while there are 2 instances, the response depends on which instance handles the request. I would expect a 200, but half of the time I get a 500 HTTP response.
I figured out that it depends on the instance, because when I use the ARRAffinity cookie (which lets you target a specific instance of the App Service), I can consistently reproduce 200 responses on one machine and 500 responses on the other.
WEBSITE_LOCALCACHE_ENABLED is false, so both instances should run the same code, served from a single network share if I am not mistaken.
Because the app behaves normally half of the time, I think this is not a code problem but an infrastructure problem on Azure.
The web app is written in .NET Core 2.2 and runs on 64-bit Windows.
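For reference, the per-instance test can be reproduced by sending the ARRAffinity cookie explicitly. A minimal sketch, assuming a hypothetical app URL and an affinity value copied from the Set-Cookie header of an earlier response:

```csharp
// Minimal sketch: pin a request to one App Service instance via the ARRAffinity cookie.
// The URL and the affinity value are hypothetical placeholders.
using System;
using System.Net.Http;
using System.Threading.Tasks;

class InstancePinningTest
{
    static async Task Main()
    {
        // UseCookies = false so the manually added Cookie header is sent as-is.
        var handler = new HttpClientHandler { UseCookies = false };
        using (var client = new HttpClient(handler))
        {
            var request = new HttpRequestMessage(HttpMethod.Get,
                "https://my-web-app.azurewebsites.net/api/health");
            request.Headers.Add("Cookie", "ARRAffinity=<affinity-value-of-one-instance>");

            var response = await client.SendAsync(request);
            Console.WriteLine($"Status: {(int)response.StatusCode}");
        }
    }
}
```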

It might be a problem with the instance or with the code. When you see the issue, try an advanced restart from the portal and see if that helps.
Also, while the issue is occurring, open the Diagnose and solve problems blade of the App Service and check the Availability and Performance section; the log information there will give a better idea of what is going on.

Related

My .NET Core 3.1 server stops on Azure after no use

I am struggling with a problem with my server on Azure. I want it to run nonstop, but after a period without use, when I make the first request I see "Application Starting up" in the logs. Do you know how to resolve this and allow the server to run all the time?
If it is an App Service, you can set Always on = On under Settings/Configuration.
You will need your App Service to run on a Basic or higher Service Plan for this.

ASP.NET Core 2.1 HostedService - keep running on Azure

We have a web application that is using the IHostedService.
There is an example of this here
And the method we employed is detailed here
The goal was to have an application that continually runs background tasks, so we can schedule jobs that run automatically at set times. This application is separate from all our other applications, so it is not visited by our user base.
So we needed a way for this application to run all the time on Azure.
We tried to set up an App Service for the application on Azure, but it seems that the application does not run continually. Things run fine locally, but on Azure I have to stop and restart the service before the IHostedService tasks kick in.
Is there a way on Azure to keep the application alive and running?
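For context, this kind of background work is wired up by registering a hosted service with the web host; a minimal sketch, assuming a hypothetical ScheduledJobService (outlined in a second sketch after the answer below):

```csharp
// Minimal sketch: register a continually running background job with the host.
// ScheduledJobService is a hypothetical class; see the sketch after the answer below.
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // The host starts this service together with the web app and keeps it
        // running for as long as the process is alive.
        services.AddHostedService<ScheduledJobService>();
    }
}
```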
OK, so in the App Settings in Azure there is a setting called Always On; this worked for us. :)
We also found out from Azure support that if you are on the lowest tier of their offered packages, it is treated as a "Development" environment and only has limited up-time. As a result, we could expect the application to go offline when we reached that limit.
We argued that there should have been something more obvious in the dashboard for us to know this.
Once we upgraded to a Standard tier, the application stayed online.
Also, if you are running a hosted service and it hits an unhandled exception, this stops the service. You need to make sure you are handling exceptions for this to work.
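To illustrate that last point, here is a hedged sketch of the hypothetical ScheduledJobService mentioned above: the try/catch sits inside the loop so a single failed iteration does not end ExecuteAsync and stop the hosted service.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

// Hypothetical background job: runs continually and survives per-iteration failures.
public class ScheduledJobService : BackgroundService
{
    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            try
            {
                await DoWorkAsync(stoppingToken); // the actual scheduled work
            }
            catch (Exception ex)
            {
                // An exception escaping ExecuteAsync stops the whole service,
                // so log the failure and carry on with the next iteration.
                Console.WriteLine($"Background job failed: {ex}");
            }

            await Task.Delay(TimeSpan.FromMinutes(5), stoppingToken);
        }
    }

    // Placeholder for the real job logic.
    private Task DoWorkAsync(CancellationToken token) => Task.CompletedTask;
}
```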

On how many instances is my Azure WebApi running

I am trying to implement SignalR hubs on my REST service (ASP.NET Web API) hosted on Azure. I've been reading some common material related to SignalR and learned that it is server-bound (you can check here), which means that in order to scale it across multiple server instances I have to do some additional work. So then I started to ask myself: "How many instances of my REST service do I currently have running on Azure? How do I know that?"
So, what I did was navigate to the Azure portal and open my service > Process explorer.
Does that mean that my web app scales automatically and that I currently have 2 instances of my web API running? I think it clearly says that there is currently only one instance of it running 2 processes, but how do I know if it will scale at some point in the future?
No, your photo shows your app process and Kudu, which is a management interface you can access at https://yourappname.scm.azurewebsites.net.
You can see the instance count in your app's App Service Plan. If it is on Free or Shared, there can only be one. If it is Basic/Standard/Premium, it is one by default. If you haven't set up auto-scale, it won't scale to more than 1 instance unless you tell it to.
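If you want to check from inside the app which instance actually served a request, App Service exposes the instance id through the WEBSITE_INSTANCE_ID environment variable. A small sketch, assuming ASP.NET Core and a hypothetical diagnostic endpoint:

```csharp
using System;
using Microsoft.AspNetCore.Mvc;

// Hypothetical diagnostic endpoint that reports which App Service instance
// handled the request. Without the ARRAffinity cookie, repeated calls can land
// on different instances when more than one is running.
[Route("api/[controller]")]
public class InstanceController : Controller
{
    [HttpGet]
    public IActionResult Get()
    {
        var instanceId = Environment.GetEnvironmentVariable("WEBSITE_INSTANCE_ID") ?? "local";
        return Ok(new { instanceId });
    }
}
```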

Prevent Azure from shutting down my App Service

I am running a .NET Core web application on an Azure App Service (App Service plan is configured to use S1). It is stable.
However, I recently ran an automated test against production and it caused 100s of errors in a few minutes. After this, the App Service became unavailable for a long time.
I know that App Service basically uses IIS, and I know that IIS has a setting (rapid-fail protection) that will shut an application down when too many errors occur in a short time. I am assuming that this is the setting that came into effect for my app.
My question is: How do I prevent Azure from shutting down my App Service, even if many errors happen in a short time?
Investigate the "Always On" setting, which can be changed in the Azure portal under Application settings, General settings. This value is configured per app.
The UI control will be disabled if your pricing tier does not support Always On. Typically, these lower-priced tiers are not used for a production site.
I recently ran an automated test against production and it caused 100s of errors in a few minutes. After this, the App Service became unavailable for a long time.
Firstly, you can enable diagnostics for your App Service web app to log information from both the web server and the web application, which will help you troubleshoot the issue.
Secondly, you can try to increase the number of instances that run your app and check if it can mitigate the issue.
Besides, if possible, you can set up a staging environment and run the automated tests against staging instead of production, so the tests will not take your production app down for a long time.
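If the flood of errors itself is the concern, one further mitigation for an ASP.NET Core app is a global exception handler, so failing requests are logged and returned as controlled 500 responses rather than surfacing as raw errors. A hedged sketch; the /error route is hypothetical:

```csharp
using Microsoft.AspNetCore.Builder;

public class Startup
{
    public void Configure(IApplicationBuilder app)
    {
        // Turn unhandled request exceptions into a controlled 500 response served
        // by a hypothetical /error endpoint, instead of letting them surface raw.
        app.UseExceptionHandler("/error");

        app.UseMvc();
    }
}
```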
I am not sure whether this problem was correctly diagnosed back in 2017 when I was using a .NET Core WebApp. Maybe it was or maybe it wasn't.
However, today in late 2019, on Azure Functions V2 and .NET Core 2.2, I recreated the same scenario and provoked 5,000 unhandled exceptions in one minute, and the Function did not go down because of that.
So anyone finding this question can pretty much rest assured if they are on Azure Functions V2 or newer: it does not crash just because of the quantity of exceptions, as was the case with the default settings in IIS in the past.

Azure Web API - how to communicate between services

I'm currently developing a SOA based architecture in Azure, using disparate Web API services (they'd probably qualify as Microservices, but I'm hesitant to use the term).
I have a service which is triggered by the Azure Scheduler. It does some "stuff" and then needs to call another Web API (via HttpClient) service to trigger something else. To do this, I need to know the URI of the 2nd service. When running locally, this is fine, as it is something like
POST http://localhost:1234/2ndService/api/action
However, when I deploy to Azure (using Internal Only as the access level), it gets an obfuscated URI, such as http://microsoft-apiapp8cf3d453-39d8-4b3b-ad00-e9d8008a9b58, which I obviously can't guess at deploy time.
Any ideas on how to solve this problem? Or have I made a fundamental error here?
Instead of relying on public HTTP endpoints, have you considered passing messages via Azure Storage queues? It is very simple to do and is going to be more robust, since you can take advantage of built-in features like guaranteed message delivery.
The overall idea is that Service A does some "stuff" then puts a message on queue ONE. Service B continuously reads from queue ONE until it picks up a new message from Service A (or any other service for that matter) and then does its "STUFF". You can continue to chain calls like this to other services that need to be notified.
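A minimal sketch of that pattern using the Azure.Storage.Queues SDK; the connection string, queue name, and message text are placeholders:

```csharp
using System;
using System.Threading.Tasks;
using Azure.Storage.Queues;

class QueueMessagingSketch
{
    const string ConnectionString = "<storage-connection-string>";
    const string QueueName = "service-a-events"; // hypothetical queue name

    // Service A: publish a message after doing its "stuff".
    static async Task PublishAsync()
    {
        var queue = new QueueClient(ConnectionString, QueueName);
        await queue.CreateIfNotExistsAsync();
        await queue.SendMessageAsync("do-the-next-thing");
    }

    // Service B: poll the queue and process new messages as they arrive.
    static async Task ConsumeAsync()
    {
        var queue = new QueueClient(ConnectionString, QueueName);
        while (true)
        {
            var messages = await queue.ReceiveMessagesAsync(maxMessages: 10);
            foreach (var message in messages.Value)
            {
                Console.WriteLine($"Processing: {message.MessageText}");
                // Delete only after successful processing to keep at-least-once delivery.
                await queue.DeleteMessageAsync(message.MessageId, message.PopReceipt);
            }
            await Task.Delay(TimeSpan.FromSeconds(5));
        }
    }
}
```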
If you want a more elegant solution you can look at using Service Bus Topics but the concept is basically the same.
Also, since you mentioned that your architecture is much like microservices, you can check out the new Service Fabric which is designed for your scenario.
In the case of Azure Web Apps, you can always see such properties by going to the web app dashboard and then Properties. When deploying from Visual Studio, you can set the URL as you want - I just checked, and it works fine.
It is not very clear what technology you use - is it an IaaS VM? Is it Web Apps?
From my standpoint, each service should be deployed as a separate Web App (or API App, if you want). Each Web App has its own name, as in yourwebapp.azurewebsites.net, so once you have provisioned Web App no 1 in Azure, you know its address and can call it from Web App no 2.
In all cases, you should use fully qualified domain names, not local/internal ones.
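For completeness, the call from Web App no 2 to Web App no 1 is then an ordinary HTTP request to that fully qualified name; a small sketch with a hypothetical URL:

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class ServiceCaller
{
    static readonly HttpClient Client = new HttpClient();

    static async Task Main()
    {
        // Hypothetical fully qualified App Service URL of the second service.
        var uri = "https://secondservice.azurewebsites.net/api/action";
        var response = await Client.PostAsync(uri, new StringContent("{}", Encoding.UTF8, "application/json"));
        Console.WriteLine($"Triggered second service: {(int)response.StatusCode}");
    }
}
```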
