We are running our application in Azure App Service. It is based on .NET Framework 4.7 with Web Forms and a REST API. We are using a PC3 App Service plan (16 GB RAM). The application is stateless and scales out in App Service without any problem.
As a first step towards modernizing our infrastructure, we packaged the application in a Windows container and run it in an App Service. The image is based on mcr.microsoft.com/dotnet/framework/aspnet:4.8 and is pushed to ACR. The problem occurs when trying to scale out: at the moment of scale-out, the new container is added to the load balancer and some requests are answered with "The Web App's container is starting up!"
Is there a way to add the new container to the balancer only when it is fully functional?
Note: I don't know if this is related to the problem, but this appears in the log:
CONTAINER_HEALTH_CHECK_MODE app setting is set to ReportOnly. Container will not be recycled. For container to be recycled when it becomes unhealthy set it to Repair
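One approach worth investigating (not mentioned in the original thread) is the App Service Health check feature (Monitoring > Health check), which pings a path you choose on every instance and takes instances that do not answer with a success status out of the load-balancer rotation; combined with the CONTAINER_HEALTH_CHECK_MODE setting quoted in the log above, unhealthy containers can also be recycled. Below is a minimal readiness-endpoint sketch for the Web Forms app; the handler name, the /healthz path, and the IsReady flag are all illustrative.

```csharp
using System.Web;

// Hypothetical readiness handler, e.g. registered in web.config so it answers on /healthz.
// Pointing the App Service health check path at it keeps instances that answer non-2xx
// out of the load-balancer rotation.
public class HealthCheckHandler : IHttpHandler
{
    // Illustrative flag: set it to true at the end of Application_Start (Global.asax)
    // once caches are warmed up, the database is reachable, etc.
    public static volatile bool IsReady = false;

    public void ProcessRequest(HttpContext context)
    {
        context.Response.StatusCode = IsReady ? 200 : 503;
        context.Response.ContentType = "text/plain";
        context.Response.Write(IsReady ? "Healthy" : "Warming up");
    }

    public bool IsReusable
    {
        get { return true; }
    }
}
```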
Related
I am trying to deploy an ASP.NET Core web application to Azure Container Instances. Since it follows a microservice pattern, I had to create 3 container groups: 2 for the APIs and 1 for the web application. Each container group has one application container and an nginx sidecar acting as a reverse proxy. But I am getting a bill of around $120 for these container instances alone. Each container is allocated 0.2 GB of RAM and 0.2 CPU.
I am pretty new to Container Instances and I am not sure whether my approach is correct. Can anyone tell me whether this is the right way to host the application, and whether I am missing or misconfiguring something when creating the container instances?
When I deployed my app to Azure App Service I got quite awesome telemetry out of the box.
Some of the telemetry data is generated by the App Service itself, some of it by my ASP.NET Core app that is using Application Insights logging.
As a result I could identify slow HTTP requests, see all application and IIS logs related to a request, and see a nice chart showing where the time was spent, e.g. waiting for a SQL query or some HTTP call.
I wonder how much of this telemetry I can get if I decide to go with Azure Container Instances.
The telemetry collected from the application itself using the Microsoft.ApplicationInsights.AspNetCore SDK: you'd get pretty much all of that irrespective of where the app is running, whether that's a VM, a container, or App Service.
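As an illustration (not part of the original answer), wiring that SDK up in a containerized ASP.NET Core app is a single service registration; the connection string is typically supplied through the APPLICATIONINSIGHTS_CONNECTION_STRING environment variable on the container instance or web app:

```csharp
// Program.cs sketch; assumes the Microsoft.ApplicationInsights.AspNetCore NuGet package is referenced.
var builder = WebApplication.CreateBuilder(args);

// Enables collection of requests, dependencies and exceptions. The connection string
// is read from configuration, e.g. the APPLICATIONINSIGHTS_CONNECTION_STRING
// environment variable.
builder.Services.AddApplicationInsightsTelemetry();

var app = builder.Build();

app.MapGet("/", () => "Hello from a container");

app.Run();
```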
from https://learn.microsoft.com/en-us/azure/azure-monitor/app/docker
When you run the Application Insights image on your Docker host, you get these benefits:
Lifecycle telemetry about all the containers running on the host - start, stop, and so on.
Performance counters for all the containers. CPU, memory, network usage, and more.
If you installed Application Insights SDK for Java in the apps running in the containers, all the telemetry of those apps will have additional properties identifying the container and host machine. So for example, if you have instances of an app running in more than one host, you can easily filter your app telemetry by host.
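For a .NET app, a rough analogue of those per-container properties can be added by hand with a telemetry initializer. The sketch below (class name illustrative) stamps each telemetry item with the machine name, which inside a container is the container's host name, so telemetry can be split or filtered per instance:

```csharp
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.Extensibility;

// Marks every telemetry item with the current host name. Inside a container this is
// the container's host name, so app telemetry can be filtered per container/host.
public class ContainerRoleInstanceInitializer : ITelemetryInitializer
{
    public void Initialize(ITelemetry telemetry)
    {
        telemetry.Context.Cloud.RoleInstance = System.Environment.MachineName;
    }
}

// ASP.NET Core registration (Program.cs):
// builder.Services.AddSingleton<ITelemetryInitializer, ContainerRoleInstanceInitializer>();
```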
I was looking to use Azure App Service and noticed Azure now offers Web App for Containers. Now I wonder what the difference between them is. And a couple more questions come to mind:
Assuming I choose Web App for Containers, who is going to manage the container updates?
Does the deployment differ between App Service and Web App for Containers, from an application perspective?
Web App for Containers is one of the offerings in Azure App Service. It allows you to deploy containerized applications on Linux and Windows (the latter is in preview).
The platform automatically takes care of OS patching, capacity provisioning, and load balancing, but the container updates are up to you.
The deployment differs in that you will be deploying your application inside a Docker container instead of deploying directly like you do in a Web App.
What is the difference between Azure Container Service and Web App for Containers?
They both seem to offer a fully managed platform on which we can deploy containers. I feel that Web App for Containers must be offering something more, but I don't see it. I've read the Azure Container Service FAQ and the Web App for Containers intro page, but the difference is not obvious to me.
Web App for Containers lets you run a custom Docker container which hosts your web application. By default, App Service on Linux provides built-in Docker images like PHP 7.0 and Node.js 4.5, but by following the instructions from this webpage you can also host custom Docker images, which lets you define your own software stack. The limitation is that you can only deploy one Docker image per App Service. You can scale the App Service to multiple instances, but each instance will run the same Docker image. So this gives you Docker as a service, but it isn't intended for deploying microservices.
Azure Container Service (ACS), Azure Kubernetes Service (AKS) and Service Fabric let you deploy and manage multiple (different) Docker containers which might also need to communicate with each other. Let's say you implement a shopping website and want to build your web application on a microservices architecture. You end up having one service (= container) used for registration & login of users and another service used for the visitors' shopping carts and purchases, plus many further small services for all the other tasks. Because the purchasing service is used more frequently than the sign-up/sign-in service, you will need, for example, 6 instances of the sign-up/sign-in service and 12 instances of the cart service. Basically, ACS, AKS and Service Fabric let you deploy and manage all those different microservices.
If you want to know the difference between ACS/AKS and Service Fabric you might want to have a look here.
We deployed a Node.js Azure Web App and defined a minimum of 2 instances (for scalability and high-availability).
It seems like the load balancer is balancing the load between the instances, but it doesn't react to an instance error (crash) and keeps sending traffic to all the instances, including the one that crashed.
Is there a way to set up a fail-over mechanism for high availability?
The load balancer used by Azure App Service will continue to send requests to individual web servers as long as the underlying virtual machines are up and running.
To work around the issue you are running into, you can try configuring the "auto-heal" feature. If the scenario is that the app gets "stuck" in a permanently broken state, auto-heal rules can be configured to automatically restart the app.
More details on auto-heal here:
Auto-heal for Azure Web Sites