Collect Prometheus /metrics in Azure Container Apps

I'm starting with Azure Container Apps, and I have a question:
All my applications expose a /metrics endpoint that my Prometheus server scrapes.
I configured my container app to run 5 instances, so I will soon have 5 replicas running in parallel behind one URL:
myapplication.com/metrics
Whenever Prometheus scrapes /metrics, it hits a different replica each time because of load balancing.
How can I get Prometheus to collect (or average) the metrics from all 5 container instances, instead of just whichever one the balancer routes to?
I know this is possible with Kubernetes, but I intend to use Azure Container Apps.
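For reference, a plain scrape job pointed at the balanced hostname (a sketch; the hostname is taken from the question, the job name is invented) would look like this, and it can only ever see one replica per scrape:

```yaml
# prometheus.yml (sketch; job name and labels are placeholders)
scrape_configs:
  - job_name: 'myapplication'
    metrics_path: /metrics
    static_configs:
      - targets: ['myapplication.com']
```

To average across replicas you would instead need one target per instance (via DNS or some other service-discovery mechanism) and then a query such as `avg without (instance) (my_metric)`, where `my_metric` stands in for any gauge your apps expose.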

Related

How to build self-made Azure App Service cluster with docker swarm

Since Azure App Services are too expensive, we would like to build our own App Service Cluster so we can deploy Docker images to custom cloud VMs/workers from different service providers.
It should cover functionalities like:
Deployment Center: selecting Docker images (Docker Hub, GitLab) and deploying them to our cloud workers by tag
Scalability (like App Service Plan): add workers
Load balancing
SSL/Domain Management
FW Integration
Logging Streams
Does there already exist an open-source framework with a GUI for this type of "private cloud" that can be deployed via Docker Swarm, for instance?
Thanks!

How does Azure Service Plan load-balance traffic with different apps

I am trying to understand better how Azure App Service Plan (ASP) load-balances the traffic when multiple/different App Services are deployed in it.
Let's assume my ASP is made of 2 nodes (VMs or instances) and I deploy 2 apps in it (4 app instances running in total), with the following URLs:
https://app1.azurewebsites.net
https://app2.azurewebsites.net
I know that there are ASP front ends acting as load balancers. So, if I understand correctly, it is like a web server hosting different websites, where requests are distinguished by virtual hostnames (the URLs above). Right?
App Service is a multitenant service, so it uses the host header in the request to route the request to the correct endpoint. The default domain name of App Services, *.azurewebsites.net (say, contoso.azurewebsites.net), is different from the application gateway's domain name (say, contoso.com). ref.1
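As an illustration of the host-header routing described above, here is a minimal sketch; the hostnames and backend names are placeholders, not anything App Service actually exposes:

```python
# Minimal sketch of host-header based routing, the mechanism the
# App Service front ends use to pick the right app on a shared instance.
# Hostnames and backend names below are made up for illustration.

def route(host_header: str, apps: dict) -> str:
    """Return the backend registered for a request's Host header."""
    host = host_header.split(":")[0].lower()  # strip optional port, normalize case
    return apps.get(host, "no-such-app")

apps = {
    "app1.azurewebsites.net": "app1-backend",
    "app2.azurewebsites.net": "app2-backend",
}

print(route("app1.azurewebsites.net", apps))      # app1-backend
print(route("APP2.azurewebsites.net:443", apps))  # app2-backend
```

Both apps share the same front ends and the same worker instances; only the Host header decides which app's code handles the request.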
When using App Service, you can scale your apps by scaling the App Service plan they run on. When multiple apps are run in the same App Service plan, each scaled-out instance runs all the apps in the plan.
Apps are allocated to the available App Service plan instances using a best-effort approach for an even distribution. While an even distribution is not guaranteed, the platform makes sure that two instances of the same app will not be hosted on the same App Service plan instance.
The platform does not rely on metrics to decide on worker allocation. Applications are rebalanced only when instances are added or removed from the App Service plan.
You can also now do Per-app scaling, which can be enabled at the App Service plan level to allow for scaling an app independently from the App Service plan that hosts it. This way, an App Service plan can be scaled to 10 instances, but an app can be set to use only five. ref.2
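Per-app scaling comes down to two properties, sketched below in ARM-template style (resource definitions heavily abbreviated; names and values are just examples): `perSiteScaling` on the plan, and `numberOfWorkers` in the app's site config.

```json
{
  "resources": [
    {
      "type": "Microsoft.Web/serverfarms",
      "name": "MyServicePlan",
      "sku": { "name": "S1", "capacity": 10 },
      "properties": { "perSiteScaling": true }
    },
    {
      "type": "Microsoft.Web/sites",
      "name": "myapp",
      "properties": {
        "serverFarmId": "MyServicePlan",
        "siteConfig": { "numberOfWorkers": 5 }
      }
    }
  ]
}
```

With this, the plan scales to 10 instances but the app runs on at most 5 of them.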

Ways to improve inter app-service communication

I have two app services (Service A and Service B) developed in .NET Core 3.1 and hosted as two independent App Services in Azure. Service A is heavily dependent on Service B. Is there a way (an Azure offering) to make them communicate faster? Would hosting them in the same container improve inter-service communication performance? Any suggestions on Kubernetes?
If you are not using the Azure Kubernetes Service (AKS) offering yet, I would recommend spinning up a cluster (note that it supports Windows nodes in case you need them).
You should keep your services separated into two pods (https://learn.microsoft.com/en-us/azure/aks/concepts-clusters-workloads#pods)
and create a matching Kubernetes Service for each.
Now, if you would like your pods to run on the same node to increase communication speed, look at pod affinity, which allows related pods to run on the same node without having to tie them to a particular node (as node affinity would):
https://learn.microsoft.com/en-us/azure/aks/operator-best-practices-advanced-scheduler#inter-pod-affinity-and-anti-affinity
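A sketch of what the pod-affinity stanza could look like on Service A's Deployment; the `app: service-b` label is an assumption, so adjust it to whatever labels Service B's pods actually carry:

```yaml
# Fragment of Service A's Deployment spec (labels are assumptions)
spec:
  template:
    spec:
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: service-b   # co-locate with Service B's pods
              topologyKey: kubernetes.io/hostname
```

`preferredDuringSchedulingIgnoredDuringExecution` is the softer variant if you would rather not block scheduling when co-location is impossible.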

Azure WebApp for containers without exposed ports

I have a custom built docker image which purpose is to process files that are loaded into a storage account or service bus. The container has no exposed ports.
I can deploy this image and start the container on an Azure Web App, but after 240 seconds the container seems to terminate. The logs indicate that the container did not start within the time limit.
Am I correct in assuming that, because no ports are exposed in my container, the web app concludes the container did not start correctly?
What is the best alternative for deploying my container if this is the case? (ACI, ACS, AKS,.. ?)
Azure Load Balancer has a default idle timeout of four minutes, which is generally a reasonable response-time limit for a web request and explains the 240-second cutoff you are seeing. If your web app requires background processing, it is recommended to use Azure WebJobs. The Azure web app can trigger WebJobs and be notified when background processing is finished. You can choose from multiple methods for running WebJobs, including queues and triggers; see https://learn.microsoft.com/en-us/azure/app-service/web-sites-create-web-jobs
Check out the FAQ here: https://learn.microsoft.com/en-gb/azure/app-service/containers/app-service-linux-faq
Can I expose more than one port on my custom container image?
We do not currently support exposing more than one port.
My custom container listens to a port other than port 80. How can I configure my app to route requests to that port?
We have automatic port detection. You can also specify an app setting called WEBSITES_PORT and give it the value of the expected port number. Previously, the platform used the PORT app setting. We are planning to deprecate this app setting and to use WEBSITES_PORT exclusively.
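For example, if your container listens on port 3000, the app setting can be set with the Azure CLI (the resource group and app names are placeholders):

```shell
az webapp config appsettings set \
  --resource-group MyResourceGroup \
  --name my-web-app \
  --settings WEBSITES_PORT=3000
```

Note that this only changes which port the platform probes and routes to; a container with no listening port at all will still fail the startup check, which is why a pure background worker fits WebJobs, ACI, or AKS better than Web App for Containers.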

Scaling Azure Container Service with private ports on containers

In our organization, we are currently trying out the Azure Container Service with Docker Swarm. We have developed a Web API project based on .NET Core and created containers out of it. We have exposed the Web API on the container's private port (3000). We want to scale this to, say, 15 containers across three agent nodes while still accessing the Web API through a single Azure load balancer URL on public port 8080.
I believe we would need an internal load balancer to do this, but there is no documentation around it. I have seen this article on DC/OS, but we are using Docker Swarm here. Any help?
Azure Container Service uses vanilla Docker Swarm, so any load-balancing solution for Swarm will work in ACS, e.g. https://botleg.com/stories/load-balancing-with-docker-swarm
The same is true for DC/OS, but in that case it is documented in "Load balance containers in an Azure Container Service cluster": https://azure.microsoft.com/en-us/documentation/articles/container-service-load-balancing/
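If the cluster runs Docker in swarm mode, the routing mesh alone can publish one public port across all replicas; a sketch with the question's numbers (the image and service names are placeholders):

```shell
# Publish public port 8080, routed to the container's private port 3000,
# across 15 replicas via Swarm's ingress routing mesh.
docker service create \
  --name webapi \
  --replicas 15 \
  --publish 8080:3000 \
  myregistry/webapi:latest
```

The Azure load balancer then only needs a rule forwarding port 8080 to the agent nodes; the routing mesh distributes requests to the replicas from there.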
