I configured my service on AWS Fargate with autoscaling enabled. The alarm was created, and when I started the performance test, new instances were launched once the alarm fired. What I don't understand is that after the load ended, the instances that were created are not destroyed: the count stays the same, as if the load were still increasing.
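(For reference: with ECS on Fargate, scale-in has to be covered by the scaling policy itself. Below is a minimal sketch, assuming Application Auto Scaling configured via boto3 with placeholder cluster/service names; a target-tracking policy like this creates both the scale-out and the scale-in alarms automatically, whereas a step-scaling setup needs a separate scale-in alarm and policy.)

    # Hypothetical sketch: a Fargate service registered with Application
    # Auto Scaling plus a target-tracking policy. Target tracking creates
    # both scale-out and scale-in CloudWatch alarms, so tasks are removed
    # again once the load drops. Cluster/service names are placeholders.
    import boto3

    client = boto3.client("application-autoscaling")

    client.register_scalable_target(
        ServiceNamespace="ecs",
        ResourceId="service/my-cluster/my-service",  # placeholder names
        ScalableDimension="ecs:service:DesiredCount",
        MinCapacity=1,
        MaxCapacity=10,
    )

    client.put_scaling_policy(
        PolicyName="cpu-target-tracking",
        ServiceNamespace="ecs",
        ResourceId="service/my-cluster/my-service",
        ScalableDimension="ecs:service:DesiredCount",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "TargetValue": 50.0,  # keep average CPU around 50%
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
            },
            "ScaleOutCooldown": 60,
            "ScaleInCooldown": 300,  # scale-in waits out this cooldown
        },
    )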
We are launching an ALB to access the various UIs of the EMR services. But when we enable EMR high availability, which launches 3 master nodes, how will the ALB automatically point to the master node that is currently active? The ALB should not distribute traffic to the secondary instances; it needs to point to whichever instance is acting as master at a given time.
Thank you
You need a custom solution for that. For example, you can set up a Lambda function that gets triggered by an alarm built on the MultiMasterInstanceGroupNodesRunningPercentage metric. Once triggered, the Lambda function will deregister the old master from the target group of your ALB and register the new one.
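A rough sketch of what such a handler could look like, assuming boto3; the target group ARN, port, and master-discovery logic are placeholders you'd replace with your own:

    # Hypothetical Lambda handler: swing the ALB target group over to the
    # currently active EMR master. TARGET_GROUP_ARN and find_active_master()
    # are placeholders for your own configuration and discovery logic.
    import boto3

    elbv2 = boto3.client("elbv2")
    TARGET_GROUP_ARN = "arn:aws:elasticloadbalancing:..."  # placeholder

    def find_active_master():
        # Placeholder: resolve the instance ID of the currently active
        # master (e.g. via the EMR ListInstances API).
        raise NotImplementedError

    def handler(event, context):
        new_master_id = find_active_master()

        # De-register everything currently in the target group...
        current = elbv2.describe_target_health(TargetGroupArn=TARGET_GROUP_ARN)
        old_targets = [
            {"Id": t["Target"]["Id"], "Port": t["Target"]["Port"]}
            for t in current["TargetHealthDescriptions"]
            if t["Target"]["Id"] != new_master_id
        ]
        if old_targets:
            elbv2.deregister_targets(TargetGroupArn=TARGET_GROUP_ARN,
                                     Targets=old_targets)

        # ...and register the active master (port 8088 is a placeholder).
        elbv2.register_targets(TargetGroupArn=TARGET_GROUP_ARN,
                               Targets=[{"Id": new_master_id, "Port": 8088}])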
I'm new to the Kubernetes universe and I have some doubts about an implementation I want to do.
I have the following scenario: I have 200 instances of a worker that executes some business logic, and the only thing that differentiates them is the input parameters.
I was thinking of using AKS to scale this infrastructure up dynamically according to the input parameter, only creating a new pod when there is demand for the worker with the input parameter "XYZ".
Simple architecture draft:
I have an API that receives a request, and based on that request, an orchestrator sends it to the correct worker.
So I'd like to know whether this type of architecture is possible with AKS and whether it is a good approach.
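(For reference, one way the orchestrator step could look, as a sketch only: assuming the official Kubernetes Python client and a hypothetical worker image, it launches a parameterized Job on demand.)

    # Hypothetical orchestrator snippet: create a worker on demand via a
    # Job, passing the differentiating input parameter through. The image
    # name and namespace are placeholders.
    from kubernetes import client, config

    def launch_worker(param: str):
        config.load_kube_config()  # or load_incluster_config() in-cluster
        batch = client.BatchV1Api()

        job = client.V1Job(
            metadata=client.V1ObjectMeta(name=f"worker-{param.lower()}"),
            spec=client.V1JobSpec(
                template=client.V1PodTemplateSpec(
                    spec=client.V1PodSpec(
                        restart_policy="Never",
                        containers=[client.V1Container(
                            name="worker",
                            image="myregistry/worker:latest",  # placeholder
                            args=["--input", param],
                        )],
                    )
                )
            ),
        )
        batch.create_namespaced_job(namespace="default", body=job)

    launch_worker("XYZ")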
This is one of the scenarios where you can use Azure Functions with ACI, or KEDA, to autoscale the containers based on demand.
Use the AKS virtual node to provision pods inside Azure Container Instances that start in seconds. This enables AKS to run with just enough capacity for your average workload. As you run out of capacity in your AKS cluster, scale out additional pods in Azure Container Instances without additional servers to manage.
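To actually land pods on the virtual node, they need the virtual-kubelet node selector and tolerations; a minimal sketch with the official Kubernetes Python client (the image name is a placeholder):

    # Hypothetical pod spec targeting the AKS virtual node (backed by ACI).
    # The image is a placeholder; the nodeSelector/tolerations are what the
    # virtual node requires for scheduling.
    from kubernetes import client, config

    config.load_kube_config()
    core = client.CoreV1Api()

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="worker-on-aci"),
        spec=client.V1PodSpec(
            containers=[client.V1Container(name="worker",
                                           image="myregistry/worker:latest")],
            node_selector={
                "kubernetes.io/role": "agent",
                "beta.kubernetes.io/os": "linux",
                "type": "virtual-kubelet",
            },
            tolerations=[
                client.V1Toleration(key="virtual-kubelet.io/provider",
                                    operator="Exists"),
                client.V1Toleration(key="azure.com/aci",
                                    effect="NoSchedule"),
            ],
        ),
    )
    core.create_namespaced_pod(namespace="default", body=pod)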
Here is my blog on Scale Applications with Kubernetes-based Event Driven AutoScaling.
You can do this with a Kubernetes ingress controller:
https://www.nginx.com/products/nginx/kubernetes-ingress-controller/
This is how to set it up on Azure Kubernetes Service:
https://learn.microsoft.com/en-us/azure/aks/ingress-tls
I have a .NET Core application running on Azure App Service.
I also have a hosted service that interacts with a table and updates it according to a given condition.
I am experiencing an increase in CPU usage on just one of the instances when my autoscaling rules kick in and start scaling out the App Service.
My question is whether the HostedService is automatically scaled out too when new instances are warmed up by autoscaling.
Yes, they are. When you scale out, you have multiple instances of the same application running.
I figured this out the hard way: I was running a hosted service that sent out reports by email once a day, and after I scaled out to 2 instances, clients started receiving the same email twice a day.
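(A common mitigation, sketched here rather than taken from the answer above: take a distributed lock so only one instance runs the scheduled work. This sketch assumes the azure-storage-blob Python SDK with placeholder connection and blob names.)

    # Hypothetical guard: acquire a blob lease before running the daily job
    # so that only one scaled-out instance actually sends the report. The
    # connection string, container, and blob name are placeholders.
    from azure.core.exceptions import HttpResponseError
    from azure.storage.blob import BlobClient

    blob = BlobClient.from_connection_string(
        conn_str="<connection-string>",      # placeholder
        container_name="locks",
        blob_name="daily-report.lock",
    )

    def send_report_emails():
        ...  # placeholder for the actual work

    def run_daily_report():
        if not blob.exists():                # the lock blob must exist first
            blob.upload_blob(b"", overwrite=True)
        try:
            # Only one instance can hold the lease at a time (15-60 s).
            lease = blob.acquire_lease(lease_duration=60)
        except HttpResponseError:
            return  # another instance holds the lock; skip this run
        try:
            send_report_emails()
        finally:
            lease.release()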
Here's my scenario:
I have an Azure Cloud Service that runs a "hefty" .NET WCF project. The heftiness comes in with the startup tasks, as we cache a large amount of data into memory to make the project run quickly.
We have some logic to override the OnStart method of the RoleInstance to perform this caching, so the instance doesn't report "Ready" until all of the caching is complete.
When we deploy our service, we have 2 instances (so they're on separate fault/update domains).
For that scenario, I have 2 questions:
When we deploy an update or Microsoft performs maintenance against one of these managed VM's, does the Azure Load Balancer take the role state into account and not route traffic to it until it's in a "Ready" state?
For the aforementioned load balancer, do I have to configure anything for the cloud service to balance across the multiple instances? I was always under the impression that Microsoft managed that for you; that way, if you scale out to N role instances, the cloud service will take the number of instances into account and assign load accordingly.
Thanks!
It is handled for you. The load balancer probe communicates with the guest agent on each VM which only returns an HTTP 200 once the role is in the Ready state. However, if you’re using a web role and running w3wp.exe on it, the load balancer is not able to detect any failures like HTTP 500 responses that it may generate.
In that case, you’d need to insert an appropriate LoadBalancerProbe section in your .csdef file and also properly handle the OnStop event. This article describes the default load balancer behaviour in more detail, as well as how to customise it.
In my cloud service I have one web role and worker role. I changed my web role VM size from medium to A6.
When I tried to deploy to Windows Azure, I got the following error message:
The VM size (or combination of VM sizes) required by this deployment cannot be provisioned due to deployment request constraints. If possible, try relaxing constraints such as virtual network bindings, deploying to a hosted service with no other deployment in it and to a different affinity group or with no affinity group, or try deploying to a different region.
What does that mean?
Basically, you've asked for one of the new "Uber" A6 instances (with additional memory/processor resources) and Azure was unable to provision your request (i.e. provide you with the required amount of cloud computing power for an A6 instance).
You could try deploying to a different geographic location or affinity group, or just wait and try again.