Delete Azure Virtual Machine Scale Set instances using a Runbook - azure

First, I want to know whether it is possible to delete VMSS instances based on the CPU performance of the instances, rather than by using scaling.
I have a scale set in which the instances have different average CPU utilization, and I want to remove only the instances with the lowest CPU usage, say instances below 20% CPU.
The idea is to make a cycle that passes through all the instances, with a condition that selects the VMSS instances below 20% CPU, and to delete the selected instances inside that condition.

Why not use autoscale? It's the best and simplest way to scale a VMSS. If you use a Runbook, you need to fetch the CPU metrics yourself at every interval; the Azure portal makes the CPU numbers easy to look at, but not to act on from a script. Go with autoscale if you can.
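That said, if you do want the Runbook route, the per-instance "Percentage CPU" metric is exposed through Azure Monitor, so the loop-and-delete idea from the question is workable. Here is a minimal sketch, not production code, assuming a Python Runbook with the azure-identity, azure-mgmt-compute and azure-mgmt-monitor packages; the subscription, resource group and scale set names are placeholders:

from datetime import datetime, timedelta, timezone

from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient
from azure.mgmt.monitor import MonitorManagementClient

SUBSCRIPTION_ID = "<subscription-id>"  # placeholder
RESOURCE_GROUP = "<resource-group>"    # placeholder
VMSS_NAME = "<scale-set-name>"         # placeholder
CPU_THRESHOLD = 20.0                   # percent, the cut-off from the question

credential = DefaultAzureCredential()
compute = ComputeManagementClient(credential, SUBSCRIPTION_ID)
monitor = MonitorManagementClient(credential, SUBSCRIPTION_ID)

end = datetime.now(timezone.utc)
start = end - timedelta(hours=1)  # average CPU over the last hour
to_delete = []

# Cycle through every instance in the scale set.
for vm in compute.virtual_machine_scale_set_vms.list(RESOURCE_GROUP, VMSS_NAME):
    # Pull the per-instance "Percentage CPU" metric from Azure Monitor.
    result = monitor.metrics.list(
        vm.id,
        timespan=f"{start.isoformat()}/{end.isoformat()}",
        interval="PT5M",
        metricnames="Percentage CPU",
        aggregation="Average",
    )
    samples = [
        point.average
        for metric in result.value
        for series in metric.timeseries
        for point in series.data
        if point.average is not None
    ]
    avg_cpu = sum(samples) / len(samples) if samples else None
    # The condition: select instances below the threshold.
    if avg_cpu is not None and avg_cpu < CPU_THRESHOLD:
        to_delete.append(vm.instance_id)

# Delete the selected instances in a single call.
if to_delete:
    compute.virtual_machine_scale_sets.begin_delete_instances(
        RESOURCE_GROUP, VMSS_NAME, {"instance_ids": to_delete}
    ).result()

Be aware that any autoscale rules on the scale set may see the drop in instance count and simply scale back out, so a job like this has to be reconciled with them.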

Related

Choosing the right EC2 instance for three NodeJS Applications

I'm running three MEAN stack applications. Each application receives over 10,000 monthly users. Could you please assist me in finding an EC2 instance for my apps?
I've been using a "t3.large" instance with two vCPUs and eight gigabytes of RAM, but it costs $62 to $64 per month.
I need help deciding which EC2 instance to use for the three Node.js applications.
First, check the CloudWatch metrics for the current instance: are CPU and memory usage consistent over time? Analysing the metrics will help you decide whether to move to a smaller or a bigger instance; see the sketch after the links below.
One way to avoid unnecessary costs is to use Auto Scaling groups and load balancers. By finding and applying the proper settings, you always have the right amount of computing power for your applications.
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/working_with_metrics.html
https://docs.aws.amazon.com/autoscaling/ec2/userguide/auto-scaling-groups.html
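A minimal boto3 sketch of that first check; the instance ID and region are placeholders, and note that memory metrics are only available if the CloudWatch agent is installed:

from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # adjust region

end = datetime.now(timezone.utc)
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
    StartTime=end - timedelta(days=14),  # two weeks of history
    EndTime=end,
    Period=3600,                         # one data point per hour
    Statistics=["Average", "Maximum"],
)

# Consistently low averages suggest a smaller (cheaper) instance would do;
# sustained highs suggest scaling up or out.
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 1), round(point["Maximum"], 1))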
It depends on your applications: do they need more compute power, more memory, or more storage? Choosing a server is similar to installing an app on a system: check its basic requirements first, then proceed to choose the server.
If you have 10k+ monthly users, think about using an ALB so that traffic gets distributed evenly. Try caching to serve some content if possible. Use the unlimited burst mode of t3 instances if the CPU keeps hitting 100% (see the sketch below). Also try to optimize the code so that fewer resources are consumed. Once you are comfortable with the EC2 choice, look at Savings Plans or Reserved Instances to lower the cost.
Also monitor the servers and traffic using features such as the CloudWatch agent and Internet Monitor.
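For the burst-mode tip, a one-call boto3 sketch (the instance ID is a placeholder); keep in mind that unlimited mode can incur extra charges if the CPU bursts constantly:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # adjust region
ec2.modify_instance_credit_specification(
    InstanceCreditSpecifications=[
        {"InstanceId": "i-0123456789abcdef0", "CpuCredits": "unlimited"},  # placeholder
    ]
)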

Scaling services back down automatically by autoscaling in Azure services - vCPU

I am designing a learning management system, and traffic to the website is higher at some times and lower at others. I would like to know how to get the vCPUs that were scaled up to scale back down after a stipulated time. I found a document about scaling up, but I didn't find a way to scale back down.
Any help is appreciated.
There is autoscaling for the normal services in Azure cloud services; that means for a stipulated time you can increase or decrease capacity as mentioned in the link.
When it comes to vCPU quota, this cannot be performed automatically. The vCPU quota can be scaled up based on a request, and in the same manner you need to ask the support team to scale it back down to normal.
There is no procedure for autoscaling vCPU quota operations. You can increase the core capacity, for example from 10 cores to the next level of 16 cores, but scaling back down from 16 cores to 10 cannot be performed automatically; you need to approach the support system for a manual change.
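To see where you stand before raising a support request, here is a minimal sketch using the azure-mgmt-compute Python SDK (the subscription ID and region are placeholders) that lists the regional vCPU quotas against current usage:

from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

compute = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Each regional quota entry carries the current consumption and the hard limit.
for usage in compute.usage.list("eastus"):  # placeholder region
    if "vCPU" in usage.name.localized_value:
        print(f"{usage.name.localized_value}: {usage.current_value}/{usage.limit}")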

Poor performance on Azure web apps when scaling out

I'm experiencing poor performance on Azure web apps when scaling out. Azure automatically picks up an issue from a set of rules I have set up (like CPU > 80% for 5 minutes). Then it increases the number of VMs by 1. Here is a screenshot of the settings on Azure:
This part works fine. But while it is scaling to one additional VM, I'm experiencing poor performance on the entire website. When the new VM is up and running, everything seems to work as expected.
I have previously observed similar issues when scaling the VM size up and down, but this is "just" an additional VM.
I have already tried configuring application initialization in my Web.config:
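<!-- this element must be placed inside the <system.webServer> section -->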
<applicationInitialization doAppInitAfterRestart="true">
<add initializationPage="/" />
</applicationInitialization>
I'm still seeing poor performance while scaling out. Does anyone know what this could be caused by? I wouldn't expect a website to perform worse while scaling. I know the performance probably isn't great when a VM reaches 80% CPU, but it is definitely worse as soon as Azure starts scaling.
Ensure that the margin between the scale-out and scale-in thresholds is not small. I have seen cases where autoscale flapping was the issue.
To avoid a "flapping" situation, where scale-in and scale-out actions continually go back and forth, it is recommended to choose an adequate margin between the scale-out and scale-in thresholds.
Here are a few best practices for reviewing the metric thresholds.
Ensure the maximum and minimum values are different and have an adequate margin between them. For example, if you have a setting with minimum = 2, maximum = 2 and the current instance count is 2, no scale action can occur. Keep an adequate margin between the maximum and minimum instance counts, which are inclusive; autoscale always scales between these limits.
Always use a scale-out and scale-in rule combination that performs both an increase and a decrease. If you use only one part of the combination, autoscale will only take action in a single direction (out or in) until it reaches the maximum or minimum instance count defined in the profile.
That is not optimal: ideally you want your resource to scale out at times of high usage to ensure availability, and to scale in at times of low usage to realize cost savings. I suggest you review the thresholds against the examples outlined in the Autoscale best practices documentation, and let us know if it helps. A sketch of such a rule pair follows.
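Here is a sketch of a scale-out/scale-in rule pair with a deliberate margin (out above 80% CPU, in below 50%), using the azure-mgmt-monitor Python SDK; the resource names are placeholders and the dict literals stand in for the SDK model classes:

from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

SUBSCRIPTION_ID = "<subscription-id>"  # placeholder
PLAN_ID = (                            # placeholder App Service plan resource ID
    f"/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/my-rg"
    "/providers/Microsoft.Web/serverfarms/my-plan"
)

monitor = MonitorManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

def rule(operator, threshold, direction):
    # One autoscale rule: trigger on 5-minute average CPU, change count by 1.
    return {
        "metric_trigger": {
            "metric_name": "CpuPercentage",
            "metric_resource_uri": PLAN_ID,
            "time_grain": "PT1M",
            "statistic": "Average",
            "time_window": "PT5M",
            "time_aggregation": "Average",
            "operator": operator,
            "threshold": threshold,
        },
        "scale_action": {
            "direction": direction,
            "type": "ChangeCount",
            "value": "1",
            "cooldown": "PT10M",  # wait before re-evaluating; also damps flapping
        },
    }

monitor.autoscale_settings.create_or_update(
    "my-rg",                   # placeholder resource group
    "my-autoscale-setting",    # placeholder setting name
    {
        "location": "eastus",  # placeholder region
        "target_resource_uri": PLAN_ID,
        "enabled": True,
        "profiles": [{
            "name": "default",
            "capacity": {"minimum": "2", "maximum": "5", "default": "2"},
            # The 80/50 pair leaves a 30-point margin between out and in.
            "rules": [
                rule("GreaterThan", 80, "Increase"),
                rule("LessThan", 50, "Decrease"),
            ],
        }],
    },
)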

Azure scaling of roles and pricing

I have been digging without being able to wrap my head around this. It seems that once a role is deployed, you are charged for it in full, whether or not you scale it up or down?
Why would anyone scale down with this? I don't see the incentive not to just leave the role with all possible instances maxed out.
I can see why an availability set with several roles might want to distribute the cores between them depending on load. But is that all scaling is for?
You pay the price of one instance of the chosen size (A0 to D14) multiplied by the number of instances that are running, so scaling down to fewer running instances directly reduces the bill.
Try the Azure Pricing Calculator: the number of instances increases the charges.
When you use autoscaling, it clearly states:
Autoscale is saving you XX% off your current bill

Reduce costs of Azure availability set

I am planning on running SharePoint Foundation on one VM of size A3 and SQL Server on another of size A6. As far as I understand, this is not enough to achieve the SLA, and I should use 2 more instances (one for SharePoint and one for SQL Server) configured in 2 separate availability sets.
Can I use scaling (by CPU usage) to turn off one instance and leave only one running at a time in an availability set? This would reduce the costs, but I wonder whether this solution is good enough to achieve Azure's SLA. The way I see it, one instance runs at a time while the other is shut down, so I am billed for only one instance. When there is an update or a failure, the instance that has been running until then is shut down and the other comes online. Is this the way it works? Can I cut the costs of availability sets like this?
No, the SLA requires two running instances. However, if you want to control your costs, the approach you describe will work. Just keep in mind that the duration of a disruption depends on how quickly you detect that the primary VM has failed and how fast you can start the secondary VM. And depending on the nature of the service disruption, it may not be possible for you to start the secondary at all, so it's a risk.
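To make the trade-off concrete, here is a minimal sketch (hypothetical VM names) of the manual failover described above, using the azure-mgmt-compute Python SDK; something still has to run it on a schedule, and detection time adds to the outage:

from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

compute = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

def power_state(rg, vm_name):
    # The instance view carries statuses like "PowerState/running".
    view = compute.virtual_machines.instance_view(rg, vm_name)
    codes = [s.code for s in view.statuses if s.code and s.code.startswith("PowerState/")]
    return codes[0] if codes else "unknown"

# If the primary is not running, start the cold standby (billing resumes then).
if power_state("my-rg", "sharepoint-primary") != "PowerState/running":
    compute.virtual_machines.begin_start("my-rg", "sharepoint-secondary").result()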
