Is there a way to script something using runbooks in Azure so that a script checks the CPU usage and, if the average over two hours is less than 10%, shuts down the VM?
Has anyone got an example script?
I do not have example code, but I would start with Azure Monitor. With Azure Monitor you can create an alert with specific criteria, such as CPU usage over a given time window.
Create, view, and manage alerts using Azure Monitor
When the alert fires, you can invoke an Azure Automation webhook to perform the remediation action.
Webhook actions for alert rules
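If you would rather do the whole check inside a scheduled runbook instead of an alert, a rough sketch could look like the one below (it assumes the Az modules are available in the Automation account and that the runbook authenticates with a managed identity; the resource group and VM names are placeholders):

# Deallocate a VM if its average CPU over the last two hours is below 10%.
Connect-AzAccount -Identity | Out-Null

$resourceGroup = "MyResourceGroup"   # placeholder
$vmName        = "MyVm"              # placeholder

$vm = Get-AzVM -ResourceGroupName $resourceGroup -Name $vmName

$metric = Get-AzMetric -ResourceId $vm.Id -MetricName "Percentage CPU" `
                       -StartTime (Get-Date).AddHours(-2) -EndTime (Get-Date) `
                       -TimeGrain 00:05:00 -AggregationType Average `
                       -WarningAction SilentlyContinue

$avgCpu = ($metric.Data.Average | Measure-Object -Average).Average

if ($avgCpu -lt 10) {
    Write-Output "Average CPU $avgCpu% over 2 hours - deallocating $vmName"
    Stop-AzVM -ResourceGroupName $resourceGroup -Name $vmName -Force
}
else {
    Write-Output "Average CPU $avgCpu% - leaving $vmName running"
}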
Azure alert rules are probably the way to go here. Here's an end-to-end solution with that.
A simpler way to do it is using a tool like VMPower which isn't free but is inexpensive and works best when you need to do this across multiple VMs with different auto-stop configurations.
In order to save expenses in Azure, I'm trying to scale the resources up and down depending on requirements. Team leads will update the resource requirements in SharePoint, and the runbook needs to be executed with the SharePoint data. If such resources are not required on weekends but must be operational on weekdays, they should be stopped or reduced in size. I need to use automation to do this for all of the VMs and App Services at the subscription level every Friday. Is there a method to automate this procedure using PowerShell?
I'd be glad to receive any input. Thanks in advance.
I'm looking for feedback on starting/stopping VMs and scaling Azure App Services; the same may apply to other relevant resources on weekends. How can we accomplish this with Azure PowerShell?
The best way to do this is with an Azure Automation runbook scheduled to run on the specified day and time. To target the VMs, Azure tags are very helpful.
Your script should check, among other things (a sketch follows this list):
whether the VM has a specific tag (e.g., StopVM:Friday 11:00PM)
whether maintenance mode is enabled in your monitoring solution
whether the VM is already stopped
whether a backup is required first
that the VM ends up deallocated (not just stopped)
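As a rough sketch of that kind of scheduled runbook (the tag name and value, and the managed-identity authentication, are assumptions; adjust to your own conventions and add the backup/maintenance checks you need):

# On the Friday schedule, deallocate every VM carrying a "StopVM" tag,
# skipping VMs that are already deallocated. Tag name and value are illustrative.
Connect-AzAccount -Identity | Out-Null

$taggedVms = Get-AzVM -Status | Where-Object { $_.Tags["StopVM"] -eq "Friday" }

foreach ($vm in $taggedVms) {
    if ($vm.PowerState -eq "VM deallocated") {
        Write-Output "$($vm.Name) is already deallocated - skipping"
        continue
    }
    # Insert your own backup / maintenance-window checks here before stopping.
    Write-Output "Deallocating $($vm.Name)"
    Stop-AzVM -ResourceGroupName $vm.ResourceGroupName -Name $vm.Name -Force
}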
The VM's Auto-Shutdown option is also available for this kind of activity.
Because sometimes all you need is a quick and dirty way to save money:
And if you want to build something yourself, there's an API to shut down and start VMs.
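The ARM REST calls behind that are roughly the following (shown here via Invoke-AzRestMethod; the subscription, resource group, and VM names are placeholders, and the api-version may differ for you):

# Call the ARM REST API directly to deallocate and later start a VM.
$sub  = "00000000-0000-0000-0000-000000000000"
$rg   = "MyResourceGroup"
$name = "MyVm"
$base = "/subscriptions/$sub/resourceGroups/$rg/providers/Microsoft.Compute/virtualMachines/$name"

# Stop billing: deallocate (a plain power-off keeps the hardware reserved and billed)
Invoke-AzRestMethod -Path "$base/deallocate?api-version=2023-07-01" -Method POST

# Bring it back later
Invoke-AzRestMethod -Path "$base/start?api-version=2023-07-01" -Method POST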
I am running a few data pipelines in Azure Data Factory, and they use the Azure Integration Runtime for compute.
I am trying to monitor the CPU/memory usage the pipelines consume on the Azure IR.
I have checked Azure Monitor, but the CPU/memory metrics there appear to be for the Self-Hosted Integration Runtime.
Also, with diagnostic settings enabled, I tried to find the details in the logs too, but they are not available there.
Can anyone suggest other options?
If you are referring to the Azure AutoResolveIntegrationRuntime, then no, there is not, and this is why (from https://www.cathrinewilhelmsen.net/integration-runtimes-azure-data-factory/):
Microsoft has massive elastic pools across the various locations/regions where Azure is offered, and at runtime ADF determines what pool/hardware it will use to perform the pipeline activities. So there is really no way (and no need) to monitor the Azure AutoResolve IR. But if you are interested in monitoring Self-Hosted IRs, there are many ways to do it.
One simple and straightforward way is to create Azure dashboards from the Metrics section of Azure Monitor, which give a good visual representation of usage and resources over time.
In my setup I visualize the Integration Runtime itself (CPU/memory) as well as the Azure VM that is hosting it. On top of this, you can go into the Metrics dashboard and set up alerts if certain conditions are met (e.g., average CPU usage is over 75% for the last 15 minutes). These alerts can send you a text message or email, and can even do things as complicated as triggering a Logic App or webhook for automated scaling up/out, advanced notifications, etc.
This, in my opinion, is the best way to monitor, but another option is to call the Azure Data Factory REST API to get monitoring data for the integration runtimes:
POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DataFactory/factories/{factoryName}/integrationRuntimes/{integrationRuntimeName}/monitoringData?api-version=2018-06-01
But this method would require you to incrementally pull the data, store it, parse it, and then visualize it or act on it, when all of that is already very well built in for you. Sometimes it's fun to recreate wheels, though.
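If you do go the REST route, a quick way to call that endpoint from PowerShell could look like the sketch below (the subscription, resource group, factory, and integration runtime names are placeholders, and the response shape is only what I'd expect for a self-hosted IR):

# Pull monitoring data for an integration runtime via the ADF REST API.
$path = "/subscriptions/00000000-0000-0000-0000-000000000000" +
        "/resourceGroups/MyResourceGroup" +
        "/providers/Microsoft.DataFactory/factories/MyFactory" +
        "/integrationRuntimes/MyIntegrationRuntime" +
        "/monitoringData?api-version=2018-06-01"

$response = Invoke-AzRestMethod -Path $path -Method POST
# For a self-hosted IR the body should contain per-node CPU/memory figures
($response.Content | ConvertFrom-Json).nodes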
Yes, it is possible to monitor the Azure Integration Runtime.
"Pipeline Runs" in Monitoring has the option to check the CPU utilization filtered by pipeline, integration runtime, and other criteria. You can find how it's done here.
Recently I was introduced to Azure, and I have an application that uses a lot of CPU (almost 80%) during the morning hours between 9 AM and 1 PM. After that, the CPU utilization drops to a minimal 10% for the rest of the day. In order to reduce my cost, I was thinking of implementing vertical auto-scaling for my application. When I read more on this, the only approach I could find was an Automation account with a runbook, but my question is: is there any other way to implement vertical auto-scaling for an Azure IaaS VM apart from an Automation account?
If yes, please share the approach.
Yes, you can use Azure PowerShell and/or the Azure CLI to execute scaling commands on a VM. Here are some PowerShell examples: https://learn.microsoft.com/en-us/azure/virtual-machines/windows/resize-vm?toc=%2Fazure%2Fvirtual-machines%2Fwindows%2Ftoc.json
You would then just have to schedule the script to run either locally or in an Azure service such as Functions, Container Instances, etc.
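As a rough sketch of the resize itself (the resource group, VM name, and target size are placeholders; note that resizing a running VM restarts it, and some sizes require deallocating first):

# Vertically scale a VM by changing its size. Names and sizes are placeholders.
$vm = Get-AzVM -ResourceGroupName "MyResourceGroup" -Name "MyVm"
$vm.HardwareProfile.VmSize = "Standard_D2s_v3"
Update-AzVM -ResourceGroupName "MyResourceGroup" -VM $vm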
If you want to scale a single VM vertically based on a performance metric (CPU, memory, etc.), you can use the classic metric alerts system to do that. When those alerts fire based on thresholds you set, they can invoke a webhook or a Logic App to trigger execution of a script or ARM template.
Can we use Azure Functions along with Azure Batch? Please advise.
I am working on a POC to decide which one to use for our background processes.
I too was in a similar dilemma until I tried both of them for my use case.
The major difference between the two is that an Azure Function (on the Consumption plan) has a hard timeout limit of 10 minutes which you cannot exceed. What I mean is that if your script/execution runs beyond 10 minutes, the Azure Functions runtime will kill it automatically.
Azure Batch, on the other hand, is essentially a configuration of pools of VMs in which you can run long-running jobs without worrying about execution time, and the VMs can be low-cost. The difference between Batch and plain Azure VMs is that with Batch you can configure periodic/scheduled jobs out of the box, whereas with plain Azure VMs you need to write your code so that it runs like a periodic job.
And yes, it is possible to use Functions with Azure Batch. You can expose your script as an HTTP-triggered Function, which you can then call (GET/POST) from the Azure Batch VMs.
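For example, a Batch task (or its start script) could call the function with something like the following sketch (the function URL and key are placeholders for your own HTTP-triggered function):

# A Batch task calling an HTTP-triggered Azure Function.
$uri  = "https://myfunctionapp.azurewebsites.net/api/MyFunction?code=<function-key>"
$body = @{ jobId = $env:AZ_BATCH_JOB_ID; taskId = $env:AZ_BATCH_TASK_ID } | ConvertTo-Json

Invoke-RestMethod -Uri $uri -Method Post -Body $body -ContentType "application/json"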
Hope it helps.
Maybe we should expand this topic to Azure services for batch processing in general. I did come across an article from Microsoft that goes through these options in general (including WebJobs and Kubernetes).
But frankly, even after reading the article, the confusion remains. For example, Azure Batch jobs can be scheduled, but I'm not sure whether they can be triggered by other Azure services the way Azure WebJobs handles it. I get the feeling that Azure Batch is pitched at scenarios where you need highly parallel compute at low cost, because none of the other options directly give you low-priority, low-cost compute instances. Correct me, please!
I thought one of the advantages of Azure was that I could turn services on and off depending on when I want them to be available.
However I can't see how to pause my App Service plan.
Is it possible?
I want to use the S1 tier so that I can play with what it offers. However I want to be able to pause the cost accumulation when I am not using it.
I see from the App Service pricing help that an app will still be billed even when it is in the stopped state.
Yet the link also clearly states that I only pay for what I use. So how does that work?
If you put your hosting plan onto the Free tier, you will stop being charged for it. However, if you have things like deployment slots and certificates, these will be deleted.
The ability to turn services on and off is more about being able to scale services, so if you need 50 servers for an hour you can easily do that.
What you can do to make your solution temporary is to create a deployment script using PowerShell or Resource Manager templates. Then you can deploy your solution for exactly as long as you need it and delete it again when you don't. In this sense you can turn your services on and off at a whim.
Azure provides building blocks for you to create the solution you need, it is up to you to figure out how to best use those building blocks to create the solution you seek.
Edited to answer extended question.
If you want to use the S1 pricing plan, and not have it charge when you are not using it, the only way of achieving that is by using automation. Fortunately, this is reasonably trivial to achieve.
If you look at this template, it is pretty much all configured to deploy a website from GitHub to Azure on demand. If you edit it to fit your needs, you can have a new Azure website online within 2 minutes of running the script.
Then you would have another script that deleted it once you had finished.
Doing it this way you would lose no functionality, and you would probably learn quite a bit about what is possible with Azure along the way.
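A hedged sketch of that deploy-then-delete pattern (the resource group name, location, and template path are placeholders):

# "Turn on": create the resource group and deploy the template
$rg = "MyTempSite-rg"
New-AzResourceGroup -Name $rg -Location "westeurope"
New-AzResourceGroupDeployment -ResourceGroupName $rg -TemplateFile ./azuredeploy.json

# "Turn off": delete everything in one go when you're finished
Remove-AzResourceGroup -Name $rg -Force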
App Service Plan
An App Service plan is the hardware that a web app runs on. In the Free and Shared tiers, your web apps share an instance with other web apps. In the other tiers you have a dedicated virtual machine, and it is this virtual machine that you pay for. It is therefore irrelevant whether or not you have web apps running on your App Service plan; you still have a virtual machine running, and you will be charged for it.
To change the App Service Plan via PowerShell, you can run the following command
Set-AzureRmAppServicePlan -ResourceGroupName $rg -Name $AppServicePlan -Tier Free
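With the current Az module, the equivalent would be something like the following sketch (the plan name and resource group are placeholders):

# Drop the plan to the Free tier, then scale it back up later.
Set-AzAppServicePlan -ResourceGroupName "MyResourceGroup" -Name "MyPlan" -Tier Free
# ...and back to S1 when you need it again:
Set-AzAppServicePlan -ResourceGroupName "MyResourceGroup" -Name "MyPlan" -Tier Standard -WorkerSize Small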
I was able to accomplish this using the dashboard by selecting the App Service Plan, clicking Scale up (App Service Plan), and then from there if you click Dev/Test you can select the Free tier.
As others have mentioned, you need to script this. Fortunately, I created a repository with one-click deployment to your Azure resources.
https://github.com/jraps20/jrap-AzureVerticalScaling
The steps are intended to be as simple and generic as possible:
Execute the one-click deployment from the repo readme
Select the subscription, resource group etc.
Deploy resource to Azure
Set up your schedule to scale up and scale down as-needed
The scripting relies on runbooks and variables to maintain the previous state of each App Service plan and the App Services within those plans. Some App Services cannot be scaled due to specific settings being used (AlwaysOn, Use32BitWorkerProcess, ClientCertEnabled, etc.). In those cases, the previous values are stored as variables prior to scaling down, and the original values are reapplied when the services are scaled back up.
For more clarity, I have written a blog post that goes into detail. The post is pertaining to Sitecore, but applies to any App Service setup- Drastically Reduce Azure PaaS Hosting Costs in Non-Prod Environments With Scheduled Vertical Scaling. It also includes a brief video tutorial to show its use case.
Myself and others have been using this repository/approach for well over a year and it works great. I mostly use it for POCs, to reduce costs when I'm not actively working on something. However, its main intention was to target non-production environments and save costs during non-work hours.
An Azure App Service plan is just a logical concept: a set of features and capacity that you can share across multiple apps. I don't think you can "pause" a plan; instead you can pause your service, and, depending on the billing model of each service, you might or might not get charged.
Pausing = Delete or lower tier.
Scripting is the key.
Design Diagram
Use scripts to create (also consider shared resources)
Delete using scripts
Use scripts to recreate.
e.g., if we use resource groups properly, one per environment, then Export-AzureRmResourceGroup will create a template for us (everything in the resource group is pulled out as a template), so we can delete the resource group and recreate it at any time.
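With the current Az module, the same idea looks roughly like this sketch (the resource group name, location, and file path are placeholders; exported templates usually need a little cleanup before they redeploy):

# Capture the environment as a template, delete it, recreate it later.
$rg = "dev-environment-rg"

# Pull everything in the resource group out as an ARM template
Export-AzResourceGroup -ResourceGroupName $rg -Path ./dev-environment.json

# Delete the whole environment when it is not needed (e.g. Friday evening)
Remove-AzResourceGroup -Name $rg -Force

# Recreate it later from the exported template (e.g. Monday morning)
New-AzResourceGroup -Name $rg -Location "westeurope"
New-AzResourceGroupDeployment -ResourceGroupName $rg -TemplateFile ./dev-environment.json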
To pause a VM and stop billing, you need to shut it down and deallocate it. Just shutting it down still keeps the capacity reserved as if it were running, and you are billed for it.
Storage can't be shut down, but it can be moved to lower-cost tiers.
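In PowerShell terms the distinction looks like this (the VM and resource group names are placeholders):

# Deallocates the VM: compute billing stops (disks/storage still accrue cost)
Stop-AzVM -ResourceGroupName "MyResourceGroup" -Name "MyVm" -Force

# Only powers the OS off but keeps the hardware allocated, so you keep paying
Stop-AzVM -ResourceGroupName "MyResourceGroup" -Name "MyVm" -Force -StayProvisioned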