Terminate a resource group automatically based on a time window - Azure

I'm not sure if this is off-topic for SO, but I really need help here. In my project we run a load test on a weekly basis, and we are taking advantage of ARM and the Azure CLI to make it a fully automated test framework, from spinning up the VMs to report generation.
After the test, for now, we terminate the resource group manually, and we have a few thoughts on making it automatic, e.g. by running a cron job. So I'm curious whether there is a better approach to do a graceful termination/destroy (not just stop) automatically using the Azure CLI based on a time window.

No, there is no built-in way, but if everything is automated you can run az group delete xxx at the end of your script/automation routine.
On top of that, take a look at Event Grid. It's a new service that can create actions in response to events.
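For example, a minimal PowerShell sketch of that last step, where the resource group name and the load-test script are placeholders for your own:

    # Run the automated load test, then tear the whole resource group down.
    $resourceGroup = "rg-loadtest"                          # placeholder name

    # ... spin up VMs, run the test, generate the report ...
    .\Invoke-LoadTest.ps1 -ResourceGroup $resourceGroup     # hypothetical wrapper script

    # Delete everything in one go; --yes skips the prompt, --no-wait returns immediately.
    az group delete --name $resourceGroup --yes --no-wait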

Related

Docker container runs great locally. Now I need it on a schedule in the cloud

I've containerized a piece of logic that I have to run on a schedule. If I do my docker run locally (whether my image is local or pulled from the hub), everything works great.
Now, though, I need to run that "docker run" on a schedule, in the cloud.
Azure would be preferred, but honestly, I'm looking for the easiest and cheapest way to achieve this goal.
Moreover, my schedule can change, so maybe today the job runs once a day, but in the future that can change.
What do you suggest?
You can create an Azure Logic App to trigger the start of an Azure Container Instance. As you have a "run-once" (every N minutes/hours/...) container, the restart policy should be set to "Never", so that the container only executes and then stops after the scheduled run.
The Logic App needs permission to start the container, so add a role assignment on the ACI for the managed identity of the Logic App.
The workflow is simply a Recurrence trigger that starts the existing container (in the example, every minute).
This should be quite cheap and uses only Azure services, without any custom infrastructure.
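As a rough sketch of the container side from the CLI (the resource group, container name, and image are placeholders), the instance is created once with restart policy Never, and the Logic App's recurrence step then only has to start it:

    # Create the container group once with restart policy "Never"
    # ("rg-jobs", "nightly-job" and the image are placeholder names).
    az container create --resource-group rg-jobs --name nightly-job `
        --image myregistry.azurecr.io/my-job:latest --restart-policy Never

    # This is effectively what the Logic App's scheduled step does on each run:
    az container start --resource-group rg-jobs --name nightly-job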
Professionally I have used four ways to run cron jobs/scheduled builds. Here is a quick summary of each with its pros and cons.
GitLab scheduled builds (free)
My personal preference would be to setup a scheduled pipeline in GitLab. Simply add the script to a .gitlab-ci.yml, configure the scheduled build and you are done. This is the lightweight option and works in most cases, if the execution time is not too long. I used this approach for scraping simple pages.
Jenkins scheduled builds (not free)
I used the same approach as with GitLab in Jenkins. But Jenkins comes with more overhead, and you have to set up and maintain Jenkins itself, possibly across multiple machines.
Kubernetes CronJob (expensive)
My third approach would be using a Kubernetes CronJob. However, I would only use this if the job consumes a lot of memory or has a long execution time. I used this approach for dumping really large data sets (a minimal sketch follows the last option below).
Run a cron job from a container (expensive)
My last option would be to deploy a docker container on either a VM or a Kubernetes cluster and configure a cron job from within that container. You can even use docker-in-docker for that. This gives maximum flexibility, but comes with some challenges. Personally I like the separation of concerns when it comes to downtime etc., which is why I never run a cron job as the main process.
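A minimal sketch of the Kubernetes CronJob option mentioned above (the job name, image, and schedule are placeholders):

    # Create a CronJob that runs the containerized dump every night at 02:00
    # ("data-dump" and the image reference are placeholder names).
    kubectl create cronjob data-dump --image=myregistry.azurecr.io/data-dump:latest --schedule="0 2 * * *"

    # Inspect the schedule and the history of past runs.
    kubectl get cronjob data-dump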

How to determine if a Databricks cluster is ready using the API?

I'm calling the /clusters/events API with PowerShell to check if my Databricks cluster is up and ready for the next step in my setup process. Is this the best approach?
Currently, I grab the array of ClusterEvent and check the most recent ClusterEvent for its ClusterEventType. If it's RUNNING, we're good to go and we move on to the next step.
Recently, I discovered my release pipeline was hanging while checking the cluster status. It turned out that the cluster was in fact running, but its status was DRIVER_HEALTHY, not RUNNING. So I changed my script and everyone is happy again.
Is there an official API call I can make that returns yes/no, true/false, etc., so I don't need to work out which ClusterEventType means the cluster is running?
There is no such API that gives a yes/no answer about the cluster status. You can use the Get command of the Clusters REST API - it returns information about the current state of the cluster, so you just need to poll until it gets to the RUNNING state.
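A minimal PowerShell sketch of that polling loop, where the workspace URL, PAT token, and cluster ID shown are all placeholders:

    # Poll the Clusters Get API until the cluster reports a stable state.
    $workspaceUrl = "https://adb-1234567890123456.7.azuredatabricks.net"   # placeholder
    $token        = "<personal access token>"                              # placeholder
    $clusterId    = "<cluster id>"                                         # placeholder
    $headers      = @{ Authorization = "Bearer $token" }

    do {
        Start-Sleep -Seconds 30
        $cluster = Invoke-RestMethod -Method Get -Headers $headers `
            -Uri "$workspaceUrl/api/2.0/clusters/get?cluster_id=$clusterId"
        Write-Host "Cluster state: $($cluster.state)"
    } while ($cluster.state -in @("PENDING", "RESTARTING", "RESIZING"))

    if ($cluster.state -ne "RUNNING") { throw "Cluster ended up in state $($cluster.state)" }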
P.S. If you're doing that as part of a release pipeline or something similar, you can look at the Terraform provider for Databricks - it handles waiting for the cluster to be running (and other things) automatically, and you can combine it with other tasks, like provisioning of Azure resources, etc.

I want to be able to schedule a shutdown and restart of a VM on Azure using PowerShell

I have a VM that I would like to shut down/power off at a certain time and then restart at a certain time. I have tried this in Task Scheduler and obviously I can shut down at a given time, but I can't then set the restart time.
I would like the VM to shut down at 10pm and restart at 5am, and then run a Task Scheduler task I have that restarts key services (that side of it works).
I have played around with automation tasks within Azure but have run into a variety of RMLogin issues.
I just want the simplest way to schedule this.
There is no built-in auto-startup as far as I'm aware, so you'd have to use some sort of automation. There is an official solution from Microsoft, which is somewhat overkill but should work (I've never tried it, to be honest). There are various other scripts online that work with Azure Automation; they are easily searchable.
If you go to my blog you can also find an example script that does the same, and an example of a runbook that you can trigger manually to start/stop VMs.
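For reference, the core of such a runbook is only a couple of cmdlets. A rough sketch (resource group and VM name are placeholders) that you would link to two Automation schedules, one at 10pm with -Action Stop and one at 5am with -Action Start:

    param (
        [string]$ResourceGroupName = "rg-myvm",   # placeholder
        [string]$VMName            = "myvm",      # placeholder
        [ValidateSet("Start", "Stop")]
        [string]$Action            = "Stop"
    )

    # Sign in with the Automation account's managed identity.
    Connect-AzAccount -Identity | Out-Null

    if ($Action -eq "Stop") {
        # Deallocates the VM so it stops incurring compute charges.
        Stop-AzVM -ResourceGroupName $ResourceGroupName -Name $VMName -Force
    }
    else {
        Start-AzVM -ResourceGroupName $ResourceGroupName -Name $VMName
    }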
I assume you have already gone through the suggestion mentioned below. The Start/Stop VMs automation solution, https://learn.microsoft.com/en-us/azure/automation/automation-solution-vm-management, is the way to achieve this. You can configure auto-shutdown via the portal, but not restart and start.
Please check this link, which talks about the Start and Shut Down Role operations of the VM through the REST API. You can wire up the endpoint with an Azure Function, Puppet, or Chef to automate this process:
VM - Start/Shut down Role(s): https://learn.microsoft.com/en-us/previous-versions/azure/reference/jj157189(v=azure.100)
If none of this works for you, I would suggest leaving your feedback.
So, to simply answer your question: no, there is not a simpler way to achieve this.
If you want, you can add your feedback for this feature suggestion here:
https://feedback.azure.com/forums/34192--general-feedback

How do I execute a script before VM delete on an Azure Scale Set

On Azure, I'm using a Scale Set with PowerShell DSC to run a script that configures my VMs every time a new one is created. But now I need to run a PowerShell script every time a VM is deleted from the scale set. How can I do that?
You should be able to register a script to run on shutdown with the Group Policy Editor, like this: http://lifehacker.com/use-group-policy-editor-to-run-scripts-when-shutting-do-980849001. However, scale sets currently don't guarantee that your code will have time to run before the VM delete actually happens; this is on the backlog, though, so hopefully we'll have this functionality soon.
Hope this helps! :)

Is it possible to just specify an Azure Automation job once, without affecting the schedule?

I would like to specify an Azure Automation job once, i.e. "Scale DB1 to Standard S2 using credential - mycredential". Currently I have to provide these parameter values every time I set up a new schedule. There seems to be no concept of creating a specific job and a specific schedule and then linking the two. Yes, you do link jobs to schedules, but a schedule can only take one job, so you have to unlink the schedule first before being able to link it to another job.
I guess the desired relationship is:
Job -< JobSchedule >- Schedule
and what we have is
Job -< Schedule.
But the job can only be defined when setting up a schedule.
I would like a Job list like:
jb-WeekDayNightScaleDownQAV11
jb-WeekDayNightScaleDownQAV12
jb-WeekDayMorningScaleUpQAV11
jb-WeekDayMorningScaleUpQAV12
jb-WeekendStartScaleDownQAV11
jb-WeekendStartScaleDownQAV12
For Schedules I would want:
sch-WeekdayMorning
sch-WeekdayNight
sch-WeekendStart
Having defined these 9 objects, I would like to edit them individually without affecting any of the others. Some linkages might look like:
sch-WeekdayMorning
-jb-WeekDayNightScaleDownQAV11
-jb-WeekDayNightScaleDownQAV12
However I fear this is not possible.
It seems that one cannot define an Azure Automation job with predefined parameter values. Or is this to do with the runbook code, so I should add two instances of the "Set-AzureSqlDatabaseEdition" runbook and then edit the code of each to point to separate database servers, etc.? All a bit puzzling...
Thanks in advance for any help.
EDIT
I have checked out linking Azure Scheduler to Azure Automation, and there does seem to be an opinion that it is pretty complex given its current abilities, which I agree with. Link: How to Link Azure Scheduler to Azure Automation
You mention above that a schedule can only take one job -- that is not the case. You probably meant you can only associate a runbook with a schedule once (one job per runbook per schedule), which is correct. Just want to clarify for anyone else who comes across this post.
For what you are trying to do, I would recommend creating a runbook that starts your other runbooks (inline or as separate jobs, up to you). Put that runbook on a schedule that covers all intervals needed, and have it, within the runbook, check the current time to determine what other runbooks it should start, and what parameters it should pass them.
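A rough sketch of such a wrapper runbook (the Automation account, resource group, target runbook and its parameter names are all placeholders), put on a single hourly schedule:

    # Wrapper runbook: one schedule, and the runbook decides what to start.
    Connect-AzAccount -Identity | Out-Null

    $hour   = (Get-Date).Hour
    $common = @{
        AutomationAccountName = "my-automation-account"   # placeholder
        ResourceGroupName     = "rg-automation"           # placeholder
    }

    if ($hour -eq 6) {
        # Morning: scale the QA database up (parameter names are hypothetical).
        Start-AzAutomationRunbook @common -Name "Set-AzureSqlDatabaseEdition" `
            -Parameters @{ ServerName = "qav11-server"; DatabaseName = "DB1"; ServiceObjective = "S2" }
    }
    elseif ($hour -eq 22) {
        # Night: scale back down.
        Start-AzAutomationRunbook @common -Name "Set-AzureSqlDatabaseEdition" `
            -Parameters @{ ServerName = "qav11-server"; DatabaseName = "DB1"; ServiceObjective = "S0" }
    }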
In addition, we will have a blog post coming out soon on how to (much more easily) link Azure Scheduler jobs to start Azure Automation runbooks (using Azure Automation webhooks). This is another way to do this.
Update: Here's the blog post I mention: http://azure.microsoft.com/blog/2015/08/05/scheduling-azure-automation-runbooks-with-azure-scheduler-2/
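For completeness, kicking off a runbook through an Automation webhook is just an HTTP POST, which is what an Azure Scheduler job would do. A minimal sketch, where the webhook URL and body are placeholders:

    # Trigger the runbook via its webhook (treat the full URL as a secret).
    $webhookUrl = "https://s1events.azure-automation.net/webhooks?token=<token>"   # placeholder
    Invoke-RestMethod -Method Post -Uri $webhookUrl -Body (@{ DatabaseName = "DB1" } | ConvertTo-Json)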
