What's the easiest approach to automate startup tasks on Azure VMs? - azure

My current application scenario requires me to start several VMs simultaneously, each with a startup task (the startup task on each VM triggers the same script, but with different parameters). Previously, on EC2, I could easily start a number of instances, use Windows Task Scheduler to trigger the executable, read the data from each instance's user-data, and everything was done.
I tried the same approach in Azure but found a number of issues:
Tried using Task Scheduler to start a task "at startup", but that won't work because after sysprepping the user information is lost, so I can't run the same task.
Tried gpedit.msc and specifying a startup script. Won't work; I don't know why.
Tried using Task Scheduler to start a task at a specific time. Won't work; I received an error message saying "the operator or administrator has refused the request".
So what's the simplest approach to automating a startup task on an Azure VM?

Have you thought about using Azure PowerShell?
You could use the Start-AzureVM cmdlet to start your VMs.
To get started, you could use the following code snippet:
$vmUri = Get-AzureWinRMUri -ServiceName $serviceName -Name $vmName
#region start hpc-azure-nodes
Invoke-Command -ConnectionUri $vmUri -Credential $credential -ScriptBlock {
    param($taskParameters)
    # start your tasks with the according parameters
} -ArgumentList $taskParameters
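Before invoking the remote command, the target VMs need to be running. A rough sketch using the classic service-management cmdlets; the service and VM names are hypothetical placeholders:

# Start all VMs of the cloud service, then invoke the startup script on each one as above.
$serviceName = "my-cloud-service"
"vm-01", "vm-02", "vm-03" | ForEach-Object {
    Start-AzureVM -ServiceName $serviceName -Name $_
}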
A considerably fancier way would be to create a head VM in Azure and install an HPC cluster manager. Using the HPC cluster manager you can provision any number of Azure compute nodes within your quota, deploy your software, and start/stop it centrally from the cluster manager.
Additionally, the HPC cluster manager provides a number of helpful features:
add/remove nodes
connect via rdp to each node
view logging information for your jobs
and many, many more
There is also an HPC PowerShell module which provides a nice environment for automation (see the sketch below). Admittedly, this approach requires somewhat more effort, but in the long run it almost certainly pays off.
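A rough sketch of what job submission looks like in HPC PowerShell, assuming HPC Pack is installed on the head node; the job name and command lines are hypothetical placeholders:

Add-PSSnapin Microsoft.HPC          # HPC Pack ships its cmdlets as a snap-in
$job = New-HpcJob -Name "StartupTasks"
Add-HpcTask -Job $job -CommandLine "myapp.exe -input data1.csv" | Out-Null
Add-HpcTask -Job $job -CommandLine "myapp.exe -input data2.csv" | Out-Null
Submit-HpcJob -Job $job             # the cluster manager schedules the tasks on the nodes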

I think what you're looking for is PowerShell Desired State Configuration (DSC).
This is a management platform introduced in PowerShell 4.0. You can use it to configure a VM or a set of servers by providing a description of the desired state for each node in the system. Basically, you describe the state you want the server to be in when it boots up, and if the configuration has drifted, DSC corrects it.
A DSC script can:
Manage registry keys
Copy files and folders
Run PowerShell scripts
Deploy software
Manage server roles
Turn Windows features on/off
Here is a quick tutorial at blog.msdn.com that will get you started.
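As a rough sketch (the script paths, parameter, and marker file are hypothetical placeholders), a DSC configuration that runs a startup script once per node could look like this:

Configuration StartupTask
{
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node "localhost"
    {
        Script RunStartupScript
        {
            # TestScript decides whether SetScript needs to run; here a marker file
            # records that the startup script has already been executed.
            GetScript  = { @{ Result = "StartupTask" } }
            TestScript = { Test-Path "C:\Tasks\startup.done" }
            SetScript  = {
                & "C:\Tasks\startup.ps1" -Parameter "value"
                New-Item "C:\Tasks\startup.done" -ItemType File | Out-Null
            }
        }
    }
}

StartupTask                                            # compile the .mof
Start-DscConfiguration -Path .\StartupTask -Wait -Verbose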

Related

Azure Automation Use Case

I have a Python script that needs to be automated and is relatively memory- and CPU-intensive. For a monthly process it runs ~300 times, and each run takes somewhere between 10 and 24 hours to complete, depending on the input. It takes CSV file(s) as input and produces file(s) as output after processing. Each run is independent.
We need to use configs and be able to pass command-line arguments to the script. Certain imports that are not default Python packages need to be installed as well (requirements.txt). We also need to take care of the logging pipeline (EFK) setup (Elasticsearch and Kibana can be centralised, but where do we keep the log files and the Fluentd config?).
The last bit is monitoring: will we be able to restart in case of an unexpected failure?
What is the best way to automate this, and with which tools and technologies?
My thoughts
Create a Docker image of the whole setup (Python script, Fluentd config, Python packages, etc.). Then we somehow auto-deploy this image (on a VM, or something else?), execute the Python process, save the output files to some central location (a data lake, for example), and destroy the instance upon successful completion of the process.
So, is what I'm thinking possible in Azure? If it is, what are the cloud components I need to explore -- answer to my somehows and somethings? If not, what is probably the best solution for my use case?
Any lead would be much appreciated. Thanks.
Normally, for short-lived jobs I'd say use an Azure Function. The thing is, they have a maximum runtime of 10 minutes unless you put them on an App Service plan, but that will cost more unless you manually stop/start the App Service plan.
If you can containerize the whole thing, I recommend using Azure Container Instances, because you then only pay for what you actually use. You can use an Azure Function to start the container based on an HTTP request, a timer, or something like that.
You can set a restart policy to indicate what should happen in case of unexpected failures, see the docs.
Configuration can be passed from the Azure Function to the container instance or you could leverage the Azure App Configuration service.
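A rough sketch of creating such a container instance with a restart policy, using the Azure CLI from PowerShell; the resource group, registry, image, sizes, and environment variable are hypothetical placeholders (in practice the Azure Function would issue an equivalent call through the SDK or REST API):

az container create `
    --resource-group my-rg `
    --name monthly-run-001 `
    --image myregistry.azurecr.io/my-python-job:latest `
    --cpu 2 --memory 8 `
    --restart-policy OnFailure `
    --environment-variables INPUT_FILE=input001.csv `
    --registry-login-server myregistry.azurecr.io `
    --registry-username $acrUser --registry-password $acrPassword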
Though I don't know all the details, this sounds like a good candidate for Azure Batch. There is no additional charge for using Batch. You only pay for the underlying resources consumed, such as the virtual machines, storage, and networking. Batch works well with intrinsically parallel (also known as "embarrassingly parallel") workloads.
The following high-level workflow is typical of nearly all applications and services that use the Batch service for processing parallel workloads:
Basic Workflow
Upload the data files that you want to process to an Azure Storage account. Batch includes built-in support for accessing Azure Blob storage, and your tasks can download these files to compute nodes when the tasks are run.
Upload the application files that your tasks will run. These files can be binaries or scripts and their dependencies, and are executed by the tasks in your jobs. Your tasks can download these files from your Storage account, or you can use the application packages feature of Batch for application management and deployment.
Create a pool of compute nodes. When you create a pool, you specify the number of compute nodes for the pool, their size, and the operating system. When each task in your job runs, it's assigned to execute on one of the nodes in your pool.
Create a job. A job manages a collection of tasks. You associate each job to a specific pool where that job's tasks will run.
Add tasks to the job. Each task runs the application or script that you uploaded to process the data files it downloads from your Storage account. As each task completes, it can upload its output to Azure Storage.
Monitor job progress and retrieve the task output from Azure Storage.
(source)
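A rough sketch of this workflow using the Azure CLI's batch commands from PowerShell; the account, pool, job, image, and file names are hypothetical placeholders:

# Authenticate against the Batch account
az batch account login --name mybatchaccount --resource-group my-rg --shared-key-auth

# Create a pool of compute nodes
az batch pool create --id python-pool --vm-size Standard_D2s_v3 `
    --target-dedicated-nodes 10 `
    --image "canonical:ubuntuserver:18.04-lts" `
    --node-agent-sku-id "batch.node.ubuntu 18.04"

# Create a job bound to the pool and add one task per input file
az batch job create --id monthly-run --pool-id python-pool
az batch task create --job-id monthly-run --task-id task-001 `
    --command-line "python process.py --input input001.csv"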
I would go with Azure DevOps and a custom agent pool. This agent pool could include some virtual machines (maybe only one) with Docker installed. I would then install all the necessary packages that you mentioned in this Docker container, along with the DevOps agent (it is needed to communicate with the agent pool).
You could pass every parameter needed by the build container agents through Azure DevOps tasks and also have a common storage layer for the build and release pipelines. This way you could manipulate/process your files in the build pipeline and then, using the same folder, create a task in the release pipeline to export/upload those files somewhere.
As this script should run many times throughout the month, you could have multiple containers so that more than one job can run at a given time.
I follow the same procedure in a corporate environment. I keep a VM running Windows with multiple Docker containers to compile different code frameworks. Each container includes different tools and is registered to a custom agent pool. Jobs are distributed across those containers, and build and release pipelines integrate with multiple processing steps.
You are probably supposed to use Azure Data Factory for moving and transforming data.
You can then also use ADF to call Azure Batch, which will run the Python.
https://learn.microsoft.com/en-us/azure/batch/tutorial-run-python-batch-azure-data-factory
Adding more info to the question would probably lead to other, better suggestions.

How to automatically simulate failure in an Azure VM

I'm working on a Dynamics 365 project where we have environments (Azure VMs: AOS, BI). For testing our monthly API release we have to test our environments (resume and rollback scenarios). For that, I currently simulate a VM failure manually by adding a Throw "Error" statement to one of the files (desync or other) generated in the respective folder; once the VM has failed, I go to the VM, remove the Throw statement, and perform the resume/rollback operation.
Now I'm planning to automate this process via a YAML pipeline, where a pipeline task will cause the failure instead of me doing it manually.
One way I can think of is to write a PowerShell script that causes the VM failure, and then to perform the resume and rollback operations.
Can anyone help me with the best and easiest approach to achieve this, maybe without modifying a file, or in some other way?
Thanks in advance
You can write a PowerShell script to do this by using the Azure VM run command functionality.
With this functionality, you can run your PowerShell script remotely instead of RDPing to your VM and running it manually.
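A minimal sketch of triggering such a script from a pipeline, assuming the Az module; the resource group, VM name, script path, and parameter are hypothetical placeholders:

Invoke-AzVMRunCommand `
    -ResourceGroupName "d365-test-rg" `
    -VMName "aos-vm-01" `
    -CommandId "RunPowerShellScript" `
    -ScriptPath ".\inject-failure.ps1" `
    -Parameter @{ targetFolder = "D:\DeployablePackages" }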

Jenkins: Queue jobs if there are no available Azure VMs

Situation:
I have a pipeline job that executes tests in parallel. I use Azure VMs that I start/stop on each build of the job through PowerShell. Before the job runs, it checks whether there are available VMs on Azure (offline VMs) and then uses those VMs for that build. If there are no available VMs, I fail the job. Now, one of my requirements is that instead of failing the build, I need to queue the job until one of the nodes is offline/available, and then use those nodes.
Problem:
Is there any way for me to do this? Is there an existing plugin or build wrapper that will allow me to queue the job based on the status of the nodes? I was forced to do this because we need to stop the Azure VMs to reduce cost.
At the moment, I am still researching whether this is possible, or whether there is any other way for me to achieve it. I am thinking of a Groovy script that will check the nodes and, if none are available, manually add the job to the build queue until at least one is available. The closest plugin I have found is the Run Condition plugin, but I think it will not work.
I am open to any approach that will help me achieve this. Thanks

I want to be able to schedule a shutdown and restart of a VM on Azure using PowerShell

I have a VM that I would like to shut down/power off at a certain time and then restart at a certain time. I have tried this in Task Scheduler, and obviously I can shut down at a given time but can't then set the restart time.
I would like the VM to shut down at 10pm, restart at 5am, and then run a Task Scheduler task I have that restarts key services (that side of it works).
I have played around with Automation tasks within Azure but have run into a variety of RM login issues.
I just want the simplest way to schedule this.
There is no auto-startup as far as I'm aware, so you'd have to use some sort of automation. There is an official solution from Microsoft, which is somewhat overkill but should work (never tried it, to be honest). There are various other scripts online that work with Azure Automation; they are easily searchable (like so).
If you go to my blog you can also find an example script that does the same, and an example of a runbook that you can trigger manually to start/stop VMs; a minimal sketch of such a runbook follows.
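A minimal sketch of an Automation runbook, assuming the Automation account has a managed identity (or Run As account) with rights on the VM; the resource group and VM name are hypothetical placeholders. Linking the runbook to two Automation schedules (for example 22:00 with -Action Stop and 05:00 with -Action Start) gives the desired shutdown and restart times:

param(
    [ValidateSet("Start", "Stop")]
    [string] $Action = "Stop",
    [string] $ResourceGroupName = "my-rg",
    [string] $VMName = "my-vm"
)

# Sign in with the Automation account's managed identity
Connect-AzAccount -Identity | Out-Null

if ($Action -eq "Stop") {
    Stop-AzVM -ResourceGroupName $ResourceGroupName -Name $VMName -Force
}
else {
    Start-AzVM -ResourceGroupName $ResourceGroupName -Name $VMName
}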
I assume you have already gone through the suggestion mentioned below. The Automation runbook solution at https://learn.microsoft.com/en-us/azure/automation/automation-solution-vm-management is the way to achieve this. You can achieve auto-shutdown via the portal, but not restart and start.
Please also check this link, which talks about starting and shutting down VM roles through the REST API. You can wire up the endpoint with an Azure Function, Puppet, or Chef to automate this process:
VM - Start/Shut down Role(s): https://learn.microsoft.com/en-us/previous-versions/azure/reference/jj157189(v=azure.100)
If anything doesn't work for you, I would suggest leaving your feedback.
So, to answer your question simply: no, there is no simpler way to achieve this.
If you want, you can add your feedback for this feature suggestion here
https://feedback.azure.com/forums/34192--general-feedback

Is there an Ansible equivalent of the PowerShell command Switch-AzureRmWebAppSlot?

I am eager to automate the process of swapping between Web App slots on Azure:
https://octopus.com/docs/deploying-applications/deploying-to-azure/deploying-a-package-to-an-azure-web-app/using-deployment-slots-with-azure-web-apps
but for everything to date I have tried to avoid using PowerShell and stayed in a Linux/Ansible automation environment. I have looked through the Ansible Azure modules:
http://docs.ansible.com/ansible/latest/list_of_cloud_modules.html#azure
for something to help out here, but other than my favorite module, azure_rm_deployment, I don't see anything that can help me automate this kind of procedure. Currently I am using Azure with all Node/Linux resources, and setting up a Windows VM to invoke PowerShell (PS) commands like Switch-AzureRmWebAppSlot seems to deviate from that plan. I could always try to debug what the PS command is doing and attempt to simulate it; however, if anyone has a better solution it would be great to hear it!
No, the closest you can get is to run a script to do the job (PowerShell or Azure CLI). Azure PowerShell works on Linux (that's what I'm using for some of my Ansible tasks); alternatively, the Azure CLI works as well.
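A minimal sketch of the swap itself, callable from an Ansible shell/command task; the resource group, app, and slot names are hypothetical placeholders (on Linux the newer Az module equivalent is Switch-AzWebAppSlot, and the Azure CLI offers the same operation):

Switch-AzureRmWebAppSlot `
    -ResourceGroupName "my-rg" `
    -Name "my-webapp" `
    -SourceSlotName "staging" `
    -DestinationSlotName "production"

# Equivalent Azure CLI call:
# az webapp deployment slot swap --resource-group my-rg --name my-webapp --slot staging --target-slot production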
If there is a way to use ARM to perform the action, you should be good using the azure_rm_deployment module. A trick I've used in the past is to run Fiddler with HTTPS tracing on while performing the action with Azure PowerShell. For Azure providers that use the ARM endpoints, there's a good chance this will leave you with a semi-clean log of the endpoints/JSON payloads you need to perform the action.
It's fairly hacky, but for a long time this was the only way to get anything done with web apps outside of PowerShell, due to the patchiness of Azure's documentation.
