Automation account rerun, a job schedule already exists - Azure

I have created a CD pipeline in Azure DevOps that deploys an Azure Automation account and a runbook, schedule, and job schedule through ARM templates.
Everything works fine except when rerunning the template. My template is part of a larger deployment process that is still under construction, so until the full scope is finished, the ARM template that creates the runbook, schedule, and job schedule will rerun with every release.
The problem right now is the following: whenever I rerun the template with a new release pipeline, I receive the following error
A job schedule for the specified runbook and schedule already exists.
At first I tried to be smart and added a GUID to the name of my job schedule, but the job schedule itself attaches the runbook to the schedule, and the deployment was smart enough to figure out that the schedule was already connected to the runbook. Is there a way of handling this that stays within the DevOps mindset / process, so that I can rerun my templates without problems?
The workaround I have at the moment is to delete the schedule at every deployment, but that seems like a very bad workaround.

A related feature request from the UserVoice / feedback forum is here; it's currently in the triaged state.

The job schedule id needs to be unique for each deployment, as per the Azure documentation.
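A minimal sketch of one way to do that: generate a fresh GUID at deployment time and pass it in as the job schedule resource name. The parameter name jobScheduleName and the file names below are assumptions about your template, not part of it.

# Generate a new GUID so the job schedule resource gets a unique id on every deployment.
$jobScheduleName = [guid]::NewGuid().ToString()

New-AzureRmResourceGroupDeployment `
    -ResourceGroupName "MyAutomationRg" `
    -TemplateFile ".\automation.json" `
    -TemplateParameterFile ".\automation.parameters.json" `
    -jobScheduleName $jobScheduleName

This only works if your template exposes the job schedule name as a parameter; the release pipeline then supplies a new value on every run.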

Related

[Azure Terraform]: Create Start/Stop VM Solution

I am using Terraform to create an Automation Account in Azure.
The following resource in Azure provider does the job: azurerm_automation_account.
OK, so I got my AA created... here is where the problems arise.
"Run As" account: there seems to be a way to create it from Terraform... but the process is cumbersome. I have lost hope, and will probably resort to enabling it manually from the Azure portal (it is just one click)... but it will break my automation pipeline :(
"Start/Stop VM Solution": I need the PowerShell runbooks in this solution to start/stop VMs according to a given schedule. There is a resource in the Azure provider called "azurerm_automation_runbook". It has 2 useful arguments to reference runbook scripts:
"content": with it I could "load" the content of a local PowerShell script. I know this would work (I could manually download the .ps1 script used by the "Start/Stop VM Solution" and use "content" to load it), but I would be missing any fixes/updates made by Microsoft to its code.
"publish_content_link": with it I could point to the URI of a given PowerShell runbook. I have looked in the "Runbook Gallery" for the runbooks contained in the "Start/Stop VM Solution" (and have not found them). Has anyone had any luck with this? A different approach could be to "create" the "Start/Stop VM Solution" from a Terraform script (this would automatically populate the desired runbooks in my Automation Account)... but I am not sure if this would be possible.
Thanks in advance.
For point 1: I also found it very challenging, and while things have improved lately, there still doesn't seem to be an easy, straightforward way of creating the Run As account. I eventually resorted to creating it manually from the Azure portal, but below are potential areas you can explore:
I'm not sure if you've considered using the external data source from Terraform to execute the PowerShell script from Microsoft. It's still a pain because of the last step, where you have to authenticate manually, but it still brings you closer to having a blueprint of your environment. I'm not sure, though, how it would behave when running the Terraform script a second time.
For point 2: Could you confirm that the script you want to use is a PowerShell script and not a PowerShell Workflow script? Also, could you please elaborate on this approach (I have a feeling it might be the best one):
A different approach could be to "create" the "Start/Stop VM Solution" from a Terraform script (this will automatically populate the desired runbooks in my Automation Account)
If you look at the Runbooks Gallery, you'll see most of these PowerShell scripts have not been updated for many years and are still working fine. If this will be used in a production environment, it would be better if you have control over the changes and can update them at your convenience. If you want to get the URI, you can just click on 'View Source Project' and it will lead you to the GitHub repo, e.g. for the runbook Stop-Start-AzureVM (Scheduled VM Shutdown/Startup).
You'll also notice most of the scripts are submitted by external parties. If you link to a URI that's maintained by someone else and that person publishes malicious code in there, or even accidentally messes up the code, that's not desirable. But again, I'm not sure about the extent of your automation (e.g. whether you expect to execute the Terraform script once a month to ensure the runbook is up to date).
If I were getting the scripts from somewhere, I'd validate them prior to using them in my environment.
data "local_file" "start_vm_parallel" {
filename = "./scripts/start-vm-parallel.ps1"
}
resource "azurerm_automation_runbook" "start_vm_parallel" {
name = local.NAME
location = local.REGION
resource_group_name = local.RG
automation_account_name = azurerm_automation_account.automation_prod.name
log_verbose = "true"
log_progress = "true"
description = "This runbook starts VMs in parallel based on a matching tag value"
runbook_type = "PowerShellWorkflow"
content = data.local_file.start_vm_parallel.content
publish_content_link {
uri = "https://path.to.script/script.ps1"
}
}
If you're using a Powershell Workflow, you need to make sure that the Runbook name matches the workflow name inside the script.
One last thing to remember before you even start using your runbooks is to update the modules, by creating a 'modules update' runbook from the Azure Automation team and running it on a schedule, once a month.
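A sketch of one way to set that up with the older AzureRM cmdlets. The runbook name, local file path, schedule name, and account details below are assumptions; use whichever module-update runbook the Azure Automation team publishes in your case.

# Import the module-update runbook (downloaded beforehand) and publish it.
Import-AzureRmAutomationRunbook `
    -ResourceGroupName "MyAutomationRg" `
    -AutomationAccountName "MyAutomationAccount" `
    -Name "Update-AutomationAzureModulesForAccount" `
    -Type PowerShell `
    -Path ".\Update-AutomationAzureModulesForAccount.ps1" `
    -Published

# Create a monthly schedule (runs on the 1st of each month) and link it to the runbook.
New-AzureRmAutomationSchedule `
    -ResourceGroupName "MyAutomationRg" `
    -AutomationAccountName "MyAutomationAccount" `
    -Name "MonthlyModuleUpdate" `
    -StartTime (Get-Date).AddDays(1) `
    -MonthInterval 1 `
    -DaysOfMonth One

Register-AzureRmAutomationScheduledRunbook `
    -ResourceGroupName "MyAutomationRg" `
    -AutomationAccountName "MyAutomationAccount" `
    -RunbookName "Update-AutomationAzureModulesForAccount" `
    -ScheduleName "MonthlyModuleUpdate"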

Azure DevOps dynamic Release Pipeline creation

I am currently planning a type of multi-tenant system, where different resource groups with a set of App Services are deployed for customers via ARM templates. Hence, each customer has its own resource group and set of App Services. Currently we use Azure DevOps to deploy to a set of App Services used for development and quality assurance before anything gets to production. I am now trying to incorporate DevOps into the mix by automating some sort of pipeline creation (it would be a copy of an existing pipeline, only changing the target App Services). Which is where my question comes from: is there a way to dynamically create or edit a release pipeline to add the deployment of those new App Services, without the need to manually edit or create a pipeline and add those newly created App Services? I was thinking of something along the lines of copying a YAML file template and then replacing the necessary info to point to those App Services after they have been created, but I am not totally sure where I could store the new YAML file so that it is picked up by Azure DevOps, or how I would accomplish this, with the main idea being that all of this continues to be part of an automated process (if possible).
Thanks a lot for any help, any suggestion is appreciated.
EDIT:
The question is not about how to deploy an ARM template through the DevOps release pipeline (I plan on using a PowerShell script / REST API to accomplish that). Instead, it is about the fact that when the App Service resources are created, I need to deploy code to those newly created App Services and also update that code when necessary (hopefully through a release pipeline), somehow generating a new release pipeline each time I deploy a new set of resources. So that, when there is a new update, I could easily have that pipeline triggered and that set of App Services updated (created as part of the automation process, "dynamically"). (I already have a similar pipeline that deploys to a "static" set of App Services.)
This is possible, as you alluded to, with YAML pipelines. Based upon the scenario you have described, each repository would have its own pipeline.yml file that defines the trigger, pool, etc. It would also reference a repository that houses your YAML template.
The template would accept whichever parameters you may require (resource group, app service name, etc.). The triggering pipeline associated with each repository would pass this information when leveraging the template.
By doing this, CI/CD can be set up to trigger on the individual pipelines and deploy the appropriate code, all while leveraging the same YAML template.
The repository reference would be similar to:
resources:
  repositories:
    - repository: YAMLTemplates
      type: git
      name: OrganizationName/YAML Project Name
With the call to the template being similar to:
- template: azure-ARM-template.yml@YAMLTemplates
  parameters:
    appServiceName: 'AppServiceName'
    resourceGroupName: 'ResourceGroupName'
UPDATE
At a high level, the YAML pipeline would consist of the following. If all App Services are similar, as stated, and the ARM templates are similar, this is how it could be constructed and triggered based on a folder path:
Build necessary artifacts
Publish Pipeline
Deploy Azure Resource Group Task
Deploy App Settings Task (if applicable)
Deploy App Service
To release the deployment pieces for each environment in the appropriate stages, and to help reduce the amount of copying and pasting, each of the above tasks can be part of a template, either individually as a single task, as a combination of tasks, or all in one. This allows defining the YAML once and referencing it, passing app-specific values as parameters to the templates.

VSTS Build Succeeded even ARM Template was invalid

I am working on Azure Resource Manager templates (ARM templates) and VSTS CI/CD. With the help of ARM templates, I want to deploy AKS (Azure Kubernetes Service). Before deploying, I need to validate my ARM template in the CI build by applying a PowerShell task. But here, at the time of validating my ARM template, "it's not stopping the CI build even when the validation fails". It gives the output "Validation Completed", as shown in the picture below. Is there any solution to resolve this issue, i.e. I want to stop my CI build from running if any validation fails?
I'm not sure what your PowerShell script looks like. But according to the screenshot, the PowerShell script executed successfully without returning any error code. You can update your PowerShell script to check the validation result and set the exit code to "1" if the result is "InvalidTemplate". This will make the PowerShell task fail when the template is invalid.
It looks like the resource is defined multiple times in the template. You can remove the duplicate, and it is always good practice in the PowerShell script to use Test-AzureRmResourceGroupDeployment to validate that the template is valid and has obtained all its parameters, and only then deploy using New-AzureRmResourceGroupDeployment.
Like Eddie said, you can wrap this in a try{} catch{} block and return an exception or an exit code to make the VSTS build pipeline fail if the script fails.
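A minimal sketch of such a validation step; the resource group name and template paths are assumptions, adjust them to your pipeline:

try {
    # Test-AzureRmResourceGroupDeployment returns a collection of errors when the template is invalid.
    $validationErrors = Test-AzureRmResourceGroupDeployment `
        -ResourceGroupName "MyResourceGroup" `
        -TemplateFile ".\azuredeploy.json" `
        -TemplateParameterFile ".\azuredeploy.parameters.json"

    if ($validationErrors) {
        Write-Error ("Template validation failed: " + ($validationErrors.Message -join "; "))
        exit 1   # a non-zero exit code fails the VSTS PowerShell task
    }
    Write-Output "Template validation passed."
}
catch {
    Write-Error $_
    exit 1
}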

azure runbook get who executed the job

I have been working with Azure runbooks in Automation accounts for quite a while, but recently I was tasked with identifying who executes a runbook.
I noticed that there is a field called "Executed by" when you get information from a job, but it seems that field is being removed by MSFT.
Checking the logs, I can see the calls to the runbook, but the job id stated in the log doesn't match the job id of the job in the runbook inside the Automation account.
I was wondering how I can match an execution of a runbook with an entry in the log.
Any idea, either with PowerShell or by calling the REST API directly?
Thanks!
You can get the user who started the Automation job using the startedBy field returned by Get-AzureRmAutomationJob and by the REST API.
This will require passing in the job id, which you can get using:
$PsPrivateMetadata.JobId.Guid
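A minimal sketch of doing the lookup from inside the runbook itself; the resource group and Automation account names are assumptions, and the runbook needs an authenticated AzureRM context (e.g. via the Run As connection). Note that StartedBy can be empty for some start methods, such as schedule-triggered jobs.

# Inside the runbook: get this job's id and look up who started it.
$jobId = $PSPrivateMetadata.JobId.Guid

$job = Get-AzureRmAutomationJob `
    -ResourceGroupName "MyAutomationRg" `
    -AutomationAccountName "MyAutomationAccount" `
    -Id $jobId

# StartedBy holds the user (or service principal) that started the job.
$job.StartedBy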

Azure ExpiredAuthenticationToken during New-AzureRmResourceGroupDeployment when deploying resources via Visual Studio

I'm trying to deploy an HDInsight cluster using an ARM template via Visual Studio. I've created an Azure Resource Group project in Visual Studio 2015, and added my resource definitions to the template JSON files.
However when I've gone to deploy it (by right-clicking the project, choosing Deploy -> New Deployment, entering my parameters), the output of Visual Studio shows (I've snipped out some boring stuff):
17:19:23 - Build started.
17:19:23 - Project "LaunchHdInsightCluster.deployproj" (StageArtifacts target(s)):
[snip]
17:20:27 - [VERBOSE] 17:20:27 - Resource Microsoft.HDInsight/clusters 'groupbhdinsight' provisioning status is running
17:31:06 - [ERROR] New-AzureRmResourceGroupDeployment : ExpiredAuthenticationToken: The access token expiry UTC time '3/14/2016 5:31:06 PM' is earlier than current UTC time '3/14/2016 5:31:07 PM'.
Note that the deploy only ran for 12 minutes before the access token expired - obviously for deploying an HDInsight cluster this is a problem (takes on average 20 minutes).
I'm just trying to understand what's going on under the hood here, as I can't find documentation for this, i.e.:
What creates the access token and how? How long does it last for? I wasn't asked for any Azure creds when deploying - I'm assuming it must be the fact that I'm signed into Visual Studio using the same account I use in Azure, and it 'borrows' the authentication session, but this is just a guess
What determines the expiry time of the access token so I can prevent this happening again?
How do I refresh my authentication token?
What's happening here is that the Azure Resource Group deployment in VS uses the PowerShell Script in the project to do deployment (even though the output is hosted in VS, we use that PS script to do the work). The PowerShell script is authenticated by using the token from your VS sign in. That token is only good for an hour and then VS will refresh it. Once it's handed off to PowerShell though, PowerShell doesn't automatically refresh it. So if you have the token for 59 minutes, it's going to expire soon after you start the deployment. The token could last for an hour, or anything less than that. We're working on a fix for this (i.e. have PowerShell automatically refresh the token) but that's a month or so out yet. See: https://github.com/Azure/azure-powershell/issues/1068
Workarounds: unfortunately there's no good workaround from VS. But...
As observed, the deployment will continue just fine in Azure; it's just that VS/PowerShell can no longer poll for status. You can monitor the deployment via the portal or PowerShell.
If you drop to PowerShell and run the script, PowerShell will automatically refresh the token when you log in with credentials. You can get the exact command that VS runs by sifting through the output window, and this doc also gives an overview of running the script manually: https://azure.microsoft.com/en-us/documentation/articles/vs-azure-tools-resource-groups-how-script-works/
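A sketch of that "drop to PowerShell" workaround, assuming the Deploy-AzureResourceGroup.ps1 script and folder layout that the VS Azure Resource Group project template generates (adjust the names and parameters to whatever your project actually contains):

# Log in interactively so PowerShell owns (and can refresh) the token itself.
Login-AzureRmAccount

# Run the deployment script from the project folder.
.\Deploy-AzureResourceGroup.ps1 `
    -ResourceGroupName "MyResourceGroup" `
    -ResourceGroupLocation "northeurope" `
    -TemplateFile ".\Templates\azuredeploy.json" `
    -TemplateParametersFile ".\Templates\azuredeploy.parameters.json"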
Hope that helps...
I bet it was a transient issue. I retried deployment (needed to modify my ARM template) and now it succeeded.
Please check your Azure Resource Group in the portal. You will likely have your resources up and running.
#Cleverguy25 provided an explanation of how I believe the deployment process works.
I am not sure, but I believe that the New-AzureRmResourceGroupDeployment uploads your template file and sets up a deployment to happen in the cloud. Then it queries the deployment to see if it is done and outputs the resources as they are created. Obviously those queries error when the token expires. But the deployment should continue.
You could ignore this error and query the deployment or resource group yourself, to see when it is done.
I followed this post and simply executed the 'Clear-AzureRmContext' command, then reconnected to Azure using 'Connect-AzAccount', and the issue was resolved.
https://github.com/Azure/azure-powershell/issues/6585
Open a new PowerShell session and clear the metadata used to authenticate Azure Resource Manager requests using Clear-AzureRmContext.
This worked like magic for me.
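A minimal sketch of that sequence; it mixes the older AzureRM cmdlet with the newer Az login cmdlet, exactly as described above, so both modules need to be installed:

# Drop the cached (expired) context, then sign in again.
Clear-AzureRmContext -Scope CurrentUser
Connect-AzAccount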
