Mechanism to upload ARM template into storage - Azure

I want to use linked templates in my ARM deployment model. Every article I've read mentions that the linked template needs to be in an accessible location (such as blob storage).
This works OK if I manually upload the files to storage, but I'm looking for a mechanism to upload a template to storage as part of the build or deployment process.
I'd hoped to use the Artifact Storage Account option but it is unavailable when deploying infrastructure only.
Is there a built-in method to achieve this, or would it require an extra step such as a PowerShell script or a VSTS build step?

The Artifact Storage Account option becomes available as soon as you introduce the two parameters _artifactsLocation and _artifactsLocationSasToken into your deployment.
"parameters": {
"_artifactsLocation": {
"type": "string",
"metadata": {
"description": "Auto-generated container in staging storage account to receive post-build staging folder upload"
}
},
"_artifactsLocationSasToken": {
"type": "securestring",
"metadata": {
"description": "Auto-generated token to access _artifactsLocation"
}
}
}
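If you are deploying outside Visual Studio and just want to supply those two parameters yourself, here is a minimal PowerShell sketch (assuming the Az modules; the storage account, container, and resource group names are placeholders):

# Generate a read-only SAS token for the container that holds the uploaded templates
$ctx = New-AzStorageContext -StorageAccountName 'mytemplatestorage' -StorageAccountKey $storageKey
$sas = New-AzStorageContainerSASToken -Name 'templates' -Permission r `
    -ExpiryTime (Get-Date).AddHours(4) -Context $ctx
# Most templates concatenate the token directly onto the URI, so make sure it keeps the leading '?'
if (-not $sas.StartsWith('?')) { $sas = '?' + $sas }

# Pass the container URL and SAS token in as _artifactsLocation / _artifactsLocationSasToken
New-AzResourceGroupDeployment -ResourceGroupName 'my-rg' `
    -TemplateFile .\azuredeploy.json `
    -TemplateParameterObject @{
        _artifactsLocation         = 'https://mytemplatestorage.blob.core.windows.net/templates'
        _artifactsLocationSasToken = (ConvertTo-SecureString $sas -AsPlainText -Force)
    }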

Update ADF Linked Service configuration by API

I have a requirement to update an ADF linked service configuration via API (or any other way through code, except the UI). I need to add 'init scripts' to the job cluster configuration of a linked service.
I found some Microsoft documentation on this, but it only covers creating a linked service, not editing it.
Please let me know if you have any leads on this.
You can update an ADF linked service configuration via the REST API.
Sample Request
PUT https://management.azure.com/subscriptions/12345678-1234-1234-1234-12345678abc/resourceGroups/exampleResourceGroup/providers/Microsoft.DataFactory/factories/exampleFactoryName/linkedservices/exampleLinkedService?api-version=2018-06-01
Request body
{
  "properties": {
    "type": "AzureStorage",
    "typeProperties": {
      "connectionString": {
        "type": "SecureString",
        "value": "DefaultEndpointsProtocol=https;AccountName=examplestorageaccount;AccountKey=<storage key>"
      }
    },
    "description": "Example description"
  }
}
The sample request and request body above come from the Linked Services - Create Or Update section of the Data Factory REST API reference.
For example, if you want to update an AzureBlobStorage linked service, you can update the configuration properties documented for that linked service type.
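For example, to send that PUT from PowerShell instead of a raw REST client, here is a rough sketch using Invoke-AzRestMethod from the Az.Accounts module (the subscription, resource group, factory, and linked service names are the placeholders from the request above):

# Sign in first with Connect-AzAccount
# Resource path of the linked service, same as the PUT URL above
$path = '/subscriptions/12345678-1234-1234-1234-12345678abc/resourceGroups/exampleResourceGroup' +
        '/providers/Microsoft.DataFactory/factories/exampleFactoryName' +
        '/linkedservices/exampleLinkedService?api-version=2018-06-01'

# PUT replaces the whole definition, so send the complete, updated linked service JSON
$body = @'
{
  "properties": {
    "type": "AzureStorage",
    "typeProperties": {
      "connectionString": {
        "type": "SecureString",
        "value": "DefaultEndpointsProtocol=https;AccountName=examplestorageaccount;AccountKey=<storage key>"
      }
    },
    "description": "Example description"
  }
}
'@

Invoke-AzRestMethod -Path $path -Method PUT -Payload $body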
We use a PowerShell module azure.datafactory.tools for deployments of ADF components.
It can replace a Linked Service with a new definition. Furthermore, you can test the deployed Linked Service with the module.
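A rough sketch of a deployment with that module, assuming its Publish-AdfV2FromJson cmdlet and placeholder resource names (check the module's documentation for the exact cmdlets, including the ones for testing linked services):

# Community module, not an official Microsoft one
Install-Module azure.datafactory.tools -Scope CurrentUser
Import-Module azure.datafactory.tools

# Publish the local ADF JSON files (including the edited linked service definitions) to the target factory
Publish-AdfV2FromJson -RootFolder 'C:\adf\MyFactory' `
    -ResourceGroupName 'my-rg' `
    -DataFactoryName 'my-adf' `
    -Location 'westeurope'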

Azure logic app - How can I share integration account across multiple resource groups

I am trying to deploy my logic app to multiple environments using a CI/CD pipeline. I am getting an error: The client 'guid' with object id 'guid' has permission to perform action 'Microsoft.Logic/workflows/write' on scope 'Test Resource group'; however, it does not have permission to perform action 'Microsoft.Logic/integrationAccounts/join/action' on the linked scope(s) 'Development Resource group integration account' or the linked scope(s) are invalid.
Creating another integration account for the test resource group doesn't come under the free tier. Is there a way to share an integration account across multiple resource groups?
I'm not sure what the permissions issue is; you might need to give more information around this.
But try the below first in your pipeline. It works for us with three different resource groups and two integration accounts.
"parameters": {
"IntegrationAccountName": {
"type": "string",
"minLength": 1,
"defaultValue": "inter-account-name"
},
"Environment": {
"type": "string",
"minLength": 1,
"defaultValue": "dev"
}
},
"variables": {
"resourceGroupName": "[if(equals(parameters('Environment'), 'prd'),'rg-resources-production','rg-resources-staging')]",
"LogicAppIntegrationAccount": "[concat('/subscriptions/3fxxx4de-xxxx-xxxx-xxxx-88ccxxxfbab4/resourceGroups/',variables('resourceGroupName'),'/providers/Microsoft.Logic/integrationAccounts/',parameters('IntegrationAccount'))]",
},
In the above sample we had two different integration accounts, one for testing and one for production. This is why the integration account name is set as a parameter: it changes between environments.
I have created a variable "resourceGroupName"; this is important because the URL being built is a direct link to the integration account, which is stored in a known resource group. In this sample an if statement on the value of the "Environment" parameter selects which resource group is used.
I then create another variable which stores the new URL. Replace the subscription GUID with your own: 3fxxx4de-xxxx-xxxx-xxxx-88ccxxxfbab4.
Once that is created you need to change the ARM template to use the variable you just created. To set it, place it in the properties object:
"properties": {
"state": "Enabled",
"integrationAccount": {
"id": "[variables('LogicAppIntegrationAccount')]"
},
So for your pipeline it should just be a normal ARM template deployment, but with the two parameters above being set.
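For example, a deployment step in PowerShell that sets those two parameters might look like this (a sketch; the resource group, template file, and account name are placeholders):

# Deploy to the test resource group but point the Logic App at the production integration account
New-AzResourceGroupDeployment -ResourceGroupName 'rg-logicapp-test' `
    -TemplateFile .\logicapp.json `
    -IntegrationAccountName 'inter-account-name' `
    -Environment 'prd'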
Let me know if you have more questions around this.

How to test within Azure - Azure Resource Manager (ARM Templates)

Assume we have a Checkpoint firewall template created in the Azure portal. Is there a way to test the template within Azure? Also, if the template is modified, is there a way to test that new, modified template within Azure?
You can test an ARM Template by using it in a deployment. You can also use the what-if setting to produce hypothetical output without actually deploying anything.
Microsoft Azure Docs for What-If
To create a What-If deployment you can proceed a number of ways: Azure CLI, PowerShell, REST, etc. Here is an example using REST (Postman).
Use the endpoint
POST https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.Resources/deployments/{deploymentName}/whatIf?api-version=2020-06-01
Provide a body payload:
{
  "location": "westus2",
  "properties": {
    "mode": "Incremental",
    "parameters": {},
    "template": {}
  }
}
Add your template and parameters. Supply a bearer token for authentication and deploy.
You can check the Azure What-If REST API docs for more details.
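If you would rather stay in PowerShell than call the REST endpoint directly, the Az module exposes the same what-if capability; a minimal sketch with placeholder names:

# Preview what the template would change without actually deploying it
New-AzResourceGroupDeployment -ResourceGroupName 'my-rg' `
    -TemplateFile .\azuredeploy.json `
    -TemplateParameterFile .\azuredeploy.parameters.json `
    -WhatIf

# Or capture the structured what-if result for inspection or automation
$result = Get-AzResourceGroupDeploymentWhatIfResult -ResourceGroupName 'my-rg' `
    -TemplateFile .\azuredeploy.json `
    -TemplateParameterFile .\azuredeploy.parameters.json
$result.Changes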

System group membership cannot be changed

I have generated a template from an existing Azure API Management resource, modified it a bit, and tried to deploy it using the Azure CLI. But I'm getting the following error:
Deployment failed. Correlation ID: 7561a68f-54d1-4370-bf6a-175fd93a4b99.
{
  "error": {
    "code": "MethodNotAllowed",
    "message": "System group membership cannot be changed",
    "details": null
  }
}
But all the APIs are getting created and working fine. Can anyone help me solve this error? This is the command I used to deploy from my Ubuntu machine:
az group deployment create -g XXXX --template-file azuredeploy.json --parameters @param.json
Service Group Template:
{
  "type": "Microsoft.ApiManagement/service/groups",
  "apiVersion": "2018-06-01-preview",
  "name": "[concat(parameters('service_name'), '/administrators')]",
  "dependsOn": [
    "[resourceId('Microsoft.ApiManagement/service', parameters('service_name'))]"
  ],
  "properties": {
    "displayName": "Administrators",
    "description": "Administrators is a built-in group. Its membership is managed by the system. Microsoft Azure subscription administrators fall into this group.",
    "type": "system"
  }
}
You have several options if you want to copy an API Management instance to a new instance; using an ARM template is not one of them.
Use the backup and restore function in API Management. For more information, see How to implement disaster recovery by using service backup and restore in Azure API Management.
Create your own backup and restore feature by using the API Management REST API. Use the REST API to save and restore the entities from the service instance that you want.
Download the service configuration by using Git, and then upload it to a new instance. For more information, see How to save and configure your API Management service configuration by using Git.
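For the backup-and-restore option above, a minimal PowerShell sketch using the Az.ApiManagement cmdlets (the storage account, container, resource group, and service names are placeholders):

# Back up the source APIM instance to a blob
$ctx = New-AzStorageContext -StorageAccountName 'apimbackups' -StorageAccountKey $storageKey
Backup-AzApiManagement -ResourceGroupName 'source-rg' -Name 'source-apim' `
    -StorageContext $ctx -TargetContainerName 'backups' -TargetBlobName 'apim.bak'

# Restore that backup into the target APIM instance
Restore-AzApiManagement -ResourceGroupName 'target-rg' -Name 'target-apim' `
    -StorageContext $ctx -SourceContainerName 'backups' -SourceBlobName 'apim.bak'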
Update:
I have confirmed with a Microsoft engineer that this ARM template deployment failure for APIM is a known issue, and a fix is planned (5/7/2019).

How can I link a template in an Azure ARM template?

I have multiple ARM templates that I want to link. But when I use "[uri(deployment().properties.templateLink.uri, 'transform.json')]" I get an error telling me that deployment() gives an object that does not contain templateLink when running it locally or through the Azure DevOps pipeline.
So then I tried to send in the path to the artifact that I create when I build the project in Azure DevOps, "[concat(parameters('templateDirectory'), '/transform.json')]", and then provide it as a parameter when calling the template.
But then I get this error instead
At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/arm-debug for usage details.
Details:
BadRequest: {
  "error": {
    "code": "InvalidContentLink",
    "message": "The provided content link 'file:///D:/a/r1/a/_Infrastructure/ARM/shared/transform.json' is invalid or not supported. Content link must be an absolute URI not referencing local host or UNC path."
  }
} undefined
Task failed while creating or updating the template deployment.
So my question is how should I handle the linking of templates when I deploy through the Azure DevOps pipeline?
Do I have to copy it to a storage account in the build step so that I can access it via http or https in the deploy step, and if so, what is the best way to do that? It seems a little complex.
Solution update
So the solution I went for was to upload all the template files to a temporary storage account that I created, and then pass the path to that storage to the main template that refers to all the other templates.
Task overview to get a better understanding of how it was done: a "Copy templates to blob" step followed by a "Deploy ARM template" step.
And here is a snippet of how it was used in the main template that refers to the other templates:
"resources": [
{
"apiVersion": "2015-01-01",
"name": "dashboard-24h",
"type": "Microsoft.Resources/deployments",
"properties": {
"mode": "Incremental",
"templateLink": {
"uri": "[concat(parameters('templateBasePath'), '/dashboard/24h/azuredeploy-dashboard-deploy.json')]",
"contentVersion": "1.0.0.0"
},
"parameters": {
"templateBasePath": {
"value": "[parameters('templateBasePath')]"
},
"appName": {
"value": "[parameters('appName')]"
}
}
}
},
...
]
So, if you want to use deployment().properties.templateLink.uri, your template has to be deployed from a URL, not from a local disk.
Linked templates ALWAYS have to be deployed from a URL. So, if you want to use the aforementioned method, everything has to be uploaded to some place that is accessible publicly (or auth has to be done through the URL, like a SAS token).
What I usually do: run a simple PowerShell script before deployment that uploads all the templates to a common location; after that I just use the deployment() function.
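As an example of such a pre-deployment script, a short sketch assuming the Az modules and placeholder storage account and container names:

# Upload every template in the repo to a blob container so linked templates are reachable over https
$ctx = New-AzStorageContext -StorageAccountName 'mytemplatestorage' -StorageAccountKey $storageKey

Get-ChildItem -Path .\ARM -Filter *.json -Recurse | ForEach-Object {
    Set-AzStorageBlobContent -File $_.FullName -Container 'templates' `
        -Blob $_.Name -Context $ctx -Force
}

# The resulting base URL (plus a SAS token if the container is private) is what you pass to the
# main template, e.g. as the templateBasePath parameter used in the snippet further up
$baseUrl = 'https://mytemplatestorage.blob.core.windows.net/templates'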
Expanding on the above: if you are generating your ARM templates, or using AzureResourceManagerTemplateDeployment@3 to deploy the generated templates, you can override the parameters like this:
- task: AzureResourceManagerTemplateDeployment@3
  inputs:
    deploymentScope: 'Resource Group'
    # ... other values
    deploymentMode: 'Validation'
    overrideParameters: '-LinkedTemplatesBaseUrl "$(BASE_URL)" -LinkedTemplatesUrlQueryString $(SAS_TOKEN)'
The LinkedTemplatesBaseUrl and LinkedTemplatesUrlQueryString parameters must be defined in the *-parameters.json file.
Use LinkedTemplatesUrlQueryString only when fetching from secured storage (which is the preferred way).
You can use AzureFileCopy@4 to copy your templates:
steps:
- task: AzureFileCopy@4
  name: AzureFileCopyTask
  inputs:
    SourcePath: '$(System.DefaultWorkingDirectory)/build_dir/*.json'
    azureSubscription: $(AZURE_SUBSCRIPTION)
    Destination: 'AzureBlob'
    storage: $(STORAGE_ACCOUNT)
    ContainerName: $(STORAGE_CONTAINER)
    CleanTargetBeforeCopy: true
    BlobPrefix: $(STORAGE_PREFIX)
And then use the output variables the task exposes (the storage container URI and SAS token) in your deployment step.
