I am creating an Azure Logic App (used to unzip files into blob storage). For this I need the Logic App workflow and a connection to the blob storage. I create the empty Logic App workflow with Terraform and build the actual Logic App implementation in Visual Studio, which I then deploy onto the workflow created with Terraform.
I use the following Terraform code to create the empty Logic App workflow:
resource "azurerm_logic_app_workflow" "logic_unzip" {
  name                = "ngh-${var.deployment}-unziplogic"
  resource_group_name = "${azurerm_resource_group.rg.name}"
  location            = "${azurerm_resource_group.rg.location}"
}
As the Logic App needs a connection to the blob storage, I use the following template to create it:
resource "azurerm_template_deployment" "depl_connection_azureblob" {
  name                = "azureblob"
  resource_group_name = "${azurerm_resource_group.rg.name}"

  template_body = <<DEPLOY
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "connection_name":    {"type": "string"},
    "storage_name":       {"type": "string"},
    "storage_access_key": {"type": "string"},
    "location":           {"type": "string"},
    "api_id":             {"type": "string"}
  },
  "resources": [{
    "type": "Microsoft.Web/connections",
    "name": "[parameters('connection_name')]",
    "apiVersion": "2016-06-01",
    "location": "[parameters('location')]",
    "scale": null,
    "properties": {
      "displayName": "[parameters('connection_name')]",
      "api": {
        "id": "[parameters('api_id')]"
      },
      "parameterValues": {
        "accountName": "[parameters('storage_name')]",
        "accessKey": "[parameters('storage_access_key')]"
      }
    },
    "dependsOn": []
  }]
}
DEPLOY

  parameters = {
    "connection_name"    = "azureblob"
    "storage_name"       = "${azurerm_storage_account.sa-main.name}"
    "storage_access_key" = "${azurerm_storage_account.sa-main.primary_access_key}"
    "location"           = "${azurerm_resource_group.rg.location}"
    "api_id"             = "${data.azurerm_subscription.current.id}/providers/Microsoft.Web/locations/${azurerm_resource_group.rg.location}/managedApis/azureblob"
  }

  deployment_mode = "Incremental"
}
Running plan and apply, these work perfectly. In Visual Studio I can then create the Logic App and use the azureblob connection to select the correct blob storage.
Now, when I have deployed the Logic App workflow from Visual Studio and run terraform plan, I get the following changes:
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
 ~ update in-place

Terraform will perform the following actions:

  ~ azurerm_logic_app_workflow.logic_unzip
      parameters.$connections: "" => ""
      parameters.%:            "1" => "0"

Plan: 0 to add, 1 to change, 0 to destroy.
Running the apply command now will break the Logic App as it removes the bound connection. Clearly the Visual Studio deploy has created the binding between the Logic App and the connection.
How can I tell Terraform not to remove the connections (created by the Visual Studio deploy) from the Logic App?
Terraform is not aware of the resources deployed inside the ARM template, so it detects the state change and tries to "fix" it. I don't see any Terraform resources for Logic App connections; given how it detects parameters.% changing from "1" to "0", adding your connection directly to the workflow resource might work. However, the provider docs say that any parameters specified must exist in the schema defined in workflow_schema, and I don't see connections in that schema, which is a bit odd — I assume I'm misreading the schema.
You can also use ignore_changes:
lifecycle {
  ignore_changes = [
    "parameters.$connections"
  ]
}
See the comments and the documentation on ignore_changes:
https://www.terraform.io/docs/configuration/resources.html#ignore_changes
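Applied to the workflow resource from the question, the lifecycle block would sit inside the resource like this (a sketch reusing the names from the earlier snippet; on Terraform 0.12+ the entry is written without quotes, e.g. ignore_changes = [parameters]):

```hcl
resource "azurerm_logic_app_workflow" "logic_unzip" {
  name                = "ngh-${var.deployment}-unziplogic"
  resource_group_name = "${azurerm_resource_group.rg.name}"
  location            = "${azurerm_resource_group.rg.location}"

  # Ignore the $connections parameter that the Visual Studio deployment
  # writes into the workflow, so `terraform apply` does not strip the
  # API connection binding again.
  lifecycle {
    ignore_changes = [
      "parameters.$connections"
    ]
  }
}
```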
I am trying to deploy an ARM template for a Database Account and a SQL database with two collections, where the autoscale throughput setting is set at the database level (shared by the collections).
I created this setup in Azure UI and then exported the template.
When importing the template from PowerShell using New-AzResourceGroupDeployment, it fails with the message:
Status Message: Entity with the specified id does not exist in the system.
ActivityId: <redacted>, Microsoft.Azure.Documents.Common/2.11.0 (Code:NotFound)
This is ridiculous because I exported the template and imported it without modification. Doesn't Azure recognize its own format?
I think the problem has to do with this fragment of the template:
{
  "type": "Microsoft.DocumentDB/databaseAccounts/sqlDatabases/throughputSettings",
  "apiVersion": "2020-04-01",
  "name": "[concat(parameters('databaseAccounts_an_test_name'), '/', parameters('databaseAccounts_an_test_name'), '-db-2/default')]",
  "dependsOn": [
    "[resourceId('Microsoft.DocumentDB/databaseAccounts/sqlDatabases', parameters('databaseAccounts_an_test_name'), concat(parameters('databaseAccounts_an_test_name'), '-db-2'))]",
    "[resourceId('Microsoft.DocumentDB/databaseAccounts', parameters('databaseAccounts_an_test_name'))]"
  ],
  "properties": {
    "resource": {
      "throughput": 400,
      "autoscaleSettings": {
        "maxThroughput": 4000
      }
    }
  }
}
Any ideas?
Based on Mark Brown's hints, this is the exact solution:
{
  "type": "Microsoft.DocumentDB/databaseAccounts/sqlDatabases",
  "name": ...,
  "apiVersion": "2020-04-01",
  "dependsOn": ...,
  "properties": {
    "resource": {
      "id": ...
    },
    "options": {
      "autoscaleSettings": {
        "maxThroughput": 4000
      }
    }
  }
}
Don't use the Microsoft.DocumentDB/databaseAccounts/sqlDatabases/throughputSettings resource from the exported template. I'm not sure why Azure exports it and then doesn't accept it on import.
If you are creating a new database or container resource, you need to pass the throughput in the options for the resource. You can only use the throughput resource directly when updating the throughput. There is an example here.
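A fleshed-out version of that shape might look like the following (a sketch: the parameter name and the -db-2 suffix are carried over from the question's template, and the remaining values are illustrative — verify against the current API version before using):

```json
{
  "type": "Microsoft.DocumentDB/databaseAccounts/sqlDatabases",
  "apiVersion": "2020-04-01",
  "name": "[concat(parameters('databaseAccounts_an_test_name'), '/', parameters('databaseAccounts_an_test_name'), '-db-2')]",
  "dependsOn": [
    "[resourceId('Microsoft.DocumentDB/databaseAccounts', parameters('databaseAccounts_an_test_name'))]"
  ],
  "properties": {
    "resource": {
      "id": "[concat(parameters('databaseAccounts_an_test_name'), '-db-2')]"
    },
    "options": {
      "autoscaleSettings": {
        "maxThroughput": 4000
      }
    }
  }
}
```

Note how the autoscale setting lives under properties.options rather than in a separate throughputSettings resource.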
Using Terraform and an Azure ARM template, I am trying to create an Azure Event Grid subscription on a function.
This is the ARM template used for the Event Grid subscription:
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "eventGridTopicName": {
      "type": "string",
      "metadata": {
        "description": "The name of the Event Grid custom topic."
      }
    },
    "eventGridSubscriptionName": {
      "type": "string",
      "metadata": {
        "description": "The name of the Event Grid custom topic's subscription."
      }
    },
    "eventGridSubscriptionUrl": {
      "type": "string",
      "metadata": {
        "description": "The webhook URL to send the subscription events to. This URL must be valid and must be prepared to accept the Event Grid webhook URL challenge request. (RequestBin URLs are exempt from this requirement.)"
      }
    },
    "location": {
      "type": "string",
      "defaultValue": "[resourceGroup().location]",
      "metadata": {
        "description": "The location in which the Event Grid resources should be deployed."
      }
    }
  },
  "resources": [
    {
      "name": "[parameters('eventGridTopicName')]",
      "type": "Microsoft.EventGrid/topics",
      "location": "[parameters('location')]",
      "apiVersion": "2018-01-01"
    },
    {
      "name": "[concat(parameters('eventGridTopicName'), '/Microsoft.EventGrid/', parameters('eventGridSubscriptionName'))]",
      "type": "Microsoft.EventGrid/topics/providers/eventSubscriptions",
      "location": "[parameters('location')]",
      "apiVersion": "2018-01-01",
      "properties": {
        "destination": {
          "endpointType": "WebHook",
          "properties": {
            "endpointUrl": "[parameters('eventGridSubscriptionUrl')]"
          }
        },
        "filter": {
          "includedEventTypes": [
            "All"
          ]
        }
      },
      "dependsOn": [
        "[parameters('eventGridTopicName')]"
      ]
    }
  ]
}
Following the documentation here, in order to create the subscription we have to recover a system key to build the complete webhook endpoint. So, following this post, I used an ARM template to recover the system key called eventgrid_extension.
Everything goes well except the ARM deployment of the Event Grid subscription, which fails with this error:
Error waiting for deployment: Code="DeploymentFailed"
Message="At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/arm-debug for usage details."
Details=[{"code":"Conflict","message":"{
  "status": "Failed",
  "error": {
    "code": "ResourceDeploymentFailure",
    "message": "The resource operation completed with terminal provisioning state 'Failed'.",
    "details": [{
      "code": "Url validation",
      "message": "The attempt to validate the provided endpoint https://myFunctionName.azurewebsites.net/runtime/webhooks/eventgrid failed. For more details, visit https://aka.ms/esvalidation."
    }]
  }
}"}]
I checked my Terraform code to be sure I am using the right values for all parameters in this ARM template, and everything is OK: I have the right topic name and the right endpoint, with all values filled in. So I don't understand what I am missing here. I also wondered whether I am using the right system key. I know there is a system key named durabletask_extension and another named eventgrid_extension, but I tried both and the same error occurred.
Update
I just noticed that the keys, i.e. durabletask_extension and eventgrid_extension, are both system keys. My ARM template to recover these works well, and I recover the right system key by using only eventgrid_extension.
Here is my Terraform code:
resource "azurerm_eventgrid_topic" "eventgrid_topic" {
  name                = "topicName"
  location            = var.main_location
  resource_group_name = azurerm_resource_group.environment.name
}

resource "azurerm_template_deployment" "eventgrid_subscription" {
  name                = "EventGridSbscription"
  resource_group_name = azurerm_resource_group.environment.name
  template_body       = file("./arm/event-grid-subscription.json")

  parameters = {
    eventGridTopicName        = "${azurerm_eventgrid_topic.eventgrid_topic.name}"
    eventGridSubscriptionName = "eventgrid-myFunctionName"
    eventGridSubscriptionUrl  = "https://${azurerm_function_app.function.name}.azurewebsites.net/runtime/webhooks/eventgrid?functionName=${azurerm_function_app.function.name}&code=${lookup(azurerm_template_deployment.function_key.outputs, "systemKey")}"
    location                  = var.main_location
  }

  deployment_mode = "Incremental"

  depends_on = [
    azurerm_template_deployment.function_key
  ]
}
So I do not understand why my subscription deployment failed, or what I am missing in order to automate this setting with Terraform.
Following the doc here, I also understand that:
If you don't have access to the application code (for example, if
you're using a third-party service that supports webhooks), you can
use the manual handshake mechanism. Make sure you're using the
2018-05-01-preview API version or later (install Event Grid Azure CLI
extension) to receive the validationUrl in the validation event. To
complete the manual validation handshake, get the value of the
validationUrl property and visit that URL in your web browser. If
validation is successful, you should see a message in your web browser
that validation is successful. You'll see that event subscription's
provisioningState is "Succeeded".
So, is there a way to perform this validation using Terraform, or another way to automate it?
The template is right; you have just misunderstood something in the eventGridSubscriptionUrl. Take a look at the URL formats:
Version 2.x runtime
https://{functionappname}.azurewebsites.net/runtime/webhooks/eventgrid?functionName={functionname}&code={systemkey}
Version 1.x runtime
https://{functionappname}.azurewebsites.net/admin/extensions/EventGridExtensionConfig?functionName={functionname}&code={systemkey}
The functionappname is what you set with azurerm_function_app.function.name, but the functionname is not: it must be the name of the function inside the function app, whereas your Terraform code passes azurerm_function_app.function.name for both.
You can get the existing function name through the Azure REST API: Web Apps - Get Function.
In Terraform, there seems to be no resource for creating a function inside a function app. But you can also use an ARM template to create the function and output the function name, then set it in the URL. You can get more details about the function resource in the Azure template reference here; the function name shows in its properties.
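Putting the pieces together, the version 2.x webhook URL can be composed like this (a plain-shell sketch; all three values below are placeholders — substitute your function app name, the name of the Event Grid-triggered function inside it, and the eventgrid_extension system key):

```shell
#!/bin/sh
# Placeholder values -- replace with your real names/keys.
function_app_name="myfunctionapp"    # the function app (site) name
function_name="EventGridTriggerFn"   # the function INSIDE the app, not the app name
system_key="abc123"                  # the eventgrid_extension system key

# Version 2.x runtime webhook endpoint for an Event Grid trigger.
url="https://${function_app_name}.azurewebsites.net/runtime/webhooks/eventgrid?functionName=${function_name}&code=${system_key}"
echo "$url"
```

The key point is that function_app_name and function_name are two different values; using the app name for both is what makes the endpoint validation fail.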
How do I check whether an Azure resource exists or not in an ARM template, given the resource type and identifier?
It is actually kind of possible. You can use resource group tags to mark the currently deployed version and skip the deployment if the tag is already set. All of this can be achieved via a linked template.
Note that we don't check for resource existence per se, but this still allows writing ARM templates that contain one-time initialization steps. It will also restore the resources if the resource group was deleted and the resources were lost (given that you created the resource group again). You can extend this to support per-resource tags, which will be more useful in some cases.
The template that starts the deployment may look like this:
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "DeploymentTemplateLink": {
      "type": "string"
    },
    "DeploymentVersion": {
      "defaultValue": 1,
      "type": "int"
    }
  },
  "variables": {
    "rgWithDefaultVersion": {
      "tags": {
        "Version": "0"
      }
    }
  },
  "resources": [
    {
      "type": "Microsoft.Resources/deployments",
      "apiVersion": "2017-05-10",
      "name": "DeploymentTemplate",
      "condition": "[less(int(union(variables('rgWithDefaultVersion'), resourceGroup()).tags['Version']), parameters('DeploymentVersion'))]",
      "properties": {
        "mode": "Incremental",
        "templateLink": {
          "uri": "[parameters('DeploymentTemplateLink')]",
          "contentVersion": "1.0.0.0"
        },
        "parameters": {
          "DeploymentVersion": {
            "value": "[parameters('DeploymentVersion')]"
          }
        }
      }
    }
  ]
}
The linked template's condition looks at the tags and returns true only if the current version (stored in the tag) is less than the requested one. You don't actually have to maintain versioning: just leave the DeploymentVersion parameter at its default and the linked template will deploy only the first time. If you decide to redeploy anyway, you can always increase the version, which will trigger deployment of the linked template (aka the "main deployment").
The main deployment template is up to you, but it must contain a tags resource in order to maintain the logic:
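The gating logic of that condition can be modeled outside ARM to sanity-check it (a plain-shell sketch, not ARM code; the function name is made up for illustration):

```shell
#!/bin/sh
# deploy_needed CURRENT REQUESTED -> prints "deploy" or "skip",
# mirroring: less(int(tags.Version), DeploymentVersion)
deploy_needed() {
  current="${1:-0}"   # a missing tag falls back to 0, like the union() default
  requested="$2"
  if [ "$current" -lt "$requested" ]; then
    echo deploy
  else
    echo skip
  fi
}

deploy_needed 0 1   # first run, tag absent: prints "deploy"
deploy_needed 1 1   # redeploy with the same version: prints "skip"
```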
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "DeploymentVersion": {
      "defaultValue": 1,
      "type": "int"
    }
  },
  "variables": {},
  "resources": [
    {
      "type": "Microsoft.Resources/tags",
      "name": "default",
      "apiVersion": "2019-10-01",
      "dependsOn": [],
      "properties": {
        "tags": {
          "Version": "[string(parameters('DeploymentVersion'))]"
        }
      }
    }
  ]
}
A remark for those who didn't understand the union() and rgWithDefaultVersion trick: an ARM template deployment fails if a referenced object doesn't contain a property. In our case we have two such properties: tags and Version. tags exists only if the resource group has, or ever had, tags. Version exists only after we have written it once (in the main deployment). Therefore, before accessing them we union() the returned object with a proper default, ensuring that we can safely access the mentioned properties.
There is no way of doing that in an ARM template. You can use an external source (like PowerShell) to determine it and pass in a parameter with the appropriate value; alternatively, you can use tags to figure it out (have a tag that represents the existence or absence of a resource).
Resource Manager provides the following functions for getting resource values: Resource functions for Azure Resource Manager templates
You could wrap your template with a piece of PowerShell (or whatever) that determines whether the resource exists and passes in the parameter value accordingly, and then use a conditional statement in the template that decides what to do based on the input (but the input has to come from elsewhere).
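One sketch of that pattern: the wrapping script (not shown) checks existence and passes a boolean parameter, and the template gates the resource with condition. The parameter and resource names here are made up for illustration, and the server properties are elided:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "sqlServerExists": {
      "type": "bool",
      "metadata": {
        "description": "Set by the wrapping script, e.g. from a Get-AzSqlServer check."
      }
    },
    "serverName": { "type": "string" }
  },
  "resources": [
    {
      "condition": "[not(parameters('sqlServerExists'))]",
      "type": "Microsoft.Sql/servers",
      "apiVersion": "2021-11-01",
      "name": "[parameters('serverName')]",
      "location": "[resourceGroup().location]",
      "properties": {}
    }
  ]
}
```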
I needed a solution to this recently, to do an incremental update of a SQL server. Since you can't do that, the template fails with a NameAlreadyExists error.
So I needed to check that the resource doesn't exist and only create it if it doesn't:
add a "condition" that checks whether the Azure resource ID exists, and don't create the resource if it does.
{
  ...
  "condition": "[empty(resourceId(resourceGroup().name, 'Microsoft.SQL/servers', parameters('serverName')))]",
  ...
}
You can do this for any resource type.
I've searched online and browsed the available PowerShell cmdlets to try and find a solution for this problem, but have been unsuccessful. Essentially, I have a few Data Factory pipelines that copy/archive incoming files and use a web HTTP POST component to invoke a Logic App that connects to a blob container and deletes the incoming file. The issue I'm facing is that we have several automation runbooks that reset blob access keys every X days. When the blob keys get reset, the Logic App fails because the connection is created manually in the designer itself, and I can't specify a connection string that could pull from Key Vault, for example. Inside {Logic App > API Connections > Edit API Connection} we can manually update the connection string/key, but obviously, for an automated process we should be able to do this programmatically.
Is there a PowerShell cmdlet or another method I'm not seeing that would allow me to update/edit the API connections that get created when using a Blob component inside a Logic App?
Any insight is appreciated!
Once you've rotated the key in the storage account, you can use an ARM template to update the API connection. In this ARM template, the API connection is created by referencing the storage account internally, so you don't have to provide the key:
azuredeploy.json file:
{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "azureBlobConnectionAPIName": {
      "type": "string",
      "metadata": {
        "description": "The name of the connection api to access the azure blob storage."
      }
    },
    "storageAccountName": {
      "type": "string",
      "metadata": {
        "description": "The Storage Account Name."
      }
    }
  },
  "variables": {},
  "resources": [
    {
      "type": "Microsoft.Web/connections",
      "name": "[parameters('azureBlobConnectionAPIName')]",
      "apiVersion": "2016-06-01",
      "location": "[resourceGroup().location]",
      "scale": null,
      "properties": {
        "displayName": "[parameters('azureBlobConnectionAPIName')]",
        "parameterValues": {
          "accountName": "[parameters('storageAccountName')]",
          "accessKey": "[listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')),'2015-05-01-preview').key1]"
        },
        "api": {
          "id": "[concat('subscriptions/', subscription().subscriptionId, '/providers/Microsoft.Web/locations/', resourceGroup().location, '/managedApis/azureblob')]"
        }
      },
      "dependsOn": []
    }
  ]
}
azuredeploy.parameters.json file:
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "azureBlobConnectionAPIName": {
      "value": "myblobConnectionApiName"
    },
    "storageAccountName": {
      "value": "myStorageAccountName"
    }
  }
}
You can then execute the ARM template like this:
Connect-AzureRmAccount
Select-AzureRmSubscription -SubscriptionName <yourSubscriptionName>
New-AzureRmResourceGroupDeployment -Name "ExampleDeployment" -ResourceGroupName "MyResourceGroupName" `
    -TemplateFile "D:\Azure\Templates\azuredeploy.json" `
    -TemplateParameterFile "D:\Azure\Templates\azuredeploy.parameters.json"
To get started with ARM templates and PowerShell, you can have a look at this article:
Deploy resources with Resource Manager templates and Azure PowerShell
I am creating a simple pipeline in Data Factory that should only run a custom activity. The deployment template for the pipeline looks like this:
{
  "type": "pipelines",
  "name": "MyCustomActivityPipeline",
  "dependsOn": [
    "DataFactoryName",
    "AzureBatchLinkedService"
  ],
  "apiVersion": "[variables('api-version')]",
  "properties": {
    "description": "Custom activity sample",
    "activities": [
      {
        "type": "Custom",
        "name": "MyCustomActivity",
        "linkedServiceName": {
          "referenceName": "AzureBatchLinkedService",
          "type": "LinkedServiceReference"
        },
        "typeProperties": {
          "command": "cmd /c echo hello world"
        }
      }
    ]
  }
}
Additionally, I have created all the resources needed: the Batch account with pools and the storage account. All the resources are in the same resource group and subscription. I try to trigger the pipeline using the console command:
Invoke-AzureRmDataFactoryV2Pipeline -DataFactory "DataFactory" -PipelineName "PipelineName" -ResourceGroupName "ResourceGroupName"
I am getting this error:
Activity MyCustomActivity failed: Can not access user batch account, please check batch account setiings.
Has anyone experienced such an error when executing an ADF pipeline? The weird part is that all the resources have access to each other and are within the same resource group and subscription.
Please check the settings of the storage linked service used by the Batch linked service. Make sure the connection string type is SecureString.
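For reference, a storage linked service carrying its connection string as a SecureString looks roughly like this (a sketch in the ADF v2 linked service format; the linked service name and the account name/key placeholders are illustrative):

```json
{
  "name": "AzureStorageLinkedService",
  "properties": {
    "type": "AzureStorage",
    "typeProperties": {
      "connectionString": {
        "type": "SecureString",
        "value": "DefaultEndpointsProtocol=https;AccountName=<accountName>;AccountKey=<accountKey>"
      }
    }
  }
}
```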