I'm trying to create a Logic App through Terraform and I'm facing an issue related to its API Connection.
Here are the manual steps for creating the API Connection:
Create a Logic App in your resource group and go to Logic App Designer
Select the HTTP trigger request and click on "Next Step", then search and select "Azure Container Instance"
Click on Create or update container group and it should ask you to sign in
Now if you scroll all the way down, you should see "Connected to ...... Change Connection"
If Change Connection is clicked, it shows the existing aci connections or lets you create a new one.
I'm trying to create the same Logic App with Terraform and I'm facing an issue with the above-mentioned steps.
What I'm doing is:
Exported the existing Logic App template from another environment
Converted the values in the JSON into parameters and kept them in variables.tf, with the final values in terraform.tfvars
The terraform plan works fine, however terraform apply fails.
Error message:
Error: waiting for creation of Template Deployment "logicapp_arm_template" (Resource Group "resource_group_name"): Code="DeploymentFailed" Message="At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details." Details=[{"code":"NotFound","message":"{\r\n \"error\": {\r\n \"code\": \"ApiConnectionNotFound\",\r\n \"message\": \"The API connection 'aci' could not be found.\"\r\n }\r\n}"}]
Further troubleshooting shows that the error occurs in this line in terraform.tfvars
connections_aci_externalid = "/subscriptions/<subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.Web/connections/aci"
I deduced that the issue is that the "aci" API connection has not been created.
So I created the aci connection manually through the Azure Portal (see the steps at the top of the post).
However, when I run terraform apply, the new error below shows up:
A resource with the ID "/subscriptions/<subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.Resources/deployments/logicapp_arm_template" already exists - to be managed via Terraform this resource needs to be imported into the State. Please see the resource documentation for "azurerm_resource_group_template_deployment" for more information.
My question is: since I'm creating the Logic App from the existing template, how should the "aci" portion be handled through Terraform?
For your last error message, you could remove the terraform.tfstate and terraform.tfstate.backup files from the Terraform working directory, delete the existing resources in the Azure portal, and then run terraform plan and terraform apply again.
If you have a separate, working ARM template, you can invoke the template deployment with the Terraform resource azurerm_resource_group_template_deployment. You can provide the contents of the ARM template parameters file with the parameters_content argument and the contents of the ARM template file with the template_content argument.
In this case, if you have manually created a new API connection, you can directly input its ID /subscriptions/<subscription_id>/resourceGroups/<resourceGroup_id>/providers/Microsoft.Web/connections/aci. Alternatively, you can create the API connection automatically when you deploy your ARM template by including a resource of type Microsoft.Web/connections in it. Read this blog for more samples.
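If you would rather manage the connection itself in Terraform instead of creating it manually or in the ARM template, a minimal sketch using the azurerm provider's azurerm_managed_api data source and azurerm_api_connection resource (available in recent provider versions) could look like the following; the resource group and location references are assumptions for illustration:
data "azurerm_managed_api" "aci" {
  name     = "aci"
  location = var.location
}

resource "azurerm_api_connection" "aci" {
  # Creates the "aci" API connection that the Logic App template references.
  name                = "aci"
  resource_group_name = var.resource_group_name
  managed_api_id      = data.azurerm_managed_api.aci.id
  display_name        = "aci" # assumed display name
}
With this in place, connections_aci_externalid can be set to azurerm_api_connection.aci.id, and a depends_on on the connection ensures it exists before the template deployment runs.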
If using azurerm_resource_group_template_deployment, make sure that the deployment mode is set to Incremental, otherwise you run into terrible state issues. An example from our Terraform module can be seen below. We use this to deploy an ARM template, which we design in our development environment and export from the Azure portal. This enables us to use parameters to deploy exactly the same Logic App in test, acceptance, and production environments.
resource "azurerm_logic_app_workflow" "workflow" {
name = var.logic_app_name
location = var.location
resource_group_name = var.resource_group_name
}
resource "azurerm_resource_group_template_deployment" "workflow_deployment" {
count = var.arm_template_path == null ? 0 : 1
name = "${var.logic_app_name}-deployment"
resource_group_name = var.resource_group_name
deployment_mode = "Incremental"
template_content = file(var.arm_template_path)
parameters_content = jsonencode(local.parameters_content)
depends_on = [azurerm_logic_app_workflow.workflow]
}
Note the conditional using count. Setting arm_template_path = null by default enables us to deploy only the workflow "container" in our development environment, which can then be used as a "canvas" for designing the Logic App.
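For reference, parameters_content expects ARM-style parameter objects, i.e. each value wrapped in a value key. A minimal sketch of what local.parameters_content might look like; the parameter names here are assumptions and must match the names declared in the exported template:
locals {
  parameters_content = {
    # Each entry maps an ARM template parameter to its value.
    logicAppName               = { value = var.logic_app_name }
    connections_aci_externalid = { value = var.connections_aci_externalid }
  }
}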
Related
I have a diagnostic setting configured on my master database, as shown below in my main.tf:
resource "azurerm_monitor_diagnostic_setting" "main" {
name = "Diagnostic Settings - Master"
target_resource_id = "${azurerm_mssql_server.main.id}/databases/master"
log_analytics_workspace_id = azurerm_log_analytics_workspace.main.id
log {
category = "SQLSecurityAuditEvents"
enabled = true
retention_policy {
enabled = false
}
}
metric {
category = "AllMetrics"
retention_policy {
enabled = false
}
}
lifecycle {
ignore_changes = [log, metric]
}
}
If I don't delete it from the resource group before I run Terraform, I get the error:
Diagnostic Settings - Master" already exists - to be managed via
Terraform this resource needs to be imported into the State
I know that the diagnostic setting remains if I delete the SQL Server, but I don't know why that is a problem for Terraform. I have also noticed that it is in my tfplan.
What could be the problem?
If you created the resource in Azure in a different way (i.e. via the Portal, templates, the CLI, or PowerShell), Terraform is not aware that the resource already exists in Azure. So, during terraform plan, Terraform only shows you what will be created from what you have written in main.tf. But when you run terraform apply, the azurerm provider checks the resource names against the existing resources of the same resource providers, and it returns an error saying the resource already exists and needs to be imported to be managed by Terraform.
Also, if you have created everything from Terraform, then running terraform destroy deletes all the resources present in main.tf.
Well, it's in the .tfplan and also it's in main.tf, so it's imported, right?
Mentioning the resource and its details in main.tf and .tfplan doesn't mean that you have imported the resource or that Terraform is aware of it. Terraform is only aware of the resources that are stored in the Terraform state file, i.e. .tfstate.
So, to overcome the error without deleting the resource from the portal, you have to keep the resource in main.tf as you have already done and then use the terraform import command to import the Azure resource into the Terraform state file, like below:
terraform import azurerm_monitor_diagnostic_setting.example "{resourceID}|{DiagnosticsSettingsName}"
So, for you it will be like:
terraform import azurerm_monitor_diagnostic_setting.main "/subscriptions/<SubscriptionID>/resourceGroups/<ResourceGroupName>/providers/Microsoft.Sql/servers/<SQLServerName>/databases/master|Diagnostic Settings - Master"
After the import is done, any changes you make from Terraform to that resource will be reflected in the portal as well, and you will also be able to destroy the resource from Terraform.
My title might not have summed up my question correctly.
So I have a Terraform stack that creates a resource group and a keyvault, amongst other things. This has already been run and the resources exist.
I am now adding another resource to this same Terraform stack, namely a MySQL server. Now I know that if I just re-run the stack, it will check the state file and just add my MySQL server.
However as part of this mysql server creation I am providing a password and I want to write this password to the keyvault that already exists.
If I were doing this from the start, my Terraform would look like:
resource "azurerm_key_vault_secret" "sqlpassword" {
name = "flagr-mysql-password"
value = random_password.sqlpassword.result
key_vault_id = azurerm_key_vault.shared_kv.id
depends_on = [
azurerm_key_vault.shared_kv
]
}
However, I believe that since the keyvault already exists, this would error: Terraform wouldn't know the value of azurerm_key_vault.shared_kv.id unless I destroy the keyvault and allow Terraform to recreate it. Is that correct?
I could replace azurerm_key_vault.shared_kv.id with the actual resource ID from Azure, but then if I were to ever run this stack to create a new environment, it would write the value into my old keyvault, I presume?
I have done this recently for an AWS deployment: you would run terraform import on the azurerm_key_vault.shared_kv resource to bring it under Terraform management, and then you would be able to deploy azurerm_key_vault_secret.
To import it, you will need to build out the azurerm_key_vault.shared_kv resource block to match the existing vault (this will require a few iterations).
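A minimal sketch of that workflow; the vault name, resource group, and subscription ID below are placeholders, and the remaining required arguments are filled in afterwards by iterating on terraform plan:
resource "azurerm_key_vault" "shared_kv" {
  # Start with an empty stanza for the import, then populate the arguments
  # so they match what "terraform state show" reports for the existing vault.
}

terraform import azurerm_key_vault.shared_kv /subscriptions/<subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.KeyVault/vaults/<keyvault_name>
Once the vault is in the state, azurerm_key_vault.shared_kv.id resolves normally and the azurerm_key_vault_secret resource can be applied without recreating the existing vault.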
I created my project with code from the HashiCorp tutorial "Host a static website with S3 and Cloudflare", but the tutorial didn't mention GitHub Actions. So, when I put my project in GitHub Actions, even though terraform plan and terraform apply succeed locally, I get errors on terraform apply:
Error: expected DNS record to not already be present but already exists
with cloudflare_record.site_cname ...
with cloudflare_record.www
I have two resources in my main.tf, one for the site domain and one for www, like the following:
resource "cloudflare_record" "site_cname" {
zone_id = data.cloudflare_zones.domain.zones[0].id
name = var.site_domain
value = aws_s3_bucket.site.website_endpoint
type = "CNAME"
ttl = 1
proxied = true
}
resource "cloudflare_record" "www" {
zone_id = data.cloudflare_zones.domain.zones[0].id
name = "www"
value = var.site_domain
type = "CNAME"
ttl = 1
proxied = true
}
If I remove these lines of code from my main.tf and then run terraform apply locally, I get the warning that this will destroy my resource.
Which should I do?
Add allow_overwrite somewhere? (I don't see examples of how to use this in the docs, and the ways I've tried to add it generated errors.)
Remove the lines of code from main.tf, knowing the GitHub Actions run will destroy my cloudflare_record.www and cloudflare_record.site_cname; I can see my zone ID and CNAME if I log into Cloudflare, so maybe this code isn't necessary after the initial setup.
Run terraform import somewhere? If so, where do I find the zone ID and record ID?
Or something else?
Where is your terraform state? Did you store it locally or in a remote location?
Because that would explain why you don't have any problems locally and why it's trying to recreate the resources in GitHub Actions.
More information about terraform backend (where the state is stored) -> https://www.terraform.io/docs/language/settings/backends/index.html
And how to create one with S3 for example ->
https://www.terraform.io/docs/language/settings/backends/s3.html
It wouldn't be a problem if Terraform dropped and re-created the DNS records, but for a better result you need to ensure that GitHub Actions has access to the (current) workspace state.
Since Terraform Cloud provides a free plan, there is no reason not to take advantage of it. Just create a workspace through their dashboard, add a "remote" backend configuration to your project, and ensure that GitHub Actions uses a Terraform API token at runtime (you would set it via GitHub repository settings > Secrets).
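A minimal sketch of such a backend block, assuming the organization and workspace names are the ones created in the Terraform Cloud dashboard:
terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "your-org-name" # assumption: your Terraform Cloud organization

    workspaces {
      name = "static-website" # assumption: the workspace created in the dashboard
    }
  }
}
Once this is in place, terraform init migrates the local state to the workspace, and both local runs and GitHub Actions share the same state.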
You may want to check this example — Terraform Starter Kit
infra/backend.tf
infra/dns-records.tf
scripts/tf.js
Here is how you can pass the Terraform API token from the secrets.TERRAFORM_API_TOKEN GitHub secret to the Terraform CLI:
- env: { TERRAFORM_API_TOKEN: "${{ secrets.TERRAFORM_API_TOKEN }}" }
  run: |
    echo "credentials \"app.terraform.io\" { token = \"$TERRAFORM_API_TOKEN\" }" > ./.terraformrc
In my Azure account I have some resources: resource groups, app services, storage accounts...
I've created these resources by using the Azure portal or PowerShell.
Then I've written a Terraform script to add other resources and update some of the existing ones. In particular, I'm interested in updating the app service: I want to add some settings and a managed identity to it.
What happens is that Terraform says: "look, there is already an app service with the name you specified".
I tried to use "terraform import" to bind the existing app service to my Terraform state file, but doing so I lose the settings that I've put in the Terraform file.
How can I solve this problem? Thank you.
terraform import is the way to go. If you have any existing settings in your file, just remove them until you have fully imported the app service, then add them back and apply.
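A minimal sketch of that workflow, following the same pattern as the lab below; the resource type (azurerm_app_service), the local name, and the IDs are placeholders for illustration:
resource "azurerm_app_service" "existing" {
  # Keep this stanza empty (or minimal) for the import, then add the app
  # settings and identity block back and iterate on terraform plan.
}

terraform import azurerm_app_service.existing /subscriptions/<subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.Web/sites/<app_service_name>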
Full tutorial - with a resource group instead of an app service, but the principle is the same:
https://azurecitadel.com/automation/terraform/lab6/#lab-importing-resources
Create a resource group (outside of Terraform), then:
Grab the ID for the azure resource: id=$(az group show --name deleteme --query id --output tsv)
Create an empty stanza for the resource in a new import.tf file
resource "azurerm_resource_group" "deleteme" {}
Run the import command:
terraform import azurerm_resource_group.deleteme $id
terraform-labs$ terraform import azurerm_resource_group.deleteme $id
Acquiring state lock. This may take a few moments...
azurerm_resource_group.deleteme: Importing from ID "/subscriptions/2d31be49-d999-4415-bb65-8aec2c90ba62/resourceGroups/deleteme"...
azurerm_resource_group.deleteme: Import complete!
Imported azurerm_resource_group (ID: /subscriptions/2d31be49-d999-4415-bb65-8aec2c90ba62/resourceGroups/deleteme)
azurerm_resource_group.deleteme: Refreshing state... (ID: /subscriptions/2d31be49-d999-4415-bb65-8aec2c90ba62/resourceGroups/deleteme)
Import successful!
The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.
Run terraform plan and you should see some errors as our block is not populated
Run terraform state show azurerm_resource_group.deleteme
id = /subscriptions/2d31be49-d999-4415-bb65-8aec2c90ba62/resourceGroups/deleteme
location = westeurope
name = deleteme
tags.% = 0
Add in the name argument, and the location using the loc variable (see the sketch after these steps)
Rerun terraform plan and it should show no errors and no planned changes
The resource is now fully imported and safely under the control of Terraform.
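For reference, the populated stanza from the step above would look roughly like this (a sketch; loc is the lab's location variable and the values come from the state shown earlier):
resource "azurerm_resource_group" "deleteme" {
  name     = "deleteme"
  location = var.loc # "westeurope" in the state output above
}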
I have the following ARM Template structure:
Parent Template
|--Nested Template 1
|--...
|--Nested Template 6
So I only have 2 levels of templates, Parent and nested.
Let's say I deploy the Parent Template to an empty resource group and everything works well. After that, I delete one of the resources and want to deploy the same Parent Template with the same parameters to bring the deleted resource back. But the deployment fails, saying that a resource already exists (the other one, not the one I'm trying to recreate). I tried both the incremental and complete deployment modes.
If I directly invoke the nested template with the missing resources, it works as expected (specifically, creating a deployment with the nested template only, not with the parent that invokes the nested template).
UPD:
After some additional testing I can conclude that it's even weirder than before. So I'm starting this deployment with PowerShell:
New-AzureRmResourceGroupDeployment #parameters
And it deploys just fine; however, if I invoke the same command after the first deployment has completed, I get this error:
The resource 'gggg-1s-the-wordd' already exists in location
'westeurope' in resource group 'gggg'. A resource with the same name
cannot be created in location 'northeurope'. Please select a new
resource name.
Is this behavior expected? I can't seem to find anything relevant, thanks!
UPD2: It doesn't really matter whether I use the portal or PowerShell, I get the same error.
So with Brian's help we were able to identify the culprit. The issue was that the WebApp had its location set to resourceGroup().location while the App Service Plan was correctly getting its location from parameters. That led to a problem where, at deployment time, the WebApp would deploy to the region of its App Service Plan, but at evaluation time it was considered to belong to the region of the resource group.
TLDR: a copy-paste error which, coupled with a bug in how ARM evaluates location, led to quite weird behavior.
If you deploy the same resource (I intentionally did not use the word "template" there) to the same resource group, Azure should "make it so". In other words, if it's not there, it will be created; if it is there, it should be a no-op. It's not quite that black and white, there are some nuances (like you can't change certain properties if the resource exists), but if you deploy the same resource with the same property values to the same resource group, you should not get an error.
In general, nesting (or not) shouldn't affect any of this.
If you're deploying to different resource groups, then you could see an error about "already exists" depending on the resource.
All that said, it's really hard to tell in your specific case what's going on without more detail... So if this doesn't help, can you add some detail (what's the exact error message) or a repro (template that we could see the problem with)?
I experienced the same issue. The reason was that the location of the App Service was defined as [resourceGroup().location] instead of the App Service Plan (ASP) location, which was causing the problem. I changed it by passing the location of the ASP as a parameter to the template.
Getting the location of the ASP:
internal static string GetASPLocation(TokenCloudCredentials credentials, string resourceGroup, string ASP)
{
    Console.WriteLine($"Getting location of App Service Plan {ASP} in Resource Group {resourceGroup}");
    var resourceClient = new ResourceManagementClient(credentials);
    ResourceExistsResult result = resourceClient.Resources.CheckExistence(resourceGroup, new ResourceIdentity(ASP, "Microsoft.Web/serverfarms", "2015-08-01"));
    var appServicePlan = resourceClient.Resources.Get(resourceGroup, new ResourceIdentity(ASP, "Microsoft.Web/serverfarms", "2015-08-01"));
    return appServicePlan.Resource.Location;
}
And in ARM template, location can be changed as :
"location": "[parameters('ASPLocation')]"