So I'm using Terraform to create a scheduled query alert on a particular Application Insights resource:
resource "azurerm_template_deployment" "rule1" {
  name                = "queryrule${md5(format("%s-%s", var.resourcegroupname, var.name))}" # This is the name of the deployment (has to be unique for each rule)
  resource_group_name = var.resourcegroupname
  template_body       = file("./modules/queryrule/queryRule.json")
  deployment_mode     = "Incremental"

  parameters = {
    action_emailSubject              = "${var.person} from ${var.email}"
    action_groups                    = join(";", var.action_group_array)
    action_trigger_thresholdOperator = var.act_threshold_1operator
    action_trigger_threshold         = var.action_threshold
    name                             = var.name_rule
    description                      = var.description
    schedule_frequencyInMinutes      = var.frequency
    schedule_timeWindowInMinutes     = var.timeWindow
    query                            = var.queryString
    data_source_id                   = var.data_source
  }
}
queryRule.json is a normal ARM template for a scheduled query rule.
The problem is that when I deployed the Terraform project, the data source was invalid, so the scheduled query was created but was not added as an alert on the Application Insights resource, and it was also not added to the Terraform state.
When I deployed the next time, it said this resource already exists but is not part of the Terraform state. I want to delete this scheduled query, but I can't find it in the Azure portal. Any ideas on how to find and delete this orphaned scheduled query?
I contacted Microsoft support to find the answer. The reason I was not able to find it is that the scheduled query was never actually created; only the deployment for it was created, and that was never added to the Terraform state. I had to go to the Deployments option in the resource group menu, find the failed deployment, and delete it.
Related
I have a use case where we have two ways of allowing resource creation:
With the help of Terraform
Manual creation
For example (this is just an example resource, not the actual one we are using):
resource "aws_codestarconnections_connection" "example" {
  name          = "example-connection"
  provider_type = "GitHub"
}
The same connection can also be created using the UI.
If the resource is already there, I do not want Terraform to attempt the creation again.
If it was created by Terraform earlier, it stays in the Terraform state, and Terraform won't create this resource again.
But if it was created manually, is there a way to avoid the creation?
Option tried:
Fetching the data source first and then using count during resource creation.
E.g.:
resource "aws_codestarconnections_connection" "example" {
  count         = number_of_fetched_resources == 0 ? 1 : number_of_fetched_resources
  name          = "example-connection"
  provider_type = "GitHub"
}
Now, if there is no existing resource, this works fine, because it creates one resource in that case. But if the resource was already created manually, I want Terraform to not attempt the creation at all (leaving the number of resources unchanged). Instead, Terraform treats it as a request to create a new resource, because the state still contains zero of them.
Also,
count = number_of_fetched_resources == 0 ? 1 : 0
doesn't work, because now it will also delete the existing resource in the case where Terraform created it earlier.
So is there a way to sync the state using Terraform code (I cannot use commands like terraform import, as running commands is done in a different environment)?
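For what it's worth, here is a minimal sketch of adopting an existing resource into state from code rather than from the CLI, assuming Terraform 1.5 or newer where import blocks are available; the ARN below is only a placeholder for the manually created connection:
import {
  # Sketch only (assumes Terraform >= 1.5): adopt the manually created
  # connection into state on the next apply instead of running `terraform import`.
  # The ARN is a placeholder, not a real identifier.
  to = aws_codestarconnections_connection.example
  id = "arn:aws:codestar-connections:eu-west-1:111111111111:connection/placeholder"
}
Note this only sketches the import mechanism itself; it does not by itself solve the conditional-creation part of the question, since the id must point at a connection that already exists.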
When creating an App Service Plan on my new-ish (4-day-old) subscription using Terraform, I immediately get a throttling error:
App Service Plan Create operation is throttled for subscription <subscription>. Please contact support if issue persists
The thing is, when I then go to the UI and create an identical service plan, I receive no errors and it creates without issue, so it's clear that there is actually no throttling problem with creating the App Service Plan, since I can create it that way.
I'm wondering if anyone knows why this is occurring?
NOTE
I've gotten around this issue by just creating the resource in the UI and then importing it into my TF state... but since the main point of IaC is automation, I'd like to ensure that this unusual behavior does not persist when I go to create new environments.
EDIT
My code is as follows
resource "azurerm_resource_group" "frontend_rg" {
  name     = "${var.env}-${var.abbr}-frontend"
  location = var.location
}

resource "azurerm_service_plan" "frontend_sp" {
  name                = "${var.env}-${var.abbr}-sp"
  resource_group_name = azurerm_resource_group.frontend_rg.name
  location            = azurerm_resource_group.frontend_rg.location
  os_type             = "Linux"
  sku_name            = "B1"
}
EDIT 2
terraform {
  backend "azurerm" {}

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "3.15.0"
    }
  }
}
So I am completely new to Terraform, and I found that by using this in the Terraform main.tf I can create the Azure Databricks infrastructure:
resource "azurerm_databricks_workspace" "bdcc" {
  depends_on = [
    azurerm_resource_group.bdcc
  ]

  name                = "dbw-${var.ENV}-${var.LOCATION}"
  resource_group_name = azurerm_resource_group.bdcc.name
  location            = azurerm_resource_group.bdcc.location
  sku                 = "standard"

  tags = {
    region = var.BDCC_REGION
    env    = var.ENV
  }
}
And I also found here that by using this I can even create a particular notebook in this Azure Databricks infrastructure:
resource "databricks_notebook" "notebook" {
  content_base64 = base64encode(<<-EOT
    # created from ${abspath(path.module)}
    display(spark.range(10))
    EOT
  )
  path     = "/Shared/Demo"
  language = "PYTHON"
}
But since I am new to this, I am not sure in what order I should put those pieces of code together.
It would be nice if someone could point me to the full example of how to create notebook via terraform on Azure Databricks.
Thank you beforehand!
In general you can put these objects in any order - it's the job of Terraform to detect dependencies between the objects and create/update them in the correct order. For example, you don't need depends_on in the azurerm_databricks_workspace resource, because Terraform will figure out that the resource group is needed before the workspace can be created, so workspace creation will follow the creation of the resource group. And Terraform tries to make changes in parallel where possible.
But because of this, things become slightly more complex when you have the workspace resource together with workspace objects, like notebooks, clusters, etc. As there is no explicit dependency, Terraform will try to create the notebook in parallel with the creation of the workspace, and it will fail because the workspace doesn't exist yet - usually you will get a message about an authentication error.
The solution is to have an explicit dependency between the notebook and the workspace, plus you need to configure the authentication of the Databricks provider to point to the newly created workspace (there are differences between user & service principal authentication - you can find more information in the docs). In the end your code would look like this:
resource "azurerm_databricks_workspace" "bdcc" {
  name                = "dbw-${var.ENV}-${var.LOCATION}"
  resource_group_name = azurerm_resource_group.bdcc.name
  location            = azurerm_resource_group.bdcc.location
  sku                 = "standard"

  tags = {
    region = var.BDCC_REGION
    env    = var.ENV
  }
}

provider "databricks" {
  host = azurerm_databricks_workspace.bdcc.workspace_url
}

resource "databricks_notebook" "notebook" {
  depends_on = [azurerm_databricks_workspace.bdcc]
  ...
}
Unfortunately, there is no way to put depends_on at the provider level, so you will need to put it into every Databricks resource that is created together with the workspace. Usually the best practice is to have a separate module for workspace creation and a separate module for the objects inside the Databricks workspace.
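As a rough illustration of that module split (a sketch only - the module paths, variables, and the workspace_url output are assumptions, not part of the original answer):
module "workspace" {
  # Sketch: this module is assumed to wrap azurerm_databricks_workspace and
  # to expose its workspace_url as an output.
  source              = "./modules/workspace"
  resource_group_name = azurerm_resource_group.bdcc.name
  location            = azurerm_resource_group.bdcc.location
}

provider "databricks" {
  # The provider configuration references the workspace module's output.
  host = module.workspace.workspace_url
}

module "workspace_objects" {
  # Sketch: this module is assumed to contain databricks_notebook, clusters, etc.
  source = "./modules/workspace-objects"
}
Here the host argument referencing the workspace module's output is what ties the two modules together.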
P.S. I would recommend reading a book or the documentation on Terraform. For example, Terraform: Up & Running is a very good intro.
I'm trying to create a Logic App through Terraform and I'm facing an issue related to the API Connection.
Here are the manual steps for creating the API Connection:
Create a Logic App in your resource group and go to Logic App Designer
Select the HTTP trigger request and click on "Next Step", then search and select "Azure Container Instance"
Click on Create or update container group, and it should ask you to sign in
Now if you scroll all the way down, you should see "Connected to ...... Change Connection"
If Change Connection is clicked, it will show the existing aci connections or let you create a new one.
I'm trying to create the Logic App through Terraform and I'm facing an issue with the above-mentioned steps.
What I'm doing is:
Exported the existing Logic App template from another environment
Converted the values in the JSON into parameters, keeping them in variables.tf with the final values in terraform.tfvars
terraform plan works fine; however, terraform apply raises an issue
Error message:
Error: waiting for creation of Template Deployment "logicapp_arm_template" (Resource Group "resource_group_name"): Code="DeploymentFailed" Message="At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details." Details=[{"code":"NotFound","message":"{\r\n \"error\": {\r\n \"code\": \"ApiConnectionNotFound\",\r\n \"message\": \"**The API connection 'aci' could not be found**.\"\r\n }\r\n}"}]
Further troubleshooting shows that the error occurs on this line in terraform.tfvars:
connections_aci_externalid = "/subscriptions/<subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.Web/connections/aci"
I deduced that the issue is that the "aci" connection was not created.
So I created the aci connection manually through the Azure portal (see the top of the post for the steps).
However, when I hit terraform apply the new error below shows up:
A resource with the ID "/subscriptions/<subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.Resources/deployments/logicapp_arm_template" already exists - to be managed via Terraform this resource needs to be imported into the State. Please see the resource documentation for "azurerm_resource_group_template_deployment" for more information.
My question is: since I'm creating the Logic App using the existing template, how should the "aci" portion be handled through Terraform?
For your last error message, you could remove the terraform.tfstate and terraform.tfstate.backup files in the Terraform working directory, delete the existing resources in the Azure portal, and then run terraform plan and terraform apply again.
If you have a separate working ARM template, you can invoke the template deployment with the Terraform resource azurerm_resource_group_template_deployment. You provide the contents of the ARM template parameters file with the parameters_content argument and the contents of the ARM template file with the template_content argument.
In this case, if you have manually created a new API Connection, you can directly input your new API connection ID /subscriptions/<subscription_id>/resourceGroups/<resourceGroup_id>/providers/Microsoft.Web/connections/aci. Alternatively, you can create the API Connection automatically when you deploy your ARM template with the Microsoft.Web/connections resource. Read this blog for more samples.
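As an alternative sketch, the API connection could also be created in Terraform itself rather than inside the ARM template, using the azurerm provider's azurerm_api_connection resource; the managed API name "aci" and the variable names below are assumptions, not something from the original answer:
data "azurerm_managed_api" "aci" {
  # Sketch only: "aci" is assumed to be the managed API name for the
  # Azure Container Instance connector in this location.
  name     = "aci"
  location = var.location
}

resource "azurerm_api_connection" "aci" {
  # Creates the connection that the Logic App definition references by ID.
  name                = "aci"
  resource_group_name = var.resource_group_name
  managed_api_id      = data.azurerm_managed_api.aci.id
  display_name        = "aci"
}
The connection's id attribute could then feed the connections_aci_externalid parameter instead of a hard-coded string.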
If you are using azurerm_resource_group_template_deployment, make sure that the deployment mode is set to Incremental; otherwise you run into terrible state issues. An example from our Terraform module can be seen below. We use this to deploy an ARM template, which we design in our development environment and export from the Azure portal. This enables us to use parameters to deploy exactly the same Logic App in test, acceptance, and production environments.
resource "azurerm_logic_app_workflow" "workflow" {
  name                = var.logic_app_name
  location            = var.location
  resource_group_name = var.resource_group_name
}

resource "azurerm_resource_group_template_deployment" "workflow_deployment" {
  count = var.arm_template_path == null ? 0 : 1

  name                = "${var.logic_app_name}-deployment"
  resource_group_name = var.resource_group_name
  deployment_mode     = "Incremental"
  template_content    = file(var.arm_template_path)
  parameters_content  = jsonencode(local.parameters_content)

  depends_on = [azurerm_logic_app_workflow.workflow]
}
Note the conditional using count. Setting arm_template_path = null by default enables us to deploy only the workflow "container" in our development environment, which can then be used as a "canvas" for designing the Logic App.
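For completeness, the variable driving that conditional might be declared along these lines (a sketch; only the variable name comes from the snippet above):
variable "arm_template_path" {
  # Null by default so the template deployment is skipped unless a path is supplied.
  type        = string
  default     = null
  description = "Path to the exported Logic App ARM template; null skips the deployment"
}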
I have an existing Azure SQL Server and one database that wasn't initially created in an Elastic Pool. Terraform has deployed this and kept the state.
# Define SQL Server 1
resource "azurerm_mssql_server" "go-1" {
  name                          = "sql-sandbox-server01"
  resource_group_name           = data.azurerm_resource_group.env-resourcegroup.name
  location                      = data.azurerm_resource_group.env-resourcegroup.location
  version                       = var.azsqlserver1version
  administrator_login           = var.azsqlserver1sauser
  administrator_login_password  = random_password.sql-password.result
  public_network_access_enabled = true # set to false with vNet integration
}

# Define SQL Database 1 - non-ElasticPool
resource "azurerm_mssql_database" "go-1" {
  name      = "sqldb-sandbox-01"
  server_id = azurerm_mssql_server.go-1.id
  sku_name  = "Basic"
}
The decision has now been made to use Elastic Pools (for this single database, with others to follow), and the database "sqldb-sandbox-01" already has tables and data in it.
I've added this to my main.tf file, and it works fine; the Elastic Pool gets created:
resource "azurerm_sql_elasticpool" "go-1" {
  name                = "sqlep-sandbox-pool01"
  resource_group_name = data.azurerm_resource_group.env-resourcegroup.name
  location            = data.azurerm_resource_group.env-resourcegroup.location
  server_name         = azurerm_mssql_server.go-1.name
  edition             = "Basic"
  dtu                 = 50
  db_dtu_min          = 0
  db_dtu_max          = 5
  pool_size           = 5000
}
My question is: how do I move the existing "sqldb-sandbox-01" into the Elastic Pool in Terraform without it destroying and recreating the database?
I attempted this, just adding the single elastic_pool_id line, but as the documentation states, it will destroy and recreate the database:
# Define SQL Database 1 - non-ElasticPool
resource "azurerm_mssql_database" "go-1" {
  name            = var.azsqldb1name
  server_id       = azurerm_mssql_server.go-1.id
  sku_name        = var.azsqldb1sku
  elastic_pool_id = azurerm_sql_elasticpool.go-1.id
}
I would be grateful to hear from anyone who has been in the same position and managed to find a way.
Moving an existing same-server database into an Elastic Pool is easily achieved in the Azure Portal GUI, so I was hoping for something similar here. I did some searching around but couldn't find anything specific to this straightforward task.
Thanks in advance
For the existing Azure SQL database and Elastic Pool: directly adding the single elastic_pool_id line to the block will force a new resource to be created, and even this is not obvious in the Azure portal.
Instead of doing this, you could use a local PowerShell script to add the existing database to the new Elastic Pool. The local-exec provisioner invokes a local executable after a resource is created.
Here is a working sample on my side.
resource "null_resource" "add_pool" {
  provisioner "local-exec" {
    command     = <<-EOT
      Set-AzSqlDatabase `
        -ResourceGroupName "${azurerm_resource_group.example.name}" `
        -ServerName "${azurerm_mssql_server.example.name}" `
        -DatabaseName "${azurerm_mssql_database.test.name}" `
        -ElasticPoolName "${azurerm_sql_elasticpool.go-1.name}"
    EOT
    interpreter = ["PowerShell", "-Command"]
  }
}
This actually ended up easier than anticipated, after quite a lot of testing.
My original database segment looked like this...
# Define SQL Database 1 - non-ElasticPool
resource "azurerm_mssql_database" "go-1" {
  name      = var.azsqldb1name
  server_id = azurerm_mssql_server.go-1.id
  sku_name  = "Basic"
}
It made sense to just move the database to an elastic pool manually, rather than try to do it in code.
Once the database had been moved to the elastic pool, I noticed the Azure ID behind the database did not change.
I then updated the Terraform with a change to the sku_name and the addition of the elastic_pool_id:
resource "azurerm_mssql_database" "go-1" {
  name            = var.azsqldb1name
  server_id       = azurerm_mssql_server.go-1.id
  sku_name        = "ElasticPool"
  elastic_pool_id = azurerm_sql_elasticpool.go-1.id
}
Run terraform plan again: no infrastructure changes are detected, so it has worked and Terraform doesn't want to destroy anything.
In summary: do the move of the stand-alone database to the Elastic Pool manually.
Then update your Terraform for the database in question with a change to sku_name and the addition of the elastic_pool_id.
Thanks to all those that assisted me with this question
Switching from a General Purpose tier to ElasticPool is safe now (tested with AzureRM provider 3.29). Terraform will do a migration to the Elastic Pool without re-creating the database.
At least this is the case for the MSSQL database resource.