Terraform - Add tags to failover SQL databases

I am trying to figure out a way to add tags to the failover databases created with the help of the Terraform registry resource azurerm_mssql_failover_group.
If I use the tags field mentioned in the Terraform documentation, it adds tags to the failover group itself, but it does not set tags on the failover databases that get created.
My code for the failover group resource is below:
resource "azurerm_mssql_failover_group" "sql-database-failover" {
  name      = "sqldatabasefailover1"
  server_id = "Id of Primary SQL Server"
  databases = [
    "Id of primary SQL dbs"
  ]
  partner_server {
    id = "Id of secondary (failover) SQL server"
  }
  read_write_endpoint_failover_policy {
    mode          = "Automatic"
    grace_minutes = 60
  }
  tags = local.common_tags
}
I looked for a Terraform resource that could add tags to the already-created secondary databases but could not find one. Any help will be appreciated.
Regards Tarun

I tried to reproduce the same in my environment.
I created local tags as below and tried to add them to the failover group.
Code referred from: azurerm_mssql_failover_group | Resources | hashicorp/azurerm | Terraform Registry
Code:
locals {
  resource_tags = {
    project_name = "failovergroup",
    category     = "devbackupresource"
  }
}
resource "azurerm_mssql_server" "primary" {
  name                         = "ka-sql-primary"
  resource_group_name          = data.azurerm_resource_group.example.name
  location                     = "southeastasia"
  version                      = "12.0"
  administrator_login          = "sqladmin"
  administrator_login_password = "xxx"
}
resource "azurerm_mssql_server" "secondary" {
  name                         = "ka-sql-secondary"
  resource_group_name          = data.azurerm_resource_group.example.name
  location                     = "westeurope"
  version                      = "12.0"
  administrator_login          = "sqladmin"
  administrator_login_password = "xxx"
}
resource "azurerm_mssql_database" "db1" {
  name      = "kadb1"
  server_id = azurerm_mssql_server.primary.id
}
resource "azurerm_mssql_failover_group" "example" {
  name      = "kav-example-failover-group"
  server_id = azurerm_mssql_server.primary.id
  databases = [azurerm_mssql_database.db1.id]
  tags      = local.resource_tags
  partner_server {
    id = azurerm_mssql_server.secondary.id
  }
  read_write_endpoint_failover_policy {
    mode          = "Automatic"
    grace_minutes = 60
  }
}
But the tags are not added to the secondary database that the failover group creates.
You can instead set the tags in the secondary server's resource block, "azurerm_mssql_server", as below.
locals {
  resource_tags = {
    ...
  }
}
resource "azurerm_mssql_server" "secondary" {
  name                         = "ka-sql-secondary"
  resource_group_name          = data.azurerm_resource_group.example.name
  location                     = "westeurope"
  version                      = "12.0"
  administrator_login          = "sqladmin"
  administrator_login_password = "pa$$w0rd"
  tags                         = local.resource_tags
}
resource "azurerm_mssql_failover_group" "example" {
  name      = "kav-example-failover-group"
  server_id = azurerm_mssql_server.primary.id
  databases = [azurerm_mssql_database.db1.id]
  ...
}
This added the tags to my secondary SQL server in Azure.
Edit:
See tag support for Microsoft.Sql and the Auto-failover groups limitations; one may need to create tags on the secondary databases manually.
Also note: database restore operations don't restore the tags of the original database.
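If the secondary databases themselves must carry the tags, one possible workaround (a sketch, not something the azurerm provider offers) is to patch the tags onto the replica databases with the azapi provider once the failover group exists. Everything here is an assumption: that the azapi provider is configured, that the secondary database keeps the primary database's name (kadb1), and that this API version accepts a tags-only update.

```hcl
# Hypothetical sketch: tag the geo-secondary database created by the
# failover group, by PATCHing it through the azapi provider.
resource "azapi_update_resource" "secondary_db_tags" {
  type = "Microsoft.Sql/servers/databases@2021-11-01"

  # Assumption: the replica carries the same database name as the primary.
  resource_id = "${azurerm_mssql_server.secondary.id}/databases/kadb1"

  body = jsonencode({
    tags = local.resource_tags
  })

  # Only attempt the tag update once the replica exists.
  depends_on = [azurerm_mssql_failover_group.example]
}
```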

Related

Terraform - ADF to DB connectivity issue when tenant_id is provided in LS configuration - azurerm_data_factory_linked_service_azure_sql_database

Terraform Version
1.2.3
AzureRM Provider Version
v3.13.0
Affected Resource(s)/Data Source(s)
Azure data factory, SQL Database
Terraform Configuration Files
resource "azurerm_data_factory_linked_service_azure_sql_database" "sqldatabase_linked_service_10102022" {
  count = (var.subResourcesInfo.sqlDatabaseName != "") ? 1 : 0
  depends_on = [
    azurerm_data_factory_integration_runtime_azure.autoresolve_integration_runtime,
    azurerm_data_factory_managed_private_endpoint.sqlserver_managed_endpoint
  ]
  name                     = "AzureSqlDatabase10102022"
  data_factory_id          = azurerm_data_factory.datafactory.id
  integration_runtime_name = "AutoResolveIntegrationRuntime"
  use_managed_identity     = true
  connection_string = format(
    "Integrated Security=False;Data Source=%s.database.windows.net;Initial Catalog=%s;",
    var.subResourcesInfo.sqlServerName,
    var.subResourcesInfo.sqlDatabaseName
  )
}
Expected Behaviour
The issue is ADF-to-DB connectivity; the error is:
Operation on target DWH_DF_aaa failed: {'StatusCode':'DFExecutorUserError','Message':'Job failed due to reason: com.microsoft.dataflow.broker.InvalidOperationException: Only one valid authentication should be used for AzureSqlDatabase. ServicePrincipalAuthentication is invalid. One or two of servicePrincipalId/key/tenant is missing.','Details':''}
When we create this linked service using Terraform, the ADF linked-service JSON contains tenant="", which we suspect is causing the error above.
When we create the same linked service directly in the ADF UI, its JSON has no tenant field at all, and if we use that linked service in a dataflow/pipeline, the ADF-to-DB communication works.
The expected behaviour is: if we don't provide the tenant_id parameter in the Terraform code, the JSON should not contain tenant="" either, which would then allow connectivity.
I tried to reproduce the scenario in my environment.
With the below code, I could create a linked service (connection) between Azure SQL Database and Azure Data Factory.
Code:
resource "azurerm_data_factory" "example" {
  name                            = "kaADFexample"
  location                        = data.azurerm_resource_group.example.location
  resource_group_name             = data.azurerm_resource_group.example.name
  managed_virtual_network_enabled = true
}
resource "azurerm_storage_account" "example" {
  name                     = "kaaaexample"
  resource_group_name      = data.azurerm_resource_group.example.name
  location                 = data.azurerm_resource_group.example.location
  account_kind             = "BlobStorage"
  account_tier             = "Standard"
  account_replication_type = "LRS"
}
resource "azurerm_data_factory_managed_private_endpoint" "example" {
  name               = "example"
  data_factory_id    = azurerm_data_factory.example.id
  target_resource_id = azurerm_storage_account.example.id
  subresource_name   = "blob"
}
resource "azurerm_user_assigned_identity" "main" {
  depends_on          = [data.azurerm_resource_group.example]
  name                = "kasupports01-mid"
  resource_group_name = data.azurerm_resource_group.example.name
  location            = data.azurerm_resource_group.example.location
}
resource "azurerm_data_factory_integration_runtime_azure" "test" {
  name                    = "AutoResolveIntegrationRuntime"
  data_factory_id         = azurerm_data_factory.example.id
  location                = "AutoResolve"
  virtual_network_enabled = true
}
resource "azurerm_data_factory_linked_service_azure_sql_database" "linked_service_azure_sql_database" {
  name                     = "kaexampleLS"
  data_factory_id          = azurerm_data_factory.example.id
  connection_string        = "data source=serverhostname;initial catalog=master;user id=testUser;Password=test;integrated security=False;encrypt=True;connection timeout=30"
  use_managed_identity     = true
  integration_runtime_name = azurerm_data_factory_integration_runtime_azure.test.name
  depends_on = [
    azurerm_data_factory_integration_runtime_azure.test,
    azurerm_data_factory_managed_private_endpoint.example
  ]
}
output "id" {
  value = azurerm_data_factory_linked_service_azure_sql_database.linked_service_azure_sql_database.id
}
Executed: terraform plan, then terraform apply
Output:
id = "/subscriptions/xxxxxxxxx/resourceGroups/xxxxxx/providers/Microsoft.DataFactory/factories/kaADFexample/linkedservices/kaexampleLS"
If the error persists in your case, try removing the tenant attribute from the linked-service JSON in the data factory just after the Terraform deployment is done.
Please check this known issue, mentioned by #chgenzel in terraform-provider-azurerm issues | GitHub
ADF: Managed Identity
Linked service: Azure SQL
Reference: data_factory_linked_service_azure_sql_database | Terraform Registry

Update exsiting Azure App Service in Terraform

I would like to update my existing Azure App Service in Terraform by adding a backup to it.
For now it looks like this:
data "azurerm_app_service_plan" "example" {
  name                = "MyUniqueServicePlan"
  resource_group_name = "example-resources"
}
resource "azurerm_app_service" "example" {
  name                = "MyUniqueWebAppName"
  location            = "West Europe"
  resource_group_name = "example-resources"
  app_service_plan_id = data.azurerm_app_service_plan.example.id
  connection_string {
    name  = "myConectionString"
    type  = "SQLServer"
    value = "Server=tcp:mysqlservername123.database.windows.net,1433;Initial Catalog=MyDatabaseName;Persist Security Info=False;User ID=xxx;Password=xxxxxx;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;"
  }
  backup {
    name                = "MyBackupName"
    enabled             = true
    storage_account_url = "https://storageaccountnameqwetih.blob.core.windows.net/mycontainer?sp=r&st=2022-08-31T09:49:17Z&se=2022-08-31T17:49:17Z&spr=https&sv=2021-06-08&sr=c&sig=2JwQ%xx%2B%2xxB5xxxxFZxxVyAadjxxV8%3D"
    schedule {
      frequency_interval       = 30
      frequency_unit           = "Day"
      keep_at_least_one_backup = true
      retention_period_in_days = 10
      start_time               = "2022-08-31T07:11:56.52Z"
    }
  }
}
But when I run it I get the error: A resource with the ID ........ /MyUniqueWebAppName" already exists - to be managed via Terraform this resource needs to be imported into the State.
How in Terraform can I point to an existing Azure App Service and add a backup with the same schedule as in my template?
Before you can modify existing resources with Terraform, you must import them into the Terraform state. For this, use the terraform import command.
data "azurerm_resource_group" "example" {
  name = "<give rg name existing one>"
}
data "azurerm_app_service_plan" "example" {
  name                = "MyUniqueServicePlan"
  resource_group_name = data.azurerm_resource_group.example.name
}
resource "azurerm_app_service" "example" {
  name                = "MyUniqueWebAppName"
  location            = data.azurerm_resource_group.example.location
  resource_group_name = data.azurerm_resource_group.example.name
  app_service_plan_id = data.azurerm_app_service_plan.example.id
  connection_string {
    name  = "myConectionString"
    type  = "SQLServer"
    value = "Server=tcp:mysqlservername123.database.windows.net,1433;Initial Catalog=MyDatabaseName;Persist Security Info=False;User ID=xxx;Password=xxxxxx;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;"
  }
  backup {
    name                = "MyBackupName"
    enabled             = true
    storage_account_url = "https://storageaccountnameqwetih.blob.core.windows.net/mycontainer?sp=r&st=2022-08-31T09:49:17Z&se=2022-08-31T17:49:17Z&spr=https&sv=2021-06-08&sr=c&sig=2JwQ%xx%2B%2xxB5xxxxFZxxVyAadjxxV8%3D"
    schedule {
      frequency_interval       = 30
      frequency_unit           = "Day"
      keep_at_least_one_backup = true
      retention_period_in_days = 10
      start_time               = "2022-08-31T07:11:56.52Z"
    }
  }
}
Once the existing App Service is imported, Terraform manages it and will apply the backup block from this configuration.
Just give the name of your existing resource group in the resource group data block.
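The import step itself can be sketched as follows; the subscription ID below is a placeholder, and the declarative form assumes Terraform 1.5 or later:

```hcl
# CLI form (any Terraform version); run once, then plan/apply as usual:
#   terraform import azurerm_app_service.example \
#     "/subscriptions/<sub-id>/resourceGroups/example-resources/providers/Microsoft.Web/sites/MyUniqueWebAppName"

# Declarative form (Terraform >= 1.5): Terraform imports the resource
# into state during the next apply, then manages it from this config.
import {
  to = azurerm_app_service.example
  id = "/subscriptions/<sub-id>/resourceGroups/example-resources/providers/Microsoft.Web/sites/MyUniqueWebAppName"
}
```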

Using terraform how do I create an azure sql database from a backup

Using the default example on the Terraform site I can easily create a database, but how do I create a new database by restoring a backup?
provider "azurerm" {
  features {}
}
resource "azurerm_resource_group" "example" {
  name     = "example-resources"
  location = "West Europe"
}
resource "azurerm_storage_account" "example" {
  name                     = "examplesa"
  resource_group_name      = azurerm_resource_group.example.name
  location                 = azurerm_resource_group.example.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}
resource "azurerm_mssql_server" "example" {
  name                         = "example-sqlserver"
  resource_group_name          = azurerm_resource_group.example.name
  location                     = azurerm_resource_group.example.location
  version                      = "12.0"
  administrator_login          = "4dm1n157r470r"
  administrator_login_password = "4-v3ry-53cr37-p455w0rd"
}
resource "azurerm_mssql_database" "test" {
  name           = "acctest-db-d"
  server_id      = azurerm_mssql_server.example.id
  collation      = "SQL_Latin1_General_CP1_CI_AS"
  license_type   = "LicenseIncluded"
  max_size_gb    = 4
  read_scale     = true
  sku_name       = "BC_Gen5_2"
  zone_redundant = true
  create_mode    = "RestoreExternalBackup" # <-- WHAT ELSE DO I DO?
  extended_auditing_policy {
    storage_endpoint                        = azurerm_storage_account.example.primary_blob_endpoint
    storage_account_access_key              = azurerm_storage_account.example.primary_access_key
    storage_account_access_key_is_secondary = true
    retention_in_days                       = 6
  }
  tags = {
    foo = "bar"
  }
}
In the documentation they mention a create_mode option of "RestoreExternalBackup" but provide no example of how to reference the backup - mine is stored in an Azure storage container.
Edit: the mention of "RestoreExternalBackup" reflected my lack of understanding. What I meant to ask was: how do I restore/create a database from a bacpac file stored in a storage account?
Following the blog Deploying Azure SQL Database Bacpac and Terraform by John Q. Martin:
You can include the bacpac as the source for the database created in Azure.
First, set up the firewall on the Azure SQL Server to prevent any failure during deployment due to blob-storage access issues. Enable "Allow Azure services and resources to access this server", which allows the two Azure services to communicate.
Setting the Azure SQL Server firewall
Set both start_ip_address and end_ip_address to 0.0.0.0; Azure interprets this firewall rule as allowing Azure services.
resource "azurerm_sql_firewall_rule" "allowAzureServices" {
  name                = "Allow_Azure_Services"
  resource_group_name = azurerm_resource_group.example.name
  server_name         = azurerm_sql_server.example.name
  start_ip_address    = "0.0.0.0"
  end_ip_address      = "0.0.0.0"
}
Defining the database resource
We need to use the azurerm_sql_database resource, because the deployment of a bacpac is only supported through this resource type.
The resource definition has two main sections: the first covers where the database needs to go, and the second is a sub-block defining the bacpac source details. Here we put in the URI for the bacpac file and the storage key; in this case we are using the SAS token as the key to allow access to the bacpac.
We also need to provide the username and password for the server we are creating, because the import needs authorisation to the Azure SQL Server to work.
provider "azurerm" {
  features {}
}
resource "azurerm_resource_group" "example" {
  name     = "example-resources"
  location = "West Europe"
}
resource "azurerm_storage_account" "example" {
  name                     = "examplesa"
  resource_group_name      = azurerm_resource_group.example.name
  location                 = azurerm_resource_group.example.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}
resource "azurerm_sql_server" "example" {
  name                         = "myexamplesqlserver"
  resource_group_name          = azurerm_resource_group.example.name
  location                     = azurerm_resource_group.example.location
  version                      = "12.0"
  administrator_login          = "4dm1n157r470r"
  administrator_login_password = "4-v3ry-53cr37-p455w0rd"
  tags = {
    environment = "production"
  }
}
resource "azurerm_sql_firewall_rule" "allowAzureServices" {
  name                = "Allow_Azure_Services"
  resource_group_name = azurerm_resource_group.example.name
  server_name         = azurerm_sql_server.example.name
  start_ip_address    = "0.0.0.0"
  end_ip_address      = "0.0.0.0"
}
resource "azurerm_sql_database" "appdb01" {
  depends_on          = [azurerm_sql_firewall_rule.allowAzureServices]
  name                = "AzSqlDbName"
  resource_group_name = azurerm_sql_server.example.resource_group_name
  location            = azurerm_sql_server.example.location
  server_name         = azurerm_sql_server.example.name
  collation           = "SQL_Latin1_General_CP1_CI_AS"
  max_size_gb         = 4
  read_scale          = true
  zone_redundant      = true
  create_mode         = "Default"
  requested_service_objective_name = "BC_Gen5_2"
  import {
    storage_uri                  = "https://examplesa.blob.core.windows.net/source/Source.bacpac"
    storage_key                  = "gSKjBfoK4toNAWXUdhe6U7YHqBgCBPsvoDKTlh2xlqUQeDcuCVKcU+uwhq61AkQaPIbNnqZbPmYwIRkXp3OzLQ=="
    storage_key_type             = "StorageAccessKey"
    administrator_login          = "4dm1n157r470r"
    administrator_login_password = "4-v3ry-53cr37-p455w0rd"
    authentication_type          = "SQL"
    operation_mode               = "Import"
  }
  extended_auditing_policy {
    storage_endpoint                        = azurerm_storage_account.example.primary_blob_endpoint
    storage_account_access_key              = azurerm_storage_account.example.primary_access_key
    storage_account_access_key_is_secondary = true
    retention_in_days                       = 6
  }
  tags = {
    foo = "bar"
  }
}
Note:
The extended_auditing_policy block has been moved to azurerm_mssql_server_extended_auditing_policy and azurerm_mssql_database_extended_auditing_policy. This block will be removed in version 3.0 of
the provider.
requested_service_objective_name - (Optional) The service objective name for the database. Valid values depend on edition and location and may include S0, S1, S2, S3, P1, P2, P4, P6, P11 and ElasticPool. You can list the available names with the CLI: az sql db list-editions -l westus -o table. For further information please see Azure CLI - az sql db.
And import supports the following:
storage_uri - (Required) Specifies the blob URI of the .bacpac file.
storage_key - (Required) Specifies the access key for the storage account.
storage_key_type - (Required) Specifies the type of access key for the storage account. Valid values are StorageAccessKey or SharedAccessKey.
administrator_login - (Required) Specifies the name of the SQL administrator.
administrator_login_password - (Required) Specifies the password of the SQL administrator.
authentication_type - (Required) Specifies the type of authentication used to access the server. Valid values are SQL or ADPassword.
operation_mode - (Optional) Specifies the type of import operation being performed. The only allowable value is Import.
Alternatively, if you want to continue using azurerm_mssql_database, you would need to deploy an empty database and then deploy the bacpac via SqlPackage (which I haven't tried yet).
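That alternative could be sketched roughly as below. This is untested and full of assumptions: SqlPackage must be installed on the machine running Terraform, the server and credentials are taken from the question's example, and the bacpac path is a placeholder.

```hcl
# Sketch only: create an empty database with the modern resource type...
resource "azurerm_mssql_database" "empty" {
  name      = "acctest-db-d"
  server_id = azurerm_mssql_server.example.id
  sku_name  = "BC_Gen5_2"
}

# ...then push the bacpac into it with SqlPackage via a local-exec
# provisioner (hypothetical; runs once at create time).
resource "null_resource" "import_bacpac" {
  depends_on = [azurerm_mssql_database.empty]

  provisioner "local-exec" {
    command = "sqlpackage /Action:Import /SourceFile:Source.bacpac /TargetServerName:example-sqlserver.database.windows.net /TargetDatabaseName:acctest-db-d /TargetUser:4dm1n157r470r /TargetPassword:4-v3ry-53cr37-p455w0rd"
  }
}
```

The firewall rule allowing Azure services is still needed here, and the SqlPackage import only succeeds against an empty database.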

Could not read output attribute from remote state datasource

I am new to Terraform, so I will explain to the best of my ability. Terraform will not read the output value from the state file and use it in another file.
I have searched the internet for everything I could find to see if anyone has had this problem and how they fixed it.
###vnet.tf
#Remote State pulling data from bastion resource group state
data "terraform_remote_state" "network" {
  backend = "azurerm"
  config = {
    storage_account_name = "terraformstatetracking"
    container_name       = "bastionresourcegroups"
    key                  = "terraform.terraformstate"
  }
}
#creating virtual network and putting that network in resource group created by bastion.tf file
module "quannetwork" {
  source              = "Azure/network/azurerm"
  resource_group_name = "data.terraform_remote_state.network.outputs.quan_netwk"
  location            = "centralus"
  vnet_name           = "quan"
  address_space       = "10.0.0.0/16"
  subnet_prefixes     = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  subnet_names        = ["subnet1", "subnet2", "subnet3"]
  tags = {
    environment = "quan"
    costcenter  = "it"
  }
}
terraform {
  backend "azurerm" {
    storage_account_name = "terraformstatetracking"
    container_name       = "quannetwork"
    key                  = "terraform.terraformstate"
  }
}
###resourcegroups.tf
# Create a resource group
#Bastion
resource "azurerm_resource_group" "cm" {
  name     = "${var.prefix}cm.RG"
  location = "${var.location}"
  tags     = "${var.tags}"
}
#Bastion1
resource "azurerm_resource_group" "network" {
  name     = "${var.prefix}network.RG"
  location = "${var.location}"
  tags     = "${var.tags}"
}
#bastion2
resource "azurerm_resource_group" "storage" {
  name     = "${var.prefix}storage.RG"
  location = "${var.location}"
  tags     = "${var.tags}"
}
terraform {
  backend "azurerm" {
    storage_account_name = "terraformstatetracking"
    container_name       = "bastionresourcegroups"
    key                  = "terraform.terraformstate"
  }
}
###outputs.tf
output "quan_netwk" {
  description = "Quan Network Resource Group"
  value       = "${azurerm_resource_group.network.id}"
}
When running the vnet.tf code, it should read the output from outputs.tf, which is stored in the state file in the Azure backend storage account, and use that value for resource_group_name in the quannetwork module. Instead it creates a resource group literally named data.terraform_remote_state.network.outputs.quan_netwk. Any help would be greatly appreciated.
First, note that resource_group_name in your quannetwork module expects a resource group name, but your quan_netwk output is the resource group Id.
Second, if you want to reference something from the remote state, do not just put its expression inside double quotes as a literal string; the right format is interpolation:
resource_group_name = "${data.terraform_remote_state.network.outputs.quan_netwk}"
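Since the output holds the full resource Id, a sketch of two ways to get the name (on Terraform 0.12+ the `${...}` wrapper is unnecessary; the basename() trick relies on an Azure resource group Id ending in .../resourceGroups/<name>):

```hcl
# Option 1: output the name instead of the Id in outputs.tf
output "quan_netwk_name" {
  value = azurerm_resource_group.network.name
}

# Option 2: derive the name from the Id where it is consumed
module "quannetwork" {
  source              = "Azure/network/azurerm"
  resource_group_name = basename(data.terraform_remote_state.network.outputs.quan_netwk)
  # ... remaining arguments as before ...
}
```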

How do we configure plan for SQL database for Azure using Terraform?

I am using Terraform to configure the infrastructure on Azure. I am creating a SQL database using a Terraform template and I am able to create the database, but by default the Standard plan is set. Is it possible to configure the Basic plan using the template?
This is the template that I am using:
resource "azurerm_resource_group" "test" {
  name     = "Test-ResourceGroup"
  location = "Central India"
}
resource "azurerm_sql_server" "test" {
  name                         = "name-test-dev"
  resource_group_name          = "${azurerm_resource_group.test.name}"
  location                     = "Central India"
  version                      = "12.0"
  administrator_login          = "test-admin"
  administrator_login_password = "test-password"
}
resource "azurerm_sql_database" "test" {
  name                = "test-dev"
  resource_group_name = "${azurerm_resource_group.test.name}"
  location            = "Central India"
  server_name         = "${azurerm_sql_server.test.name}"
}
Set the edition argument on the database to Basic:
resource "azurerm_sql_database" "test" {
  name                = "test-dev"
  edition             = "Basic"
  resource_group_name = "${azurerm_resource_group.test.name}"
  location            = "Central India"
  server_name         = "${azurerm_sql_server.test.name}"
}
Hope this helps.
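For what it's worth, on current azurerm provider versions (3.x) azurerm_sql_database is superseded by azurerm_mssql_database, where the tier is chosen with sku_name rather than edition. A sketch, assuming the server is also migrated to the azurerm_mssql_server resource:

```hcl
# Modern equivalent (azurerm 3.x): pick the Basic tier via sku_name.
resource "azurerm_mssql_database" "test" {
  name      = "test-dev"
  server_id = azurerm_mssql_server.test.id
  sku_name  = "Basic"
}
```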
Create an Azure SQL Server Database with a SQL script initialization
https://registry.terraform.io/modules/metadevpro/sqlserver-seed/azurerm/1.0.0
module "azurerm_sql_server_init" {
  source  = "metadevpro/sqlserver-seed/azurerm"
  version = "1.0.0"

  location           = "westeurope"
  resource_group     = "myresourcegroup007"
  db_server_fqdn     = "${azurerm_sql_server.db1.fully_qualified_domain_name}"
  sql_admin_username = "${azurerm_sql_server.db1.administrator_login}"
  sql_admin_password = "${azurerm_sql_server.db1.administrator_login_password}"
  db_name            = "mydatabase"
  init_script_file   = "mydatabase.init.sql"
  log_file           = "mydatabase.init.log"
  tags = {
    environment = "qa"
    project     = "acme"
    provisioner = "terraform"
  }
}
