Argument "recovery_vault_name" not found while creating the recovery service vault using Terraform - terraform

I am using a Terraform script to create a Recovery Services Vault and need to back up a VM into that vault. Below are the Terraform script and the steps I performed:
Create Recovery Service Vault
Define backup policy
Azure VM protection backup
Terraform version: >= 0.14
AzureRM provider version: ~> 2.0
Main.tf
resource "azurerm_recovery_services_vault" "recovery_vault" {
name = var.recovery_vault_name
location = var.location
resource_group_name = var.resource_group_name
}
resource "azurerm_backup_policy_vm" "ss_vm_backup_policy" {
name = "tfex-recovery-vault-policy"
resource_group_name = var.resource_group_name
recovery_vault_name = azurerm_recovery_services_vault.recovery_vault.name
policy_type = "V1"
backup {
frequency = "Monthly"
time = "23:00"
weekdays = ['Sunday']
}
instant_restore_retention_days = 5
retention_weekly {
count= 12
weekdays = ['Sunday']
}
}
resource "azurerm_backup_protected_vm" "ss_vm_protection_backup" {
resource_group_name = var.resource_group_name
recovery_vault_name = azurerm_recovery_services_vault.recovery_vault.name
source_vm_id = azurerm_windows_virtual_machine.windows_vm[count.index].id
backup_policy_id = azurerm_backup_policy_vm.ss_vm_backup_policy.id
}
Variable.tf
variable "resource_group_name" {
  default = "ss"
}

variable "recovery_vault_name" {
  default = "yy"
}
I referenced the above Main.tf as a module in my application-specific main.tf file, as below:
module "azure_windows_vm" {
source = "git::https://xx.yy.com/ABCD/_git/NBGF/TGFT?ref=master"
vm_name = local.vm_name
location = local.location
resource_group_name = local.resource_group_name
admin_password = "xyz"
num_vms = 2
vnet_name = var.vm_subnet_id
tags=local.tags
}
When I execute the above Terraform script in the DevOps pipeline, I get the error below, pointing at the line where module "azure_windows_vm" starts:
Error: Missing required argument

The argument "recovery_vault_name" is required, but no definition was found.
I have tried different things to fix this error, but nothing has worked. Can someone please point out what I am missing?
Thank You!
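What the error means: Terraform could not find any value for the module's recovery_vault_name input, which suggests the module version fetched from ref=master declares that variable without the default shown above. A minimal sketch of the usual fix, assuming the module otherwise matches the code shown (the "yy" value is just the example default, not a verified vault name), is to pass the value explicitly in the module call:

module "azure_windows_vm" {
  source              = "git::https://xx.yy.com/ABCD/_git/NBGF/TGFT?ref=master"
  vm_name             = local.vm_name
  location            = local.location
  resource_group_name = local.resource_group_name
  recovery_vault_name = "yy" # satisfies the module's required input
  admin_password      = "xyz"
  num_vms             = 2
  vnet_name           = var.vm_subnet_id
  tags                = local.tags
}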

Related

Terraform - ADF to DB connectivity issue when tenant_id is provided in LS configuration - azurerm_data_factory_linked_service_azure_sql_database

Terraform Version
1.2.3
AzureRM Provider Version
v3.13.0
Affected Resource(s)/Data Source(s)
Azure Data Factory, SQL Database
Terraform Configuration Files
resource "azurerm_data_factory_linked_service_azure_sql_database" "sqldatabase_linked_service_10102022" {
count = (var.subResourcesInfo.sqlDatabaseName != "") ? 1 : 0
depends_on = [azurerm_data_factory_integration_runtime_azure.autoresolve_integration_runtime,
azurerm_data_factory_managed_private_endpoint.sqlserver_managed_endpoint]
name = "AzureSqlDatabase10102022"
data_factory_id = azurerm_data_factory.datafactory.id
integration_runtime_name = "AutoResolveIntegrationRuntime"
use_managed_identity = true
connection_string = format("Integrated Security=False;Data Source=%s.database.windows.net;Initial Catalog=%s;",
var.subResourcesInfo.sqlServerName,
var.subResourcesInfo.sqlDatabaseName)
}
Expected Behaviour
The issue is ADF-to-DB connectivity; the error is:
Operation on target DWH_DF_aaa failed: {'StatusCode':'DFExecutorUserError','Message':'Job failed due to reason: com.microsoft.dataflow.broker.InvalidOperationException: Only one valid authentication should be used for AzureSqlDatabase. ServicePrincipalAuthentication is invalid. One or two of servicePrincipalId/key/tenant is missing.','Details':''}
When we create this linked service using Terraform, the ADF linked-service JSON contains tenant="", which we suspect is causing the error above.
When we create the same linked service directly in the ADF UI, its JSON has no tenant="" field, and if we use that linked service in a dataflow/pipeline, communication from ADF to the DB works.
The expected behavior: if we don't provide the tenant_id parameter in the Terraform code, the JSON should not contain tenant="" either, which would then allow connectivity.
I tried to reproduce the scenario in my environment. With the code below, I could create a linked service (connection) between Azure SQL Database and Azure Data Factory.
Code:
resource "azurerm_data_factory" "example" {
name = "kaADFexample"
location = data.azurerm_resource_group.example.location
resource_group_name = data.azurerm_resource_group.example.name
managed_virtual_network_enabled = true
}
resource "azurerm_storage_account" "example" {
name = "kaaaexample"
resource_group_name = data.azurerm_resource_group.example.name
location = data.azurerm_resource_group.example.location
account_kind = "BlobStorage"
account_tier = "Standard"
account_replication_type = "LRS"
}
resource "azurerm_data_factory_managed_private_endpoint" "example" {
name = "example"
data_factory_id = azurerm_data_factory.example.id
target_resource_id = azurerm_storage_account.example.id
subresource_name = "blob"
}
resource "azurerm_user_assigned_identity" "main" {
depends_on = [data.azurerm_resource_group.example]
name = "kasupports01-mid"
resource_group_name = data.azurerm_resource_group.example.name
location = data.azurerm_resource_group.example.location
}
resource "azurerm_data_factory_integration_runtime_azure" "test" {
name = "AutoResolveIntegrationRuntime"
data_factory_id = azurerm_data_factory.example.id
location = "AutoResolve"
virtual_network_enabled = true
}
resource "azurerm_data_factory_linked_service_azure_sql_database" "linked_service_azure_sql_database" {
name = "kaexampleLS"
data_factory_id = azurerm_data_factory.example.id
connection_string = "data source=serverhostname;initial catalog=master;user id=testUser;Password=test;integrated security=False;encrypt=True;connection timeout=30"
use_managed_identity = true
integration_runtime_name = azurerm_data_factory_integration_runtime_azure.test.name
depends_on = [azurerm_data_factory_integration_runtime_azure.test,
azurerm_data_factory_managed_private_endpoint.example]
}
output "id" {
value = azurerm_data_factory_linked_service_azure_sql_database.linked_service_azure_sql_database.id
}
Executed terraform plan and then terraform apply. Output:
id = "/subscriptions/xxxxxxxxx/resourceGroups/xxxxxx/providers/Microsoft.DataFactory/factories/kaADFexample/linkedservices/kaexampleLS"
If the error persists in your case, try removing the tenant attribute from the linked service in the Data Factory just after the Terraform deployment is done.
Please check this known issue, mentioned by @chgenzel in the terraform-provider-azurerm issues on GitHub.
(Screenshots: ADF managed identity configuration and the Azure SQL linked service.)
Reference: azurerm_data_factory_linked_service_azure_sql_database | Terraform Registry

Is it possible to create an Azure Database for PostgreSQL Flexible Server restore server (backup and restore) using Terraform?

Azure Database for PostgreSQL Flexible Server automatically backs up the databases. In case of accidental deletion of any database, we can restore it by creating a new flexible server from the backup as the recovery process. I know how to do this from the Azure portal. Can Terraform also configure "backup and restore" for a PostgreSQL Flexible Server restore server?
This is exactly the manual task documented in the Azure doc: https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/how-to-restore-server-portal. I just want to do that task using Terraform, and in addition ensure the appropriate login and database-level permissions.
I really appreciate any support and help.
It is possible to configure backup for an Azure Database for PostgreSQL Flexible Server using Terraform.
Please use the Terraform code below to provision the server with backup enabled; a point-in-time restore sketch follows at the end of this answer.
provider "azurerm" {
features {}
}
resource "azurerm_resource_group" "example" {
name = "RG_NAME"
location = "EASTUS"
}
resource "azurerm_virtual_network" "example" {
name = "example-vn"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
address_space = ["10.0.0.0/16"]
}
resource "azurerm_subnet" "example" {
name = "example-sn"
resource_group_name = azurerm_resource_group.example.name
virtual_network_name = azurerm_virtual_network.example.name
address_prefixes = ["10.0.2.0/24"]
service_endpoints = ["Microsoft.Storage"]
delegation {
name = "fs"
service_delegation {
name = "Microsoft.DBforPostgreSQL/flexibleServers"
actions = [
"Microsoft.Network/virtualNetworks/subnets/join/action",
]
}
}
}
resource "azurerm_private_dns_zone" "example" {
name = "example.postgres.database.azure.com"
resource_group_name = azurerm_resource_group.example.name
}
resource "azurerm_private_dns_zone_virtual_network_link" "example" {
name = "exampleVnetZone.com"
private_dns_zone_name = azurerm_private_dns_zone.example.name
virtual_network_id = azurerm_virtual_network.example.id
resource_group_name = azurerm_resource_group.example.name
}
resource "azurerm_postgresql_flexible_server" "example" {
name = "example-psqlflexibleserver"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
version = "12"
delegated_subnet_id = azurerm_subnet.example.id
private_dns_zone_id = azurerm_private_dns_zone.example.id
administrator_login = "psqladmin"
administrator_password = "H#Sh1CoR3!"
zone = "1"
storage_mb = 32768
backup_retention_days = 30
geo_redundant_backup_enabled = true
sku_name = "GP_Standard_D4s_v3"
depends_on = [azurerm_private_dns_zone_virtual_network_link.example]
}
Here I have specified the resource group name, subnet, VNet, server name, password, and backup retention.
I have set backup_retention_days to 30; the value must be between 1 and 35 days, and the default is 7 days.
Before running the script, verify the appropriate login and server details.
Then follow the steps below to execute the file:
terraform init
This initializes the working directory and downloads the required providers.
terraform plan
This creates an execution plan, previewing the changes Terraform will make to the infrastructure.
terraform apply
This creates or updates the infrastructure, depending on the configuration.
Previously geo_redundant_backup_enabled defaulted to false; I have set it to true, and the backup retention is 30 days.
For reference, you can use this documentation.
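As for the portal's "Restore server" flow itself: a minimal sketch, assuming the azurerm provider's create_mode, source_server_id, and point_in_time_restore_time_in_utc arguments on azurerm_postgresql_flexible_server (check the provider docs for your version; the timestamp below is a placeholder and must fall within the retention window):

resource "azurerm_postgresql_flexible_server" "restored" {
  name                = "example-psqlflexibleserver-restored"
  resource_group_name = azurerm_resource_group.example.name
  location            = azurerm_resource_group.example.location

  # Point-in-time restore from the server above; most other settings
  # (compute, storage, admin login) are inherited from the source server.
  create_mode                       = "PointInTimeRestore"
  source_server_id                  = azurerm_postgresql_flexible_server.example.id
  point_in_time_restore_time_in_utc = "2022-11-01T09:00:00Z" # placeholder

  delegated_subnet_id = azurerm_subnet.example.id
  private_dns_zone_id = azurerm_private_dns_zone.example.id

  depends_on = [azurerm_private_dns_zone_virtual_network_link.example]
}

Database-level logins and permissions travel with the restored data, so they only need to be re-checked, not re-created.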

Update existing Azure App Service in Terraform

I would like to update my existing Azure App Service in Terraform by adding a backup to it.
For now it looks like this:
data "azurerm_app_service_plan" "example" {
name = "MyUniqueServicePlan"
resource_group_name = "example-resources"
}
resource "azurerm_app_service" "example" {
name = "MyUniqueWebAppName"
location = "West Europe"
resource_group_name = "example-resources"
app_service_plan_id = data.azurerm_app_service_plan.example.id
connection_string {
name = "myConectionString"
type = "SQLServer"
value = "Server=tcp:mysqlservername123.database.windows.net,1433;Initial Catalog=MyDatabaseName;Persist Security Info=False;User ID=xxx;Password=xxxxxx;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;"
}
backup {
name = "MyBackupName"
enabled = true
storage_account_url = "https://storageaccountnameqwetih.blob.core.windows.net/mycontainer?sp=r&st=2022-08-31T09:49:17Z&se=2022-08-31T17:49:17Z&spr=https&sv=2021-06-08&sr=c&sig=2JwQ%xx%2B%2xxB5xxxxFZxxVyAadjxxV8%3D"
schedule {
frequency_interval = 30
frequency_unit = "Day"
keep_at_least_one_backup = true
retention_period_in_days = 10
start_time = "2022-08-31T07:11:56.52Z"
}
}
}
But when I run it, I get the error: A resource with the ID ........ /MyUniqueWebAppName" already exists - to be managed via Terraform this resource needs to be imported into the State.
How can I point Terraform at an existing Azure App Service and add a backup with the same schedule as in my template?
Before you can modify your existing resources with Terraform, you must import them into the Terraform state. For this you use the terraform import command. The App Service itself stays a resource block; the existing resource group and App Service plan can be referenced through data sources:
data "azurerm_resource_group" "example" {
name = "<give rg name existing one>"
}
data "azurerm_app_service_plan" "example" {
name = "MyUniqueServicePlan"
resource_group_name = data.azurerm_resource_group.example.name
}
data "azurerm_app_service" "example" {
name = "MyUniqueWebAppName"
location = data.azurerm_resource_group.example.location
resource_group_name = data.azurerm_resource_group.example.name
app_service_plan_id = data.azurerm_app_service_plan.example.id
connection_string {
name = "myConectionString"
type = "SQLServer"
value = "Server=tcp:mysqlservername123.database.windows.net,1433;Initial Catalog=MyDatabaseName;Persist Security Info=False;User ID=xxx;Password=xxxxxx;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;"
}
backup {
name = "MyBackupName"
enabled = true
storage_account_url = "https://storageaccountnameqwetih.blob.core.windows.net/mycontainer?sp=r&st=2022-08-31T09:49:17Z&se=2022-08-31T17:49:17Z&spr=https&sv=2021-06-08&sr=c&sig=2JwQ%xx%2B%2xxB5xxxxFZxxVyAadjxxV8%3D"
schedule {
frequency_interval = 30
frequency_unit = "Day"
keep_at_least_one_backup = true
retention_period_in_days = 10
start_time = "2022-08-31T07:11:56.52Z"
}
}
}
A data source cannot modify the App Service, so the backup block has to live on the resource block; just give the existing resource group name in the resource group data block and run the import once before planning.
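A hedged example of the one-time import, with placeholder IDs (the full resource ID is the one shown in your error message):

terraform import azurerm_app_service.example /subscriptions/<subscription-id>/resourceGroups/<existing-rg>/providers/Microsoft.Web/sites/MyUniqueWebAppName

After the import, terraform plan should show only the backup block being added.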

Terraform: Error: Invalid index operation

I am using Terraform 0.14 to automate the creation of some Azure resources.
I am trying to assign a pull role to an Azure Kubernetes cluster so it can pull images from an Azure Container Registry, using a system-assigned managed identity.
Here is my code:
Azure Kubernetes cluster (main.tf file)
resource "azurerm_kubernetes_cluster" "akc" {
name = var.cluster_name
location = var.location
resource_group_name = var.resource_group_name
dns_prefix = var.dns_prefix
kubernetes_version = var.kubernetes_version
api_server_authorized_ip_ranges = var.api_server_authorized_ip_ranges
identity {
type = "SystemAssigned"
}
}
Azure Kubernetes cluster (outputs.tf file)
output "principal_id" {
value = azurerm_kubernetes_cluster.akc.identity[0]["principal_id"]
}
Azure role assignment (main.tf file)
# Create a role assignment
resource "azurerm_role_assignment" "ara" {
  scope                = var.scope
  role_definition_name = var.role_definition_name
  principal_id         = var.principal_id
}
Test environment (main.tf file)
# Create azure kubernetes cluster
module "azure_kubernetes_cluster" {
  source              = "../modules/azure-kubernetes-cluster"
  cluster_name        = var.cluster_name
  location            = var.location
  dns_prefix          = var.dns_prefix
  resource_group_name = var.resource_group_name
  kubernetes_version  = var.kubernetes_version
  node_count          = var.node_count
  min_count           = var.min_count
  max_count           = var.max_count
  os_disk_size_gb     = "100"
  max_pods            = "110"
  vm_size             = var.vm_size
  aad_group_name      = var.aad_group_name
  vnet_subnet_id      = var.vnet_subnet_id
}

# Create azure container registry
module "azure_container_registry" {
  source                  = "../modules/azure-container-registry"
  container_registry_name = var.container_registry_name
  resource_group_name     = var.resource_group_name
  location                = var.location
  sku                     = var.sku
  admin_enabled           = var.admin_enabled
}

# Create azure role assignment
module "azure_role_assignment" {
  source               = "../modules/azure-role-assignment"
  scope                = module.azure_container_registry.acr_id
  role_definition_name = var.role_definition_name
  principal_id         = module.azure_kubernetes_cluster.principal_id
}
However, when I run the terraform plan command, I get the error below:
Error: Invalid index operation
on ../modules/aks-cluster/outputs.tf line 14, in output "principal_id":
14: value = azurerm_kubernetes_cluster.cluster.identity[0]["principal_id"]
Only attribute access is allowed here. Did you mean to access attribute
"principal_id" using the dot operator?
Trying to figure out the solution to this.
I later figured out the solution to the error. Terraform 0.12 and later changed how index operations on blocks like identity can be written. So rather than this:
Azure Kubernetes cluster (outputs.tf file)
output "principal_id" {
value = azurerm_kubernetes_cluster.akc.identity[0]["principal_id"]
}
It will be this:
Azure Kubernetes cluster (outputs.tf file)
output "principal_id" {
value = azurerm_kubernetes_cluster.akc.identity.*.principal_id
}
And also instead of this:
Test environment (main.tf file)
# Create azure role assignment
module "azure_role_assignment" {
  source               = "../modules/azure-role-assignment"
  scope                = module.azure_container_registry.acr_id
  role_definition_name = var.role_definition_name
  principal_id         = module.azure_kubernetes_cluster.principal_id
}
It will be this:
Test environment (main.tf file)
# Create azure role assignment
module "azure_role_assignment" {
  source               = "../modules/azure-role-assignment"
  scope                = module.azure_container_registry.acr_id
  role_definition_name = var.role_definition_name
  principal_id         = module.azure_kubernetes_cluster.principal_id[0]
}
Resources: Invalid index when referencing output from module in 0.12
That's all
I hope this helps
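A side note on the same fix: the error message itself suggests plain attribute access with the dot operator. A sketch of that alternative (same resources as above); it keeps the output a single string, so the module reference needs no [0] index:

output "principal_id" {
  # Index the identity block once, then access the attribute with the dot operator
  value = azurerm_kubernetes_cluster.akc.identity[0].principal_id
}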

Create main.tf resources only when a variable is set to true in the vars.tf file

I usually have one generic main.tf file that is the basis for all deployments to our environments (DEV/STAGING/LIVE), and one parameter .tf file for each of those environments.
There is always a requirement to have some of the more expensive Azure options enabled in the STAGING and LIVE environments beyond what DEV might have - in my example it's enabling Azure Defender for SQL and the extended auditing functions for Azure SQL servers (PaaS).
This is a portion of my main.tf file that is generic...
# Define SQL Server
resource "azurerm_mssql_server" "example" {
  name                          = var.azsqlserver1name
  resource_group_name           = azurerm_resource_group.example.name
  location                      = azurerm_resource_group.example.location
  version                       = var.azsqlserver1version
  administrator_login           = var.azsqlserver1sauser
  administrator_login_password  = random_password.sql-password.result
  public_network_access_enabled = true # set to false with vNet integration
}

# Define Storage Account and container for SQL Threat Detection Policy Audit Logs
resource "azurerm_storage_account" "example" {
  name                      = var.azsaname1
  resource_group_name       = azurerm_resource_group.example.name
  location                  = azurerm_resource_group.example.location
  account_tier              = var.azsatier1
  account_replication_type  = var.azsasku1
  access_tier               = var.azsaaccesstier1
  account_kind              = var.azsakind1
  enable_https_traffic_only = true
}

resource "azurerm_storage_container" "example" {
  name                  = "vascans"
  storage_account_name  = azurerm_storage_account.example.name
  container_access_type = "private"
}

# Defines Azure SQL Defender and Auditing - NOTE: Auditing - only SA out at the moment (11/2020) - Log Analytics and Event Hub in preview only
resource "azurerm_mssql_server_security_alert_policy" "example" {
  resource_group_name        = azurerm_resource_group.example.name
  server_name                = azurerm_mssql_server.example.name
  state                      = var.azsqltreatdetectionstate
  storage_endpoint           = azurerm_storage_account.example.primary_blob_endpoint
  storage_account_access_key = azurerm_storage_account.example.primary_access_key
  email_account_admins       = var.azsqltreatdetectionemailadmins
  retention_days             = var.azsqltreatdetectionretention
}

resource "azurerm_mssql_server_vulnerability_assessment" "example" {
  server_security_alert_policy_id = azurerm_mssql_server_security_alert_policy.example.id
  storage_container_path          = "${azurerm_storage_account.example.primary_blob_endpoint}${azurerm_storage_container.example.name}/"
  storage_account_access_key      = azurerm_storage_account.example.primary_access_key

  recurring_scans {
    enabled                   = var.azsqlvscansrecurring
    email_subscription_admins = var.azsqlvscansemailadmins
  }
}

resource "azurerm_mssql_server_extended_auditing_policy" "example" {
  server_id                               = azurerm_mssql_server.example.id
  storage_endpoint                        = azurerm_storage_account.example.primary_blob_endpoint
  storage_account_access_key              = azurerm_storage_account.example.primary_access_key
  storage_account_access_key_is_secondary = false
  retention_in_days                       = var.azsqlauditretentiondays
}
What I need is for everything after the first azurerm_mssql_server resource to be created only in STAGING and LIVE (not DEV). I was planning to have a variable in the DEV/STAGING/LIVE parm .tf files that states something like...
DEVparm.tf
variable "azsqlenableazuredefenderforsql" {
  default = false
}
STAGINGparm.tf and LIVEparm.tf
variable "azsqlenableazuredefenderforsql" {
  default = true
}
Is this possible to achieve? So far I've drawn a blank and tested a few things, but they don't quite work. It seems a simple enough vision, but there is no IF... statement for resources.
If you need to flip a resource on and off that is easy to achieve with count = 1 or 0. This is usually handled with the ternary operator.
resource "some_resource" "example" {
count = terraform.workspace != "development" ? 1 : 0
}
The count parameter was added to modules in Terraform 0.13. If you have a bundle of resources, it can be an alternative way to exclude certain resources from building; see the sketch below.
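A minimal sketch of that module-level switch, assuming the Defender and auditing resources were moved into a hypothetical local module ./modules/sql-defender that takes the server id as an input:

module "sql_defender" {
  # Hypothetical module wrapping the alert policy, vulnerability assessment,
  # and extended auditing resources shown above
  source = "./modules/sql-defender"
  count  = var.azsqlenableazuredefenderforsql ? 1 : 0

  server_id = azurerm_mssql_server.example.id
}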
One way that a lot of people solve this is by combining the count parameter on resources with a ternary. For example, look at the section entitled "If-Statements with the count parameter" in https://blog.gruntwork.io/terraform-tips-tricks-loops-if-statements-and-gotchas-f739bbae55f9#478c.
Basically you can keep your azsqlenableazuredefenderforsql variable and then in your resources do something like:
resource "azurerm_storage_container" "example" {
count = var.azsqlenableazuredefenderforsql ? 1 : 0
name = "vascans"
storage_account_name = azurerm_storage_account.example.name
container_access_type = "private"
}
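One follow-on consequence worth noting (my note, not from the answers above): once a resource carries count, everything that references it must use an index. So the cross-references in the generic main.tf need a [0] wherever the guarded resources point at each other, for example:

resource "azurerm_mssql_server_vulnerability_assessment" "example" {
  count = var.azsqlenableazuredefenderforsql ? 1 : 0

  # [0] indexes are required because the alert policy and container now also carry a count
  server_security_alert_policy_id = azurerm_mssql_server_security_alert_policy.example[0].id
  storage_container_path          = "${azurerm_storage_account.example.primary_blob_endpoint}${azurerm_storage_container.example[0].name}/"
  storage_account_access_key      = azurerm_storage_account.example.primary_access_key

  recurring_scans {
    enabled                   = var.azsqlvscansrecurring
    email_subscription_admins = var.azsqlvscansemailadmins
  }
}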
