Terraform: Error: Invalid index operation - azure

I am using Terraform 0.14 to automate the creation of some Azure resources.
I am trying to assign a pull role to an Azure Kubernetes Service cluster so that it can pull images from an Azure Container Registry, using a system-assigned managed identity.
Here is my code:
Azure Kubernetes cluster (main.tf file)
resource "azurerm_kubernetes_cluster" "akc" {
name = var.cluster_name
location = var.location
resource_group_name = var.resource_group_name
dns_prefix = var.dns_prefix
kubernetes_version = var.kubernetes_version
api_server_authorized_ip_ranges = var.api_server_authorized_ip_ranges
identity {
type = "SystemAssigned"
}
}
Azure Kubernetes cluster (outputs.tf file)
output "principal_id" {
value = azurerm_kubernetes_cluster.akc.identity[0]["principal_id"]
}
Azure role assignment (main.tf file)
# Create a role assignment
resource "azurerm_role_assignment" "ara" {
scope = var.scope
role_definition_name = var.role_definition_name
principal_id = var.principal_id
}
Test environment (main.tf file)
# Create azure kubernetes cluster
module "azure_kubernetes_cluster" {
source = "../modules/azure-kubernetes-cluster"
cluster_name = var.cluster_name
location = var.location
dns_prefix = var.dns_prefix
resource_group_name = var.resource_group_name
kubernetes_version = var.kubernetes_version
node_count = var.node_count
min_count = var.min_count
max_count = var.max_count
os_disk_size_gb = "100"
max_pods = "110"
vm_size = var.vm_size
aad_group_name = var.aad_group_name
vnet_subnet_id = var.vnet_subnet_id
}
# Create azure container registry
module "azure_container_registry" {
source = "../modules/azure-container-registry"
container_registry_name = var.container_registry_name
resource_group_name = var.resource_group_name
location = var.location
sku = var.sku
admin_enabled = var.admin_enabled
}
# Create azure role assignment
module "azure_role_assignment" {
source = "../modules/azure-role-assignment"
scope = module.azure_container_registry.acr_id
role_definition_name = var.role_definition_name
principal_id = module.azure_kubernetes_cluster.principal_id
}
However, when I run the terraform plan command, I get the error below:
Error: Invalid index operation
on ../modules/aks-cluster/outputs.tf line 14, in output "principal_id":
14: value = azurerm_kubernetes_cluster.cluster.identity[0]["principal_id"]
Only attribute access is allowed here. Did you mean to access attribute
"principal_id" using the dot operator?
I am trying to figure out the solution to this.

I later figured out the solution to the error. Some modifications were made in Terraform 0.12 and later versions to how index operations are handled. So rather than this:
Azure Kubernetes cluster (outputs.tf file)
output "principal_id" {
value = azurerm_kubernetes_cluster.akc.identity[0]["principal_id"]
}
It will be this:
Azure Kubernetes cluster (outputs.tf file)
output "principal_id" {
value = azurerm_kubernetes_cluster.akc.identity.*.principal_id
}
And also instead of this:
Test environment (main.tf file)
# Create azure role assignment
module "azure_role_assignment" {
source = "../modules/azure-role-assignment"
scope = module.azure_container_registry.acr_id
role_definition_name = var.role_definition_name
principal_id = module.azure_kubernetes_cluster.principal_id
}
It will be this:
Test environment (main.tf file)
# Create azure role assignment
module "azure_role_assignment" {
source = "../modules/azure-role-assignment"
scope = module.azure_container_registry.acr_id
role_definition_name = var.role_definition_name
principal_id = module.azure_kubernetes_cluster.principal_id[0]
}
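Alternatively, as the error message itself suggests, you can keep the output a single value by using the dot operator on the first identity block; this should work in Terraform 0.12+ and removes the need for the [0] index on the module reference. A minimal sketch of that variant:
output "principal_id" {
  value = azurerm_kubernetes_cluster.akc.identity[0].principal_id
}
With this form, the role assignment module can use principal_id = module.azure_kubernetes_cluster.principal_id directly.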
Resources: Invalid index when referencing output from module in 0.12
That's all
I hope this helps

Related

Argument "recovery_vault_name" not found while creating the recovery service vault using Terraform

I am using a Terraform script to create a Recovery Services Vault and need to take the VM backup into that Recovery Services Vault. Below are the Terraform script and the steps I performed:
Create Recovery Service Vault
Define backup policy
Azure VM protection backup
Terraform version: >= 0.14
AzureRM provider version: ~> 2.0
Main.tf
resource "azurerm_recovery_services_vault" "recovery_vault" {
name = var.recovery_vault_name
location = var.location
resource_group_name = var.resource_group_name
}
resource "azurerm_backup_policy_vm" "ss_vm_backup_policy" {
name = "tfex-recovery-vault-policy"
resource_group_name = var.resource_group_name
recovery_vault_name = azurerm_recovery_services_vault.recovery_vault.name
policy_type = "V1"
backup {
frequency = "Monthly"
time = "23:00"
weekdays = ["Sunday"]
}
instant_restore_retention_days = 5
retention_weekly {
count = 12
weekdays = ["Sunday"]
}
}
resource "azurerm_backup_protected_vm" "ss_vm_protection_backup" {
resource_group_name = var.resource_group_name
recovery_vault_name = azurerm_recovery_services_vault.recovery_vault.name
source_vm_id = azurerm_windows_virtual_machine.windows_vm[count.index].id
backup_policy_id = azurerm_backup_policy_vm.ss_vm_backup_policy.id
}
Variable.tf
variable "resource_group_name" {
default = "ss"
}
variable "recovery_vault_name" {
default = "yy"
}
I referenced the above Main.tf as a module in my application-specific main.tf file, as shown below:
module "azure_windows_vm" {
source = "git::https://xx.yy.com/ABCD/_git/NBGF/TGFT?ref=master"
vm_name = local.vm_name
location = local.location
resource_group_name = local.resource_group_name
admin_password = "xyz"
num_vms = 2
vnet_name = var.vm_subnet_id
tags=local.tags
}
When I execute the above Terraform script in the DevOps pipeline, I get the error below, pointing at the line where module "azure_windows_vm" starts:
Error: Missing required argument The argument "recovery_vault_name" is
required, but no definition was found
I have tried different things to fix this error, but nothing has worked so far. Can someone please point out what I am missing here?
Thank You!
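A minimal sketch of the likely fix, assuming the module in the referenced repository declares recovery_vault_name without a default (so the value must come from the caller): pass it explicitly in the module block. The argument names below mirror the question's code and its Variable.tf; adjust them to whatever the module actually declares.
module "azure_windows_vm" {
  source              = "git::https://xx.yy.com/ABCD/_git/NBGF/TGFT?ref=master"
  vm_name             = local.vm_name
  location            = local.location
  resource_group_name = local.resource_group_name
  recovery_vault_name = "yy" # hypothetical value; use the vault name the module expects
  admin_password      = "xyz"
  num_vms             = 2
  vnet_name           = var.vm_subnet_id
  tags                = local.tags
}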

Terraform - ADF to DB connectivity issue when tenant_id is provided in LS configuration - azurerm_data_factory_linked_service_azure_sql_database

Terraform Version
1.2.3
AzureRM Provider Version
v3.13.0
Affected Resource(s)/Data Source(s)
Azure data factory, SQL Database
Terraform Configuration Files
resource "azurerm_data_factory_linked_service_azure_sql_database" "sqldatabase_linked_service_10102022" {
count = (var.subResourcesInfo.sqlDatabaseName != "") ? 1 : 0
depends_on = [azurerm_data_factory_integration_runtime_azure.autoresolve_integration_runtime,
azurerm_data_factory_managed_private_endpoint.sqlserver_managed_endpoint]
name = "AzureSqlDatabase10102022"
data_factory_id = azurerm_data_factory.datafactory.id
integration_runtime_name = "AutoResolveIntegrationRuntime"
use_managed_identity = true
connection_string = format("Integrated Security=False;Data Source=%s.database.windows.net;Initial Catalog=%s;",
var.subResourcesInfo.sqlServerName,
var.subResourcesInfo.sqlDatabaseName)
}
Expected Behaviour
The issue is ADF-to-DB connectivity; the error is:
Operation on target DWH_DF_aaa failed: {'StatusCode':'DFExecutorUserError','Message':'Job failed due to reason: com.microsoft.dataflow.broker.InvalidOperationException: Only one valid authentication should be used for AzureSqlDatabase. ServicePrincipalAuthentication is invalid. One or two of servicePrincipalId/key/tenant is missing.','Details':''}
When we create this linked service using Terraform, the ADF linked service JSON contains tenant="", which we suspect is causing the error above.
When we create the same linked service directly in the ADF UI, there is no tenant="" field in its JSON file, and if we use that linked service in a data flow/pipeline, the ADF-to-DB communication works.
The expected behavior is: if we do not provide the tenant_id parameter in the Terraform code, the JSON should not contain tenant="" either, and connectivity should then work.
I tried to reproduce the scenario in my environment:
With the code below, I could create a linked service (connection) between Azure SQL Database and Azure Data Factory.
Code:
resource "azurerm_data_factory" "example" {
name = "kaADFexample"
location = data.azurerm_resource_group.example.location
resource_group_name = data.azurerm_resource_group.example.name
managed_virtual_network_enabled = true
}
resource "azurerm_storage_account" "example" {
name = "kaaaexample"
resource_group_name = data.azurerm_resource_group.example.name
location = data.azurerm_resource_group.example.location
account_kind = "BlobStorage"
account_tier = "Standard"
account_replication_type = "LRS"
}
resource "azurerm_data_factory_managed_private_endpoint" "example" {
name = "example"
data_factory_id = azurerm_data_factory.example.id
target_resource_id = azurerm_storage_account.example.id
subresource_name = "blob"
}
resource "azurerm_user_assigned_identity" "main" {
depends_on = [data.azurerm_resource_group.example]
name = "kasupports01-mid"
resource_group_name = data.azurerm_resource_group.example.name
location = data.azurerm_resource_group.example.location
}
resource "azurerm_data_factory_integration_runtime_azure" "test" {
name = "AutoResolveIntegrationRuntime"
data_factory_id = azurerm_data_factory.example.id
location = "AutoResolve"
virtual_network_enabled = true
}
resource "azurerm_data_factory_linked_service_azure_sql_database" "linked_service_azure_sql_database" {
name = "kaexampleLS"
data_factory_id = azurerm_data_factory.example.id
connection_string = "data source=serverhostname;initial catalog=master;user id=testUser;Password=test;integrated security=False;encrypt=True;connection timeout=30"
use_managed_identity = true
integration_runtime_name = azurerm_data_factory_integration_runtime_azure.test.name
depends_on = [azurerm_data_factory_integration_runtime_azure.test,
azurerm_data_factory_managed_private_endpoint.example]
}
output "id" {
value = azurerm_data_factory_linked_service_azure_sql_database.linked_service_azure_sql_database.id
}
Executed: terraform plan
Output:
id = "/subscriptions/xxxxxxxxx/resourceGroups/xxxxxx/providers/Microsoft.DataFactory/factories/kaADFexample/linkedservices/kaexampleLS"
If the error persists in your case, try removing the tenant attribute in the data factory linked service just after the Terraform deployment is done.
Please check this known issue, mentioned by @chgenzel in the terraform-provider-azurerm issues on GitHub.
ADF: managed identity; linked service: Azure SQL.
Reference: azurerm_data_factory_linked_service_azure_sql_database | Terraform Registry

terraform nested for each loop in azure storage account

I want to create multiple storage accounts and, inside each of those storage accounts, some containers.
If I want 3 storage accounts, I always want to create container-a and container-b in each of those 3 storage accounts.
So, for example, the storage account list would be ["sa1","sa2","sa3"].
resource "azurerm_storage_account" "storage_account" {
count = length(var.list)
name = var.name
resource_group_name = module.storage-account-resource-group.resource_group_name[0]
location = var.location
account_tier = var.account_tier
account_kind = var.account_kind
Then the container block:
resource "azurerm_storage_container" "container" {
depends_on = [azurerm_storage_account.storage_account]
count = length(var.containers)
name = var.containers[count.index].name
container_access_type = var.containers[count.index].access_type
storage_account_name = azurerm_storage_account.storage_account[0].name
container variables:
variable "containers" {
type = list(object({
name = string
access_type = string
}))
default = []
description = "List of storage account containers."
}
list variable
variable "list" {
type = list(string)
description = "the env to deploy. ['dev','qa','prod']"
This code creates containers only in the first storage account, "sa1", but not in the other two, "sa2" and "sa3". I read that I need to use two for_each loops to iterate over both the list of storage accounts and the containers, but I am not sure how the code for that should look.
It would be better to use for_each:
resource "azurerm_storage_account" "storage_account" {
for_each = toset(var.list)
name = each.value # e.g. "sa1", "sa2", "sa3" from var.list
resource_group_name = module.storage-account-resource-group.resource_group_name[0]
location = var.location
account_tier = var.account_tier
account_kind = var.account_kind
}
then you need an equivalent of a double for loop, which you can get using setproduct:
locals {
flat_list = setproduct(var.list, var.containers)
}
and then you use local.flat_list for containers:
resource "azurerm_storage_container" "container" {
for_each = {for idx, val in local.flat_list: idx => val}
name = each.value[1].name
container_access_type = each.value[1].access_type
storage_account_name = azurerm_storage_account.storage_account[each.value[0]].name
}
p.s. I haven't run the code, thus it may require some adjustments, but the idea remains valid.
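As a variation on the same idea (not from the original answer, and also untested), you can key the for_each map by a readable string built from each pair instead of the numeric index, so the plan keys stay stable if the lists are reordered:
locals {
  # one entry per (storage account, container) pair, keyed by "<account>-<container>"
  containers_by_sa = {
    for pair in setproduct(var.list, var.containers) :
    "${pair[0]}-${pair[1].name}" => {
      storage_account = pair[0]
      container       = pair[1]
    }
  }
}

resource "azurerm_storage_container" "container" {
  for_each              = local.containers_by_sa
  name                  = each.value.container.name
  container_access_type = each.value.container.access_type
  storage_account_name  = azurerm_storage_account.storage_account[each.value.storage_account].name
}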

Accessing specific storage account id created using terraform module

I am creating infrastructure with Terraform modules. Some of the common and repetitive infrastructure is created using a module, and other resources are created independently outside of the module. The structure of my code is described below.
-terraform\module\storage.tf
-terraform\main.tf
-terraform\mlws.tf
This is my code for /module/storage.tf, where I am creating a storage account like this:
resource "azurerm_storage_account" "storage" {
name = var.storage_account_name
resource_group_name = var.rg_name
location = var.location
account_tier = "Standard"
account_replication_type = "GRS"
min_tls_version = "TLS1_2"
}
module "m1" {
source = "./modules"
storage_account_name = "m1storage"
rg_name = "rg1"
location = "USCentral"
}
module "m2" {
source = "./modules"
storage_account_name = "m2storage"
rg_name = "rg2"
location = "USCentral"
}
module "m3" {
source = "./modules"
storage_account_name = "m3storage"
rg_name = "rg3"
location = "USCentral"
}
resource "azurerm_machine_learning_workspace" "mlws" {
name = "mlws"
location = ""USCentral"
resource_group_name = "mlws-rg1"
application_insights_id = azurerm_application_insights.mlops_appins.id
key_vault_id = data.azurerm_key_vault.kv.id
storage_account_id = **<Mandatory to be filled>**
container_registry_id = azurerm_container_registry.acr.id
identity {
type = "SystemAssigned"
}
depends_on = [
module.m2
]
}
The code for the storage account is under \terraform\module\storage.tf, the code calling the module is under \terraform\main.tf, and the code for the machine learning workspace is under \terraform\mlws.tf.
My mlws.tf code is outside the module, but it needs to be associated with the ID of the storage account created under module m2 in the code above.
I am struggling to fetch the ID of the "m2storage" storage account. Can you please provide a solution for how to access the ID of a specific storage account created through the module and use it in my code outside the module?
This is how it normally works: inside the module you declare an output that exposes the storage account ID, something like this (in \terraform\module\storage.tf or an outputs.tf next to it):
output "storage_account_id" {
description = "M2 storage account id."
value = m2.storage_account.storage_account_id
}
Now that you have the output, when you want to use it you refer to it as module.m2.storage_account_id:
resource "azurerm_machine_learning_workspace" "mlws" {
name = "mlws"
location = ""USCentral"
resource_group_name = "mlws-rg1"
application_insights_id = azurerm_application_insights.mlops_appins.id
key_vault_id = data.azurerm_key_vault.kv.id
storage_account_id = module.m2.storage_account_id
container_registry_id = azurerm_container_registry.acr.id
identity {
type = "SystemAssigned"
}
depends_on = [
module.m2
]
}
Let me know if you need more help.

Using terraform how do I create an azure sql database from a backup

Using the default example from the Terraform site I can easily create a database, but how do I create a new database by restoring a backup?
provider "azurerm" {
features {}
}
resource "azurerm_resource_group" "example" {
name = "example-resources"
location = "West Europe"
}
resource "azurerm_storage_account" "example" {
name = "examplesa"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
account_tier = "Standard"
account_replication_type = "LRS"
}
resource "azurerm_mssql_server" "example" {
name = "example-sqlserver"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
version = "12.0"
administrator_login = "4dm1n157r470r"
administrator_login_password = "4-v3ry-53cr37-p455w0rd"
}
resource "azurerm_mssql_database" "test" {
name = "acctest-db-d"
server_id = azurerm_mssql_server.example.id
collation = "SQL_Latin1_General_CP1_CI_AS"
license_type = "LicenseIncluded"
max_size_gb = 4
read_scale = true
sku_name = "BC_Gen5_2"
zone_redundant = true
create_mode = "RestoreExternalBackup" <-- WHAT ELSE DO I DO?
extended_auditing_policy {
storage_endpoint = azurerm_storage_account.example.primary_blob_endpoint
storage_account_access_key = azurerm_storage_account.example.primary_access_key
storage_account_access_key_is_secondary = true
retention_in_days = 6
}
tags = {
foo = "bar"
}
}
In the documentation they mention a create_mode option "RestoreExternalBackup", but they provide no example of how to reference the backup; mine is stored in an Azure storage container.
Edit: The mention of "RestoreExternalBackup" was more about my lack of understanding. What I meant to ask was: how do I restore/create a database from a bacpac file stored in a storage account?
Following the blog Deploying Azure SQL Database Bacpac and Terraform by John Q. Martin, you can include the bacpac as the source for the database created in Azure.
First, set up the firewall on the Azure SQL server to prevent any failure during deployment due to blob storage access issues. To ensure this, we have to enable "Allow Azure services and resources to access this server", which allows the two Azure services to communicate.
Setting the Azure SQL Server Firewall
Set both start_ip_address and end_ip_address to 0.0.0.0. This is interpreted by Azure as a firewall rule that allows Azure services.
resource "azurerm_sql_firewall_rule" "allowAzureServices" {
name = "Allow_Azure_Services"
resource_group_name = azurerm_resource_group.example.name
server_name = azurerm_sql_server.example.name
start_ip_address = "0.0.0.0"
end_ip_address = "0.0.0.0"
}
Defining the Database Resource
We need to use the azurerm_sql_database resource, because the deployment of a bacpac is only supported through this resource type.
The resource definition here consists of two main sections: the first covers the details of where the database needs to go, and the second is a sub-block that defines the bacpac source details. Here we need to put in the URI for the bacpac file and the storage key; in this case we are using the storage account access key to allow access to the bacpac.
We also need to provide the username and password for the server we are creating, so that the import has the authorization it needs against the Azure SQL server.
provider "azurerm" {
features {}
}
resource "azurerm_resource_group" "example" {
name = "example-resources"
location = "West Europe"
}
resource "azurerm_storage_account" "example" {
name = "examplesa"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
account_tier = "Standard"
account_replication_type = "LRS"
}
resource "azurerm_sql_server" "example" {
name = "myexamplesqlserver"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
version = "12.0"
administrator_login = "4dm1n157r470r"
administrator_login_password = "4-v3ry-53cr37-p455w0rd"
tags = {
environment = "production"
}
}
resource "azurerm_sql_firewall_rule" "allowAzureServices" {
name = "Allow_Azure_Services"
resource_group_name = azurerm_resource_group.example.name
server_name = azurerm_sql_server.example.name
start_ip_address = "0.0.0.0"
end_ip_address = "0.0.0.0"
}
resource "azurerm_sql_database" "appdb01" {
depends_on = [azurerm_sql_firewall_rule.allowAzureServices]
name = "AzSqlDbName"
resource_group_name = azurerm_sql_server.example.resource_group_name
location = azurerm_sql_server.example.location
server_name = azurerm_sql_server.example.name
collation = "SQL_Latin1_General_CP1_CI_AS"
requested_service_objective_name = "BC_Gen5_2"
max_size_gb = 4
read_scale = true
zone_redundant = true
create_mode = "Default"
import {
storage_uri = "https://examplesa.blob.core.windows.net/source/Source.bacpac"
storage_key = "gSKjBfoK4toNAWXUdhe6U7YHqBgCBPsvoDKTlh2xlqUQeDcuCVKcU+uwhq61AkQaPIbNnqZbPmYwIRkXp3OzLQ=="
storage_key_type = "StorageAccessKey"
administrator_login = "4dm1n157r470r"
administrator_login_password = "4-v3ry-53cr37-p455w0rd"
authentication_type = "SQL"
operation_mode = "Import"
}
extended_auditing_policy {
storage_endpoint = azurerm_storage_account.example.primary_blob_endpoint
storage_account_access_key = azurerm_storage_account.example.primary_access_key
storage_account_access_key_is_secondary = true
retention_in_days = 6
}
tags = {
foo = "bar"
}
}
Note:
The extended_auditing_policy block has been moved to azurerm_mssql_server_extended_auditing_policy and azurerm_mssql_database_extended_auditing_policy. This block will be removed in version 3.0 of
the provider.
requested_service_objective_name - (Optional) The service objective name for the database. Valid values depend on edition and location and may include S0, S1, S2, S3, P1, P2, P4, P6, P11 and ElasticPool. You can list the available names with the CLI: az sql db list-editions -l westus -o table. For further information please see Azure CLI - az sql db.
And import supports the following:
storage_uri - (Required) Specifies the blob URI of the .bacpac file.
storage_key - (Required) Specifies the access key for the storage account.
storage_key_type - (Required) Specifies the type of access key for the storage account. Valid values are StorageAccessKey or SharedAccessKey.
administrator_login - (Required) Specifies the name of the SQL administrator.
administrator_login_password - (Required) Specifies the password of the SQL administrator.
authentication_type - (Required) Specifies the type of authentication used to access the server. Valid values are SQL or ADPassword.
operation_mode - (Optional) Specifies the type of import operation being performed. The only allowable value is Import.
Alternatively, if you want to continue using azurerm_mssql_database, you would need to deploy an empty database and then import the bacpac via SqlPackage (which I haven't tried yet).
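A rough, untested sketch of that alternative, assuming SqlPackage is installed on the machine running Terraform and the bacpac has already been downloaded locally (resource names reuse the earlier example; treat this as an illustration, not a drop-in solution):
# Deploy an empty database first ...
resource "azurerm_mssql_database" "restored" {
  name      = "acctest-db-d"
  server_id = azurerm_mssql_server.example.id
  sku_name  = "BC_Gen5_2"
}

# ... then import the bacpac into it with SqlPackage via a local-exec provisioner.
resource "null_resource" "import_bacpac" {
  depends_on = [azurerm_mssql_database.restored]

  provisioner "local-exec" {
    command = join(" ", [
      "sqlpackage /Action:Import",
      "/SourceFile:Source.bacpac",
      "/TargetServerName:${azurerm_mssql_server.example.fully_qualified_domain_name}",
      "/TargetDatabaseName:${azurerm_mssql_database.restored.name}",
      "/TargetUser:4dm1n157r470r",
      "/TargetPassword:4-v3ry-53cr37-p455w0rd",
    ])
  }
}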
