I have created resource, network and compute modules in Terraform, and now want to pass the output vm_id to the site recovery module. Here are the files I am currently using.
To be specific, in resource "azurerm_site_recovery_replicated_vm" "vm-replication" I want: source_vm_id = module.compute.vm_id
This is the directory structure that I am currently following:
.
├── main.tf
└── modules
├── compute
│ ├── main.tf
│ ├── outputs.tf_bk
│ ├── variable.tf
│ └── variable.tfvars
├── network
│ ├── main.tf
│ ├── variable.tf
│ └── variable.tfvars
├── resource
│ ├── main.tf
│ ├── variable.tf
│ └── variable.tfvars
└── site_recovery
├── main.tf
├── variable.tf
└── variable.tfvars
root module main.tf file:
#Select provider
provider "azurerm" {
subscription_id = "xxxxxxxxxxxxxxxxxxxxxxxx"
version = "~> 2.4"
features {}
}
module "resource" {
source = "./modules/resource"
resource_group_name = "devops_primary"
location = "southeastasia"
}
module "network" {
source = "./modules/network"
virtual_network = "primaryvnet"
subnet = "primarysubnet"
address_space = "192.168.0.0/16"
address_prefix = "192.168.1.0/24"
public_ip = "backendvmpip"
location = "southeastasia"
primary_nic = "backendvmnic"
primary_ip_conf = "backendvm"
resource_group_name = "module.resource.primary_group_name"
}
module "compute" {
source = "./modules/compute"
#resource_group_name = "devops_primary"
#location = "southeastasia"
vm_name = "backendvm-primary"
vm_size = "standard_d2s_v3"
vm_storage_od_disk_name = "backend-vm-os-disk-primary"
computer_name = "backendserver"
username = "terraform"
ssh_key_path = "/home/terraform/.ssh/authorized_keys"
keys_data = "~/.ssh/id_rsa.pub"
sa_name = "primarysa"
disk_name = "backenddisk_primary"
}
module "site_recovery" {
source = "./modules/site_recovery"
#resource_group_name = "devops_primary"
#location = "southeastasia"
sec_resource_group = "devops_secondary"
recovery_vault_name = "recovery-vault"
primary_fabric = "devops_primary-fabric"
seconday_fabric = "devops_secondary-fabric"
primary_container = "primary-protection-container"
secondary_container = "secondary-protection-container"
policy_name = "policy"
container_mapping = "container-mapping"
replicated_vm = "backendvm-replication"
}
compute main.tf:
#Create VM in Primary resource
resource "azurerm_virtual_machine" "primary" {
name = "var.vm_name"
location = "module.resource.azurerm_resource_group.primary.location"
resource_group_name = "module.resource.azurerm_resource_group.primary.name"
vm_size = "var.vm_size"
network_interface_ids = ["module.resource.azurerm_network_interface.primary.id"]
storage_os_disk {
name = "var.vm_storage_od_disk_name"
os_type = "Linux"
caching = "ReadWrite"
create_option = "FromImage"
managed_disk_type = "Premium_LRS"
}
storage_image_reference {
publisher = "Canonical"
offer = "UbuntuServer"
sku = "18.04-LTS"
version = "latest"
}
os_profile {
computer_name = "var.computer_name"
admin_username = "var.username"
}
os_profile_linux_config {
disable_password_authentication = true
ssh_keys {
path = "/home/terraform/.ssh/authorized_keys"
key_data = file("~/.ssh/id_rsa.pub")
}
}
tags = {
environment = "Test"
}
}
output "vm_ids" {
description = "Virtual machine ids created."
value = azurerm_virtual_machine.primary.id
#depends_on = [azurerm_virtual_machine.primary.primary]
}
site recovery main.tf:
#Create Site Recovery Replicated VM
resource "azurerm_site_recovery_replicated_vm" "vm-replication" {
name = var.replicated_vm
resource_group_name = azurerm_resource_group.secondary.name
recovery_vault_name = azurerm_recovery_services_vault.vault.name
source_recovery_fabric_name = azurerm_site_recovery_fabric.primary.name
#source_vm_id = "module.compute.azurerm_virtual_machine.primary.id"
source_vm_id = module.compute.vm_ids
recovery_replication_policy_id = azurerm_site_recovery_replication_policy.policy.id
source_recovery_protection_container_name = azurerm_site_recovery_protection_container.primary.name
target_resource_group_id = azurerm_resource_group.secondary.id
target_recovery_fabric_id = azurerm_site_recovery_fabric.secondary.id
target_recovery_protection_container_id = azurerm_site_recovery_protection_container.secondary.id
managed_disk {
disk_id = "[module.resource.azurerm_virtual_machine.primary.storage_os_disk[0].managed_disk_id]"
staging_storage_account_id = "module.resource.azurerm_storage_account.primary.id"
target_resource_group_id = azurerm_resource_group.secondary.id
target_disk_type = "Premium_LRS"
target_replica_disk_type = "Premium_LRS"
}
managed_disk {
disk_id = "[module.resource.azurerm_managed_disk.primary.id]"
staging_storage_account_id = "[module.resource.azurerm_storage_account.primary.id]"
target_resource_group_id = azurerm_resource_group.secondary.id
target_disk_type = "Premium_LRS"
target_replica_disk_type = "Premium_LRS"
}
depends_on = ["module.compute.vm_ids"]
}
I used depends_on for the input to the site_recovery module. Again, can you please suggest how I can output the managed disk ids and OS disk ids from the compute module and use them as input in the site recovery module?
For the error
Error: Reference to undeclared module on modules/site_recovery/main.tf
It means the referenced module is not declared in the calling module.
To call a module means to include the contents of that module into the configuration with specific values for its input variables. Modules are called from within other modules using module blocks. You need to add the module block in the configuration .tf file where you want to call that module. See calling a child module.
It seems that there are no module blocks declared in your site recovery and compute main.tf files, so you cannot reference the resource module with expressions such as module.resource.azurerm_resource_group.primary.location or module.resource.azurerm_managed_disk.primary.id. Note also that a module reference can only reach the module's declared outputs, never its individual resources.
Given your directory structure, you can use input variables to pass one module's output into another module. The correct expression is module.<MODULE NAME>.<OUTPUT NAME>.
To output the VM id and the OS managed disk id from the compute module, do it like this (note that modules/compute/outputs.tf_bk will be ignored; Terraform only reads files ending in .tf, so rename it to outputs.tf):
output "azurerm_vm_id" {
value = azurerm_virtual_machine.primary.id
}
output "primary_os_disk_id" {
value = azurerm_virtual_machine.primary.storage_os_disk[0].managed_disk_id
}
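If the data disk (azurerm_managed_disk) is also created inside the compute module, its id can be exposed the same way; a minimal sketch, assuming the resource label is primary as in your references above:
output "primary_data_disk_id" {
  value = azurerm_managed_disk.primary.id
}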
The main.tf in the root directory
module "vm" {
source = "./modules/vm"
vm_name = "backendvm-primary"
vm_size = "standard_d2s_v3"
vm_storage_od_disk_name = "backend-vm-os-disk-primary"
computer_name = "backendserver"
username = "terraform"
nic_ids = module.network.primary_nic_id
resource_group_name = module.resource.rg_name
location = module.resource.rg_location
#ssh_key_path = "/home/terraform/.ssh/authorized_keys"
#keys_data = "~/.ssh/id_rsa.pub"
}
module "site_recovery" {
source = "./modules/site_recovery"
resource_group_name = module.resource.rg_name
location = module.resource.rg_location
sec_resource_group = "nancy_secondary"
sec_location = "eastus"
recovery_vault_name = "recovery-vault"
primary_fabric = "devops_primary-fabric"
seconday_fabric = "devops_secondary-fabric"
primary_container = "primary-protection-container"
secondary_container = "secondary-protection-container"
policy_name = "policy"
container_mapping = "container-mapping"
replicated_vm = "backendvm-replication"
source_vm_id = module.vm.azurerm_vm_id
primary_os_disk_id = module.vm.primary_os_disk_id
}
The Site Recovery main.tf file
#Create Site Recovery Replicated VM
resource "azurerm_site_recovery_replicated_vm" "vm-replication" {
depends_on = [var.vm_depends_on]
name = var.replicated_vm
resource_group_name = azurerm_resource_group.secondary.name
recovery_vault_name = azurerm_recovery_services_vault.vault.name
source_recovery_fabric_name = azurerm_site_recovery_fabric.primary.name
source_vm_id = var.source_vm_id
recovery_replication_policy_id = azurerm_site_recovery_replication_policy.policy.id
source_recovery_protection_container_name = azurerm_site_recovery_protection_container.primary.name
target_resource_group_id = azurerm_resource_group.secondary.id
target_recovery_fabric_id = azurerm_site_recovery_fabric.secondary.id
target_recovery_protection_container_id = azurerm_site_recovery_protection_container.secondary.id
managed_disk {
disk_id = var.primary_os_disk_id
staging_storage_account_id = azurerm_storage_account.primary.id
target_resource_group_id = azurerm_resource_group.secondary.id
target_disk_type = "Premium_LRS"
target_replica_disk_type = "Premium_LRS"
}
}
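For completeness, the site_recovery module must declare matching input variables; a minimal sketch (names taken from the usage above, with vm_depends_on typed as any so a module output can be passed to depends_on, per the thread linked below):
variable "source_vm_id" {
  type        = string
  description = "Id of the source VM to replicate."
}

variable "primary_os_disk_id" {
  type        = string
  description = "Managed disk id of the source VM's OS disk."
}

variable "vm_depends_on" {
  type        = any
  default     = null
  description = "Resource or module output the replicated VM should wait for."
}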
In fact, in the azurerm_site_recovery_replicated_vm block there is already an implicit dependency: source_vm_id relies on the source Azure VM, so an explicit depends_on is usually unnecessary. If you do want one, the depends_on meta-argument accepts a list of resources; to make it work with a module, pass a module output through a variable as shown above. You can refer to this thread - Terraform depends_on with modules - and this document.
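In the root module the wiring could then look like this (a sketch; it simply passes the compute module's output through, and the omitted arguments are the same as above):
module "site_recovery" {
  source = "./modules/site_recovery"
  # ...other arguments as above...
  source_vm_id  = module.vm.azurerm_vm_id
  vm_depends_on = [module.vm.azurerm_vm_id]
}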
I am trying to create APIs with certain inputs dynamically into an APIM instance in azure. For that I have created a resource azurerm_api_management_api, to which I am going to pass the values like name, version, display name dynamically from a local.tf file. But when I tried, the error was
Error: Unsupported block type
│
│ on api-management\api_management_api.tf line 6, in resource "azurerm_api_management_api" "apim_api":
│ 6: dynamic apiValues{
│
│ Blocks of type "apiValues" are not expected here.
Here is the resource block.
resource "azurerm_api_management_api" "apim_api" {
revision = "1"
resource_group_name = var.resource_group_name
api_management_name = azurerm_api_management.apim.name
dynamic apiValues{
for_each = local.apiDetails
content{
name = apiValues.value.name
display_name = apiValues.value.display_name
path = ""
protocols = ["http","https"]
service_url = "http://spring-boot-redis.azurewebsites.net"
import {
content_format = "openapi-link"
content_value = "./SpringBootRedis.yaml"
}
}
}
}
locals.tf
locals {
apiDetails = [
{
name = "spring-boot-redis"
display_name = "Spring Boot Redis"
}
]
}
Is there any other way to achieve this? As I am planning to put this on an azure pipeline. So that I have to only take care of the API specification and names.
I am unsure why you are attempting to use a dynamic block for a block that does not exist according to the documentation. The error message agrees the block does not exist in the resource schema.
It appears what you are trying to achieve here is multiple resources, each taking its values from local.apiDetails. Note that for_each accepts only a map or a set of strings, so the list of objects in your locals needs to be converted to a map (keyed by name here):
resource "azurerm_api_management_api" "apim_api" {
for_each = { for api in local.apiDetails : api.name => api }
revision = "1"
resource_group_name = var.resource_group_name
api_management_name = azurerm_api_management.apim.name
name = each.value.name
display_name = each.value.display_name
path = ""
protocols = ["http","https"]
service_url = "http://spring-boot-redis.azurewebsites.net"
import {
content_format = "openapi-link"
content_value = "./SpringBootRedis.yaml"
}
}
The documentation has more information.
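If each API ships with its own OpenAPI spec, the map entries can carry the per-API values too, so your pipeline only has to touch locals.tf; a sketch, where the path and spec_path attributes are assumptions:
locals {
  apiDetails = {
    spring-boot-redis = {
      display_name = "Spring Boot Redis"
      path         = "redis"                  # assumed URL path segment
      spec_path    = "./SpringBootRedis.yaml" # assumed per-API spec file
    }
  }
}
Inside the resource you would then use name = each.key, display_name = each.value.display_name, path = each.value.path, and content_value = each.value.spec_path in the import block.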
I've created a Terraform template that creates 2 route tables and 2 subnets using for_each. I am trying to associate the route tables with the two subnets; however, I am struggling because I don't know how to obtain the IDs of the route tables and subnets, as those details are not in a variable, and I'm not sure how to get that information and use it. Please may someone provide assistance?
Thank you
Main Template
# SUBNETS DEPLOYMENT
resource "azurerm_subnet" "subnets" {
depends_on = [azurerm_virtual_network.vnet]
for_each = var.subnets
resource_group_name = var.rg.name
virtual_network_name = var.vnet.config.name
name = each.value.subnet_name
address_prefixes = each.value.address_prefixes
}
# ROUTE TABLE DEPLOYMENT
resource "azurerm_route_table" "rt" {
depends_on = [azurerm_virtual_network.vnet]
for_each = var.rt
name = each.value.route_table_name
resource_group_name = var.rg.name
location = var.rg.location
disable_bgp_route_propagation = true
route = [ {
address_prefix = each.value.address_prefix
name = each.value.route_name
next_hop_in_ip_address = each.value.next_hop_ip
next_hop_type = each.value.next_hop_type
} ]
}
# ROUTE TABLE ASSOCIATION
resource "azurerm_subnet_route_table_association" "rt_assoication" {
subnet_id = azurerm_subnet.subnets.id
route_table_id = azurerm_route_table.rt.id
}
Variables
# SUBNET VARIBALES
variable "subnets" {
description = "subnet names and address prefixes"
type = map(any)
default = {
subnet1 = {
subnet_name = "snet-001"
address_prefixes = ["172.17.208.0/28"]
}
subnet2 = {
subnet_name = "snet-002"
address_prefixes = ["172.17.208.32/27"]
}
}
}
# ROUTE TABLES VARIABLES
variable "rt" {
description = "variable for route tables."
type = map(any)
default = {
rt1 = {
route_table_name = "rt1"
address_prefix = "0.0.0.0/0"
route_name = "udr-azure-firewall"
next_hop_ip = "10.0.0.0"
next_hop_type = "VirtualAppliance"
}
rt2 = {
route_table_name = "rt2"
address_prefix = "0.0.0.0/0"
route_name = "udr-azure-firewall"
next_hop_ip = "10.0.0.0"
next_hop_type = "VirtualAppliance"
}
}
}
The error I get when I run terraform plan is:
│ Error: Missing resource instance key
│
│ on modules\vnet\main.tf line 74, in resource "azurerm_subnet_route_table_association" "rt_assoication":
│ 74: subnet_id = azurerm_subnet.subnets.id
│
│ Because azurerm_subnet.subnets has "for_each" set, its attributes must be accessed on specific instances.
│
│ For example, to correlate with indices of a referring resource, use:
│ azurerm_subnet.subnets[each.key]
╵
╷
│ Error: Missing resource instance key
│
│ on modules\vnet\main.tf line 75, in resource "azurerm_subnet_route_table_association" "rt_assoication":
│ 75: route_table_id = azurerm_route_table.rt.id
│
│ Because azurerm_route_table.rt has "for_each" set, its attributes must be accessed on specific instances.
│
│ For example, to correlate with indices of a referring resource, use:
│ azurerm_route_table.rt[each.key]
Looks like you are almost there. Two updates are needed in the subnet-route table association block: it needs its own for_each, and the key used to index both collections must exist in each of them. With keys that match in both maps, it would look like this:
# ROUTE TABLE ASSOCIATION
resource "azurerm_subnet_route_table_association" "rt_assoication" {
  for_each       = var.subnets
  subnet_id      = azurerm_subnet.subnets[each.key].id
  route_table_id = azurerm_route_table.rt[each.key].id
}
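Note, though, that your two maps use different keys (subnet1/subnet2 vs rt1/rt2), so [each.key] cannot index both directly. One option is a small lookup map pairing each subnet key with the route table key it should use, in place of the block above; a sketch, where the pairing itself is an assumption:
locals {
  # which route table each subnet should be associated with (assumed pairing)
  rt_for_subnet = {
    subnet1 = "rt1"
    subnet2 = "rt2"
  }
}

resource "azurerm_subnet_route_table_association" "rt_assoication" {
  for_each       = local.rt_for_subnet
  subnet_id      = azurerm_subnet.subnets[each.key].id
  route_table_id = azurerm_route_table.rt[each.value].id
}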
I started writing Terraform to automate the IaC for provisioning VMs in Azure. I wrote the entire configuration, but I am unable to reference the existing subnet/vnet/resource group properly.
main.tf
# Configure the Microsoft Azure Provider
provider "azurerm" {
# The "feature" block is required for AzureRM provider 2.x.
# If you're using version 1.x, the "features" block is not allowed.
#version = "~>2.20.0"
features {}
subscription_id = var.subscription_id
tenant_id = var.tenant_id
client_id = var.client_id
client_secret = var.client_secret
}
#terraform {
# backend "azurerm" {
# snapshot = true
#}
#}
# Refer to resource group
data "azurerm_resource_group" "nwrk_group" {
name = var.nwrk_resource_group
}
data "azurerm_resource_group" "resource_group" {
name = var.resource_group
}
# Refer to a subnet
data "azurerm_subnet" "subnet" {
name = var.nwrk_subnet_name
virtual_network_name = var.nwrk_name
resource_group_name = data.azurerm_resource_group.nwrk_group.name
}
# Refer to Network Security Group and rule
data "azurerm_network_security_group" "nwrk_security_group" {
name = var.nwrk_security_grp
resource_group_name = data.azurerm_resource_group.nwrk_group.name
}
module "vm" {
source = "../modules/windows_vm"
node = var.node
node_username = var.node_username
node_password = var.node_password
tags = var.tags
deployment_environment = var.deployment_environment
nwrk_group_location = data.azurerm_resource_group.resource_group.location
nwrk_group_name = data.azurerm_resource_group.resource_group.name
subnet_id = data.azurerm_subnet.subnet.id
nwrk_security_group_id = data.azurerm_network_security_group.nwrk_security_group.id
resource_group_location = data.azurerm_resource_group.resource_group.location
resource_group_name = data.azurerm_resource_group.resource_group.name
}
terraform.tfvars
tags = {
project = "SEPS_Terraform"
environment = "test_tfm"
}
deployment_environment = "DEV"
node_username = "saz76test"
node_password = "SA82nd2"
nwrk_subnet_name = "SUBNET_45_0"
node = {
general_info = {
name = "gateway.test.com"
private_ip = "153.78.51.92"
vm_template = "Standard_B2s"
disk_type = "StandardSSD_LRS"
nwrk_resource_group = "SWS_LAB_36_192"
nwrk_name = "SUB_VNET_36_192"
nwrk_security_group = "N-Untrusted"
nwrk_subnet_name = "SUB_51_0"
}
os_image = {
publisher = "MicrosoftWindowsServer"
offer = "WindowsServer"
sku = "2019-DataCenter"
version = "latest"
}
storage_disk = {
type = "StandardSSD_LRS"
size = 256
}
}
variables.tf
variable "subscription_id" {
type = string
description = "Azure subscription id to provision infra."
}
variable "tenant_id" {
type = string
description = "Azure subscription tenant id"
}
variable "client_id" {
type = string
description = "App id to authenticate to azure."
}
variable "client_secret" {
type = string
description = "App password to authenticate to azure"
}
variable "resource_group" {
type = string
description = "Resource group in which resources will be added other than network resources"
}
variable "nwrk_resource_group" {
type = string
description = "Resource group for network resources"
}
variable "nwrk_name" {
type = string
description = "VPC network name where the network resources belong to"
}
variable "nwrk_subnet_name" {
type = string
description = "Subnet of the VPC network"
}
variable "nwrk_security_grp" {
type = string
description = "Security group to which the network belong to"
}
variable "tags" {
type = map(string)
description = "Tags to attach to resources"
}
variable "deployment_environment" {
type = string
description = "Environment these VMs belong to"
}
variable "node" {
type = map(map(string))
description = "web node with specifications."
}
variable "node_username" {
type = string
description = "Login username for node"
}
variable "node_password" {
type = string
description = "Login password for node"
}
module_code:
# Create network interface
resource "azurerm_network_interface" "nic" {
name = "${var.node["general_info"]["name"]}_nic"
location = var.nwrk_group_location
resource_group_name = var.nwrk_group_name
ip_configuration {
name = "${var.node["general_info"]["name"]}_nicConfiguration"
subnet_id = var.subnet_id
private_ip_address_allocation = "Static"
private_ip_address = var.node["general_info"]["private_ip"]
}
tags = var.tags
}
# Connect the security group to the network interface
resource "azurerm_network_interface_security_group_association" "example" {
network_interface_id = azurerm_network_interface.nic.id
network_security_group_id = var.nwrk_security_group_id
}
resource "azurerm_windows_virtual_machine" "vm" {
name = var.node["general_info"]["name"]
location = var.resource_group_location
resource_group_name = var.resource_group_name
network_interface_ids = [azurerm_network_interface.nic.id]
size = var.node["general_info"]["vm_template"]
computer_name = var.node["general_info"]["name"]
admin_username = var.node_username
admin_password = var.node_password
os_disk {
name = "${var.node["general_info"]["name"]}-osDisk"
caching = "ReadWrite"
storage_account_type = var.node["general_info"]["disk_type"]
}
source_image_reference {
publisher = var.node["os_image"]["publisher"]
offer = var.node["os_image"]["offer"]
sku = var.node["os_image"]["sku"]
version = var.node["os_image"]["version"]
}
tags = var.tags
}
output "vm_id" {
value = azurerm_windows_virtual_machine.vm.id
}
output "vm_name" {
value = azurerm_windows_virtual_machine.vm.name
}
output "vm_ip_address" {
value = azurerm_network_interface.nic.private_ip_address
}
My code is above. terraform init works, but terraform plan fails. Can someone please help me with what I am missing? The error is:
Warning: Value for undeclared variable
│
│ The root module does not declare a variable named "nwrk_security_group" but a value was found in file "subscription.tfvars". If you meant to use
│ this value, add a "variable" block to the configuration.
│
│ To silence these warnings, use TF_VAR_... environment variables to provide certain "global" settings to all configurations in your organization.
│ To reduce the verbosity of these warnings, use the -compact-warnings option.
╵
╷
│ Warning: Resource targeting is in effect
│
│ You are creating a plan with the -target option, which means that the result of this plan may not represent all of the changes requested by the
│ current configuration.
│
│ The -target option is not for routine use, and is provided only for exceptional situations such as recovering from errors or mistakes, or when
│ Terraform specifically suggests to use it as part of an error message.
╵
╷
│ Error: Error: Subnet "SUBNET_45_0" (Virtual Network "SUB_VNET_36_192" / Resource Group "SWS_LAB_36_192") was not found
│
│ with data.azurerm_subnet.subnet,
│ on main.tf line 31, in data "azurerm_subnet" "subnet":
│ 31: data "azurerm_subnet" "subnet" {
│
╵
╷
│ Error: Error: Network Security Group "NSG" (Resource Group "SWS_LAB_36_192") was not found
│
│ with data.azurerm_network_security_group.nwrk_security_group,
│ on main.tf line 38, in data "azurerm_network_security_group" "nwrk_security_group":
│ 38: data "azurerm_network_security_group" "nwrk_security_group" {
Subscription.tfvars
subscription_id = "fdssssssssssssss"
client_id = "sdsdsdsdsdsdsdsdsdsdsdsd"
client_secret = ".dssssssssssssssssss
tenant_id = "asdfasdfasdfasdfasdfasdfasdfasdfasdfasdfasdfasdf"
resource_group = "SWS_LAB_36_192"
nwrk_resource_group = "SWS_LAB_36_192"
nwrk_name = "SUB_VNET_36_192"
nwrk_security_group = "N-Untrusted"
There could potentially be many different problems here, because I am not sure what the layout of the root module and child modules is. But as per the error you are getting, it seems that the variable being assigned in subscription.tfvars is not declared anywhere, while the variable that is supposed to receive the value never gets one; the data source therefore does not return anything, hence the error from the child module as well. Currently it is defined as:
variable "nwrk_security_grp" {
type = string
description = "Security group to which the network belong to"
}
If you take a look at the values in subscription.tfvars, there is no nwrk_security_grp, but there is a nwrk_security_group. One option to fix this would probably be to change the name of the variable in the variables.tf:
variable "nwrk_security_group" {
type = string
description = "Security group to which the network belong to"
}
In that case, you would have to adapt the data source to use the new variable name:
data "azurerm_network_security_group" "nwrk_security_group" {
name = var.nwrk_security_group
resource_group_name = data.azurerm_resource_group.nwrk_group.name
}
Alternatively (and probably easier), you can change the name of the variable you are assigning the value to in subscription.tfvars:
nwrk_security_grp = "N-Untrusted" # it was nwrk_security_group
What I would strongly suggest going forward is to keep the naming of the variables consistent, because otherwise you will run into a lot of issues like this one.
Hello Terraform Experts,
I inherited some old Terraform code for deploying resources to Azure. One of the main patterns I see in most of the modules is merging the Resource Group tags with additional tags that go on individual resources. The Resource Group tags are output as a map. For example:
output "resource_group_tags_map" {
value = { for r in azurerm_resource_group.this : r.name => r.tags }
description = "map of rg tags."
}
and then a resource like a vnet merges the RG tags with additional vnet-specific tags, given the name of the RG in a variable.
# merge Resource Group tags with Tags for VNET
# this is going to break if we change RGs
locals {
tags = merge(var.net_additional_tags, data.azurerm_resource_group.this.tags)
}
This works just fine if we can set the resource group in a single variable. It assumes that the resource(s) being deployed will go into one RG. However, this is not the case anymore and we somehow need to build in a way for any RG to be chosen when deploying a resource. The code below shows how the original concept works.
locals {
tags = merge(var.net_additional_tags, data.azurerm_resource_group.this.tags)
}
# - Virtual Network
# -
resource "azurerm_virtual_network" "this" {
for_each = var.virtual_networks
name = each.value["name"]
location = data.azurerm_resource_group.this.location
resource_group_name = var.resource_group_name
address_space = each.value["address_space"]
dns_servers = lookup(each.value, "dns_servers", null)
tags = local.tags
}
I am therefore looking for help to work around this. Say we create 100 vnets and each one goes into a different RG; we couldn't create 100 different resource group variables to capture that, as it would become too cumbersome.
Here is my example with Key Vault
resource "azurerm_key_vault" "this" {
for_each = var.key_vaults
name = each.value["name"]
location = each.value["location"]
resource_group_name = each.value["resource_group_name"]
sku_name = each.value["sku_name"]
access_policy = var.access_policies
enabled_for_deployment = each.value["enabled_for_deployment"]
enabled_for_disk_encryption = each.value["enabled_for_disk_encryption"]
enabled_for_template_deployment = each.value["enabled_for_template_deployment"]
enable_rbac_authorization = each.value["enable_rbac_authorization"]
purge_protection_enabled = each.value["purge_protection_enabled"]
soft_delete_retention_days = each.value["soft_delete_retention_days"]
tags = merge(each.value["tags"], )
}
In the tags argument, we need to somehow merge the tags entered for this instance of Key Vault with the resource group tags that the user chose to place the key vault in. I thought of something like this, but clearly the syntax is wrong.
merge(each.value["tags"], data.azurerm_resource_group[each.key][each.value["resource_group_name"].tags)
Thanks for your input.
UPDATE:
│ Error: Invalid index
│
│ on Modules\keyvault\main.tf line 54, in resource "azurerm_key_vault" "this":
│ 54: tags = merge(each.value["tags"], data.azurerm_resource_group.this["${each.value.resource_group_name}"].tags)
│ ├────────────────
│ │ data.azurerm_resource_group.this is object with 1 attribute "keyvault1"
│ │ each.value.resource_group_name is "Terraform1"
│
│ The given key does not identify an element in this collection value.
Solution code posted below using a map and locals.
SOLUTION
Variables.tf
variable "key_vaults" {
description = "Key Vaults and their properties."
type = map(object({
name = string
location = string
resource_group_name = string
sku_name = string
tenant_id = string
enabled_for_deployment = bool
enabled_for_disk_encryption = bool
enabled_for_template_deployment = bool
enable_rbac_authorization = bool
purge_protection_enabled = bool
soft_delete_retention_days = number
tags = map(string)
}))
default = {}
}
# soft_delete_retention_days numeric value can be between 7 and 90. 90 is default
Main.tf for KeyVault module
data "azurerm_resource_group" "this" {
# read from local variable, index is resource_group_name
for_each = local.rgs_map
name = each.value.name
}
# use data azurerm_client_config to get tenant_id, not from config
data "azurerm_client_config" "current" {}
# -
# - Setup key vault
# - transform variables to locals to make sure the correct index will be used: resource group name and key vault name
locals {
rgs_map = {
for n in var.key_vaults :
n.resource_group_name => {
name = n.resource_group_name
}
}
kvs_map = {
for n in var.key_vaults :
n.name => {
name = n.name
location = n.location
resource_group_name = n.resource_group_name
sku_name = n.sku_name
tenant_id = data.azurerm_client_config.current.tenant_id # n.tenant_id
enabled_for_deployment = n.enabled_for_deployment
enabled_for_disk_encryption = n.enabled_for_disk_encryption
enabled_for_template_deployment = n.enabled_for_template_deployment
enable_rbac_authorization = n.enable_rbac_authorization
purge_protection_enabled = n.purge_protection_enabled
soft_delete_retention_days = n.soft_delete_retention_days
tags = merge(n.tags, data.azurerm_resource_group.this["${n.resource_group_name}"].tags)
}
}
}
resource "azurerm_key_vault" "this" {
for_each = local.kvs_map # use the local variable; otherwise "keyvault1" would be used as the index instead of "kv-eastus2-01"
name = each.value["name"]
location = each.value["location"]
resource_group_name = each.value["resource_group_name"]
sku_name = each.value["sku_name"]
tenant_id = each.value["tenant_id"]
enabled_for_deployment = each.value["enabled_for_deployment"]
enabled_for_disk_encryption = each.value["enabled_for_disk_encryption"]
enabled_for_template_deployment = each.value["enabled_for_template_deployment"]
enable_rbac_authorization = each.value["enable_rbac_authorization"]
purge_protection_enabled = each.value["purge_protection_enabled"]
soft_delete_retention_days = each.value["soft_delete_retention_days"]
tags = each.value["tags"]
}
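For reference, an input shaped like this exercises the solution (the key, Key Vault name, and resource group name are taken from the error output and comments above; the remaining values are assumptions):
key_vaults = {
  keyvault1 = {
    name                            = "kv-eastus2-01"
    location                        = "eastus2"
    resource_group_name             = "Terraform1"
    sku_name                        = "standard"
    tenant_id                       = "" # overridden from azurerm_client_config in locals
    enabled_for_deployment          = false
    enabled_for_disk_encryption     = false
    enabled_for_template_deployment = false
    enable_rbac_authorization       = false
    purge_protection_enabled        = false
    soft_delete_retention_days      = 90
    tags = {
      costcenter = "it"
    }
  }
}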
I'm creating VMs using the script below beginning with "# Script to create VM". The script is called from a higher-level directory so as to create the VMs using modules; the call looks something like the code below starting with "#Template..". The problem is that we are missing the state for a few VMs that were created during a previous run. I've tried importing a VM itself, but looking at the state file, the imported entry does not appear similar to the ones already there that were created using the bottom script. Any help would be great.
#Template to call VM Script below
module <virtual_machine_name> {
source = "./vm"
virtual_machine_name = "<virtual_machine_name>"
resource_group_name = "<resource_group_name>"
availability_set_name = "<availability_set_name>"
virtual_machine_size = "<virtual_machine_size>"
subnet_name = "<subnet_name>"
private_ip = "<private_ip>"
# optional:
# production = true (default is false)
# data_disk_name = ["<disk1>","<disk2>"]
# data_disk_size = ["50","100"] # size is in GB
}
# Script to create VM
data azurerm_resource_group rgdata02 {
name = "${var.resource_group_name}"
}
data azurerm_subnet sndata02 {
name = "${var.subnet_name}"
resource_group_name = "${var.core_resource_group_name}"
virtual_network_name = "${var.virtual_network_name}"
}
data azurerm_availability_set availsetdata02 {
name = "${var.availability_set_name}"
resource_group_name = "${var.resource_group_name}"
}
data azurerm_backup_policy_vm bkpoldata02 {
name = "${var.backup_policy_name}"
recovery_vault_name = "${var.recovery_services_vault_name}"
resource_group_name = "${var.core_resource_group_name}"
}
data azurerm_log_analytics_workspace law02 {
name = "${var.log_analytics_workspace_name}"
resource_group_name = "${var.core_resource_group_name}"
}
#===================================================================
# Create NIC
#===================================================================
resource "azurerm_network_interface" "vmnic02" {
name = "nic${var.virtual_machine_name}"
location = "${data.azurerm_resource_group.rgdata02.location}"
resource_group_name = "${var.resource_group_name}"
ip_configuration {
name = "ipcnfg${var.virtual_machine_name}"
subnet_id = "${data.azurerm_subnet.sndata02.id}"
private_ip_address_allocation = "Static"
private_ip_address = "${var.private_ip}"
}
}
#===================================================================
# Create VM with Availability Set
#===================================================================
resource "azurerm_virtual_machine" "vm02" {
count = var.avail_set != "" ? 1 : 0
depends_on = [azurerm_network_interface.vmnic02]
name = "${var.virtual_machine_name}"
location = "${data.azurerm_resource_group.rgdata02.location}"
resource_group_name = "${var.resource_group_name}"
network_interface_ids = [azurerm_network_interface.vmnic02.id]
vm_size = "${var.virtual_machine_size}"
availability_set_id = "${data.azurerm_availability_set.availsetdata02.id}"
tags = var.tags
# This means the OS Disk will be deleted when Terraform destroys the Virtual Machine
# NOTE: This may not be optimal in all cases.
delete_os_disk_on_termination = true
os_profile {
computer_name = "${var.virtual_machine_name}"
admin_username = "__VMUSER__"
admin_password = "__VMPWD__"
}
os_profile_linux_config {
disable_password_authentication = false
}
storage_image_reference {
id = "${var.image_id}"
}
storage_os_disk {
name = "${var.virtual_machine_name}osdisk"
caching = "ReadWrite"
create_option = "FromImage"
managed_disk_type = "Premium_LRS"
os_type = "Linux"
}
boot_diagnostics {
enabled = true
storage_uri = "${var.boot_diagnostics_uri}"
}
}
#===================================================================
# Create VM without Availability Set
#===================================================================
resource "azurerm_virtual_machine" "vm03" {
count = var.avail_set == "" ? 1 : 0
depends_on = [azurerm_network_interface.vmnic02]
name = "${var.virtual_machine_name}"
location = "${data.azurerm_resource_group.rgdata02.location}"
resource_group_name = "${var.resource_group_name}"
network_interface_ids = [azurerm_network_interface.vmnic02.id]
vm_size = "${var.virtual_machine_size}"
# availability_set_id = "${data.azurerm_availability_set.availsetdata02.id}"
tags = var.tags
# This means the OS Disk will be deleted when Terraform destroys the Virtual Machine
# NOTE: This may not be optimal in all cases.
delete_os_disk_on_termination = true
os_profile {
computer_name = "${var.virtual_machine_name}"
admin_username = "__VMUSER__"
admin_password = "__VMPWD__"
}
os_profile_linux_config {
disable_password_authentication = false
}
storage_image_reference {
id = "${var.image_id}"
}
storage_os_disk {
name = "${var.virtual_machine_name}osdisk"
caching = "ReadWrite"
create_option = "FromImage"
managed_disk_type = "Premium_LRS"
os_type = "Linux"
}
boot_diagnostics {
enabled = true
storage_uri = "${var.boot_diagnostics_uri}"
}
}
#===================================================================
# Set Monitoring and Log Analytics Workspace
#===================================================================
resource "azurerm_virtual_machine_extension" "oms_mma02" {
count = var.bootstrap ? 1 : 0
name = "${var.virtual_machine_name}-OMSExtension"
virtual_machine_id = "${azurerm_virtual_machine.vm02.id}"
publisher = "Microsoft.EnterpriseCloud.Monitoring"
type = "OmsAgentForLinux"
type_handler_version = "1.8"
auto_upgrade_minor_version = true
settings = <<SETTINGS
{
"workspaceId" : "${data.azurerm_log_analytics_workspace.law02.workspace_id}"
}
SETTINGS
protected_settings = <<PROTECTED_SETTINGS
{
"workspaceKey" : "${data.azurerm_log_analytics_workspace.law02.primary_shared_key}"
}
PROTECTED_SETTINGS
}
#===================================================================
# Associate VM to Backup Policy
#===================================================================
resource "azurerm_backup_protected_vm" "vm02" {
count = var.bootstrap ? 1 : 0
resource_group_name = "${var.core_resource_group_name}"
recovery_vault_name = "${var.recovery_services_vault_name}"
source_vm_id = "${azurerm_virtual_machine.vm02.id}"
backup_policy_id = "${data.azurerm_backup_policy_vm.bkpoldata02.id}"
}
If I understand correctly, terraform import is not yet clear to you, so let me show what it does.
When you want to import pre-existing resources, you first need to write configuration in your Terraform files that matches how the existing resources are configured; the resources are then imported into the state file.
Another caveat: currently only a single resource can be imported into the state at a time.
When you want to import the resources into a module, I assume the folder structure like this:
testingimportfolder
└── main.tf
└── terraform.tfstate
└── terraform.tfstate.backup
└───module
└── main.tf
And the main.tf file in the folder testingimportfolder sets the module block like this:
module "importlab" {
source = "./module"
...
}
After you finish importing all the resources into the state file, the output of the command terraform state list looks like this:
module.importlab.azurerm_network_security_group.nsg
module.importlab.azurerm_resource_group.rg
module.importlab.azurerm_virtual_network.vnet
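Each of those entries comes from a separate terraform import run against the module-qualified address; for example (the subscription ID and resource group name here are placeholders):
terraform import module.importlab.azurerm_resource_group.rg /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-rg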
All the resource addresses look like module.module_name.azurerm_xxxx.resource_name. If you use a module inside a module, I assume the folder structure is like this:
importmodules
├── main.tf
├── modules
│ └── vm
│ ├── main.tf
│ └── module
│ └── main.tf
And the file importmodules/modules/vm/main.tf looks like this:
module "azurevm" {
source = "./module"
...
}
Then after you finish importing all the resources into the state file, the output of terraform state list looks like this:
module.vm.module.azurevm.azurerm_network_interface.example
Yes, it is just like what you have got. The state file stores your existing resources under the module paths through which you reference them, so you need to plan your code and modules carefully and clearly, or you will confuse yourself.
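One more detail for the script above: because azurerm_virtual_machine.vm02 and vm03 use count, the import address needs an instance index. A sketch, where the module name, subscription ID, and resource names are placeholders:
terraform import 'module.myvm.azurerm_virtual_machine.vm02[0]' /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-rg/providers/Microsoft.Compute/virtualMachines/myvm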