Error in terraform module mainly to do with log analytics - terraform

I am in the process of learning TF, currently on the subject of modules; at the same time I have decided to only create resources on my Azure account using TF, as a way to accelerate my learning. To this end, I found this GitHub repo https://github.com/kumarvna/terraform-azurerm-virtual-machine
I have been following the contents and trying to reproduce them on my test system. I have tried to contact the author to no avail, and as I feel I have already wasted 2 weeks trying to fix the problem, let me ask here for help.
My setup:
Pulled the code from the repo onto my laptop.
Logged onto my Azure account from a PowerShell console.
Created a folder called create_vm, and in that folder, in my main.tf file, I have the following. This is a Linux example, but I had the same issues with a Windows example also.
# Azurerm provider configuration
provider "azurerm" {
features {}
}
# Creates a new resource group
resource "azurerm_resource_group" "test_build" {
name = "testBuild"
location = "West Europe"
}
# Creates a new network
resource "azurerm_virtual_network" "example" {
name = "example-network"
location = azurerm_resource_group.test_build.location
resource_group_name = azurerm_resource_group.test_build.name
address_space = ["10.0.0.0/16"]
dns_servers = ["10.0.0.4", "10.0.0.5"]
subnet {
name = "subnet1"
address_prefix = "10.0.1.0/24"
}
}
# Creates a new la workspace (location and sku are required arguments)
resource "azurerm_log_analytics_workspace" "la" {
name = "loganalytics-we-sharedtest2"
location = azurerm_resource_group.test_build.location
resource_group_name = azurerm_resource_group.test_build.name
sku = "PerGB2018"
}
module "virtual-machine" {
source = "kumarvna/virtual-machine/azurerm"
version = "2.3.0"
# Resource Group, location, VNet and Subnet details
resource_group_name = azurerm_resource_group.test_build.name
location = "westeurope"
virtual_network_name = azurerm_virtual_network.example.name
subnet_name = "subnet1"
virtual_machine_name = "vm-linux"
# This module supports multiple pre-defined Linux and Windows distributions.
# Check the README.md file for more pre-defined images for Ubuntu, Centos, RedHat.
# Please make sure to use gen2 images supported VM sizes if you use gen2 distributions
# Specify `disable_password_authentication = false` to create random admin password
# Specify a valid password with `admin_password` argument to use your own password
# To generate SSH key pair, specify `generate_admin_ssh_key = true`
# To use existing key pair, specify `admin_ssh_key_data` to a valid SSH public key path.
os_flavor = "linux"
linux_distribution_name = "ubuntu2004"
virtual_machine_size = "Standard_B2s"
generate_admin_ssh_key = true
instances_count = 2
# Proximity placement group, Availability Set and adding a Public IP to VMs are optional.
# Remove these arguments from the module if you don't want to use them.
enable_proximity_placement_group = true
enable_vm_availability_set = true
enable_public_ip_address = true
# Network Security group port allow definitions for each Virtual Machine
# NSG association to be added automatically for all network interfaces.
# Remove this NSG rules block, if `existing_network_security_group_id` is specified
nsg_inbound_rules = [
{
name = "ssh"
destination_port_range = "22"
source_address_prefix = "*"
},
{
name = "http"
destination_port_range = "80"
source_address_prefix = "*"
},
]
# Boot diagnostics to troubleshoot virtual machines, by default uses managed
# To use custom storage account, specify `storage_account_name` with a valid name
# Passing a `null` value will utilize a Managed Storage Account to store Boot Diagnostics
enable_boot_diagnostics = true
# Attach managed data disks to Windows/Linux VMs. Possible storage account types are:
# `Standard_LRS`, `StandardSSD_ZRS`, `Premium_LRS`, `Premium_ZRS`, `StandardSSD_LRS`
# or `UltraSSD_LRS` (UltraSSD_LRS only available in a region that support availability zones)
# Initialize a new data disk - you need to connect to the VM and run Disk Management or fdisk
data_disks = [
{
name = "disk1"
disk_size_gb = 100
storage_account_type = "StandardSSD_LRS"
},
{
name = "disk2"
disk_size_gb = 200
storage_account_type = "Standard_LRS"
}
]
# (Optional) To enable Azure Monitoring and install log analytics agents
# (Optional) Specify `storage_account_name` to save monitoring logs to storage.
log_analytics_workspace_id = azurerm_log_analytics_workspace.la.id
# Deploy log analytics agents to virtual machine.
# Log analytics workspace customer id and primary shared key required.
deploy_log_analytics_agent = true
log_analytics_customer_id = azurerm_log_analytics_workspace.la.workspace_id
log_analytics_workspace_primary_shared_key = azurerm_log_analytics_workspace.la.primary_shared_key
# Adding additional TAG's to your Azure resources
tags = {
ProjectName = "demo-project"
Env = "dev"
Owner = "user#example.com"
BusinessUnit = "CORP"
ServiceClass = "Gold"
}
}
On variables.tf:
variable "log_analytics_workspace_name" {
description = "The name of the log analytics workspace"
default = null
}
variable "storage_account_name" {
description = "The name of the hub storage account to store logs"
default = null
}
variable "create_resource_group" {
description = "Whether to create resource group and use it for all networking resources"
default = true
}
Please note that I added the create_resource_group variable to try to resolve my issue to no avail.
I then run
terraform init
terraform plan
I get the following error with terraform plan
│ Error: Error: Log Analytics workspaces "loganalytics-we-sharedtest2" (Resource Group "rg-shared-westeurope-01") was not found
│
│ with data.azurerm_log_analytics_workspace.example,
│ on main.tf line 6, in data "azurerm_log_analytics_workspace" "example":
│ 6: data "azurerm_log_analytics_workspace" "example" {
│
╵
╷
│ Error: Error: Resource Group "rg-shared-westeurope-01" was not found
│
│ with module.virtual-machine.data.azurerm_resource_group.rg,
│ on .terraform\modules\virtual-machine\main.tf line 27, in data "azurerm_resource_group" "rg":
│ 27: data "azurerm_resource_group" "rg" {
│
What have I done?
Looked through the code to see what I am missing.
Added the variable at the top. Tried to contact the author to no avail.
Tried to use an existing resource group; I feel this defeats the purpose of having a variable that asks if a new resource group can be created in case it doesn't already exist.
What else is confusing?
I initially had another folder for modules; I later came to realise that the module is a public one being pulled down whenever I ran terraform init. Now, is there a way to have this as a localised module?
I have made the changes recommended by the answer below; however, in order not to turn the question into a long-winded one, I have placed the error that I got below.
│ Error: Error: Subnet: (Name "subnet1" / Virtual Network Name "testBuild_vnet" / Resource Group "testBuild") was not found
│
│ with module.virtual-machine.data.azurerm_subnet.snet,
│ on .terraform\modules\virtual-machine\main.tf line 36, in data "azurerm_subnet" "snet":
│ 36: data "azurerm_subnet" "snet" {
│
╵
╷
│ Error: Invalid count argument
│
│ on .terraform\modules\virtual-machine\main.tf line 443, in resource "azurerm_monitor_diagnostic_setting" "nsg":
│ 443: count = var.existing_network_security_group_id == null && var.log_analytics_workspace_id != null ? 1 : 0
│
│ The "count" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created. To work around this, use the -target argument to first apply only the resources that the count depends on.

I think the misunderstanding is that you think the module creates a resource group, but that is not the case. This module expects an already existing resource group as var.resource_group_name (same goes for the input variables virtual_network_name, subnet_name and log_analytics_workspace_id).
The main difference between the resource_ and data_ prefix is that data sources are read-only and "only" fetch already existing infrastructure for further use:
Data sources allow Terraform to use information defined outside of Terraform, defined by another separate Terraform configuration, or modified by functions.
https://www.terraform.io/language/data-sources
So in your case it should work like (not tested):
# Azurerm provider configuration
provider "azurerm" {
features {}
}
# Creates a new resource group
resource "azurerm_resource_group" "test_build" {
name = "testBuild"
location = "West Europe"
}
# Creates a new network
resource "azurerm_virtual_network" "example" {
name = "example-network"
location = azurerm_resource_group.test_build.location
resource_group_name = azurerm_resource_group.test_build.name
address_space = ["10.0.0.0/16"]
dns_servers = ["10.0.0.4", "10.0.0.5"]
subnet {
name = "subnet1"
address_prefix = "10.0.1.0/24"
}
}
# Creates a new la workspace (location and sku are required arguments)
resource "azurerm_log_analytics_workspace" "la" {
name = "loganalytics-we-sharedtest2"
location = azurerm_resource_group.test_build.location
resource_group_name = azurerm_resource_group.test_build.name
sku = "PerGB2018"
}
module "virtual-machine" {
source = "kumarvna/virtual-machine/azurerm"
version = "2.3.0"
# Resource Group, location, VNet and Subnet details
resource_group_name = azurerm_resource_group.test_build.name
location = "westeurope"
virtual_network_name = azurerm_virtual_network.example.name
subnet_name = "subnet1"
virtual_machine_name = "vm-linux"
# This module supports multiple pre-defined Linux and Windows distributions.
# Check the README.md file for more pre-defined images for Ubuntu, Centos, RedHat.
# Please make sure to use gen2 images supported VM sizes if you use gen2 distributions
# Specify `disable_password_authentication = false` to create random admin password
# Specify a valid password with `admin_password` argument to use your own password
# To generate SSH key pair, specify `generate_admin_ssh_key = true`
# To use existing key pair, specify `admin_ssh_key_data` to a valid SSH public key path.
os_flavor = "linux"
linux_distribution_name = "ubuntu2004"
virtual_machine_size = "Standard_B2s"
generate_admin_ssh_key = true
instances_count = 2
# Proximity placement group, Availability Set and adding a Public IP to VMs are optional.
# Remove these arguments from the module if you don't want to use them.
enable_proximity_placement_group = true
enable_vm_availability_set = true
enable_public_ip_address = true
# Network Security group port allow definitions for each Virtual Machine
# NSG association to be added automatically for all network interfaces.
# Remove this NSG rules block, if `existing_network_security_group_id` is specified
nsg_inbound_rules = [
{
name = "ssh"
destination_port_range = "22"
source_address_prefix = "*"
},
{
name = "http"
destination_port_range = "80"
source_address_prefix = "*"
},
]
# Boot diagnostics to troubleshoot virtual machines, by default uses managed
# To use custom storage account, specify `storage_account_name` with a valid name
# Passing a `null` value will utilize a Managed Storage Account to store Boot Diagnostics
enable_boot_diagnostics = true
# Attach managed data disks to Windows/Linux VMs. Possible storage account types are:
# `Standard_LRS`, `StandardSSD_ZRS`, `Premium_LRS`, `Premium_ZRS`, `StandardSSD_LRS`
# or `UltraSSD_LRS` (UltraSSD_LRS only available in a region that support availability zones)
# Initialize a new data disk - you need to connect to the VM and run Disk Management or fdisk
data_disks = [
{
name = "disk1"
disk_size_gb = 100
storage_account_type = "StandardSSD_LRS"
},
{
name = "disk2"
disk_size_gb = 200
storage_account_type = "Standard_LRS"
}
]
# (Optional) To enable Azure Monitoring and install log analytics agents
# (Optional) Specify `storage_account_name` to save monitoring logs to storage.
log_analytics_workspace_id = azurerm_log_analytics_workspace.la.id
# Deploy log analytics agents to virtual machine.
# Log analytics workspace customer id and primary shared key required.
deploy_log_analytics_agent = true
log_analytics_customer_id = azurerm_log_analytics_workspace.la.workspace_id
log_analytics_workspace_primary_shared_key = azurerm_log_analytics_workspace.la.primary_shared_key
# Adding additional TAG's to your Azure resources
tags = {
ProjectName = "demo-project"
Env = "dev"
Owner = "user#example.com"
BusinessUnit = "CORP"
ServiceClass = "Gold"
}
}
Just adding a new variable called create_resource_group will not do anything as long as there is no corresponding logic/code behind it.
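As an added example (not from the question, the answer or the module), this is what wiring a create_resource_group flag to real logic looks like: create the group when the flag is true, otherwise read an existing one through a data source, and expose one local for everything else to use. The variable names and the try() selection are assumptions of this example, not the module's API.

```hcl
variable "create_resource_group" {
  description = "Create the resource group (true) or look up an existing one (false)"
  type        = bool
  default     = true
}

variable "resource_group_name" {
  type    = string
  default = "testBuild"
}

variable "location" {
  type    = string
  default = "westeurope"
}

# Created only when the flag is set. This count depends on a plain input
# variable, so it is known at plan time; the "Invalid count argument" error in
# the question arises when a count depends on an attribute (such as a workspace
# id) of a resource that does not exist yet.
resource "azurerm_resource_group" "managed" {
  count    = var.create_resource_group ? 1 : 0
  name     = var.resource_group_name
  location = var.location
}

# Read-only lookup of an already existing group when the flag is false.
data "azurerm_resource_group" "existing" {
  count = var.create_resource_group ? 0 : 1
  name  = var.resource_group_name
}

# One place for the rest of the configuration to refer to, whichever branch
# was taken; try() returns the first expression that evaluates without error.
locals {
  resource_group_name = try(azurerm_resource_group.managed[0].name, data.azurerm_resource_group.existing[0].name)
  location            = try(azurerm_resource_group.managed[0].location, data.azurerm_resource_group.existing[0].location)
}

resource "azurerm_log_analytics_workspace" "la" {
  name                = "loganalytics-we-sharedtest2"
  location            = local.location
  resource_group_name = local.resource_group_name
  sku                 = "PerGB2018"
}

module "virtual-machine" {
  source  = "kumarvna/virtual-machine/azurerm"
  version = "2.3.0"

  # The module only reads these; it never creates the group itself.
  resource_group_name = local.resource_group_name
  location            = local.location

  # The module's internal count checks whether this id is null, so the
  # workspace must exist before the module is planned: either apply it first
  # (terraform apply -target=azurerm_log_analytics_workspace.la) or reference
  # an existing workspace through a data source instead of creating it here.
  log_analytics_workspace_id = azurerm_log_analytics_workspace.la.id

  # Remaining arguments (network, image, size, NSG rules, disks, tags) as in
  # the module block in the answer above.
}
```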

Related

How to create a storage account for a remote state dynamically?

I know in order to have a remote state in my terraform code, I must create a storage account and a container. Usually, it is done manually, but I am trying to create the storage account and the container dynamically using the below code:
resource "azurerm_resource_group" "state_resource_group" {
name = "RG-Terraform-on-Azure"
location = "West Europe"
}
terraform {
backend "azurerm" {
resource_group_name = "RG-Terraform-on-Azure"
storage_account_name = azurerm_storage_account.state_storage_account.name
container_name = azurerm_storage_container.state_container.name
key = "terraform.tfstate"
}
}
resource "azurerm_storage_account" "state_storage_account" {
name = random_string.storage_account_name.result
resource_group_name = azurerm_resource_group.state_resource_group.name
location = azurerm_resource_group.state_resource_group.location
account_tier = "Standard"
account_replication_type = "LRS"
tags = {
environment = "staging"
}
}
resource "azurerm_storage_container" "state_container" {
name = "vhds"
storage_account_name = azurerm_storage_account.state_storage_account.name
container_access_type = "private"
}
resource "random_string" "storage_account_name" {
length = 14
lower = true
numeric = false
upper = false
special = false
}
But, the above code complains that:
│ Error: Variables not allowed
│
│ on main.tf line 11, in terraform:
│ 11: storage_account_name = azurerm_storage_account.state_storage_account.name
│
│ Variables may not be used here.
So, I already know that I cannot use variables in the backend block; however, I am wondering if there is a solution which enables me to create the storage account and the container dynamically and store the state file in there?
Point:
I have already seen this question, but the .conf file did not work for me!
This can't be done in the same Terraform file. The backend has to exist before anything else. Terraform requires the backend to exist when you run terraform init. The backend is accessed to read the state as the very first step Terraform performs when you do a plan or apply, before any resources are actually created.
In the past I've automated the creation of the storage backend using a CLI tool. If you wanted to automate it with terraform it would have to be in a separate Terraform workspace, but then where would the backend for that workspace be?
In general, it doesn't really work to create the backend in Terraform.
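Added example, not from the question or the answer: the two-stage pattern the answer describes, with a separate bootstrap configuration whose own state stays local and a partial backend block in the real configuration whose values are supplied at init time. The file layout, names and random suffix are assumptions.

```hcl
# --- bootstrap/main.tf : a separate configuration with local state that only
# creates the state storage. Run once: terraform init && terraform apply
terraform {
  required_providers {
    azurerm = { source = "hashicorp/azurerm" }
    random  = { source = "hashicorp/random" }
  }
}

provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "state" {
  name     = "RG-Terraform-on-Azure"
  location = "West Europe"
}

# Storage account names must be 3-24 lowercase alphanumerics and globally
# unique, hence the random suffix.
resource "random_string" "suffix" {
  length  = 8
  lower   = true
  upper   = false
  numeric = false
  special = false
}

resource "azurerm_storage_account" "state" {
  name                     = "tfstate${random_string.suffix.result}"
  resource_group_name      = azurerm_resource_group.state.name
  location                 = azurerm_resource_group.state.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

resource "azurerm_storage_container" "state" {
  name                  = "tfstate"
  storage_account_name  = azurerm_storage_account.state.name
  container_access_type = "private"
}

# The values the real configuration needs to point its backend at.
output "backend_config" {
  value = {
    resource_group_name  = azurerm_resource_group.state.name
    storage_account_name = azurerm_storage_account.state.name
    container_name       = azurerm_storage_container.state.name
  }
}

# --- app/backend.tf : the real configuration. The backend block is left
# empty (a "partial configuration") because it cannot contain expressions;
# the values come from the bootstrap output on the command line:
#   terraform init \
#     -backend-config="resource_group_name=RG-Terraform-on-Azure" \
#     -backend-config="storage_account_name=<name from bootstrap output>" \
#     -backend-config="container_name=tfstate" \
#     -backend-config="key=terraform.tfstate"
terraform {
  backend "azurerm" {}
}
```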

Terraform cloud run triggers with Azure

I'm having an issue with accessing my data "terraform_remote_state" objects.
So I'm following the HashiCorp site to deploy Azure resources with Terraform Cloud with run triggers. The trigger is working, running the plan for the second workspace, but it can't access the data I'm passing through the outputs.
I have set the "state" for the first workspace to be shared, and set the run trigger on the second workspace to be triggered by the 1st. No issues here.
I have tried to follow what is on the HashiCorp site, but it is for AWS, so maybe for Azure I have missed something. I will post my outputs, then some code for the second workspace.
Outputs, which I have looked at in the state file and look good:
output "rgName" {
description = "The resource group for resources"
value = var.rgName
}
output "location" {
description = "The location for resources"
value = var.location
}
output "subnet1_id" {
description = "subnet 1"
value = azurerm_subnet.subnet1.id
}
2nd workspace
data "terraform_remote_state" "network" {
backend = "remote"
config = {
organization = "Awesome-Company"
workspaces = {
name = "TFCloud-Trigger-Network"
}
}
}
provider "azurerm" {
version = "2.66.0"
subscription_id = var.subscription_id
client_id = var.client_id
client_secret = var.clientSecret
tenant_id = var.tenant_id
features {}
}
#Deploy Public IP
resource "azurerm_public_ip" "pip1" {
name = "TFC-pip1"
location = data.terraform_remote_state.network.outputs.location
resource_group_name = data.terraform_remote_state.network.outputs.rgName
allocation_method = "Dynamic"
sku = "Basic"
}
#Create NIC
resource "azurerm_network_interface" "nic1" {
name = "TFC-TestVM-Nic"
location = data.terraform_remote_state.network.outputs.location
resource_group_name = data.terraform_remote_state.network.outputs.rgName
ip_configuration {
name = "ipconfig1"
subnet_id = data.terraform_remote_state.network.outputs.subnet1_id
private_ip_address_allocation = "Dynamic"
public_ip_address_id = azurerm_public_ip.pip1.id
}
}
The error is
│ Error: Unsupported attribute
│
│   on main.tf line 26, in resource "azurerm_public_ip" "pip1":
│   26:   location = data.terraform_remote_state.network.outputs.location
│     ├────────────────
│     │ data.terraform_remote_state.network.outputs is object with no attributes
│
│ This object does not have an attribute named "location".
I can't access data.terraform_remote_state.network.outputs.
So, I figured this out and it is not in the documentation. A workspace that is triggered by another workspace will not automatically update its Terraform plan.
Normally when I edit the code in GitHub (or another repo) Terraform Cloud will automatically run a plan once you have saved that new code. A workspace that is triggered by another will not do that. So, even though I changed the code, I had to manually go to TF Cloud, discard the current run on that triggered workspace, and re-run the plan. After this, the run trigger would successfully run.
It was a weird thing...

Unable to change an azure subnet using terraform

I am new to terraform and want to change the subnet on a network and I am getting a weird error.
Google got nothing. Here's what I am entering (after changing the main.tf and running plan):
terraform apply -replace="azurerm_subnet.subnet1"
Terraform will perform the following actions:
# module.network.azurerm_subnet.subnet[0] will be updated in-place
~ resource "azurerm_subnet" "subnet" {
~ address_prefixes = [
- "10.0.2.0/24",
+ "10.0.4.0/24",
]
id = "/subscriptions/xxxxxxxxxxxxxxxxxxxxxxxxx/resourceGroups/lab-resources/providers/Microsoft.Network/virtualNetworks/acctvnet/subnets/subnet1"
name = "subnet1"
# (7 unchanged attributes hidden)
}
Plan: 0 to add, 1 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
I enter yes and I get this error:
Error: updating Subnet: (Name "subnet1" / Virtual Network Name "acctvnet" / Resource Group "lab-resources"): network.SubnetsClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="InUseSubnetCannotBeUpdated" Message="Subnet subnet1 is in use and cannot be updated." Details=[]
│
│ with module.network.azurerm_subnet.subnet[0],
│ on .terraform/modules/network/main.tf line 15, in resource "azurerm_subnet" "subnet":
│ 15: resource "azurerm_subnet" "subnet" {
│
The VM is off and I do not see what else can be using it.
I also tried using the terraform taint "azurerm_subnet.subnet1"
Any ideas? Is what I am doing not possible?
Here is my main.tf
terraform {
required_version = ">=0.12"
required_providers {
azurerm = {
source = "hashicorp/azurerm"
version = "~>2.0"
}
}
}
provider "azurerm" {
features {}
}
resource "azurerm_resource_group" "lab_autodeploy" {
name = "lab-resources"
location = "East US 2"
}
module "Windowsservers" {
source = "Azure/compute/azurerm"
resource_group_name = azurerm_resource_group.lab_autodeploy.name
is_windows_image = true
vm_hostname = "new_ddc" // line can be removed if only one VM module per resource group
size = "Standard_F2"
admin_password = "$omePassw0rd"
vm_os_simple = "WindowsServer"
public_ip_dns = ["srv"] // change to a unique name per datacenter region
vnet_subnet_id = module.network.vnet_subnets[0]
depends_on = [azurerm_resource_group.lab_autodeploy]
}
module "network" {
source = "Azure/network/azurerm"
resource_group_name = azurerm_resource_group.lab_autodeploy.name
subnet_prefixes = ["10.4.0.0/24"]
subnet_names = ["subnet1"]
depends_on = [azurerm_resource_group.lab_autodeploy]
}
output "windows_vm_public_name" {
value = module.Windowsservers.public_ip_dns_name
}
This isn't an issue specific to Terraform - in Azure you cannot change a subnet that has things attached to it. The fact that the VM is powered off makes no difference.
To get around this without destroying the VM, you could move the NIC to a different subnet (create a temporary subnet if necessary), perform the address space change and then move the NIC back.

Terraform tried creating a "implicit dependency" but the next stage of my code still fails to find the Azure resource group just created

Would be grateful for any assistance, I thought I had nailed this one when I stumbled across the following link ...
Creating a resource group with terraform in azure: Cannot find resource group directly after creating it
However, the next stage of my code is still failing...
Error: Code="ResourceGroupNotFound" Message="Resource group 'ShowTell' could not be found
# We strongly recommend using the required_providers block to set the
# Azure Provider source and version being used
terraform {
required_providers {
azurerm = {
source = "hashicorp/azurerm"
version = "=2.64.0"
}
}
}
# Configure the Microsoft Azure Provider
provider "azurerm" {
features {}
}
variable "resource_group_name" {
type = string
default = "ShowTell"
description = ""
}
# Create your resource group
resource "azurerm_resource_group" "example" {
name = var.resource_group_name
location = "UK South"
}
# Should be accessible from LukesContainer.uksouth.azurecontainer.io
resource "azurerm_container_group" "LukesContainer" {
name = "LukesContainer"
location = "UK South"
resource_group_name = "${var.resource_group_name}"
ip_address_type = "public"
dns_name_label = "LukesContainer"
os_type = "Linux"
container {
name = "hello-world"
image = "microsoft/aci-helloworld:latest"
cpu = "0.5"
memory = "1.5"
ports {
port = "443"
protocol = "TCP"
}
}
container {
name = "sidecar"
image = "microsoft/aci-tutorial-sidecar"
cpu = "0.5"
memory = "1.5"
}
tags = {
environment = "testing"
}
}
In order to create an implicit dependency you must refer directly to the object that the dependency relates to. In your case, that means deriving the resource group name from the resource group object itself, rather than from the variable you'd used to configure that object:
resource "azurerm_container_group" "LukesContainer" {
name = "LukesContainer"
location = "UK South"
resource_group_name = azurerm_resource_group.example.name
# ...
}
With the configuration you included in your question, both the resource group and the container group depend on var.resource_group_name, but there was no dependency between azurerm_container_group.LukesContainer and azurerm_resource_group.example, so Terraform was free to create those two objects in either order.
By deriving the container group's resource group name from the resource group object you tell Terraform that the resource group must be processed first, and then its results used to populate the container group.

Terraform reports a change to Application Insights key on every plan that is run

I have several Azure resources that are created using the for_each property and then those resources have an Application Insights resource created using for_each as well.
Here is the code that creates the azurerm_application_insights:
resource "azurerm_application_insights" "applicationInsights" {
for_each = toset(keys(merge(local.appServices, local.functionApps)))
name = lower(join("-", ["wb", var.deploymentEnvironment, var.location, each.key, "ai"]))
location = var.location
resource_group_name = azurerm_resource_group.rg.name
application_type = "web"
lifecycle {
ignore_changes = [tags]
}
}
I've noticed that every time we run a terraform plan against some environments, we are always seeing Terraform report a "change" to the APPINSIGHTS_INSTRUMENTATIONKEY value. When I compare this value in the app settings key/value list to the actual AI instrumentation key that was created for it, it does match.
Terraform will perform the following actions:
# module.primaryRegion.module.functionapp["fnapp1"].azurerm_function_app.fnapp will be updated in-place
~ resource "azurerm_function_app" "fnapp" {
~ app_settings = {
# Warning: this attribute value will be marked as sensitive and will
# not display in UI output after applying this change
~ "APPINSIGHTS_INSTRUMENTATIONKEY" = (sensitive)
# (1 unchanged element hidden)
Is this a common issue with other people? I would think that the instrumentation key would never change especially since Terraform is what created all of these Application Insights resources and assigns it to each application.
This is how I associate each Application Insights resource with its appropriate application with a for_each property:
module "webapp" {
for_each = local.appServices
source = "../webapp"
name = lower(join("-", ["wb", var.deploymentEnvironment, var.location, each.key, "app"]))
location = var.location
resource_group_name = azurerm_resource_group.rg.name
app_service_plan_id = each.value.app_service_plan_id
app_settings = merge({"APPINSIGHTS_INSTRUMENTATIONKEY" = azurerm_application_insights.applicationInsights[each.key].instrumentation_key}, each.value.app_settings)
allowed_origins = each.value.allowed_origins
deploymentEnvironment = var.deploymentEnvironment
}
I'm wondering if the merge is just reordering the list of key/values in the app_settings for the app, and Terraform detects that as some kind of change and the value itself isn't changing. This is the only way I know how to assign a bunch of Application Insights resources to many web apps using for_each to reduce configuration code.
Use only the site_config block to solve the issue. Example:
resource "azurerm_windows_function_app" "function2" {
provider = azurerm.private
name = local.private.functionapps.function2.name
resource_group_name = local.private.rg.app.name
location = local.private.location
storage_account_name = local.private.functionapps.storageaccount.name
storage_account_access_key = azurerm_storage_account.function_apps_storage.primary_access_key
service_plan_id = azurerm_service_plan.app_service_plan.id
virtual_network_subnet_id = lookup(azurerm_subnet.subnets, "appservice").id
https_only = true
site_config {
application_insights_key = azurerm_application_insights.appinisghts.instrumentation_key
}
}