Unable to change an Azure subnet using Terraform

I am new to Terraform and want to change the subnet on a network, but I am getting a weird error that Google turned up nothing on. Here's what I am running (after changing main.tf and running plan):
terraform apply -replace="azurerm_subnet.subnet1"
Terraform will perform the following actions:
  # module.network.azurerm_subnet.subnet[0] will be updated in-place
  ~ resource "azurerm_subnet" "subnet" {
      ~ address_prefixes = [
          - "10.0.2.0/24",
          + "10.0.4.0/24",
        ]
        id               = "/subscriptions/xxxxxxxxxxxxxxxxxxxxxxxxx/resourceGroups/lab-resources/providers/Microsoft.Network/virtualNetworks/acctvnet/subnets/subnet1"
        name             = "subnet1"
        # (7 unchanged attributes hidden)
    }
Plan: 0 to add, 1 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
I enter yes and I get this error:
Error: updating Subnet: (Name "subnet1" / Virtual Network Name "acctvnet" / Resource Group "lab-resources"): network.SubnetsClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="InUseSubnetCannotBeUpdated" Message="Subnet subnet1 is in use and cannot be updated." Details=[]
│
│ with module.network.azurerm_subnet.subnet[0],
│ on .terraform/modules/network/main.tf line 15, in resource "azurerm_subnet" "subnet":
│ 15: resource "azurerm_subnet" "subnet" {
│
The VM is off and I do not see what else could be using the subnet.
I also tried terraform taint "azurerm_subnet.subnet1".
Any ideas? Is what I am trying to do not possible?
Here is my main.tf:
terraform {
required_version = ">=0.12"
required_providers {
azurerm = {
source = "hashicorp/azurerm"
version = "~>2.0"
}
}
}
provider "azurerm" {
features {}
}
resource "azurerm_resource_group" "lab_autodeploy" {
name = "lab-resources"
location = "East US 2"
}
module "Windowsservers" {
source = "Azure/compute/azurerm"
resource_group_name = azurerm_resource_group.lab_autodeploy.name
is_windows_image = true
vm_hostname = "new_ddc" // line can be removed if only one VM module per resource group
size = "Standard_F2"
admin_password = "$omePassw0rd"
vm_os_simple = "WindowsServer"
public_ip_dns = ["srv"] // change to a unique name per datacenter region
vnet_subnet_id = module.network.vnet_subnets[0]
depends_on = [azurerm_resource_group.lab_autodeploy]
}
module "network" {
source = "Azure/network/azurerm"
resource_group_name = azurerm_resource_group.lab_autodeploy.name
subnet_prefixes = ["10.4.0.0/24"]
subnet_names = ["subnet1"]
depends_on = [azurerm_resource_group.lab_autodeploy]
}
output "windows_vm_public_name" {
value = module.windowsservers.public_ip_dns_name
}

This isn't an issue specific to Terraform - in Azure you cannot update a subnet that still has resources (such as a NIC) attached to it. The fact that the VM is powered off makes no difference, because its NIC remains attached to the subnet.
To get around this without destroying the VM, you could move the NIC to a different subnet (creating a temporary subnet if necessary), perform the address space change, and then move the NIC back.
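One way to do that without leaving Terraform, based on the configuration above, would be roughly the following (an untested sketch; the temporary subnet's name and address range are placeholders):

# Step 1: add a temporary subnet and point the VM module's NIC at it, then apply.
resource "azurerm_subnet" "temp" {
  name                 = "subnet-temp"                              # placeholder name
  resource_group_name  = azurerm_resource_group.lab_autodeploy.name
  virtual_network_name = "acctvnet"                                 # the VNet created by the network module
  address_prefixes     = ["10.0.99.0/24"]                           # any unused range in the VNet
}

# In the existing "Windowsservers" module block, temporarily change:
#   vnet_subnet_id = module.network.vnet_subnets[0]
# to:
#   vnet_subnet_id = azurerm_subnet.temp.id

# Step 2: with subnet1 now empty, change subnet_prefixes in the network module and apply again.
# Step 3: switch vnet_subnet_id back to module.network.vnet_subnets[0], apply, and remove the temporary subnet.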

Related

Azure vm snapshot using terraform throwing error

I have written a small Terraform script to take snapshots of two VMs on Azure. I have created two lists with the resource group details and OS disk names. Below are the necessary files.
main.tf
terraform {
required_providers {
azurerm = {
source = "hashicorp/azurerm"
version = "~> 3.0.2"
}
}
required_version = ">= 1.1.0"
}
provider "azurerm" {
features {}
}
data "azurerm_managed_disk" "existing" {
for_each = zipmap(var.cloud_resource_group_list,var.cloud_vm_os_disk_name)
name = each.value
resource_group_name = each.key
}
resource "azurerm_snapshot" "example" {
name = "snapshot"
for_each = ([for i in data.azurerm_managed_disk.existing: zipmap(i.resource_group_name, i.name)])
location = data.azurerm_managed_disk.existing[each.key].location
resource_group_name = data.azurerm_managed_disk.existing[each.key]
create_option = "Copy"
source_uri = data.azurerm_managed_disk.existing[each.value].id
}
variables.tf
variable "cloud_resource_group_list" {
description = "VM resource group name"
type = list(string)
}
variable "cloud_vm_os_disk_name" {
description = "VM OS disk names"
type = list(string)
}
terraform.tfvars
cloud_resource_group_list = ["rg1", "rg2"]
cloud_vm_os_disk_name = ["disk1", "disk2"]
terraform validate runs successfully. When I run terraform apply, the first resource group is read successfully but it fails for the second resource group. Below is the error.
terraform apply
data.azurerm_managed_disk.existing["rg1"]: Reading...
data.azurerm_managed_disk.existing["rg1"]: Reading...
data.azurerm_managed_disk.existing["disk1"]: Read complete after 1s
╷
│ Error: Managed Disk: (Disk Name "disk2" / Resource Group "rg2") was not found
│
│ with data.azurerm_managed_disk.existing["rg2"],
│ on main.tf line 22, in data "azurerm_managed_disk" "existing":
│ 22: data "azurerm_managed_disk" "existing" {
Both rg2 and disk2 exist in the Azure portal. Please help me find where I am wrong and why it's not working.

Error in terraform module mainly to do with log analytics

I am in the process of learning Terraform, currently on the subject of modules, and I have decided to create resources on my Azure account only with Terraform as a way to accelerate my learning. To that end, I found this GitHub repo: https://github.com/kumarvna/terraform-azurerm-virtual-machine
I have been following its contents and trying to reproduce them on my test system. I have tried to contact the author to no avail, and since I feel I have already wasted two weeks trying to fix the problem, let me ask here for help.
My setup:
Pulled the code from the repo onto my laptop.
Logged onto my Azure account from a PowerShell console.
Created a folder called create_vm.
In that folder, my main.tf file contains the following. This is a Linux example, but I had the same issues with a Windows example as well.
# Azurerm provider configuration
provider "azurerm" {
features {}
}
# Creates a new resource group
resource "azurerm_resource_group" "test_build" {
name = "testBuild"
location = "West Europe"
}
# Creates a new network
resource "azurerm_virtual_network" "example" {
name = "example-network"
location = azurerm_resource_group.test_build.location
resource_group_name = azurerm_resource_group.test_build.name
address_space = ["10.0.0.0/16"]
dns_servers = ["10.0.0.4", "10.0.0.5"]
subnet {
name = "subnet1"
address_prefix = "10.0.1.0/24"
}
}
# Creates a new la workspace
resource "azurerm_log_analytics_workspace" "la" {
name = "loganalytics-we-sharedtest2"
resource_group_name = azurerm_resource_group.test_build.name
}
module "virtual-machine" {
source = "kumarvna/virtual-machine/azurerm"
version = "2.3.0"
# Resource Group, location, VNet and Subnet details
resource_group_name = azurerm_resource_group.test_build.name
location = "westeurope"
virtual_network_name = azurerm_virtual_network.example.name
subnet_name = "subnet1"
virtual_machine_name = "vm-linux"
# This module supports multiple pre-defined Linux and Windows distributions.
# Check the README.md file for more pre-defined images for Ubuntu, Centos, RedHat.
# Please make sure to use gen2 images supported VM sizes if you use gen2 distributions
# Specify `disable_password_authentication = false` to create random admin password
# Specify a valid password with `admin_password` argument to use your own password
# To generate SSH key pair, specify `generate_admin_ssh_key = true`
# To use existing key pair, specify `admin_ssh_key_data` to a valid SSH public key path.
os_flavor = "linux"
linux_distribution_name = "ubuntu2004"
virtual_machine_size = "Standard_B2s"
generate_admin_ssh_key = true
instances_count = 2
# Proximity placement group, Availability Set and adding Public IPs to VMs are optional.
# Remove these arguments from the module if you don't want to use them.
enable_proximity_placement_group = true
enable_vm_availability_set = true
enable_public_ip_address = true
# Network Security group port allow definitions for each Virtual Machine
# NSG association to be added automatically for all network interfaces.
# Remove this NSG rules block, if `existing_network_security_group_id` is specified
nsg_inbound_rules = [
{
name = "ssh"
destination_port_range = "22"
source_address_prefix = "*"
},
{
name = "http"
destination_port_range = "80"
source_address_prefix = "*"
},
]
# Boot diagnostics to troubleshoot virtual machines, by default uses managed
# To use custom storage account, specify `storage_account_name` with a valid name
# Passing a `null` value will utilize a Managed Storage Account to store Boot Diagnostics
enable_boot_diagnostics = true
# Attach a managed data disk to a Windows/Linux VM. Possible storage account types are:
# `Standard_LRS`, `StandardSSD_ZRS`, `Premium_LRS`, `Premium_ZRS`, `StandardSSD_LRS`
# or `UltraSSD_LRS` (UltraSSD_LRS only available in a region that support availability zones)
# Initialize a new data disk - you need to connect to the VM and run disk management or fdisk
data_disks = [
{
name = "disk1"
disk_size_gb = 100
storage_account_type = "StandardSSD_LRS"
},
{
name = "disk2"
disk_size_gb = 200
storage_account_type = "Standard_LRS"
}
]
# (Optional) To enable Azure Monitoring and install log analytics agents
# (Optional) Specify `storage_account_name` to save monitoring logs to storage.
log_analytics_workspace_id = azurerm_log_analytics_workspace.la.id
# Deploy log analytics agents to virtual machine.
# Log analytics workspace customer id and primary shared key required.
deploy_log_analytics_agent = true
log_analytics_customer_id = azurerm_log_analytics_workspace.la.workspace_id
log_analytics_workspace_primary_shared_key = azurerm_log_analytics_workspace.la.primary_shared_key
# Adding additional TAG's to your Azure resources
tags = {
ProjectName = "demo-project"
Env = "dev"
Owner = "user@example.com"
BusinessUnit = "CORP"
ServiceClass = "Gold"
}
}
In variables.tf:
variable "log_analytics_workspace_name" {
description = "The name of log analytics workspace name"
default = null
}
variable "storage_account_name" {
description = "The name of the hub storage account to store logs"
default = null
}
variable "create_resource_group" {
description = "Whether to create resource group and use it for all networking resources"
default = true
}
Please note that I added the create_resource_group variable to try to resolve my issue to no avail.
I then run
terraform init
terraform plan
I get the following error with terraform plan
│ Error: Error: Log Analytics workspaces "loganalytics-we-sharedtest2" (Resource Group "rg-shared-westeurope-01") was not found
│
│ with data.azurerm_log_analytics_workspace.example,
│ on main.tf line 6, in data "azurerm_log_analytics_workspace" "example":
│ 6: data "azurerm_log_analytics_workspace" "example" {
│
╵
╷
│ Error: Error: Resource Group "rg-shared-westeurope-01" was not found
│
│ with module.virtual-machine.data.azurerm_resource_group.rg,
│ on .terraform\modules\virtual-machine\main.tf line 27, in data "azurerm_resource_group" "rg":
│ 27: data "azurerm_resource_group" "rg" {
│
What have I done?
Looked through the code to see what I am missing. Added the variable at the top. Tried to contact the author, to no avail.
Tried to use an existing resource group, although I feel this defeats the purpose of having a variable that asks whether a new resource group should be created in case it doesn't already exist.
What else is confusing?
I initially had another folder for modules; I later came to realise that the module is a public one that is pulled down whenever I run terraform init. Is there a way to have this as a localised module?
I have made the changes recommended by the answer below; however, in order not to turn the question into a long-winded one, I have only placed the resulting error below.
│ Error: Error: Subnet: (Name "subnet1" / Virtual Network Name "testBuild_vnet" / Resource Group "testBuild") was not found
│
│ with module.virtual-machine.data.azurerm_subnet.snet,
│ on .terraform\modules\virtual-machine\main.tf line 36, in data "azurerm_subnet" "snet":
│ 36: data "azurerm_subnet" "snet" {
│
╵
╷
│ Error: Invalid count argument
│
│ on .terraform\modules\virtual-machine\main.tf line 443, in resource "azurerm_monitor_diagnostic_setting" "nsg":
│ 443: count = var.existing_network_security_group_id == null && var.log_analytics_workspace_id != null ? 1 : 0
│
│ The "count" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created. To work around this, use the -target argument to first apply only the resources that the count depends on.
I think the misunderstanding is that you think the module creates a resource group, but that is not the case. This module expects an already existing resource group as var.resource_group_name (same goes for the input variables virtual_network_name, subnet_name and log_analytics_workspace_id).
The main difference between the resource and data prefixes is that data sources are read-only and "only" fetch already existing infrastructure for further use:
Data sources allow Terraform to use information defined outside of
Terraform, defined by another separate Terraform configuration, or
modified by functions.
https://www.terraform.io/language/data-sources
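As a minimal illustration of that difference (the names here are arbitrary):

# resource: Terraform creates (and manages) the object
resource "azurerm_resource_group" "new_rg" {
  name     = "my-new-rg"
  location = "West Europe"
}

# data: Terraform only reads an object that must already exist
data "azurerm_resource_group" "existing_rg" {
  name = "my-existing-rg"
}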
So in your case it should work like (not tested):
# Azurerm provider configuration
provider "azurerm" {
features {}
}
# Creates a new resource group
resource "azurerm_resource_group" "test_build" {
name = "testBuild"
location = "West Europe"
}
# Creates a new network
resource "azurerm_virtual_network" "example" {
name = "example-network"
location = azurerm_resource_group.test_build.location
resource_group_name = azurerm_resource_group.test_build.name
address_space = ["10.0.0.0/16"]
dns_servers = ["10.0.0.4", "10.0.0.5"]
subnet {
name = "subnet1"
address_prefix = "10.0.1.0/24"
}
}
# Creates a new la workspace
resource "azurerm_log_analytics_workspace" "la" {
name = "loganalytics-we-sharedtest2"
resource_group_name = azurerm_resource_group.test_build.name
}
module "virtual-machine" {
source = "kumarvna/virtual-machine/azurerm"
version = "2.3.0"
# Resource Group, location, VNet and Subnet details
resource_group_name = azurerm_resource_group.test_build.name
location = "westeurope"
virtual_network_name = azurerm_virtual_network.example.name
subnet_name = "subnet1"
virtual_machine_name = "vm-linux"
# This module supports multiple pre-defined Linux and Windows distributions.
# Check the README.md file for more pre-defined images for Ubuntu, Centos, RedHat.
# Please make sure to use gen2 images supported VM sizes if you use gen2 distributions
# Specify `disable_password_authentication = false` to create random admin password
# Specify a valid password with `admin_password` argument to use your own password
# To generate SSH key pair, specify `generate_admin_ssh_key = true`
# To use existing key pair, specify `admin_ssh_key_data` to a valid SSH public key path.
os_flavor = "linux"
linux_distribution_name = "ubuntu2004"
virtual_machine_size = "Standard_B2s"
generate_admin_ssh_key = true
instances_count = 2
# Proximity placement group, Availability Set and adding Public IPs to VMs are optional.
# Remove these arguments from the module if you don't want to use them.
enable_proximity_placement_group = true
enable_vm_availability_set = true
enable_public_ip_address = true
# Network Security group port allow definitions for each Virtual Machine
# NSG association to be added automatically for all network interfaces.
# Remove this NSG rules block, if `existing_network_security_group_id` is specified
nsg_inbound_rules = [
{
name = "ssh"
destination_port_range = "22"
source_address_prefix = "*"
},
{
name = "http"
destination_port_range = "80"
source_address_prefix = "*"
},
]
# Boot diagnostics to troubleshoot virtual machines, by default uses managed
# To use custom storage account, specify `storage_account_name` with a valid name
# Passing a `null` value will utilize a Managed Storage Account to store Boot Diagnostics
enable_boot_diagnostics = true
# Attach a managed data disk to a Windows/Linux VM. Possible storage account types are:
# `Standard_LRS`, `StandardSSD_ZRS`, `Premium_LRS`, `Premium_ZRS`, `StandardSSD_LRS`
# or `UltraSSD_LRS` (UltraSSD_LRS only available in a region that support availability zones)
# Initialize a new data disk - you need to connect to the VM and run disk management or fdisk
data_disks = [
{
name = "disk1"
disk_size_gb = 100
storage_account_type = "StandardSSD_LRS"
},
{
name = "disk2"
disk_size_gb = 200
storage_account_type = "Standard_LRS"
}
]
# (Optional) To enable Azure Monitoring and install log analytics agents
# (Optional) Specify `storage_account_name` to save monitoring logs to storage.
log_analytics_workspace_id = azurerm_log_analytics_workspace.la.id
# Deploy log analytics agents to virtual machine.
# Log analytics workspace customer id and primary shared key required.
deploy_log_analytics_agent = true
log_analytics_customer_id = azurerm_log_analytics_workspace.la.workspace_id
log_analytics_workspace_primary_shared_key = azurerm_log_analytics_workspace.la.primary_shared_key
# Adding additional TAG's to your Azure resources
tags = {
ProjectName = "demo-project"
Env = "dev"
Owner = "user@example.com"
BusinessUnit = "CORP"
ServiceClass = "Gold"
}
}
Just adding a new variable called create_resource_group will not do anything as long as there is no corresponding logic/code behind it.
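To illustrate what such logic could look like (a rough sketch, not code from the module above), the flag typically has to switch between creating the group and reading an existing one:

variable "create_resource_group" {
  description = "Whether to create the resource group or use an existing one"
  type        = bool
  default     = true
}

# Created only when the flag is true
resource "azurerm_resource_group" "this" {
  count    = var.create_resource_group ? 1 : 0
  name     = "testBuild"
  location = "West Europe"
}

# Read only when the flag is false
data "azurerm_resource_group" "this" {
  count = var.create_resource_group ? 0 : 1
  name  = "testBuild"
}

locals {
  resource_group_name = var.create_resource_group ? azurerm_resource_group.this[0].name : data.azurerm_resource_group.this[0].name
}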

Creating resources in terraform in Azure using existing resource and creating new one

I am having difficulty creating resources in Azure using Terraform.
The VNet is already present and it is in the resource group named rg.
That resource group exists and the VNet is in it.
I am creating one new subnet in another existing resource group named MB-TB-Dev.
Next I will be creating two VMs, one Red Hat Linux and one Windows.
I am using the code below:
// Configure the Microsoft Azure Provider
provider "azurerm" {
features {}
subscription_id = "xxxxxxxxxx"
}
// Source code for the Resource Group i want my subnet in that
data "azurerm_resource_group" "rg_name" {
name = "MB-Tb-Dev"
}
output "id" {
value = data.azurerm_resource_group.rg_name.id
}
// vnet already define already present in another resource group
data "azurerm_virtual_network" "vnet" {
name = "sknet"
resource_group_name = "rg"
}
output "virtual_network_id" {
value = data.azurerm_virtual_network.vnet.id
}
// Subnet creation
resource "azurerm_subnet" "subnet1" {
name = "FrontEnd"
resource_group_name = "${data.azurerm_resource_group.rg_name.name}"
virtual_network_name = "${data.azurerm_virtual_network.vnet.id}"
address_prefixes = ["10.0.1.0/24"]
}
I am getting this error when I run terraform apply:
Error: creating Subnet: (Name "FrontEnd" / Virtual Network Name "/subscriptions/XXXXXX-XXXXXXXX-a/resourceGroups/rg/providers/Microsoft.Network/virtualNetworks/sknet" / Resource Group "MB-Tb-Dev"): network.SubnetsClient#CreateOrUpdate: Failure sending request: StatusCode=404 -- Original Error: Code="ResourceNotFound" Message="The Resource 'Microsoft.Network/virtualNetworks/subscriptions' under resource group 'MB-Tb-Dev' was not found."
│
│ with azurerm_subnet.subnet1,
│ on subnet-main.tf line 34, in resource "azurerm_subnet" "subnet1":
│ 34: resource "azurerm_subnet" "subnet1" {
│
╵
Have you checked the values you are passing for the virtual network and the resource group?
The error suggests there is an issue with how those parameters are defined: the full virtual network resource ID is being used where a name is expected (virtual_network_name should be given the VNet's name, not data.azurerm_virtual_network.vnet.id), and the subnet has to be created in the resource group that contains the VNet (rg), not in MB-Tb-Dev.
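For reference, a corrected version of the subnet resource could look roughly like this (untested, based on the configuration above):

resource "azurerm_subnet" "subnet1" {
  name                 = "FrontEnd"
  # A subnet must be created in the same resource group as its virtual network
  resource_group_name  = data.azurerm_virtual_network.vnet.resource_group_name
  # Pass the VNet's name here, not its full resource ID
  virtual_network_name = data.azurerm_virtual_network.vnet.name
  address_prefixes     = ["10.0.1.0/24"]
}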

Terraform cloud run triggers with Azure

I'm having an issue accessing my data "terraform_remote_state" objects.
I'm following the HashiCorp site to deploy Azure resources with Terraform Cloud using run triggers. The trigger is working and runs the plan for the second workspace, but it can't access the data I'm passing through the outputs.
I have set the state for the first workspace to be shared, and set the run trigger on the second workspace to be triggered by the first. No issues here.
I have tried to follow what is on the HashiCorp site, but it is for AWS, so maybe for Azure I have missed something. I will post my outputs, then some code for the second workspace.
Outputs (which I have looked at in the state file and they look good):
output "rgName" {
description = "The resource group for resources"
value = var.rgName
}
output "location" {
description = "The location for resources"
value = var.location
}
output "subnet1_id" {
description = "subnet 1"
value = azurerm_subnet.subnet1.id
}
2nd workspace
data "terraform_remote_state" "network" {
backend = "remote"
config = {
organization = "Awesome-Company"
workspaces = {
name = "TFCloud-Trigger-Network"
}
}
}
provider "azurerm" {
version = "2.66.0"
subscription_id = var.subscription_id
client_id = var.client_id
client_secret = var.clientSecret
tenant_id = var.tenant_id
features{}
}
#Deploy Public IP
resource "azurerm_public_ip" "pip1" {
name = "TFC-pip1"
location = data.terraform_remote_state.network.outputs.location
resource_group_name = data.terraform_remote_state.network.outputs.rgName
allocation_method = "Dynamic"
sku = "Basic"
}
#Create NIC
resource "azurerm_network_interface" "nic1" {
name = "TFC-TestVM-Nic"
location = data.terraform_remote_state.network.outputs.location
resource_group_name = data.terraform_remote_state.network.outputs.rgName
ip_configuration {
name = "ipconfig1"
subnet_id = data.terraform_remote_state.network.outputs.subnet1_id
private_ip_address_allocation = "Dynamic"
public_ip_address_id = azurerm_public_ip.pip1.id
}
}
The error is
Error: Unsupported attribute

  on main.tf line 26, in resource "azurerm_public_ip" "pip1":
  26:   location = data.terraform_remote_state.network.outputs.location
    ├────────────────
    │ data.terraform_remote_state.network.outputs is object with no attributes

This object does not have an attribute named "location".
I can't access the data.terraform_remote_state.network.outputs
So, I figured this out, and it is not in the documentation. A workspace that is triggered by another workspace will not automatically update its Terraform plan.
Normally, when I edit the code in GitHub (or another repo), Terraform Cloud will automatically run a plan once the new code is saved. A workspace that is triggered by another will not do that. So, even though I changed the code, I had to manually go to Terraform Cloud, discard the current run on the triggered workspace, and re-run the plan. After that, the run trigger ran successfully.
It was a weird thing...

Terraform tried creating an "implicit dependency" but the next stage of my code still fails to find the Azure resource group just created

I would be grateful for any assistance. I thought I had nailed this one when I stumbled across the following link:
Creating a resource group with terraform in azure: Cannot find resource group directly after creating it
However, the next stage of my code is still failing...
Error: Code="ResourceGroupNotFound" Message="Resource group 'ShowTell' could not be found
# We strongly recommend using the required_providers block to set the
# Azure Provider source and version being used
terraform {
required_providers {
azurerm = {
source = "hashicorp/azurerm"
version = "=2.64.0"
}
}
}
# Configure the Microsoft Azure Provider
provider "azurerm" {
features {}
}
variable "resource_group_name" {
type = string
default = "ShowTell"
description = ""
}
# Create your resource group
resource "azurerm_resource_group" "example" {
name = var.resource_group_name
location = "UK South"
}
# Should be accessible from LukesContainer.uksouth.azurecontainer.io
resource "azurerm_container_group" "LukesContainer" {
name = "LukesContainer"
location = "UK South"
resource_group_name = "${var.resource_group_name}"
ip_address_type = "public"
dns_name_label = "LukesContainer"
os_type = "Linux"
container {
name = "hello-world"
image = "microsoft/aci-helloworld:latest"
cpu = "0.5"
memory = "1.5"
ports {
port = "443"
protocol = "TCP"
}
}
container {
name = "sidecar"
image = "microsoft/aci-tutorial-sidecar"
cpu = "0.5"
memory = "1.5"
}
tags = {
environment = "testing"
}
}
In order to create an implicit dependency you must refer directly to the object that the dependency relates to. In your case, that means deriving the resource group name from the resource group object itself, rather than from the variable you'd used to configure that object:
resource "azurerm_container_group" "LukesContainer" {
name = "LukesContainer"
location = "UK South"
resource_group_name = azurerm_resource_group.example.name
# ...
}
With the configuration you included in your question, both the resource group and the container group depend on var.resource_group_name, but there was no dependency between azurerm_container_group.LukesContainer and azurerm_resource_group.example, so Terraform was free to create those two objects in either order.
By deriving the container group's resource group name from the resource group object you tell Terraform that the resource group must be processed first, and then its results used to populate the container group.
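For completeness, the same ordering could also be forced with an explicit depends_on, though deriving the name from the resource as shown above is the more idiomatic fix:

resource "azurerm_container_group" "LukesContainer" {
  name                = "LukesContainer"
  location            = "UK South"
  resource_group_name = var.resource_group_name
  # Explicitly tell Terraform to create the resource group first
  depends_on          = [azurerm_resource_group.example]
  # ... remaining arguments unchanged ...
}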
