Azure VM snapshot using Terraform throwing error

I have written a small Terraform script to take snapshots of two VMs running on Azure. I have created two lists with the resource group names and OS disk names. Below are the necessary files.
main.tf
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0.2"
    }
  }

  required_version = ">= 1.1.0"
}

provider "azurerm" {
  features {}
}
data "azurerm_managed_disk" "existing" {
for_each = zipmap(var.cloud_resource_group_list,var.cloud_vm_os_disk_name)
name = each.value
resource_group_name = each.key
}
resource "azurerm_snapshot" "example" {
name = "snapshot"
for_each = ([for i in data.azurerm_managed_disk.existing: zipmap(i.resource_group_name, i.name)])
location = data.azurerm_managed_disk.existing[each.key].location
resource_group_name = data.azurerm_managed_disk.existing[each.key]
create_option = "Copy"
source_uri = data.azurerm_managed_disk.existing[each.value].id
}
variables.tf
variable "cloud_resource_group_list" {
description = "VM resource group name"
type = list(string)
}
variable "cloud_vm_os_disk_name" {
description = "VM OS disk names"
type = list(string)
}
terraform.tfvars
cloud_resource_group_list = ["rg1", "rg2"]
cloud_vm_os_disk_name = ["disk1", "disk2"]
terraform validate runs successfully. When I run terraform apply, the first resource group is read successfully, but it fails for the second resource group. Below is the error.
terraform apply
data.azurerm_managed_disk.existing["rg1"]: Reading...
data.azurerm_managed_disk.existing["rg1"]: Reading...
data.azurerm_managed_disk.existing["disk1"]: Read complete after 1s
╷
│ Error: Managed Disk: (Disk Name "disk2" / Resource Group "rg2") was not found
│
│ with data.azurerm_managed_disk.existing["rg2"],
│ on main.tf line 22, in data "azurerm_managed_disk" "existing":
│ 22: data "azurerm_managed_disk" "existing" {
Both rg2 and disk2 exist in the Azure portal. Please help me see where I am wrong and why it's not working.
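For reference, independent of the failing disk lookup: the snapshot resource's for_each as written builds a list of maps, which for_each does not accept. A minimal sketch of one way to iterate, reusing the same zipmap as the data source so each.key lines up with its keys (the name suffix is illustrative); this does not by itself explain the lookup error:

resource "azurerm_snapshot" "example" {
  # Same rg -> disk map as the data source, so each.key matches its keys
  for_each            = zipmap(var.cloud_resource_group_list, var.cloud_vm_os_disk_name)

  name                = "snapshot-${each.value}"
  location            = data.azurerm_managed_disk.existing[each.key].location
  resource_group_name = each.key
  create_option       = "Copy"
  source_uri          = data.azurerm_managed_disk.existing[each.key].id
}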

Related

terraform azure insufficient blocks

Here is my Terraform configuration:
terraform {
  required_providers {
    azure = {
      source  = "hashicorp/azurerm"
      version = "=3.5.0"
    }
  }

  backend "s3" {
    encrypt = true
    bucket  = "terraform"
    region  = "us-east-1"
    key     = "aws/tgw_peer/us-east-1/terraform.tfstate"
  }
}

provider "azurerm" {
  features {}
}

data "azurerm_virtual_network" "vnet" {
  resource_group_name = var.resource_group_name
  name                = var.vnet_name
}
When I execute terraform plan I get the following error:
╷
│ Error: Insufficient features blocks
│
│ on <empty> line 0:
│ (source code not available)
│
│ At least 1 "features" blocks are required.
╵
There is clearly a features block in the azurerm provider block. However, the fact that the error doesn't specify the file name tells me that maybe the problem is somewhere else.
What am I doing wrong?
Terraform version 1.1.6
The name of the provider in the required_providers block is wrong: you have set it to azure while it should be azurerm. Here is an example of how to configure the provider:
terraform {
  required_providers {
    azurerm = { # <--- Note that it is azurerm
      source  = "hashicorp/azurerm"
      version = "3.5.0"
    }
  }
}

provider "azurerm" {
  # Configuration options
}
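After renaming it, it may also be necessary to re-initialize so the dependency lock file picks up the correct provider source:

terraform init -upgrade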

Terraform cloud run triggers with Azure

I'm having an issue with accessing my data "terraform_remote_state" objects.
I'm following the HashiCorp site to deploy an Azure resource with Terraform Cloud run triggers. The trigger is working and runs the plan for the second workspace, but the plan can't access the data I'm passing through the outputs.
I have set the state for the first workspace to be shared, and set the run trigger on the second workspace to be triggered by the first. No issues here.
I have tried to follow what is on the HashiCorp site, but it is for AWS, so maybe I have missed something for Azure. I will post my outputs, then some code for the second workspace.
Outputs (which I have looked at in the state file and they look good):
output "rgName" {
description = "The resource group for resources"
value = var.rgName
}
output "location" {
description = "The location for resources"
value = var.location
}
output "subnet1_id" {
description = "subnet 1"
value = azurerm_subnet.subnet1.id
}
2nd workspace
data "terraform_remote_state" "network" {
backend = "remote"
config = {
organization = "Awesome-Company"
workspaces = {
name = "TFCloud-Trigger-Network"
}
}
}
provider "azurerm" {
version = "2.66.0"
subscription_id = var.subscription_id
client_id = var.client_id
client_secret = var.clientSecret
tenant_id = var.tenant_id
features{}
}
#Deploy Public IP
resource "azurerm_public_ip" "pip1" {
name = "TFC-pip1"
location = data.terraform_remote_state.network.outputs.location
resource_group_name = data.terraform_remote_state.network.outputs.rgName
allocation_method = "Dynamic"
sku = "Basic"
}
#Create NIC
resource "azurerm_network_interface" "nic1" {
name = "TFC-TestVM-Nic"
location = data.terraform_remote_state.network.outputs.location
resource_group_name = data.terraform_remote_state.network.outputs.rgName
ip_configuration {
name = "ipconfig1"
subnet_id = date.terraform_remote_state.network.outputs.subnet1_id
private_ip_address_allocation = "Dynamic"
public_ip_address_id = azurerm_public_ip.pip1.id
}
}
The error is
Error: Unsupported attribute

  on main.tf line 26, in resource "azurerm_public_ip" "pip1":
  26:   location = data.terraform_remote_state.network.outputs.location
    ├────────────────
    │ data.terraform_remote_state.network.outputs is object with no attributes

This object does not have an attribute named "location".
I can't access the data.terraform_remote_state.network.outputs
So, I figured this out, and it is not in the documentation. A workspace that is triggered by another workspace will not automatically update its Terraform plan.
Normally, when I edit the code in GitHub (or another repo), Terraform Cloud will automatically run a plan once the new code is saved. A workspace that is triggered by another will not do that. So, even though I changed the code, I had to manually go to Terraform Cloud, discard the current run on the triggered workspace, and re-run the plan. After this, the run trigger would run successfully.
It was a weird thing...
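As a quick sanity check, the first workspace's outputs can also be listed from a CLI run connected to that workspace; outputs only land in the shared state after an apply has completed there:

terraform output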

Unable to change an azure subnet using terraform

I am new to Terraform and want to change the subnet on a network, but I am getting a weird error that Google turned up nothing for. Here's what I am entering (after changing main.tf and running plan):
terraform apply -replace="azurerm_subnet.subnet1"
Terraform will perform the following actions:

  # module.network.azurerm_subnet.subnet[0] will be updated in-place
  ~ resource "azurerm_subnet" "subnet" {
      ~ address_prefixes = [
          - "10.0.2.0/24",
          + "10.0.4.0/24",
        ]
        id   = "/subscriptions/xxxxxxxxxxxxxxxxxxxxxxxxx/resourceGroups/lab-resources/providers/Microsoft.Network/virtualNetworks/acctvnet/subnets/subnet1"
        name = "subnet1"
        # (7 unchanged attributes hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.
I enter yes and I get this error:
Error: updating Subnet: (Name "subnet1" / Virtual Network Name "acctvnet" / Resource Group "lab-resources"): network.SubnetsClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="InUseSubnetCannotBeUpdated" Message="Subnet subnet1 is in use and cannot be updated." Details=[]
│
│ with module.network.azurerm_subnet.subnet[0],
│ on .terraform/modules/network/main.tf line 15, in resource "azurerm_subnet" "subnet":
│ 15: resource "azurerm_subnet" "subnet" {
│
The VM is off, and I do not see what else could be using the subnet.
I also tried terraform taint "azurerm_subnet.subnet1".
Any ideas? Is what I am doing not possible?
Here is my main.tf
terraform {
  required_version = ">=0.12"

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~>2.0"
    }
  }
}

provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "lab_autodeploy" {
  name     = "lab-resources"
  location = "East US 2"
}

module "Windowsservers" {
  source              = "Azure/compute/azurerm"
  resource_group_name = azurerm_resource_group.lab_autodeploy.name
  is_windows_image    = true
  vm_hostname         = "new_ddc" // line can be removed if only one VM module per resource group
  size                = "Standard_F2"
  admin_password      = "$omePassw0rd"
  vm_os_simple        = "WindowsServer"
  public_ip_dns       = ["srv"] // change to a unique name per datacenter region
  vnet_subnet_id      = module.network.vnet_subnets[0]
  depends_on          = [azurerm_resource_group.lab_autodeploy]
}

module "network" {
  source              = "Azure/network/azurerm"
  resource_group_name = azurerm_resource_group.lab_autodeploy.name
  subnet_prefixes     = ["10.4.0.0/24"]
  subnet_names        = ["subnet1"]
  depends_on          = [azurerm_resource_group.lab_autodeploy]
}
output "windows_vm_public_name" {
value = module.windowsservers.public_ip_dns_name
}
This isn't an issue specific to Terraform: in Azure, you cannot change a subnet that has devices attached to it, and the fact that the VM is powered off makes no difference.
To get around this without destroying the VM, you could move the NIC to a different subnet (creating a temporary subnet if necessary), perform the address space change, and then move the NIC back.
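For reference, a rough sketch of that workaround in HCL, assuming the NIC is managed in the same configuration (the subnet name and temporary prefix here are illustrative):

resource "azurerm_subnet" "temp" {
  name                 = "subnet-temp"
  resource_group_name  = azurerm_resource_group.lab_autodeploy.name
  virtual_network_name = "acctvnet"
  address_prefixes     = ["10.0.9.0/24"]
}

# Point the NIC's ip_configuration at azurerm_subnet.temp.id and apply,
# change the original subnet's address prefix, then point the NIC back
# at the original subnet and apply again.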

AKS Cluster | Failed to query available provider packages | Right version of hashicorp

I'm currently building my terraform plan and it seems that I'm running into issues as soon as I run the following command:
terraform init
The current main.tf contains this:
terraform {
  backend "azurerm" {
    resource_group_name  = "test"
    storage_account_name = "testaccount"
    container_name       = "testc"
    key                  = "testc.state"
  }

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "2.46.0"
    }
  }
}

# Configure the Microsoft Azure Provider
provider "azurerm" {
  features {}
}

data "azurerm_key_vault" "keyVaultClientID" {
  name = "AKSClientID"
  key  = var.keyvaultID
}

data "azure_key_vault_secret" "keyVaultClientSecret" {
  name         = "AKSClientSecret"
  key_vault_id = var.keyvaultID
}

resource "azurerm_kubernetes_cluster" "test_cluster" {
  name                = var.name
  location            = var.location
  resource_group_name = var.resourceGroup
  dns_prefix          = ""

  default_node_pool {
    name       = "default"
    node_count = 1
    vm_size    = "Standard_D2_v2"
  }

  service_principal {
    client_id     = data.azurerm_key_vault_secret.keyVaultClientID.value
    client_secret = data.azurerm_key_vault_secret.keyVaultClientSecret.value
  }

  tags = {
    "Environment" = "Development"
  }
}
The error message that I get is the following:
│ Error: Failed to query available provider packages
│
│ Could not retrieve the list of available versions for provider hashicorp/azure: provider
│ registry registry.terraform.io does not have a provider named
│ registry.terraform.io/hashicorp/azure
I'm looking at the documentation and changing the version, but I'm not having any luck. Does anyone know what else I can do or what I should change in my main.tf?
To solve this issue, you will have to add the following inside the main terraform block:
required_providers {
  azurerm = {
    source  = "hashicorp/azurerm"
    version = "=2.75.0"
  }
}
If you add it, the issue should not appear again. You might also have to run terraform init -upgrade so Terraform can pick up the new version.
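One more thing worth checking in the question's configuration: the second data block is declared as azure_key_vault_secret rather than azurerm_key_vault_secret. Terraform derives the provider from the resource type's prefix, so azure_key_vault_secret is exactly what makes init look for a nonexistent hashicorp/azure provider. The corrected data source:

data "azurerm_key_vault_secret" "keyVaultClientSecret" {
  name         = "AKSClientSecret"
  key_vault_id = var.keyvaultID
}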

Terraform tries to pull an undefined provider

Every time I perform terraform init, Terraform tries to pull from the registry a rather strange provider which does not exist.
Error:
│ Error: Failed to query available provider packages
│
│ Could not retrieve the list of available versions for provider hashicorp/databricks: provider registry registry.terraform.io does not have a provider named
│ registry.terraform.io/hashicorp/databricks
│
│ Did you intend to use databrickslabs/databricks? If so, you must specify that source address in each module which requires that provider. To see which modules are currently depending on
│ hashicorp/databricks, run the following command:
│ terraform providers
╵
This provider name is a strange combination of the two providers.
My tf file:
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 2.65"
    }
    databrick = {
      source  = "databrickslabs/databricks"
      version = "0.3.7"
    }
  }

  required_version = ">= 0.14.9"
}

provider "azurerm" {
  features {}
}

provider "databrick" {
  features {}
}
resource "azurerm_resource_group" "rg" {
name = "TerraformResourceGroup"
location = "westeurope"
}
resource "azurerm_databricks_workspace" "databrick" {
name = "terraform-databrick"
resource_group_name = azurerm_resource_group.rg.name
location = azurerm_resource_group.rg.location
sku = "trial"
tags = {
"env" = "rnd"
"provisoning" = "tf"
}
}
data "databricks_node_type" "smallest" {
local_disk = true
}
data "databricks_spark_version" "latest_lts" {
long_term_support = true
}
resource "databricks_cluster" "cluster" {
cluster_name = "terraform-cluster"
spark_version = data.databricks_spark_version.latest_lts.id
node_type_id = data.databricks_node_type.smallest.id
autotermination_minutes = 20
spark_conf = {
"spark.databricks.cluster.profile" : "singleNode"
"spark.master" : "local[*]"
}
custom_tags = {
"type" = "SingleNode"
"env" = "rnd"
"provisoning" = "tf"
}
}
I was looking for some kind of 'verbose' flag so I could find out why it is trying to pull this provider and where the reference is coming from.
Sadly, all I was able to establish is that the issue comes from the 'data' blocks and the part of the file below them.
All my knowledge is based on these docs (Databricks cluster) and this learning material (Terraform Azure).
Thank you in advance for all of your help.
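For what it's worth, the error's own hint points at the likely cause: the local name declared in required_providers is databrick, but the databricks_* resources and data sources are matched to the local name databricks, which Terraform resolves to hashicorp/databricks by default. A sketch of the aligned declaration (note the Databricks provider takes no features block; that is azurerm-specific):

terraform {
  required_providers {
    databricks = {
      source  = "databrickslabs/databricks"
      version = "0.3.7"
    }
  }
}

provider "databricks" {}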
