I'm having an issue accessing data from my "terraform_remote_state" objects.
I'm following the HashiCorp guide to deploy Azure resources with Terraform Cloud using run triggers. The trigger works and runs the plan for the second workspace, but the plan can't access the data I'm passing through the outputs.
I have set the state for the first workspace to be shared, and configured the run trigger on the second workspace to be triggered by the first. No issues there.
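For reference, what I configured through the UI corresponds roughly to this in the tfe provider (just a sketch; the second workspace's name here is made up):

resource "tfe_workspace" "network" {
  name                = "TFCloud-Trigger-Network"
  organization        = "Awesome-Company"
  global_remote_state = true # share this workspace's state
}

resource "tfe_workspace" "app" {
  name         = "TFCloud-Trigger-App" # hypothetical name for my second workspace
  organization = "Awesome-Company"
}

resource "tfe_run_trigger" "network_to_app" {
  workspace_id  = tfe_workspace.app.id     # the workspace that gets triggered
  sourceable_id = tfe_workspace.network.id # the workspace that triggers it
}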
I have tried to follow the HashiCorp docs, but the example is for AWS, so maybe I missed something Azure-specific. I will post my outputs, then some code from the second workspace.
Outputs (which I have checked in the state file, and they look good):
output "rgName" {
description = "The resource group for resources"
value = var.rgName
}
output "location" {
description = "The location for resources"
value = var.location
}
output "subnet1_id" {
description = "subnet 1"
value = azurerm_subnet.subnet1.id
}
Second workspace:
data "terraform_remote_state" "network" {
backend = "remote"
config = {
organization = "Awesome-Company"
workspaces = {
name = "TFCloud-Trigger-Network"
}
}
}
provider "azurerm" {
version = "2.66.0"
subscription_id = var.subscription_id
client_id = var.client_id
client_secret = var.clientSecret
tenant_id = var.tenant_id
features{}
}
# Deploy public IP
resource "azurerm_public_ip" "pip1" {
  name                = "TFC-pip1"
  location            = data.terraform_remote_state.network.outputs.location
  resource_group_name = data.terraform_remote_state.network.outputs.rgName
  allocation_method   = "Dynamic"
  sku                 = "Basic"
}

# Create NIC
resource "azurerm_network_interface" "nic1" {
  name                = "TFC-TestVM-Nic"
  location            = data.terraform_remote_state.network.outputs.location
  resource_group_name = data.terraform_remote_state.network.outputs.rgName

  ip_configuration {
    name                          = "ipconfig1"
    subnet_id                     = data.terraform_remote_state.network.outputs.subnet1_id
    private_ip_address_allocation = "Dynamic"
    public_ip_address_id          = azurerm_public_ip.pip1.id
  }
}
The error is:

Error: Unsupported attribute

  on main.tf line 26, in resource "azurerm_public_ip" "pip1":
  26: location = data.terraform_remote_state.network.outputs.location

data.terraform_remote_state.network.outputs is object with no attributes

This object does not have an attribute named "location".
I can't access data.terraform_remote_state.network.outputs.
So, I figured this out, and it is not in the documentation. A workspace that is triggered by another workspace will not automatically update its Terraform plan.
Normally, when I edit the code in GitHub (or another repo), Terraform Cloud automatically runs a plan once the new code is saved. A workspace that is triggered by another will not do that. So even though I had changed the code, I had to go into Terraform Cloud manually, discard the current run on the triggered workspace, and re-run the plan. After this, the run trigger ran successfully.
It was a weird thing...
I have written a small Terraform configuration to take snapshots of two VMs on Azure. I have created two lists with the resource group names and the OS disk names. Below are the necessary files.
main.tf
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0.2"
    }
  }
  required_version = ">= 1.1.0"
}

provider "azurerm" {
  features {}
}

data "azurerm_managed_disk" "existing" {
  for_each            = zipmap(var.cloud_resource_group_list, var.cloud_vm_os_disk_name)
  name                = each.value
  resource_group_name = each.key
}

resource "azurerm_snapshot" "example" {
  name                = "snapshot"
  for_each            = ([for i in data.azurerm_managed_disk.existing : zipmap(i.resource_group_name, i.name)])
  location            = data.azurerm_managed_disk.existing[each.key].location
  resource_group_name = data.azurerm_managed_disk.existing[each.key]
  create_option       = "Copy"
  source_uri          = data.azurerm_managed_disk.existing[each.value].id
}
variables.tf
variable "cloud_resource_group_list" {
description = "VM resource group name"
type = list(string)
}
variable "cloud_vm_os_disk_name" {
description = "VM OS disk names"
type = list(string)
}
terraform.tfvars
cloud_resource_group_list = ["rg1", "rg2"]
cloud_vm_os_disk_name = ["disk1", "disk2"]
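For reference, the zipmap in the for_each pairs the two lists element by element, so with these values the data source iterates over the following map (easy to check in terraform console):

locals {
  # evaluates to { rg1 = "disk1", rg2 = "disk2" }
  disk_by_rg = zipmap(["rg1", "rg2"], ["disk1", "disk2"])
}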
terraform validate runs successfully. When I run terraform apply, the disk in the first resource group is read successfully, but it fails for the second resource group. Below is the error.
terraform apply
data.azurerm_managed_disk.existing["rg1"]: Reading...
data.azurerm_managed_disk.existing["rg1"]: Reading...
data.azurerm_managed_disk.existing["disk1"]: Read complete after 1s
╷
│ Error: Managed Disk: (Disk Name "disk2" / Resource Group "rg2") was not found
│
│ with data.azurerm_managed_disk.existing["rg2"],
│ on main.tf line 22, in data "azurerm_managed_disk" "existing":
│ 22: data "azurerm_managed_disk" "existing" {
Both rg2 and disk2 exist in the Azure portal. Please help me figure out where I am wrong and why it's not working.
I know that in order to have remote state in my Terraform code, I must create a storage account and a container. Usually this is done manually, but I am trying to create the storage account and the container dynamically using the code below:
resource "azurerm_resource_group" "state_resource_group" {
name = "RG-Terraform-on-Azure"
location = "West Europe"
}
terraform {
backend "azurerm" {
resource_group_name = "RG-Terraform-on-Azure"
storage_account_name = azurerm_storage_account.state_storage_account.name
container_name = azurerm_storage_container.state_container.name
key = "terraform.tfstate"
}
}
resource "azurerm_storage_account" "state_storage_account" {
name = random_string.storage_account_name.result
resource_group_name = azurerm_resource_group.state_resource_group.name
location = azurerm_resource_group.state_resource_group.location
account_tier = "Standard"
account_replication_type = "LRS"
tags = {
environment = "staging"
}
}
resource "azurerm_storage_container" "state_container" {
name = "vhds"
storage_account_name = azurerm_storage_account.state_storage_account.name
container_access_type = "private"
}
resource "random_string" "storage_account_name" {
length = 14
lower = true
numeric = false
upper = false
special = false
}
But the above code complains:
│ Error: Variables not allowed
│
│ on main.tf line 11, in terraform:
│ 11: storage_account_name = azurerm_storage_account.state_storage_account.name
│
│ Variables may not be used here.
So I already know that I cannot use variables in the backend block; however, I am wondering if there is a solution that enables me to create the storage account and the container dynamically and store the state file there?
Note: I have already seen this question, but the .conf file approach did not work for me!
This can't be done in the same Terraform configuration. The backend has to exist before anything else: Terraform requires it to be in place when you run terraform init, and the backend is read as the very first step of a plan or apply, before any resources are created.
In the past I've automated the creation of the storage backend using a CLI tool. If you wanted to automate it with Terraform, it would have to be in a separate Terraform workspace, but then where would the backend for that workspace be?
In general, it doesn't really work to create the backend in Terraform.
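If you still want to avoid hard-coding the values, one workaround (a sketch, untested) is partial backend configuration: leave the backend block empty and supply the settings at init time from a small HCL file written by whatever bootstraps the storage account (the account and container names below are illustrative):

terraform {
  backend "azurerm" {} # values supplied at init time
}

# backend.hcl -- created after the storage account and container exist
resource_group_name  = "RG-Terraform-on-Azure"
storage_account_name = "examplestatestore" # illustrative name
container_name       = "tfstate"
key                  = "terraform.tfstate"

You then run terraform init -backend-config=backend.hcl, so the values never appear in the configuration itself.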
I am in the process of learning Terraform, currently on the subject of modules. To accelerate my learning, I have decided to create resources on my Azure account using Terraform only. To that end, I found this GitHub repo: https://github.com/kumarvna/terraform-azurerm-virtual-machine
I have been following its contents and trying to reproduce them on my test system. I have tried to contact the author to no avail, and having already spent two weeks trying to fix the problem, I am asking here for help.
My setup:
Pulled the code from the repo onto my laptop.
Logged onto my Azure account from a PowerShell console.
Created a folder called create_vm.
In that folder, my main.tf file has the following. This is a Linux example, but I had the same issues with a Windows example as well.
# Azurerm provider configuration
provider "azurerm" {
  features {}
}

# Creates a new resource group
resource "azurerm_resource_group" "test_build" {
  name     = "testBuild"
  location = "West Europe"
}

# Creates a new network
resource "azurerm_virtual_network" "example" {
  name                = "example-network"
  location            = azurerm_resource_group.test_build.location
  resource_group_name = azurerm_resource_group.test_build.name
  address_space       = ["10.0.0.0/16"]
  dns_servers         = ["10.0.0.4", "10.0.0.5"]

  subnet {
    name           = "subnet1"
    address_prefix = "10.0.1.0/24"
  }
}

# Creates a new Log Analytics workspace
resource "azurerm_log_analytics_workspace" "la" {
  name                = "loganalytics-we-sharedtest2"
  location            = azurerm_resource_group.test_build.location # required argument
  resource_group_name = azurerm_resource_group.test_build.name
}

module "virtual-machine" {
  source  = "kumarvna/virtual-machine/azurerm"
  version = "2.3.0"

  # Resource group, location, VNet and subnet details
  resource_group_name  = azurerm_resource_group.test_build.name
  location             = "westeurope"
  virtual_network_name = azurerm_virtual_network.example.name
  subnet_name          = "subnet1"
  virtual_machine_name = "vm-linux"

  # This module supports multiple pre-defined Linux and Windows distributions.
  # Check the README.md file for more pre-defined images for Ubuntu, CentOS, RedHat.
  # Please make sure to use gen2-image-supported VM sizes if you use gen2 distributions.
  # Specify `disable_password_authentication = false` to create a random admin password.
  # Specify a valid password with the `admin_password` argument to use your own password.
  # To generate an SSH key pair, specify `generate_admin_ssh_key = true`.
  # To use an existing key pair, set `admin_ssh_key_data` to a valid SSH public key path.
  os_flavor               = "linux"
  linux_distribution_name = "ubuntu2004"
  virtual_machine_size    = "Standard_B2s"
  generate_admin_ssh_key  = true
  instances_count         = 2

  # Proximity placement group, availability set, and adding a public IP to VMs are optional.
  # Remove these arguments from the module if you don't want to use them.
  enable_proximity_placement_group = true
  enable_vm_availability_set       = true
  enable_public_ip_address         = true

  # Network security group port-allow definitions for each virtual machine.
  # NSG association is added automatically for all network interfaces.
  # Remove this NSG rules block if `existing_network_security_group_id` is specified.
  nsg_inbound_rules = [
    {
      name                   = "ssh"
      destination_port_range = "22"
      source_address_prefix  = "*"
    },
    {
      name                   = "http"
      destination_port_range = "80"
      source_address_prefix  = "*"
    },
  ]

  # Boot diagnostics to troubleshoot virtual machines; uses managed storage by default.
  # To use a custom storage account, specify `storage_account_name` with a valid name.
  # Passing a `null` value will utilize a managed storage account to store boot diagnostics.
  enable_boot_diagnostics = true

  # Attach managed data disks to Windows/Linux VMs. Possible storage account types are:
  # `Standard_LRS`, `StandardSSD_ZRS`, `Premium_LRS`, `Premium_ZRS`, `StandardSSD_LRS`
  # or `UltraSSD_LRS` (UltraSSD_LRS is only available in regions that support availability zones).
  # To initialize a new data disk, connect to the VM and run disk management or fdisk.
  data_disks = [
    {
      name                 = "disk1"
      disk_size_gb         = 100
      storage_account_type = "StandardSSD_LRS"
    },
    {
      name                 = "disk2"
      disk_size_gb         = 200
      storage_account_type = "Standard_LRS"
    }
  ]

  # (Optional) To enable Azure Monitoring and install Log Analytics agents.
  # (Optional) Specify `storage_account_name` to save monitoring logs to storage.
  log_analytics_workspace_id = azurerm_log_analytics_workspace.la.id

  # Deploy Log Analytics agents to the virtual machines.
  # The Log Analytics workspace customer id and primary shared key are required.
  deploy_log_analytics_agent                 = true
  log_analytics_customer_id                  = azurerm_log_analytics_workspace.la.workspace_id
  log_analytics_workspace_primary_shared_key = azurerm_log_analytics_workspace.la.primary_shared_key

  # Additional tags for your Azure resources
  tags = {
    ProjectName  = "demo-project"
    Env          = "dev"
    Owner        = "user@example.com"
    BusinessUnit = "CORP"
    ServiceClass = "Gold"
  }
}
In variables.tf:
variable "log_analytics_workspace_name" {
description = "The name of log analytics workspace name"
default = null
}
variable "storage_account_name" {
description = "The name of the hub storage account to store logs"
default = null
}
variable "create_resource_group" {
description = "Whether to create resource group and use it for all networking resources"
default = true
}
Please note that I added the create_resource_group variable to try to resolve my issue, to no avail.
I then run
terraform init
terraform plan
I get the following error with terraform plan:
│ Error: Error: Log Analytics workspaces "loganalytics-we-sharedtest2" (Resource Group "rg-shared-westeurope-01") was not found
│
│ with data.azurerm_log_analytics_workspace.example,
│ on main.tf line 6, in data "azurerm_log_analytics_workspace" "example":
│ 6: data "azurerm_log_analytics_workspace" "example" {
│
╵
╷
│ Error: Error: Resource Group "rg-shared-westeurope-01" was not found
│
│ with module.virtual-machine.data.azurerm_resource_group.rg,
│ on .terraform\modules\virtual-machine\main.tf line 27, in data "azurerm_resource_group" "rg":
│ 27: data "azurerm_resource_group" "rg" {
│
What have I done?
Looked through the code to see what I am missing.
Added the variable at the top. Tried to contact the author, to no avail.
Tried to use an existing resource group, though I feel this defeats the purpose of having a variable that asks whether a new resource group should be created if it doesn't already exist.
What else is confusing?
I initially had another folder for modules; I later came to realise that the module is a public one, pulled down whenever I run terraform init. Is there a way to have this as a localised module?
I have made the changes recommended by the answer below; in order not to turn the question into a long-winded one, I have only placed the resulting error below.
│ Error: Error: Subnet: (Name "subnet1" / Virtual Network Name "testBuild_vnet" / Resource Group "testBuild") was not found
│
│ with module.virtual-machine.data.azurerm_subnet.snet,
│ on .terraform\modules\virtual-machine\main.tf line 36, in data "azurerm_subnet" "snet":
│ 36: data "azurerm_subnet" "snet" {
│
╵
╷
│ Error: Invalid count argument
│
│ on .terraform\modules\virtual-machine\main.tf line 443, in resource "azurerm_monitor_diagnostic_setting" "nsg":
│ 443: count = var.existing_network_security_group_id == null && var.log_analytics_workspace_id != null ? 1 : 0
│
│ The "count" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created. To work around this, use the -target argument to first apply only the resources that the count depends on.
I think the misunderstanding is that you think the module creates a resource group, but that is not the case. This module expects an already existing resource group as var.resource_group_name (same goes for the input variables virtual_network_name, subnet_name and log_analytics_workspace_id).
The main difference between the `resource` and `data` prefixes is that data sources are read-only and "only" fetch already existing infrastructure for further use:
Data sources allow Terraform to use information defined outside of
Terraform, defined by another separate Terraform configuration, or
modified by functions.
https://www.terraform.io/language/data-sources
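As a minimal illustration of the difference (hypothetical names):

# Reads a resource group that must already exist in Azure;
# fails with "was not found" if it doesn't:
data "azurerm_resource_group" "existing" {
  name = "rg-shared-westeurope-01"
}

# Creates and manages the resource group itself:
resource "azurerm_resource_group" "new" {
  name     = "rg-new-westeurope-01"
  location = "westeurope"
}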
So in your case it should work like this (not tested):
# Azurerm provider configuration
provider "azurerm" {
  features {}
}

# Creates a new resource group
resource "azurerm_resource_group" "test_build" {
  name     = "testBuild"
  location = "West Europe"
}

# Creates a new network
resource "azurerm_virtual_network" "example" {
  name                = "example-network"
  location            = azurerm_resource_group.test_build.location
  resource_group_name = azurerm_resource_group.test_build.name
  address_space       = ["10.0.0.0/16"]
  dns_servers         = ["10.0.0.4", "10.0.0.5"]

  subnet {
    name           = "subnet1"
    address_prefix = "10.0.1.0/24"
  }
}

# Creates a new Log Analytics workspace
resource "azurerm_log_analytics_workspace" "la" {
  name                = "loganalytics-we-sharedtest2"
  location            = azurerm_resource_group.test_build.location # required argument
  resource_group_name = azurerm_resource_group.test_build.name
}

module "virtual-machine" {
  source  = "kumarvna/virtual-machine/azurerm"
  version = "2.3.0"

  # Resource group, location, VNet and subnet details
  resource_group_name  = azurerm_resource_group.test_build.name
  location             = "westeurope"
  virtual_network_name = azurerm_virtual_network.example.name
  subnet_name          = "subnet1"
  virtual_machine_name = "vm-linux"

  # This module supports multiple pre-defined Linux and Windows distributions.
  # Check the README.md file for more pre-defined images for Ubuntu, CentOS, RedHat.
  # Please make sure to use gen2-image-supported VM sizes if you use gen2 distributions.
  # Specify `disable_password_authentication = false` to create a random admin password.
  # Specify a valid password with the `admin_password` argument to use your own password.
  # To generate an SSH key pair, specify `generate_admin_ssh_key = true`.
  # To use an existing key pair, set `admin_ssh_key_data` to a valid SSH public key path.
  os_flavor               = "linux"
  linux_distribution_name = "ubuntu2004"
  virtual_machine_size    = "Standard_B2s"
  generate_admin_ssh_key  = true
  instances_count         = 2

  # Proximity placement group, availability set, and adding a public IP to VMs are optional.
  # Remove these arguments from the module if you don't want to use them.
  enable_proximity_placement_group = true
  enable_vm_availability_set       = true
  enable_public_ip_address         = true

  # Network security group port-allow definitions for each virtual machine.
  # NSG association is added automatically for all network interfaces.
  # Remove this NSG rules block if `existing_network_security_group_id` is specified.
  nsg_inbound_rules = [
    {
      name                   = "ssh"
      destination_port_range = "22"
      source_address_prefix  = "*"
    },
    {
      name                   = "http"
      destination_port_range = "80"
      source_address_prefix  = "*"
    },
  ]

  # Boot diagnostics to troubleshoot virtual machines; uses managed storage by default.
  # To use a custom storage account, specify `storage_account_name` with a valid name.
  # Passing a `null` value will utilize a managed storage account to store boot diagnostics.
  enable_boot_diagnostics = true

  # Attach managed data disks to Windows/Linux VMs. Possible storage account types are:
  # `Standard_LRS`, `StandardSSD_ZRS`, `Premium_LRS`, `Premium_ZRS`, `StandardSSD_LRS`
  # or `UltraSSD_LRS` (UltraSSD_LRS is only available in regions that support availability zones).
  # To initialize a new data disk, connect to the VM and run disk management or fdisk.
  data_disks = [
    {
      name                 = "disk1"
      disk_size_gb         = 100
      storage_account_type = "StandardSSD_LRS"
    },
    {
      name                 = "disk2"
      disk_size_gb         = 200
      storage_account_type = "Standard_LRS"
    }
  ]

  # (Optional) To enable Azure Monitoring and install Log Analytics agents.
  # (Optional) Specify `storage_account_name` to save monitoring logs to storage.
  log_analytics_workspace_id = azurerm_log_analytics_workspace.la.id

  # Deploy Log Analytics agents to the virtual machines.
  # The Log Analytics workspace customer id and primary shared key are required.
  deploy_log_analytics_agent                 = true
  log_analytics_customer_id                  = azurerm_log_analytics_workspace.la.workspace_id
  log_analytics_workspace_primary_shared_key = azurerm_log_analytics_workspace.la.primary_shared_key

  # Additional tags for your Azure resources
  tags = {
    ProjectName  = "demo-project"
    Env          = "dev"
    Owner        = "user@example.com"
    BusinessUnit = "CORP"
    ServiceClass = "Gold"
  }
}
Just adding a new variable called create_resource_group will not do anything as long as there is no corresponding logic/code behind it.
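If you did want such a flag to do real work in your own root module, the usual pattern (a sketch, independent of this module) is a conditional count plus a local that resolves to whichever resource group ends up existing:

variable "create_resource_group" {
  description = "Whether to create the resource group or look up an existing one"
  default     = true
}

# Created only when the flag is true
resource "azurerm_resource_group" "this" {
  count    = var.create_resource_group ? 1 : 0
  name     = "testBuild"
  location = "West Europe"
}

# Looked up only when the flag is false
data "azurerm_resource_group" "this" {
  count = var.create_resource_group ? 0 : 1
  name  = "testBuild"
}

locals {
  # Use this local wherever a resource group name is needed
  resource_group_name = var.create_resource_group ? azurerm_resource_group.this[0].name : data.azurerm_resource_group.this[0].name
}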
I am new to Terraform, and I want to change the subnet on a network, but I am getting a weird error.
Google turned up nothing. Here's what I am entering (after changing main.tf and running plan):
terraform apply -replace="azurerm_subnet.subnet1"
Terraform will perform the following actions:

  # module.network.azurerm_subnet.subnet[0] will be updated in-place
  ~ resource "azurerm_subnet" "subnet" {
      ~ address_prefixes = [
          - "10.0.2.0/24",
          + "10.0.4.0/24",
        ]
        id   = "/subscriptions/xxxxxxxxxxxxxxxxxxxxxxxxx/resourceGroups/lab-resources/providers/Microsoft.Network/virtualNetworks/acctvnet/subnets/subnet1"
        name = "subnet1"
        # (7 unchanged attributes hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
I enter yes and I get this error:
Error: updating Subnet: (Name "subnet1" / Virtual Network Name "acctvnet" / Resource Group "lab-resources"): network.SubnetsClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="InUseSubnetCannotBeUpdated" Message="Subnet subnet1 is in use and cannot be updated." Details=[]
│
│ with module.network.azurerm_subnet.subnet[0],
│ on .terraform/modules/network/main.tf line 15, in resource "azurerm_subnet" "subnet":
│ 15: resource "azurerm_subnet" "subnet" {
│
The VM is powered off, and I do not see what else could be using it.
I also tried terraform taint "azurerm_subnet.subnet1".
Any ideas? Is what I am doing not possible?
Here is my main.tf
terraform {
  required_version = ">=0.12"

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~>2.0"
    }
  }
}

provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "lab_autodeploy" {
  name     = "lab-resources"
  location = "East US 2"
}

module "Windowsservers" {
  source              = "Azure/compute/azurerm"
  resource_group_name = azurerm_resource_group.lab_autodeploy.name
  is_windows_image    = true
  vm_hostname         = "new_ddc" // line can be removed if only one VM module per resource group
  size                = "Standard_F2"
  admin_password      = "$omePassw0rd"
  vm_os_simple        = "WindowsServer"
  public_ip_dns       = ["srv"] // change to a unique name per datacenter region
  vnet_subnet_id      = module.network.vnet_subnets[0]
  depends_on          = [azurerm_resource_group.lab_autodeploy]
}

module "network" {
  source              = "Azure/network/azurerm"
  resource_group_name = azurerm_resource_group.lab_autodeploy.name
  subnet_prefixes     = ["10.4.0.0/24"]
  subnet_names        = ["subnet1"]
  depends_on          = [azurerm_resource_group.lab_autodeploy]
}

output "windows_vm_public_name" {
  value = module.Windowsservers.public_ip_dns_name
}
This isn't an issue specific to Terraform - in Azure you cannot change a subnet that has things attached to it. The fact that the VM is powered off makes no difference.
To get around this without destroying the VM, you could move the NIC to a different subnet (create a temporary subnet if necessary), perform the address space change and then move the NIC back.
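A rough sketch of that sequence (untested; the temporary subnet's name and prefix are illustrative, and since the NIC here is created inside the Azure/compute module you may need to adjust the module inputs between applies):

# 1. Add a temporary subnet next to the one you want to change:
resource "azurerm_subnet" "temp" {
  name                 = "subnet-temp"
  resource_group_name  = "lab-resources"
  virtual_network_name = "acctvnet"
  address_prefixes     = ["10.0.99.0/24"]
}

# 2. Point the NIC's ip_configuration at azurerm_subnet.temp.id and apply.
# 3. Change subnet1's address prefix and apply.
# 4. Point the NIC back at subnet1, apply, and remove the temporary subnet.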
I would not normally ask a question like this; however, I feel stuck, and I do not want to hack at things but rather take the time to understand. I am new to Terraform and trying to learn it; a simple task I have set myself is to create a SQL server.
My Environment
I have some resource groups created previously; any time I try to use the same name, I get this error:
Error: A resource with the ID "/subscriptions/000000-0000-0000-0000-00000000005/resourceGroups/tf_learning" already exists - to be managed via Terraform this resource needs to be imported into the State. Please see the resource documentation for "azurerm_resource_group" for more information.
Looking at the error, and after two days of Google research, I followed the steps here:
Using Terraform to import existing resources on Azure
and https://gmusumeci.medium.com/how-to-import-an-existing-azure-resource-in-terraform-6d585f93ea02 and Terraform resource with the ID already exists
I created a file called existing_statee.tf with the content below.
resource "azurerm_resource_group" "tf_learning" {
}
Ran
terraform import azurerm_resource_group.tf_learning /subscriptions/000000-0000-0000-0000-00000000005/resourceGroups/tf_learning
I edited the file again and saved it.
resource "azurerm_resource_group" "tf_learning" {
# (resource arguments)
name = "tf_learning"
location = "UK South"
}
Then ran the following.
terraform init
terraform plan
terraform apply
To my surprise, I am still getting the error:
Error: A resource with the ID "/subscriptions/00000-00000-0000-0000-00000000000/resourceGroups/tf_learning" already exists - to be managed via Terraform this resource needs to be imported into the State. Please see the resource documentation for "azurerm_resource_group" for more information.
│
│ with azurerm_resource_group.RG-Terraform,
│ on main.tf line 1, in resource "azurerm_resource_group" "RG-Terraform":
│ 1: resource "azurerm_resource_group" "RG-Terraform" {
My main.tf file:
resource "azurerm_resource_group" "RG-Terraform" {
name = var.resource-group-name
location = var.my_location
}
resource "azurerm_sql_server" "test" {
name = var.my_dev_server
resource_group_name = azurerm_resource_group.RG-Terraform.name
location = azurerm_resource_group.RG-Terraform.location
version = var.my_dev_sql_version
administrator_login = var.my_dev_sql_login
administrator_login_password = "change_me"
}
resource "azurerm_sql_database" "test" {
name = var.my_dev_sql_database
resource_group_name = azurerm_resource_group.RG-Terraform.name
location = azurerm_resource_group.RG-Terraform.location
server_name = azurerm_sql_server.test.name
edition = var.my_dev_sql_database_sku
requested_service_objective_name = var.my_dev_sql_database_objective
tags = {
environment = "dev_database_build"
}
}
variables.tf file
variable "resource-group-name" {
default = "tf_learning"
description = "The prefix used for all resources in this example"
}
variable "app-service-name" {
default = "terraform-app-service"
description = "The name of the Web App"
}
variable "location" {
default = "UK South"
description = "The Azure location where all resources in this example should be created"
}
variable "my_location" {
type = string
default = "UK South"
}
variable "my_dev_server"{
type = string
default = "test-server-test"
}
variable "my_dev_sql_login" {
type = string
default = "mylogin"
}
variable "my_dev_sql_version" {
type = string
default = "12.0"
}
variable "my_dev_sql_database" {
type = string
default = "dev_db"
}
variable "my_dev_sql_database_tag" {
type = string
default = "dev sql database from TF"
}
variable "my_dev_sql_database_sku" {
type = string
default = "Basic"
}
variable "my_dev_sql_database_objective" {
type = string
default = "Basic"
}
I am lost as to what to do next; for now I will carry on researching.
I would like to point out that after importing the state, you have configured this in existing_statee.tf:
resource "azurerm_resource_group" "tf_learning" {
name = "tf_learning"
location = "UK South"
}
But in main.tf you also define it again:
resource "azurerm_resource_group" "RG-Terraform" {
name = var.resource-group-name
location = var.my_location
}
variable "resource-group-name" {
default = "tf_learning"
description = "The prefix used for all resources in this example"
}
variable "my_location" {
type = string
default = "UK South"
}
Since you have already declared this resource in existing_statee.tf, maybe you should remove the duplicate declaration from main.tf.
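For example, main.tf could then reference the imported resource group directly (a sketch based on your existing variables):

resource "azurerm_sql_server" "test" {
  name                         = var.my_dev_server
  # Reference the resource group imported in existing_statee.tf
  resource_group_name          = azurerm_resource_group.tf_learning.name
  location                     = azurerm_resource_group.tf_learning.location
  version                      = var.my_dev_sql_version
  administrator_login          = var.my_dev_sql_login
  administrator_login_password = "change_me"
}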