I'm using Terraform to deploy dev and prod VMs on our VMware vCenter infrastructure, and I use vSphere tags to define the responsibilities of VMs. I therefore added the following to the (sub)module:
resource "vsphere_tag" "tag" {
name = "SYS-Team"
category_id = "Responsibility"
description = "Systems group"
}
...
resource "vsphere_virtual_machine" "web" {
tags = [vsphere_tag.tag.id]
...
}
Now, when I destroy e.g. the dev infra, it also deletes the prod vSphere tag and leaves the VMs without the tag.
I tried to skip the deletion with a lifecycle block, but then I would have to delete each resource separately, which I don't like:
lifecycle {
  prevent_destroy = true
}
Is there a way to add an existing tag without having the resource managed by Terraform? Something hardcoded, without including the tag as a resource, like:
resource "vsphere_virtual_machine" "web" {
tags = [{
name = "SYS-Team"
category_id = "Responsibility"
description = "Systems group"
}
]
...
}
You can use Terraform's data sources to refer to things that are either not managed by Terraform or are managed in a different Terraform context, and to retrieve outputs from them such as automatically generated IDs.
In this case you could use the vsphere_tag data source to look up the ID of the tag:
data "vsphere_tag_category" "responsibility" {
name = "Responsibility"
}
data "vsphere_tag" "sys_team" {
name = "SYS-Team"
category_id = data.vsphere_tag_category.responsibility.id
}
...
resource "vsphere_virtual_machine" "web" {
tags = [data.vsphere_tag.sys_team.id]
...
}
This will use a vSphere tag that was either created externally or is managed by Terraform elsewhere, allowing you to run terraform destroy to destroy the VM while keeping the tag.
If I understood correctly, the real problem is this:
when I destroy e.g. the dev infra, it also deletes the prod vsphere tag
That should not be happening! Any cross-environment deletion is a red flag. It feels like the problem is in your pipeline, not your Terraform code: in the deployment pipelines I create, dev resources are not mixed with prod. If they are mixed, you are asking for trouble, and your team should make redesigning that a high priority.
You do ask:
Is there a way to add an existing tag without having the resource managed by Terraform?
Yes, you can use PowerCLI for that:
https://blogs.vmware.com/PowerCLI/2014/03/using-vsphere-tags-powercli.html
The command to add a tag is really simple:
New-Tag -Name "jsmith" -Category "Owner"
You could even integrate that into your Terraform code with a null_resource, something like:
resource "null_resource" "create_tag" {
  # local-exec provisioners run on resource creation by default
  provisioner "local-exec" {
    command     = "New-Tag -Name 'jsmith' -Category 'Owner'"
    interpreter = ["PowerShell", "-Command"]
  }
}
Take 2 cases.
The AWS Terraform provider
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "4.14.0"
    }
  }
}

provider "aws" {
  region     = "us-east-1"
  access_key = "<insert here>"
  secret_key = "<insert here>"
}
resource "aws_instance" "my_ec2" {
ami = "ami-0022f774911c1d690"
instance_type = "t2.micro"
}
After running terraform apply on the above, if one manually creates a custom security group and assigns it to the EC2 instance created above, a subsequent terraform apply will update the terraform.tfstate file to include the custom security group in its security-group section. However, it will NOT put back the previous "default" security group.
This is what I would expect, since my tf code did not explicitly anchor down a certain security group.
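(To make that concrete: had the config pinned the security groups, Terraform would treat the manual attachment as drift and revert it on the next apply. A minimal sketch, assuming the instance lives in the default VPC; the data source lookup here is illustrative, not part of the original code:)
data "aws_security_group" "default" {
  name = "default" # default SG of the default VPC (assumption for this sketch)
}

resource "aws_instance" "my_ec2" {
  ami           = "ami-0022f774911c1d690"
  instance_type = "t2.micro"

  # Explicitly anchoring the SG list makes any manual change show up in the plan.
  vpc_security_group_ids = [data.aws_security_group.default.id]
}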
The GitHub Terraform provider
terraform {
  required_providers {
    github = {
      source  = "integrations/github"
      version = "~> 4.0"
    }
  }
}

# Configure the GitHub Provider
provider "github" {
  token = var.github-token
}

resource "github_repository" "example" {
  name = "tfstate-demo-1"
  //description = "My awesome codebase" <------ NOTE THAT WE DO NOT SPECIFY A DESCRIPTION
  auto_init  = true
  visibility = "public"
}
In this case, the repo is created without a description. Thereafter, if one updates the description via github.com and re-runs terraform apply on the above code, terraform will
a) put the new description into its tfstate file during the refresh stage, and
b) remove the description from the terraform.tfstate file as well as from the repo on github.com.
A message on the terraform command line does allude to this confusing behavior:
Unless you have made equivalent changes to your configuration, or ignored the relevant attributes using ignore_changes, the following plan may include actions to undo or respond to these changes.
May include? Why the ambiguity?
And why does tf enforce the blank description in this case when I have not specified anything about the optional description field in my tf code? And why does this behavior vary across providers? Shouldn't optional arguments be left alone, unenforced, the way the AWS provider does not undo a custom security group attached to an EC2 instance outside of Terraform? What is the design thinking behind this?
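(For reference, the ignore_changes escape hatch that the message mentions would look like this for the repo above; a minimal sketch:)
resource "github_repository" "example" {
  name       = "tfstate-demo-1"
  auto_init  = true
  visibility = "public"

  lifecycle {
    # Leave description alone even if it is edited outside Terraform.
    ignore_changes = [description]
  }
}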
So I am completely new to Terraform, and I found that by using this in main.tf I can create Azure Databricks infrastructure:
resource "azurerm_databricks_workspace" "bdcc" {
depends_on = [
azurerm_resource_group.bdcc
]
name = "dbw-${var.ENV}-${var.LOCATION}"
resource_group_name = azurerm_resource_group.bdcc.name
location = azurerm_resource_group.bdcc.location
sku = "standard"
tags = {
region = var.BDCC_REGION
env = var.ENV
}
}
And I also found here that by using this I can even create a particular notebook in this Azure Databricks infrastructure:
resource "databricks_notebook" "notebook" {
content_base64 = base64encode(<<-EOT
# created from ${abspath(path.module)}
display(spark.range(10))
EOT
)
path = "/Shared/Demo"
language = "PYTHON"
}
But since I am new to this, I am not sure in what order I should put those pieces of code together.
It would be nice if someone could point me to a full example of how to create a notebook on Azure Databricks via Terraform.
Thank you beforehand!
In general you can put these objects in any order: it's Terraform's job to detect dependencies between the objects and create/update them in the correct order. For example, you don't need depends_on in the azurerm_databricks_workspace resource, because Terraform will see that the resource group is needed before the workspace can be created, so workspace creation will follow the creation of the resource group. Terraform also tries to make changes in parallel where possible.
Because of this, things become slightly more complex when you have the workspace resource together with workspace objects such as notebooks and clusters. As there is no explicit dependency, Terraform will try to create the notebook in parallel with the workspace, and it will fail because the workspace doesn't exist yet; usually you will get a message about an authentication error.
The solution is to declare an explicit dependency between the notebook and the workspace, plus you need to configure the Databricks provider to authenticate against the newly created workspace (there are differences between user and service principal authentication; you can find more information in the docs). In the end your code would look like this:
resource "azurerm_databricks_workspace" "bdcc" {
name = "dbw-${var.ENV}-${var.LOCATION}"
resource_group_name = azurerm_resource_group.bdcc.name
location = azurerm_resource_group.bdcc.location
sku = "standard"
tags = {
region = var.BDCC_REGION
env = var.ENV
}
}
provider "databricks" {
host = azurerm_databricks_workspace.bdcc.workspace_url
}
resource "databricks_notebook" "notebook" {
depends_on = [azurerm_databricks_workspace.bdcc]
...
}
Unfortunately, there is no way to put depends_on at the provider level, so you will need to add it to every Databricks resource that is created together with the workspace. The usual best practice is to have one module for workspace creation and a separate module for the objects inside the Databricks workspace.
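(For completeness, here is the notebook resource with the dependency filled in; the content is reused from the question, so only depends_on is new:)
resource "databricks_notebook" "notebook" {
  depends_on = [azurerm_databricks_workspace.bdcc]

  content_base64 = base64encode(<<-EOT
    # created from ${abspath(path.module)}
    display(spark.range(10))
    EOT
  )
  path     = "/Shared/Demo"
  language = "PYTHON"
}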
P.S. I would recommend reading a book or the documentation on Terraform. For example, Terraform: Up & Running is a very good intro.
I'm setting up a virtual network in Azure with Terraform.
I have several VNets, each with its own Network Security Group, 100% managed in Terraform; no resources except the Resource Group exist prior to running Terraform.
When I run terraform apply the first time, all the resources are created correctly. However, if I run apply again to update other resources, I get an error saying the NSG resources already exist.
Error: A resource with the ID
"/subscriptions/0000000000000000/resourceGroups/SynthArtInfra/providers/Microsoft.Network/networkSecurityGroups/SynthArtInfra_ServerPoolNSG"
already exists - to be managed via Terraform this resource needs to be
imported into the State. Please see the resource documentation for
"azurerm_network_security_group" for more information.
Why is Terraform complaining about an existing resource when it should already be under its control?
Edit:
This is the code related to the NSG; everything else is to do with a VPN gateway:
# Configure the Azure provider
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">= 2.26"
    }
  }
}

provider "azurerm" {
  features {}
}
data "azurerm_resource_group" "SynthArtInfra" {
name = "SynthArtInfra"
location = "Somewhere" # not real
most_recent = true
}
resource "azurerm_virtual_network" "SynthArtInfra_ComputePool" {
name = "SynthArtInfra_ComputePool"
location = azurerm_resource_group.SynthArtInfra.location
resource_group_name = azurerm_resource_group.SynthArtInfra.name
address_space = ["10.6.0.0/16"]
}
resource "azurerm_subnet" "ComputePool_default" {
name = "ComputePool_default"
resource_group_name = azurerm_resource_group.SynthArtInfra.name
virtual_network_name = azurerm_virtual_network.SynthArtInfra_ComputePool.name
address_prefixes = ["10.6.0.0/24"]
}
resource "azurerm_network_security_group" "SynthArtInfra_ComputePoolNSG" {
name = "SynthArtInfra_ComputePoolNSG"
location = azurerm_resource_group.SynthArtInfra.location
resource_group_name = azurerm_resource_group.SynthArtInfra.name
security_rule {
name = "CustomSSH"
priority = 119
direction = "Inbound"
access = "Allow"
protocol = "*"
source_port_range = "*"
destination_port_range = "0000" # not the real port number
source_address_prefix = "*"
destination_address_prefix = "*"
}
}
The other odd thing is that our subscription has a security policy that automatically adds NSGs to resources that don't have one. Weirdly, after applying my Terraform script, the NSGs are created but aren't actually associated with the subnets, so the security policy has created new NSGs. This needs to be resolved, but I didn't think it would cause this error.
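(For reference, in the azurerm provider an NSG is not attached to a subnet just by existing; the association is its own resource. A minimal sketch using the names from the code above:)
resource "azurerm_subnet_network_security_group_association" "compute_pool" {
  subnet_id                 = azurerm_subnet.ComputePool_default.id
  network_security_group_id = azurerm_network_security_group.SynthArtInfra_ComputePoolNSG.id
}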
I think what was going on is that this was my first time using Terraform, so I was getting a lot of errors midway through apply and destroy operations.
I ended up manually removing all the resources in Azure and deleting Terraform's local cache, and then everything started working.
TL;DR: Try removing any custom dependencies between resources that you have added yourself.
I came across this post whilst having a similar problem and will put my solution here in case it helps someone else.
I was working on creating a Cloud Run service through Terraform. The first time went great and it created the resource I wanted, but as soon as I ran the apply again I would get this error saying that a resource with that name already exists. This was strange because, according to the plan, it was supposed to delete and then replace that resource.
What happened was that I had added an unnecessary depends_on field on a few other resources, and this was blocking the Cloud Run service resource from being deleted before Terraform tried to create a new one.
According to the docs, the depends_on field is only needed if there is some dependency that cannot be inferred from the resource's own fields. So I just removed all of the custom dependencies and can now re-apply as much as I like.
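(To illustrate the point, a hypothetical sketch rather than the poster's actual config: the attribute reference already gives Terraform the ordering, so the commented-out depends_on adds nothing and can get in the way of replacements:)
resource "google_service_account" "app" {
  account_id = "app-runner" # hypothetical name
}

resource "google_cloud_run_service" "app" {
  name     = "app" # hypothetical name
  location = "us-central1"

  template {
    spec {
      # This reference alone lets Terraform infer the dependency.
      service_account_name = google_service_account.app.email

      containers {
        image = "gcr.io/my-project/app:latest" # hypothetical image
      }
    }
  }

  # depends_on = [google_service_account.app]  # redundant: inferred above
}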
Terraform v0.11.9
+ provider.aws v1.41.0
I want to know if there is a way to update a resource that is not directly created in the plan, but by a resource in the plan. The example is creating a managed Active Directory using aws_directory_service_directory. This process creates a security group, and I want to add tags to that security group. Here is the snippet I'm using to create the resource:
resource "aws_directory_service_directory" "NewDS" {
name = "${local.DSFQDN}"
password = "${var.ADPassword}"
size = "Large"
type = "MicrosoftAD"
short_name = "${local.DSShortName}"
vpc_settings {
vpc_id = "${aws_vpc.this.id}"
subnet_ids = ["${aws_subnet.private.0.id}",
"${aws_subnet.private.1.id}",
]
}
tags = "${merge(var.tags, var.ds_tags, map("Name", format("%s", local.VPCname)))}"
}
I can reference the newly created security group using
"${aws_directory_service_directory.NewDS.security_group_id}"
But I can't use that to update the resource. I want to add all of the tags I have on the directory to the security group, as well as update the Name tag. I've tried using a local-exec provisioner, but the results have not been consistent, and getting the map of tags into the command without hardcoding it has not worked.
Thanks
I moved the local-exec provisioner out of the directory service resource and into a dummy resource.
resource "null_resource" "ManagedADTags" {
  provisioner "local-exec" {
    command = "aws --profile ${var.profile} --region ${var.region} ec2 create-tags --resources ${aws_directory_service_directory.NewDS.security_group_id} --tags Key=Name,Value=${format("${local.security_group_prefix}-%s", "ManagedAD")}"
  }
}
(The command = is a single line)
Using the format function allowed me to send the entire list of tags to the resource. Terraform doesn't "manage" it, but this does allow me to update it as part of the plan.
You can also leverage the aws_ec2_tag resource, which works on non-EC2 resources as well, in conjunction with the provider-level ignore_tags attribute. Please refer to another answer I made on the topic for more detail.
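(A minimal sketch of that approach; note that aws_ec2_tag manages a single tag per resource instance, and it requires a much newer AWS provider than the v1.41.0 in the question:)
resource "aws_ec2_tag" "managed_ad_name" {
  # Tags the security group created implicitly by the directory service.
  resource_id = aws_directory_service_directory.NewDS.security_group_id
  key         = "Name"
  value       = "${local.security_group_prefix}-ManagedAD"
}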
AWS already exposes an API for that, where you can tag multiple resources at once, not just a single resource. Not sure why Terraform is not implementing that.
Just hit this as well. It turns out the tags propagate from the directory service, so if you tag your directory appropriately, the Name tag from your directory service will be applied to the security group.
I have a simple AWS deployment with a vpc, public subnet, route, and security group. Running terraform apply will launch an AWS instance, and I have that instance configured to associate a public IP. After the instance has been created, I run terraform plan and it properly says everything is up to date. No problems so far.
We have a management node that will shut down that instance if it's unused for a period of time as a cost saving measure.
Here's the problem: once that instance is shut down, when I run terraform plan the AWS provider sees everything configured properly, but since the public IP has been released, the value of associate_public_ip_address no longer matches what is configured in the Terraform configs, so Terraform wants to delete and recreate that instance:
associate_public_ip_address: "false" => "true" (forces new resource)
Is there a way to get terraform to ignore just that one parameter?
This question is marginally related to https://github.com/hashicorp/terraform/issues/7262. But in my case, I don't want to set the expected state; I just want to be able to tell Terraform to ignore that one parameter, because it's OK that it's not associated right now, as long as it's configured to be associated when it starts.
(This occurred to me while writing this question: I have not experimented with configuring the subnet to automatically associate public ip for instances launched in it. Conceivably, by making the subnet automatically do it, and removing the option from "aws_instance", I might be able to make terraform not pay attention to that value...but I doubt it.)
You can use a lifecycle block to ignore changes to certain attributes.
With this, the resource is initially created using the provided value for the attribute. On subsequent plans, applies, etc., Terraform will ignore changes to that attribute.
If we add associate_public_ip_address to ignore_changes in the lifecycle block, a stopped instance will no longer trigger a new resource.
Note that if you alter any other parameter that requires a new instance, the stopped one will still be terminated and replaced.
Example based on the Terraform aws_instance example code:
provider "aws" {
region = "us-west-2"
}
data "aws_ami" "ubuntu" {
most_recent = true
filter {
name = "name"
values = ["ubuntu/images/hvm-ssd/ubuntu-trusty-14.04-amd64-server-*"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
owners = ["099720109477"] # Canonical account ID
}
resource "aws_instance" "web" {
ami = "${data.aws_ami.ubuntu.id}"
instance_type = "t2.micro"
associate_public_ip_address = "true"
tags {
Name = "HelloWorld"
}
lifecycle {
ignore_changes = ["associate_public_ip_address"]
}
}