Create resource via Terraform but do not recreate if manually deleted? - Azure

I want to initially create a resource using Terraform, but if the resource is later deleted outside of Terraform - e.g. manually by a user - I do not want Terraform to re-create it. Is this possible?
In my case the resource is a blob in Azure Blob Storage. I tried using ignore_changes = all, but that didn't help: every time I ran terraform apply, it recreated the blob.
resource "azurerm_storage_blob" "test" {
  name                   = "myfile.txt"
  storage_account_name   = azurerm_storage_account.deployment.name
  storage_container_name = azurerm_storage_container.deployment.name
  type                   = "Block"
  source_content         = "test"

  lifecycle {
    ignore_changes = all
  }
}

The requirement you've stated is not supported by Terraform directly. To achieve it you will need either to implement something entirely outside of Terraform or to wrap Terraform in custom scripting of your own that performs a few separate Terraform steps.
If you want to implement it by wrapping Terraform, here is one possible way to do it, although there are various other variants that would achieve a similar effect.
My idea is to implement a sort of "bootstrapping mode" which your custom script enables only for initial creation; for subsequent work you would not use it. Bootstrapping mode is the combination of an input variable to activate it and an extra step after using it.
variable "bootstrap" {
  type        = bool
  default     = false
  description = "Do not use this directly. Only for use by the bootstrap script."
}
resource "azurerm_storage_blob" "test" {
  count = var.bootstrap ? 1 : 0

  name                   = "myfile.txt"
  storage_account_name   = azurerm_storage_account.deployment.name
  storage_container_name = azurerm_storage_container.deployment.name
  type                   = "Block"
  source_content         = "test"
}
This alone would not be sufficient, because normally if you were to run Terraform once with -var="bootstrap=true" and then again without it, Terraform would plan to destroy the blob after noticing it's no longer present in the configuration.
So to make this work we need a special bootstrap script which wraps Terraform like this:
terraform apply -var="bootstrap=true"
terraform state rm azurerm_storage_blob.test
That second terraform state rm command above tells Terraform to forget about the object it currently has bound to azurerm_storage_blob.test. That means that the object will continue to exist but Terraform will have no record of it, and so will behave as if it doesn't exist.
If you run the bootstrap script, you will then have the blob existing but with Terraform unaware of it. You can therefore run terraform apply as normal (without setting the bootstrap variable) and Terraform will both ignore the previously created object and not plan to create a new one, because the resource now has count = 0.
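Putting those steps together, a minimal sketch of the wrapper script (the filename and error handling are illustrative):

#!/bin/sh
# bootstrap.sh: create the blob once, then make Terraform forget it.
set -e

# Create the blob (bootstrap=true makes count = 1).
terraform apply -var="bootstrap=true"

# Remove the blob from state so that subsequent applies, where
# bootstrap defaults to false and count = 0, neither destroy nor
# recreate it.
terraform state rm azurerm_storage_blob.test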
This is not a typical use-case for Terraform, so I would recommend considering other possible solutions to meet your use-case, but I hope the above is useful as part of that design work.

If you have a resource defined in your Terraform configuration then Terraform will always try to create it. I don't know your exact setup, but perhaps you could move the blob creation into a CLI script and run Terraform and the script in the desired order.
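For example, a small script using the Azure CLI could create the blob once outside of Terraform (the account and container names are illustrative, and authentication flags are omitted):

az storage blob upload \
  --account-name mydeploymentaccount \
  --container-name deployment \
  --name myfile.txt \
  --file myfile.txt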

Related

How to change the name of a Terraform resource in a .tf file without Terraform registering a change in the plan?

Today I imported my cloud instance to Terraform:
resource "linode_domain" "example_domain" {
  domain    = var.primary_domain_name
  soa_email = var.domain_soa_email
  type      = "master"
}
After I imported the instance to Terraform using the terraform import command, I realized I was supposed to name example_domain as primary_domain.
Now if I change example_domain to primary_domain directly in the .tf file, terraform plan registers a change in the plan, which I do not want. So how can I rename this resource?
Probably the easiest way to rename a resource is to make a copy of the resource block with the new name and then run a terraform state mv command.
So, in your case, we duplicate the resource with a new name:
resource "linode_domain" "example_domain" {
  domain    = var.primary_domain_name
  soa_email = var.domain_soa_email
  type      = "master"
}

resource "linode_domain" "primary_domain" {
  domain    = var.primary_domain_name
  soa_email = var.domain_soa_email
  type      = "master"
}
We run state mv:
terraform state mv linode_domain.example_domain linode_domain.primary_domain
We remove the old (example_domain) resource block from the code.
The procedure above is also documented in the official Terraform docs for state mv.
If you are using Terraform v1.1 or later then you can use the new config-based refactoring features, which specifically means a moved block in this case:
resource "linode_domain" "primary_domain" {
  domain    = var.primary_domain_name
  soa_email = var.domain_soa_email
  type      = "master"
}

moved {
  from = linode_domain.example_domain
  to   = linode_domain.primary_domain
}
The above configuration tells Terraform that if it finds an object in the prior state bound to the address linode_domain.example_domain, it should pretend that it was instead bound to linode_domain.primary_domain before making any other decisions.
Terraform will include an extra annotation on the planned changes for that resource to say that its address changed, but should not propose any changes to the remote object itself unless you've also changed some other settings in the resource block.

Creating a list of existing aws_instance resources in Terraform

I'm currently trying to find a way to create a list of aws_instance type resources.
The EC2 instances are already configured. I imported them using Terraform, but I would like to have them as a single list, so I could perform actions like remote-exec on all of them at the same time.
I'm just not sure how can I declare a list type variable to include all existing aws_instance resources.
Any help would be much appreciated! Cheers.
EDIT:
As asked, I'm going to add some of the HCL:
As I stated, each instance is imported from an existing aws configuration.
This means I already have aws_instance blocks for each ec2.
resource "aws_instance" "ec2_1" {
  *truncated*
}
I was wondering if there was a way to take these resources, and append them into a list.
I would like to create this list in order to perform actions on all instances at once, using the remote-exec provisioner.
I tried creating a variable, but I'm afraid it doesn't function that way:
variable "ec2_list" {
  type    = list
  default = [aws_instance.ec2_1, aws_instance.ec2_2, ...]
}
But resources from main.tf cannot be referenced in variable defaults.
I'm just curious if you can make a general resource to create a list under.
If you know anything, please let me know.
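One way to do this, since local values (unlike variable defaults) can refer to resources, is to collect the instances into a local list. A minimal sketch, assuming two imported instances named ec2_1 and ec2_2:

locals {
  ec2_instances = [aws_instance.ec2_1, aws_instance.ec2_2]
}

output "instance_ids" {
  value = [for i in local.ec2_instances : i.id]
}

A provisioner cannot iterate over such a list directly, but the list can feed constructs like a null_resource with count that act on each instance in turn.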

In Terraform 0.12, how to skip creation of a resource if a resource with the same name already exists?

I am using Terraform version 0.12. I have a requirement to skip resource creation if resource with the same name already exists.
I did the following for this:
Read the list of custom images,
data "ibm_is_images" "custom_images" {
}
Check if image already exists,
locals {
  custom_vsi_image = contains([for x in data.ibm_is_images.custom_images.images : "true" if x.visibility == "private" && x.name == var.vnf_vpc_image_name], "true")
}

output "abc" {
  value = "${local.custom_vsi_image}"
}
Create the image only if it does not already exist:
resource "ibm_is_image" "custom_image" {
  count            = "${local.custom_vsi_image == true ? 0 : 1}"
  depends_on       = ["data.ibm_is_images.custom_images"]
  href             = "${local.image_url}"
  name             = "${var.vnf_vpc_image_name}"
  operating_system = "centos-7-amd64"

  timeouts {
    create = "30m"
    delete = "10m"
  }
}
This works fine the first time with terraform apply: it finds that the image does not exist, so it creates it.
When I run terraform apply a second time, it deletes the custom_image resource that was created above. Any idea why it is deleting the resource on the second run?
Also, how can I create a resource based on some condition (like only when it does not exist)?
In Terraform, you're required to decide explicitly which system is responsible for the management of a particular object, and conversely which systems are just consuming an existing object. There is no way to make that decision dynamically, because that would make the result non-deterministic and, for objects managed by Terraform, make it unclear which configuration's terraform destroy would destroy the object.
Indeed, that non-determinism is why you're seeing Terraform flop between trying to create and then trying to delete the resource: you've told Terraform to manage that object only if it doesn't already exist, so the first time you run Terraform after it exists, Terraform will see that the object is no longer supposed to be managed and will plan to destroy it.
If your goal is to manage everything with Terraform, an important design task is to decide how object dependencies flow within and between Terraform configurations. In your case, it seems like there is a producer/consumer relationship between a system that manages images (which may or may not be a Terraform configuration) and one or more Terraform configurations that consume existing images.
If the images are managed by Terraform, that suggests one of two designs: either your main Terraform configuration should assume the image does not exist and unconditionally create it (if your decision is that the image is owned by the same system that consumes it), or it should assume that the image already exists and retrieve the information about it using a data block.
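A sketch of the second case, assuming the IBM provider's ibm_is_image data source, which looks up a single image by name:

data "ibm_is_image" "custom_image" {
  name = var.vnf_vpc_image_name
}

# Consumers then refer to data.ibm_is_image.custom_image.id rather
# than creating the image in this configuration.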
A possible solution here is to write a separate Terraform configuration that manages the image and then only apply that configuration in situations where that object isn't expected to already exist. Then your configuration that consumes the existing image can just assume it exists without caring about whether it was created by the other Terraform configuration or not.
There's a longer overview of this situation in the Terraform documentation section Module Composition, and in particular the sub-section Conditional Creation of Objects. That guide is focused on interactions between modules in a single configuration, but the same underlying principles apply to dependencies between configurations (via data sources) too.
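Adapted to this example, the conditional-creation pattern from that sub-section might look like the following sketch, where the image_id variable is illustrative: the caller passes in an existing image ID, and the configuration creates an image only when none is given.

variable "image_id" {
  type        = string
  default     = null
  description = "If set, use this existing image instead of creating one."
}

resource "ibm_is_image" "custom_image" {
  count = var.image_id == null ? 1 : 0

  href             = local.image_url
  name             = var.vnf_vpc_image_name
  operating_system = "centos-7-amd64"
}

locals {
  # The caller-supplied image if given, otherwise the one created above.
  image_id = var.image_id != null ? var.image_id : ibm_is_image.custom_image[0].id
}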

Declare multiple providers for a list of regions

I have a Terraform module that manages AWS GuardDuty.
In the module, an aws_guardduty_detector resource is declared. The resource allows no specification of region, although I need to configure one of these resources for each region in a list. The region used apparently needs to come from the provider configuration(?).
The lack of for_each for modules seems to be part of the problem; at least, module-level for_each, if it existed, might let me declare the whole module once for each region.
Thus, I wonder: is it possible to somehow declare a provider for each region in a list?
Or, short of writing a shell script wrapper or doing code generation, is there any other clean way to solve this problem that I might not have thought of?
To support similar processes I have found two approaches to this problem:
1. Declare multiple AWS providers in the Terraform module.
2. Write the module to use a single provider, and then have a separate .tfvars file for each region you want to execute against.
For the first option, it can get messy having multiple AWS providers in one file. You must give each an alias, and then each time you create a resource you must set the provider property on the resource so that Terraform knows which region's provider to execute against. Also, if the provider for one of the regions cannot initialize (perhaps the region is down), the entire configuration will not run until you remove it or the region is back up.
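A sketch of the first option, with aliases and regions chosen for illustration:

provider "aws" {
  alias  = "us_east_1"
  region = "us-east-1"
}

provider "aws" {
  alias  = "eu_west_2"
  region = "eu-west-2"
}

resource "aws_guardduty_detector" "us_east_1" {
  provider = aws.us_east_1
  enable   = true
}

resource "aws_guardduty_detector" "eu_west_2" {
  provider = aws.eu_west_2
  enable   = true
}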
For the second option, you can write the Terraform for what resources you need to set up and then just run the module multiple times, once for each regional .tfvars file.
prod-us-east-1.tfvars
prod-us-west-1.tfvars
prod-eu-west-2.tfvars
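Each run then selects one file with -var-file; this assumes each region keeps its own state, for example via separate workspaces or backends:

terraform apply -var-file="prod-us-east-1.tfvars"
terraform apply -var-file="prod-us-west-1.tfvars"
terraform apply -var-file="prod-eu-west-2.tfvars"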
My preference is the second option, as the module stays simpler and there is less duplication. The only duplication is in the .tfvars files, which should be more manageable.
EDIT: Added some sample .tfvars
prod-us-east-1.tfvars:
region     = "us-east-1"
account_id = "0000000000"
tags = {
  env = "prod"
}
dynamodb_read_capacity  = 100
dynamodb_write_capacity = 50

prod-us-west-1.tfvars:
region     = "us-west-1"
account_id = "0000000000"
tags = {
  env = "prod"
}
dynamodb_read_capacity  = 100
dynamodb_write_capacity = 50
We put in these files whatever variables might need to change for the service or feature based on environment and/or region. For instance, in a testing environment the DynamoDB capacity may be lower than in the production environment.

Terraform: when would you manually change the state?

According to the docs:
As your Terraform usage becomes more advanced, there are some cases where you may need to modify the Terraform state.
Under what circumstances would you want to directly change Terraform's state?
It seems like a very dangerous practice, as opposed to changing the Terraform code itself.
You are correct in thinking that it can be dangerous to modify the state file, as this could corrupt the state or cause Terraform to do things that you don't want it to as the state drifts away from the actual state of the provider it is operating against.
However, there are times when you may want to modify the state, such as adopting resources created outside of this state file (either created outside of Terraform entirely or just tracked in a different state file) using the terraform import command, or renaming resources in your Terraform configuration using the terraform state commands.
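As a sketch, adopting an existing EC2 instance into a resource address (the instance ID is a placeholder):

terraform import aws_instance.web i-1234567890abcdef0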
For example, if you start off with defining a resource directly with something like:
variable "ami_name" {
  default = "ubuntu/images/hvm-ssd/ubuntu-trusty-14.04-amd64-server-*"
}

variable "ami_owner" {
  default = "099720109477" # Canonical
}

data "aws_ami" "ubuntu" {
  most_recent = true

  filter {
    name   = "name"
    values = [var.ami_name]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }

  owners = [var.ami_owner]
}

resource "aws_instance" "web" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = "t2.micro"

  tags = {
    Name = "HelloWorld"
  }
}
And then you later decide to refactor this to a module so that others can call it with something like:
module "instance" {
  # source is required for a module block; this path is hypothetical.
  source    = "./modules/instance"
  ami_name  = "my-image-name-*"
  ami_owner = "123456789"
}
When you run a plan after this refactoring, Terraform will tell you that it wants to remove the aws_instance.web resource and create a resource with the same parameters at the new address module.instance.aws_instance.web.
If you want to do this without causing an outage as Terraform destroys the old resource and replaces it with the new one, then instead of editing the state file by hand you can rename the resource in the state with:
terraform state mv aws_instance.web module.instance.aws_instance.web
If you then run a plan it will show an empty change, successfully completing your refactoring without causing any impact on your deployed instance.
