I have a single main.tf at the root, with different modules under it for different parts of my Azure cloud, e.g.
main.tf
- apim
- firewall
- ftp
- function
The main.tf passes variables down to the various modules, e.g. the resource group name or a map of tags.
During development I have been investigating certain functionality using the portal, and I don't have it in Terraform yet,
e.g. working out how best to add a mock service in the web module.
If I now want to update a different module (e.g. update firewall rules), terraform will suggest destroying the manually added service.
How can I do terraform plan/apply for just a single module?
You can target only the module by specifying the module namespace as the target argument in your plan and apply commands:
terraform plan -target=module.<declared module name>
For example, if your module declaration was:
module "the_firewall" {
source = "${path.root}/firewall"
}
then the command would be:
terraform plan -target=module.the_firewall
to only target that module declaration.
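The same option works with apply, for example:
terraform apply -target=module.the_firewall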
Note that this should only be used in development/testing scenarios, which it seems you already are focused on according to the question.
Currently I am using the "Mongey/kafka" provider, and now I have to switch to the "confluentinc/confluent" provider in my existing Terraform pipeline.
How can I do this?
Steps I am currently following to switch the provider:
Changing the provider in the main.tf file and running the following command to replace it:
terraform state replace-provider Mongey/kafka confluentinc/confluent
After that, I run terraform init to install the new provider.
But after that, when I run terraform plan, it gives the error "no schema available for module.iddn_news_cms_kafka_topics.kafka_acl.topic_writer[13] while reading state; this is a bug in terraform and should be reported".
Is there any way to change the Terraform provider without disturbing the existing resources created by the Terraform pipeline?
The terraform state replace-provider command is intended for switching between providers that are in some way equivalent to one another, such as the hashicorp/google and hashicorp/google-beta providers, or when someone forks a provider into their own namespace but remains compatible with the original provider.
Mongey/kafka and confluentinc/confluent do both have resource types that seem to represent the same concepts in the remote system:
Mongey/kafka    confluentinc/confluent
kafka_acl       confluent_kafka_acl
kafka_quota     confluent_kafka_client_quota
kafka_topic     confluent_kafka_topic
However, despite representing the same concepts in the remote system these resource types have different names and incompatible schemas, so there is no way to migrate directly between them. Terraform has no way to understand which resource types in one provider match with resource types in another, or to understand how to map attributes from one of the resource types onto corresponding attributes of the other.
Instead, I think the best thing to do here would be to ask Terraform to "forget" the objects and then re-import them into the new resource types:
Run terraform state rm kafka_acl.example to ask Terraform to forget about the remote object associated with kafka_acl.example. There is no undo for this action.
Run terraform import confluent_kafka_acl.example OBJECT-ID to bind the OBJECT-ID (as described in the provider documentation) to confluent_kafka_acl.example.
I suggest practicing this in a non-production environment first so that you can be confident about the behavior of each of these commands, and learn how to translate from whatever ID format the Mongey/kafka provider uses into whatever import ID format the confluentinc/confluent provider uses to describe the same objects.
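For example, using the resource address from the error message in the question, and after updating the module's configuration to declare the confluent_kafka_acl resource (the import ID here is a placeholder; the real format is described in the confluentinc/confluent documentation):
terraform state rm 'module.iddn_news_cms_kafka_topics.kafka_acl.topic_writer[13]'
terraform import 'module.iddn_news_cms_kafka_topics.confluent_kafka_acl.topic_writer[13]' '<IMPORT-ID>'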
I have created a Terraform template (Azure) with two modules. One module is for the resource group; the other is for the vnet (it also handles the creation of the NSG and route table, along with their association with subnets).
When I run terraform apply, it gives an error for the route table because the resource group has not been created yet; the creation order shows the route table being created before the resource group.
Is there a way to set the order of creation? In the main.tf in the root folder, the resource group module is called first and then the vnet module.
Reconsider the idea of creating RG and resources using two modules. Ask yourself a simple question: why?
If you are 100% sure it is the right approach then use depends_on:
module "rg1" {
source = "./rg_module"
...
}
module "net1" {
source = "./network_module"
....
depends_on = [module.rg1]
}
Another alternative would be to use an implicit dependency:
- Have the module where the resource group is actually defined return an output:
output "rg_name" {
  value = azurerm_resource_group.root_rg.name
}
- No other modifications are needed in the configuration that calls the resource group module.
- While creating the route table (module), use the output value from the resource group module (assuming the variable named resource_group_name provides that input in the module below; a complete sketch follows after this list):
resource_group_name = module.rg_module["<OPTIONAL KEY IF USING FOR_EACH IN RG MODULE>"].rg_name
This creates an implicit dependency on the resource group.
Note that it is not possible to reference arguments (actually variables) from the resource group module unless output values were defined.
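Putting the implicit dependency approach together, a minimal sketch (module and output names are assumptions based on the snippets above):
module "rg1" {
  source = "./rg_module"
}

module "net1" {
  source = "./network_module"
  # Referencing the module output is what creates the implicit dependency:
  resource_group_name = module.rg1.rg_name
}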
You must use the -out option to save the plan to a file:
terraform plan -out <plan_file>
It is generally recommended to save the plan file; terraform apply then executes exactly the actions recorded in that plan, rather than computing a fresh one.
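For example:
terraform plan -out=tfplan
terraform apply tfplan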
This question is not about how to import, and it's not about the purpose of tfstate. It's about the purpose of importing a pre-existing resource, especially compared to just referencing the ID of the existing resource.
Terraform has the feature of terraform import. HashiCorp describes the purpose of this as:
Terraform is able to import existing infrastructure. This allows you take resources you've created by some other means and bring it under Terraform management.
This is a great way to slowly transition infrastructure to Terraform, or to be able to be confident that you can use Terraform in the future if it potentially doesn't support every feature you need today.
I read the article about the purpose of Terraform state. It does make sense to me to track Terraform state with .tfstate files when those files are mappings back to the configurations in .tf files.
But it's still unclear to me what the purpose of a standalone .tfstate file is when it only maps to an empty resource block. If there is a resource not in terraform yet, I would typically do one of two things:
put the resource in terraform, tear down the resource manually and re-deploy the resource with terraform, or...
keep the resource un-templated, reference its resource ID as a parameter and get its metadata via a data element for terraform-managed resources that rely on it.
Is terraform import an alternative to those two approaches? And if so, why would you use that approach?
The only way to make changes to an imported resource (that only has an empty resource block in the .tf file and detailed state in .tfstate) is to make manual changes and then re-import into .tfstate, right? And if so, then what's the point of tracking the state of that resource in terraform?
I'm sure there's a good reasons. Just want to understand this deeper! Thanks!
But it's still unclear to me what the purpose of a standalone .tfstate file is when it only maps to an empty resource block.
You wouldn't use a standalone .tfstate file. You would be using the same .tfstate file that all your other resources are in.
If there is a resource not in terraform yet, I would typically do one of two things:
put the resource in terraform, tear down the resource manually and re-deploy the resource with terraform, or...
keep the resource un-templated, reference its resource ID as a parameter and get its metadata via a data element for terraform-managed resources that rely on it.
Is terraform import an alternative to those two approaches? And if so, why would you use that approach?
Consider the case where you have a production database with terabytes of data already loaded into it, and users actively performing actions that query that database 24 hours a day. Your option 1 would require some downtime, possibly a lot of downtime, because you would have to deal with backing up and restoring terabytes of data. Your option 2 would never let you manage changes to your database server via Terraform. That's what the terraform import feature solves: it lets Terraform take "full control" of resources that already exist, without having to recreate them.
I agree that if a system outage is not an issue, and if recreating a resource isn't going to take much time, using option 1 is the way to go. Option 2 is only for resources that you never want to fully manage in Terraform, which is really a separate issue from the one Terraform import solves.
When importing a resource with terraform import it is necessary to write the configuration block to manage it with Terraform. On the same page you linked it states:
The current implementation of Terraform import can only import resources into the state. It does not generate configuration. A future version of Terraform will also generate configuration.
Because of this, prior to running terraform import it is necessary to write manually a resource configuration block for the resource, to which the imported object will be mapped.
So to bring preexisting resources under Terraform management, you first write the resource block for it in a .tf file. Next you use terraform import to map the resource to this resource block in your .tfstate. The next time you run terraform plan, Terraform will determine what changes (if any) will need to be made upon the next terraform apply based on the resource block and the actual state of the resource.
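For example, for a hypothetical Azure resource group that already exists in the portal (the names and subscription ID are placeholders):
resource "azurerm_resource_group" "example" {
  # Fill these in to match the real resource; terraform plan will show
  # any drift between this block and the imported state.
  name     = "existing-rg"
  location = "westeurope"
}

terraform import azurerm_resource_group.example /subscriptions/<SUBSCRIPTION-ID>/resourceGroups/existing-rg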
EDIT
The "why" of terraform import is to manage resources that are previously unknown to Terraform. As you alluded to in your second bullet point, if you want metadata from a resource but do not want to change the configuration of the resource, you would use a data block and reference that in dependent resources.
When you want to manage the configuration of a resource that was provisioned outside of Terraform you use terraform import. If you tear down the resource there may be data loss or service downtime until you re-deploy with Terraform, but if you use terraform import the resource will be preserved.
The import process can be started with an empty resource block, but the attributes need to be filled out to describe the resource. You will get the benefits of terraform plan after importing, which can help you find the discrepancies between the resource block and the actual state of the resource. Once the two match up, you can continue to make additional changes to the resource like any other resource in Terraform.
The Terraform state file is the source of truth for your cloud infrastructure. Terraform uses the local state to create plans and make changes to the infrastructure, and before any operation it refreshes the state against the real infrastructure.
I'm trying to launch AWS Config and rules in each region of my account. Right now, in my root main.tf, I create an AWS provider in a single region and call my AWS Config module from my modules directory. This is fine for creating one module, but I was hoping to have a list of regions that I could iterate over to create AWS Config rules in each one.
I have tried creating individual modules with the region as a parameter, but I don't know if 10+ different modules is effective. It seems a for loop would be more effective, but I can't find any examples of this behavior.
provider "aws" {
region = "${var.aws_region}"
}
module "config" {
source = "./modules/config"
...
}
My goal is to use my config module to create the rules in all regions: us-east-1, us-east-2, us-west-1, etc.
I believe you're trying to dynamically pass in a list of regions to a module's provider in order to provision resources across regions in a single module. This is not possible at the moment.
Here is the ticket to upvote and follow: https://github.com/hashicorp/terraform/issues/24476
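What is possible today is to declare one aliased provider per region and instantiate the module once per alias; a sketch (only two regions shown for brevity):
provider "aws" {
  alias  = "us_east_1"
  region = "us-east-1"
}

provider "aws" {
  alias  = "us_east_2"
  region = "us-east-2"
}

module "config_us_east_1" {
  source    = "./modules/config"
  providers = {
    aws = aws.us_east_1
  }
}

module "config_us_east_2" {
  source    = "./modules/config"
  providers = {
    aws = aws.us_east_2
  }
}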
I have multiple different services which each have their own terraform configuration to create resources (in this particular case, a BigQuery table for each service).
Each of these services depends on the existence of a single instance of a resource (in this case, a BigQuery dataset).
I would like to somehow configure Terraform so that this shared resource is created exactly once if it does not exist.
My first thought was to use modules, however this leads to each root service attempting to create its own instance of the shared resource due to module namespacing.
Ideally I would like to mark one directory of terraform configuration as dependent on another directory of terraform configuration, without importing that latter directory as a module. Is this possible?
It is. You need to create a module (or a separate configuration) for the shared resource and save its remote state somewhere; you can configure backends in Terraform to handle this for you. Once you have that, your other configurations can reference that state using the terraform_remote_state data source. Any outputs you have configured will be available to reference through the remote state.
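A minimal sketch of that arrangement, assuming a GCS backend and the hashicorp/google provider (the bucket, prefix, and resource names are placeholders):
# In the shared configuration that owns the dataset:
output "dataset_id" {
  value = google_bigquery_dataset.shared.dataset_id
}

# In each service's configuration:
data "terraform_remote_state" "shared" {
  backend = "gcs"
  config = {
    bucket = "my-terraform-state"
    prefix = "shared-dataset"
  }
}

resource "google_bigquery_table" "events" {
  dataset_id = data.terraform_remote_state.shared.outputs.dataset_id
  table_id   = "events"
}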