Can we order .tf files in Terraform?

I'm asking a novice question, as I am new to Terraform.
I have 4 *.tf files in a folder (no main.tf file).
a.tf
b.tf
c.tf
d.tf
I want to define the order of execution as below:
c.tf
d.tf
a.tf
b.tf
I referred to "Multiple .tf files in a folder"; according to it, the ordering is alphabetical.
How can I achieve this?

Terraform does not make any use of the order of .tf files or of the declarations in those files. Instead, Terraform decodes all of the blocks across all of your files and analyzes them to look for references between objects.
For example, you might have a variables.tf file containing the following:
variable "example" {
type = string
}
...and you might have a compute.tf file containing the following:
resource "example" "example" {
name = var.example
}
Terraform can see that resource "example" "example" contains a reference to var.example, which corresponds with the variable "example" block.
Therefore Terraform knows that it should deal with variable "example" before it should deal with resource "example" "example", even though they are in two different files. You could instead place them both in the same file in either order and the meaning would be identical.
When thinking about order of operations in Terraform, the primary concern is making sure that all of the objects have the right references between one another. In many cases the correct ordering emerges naturally from the data flow you'd be describing anyway, but if not then you can add additional references or you can use depends_on to describe additional dependencies that Terraform would not otherwise be able to see.
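For example, here is a minimal sketch of an explicit dependency; null_resource.prepare is hypothetical, standing in for an object whose results are never referenced elsewhere:
resource "null_resource" "prepare" {
  # Some preparatory step whose results are not referenced anywhere else.
}

resource "example" "example" {
  name = var.example

  # Explicit dependency that Terraform cannot infer from references alone.
  depends_on = [null_resource.prepare]
}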

Related

Extracting the list of resources from the terraform files modified in a GitHub PR

I want to extract the list of all the resources modified in the Terraform files in a given PR on GitHub, and pass that list to terraform plan/apply as targets to limit the scope of the apply.
For example, if a .tf file changed in a PR looks like this:
resource "kubernetes_namespace" "sandbox" {
...
}
module "sandbox_base" {
...
}
I would like to extract the resources in all the changed files and pass them to the command as
terraform plan -target=kubernetes_namespace.sandbox -target=module.sandbox_base
So I need to extract the list of targets to pass from the changed .tf files. I have a way to get this using two or three tasks in the GitHub Action: getting the list of files changed in the PR, parsing that list, and passing it on to another task that extracts the changed modules and resources from the .tf files. However, I'm looking for an existing script, if anyone has one, so as not to reinvent the wheel and to save time.
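A rough sketch of that kind of pipeline, assuming the PR branch is checked out and compared against origin/main, and ignoring edge cases such as deleted files or unusually formatted blocks, might look like this:
# Collect the changed .tf files, pull out top-level resource/module blocks,
# and turn them into -target arguments.
targets=$(git diff --name-only origin/main...HEAD -- '*.tf' \
  | xargs -r grep -hE '^(resource|module) ' \
  | awk '{gsub(/"/, ""); if ($1 == "module") print "-target=module." $2; else print "-target=" $2 "." $3}')

terraform plan $targets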

How to copy a file from one local directory to another local directory in Terraform

I want to make a simple copy of a file located in one module folder into another module folder, but it seems that Terraform does not have such a resource. I tried to do this with null_resource, but the interpreter loses the whole environment during execution.
It also does not work properly with local_file. Can you advise how to do this?
I tried this, but the "data" source looks for the file before apply and this fails:
data "local_file" "code" {
filename = "${path.module}/code.zip"
depends_on = [null_resource.build_package]
}
# copy package to module root directory
resource "local_file" "code" {
content_base64 = data.local_file.code.content_base64
filename = "${path.root}/resources/code_elasticsearch_proxy.zip"
}
Also tried:
resource "null_resource" "code" {
provisioner "local-exec" {
command = "cp ./${path.module}/code.zip ./code_elasticsearch_proxy.zip"
interpreter = ["bash"]
environment = {
PWD = path.root
}
}
}
But this one loses the whole environment, and I could not get the full path to the files.
Managing files on the local system is not an intended use-case for Terraform, so I'd encourage you to consider other potential solutions to this problem. However, since Terraform is an open system where you can write a provider for anything, there is in principle a way to get a result like what you want, though it will probably have some drawbacks compared to other options.
The solution I thought about when considering your problem was to use the local_file resource type from the hashicorp/local provider:
terraform {
  required_providers {
    local = {
      source = "hashicorp/local"
    }
  }
}
resource "local_file" {
content_base64 = filebase64("${path.module}/code.zip")
filename = "${path.cwd}/code_elasticsearch_proxy.zip"
}
One drawback of this approach I can identify immediately is that it will cause Terraform to load the whole content of this file into memory and pass it as a string to the provider. If the file is large then this might be slow, use more RAM than you'd like, and it might even hit internal limits of Terraform's plugin mechanism which is designed for the more typical use-case of sending small data attributes rather than large file contents.
You didn't mention why it's important to create a copy of this file in a different location, but I'd strongly recommend considering alternative approaches where you can just pass the original location of the file to whatever subsystem is ultimately going to be reading it, rather than creating a copy. Both of these files are at different places in your local filesystem, so hopefully the system consuming this should be able to read directly from ${path.module}/code.zip just as it is able to read from ${path.cwd}/code_elasticsearch_proxy.zip.

Call to function "file" failed: no file exists - Terraform

I have a template_file data source in my Terraform code which has a variable whose value should be read from a file, as below:
data "template_file" "post_sql"{
template = "${file("/home/user/setup_template.yaml")}"
vars = {
name1= data.azurerm_storage_account.newazure_storage_data.name
name2="${file("/home/user/${var.newname}loca.txt")}"
}
}
That file gets generated in the middle of other tasks, but Terraform looks for it right at the start of the apply stage. I have even tried adding depends_on, to no avail; it throws the error below:
Call to function "file" failed: no file exists at /home/user/newnamerloca.txt.
How can I make this work? Any help on this would be appreciated.
The reason for the behavior you are seeing is included in the documentation for the file function:
This function can be used only with files that already exist on disk at the beginning of a Terraform run. Functions do not participate in the dependency graph, so this function cannot be used with files that are generated dynamically during a Terraform operation. We do not recommend using dynamic local files in Terraform configurations, but in rare situations where this is necessary you can use the local_file data source to read files while respecting resource dependencies.
The file function is intended for reading files that are included on disk as part of the configuration, typically in the same directory as the .tf file that refers to them, and using the path.module symbol to specify the path like this:
file("${path.module}/example.tmpl")
Your question doesn't explain why you are reading files from a user's home directory rather than from the current module configuration directory, or why one of the files doesn't exist before you run Terraform, so it's hard to give a specific suggestion on how to proceed. The documentation offers the local_file data source as a possible alternative, but it may not be the best approach depending on your goals. In particular, reading files on local disk from outside of the current module is often indicative of using Terraform for something outside of its intended scope, and so it may be most appropriate to use a different tool altogether.
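If you do go that route, a minimal sketch of that alternative might look like the following; null_resource.generate_file is a hypothetical resource standing in for whatever creates the file during apply:
data "local_file" "loca" {
  filename = "/home/user/${var.newname}loca.txt"

  # Defer reading this file until the object that creates it has been applied.
  depends_on = [null_resource.generate_file]
}
You would then reference data.local_file.loca.content in the template vars instead of calling file().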
Try "cat /home/user/newnamerloca.txt" and see if this file is actually in there.
Edit: Currently there is no workaround for this; "data" resources are read at the start of plan/apply, so the file needs to be present in order to use them.
Data resources have the same dependency resolution behavior as defined for managed resources. Setting the depends_on meta-argument within data blocks defers reading of the data source until after all changes to the dependencies have been applied.
NOTE: In Terraform 0.12 and earlier, due to the data resource behavior of deferring the read until the apply phase when depending on values that are not yet known, using depends_on with data resources will force the read to always be deferred to the apply phase, and therefore a configuration that uses depends_on with a data resource can never converge. Due to this behavior, we do not recommend using depends_on with data resources.
So maybe something like:
data "template_file" "post_sql"{
template = "${file("/home/user/setup_template.yaml")}"
vars = {
name1= data.azurerm_storage_account.newazure_storage_data.name
name2="${file("/home/user/${var.newname}loca.txt")}"
}
depends_on = [null_resource.example1]
}
resource "null_resource" "example1" { # **create the file here**
provisioner "local-exec" {
command = "open WFH, '>completed.txt' and print WFH scalarlocaltime"
interpreter = ["perl", "-e"]
}
}

How best to handle multiple .tfvars files that use common .tf files?

I am going to be managing the CDN configurations of dozens of applications through Terraform. I have a set of .tf files holding all the default constant settings that are common among all configurations and then each application has its own .tfvars file to hold its unique settings.
If I run something like terraform apply --var-file=app1.tfvars --var-file=app2.tfvars --var-file=app3.tfvars then only the last file passed in is used.
Even if this did work, it would become unmanageable when I extend this to more sites.
What is the correct way to incorporate multiple .tfvars files that populate a common set of .tf files?
Edit: I should add that the .tfvars files define the same variables but with different values. I need to declare the state of the resources defined in the .tf files once for each .tfvars file.
I found the best way to handle this case (without any 3rd-party tools) is to use Terraform workspaces and create a separate workspace for each .tfvars file. This way I can use the same common .tf files and simply switch to a different workspace with terraform workspace select <workspace name> before running terraform apply --var-file=<filename> with each individual .tfvars file.
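For example (workspace and file names here are illustrative):
terraform workspace new app1      # create the workspace once per application
terraform apply --var-file=app1.tfvars

terraform workspace new app2
terraform apply --var-file=app2.tfvars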
This should work using process substitution:
terraform apply -var-file=<(cat app1.tfvars app2.tfvars app3.tfvars)
The best way may be to use Terragrunt (https://terragrunt.gruntwork.io/) from Gruntwork, which is a thin wrapper around Terraform; you can use its HCL configuration file to define your requirements.
Sample terragrunt.hcl configuration:
terraform {
  extra_arguments "conditional_vars" {
    commands = [
      "apply",
      "plan",
      "import",
      "push",
      "refresh"
    ]

    required_var_files = [
      "${get_parent_terragrunt_dir()}/terraform.tfvars"
    ]

    optional_var_files = [
      "${get_parent_terragrunt_dir()}/${get_env("TF_VAR_env", "dev")}.tfvars",
      "${get_parent_terragrunt_dir()}/${get_env("TF_VAR_region", "us-east-1")}.tfvars",
      "${get_terragrunt_dir()}/${get_env("TF_VAR_env", "dev")}.tfvars",
      "${get_terragrunt_dir()}/${get_env("TF_VAR_region", "us-east-1")}.tfvars"
    ]
  }
}
You can pass down tfvars this way, and you can get more out of Terragrunt by organising your Terraform layout better and using its configuration file to pass tfvars from different locations.

terraform.tfvars vs variables.tf difference [duplicate]

This question already has answers here:
What is the difference between variables.tf and terraform.tfvars?
(5 answers)
Closed 3 years ago.
I've been researching this but can't find the distinction. A variables.tf file can store variable defaults/values, like a terraform.tfvars file.
What's the difference between these two, and when do you need one over the other? My understanding is that you pass the var file in as an argument to Terraform via the command line.
There is a thread about this already, and the only benefit seems to be passing in the tfvars file as an argument, since you can "potentially" assign values to variables in a variables.tf file as well.
Is this the correct thinking?
The distinction between these is of declaration vs. assignment.
variable blocks (which can actually appear in any .tf file, but are in variables.tf by convention) declare that a variable exists:
variable "example" {}
This tells Terraform that this module accepts an input variable called example. Stating this makes it valid to use var.example elsewhere in the module to access the value of the variable.
There are several different ways to assign a value to this input variable:
Include -var options on the terraform plan or terraform apply command line.
Include -var-file options to select one or more .tfvars files to set values for many variables at once.
Create a terraform.tfvars file, or files named .auto.tfvars, which are treated the same as -var-file arguments but are loaded automatically.
For a child module, include an expression to assign to the variable inside the calling module block.
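For example, the same variable could be assigned from a terraform.tfvars file or, for a child module, inside the calling module block (the module name and values here are illustrative):
# terraform.tfvars (applies to root module variables only)
example = "value-from-tfvars"

# in the calling module's configuration (for a child module)
module "child" {
  source  = "./modules/child"
  example = "value-from-parent"
}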
A variable can optionally be declared with a default value, which makes it optional. Variable defaults are used for situations where there's a good default behavior that would work well for most uses of the module/configuration, while still allowing that behavior to be overridden in exceptional cases.
The various means for assigning variable values are for dealing with differences. What that means will depend on exactly how you are using Terraform, but for example if you are using the same configuration multiple times to deploy different "copies" of the same infrastructure (environments, etc) then you might choose to have a different .tfvars file for each of these copies.
Because terraform.tfvars and .auto.tfvars are automatically loaded without any additional options, they behave similarly to defaults, but the intent of these is different. When running Terraform in automation, some users have their automation generate a terraform.tfvars file or .auto.tfvars just before running Terraform in order to pass in values the automation knows, such as what environment the automation is running for, etc.
The difference between the automatically-loaded .tfvars files and variable defaults is more clear when dealing with child modules. .tfvars files (and -var, -var-file options) only apply to the root module variables, while variable defaults apply when that module is used as a child module too, meaning that variables with defaults can be omitted in module blocks.
A variables.tf file is used to define the variables' types and optionally set default values.
A terraform.tfvars file is used to set the actual values of the variables.
You could set default values for all your variables and not use tfvars files at all.
The objective of splitting the definitions from the values is to allow a common infrastructure design to be defined once and then apply specific values per environment.
Using multiple tfvars files passed as arguments allows you to set different values per environment: secrets, VM size, number of instances, etc.
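For example (variable names and values are illustrative):
# dev.tfvars
vm_size        = "Standard_B1s"
instance_count = 1

# prod.tfvars
vm_size        = "Standard_D4s_v3"
instance_count = 5
Then terraform apply -var-file=prod.tfvars applies the production values to the same .tf files.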
