I'm relatively new to Terraform. My current $employer uses Terraform, and we have init.tf files in each project.
It has:
a terraform block,
provider blocks,
data terraform_remote_state blocks
I want to understand what this file is for.
I don't see any mention of it in documentation/guides for structuring Terraform projects, e.g.:
https://www.terraform.io/language/modules/develop/structure
Terraform folder structure
https://www.digitalocean.com/community/tutorials/how-to-structure-a-terraform-project
https://www.terraform-best-practices.com/code-structure
I do see some (scant) mentions of other people having this file, e.g. https://github.com/hashicorp/terraform/issues/21722
I don't see any mention of it in the terraform code base:
[~/git/terraform] main ± git log -S init.tf
[~/git/terraform] main ±
I also checked "Terraform: Up & Running" (3rd edition); no mention of init.tf.
Did this used to be a convention? What's going on here? :)
Most likely the person who named it init.tf wanted to convey that this file holds what is required to initialize Terraform.
From there you can continue to work by running the usual commands: terraform init, terraform plan, etc.
The norm is to name the file provider.tf, although it doesn't matter how you name Terraform files. It can be whatever.tf, as long as it ends with .tf.
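For illustration only, a file like that typically just groups the boilerplate every root module needs. The backend, provider, and remote-state names below are made up; a rough sketch might look like:
# init.tf (or provider.tf / versions.tf) -- the name is only a convention
terraform {
  required_version = ">= 1.0"

  # hypothetical backend configuration
  backend "s3" {
    bucket = "example-tf-state"
    key    = "my-project/terraform.tfstate"
    region = "us-east-1"
  }

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}

# hypothetical reference to another project's state
data "terraform_remote_state" "network" {
  backend = "s3"
  config = {
    bucket = "example-tf-state"
    key    = "network/terraform.tfstate"
    region = "us-east-1"
  }
}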
I am new to Terraform and its CDK. I am confused about the following:
When I try to run the tf.json generated through cdktf synth using cdktf deploy, terraform plan or terraform apply, the console keeps telling me that all attributes inside access_config are required and emits errors, but I checked the documentation and it says these fields can be optional.
So I want to know: is it a bug, or is the documentation wrong?
If you are checking the correct version of the Terraform documentation and still see these attributes reported as required in tf plan/apply, then you should add them to your config. It might be that the documentation is not up to date.
After discussing with my colleagues, I managed to solve the problem. For access_config, you have to fill in all the attributes, leaving them blank if you don't want to give them any value:
"access_config":[{
"nat_ip":"google_compute_address.some_name.address",
"public_ptr_domain_name":"",
"network_tier":""
}]
access_config is handled differently in Terraform CDK compared to Terraform HCL. In Terraform, where we use HCL to write the configurations, access_config can be left empty, but for Terraform CDK it needs to be populated with attributes, which can themselves be left empty.
I want to extract the list of all the resources modified in the Terraform files in a given PR on GitHub, and pass that list to terraform plan/apply as targets to limit the scope of the apply.
e.g. if a .tf file changed in a PR is as below,
resource "kubernetes_namespace" "sandbox" {
...
}
module "sandbox_base" {
...
}
I would like to extract the resources in all the changed files and pass them to the command as
terraform plan -target=kubernetes_namespace.sandbox -target=module.sandbox_base
So, I need to extract the list of targets to pass from the changed .tf files. I have a way to get the list using 2 to 3 tasks in the GitHub Action, such as getting the list of changed files in the PR, parsing the list, and passing it on to another task to get the modules changed in the .tf files. However, I'm looking for an existing script, if anyone has one, so as not to reinvent the wheel and to save time.
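For what it's worth, a rough, untested sketch of the extraction with plain git, grep and sed (assuming the workflow checks out the full history, the base branch is main, and the top-level resource/module blocks start at the beginning of a line as in the example above):
# list the .tf files changed between the base branch and the PR head
changed_files=$(git diff --name-only origin/main...HEAD -- '*.tf')

targets=""
for f in $changed_files; do
  # resource "kubernetes_namespace" "sandbox"  ->  -target=kubernetes_namespace.sandbox
  targets+=$(grep -E '^resource "' "$f" | sed -E 's/^resource "([^"]+)" "([^"]+)".*/ -target=\1.\2/' | tr -d '\n')
  # module "sandbox_base"                      ->  -target=module.sandbox_base
  targets+=$(grep -E '^module "' "$f" | sed -E 's/^module "([^"]+)".*/ -target=module.\1/' | tr -d '\n')
done

terraform plan $targets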
I have a Terraform config that essentially wraps Kubespray, basically a set of Ansible playbooks. Much of the Kubernetes cluster configuration is stored in YAML files. In some respects, embedding calls to things such as Perl in provisioners would be the easiest way to substitute variables into these files. That leaves things such as the Terraform template function: I have taken YAML files, k8s-cluster.yml for example, and turned them into template files. The problem with this is that if the YAML file changes in the upstream GitHub repo for Kubespray, I have to recreate the template file, which is not a brilliant way of doing things. Presuming that other people must have faced this issue, what is the most elegant way of dealing with YAML configurations that can change in this way?
The best solution I have come up with to date is this. Replacing the hard-coding with variables would be advisable, but it works:
data "http" "k8s_cluster_yml" {
url = "https://raw.githubusercontent.com/kubernetes-sigs/kubespray/master/inventory/sample/group_vars/k8s_cluster/k8s-cluster.yml"
}
resource "local_file" "k8s_cluster_yml" {
content = replace(data.http.k8s_cluster_yml.body, "/kube_version: v[0-9].[0-9][0-9].[0-9]/", "kube_version: v1.22.5")
filename = "./k8s-cluster.yaml"
}
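An alternative sketch, assuming the upstream file stays valid YAML: decode it, merge in your overrides, and re-encode. Note that yamlencode drops the original comments and key order, which may or may not matter to you:
locals {
  # parse the upstream defaults fetched above
  upstream_cfg = yamldecode(data.http.k8s_cluster_yml.body)

  # values to override in the upstream defaults
  overrides = {
    kube_version = "v1.22.5"
  }
}

resource "local_file" "k8s_cluster_yml_merged" {
  content  = yamlencode(merge(local.upstream_cfg, local.overrides))
  filename = "./k8s-cluster.yaml"
}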
I have a template_file data source in my Terraform code, which has a variable whose value is to be picked up from a file, like below:
data "template_file" "post_sql"{
template = "${file("/home/user/setup_template.yaml")}"
vars = {
name1= data.azurerm_storage_account.newazure_storage_data.name
name2="${file("/home/user/${var.newname}loca.txt")}"
}
}
This file gets generated in the middle of the run, but Terraform looks for it at the start of the apply stage itself. I have even tried adding depends_on, to no avail; it throws the error below:
Call to function "file" failed: no file exists at
/home/user/newnamerloca.txt.
How can I make this work? Any help on this would be appreciated.
The reason for the behavior you are seeing is included in the documentation for the file function:
This function can be used only with files that already exist on disk at the beginning of a Terraform run. Functions do not participate in the dependency graph, so this function cannot be used with files that are generated dynamically during a Terraform operation. We do not recommend using dynamic local files in Terraform configurations, but in rare situations where this is necessary you can use the local_file data source to read files while respecting resource dependencies.
The file function is intended for reading files that are included on disk as part of the configuration, typically in the same directory as the .tf file that refers to them, and using the path.module symbol to specify the path like this:
file("${path.module}/example.tmpl")
Your question doesn't explain why you are reading files from a user's home directory rather than from the current module configuration directory, or why one of the files doesn't exist before you run Terraform, so it's hard to give a specific suggestion on how to proceed. The documentation offers the local_file data source as a possible alternative, but it may not be the best approach depending on your goals. In particular, reading files on local disk from outside of the current module is often indicative of using Terraform for something outside of its intended scope, and so it may be most appropriate to use a different tool altogether.
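For completeness, a minimal sketch of the local_file data source approach the documentation mentions; the null_resource, its command, and the path here are placeholders for whatever actually generates your file:
# hypothetical step that produces the file during the run
resource "null_resource" "generate_loca" {
  provisioner "local-exec" {
    command = "echo 'generated content' > ${path.module}/generated-loca.txt"
  }
}

# deferred read: only happens after null_resource.generate_loca is applied
data "local_file" "loca" {
  filename   = "${path.module}/generated-loca.txt"
  depends_on = [null_resource.generate_loca]
}

# the content is then available as data.local_file.loca.content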
Try "cat /home/user/newnamerloca.txt" and see if this file is actually in there.
Edit: Currently there is no workaround this, "data" resources are applied at the start of plan/apply thus need to be present in order to use them
Data resources have the same dependency resolution behavior as defined for managed resources. Setting the depends_on meta-argument within data blocks defers reading of the data source until after all changes to the dependencies have been applied.
NOTE: In Terraform 0.12 and earlier, due to the data resource behavior of deferring the read until the apply phase when depending on values that are not yet known, using depends_on with data resources will force the read to always be deferred to the apply phase, and therefore a configuration that uses depends_on with a data resource can never converge. Due to this behavior, we do not recommend using depends_on with data resources.
So maybe something like:
data "template_file" "post_sql"{
template = "${file("/home/user/setup_template.yaml")}"
vars = {
name1= data.azurerm_storage_account.newazure_storage_data.name
name2="${file("/home/user/${var.newname}loca.txt")}"
}
depends_on = [null_resource.example1]
}
resource "null_resource" "example1" { # **create the file here**
provisioner "local-exec" {
command = "open WFH, '>completed.txt' and print WFH scalarlocaltime"
interpreter = ["perl", "-e"]
}
}
I am going to be managing the CDN configurations of dozens of applications through Terraform. I have a set of .tf files holding all the default constant settings that are common among all configurations and then each application has its own .tfvars file to hold its unique settings.
If I run something like terraform apply --var-file=app1.tfvars --var-file=app2.tfvars --var-file=app3.tfvars then only the last file passed in is used.
Even if this did work, it would become unmanageable when I extend this to more sites.
What is the correct way to incorporate multiple .tfvars files that populate a common set of .tf files?
Edit: I should add that the .tfvars files define the same variables but with different values. I need to declare the state of the resources defined in the .tf files once for each .tfvars file.
I found the best way to handle this case (without any 3rd party tools) is to use Terraform workspaces and create a separate workspace for each .tfvars file. This way I can use the same common .tf files and simply switch to a different workspace with terraform workspace select <workspace name> before running terraform apply --var-file=<filename> with each individual .tfvars file.
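A sketch of that flow, assuming the workspaces are named after the applications:
terraform workspace new app1                  # once per application
terraform workspace select app1
terraform apply --var-file=app1.tfvars

terraform workspace select app2
terraform apply --var-file=app2.tfvars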
This should work using process substitution:
terraform apply -var-file=<(cat app1.tfvars app2.tfvars app3.tfvars)
The best way may be to use Terragrunt (https://terragrunt.gruntwork.io/) from Gruntwork, a thin wrapper around Terraform; you can use its HCL configuration file to define your requirements.
Sample terragrunt.hcl configuration:
terraform {
  extra_arguments "conditional_vars" {
    commands = [
      "apply",
      "plan",
      "import",
      "push",
      "refresh"
    ]

    required_var_files = [
      "${get_parent_terragrunt_dir()}/terraform.tfvars"
    ]

    optional_var_files = [
      "${get_parent_terragrunt_dir()}/${get_env("TF_VAR_env", "dev")}.tfvars",
      "${get_parent_terragrunt_dir()}/${get_env("TF_VAR_region", "us-east-1")}.tfvars",
      "${get_terragrunt_dir()}/${get_env("TF_VAR_env", "dev")}.tfvars",
      "${get_terragrunt_dir()}/${get_env("TF_VAR_region", "us-east-1")}.tfvars"
    ]
  }
}
You can pass down tfvars this way, and you can get more out of Terragrunt by organising your Terraform layout better and using its configuration file to pass tfvars from different locations.
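With a layout like the one above, selecting which optional var files get picked up is then just a matter of setting the environment variables, for example:
TF_VAR_env=prod TF_VAR_region=us-east-1 terragrunt apply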