Avoid Race Condition between Resources in a Terraform Module - terraform

I have these two resource definitions in a terraform module:
resource "null_resource" "install_dependencies" {
provisioner "local-exec" {
command = "pip install -r requirements.txt -t . ; exit"
}
triggers = {
dependencies_versions = filemd5(".${var.package_requirements_path}")
}
}
data "archive_file" "lambda_source_package" {
type = "zip"
source_dir = ".${var.source_directory}"
output_path = var.zipped_lambda
}
The null resource installs a host of dependencies, which are then archived by lambda_source_package. However, I find that data "archive_file" "lambda_source_package" executes before all of the pip packages are installed.
To solve this, I used the depends_on meta-argument. I realized this doesn't solve the issue either, since Terraform isn't able to find the zip file generated by data "archive_file" "lambda_source_package" before it completes execution. The output error message is:
│ Error: unable to load "../outputs/xx.zip": open ../outputs/xx.zip: no such file or directory
The documentation says that the depends_on argument can "create more conservative plans that replace more resources than necessary", and hence it is not advisable to use, especially when creating a module (which is my use case).
The documentation also suggests: "Instead of depends_on, we recommend using expression references to imply dependencies when possible." What does this mean? How can I apply it to my use case?
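An expression reference means that one resource's arguments mention the other resource directly, so Terraform infers the ordering from the expression itself rather than from an explicit depends_on, and it only creates the narrow dependency you actually need. A minimal sketch against the configuration above (the replace() trick is an assumption, not anything in your code): the null resource's id never occurs in the output path, so replace() returns the path unchanged, but the reference forces Terraform to finish the pip install before it evaluates the data source and builds the zip.

data "archive_file" "lambda_source_package" {
  type       = "zip"
  source_dir = ".${var.source_directory}"

  # Referencing null_resource.install_dependencies.id creates the implicit
  # dependency; since the id does not appear in var.zipped_lambda, replace()
  # leaves the path untouched.
  output_path = replace(var.zipped_lambda, null_resource.install_dependencies.id, "")
}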

Related

Upgrade the version of a Terraform provider

I want to deploy this portion of code for Azure Front Door:
resource "azurerm_cdn_frontdoor_profile" "example" {
  name                = "example-cdn-profile"
  resource_group_name = "frdoor-p-to-01"
  sku_name            = "Standard_AzureFrontDoor"
  tags = {
    environment = "Production"
  }
}
But when I run the pipeline, I get this error:
│ Error: Failed to query available provider packages
│
│ Could not retrieve the list of available versions for provider
│ hashicorp/azurerm: no available releases match the given constraints ~>
│ 3.27.0, 3.28.0
I use a VM-hosted agent (Linux).
I tried changing the provider file to another version, but I get the same error.
Does someone have a solution, please?
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      #version = "~> 3.27.0"
      version = "=3.28.0"
      #version = "=2.97.0"
    }
  }
}
I use a VM-hosted agent where Terraform is installed, and an Azure DevOps pipeline.
This error suggests that you have two modules in your configuration with contradictory version constraints: it isn't possible to satisfy both ~> 3.27.0 (which allows only 3.27.x releases) and exactly 3.28.0 at the same time.
You can use the terraform providers command to learn which version constraints are specified in which of your modules. To proceed with this change you will need to make sure that there is at least one version that all of your modules declare that they are compatible with.
To avoid situations like this, it's better to only use lower-bound version constraints >= in your shared modules, declaring the minimum version that the module has been tested with, and leave the upper bound unspecified unless you already know that the module is incompatible with a later provider version. This means you can then gradually upgrade the provider without having to update the version constraints across all of your modules in lockstep.
The dependency lock file is the correct place to record exact version selections, and Terraform generates that automatically during terraform init so you do not need to edit it yourself. If you are using only >= constraints in your modules then you can run terraform init -upgrade whenever you wish to upgrade to the latest compatible version of each provider.
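For example, a shared module would declare only the minimum version it has been tested with (a sketch using the azurerm provider from the question; the exact lower bound is illustrative):

terraform {
  required_providers {
    azurerm = {
      source = "hashicorp/azurerm"
      # Lower bound only: the oldest release this module is known to work
      # with. No upper bound, so callers can adopt newer releases without
      # editing every module in lockstep.
      version = ">= 3.27.0"
    }
  }
}

The exact version actually selected is then pinned in .terraform.lock.hcl, and terraform init -upgrade moves it forward when you decide to upgrade.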

Terraform null_resource trigger when file missing

I am using a null_resource provisioner to copy source files into a build folder and run package installation. Then I use archive_file to create a zip that is used for a serverless function deployment. The current trigger is an output_sha of a zip file I create only for the purpose of detecting a change to the source. All works well except when the copy of the source gets deleted from the build folder. At that point Terraform assumes that the source has not changed, the null_resource is not replaced, and therefore no files end up in the build folder.
Is there a way to trigger the null_resource when the file is missing, as well as when the source changes?
Terraform version 1.2.8
Here is a sample of what is happening:
data "archive_file" "change_check" {
type = "zip"
source_dir = local.src_dir
output_path = "${local.change_check_path}/${var.lambda_name}.zip"
}
resource "null_resource" "dependencies" {
triggers = {
src_hash = data.archive_file.change_check.output_sha
}
provisioner "local-exec" {
command = <<EOT
rm -rf ${local.build_dir};
cp -R ${local.src_dir} ${local.build_dir};
EOT
}
provisioner "local-exec" {
command = "pip install -r ${local.build_dir}/requirements.txt -t ${local.build_dir}/python --upgrade"
}
}
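One hedged possibility, not something Terraform provides out of the box: add a second trigger that also changes when the build copy is missing. fileexists() is evaluated at plan time, and uuid() produces a fresh value on every run, so the resource is replaced whenever the file has disappeared. The requirements.txt path is just an illustrative sentinel; any file that should always exist in the build folder works:

resource "null_resource" "dependencies" {
  triggers = {
    src_hash = data.archive_file.change_check.output_sha
    # Stays "present" while the build copy exists; once it is gone, uuid()
    # yields a new value on each plan, forcing replacement so the
    # provisioners repopulate the build folder.
    build_present = fileexists("${local.build_dir}/requirements.txt") ? "present" : uuid()
  }
  provisioner "local-exec" {
    command = <<-EOT
      rm -rf ${local.build_dir};
      cp -R ${local.src_dir} ${local.build_dir};
    EOT
  }
  provisioner "local-exec" {
    command = "pip install -r ${local.build_dir}/requirements.txt -t ${local.build_dir}/python --upgrade"
  }
}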

Terraform: Attempting to use a "null_resource" fails

I have a working terraform script which I would like to add a null_resource local_exec command to. But when I do, it fails. Here's the block:
resource "null_resource" "es_lincoln" {
provisioner "local-exec" {
command = "echo $(pwd) > somefile.txt"
}
}
and when I uncomment it and try to execute a plan, I get this error:
Error: Could not load plugin
Plugin reinitialization required. Please run "terraform init".
Plugins are external binaries that Terraform uses to access and manipulate
resources. The configuration provided requires plugins which can't be located,
don't satisfy the version constraints, or are otherwise incompatible.
Terraform automatically discovers provider requirements from your
configuration, including providers used in child modules. To see the
requirements and constraints, run "terraform providers".
Failed to instantiate provider "registry.terraform.io/hashicorp/null" to
obtain schema: unknown provider "registry.terraform.io/hashicorp/null"
After googling for a while, I can't seem to find anyone else with this problem. Why is my Terraform attempting to match "null_resource" with a provider when it should just run its local-exec provisioner?
terraform init is required to run again whenever your provider requirements or your modules change.
Providers are required for each resource type; in this case, null_resource requires the null provider.
If you change your code in such a way that a new provider is required, as here where you uncommented your resource to make it active, then terraform init needs to run again. You can see the changes in the .terraform directory inside the Terraform script directory.
It should be noted that if you add a second null_resource, you would not have to run terraform init again -- unless you add a module ;-)
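For reference, the provider that the uncommented block pulls in can also be declared explicitly. Terraform infers it automatically, so this is optional, and the version bound below is illustrative:

terraform {
  required_providers {
    null = {
      source  = "hashicorp/null"
      # Illustrative constraint; any recent release provides null_resource.
      version = ">= 3.0"
    }
  }
}

Either way, running terraform init again after uncommenting the resource downloads hashicorp/null, and the plan then proceeds.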

Terraform clone git repo at plan or init stage

Context:
I am building API Gateway with OpenAPI Specifications 3.0 using terraform. I have got the api-spec.yaml file in a different repo from the terraform code. So, here's what I have done so far.
Using null_resource to clone the repo at the desired location
resource "null_resource" "clone-spec-file" {
provisioner "local-exec" {
command = "git clone https://gitlab.com/dShringi/openapi-spec.git"
}
}
Using the cloned api-spec file while creating the API Gateway resource
data "template_file" swagger {
template = file("./openapi-spec/api-spec.yaml")
depends_on = ["null_resource.clone-spec-file"]
}
Problem:
The script fails at terraform plan because, although I have used depends_on with template_file, the repo is not actually cloned at the plan stage; Terraform checks whether the file is present and fails with file not found at template = file("./openapi-spec/api-spec.yaml").
Will appreciate any thoughts on how this can best be handled, thanks.
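One hedged direction, since data sources are read at plan time while provisioners only run during apply: fetch the file with the http data source instead of cloning. This assumes the repo can serve the raw file over HTTPS; the branch name and raw-URL shape below are guesses:

# Hypothetical raw URL; GitLab serves files at /-/raw/<branch>/<path>.
data "http" "api_spec" {
  url = "https://gitlab.com/dShringi/openapi-spec/-/raw/main/api-spec.yaml"
}

# The fetched content replaces the file() call, and the expression reference
# makes depends_on unnecessary. (On http provider versions before 3.0 the
# attribute is named body rather than response_body.)
data "template_file" "swagger" {
  template = data.http.api_spec.response_body
}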

How to copy in additional files during "terraform get" that reside outside of the module directory?

The Hashicorp Consul repository contains a Terraform module for launching a Consul cluster in AWS. The module references several files that are found in the parent directory of the module, under the shared/scripts directory: https://github.com/hashicorp/consul/tree/master/terraform
However, when I reference the module in one of my .tf files and run terraform get to download it, the required files under shared/scripts/ are not included with the downloaded module, leading to errors like the one described here.
My module section in Terraform looks like this:
module "consul" {
source = "github.com/hashicorp/consul/terraform/aws"
key_name = "example_key"
key_path = "/path/to/example_key"
region = "us-east-1"
servers = "3"
platform = "centos7"
}
Is there any way to have terraform get pull in files that live outside the module directory?
Thanks
From looking at what those files do, I'd just copy the ones you need (depending on whether you're deploying on Debian or RHEL), which will be two or three files, and feed them into provisioner "file":
https://www.terraform.io/docs/provisioners/file.html
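A sketch of that suggestion, with hypothetical values for the AMI and script path; the key name, region, and CentOS platform come from the question:

resource "aws_instance" "consul_server" {
  ami           = "ami-12345678"   # hypothetical CentOS 7 AMI in us-east-1
  instance_type = "t2.micro"
  key_name      = "example_key"

  connection {
    type        = "ssh"
    user        = "centos"
    private_key = file("/path/to/example_key")
    host        = self.public_ip
  }

  # Push a locally vendored copy of the shared script to the instance instead
  # of relying on terraform get to fetch it alongside the module.
  provisioner "file" {
    source      = "scripts/install.sh"
    destination = "/tmp/install.sh"
  }
}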
