Terraform upgrade and multiple versions.tf, do child modules inherit the provider versions? - terraform

Not very experienced with Terraform. I upgraded my project from 0.12 to 0.13 and am looking to upgrade it to 0.14 afterwards.
As the documentation specifies, I ran terraform 0.13upgrade and terraform 0.13upgrade module, and my directory became like this:
terraform
├── module
│   ├── main.tf
│   └── versions.tf
├── main.tf
└── versions.tf
I moved my version constraints over, but the problem is that I specified them only in the root versions.tf:
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "~> 3.8.0"
    }
  }
}
In module/versions.tf I kept only:
terraform {
  required_providers {
    google = {
      source = "hashicorp/google"
    }
  }
}
Do child modules inherit the version from the root (where they are imported), or does this mean my module will automatically run with a more recent provider version (3.64, I think)?
Should I simply remove module/versions.tf? (It's tiresome to have two versions.tf files to edit every time.)
Thanks!

For a given Terraform configuration (which includes both the root module and any other modules you might call), there can be only one version of each provider. Terraform recognizes two providers as "the same" when they have the same source value after normalization. In your examples both use hashicorp/google, which is short for registry.terraform.io/hashicorp/google, so both modules need to agree on a particular version of that provider to use.
Terraform handles version constraints from multiple modules by combining them and trying to honor all of them at once. In your examples you've written no version argument in the child module, which means "any version is allowed".
Terraform will therefore look for an available provider version that matches both the ~> 3.8.0 constraint and the implied "any version" constraint. Since ~> 3.8.0 is a proper subset of "any version", it effectively takes priority over the open constraint in the child module. This is not strictly "inheritance", but it happens to behave somewhat like it in this case because the child module is totally unconstrained.
A more interesting example is one where the two modules specify different version constraints, so we can see the effect of combining them. Let's pretend that your two modules had the following requirements instead:
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "~> 3.8.2"
    }
  }
}
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = ">= 3.8.5"
    }
  }
}
In this situation neither of these constraints is a subset of the other, but they do have some overlap: all of the 3.8.x versions from 3.8.5 onwards are acceptable to both modules. Therefore Terraform will select the newest available version from that set.
If you write two modules that have conflicting version constraints then that would be an error:
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "~> 3.7.0"
    }
  }
}
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "~> 3.8.5"
    }
  }
}
There is no provider version that is both a 3.7.x release and a 3.8.x release at the same time, so no release can ever match both of these constraints, and provider version selection will fail. It's for this reason that the Terraform documentation section Best Practices for Provider Versions advises using ~> version constraints only in the root module, with shared modules declaring only the minimum provider version they are known to work with.
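Applied to the original question, a minimal sketch of that recommendation (the exact version numbers here are only illustrative) keeps the upper-bounded constraint in the root and only a minimum in the child module:
# root versions.tf: pins the acceptable range for the whole configuration
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "~> 3.8.0"
    }
  }
}

# module/versions.tf: declares only what the module is known to need
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = ">= 3.8"
    }
  }
}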

Related

Terraform, Archive failed due to missing directory

I have been working on Terraform using Azure DevOps. Before that I developed the .tf files in VS Code and everything worked fine. When I try to move the files from VS Code to Azure DevOps, I get an issue on the archive source file path: it is unable to find the directory. I have searched everywhere but have been unable to resolve this.
The path that worked fine in VS Code was "…/Folder name". I am using the same path in Azure DevOps, since I uploaded the complete folder that I built in VS Code, but archiving the files always fails because it is unable to find the directory.
[Code Block DevOps]
terraform {
  required_providers {
    azurerm = {
      source = "hashicorp/azurerm"
      # Root module should specify the maximum provider version
      # The ~> operator is a convenient shorthand for allowing only patch releases within a specific minor release.
      version = "~>2.11"
    }
  }
}

provider "azurerm" {
  features {}
  #skip_provider_registration = true
}

locals {
  location = "uksouth"
}

data "archive_file" "file_function_app" {
  type        = "zip"
  source_dir  = "../BlobToBlobTransferPackage"
  output_path = "blobtoblobtransfer-app.zip"
}

module "windows_consumption" {
  source       = "./modules/fa"
  archive_file = data.archive_file.file_function_app
}

output "windows_consumption_hostname" {
  value = module.windows_consumption.function_app_default_hostname
}
Image of VS Code where everything is working fine:
Image of Azure DevOps showing the missing-directory error:
Folder structure that works fine in VS Code:
It was due to the path, which is fixed now.
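The corrected path isn't shown, but a common way to make archive_file paths independent of the pipeline's working directory (a sketch only, assuming the package folder sits next to the configuration; not necessarily what the author changed) is to anchor them with path.module:
data "archive_file" "file_function_app" {
  type        = "zip"
  # Resolve relative to the directory containing this .tf file rather than
  # whatever working directory the Azure DevOps agent happens to use
  source_dir  = "${path.module}/../BlobToBlobTransferPackage"
  output_path = "${path.module}/blobtoblobtransfer-app.zip"
}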

Using terragrunt generate provider block causes conflicts with require providers block in module

I'm using Terragrunt with Terraform version 0.14.8.
My project uses a monorepo structure, as it is a project requirement to package the Terragrunt files and Terraform modules together in a single package.
Folder structure:
project root:
├── environments
│   └── prd
│       ├── rds-cluster
│       │   └── terragrunt.hcl
│       └── terragrunt.hcl
└── modules
    ├── rds-cluster
    │   ├── README.md
    │   ├── main.tf
    │   ├── output.tf
    │   └── variables.tf
    └── secretsmanager-secret
        ├── README.md
        ├── main.tf
        ├── output.tf
        └── variables.tf
In prd/terragrunt.hcl I define the remote state block and the generate provider block.
remote_state {
  backend = "s3"
  ...
}

generate "provider" {
  path      = "provider.tf"
  if_exists = "overwrite_terragrunt"
  contents  = <<EOF
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}

provider "aws" {
  region = "ca-central-1"
}
EOF
}
In environments/prd/rds-cluster/terragrunt.hcl, I defined the following:
include {
  path = find_in_parent_folders()
}

terraform {
  source = "../../../modules//rds-cluster"
}

inputs = {
  ...
}
In modules/rds-cluster/main.tf, I defined the following:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.0"
    }
  }
}
// RDS related resources...
My problem is that when I try to run terragrunt plan under environments/prd/rds-cluster, I get the following error message:
Error: Duplicate required providers configuration
on provider.tf line 3, in terraform:
3: required_providers {
A module may have only one required providers configuration. The required
providers were previously configured at main.tf:2,3-21.
I can resolve this by declaring the version within the provider block as shown here. However, the version attribute in provider blocks has been deprecated in Terraform 0.13; Terraform recommends the use of the required_providers sub-block under terraform block instead.
Does anyone know what I need to do to use the new required_providers block for my aws provider?
As you've seen, Terraform expects each module to have only one definition of its required providers, which is intended to avoid a situation where it's unclear why Terraform is detecting a particular provider requirement when the declarations are spread among multiple files.
However, to support this sort of piecemeal code generation use-case Terraform has an advanced feature called Override Files which allows you to explicitly mark certain files for a different mode of processing where they selectively override particular definitions from other files, rather than creating entirely new definitions.
The details of this mechanism depend on which block type you're overriding, but the section on Merging terraform blocks discusses the behavior relevant to your particular situation:
If the required_providers argument is set, its value is merged on an element-by-element basis, which allows an override block to adjust the constraint for a single provider without affecting the constraints for other providers.
In both the required_version and required_providers settings, each override constraint entirely replaces the constraints for the same component in the original block. If both the base block and the override block both set required_version then the constraints in the base block are entirely ignored.
The practical implication of the above is that if you have an override file with a required_providers block that includes an entry for the AWS provider then Terraform will treat it as a full replacement for any similar entry already present in a non-override file, but it won't affect other provider requirements entries which do not appear in the override file at all.
Putting all of this together, you should be able to get the result you were looking for by asking Terragrunt to name this generated file provider_override.tf instead of just provider.tf, which will then activate the override file processing behavior and thus allow this generated file to override any existing definition of AWS provider requirements, while allowing the configurations to retain any other provider requirements they might also be defining.
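Concretely, the only change needed should be the file name in the generate block of the parent terragrunt.hcl (a sketch based on the generate block from the question; the contents stay the same):
generate "provider" {
  # Naming the file "provider_override.tf" activates Terraform's override-file
  # merging, so this block replaces only the aws entry of required_providers
  path      = "provider_override.tf"
  if_exists = "overwrite_terragrunt"
  contents  = <<EOF
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}

provider "aws" {
  region = "ca-central-1"
}
EOF
}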

found a virtual manifest at <path> instead of a package manifest

I searched for [rust] "instead of a package manifest" on this site before asking and found no hits. I also read about virtual manifests here, but that did not resolve my question.
My goal is to make changes to azul.
To achieve this I read about patching dependencies here, and now I have this Cargo.toml
[package]
name = "my_first_azul_app"
version = "0.1.0"
authors = ["Name <Email>"]
edition = "2018"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
azul = { git = "https://github.com/maps4print/azul" }
[patch."https://github.com/maps4print/azul"]
azul = { path = "../azul" }
In path ../azul I have checked out the azul project with git clone. In main.rs I have followed this to get,
extern crate azul;

fn main() {
    println!("Hello world!");
}
Then I try to test
$ cargo run
error: failed to resolve patches for `https://github.com/maps4print/azul`
Caused by:
failed to load source for a dependency on `azul`
Caused by:
Unable to update /home/name/projects/azul
Caused by:
found a virtual manifest at `/home/name/projects/azul/Cargo.toml` instead of a package manifest
I do not understand the final caused by line.
If I remove the [patch] configuration, it "works".
I put "works" in quotes because it still fails to compile, but that is why I am trying to check the project out and attempt a fix. What changes do I need to make to develop the azul dependency?
TIA,
It looks like azul is using a Cargo workspace, so if you want to refer to it via path you must point to the exact member(s) of that workspace.
azul's Cargo.toml contains
[workspace]
members = [
    "cargo/azul",
    "cargo/azul-css",
    "cargo/azul-core",
    "cargo/azul-layout",
    "cargo/azul-text-layout",
    "cargo/azul-widgets",
    "cargo/azul-css-parser",
    "cargo/azul-native-style",
]
So I believe you should be able to do something like:
[dependencies]
azul = { path = "../azul/cargo/azul" }
azul-css = { path = "../azul/cargo/azul-css" }
You may need all or some of the members there.

Execution order inside Terraform modules

I had an issue with the execution order of modules in my Terraform scripts. I have raised an issue with the source repo: https://github.com/hashicorp/terraform/issues/18143
Can anyone please help me with this issue here or on GitHub?
Any help will be highly appreciated.
Thanks!
The execution does not wait for the completion of the "vpc" module, but only for the availability of the value module.vpc.vpc_id. For that it is enough to create the aws_vpc resource, so you are not actually telling Terraform to also wait for the consul_keys resource.
To fix this you have to add a dependency from the consul_keys resource to your other modules. This works either by:
Using a value exported by consul_keys in your other modules (either datacenter or var.<name>), as sketched just below, or
Dumping the resources that depend on consul_keys in the same file.
Sadly there is no nice solution for this at the moment, but module dependencies are being worked on.
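As a rough sketch of the first option (the module layout and variable names here are hypothetical, written in the 0.11-era interpolation syntax of the question): export something from consul_keys and thread it through the dependent module, making sure its resources actually reference the value so Terraform records the dependency.
# inside the vpc module: expose a value produced by consul_keys
output "consul_datacenter" {
  value = "${consul_keys.my_keys.datacenter}"
}

# root module: pass the exported value into the other module
module "other" {
  source     = "./other"
  datacenter = "${module.vpc.consul_datacenter}"
}

# inside the other module: a resource that uses the variable now waits for consul_keys
variable "datacenter" {}

resource "aws_instance" "other_res1" {
  # ... other arguments elided
  tags = {
    Datacenter = "${var.datacenter}"
  }
}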
EDIT:
As an example of dumping all resources in the same file:
This does not work because there are no module dependencies:
module "vpc" {
...
}
module "other" {
depends_on=["module.vpc"]
}
vpc module file:
resource "aws_instance" "vpc_res1" {
...
}
resource "consul_keys" "my_keys" {
...
}
other module file:
resource "aws_instance" "other_res1" {
...
}
resource "aws_instance" "other_res2" {
...
}
Putting everything in the same file works. You can also keep the "vpc_res1" resource in a separate module:
resource "consul_keys" "my_keys" {
...
}
resource "aws_instance" "other_res1" {
depends_on = ["consul_keys.my_keys"]
}
resource "aws_instance" "other_res2" {
depends_on = ["consul_keys.my_keys"]
}

Trouble setting terraform variable from CLI

My terraform project layout looks something like:
[project root dir]
  main.tf
  - my_module [directory]:
      variables.tf
      my_instance.tf
In variables.tf I have something like this:
variable "foo-ami" {
default = "ami-12345"
description = "Super awesome AMI"
}
In my_instance.tf I reference it like so:
resource "aws_instance" "my_instance" {
ami = "${var.foo-ami}"
...
}
If I want to override this variable from the command line, the docs here seem to suggest I should be able to run the following command from the top level (main.tf) location:
terraform plan -var 'foo-ami=ami-987654'
However, the variable switch doesn't seem to be getting picked up; the old value remains set. Furthermore, if I remove the default setting I get an error from Terraform saying it's not set, so clearly the -var switch isn't being picked up.
Thoughts?
TIA
The variables of the root module and of my_module are separate; child modules do not automatically see root-level variables. If you want to pass a variable to my_module, you need to declare it in the root module and pass it to the module explicitly.
In the main.tf, you should set the variable as follows:
variable "foo-ami" {}
module "my_module" {
source = "./my_module"
foo-ami = "${var.foo-ami}"
...
}
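With that wiring in place, running terraform plan -var 'foo-ami=ami-987654' from the root directory overrides the root-level variable, and the value flows down into my_module through the module argument.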
For details of the module feature refer to the following documents:
https://www.terraform.io/docs/modules/usage.html
