Generate a file with dynamic content with Terragrunt

I'm really new to Terragrunt.
I was wondering if there is a way to dynamically generate the content of a file.
For example, consider the following piece of code:
generate "provider" {
path = "provider.tf"
if_exists = "overwrite"
contents = <<EOF
terraform {
required_providers {
azurerm = {
source = "azurerm"
version = "=2.49.0"
}
}
}
provider "azurerm" {
features {}
subscription_id = "xxxxxxxxxxxxxxxxx"
}
EOF
}
Is there a way to set values such as subscription_id dynamically? I've tried using something like ${local.providers.subscription_id} but it doesn't work:
provider "azurerm" {
features {}
subscription_id = "${local.providers.subscription_id}"
}

What you have there should work exactly as is, so long as you define the local in the same scope. Just tested the following with Terragrunt v0.28.24.
In common.hcl, a file located in some parent directory (but still in the same Git repo):
locals {
  providers = {
    subscription_id = "foo"
  }
}
In your terragrunt.hcl:
locals {
  common_vars = read_terragrunt_config(find_in_parent_folders("common.hcl"))
}
generate "provider" {
path = "provider.tf"
if_exists = "overwrite"
contents = <<EOF
terraform {
required_providers {
azurerm = {
source = "azurerm"
version = "=2.49.0"
}
}
}
provider "azurerm" {
features {}
subscription_id = "${local.common_vars.locals.providers.subscription_id}"
}
EOF
}
After I run terragrunt init, the provider.tf is generated with the expected contents:
provider "azurerm" {
features {}
subscription_id = "foo"
}
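As an aside, if the value comes from an environment variable rather than a shared HCL file, Terragrunt's built-in get_env() function can be interpolated in the heredoc the same way. A minimal sketch, assuming the subscription ID is exported as ARM_SUBSCRIPTION_ID (the variable name is only an example):

generate "provider" {
  path      = "provider.tf"
  if_exists = "overwrite"
  contents  = <<EOF
provider "azurerm" {
  features {}
  subscription_id = "${get_env("ARM_SUBSCRIPTION_ID", "")}"
}
EOF
}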


Terraform Multi-providers VS explicit passing within Module

I have seen similar questions, but the answers there addressed formatting or workarounds that weren't very "clean". I will try to summarize my issue and hopefully get a lean/clean solution. Thank you in advance!
I am creating AKS namespaces via the Kubernetes provider in Terraform. Since I have three clusters, I want to be able to control which provider is used to create each namespace, e.g. dev / prod.
Folder structure
.terraform
├───modules
│   ├───namespace.tf
│   └───module_providers.tf
└───main-deploy
    ├───main.tf
    └───main_provider.tf
My Module // namespace.tf
# Create Namespace
resource "kubernetes_namespace" "namespace-appteam" {
  metadata {
    annotations = {
      name = var.application_name
    }
    labels = {
      appname = var.application_name
    }
    name = var.application_name
  }
}
My main.tf file
module "appteam-test" {
source = "../modules/aks-module"
application_name = "dev-app"
providers = {
kubernetes.dev = kubernetes.dev
kubernetes.prod = kubernetes.prod
}
}
Now I have passed two providers in the main.tf module block. How do I control whether the resource I am creating in the namespace.tf file uses the Dev or the Prod provider? In short, how does the module know which resource to create with which provider if several are passed?
Note: I have required_providers defined in module_providers.tf and the providers in the main_provider.tf file.
module_providers.tf
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "3.20.0"
    }
    azuread = {
      source  = "hashicorp/azuread"
      version = "2.27.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "2.12.1"
    }
  }
}
main_provider.tf
provider "azuread" {
alias = "AD"
}
provider "azurerm" {
alias = "default"
features {}
}
provider "kubernetes" {
alias = "dev"
}
provider "kubernetes" {
alias = "prod"
}
You need to add an alias to all of your providers.
provider "kubernetes" {
alias = "dev"
}
provider "kubernetes" {
alias = "stage"
}
In your main.tf file, pass the providers to the module like the following:
providers = {
  kubernetes.dev   = kubernetes.dev
  kubernetes.stage = kubernetes.stage
}
Now, in the module_providers.tf file, you need to pass configuration_aliases:
kubernetes = {
  source                = "hashicorp/kubernetes"
  version               = "2.12.1"
  configuration_aliases = [kubernetes.dev, kubernetes.stage]
}
Once all the configuration is in place, you can specify the provider explicitly for the resources you want. Your namespace.tf file will look like this:
resource "kubernetes_namespace" "namespace-appteam-1" {
provider = kubernetes.dev
metadata {
annotations = {
name = var.application_name
}
labels = {
appname = var.application_name
}
name = var.application_name
}
}
resource "kubernetes_namespace" "namespace-appteam-2" {
provider = kubernetes.stage
metadata {
annotations = {
name = var.application_name
}
labels = {
appname = var.application_name
}
name = var.application_name
}
}
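A possible alternative, sketched under the assumption that each module call only needs to talk to one cluster: drop configuration_aliases and the per-resource provider arguments inside the module, and map the module's single default kubernetes provider per call instead (the module instance names here are illustrative):

module "appteam-dev" {
  source           = "../modules/aks-module"
  application_name = "dev-app"

  providers = {
    kubernetes = kubernetes.dev
  }
}

module "appteam-stage" {
  source           = "../modules/aks-module"
  application_name = "stage-app"

  providers = {
    kubernetes = kubernetes.stage
  }
}

With that layout, namespace.tf stays provider-agnostic and each environment gets its own module instance.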

How do I use the value of provider default tags in a data source or resource block in terraform?

Below is a small snippet of a set of Terraform scripts I'm trying to build. The goal is to define an IAM policy that will be attached to a new IAM role that I will create.
My problem is that I'm trying to use the Environment tag that I've defined in my AWS provider's default_tags block, but I'm not sure how. The goal is to pull the environment value into the S3 prefix in the IAM policy document instead of having it hard-coded.
Is there a way to do this?
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "4.19.0"
    }
  }
  required_version = ">=1.2.3"
}

provider "aws" {
  default_tags {
    tags = {
      Environment = "dev"
      Application = "myapp"
      Terraform   = "true"
    }
  }
}

data "aws_iam_policy_document" "this" {
  statement {
    sid     = "S3BucketAccess"
    actions = ["s3:*"]
    resources = [
      "${data.aws_s3_bucket.this.arn}/dev"
    ]
  }
}

data "aws_s3_bucket" "this" {
  bucket = "myBucket"
}
A solution without code duplication is to use aws_default_tags:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "4.19.0"
    }
  }
  required_version = ">=1.2.3"
}

provider "aws" {
  default_tags {
    tags = {
      Environment = "dev"
      Application = "myapp"
      Terraform   = "true"
    }
  }
}

# Get the default tags from the provider
data "aws_default_tags" "my_tags" {}

data "aws_iam_policy_document" "this" {
  statement {
    sid     = "S3BucketAccess"
    actions = ["s3:*"]
    resources = [
      "${data.aws_s3_bucket.this.arn}/${data.aws_default_tags.my_tags.tags.Environment}/*"
    ]
  }
}
The solution is to use locals.
Here's what the final solution looks like:
# New locals block
locals {
  common_tags = {
    Environment = "dev"
    Application = "myapp"
    Terraform   = "true"
  }
}

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "4.19.0"
    }
  }
  required_version = ">=1.2.3"
}

provider "aws" {
  # Reference common_tags from locals
  default_tags {
    tags = local.common_tags
  }
}

data "aws_iam_policy_document" "this" {
  # In the resources statement, I replaced the "dev" prefix
  # with the Environment tag value using locals
  statement {
    sid     = "S3BucketAccess"
    actions = ["s3:*"]
    resources = [
      "${data.aws_s3_bucket.this.arn}/${local.common_tags.Environment}/*"
    ]
  }
}

data "aws_s3_bucket" "this" {
  bucket = "myBucket"
}

terraform null_resource frequently triggers change

Using null_resource, I attempt to run kubectl apply on a Kubernetes manifest. I often find that this shows changes to apply for no apparent reason. I'm running Terraform 0.14.8.
data "template_file" "app_crds" {
template = file("${path.module}/templates/app_crds.yaml")
}
resource "null_resource" "app_crds_deploy" {
triggers = {
manifest_sha1 = sha1(data.template_file.app_crds.rendered)
}
provisioner "local-exec" {
command = "kubectl apply -f -<<EOF\n${data.template_file.aws_ingress_controller_crds.rendered}\nEOF"
}
}
terraform plan output
# module.system.null_resource.app_crds_deploy must be replaced
-/+ resource "null_resource" "app_crds_deploy" {
      ~ id       = "698690821114034664" -> (known after apply)
      ~ triggers = {
          - "manifest_sha1" = "9a4fc962fe92c4ff04677ac12088a61809626e5a"
        } -> (known after apply) # forces replacement
    }
However, this SHA is indeed in the state file:
[I] ➜ terraform state pull | grep 9a4fc962fe92c4ff04677ac12088a61809626e5a
"manifest_sha1": "9a4fc962fe92c4ff04677ac12088a61809626e5a"
I would recommend using the kubernetes_manifest resource from the Terraform Kubernetes provider. Using the provider won't require the host to have kubectl installed, and it will be far more reliable than the null_resource, as you are seeing. The provider docs have an example specifically for CRDs. Here is the Terraform snippet from that example:
resource "kubernetes_manifest" "test-crd" {
manifest = {
apiVersion = "apiextensions.k8s.io/v1"
kind = "CustomResourceDefinition"
metadata = {
name = "testcrds.hashicorp.com"
}
spec = {
group = "hashicorp.com"
names = {
kind = "TestCrd"
plural = "testcrds"
}
scope = "Namespaced"
versions = [{
name = "v1"
served = true
storage = true
schema = {
openAPIV3Schema = {
type = "object"
properties = {
data = {
type = "string"
}
refs = {
type = "number"
}
}
}
}
}]
}
}
}
You can keep your k8s YAML template and feed it to kubernetes_manifest like this:
data "template_file" "app_crds" {
template = file("${path.module}/templates/app_crds.yaml")
}
resource "kubernetes_manifest" "test-configmap" {
manifest = yamldecode(data.template_file.app_crds.rendered)
}
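One caveat worth hedging: kubernetes_manifest takes exactly one manifest, and yamldecode() errors on multi-document YAML. If app_crds.yaml holds several documents separated by ---, a rough sketch (assuming --- only ever appears as a document separator) is to split the rendered template and create one resource instance per document:

locals {
  app_crd_docs = [
    for doc in split("---", data.template_file.app_crds.rendered) : doc
    if trimspace(doc) != ""
  ]
}

resource "kubernetes_manifest" "app_crds" {
  # One instance per YAML document in the rendered template
  for_each = { for i, doc in local.app_crd_docs : tostring(i) => doc }
  manifest = yamldecode(each.value)
}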

Output variables from remote terraform module

For a project I use remote modules (Git modules); these are called and executed in a terraformMain.tf file.
For example, I use an Azure resource group module, which is looped in terraformMain.tf with count = length(var.resourcegroups). The problem I have now is that I want to use one of the two created resource groups in the next module (creating a VNET), but I keep encountering the following error:
Error: Unsupported attribute

  on outputs.tf line 2, in output "RG":
   2:   value = [module.resourceGroups.resource_group_name]

This value does not have any attributes.

Error: Unsupported attribute

  on terraformMain.tf line 33, in module "vnet":
  33:   resourcegroup_name = module.resourceGroups.resource_group_name[0]

This value does not have any attributes.
The Azure resource group module code looks like this:
main.tf
resource "azurerm_resource_group" "RG" {
name = var.resource_group_name
location = var.location
}
variables.tf
variable "location" {
type = string
}
variable "resource_group_name" {
type = string
}
outputs.tf
output "resource_group_names" {
value = concat(azurerm_resource_group.RG.*.name, [""])[0]
}
The code of the terraformMain.tf looks like this:
terraformMain.tf
terraform {
  required_version = ">= 0.13"

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "2.45.1"
    }
  }

  backend "azurerm" {
    resource_group_name  = "__terraformresourcegroup__"
    storage_account_name = "__terraformstorageaccount__"
    container_name       = "__terraformcontainer__"
    key                  = "__terraformkey__"
  }
}
provider "azurerm" {
features {}
}
module "resourceGroups" {
count = length(var.resourcegroups)
source = "git::https://*****#dev.azure.com/****/TerraformAzureModules/_git/ResourceGroup"
location = var.location
resource_group_name = var.resourcegroups[count.index]
}
module "vnet" {
source = "git::https://*****#dev.azure.com/****/TerraformAzureModules/_git/VirtualNetwork"
resourcegroup_name = module.resourceGroups.resource_group_name[0]
location = var.location
vnet_name = var.vnet_name
count = length(var.subnet_names)
vnet_cidr = var.vnet_cidr[count.index]
subnet_cidr = var.subnet_cidr[count.index]
subnet_name = var.subnet_names[count.index]
}
variables.tf
variable "location" {
default = "westeurope"
}
variable "resourcegroups" {
default = ["rg1", "rg2"]
}
#Azure Vnet / Subnet
variable "vnet_name" {
default = "vnet_1"
}
variable "subnet_names" {
default = ["subnet1", "subnet2"]
}
variable "vnet_cidr" {
default = ["10.116.15.0/24"]
}
variable "subnet_cidr" {
default = ["10.116.15.0/26", "10.116.15.128/27"]
}
outputs.tf
output "RG" {
value = [module.resourceGroups.resource_group_name]
}
Any help is appreciated!
Your resourceGroups module has count = length(var.resourcegroups) set, and so module.resourceGroups is a list of objects and therefore you will need to request a specific element from the list before accessing an attribute:
module.resourceGroups[0].resource_group_name
Or, if your goal was to return a list of all of the resource group names, you can use the [*] operator to concisely access the resource_group_name attribute from each of the elements and return the result as a list:
module.resourceGroups[*].resource_group_name
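For example, wrapping that expression in the outputs.tf from the question:

output "RG" {
  value = module.resourceGroups[*].resource_group_name
}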
The variables in the module need to have a type or a default.
For example, this would be a valid file:
variable "location" {
type = string
}
variable "resource_group_name" {
type = string
}
The solution we applied is to move the count from terraformMain.tf into the resource group module's main.tf. This allowed us to pass the resource groups to terraformMain.tf through the module's outputs.tf.
ResourceGroup module:
main.tf
resource "azurerm_resource_group" "RG" {
count = length(var.resource_group_name)
name = var.resource_group_name[count.index]
location = var.location
}
outputs.tf
output "resource_group_names" {
value = azurerm_resource_group.RG.*.name
}
terraformMain.tf code:
terraform {
  required_version = ">= 0.13"

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "2.45.1"
    }
  }

  backend "azurerm" {
    resource_group_name  = "__terraformresourcegroup__"
    storage_account_name = "__terraformstorageaccount__"
    container_name       = "__terraformcontainer__"
    key                  = "__terraformkey__"
  }
}

provider "azurerm" {
  features {}
}

module "resourceGroups" {
  source              = "git::https://*****@dev.azure.com/*****/TerraformAzureModules/_git/ResourceGroup"
  location            = var.location
  resource_group_name = var.resourcegroups
}

module "vnet" {
  source             = "git::https://******@dev.azure.com/*****/TerraformAzureModules/_git/VirtualNetwork"
  resourcegroup_name = module.resourceGroups.resource_group_names[0]
  location           = var.location
  vnet_name          = var.vnet_name
  vnet_cidr          = var.vnet_cidr
  subnet_cidr        = var.subnet_cidr
  subnet_name        = var.subnet_names
}
I want to thank you all for your contributions.

How to put different AKS deployments within the same resource group/cluster?

Current state:
I have all services within a cluster and under just one resource_group. My problem is that I have to push all the services every time, and my deployment is getting slow.
What I want to do: I want to split every service within my directory so I can deploy each one separately. Now I have a backend for each service, so each can have its own remote state and won't change things when I deploy. However, can I still have all the services within the same resource_group? If yes, how can I achieve that? If I need to create a resource group for each service that I want to deploy separately, can I still use the same cluster?
main.tf
provider "azurerm" {
version = "2.23.0"
features {}
}
resource "azurerm_resource_group" "main" {
name = "${var.resource_group_name}-${var.environment}"
location = var.location
timeouts {
create = "20m"
delete = "20m"
}
}
resource "tls_private_key" "key" {
algorithm = "RSA"
}
resource "azurerm_kubernetes_cluster" "main" {
name = "${var.cluster_name}-${var.environment}"
location = azurerm_resource_group.main.location
resource_group_name = azurerm_resource_group.main.name
dns_prefix = "${var.dns_prefix}-${var.environment}"
node_resource_group = "${var.resource_group_name}-${var.environment}-worker"
kubernetes_version = "1.18.6"
linux_profile {
admin_username = var.admin_username
ssh_key {
key_data = "${trimspace(tls_private_key.key.public_key_openssh)} ${var.admin_username}#azure.com"
}
}
default_node_pool {
name = "default"
node_count = var.agent_count
vm_size = "Standard_B2s"
os_disk_size_gb = 30
}
role_based_access_control {
enabled = "false"
}
addon_profile {
kube_dashboard {
enabled = "true"
}
}
network_profile {
network_plugin = "kubenet"
load_balancer_sku = "Standard"
}
timeouts {
create = "40m"
delete = "40m"
}
service_principal {
client_id = var.client_id
client_secret = var.client_secret
}
tags = {
Environment = "Production"
}
}
provider "kubernetes" {
version = "1.12.0"
load_config_file = "false"
host = azurerm_kubernetes_cluster.main.kube_config[0].host
client_certificate = base64decode(
azurerm_kubernetes_cluster.main.kube_config[0].client_certificate,
)
client_key = base64decode(azurerm_kubernetes_cluster.main.kube_config[0].client_key)
cluster_ca_certificate = base64decode(
azurerm_kubernetes_cluster.main.kube_config[0].cluster_ca_certificate,
)
}
backend.tf (for main)
terraform {
  backend "azurerm" {}
}
client.tf (service that I want to deploy separately)
resource "kubernetes_deployment" "client" {
metadata {
name = "client"
labels = {
serviceName = "client"
}
}
timeouts {
create = "20m"
delete = "20m"
}
spec {
progress_deadline_seconds = 600
replicas = 1
selector {
match_labels = {
serviceName = "client"
}
}
template {
metadata {
labels = {
serviceName = "client"
}
}
}
}
}
}
resource "kubernetes_service" "client" {
metadata {
name = "client"
}
spec {
selector = {
serviceName = kubernetes_deployment.client.metadata[0].labels.serviceName
}
port {
port = 80
target_port = 80
}
}
}
backend.tf (for client)
terraform {
  backend "azurerm" {
    resource_group_name  = "test-storage"
    storage_account_name = "test"
    container_name       = "terraform"
    key                  = "test"
  }
}
deployment.sh
terraform -v

terraform init \
  -backend-config="resource_group_name=$TF_BACKEND_RES_GROUP" \
  -backend-config="storage_account_name=$TF_BACKEND_STORAGE_ACC" \
  -backend-config="container_name=$TF_BACKEND_CONTAINER"

terraform plan

terraform apply -target="azurerm_resource_group.main" -auto-approve \
  -var "environment=$ENVIRONMENT" \
  -var "tag_version=$TAG_VERSION"
PS: I can build the test resource group from scratch if needed. Don't worry about its current state.
PS2: The state files are being saved in the right place, no issue there.
If you want to deploy resources separately, you could take a look at terraform apply with this option:
-target=resource  Resource to target. Operation will be limited to this
                  resource and its dependencies. This flag can be used
                  multiple times.
For example, to deploy just a resource group and its dependencies:
terraform apply -target="azurerm_resource_group.main"
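Since the flag can be repeated, a single run can also target several resources at once, for example (using the resource addresses from the question's main.tf):

terraform apply \
  -target="azurerm_resource_group.main" \
  -target="azurerm_kubernetes_cluster.main"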
