Terraform multi-providers vs. explicit provider passing within a module - terraform

I have seen similar questions, but the answers there addressed formatting or workarounds that weren't very "clean". I'll try to summarize my issue and hopefully get a lean/clean solution. Thanks in advance!
I am creating AKS namespaces via the Kubernetes provider in Terraform. Since I have three clusters, I want to be able to control which provider is used to create a namespace, e.g. dev / prod.
Folder structure
.terraform
├───modules
│   ├───namespace.tf
│   └───module_providers.tf
└───main-deploy
    ├───main.tf
    └───main_provider.tf
My Module // namespace.tf
# Create Namespace
resource "kubernetes_namespace" "namespace-appteam" {
metadata {
annotations = {
name = var.application_name
}
labels = {
appname = var.application_name
}
name = var.application_name
}
}
My main.tf file
module "appteam-test" {
source = "../modules/aks-module"
application_name = "dev-app"
providers = {
kubernetes.dev = kubernetes.dev
kubernetes.prod = kubernetes.prod
}
}
Now that I have passed two providers in the main.tf module block, how do I control whether the resource created in namespace.tf uses the dev or the prod provider? In short, how does the module know which provider to use for which resource when several are passed in?
Note: I have required_providers defined in module_providers.tf and the provider blocks in main_provider.tf.
module_provider.tf
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "3.20.0"
    }
    azuread = {
      source  = "hashicorp/azuread"
      version = "2.27.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "2.12.1"
    }
  }
}
Main_provider.tf
provider "azuread" {
alias = "AD"
}
provider "azurerm" {
alias = "default"
features {}
}
provider "kubernetes" {
alias = "dev"
}
provider "kubernetes" {
alias = "prod"
}

You need to add an alias to each of your providers.
provider "kubernetes" {
alias = "dev"
}
provider "kubernetes" {
alias = "stage"
}
In your main.tf file, pass the providers to the module like this:
providers = {
  kubernetes.dev   = kubernetes.dev
  kubernetes.stage = kubernetes.stage
}
Now, in the module's module_provider.tf file, you need to declare configuration_aliases:
kubernetes = {
  source                = "hashicorp/kubernetes"
  version               = "2.12.1"
  configuration_aliases = [kubernetes.dev, kubernetes.stage]
}
Once all the configuration is in place, you can specify the provider explicitly on any resource you want. Your namespace.tf file will look like this:
resource "kubernetes_namespace" "namespace-appteam-1" {
provider = kubernetes.dev
metadata {
annotations = {
name = var.application_name
}
labels = {
appname = var.application_name
}
name = var.application_name
}
}
resource "kubernetes_namespace" "namespace-appteam-2" {
provider = kubernetes.stage
metadata {
annotations = {
name = var.application_name
}
labels = {
appname = var.application_name
}
name = var.application_name
}
}
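As a side note, if a given module instance only ever needs one cluster, an alternative (a sketch below, with a hypothetical module instance name) is to map a single aliased provider to the module's default kubernetes provider; the resources inside the module then need no explicit provider argument, and the module's required_providers block needs no configuration_aliases:
module "appteam-dev" {
  source           = "../modules/aks-module"
  application_name = "dev-app"

  providers = {
    # hand the aliased dev provider to the module as its default kubernetes provider
    kubernetes = kubernetes.dev
  }
}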

Related

How do I use the value of provider default tags in a data source or resource block in terraform?

Below is a small snippet from a set of Terraform scripts I'm trying to build. The goal is to define an IAM policy that will be attached to a new IAM role that I will create.
My problem is that I'm trying to use the Environment tag defined in my AWS provider's default_tags block, but I'm not sure how. The goal is to pull the Environment value into the S3 prefix in the IAM policy document instead of hard-coding it.
Is there a way to do this?
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "4.19.0"
    }
  }
  required_version = ">=1.2.3"
}
provider "aws" {
  default_tags {
    tags = {
      Environment = "dev"
      Application = "myapp"
      Terraform   = "true"
    }
  }
}
data "aws_iam_policy_document" "this" {
  statement {
    sid     = "S3BucketAccess"
    actions = ["s3:*"]
    resources = [
      "${data.aws_s3_bucket.this.arn}/dev"
    ]
  }
}
data "aws_s3_bucket" "this" {
  bucket = "myBucket"
}
A solution without code duplication is to use aws_default_tags:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "4.19.0"
    }
  }
  required_version = ">=1.2.3"
}
provider "aws" {
  default_tags {
    tags = {
      Environment = "dev"
      Application = "myapp"
      Terraform   = "true"
    }
  }
}
# Get the default tags from the provider
data "aws_default_tags" "my_tags" {}
data "aws_iam_policy_document" "this" {
  statement {
    sid     = "S3BucketAccess"
    actions = ["s3:*"]
    resources = [
      "${data.aws_s3_bucket.this.arn}/${data.aws_default_tags.my_tags.tags.Environment}/*"
    ]
  }
}
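Since aws_default_tags simply exposes the provider's default_tags as a map, the same value can be reused anywhere else in the configuration. A minimal sketch (the output name is illustrative):
# Surface the Environment default tag, e.g. for other configurations to consume
output "environment" {
  value = data.aws_default_tags.my_tags.tags.Environment
}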
The solution is to use locals.
Here's what the final solution looks like:
# New locals block
locals {
  common_tags = {
    Environment = "dev"
    Application = "myapp"
    Terraform   = "true"
  }
}
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "4.19.0"
    }
  }
  required_version = ">=1.2.3"
}
provider "aws" {
  # Reference common_tags from locals
  default_tags {
    tags = local.common_tags
  }
}
data "aws_iam_policy_document" "this" {
  # In the resources statement, the "dev" prefix is replaced
  # with the Environment tag value from locals
  statement {
    sid     = "S3BucketAccess"
    actions = ["s3:*"]
    resources = [
      "${data.aws_s3_bucket.this.arn}/${local.common_tags.Environment}/*"
    ]
  }
}
data "aws_s3_bucket" "this" {
  bucket = "myBucket"
}

How to push a docker image to Azure container registry using terraform?

I am a beginner with Terraform/Azure and I want to deploy a Docker image to ACR using Terraform, but I was unable to find a solution online. If anybody knows how to push a Docker image to an Azure container registry using Terraform, please share.
Please tell me whether this is possible or not.
You can use the Terraform null_resource resource and execute your own logic from Terraform.
Example:
resource "azurerm_resource_group" "rg" {
name = "example-resources"
location = "West Europe"
}
resource "azurerm_container_registry" "acr" {
name = "containerRegistry1"
resource_group_name = azurerm_resource_group.rg.name
location = azurerm_resource_group.rg.location
sku = "Premium"
admin_enabled = true
georeplication_locations = ["East US", "West Europe"]
}
resource "azurerm_azuread_application" "acr-app" {
name = "acr-app"
}
resource "azurerm_azuread_service_principal" "acr-sp" {
application_id = "${azurerm_azuread_application.acr-app.application_id}"
}
resource "azurerm_azuread_service_principal_password" "acr-sp-pass" {
service_principal_id = "${azurerm_azuread_service_principal.acr-sp.id}"
value = "Password12"
end_date = "2022-01-01T01:02:03Z"
}
resource "azurerm_role_assignment" "acr-assignment" {
scope = "${azurerm_container_registry.acr.id}"
role_definition_name = "Contributor"
principal_id = "${azurerm_azuread_service_principal_password.acr-sp-pass.service_principal_id}"
}
resource "null_resource" "docker_push" {
provisioner "local-exec" {
command = <<-EOT
docker login ${azurerm_container_registry.acr.login_server}
docker push ${azurerm_container_registry.acr.login_server}
EOT
}
}
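Note that the local-exec above only logs in and pushes; in practice the image also has to be built and tagged with the registry's login server, and a non-interactive docker login needs credentials. A hedged sketch using the registry's admin credentials (the image name myapp:latest is illustrative, and admin_enabled = true is assumed):
resource "null_resource" "docker_build_push" {
  provisioner "local-exec" {
    command = <<-EOT
      docker login ${azurerm_container_registry.acr.login_server} -u ${azurerm_container_registry.acr.admin_username} -p ${azurerm_container_registry.acr.admin_password}
      docker build -t ${azurerm_container_registry.acr.login_server}/myapp:latest .
      docker push ${azurerm_container_registry.acr.login_server}/myapp:latest
    EOT
  }
}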
I just figured this out with the docker_registry_image resource. I don't like using a null_resource, since it adds a dependency on local system packages. Furthermore, I set it up so that you can deploy both with local authentication and with credentials stored as secrets (in a GitHub repository, for example).
main.tf
terraform {
  required_version = ">= 1.1.7"

  required_providers {
    docker = {
      source  = "kreuzwerker/docker"
      version = ">= 2.16.0"
    }
  }

  backend "azurerm" {}
}

provider "docker" {
  // Used when deploying locally
  dynamic "registry_auth" {
    for_each = var.docker_config_file_path == "" ? [] : [1]
    content {
      address     = var.docker_registry_url
      config_file = pathexpand(var.docker_config_file_path)
    }
  }

  // Used when deploying from a build pipeline
  dynamic "registry_auth" {
    for_each = (var.docker_registry_username == "" || var.docker_registry_password == "") ? [] : [1]
    content {
      address  = var.docker_registry_url
      username = var.docker_registry_username
      password = var.docker_registry_password
    }
  }
}

resource "docker_registry_image" "image" {
  name          = "${var.docker_image_name}:${var.docker_image_tag}"
  keep_remotely = var.keep_remotely

  build {
    context    = var.docker_file_path
    build_args = var.build_args
  }
}
variables.tf
variable "docker_registry_url" {
description = "Address of ACR container registry."
type = string
}
variable "docker_registry_username" {
description = "Username for authenticating with the container registry. Required if docker_config_file_path is not set."
type = string
default = ""
}
variable "docker_registry_password" {
description = "Password for authenticating with the container registry. Required if docker_config_file_path is not set."
type = string
default = ""
sensitive = true
}
variable "docker_config_file_path" {
description = "Path to config.json containing docker configuration."
type = string
default = ""
}
variable "docker_image_name" {
description = "Name of docker image to build."
type = string
}
variable "docker_image_tag" {
description = "Tag to use for the docker image."
type = string
default = "latest"
}
variable "source_path" {
description = "Path to folder containing application code"
type = string
default = null
}
variable "docker_file_path" {
description = "Path to Dockerfile in source package"
type = string
}
variable "build_args" {
description = "A map of Docker build arguments."
type = map(string)
default = {}
}
variable "keep_remotely" {
description = "Whether to keep Docker image in the remote registry on destroy operation."
type = bool
default = false
}
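For reference, a local run of this configuration might be fed values like the following (all values are hypothetical; in a pipeline you would set docker_registry_username and docker_registry_password from secrets instead of the config file path):
# terraform.tfvars (illustrative values)
docker_registry_url     = "myregistry.azurecr.io"
docker_config_file_path = "~/.docker/config.json"
docker_image_name       = "myregistry.azurecr.io/myapp"
docker_image_tag        = "v1.0.0"
docker_file_path        = "./app"
build_args              = { ENVIRONMENT = "dev" }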

Generate file with dynamic content with Terragrunt

I'm really new to Terragrunt.
I was wondering if there is a way to dynamically generate the content of a file?
For example, consider the following piece of code:
generate "provider" {
path = "provider.tf"
if_exists = "overwrite"
contents = <<EOF
terraform {
required_providers {
azurerm = {
source = "azurerm"
version = "=2.49.0"
}
}
}
provider "azurerm" {
features {}
subscription_id = "xxxxxxxxxxxxxxxxx"
}
EOF
}
Is there a way to set values such as subscription_id dynamically? I've tried using something like ${local.providers.subscription_id} but it doesn't work:
provider "azurerm" {
features {}
subscription_id = "${local.providers.subscription_id}"
}
What you have there should work exactly as-is, as long as you define the local in the same scope. I just tested the following with Terragrunt v0.28.24.
In common.hcl, a file located in some parent directory (but still in the same Git repo):
locals {
  providers = {
    subscription_id = "foo"
  }
}
In your terragrunt.hcl:
locals {
  common_vars = read_terragrunt_config(find_in_parent_folders("common.hcl"))
}

generate "provider" {
  path      = "provider.tf"
  if_exists = "overwrite"
  contents  = <<EOF
terraform {
  required_providers {
    azurerm = {
      source  = "azurerm"
      version = "=2.49.0"
    }
  }
}
provider "azurerm" {
  features {}
  subscription_id = "${local.common_vars.locals.providers.subscription_id}"
}
EOF
}
After I run terragrunt init, the provider.tf is generated with the expected contents:
provider "azurerm" {
features {}
subscription_id = "foo"
}
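If the value should not live in the repo at all, another option (a sketch, assuming the subscription id is exported in an environment variable named ARM_SUBSCRIPTION_ID) is Terragrunt's built-in get_env() function:
locals {
  # Read the subscription id from the environment, with an empty default
  subscription_id = get_env("ARM_SUBSCRIPTION_ID", "")
}

generate "provider" {
  path      = "provider.tf"
  if_exists = "overwrite"
  contents  = <<EOF
provider "azurerm" {
  features {}
  subscription_id = "${local.subscription_id}"
}
EOF
}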

How to attach a scheduler policy to a persistent volume claim in Gcloud with terraform

I created a webserver infrastructure with Terraform (v0.12.21) in Google Cloud to deploy a lot of websites.
I created a persistent volume claim for each deployment (1 GB each), using this code:
resource "kubernetes_persistent_volume_claim" "wordpress_volumeclaim" {
for_each = var.wordpress_site
metadata {
name = "wordpress-volumeclaim-${terraform.workspace}-${each.value.name}"
namespace = "default"
}
spec {
access_modes = ["ReadWriteOnce"]
resources {
requests = {
storage = each.value.disk
resource_policies = google_compute_resource_policy.policy.name
}
}
}
}
resource "kubernetes_deployment" "wordpress" {
for_each = var.wordpress_site
metadata {
name = each.value.name
labels = { app = each.value.name }
}
spec {
replicas = 1
selector {
match_labels = { app = each.value.name }
}
template {
metadata {
labels = { app = each.value.name }
}
spec {
volume {
name = "wordpress-persistent-storage-${terraform.workspace}-${each.value.name}"
persistent_volume_claim {
claim_name = "wordpress-volumeclaim-${terraform.workspace}-${each.value.name}"
}
}
[...]
But now I need to back up all of these disks. My best idea is to use the Google Cloud snapshot functionality, and it has to be dynamic, since the creation of the disks is dynamic.
First of all, I created a snapshot schedule policy:
resource "google_compute_resource_policy" "policy" {
name = "my-resource-policy"
region = "zone-region-here"
project = var.project
snapshot_schedule_policy {
schedule {
daily_schedule {
days_in_cycle = 1
start_time = "04:00"
}
}
retention_policy {
max_retention_days = 7
on_source_disk_delete = "KEEP_AUTO_SNAPSHOTS"
}
}
}
And now I want to attach it to my persistent volume claims, but I don't know how, because this line is not working at all:
resource_policies = google_compute_resource_policy.policy.name
All my attempts resulted in errors. Could you help me here?

How to put different AKS deployments within the same resource group/cluster?

Current state:
I have all services within one cluster and under just one resource group. My problem is that I have to push all the services every time, and my deploys are getting slow.
What I want to do: I want to split every service within my directory so I can deploy each one separately. Each service now has its own backend, so it can have its own remote state and won't change other things when I deploy. However, can I still keep all the services within the same resource group? If yes, how can I achieve that? And if I need to create a resource group for each service that I want to deploy separately, can I still use the same cluster?
main.tf
provider "azurerm" {
version = "2.23.0"
features {}
}
resource "azurerm_resource_group" "main" {
name = "${var.resource_group_name}-${var.environment}"
location = var.location
timeouts {
create = "20m"
delete = "20m"
}
}
resource "tls_private_key" "key" {
algorithm = "RSA"
}
resource "azurerm_kubernetes_cluster" "main" {
name = "${var.cluster_name}-${var.environment}"
location = azurerm_resource_group.main.location
resource_group_name = azurerm_resource_group.main.name
dns_prefix = "${var.dns_prefix}-${var.environment}"
node_resource_group = "${var.resource_group_name}-${var.environment}-worker"
kubernetes_version = "1.18.6"
linux_profile {
admin_username = var.admin_username
ssh_key {
key_data = "${trimspace(tls_private_key.key.public_key_openssh)} ${var.admin_username}#azure.com"
}
}
default_node_pool {
name = "default"
node_count = var.agent_count
vm_size = "Standard_B2s"
os_disk_size_gb = 30
}
role_based_access_control {
enabled = "false"
}
addon_profile {
kube_dashboard {
enabled = "true"
}
}
network_profile {
network_plugin = "kubenet"
load_balancer_sku = "Standard"
}
timeouts {
create = "40m"
delete = "40m"
}
service_principal {
client_id = var.client_id
client_secret = var.client_secret
}
tags = {
Environment = "Production"
}
}
provider "kubernetes" {
version = "1.12.0"
load_config_file = "false"
host = azurerm_kubernetes_cluster.main.kube_config[0].host
client_certificate = base64decode(
azurerm_kubernetes_cluster.main.kube_config[0].client_certificate,
)
client_key = base64decode(azurerm_kubernetes_cluster.main.kube_config[0].client_key)
cluster_ca_certificate = base64decode(
azurerm_kubernetes_cluster.main.kube_config[0].cluster_ca_certificate,
)
}
backend.tf (for main)
terraform {
  backend "azurerm" {}
}
client.tf (service that I want to deploy separately)
resource "kubernetes_deployment" "client" {
metadata {
name = "client"
labels = {
serviceName = "client"
}
}
timeouts {
create = "20m"
delete = "20m"
}
spec {
progress_deadline_seconds = 600
replicas = 1
selector {
match_labels = {
serviceName = "client"
}
}
template {
metadata {
labels = {
serviceName = "client"
}
}
}
}
}
}
resource "kubernetes_service" "client" {
metadata {
name = "client"
}
spec {
selector = {
serviceName = kubernetes_deployment.client.metadata[0].labels.serviceName
}
port {
port = 80
target_port = 80
}
}
}
backend.tf (for client)
terraform {
  backend "azurerm" {
    resource_group_name  = "test-storage"
    storage_account_name = "test"
    container_name       = "terraform"
    key                  = "test"
  }
}
deployment.sh
terraform -v
terraform init \
  -backend-config="resource_group_name=$TF_BACKEND_RES_GROUP" \
  -backend-config="storage_account_name=$TF_BACKEND_STORAGE_ACC" \
  -backend-config="container_name=$TF_BACKEND_CONTAINER"
terraform plan
terraform apply -target="azurerm_resource_group.main" -auto-approve \
  -var "environment=$ENVIRONMENT" \
  -var "tag_version=$TAG_VERSION"
PS: I can rebuild the test resource group from scratch if needed, so don't worry about its current state.
PS2: The state files are being saved in the right place; there's no issue there.
If you want to deploy resources separately, you could take a look at terraform apply with the -target option:
-target=resource    Resource to target. Operation will be limited to this
                    resource and its dependencies. This flag can be used
                    multiple times.
For example, to deploy just the resource group and its dependencies:
terraform apply -target="azurerm_resource_group.main"
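If each service lives in its own configuration with its own remote state, the service configurations can still target the existing cluster (and therefore the same resource group) by looking it up with data sources instead of recreating it. A minimal sketch, where the cluster and resource group names are placeholders for your real ones:
# In the client service's configuration (separate state): look up the existing cluster
data "azurerm_kubernetes_cluster" "main" {
  name                = "mycluster-production" # placeholder
  resource_group_name = "myrg-production"      # placeholder
}

# Configure the kubernetes provider from the looked-up cluster credentials
provider "kubernetes" {
  host                   = data.azurerm_kubernetes_cluster.main.kube_config[0].host
  client_certificate     = base64decode(data.azurerm_kubernetes_cluster.main.kube_config[0].client_certificate)
  client_key             = base64decode(data.azurerm_kubernetes_cluster.main.kube_config[0].client_key)
  cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.main.kube_config[0].cluster_ca_certificate)
}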
