Using null_resource, I attempt to run kubectl apply on a Kubernetes manifest. I often find that this resource is replaced (and the manifest re-applied) for no apparent reason. I'm running Terraform 0.14.8.
data "template_file" "app_crds" {
template = file("${path.module}/templates/app_crds.yaml")
}
resource "null_resource" "app_crds_deploy" {
triggers = {
manifest_sha1 = sha1(data.template_file.app_crds.rendered)
}
provisioner "local-exec" {
command = "kubectl apply -f -<<EOF\n${data.template_file.aws_ingress_controller_crds.rendered}\nEOF"
}
}
terraform plan output:

  # module.system.null_resource.app_crds_deploy must be replaced
-/+ resource "null_resource" "app_crds_deploy" {
      ~ id       = "698690821114034664" -> (known after apply)
      ~ triggers = {
          - "manifest_sha1" = "9a4fc962fe92c4ff04677ac12088a61809626e5a"
        } -> (known after apply) # forces replacement
    }
However, this SHA is indeed in the state file:
[I] ➜ terraform state pull | grep 9a4fc962fe92c4ff04677ac12088a61809626e5a
"manifest_sha1": "9a4fc962fe92c4ff04677ac12088a61809626e5a"
I would recommend using the kubernetes_manifest resource from the Terraform Kubernetes provider. Using the provider won't require the host to have kubectl installed, and it will be far more reliable than the null_resource, as you are seeing. The provider docs have an example specifically for CRDs. Here is the Terraform snippet from that example:
resource "kubernetes_manifest" "test-crd" {
manifest = {
apiVersion = "apiextensions.k8s.io/v1"
kind = "CustomResourceDefinition"
metadata = {
name = "testcrds.hashicorp.com"
}
spec = {
group = "hashicorp.com"
names = {
kind = "TestCrd"
plural = "testcrds"
}
scope = "Namespaced"
versions = [{
name = "v1"
served = true
storage = true
schema = {
openAPIV3Schema = {
type = "object"
properties = {
data = {
type = "string"
}
refs = {
type = "number"
}
}
}
}
}]
}
}
}
You can keep your k8s YAML template and feed it to kubernetes_manifest like this:
data "template_file" "app_crds" {
template = file("${path.module}/templates/app_crds.yaml")
}
resource "kubernetes_manifest" "test-configmap" {
manifest = yamldecode(data.template_file.app_crds.rendered)
}
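For this to work the kubernetes provider itself has to be configured; a minimal sketch, assuming the cluster is reachable through a local kubeconfig (the path and context name are placeholders):

provider "kubernetes" {
  config_path    = "~/.kube/config"
  config_context = "my-cluster-context"
}

Also note that yamldecode expects a single YAML document, so if the template contains several manifests separated by ---, each one needs its own kubernetes_manifest resource.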
I have seen similar questions, but the answers there addressed formatting or workarounds that weren't very "clean". I will try to summarize my issue and hopefully get a lean/clean solution. Thank you in advance!
I am creating AKS namespaces via the Kubernetes provider in Terraform. Since I have 3 clusters, I want to be able to control which provider is used to create the namespace, e.g. dev / prod.
Folder structure
.terraform
├───modules
│   ├───namespace.tf
│   └───module_providers.tf
└───main-deploy
    ├───main.tf
    └───main_provider.tf
My module // namespace.tf

# Create Namespace
resource "kubernetes_namespace" "namespace-appteam" {
  metadata {
    annotations = {
      name = var.application_name
    }
    labels = {
      appname = var.application_name
    }
    name = var.application_name
  }
}
My main.tf file

module "appteam-test" {
  source           = "../modules/aks-module"
  application_name = "dev-app"

  providers = {
    kubernetes.dev  = kubernetes.dev
    kubernetes.prod = kubernetes.prod
  }
}
Now that I have passed 2 providers in the main.tf module block, how do I control whether the resource I am creating in the namespace.tf file uses the dev or the prod provider? In short, how does the module know which provider to use for which resource if several are passed?
Note: I have required_providers defined in module_providers.tf and the providers in the main_provider.tf file.
module_provider.tf

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "3.20.0"
    }
    azuread = {
      source  = "hashicorp/azuread"
      version = "2.27.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "2.12.1"
    }
  }
}
main_provider.tf

provider "azuread" {
  alias = "AD"
}

provider "azurerm" {
  alias = "default"
  features {}
}

provider "kubernetes" {
  alias = "dev"
}

provider "kubernetes" {
  alias = "prod"
}
You need to add an alias to all your providers.
provider "kubernetes" {
alias = "dev"
}
provider "kubernetes" {
alias = "stage"
}
In your main.tf file, pass the providers to the module like the following:
providers = {
  kubernetes.dev   = kubernetes.dev
  kubernetes.stage = kubernetes.stage
}
Now, in the module_provider.tf file, you need to declare configuration_aliases:
kubernetes = {
  source                = "hashicorp/kubernetes"
  version               = "2.12.1"
  configuration_aliases = [kubernetes.dev, kubernetes.stage]
}
Once all the configurations are in place, you can specify the provider explicitly for the resources you want. Your namespace.tf file will look like this:
resource "kubernetes_namespace" "namespace-appteam-1" {
provider = kubernetes.dev
metadata {
annotations = {
name = var.application_name
}
labels = {
appname = var.application_name
}
name = var.application_name
}
}
resource "kubernetes_namespace" "namespace-appteam-2" {
provider = kubernetes.stage
metadata {
annotations = {
name = var.application_name
}
labels = {
appname = var.application_name
}
name = var.application_name
}
}
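If every resource in a given module instance should target the same cluster, an alternative sketch (module and instance names are illustrative) is to keep a single default kubernetes provider inside the module and map it per instance from the caller:

module "appteam-dev" {
  source           = "../modules/aks-module"
  application_name = "dev-app"

  # Every kubernetes resource in this instance uses the dev cluster
  providers = {
    kubernetes = kubernetes.dev
  }
}

module "appteam-prod" {
  source           = "../modules/aks-module"
  application_name = "prod-app"

  providers = {
    kubernetes = kubernetes.prod
  }
}

This avoids duplicating the namespace resource inside the module when the only difference is the target cluster.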
Below is a small snippet of a set of Terraform scripts I'm trying to build. The goal is to define an IAM policy that will be attached to a new IAM role that I will create.
My problem is that I'm trying to use the Environment tag defined in my AWS provider's default_tags block, but I'm not sure how. The goal is to pull the Environment value into the S3 prefix in the IAM policy document instead of having it hard-coded.
Is there a way to do this?
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "4.19.0"
    }
  }
  required_version = ">=1.2.3"
}

provider "aws" {
  default_tags {
    tags = {
      Environment = "dev"
      Application = "myapp"
      Terraform   = "true"
    }
  }
}

data "aws_iam_policy_document" "this" {
  statement {
    sid     = "S3BucketAccess"
    actions = ["s3:*"]
    resources = [
      "${data.aws_s3_bucket.this.arn}/dev"
    ]
  }
}

data "aws_s3_bucket" "this" {
  bucket = "myBucket"
}
A solution without code duplication is to use the aws_default_tags data source:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "4.19.0"
    }
  }
  required_version = ">=1.2.3"
}

provider "aws" {
  default_tags {
    tags = {
      Environment = "dev"
      Application = "myapp"
      Terraform   = "true"
    }
  }
}

# Get the default tags from the provider
data "aws_default_tags" "my_tags" {}

data "aws_iam_policy_document" "this" {
  statement {
    sid     = "S3BucketAccess"
    actions = ["s3:*"]
    resources = [
      "${data.aws_s3_bucket.this.arn}/${data.aws_default_tags.my_tags.tags.Environment}/*"
    ]
  }
}
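Since the stated goal is to attach the policy to a new IAM role, a rough sketch of the wiring (the role name, policy name, and EC2 trust principal are assumptions, not part of the original question):

resource "aws_iam_role" "this" {
  name = "myapp-role" # illustrative name

  # Example trust policy allowing EC2 to assume the role (assumption)
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
}

resource "aws_iam_role_policy" "s3_access" {
  name   = "s3-bucket-access" # illustrative name
  role   = aws_iam_role.this.id
  policy = data.aws_iam_policy_document.this.json
}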
The solution is to use locals. Here's what the final solution looks like:
# New locals block
locals {
  common_tags = {
    Environment = "dev"
    Application = "myapp"
    Terraform   = "true"
  }
}

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "4.19.0"
    }
  }
  required_version = ">=1.2.3"
}

provider "aws" {
  # Reference common_tags from locals
  default_tags {
    tags = local.common_tags
  }
}

data "aws_iam_policy_document" "this" {
  # In the resources statement, the "dev" prefix is replaced
  # with the Environment tag value from locals
  statement {
    sid     = "S3BucketAccess"
    actions = ["s3:*"]
    resources = [
      "${data.aws_s3_bucket.this.arn}/${local.common_tags.Environment}/*"
    ]
  }
}

data "aws_s3_bucket" "this" {
  bucket = "myBucket"
}
I'm trying to set multiple environment variables on a Cloud Run module I've created. The example I'm following from the Terraform registry is static. Is it possible to create these dynamically?
template {
  spec {
    containers {
      image = "us-docker.pkg.dev/cloudrun/container/hello"

      env {
        name  = "SOURCE"
        value = "remote"
      }
      env {
        name  = "TARGET"
        value = "home"
      }
    }
  }
}
https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/cloud_run_service#example-usage---cloud-run-service-multiple-environment-variables
I've tried:
dynamic "env" {
for_each = var.envs
content {
name = each.key
value = each.value
}
}
But I get the following error:
A reference to "each.value" has been used in a context in which it unavailable, such as when the configuration no longer contains the value in its "for_each" expression. Remove this reference to each.value in your configuration to work around this error.
Edit: Full code example

resource "google_cloud_run_service" "default" {
  name     = "cloudrun-srv"
  location = "us-central1"

  template {
    spec {
      containers {
        image = "us-docker.pkg.dev/cloudrun/container/hello"

        env {
          name  = "SOURCE"
          value = "remote"
        }
        env {
          name  = "TARGET"
          value = "home"
        }
      }
    }
  }

  traffic {
    percent         = 100
    latest_revision = true
  }

  autogenerate_revision_name = true
}
When you use dynamic blocks, you can't use each; the iterator is named after the block label, which is env here. It should be:
dynamic "env" {
for_each = var.envs
content {
name = env.key
value = env.value
}
}
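For completeness, a minimal sketch of the var.envs variable the block iterates over (the default values are illustrative):

variable "envs" {
  description = "Map of environment variable names to values."
  type        = map(string)
  default = {
    SOURCE = "remote"
    TARGET = "home"
  }
}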
I am a beginner in Terraform/Azure and I want to deploy a Docker image to ACR using Terraform, but I was unable to find a solution online. So, if anybody knows how to deploy a Docker image to an Azure Container Registry using Terraform, please share.
Tell me whether this is possible or not.
You may use the Terraform null_resource resource and execute your own logic in Terraform.
Example:
resource "azurerm_resource_group" "rg" {
name = "example-resources"
location = "West Europe"
}
resource "azurerm_container_registry" "acr" {
name = "containerRegistry1"
resource_group_name = azurerm_resource_group.rg.name
location = azurerm_resource_group.rg.location
sku = "Premium"
admin_enabled = true
georeplication_locations = ["East US", "West Europe"]
}
resource "azurerm_azuread_application" "acr-app" {
name = "acr-app"
}
resource "azurerm_azuread_service_principal" "acr-sp" {
application_id = "${azurerm_azuread_application.acr-app.application_id}"
}
resource "azurerm_azuread_service_principal_password" "acr-sp-pass" {
service_principal_id = "${azurerm_azuread_service_principal.acr-sp.id}"
value = "Password12"
end_date = "2022-01-01T01:02:03Z"
}
resource "azurerm_role_assignment" "acr-assignment" {
scope = "${azurerm_container_registry.acr.id}"
role_definition_name = "Contributor"
principal_id = "${azurerm_azuread_service_principal_password.acr-sp-pass.service_principal_id}"
}
resource "null_resource" "docker_push" {
provisioner "local-exec" {
command = <<-EOT
docker login ${azurerm_container_registry.acr.login_server}
docker push ${azurerm_container_registry.acr.login_server}
EOT
}
}
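As written, docker login will prompt for credentials and docker push has no image reference. Since admin_enabled is true, one possible variant (the image name myapp:latest is purely illustrative) is:

resource "null_resource" "docker_push" {
  provisioner "local-exec" {
    command = <<-EOT
      # Log in with the registry's admin credentials (available because admin_enabled = true)
      docker login ${azurerm_container_registry.acr.login_server} -u ${azurerm_container_registry.acr.admin_username} -p ${azurerm_container_registry.acr.admin_password}
      # Tag and push a locally built image; "myapp:latest" is illustrative
      docker tag myapp:latest ${azurerm_container_registry.acr.login_server}/myapp:latest
      docker push ${azurerm_container_registry.acr.login_server}/myapp:latest
    EOT
  }
}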
I just figured this out with the docker_registry_image resource. I do not like using a null_resource, since it requires a dependency on local system packages. Furthermore, I made it so that you can deploy both with local authentication and with credentials stored as secrets in a GitHub repository, for example.
main.tf
terraform {
  required_version = ">= 1.1.7"

  required_providers {
    docker = {
      source  = "kreuzwerker/docker"
      version = ">= 2.16.0"
    }
  }

  backend "azurerm" {}
}

provider "docker" {
  // Used when deploying locally
  dynamic "registry_auth" {
    for_each = var.docker_config_file_path == "" ? [] : [1]
    content {
      address     = var.docker_registry_url
      config_file = pathexpand(var.docker_config_file_path)
    }
  }

  // Used when deploying from a build pipeline
  dynamic "registry_auth" {
    for_each = (var.docker_registry_username == "" || var.docker_registry_password == "") ? [] : [1]
    content {
      address  = var.docker_registry_url
      username = var.docker_registry_username
      password = var.docker_registry_password
    }
  }
}

resource "docker_registry_image" "image" {
  name          = "${var.docker_image_name}:${var.docker_image_tag}"
  keep_remotely = var.keep_remotely

  build {
    context    = var.docker_file_path
    build_args = var.build_args
  }
}
variables.tf
variable "docker_registry_url" {
description = "Address of ACR container registry."
type = string
}
variable "docker_registry_username" {
description = "Username for authenticating with the container registry. Required if docker_config_file_path is not set."
type = string
default = ""
}
variable "docker_registry_password" {
description = "Password for authenticating with the container registry. Required if docker_config_file_path is not set."
type = string
default = ""
sensitive = true
}
variable "docker_config_file_path" {
description = "Path to config.json containing docker configuration."
type = string
default = ""
}
variable "docker_image_name" {
description = "Name of docker image to build."
type = string
}
variable "docker_image_tag" {
description = "Tag to use for the docker image."
type = string
default = "latest"
}
variable "source_path" {
description = "Path to folder containing application code"
type = string
default = null
}
variable "docker_file_path" {
description = "Path to Dockerfile in source package"
type = string
}
variable "build_args" {
description = "A map of Docker build arguments."
type = map(string)
default = {}
}
variable "keep_remotely" {
description = "Whether to keep Docker image in the remote registry on destroy operation."
type = bool
default = false
}
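For reference, a hypothetical terraform.tfvars wiring this up against an ACR registry (every value below is an illustrative assumption):

docker_registry_url = "myregistry.azurecr.io"
docker_image_name   = "myregistry.azurecr.io/myapp"
docker_image_tag    = "1.0.0"
docker_file_path    = "./app" # folder containing the Dockerfile
keep_remotely       = true

For ACR, the username/password pair can be a service principal's application ID and password, or the registry's admin credentials.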
I created a webserver infrastructure with Terraform (v0.12.21) in Google Cloud to deploy a lot of websites.
I created a persistent volume claim for each deploy (1 GB each), using this code:
resource "kubernetes_persistent_volume_claim" "wordpress_volumeclaim" {
for_each = var.wordpress_site
metadata {
name = "wordpress-volumeclaim-${terraform.workspace}-${each.value.name}"
namespace = "default"
}
spec {
access_modes = ["ReadWriteOnce"]
resources {
requests = {
storage = each.value.disk
resource_policies = google_compute_resource_policy.policy.name
}
}
}
}
resource "kubernetes_deployment" "wordpress" {
for_each = var.wordpress_site
metadata {
name = each.value.name
labels = { app = each.value.name }
}
spec {
replicas = 1
selector {
match_labels = { app = each.value.name }
}
template {
metadata {
labels = { app = each.value.name }
}
spec {
volume {
name = "wordpress-persistent-storage-${terraform.workspace}-${each.value.name}"
persistent_volume_claim {
claim_name = "wordpress-volumeclaim-${terraform.workspace}-${each.value.name}"
}
}
[...]
But now I need to back up all these disks. My best idea is to use the Gcloud snapshot functionality, and it must be dynamic, as the creation of these disks is dynamic.
First of all, I created a snapshot policy:
resource "google_compute_resource_policy" "policy" {
name = "my-resource-policy"
region = "zone-region-here"
project = var.project
snapshot_schedule_policy {
schedule {
daily_schedule {
days_in_cycle = 1
start_time = "04:00"
}
}
retention_policy {
max_retention_days = 7
on_source_disk_delete = "KEEP_AUTO_SNAPSHOTS"
}
}
}
And now I want to add it to my persistent volume claim, but I don't know how, because this line is not working at all:
resource_policies = google_compute_resource_policy.policy.name
All my attempts resulted in errors. Could you help me here?