How to use a token created in a secret in another resource? - terraform

I'm trying to create a service account secret in order to populate a secret with the token:
resource "kubernetes_service_account" "k8s-api-token" {
metadata {
namespace = "${var.whatever_namespace}"
name = "api-service-account"
}
secret {
name = "api-service-account-secret"
}
}
resource "kubernetes_secret" "k8s-api-token" {
metadata {
namespace = "${var.whatever_namespace}"
name = "${kubernetes_service_account.k8s-api-token.metadata.0.name}-secret"
annotations = {
"kubernetes.io/service-account.name" = "${kubernetes_service_account.k8s-api-token.metadata.0.name}"
}
}
type = "kubernetes.io/service-account-token"
}
data "kubernetes_secret" "k8s-api-token" {
depends_on = ["kubernetes_secret.k8s-api-token"]
metadata {
namespace = "${var.whatever_namespace}"
name = "${kubernetes_secret.k8s-api-token.metadata.0.name}"
}
}
resource "kubernetes_secret" "whatever-secrets" {
depends_on = ["kubernetes_secret.k8s-api-token"]
metadata {
name = "botfront-secrets"
namespace = "${var.whatever_namespace}"
}
data = {
K8S_API = "${data.kubernetes_secret.k8s-api-token.data.token}"
}
}
But it gives me an error:
Resource 'data.kubernetes_secret.k8s-api-token' does not have attribute 'data.token' for variable 'data.kubernetes_secret.k8s-api-token.data.token'
I can verify the secret is created, but even running terraform state show kubernetes_secret.k8s_api_token doesn't return anything
What am I doing wrong?

The solution is to use a lookup:
K8S_API = "${lookup(data.kubernetes_secret.k8s-api-token.data, "token", "")}"
Source: http://blog.crashtest-security.com/resource-does-not-have-attribute
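Applied to the resources from the question, the consuming secret would then look roughly like this (a sketch assuming the same resource and variable names as above):

resource "kubernetes_secret" "whatever-secrets" {
  depends_on = ["kubernetes_secret.k8s-api-token"]

  metadata {
    name      = "botfront-secrets"
    namespace = "${var.whatever_namespace}"
  }

  data = {
    K8S_API = "${lookup(data.kubernetes_secret.k8s-api-token.data, "token", "")}"
  }
}

The lookup avoids the "does not have attribute" error because the data source's data map is only known after the secret has been populated, and lookup returns the fallback value when the key is not yet present.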

Related

Terraform Multi-providers VS explicit passing within Module

I have seen similar questions, but the answers there addressed formatting issues or workarounds that weren't very "clean". I will try to summarize my issue and hopefully get a lean/clean solution. Thank you in advance!
I am creating AKS namespaces via the Kubernetes provider in Terraform. Since I have 3 clusters, I want to be able to control which provider is used to create each namespace, e.g. dev / prod.
Folder structure
.terraform
├── modules
│   ├── namespace.tf
│   └── module_providers.tf
└── main-deploy
    ├── main.tf
    └── main_provider.tf
My Module // namespace.tf
# Create Namespace
resource "kubernetes_namespace" "namespace-appteam" {
  metadata {
    annotations = {
      name = var.application_name
    }
    labels = {
      appname = var.application_name
    }
    name = var.application_name
  }
}
My main.tf file
module "appteam-test" {
source = "../modules/aks-module"
application_name = "dev-app"
providers = {
kubernetes.dev = kubernetes.dev
kubernetes.prod = kubernetes.prod
}
}
Now, since I have passed two providers in the main.tf module block, how do I control whether the resource I am creating in the namespace.tf file uses the dev or the prod provider? In short, how does the module know which provider to use for a resource when several are passed?
Note: I have required_providers defined in module_providers.tf and the provider blocks in main_provider.tf.
module_providers.tf
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "3.20.0"
    }
    azuread = {
      source  = "hashicorp/azuread"
      version = "2.27.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "2.12.1"
    }
  }
}
main_provider.tf
provider "azuread" {
  alias = "AD"
}

provider "azurerm" {
  alias = "default"
  features {}
}

provider "kubernetes" {
  alias = "dev"
}

provider "kubernetes" {
  alias = "prod"
}
You need to add an alias to all your providers.
provider "kubernetes" {
alias = "dev"
}
provider "kubernetes" {
alias = "stage"
}
In your main.tf file, pass the providers to the module like the following:
providers = {
  kubernetes.dev   = kubernetes.dev
  kubernetes.stage = kubernetes.stage
}
Now, in the module's module_providers.tf file, you need to add configuration_aliases:
kubernetes = {
  source                = "hashicorp/kubernetes"
  version               = "2.12.1"
  configuration_aliases = [kubernetes.dev, kubernetes.stage]
}
Once all the configuration is in place, you can specify the provider explicitly for the resources you want. Your namespace.tf file will look like:
resource "kubernetes_namespace" "namespace-appteam-1" {
provider = kubernetes.dev
metadata {
annotations = {
name = var.application_name
}
labels = {
appname = var.application_name
}
name = var.application_name
}
}
resource "kubernetes_namespace" "namespace-appteam-2" {
provider = kubernetes.stage
metadata {
annotations = {
name = var.application_name
}
labels = {
appname = var.application_name
}
name = var.application_name
}
}
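With those aliases in place, the module call in main.tf from the question would pass the matching providers, roughly like this (a sketch using the dev/stage aliases from this answer):

module "appteam-test" {
  source           = "../modules/aks-module"
  application_name = "dev-app"

  providers = {
    kubernetes.dev   = kubernetes.dev
    kubernetes.stage = kubernetes.stage
  }
}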

AWS BACKUP vaults for cross account in Terraform

I need some help configuring AWS Backup vaults in multiple AWS accounts using Terraform. I'm able to create backup vaults in two accounts with a specific plan and schedule, but I can't see the backed-up data in the destination account. Here's the code I'm using:
resource "aws_backup_vault" "backup-vault" {
provider = aws.source-account
name = var.backup-vault-name
kms_key_arn = aws_kms_key.backup-key.arn
}
resource "aws_backup_vault" "diff-account-vault" {
provider = aws.crossbackup
name = var.cross-account-vault-name
kms_key_arn = aws_kms_key.backup-key.arn
}
resource "aws_backup_plan" "backup-plan" {
name = var.backup-plan-name
rule {
rule_name = "some-rule"
target_vault_name = aws_backup_vault.backup-vault.name
schedule = "cron(0 17-23 * * ? *)"
copy_action {
destination_vault_arn = aws_backup_vault.diff-account-vault.arn
}
}
}
resource "aws_backup_selection" "tag" {
name = "some-backup-selection-name"
iam_role_arn = aws_iam_role.aws-backup-service-role.arn
plan_id = aws_backup_plan.backup-plan.id
selection_tag {
type = var.selection-type
key = var.key
value = var.value
}
}
resource "aws_backup_vault_policy" "organization-policy" {
backup_vault_name = aws_backup_vault.diff-account-vault.name
provider = aws.crossbackup
policy = <<POLICY
{
"Version":"2012-10-17",
"Statement":[
{
"Effect":"Allow",
"Action":"backup:CopyIntoBackupVault",
"Resource":"*",
"Principal":"*",
"Condition":{
"StringEquals":{
"aws:PrincipalOrgID":[
"Organization-ID"
]
}
}
}
]
}
POLICY
}

terraform null_resource frequently triggers change

Using null_resource, I attempt to run kubectl apply on a Kubernetes manifest. I often find that this applies changes for no apparent reason. I'm running Terraform 0.14.8.
data "template_file" "app_crds" {
template = file("${path.module}/templates/app_crds.yaml")
}
resource "null_resource" "app_crds_deploy" {
triggers = {
manifest_sha1 = sha1(data.template_file.app_crds.rendered)
}
provisioner "local-exec" {
command = "kubectl apply -f -<<EOF\n${data.template_file.aws_ingress_controller_crds.rendered}\nEOF"
}
}
terraform plan output
  # module.system.null_resource.app_crds_deploy must be replaced
-/+ resource "null_resource" "app_crds_deploy" {
      ~ id       = "698690821114034664" -> (known after apply)
      ~ triggers = {
          - "manifest_sha1" = "9a4fc962fe92c4ff04677ac12088a61809626e5a"
        } -> (known after apply) # forces replacement
    }
However, this SHA is indeed in the state file:
[I] ➜ terraform state pull | grep 9a4fc962fe92c4ff04677ac12088a61809626e5a
"manifest_sha1": "9a4fc962fe92c4ff04677ac12088a61809626e5a"
I would recommend using the kubernetes_manifest resource from the Terraform Kubernetes provider. Using the provider won't require the host to have kubectl installed and will be far more reliable than the null_resource, as you are seeing. They have an example specifically for CRDs. Here is the Terraform snippet from that example:
resource "kubernetes_manifest" "test-crd" {
manifest = {
apiVersion = "apiextensions.k8s.io/v1"
kind = "CustomResourceDefinition"
metadata = {
name = "testcrds.hashicorp.com"
}
spec = {
group = "hashicorp.com"
names = {
kind = "TestCrd"
plural = "testcrds"
}
scope = "Namespaced"
versions = [{
name = "v1"
served = true
storage = true
schema = {
openAPIV3Schema = {
type = "object"
properties = {
data = {
type = "string"
}
refs = {
type = "number"
}
}
}
}
}]
}
}
}
You can keep your k8s YAML template and feed it to kubernetes_manifest like this:
data "template_file" "app_crds" {
template = file("${path.module}/templates/app_crds.yaml")
}
resource "kubernetes_manifest" "test-configmap" {
manifest = yamldecode(data.template_file.app_crds.rendered)
}
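As a side note, since this template has no interpolated variables, the template_file data source (which comes from the now-archived hashicorp/template provider) isn't strictly needed; a minimal sketch that reads the file directly, assuming the same path and a hypothetical resource name:

resource "kubernetes_manifest" "app_crds" {
  # yamldecode handles a single YAML document; file() reads it as-is
  manifest = yamldecode(file("${path.module}/templates/app_crds.yaml"))
}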

Terraform dynamically generate attributes (not blocks)

I am trying to generate attributes dynamically in Terraform 0.13. I've read through the docs, but I can't seem to get this to work.
Given the following Terraform:
# main.tf
locals {
  secrets = {
    secret1 = [
      {
        name  = "user",
        value = "secret"
      },
      {
        name  = "password",
        value = "password123"
      }
    ],
    secret2 = [
      {
        name  = "token",
        value = "secret"
      }
    ]
  }
}

resource "kubernetes_secret" "secrets" {
  for_each = local.secrets
  metadata {
    name = each.key
  }
  data = {
    [for name, value in each.value : name = value]
  }
}
I would expect the following resources to be rendered:
resource "kubernetes_secret" "secrets[secret1]" {
metadata {
name = "secret1"
}
data = {
user = "secret"
password = "password123"
}
}
resource "kubernetes_secret" "secrets[secret2]" {
metadata {
name = "secret2"
}
data = {
token = "secret"
}
}
However, I just get the following error:
Error: Invalid 'for' expression

  on ../../main.tf line 96, in resource "kubernetes_secret" "secrets":
  96: [for name, value in each.value : name = value]

Extra characters after the end of the 'for' expression.
Does anybody know how to make this work?
The correct syntax for generating a mapping using a for expression is the following:
data = {
  for name, value in each.value : name => value
}
The above would actually be totally redundant, because it would produce the same value as each.value. However, your local value has a list of objects with name and value attributes rather than a map from name to value, so to get a working result we'd either need to change the input to already be a map, like this:
locals {
  secrets = {
    secret1 = {
      user     = "secret"
      password = "password123"
    }
    secret2 = {
      token = "secret"
    }
  }
}

resource "kubernetes_secret" "secrets" {
  for_each = local.secrets

  metadata {
    name = each.key
  }

  # each.value is already a map of a suitable shape
  data = each.value
}
or, if the input being a list of objects is important for some reason, you can project from the list of objects to the mapping like this:
locals {
  secrets = {
    secret1 = [
      {
        name  = "user",
        value = "secret"
      },
      {
        name  = "password",
        value = "password123"
      }
    ],
    secret2 = [
      {
        name  = "token",
        value = "secret"
      }
    ]
  }
}

resource "kubernetes_secret" "secrets" {
  for_each = local.secrets

  metadata {
    name = each.key
  }

  data = {
    for obj in each.value : obj.name => obj.value
  }
}
Both of these should produce the same result, so which to choose will depend on what shape of local value data structure you find most readable or most convenient.
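As a usage note, with for_each the individual instances are addressed by their map key elsewhere in the configuration; for example, a hypothetical output referencing the secret1 instance might look like this:

output "secret1_name" {
  # index into the for_each instances by key, then into the metadata block
  value = kubernetes_secret.secrets["secret1"].metadata[0].name
}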

How to attach a scheduler policy to a persistent volume claim in Gcloud with terraform

I created webserver infrastructure with Terraform (v0.12.21) in Gcloud to deploy a lot of websites.
I created a persistent volume claim for each deployment (1 GB each), and I used this code to create them:
resource "kubernetes_persistent_volume_claim" "wordpress_volumeclaim" {
for_each = var.wordpress_site
metadata {
name = "wordpress-volumeclaim-${terraform.workspace}-${each.value.name}"
namespace = "default"
}
spec {
access_modes = ["ReadWriteOnce"]
resources {
requests = {
storage = each.value.disk
resource_policies = google_compute_resource_policy.policy.name
}
}
}
}
resource "kubernetes_deployment" "wordpress" {
for_each = var.wordpress_site
metadata {
name = each.value.name
labels = { app = each.value.name }
}
spec {
replicas = 1
selector {
match_labels = { app = each.value.name }
}
template {
metadata {
labels = { app = each.value.name }
}
spec {
volume {
name = "wordpress-persistent-storage-${terraform.workspace}-${each.value.name}"
persistent_volume_claim {
claim_name = "wordpress-volumeclaim-${terraform.workspace}-${each.value.name}"
}
}
[...]
But now I need to back up all these disks, and my best idea is to use the Gcloud snapshot functionality. It must be dynamic, as the creation of these disks is dynamic.
First of all, I created a Snapshot policy:
resource "google_compute_resource_policy" "policy" {
name = "my-resource-policy"
region = "zone-region-here"
project = var.project
snapshot_schedule_policy {
schedule {
daily_schedule {
days_in_cycle = 1
start_time = "04:00"
}
}
retention_policy {
max_retention_days = 7
on_source_disk_delete = "KEEP_AUTO_SNAPSHOTS"
}
}
}
And now I want to add it to my persistent volume claim, but I don't know how, because this line is not working at all:
resource_policies = google_compute_resource_policy.policy.name
All my attempts resulted in errors. Could you help me here?
