I am working on Terraform workspace automation using the tfe provider, and I want to create a Terraform HCL variable as a map, using custom_tags from the data structure below.
workspaces = {
  "PROD" = {
    "custom_tags" = {
      "Application"     = "demo"
      "EnvironmentType" = "prod"
      "NamePrefix"      = "sof"
      "ProductType"     = "terraform"
    }
    "env_variables" = {}
    "id"            = "alfsdfksf"
    "name"          = "PROD"
    "repo"          = "github/something"
    "tf_variables"  = {}
  }
  "UAT" = {
    "custom_tags" = {
      "Application"     = "demo"
      "EnvironmentType" = "uat"
      "NamePrefix"      = "sof"
      "ProductType"     = "terraform"
    }
    "env_variables" = {}
    "id"            = "ws-k7KWYfsdfsdf"
    "name"          = "UAT"
    "repo"          = "github/otherthing"
    "tf_variables"  = {}
  }
}
Here is my resource block:
resource "tfe_variable" "terraform_hcl_variables" {
for_each = { for w in local.workspaces : w.name => w }
key = "custom_tags"
value = each.value.custom_tags
category = "terraform"
hcl = true
sensitive = false
workspace_id = tfe_workspace.main[each.key].id
}
And I am getting this error; any help resolving it is appreciated.
**each.value.custom_tags is object with 4 attributes**
**Inappropriate value for attribute "value": string required.**
Expected outcome
custom_tags should be created as an HCL variable:
custom_tags = {
  "Application"     = "demo"
  "EnvironmentType" = "prod"
  "NamePrefix"      = "sof"
  "ProductType"     = "terraform"
}
Sadly you can't do this directly: the value attribute must be a string, but you are trying to assign an "object with 4 attributes" to it, which is exactly what the error says.
You can, however, convert each.value.custom_tags into a string using jsonencode. Because you already set hcl = true, and JSON object syntax is also valid HCL, the workspace variable still ends up as a map.
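A minimal sketch of that approach applied to your resource block (assuming the rest of your config stays the same):

resource "tfe_variable" "terraform_hcl_variables" {
  for_each = { for w in local.workspaces : w.name => w }
  key      = "custom_tags"
  # jsonencode() turns the object into a string, which satisfies the
  # provider's string-typed "value" attribute; with hcl = true the
  # workspace parses it back into a map, since JSON is valid HCL.
  value        = jsonencode(each.value.custom_tags)
  category     = "terraform"
  hcl          = true
  sensitive    = false
  workspace_id = tfe_workspace.main[each.key].id
}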
Related
I am trying to create an AWS CodePipeline using resources in TF. Here is the resources section in my TF.
resource "aws_codepipeline" "codepipeline" {
name = var.name
role_arn = var.role_arn
artifact_store {
location = var.location
type = var.type
}
stage {
name = var.stage1_name
action {
name = var.action1_name
category = var.source_category
owner = var.source_owner
provider = var.source_provider
version = var.source_version
output_artifacts = var.source_output_artifacts
configuration = {
ConnectionArn = var.connection_arn
FullRepositoryId = var.full_repository_id
BranchName = var.branch_name
OutputArtifactFormat = var.output_artifact_format
}
}
}
stage {
name = var.stage2_name
action {
name = var.action2_name
category = var.build_category
owner = var.build_owner
provider = var.build_provider
input_artifacts = var.input_artifacts
output_artifacts = var.build_output_artifacts
version = var.build_version
configuration = {
ProjectName = var.project_name
EnvironmentVariables = var.environment_variables /*jsonencode(
[
{
name = var.environment_name
type = var.environment_type
value = var.environment_value
}
]
) */
}
}
}
}
In my TF modules section, I am creating the codepipeline by calling the resource given above. My module code is:
module "codepipeline_notification" {
source = "../../modules/codepipeline"
name = var.codepipeline_lambda_notification_name
role_arn = aws_iam_role.cp_lambda_deploy_role.arn #var.codepipeline_lambda_notification_role_arn
location = module.s3_codepipeline_artifact.s3_bucket_account_id #var.codepipeline_lambda_notification_location
type = var.codepipeline_lambda_notification_type
stage1_name = var.codepipeline_lambda_notification_stage1_name
action1_name = var.codepipeline_lambda_notification_action1_name
source_category = var.codepipeline_lambda_notification_source_category
source_owner = var.codepipeline_lambda_notification_source_owner
source_provider = var.codepipeline_lambda_notification_source_provider
source_version = var.codepipeline_lambda_notification_source_version
source_output_artifacts = var.codepipeline_lambda_notification_source_output_artifacts
full_repository_id = var.codepipeline_lambda_notification_full_repository_id
branch_name = var.codepipeline_lambda_notification_branch_name
output_artifact_format = var.codepipeline_lambda_notification_output_artifact_format
environment_variables = jsonencode(
[
{
name = var.codepipeline_lambda_notification_environment_name
type = var.codepipeline_lambda_notification_environment_type
value = var.codepipeline_lambda_notification_environment_value
}
]
)
build_output_artifacts = var.codepipeline_lambda_notification_build_output_artifacts
connection_arn = module.codestarconnections.arn
stage2_name = var.codepipeline_lambda_notification_stage2_name
action2_name = var.codepipeline_lambda_notification_action2_name
build_category = var.codepipeline_lambda_notification_build_category
build_owner = var.codepipeline_lambda_notification_build_owner
build_provider = var.codepipeline_lambda_notification_build_provider
build_version = var.codepipeline_lambda_notification_build_version
input_artifacts = var.codepipeline_lambda_notification_input_artifacts
project_name = module.codebuild_notification.name
}
With this approach, I am trying to create 4 pipelines, where one pipeline has only 2 stages and the other pipelines have 3 stages. If I define 3 stages in the resource, Terraform forces the module to create 3 stages in all pipelines, even where I need only two. Is there any way in Terraform to define the stages in the resource and have the module use them conditionally?
Not sure if you ever got an answer to your question, but yes, there is a way: a dynamic pipeline. I have a repository that walks you through the usage of a dynamic pipeline. In short, you treat the resource as dynamic, iterating it with for_each and passing the whole configuration in as a map.
The module looks like this:
resource "aws_codepipeline" "codepipeline" {
for_each = var.code_pipeline
name = "${local.name_prefix}-${var.AppName}"
role_arn = each.value["code_pipeline_role_arn"]
tags = {
Pipeline_Key = each.key
}
artifact_store {
type = lookup(each.value, "artifact_store", null) == null ? "" : lookup(each.value.artifact_store, "type", "S3")
location = lookup(each.value, "artifact_store", null) == null ? null : lookup(each.value.artifact_store, "artifact_bucket", null)
}
dynamic "stage" {
for_each = lookup(each.value, "stages", {})
iterator = stage
content {
name = lookup(stage.value, "name")
dynamic "action" {
for_each = lookup(stage.value, "actions", {}) //[stage.key]
iterator = action
content {
name = action.value["name"]
category = action.value["category"]
owner = action.value["owner"]
provider = action.value["provider"]
version = action.value["version"]
run_order = action.value["run_order"]
input_artifacts = lookup(action.value, "input_artifacts", null)
output_artifacts = lookup(action.value, "output_artifacts", null)
configuration = action.value["configuration"]
namespace = lookup(action.value, "namespace", null)
}
}
}
}
}
Executing Module
module "code_pipeline" {
source = "../module-aws-codepipeline" #using module locally
#source = "your-github-repository/aws-codepipeline" #using github repository
AppName = "My_new_pipeline"
code_pipeline = local.code_pipeline
}
Sample locals.tf with pipeline variable
locals {
  /*
  DECLARE environment variables. Note that not every action requires
  environment variables.
  */
  action_second_stage_variables = [
    {
      name  = "PIPELINE_EXECUTION_ID"
      type  = "PLAINTEXT"
      value = "#{codepipeline.PipelineExecutionId}"
    },
    {
      name  = "NamespaceVariable"
      type  = "PLAINTEXT"
      value = "some_value"
    },
  ]
  action_third_stage_variables = [
    {
      name  = "PL_VARIABLE_1"
      type  = "PLAINTEXT"
      value = "VALUE1"
    },
    {
      name  = "PL_VARIABLE_2"
      type  = "PLAINTEXT"
      value = "VALUE2"
    },
    {
      name  = "PL_VARIABLE_3"
      type  = "PLAINTEXT"
      value = "VALUE3"
    },
    {
      name  = "PL_VARIABLE_4"
      type  = "PLAINTEXT"
      value = "#{BLD.NamespaceVariable}"
    },
  ]
  /*
  BUILD YOUR STAGES
  */
  code_pipeline = {
    codepipeline-configs = {
      code_pipeline_role_arn = "arn:aws:iam::aws_account_name:role/role_name"
      artifact_store = {
        type            = "S3"
        artifact_bucket = "your-aws-bucket-name"
      }
      stages = {
        stage_1 = {
          name = "Download"
          actions = {
            action_1 = {
              run_order        = 1
              category         = "Source"
              name             = "First_Stage"
              owner            = "AWS"
              provider         = "CodeCommit"
              version          = "1"
              output_artifacts = ["download_output"]
              configuration = {
                RepositoryName       = "Codecommit_target_repo"
                BranchName           = "main"
                PollForSourceChanges = true
                OutputArtifactFormat = "CODE_ZIP"
              }
            }
          }
        }
        stage_2 = {
          name = "Build"
          actions = {
            action_1 = {
              run_order        = 2
              category         = "Build"
              name             = "Second_Stage"
              owner            = "AWS"
              provider         = "CodeBuild"
              version          = "1"
              namespace        = "BLD"
              input_artifacts  = ["download_output"]
              output_artifacts = ["build_outputs"]
              configuration = {
                ProjectName          = "codebuild_project_name_for_second_stage"
                EnvironmentVariables = jsonencode(local.action_second_stage_variables)
              }
            }
          }
        }
        stage_3 = {
          name = "Validation"
          actions = {
            action_1 = {
              run_order        = 1
              name             = "Third_Stage"
              category         = "Build"
              owner            = "AWS"
              provider         = "CodeBuild"
              version          = "1"
              input_artifacts  = ["build_outputs"]
              output_artifacts = ["validation_outputs"]
              configuration = {
                ProjectName          = "codebuild_project_name_for_third_stage"
                EnvironmentVariables = jsonencode(local.action_third_stage_variables)
              }
            }
          }
        }
      }
    }
  }
}
The full use of the module can be found in this GitHub repository. In your case, you could pass in multiple resources to create various pipelines in one module with unique and custom stages and actions. I hope this helps.
I am trying to output GCP project information by doing something like this:
output "projects" {
value = tomap({
for project_name in ["project_1", "project_2", "project_3"] :
project_name => tomap({
id = google_project."${project_name}".id
number = google_project."${project_name}".number
})
})
description = "Projects"
}
Or like this:
output "projects" {
value = tomap({
for_each = toset([google_project.project_1,google_project.project_2])
id = each.key.id
number = each.key.number
})
description = "Projects"
}
Is it at all possible to use resource names this way? Do I have to specify every resource by duplicating code?
E.g.
output "projects" {
value = tomap({
project_1 = tomap({
id = google_project.project_1.id
number = google_project.project_1.number
})
project_2 = tomap({
id = google_project.project_2.id
number = google_project.project_2.number
})
project_3 = tomap({
id = google_project.project_3 .id
number = google_project.pproject_3 .number
})
})
description = "Projects"
}
EDIT: declared resources.
In main.tf, projects 1 to 3 are declared the same way.
resource "google_project" "project_3" {
name = var.projects.project_3.name
project_id = var.projects.project_3.id
folder_id = google_folder.parent.name
billing_account = data.google_billing_account.acct.id
auto_create_network = false
}
In variables.tf:
variable "projects" {
type = map(object({
name = string
id = string
}))
}
In variables.tfvars:
projects = {
  project_1 = {
    name = "project_1"
    id   = "project_1-12345"
  }
  project_2 = {
    name = "project_2"
    id   = "project_2-12345"
  }
  project_3 = {
    name = "project_3"
    id   = "project_3-12345"
  }
}
I misunderstood your question originally. I see now that you want to reference a resource by a variable name. No, you cannot do that. But your setup doesn't really make sense as written, and it is more complex than it needs to be.
Consider whether these options would improve it.
locals {
  projects = { # This is equivalent to your input.
    project_1 = {
      name = "project_1"
      id   = "project_1-12345"
    }
    project_2 = {
      name = "project_2"
      id   = "project_2-12345"
    }
    project_3 = {
      name = "project_3"
      id   = "project_3-12345"
    }
  }
}

resource "google_project" "this" {
  for_each            = local.projects
  name                = each.key # or each.value.name / don't really need name
  project_id          = each.value.id
  folder_id           = google_folder.parent.name
  billing_account     = data.google_billing_account.acct.id
  auto_create_network = false
}

output "projects_from_input" {
  description = "You can, of course, just use the input."
  value       = local.projects
}

output "projects_explicit_values" {
  description = "Alternatively, if you need a subset of resource values."
  value = { for k, v in google_project.this : k => {
    name = v.name
    id   = v.project_id
  } }
}

output "complete_resources" {
  description = "But you can also just output the complete resource."
  value       = google_project.this
}
I edited my initial answer after seeing the Terraform resource that creates a project. The need is a way to get a resource name in the output block with interpolation.
I think that if a single resource is used to create all the projects, instead of one resource per project, it is easier to expose that resource in the output block.
For example, you can configure the projects' metadata from a JSON file, or directly from a local variable, or a var if needed.
Example with a JSON file and a local variable.
mymodule/resource/projects.json:
{
  "projects": {
    "project_1": {
      "id": "project_1",
      "number": "23333311"
    },
    "project_2": {
      "id": "project_2",
      "number": "33399999"
    }
  }
}
Then retrieve the projects as a variable from a locals.tf file.
mymodule/locals.tf:
locals {
  projects = jsondecode(file("${path.module}/resource/projects.json"))["projects"]
}
Create your projects in a single resource with a for_each:
resource "google_project" "projects" {
for_each = local.projects
name = each.key
project_id = each.value["id"]
folder_id = google_folder.parent.name
billing_account = data.google_billing_account.acct.id
auto_create_network = false
}
Expose the projects resource in an output.tf file:
output "projects" {
value = google_project.projects
description = "Projects"
}
The same principle can be done with a var instead of a local variable.
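For instance, a sketch of the same pattern driven by a variable (the object type below is an assumption about the projects metadata shape; adjust it to your fields):

variable "projects" {
  description = "Projects metadata, keyed by project name."
  type = map(object({
    id     = string
    number = string
  }))
}

resource "google_project" "projects" {
  for_each            = var.projects
  name                = each.key
  project_id          = each.value.id
  folder_id           = google_folder.parent.name
  billing_account     = data.google_billing_account.acct.id
  auto_create_network = false
}

output "projects" {
  value       = google_project.projects
  description = "Projects"
}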
I'm hitting Error: Incorrect attribute value type when passing var_3 below to a tfe_variable of type HCL.
Is there a way to convert the decoded JSON variable to HCL?
My config.json:
{
  "vars": {
    "var_1": "foo",
    "var_2": "bar",
    "var_3": {
      "default": "foo"
    }
  }
}
My terraform config:
variable "tfe_token" {}
provider "tfe" {
hostname = "app.terraform.io"
token = var.tfe_token
}
data "tfe_workspace" "this" {
name = "my-workspace-name"
organization = "my-org-name"
}
locals {
json_config = jsondecode(file("config.json"))
}
resource "tfe_variable" "workspace" {
for_each = local.json_config.vars
workspace_id = data.tfe_workspace.this.id
key = each.key
value = each.value
category = "terraform"
hcl = true
sensitive = false
}
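One possible workaround, sketched under the same assumption as the jsonencode suggestion in the first answer above (JSON is valid HCL syntax, so an HCL-typed workspace variable can hold a JSON-encoded value), is to JSON-encode every value uniformly:

resource "tfe_variable" "workspace" {
  for_each     = local.json_config.vars
  workspace_id = data.tfe_workspace.this.id
  key          = each.key
  # jsonencode() always yields a string, satisfying the provider's
  # string-typed "value" attribute; with hcl = true, "foo" should parse
  # back as the string foo and var_3 as an object, instead of the
  # object failing the type check.
  value        = jsonencode(each.value)
  category     = "terraform"
  hcl          = true
  sensitive    = false
}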
I have maps of variables like this:
users.tfvars
users = {
  "testterform" = {
    path              = "/"
    force_destroy     = true
    email_address     = "testterform@example.com"
    group_memberships = ["test1"]
    tags              = { department : "test" }
    ssh_public_key    = "ssh-rsa AAAAB3NzaC1yc2EAAA4l7"
  }
  "testterform2" = {
    path              = "/"
    force_destroy     = true
    email_address     = "testterform2@example.com"
    group_memberships = ["test1"]
    tags              = { department : "test" }
    ssh_public_key    = ""
  }
}
I would like to upload the SSH key only if ssh_public_key is not empty for the user, but I don't understand how to check this.
#main.tf
resource "aws_iam_user" "this" {
for_each = var.users
name = each.key
path = each.value["path"]
force_destroy = each.value["force_destroy"]
tags = merge(each.value["tags"], { Provisioner : var.provisioner, EmailAddress : each.value["email_address"] })
}
resource "aws_iam_user_group_membership" "this" {
for_each = var.users
user = each.key
groups = each.value["group_memberships"]
depends_on = [ aws_iam_user.this ]
}
resource "aws_iam_user_ssh_key" "this" {
for_each = var.users
username = each.key
encoding = "SSH"
public_key = each.value["ssh_public_key"]
depends_on = [ aws_iam_user.this ]
}
It sounds like what you need here is a derived "users that have non-empty SSH keys" map. You can use the if clause of a for expression to derive a new collection from an existing one while filtering out some of the elements:
resource "aws_iam_user_ssh_key" "this" {
for_each = {
for name, user in var.users : name => user
if user.ssh_public_key != ""
}
username = each.key
encoding = "SSH"
public_key = each.value.ssh_public_key
depends_on = [aws_iam_user.this]
}
The derived map here uses the same keys and values as the original var.users, but is just missing some of them. That means that the each.key results will correlate and so you'll still get the same username value you were expecting, and your instances will have addresses like aws_iam_user_ssh_key.this["testterform"].
You can use a for expression to exclude those blanks.
For example, you can do it in a local:
variable "users" {
default = {
"testterform" = {
path = "/"
force_destroy = true
tags = { department : "test" }
ssh_public_key = "ssh-rsa AAAAB3NzaC1yc2EAAA4l7"
}
"testterform2" = {
path = "/"
force_destroy = true
tags = { department : "test" }
ssh_public_key = ""
}
}
}
locals {
public_key = flatten([
for key, value in var.users :
value.ssh_public_key if ! contains([""], value.ssh_public_key)
])
}
output "myout" {
value = local.public_key
}
that will output:
myout = [
  "ssh-rsa AAAAB3NzaC1yc2EAAA4l7",
]
As you can see, the empty ones have been removed, and you can add anything else you want to exclude to that contains array.
Then you can use the same filtering idea in the for_each for your SSH keys; see the sketch below.
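Since local.public_key is just a flat list of key strings, it loses the association with the username; here is a sketch that keeps the same contains-based exclusion but preserves the map shape for for_each (same end result as the first answer):

locals {
  # Keep the whole user object, keyed by username, so each.key in the
  # resource still matches aws_iam_user.this.
  users_with_keys = {
    for name, user in var.users : name => user
    if !contains([""], user.ssh_public_key)
  }
}

resource "aws_iam_user_ssh_key" "this" {
  for_each   = local.users_with_keys
  username   = each.key
  encoding   = "SSH"
  public_key = each.value.ssh_public_key
  depends_on = [aws_iam_user.this]
}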
So I have created a google_bigquery module to create datasets and set access.
The module iterates over a map of lists of maps. It uses each.key to create the datasets, then iterates over the list of maps to create the dynamic access.
The module works, as in:
it has no errors or warnings,
it deploys the resources,
and it populates the remote state file appropriately.
The issue is that every time I run terraform, it wants to re-apply the same changes, over and over again.
Clearly something is not right, but I am not sure what.
Here is the code.
main.tf
locals {
  env          = basename(path.cwd)
  project      = basename(abspath("${path.cwd}/../.."))
  project_name = coalesce(var.project_name, format("%s-%s", local.project, local.env))
}

data "google_compute_zones" "available" {
  project = local.project_name
  region  = var.region
}

provider "google" {
  project = local.project_name
  region  = var.region
  version = "~> 2.0" # until 3.0 goes out of beta
}

terraform {
  required_version = ">= 0.12.12"
}

resource "google_bigquery_dataset" "main" {
  for_each                   = var.datasets
  dataset_id                 = upper("${each.key}_${local.env}")
  location                   = var.region
  delete_contents_on_destroy = true

  dynamic "access" {
    for_each = flatten([for k, v in var.datasets : [
      for i in each.value : {
        role           = i.role
        user_by_email  = i.user_by_email
        group_by_email = i.group_by_email
        dataset_id     = i.dataset_id
        project_id     = i.project_id
        table_id       = i.table_id
      }]])

    content {
      role           = lookup(access.value, "role", "")
      user_by_email  = lookup(access.value, "user_by_email", "")
      group_by_email = lookup(access.value, "group_by_email", "")

      view {
        dataset_id = lookup(access.value, "dataset_id", "")
        project_id = lookup(access.value, "project_id", "")
        table_id   = lookup(access.value, "table_id", "")
      }
    }
  }

  access {
    role          = "READER"
    special_group = "projectReaders"
  }

  access {
    role           = "OWNER"
    group_by_email = "Group"
  }

  access {
    role          = "OWNER"
    user_by_email = "ServiceAccount"
  }

  access {
    role          = "WRITER"
    special_group = "projectWriters"
  }
}
variables.tf
variable "region" {
description = ""
default = ""
}
variable "env" {
default = ""
}
variable "project_name" {
default = ""
}
variable "owner_group" {
description = ""
default = ""
}
variable "owner_sa" {
description = ""
default = ""
}
variable "datasets" {
description = "A map of objects, including dataset_isd abd access"
type = map(list(map(string)))
}
terraform.tfvars
datasets = {
  dataset01 = [
    {
      role           = "WRITER"
      user_by_email  = "email_address"
      group_by_email = ""
      dataset_id     = ""
      project_id     = ""
      table_id       = ""
    },
    {
      role           = ""
      user_by_email  = ""
      group_by_email = ""
      dataset_id     = "MY_OTHER_DATASET"
      project_id     = "my_other_project"
      table_id       = "my_test_view"
    }
  ]
  dataset02 = [
    {
      role           = "READER"
      user_by_email  = ""
      group_by_email = "group"
      dataset_id     = ""
      project_id     = ""
      table_id       = ""
    },
    {
      role           = ""
      user_by_email  = ""
      group_by_email = ""
      dataset_id     = "MY_OTHER_DATASET"
      project_id     = "my_other_project"
      table_id       = "my_test_view_2"
    }
  ]
}
So the problem is that the dynamic block (the way I wrote it) can generate this output:
  + access {
      + role          = "WRITER"
      + special_group = "projectWriters"

      + view {}
    }
This is applied with no errors, but Terraform wants to re-apply it over and over.
The issue seems to be that the provider API response doesn't include the empty view {}.
Any suggestion on how I could make the view block conditional, so it is only generated when its values are not null?
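For readers who land here with the same question: one common way to make an inner block conditional is a nested dynamic block whose for_each collection is empty when the values are unset, so Terraform renders no view {} at all. A sketch against the question's structure (local.access_entries is a hypothetical stand-in for the flattened list built above), not the fix the author ultimately used:

dynamic "access" {
  for_each = local.access_entries # however you build the entries list
  content {
    role           = lookup(access.value, "role", null)
    user_by_email  = lookup(access.value, "user_by_email", null)
    group_by_email = lookup(access.value, "group_by_email", null)

    # Render view {} only when a dataset_id is present; iterating over
    # an empty list emits no nested block at all.
    dynamic "view" {
      for_each = access.value.dataset_id != "" ? [access.value] : []
      content {
        dataset_id = view.value.dataset_id
        project_id = view.value.project_id
        table_id   = view.value.table_id
      }
    }
  }
}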
I fixed the problem. I changed the module slightly, along with the variable type.
I split the roles and the views into their own lists of maps within the parent map of datasets.
There is a conditional on each dynamic block, so it is only applied if roles or views exist.
I also realized the dynamic block was iterating on the wrong iterator: it was iterating on var.datasets, which caused the permissions assigned to each dataset to be applied to all datasets. It now iterates on each.value (from the resource's for_each).
Here is the new code that works.
MAIN.TF
resource "google_bigquery_dataset" "main" {
for_each = var.datasets
dataset_id = upper("${each.key}_${local.env}")
location = var.region
delete_contents_on_destroy = true
dynamic "access" {
for_each = flatten([for i in each.value : [
for k, v in i : [
for l in v :
{
role = l.role
user_by_email = l.user_by_email
group_by_email = l.group_by_email
special_group = l.special_group
}]
if k == "roles"
]])
content {
role = access.value["role"]
user_by_email = access.value["user_by_email"]
group_by_email = access.value["group_by_email"]
special_group = access.value["special_group"]
}
}
dynamic "access" {
for_each = flatten([for i in each.value : [
for k, v in i : [
for l in v :
{
dataset_id = l.dataset_id
project_id = l.project_id
table_id = l.table_id
}]
if k == "views"
]])
content {
view {
dataset_id = access.value["dataset_id"]
project_id = access.value["project_id"]
table_id = access.value["table_id"]
}
}
}
}
VARIABLES.TF
variable "datasets" {
description = "A map of objects, including datasets IDs, roles and views"
type = map(list(map(list(map(string)))))
default = {}
}
terraform.tfvars
datasets = {
  dataset01 = [
    {
      roles = [
        {
          role           = "WRITER"
          user_by_email  = "email_address"
          group_by_email = ""
          special_group  = ""
        }
      ]
      views = [
        {
          dataset_id = "MY_OTHER_DATASET"
          project_id = "my_other_project"
          table_id   = "my_test_view"
        }
      ]
    }
  ]
  dataset02 = [
    {
      roles = [
        {
          role           = "READER"
          user_by_email  = ""
          group_by_email = "group"
          special_group  = ""
        }
      ]
      views = [
        {
          dataset_id = "MY_OTHER_DATASET"
          project_id = "my_other_project"
          table_id   = "my_test_view_2"
        }
      ]
    }
  ]
}