Creating a list of objects in Terraform

I'm setting up a Terraform repo for my Snowflake instance and bringing in a list of users to start managing.
I have a module called users with the following files.
I have a variable defined as follows:
variable "users" {
type = list(object(
{
name = string
comment = string
default_role = string
disabled = bool
must_change_password = bool
display_name = string
email = string
first_name = string
last_name = string
default_warehouse = string
}
)
)
}
Now, inside users.tf, I want to hold a list of all my users based on the above variable. I thought I could define it as follows:
users {
user_1 = {
name = 'x'
},
user_2 = {
name = 'y'
}
}
However, when I run terraform validate on this, it gives me an error saying that a users block is not expected here.
Can someone point out my error and give me some guidance on whether I'm approaching this correctly?
My intention is to have a file holding all my users, which I then consume with a dynamic block inside main.tf within this module.
I can then reference that inside outputs.tf, which will give me access to the users inside said module from the global project namespace.

It looks to me like you are attempting to configure your users as an object:
users {
user_1 = {
name = "x"
},
user_2 = {
name = "y"
}
}
but you actually set your variable constraint to a list of objects. So it should be:
users = [
{
name = "user_1"
# other fields
},
{
name = "user_2"
# other fields
}
]
Here is a full working example:
modules/users/variables.tf
variable "users" {
type = list(object({
name = string
}))
}
modules/users/outputs.tf
output "users" {
value = var.users
}
main.tf
module "users" {
source = "./modules/users"
users = [
{ name = "user_1" },
{ name = "user_2" }
]
}
output "users" {
value = module.users.users
}
plan output
Changes to Outputs:
+ users = [
+ {
+ name = "user_1"
},
+ {
+ name = "user_2"
},
]
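If the eventual goal is to create these users in Snowflake from within the module, note that a dynamic block only generates nested blocks inside a single resource; separate users need separate resource instances, which is what for_each gives you. A sketch (assuming the Snowflake provider's snowflake_user resource and that its argument names line up with the fields of var.users):
modules/users/users.tf
# Sketch only: snowflake_user and its argument names are assumptions here.
resource "snowflake_user" "this" {
  # for_each needs a map, so key the list of objects by user name
  for_each = { for u in var.users : u.name => u }

  name                 = each.value.name
  comment              = each.value.comment
  default_role         = each.value.default_role
  disabled             = each.value.disabled
  must_change_password = each.value.must_change_password
  display_name         = each.value.display_name
  email                = each.value.email
  first_name           = each.value.first_name
  last_name            = each.value.last_name
  default_warehouse    = each.value.default_warehouse
}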

Your config syntax and usage are correct here; the issue is your config file organization. users.tf is being used as a Terraform variable-values file, and such a file should have the .tfvars extension. If you rename the file from users.tf to e.g. users.tfvars, you can then supply it as an input with the -var-file=users.tfvars CLI argument, or otherwise as per standard usage. You can find more information in the documentation.
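For example, a users.tfvars along these lines (a sketch; all field values are placeholders) could then be supplied on the command line. Note that in a .tfvars file the value is assigned with = and, since the variable is a list of objects, the value is a list:
users = [
  {
    name                 = "x"
    comment              = "example user"
    default_role         = "PUBLIC"
    disabled             = false
    must_change_password = true
    display_name         = "X"
    email                = "x@example.com"
    first_name           = "X"
    last_name            = "Example"
    default_warehouse    = "COMPUTE_WH"
  },
]
terraform plan -var-file=users.tfvars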
On a side note: it is not really best practice to dedicate an entire module just to managing a set of users for a specific service. If you follow this design pattern in the future, your codebase will not scale well and could easily become unmanageably large.

Related

adding multiple destinations to new relic workflows using terraform

I am trying to create a New Relic workflow using Terraform modules. I am fine with creating a workflow with a single destination, but I want to create a workflow with more than one destination.
Slack channel IDs:
variable "channel_ids" {
type = set(string)
default = ["XXXXXXXXXX","YYYYYYYYY"]
}
Creating notification channels using the Slack channel IDs:
resource "newrelic_notification_channel" "notification_channel" {
for_each = var.channel_ids
name = "test" # will modify if required
type = "SLACK" # will parameterize this
destination_id = "aaaaaaaaa-bbbbb-cccc-ddddd-eeeeeeeeee"
product = "IINT"
property {
key = "channelId"
value = each.value
}
}
Now I want to create something like below (two destinations)
resource "newrelic_workflow" "newrelic_workflow" {
name = "my-workflow"
muting_rules_handling = "NOTIFY_ALL_ISSUES"
issues_filter {
name = "Filter-name"
type = "FILTER"
predicate {
attribute = "accumulations.policyName"
operator = "EXACTLY_MATCHES"
values = [ "policy_name" ]
}
}
destination {
channel_id = newrelic_notification_channel.notification_channel.id
}
destination {
channel_id = newrelic_notification_channel.notification_channel.id
}
}
I tried using for_each and a for loop, but no luck. Any idea how to get my desired output?
Is it possible to loop through and create multiple destination blocks within the same resource, i.e. attach multiple destinations to a single workflow?
I was able to achieve this by using a dynamic block, which produces one destination block per instance of newrelic_notification_channel.notification_channel.
resource "newrelic_workflow" "newrelic_workflow" {
name = "my-workflow"
muting_rules_handling = "NOTIFY_ALL_ISSUES"
issues_filter {
name = "Filter-name"
type = "FILTER"
predicate {
attribute = "accumulations.policyName"
operator = "EXACTLY_MATCHES"
values = [ "policy_name" ]
}
}
dynamic "destination" {
for_each = newrelic_notification_channel.notification_channel
content {
channel_id = destination.value.id
}
}
}

Snowflake Terraform: create multiple tables in one .tf file

I am trying to create multiple tables in Snowflake through Terraform.
Below is the sample code.
resource "snowflake_table" "table" {
database = "AMAYA"
schema = "public"
name = "info"
comment = "A table."
column {
name = "id"
type = "int"
nullable = true
default {
sequence = snowflake_sequence.sequence.fully_qualified_name
}
}
column {
name = "identity"
type = "NUMBER(38,0)"
nullable = true
identity {
start_num = 1
step_num = 3
}
}
}
resource "snowflake_table" "table" {
database = "AMAYA"
schema = "public"
name = "arch_info"
comment = "A table."
column {
name = "id"
type = "int"
nullable = true
default {
sequence = snowflake_sequence.sequence.fully_qualified_name
}
}
column {
name = "identity"
type = "NUMBER(38,0)"
nullable = true
identity {
start_num = 1
step_num = 3
}
}
}
When I run this script I get the error.
A snowflake_procedure resource named "table" was already declared at str.tf:16,1-38. Resource names must be unique per type in each module.
The only solution I have tried that worked is to create a different file for each table; however, I have hundreds of tables to create and was wondering if there is a simpler way to put them all in one file and run the script.
You can't use the same name for a resource more than once, like table below:
resource "snowflake_table" "table" {
Use different names:
resource "snowflake_table" "table_1" {
You should look into the for_each meta-argument and dynamic blocks when you need to create lots of the same resource with different parameters:
Terraform for_each
Terraform dynamic
With those, you can define complex maps as input and automatically create the required number of resources, something like below (just an example with a couple of parameters):
locals {
snowflake_tables = {
info = {
database = "AMAYA"
...
columns = {
identity = {
type = "NUMBER(38,0)"
nullable = true
...
}
}
}
}
}
resource "snowflake_table" "table" {
for_each = local.snowflake_tables
name = each.key # info
database = each.value.database # AMAYA
...
dynamic "column" {
for_each = each.value.columns
content {
name = column.key
type = column.value["type"]
nullable = column.value["nullable"]
...
}
}
}
With this technique, all you do is add more objects to the maps of tables and columns. I've set the example up in locals, but you could take this as a variable input from a .tfvars file etc. instead.
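For instance, a sketch of the variable-input variant (the object shape below is an assumption based on the example above, trimmed to two attributes):
variables.tf
variable "snowflake_tables" {
  type = map(object({
    database = string
    columns = map(object({
      type     = string
      nullable = bool
    }))
  }))
}
tables.auto.tfvars
snowflake_tables = {
  info = {
    database = "AMAYA"
    columns = {
      identity = {
        type     = "NUMBER(38,0)"
        nullable = true
      }
    }
  }
}
The resource block then uses for_each = var.snowflake_tables instead of the local map.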

Create list of user IDs for users not managed by terraform

I am configuring PagerDuty using terraform and part of that is assigning each user to a schedule.
In this scenario the users already exist in PagerDuty as they are pulled in from our SSO provider.
Initially this is how I looked at deploying the setup:
Use a data source to access each user's details (such as ID):
users.tf
data "pagerduty_user" "user1" {
email = "user1#test.com"
}
data "pagerduty_user" "user2" {
email = "user2#test.co.nz"
}
Create and assign users to a schedule:
schedule.tf
resource "pagerduty_schedule" "schedule" {
name = "Rotation"
time_zone = "Pacific/Auckland"
layer {
name = "On-Call"
start = "2021-09-10T00:00:00-00:00"
rotation_virtual_start = "2021-09-10T00:00:00-00:00"
// One week rotation
rotation_turn_length_seconds = 604800
// The position of the user on the list determines their order in the layer.
users = [data.pagerduty_user.user1.id, data.pagerduty_user.user2.id]
}
teams = [pagerduty_team.team.id]
}
This works correctly; however, each time I want to add a new user to a team, I have to add a duplicate data block for that user.
My question is how can I avoid doing this?
My first thought was to use a for_each, so it would look like this:
variables.tf
variable "all_users" {
description = "List of users"
type = map(any)
default = { user1 = "user1@test.com", user2 = "user2@test.co.nz" }
}
users.tf
data "pagerduty_user" "users" {
for_each = var.all_users
email = each.value
}
schedule.tf
resource "pagerduty_schedule" "schedule" {
for_each = data.pagerduty_user.users
name = "Rotation"
time_zone = "Pacific/Auckland"
layer {
name = "On-Call"
start = "2021-09-10T00:00:00-00:00"
rotation_virtual_start = "2021-09-10T00:00:00-00:00"
// One week rotation
rotation_turn_length_seconds = 604800
// The position of the user on the list determines their order in the layer.
users = [data.pagerduty_user.users[each.key].id]
}
teams = [pagerduty_team.team.id]
}
The issue here is that two schedules are being created (which is the expected behavior of for_each).
So, my question is: how can I create a list of user IDs that I can then pass to the schedule?
I was able to achieve this with locals:
locals {
users = [
for user in data.pagerduty_user.users :
user.id
]
}
So my final config ended up as:
variables.tf
variable "all_users" {
description = "List of storage users"
type = list(any)
default = ["user1", "user2"]
}
users.tf
data "pagerduty_user" "users" {
for_each = toset(var.all_users)
email = "${each.value}#test.com"
}
schedules.tf
locals {
users = [
for user in data.pagerduty_user.users :
user.id
]
}
resource "pagerduty_schedule" "storage_schedule" {
name = "Storage Team Rotation"
time_zone = "Pacific/Auckland"
layer {
name = "On-Call"
start = "2021-09-10T00:00:00-00:00"
rotation_virtual_start = "2021-09-10T00:00:00-00:00"
// One week rotation
rotation_turn_length_seconds = 604800
// The position of the user on the list determines their order in the layer.
users = local.users
}
teams = [pagerduty_team.storage.id]
}
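As a side note, the intermediate locals block is optional; the layer's users argument can build the same list inline, e.g.:
# Inside the layer block, equivalent to local.users above
users = [for user in data.pagerduty_user.users : user.id]
# or, using a splat over the map's values:
# users = values(data.pagerduty_user.users)[*].id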

Iterate Through Map of Maps in Terraform 0.12

I need to build a list of templatefile calls like this:
templatefile("${path.module}/assets/files_eth0.nmconnection.yaml", {
interface-name = "eth0",
addresses = element(values(var.virtual_machines), count.index),
gateway = element(var.gateway, count.index % length(var.gateway)),
dns = join(";", var.dns_servers),
dns-search = var.domain,
}),
templatefile("${path.module}/assets/files_etc_hostname.yaml", {
hostname = element(keys(var.virtual_machines), count.index),
}),
by iterating over a map of maps like the following:
variable templatefiles {
default = {
"files_eth0.nmconnection.yaml" = {
"interface-name" = "eth0",
"addresses" = "element(values(var.virtual_machines), count.index)",
"gateway" = "element(var.gateway, count.index % length(var.gateway))",
"dns" = "join(";", var.dns_servers)",
"dns-search" = "var.domain",
},
"files_etc_hostname.yaml" = {
"hostname" = "host1"
}
}
}
I've done something similar with a list of files:
file("${path.module}/assets/files_90-disable-console-logs.yaml"),
file("${path.module}/assets/files_90-disable-auto-updates.yaml"),
...but would like to expand this to templatefiles (above).
Here's the code I've done for the list of files:
main.tf
variable files {
default = [
"files_90-disable-auto-updates.yaml",
"files_90-disable-console-logs.yaml",
]
}
output "snippets" {
value = flatten(module.ingition_snippets.files)
}
modules/main.tf
variable files {}
resource "null_resource" "files" {
for_each = toset(var.files)
triggers = {
snippet = file("${path.module}/assets/${each.value}")
}
}
output "files" {
value = [for s in null_resource.files: s.triggers.*.snippet]
}
Appreciate any help!
Both of these use-cases can be met without using any resource blocks at all, because the necessary features are built in to the Terraform language.
Here is a shorter way to write the example with static files:
variable "files" {
type = set(string)
}
output "files" {
value = tomap({
for fn in var.files : fn => file("${path.module}/assets/${fn}")
})
}
The above would produce a map from filenames to file contents, so the calling module can more easily access the individual file contents.
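The calling module can then look up an individual file by name, for example (the module name "snippets" below is hypothetical):
output "console_logs_snippet" {
  value = module.snippets.files["files_90-disable-console-logs.yaml"]
}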
We can adapt that for templatefile like this:
variable "template_files" {
# We can't write down a type constraint for this case
# because each file might have a different set of
# template variables, but our later code will expect
# this to be a mapping type, like the default value
# you shared in your comment, and will fail if not.
type = any
}
output "files" {
value = tomap({
for fn, vars in var.template_files : fn => templatefile("${path.module}/assets/${fn}", vars)
})
}
Again, the result will be a map from filename to the result of rendering the template with the given variables.
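When calling such a module, the per-template variables are passed as real values and expressions rather than as quoted strings (the module name and source path below are hypothetical, and only a couple of the original variables are shown):
module "snippets" {
  source = "./modules/snippets" # hypothetical path

  template_files = {
    "files_eth0.nmconnection.yaml" = {
      "interface-name" = "eth0"
      "dns"            = join(";", var.dns_servers)
      "dns-search"     = var.domain
      # per-VM values such as addresses and gateway would be computed here as well
    }
    "files_etc_hostname.yaml" = {
      "hostname" = "host1"
    }
  }
}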
If your goal is to build a module for rendering templates from a source directory to publish somewhere, you might find the module hashicorp/dir/template useful. It combines fileset, file, and templatefile in a way that is hopefully convenient for static website publishing and similar use-cases. (At the time I write this the module is transitioning from being in my personal GitHub account to being in the HashiCorp organization, so if you look at it soon after you may see some turbulence as the docs get updated, etc.)

How can I set up an InfluxDB database based on the workspace name?

I have a Terraform script where I have to set up an InfluxDB server, and I want to create different databases based on the workspace name. Is it possible to create a map in the variables file to allocate a database name and look it up from a different variable within the same file?
Ex:
var file:
variable "influx_database" "test" {
name = "${lookup(var.influx_database_name, terraform.workspace)}
}
variable "influx_database_name" {
type = "map"
default = {
dump = "dump_database"
good = "good_database"
}
}
You can use a local value like below:
locals {
influx_database_name = "${lookup(var.influx_database_name, terraform.workspace)}"
}
variable "influx_database_name" {
type = "map"
default = {
default = "default_database"
dump = "dump_database"
good = "good_database"
}
}
output "influx_database_name" {
value = "${local.influx_database_name}"
}
local.influx_database_name is selected based on the workspace name.
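The local can then be referenced wherever the database name is needed, for example (a sketch assuming the InfluxDB provider's influxdb_database resource):
resource "influxdb_database" "this" {
  # Uses the database name selected for the current workspace
  name = "${local.influx_database_name}"
}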
