I am currently using Terraform to create Azure VDI.
My current structure is:
main.tf
input.tf
module/ (main.tf, var.tf)
Whenever a user joins a particular team, I do the following. Let's say the user is user1:
I create a user1 folder with an input.tf file and create the resources. When user2 comes along, I repeat the same process (folder user2).
Is there a better way to handle such a requirement?
There are several possibilities to do that:
Separate config files and folders - you are already doing this, and as the number of users increases it will become a problem. But it is also the most flexible option, as you can custom-modify the TF files for each user separately.
Use a set of users as an input. You have one folder and one copy of your TF code, but you maintain a set variable with the users. Then use for_each to create a separate instance of the module for each user. For example:
variable "users" {
type = set(string)
default = ["user1", "user2"]
}
module "vdi" {
source = "./module"
for_each = var.users
}
Use a separate workspace for each user.
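For the workspace option, a minimal sketch (again assuming the module exposes a "username" input) can key everything off the workspace name, e.g. after running "terraform workspace new user1":
module "vdi" {
  source = "./module"
  # terraform.workspace evaluates to the currently selected workspace name,
  # so each workspace provisions its own copy of the module.
  username = terraform.workspace
}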
Context: I'm developing a new resource for my TF Provider.
This foo resource has a name and an associated config: a list of key-value pairs (both sensitive and non-sensitive).
There are 3 options I've identified:
resource "foo" "option1" {
name = "option1"
config = {
"name" = "option1"
"errors.length" = 3
"tasks.type" = "FOO"
}
config_sensitive = {
"jira.key" = "..."
"credentials.json" = "..."
}
}
resource "foo" "option2" {
name = "option2"
config = {
"name" = "option1"
"errors.length" = 3
"tasks.type" = "FOO"
"jira.key" = "..."
"credentials.json" = "..."
}
}
resource "foo" "option3" {
name = "option3"
config = file("config.json")
}
The advantage of option #3 is that it looks very readable, but it requires the user to store an extra JSON file (with secrets) in the same folder (I'm not sure how acceptable that setup is). Option #2 looks tempting, but foo should accept updates, and if we mark the whole block as sensitive (since it may contain secret key-value pairs), the update functionality will suffer (the user won't see the expected change). So option #1 is the winner in my eyes, since it's the most explicit one and allows us to distinguish between sensitive and non-sensitive attributes (while allowing updates for non-sensitive ones). Reading the whole config from a file is probably not ideal either, since it doesn't really let an engineer see what the config looks like without opening another file.
There's also this weird duplicated name attribute but let's ignore it for now.
What configuration is the most acceptable and used by other TF Providers?
Option #3 should be struck immediately for three reasons:
You cannot reliably use the sensitive flag in the schema struct like you can with 1 and 2.
It requires a JSON-formatted value, which is cumbersome to work with unless you are forced into it (e.g. security policies).
Someone could inline the JSON and not store it in a file, which would completely work around your attempt to obscure the secrets.
Options 1 and 2 are honestly no different from a secrets management perspective. You could apply the sensitive flag to either in the nested schema struct on a per-attribute basis, and use e.g. Vault to pass in values on a KV basis for either.
I would opt for 1 over 2 simply because it appears to me from your question that the arguments and values in the two blocks have no relationship with each other. Therefore, it makes more sense to organize your schema into two separate blocks for code cleanliness purposes.
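For illustration, here is a minimal sketch of what that per-attribute sensitive flag looks like with terraform-plugin-sdk v2; the attribute names mirror option #1, and the CRUD functions and the rest of the provider are omitted:
package provider

import "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"

func resourceFoo() *schema.Resource {
	return &schema.Resource{
		Schema: map[string]*schema.Schema{
			"name": {
				Type:     schema.TypeString,
				Required: true,
			},
			"config": {
				Type:     schema.TypeMap,
				Optional: true,
				Elem:     &schema.Schema{Type: schema.TypeString},
			},
			"config_sensitive": {
				Type:      schema.TypeMap,
				Optional:  true,
				Sensitive: true, // values are redacted in plan/apply output
				Elem:      &schema.Schema{Type: schema.TypeString},
			},
		},
	}
}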
I will also mention that if it is possible to refactor the credentials.json into your provider, and leverage the JIRA provider for the jira.key, then that would be best practice for both code architecture and security. It is also how the major providers handle this situation.
Terraform providers should handle the credential/auth implementation and the resource handles the resource configuration.
e.g.
resource "jira_issue" "some_story" {
title = "My story"
type = "story"
labels = ["someexampleonstackoverflow","jakewashere"]
}
Notice there's no config that doesn't relate to the thing I'm creating inside the Terraform resource.
It's very acceptable to have a documented convention in your provider that reads credentials from somewhere, whether that's an environment variable, a file on disk, etc.
For example, the Google Cloud provider will read an environment variable if it's populated; if not, it attempts to read either a configuration file that sits inside a hidden directory within $HOME or a localhost HTTP metadata server for the credentials.
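From the practitioner's side that ends up looking something like the sketch below (project and region values are placeholders):
provider "google" {
  # No credentials argument here: when it's omitted the provider falls back
  # to the GOOGLE_CREDENTIALS / GOOGLE_APPLICATION_CREDENTIALS environment
  # variables, then the gcloud application-default credentials file under
  # $HOME, then the instance metadata server.
  project = "my-project-id"
  region  = "us-central1"
}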
Context: I'm developing a TF provider (here's the official guide from HashiCorp).
I ran into the following situation:
# main.tf
resource "foo" "example" {
  id       = "foo-123"
  name     = "foo-name"
  lastname = "foo-lastname"
}

resource "bar" "example" {
  id              = "bar-123"
  parent_id       = foo.example.id
  parent_name     = foo.example.name
  parent_lastname = foo.example.lastname
}
where I have to declare parent_name and parent_lastname (effectively duplicating them) explicitly to be able to read the values that are necessary for the read / create requests for the Bar resource.
Is it possible to use a fancy trick with d *schema.ResourceData in
func resourceBarRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
to avoid the duplication in my TF config, i.e. to have just:
resource "foo" "example" {
id = "foo-123"
name = "foo-name"
lastname = "foo-lastname"
}
resource "bar" "example" {
id = "bar-123"
parent_id = foo.example.id
}
and infer foo.example.name and foo.example.lastname based on foo.example.id in resourceBarRead(), so I won't have to duplicate those fields in both resources?
Obviously, this is a minimal example; let's assume I need both foo.example.name and foo.example.lastname to send the read / create requests for the Bar resource. In other words, can I iterate through another resource in the TF state / main.tf file based on a target ID to find its other attributes? It seems like a useful feature, however it's not mentioned in HashiCorp's guide, so I guess it's undesirable and I have to duplicate those fields.
In Terraform's provider/resource model, each resource block is independent of all others unless the user explicitly connects them using references like you showed.
However, in most providers it's sufficient to pass only the id (or similar unique identifier) attribute downstream to create a relationship like this, because the other information about that object is already known to the remote system.
Without knowledge about the particular remote system you are interacting with, I would expect that you'd be able to use the value given as parent_id either directly in an API call (and thus have the remote system connect it with the existing object), or to make an additional read request to the remote API to look up the object using that ID and obtain the name and lastname values that were saved earlier.
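As a rough sketch of that second approach (looking the parent up by ID inside the read function), assuming a hypothetical API client with GetFoo and GetBar methods stored in the provider meta; the SDK imports ("context", helper/schema, diag) are omitted:
func resourceBarRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
	client := m.(*apiClient) // hypothetical client type configured by the provider

	// Look the parent up by its ID instead of requiring the user to
	// repeat name/lastname in the bar resource block.
	parent, err := client.GetFoo(ctx, d.Get("parent_id").(string)) // hypothetical call
	if err != nil {
		return diag.FromErr(err)
	}

	// Use the looked-up values when reading bar from the remote API.
	bar, err := client.GetBar(ctx, d.Id(), parent.Name, parent.Lastname) // hypothetical call
	if err != nil {
		return diag.FromErr(err)
	}

	if err := d.Set("parent_id", bar.ParentID); err != nil {
		return diag.FromErr(err)
	}
	return nil
}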
If those values only exist in the provider's context and not in the remote API then I don't think there will be any alternative but to have the user pass them in again, since that is the only way that local values (as opposed to values persisted in the remote API) can travel between resources.
I'm trying to create separate modules (git repos) for several Azure resources using Terraform.
For example, I want to create module-1, which will create the AKS cluster and its default node pool. I want to create a separate module-2, which will create the user node pools. So is there any way I can import values from module-1, like the AKS cluster ID, network ID, etc., by just giving the cluster name or some other identifier? If I use 'source' I have to give the values every time, and the default values are not always the same for all clusters. I don't want to hardcode the IDs in the tf files. Of course I will use the same state file.
So basically, in Terraform, is it possible to get values from another Azure resource which was created with Terraform?
thanks,
Santosh
Using the terraform_remote_state data source
One way you can achieve this is by leveraging the terraform_remote_state data source.
In your module-1 main tf, output any attribute that other modules or repos would use. Then, pull the information you need in module-2 via a data "terraform_remote_state" block, providing the location of the state file used by module-1.
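For example, a minimal sketch assuming module-1's state is stored in an Azure storage account (all names below are placeholders):
data "terraform_remote_state" "module1" {
  backend = "azurerm"
  config = {
    resource_group_name  = "tfstate-rg"
    storage_account_name = "tfstatestorage"
    container_name       = "tfstate"
    key                  = "module-1.terraform.tfstate"
  }
}

# Module-1's outputs are then available as, for example:
# data.terraform_remote_state.module1.outputs.aks-cluster-id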
Using a single main tf
The way I would approach this scenario is by defining outputs from module-1 and passing those outputs in as parameters to module-2.
For example, you could have something like the following.
In your main tf file:
module "module-1" {
source = "/path/to/module-1"
... < some parameters to module-1 ...>
}
module "module-2" {
source = "/path/to/module-2"
... < some parameters to module-2 ...>
aks-cluster-id = module.module-1.aks-cluster-id
}
Add the following output to module-1
output "aks-cluster-id" {
# Replace this with the proper resource and attribute based on how your cluster is created
value = azure.aks_cluster.aks-cluster-id
description = "AKS Cluster ID"
}
Terraform's documentation is actually pretty good and provides some useful examples. https://www.terraform.io/language/values/outputs
I've added our infrastructure setup to Puppet, using the roles and profiles method. Each profile resides inside a group, based on its nature. For example, the chronyd setup and the message of the day are in the "base" group, while nginx-related configuration is in the "app" group. Likewise, in the roles, each profile is added to the corresponding group. For example, for memcached we have the following:
class role::prod::memcache inherits role::base::debian {
  include profile::app::memcache
}
The profile::app::memcache class has been set up like this:
class profile::app::memcache {
  service { 'memcached':
    ensure     => running,
    enable     => true,
    hasrestart => true,
    hasstatus  => true,
  }
}
and for role::base::debian I have:
class role::base::debian {
  include profile::base::motd
  include profile::base::chrony
}
The above structure has proved to be flexible enough for our infrastructure; adding services and creating new roles could not be easier. But now I face a new problem. I've been trying to separate data from logic and keep the data in YAML files, using Hiera version 5. I've been searching the internet for a couple of days, but I cannot work out how to write my Hiera files based on the structure I have. I tried adding profile::base::motd to common.yaml and did a puppet lookup, and it works fine, but I could not get chrony to work the same way in common.yaml. puppet lookup returns nothing with the following common.yaml contents:
---
profile::base::motd::content: 'This server access is restricted to authorized users only. All activities on this system are logged. Unauthorized access will be liable to prosecution.'
profile::base::chrony::servers: 'ntp.centos.org'
profile::base::chrony::service_enable: 'true'
profile::base::chrony::service_ensure: 'running'
The motd lookup works fine, but for the rest, no luck: puppet lookup profile::base::chrony::servers returns no output. I don't know what I'm missing here and would really appreciate the community's help on this one.
Also, using Hiera, is the following enough code for a service's Puppet file?
class profile::base::motd {
  class { 'motd': }
}
PS: I know I can add YAML files inside the modules to keep the data, but I want my .yaml files to reside in one place (e.g. $PUPPET_HOME/environment/production/data) so I can manage the code with git.
The issue was that, in the init.pp file inside the Puppet module itself, the $content variable was assigned a value. Removing that value fixed the problem.
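As a general illustration of the pattern (the class and parameter names below are placeholders, not the asker's actual module): Hiera's automatic parameter lookup only supplies class parameters that aren't already hard-coded, so a profile that leaves its parameters to Hiera might look like this:
class profile::base::chrony (
  $servers        = undef,
  $service_enable = true,
  $service_ensure = 'running',
) {
  # Filled from common.yaml via automatic parameter lookup
  # (profile::base::chrony::servers, etc.). The parameter names accepted by
  # the underlying chrony module are assumptions here.
  class { 'chrony':
    servers        => $servers,
    service_enable => $service_enable,
    service_ensure => $service_ensure,
  }
}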
I have the following use case: I'm using a combination of Azure DevOps pipelines and Terraform to synchronize our TAP for Grafana (v7.4). The intention is that we can tweak and tune our dashboards on Test, and push the changes to Acceptance (and Production) via the pipelines.
I've got one pipeline that pulls in the state of the Test environment and writes it to a set of json files (for the dashboards) and a single json array (for the folders).
The second pipeline should use these resources to synchronize the Acceptance environment.
This works flawlessly for the dashboards, but I'm hitting a snag putting the dashboards in the right folder dynamically. Here's my latest working code:
resource "grafana_folder" "folders" {
for_each = toset(var.grafana_folders)
title = each.key
}
resource "grafana_dashboard" "dashboards" {
for_each = fileset(path.module, "../dashboards/*.json")
config_json = file("${path.module}/${each.key}")
}
The folder resource pushes the folders based on a list of names that I pass in via a variable. This generates the folders correctly.
The dashboard resource pushes the dashboards correctly, based on all dashboard files in the specified folder.
But now I'd like to make sure the dashboards end up in the right folder. The provider specifies that I need to do this based on the folder UID, which is generated when the folder is created. So I'd like to take the output from the grafana_folder resource and use it in the grafana_dashboard resource. I'm trying the following:
resource "grafana_folder" "folders" {
for_each = toset(var.grafana_folders)
title = each.key
}
resource "grafana_dashboard" "dashboards" {
for_each = fileset(path.module, "../dashboards/*.json")
config_json = file("${path.module}/${each.key}")
folder = lookup(transpose(grafana_folder.folders), "Station_Details", "Station_Details")
depends_on = [grafana_folder.folders]
}
If I read the Grafana provider's GitHub repository correctly, the grafana_folder resource should output a map of [uid, title]. So I figured that if I transpose that map and (by way of a test) look up a folder title that I know exists, I can test the concept.
This gives the following error:
on main.tf line 38, in resource "grafana_dashboard" "dashboards":
  38:  folder = lookup(transpose(grafana_folder.folders), "Station_Details", "Station_Details")

Invalid value for "default" parameter: the default value must have the
same type as the map elements.
Both Uid and Title should be strings, so I'm obviously overlooking something.
Does anyone have an inkling where I'm going wrong and/or have suggestions on how I can do this (better)?
I think the problem this error is trying to report is that grafana_folder.folders is a map of objects, so passing it to transpose doesn't really make sense. It seems to succeed because Terraform has found some clever way to do automatic type conversions to produce some result, but that result (due to the signature of transpose) is a map of lists rather than a map of strings, and so "Station_Details" (a string, rather than a list) isn't a valid fallback value for that lookup.
My limited familiarity with folders in Grafana leaves me unsure as to what to suggest instead, but I expect the final expression will look something like the following:
folder = grafana_folder.folders[SOMETHING].id
SOMETHING here will be an expression that allows you to know for a given dashboard which folder key it ought to belong to. I'm not seeing an answer to that from what you shared in your question, but just as a placeholder to make this a complete answer I'll suggest that one option would be to make a local map from dashboard filename to folder name:
locals {
  # A local value probably isn't actually the right answer here, but I'm
  # just showing it as a placeholder for one possible way to map from
  # dashboard filename to folder name. These names should all be elements
  # of var.grafana_folders in order for this to work.
  dashboard_folders = {
    "example1.json" = "example-folder"
    "example2.json" = "example-folder"
    "example3.json" = "another-folder"
  }
}
resource "grafana_dashboard" "dashboards" {
for_each = fileset("${path.module}/dashboards", "*.json")
config_json = file("${path.module}/dashboards/${each.key}")
folder = grafana_folder.folders[local.dashboard_folders[each.key]].id
}