Terraform cyclic dependency challenge

Ok, so most of this is working except...
We have a user data template file for getting each new AWS server to register with Chef Automate. Chef refers to each client by the "node_name" set in the user data script, which is the instance id by default. But when viewing in the Chef UI or "knife node list", the instance id isn't exactly user friendly. We were able to write out a meaningful node_name using the template. Something like:
data "template_file" "user-data-qa" {
count = "${var.QA_numhosts}"
template = "${file("userdata.tpl")}"
vars {
node_name = "${var.customer_name}-QA-${format("%d", count.index + 1)}"
}
}
However, if we rebuild the instance, we get an error from Chef, since the new instance tries to register with the same name but a newly generated client key.
So, we added a random number suffix to the node_name, and we'd like this random number to be updated every time the instance is rebuilt. The first attempt was to set the instance id as the "keeper" for the random number, which resulted in a cycle error (-> means "depends on"): Instance -> User Data -> Random -> Instance
We also tried dropping the random generation entirely and just appending a substring of the instance id to the node_name. Same problem, though with a shorter cycle: Instance -> User Data -> Instance.
Any ideas for getting around this? In short, we want to append a string to the node_name, which gets inserted into the user data, and that string should update every time the instance is terminated and re-launched. Without cycle errors from Terraform.

The instance ID, as you have seen, is a computed value, so you can't use it in the definition of the resource itself, whether directly or through a dependency cycle.
Rather than populate the instance ID inside Terraform, try having the user data script itself get the instance ID and populate node_name.rb with that value. You can do this with a simple curl and echo:
echo "node_name '$(curl --silent /dev/null http://169.254.169.254/latest/meta-data/instance-id)'" > path/to/node_name.rb

Related

Can I iterate through other resources in main.tf in readResource() when developing a TF provider?

Context: I'm developing a TF provider (here's the official guide from HashiCorp).
I run into the following situation:
# main.tf
resource "foo" "example" {
  id       = "foo-123"
  name     = "foo-name"
  lastname = "foo-lastname"
}

resource "bar" "example" {
  id              = "bar-123"
  parent_id       = foo.example.id
  parent_name     = foo.example.name
  parent_lastname = foo.example.lastname
}
where I have to declare parent_name and parent_lastname explicitly (effectively duplicating them) to be able to read the values that are necessary for the read / create requests for the Bar resource.
Is it possible to use a fancy trick with d *schema.ResourceData in
func resourceBarRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
to avoid duplication in my TF config, i.e. have just:
resource "foo" "example" {
id = "foo-123"
name = "foo-name"
lastname = "foo-lastname"
}
resource "bar" "example" {
id = "bar-123"
parent_id = foo.example.id
}
and infer foo.example.name and foo.example.lastname based on foo.example.id in resourceBarRead() somehow, so I won't have to duplicate those fields in both resources?
Obviously, this is a minimal example; let's assume I need both foo.example.name and foo.example.lastname to send a read / create request for the Bar resource. In other words, can I iterate through other resources in the TF state / main.tf file based on a target ID to find their other attributes? It seems like a useful feature, however it's not mentioned in HashiCorp's guide, so I guess it's undesirable and I have to duplicate those fields.
In Terraform's provider/resource model, each resource block is independent of all others unless the user explicitly connects them using references like you showed.
However, in most providers it's sufficient to pass only the id (or similar unique identifier) attribute downstream to create a relationship like this, because the other information about that object is already known to the remote system.
Without knowledge about the particular remote system you are interacting with, I would expect that you'd be able to use the value given as parent_id either directly in an API call (and thus have the remote system connect it with the existing object), or to make an additional read request to the remote API to look up the object using that ID and obtain the name and lastname values that were saved earlier.
If those values only exist in the provider's context and not in the remote API then I don't think there will be any alternative but to have the user pass them in again, since that is the only way that local values (as opposed to values persisted in the remote API) can travel between resources.
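If the remote API does support lookup by ID, that extra read request can live inside the provider rather than in the configuration. A minimal sketch of what resourceBarRead could look like under that assumption; the Client type and its GetFoo method are invented for illustration:

import (
	"context"

	"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

func resourceBarRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
	client := m.(*Client) // hypothetical API client stored by the provider's Configure func

	// Look up the parent object using the one value the user did supply.
	parent, err := client.GetFoo(ctx, d.Get("parent_id").(string)) // hypothetical lookup-by-ID call
	if err != nil {
		return diag.FromErr(err)
	}

	// Fill in the derived attributes from the remote API instead of
	// requiring the user to duplicate them in configuration.
	if err := d.Set("parent_name", parent.Name); err != nil {
		return diag.FromErr(err)
	}
	if err := d.Set("parent_lastname", parent.Lastname); err != nil {
		return diag.FromErr(err)
	}

	return nil
}

For this to work, parent_name and parent_lastname would be declared as Computed in the bar schema rather than supplied by the user.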

How to use the sops provider with Terraform using an array instead of a single value

I'm pretty new to Terraform. I'm trying to use the sops provider plugin for encrypting secrets from a yaml file:
Sops Provider
I need to create a Terraform user object for a later provisioning stage like this example:
users = [{
  name     = "user123"
  password = "password12"
}]
I've prepared a secrets.values.enc.yaml file for storing my secret data:
yaml_users:
  - name: user123
    password: password12
I've encrypted the file using the "sops" command. I can decrypt the file successfully for testing purposes.
Now I try to use the encrypted file in Terraform for creating the user object:
data "sops_file" "test-secret" {
source_file = "secrets.values.enc.yaml"
}
# user data decryption
users = yamldecode(data.sops_file.test-secret.raw).yaml_users
Unfortunately I cannot debug the data or the structure of "users", as Terraform doesn't display sensitive data. When I try to use that users variable in the later provisioning stage, it doesn't seem to be what is needed:
Cannot use a set of map of string value in for_each. An iterable
collection is required.
When I do the same thing with the unencrypted yaml file everything seems to be working fine:
users = yamldecode(file("secrets.values.dec.yaml")).yaml_users
It looks like the sops provider decryption doesn't create an array or that "iterable collection" that I need.
Does anyone know how to use the terraform sops provider for decrypting an array of key-value pairs? A single value like "adminpassword" is working fine.
I think the "set of map of string" part of this error message is the important part: for_each requires either a map directly (in which case the map keys become the instance identifiers) or a set of individual strings (in which case those strings become the instance identifiers).
Your example YAML file shows yaml_users being defined as a YAML sequence of maps, which corresponds to a tuple of objects on conversion with yamldecode.
To use that data structure with for_each you'll need to first project it into a map whose keys will serve as the unique identifier for each instance of the resource. Assuming that the name values are suitably unique, you could project it so that those values are the keys:
data "sops_file" "test-secret" {
source_file = "secrets.values.enc.yaml"
}
locals {
users = tomap({
for u in yamldecode(data.sops_file.test-secret.raw).yaml_users :
u.name => u
})
}
The result being a sensitive value adds an extra wrinkle here, because Terraform won't allow using a sensitive value as the identifier for an instance of a resource -- to do so would make it impossible to show the resource instance address in the UI, and impossible to describe the instance on the command line for commands that need that.
However, this does seem like exactly the use-case shown in the example of the nonsensitive function at the time I'm writing this: you have a collection that is currently wholly marked as sensitive, but you know that only parts of it are actually sensitive and so you can use nonsensitive to explain to Terraform how to separate the nonsensitive parts from the sensitive parts. Here's an updated version of the locals block in my previous example using that function:
locals {
  users = tomap({
    for u in yamldecode(data.sops_file.test-secret.raw).yaml_users :
    nonsensitive(u.name) => u
  })
}
If I'm making a correct assumption that it's only the passwords that are sensitive and that the usernames are okay to disclose, the above will produce a suitable data structure where the usernames are visible in the keys but the individual element values will still be marked as sensitive.
local.users then meets all of the expectations of resource for_each, and so you should be able to use it with whichever other resources you need to repeat systematically for each user.
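For example (a minimal sketch; the example_user resource type and its arguments are invented for illustration):

resource "example_user" "all" {
  for_each = local.users

  # each.key is the nonsensitive username; each.value.password is still
  # marked as sensitive.
  name     = each.key
  password = each.value.password
}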
Please note that Terraform's tracking of sensitive values is for UI purposes only and will not prevent these passwords from being saved in the state as part of whichever resources make use of them. If you use Terraform to manage sensitive data then you should treat the resulting state snapshots as sensitive artifacts in their own right, being careful about where and how you store them.

Terraform providers - how would you represent a resource that doesn't have clearly defined CRUD operations?

For work I'm learning Go and Terraform. I read in their tutorial how the different contexts are defined, but I'm not clear on exactly when these different contexts are called and what triggers them.
From looking at the HashiCups example, it looks like when you put this:
resource "hashicups_order" "new" {
items {
coffee {
id = 3
}
quantity = 2
}
items {
coffee {
id = 2
}
quantity = 2
}
}
in your Terraform file, Terraform is going to take hashicups_order, remove the hashicups prefix, and look for a resource called order in the provider. The order resource provides the following contexts:
func resourceOrder() *schema.Resource {
	return &schema.Resource{
		CreateContext: resourceOrderCreate,
		ReadContext:   resourceOrderRead,
		UpdateContext: resourceOrderUpdate,
		DeleteContext: resourceOrderDelete,
		// ...
	}
}
What isn't clear to me is what triggers each context. From that example, it seems like increasing the value of quantity would trigger the update context; if this were the first run and no previous state existed, it would trigger create, etc.
However, in my case the resource is a server, and one API capability I want to present to the user is server power control. You would never "create/destroy" this resource... or would you? You could read the current power state and you could update it, but, at least intuitively, you wouldn't create or destroy it. I'm having trouble wrapping my head around how this would be modeled in Terraform/Go. I conceptually understand the coffee resource in the example, but I'm having trouble making the leap to imagining what that looks like for something like a server power capability, or other things without a clear mapping to the different CRUD operations.
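One common way to frame such a capability, sketched here purely as a hypothetical with invented names, is to make the desired power state itself the resource: Create applies the desired state, Read refreshes it, Update changes it, and Delete simply stops managing the setting without touching the server.

resource "myprovider_server_power" "db1" {
  server_id = "srv-123" # which server this desired state applies to
  state     = "on"      # changing this would trigger the update context
}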

What problem does the keepers map for the random provider solve?

I am trying to understand the use case for the keepers feature of the Terraform random provider. I read the docs but it's not clicking for me. What is a concrete example of a situation where the keepers map would be used, and why? The example from the docs is reproduced below.
resource "random_id" "server" {
keepers = {
# Generate a new id each time we switch to a new AMI id
ami_id = "${var.ami_id}"
}
byte_length = 8
}
resource "aws_instance" "server" {
tags = {
Name = "web-server ${random_id.server.hex}"
}
# Read the AMI id "through" the random_id resource to ensure that
# both will change together.
ami = "${random_id.server.keepers.ami_id}"
# ... (other aws_instance arguments) ...
}
The keepers are values stored alongside the generated random value, and they determine when that value should change: the random result is generated once, saved in state, and then held stable until something in the keepers map changes, at which point the resource is replaced and a new value is generated.
If you had a random ID without any keepers, and you were using it in your server's Name tag as in this example, Terraform would generate it on the first apply and then never change it, no matter what else changed in your configuration.
That stability is usually what you want: once you apply your plan, your infrastructure should remain stable and subsequent plans should generate no changes as long as everything else remains the same. But it also means the random part of the name would never change on its own.
When it comes time to make changes to this server - such as, in this case, changing the image it's built from - you may well want the server name to automatically change to a new random value to represent that this is no longer the same server as before. Using the AMI ID in the keepers for the random ID therefore means that when your AMI ID changes, a new random ID will be generated for the server's Name as well.
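If you ever need a new random value at a moment when none of the keepers have changed, you can also force regeneration by hand by tainting the resource, which replaces it on the next apply (a usage sketch, matching the Terraform versions this interpolation syntax comes from):

terraform taint random_id.server
terraform apply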

[13.5.2] Create DataStore through ssoadm.jsp -> attribute doesn't match with 'service schema'

I want to create a DataStore through ssoadm.jsp because I use endpoint URLs in order to automate the configuration process.
[localhost]/ssoadm.jsp?cmd=create-datastore
I put:
domain name (previously created with default configuration): myDomain
data store name: myDataStore
type of DataStore: LDAPv3
Attribute values: LDAPv3=org.forgerock.openam.idrepo.ldap.DJLDAPv3Repo
Then I got something like: Attribute name "LDAPv3" doesn't match with service schema. What am I supposed to put in the "Attribute values" field, please? An example is given:
"sunIdRepoClass=com.sun.identity.idm.plugins.files.FilesRepo"
PS: I don't want to create the datastore from [localhost]/realm/IDRepoSelectType because there is a jato.pageSession value that I can't automatically get.
PS2: It is my first time asking a question on Stack Overflow; sorry if my question didn't meet expectations. I tried my best.
ssoadm.jsp?cmd=list-datastore-types
shows the list of user data store types
Every user data store type has specific attributes to be set. Unfortunately those are not explicitly documented. The service attributes are defined in the related service definition XML template, which is loaded (after potential tag swapping) into the OpenAM configuration data store during initial configuration. For the user data stores you can find them in OPENAM_CONFIGURATION_DIRECTORY/template/xml/idRepoService.xml
E.g., for the user data store type LDAPv3, the following service attributes are defined:
sunIdRepoClass
sunIdRepoAttributeMapping
sunIdRepoSupportedOperations
sun-idrepo-ldapv3-ldapv3Generic
sun-idrepo-ldapv3-config-ldap-server
sun-idrepo-ldapv3-config-authid
sun-idrepo-ldapv3-config-authpw
openam-idrepo-ldapv3-heartbeat-interval
openam-idrepo-ldapv3-heartbeat-timeunit
sun-idrepo-ldapv3-config-organization_name
sun-idrepo-ldapv3-config-connection-mode
sun-idrepo-ldapv3-config-connection_pool_min_size
sun-idrepo-ldapv3-config-connection_pool_max_size
sun-idrepo-ldapv3-config-max-result
sun-idrepo-ldapv3-config-time-limit
sun-idrepo-ldapv3-config-search-scope
sun-idrepo-ldapv3-config-users-search-attribute
sun-idrepo-ldapv3-config-users-search-filter
sun-idrepo-ldapv3-config-user-objectclass
sun-idrepo-ldapv3-config-user-attributes
sun-idrepo-ldapv3-config-createuser-attr-mapping
sun-idrepo-ldapv3-config-isactive
sun-idrepo-ldapv3-config-active
sun-idrepo-ldapv3-config-inactive
sun-idrepo-ldapv3-config-groups-search-attribute
sun-idrepo-ldapv3-config-groups-search-filter
sun-idrepo-ldapv3-config-group-container-name
sun-idrepo-ldapv3-config-group-container-value
sun-idrepo-ldapv3-config-group-objectclass
sun-idrepo-ldapv3-config-group-attributes
sun-idrepo-ldapv3-config-memberof
sun-idrepo-ldapv3-config-uniquemember
sun-idrepo-ldapv3-config-memberurl
sun-idrepo-ldapv3-config-dftgroupmember
sun-idrepo-ldapv3-config-roles-search-attribute
sun-idrepo-ldapv3-config-roles-search-filter
sun-idrepo-ldapv3-config-role-search-scope
sun-idrepo-ldapv3-config-role-objectclass
sun-idrepo-ldapv3-config-filterrole-objectclass
sun-idrepo-ldapv3-config-filterrole-attributes
sun-idrepo-ldapv3-config-nsrole
sun-idrepo-ldapv3-config-nsroledn
sun-idrepo-ldapv3-config-nsrolefilter
sun-idrepo-ldapv3-config-people-container-name
sun-idrepo-ldapv3-config-people-container-value
sun-idrepo-ldapv3-config-auth-naming-attr
sun-idrepo-ldapv3-config-psearchbase
sun-idrepo-ldapv3-config-psearch-filter
sun-idrepo-ldapv3-config-psearch-scope
com.iplanet.am.ldap.connection.delay.between.retries
sun-idrepo-ldapv3-config-service-attributes
sun-idrepo-ldapv3-dncache-enabled
sun-idrepo-ldapv3-dncache-size
openam-idrepo-ldapv3-behera-support-enabled
It might be best to create a user data store instance via the console and then use ssoadm.jsp?cmd=show-datastore to list the properties. You would get a long list of attributes ... too much to show here.
When you create the data store, make sure you specify the password for the bind DN using the property
sun-idrepo-ldapv3-config-authpw=PASSWORD
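For example, a minimal set of attribute values for an LDAPv3 data store might look like this (the attribute names come from the list above; the server address and bind DN are placeholders):

sunIdRepoClass=org.forgerock.openam.idrepo.ldap.DJLDAPv3Repo
sun-idrepo-ldapv3-config-ldap-server=ldap.example.com:389
sun-idrepo-ldapv3-config-authid=cn=Directory Manager
sun-idrepo-ldapv3-config-authpw=PASSWORD

Note that the attribute name goes on the left of the equals sign; using LDAPv3 itself as the attribute name is what triggered the schema error.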
