Puppet: write a command that runs only if lookup succeeds?

I have the following in my puppet manifest and it works:
package {
  lookup('latest_packages'): ensure => latest,
}
Now we are adding another option to ensure absent. This lookup can contain values, but it can also be non-existent. When the Hiera data doesn't exist, it causes my manifest to fail.
package {
  lookup('latest_packages'): ensure => absent,
}
If that data doesn't exist I get back this on the agent:
Error: Could not retrieve catalog from remote server: Error 500 on
SERVER: Server Error: Function lookup() did not find a value for the
name 'removed_packages' on node dev-596e89d2fe5e08410003f2e6
How can I set this up to run only if lookup finds values? Do I need to wrap the package function in a conditional?

The fastest path to success here is probably to make use of the default value argument to the lookup function. We can also add in the data type and merge behavior just to help focus the lookup:
lookup('removed_packages', Array[String], 'unique', [])
Also, based on your error message I am guessing the key you are looking up is actually removed_packages for the absent case.
Array[String]: data type that guarantees your package list will be an array of strings. This helps protect against undesired inputs to this resource from your data.
unique: Combines any number of arrays and scalar values to return a merged and flattened array with all duplicate values removed. This is nice and efficient.
[]: the default value, so that for a nonexistent removed_packages key the resource will resolve to:
package { []: ensure => absent }
which will be a benign and successfully compiled resource in your catalog.
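Putting it together, a minimal sketch of the absent case might look like this (using the removed_packages key name from your error message):
package {
  lookup('removed_packages', Array[String], 'unique', []):
    ensure => absent,
}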

Related

Is DiffSuppressFunc or being more restrictive when saving to TF state preferable in Terraform SDKv2?

Context: I'm adding a new resource to a TF Provider (using SDKv2) with roughly the following schema:
resource "player" "football" {
type = "FOOTBALL"
...
config = {
"dribbling" = "50"
"speed" = "90"
"position" = "GOALKEEPER"
}
}
that I represent as:
"config": {
Type: schema.TypeMap,
Elem: &schema.Schema{
Type: schema.TypeString,
},
Required: true,
ForceNew: true,
},
The important detail here is that different player types have different sets of required attributes (dribbling, speed, position for football; height, can_dunk, arm_span for basketball) -- all players share the same API endpoint, so I introduced just one resource to cover them all.
I'd like to support importing players, but apparently the READ response includes a bunch of fields that are optional on create (and I suspect most users won't have them in their Terraform configuration file), which means I get a state difference when saving the whole config like:
d.Set("config", player.GetConfig()) // GetConfig includes a bunch of attributes that are optional on create, or even computed
So I've got a question: which of the following two options is preferable?
Implement DiffSuppressFunc for the config attribute where I'll be ignoring these optional fields (the downside is I'll have an implicit drift between main.tf and the TF state file).
Be more restrictive when writing config to the TF state file; instead of
d.Set("config", player.GetConfig())
use something like
// filtered config will match config in main.tf
filteredConfig := ...
d.Set("config", filteredConfig)
In some other Terraform providers that deal with similar situations (where a particular argument has a mixture of configuration-provided and remote-system-provided nested values), the resource type implementation takes a compromise position of effectively exposing the same data in two different attributes, where one of them represents what the user configured and the other represents the full data returned by the remote system. For example, you might have config to be set in the configuration, and expanded_config representing the full set of elements the server decided on.
There is a challenge with that approach in that you'll probably need a special rule in your Read function to somehow decide if a change you detect in the remote system constitutes "drift" relative to the configuration or if it's just an additional element added by the server.
From what you described it seems like the rule could be that any key that's present in config in the prior state (that is, the values visible to d.Get inside Read before you call d.Set) would have its value overwritten by what the server returned, but any keys that were not present before are ignored entirely. This would create the effect then that any key the author specified in the configuration is considered "managed by Terraform" while any other key is only read by Terraform and not directly managed.
If you adopt that strategy then it's worth keeping in mind what will happen in a situation where the user has changed the configuration to include a new key or to remove a previously-present key. The Read operation is in terms of the previous state rather than the configuration, so that function will see the keys that were present at the end of the last apply, not the keys currently present in the configuration. In particular this means that if an author adds a new key that the server was already tracking then it will appear in the subsequent plan as being added, even though it might technically be more appropriate to show it as an in-place update (~) or a no-op. This is an example of the compromises we sometimes need to make in order to adapt remote APIs to fit within Terraform's model of resource instances.
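As a rough sketch of that prior-state filtering rule inside Read (player.GetConfig comes from your snippet; I'm assuming it returns a map keyed by the same config names, and everything else here is illustrative):
// Sketch of the filtering step inside the Read function.
remoteConfig := player.GetConfig() // assumed to be a map keyed by the config names

// d.Get here returns the prior state, i.e. the keys recorded at the end of the
// last apply, which approximates what the user actually configured.
priorConfig := d.Get("config").(map[string]interface{})

filteredConfig := map[string]interface{}{}
for key := range priorConfig {
  if value, ok := remoteConfig[key]; ok {
    // Keys the user declared are "managed by Terraform": take the server's
    // current value so genuine drift in those keys is still detected.
    filteredConfig[key] = value
  }
  // Keys that exist only on the server are ignored entirely; keys the server
  // dropped simply fall out of the filtered map.
}

d.Set("config", filteredConfig)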

How to use the sops provider with Terraform using an array instead of a single value

I'm pretty new to Terraform. I'm trying to use the sops provider plugin for encrypting secrets from a yaml file:
Sops Provider
I need to create a Terraform user object for a later provisioning stage like this example:
users = [{
  name     = "user123"
  password = "password12"
}]
I've prepared a secrets.values.enc.yaml file for storing my secret data:
yaml_users:
  - name: user123
    password: password12
I've encrypted the file using the sops command. I can decrypt the file successfully for testing purposes.
Now I try to use the encrypted file in Terraform for creating the user object:
data "sops_file" "test-secret" {
source_file = "secrets.values.enc.yaml"
}
# user data decryption
users = yamldecode(data.sops_file.test-secret.raw).yaml_users
Unfortunately I cannot debug the data or the structure of "users" as Terraform doesn't display sensitive data. When I try to use that users variable for the later provisioning stage, it doesn't seem to be what is needed:
Cannot use a set of map of string value in for_each. An iterable
collection is required.
When I do the same thing with the unencrypted yaml file everything seems to be working fine:
users = yamldecode(file("secrets.values.dec.yaml")).yaml_users
It looks like the sops provider decryption doesn't create an array or that "iterable collection" that I need.
Does anyone know how to use the terraform sops provider for decrypting an array of key-value pairs? A single value like "adminpassword" is working fine.
I think the "set of map of string" part of this error message is the important part: for_each requires either a map directly (in which case the map keys become the instance identifiers) or a set of individual strings (in which case those strings become the instance identifiers).
Your example YAML file shows yaml_users being defined as a YAML sequence of maps, which corresponds to a tuple of objects on conversion with yamldecode.
To use that data structure with for_each you'll need to first project it into a map whose keys will serve as the unique identifier for each instance of the resource. Assuming that the name values are suitably unique, you could project it so that those values are the keys:
data "sops_file" "test-secret" {
source_file = "secrets.values.enc.yaml"
}
locals {
users = tomap({
for u in yamldecode(data.sops_file.test-secret.raw).yaml_users :
u.name => u
})
}
The result being a sensitive value adds an extra wrinkle here, because Terraform won't allow using a sensitive value as the identifier for an instance of a resource -- to do so would make it impossible to show the resource instance address in the UI, and impossible to describe the instance on the command line for commands that need that.
However, this does seem like exactly the use-case shown in the example of the nonsensitive function at the time I'm writing this: you have a collection that is currently wholly marked as sensitive, but you know that only parts of it are actually sensitive and so you can use nonsensitive to explain to Terraform how to separate the nonsensitive parts from the sensitive parts. Here's an updated version of the locals block in my previous example using that function:
locals {
  users = tomap({
    for u in yamldecode(data.sops_file.test-secret.raw).yaml_users :
    nonsensitive(u.name) => u
  })
}
If I'm making a correct assumption that it's only the passwords that are sensitive and that the usernames are okay to disclose, the above will produce a suitable data structure where the usernames are visible in the keys but the individual element values will still be marked as sensitive.
local.users then meets all of the expectations of resource for_each, and so you should be able to use it with whichever other resources you need to repeat systematically for each user.
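For example (with a purely hypothetical resource type and arguments, just to show the for_each shape):
resource "some_provisioning_user" "example" {
  for_each = local.users

  name     = each.key
  password = each.value.password
}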
Please note that Terraform's tracking of sensitive values is for UI purposes only and will not prevent these passwords from being saved in the state as part of whichever resources make use of them. If you use Terraform to manage sensitive data then you should treat the resulting state snapshots as sensitive artifacts in their own right, being careful about where and how you store them.

Azure Function OpenAPI - Error same key has already been added

I have a problem implementing OpenAPI in an Azure Function.
I am using the following package
Microsoft.Azure.WebJobs.Extensions.OpenApi
This is the error:
An item with the same key has already been added. Key: result_list`1
The cause is that I have several functions returning the same type CustomCollection<T>.
How could you implement a solution for this issue?
Before inserting data into the custom generic collection (CustomCollection<T>), try removing the previous values, or clear the collection again after you have read the values back from the function. This can help to avoid the conflict.
Using multiple collections with different, distinct names can also solve the problem.
Alternatively, add a prefix or suffix to the key when inserting.
E.g. for the actual key result_list, you could use result_list_fun1 or fun1_result_list.
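As a rough sketch of the "different names" idea (the item types Order and Customer here are purely hypothetical placeholders), each function could return its own concrete, distinctly named type so every OpenAPI schema is registered under a unique key:
// Hypothetical wrapper types; they assume CustomCollection<T> has an accessible
// parameterless constructor. Each function returns its own named type instead
// of the shared generic CustomCollection<T>.
public class Fun1ResultList : CustomCollection<Order> { }
public class Fun2ResultList : CustomCollection<Customer> { }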

Firebase multi-path update set a parent node to null

I am trying to use Firebase Realtime Database multi-path updates.
However, trying to set a parent node to null as below results in an error.
const firebaseUpdate = {}
firebaseUpdate[`user/${uid}`] = null
db.ref().update(firebaseUpdate)
Error: Reference.update failed: First argument contains a path /user/USER_ID
that is ancestor of another path /user/USER_ID/creationTime
I was wondering if there is a way to use multi-path updates in order to set a parent node with multiple children to null.
I assume I could use remove or set function but I'd rather use the multi-path update.
The error message indicates that you're trying to apply two conflicting updates to the database in one operation. As the message says, your update tries to:
write to /user/USER_ID
write to /user/USER_ID/creationTime
The second write is a child of the first one. Since the order of writes in a multi-location update is unspecified, it's impossible to say what the outcome of the write operation will be.
If you want to replace any data that currently exists at /user/USER_ID with the creationTime, you should update it like this:
db.ref().update({
  "/user/USER_ID": { creationTime: Date.now() }
})
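If instead you just want to delete the whole node, a multi-path update should also work, as long as no other entry in the same update targets a descendant of that node -- a small sketch under that assumption:
const firebaseUpdate = {}
// Only the parent path appears in the update; no entry for a child such as
// user/${uid}/creationTime may be present in the same object.
firebaseUpdate[`user/${uid}`] = null
db.ref().update(firebaseUpdate)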

Retrieve superior hash-key name in Hiera

Hello, I am building a data structure in Hiera / Puppet for creating mysql config files. My goal is to have some default values which can be overridden with a merge. It works up to this point.
Because we have different mysql instances on many hosts, I want to automatically configure some paths to be unique for every instance. I have the instance name as a hash (name) of hashes in the namespace our_mysql::configure_db::dbs:
In my case I want to look up the instance names like 'sales_db' or 'hr_db' in paths like datadir, but I cannot find a way to look up the superior key name.
Hiera data from "our_mysql" module represents some default values:
our_mysql::configure_db::dbs:
  'defaults':
    datadir: /var/lib/mysql/"%{lookup('lookup to superior hash-key name')}"
    log_error: /var/log/mysql/"%{lookup('lookup to superior hash-key name')}".log
    logbindir: /var/lib/mysql/"%{lookup('lookup to superior hash-key name')}"
    db_port: 3306
    ...: ...
    KEY_N: VALUE_N
Hiera data from the node definition:
our_mysql::configure_db::dbs:
  'sales_db':
    db_port: "3317"
    innodb_buffer_pool_size: "1"
    innodb_log_file_size: 1GB
    innodb_log_files_in_group: "2"
    server_id: "1"
  'hr_db':
    db_port: "3307"
I know how to do simple lookups or how to iterate with
.each | String $key, Hash $value | { ... }
but I have no clue how to reference a key from a certain hierarchy level. Searching all related Puppet and Hiera topics didn't help.
Is it possible in any way, and if yes, how?
As I understand the question, I think what you hope to achieve is that, for example, when you look up our_mysql::configure_db::dbs.sales_db key, you get a merge of the data for that (sub)key and those for the our_mysql::configure_db::dbs.defaults subkey, AND that the various %{lookup ...} tokens in the latter somehow resolve to the string sales_db.
I'm afraid that's not going to happen. The interpolation tokens don't even factor in here -- Hiera simply won't perform such a merge at all. I guess you have a hash-merge lookup in mind, but that merges only identical keys and subkeys, so not our_mysql::configure_db::dbs.sales_db and our_mysql::configure_db::dbs.defaults. Hiera provides for defaults for particular keys in the form of data recorded for those specific keys at a low-priority level of the data hierarchy. The "defaults" subkey you present, on the other hand, has no special meaning to the standard Hiera data providers.
You can still address this problem, just not entirely within the data. For example, consider this:
$dbs = lookup('our_mysql::configure_db::dbs', Hash, 'deep')

$dbs.filter |$dbname, $dbparams| { $dbname != 'defaults' }.each |$dbname, $dbparams| {
  # Declare a database using a suitable resource type. "my_mysql::database" is
  # a dummy resource name for the purposes of this example only.
  my_mysql::database {
    $dbname:
      * => $dbparams;
    default:
      datadir   => "/var/lib/mysql/${dbname}",
      log_error => "/var/log/mysql/${dbname}.log",
      logbindir => "/var/lib/mysql/${dbname}",
      *         => $dbs['defaults'];
  }
}
That supposes data of the form presented in the question, and it uses the data from the defaults subkey where those do not require knowledge of the specific DB name, but it puts the patterns for the various directory names into the resource declaration instead of into the data. The most important things to recognize are the use of the splat * parameter wildcard for obtaining multiple parameters from a hash, and the use of per-expression resource property defaults via the default keyword in a resource declaration.
If you wanted to do so, you could push more details of the directory names back into the data with a little more effort (and one or more new keys).
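For instance, a sketch of that idea with hypothetical datadir_base and logdir_base keys added to the 'defaults' data; they are split out of the hash first so they are not splatted into the resource as unknown parameters:
$defaults     = $dbs['defaults']
$datadir_base = $defaults['datadir_base']   # e.g. /var/lib/mysql
$logdir_base  = $defaults['logdir_base']    # e.g. /var/log/mysql
$db_defaults  = $defaults - ['datadir_base', 'logdir_base']
The default: block would then interpolate "${datadir_base}/${dbname}" and "${logdir_base}/${dbname}.log" and splat $db_defaults instead of $dbs['defaults'].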
