Overwrite Puppet Class Variables in Manifest

I'm currently using hiera to set all my class parameters for the Puppet forge Gitlab module.
cat hieradata/nodes/example.yaml
---
gitlab::backup_cron_enable: true
gitlab::gitlab_rails:
  backup_keep_time: 604800
  backup_path: /opt/gitlab_backup
  gitlab_default_can_create_group: false
  initial_root_password: foobar
...
cat site/profile/manifests/gitlab.pp
class profile::gitlab {
  include gitlab
}
This code works as intended but I'd like to redact the password values in the log output and reports.
I tried to use lookup_options to convert the sensitive values, but Puppet still displays the unredacted values.
cat hieradata/nodes/example.yaml
---
lookup_options:
  gitlab::gitlab_rails::initial_root_password:
    convert_to: "Sensitive"
gitlab::backup_cron_enable: true
gitlab::gitlab_rails:
  backup_keep_time: 604800
  backup_path: /opt/gitlab_backup
  gitlab_default_can_create_group: false
  initial_root_password: foobar
...
What is the best way to redact all sensitive values whilst using hiera to define the class parameters?

You need to have the password as a separate key in order for the auto conversion to take effect. The key that is looked up is bound to a hash, and it is not possible to address individual values in a hash with lookup_options (it is the entire hash that is looked up).
You can make an individual value Sensitive by using an alias and binding the password in clear text to a separate key - like this:
cat hieradata/nodes/example.yaml
---
lookup_options:
  gitlab::gitlab_rails::initial_root_password:
    convert_to: "Sensitive"
gitlab::backup_cron_enable: true
gitlab::gitlab_rails:
  backup_keep_time: 604800
  backup_path: /opt/gitlab_backup
  gitlab_default_can_create_group: false
  initial_root_password: '%{alias("gitlab::gitlab_rails::initial_root_password")}'
gitlab::gitlab_rails::initial_root_password: 'foobar'
...
With this approach you could also use EYAML or another secure Hiera backend to store the password in encrypted form. Such a backend may already return decrypted values wrapped in Sensitive; the Vault backend does this, for example.
However, even if you get past the first hurdle, the result depends on what the gitlab module does with the hash now containing a Sensitive value. If it just passes the value of initial_root_password on, it may work; but if it performs any operation on this value (such as checking whether it is an empty string), it may fail. If you are unlucky, it may seem to work but you may end up with the password "redacted" :-). If it does not work, contact the maintainers of the module and request that they support having the password as a Sensitive value instead of a String.
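For illustration, "supporting Sensitive" on the module side usually means accepting both types and unwrapping only at the point of use. A rough sketch (mymodule::example and its parameter are invented for this example, not the gitlab module's actual API):

```puppet
class mymodule::example (
  Variant[String, Sensitive[String]] $initial_root_password,
) {
  # A Sensitive value does not compare like a String (e.g. against ''),
  # so normalize first; unwrap returns the clear-text value.
  $password = $initial_root_password ? {
    Sensitive => $initial_root_password.unwrap,
    default   => $initial_root_password,
  }
  # ... use $password only where the clear text is actually required ...
}
```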

Related

How to use the sops provider with Terraform using an array instead of a single value

I'm pretty new to Terraform. I'm trying to use the sops provider plugin for encrypting secrets from a yaml file:
Sops Provider
I need to create a Terraform user object for a later provisioning stage like this example:
users = [{
  name     = "user123"
  password = "password12"
}]
I've prepared a secrets.values.enc.yaml file for storing my secret data:
yaml_users:
  - name: user123
    password: password12
I've encrypted the file using "sops" command. I can decrypt the file successfully for testing purposes.
Now I try to use the encrypted file in Terraform for creating the user object:
data "sops_file" "test-secret" {
  source_file = "secrets.values.enc.yaml"
}
# user data decryption
users = yamldecode(data.sops_file.test-secret.raw).yaml_users
Unfortunately I cannot inspect the data or the structure of "users", as Terraform doesn't display sensitive data. When I try to use that users variable in the later provisioning stage, it doesn't seem to be what is needed:
Cannot use a set of map of string value in for_each. An iterable
collection is required.
When I do the same thing with the unencrypted yaml file everything seems to be working fine:
users = yamldecode(file("secrets.values.dec.yaml")).yaml_users
It looks like the sops provider decryption doesn't create an array or that "iterable collection" that I need.
Does anyone know how to use the terraform sops provider for decrypting an array of key-value pairs? A single value like "adminpassword" is working fine.
I think the "set of map of string" part of this error message is the important part: for_each requires either a map directly (in which case the map keys become the instance identifiers) or a set of individual strings (in which case those strings become the instance identifiers).
Your example YAML file shows yaml_users being defined as a YAML sequence of maps, which corresponds to a tuple of objects on conversion with yamldecode.
To use that data structure with for_each you'll need to first project it into a map whose keys will serve as the unique identifier for each instance of the resource. Assuming that the name values are suitably unique, you could project it so that those values are the keys:
data "sops_file" "test-secret" {
  source_file = "secrets.values.enc.yaml"
}

locals {
  users = tomap({
    for u in yamldecode(data.sops_file.test-secret.raw).yaml_users :
    u.name => u
  })
}
The result being a sensitive value adds an extra wrinkle here, because Terraform won't allow using a sensitive value as the identifier for an instance of a resource -- to do so would make it impossible to show the resource instance address in the UI, and impossible to describe the instance on the command line for commands that need that.
However, this does seem like exactly the use-case shown in the example of the nonsensitive function at the time I'm writing this: you have a collection that is currently wholly marked as sensitive, but you know that only parts of it are actually sensitive and so you can use nonsensitive to explain to Terraform how to separate the nonsensitive parts from the sensitive parts. Here's an updated version of the locals block in my previous example using that function:
locals {
  users = tomap({
    for u in yamldecode(data.sops_file.test-secret.raw).yaml_users :
    nonsensitive(u.name) => u
  })
}
If I'm making a correct assumption that it's only the passwords that are sensitive and that the usernames are okay to disclose, the above will produce a suitable data structure where the usernames are visible in the keys but the individual element values will still be marked as sensitive.
local.users then meets all of the expectations of resource for_each, and so you should be able to use it with whichever other resources you need to repeat systematically for each user.
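For illustration, a consumer could look like this (null_resource stands in for whatever real per-user resource you manage; it is not part of the question's setup):

```hcl
resource "null_resource" "per_user" {
  for_each = local.users

  triggers = {
    # each.key is the nonsensitive username; each.value.password remains
    # marked sensitive and is redacted in plan and apply output.
    username = each.key
  }
}
```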
Please note that Terraform's tracking of sensitive values is for UI purposes only and will not prevent these passwords from being saved in the state as part of whichever resources make use of them. If you use Terraform to manage sensitive data, you should treat the resulting state snapshots as sensitive artifacts in their own right, and be careful about where and how you store them.

Using the following hashing method to store a database object

I would like to create a valid variable identifier that can be used across any database object. It seems Oracle has the strictest requirements and only allows 30 chars for a (non-DB) object name, so something like [_a-zA-Z]\w{,29}. My thinking was to base32-encode an MD5 digest, such as:
>>> import base64, hashlib
>>> base64.b32encode(hashlib.md5(b'asdf').digest()).rstrip(b'=')
b'SEXMQA5SZZE6JJKBA2GUSWVVOA'
>>> len(_)
26
Does that seem like an acceptable approach?
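One caveat worth noting: the base32 alphabet includes the digits 2-7, so the encoded digest can start with a digit and fail the [_a-zA-Z] leading-character rule. A minimal sketch that guards against that (the function name and prefix are my own, not from any library):

```python
import base64
import hashlib
import re

IDENTIFIER = re.compile(r"[_a-zA-Z]\w{0,29}")

def hashed_identifier(name: bytes, prefix: str = "h") -> str:
    # md5 -> 16 bytes -> 26 base32 chars once the '=' padding is stripped
    digest = hashlib.md5(name).digest()
    encoded = base64.b32encode(digest).rstrip(b"=").decode("ascii")
    # base32 output may begin with 2-7, so prepend a letter; the total of
    # 27 characters still fits Oracle's 30-character limit
    return prefix + encoded

ident = hashed_identifier(b"asdf")  # deterministic for a given input
```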

Retrieve superior hash-key name in Hiera

Hello, I am building a data structure in Hiera / Puppet for creating MySQL config files. My goal is to have some default values which can be overwritten with a merge, and up to this point it works.
Because we have different MySQL instances on many hosts, I want to automatically configure some paths to be unique for every instance. I have the instance name as a hash (name) of hashes in the namespace our_mysql::configure_db::dbs:
In my case I want to look up the instance names like 'sales_db' or 'hr_db' in paths like datadir, but I cannot find a way to look up the superior key name.
Hiera data from "our_mysql" module represents some default values:
our_mysql::configure_db::dbs:
  'defaults':
    datadir: /var/lib/mysql/"%{lookup('lookup to superior hash-key name')}"
    log_error: /var/log/mysql/"%{lookup('lookup to superior hash-key name')}".log
    logbindir: /var/lib/mysql/"%{lookup('lookup to superior hash-key name')}"
    db_port: 3306
    ...: ...
    KEY_N: VALUE_N
Hiera data from the node definition:
our_mysql::configure_db::dbs:
  'sales_db':
    db_port: "3317"
    innodb_buffer_pool_size: "1"
    innodb_log_file_size: 1GB
    innodb_log_files_in_group: "2"
    server_id: "1"
  'hr_db':
    db_port: "3307"
I know how to do simple lookups and how to iterate with
.each | String $key, Hash $value | { ... }
but I have no clue how to reference a key from a certain hierarchy level. Searching all Puppet- and Hiera-related topics didn't help.
Is it possible an any way and if yes how?
As I understand the question, I think what you hope to achieve is that, for example, when you look up our_mysql::configure_db::dbs.sales_db key, you get a merge of the data for that (sub)key and those for the our_mysql::configure_db::dbs.defaults subkey, AND that the various %{lookup ...} tokens in the latter somehow resolve to the string sales_db.
I'm afraid that's not going to happen. The interpolation tokens don't even factor in here -- Hiera simply won't perform such a merge at all. I guess you have a hash-merge lookup in mind, but that merges only identical keys and subkeys, so not our_mysql::configure_db::dbs.sales_db and our_mysql::configure_db::dbs.defaults. Hiera provides for defaults for particular keys in the form of data recorded for those specific keys at a low-priority level of the data hierarchy. The "defaults" subkey you present, on the other hand, has no special meaning to the standard Hiera data providers.
You can still address this problem, just not entirely within the data. For example, consider this:
$dbs = lookup('our_mysql::configure_db::dbs', Hash, 'deep')

$dbs.filter |$dbname, $dbparms| { $dbname != 'defaults' }.each |$dbname, $dbparms| {
  # Declare a database using a suitable resource type. "my_mysql::database" is
  # a dummy resource name for the purposes of this example only.
  my_mysql::database {
    $dbname:
      * => $dbparms;
    default:
      datadir   => "/var/lib/mysql/${dbname}",
      log_error => "/var/log/mysql/${dbname}.log",
      logbindir => "/var/lib/mysql/${dbname}",
      *         => $dbs['defaults'];
  }
}
That supposes data of the form presented in the question. It uses the data from the defaults subkey where those do not require knowledge of the specific DB name, but it puts the patterns for the various directory names into the resource declaration instead of into the data. The most important things to recognize are the use of the splat (*) parameter wildcard for obtaining multiple parameters from a hash, and the use of per-expression resource property defaults via the default keyword in a resource declaration.
If you wanted to do so, you could push more details of the directory names back into the data with a little more effort (and one or more new keys).
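One way that could look (the dir_patterns key and the %s placeholder are invented for this sketch, not an existing convention) is to store the patterns in Hiera and expand them in the manifest:

```puppet
# Hiera:
#   our_mysql::configure_db::dir_patterns:
#     datadir: "/var/lib/mysql/%s"
#     log_error: "/var/log/mysql/%s.log"

# Manifest: expand the pattern with the DB name using sprintf()
$patterns = lookup('our_mysql::configure_db::dir_patterns', Hash)
$datadir  = sprintf($patterns['datadir'], $dbname)
```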

Iterate over a hash key/values in Puppet

I'm dabbling with Puppet to update an arbitrary list of appSettings in an ASP.NET web.config (for deployment purposes), and I'm in a dilemma, mostly because I'm a real n00b in Puppet.
I have this yaml file (hiera)
---
appSettings:
  setting1: "hello"
  setting2: "world!"
  setting3: "lalala"
The number of setting[x] entries can vary arbitrarily, and I would like to loop through the hash keys/values to update or add the corresponding appSetting in the web.config (using exec with PowerShell). The problem is I've searched high and low for how to iterate over keys and values.
I came across create_resources, but that iterates through a hash of hashes with a pre-determined set of keys; again, the key names are not known within the manifest (hence iterating over the key/value pairs).
any guidance is appreciated.
Edit: it looks like there is a keys() function I can use on the hash; I can iterate over that, then use hiera_hash('appSettings') to get the hash and look up the values.
OK, I just confirmed it. This is what you can do in your manifest:
define updateAppSetting {
  # get the hash again because outside vars aren't visible here
  $appSettings = hiera_hash('appSettings')
  # $name is the key, $appsettingValue is the value
  $appsettingValue = $appSettings[$name]
  # update the web.config here!
}

$appSettings = hiera_hash('appSettings')
# the keys() function returns the array of hash keys
$appSettingKeys = keys($appSettings)
# iterate through each appSetting key
updateAppSetting { $appSettingKeys: }
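For what it's worth, on Puppet 4 and later the defined-type workaround is no longer necessary; the hash can be iterated directly (lookup replaces hiera_hash, and the notify resource below is just a placeholder for the real exec/PowerShell resource):

```puppet
$app_settings = lookup('appSettings', Hash, 'first', {})

$app_settings.each |String $key, String $value| {
  # replace notify with the resource that actually edits web.config
  notify { "appSetting ${key}":
    message => "would set ${key} = ${value}",
  }
}
```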

How do I create a user with a random password and store it to a file using puppet

I want to create a user for a service (postgres, rabbitmq...) using a random generated password. This password should then be written to a file on the host. This file, containing env vars is then used by an application to connect to those services.
I don't want to store these passwords elsewhere.
postgresql::server::db { $name:
  user     => $name,
  password => postgresql_password($name, random_password(10)),
}
Then I want to insert this password, in the form PG_PASS='the same password', into a config file, but the whole thing should happen only if the user is not already present.
In pure Puppet
A trick is to define a custom type somewhat like this:
define authfile($length = 24, $template, $path) {
  $passwordfile = "/etc/puppet/private/${::hostname}/${title}"
  $password = file($passwordfile, '/dev/null')
  @@exec { "generate-${title}":
    command => "openssl rand -hex -out '${passwordfile}' ${length}",
    creates => $passwordfile,
    tag     => 'generated_password',
  }
  file { $path:
    content => template($template),
  }
}
And on your puppetmaster, have something like:
Exec <<| tag == 'generated_password' |>>
You can then pass in the $template variable the name of a template that will have the $password variable available. You will need to be careful about how these authfile types are declared (as this creates files on the puppetmaster, you will want to guard against malicious facts), and you will need to run Puppet once on the host (so that the resource is exported), once on the puppetmaster (so that the secret file is generated), then once again on the host (so that the secret file is read) for it to work.
With a custom function
Another solution is to write a custom function, random_password, that will use the fqdn and sign it with a secret that is stored on the puppetmaster (using a HMAC) to seed the password. That way you will have a password that can't be guessed without getting the secret, and no extra puppet roundtrips.
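The idea behind such a function can be sketched outside Puppet. Here is a minimal Python illustration (the function name and parameters are assumptions for the sketch, not an existing Puppet or trocla API):

```python
import hashlib
import hmac

def derived_password(fqdn: str, secret: bytes, length: int = 24) -> str:
    """Deterministically derive a per-host password from a master secret.

    The same (secret, fqdn) pair always yields the same password, so no
    state has to be stored, yet the password cannot be reproduced without
    the secret kept on the puppetmaster.
    """
    digest = hmac.new(secret, fqdn.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:length]

pw = derived_password("db1.example.com", b"master-secret")
```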
I haven't tried it myself yet, but trocla looks like exactly what you're looking for. Here's a little intro.
EDIT: After having now tried out trocla I can tell you that it works like a charm :-)
You need to have the trocla module installed to use it from puppet.
