Puppet 6 and module puppetlabs/accounts: Hiera YAML does not fill content

I am attempting to define my user accounts as Hashes in Hiera, like this:
---
accounts::user:
  jack:
    ensure: present
    bashrc_content: file('accounts/shell/bashrc')
    bash_profile_content: file('accounts/shell/bash_profile')
It works fine if I define them in my *.pp files.
Please find more details about my hiera.yaml, manifest, and users.yaml in the Gist.
Why doesn't this work?
P.S. This question is a follow-up to an earlier one.

No, what you are trying to do is not possible: Hiera data is plain YAML, and it cannot call Puppet functions such as file().
I have a few options for you. In Hiera, you could keep all of the data other than the calls to the file() function:
---
accounts::user:
  jack:
    locked: false
    comment: Jack Doe
    ensure: present
    groups:
      - admins
      - sudo
    shell: '/bin/bash'
    home_mode: '0700'
    purge_sshkeys: false
    managehome: true
    managevim: false
    sshkeys:
      - ssh-rsa AAAA
    password: '70'
And then in your manifest:
$defaults = {
  'bashrc_content'       => file('accounts/shell/bashrc'),
  'bash_profile_content' => file('accounts/shell/bash_profile'),
}
$user_data = lookup('accounts::user', Hash[String,Hash], 'hash', {})
$user_data.each |$user, $props| {
  accounts::user { $user: * => $props + $defaults }
}
Another option is to simply include your file content in the YAML data, i.e.
---
accounts::user:
  jack:
    locked: false
    comment: Jack Doe
    ensure: present
    groups:
      - admins
      - sudo
    shell: '/bin/bash'
    home_mode: '0700'
    purge_sshkeys: false
    managehome: true
    managevim: false
    bashrc_content: |
      # If not running interactively, don't do anything
      [ -z "$PS1" ] && return
      if [ -f /etc/bashrc ]; then
        . /etc/bashrc # --> Read /etc/bashrc, if present.
      fi
      ...
    bash_profile_content: ...
    sshkeys:
      - ssh-rsa AAAA
    password: '70'
Then you won't need the file function or the files at all.
For more info:
- What you can interpolate in Hiera data (a short sketch of this follows below).
- The splat operator (*) and a useful blog post on how to use it.
- Multiline strings in YAML.
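To illustrate the first point with a quick sketch of my own (not from the original answer): Hiera interpolates facts and variables with the %{...} syntax, but it never calls Puppet functions such as file(), which is why the data in the question does not behave as hoped:

---
accounts::user:
  jack:
    ensure: present
    # Fact interpolation is supported in Hiera data...
    comment: "Jack Doe on %{facts.fqdn}"
    # ...but a Puppet function call is not; this would stay a literal string:
    # bashrc_content: file('accounts/shell/bashrc')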

Related

Ansible tasks after delegate_to:127.0.0.1 are skipped

My code is below. I am reading from a local CSV file and using those values for tasks on remote hosts.
---
- name: Empty Topics
  hosts: remote_host
  gather_facts: no
  vars:
    kafka_topics: /bin/kafka-topics
    bootstrap_server: "list_of_broker_hosts"
    retention_ms: 604800000
    command_config: /etc/kafka/client.properties
    kafka_log_dirs: /usr/bin/kafka-log-dirs
    #ansible_connection: ssh
    #ansible_user: ansible
    #ansible_become: true
  tasks:
    - name: "Reading Topic Names"
      read_csv:
        path: topics_list.csv
      register: topics
      delegate_to: 127.0.0.1
    - name: "Setting Topic Retention to 0"
      become: yes
      become_user: root
      shell: |
        {{ kafka_topics }} --bootstrap_server {{bootstrap_server}} --alter --topic "{{ item.topic_name }}" --config retention.ms=0 --command_config #{{command_config}}
        #touch /tmp/producer_test_1
      loop: "{{ topics.list }}"
    - name: "waiting for size to go zero "
      become: yes
      become_user: root
      shell: |
        topic_size=1
        while [ $topic_size -ne 0 ]
        do
          topic_size=`{{kafka_log_dirs}} --command_config {{command_config}} --bootstrap_server {{bootstrap_server}} --topic-list "{{ item.topic_name }}" --describe | grep -oP '(?<=size":)\d+' | awk '{ sum += $1 } END { print sum }' `
          sleep 40
        done
      loop: "{{ topics.list }}"
    - name: "Setting Topic Retention to 7"
      become: yes
      become_user: root
      shell: |
        #{{ kafka_topics }} --bootstrap_server {{bootstrap_server}} --alter --topic "{{ item.topic_name }}" --config retention.ms={{retention_ms}} --command_config {{command_config}}
        #touch /tmp/producer_test_2
      loop: "{{ topics.list }}"
Here the tasks after the first task, "Reading Topic Names", are skipped. If I remove that task they succeed, but then I have to hard-code values in the subsequent tasks.
The execution log is below. How can I avoid this? My current Ansible version is 2.9.19. This playbook previously worked as-is on Ansible 2.9.6 at my previous organization; I am not sure what settings may differ at my new company.
I tried delegate_to: localhost as well as delegate_to: 127:0:0:1
ansible#localhost[~] $ ansible-playbook empty_topics.pb -i /home/ansible/inv_sit.yml -l all
PLAY [Empty Topics] ***********************************************************************************************************
TASK [Reading Topic Names] ****************************************************************************************************
ok: [remotehost]
TASK [Setting Topic Retention to 0] *******************************************************************************************
TASK [waiting for size to go zero] ********************************************************************************************
TASK [Setting Topic Retention to 7] *******************************************************************************************
PLAY RECAP ********************************************************************************************************************
remotehost : ok=1 changed=0 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0
ansible#localhost[~] $ ansible-playbook empty_topics.pb -i /home/ansible/inv_sit.yml -l all -vvv
ansible-playbook 2.9.19
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/ansible/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /usr/bin/ansible-playbook
python version = 3.6.8 (default, Jun 14 2022, 12:54:58) [GCC 8.5.0 20210514 (Red Hat 8.5.0-10)]
Using /etc/ansible/ansible.cfg as config file
host_list declined parsing /home/ansible/inv_sit.yml as it did not pass its verify_file() method
script declined parsing /home/ansible/inv_sit.yml as it did not pass its verify_file() method
Parsed /home/ansible/inv_sit.yml inventory source with ini plugin
Skipping callback 'actionable', as we already have a stdout callback.
Skipping callback 'counter_enabled', as we already have a stdout callback.
Skipping callback 'debug', as we already have a stdout callback.
Skipping callback 'dense', as we already have a stdout callback.
Skipping callback 'dense', as we already have a stdout callback.
Skipping callback 'full_skip', as we already have a stdout callback.
Skipping callback 'json', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'null', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
Skipping callback 'selective', as we already have a stdout callback.
Skipping callback 'skippy', as we already have a stdout callback.
Skipping callback 'stderr', as we already have a stdout callback.
Skipping callback 'unixy', as we already have a stdout callback.
Skipping callback 'yaml', as we already have a stdout callback.
PLAYBOOK: empty_topics.pb *****************************************************************************************************
1 plays in empty_topics.pb
PLAY [Empty Topics] ***********************************************************************************************************
META: ran handlers
TASK [Reading Topic Names] ****************************************************************************************************
task path: /home/ansible/empty_topics.pb:18
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: ansible
<127.0.0.1> EXEC /bin/sh -c 'echo ~ansible && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/ansible/.ansible/tmp `"&& mkdir "` echo /home/ansible/.ansible/tmp/ansible-tmp-1672893960.0795922-3957982-94418260356492 `" && echo ansible-tmp-1672893960.0795922-3957982-94418260356492="` echo /home/ansible/.ansible/tmp/ansible-tmp-1672893960.0795922-3957982-94418260356492 `" ) && sleep 0'
Using module file /usr/lib/python3.6/site-packages/ansible/modules/files/read_csv.py
<127.0.0.1> PUT /home/ansible/.ansible/tmp/ansible-local-3957970ixug96xw/tmpn373dahz TO /home/ansible/.ansible/tmp/ansible-tmp-1672893960.0795922-3957982-94418260356492/AnsiballZ_read_csv.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/ansible/.ansible/tmp/ansible-tmp-1672893960.0795922-3957982-94418260356492/ /home/ansible/.ansible/tmp/ansible-tmp-1672893960.0795922-3957982-94418260356492/AnsiballZ_read_csv.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/libexec/platform-python /home/ansible/.ansible/tmp/ansible-tmp-1672893960.0795922-3957982-94418260356492/AnsiballZ_read_csv.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/ansible/.ansible/tmp/ansible-tmp-1672893960.0795922-3957982-94418260356492/ > /dev/null 2>&1 && sleep 0'
ok: [remotehost] => {
    "changed": false,
    "dict": {},
    "invocation": {
        "module_args": {
            "delimiter": null,
            "dialect": "excel",
            "fieldnames": null,
            "key": null,
            "path": "topics_list.csv",
            "skipinitialspace": null,
            "strict": null,
            "unique": true
        }
    },
    "list": []
}
TASK [Setting Topic Retention to 0] *******************************************************************************************
task path: /home/ansible/empty_topics.pb:23
TASK [waiting for size to go zero] ********************************************************************************************
task path: /home/ansible/empty_topics.pb:31
TASK [Setting Topic Retention to 7] *******************************************************************************************
task path: /home/ansible/empty_topics.pb:42
META: ran handlers
META: ran handlers
PLAY RECAP ********************************************************************************************************************
remotehost : ok=1 changed=0 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0
ansible#localhost[~] $
As suggested by β.εηοιτ.βε, I checked the CSV file: although it had contents, it was missing the header row, hence the issue (with topics.list empty, each loop had nothing to iterate over, so the tasks were reported as skipped). With a header row added to the CSV file, the code works.
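For reference, a hedged alternative (my addition, not part of the original question or comment): the read_csv module can also be given the column names explicitly via its fieldnames parameter, which is meant for CSV files that have no header row. A minimal sketch, assuming the file holds one topic name per line and the later tasks keep referencing item.topic_name:

    - name: "Reading Topic Names"
      read_csv:
        path: topics_list.csv
        fieldnames:
          - topic_name      # acts as the header for a header-less CSV
      register: topics
      delegate_to: 127.0.0.1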

Sharing WireGuard public keys with ansible.posix.synchronize

I've just started to get into Ansible, so could you please help me or give some advice?
I'm trying to install and configure WireGuard with an Ansible playbook (just in case: I know how to configure WireGuard without Ansible).
I want to share the public keys through Ansible
(and then read them in wg0.conf via PublicKey = {{ lookup('file', '/etc/wireguard/publickey_client') }}).
I'm trying to use ansible.posix.synchronize in my playbook, but when it gets to the "sharing keys" task it just hangs and does nothing (for a long time) until I stop the process.
Starting the playbook with -vv doesn't show anything either.
Playbook wireguard_configuration.yml:
---
- hosts: client
  name: make wg keys on client
  become: true
  tasks:
    - name: wg0.conf client file
      ansible.builtin.copy:
        src: /etc/ansible/conf/wg0_client.conf
        dest: /etc/wireguard/wg0.conf
        mode: 0755
        owner: owner
    - name: creating wg keys on client
      ansible.builtin.shell:
        cmd: wg genkey | tee privatekey_client | wg pubkey > publickey_client
        chdir: /etc/wireguard
    - name: share pubkey from client to server
      ansible.posix.synchronize:
        src: /etc/wireguard/publickey_client
        dest: /etc/wireguard/publickey_client
      delegate_to: server

- hosts: server
  name: make wg keys on server
  become: true
  tasks:
    - name: wg0.conf server file
      ansible.builtin.copy:
        src: /etc/ansible/conf/wg0_server.conf
        dest: /etc/wireguard/wg0.conf
        mode: 0755
        owner: owner
    - name: creating wg keys on client
      ansible.builtin.shell:
        cmd: wg genkey | tee privatekey_server | wg pubkey > publickey_server
        chdir: /etc/wireguard
    - name: share pubkey from server to client
      ansible.posix.synchronize:
        src: /etc/wireguard/publickey_server
        dest: /etc/wireguard/publickey_server
      delegate_to: client
You don't need the synchronize module here: you're not trying to copy a large hierarchy of files; you're only trying to bring a single value from the client to the server. I think a better option is just to stick that value in a variable on the client and then access it via hostvars on the server.
The following playbook is one way of doing that. A few things to note:
I've tried to document the tasks, but let me know if something isn't clear.
This playbook is written to be idempotent: you can run it multiple times and it will only generate the private key once.
- hosts: client
  gather_facts: false
  become: true
  tasks:
    # Read an existing private key if it is available. We set
    # failed_when to false because an "error" simply means that
    # the key doesn't exist and we need to generate it.
    - name: read private key
      command: cat /etc/wireguard/privatekey_client
      failed_when: false
      changed_when: wg_private_read.rc != 0
      register: wg_private_read

    # Generate a new key if necessary. We used the "is changed" test
    # here so that we only generate a new key if we failed to read an
    # existing key in the previous task.
    - name: generate private key
      when: wg_private_read is changed
      command: wg genkey
      register: wg_private_create

    # This will either create the privatekey_client file or leave it
    # unmodified (because the content matches what we read from it
    # earlier in the "read private key" task).
    - name: write private key
      when: wg_private_read is changed
      copy:
        content: "{{ wg_private_create.stdout }}"
        dest: /etc/wireguard/privatekey_client

    # We generate a public key but we don't bother writing it to disk.
    # The client doesn't need it and we can always generate it from
    # the private key.
    - name: generate public key
      shell:
        cmd: wg pubkey
        stdin: "{{ (wg_private_read is changed)|ternary(wg_private_create.stdout, wg_private_read.stdout) }}"
      changed_when: false
      register: wg_public

- hosts: server
  gather_facts: false
  become: true
  tasks:
    - name: write client public key
      copy:
        content: "{{ hostvars.client.wg_public.stdout }}"
        dest: "/etc/wireguard/publickey_client"
Some useful documentation links:
About failed_when and changed_when
The ternary filter
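As a follow-up thought (my addition, not from the answer above): instead of writing publickey_client to a file on the server, you could render it straight into the server's wg0.conf with the template module. A minimal sketch, assuming a Jinja2 template named wg0_server.conf.j2 on the controller that contains a line like PublicKey = {{ hostvars.client.wg_public.stdout }} (the template name and its contents are hypothetical):

- hosts: server
  gather_facts: false
  become: true
  tasks:
    # Render the server-side WireGuard config; the client's public key is read
    # from the client's hostvars, so no intermediate key file is needed.
    - name: render wg0.conf on the server
      template:
        src: wg0_server.conf.j2
        dest: /etc/wireguard/wg0.conf
        mode: "0600"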

Terraform: YAML file rendering issue in the storage section of a Container Linux Config for Flatcar OS

I am trying to generate a file by template rendering and pass it to the user data of an EC2 instance. I am using a third-party Terraform provider to generate an Ignition file from the YAML.
data "ct_config" "worker" {
content = data.template_file.file.rendered
strict = true
pretty_print = true
}
data "template_file" "file" {
...
...
template = file("${path.module}/example.yml")
vars = {
script = file("${path.module}/script.sh")
}
}
example.yml
storage:
  files:
    - path: "/opt/bin/script"
      mode: 0755
      contents:
        inline: |
          ${script}
Error:
Error: Error unmarshaling yaml: yaml: line 187: could not find expected ':'
on ../../modules/launch_template/launch_template.tf line 22, in data "ct_config" "worker":
22: data "ct_config" "worker" {
If I change ${script} to sample data, then it works. Also, no matter what I put in script.sh, I get the same error.
You want this outcome (pseudocode):
storage:
  files:
    - path: "/opt/bin/script"
      mode: 0755
      contents:
        inline: |
          {{content of script file}}
In your current implementation, every line loaded from script.sh after the first is not indented, so the YAML decoder does not interpret the block as desired (the entire script.sh content as the value of inline).
Using indent() you can correct the indentation, and using the newer templatefile function you get a slightly cleaner setup for the template:
data "ct_config" "worker" {
content = local.ct_config_content
strict = true
pretty_print = true
}
locals {
ct_config_content = templatefile("${path.module}/example.yml", {
script = indent(10, file("${path.module}/script.sh"))
})
}
For clarity, here is the example.yml template file (from the original question) to use with the code above:
storage:
  files:
    - path: "/opt/bin/script"
      mode: 0755
      contents:
        inline: |
          ${script}
I had this exact issue with ct_config, and figured it out today. You need to base64encode your script to ensure it's written correctly without newlines: without that, the newlines in your script make it to CT, which attempts to build an Ignition file, which cannot have newlines, causing the error you ran into originally.
Once encoded, you then just need to tell CT to !!binary the file so that Ignition correctly base64-decodes it on deploy:
data "template_file" "file" {
...
...
template = file("${path.module}/example.yml")
vars = {
script = base64encode(file("${path.module}/script.sh"))
}
}
storage:
  files:
    - path: "/opt/bin/script"
      mode: 0755
      contents:
        inline: !!binary |
          ${script}

How to create secured files in Puppet5 with Hiera?

I want to create an SSL certificate and try to secure this operation.
I am using Puppet 5.5.2 and the hiera-eyaml gem.
I created a simple manifest:
cat /etc/puppetlabs/code/environments/production/manifests/site.pp

package { 'tree':
  ensure => installed,
}
package { 'httpd':
  ensure => installed,
}

$filecrt = lookup('files')
create_resources( 'file', $filecrt )
Hiera config
---
version: 5
defaults:
  # The default value for "datadir" is "data" under the same directory as the hiera.yaml
  # file (this file)
  # When specifying a datadir, make sure the directory exists.
  # See https://puppet.com/docs/puppet/latest/environments_about.html for further details on environments.
  datadir: data
  data_hash: yaml_data
hierarchy:
  - name: "Secret data: per-node, per-datacenter, common"
    lookup_key: eyaml_lookup_key # eyaml backend
    paths:
      - "nodes/%{facts.fqdn}.eyaml"
      - "nodes/%{trusted.certname}.eyaml" # Include explicit file extension
      - "location/%{facts.whereami}.eyaml"
      - "common.eyaml"
    options:
      pkcs7_private_key: /etc/puppetlabs/puppet/eyaml/keys/private_key.pkcs7.pem
      pkcs7_public_key: /etc/puppetlabs/puppet/eyaml/keys/public_key.pkcs7.pem
  - name: "YAML hierarchy levels"
    paths:
      - "common.yaml"
      - "nodes/%{facts.fqdn}.yaml"
      - "nodes/%{::trusted.certname}.yaml"
And common.yaml
---
files:
'/etc/httpd/conf/server.crt':
ensure: present
mode: '0600'
owner: 'root'
group: 'root'
content: 'ENC[PKCS7,{LOT_OF_STRING_SKIPPED}+uaCmcHgDAzsPD51soM+AIkIlv0ANpUXzBpwM3tqQ3ysFtz81S0xuVbKvslK]'
But I get an error while applying the manifest:
Error: Evaluation Error: Error while evaluating a Function Call, create_resources(): second argument must be a hash (file: /etc/puppetlabs/code/environments/production/manifests/site.pp, line: 12, column: 1) on node test1.com
I really don't know what to do.
The problem appears to be that the indentation in common.yaml isn't right: currently, files will be null rather than a hash, which explains the error message. Also, the file should be called common.eyaml, otherwise the ENC string won't be decrypted. Try
---
files:
  '/etc/httpd/conf/server.crt':
    ensure: present
    mode: '0600'
    owner: 'root'
    group: 'root'
    content: 'ENC[PKCS7{LOTS_OF_STRING_SKIPPED}UXzBpwM3tqQ3ysFtz81S0xuVbKvslK]'
There is an online YAML parser at http://yaml-online-parser.appspot.com/ if you want to see the difference the indentation makes.
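To make the difference concrete, here is my own reconstruction of the failure mode (the question's exact indentation was lost in formatting, so treat this as an illustration): if the attribute lines are not nested under the certificate path, files parses as null and the path becomes a separate top-level key, which is exactly what triggers the "second argument must be a hash" error:

---
files:
'/etc/httpd/conf/server.crt':
  ensure: present
  mode: '0600'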
I found another solution.
It was a problem with lookup and hashes: when my Hiera hash has multiple entries, I must specify how it is looked up (see https://docs.puppet.com/puppet/4.5/function.html#lookup).
So I decided to look up only the 'content' value:
cat site.pp

$filecrt = lookup('files')
file { 'server.crt':
  ensure  => present,
  path    => '/etc/httpd/conf/server.crt',
  content => $filecrt,
  owner   => 'root',
  group   => 'root',
  mode    => '0600',
}
and in Hiera:
---
files: 'ENC[PKCS7{LOT_OF_STRING_SKIPPED}+uaCmcHgDAzsPD51soM+AIkIlv0ANpUXzBpwM3tqQ3ysFtz81S0xuVbKvslK]'
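One last aside of my own (not part of either answer): if you stay with the hash-of-files layout from the first answer and later split the data across several hierarchy levels (for example common.eyaml plus a per-node file), Hiera 5 can merge the hashes for you when you declare lookup_options next to the data:

---
lookup_options:
  files:
    merge: hash

files:
  '/etc/httpd/conf/server.crt':
    ensure: present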

How to pass Rundeck key storage to script

I created a Rundeck key storage entry and stored a password in it.
Then I created a job option.
Then in the inline script I specified the following (keys/JIRA is the Rundeck key storage path):
curl -XN -u user:keys/JIRA
But the password is not passed and authentication fails. What am I doing wrong?
You can't reference the key storage path directly in the script. Instead, define a secure job option whose storagePath points at the key storage entry and pass that option to the inline script as an argument; Rundeck resolves the value at run time, so the password is expanded when it is passed to the script. Below is an example job definition:
- description: ''
  executionEnabled: true
  id: 1f7f5312-0887-4841-a7ef-1c30f712f927
  loglevel: INFO
  name: How to pass Rundeck key storage to script
  nodeFilterEditable: false
  options:
    - name: JiraPass
      secure: true
      storagePath: keys/jira.password
      valueExposed: true
  scheduleEnabled: true
  sequence:
    commands:
      - args: ${option.JiraPass}
        script: |
          #!/usr/bin/env bash
          jira_password=$1
          echo curl -XN -u "user:$1"
    keepgoing: false
    strategy: node-first
  uuid: 1f7f5312-0887-4841-a7ef-1c30f712f927
