Hiera merging not working - Puppet

I am trying to implement Hiera merging. Here is my hiera.yaml:
---
:hierarchy:
  - fqdn/%{fqdn}
  - roles/%{role}
  - os/%{osfamily}
  - common

:backends:
  - yaml

# options are native, deep, deeper
:merge_behavior: deeper

:yaml:
  :datadir: /etc/puppet/environments/%{environment}/data
Then I have common.yaml:

---
classes:
  - a
  - b

and fqdn/some.host.yaml:

---
classes:
  - c
  - d
Running

hiera --debug -c /etc/puppet/hiera.yaml classes fqdn=some.host environment=development

returns ["c", "d"], and

hiera --debug -c /etc/puppet/hiera.yaml classes fqdn=blablahost environment=development

returns ["a", "b"].
so the "blablahost" take a common.yaml and applied "a" and "b" classes.. but fqdn=some.host should apply a,b,c,d.. and not only c,d ... what am I doing wrong?
Regards

The merge_behavior setting only applies to hash merges. To merge arrays across hierarchy levels you need an array lookup, which on the command line means adding the --array option.
hiera --array -c /etc/puppet/hiera.yaml classes fqdn=some.host environment=development
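With an array lookup, Hiera concatenates the values found at every matching hierarchy level, most specific level first, so for fqdn=some.host you should now get something like:

["c", "d", "a", "b"]

(Inside Puppet manifests, the equivalent of --array is an array lookup such as the hiera_array function.)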

Related

Read variable from file for usage in GitLab pipeline

Given the following very simple .gitlab-ci.yml pipeline:
---
variables:
  KEYCLOAK_VERSION: 20.0.1 # this should be populated from reading a file from the repo...

stages:
  - test

build:
  stage: test
  script:
    - echo "$KEYCLOAK_VERSION"
As you can see, this simply outputs the value of KEYCLOAK_VERSION defined in the variables section.
Now, the Git repository contains an env.properties file with KEYCLOAK_VERSION=20.0.1 as its content. How would I read the variable from that file and use it in the GitLab pipeline?
The documentation mentions import, but that seems to work with YAML files only.
To read variables from a file you can use the source or . command.
script:
  - source env.properties
  - echo $KEYCLOAK_VERSION
Attention: one reason you might not want to do it this way is that whatever is in env.properties will be executed in your shell, including something like rm -rf /, which could be very dangerous.
Maybe you can take a look here for some other solutions.
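A safer alternative (a sketch, assuming env.properties contains plain KEY=value lines) is to parse the value out instead of executing the file:

script:
  # read only the value; nothing from the file gets executed
  - KEYCLOAK_VERSION=$(grep '^KEYCLOAK_VERSION=' env.properties | cut -d= -f2-)
  - echo "$KEYCLOAK_VERSION"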

Ansible copy file to similar path locations

Currently I am trying to replace all Tomcat keystore files in a particular location across multiple nodes. The problem is, the directory structures are similar, but not exactly the same.
For example, our tomcat directory structure looks like this:
/home/tomcat121test/jdk-11.0.7+10.
But across the different nodes, the paths are slightly different. The differences are the tomcat folder name and the jdk folder name.
The structure is /home/tomcat<version_no><test_or_prod>/jdk-<jdk_version>, each folder name being a single word,
e.g. /home/tomcat11test/jdk-11.0.7+10.
So the idea is to use cp as shown in the task named Backup the current keystore: cp -p /home/tomcat*/jdk*/keystore /home/tomcat*/jdk*/keystore_old_2021
My playbook currently looks like this:

---
- name: Update Tomcat Test Servers Keystore
  hosts: tomcattest_servers
  gather_facts: false
  tasks:
    - name: ls the jdk dir
      shell: ls -lah /home/tomcat*/jdk*/bin/
      register: ls_command_output

    - debug:
        var: ls_command_output.stdout_lines

    - name: Backup the current keystore
      shell: >
        cp -p /home/tomcat*/jdk*/keystore /home/tomcat*/jdk*/keystore_old_2021

    - name: Verify copy took place
      shell: ls -lah /home/tomcat*/jdk*/bin
      register: ls_command_output

    - debug:
        var: ls_command_output.stdout_lines
The task named Backup the current keystore is where it seems to be failing:
TASK [Backup the current keystore] *********************************************
fatal: [tomcattest1]: FAILED! => {"changed": true, "cmd": "cp -p /home/tomcat*/jdk*/keystore /home/tomcat*/jdk*/keystore_old_2021\n",
"delta": "0:00:00.005322", "end": "2022-03-13 18:57:06.091283", "msg": "non-zero return code", "rc": 1,
"start": "2022-03-13 18:57:06.085961",
"stderr": "cp: cannot stat ‘/home/tomcat*/jdk*/keystore’: No such file or directory",
"stderr_lines": ["cp: cannot stat ‘/home/tomcat*/jdk*/keystore’: No such file or directory"],
"stdout": "", "stdout_lines": []}
The task named ls the jdk dir works fine, and both tasks use the shell module, which, as I understand it, is needed instead of the command module when wildcards are used.
Here is how I would rephrase, then approach your requirement.
Problem statement:
In /home, we have to find an unknown folder matching the pattern tomcat.*.
In the folder found above, we have to find an unknown folder matching the pattern jdk.*.
In the folder found above, I want to ship a new file, backing up the state of the existing file prior to copying.
Applying the DRY (Don't Repeat Yourself) principle:
Clearly the first and second points of our problem statement have the same shape, so it would be nice to have some mechanism answering the requirement: "for a given path, return a unique folder matching a pattern".
Solution:
Ansible has multiple ways of helping you create sets of tasks that you can reuse. Here is a non-exhaustive list of two of them:
roles: a quite extensive way to reuse multiple Ansible artifacts, including, but not limited to, tasks, variables, handlers and files (see the layout sketch below)
the include_tasks module, which allows you to load an arbitrary YAML file containing a list of tasks
Because a role is a quite extensive mechanism requiring the creation of a set of folders unrelated to this solution, I am going to demonstrate this using the include_tasks module; depending on your needs and reusability considerations, creating a role might be a better bet.
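For reference, a hypothetical role wrapping the same logic could be laid out like this (all names are illustrative):

roles/
  find_exactly_one_folder/
    tasks/
      main.yml    # would contain the find / set_fact / assert tasks shown below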
So, here is the YAML that we will use in the include_tasks:
a find task based on a given folder
an extraction of the folder matching the given pattern out of the result of the find task, using the selectattr filter and the match test.
an assertion that we have a unique folder matching our pattern
This gives us a file, called here find_exactly_one_folder.yml:
- find:
    paths: "{{ root_folder }}"
    file_type: directory
  register: find_exactly_one_folder

- set_fact:
    found_folder: >-
      {{
        find_exactly_one_folder.files
        | selectattr('path', 'match', root_folder ~ '/' ~ folder_match)
        | map(attribute='path')
        | list
      }}

- assert:
    that:
      - found_folder | length == 1
    fail_msg: >-
      Did not find exactly one folder, result: `{{ found_folder }}`.
    success_msg: >-
      {{ found_folder.0 | default('') }} found
Now that we have this "for a given path, return a unique folder matching a pattern" mechanism, we can have a playbook doing:
Find a unique folder matching the pattern tomcat.* from /home
Find a unique folder matching the pattern jdk.* from the folder resulting from the previous task
Copy the new file into the found folder, using the copy module's built-in backup mechanism
This results in this set of tasks:
- include_tasks:
    file: find_exactly_one_folder.yml
  vars:
    root_folder: /home
    folder_match: 'tomcat.*'

- include_tasks:
    file: find_exactly_one_folder.yml
  vars:
    root_folder: "{{ found_folder.0 }}"
    folder_match: 'jdk.*'

- copy:
    src: keystore
    dest: "{{ found_folder.0 }}/keystore"
    backup: true
Here is an example playbook, ending with an extra find and debug task to demonstrate that the resulting backup file is created:
- hosts: node1
  gather_facts: no
  tasks:
    - include_tasks:
        file: find_exactly_one_folder.yml
      vars:
        root_folder: /home
        folder_match: 'tomcat.*'

    - include_tasks:
        file: find_exactly_one_folder.yml
      vars:
        root_folder: "{{ found_folder.0 }}"
        folder_match: 'jdk.*'

    - copy:
        src: keystore
        dest: "{{ found_folder.0 }}/keystore"
        backup: true

    - find:
        paths: "{{ found_folder.0 }}"
        patterns: "keystore*"
      register: keystores

    - debug:
        var: keystores.files | map(attribute='path')
This would yield:
PLAY [node1] **************************************************************

TASK [include_tasks] ******************************************************
included: /usr/local/ansible/find_exactly_one_folder.yml for node1

TASK [find] ***************************************************************
ok: [node1]

TASK [set_fact] ***********************************************************
ok: [node1]

TASK [assert] *************************************************************
ok: [node1] => changed=false
  msg: |-
    /home/tomcat11test found

TASK [include_tasks] ******************************************************
included: /usr/local/ansible/find_exactly_one_folder.yml for node1

TASK [find] ***************************************************************
ok: [node1]

TASK [set_fact] ***********************************************************
ok: [node1]

TASK [assert] *************************************************************
ok: [node1] => changed=false
  msg: |-
    /home/tomcat11test/jdk-11.0.7+10 found

TASK [copy] **************************************************************
changed: [node1]

TASK [find] **************************************************************
ok: [node1]

TASK [debug] *************************************************************
ok: [node1] =>
  keystores.files | map(attribute='path'):
  - /home/tomcat11test/jdk-11.0.7+10/keystore.690.2022-03-13#22:11:08~
  - /home/tomcat11test/jdk-11.0.7+10/keystore
After reviewing what @β.εηοιτ.βε mentioned about using find, I went back and tried some other things before implementing that suggestion (I just needed something quick and fast):
- name: Find the tomcat*/jdk*/bin/keystore, make copy of
  shell: find /home/ -iname keystore -exec cp -p "{}" "{}_old_2021" \;

- name: Check for copied keystore
  shell: ls -lah /home/tomcat*/jdk*/bin/keystore*
  register: ls_command_output

- debug:
    var: ls_command_output.stdout_lines
This did exactly what I needed.
UPDATE:
While the above command worked, an issue came up when it was time to use the copy module to ship the replacement keystore:
- name: Copy New Keystore to Tomcat TEST servers
  copy:
    src: /opt/ansible/playbooks/ssl-renew/keystore
    dest: /home/tomcat*/jdk*/bin/
Note the destination path: wildcards do not work there, so I had to specify the exact path.
So I will be looking more into @β.εηοιτ.βε's solution above.
I also wanted to say thank you for the detailed and descriptive response to my initial post.
I also looked into the fileglob lookup, but the first note on the Ansible documentation page for fileglob states:
"Patterns are only supported on files, not directory/paths."

Array variable inside .gitlab-ci.yml yaml

I want to use arrays in the variables of my GitLab CI/CD YAML file, something like this:

variables:
  myarray: ['abc', 'dcef']
....
script: |
  echo myarray[0] myarray[1]
But Lint tells me that the file is incorrect:
variables config should be a hash of key value pairs, value can be a hash
I then tried:

variables:
  arr[0]: 'abc'
  arr[1]: 'cde'
....
script: |
  echo $arr[0] $arr[1]

But the build failed and printed a bash error:
bash: line 128: export: `arr[0]': not a valid identifier
Is there any way to use array variable in .gitlab-ci.yml file?
According to the docs, this is what you should be doing:
It is not possible to create a CI/CD variable that is an array of values, but you can use shell scripting techniques for similar behavior.
For example, you can store multiple variables separated by a space in a variable, then loop through the values with a script:
job1:
  variables:
    FOLDERS: src test docs
  script:
    - |
      for FOLDER in $FOLDERS
      do
        echo "The path is root/${FOLDER}"
      done
After some investigation I found a workaround; perhaps it may be useful for somebody:

variables:
  # Name of the set to use
  targetType: 'test'
  # Variable from set "test"
  X_test: 'TestValue'
  # Variable from set "dev"
  X_dev: 'DevValue'
  # Name of the variable from the effective set
  X_curName: 'X_$targetType'
.....
script: |
  echo Variable X_ is ${!X_curName} # prints TestValue
Another approach you could follow is to use a matrix of jobs, which will create a job per array entry.
deploystacks:
  stage: deploy
  parallel:
    matrix:
      - PROVIDER: aws
        STACK: [monitoring, app1]
      - PROVIDER: gcp
        STACK: [data]
  tags:
    - ${PROVIDER}-${STACK}
Here are the GitLab docs regarding matrix:
https://docs.gitlab.com/ee/ci/jobs/job_control.html#run-a-one-dimensional-matrix-of-parallel-jobs
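For reference, this matrix expands into one job per combination, with the generated jobs named after the variable values, so you would get three jobs along the lines of:

deploystacks: [aws, monitoring]
deploystacks: [aws, app1]
deploystacks: [gcp, data]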

Ansible turn files item into strings

I'm trying to find a clean way of converting a list of found files into a list of path strings.
So far I came up with this:

- name: Get apt source files
  find:
    paths: /etc/apt/sources.list.d
    use_regex: yes
    patterns: '^.*\.list$'
  register: source_files

- name: Loop through source files
  when:
    - item != SOME_VAR
    - DO_CLEAN_UP
  lineinfile:
    path: "{{ item }}"
    regexp: "^deb {{ REPO_CLEAN_URL }}" # set in vars/main.yml
    state: absent
  with_items:
    - /etc/apt/sources.list
    - "{{ source_files.files | items2dict(key_name='path', value_name='path') | list }}"
I would like to improve the "with_items" part please.
There is no such thing as a file object in Ansible playbooks; you only work with the basic data types supported by YAML/JSON: string, list, mapping (dictionary), integer and boolean.
When in doubt, make use of the debug module to display variables and see how Jinja2 constructs are evaluated; it works well even with loops.
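For example (a sketch): the usual way to project the path attribute out of find results is the map filter, which would simplify the with_items above to:

- name: Loop through source files
  when:
    - item != SOME_VAR
    - DO_CLEAN_UP
  lineinfile:
    path: "{{ item }}"
    regexp: "^deb {{ REPO_CLEAN_URL }}" # set in vars/main.yml
    state: absent
  with_items:
    - /etc/apt/sources.list
    - "{{ source_files.files | map(attribute='path') | list }}"

Since with_items flattens nested lists one level, the static /etc/apt/sources.list and the mapped paths end up in a single flat loop.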

How to exclude all files except one in gitlab-ci artifacts

I have a folder structure like
foo/bar/lorem/a.txt
foo/bar/lorem/b.txt
foo/bar/lorem/c.ext
foo/bar/ipsum/p.txt
foo/bar/ipsum/q.ext
In GitLab CI's YAML artifacts, I want to include everything in foo/bar, exclude *.txt, but include b.txt.
The GitLab CI reference for artifacts says that:
Wildcards can be used that follow the glob patterns and golang's filepath.Match.
Try 1:

job1:
  artifacts:
    paths:
      - foo/bar/
      - foo/bar/lorem/b.txt
    exclude:
      - foo/bar/**/*.txt
Try 2:

job1:
  artifacts:
    paths:
      - foo/bar/
    exclude:
      - foo/bar/**/!(b).txt
Expected output:
foo/bar/lorem/b.txt
foo/bar/lorem/c.ext
foo/bar/ipsum/q.ext
What paths and exclude combination do I use to achieve this?
You can do this:

job1:
  artifacts:
    paths:
      - foo/bar/lorem/b.txt
      - foo/bar/*/*.ext
Although the previous answer is good and correct, it relies on a particular case where there are only two file types. A whole hierarchy with multiple file types, possibly different for each job, makes it very hard to apply to other cases.
Here is an answer that is a bit more involved to set up but covers more cases, if someone else is looking for one:
At the end of your script section, delete the files you don't want. For example:

job1:
  script:
    # ... perform the actions you already do ...
    - mkdir keep                   # create a place to temporarily keep your files
    - mv foo/bar/lorem/b.txt keep/ # move the file to the safe place
    - rm foo/bar/*/*.txt           # delete all the files you want to exclude
    - mv keep/b.txt foo/bar/lorem/ # put back the file you want to keep
  artifacts:
    paths:
      - foo/bar/ # save everything; the *.txt files are already removed
Note for the anxious: CI jobs run in a container (at least that's the default behaviour), and the files you work on are a copy of your project. You can delete whatever you want; nothing will be lost outside the container.
