Hiera 5 command not working
I am unable to find the issue with Hiera. I have been trying to understand how Hiera works, but it shows the same result every time.
I have hiera.yaml as:
---
version: 5
defaults:
  datadir: data
  data_hash: yaml_data
hierarchy:
  - name: "Test Message"
    path: "test/%{testname}.yaml"
  - name: "Common"
    path: "common.yaml"
First, it does not look in the data directory but in a hieradata directory.
Once I added the files common.yaml and test/value.yaml and ran the command
hiera msg environment=development
the command returned "Common file".
I have already set up the development environment, and common.yaml contains
---
msg: "Common file"
and test/value.yaml contains
---
msg: "Demo test"
But when I run the command
hiera msg environment=development testname=value
it still returns "Common file".
Please tell me what is wrong here and why I am not getting "Demo test" as the output.
The hiera command line utility should not be used any more; use puppet lookup instead (docs).
If you fix that up, you should be able to correctly look up data using these commands:
▶ FACTER_testname="" puppet lookup msg
--- Common file
and:
▶ FACTER_testname=value puppet lookup msg
--- Demo test
Note that you mention environment=development, but your hierarchy doesn't seem to know about environment, so I ignored that.
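If you do also want an environment-specific level, the hierarchy could gain an entry along these lines (just a sketch; the env/ path layout is my assumption, not something in your current config):
hierarchy:
  - name: "Per environment"
    path: "env/%{environment}.yaml"
  - name: "Test Message"
    path: "test/%{testname}.yaml"
  - name: "Common"
    path: "common.yaml"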
This seems like a silly question to ask, but I'm struggling to create a compressed archive using Ansible and then copy it to a remote host. I'm receiving an error that the target directory/file doesn't exist during a copy task. I've verified that the /home/ansible-admin/app/certs directory exists, but from what I can tell, the zip file is never being created.
---
- hosts: example
  become: yes
  tasks:
    - name: Create cert archive
      archive:
        path:
          - /home/ansible-admin/app/certs
        dest: /home/ansible-admin/app/app_certs.zip
        format: zip
    - name: Copy certs to target servers
      copy:
        src: /home/ansible-admin/app/app_certs.zip
        dest: /home/ubuntu/app_certs.zip
        owner: ubuntu
        group: ubuntu
        mode: "0400"
This is the error message I'm consistently getting
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: If you are using a module and expect the file to exist on the remote, see the remote_src option fatal: [app.example.com]: FAILED! => {"changed": false, "msg": "Could not find or access '/home/ansible-admin/app_certs.zip' on the Ansible Controller.\nIf you are using a module and expect the file to exist on the remote, see the remote_src option"}
I'm hoping I'm just missing something trivial here. But looking at the docs and the yaml file, I'm not seeing where the issue is. https://docs.ansible.com/ansible/latest/collections/community/general/archive_module.html
From the documentation of the archive module:
The source and archive are on the remote host, and the archive is not
copied to the local host.
I think that is your problem.
Even if that path does exist on the remote host and the archive is created there, the copy will still fail, because the archive won't be present on the controller.
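If the certs directory really does live on the managed host and the goal is only to move the resulting archive around on that same host, the copy module's remote_src option (which the error message hints at) could be used instead; a sketch:
- name: Copy the archive on the remote host itself
  copy:
    src: /home/ansible-admin/app/app_certs.zip
    dest: /home/ubuntu/app_certs.zip
    remote_src: true
    owner: ubuntu
    group: ubuntu
    mode: "0400"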
As tink pointed out, I was trying to archive a directory on the remote host that didn't exist, rather than archiving a local directory. I resolved this by adding a play against localhost to be performed before the copy.
- hosts: localhost
  tasks:
    - name: Create cert archive
      archive:
        path:
          - /home/ansible-admin/app/certs
        dest: /home/ansible-admin/app/app_certs.zip
        format: zip
Then copying it to the remote servers and extracting the archive worked as expected.
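For completeness, that follow-up copy and extraction might look roughly like this (the unarchive destination is an assumption on my part):
- hosts: example
  become: yes
  tasks:
    - name: Copy certs to target servers
      copy:
        src: /home/ansible-admin/app/app_certs.zip
        dest: /home/ubuntu/app_certs.zip
        owner: ubuntu
        group: ubuntu
        mode: "0400"
    - name: Extract the archive on the target servers
      # for zip archives, unarchive needs the unzip utility installed on the target
      unarchive:
        src: /home/ubuntu/app_certs.zip
        dest: /home/ubuntu/
        remote_src: true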
I need to replace a variable value inside a file-based GitLab variable, like below.
File based variable:
Name: app_service_dev_env
Value:
iam_role_name="xxxx"
lambda_s3_bucket_name = "xxxxx"
lambda_s3_key="xxxxx"
Variable:
Name: ENV
Value: dev
Below is what I am looking to implement
before_script:
  - cat ${app_service_${ENV}_env} > dev.txt
I am getting this error: ERROR: Job failed: exit code 2
Could anyone please let me know how to resolve this?
The file variable type creates a file rather than a regular environment variable; the key is the path to the file.
before_script:
  - cat app_service_dev_env > dev.txt
Though you could simply name your file variable dev.txt in the first place.
As for your ENV variable, with your current setup you can do something like this:
before_script:
  - cat "app_service_${ENV}_env"
I recently upgraded Puppet from version 3 to version 5. Everything works fine with the new version, but the Hiera configuration for Puppet 5 is not working as expected. I think I am missing something that would deploy the changes to the remote node. Please advise what I should do here. Below is the configuration for my setup.
1) Hiera.yaml
cat /etc/puppetlabs/code/environments/hiera.yaml
version: 5
hierarchy:
  - name: "Master"
    path: "environments/%{environment}/data/%{trusted.certname}.yaml"
    data_hash: yaml_data
    datadir: /etc/puppetlabs/code/
2) My environment YAML files are kept at:
cat /etc/puppetlabs/code/environments/staging/data/puppetsr7.demo.com.yaml
demo::configuration::phpini::memory_limit: '64'
3) But when I run the command on my remote node, nothing changes:
/opt/puppetlabs/bin/puppet agent
4) In order to troubleshoot, I ran the command
puppet lookup --explain demo::configuration::phpini::memory_limit --environment staging --node puppetsr7.demo.com
and got below output
Searching for "lookup_options"
Global Data Provider (hiera configuration version 5)
Using configuration "/etc/puppetlabs/code/environments/hiera.yaml"
Hierarchy entry "Master"
Path "/etc/puppetlabs/code/environments/staging/data/puppetsr7.demo.com.yaml"
Original path: "environments/%{environment}/data/%{trusted.certname}.yaml"
Found key: "lookup_options" value: nil
Module data provider for module "demo" not found
Searching for "demo::configuration::phpini::memory_limit"
Global Data Provider (hiera configuration version 5)
Using configuration "/etc/puppetlabs/code/environments/hiera.yaml"
Hierarchy entry "Master"
Path "/etc/puppetlabs/code/environments/staging/data/puppetsr7.demo.com.yaml"
Original path: "environments/%{environment}/data/%{trusted.certname}.yaml"
Found key: "demo::configuration::phpini::memory_limit" value: "64"
It shows the proper value when run from the CLI, i.e. 64, which I need applied on the remote node in php.ini, changing the value from 512 to 64.
But I don't know how to proceed further from here, as I am stuck now. Please help me troubleshoot this.
What I did was keep the required class in the site.pp file as well, the class I want to have configured through Hiera data:
demo::configuration::phpini::memory_limit: '64' in the Hiera file, and demo::configuration::phpini::memory_limit in site.pp.
Hoping that someone else can get help from it.
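A minimal sketch of what the site.pp side might look like, assuming demo::configuration::phpini is the class that consumes the memory_limit value (the node name is taken from the lookup output above):
node 'puppetsr7.demo.com' {
  include demo::configuration::phpini
}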
I have created a custom module and I would like to keep it within a sub-directory (category), because there are several components that should logically fall under that category. So, to segregate things in a better way, I created the following structure.
- hieradata
- manifests
- modules
  - infra
    - git
      - files
      - manifests
        - init.pp
        - install.pp
        - configure.pp
    - monitoring
    - etc
- templates
$ cat modules/infra/git/manifests/init.pp
class infra::git {}
$ cat modules/infra/git/manifests/install.pp
class infra::git::install {
  file { 'Install Git':
    ...
    ...
  }
}
$ cat manifests/site.pp
node abc.com {
  include infra::git::install
}
Now on the Puppet agent, when I run puppet agent -t, I get the following error:
ruby 2.1.8p440 (2015-12-16 revision 53160) [x64-mingw32]
C:\puppet agent -t
Info: Using configured environment 'production'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Loading facts
Error: Could not retrieve catalog from remote server: Error 500 on SERVER: {"message":"Server Error: Evaluation Error: Error while evaluating a Function Call, Could not find class ::infra::git::install for abc.com at /etc/puppetlabs/code/environments/production/manifests/site.pp:15:2 on node abc.com","issue_kind":"RUNTIME_ERROR"}
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run
I have already read this link, but it suggests keeping the custom module directly under the main modules directory, which is not how I would like to structure the directories.
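For reference, the conventional layout that link suggests would be something like this, which is exactly what I am trying to avoid:
- modules
  - git
    - files
    - manifests
      - init.pp
      - install.pp
      - configure.pp
    - templates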
Any help will really be appreciated.
I am a newbie to Ansible and I am using a very simple playbook to issue sudo apt-get update and sudo apt-get upgrade on a couple of servers.
This is the playbook I am using:
---
- name: Update Servers
  hosts: my-servers
  become: yes
  become_user: root
  tasks:
    - name: update packages
      apt: update_cache=yes
    - name: upgrade packages
      apt: upgrade=dist
and this is an extract from my ~/.ansible/inventory/hosts file:
[my-servers]
san-francisco ansible_host=san-francisco ansible_ssh_user=user ansible_become_pass=<my_sudo_password_for_user_on_san-francisco>
san-diego ansible_host=san-diego ansible_ssh_user=user ansible_become_pass=<my_sudo_password_for_user_on_san-diego>
This is what I get if I launch the playbook:
$ ansible-playbook update-servers-playbook.yml
PLAY [Update Servers] **********************************************************
TASK [setup] *******************************************************************
ok: [san-francisco]
ok: [san-diego]
TASK [update packages] *********************************************************
ok: [san-francisco]
ok: [san-diego]
TASK [upgrade packages] ********************************************************
ok: [san-francisco]
ok: [san-diego]
PLAY RECAP *********************************************************************
san-francisco : ok=3 changed=0 unreachable=0 failed=0
san-diego : ok=3 changed=0 unreachable=0 failed=0
What is bothering me is the fact that I have the password for my user user stored in plaintext in my ~/.ansible/inventory/hosts file.
I have read about vaults, I have also read about the best practices for variables and vaults but I do not understand how to apply this to my very minimal use case.
I also tried to use lookups. In general they also work in the inventory file, and I am able to do something like this:
[my-servers]
san-francisco ansible_host=san-francisco ansible_ssh_user=user ansible_become_pass="{{ lookup('env', 'ANSIBLE_BECOME_PASSWORD_SAN_FRANCISCO') }}"
where in this case the password would be stored in an environment variable called ANSIBLE_BECOME_PASSWORD_SAN_FRANCISCO; but there is no way to look up variables in vaults, as far as I know.
So, how could I organize my file such that I would be able to lookup up my passwords from somewhere and have them safely stored?
You need to create some vaulted variable files and then either include them in your playbooks or on the command line.
If you change your inventory file to use a variable for the become pass, this variable can be vaulted:
[my-servers]
san-francisco ansible_host=san-francisco ansible_ssh_user=user ansible_become_pass='{{ sanfrancisco_become_pass }}'
san-diego ansible_host=san-diego ansible_ssh_user=user ansible_become_pass='{{ sandiego_become_pass }}'
Then use ansible-vault create ~/.ansible/inventory/vault_vars.yml to create a vaulted file with the following contents:
sanfrancisco_become_pass: <my_sudo_password_for_user_on_san-francisco>
sandiego_become_pass: <my_sudo_password_for_user_on_san-diego>
Then either include the vaulted file as extra vars like this:
ansible-playbook -i ~/.ansible/inventory/hosts playbook.yml --ask-vault-pass -e @~/.ansible/inventory/vault_vars.yml
Or include the vars file in your playbook with an include_vars task:
- name: include vaulted variables
  include_vars: ~/.ansible/inventory/vault_vars.yml
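As a sketch of where that task could live, reusing the playbook from the question (become is disabled on the include itself so the vars can be loaded before the become password is known):
- hosts: my-servers
  pre_tasks:
    - name: include vaulted variables
      include_vars: ~/.ansible/inventory/vault_vars.yml
      become: no
  tasks:
    - name: update packages
      apt: update_cache=yes
      become: yes
    - name: upgrade packages
      apt: upgrade=dist
      become: yes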
The best way to solve this problem is to use host_vars. The easiest setup is to just put the ansible_become_pass in Vault encrypted files in the corresponding host_vars directories like this:
myplaybook.yml
host_vars/onehost.com/crypted
host_vars/otherhost.com/crypted
In the crypted files you place the assignment of the ansible_become_pass variable:
ansible_become_pass: SuperSecre3t
Create the file with ansible-vault create, edit it with ansible-vault edit.
Following the advice in the Ansible docs, you then create an additional file per host that assigns ansible_become_pass from a crypted variable with a different name. That way it is possible to search for ansible_become_pass in the project files.
myplaybook.yml
host_vars/onehost.com/plain
host_vars/onehost.com/crypted
host_vars/otherhost.com/plain
host_vars/otherhost.com/crypted
where a plain file contains something like this:
ansible_become_pass: "{{ vaulted_become_pass }}"
and the crypted file sets the vaulted_become_pass like shown above.
All crypted files must be encrypted with the same key and ansible-playbook must be called with --ask-vault-pass.
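The corresponding commands might look like this (the inventory path is an assumption):
# create/edit the per-host crypted files (all with the same vault password)
ansible-vault create host_vars/onehost.com/crypted
ansible-vault create host_vars/otherhost.com/crypted

# run the playbook, prompting once for the vault password
ansible-playbook -i inventory myplaybook.yml --ask-vault-pass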
Start by setting up an inventory with your own relevant settings. These settings assume that you have already set up an RSA key pair to access your server; you should be able to SSH into your server with ssh remoteuser@155.42.88.199
[local]
localhost ansible_connection=local
[remote]
155.42.88.199 ansible_connection=ssh ansible_user=remoteuser ansible_become_user=root ansible_become=yes ansible_ssh_private_key_file=<private_key_file_path>
You need to store your root password in a file (I called mine 'my_vault.yml'). You can do this with the following command:
~/.ansible$ ansible-vault create my_vault.yml
Simply store your remote server password as follows (do not include the '<>' tags):
su_password: <myreallyspecialpassword>
The password will now be encrypted by Vault; to view or change it, enter the following command.
~/.ansible$ ansible-vault edit my_vault.yml
We now need to include our 'my_vault.yml' file in our playbook. We can do this by using vars_files to get the value of su_password. We then set a var named ansible_become_pass to the value from my_vault.yml, which allows our remoteuser to su once on the server.
---
- name: My Awesome Playbook
  hosts: remote
  become: yes
  vars_files:
    - ~/.ansible/my_vault.yml
  vars:
    ansible_become_pass: '{{ su_password }}'
  roles:
    - some_awesome_role
Because we are using Vault, each time we want to run this playbook we need to use the following command.
ansible-playbook myawesome_playbook.yml --ask-vault-pass
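If typing the vault password on every run becomes tedious, ansible-playbook can also read it from a file via --vault-password-file (the path below is just an example):
# keep the vault password in a file only you can read
echo 'my-vault-password' > ~/.vault_pass
chmod 600 ~/.vault_pass

ansible-playbook myawesome_playbook.yml --vault-password-file ~/.vault_pass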