I've recently started using Ansible, and especially AWX, more and more for simple repetitive tasks. Below is a playbook for downloading, installing, and configuring logging via a Bash script. It targets two hosts, Ubuntu 20.04 and CentOS 7.6, and for the latter some changes to SELinux are required.
The question is: why am I getting an error on the Ubuntu host only, and not on CentOS as well?
Here is the playbook:
# Download and run the Nagios Log Server configuration script
---
- name: nagios-log configure
  hosts: all
  remote_user: root
  tasks:
    - name: Distribution
      debug: msg="{{ ansible_distribution }}"
    - name: Download setup-linux.sh
      get_url:
        url: http://10.10.10.10/nagioslogserver/scripts/setup-linux.sh
        validate_certs: no
        dest: /tmp/setup-linux.sh
    - name: Change script permission
      file: dest=/tmp/setup-linux.sh mode=a+x
    - name: Run setup-linux.sh
      shell: /tmp/setup-linux.sh -s 10.10.10.10 -p 5544
      register: ps
      failed_when: "ps.rc not in [ 0, 1 ]"
    - name: Install policycoreutils if needed
      yum:
        name:
          - policycoreutils
          - policycoreutils-python
        state: latest
      when: ansible_distribution == 'CentOS'
    - name: Check if policy file exists
      stat:
        path: /etc/selinux/targeted/active/ports.local
      register: result
      when: ansible_distribution == 'CentOS'
    - name: Check whether line exists
      find:
        paths: /etc/selinux/targeted/active/ports.local
        contains: '5544'
      register: found
      when: result.stat.exists == True
    - name: Add SELinux policy exception if missing
      command: semanage port -a -t syslogd_port_t -p udp 5544
      when: found.matched > 0
    - name: Restart rsyslog
      systemd:
        name: rsyslog
        state: restarted
        enabled: yes
And here is the error output when running the playbook on AWX:
TASK [Check whether line exists] ***********************************************
fatal: [Ubuntu.domain.corp]: FAILED! => {"msg": "The conditional check 'result.stat.exists == True' failed. The error was: error while evaluating conditional (result.stat.exists == True): 'dict object' has no attribute 'stat'\n\nThe error appears to be in '/tmp/awx_154_1811rny6/project/nagios-log.yml': line 39, column 5, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - name: Check whether line exists\n ^ here\n"}
ok: [Centos.domain.corp]
For reasons I can't comprehend, the CentOS server is fine, but the Ubuntu one fails with an error that I don't understand. I've tried other methods to achieve the same logic as the when condition.
You get this error because you register the variable result in:
- name: Check if policy file exists
  stat:
    path: /etc/selinux/targeted/active/ports.local
  register: result
  when: ansible_distribution == 'CentOS'
But because of when: ansible_distribution == 'CentOS', this task does not run on Ubuntu, and therefore the variable result does not exist when the playbook runs on Ubuntu.
To fix this (and also run the tasks that use result only on CentOS), you can change them to this:
- name: Check whether line exists
  find:
    paths: /etc/selinux/targeted/active/ports.local
    contains: '5544'
  register: found
  when:
    - ansible_distribution == 'CentOS'
    - result.stat.exists == True
- name: Add SELinux policy exception if missing
  command: semanage port -a -t syslogd_port_t -p udp 5544
  when:
    - ansible_distribution == 'CentOS'
    - found.matched > 0
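Another common way to guard a task that consumes a possibly skipped register (an alternative not shown in the original answer) is to check that the registered attribute is actually defined, for example:
- name: Check whether line exists
  find:
    paths: /etc/selinux/targeted/active/ports.local
    contains: '5544'
  register: found
  when: result.stat is defined and result.stat.exists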
Or you can put all CentOS specific tasks in a block like this:
- name: CentOS specific tasks
  block:
    - name: Install policycoreutils if needed
      yum:
        name:
          - policycoreutils
          - policycoreutils-python
        state: latest
    - name: Check if policy file exists
      stat:
        path: /etc/selinux/targeted/active/ports.local
      register: result
    - name: Check whether line exists
      find:
        paths: /etc/selinux/targeted/active/ports.local
        contains: '5544'
      register: found
      when: result.stat.exists == True
    - name: Add SELinux policy exception if missing
      command: semanage port -a -t syslogd_port_t -p udp 5544
      when: found.matched > 0
  when: ansible_distribution == 'CentOS'
Or you can put them in their own file and include that file. There are actually a lot of ways to do this.
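For illustration, a minimal sketch of the include approach (the file name centos.yml is just an assumption here):
- name: Run CentOS specific tasks
  include_tasks: centos.yml
  when: ansible_distribution == 'CentOS'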
I have tested this playbook with update tasks, so I know that the credentials work, as well as the elevation to sudo. I have a test server with an existing /var/run/reboot-required file, but I cannot get my Ansible playbook to reboot the server. This is an Ubuntu server. The playbook is currently:
---
- hosts: server
  vars:
    ansible_user: sudo_user
    ansible_password: "password"
    become: yes
    become_user: sudo_user
    tasks:
      - name: Check if reboot required
        stat:
          path: /var/run/reboot-required
        register: reboot_required_file
      - name: Reboot if required
        reboot:
        when: reboot_required_file.stat.exists == true
I've tried variations of this playbook and I can't get it to reboot the server. The playbook returns:
PLAY [server] *******************************************************************************************************************************************************************
TASK [Gathering Facts] **********************************************************************************************************************************************************
ok: [server]
PLAY RECAP **********************************************************************************************************************************************************************
server : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
I've also tried just doing a shell command:
- name:
  shell: if [ -f /var/run/reboot-required ]; then init 6; else wall "reboot not required"; fi
  ignore_errors: true
This also doesn't work.
Cheers
You may need to change the alignment (indentation) of the tasks in the playbook. I performed the operation successfully by running the following playbook:
---
- hosts: nodes
  become: yes
  become_user: sudo_user
  tasks:
    - name: Check if reboot required
      stat:
        path: /var/run/reboot-required
      register: reboot_required_file
    - name: Display variable
      debug:
        msg: '{{ reboot_required_file.stat.exists }}'
    - name: Reboot if required
      reboot:
      when: reboot_required_file.stat.exists == true
To supply the sudo password, you can run the playbook as follows and enter the password at the prompt:
ansible-playbook playbook.yml --ask-become-pass
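Alternatively (this is an assumption on my part, not part of the original answer), the become password can be prompted for from the playbook itself by adding a vars_prompt section at the play level:
  vars_prompt:
    - name: ansible_become_password
      prompt: "Sudo password"
      private: yes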
I've been trying to remove some Java packages and reinstall them to work around a bug on Rocky Linux, but I'm having trouble doing so with the dnf module.
My problem might come from using the shell command "rpm -qa | grep java" to gather the packages that I need to reinstall, but I just can't tell.
Here's my code:
---
- name: Rocky | Java reinstall to prevent bugs
  hosts: "fakeHost"
  gather_facts: false
  become: true
  tasks:
    # Ping the server
    - name: Test reachability
      ping:
    # Check if the path exists
    - name: Check java file path
      stat:
        path: /usr/lib/jvm/java
      register: dir_name
    # Report if the dir exists
    - name: Report if the dir exists
      debug:
        msg: "The directory exists"
      when:
        - dir_name.stat.exists
    # Load up all the java packages that the machine has
    - name: grep all java files
      shell: "rpm -qa | grep java"
      args:
        warn: false # suppress the warning about using rpm instead of a module
      register: java_files
      when:
        - dir_name.stat.exists
    # Display all the java packages of the machine
    - name: Show all java files
      debug:
        msg: "{{ item }}"
      loop:
        - "{{ java_files.stdout_lines }}"
      when:
        - dir_name.stat.exists
    # Uninstall each java package with the dnf module
    - name: Uninstall all the java files
      dnf:
        name: "{{ item }}"
        state: absent
        autoremove: no
      loop:
        - "{{ java_files.stdout_lines }}"
      when:
        - dir_name.stat.exists
    # Install each java package with the dnf module
    - name: Install all the java files
      dnf:
        name: "{{ item }}"
        state: present
      loop:
        - "{{ java_files.stdout_lines }}"
      when:
        - dir_name.stat.exists
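One thing worth noting (an observation, not from the original post): writing loop: with a single item that is itself a list makes each task iterate once over the whole list instead of over the individual package names. A sketch of the usually intended form, using the uninstall task as an example:
- name: Uninstall all the java files
  dnf:
    name: "{{ item }}"
    state: absent
    autoremove: no
  loop: "{{ java_files.stdout_lines }}"
  when:
    - dir_name.stat.exists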
I have written a playbook task in Ansible, and I am able to run the playbook on Linux.
- name: Set paths for go
  blockinfile:
    path: $HOME/.profile
    backup: yes
    state: present
    block: |
      export PATH=$PATH:/usr/local/go/bin
      export GOPATH=$HOME/go
      export FABRIC_CFG_PATH=$HOME/.fabdep/config
- name: Load Env variables
  shell: source $HOME/.profile
  args:
    executable: /bin/bash
  register: source_result
  become: yes
On Linux we have a .profile in the home directory, but on macOS there is no .profile; macOS uses .bash_profile instead.
So I want to check the OS: if it is macOS, the path should be $HOME/.bash_profile, and if it is Linux-based, it should look for $HOME/.profile.
I have tried adding
when: ansible_distribution == 'Ubuntu' and ansible_distribution_release == 'precise'
But firstly it does not work, and secondly it is a lengthy approach. I want to get the path into a variable based on the OS and use that.
Thanks
I found a solution. I added gather_facts: true at the top of the YAML file and it started working, and I then used the ansible_distribution variable.
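For example, something along these lines should work once facts are gathered (a sketch, not my exact playbook; the variable name profile_file is just illustrative, and ansible_system reports 'Darwin' on macOS):
- name: Pick the profile file per OS
  set_fact:
    profile_file: "{{ '.bash_profile' if ansible_system == 'Darwin' else '.profile' }}"
- name: Set paths for go
  blockinfile:
    path: "$HOME/{{ profile_file }}"
    backup: yes
    state: present
    block: |
      export PATH=$PATH:/usr/local/go/bin
      export GOPATH=$HOME/go
      export FABRIC_CFG_PATH=$HOME/.fabdep/config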
Thanks
An option would be to include_vars from files. See the example below.
- name: "OS specific vars (will overwrite /vars/main.yml)"
include_vars: "{{ item }}"
with_first_found:
- files:
- "{{ ansible_distribution }}-{{ ansible_distribution_release }}.yml"
- "{{ ansible_distribution }}.yml"
- "{{ ansible_os_family }}.yml"
- "default.yml"
paths: "{{ playbook_dir }}/vars"
skip: true
- name: Set paths for go
blockinfile:
path: "$HOME/{{ my_profile_file }}"
[...]
In the playbook's directory, create a vars directory and create the files:
# cat vars/Ubuntu.yml
my_profile_file: ".profile"
# cat vars/MacOSX.yml
my_profile_file: ".bash_profile"
If you have managed hosts with different OS, group them by OS in your inventory:
[Ubuntu]
ubu1
ubu2
[RHEL6]
RH6_1
[RHEL7]
RH7_1
RH7_2
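With groups like these, another option (a sketch of an alternative, not part of the original answer) is to set the variable per group via group_vars files instead of include_vars:
# group_vars/Ubuntu.yml
my_profile_file: ".profile"
# group_vars/RHEL7.yml
my_profile_file: ".bash_profile"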
I am trying to update permissions on all the shell scripts in a particular directory on remote servers using Ansible, but it gives me an error:
- name: update permissions
  file: dest=/home/goldy/scripts/*.sh mode=a+x
This is the error I am getting:
fatal: [machineA]: FAILED! => {"changed": false, "msg": "file (/home/goldy/scripts/*.sh) is absent, cannot continue", "path": "/home/goldy/scripts/*.sh", "state": "absent"}
to retry, use: --limit #/var/lib/jenkins/workspace/copy/copy.retry
What am I doing wrong here?
You should run a task with the find module to collect all the .sh files in that directory and register the result in a variable. Then run a second task with the file module to update the permissions on each file that was found.
Here is a sample playbook:
- hosts: localhost
  gather_facts: false
  vars:
  tasks:
    - name: parse /tmp directory
      find:
        paths: /tmp
        patterns: '*.sh'
      register: list_of_files
    - debug:
        var: item.path
      with_items: "{{ list_of_files.files }}"
    - name: change permissions
      file:
        path: "{{ item.path }}"
        mode: a+x
      with_items: "{{ list_of_files.files }}"
In a playbook, I copy files using sudo. It used to work, until we migrated to Ansible 1.9. Since then, it fails with the following error message:
"ssh connection closed waiting for sudo password prompt"
I provide the ssh and sudo passwords (through the Ansible prompt), and all the other commands running through sudo are successful (only the file copy and template fail).
My command is:
ansible-playbook -k --ask-become-pass --limit=testhost -C -D playbooks/debug.yml
and the playbook contains:
- hosts: designsync
  gather_facts: yes
  tasks:
    - name: Make sure the syncmgr home folder exists
      action: file path=/home/syncmgr owner=syncmgr group=syncmgr mode=0755 state=directory
      sudo: yes
      sudo_user: syncmgr
    - name: Copy .cshrc file
      action: copy src=roles/designsync/files/syncmgr.cshrc dest=/home/syncmgr/.cshrc owner=syncmgr group=syncmgr mode=0755
      sudo: yes
      sudo_user: syncmgr
Is this a bug or did I miss something?
François.
Your playbook should look like this:
- hosts: designsync
  gather_facts: yes
  tasks:
    - name: Make sure the syncmgr home folder exists
      sudo: yes
      sudo_user: syncmgr
      file:
        path: "/home/syncmgr"
        owner: syncmgr
        group: syncmgr
        mode: 0755
        state: directory
    - name: Copy .cshrc file
      sudo: yes
      sudo_user: syncmgr
      copy:
        src: "roles/designsync/files/syncmgr.cshrc"
        dest: "/home/syncmgr/.cshrc"
        owner: syncmgr
        group: syncmgr
        mode: 0755
Depending on the exact version of Ansible you're using, there may be a bug with sudo_user (I experienced it myself).
Try changing your playbooks from "sudo_user" to "remote_user".
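For what it's worth, current Ansible versions have replaced the sudo/sudo_user directives with become/become_user, so the equivalent task would look roughly like this (a sketch, not part of the original answer):
- name: Copy .cshrc file
  become: yes
  become_user: syncmgr
  copy:
    src: "roles/designsync/files/syncmgr.cshrc"
    dest: "/home/syncmgr/.cshrc"
    owner: syncmgr
    group: syncmgr
    mode: "0755"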