Ansible task write to local log file - linux

Using Ansible I would like to be able to write the stdout of a task running a command to a local (i.e. on the managed server) log file.
For the moment I can only do this using a task like this:
- name: Run my command
  shell: <command> <arg1> <arg3> ... | tee -a <local log file>
The reason for this is that the command takes a long time to complete (i.e. we cannot wait until it finishes to get its output) and we would like to collect the output during its execution.
Is there any "Ansible" way to redirect the stdout of the command to a local log file during its execution, without using the tee pipe?

This doesn't 100% answer your question, as you won't get a constantly updating file on your managed server, but you could use async commands:
# Requires ansible 1.8+
- name: 'YUM - async task'
  yum:
    name: docker-io
    state: installed
  async: 1000
  poll: 0
  register: yum_sleeper

- name: 'YUM - check on async task'
  async_status:
    jid: "{{ yum_sleeper.ansible_job_id }}"
  register: job_result
  until: job_result.finished
  retries: 30
and then dump the output to a file. Note that because the first task runs with poll: 0, the registered yum_sleeper only holds the job id; the command's output ends up in job_result once the job has finished:

- name: save log locally
  copy:
    content: '{{ job_result.stdout }}'
    dest: file.log
  delegate_to: localhost
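If what you really want is a log file on the managed server itself without the tee pipe, a minimal sketch is to redirect stdout inside the shell task and run it asynchronously, so the file fills up on the remote host while the command runs (keeping the question's <command> placeholders; /var/log/mycommand.log is an illustrative path):

- name: Run my command, logging on the managed server
  shell: "<command> <arg1> <arg3> > /var/log/mycommand.log 2>&1"
  async: 3600   # allow up to an hour for the command to finish
  poll: 0       # fire and forget; the log grows while the command runs
  register: my_command_job

You can then tail -f the file on the managed server during execution and still check on the job later with async_status, as above.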

Related

Airflow task is failing with ERROR! A worker was found in a dead state

This task uses an ansible-playbook to submit a Spark job. The Spark job completes, but the Airflow task fails with:
ERROR! A worker was found in a dead state.
This is what my YAML looks like:
# test.yml
- hosts: all
  become: true
  become_user: user
  tasks:
    - name: change the working directory to /home/ and runs command.
      shell: "/path_to_spark_submit/spark-submit --master yarn-client <spark-submit>"
      args:
        chdir: /path_to_spark_submit/
      ignore_errors: True
      register: command_result

    - name: change the working directory to /home/ and runs command.
      shell: <some_command>
      args:
        chdir: /tmp
      ignore_errors: True
      when: command_result.rc != 0

    - fail:
        msg: extract was failed
      when: command_result.rc != 0
I tried removing the register variable; in case of both success and failure of the job, the playbook moves on to the next task without any issue.
One more thing: I am getting a huge log from the cluster. How can I limit these logs?
What can be the possible cause of this error?

Ansible return code error: 'dict object' has no attribute 'rc'

I am using Ansible to automate the installation, configuration and deployment of an application server which uses JBoss, so I need to use the built-in jboss-cli to deploy packages.
This Ansible task is literally the last stage to run; it simply needs to check if a deployment already exists and, if it does, undeploy it and redeploy it (to be idempotent).
Running the commands below manually on the server and checking the return code after each command works as expected; something, somewhere in Ansible refuses to read the return codes correctly!
# BLAZE RMA DEPLOYMENT
- name: Check if Blaze RMA has been assigned to dm-server-group from a previous Ansible run
  shell: "./jboss-cli.sh --connect '/server-group=dm-server-group/deployment={{ blaze_deployment_version }}:read-resource()' | grep -q success"
  args:
    chdir: "{{ jboss_home_dir }}/bin"
  register: blaze_deployment_status
  failed_when: blaze_deployment_status.rc == 2
  tags: # We always need to check this as the output determines whether or not we need to undeploy an existing deployment.
    - skip_ansible_lint

- name: Undeploy Blaze RMA if it has already been assigned to dm-server-group from a previous Ansible run
  command: "./jboss-cli.sh --connect 'undeploy {{ blaze_deployment_version }} --all-relevant-server-groups'"
  args:
    chdir: "{{ jboss_home_dir }}/bin"
  when: blaze_deployment_status.rc == 0
  register: blaze_undeployment_status

- name: Deploy Blaze RMA once successfully undeployed
  command: "./jboss-cli.sh --connect 'deploy {{ jboss_deployment_dir }}/{{ blaze_deployment_version }} --server-groups=dm-server-group'"
  args:
    chdir: "{{ jboss_home_dir }}/bin"
  when: blaze_undeployment_status.rc == 0 or blaze_deployment_status.rc == 1
Any advice would be appreciated.
Your second task contains a when clause. If it is skipped, Ansible still registers the variable, but there is no rc attribute in the data.
You need to take this into consideration when using the variable in the next task. The following condition on the last task should fix your issue:
when: blaze_undeployment_status.rc | default('') == 0 or blaze_deployment_status.rc == 1
The same situation can also arise when you run ansible in --check mode.
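An equivalent guard for the last task, sketched with Ansible's built-in skipped test instead of default(), makes the intent a bit more explicit:

- name: Deploy Blaze RMA once successfully undeployed
  command: "./jboss-cli.sh --connect 'deploy {{ jboss_deployment_dir }}/{{ blaze_deployment_version }} --server-groups=dm-server-group'"
  args:
    chdir: "{{ jboss_home_dir }}/bin"
  # only consult rc when the undeploy task actually ran
  when: >
    (blaze_undeployment_status is not skipped and blaze_undeployment_status.rc == 0)
    or blaze_deployment_status.rc == 1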

ansible - Nothing happens when executing InstallHalyard.sh script via ansible-playbook

I'm trying to make a playbook to install halyard on a remote host using ansible-playbook. One of the tasks is to execute the "InstallHalyard.sh" wrapper script for the actual installer (reference: https://www.spinnaker.io/setup/install/halyard/).
No error or failure is reported, but nothing changes either, as if the InstallHalyard.sh script were never executed. It works fine if I run the command manually, and the other, similar tasks work perfectly.
You can see the InstallHalyard.sh script implementation: here
Any idea about what is happening?
Here is the task in my playbook:
- name: Run InstallHalyard.sh
  become: yes
  become_user: sudo
  shell: "{{ ansible_env.HOME }}/.spinnaker/InstallHalyard.sh"
Any help would be appreciated, thank you very much :)
EDIT:
Already tried using the script, command, and shell modules.
FYI, the InstallHalyard.sh script calls itself, passing an env variable, and needs to do curl -O.
I suspect the "export" operation inside the script doesn't work, as Ansible handles environment variables differently. (For example, Ansible will not recognize "$HOME"; instead it uses "{{ ansible_env.HOME }}".)
EDIT:
I found that the script does an operation that forks; does Ansible handle this kind of operation?
Tested on localhost, got the same result.
SOLUTION
It is because of Ansible's different handling of environment variables. When I execute the script manually, the wrapper sets an environment variable and passes it on to the next script calls, which Ansible is not able to do. So I set the environment variable manually with Ansible's environment keyword before executing the script (just two extra lines).
Here is my revised task:
- name: Run InstallHalyard.sh
  become: yes
  become_user: sudo
  environment:
    HAL_USER: "{{ ansible_env.USER }}"
  shell: "{{ ansible_env.HOME }}/.spinnaker/InstallHalyard.sh"
The question is really: how do I debug the output of a failing command? I'm guessing the command is failing with a useful error message, but that message is not being displayed anywhere.
Here is an example to help debug this failing script:
- name: Run InstallHalyard.sh
  become: yes
  become_user: sudo
  shell: "{{ ansible_env.HOME }}/.spinnaker/InstallHalyard.sh"
  register: output

- debug: msg="{{ output.stdout }}"
- debug: msg="{{ output.stderr }}"
Try that and see what error message it gives you.

Running Forever in Ansible Provision Never Fires or Always Hangs

I've come across a problem with Ansible hanging when trying to start a forever process on an Ansible node. I have a very simple API server that I'm creating in Vagrant and provisioning with Ansible, like so:
---
- hosts: all
  sudo: yes
  roles:
    - Stouts.nodejs
    - Stouts.mongodb
  tasks:
    - name: Install Make Dependencies
      apt: name={{ item }} state=present
      with_items:
        - gcc
        - make
        - build-essential
    - name: Run NPM Update
      shell: /usr/bin/npm update
    - name: Create MongoDB Database Folder
      shell: /bin/mkdir -p /data/db
      notify:
        - mongodb restart
    - name: Generate Dummy Data
      command: /usr/bin/node /vagrant/dataGen.js
    - name: "Install forever (to run Node.js app)."
      npm: name=forever global=yes state=latest
    - name: "Check list of Node.js apps running."
      command: /usr/bin/forever list
      register: forever_list
      changed_when: false
    - name: "Start example Node.js app."
      command: /usr/bin/forever start /vagrant/server.js
      when: "forever_list.stdout.find('/vagrant/server.js') == -1"
But even though Ansible acts like everything is fine, no forever process is started. When I change a few lines to remove the when: statement and force it to run, Ansible just hangs, possibly running the forever process (forever, I presume) but never bringing the VM to where I can interact with it.
I've referenced essentially two points online; the only sources I can find.
As stated in the comments, the variable's content needs to be included in your question for anyone to provide a definitive answer, but to work around this I suggest you do it like so (note the shell module, since the command module does not support pipes):
- name: "Check list of Node.js apps running."
command: /usr/bin/forever list|grep '/vagrant/server.js'|wc -l
register: forever_list
changed_when: false
- name: "Start example Node.js app."
command: /usr/bin/forever start /vagrant/server.js
when: forever_list.stdout == "0"
This should prevent Ansible from starting the JS app if it's already running.
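An alternative sketch that skips the line counting and relies on grep's exit code directly (with grep -q, rc 0 means the app was found, rc 1 means not found, and anything higher is a real error):

- name: "Check list of Node.js apps running."
  shell: /usr/bin/forever list | grep -q '/vagrant/server.js'
  register: forever_list
  changed_when: false
  failed_when: forever_list.rc > 1

- name: "Start example Node.js app."
  command: /usr/bin/forever start /vagrant/server.js
  when: forever_list.rc == 1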

How to wait for server restart using Ansible?

I'm trying to restart the server and then wait, using this:
- name: Restart server
  shell: reboot

- name: Wait for server to restart
  wait_for:
    port=22
    delay=1
    timeout=300
But I get this error:
TASK: [iptables | Wait for server to restart] *********************************
fatal: [example.com] => failed to transfer file to /root/.ansible/tmp/ansible-tmp-1401138291.69-222045017562709/wait_for:
sftp> put /tmp/tmpApPR8k /root/.ansible/tmp/ansible-tmp-1401138291.69-222045017562709/wait_for
Connected to example.com.
Connection closed
Ansible >= 2.7 (released in Oct 2018)
Use the built-in reboot module:
- name: Wait for server to restart
  reboot:
    reboot_timeout: 3600
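The module also performs the reboot itself and verifies that the host is usable again before moving on. For instance (a sketch; the parameter values are illustrative):

- name: Reboot the server and wait for it to come back
  reboot:
    msg: "Reboot triggered by Ansible"
    pre_reboot_delay: 5      # seconds to wait before issuing the reboot
    post_reboot_delay: 30    # seconds to wait after the reboot before testing
    reboot_timeout: 3600     # overall budget for the host to come back
    test_command: whoami     # must succeed before the task is considered done
  become: true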
Ansible < 2.7
Restart as a task
- name: restart server
  shell: 'sleep 1 && shutdown -r now "Reboot triggered by Ansible" && sleep 1'
  async: 1
  poll: 0
  become: true
This runs the shell command as an asynchronous task, so Ansible will not wait for the command to end. Usually the async param gives the maximum time for the task, but as poll is set to 0, Ansible will never poll to check whether the command has finished - this makes it a "fire and forget" command. The sleeps before and after shutdown prevent the SSH connection from breaking during restart while Ansible is still connected to your remote host.
Wait as a task
You could just use:
- name: Wait for server to restart
  local_action:
    module: wait_for
      host={{ inventory_hostname }}
      port=22
      delay=10
  become: false
..but you may prefer to use the {{ ansible_ssh_host }} variable as the hostname and/or {{ ansible_ssh_port }} as the SSH port if you use entries like:
hostname ansible_ssh_host=some.other.name.com ansible_ssh_port=2222
..in your inventory (Ansible hosts file).
This will run the wait_for task on the machine running Ansible. This task will wait for port 22 to become open on your remote host, starting after a 10-second delay.
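For example, a sketch that falls back to the inventory hostname and port 22 when those variables are not set:

- name: Wait for server to restart
  local_action:
    module: wait_for
      host={{ ansible_ssh_host | default(inventory_hostname) }}
      port={{ ansible_ssh_port | default(22) }}
      delay=10
  become: false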
Restart and wait as handlers
But I suggest using both of these as handlers, not tasks.
There are 2 main reasons to do this:
- code reuse: you can use a handler for many tasks. Example: trigger a server restart after changing the timezone and after changing the kernel.
- trigger only once: if you use a handler for a few tasks, and more than one of them makes a change and notifies the handler, the thing the handler does will still happen only once. Example: if you have an httpd restart handler attached to an httpd config change and an SSL certificate update, then in case both the config and the SSL certificate change, httpd will be restarted only once (see the sketch below).
Read more about handlers here.
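A minimal sketch of the trigger-only-once behaviour (the httpd paths and file names here are illustrative, not from the original): both tasks notify the same handler, but httpd is restarted at most once, at the end of the play:

tasks:
  - name: Update httpd config
    template:
      src: httpd.conf.j2
      dest: /etc/httpd/conf/httpd.conf
    notify: restart httpd

  - name: Update SSL certificate
    copy:
      src: server.crt
      dest: /etc/pki/tls/certs/server.crt
    notify: restart httpd

handlers:
  - name: restart httpd
    service:
      name: httpd
      state: restarted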
Restarting and waiting for the restart as handlers:
handlers:
  - name: Restart server
    shell: 'sleep 1 && shutdown -r now "Reboot triggered by Ansible"'
    async: 1
    poll: 0
    ignore_errors: true
    become: true

  - name: Wait for server to restart
    local_action:
      module: wait_for
        host={{ inventory_hostname }}
        port=22
        delay=10
    become: false
..and use them from your tasks in a sequence like this, here paired with a task that triggers the server restart:
tasks:
  - name: Set hostname
    hostname: name=somename
    notify:
      - Restart server
      - Wait for server to restart
Note that handlers are run in the order they are defined, not the order they are listed in notify!
You should change the wait_for task to run as local_action, and specify the host you're waiting for. For example:
- name: Wait for server to restart
  local_action:
    module: wait_for
      host=192.168.50.4
      port=22
      delay=1
      timeout=300
The most reliable setup I've got with 1.9.4 is this (updated; the original version is at the bottom):
- name: Example ansible play that requires reboot
  sudo: yes
  gather_facts: no
  hosts:
    - myhosts
  tasks:
    - name: example task that requires reboot
      yum: name=* state=latest
      notify: reboot sequence
  handlers:
    - name: reboot sequence
      changed_when: "true"
      debug: msg='trigger machine reboot sequence'
      notify:
        - get current time
        - reboot system
        - waiting for server to come back
        - verify a reboot was actually initiated
    - name: get current time
      command: /bin/date +%s
      register: before_reboot
      sudo: false
    - name: reboot system
      shell: sleep 2 && shutdown -r now "Ansible package updates triggered"
      async: 1
      poll: 0
      ignore_errors: true
    - name: waiting for server to come back
      local_action: wait_for host={{ inventory_hostname }} state=started delay=30 timeout=220
      sudo: false
    - name: verify a reboot was actually initiated
      # machine should have started after it has been rebooted
      shell: (( `date +%s` - `awk -F . '{print $1}' /proc/uptime` > {{ before_reboot.stdout }} ))
      sudo: false
Note the async option. Ansible 1.8 and 2.0 may live with 0, but 1.9 wants it to be 1. The above also checks that the machine has actually been rebooted. This is good because I once had a typo that made the reboot fail, with no indication of the failure.
The big issue is waiting for the machine to be up. This version just sits there for 330 seconds and never tries to access the host earlier. Some other answers suggest using port 22. This is good if both of these are true:
- you have direct access to the machines
- your machine is accessible immediately after port 22 is open
These are not always true, so I decided to waste 5 minutes of compute time. I hope Ansible extends the wait_for module to actually check the host state, to avoid wasting time.
By the way, the answer suggesting handlers is nice: +1 for handlers from me (and I updated my answer to use them).
Here's the original version, but it is not as good and not as reliable:
- name: Reboot
  sudo: yes
  gather_facts: no
  hosts:
    - OSEv3:children
  tasks:
    - name: get current uptime
      shell: cat /proc/uptime | awk -F . '{print $1}'
      register: uptime
      sudo: false
    - name: reboot system
      shell: sleep 2 && shutdown -r now "Ansible package updates triggered"
      async: 1
      poll: 0
      ignore_errors: true
    - name: waiting for server to come back
      local_action: wait_for host={{ inventory_hostname }} state=started delay=30 timeout=300
      sudo: false
    - name: verify a reboot was actually initiated
      # uptime after reboot should be smaller than before reboot
      shell: (( `cat /proc/uptime | awk -F . '{print $1}'` < {{ uptime.stdout }} ))
      sudo: false
2018 Update
As of 2.3, Ansible now ships with the wait_for_connection module, which can be used for exactly this purpose.
#
## Reboot
#
- name: (reboot) Reboot triggered
  command: /sbin/shutdown -r +1 "Ansible-triggered Reboot"
  async: 0
  poll: 0

- name: (reboot) Wait for server to restart
  wait_for_connection:
    delay: 75
The shutdown -r +1 prevents a return code of 1 from being returned, which would make Ansible fail the task. The shutdown is run as a fire-and-forget task and is scheduled one minute out, so we have to delay the wait_for_connection task by at least 60 seconds; 75 gives us a buffer for those snowflake cases.
wait_for_connection - Waits until remote system is reachable/usable
I wanted to comment on Shahar's post: he is using a hardcoded host address, when it would be better to use a variable that references the current host Ansible is configuring, {{ inventory_hostname }}, so his code becomes:
- name: Wait for server to restart
  local_action:
    module: wait_for
      host={{ inventory_hostname }}
      port=22
      delay=1
      timeout=300
With newer versions of Ansible (i.e. 1.9.1 in my case), the poll and async parameters set to 0 are sometimes not enough (maybe depending on the distribution Ansible is set up on?). As explained in https://github.com/ansible/ansible/issues/10616, one workaround is:
- name: Reboot
  shell: sleep 2 && shutdown -r now "Ansible updates triggered"
  async: 1
  poll: 0
  ignore_errors: true
And then wait for the reboot to complete, as explained in many answers on this page, for instance:
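A short sketch of the wait step, following the local_action pattern used elsewhere on this page:

- name: Wait for server to come back
  local_action: wait_for host={{ inventory_hostname }} port=22 delay=30 timeout=300
  become: false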
Through trial and error plus a lot of reading, this is what ultimately worked for me with the 2.0 version of Ansible:
$ ansible --version
ansible 2.0.0 (devel 974b69d236) last updated 2015/09/01 13:37:26 (GMT -400)
lib/ansible/modules/core: (detached HEAD bbcfb1092a) last updated 2015/09/01 13:37:29 (GMT -400)
lib/ansible/modules/extras: (detached HEAD b8803306d1) last updated 2015/09/01 13:37:29 (GMT -400)
config file = /Users/sammingolelli/projects/git_repos/devops/ansible/playbooks/test-2/ansible.cfg
configured module search path = None
My solution for disabling SELinux and rebooting a node when needed:
---
- name: disable SELinux
  selinux: state=disabled
  register: st

- name: reboot if SELinux changed
  shell: shutdown -r now "Ansible updates triggered"
  async: 0
  poll: 0
  ignore_errors: true
  when: st.changed

- name: waiting for server to reboot
  wait_for: host="{{ ansible_ssh_host | default(inventory_hostname) }}" port={{ ansible_ssh_port | default(22) }} search_regex=OpenSSH delay=30 timeout=120
  connection: local
  sudo: false
  when: st.changed

# vim:ft=ansible:
- wait_for:
    port: 22
    host: "{{ inventory_hostname }}"
  delegate_to: 127.0.0.1
In case you don't have DNS set up for the remote server yet, you can pass the IP address instead of a variable hostname:
- name: Restart server
  command: shutdown -r now

- name: Wait for server to restart successfully
  local_action:
    module: wait_for
      host={{ ansible_default_ipv4.address }}
      port=22
      delay=1
      timeout=120
These are the two tasks I added to the end of my ansible-swap playbook (to install 4GB of swap on new Digital Ocean droplets).
I've created a reboot_server Ansible role that can be dynamically called from other roles with:
- name: Reboot server if needed
  include_role:
    name: reboot_server
  vars:
    reboot_force: false
The role content is:
- name: Check if server restart is necessary
  stat:
    path: /var/run/reboot-required
  register: reboot_required

- name: Debug reboot_required
  debug: var=reboot_required

- name: Restart if it is needed
  shell: |
    sleep 2 && /sbin/shutdown -r now "Reboot triggered by Ansible"
  async: 1
  poll: 0
  ignore_errors: true
  when: reboot_required.stat.exists == true
  register: reboot
  become: true

- name: Force Restart
  shell: |
    sleep 2 && /sbin/shutdown -r now "Reboot triggered by Ansible"
  async: 1
  poll: 0
  ignore_errors: true
  when: reboot_force|default(false)|bool
  register: forced_reboot
  become: true

# # Debug reboot execution
# - name: Debug reboot var
#   debug: var=reboot
# - name: Debug forced_reboot var
#   debug: var=forced_reboot

# Don't assume the inventory_hostname is resolvable and delay 10 seconds at start
- name: Wait 300 seconds for port 22 to become open and contain "OpenSSH"
  wait_for:
    port: 22
    host: '{{ (ansible_ssh_host|default(ansible_host))|default(inventory_hostname) }}'
    search_regex: OpenSSH
    delay: 10
  connection: local
  when: reboot.changed or forced_reboot.changed
This was originally designed to work with Ubuntu OS.
I haven't seen a lot of visibility on this, but a recent change (https://github.com/ansible/ansible/pull/43857) added the "ignore_unreachable" keyword. This allows you to do something like this:
- name: restart server
  shell: reboot
  ignore_unreachable: true

- name: wait for server to come back
  wait_for_connection:
    timeout: 120

- name: the next action
  ...
