Does Ansible support commenting out a cron job? - cron

Is there a way to comment out a cron job using Ansible?
I tried to use disabled but it is not working.
Playbook:
cron: name="server_agent" disabled=yes
Error message:
You must specify 'job' to install a new cron job or variable
My Ansible version is: ansible 2.3.1.0

With ansible 2.6 the following is working:
- name: "cron"
hosts: localhost
tasks:
- cron:
name: "test"
job: "/bin/true"
minute: "0"
hour: "9"
state: present
disabled: True
According to the documentation, this should work since Ansible 2.0. The important point is that disabled: True only has an effect if state: present is also set. A crontab -l then lists:
#Ansible: test
#0 9 * * * /bin/true
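To re-enable the job later, running the same task with disabled: False (or simply omitting disabled, which defaults to no) should uncomment the entry again; a minimal sketch:

- cron:
    name: "test"
    job: "/bin/true"
    minute: "0"
    hour: "9"
    state: present
    disabled: False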

Related

Github action on scheduled time not running at all?

I am attempting to run a file on a schedule (I will increase the interval later), but for some reason it is not running at all. I am not sure if I have written it correctly.
name: updateStandingsDB
on:
  schedule:
    - cron: '*/1 * * * *'
jobs:
  build-node:
    runs-on: ubuntu-latest
    container: node:16
    steps:
      - name: git checkout
        uses: actions/checkout@v3
      - name: Install
        run: npm install
      - name: Prem
        run: node update-standings/prem.js
Could someone help me understand what I am doing wrong?
Per the documentation at GitHub you cannot set it the way you did:
The shortest interval you can run scheduled workflows is once every 5 minutes.
So I would recommend either trying */5 (or even */10), or simply using a fixed time (which is, IIRC, also interpreted in UTC) while you debug.
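For example, the schedule block could be adjusted like this (a sketch honouring the 5-minute minimum; the rest of the workflow stays unchanged):

on:
  schedule:
    # every 5 minutes; GitHub evaluates cron expressions in UTC
    - cron: '*/5 * * * *'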

Airflow task is failing with ERROR! A worker was found in a dead state

This task uses ansible-playbook to submit a Spark job; the Spark job completes, but the Airflow task fails with
ERROR! A worker was found in a dead state.
This is how my YAML looks like:
# test.yml
- hosts: all
become: true
become_user: user
tasks:
- name: change the working directory to /home/ and runs command.
shell: "/path_to_spark_submit/spark-submit --master yarn-client <spark-submit>"
args:
chdir: /path_to_spark_submit/
ignore_errors: True
register: command_result
- name: change the working directory to /home/ and runs command.
shell: <some_command>
args:
chdir: /tmp
ignore_errors: True
when: command_result.rc != 0
- fail:
msg: extract was failed
when: command_result.rc != 0
I tried removing the register variable; in that case, whether the job succeeds or fails, the play moves on to the next task without any issue.
One more thing: I am getting a huge log from the cluster; how can I limit these logs?
What can be the possible cause of this error?

How to use cron triggers in drone in addition to step conditions?

Say I have this drone.yaml file:
kind: pipeline
type: kubernetes
name: default

steps:
  - name: echo-hello
    image: alpine
    commands:
      - echo "hello"
    when:
      event:
        - push

  - name: echo-goodbye
    image: alpine
    commands:
      - echo "goodbye"
    when:
      event:
        - push
In addition to triggering the echo-hello and echo-goodbye steps upon each push, I'd like to trigger all steps based on a cron event. I thought adding the trigger section to the bottom of the yaml file would do the trick:
trigger:
  event:
    - cron
  cron:
    - hourly
But then it ignores the conditions defined beneath when in the dedicated steps. Can anybody help me fix my drone.yaml file, so that I can trigger by cron in addition to the step-specific conditions?
The trigger section means that the whole pipeline will run only when cron runs it. Instead, you should add

  event:
    - cron
  cron:
    - hourly

to the when: block of each step.
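For example, extending the first step so it runs on pushes as well as cron events could look roughly like this (a sketch; the echo-goodbye step would be extended the same way, and a cron name filter such as cron: [hourly] can be added under when: if only a specific cron job should match):

- name: echo-hello
  image: alpine
  commands:
    - echo "hello"
  when:
    event:
      - push
      - cron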

Ansible - List of Linux security updates needed on remote servers

I wanted to run a playbook that will accurately report if one of the remote servers requires security updates. Ansible server = Centos 7, remote servers Amazon Linux.
The remote server would highlight something like the below on startup:
https://aws.amazon.com/amazon-linux-2/
8 package(s) needed for security, out of 46 available
Run "sudo yum update" to apply all updates.
To confirm this, I put a playbook together, cobbled from many sources (below), that performs that function to a degree. It does indicate whether the remote server requires security updates, but it doesn't say what these updates are.
- name: check if security updates are needed
  hosts: elk
  tasks:
    - name: check yum security updates
      shell: "yum updateinfo list all security"
      changed_when: false
      register: security_update

    - debug: msg="Security update required"
      when: security_update.stdout != "0"

    - name: list some packages
      yum: list=available
Then, when I run my updates install playbook:
- hosts: elk
  remote_user: ansadm
  become: yes
  become_method: sudo
  tasks:
    - name: Move repos from backup to yum.repos.d
      shell: mv -f /backup/* /etc/yum.repos.d/
      register: shell_result
      failed_when: '"No such file or directory" in shell_result.stderr_lines'

    - name: Remove redhat.repo
      shell: rm -f /etc/yum.repos.d/redhat.repo
      register: shell_result
      failed_when: '"No such file or directory" in shell_result.stderr_lines'

    - name: add line to yum.conf
      lineinfile:
        dest: /etc/yum.conf
        line: exclude=kernel* redhat-release*
        state: present
        create: yes

    - name: yum clean
      shell: yum make-cache
      register: shell_result
      failed_when: '"There are no enabled repos" in shell_result.stderr_lines'

    - name: install all security patches
      yum:
        name: '*'
        state: latest
        security: yes
        bugfix: yes
        skip_broken: yes
After the install, you would get something similar to the below (btw, these are outputs from different servers):
https://aws.amazon.com/amazon-linux-2/
No packages needed for security; 37 packages available
Run "sudo yum update" to apply all updates.
But if I run my list security updates playbook again - it gives a false positive as it still reports security updates needed?
PLAY [check if security updates are needed] ************************************
TASK [Gathering Facts] *********************************************************
ok: [10.10.10.192]
TASK [check yum security updates] **********************************************
ok: [10.10.10.192]
TASK [debug] *******************************************************************
ok: [10.10.10.192] => {
"msg": "Security update required"
}
TASK [list some packages] ******************************************************
ok: [10.10.10.192]
PLAY RECAP *********************************************************************
10.10.10.192 : ok=4 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
[ansadm@ansible playbooks]$
What do I need to omit/include in playbook to reflect the changes after the install of the updates?
Thanks in advance :)
So I ran your yum command locally on my system and I get the following.
45) local-user@server:/home/local-user> yum updateinfo list all security
Loaded plugins: ulninfo
local_repo | 2.9 kB 00:00:00
updateinfo list done
Now granted our systems may have different output here, but it will serve the purpose of my explanation. The output of the entire command is saved to your register, but your when conditional says to run when the output of that command is not EXACTLY "0".
So unless you pare that response down with some awk or sed, any output that contains more text than literally just the character "0" means that debug task is always going to fire off.
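One way to make the conditional meaningful is to count advisory lines instead of comparing the whole output to "0"; a rough sketch (the grep pattern is an assumption and may need adapting to your distribution's advisory format):

- name: count pending security updates
  # 'list security' (without 'all') lists only advisories that still apply;
  # -q suppresses the plugin banner, grep counts lines mentioning a severity
  shell: "yum -q updateinfo list security 2>/dev/null | grep -ciE 'critical|important|moderate|low' || true"
  changed_when: false
  register: security_update

- debug:
    msg: "Security update required"
  when: security_update.stdout | int > 0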

How to wait for server restart using Ansible?

I'm trying to restart the server and then wait, using this:
- name: Restart server
  shell: reboot

- name: Wait for server to restart
  wait_for:
    port=22
    delay=1
    timeout=300
But I get this error:
TASK: [iptables | Wait for server to restart] *********************************
fatal: [example.com] => failed to transfer file to /root/.ansible/tmp/ansible-tmp-1401138291.69-222045017562709/wait_for:
sftp> put /tmp/tmpApPR8k /root/.ansible/tmp/ansible-tmp-1401138291.69-222045017562709/wait_for
Connected to example.com.
Connection closed
Ansible >= 2.7 (released in Oct 2018)
Use the built-in reboot module:
- name: Wait for server to restart
  reboot:
    reboot_timeout: 3600
Ansible < 2.7
Restart as a task
- name: restart server
  shell: 'sleep 1 && shutdown -r now "Reboot triggered by Ansible" && sleep 1'
  async: 1
  poll: 0
  become: true
This runs the shell command as an asynchronous task, so Ansible will not wait for the command to finish. Usually the async parameter gives the maximum time for the task, but since poll is set to 0, Ansible will never poll to check whether the command has finished; it makes the command "fire and forget". The sleeps before and after shutdown prevent breaking the SSH connection during restart while Ansible is still connected to your remote host.
Wait as a task
You could just use:
- name: Wait for server to restart
  local_action:
    module: wait_for
      host={{ inventory_hostname }}
      port=22
      delay=10
  become: false
..but you may prefer to use the {{ ansible_ssh_host }} variable as the host and/or {{ ansible_ssh_port }} as the port, if you use entries like:
hostname ansible_ssh_host=some.other.name.com ansible_ssh_port=2222
..in your inventory (Ansible hosts file).
This will run the wait_for task on the machine running Ansible. The task will wait for port 22 to become open on your remote host, starting after a 10-second delay.
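With inventory entries like the one above, the wait task could reference those variables and fall back to the defaults when they are not set; a minimal sketch (written with plain key/value arguments rather than the k=v style above):

- name: Wait for server to restart
  local_action:
    module: wait_for
    host: "{{ ansible_ssh_host | default(inventory_hostname) }}"
    port: "{{ ansible_ssh_port | default(22) }}"
    delay: 10
  become: false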
Restart and wait as handlers
But I suggest using both of these as handlers, not tasks.
There are 2 main reasons to do this:
code reuse - you can use a handler for many tasks. Example: trigger a server restart after changing the timezone and after changing the kernel,
trigger only once - if you use a handler for a few tasks, and more than one of them makes a change and notifies the handler, then the thing the handler does will happen only once. Example: if you have an httpd restart handler attached to an httpd config change and an SSL certificate update, then when both the config and the SSL certificate change, httpd will be restarted only once (see the sketch below).
Read more about handlers here.
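To illustrate the trigger-only-once behaviour with that httpd example, here is a minimal sketch (the file paths and service name are assumptions for illustration, not part of the answer above):

tasks:
  - name: Update httpd config
    template:
      src: httpd.conf.j2
      dest: /etc/httpd/conf/httpd.conf
    notify: Restart httpd

  - name: Update SSL certificate
    copy:
      src: server.crt
      dest: /etc/pki/tls/certs/server.crt
    notify: Restart httpd

handlers:
  # even if both tasks above report "changed", this handler runs only once,
  # at the end of the play
  - name: Restart httpd
    service:
      name: httpd
      state: restarted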
Restarting and waiting for the restart as handlers:
handlers:
  - name: Restart server
    command: 'sleep 1 && shutdown -r now "Reboot triggered by Ansible" && sleep 1'
    async: 1
    poll: 0
    ignore_errors: true
    become: true

  - name: Wait for server to restart
    local_action:
      module: wait_for
        host={{ inventory_hostname }}
        port=22
        delay=10
    become: false
..and notify them from your tasks in sequence, like this (here a hostname change is paired with the server restart handlers):
tasks:
  - name: Set hostname
    hostname: name=somename
    notify:
      - Restart server
      - Wait for server to restart
Note that handlers are run in the order they are defined, not the order they are listed in notify!
You should change the wait_for task to run as local_action, and specify the host you're waiting for. For example:
- name: Wait for server to restart
  local_action:
    module: wait_for
      host=192.168.50.4
      port=22
      delay=1
      timeout=300
The most reliable setup I've got with 1.9.4 is this (it has been updated; the original version is at the bottom):
- name: Example ansible play that requires reboot
  sudo: yes
  gather_facts: no
  hosts:
    - myhosts
  tasks:
    - name: example task that requires reboot
      yum: name=* state=latest
      notify: reboot sequence
  handlers:
    - name: reboot sequence
      changed_when: "true"
      debug: msg='trigger machine reboot sequence'
      notify:
        - get current time
        - reboot system
        - waiting for server to come back
        - verify a reboot was actually initiated

    - name: get current time
      command: /bin/date +%s
      register: before_reboot
      sudo: false

    - name: reboot system
      shell: sleep 2 && shutdown -r now "Ansible package updates triggered"
      async: 1
      poll: 0
      ignore_errors: true

    - name: waiting for server to come back
      local_action: wait_for host={{ inventory_hostname }} state=started delay=30 timeout=220
      sudo: false

    - name: verify a reboot was actually initiated
      # machine should have started after it has been rebooted
      shell: (( `date +%s` - `awk -F . '{print $1}' /proc/uptime` > {{ before_reboot.stdout }} ))
      sudo: false
Note the async option: 1.8 and 2.0 may live with 0, but 1.9 wants it to be 1. The above also checks whether the machine has actually been rebooted, which is good because I once had a typo that broke the reboot with no indication of the failure.
The big issue is waiting for the machine to come back up. This version just sits there for 330 seconds and never tries to access the host earlier. Some other answers suggest using port 22. This is good if both of these are true:
you have direct access to the machines
your machine is accessible immediately after port 22 is open
These are not always true, so I decided to waste 5 minutes of compute time. I hope Ansible extends the wait_for module to actually check host state, to avoid wasting time.
Btw, the answer suggesting the use of handlers is nice. +1 for handlers from me (and I updated my answer to use handlers).
Here's the original version, but it is not as good and not as reliable:
- name: Reboot
  sudo: yes
  gather_facts: no
  hosts:
    - OSEv3:children
  tasks:
    - name: get current uptime
      shell: cat /proc/uptime | awk -F . '{print $1}'
      register: uptime
      sudo: false

    - name: reboot system
      shell: sleep 2 && shutdown -r now "Ansible package updates triggered"
      async: 1
      poll: 0
      ignore_errors: true

    - name: waiting for server to come back
      local_action: wait_for host={{ inventory_hostname }} state=started delay=30 timeout=300
      sudo: false

    - name: verify a reboot was actually initiated
      # uptime after reboot should be smaller than before reboot
      shell: (( `cat /proc/uptime | awk -F . '{print $1}'` < {{ uptime.stdout }} ))
      sudo: false
2018 Update
As of 2.3, Ansible now ships with the wait_for_connection module, which can be used for exactly this purpose.
#
## Reboot
#
- name: (reboot) Reboot triggered
  command: /sbin/shutdown -r +1 "Ansible-triggered Reboot"
  async: 0
  poll: 0

- name: (reboot) Wait for server to restart
  wait_for_connection:
    delay: 75
The shutdown -r +1 prevents a return code of 1 from being returned, which would make Ansible fail the task. The shutdown runs as an async task, so we have to delay the wait_for_connection task by at least 60 seconds; 75 gives us a buffer for those snowflake cases.
wait_for_connection - Waits until remote system is reachable/usable
I wanted to comment on Shahar's post: he is using a hardcoded host address, but it is better to use a variable that references the current host Ansible is configuring, {{ inventory_hostname }}, so his code would look like this:
- name: Wait for server to restart
  local_action:
    module: wait_for
      host={{ inventory_hostname }}
      port=22
      delay=1
      timeout=300
With newer versions of Ansible (i.e. 1.9.1 in my case), the poll and async parameters set to 0 are sometimes not enough (maybe depending on the distribution Ansible is set up on?). As explained in https://github.com/ansible/ansible/issues/10616, one workaround is:
- name: Reboot
  shell: sleep 2 && shutdown -r now "Ansible updates triggered"
  async: 1
  poll: 0
  ignore_errors: true
And then, wait for the reboot to complete as explained in many answers on this page.
Through trial and error plus a lot of reading, this is what ultimately worked for me using the 2.0 version of Ansible:
$ ansible --version
ansible 2.0.0 (devel 974b69d236) last updated 2015/09/01 13:37:26 (GMT -400)
lib/ansible/modules/core: (detached HEAD bbcfb1092a) last updated 2015/09/01 13:37:29 (GMT -400)
lib/ansible/modules/extras: (detached HEAD b8803306d1) last updated 2015/09/01 13:37:29 (GMT -400)
config file = /Users/sammingolelli/projects/git_repos/devops/ansible/playbooks/test-2/ansible.cfg
configured module search path = None
My solution for disabling SELinux and rebooting a node when needed:
---
- name: disable SELinux
  selinux: state=disabled
  register: st

- name: reboot if SELinux changed
  shell: shutdown -r now "Ansible updates triggered"
  async: 0
  poll: 0
  ignore_errors: true
  when: st.changed

- name: waiting for server to reboot
  wait_for: host="{{ ansible_ssh_host | default(inventory_hostname) }}" port={{ ansible_ssh_port | default(22) }} search_regex=OpenSSH delay=30 timeout=120
  connection: local
  sudo: false
  when: st.changed

# vim:ft=ansible:
- wait_for:
    port: 22
    host: "{{ inventory_hostname }}"
  delegate_to: 127.0.0.1
In case you don't have DNS setup for the remote server yet, you can pass the IP address instead of a variable hostname:
- name: Restart server
  command: shutdown -r now

- name: Wait for server to restart successfully
  local_action:
    module: wait_for
      host={{ ansible_default_ipv4.address }}
      port=22
      delay=1
      timeout=120
These are the two tasks I added to the end of my ansible-swap playbook (to install 4GB of swap on new Digital Ocean droplets).
I've created a reboot_server Ansible role that can be dynamically called from other roles with:
- name: Reboot server if needed
  include_role:
    name: reboot_server
  vars:
    reboot_force: false
The role content is:
- name: Check if server restart is necessary
  stat:
    path: /var/run/reboot-required
  register: reboot_required

- name: Debug reboot_required
  debug: var=reboot_required

- name: Restart if it is needed
  shell: |
    sleep 2 && /sbin/shutdown -r now "Reboot triggered by Ansible"
  async: 1
  poll: 0
  ignore_errors: true
  when: reboot_required.stat.exists == true
  register: reboot
  become: true

- name: Force Restart
  shell: |
    sleep 2 && /sbin/shutdown -r now "Reboot triggered by Ansible"
  async: 1
  poll: 0
  ignore_errors: true
  when: reboot_force|default(false)|bool
  register: forced_reboot
  become: true

# # Debug reboot execution
# - name: Debug reboot var
#   debug: var=reboot
# - name: Debug forced_reboot var
#   debug: var=forced_reboot

# Don't assume the inventory_hostname is resolvable and delay 10 seconds at start
- name: Wait 300 seconds for port 22 to become open and contain "OpenSSH"
  wait_for:
    port: 22
    host: '{{ (ansible_ssh_host|default(ansible_host))|default(inventory_hostname) }}'
    search_regex: OpenSSH
    delay: 10
  connection: local
  when: reboot.changed or forced_reboot.changed
This was originally designed to work with Ubuntu OS.
I haven't seen a lot of visibility on this, but a recent change (https://github.com/ansible/ansible/pull/43857) added the "ignore_unreachable" keyword. This allows you to do something like this:
- name: restart server
  shell: reboot
  ignore_unreachable: true

- name: wait for server to come back
  wait_for_connection:
    timeout: 120

- name: the next action
  ...
