I am trying to add a cron entry using puppet. I have this code in my class:
cron { 'puppet-apply':
  ensure  => present,
  command => "/usr/bin/mycommand",
  user    => root,
  hour    => '14',
  minute  => '49',
  require => File['mycommand'],
}
mycommand is a file resource defined in the same class. When Puppet runs, the mycommand executable is correctly added to /usr/bin; however, I do not see any crontab entry created in /etc/crontab (or anywhere else, for that matter).
What am I missing here? How can I get it to create the crontab entry?
Your code works fine; I suspect you are just misunderstanding how Puppet manages crontabs.
If you are using the latest Puppet, the source code for the cron provider is here. Notice the actual directories used for each OS family here:
CRONTAB_DIR = case Facter.value('osfamily')
              when 'Debian', 'HP-UX', 'Solaris'
                '/var/spool/cron/crontabs'
              when %r{BSD}
                '/var/cron/tabs'
              when 'Darwin'
                '/usr/lib/cron/tabs/'
              else
                '/var/spool/cron'
              end
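For quick reference, the same dispatch can be sketched in shell (a hypothetical helper mirroring the Ruby above, not Puppet's actual code):

```shell
# Map an osfamily value to the crontab spool directory Puppet writes to,
# mirroring the Ruby case statement above.
crontab_dir() {
  case "$1" in
    Debian|HP-UX|Solaris) echo /var/spool/cron/crontabs ;;
    *BSD*)                echo /var/cron/tabs ;;
    Darwin)               echo /usr/lib/cron/tabs/ ;;
    *)                    echo /var/spool/cron ;;
  esac
}

crontab_dir Debian   # /var/spool/cron/crontabs
crontab_dir RedHat   # /var/spool/cron
```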
So given code like this:
file { 'mycommand':
  path    => "/usr/bin/mycommand",
  content => "#!/usr/bin/bash\necho hello world",
}
cron { 'puppet-apply':
  ensure  => present,
  command => "/usr/bin/mycommand",
  user    => root,
  hour    => '14',
  minute  => '49',
  require => File['mycommand'],
}
If you apply that as root on CentOS 7:
[root@centos-72-x64 ~]# puppet apply /tmp/apply_manifest.pp
Notice: Compiled catalog for centos-72-x64.macquarie.local in environment production in 0.05 seconds
Notice: /Stage[main]/Main/File[mycommand]/ensure: defined content as '{md5}8d9f82443e4fb78b8316c17174182d16'
Notice: /Stage[main]/Main/Cron[puppet-apply]/ensure: created
Notice: Applied catalog in 0.04 seconds
You will have the expected crontab:
[root@centos-72-x64 ~]# crontab -l
# HEADER: This file was autogenerated at 2019-11-09 03:55:09 +0000 by puppet.
# HEADER: While it can still be managed manually, it is definitely not recommended.
# HEADER: Note particularly that the comments starting with 'Puppet Name' should
# HEADER: not be deleted, as doing so could cause duplicate cron jobs.
# Puppet Name: puppet-apply
49 14 * * * /usr/bin/mycommand
And the actual files modified are in /var/spool/cron:
[root@centos-72-x64 ~]# find /var/spool/cron
/var/spool/cron
/var/spool/cron/root
[root@centos-72-x64 ~]# cat /var/spool/cron/root
# HEADER: This file was autogenerated at 2019-11-09 03:55:09 +0000 by puppet.
# HEADER: While it can still be managed manually, it is definitely not recommended.
# HEADER: Note particularly that the comments starting with 'Puppet Name' should
# HEADER: not be deleted, as doing so could cause duplicate cron jobs.
# Puppet Name: puppet-apply
49 14 * * * /usr/bin/mycommand
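You can also read the entry back through Puppet's resource abstraction layer (on Puppet versions that still ship the cron type in core; it moved to the puppetlabs-cron_core module in Puppet 6), which prints the managed resource in Puppet DSL form:

```
[root@centos-72-x64 ~]# puppet resource cron puppet-apply
```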
I have added the following line to cron to run the script on reboot:
@reboot /usr/local/bin/autostart.sh
But when I prepared the Ansible task for it, I found that it adds one more line each time I apply the playbook.
The task is below:
- name: Add autostart script to cron
  cron:
    special_time: reboot
    user: user
    state: present
    job: /usr/local/bin/autostart.sh
And after several runs I get the following crontab:
#Ansible: None
@reboot /usr/local/bin/autostart.sh
#Ansible: None
@reboot /usr/local/bin/autostart.sh
#Ansible: None
@reboot /usr/local/bin/autostart.sh
#Ansible: None
@reboot /usr/local/bin/autostart.sh
This seems like strange behavior to me, because state: present should check whether the record is already present. Or have I missed something else?
Add the name parameter. For example:
- name: Add autostart script to cron
  cron:
    name: "autostart"
    special_time: reboot
    user: user
    state: present
    job: /usr/local/bin/autostart.sh
Quoting from the cron module documentation:
name: Description of a crontab entry or, if env is set, the name of environment variable. Required if state=absent. Note that if name is not set and state=present, then a new crontab entry will always be created, regardless of existing ones. This parameter will always be required in future releases.
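With name set, Ansible writes a marker comment above the job and matches on it in later runs, so repeated applies stay idempotent. The resulting crontab entry would look like this (marker taken from the task above):

```
#Ansible: autostart
@reboot /usr/local/bin/autostart.sh
```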
I'm trying to run the following script through crontab every day at 12:
#!/bin/sh
mount -t nfs 10.1.25.7:gadal /mnt/NAS_DFG
echo >> ~/Documents/Crontab_logs/logs.txt
date >> ~/Documents/Crontab_logs/logs.txt
rsync -ar /home /mnt/NAS_DFG/ >> ~/Documents/Crontab_logs/logs.txt 2>&1
umount /mnt/NAS_DFG
As it needs to run as root, I added the following line via sudo crontab, so that I have:
someone@something:~$ sudo crontab -l
# Edit this file to introduce tasks to be run by cron.
#
# Each task to run has to be defined through a single line
# indicating with different fields when the task will be run
# and what command to run for the task
#
# To define the time you can provide concrete values for
# minute (m), hour (h), day of month (dom), month (mon),
# and day of week (dow) or use '*' in these fields (for 'any').
#
# Notice that tasks will be started based on the cron's system
# daemon's notion of time and timezones.
#
# Output of the crontab jobs (including errors) is sent through
# email to the user the crontab file belongs to (unless redirected).
#
# For example, you can run a backup of all your user accounts
# at 5 a.m every week with:
# 0 5 * * 1 tar -zcf /var/backups/home.tgz /home/
#
# For more information see the manual pages of crontab(5) and cron(8)
#
# m h dom mon dow command
0 12 * * * ~/Documents/Crontab_logs/Making_save.sh
But it does not run. I should mention that just executing the script through:
sudo ~/Documents/Crontab_logs/Making_save.sh
works well, except that no output of the rsync command is written in the log file.
Any ideas what's going wrong? I think I checked the main sources of mistakes, i.e. using sh, leaving an empty line at the end, etc.
sudo crontab creates a job which runs out of the crontab of root. (Note that root's own crontab, edited via sudo crontab -e, uses the same five-field syntax as a user crontab; it is the system crontab /etc/crontab that takes an extra user field.) When cron runs the job, $HOME (and ~, if you use a shell or scripting language with tilde expansion) will refer to the home of root, etc.
You should probably simply add
0 12 * * * sudo ./Documents/Crontab_logs/Making_save.sh
to your own crontab instead.
Notice that crontab does not have tilde expansion at all (but we can rely on the fact that cron will always run out of your home directory).
... Though this will still have issues, because if the script runs under sudo and it creates new files, those files will be owned by root, and cannot be changed by your regular user account. A better solution still is to only run the actual mount and umount commands with sudo, and minimize the amount of code which runs on the privileged account, i.e. remove the sudo from your crontab and instead add it within the script to the individual commands which require it.
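A sketch of that refactoring, using the paths from the original script: only the mount and umount steps run under sudo, so the log files stay owned by your regular user. Note that for this to work from cron, those two commands need a passwordless (NOPASSWD) sudoers rule, since cron has no terminal to prompt on.

```shell
#!/bin/sh
# Only the privileged steps use sudo; everything else runs as the regular user.
sudo mount -t nfs 10.1.25.7:gadal /mnt/NAS_DFG
echo >> ~/Documents/Crontab_logs/logs.txt
date >> ~/Documents/Crontab_logs/logs.txt
rsync -ar /home /mnt/NAS_DFG/ >> ~/Documents/Crontab_logs/logs.txt 2>&1
sudo umount /mnt/NAS_DFG
```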
I am running SUSE Linux as a non-root user (getting root access is not a possibility). I would like a .sh script I created to be run daily.
My crontab looks like:
0 0 * * * * /path/to/file.sh
I also have a line return after this, as per many troubleshooting suggestions. My script deletes files older than 14 days. I also added logging so I can check whether the script runs.
However, the job does not run automatically. I also am not able to check /var/log/messages for any notifications on whether cron can run or not.
What am I doing wrong? How can I check if cron itself is running/can run for my user? Do I have to supply cron with any paths or environment variables?
Your entry has six time fields, but a user crontab line takes exactly five (minute, hour, day of month, month, day of week); cron treats the extra * as the start of the command. The correct entry to run your script every midnight is:
00 00 * * * /bin/bash path/to/your/script.sh >> /path/to/log/file.log
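The field count is easy to check mechanically (a hypothetical helper, not part of cron; it assumes the command is the first token containing a slash):

```shell
# Count the schedule fields that precede the command in a crontab line.
# Simplifying assumption: the command is the first token containing "/".
count_schedule_fields() {
  echo "$1" | awk '{ for (i = 1; i <= NF; i++) if ($i ~ /\//) { print i - 1; exit } }'
}

count_schedule_fields '0 0 * * * * /path/to/file.sh'               # 6 -- one too many
count_schedule_fields '00 00 * * * /bin/bash /path/to/script.sh'   # 5 -- correct
```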
I have about 50 Debian Linux servers with a bad cron job:
0 * * * * ntpdate 10.20.0.1
I want to configure NTP sync with ntpd, so I need to delete this cron job. I use Ansible for configuration. I have tried to delete the cron entry with this play:
tasks:
  - cron: name="ntpdate" minute="0" job="ntpdate 10.20.0.1" state=absent user="root"
Nothing happened.
Then I run this play:
tasks:
  - cron: name="ntpdate" minute="0" job="ntpdate pool.ntp.org" state=present
I see the new cron job in the output of crontab -l:
...
# m h dom mon dow command
0 * * * * ntpdate 10.20.0.1
#Ansible: ntpdate
0 * * * * ntpdate pool.ntp.org
but /etc/cron.d is empty! I don't understand how the Ansible cron module works.
How can I delete my manually configured cron job with Ansible's cron module?
A user's crontab entries are held under /var/spool/cron/crontabs/$USER, as mentioned in the crontab man page:
Crontab is the program used to install, remove or list the tables used to drive the cron(8) daemon. Each user can have their own crontab, and though these are files in /var/spool/, they are not intended to be edited directly. For SELinux in MLS mode there can be even more crontabs - one for each range. For more information, see selinux(8).
As mentioned in the man page, and the above quote, you should not be editing/using these files directly and instead should use the available crontab commands such as crontab -l to list the user's crontab entries, crontab -r to remove the user's crontab or crontab -e to edit the user's crontab entries.
To remove a crontab entry by hand you can either use crontab -r to remove all the user's crontab entries or crontab -e to edit the crontab directly.
With Ansible this can be done by using the cron module's state: absent like so:
- hosts: all
  tasks:
    - name: remove ntpdate cron entry
      cron:
        name: ntpdate
        state: absent
However, this relies on the comment that Ansible puts above the crontab entry, which can be seen from this simple task:
- hosts: all
  tasks:
    - name: add crontab test entry
      cron:
        name: crontab test
        job: echo 'Testing!' > /var/log/crontest.log
        state: present
Which then sets up a crontab entry that looks like:
#Ansible: crontab test
* * * * * echo 'Testing!' > /var/log/crontest.log
Unfortunately if you have crontab entries that have been set up outside of Ansible's cron module then you are going to have to take a less clean approach to tidying up your crontab entries.
For this we will simply have to throw away the user's crontab using crontab -r, which we can invoke via the shell module with a play that looks something like the following:
- hosts: all
  tasks:
    - name: remove user's crontab
      shell: crontab -r
We can then use further tasks to set the tasks that you wanted to keep or add that properly use Ansible's cron module.
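For example, a follow-up task (a hypothetical sketch, reusing the ntpdate job from this question) that re-adds the entry under the cron module's control, so future runs can manage it by name:

```yaml
- hosts: all
  tasks:
    - name: add ntpdate cron entry under Ansible control
      cron:
        name: ntpdate
        minute: "0"
        job: ntpdate pool.ntp.org
        state: present
```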
If you have very complicated crontab entries, you can also delete them with Ansible's shell module, as shown in the example below.
---
- name: Deleting crontab entry
  hosts: ecx
  become: true
  tasks:
    - name: "decroning entry"
      shell: "crontab -l -u root | grep -v mybot | crontab -u root -"
      register: cronout
    - debug: msg="{{ cronout.stdout_lines }}"
Explanation: just replace the "mybot" string in the shell command with a unique identifier for your crontab entry. That's it. To delete multiple crontab entries with Ansible, you can pass multiple strings to grep, as shown below:
"crontab -l -u root | grep -v 'string1\|string2\|string3\|string4' | crontab -u root -"
I am trying to make a cron job for the first time, but I have some problems making it work.
Here is what I have done so far:
Linux commands:
crontab -e
My cronjob looks like this:
1 * * * * wget -qO /dev/null http://mySite/myController/myView
Now when I look in:
/var/spool/cron/crontabs/
I get the following output:
marc root
If I open the file root, I see my cron job (the one above).
However, it doesn't seem like it is running.
Is there a way I can check if it's running, or make sure that it is running?
By default, cron jobs do have a log file. It should be in /var/log/syslog (the location depends on your system). Check there and you're done. Otherwise, you can simply append the output to a log file manually:
1 * * * * wget http://mySite/myController/myView >> ~/my_log_file.txt
and see what the output is. Notice I've removed the quiet parameters (-qO /dev/null) from the wget command so that there is some output.
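Note that wget writes its status and progress messages to stderr, not stdout, so to be sure they land in the log file you can redirect both streams (same hypothetical URL as above):

```
1 * * * * wget -O /dev/null http://mySite/myController/myView >> ~/my_log_file.txt 2>&1
```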