I have a crontab containing around 80 entries on a server, and I would like to manage that crontab using Ansible.
Ideally I would copy the server's crontab to my Ansible directory and create an Ansible task to ensure that crontab is set on the server.
But the cron module only seems to manage individual cron entries and not whole crontab files.
Manually migrating the crontab to Ansible tasks is tedious, and even if I find or make a tool that does it automatically, I suspect the YAML file will be far less readable than the crontab file.
Any idea how I can handle that big crontab using Ansible?
I managed to find a simple way to do it. I copy the crontab file to the server and then update the crontab with the shell module if the file changed.
The crontab task:
---
- name: Ensure crontab file is up-to-date.
  copy: src=tasks/crontab/files/{{ file }} dest={{ home }}/cronfile
  register: result

- name: Ensure crontab file is active.
  shell: crontab cronfile
  when: result is changed
In my playbook:
- include: tasks/crontab/main.yml file=backend.cron
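Note that include: with inline key=value parameters is deprecated on newer Ansible releases (and later removed); a sketch of the modern equivalent:

- include_tasks: tasks/crontab/main.yml
  vars:
    file: backend.cron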
I solved this problem like this:
- name: Save out crontabs
  copy: src=../files/crontabs/{{ item }} dest=/var/spool/cron/{{ item }} owner={{ item }} mode=0600
  notify: restart cron
  with_items:
    - root
    - ralph
    - jim
    - bob
The advantage of this method (versus writing to an intermediate file) is that any manual edits of the live crontab get removed and replaced with the Ansible-controlled version. The disadvantage is that it bypasses the crontab command and writes into cron's spool directory directly.
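For completeness, the restart cron handler the task notifies is not shown above; a minimal sketch, assuming the service is named cron (it is crond on RHEL-family systems):

handlers:
  - name: restart cron
    service: name=cron state=restarted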
Maintain idempotency by doing it this way:
- name: crontab
  block:
    - name: copy crontab file
      copy:
        src: /data/vendor/home/cronfile
        dest: /home/mule/cronfile
        mode: '0644'
      register: result

    - name: ensure crontab file is active
      command: crontab /home/mule/cronfile
      when: result.changed
  rescue:
    - name: delete crontab file
      file:
        state: absent
        path: /home/mule/cronfile
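One detail worth noting: crontab FILE installs the file for whichever user the task runs as. To target a specific account explicitly (requires root), crontab's -u flag can be used; a sketch:

    - name: ensure crontab file is active for mule
      command: crontab -u mule /home/mule/cronfile
      when: result.changed
      become: true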
In the project's settings for merge requests, we have the option "Pipelines must succeed" checked, so any merge request that does not produce a pipeline cannot be merged.
However, the majority of files at the repository root do not actually need a pipeline, but because of the condition above, a job must be defined for changes there as well. What we want is to run a specific empty job only when there are changes to root-level files, and not run it when there are also changes in folders and subfolders (since those changes already trigger the pipeline). As an example:
folder/subfolder/files
folder/file
file1
file2
If only file1 and/or file2 are changed, run the job.
If file1 and/or file2 and anything under folder are changed, do not run the job.
The current definition looks like this, but it does not work:
root_changes:
  stage: pre_build
  image: alpine:latest
  tags: ...
  variables:
    GIT_STRATEGY: none
  script: date
  rules:
    - if: '$CI_MERGE_REQUEST_ID'
      changes:
        - ./**/*
      when: never
    - if: '$CI_MERGE_REQUEST_ID'
      changes:
        - ./*
      when: always
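For reference, GitLab evaluates changes: patterns with Ruby fnmatch semantics where * does not cross directory separators, so a bare "*" matches only top-level files. A hedged sketch of one way to express the intent, assuming the directory names are known (folder is a placeholder):

root_changes:
  stage: pre_build
  image: alpine:latest
  variables:
    GIT_STRATEGY: none
  script: date
  rules:
    # Anything inside a directory already triggers the real pipeline,
    # so never run this job for such changes.
    - if: '$CI_MERGE_REQUEST_ID'
      changes:
        - "folder/**/*"
      when: never
    # Only top-level files changed: run the empty job so the merge
    # request still gets a passing pipeline.
    - if: '$CI_MERGE_REQUEST_ID'
      changes:
        - "*"
      when: always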
I'm root, and I saw there is a lot of content in /etc/crontab, which I thought was root's cron job configuration. When I used crontab -e, I saw nothing in the editor, and after I quit crontab -e, what I added was not found in /etc/crontab. So, where is root's cron job configuration stored? And other users'?
It is stored in the directory:
/var/spool/cron/crontabs
with one file per user.
From man crontab (at least on my Ubuntu 13):
There is one file for each user's crontab under the
/var/spool/cron/crontabs directory. Users are not allowed to edit the
files under that directory directly to ensure that only users allowed
by the system to run periodic tasks can add them, and only
syntactically correct crontabs will be written there. This is
enforced by having the directory writable only by the crontab group
and configuring the crontab command with the setgid bit set for that
specific group.
It is distribution-dependent, but mostly it is somewhere under /var/spool/cron/, with one file per username.
For those looking for Fedora/CentOS/RHES, it's:
/var/spool/cron/<username>
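The underlying distinction: /etc/crontab is the system-wide crontab (its entries carry an extra user field), while crontab -e edits the invoking user's file in the spool directory, which is why edits made there never show up in /etc/crontab. To inspect per-user crontabs as root (alice is a placeholder username):

crontab -l                        # root's own crontab
crontab -l -u alice               # another user's crontab
ls -l /var/spool/cron/crontabs/   # spool location on Debian/Ubuntu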
I have a QNAP NAS running Google Drive sync so that my QNAP, my computers, and Google Drive all stay in sync.
When I create a file on my work computer and get home to the QNAP, I get an access denied error on the file I created at work.
If I view the permissions I can see they are set incorrectly. From the QNAP web manager I simply right click the folder containing my files and set permissions to "Reapply and apply to subfolders/files".
How would one go about doing the above via a cron job that runs say every 5 minutes?
I had a similar problem myself and also made a cron job for it.
Start off by making a script in an easy-to-find place. I used /share/MD0_DATA/ because all the shares live there.
Create a file like perms.sh and add the following:
#!/bin/bash
# Abort if the share is missing, so the recursive chmod/chown
# never runs in the wrong directory.
cd /share/MD0_DATA/(folder you want to apply this) || exit 1
chmod -R 775 *
chown -R nobody:nogroup *
I used nobody:nogroup just as an example; you can use any user and group you want.
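Before wiring the script into cron, make sure it is executable, or the job will fail:

chmod +x /share/MD0_DATA/perms.sh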
Now you need to add this script to crontab.
To see what's in your crontab, use:
crontab -l
To edit the crontab, use:
crontab -e
This editor works like vi. If you don't like vi and want to edit the file directly, open:
/etc/config/crontab
Add this line to your crontab:
*/5 * * * * /share/MD0_DATA/perms.sh
The */5 represents a 5-minute interval.
Then you need to let crontab know about the new commands:
crontab /etc/config/crontab
I hope this helped you.
I have made a backup script that works well: it creates a backup zip file and then uploads it via FTP to another server. It's located here: /home/www/web5/backup/backup
Then I decided to put this script into crontab so it runs automatically.
I'm doing (as root)
crontab -e
On the blank row I put:
*/1 * * * * /home/www/web5/backup/backup
Escape key, :wq!, Enter
I set it to be done each minute to test it.
Then I went to the FTP folder where the script uploads the files. I keep waiting, but nothing happens: the directory is empty after each refresh in my Total Commander.
But when I execute /home/www/web5/backup/backup manually (as root as well), it works just fine and I see the new file on the FTP server.
What's wrong? This server is something of a legacy box, so I may not know everything about it. Where should I check first? The OS is
Linux s090 2.6.18.8-0.13-default
(a very old CentOS).
Thanks for any help!
UPD: /home/www/web5/backup/backup has chmod 777
UPD2: /var/log/cron doesn't exist. But /var/log/ directory exists and contains logs of apache, mail, etc.
*/1 may be the problem. Just use *.
* * * * * /home/www/web5/backup/backup
Also, make sure /home/www/web5/backup/backup is executable with chmod 775 /home/www/web5/backup/backup
Check /var/log/cron as well. That may show errors leading to a fix.
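Since /var/log/cron does not exist on this box, the quickest way to see what cron actually did is to capture the job's own output; a sketch (the log path is arbitrary):

* * * * * /home/www/web5/backup/backup >> /tmp/backup-cron.log 2>&1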
From Crontab – Quick Reference
Crontab Environment
cron invokes the command from the user's HOME
directory with the shell (/usr/bin/sh). cron supplies a default
environment for every shell, defining:
HOME=user’s-home-directory
LOGNAME=user’s-login-id
PATH=/usr/bin:/usr/sbin:.
SHELL=/usr/bin/sh
Users who desire to have their .profile executed must explicitly do so
in the crontab entry or in a script called by the entry.
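This default environment is often the culprit when a job works from an interactive shell but fails under cron: anything in /usr/local/bin, or exported by .profile, is missing. Two sketches of common remedies (the paths are illustrative): set PATH at the top of the crontab,

PATH=/usr/local/bin:/usr/bin:/bin
* * * * * /home/www/web5/backup/backup

or source the profile in the entry itself:

* * * * * . $HOME/.profile; /home/www/web5/backup/backup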
This should be pretty straightforward, but I can't seem to get it working despite reading several tutorials via Google, Stack Overflow, and the man page.
I created a cron job to run every minute (for testing), and all it basically does is spit out the date.
crontab -l
* * * * * /var/test/cron-test.sh
The cron-test.sh file is:
echo "$(date): cron job run" >> test.log
After waiting many minutes, I never see a test.log file.
I can call cron-test.sh manually and get it to output and append.
I'm wondering what I'm missing. I'm also doing this as root. I wonder if I'm misunderstanding something about root's location? Is my path messed up because it's appending some kind of home directory to it?
Thanks!
UPDATE -----------------
It does appear that I'm not following something with the directory path. If I change directory to root's home directory:
# cd
I see my output file "test.log" with all the dates printed out every minute.
So, I will update my question to be: what am I misunderstanding about the path? Is there a term I need to use to have it start from the root directory?
Cheers!
UPDATE 2 -----------------
Ok, so I got what I was missing.
The crontab setup was working right: it was finding the script via its path from the root directory, i.e.:
* * * * * /var/test/cron-test.sh
But the "cron-test.sh" file was not set relative to the root directory. Thus, when "root" ran the script, it dumped it back into "root's" home directory. My thinking was that since the script was being run in "/var/test/" that the file would also be dumped in "/var/test/".
Instead, I need to set the location in the script file to dump it out correctly.
echo "$(date): cron job run" >> /var/test/test.log
And that works.
You have not provided any path for test.log, so it will be created in the current working directory (which for a cron job is the user's home directory by default).
You should update your script and provide the full path, e.g.:
echo "$(date): cron job run" >> /var/log/test.log
To restate the answer you gave yourself more explicitly: cron jobs are started in the home directory of the executing user, in your case root. That's why a file written with a relative path ended up in ~root.
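If relative paths inside a script are preferred, a common pattern is to change into the script's own directory first; a sketch, assuming the script lives in /var/test:

#!/bin/sh
# Run from the script's directory so relative paths resolve here,
# not in the invoking user's HOME.
cd /var/test || exit 1
echo "$(date): cron job run" >> test.log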