Is there a way to source cron jobs from a file? (Linux)

One of our servers has around 20-25 different cron jobs scheduled on it.
We periodically check the cron jobs into a file in the repo using crontab -l > cron.jobs
While bringing up a new server that is a replica of the previous one (in terms of OS and deployed code base), is it possible to source the cron jobs for the new server from a file containing valid cron jobs?

If a file name is given as the sole argument to the crontab command, it is used to replace the current crontab:
crontab -l > cron.jobs
crontab cron.jobs
Alternatively, feed the file through stdin:
crontab < cron.jobs
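Putting this together for the new-server scenario, a minimal sketch (it assumes the repo file has already been deployed to the new machine along with the code base):
# On the old server: export the current crontab and check it into the repo
crontab -l > cron.jobs
# On the new server, after deploying the repo: install and verify
crontab cron.jobs
crontab -l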

Try
crontab < cron.jobs
on the new server. The jobs in cron.jobs become the new jobs, replacing the currently installed ones, so it is better to take a backup of the existing cron jobs before replacing them:
crontab -l > cron.jobs.bkp
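A sketch combining both steps on the new server (the 2>/dev/null silences the "no crontab for user" error that crontab -l prints on a fresh machine):
crontab -l > cron.jobs.bkp 2>/dev/null; crontab cron.jobs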

Related

Delete/clear active cron job

How can I delete or clear all the cron jobs I made previously and run only the new cron job I assigned? I'm using crontab -r, but it only clears the crontab listing; the jobs I deleted that way still run alongside the new one.
After I clear the cron jobs using crontab -r and then run crontab -l, it shows this output:
No crontab for trygcp
Use this command:
crontab -e
to access the crontab; there you can delete the specific line or job you created.
About the output you are getting: it only means that no crontab has been created under the username trygcp.
What you can do is this:
crontab -u [username] -e
Where:
-u defines the user
-e edits that user's crontab
This command will create a crontab under that username, but remember that you need root privileges to do this.
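If you want to remove a single job without opening the editor, a hedged alternative (not from the original answer) is to filter the current crontab and re-install it; 'pattern' below is a placeholder for text unique to the job's line:
crontab -l | grep -v 'pattern' | crontab -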

How to distribute cron jobs to the cluster machines?

I can install a new cron job using the crontab command.
For example:
crontab -e
0 0 * * * rdate -s time.bora.net && clock -w > /dev/null 2>&1
Now I have 100 machines in my cluster, and I want to install the above cron task on all of them.
How to distribute cron jobs to the cluster machines?
Thanks,
Method 1: Ansible
Ansible can distribute machine configuration to remote machines. Please refer to:
https://www.ansible.com/
Method 2: Distributed cron
You can use a distributed cron such as cronsun to assign cron jobs. It has a master node, and you can configure your jobs easily and monitor their results:
https://github.com/shunfei/cronsun
Each user's crontab is stored in /var/spool/cron/(username), so you can write your own cron jobs and distribute them to that location after acquiring root permission.
However, if another user edits the crontab at the same time, you can never be sure when it will get changed.
The links below may help:
https://ubuntuforums.org/showthread.php?t=2173811
https://forums.linuxmint.com/viewtopic.php?t=144841
https://serverfault.com/questions/347318/is-it-bad-to-edit-cron-file-manually
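Rather than writing to /var/spool/cron directly, a safer sketch is to push the crontab through the crontab command over SSH (this assumes passwordless SSH to each machine and a hypothetical hosts.txt listing the 100 hosts):
# Install the same crontab on every host in hosts.txt
while read -r host; do
  ssh "$host" 'crontab -' < cron.jobs
done < hosts.txt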
Ansible already has a cron module:
https://docs.ansible.com/ansible/2.7/modules/cron_module.html
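As an illustration only (the job line comes from the question; the inventory group all and the job name are assumptions), the cron module can even be driven ad hoc from the shell:
ansible all -b -m cron -a "name='sync clock' minute=0 hour=0 job='rdate -s time.bora.net && clock -w > /dev/null 2>&1'"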

How do I make sure a cron job will run?

I've always used cPanel to set up cron jobs, but I don't have cPanel now.
So I added a PHP file to /etc/cron.hourly, but I want to be sure it will run.
There must be some way to do this, like a command that lists all the cron jobs that exist?
I am on Debian 7 64 bit.
If you use crontab -e, the crontab entry will be syntax-checked.
If you are just editing /etc/crontab or /etc/cron.*/*, there is no automatic checking, and nothing will verify that your job's code will actually run successfully.
To list all cron jobs for a user:
crontab -u <user> -l
To list all cron jobs for root (run as root):
crontab -l
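One extra check, not mentioned in the answer above: on Debian the /etc/cron.hourly scripts are executed by run-parts, which skips file names containing a dot, so a script named myscript.php (hypothetical name) would never run. You can preview what would actually be executed with:
run-parts --test /etc/cron.hourly
# If your script is skipped, drop the extension and make it executable:
mv /etc/cron.hourly/myscript.php /etc/cron.hourly/myscript
chmod +x /etc/cron.hourly/myscript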

Synology - Cron job

I'm trying to get cron jobs or the Task Scheduler working, but I cannot figure out why my script is not taken into account.
I'm trying to simply archive a folder with:
tar -cvf /volume1/NetBackup/Backups/Monday.tgz /volume1/NetBackup/Backups/ns3268116.ovh.net/
Each time, the script starts working but never completes. Whether run from the Task Scheduler or crontab, a file Monday.tgz is created in /volume1/NetBackup/Backups/, but it is only 1024 bytes.
Synology cron is really fussy.
Here are my own personal notes for a Synology DS413j, DSM 5.2:
Hand-edit /etc/crontab as root; crontab -e isn't available
Ensure you use tabs, not spaces, to separate the columns
Your crontab changes may not survive a reboot if there are syntax problems
The who column in crontab may not be reliable. Use root in the who column and /bin/su -c '<command>' <username> to run as another user (see the sketch after these notes)
Remember that it uses ash, not bash, so check for bashisms; e.g. use >> /path/to/logfile 2>&1, not &>> /path/to/logfile
It doesn't support MAILTO=
You need to restart crond (synoservicectl --reload crond) for the new crontab to take effect
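A minimal sketch of an /etc/crontab line following these notes (the schedule, script path, log path, and user admin are all hypothetical):
# minute hour day month weekday who command -- the fields below must be separated by tabs, not spaces
0 2 * * * root /bin/su -c '/volume1/scripts/backup.sh >> /volume1/logs/backup.log 2>&1' admin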
You may try adding some diagnostics. For instance:
Add MAILTO at the top of the crontab file (via crontab -e) to receive cron errors by email:
MAILTO=username@domain.com
Redirect the output of your tar command to a file:
your command > ~/log.txt 2>&1
Check the cron log and look for anomalies, for instance in (the location may depend on your configuration):
/var/log/cron.log
You may also try searching through /var/log/messages around the time of your cron job.
Is volume1 a resource on a remote host? If so, it is worth checking that part of the system.
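Putting the diagnostics together, the tar entry could look like this sketch (the schedule is assumed; note that the previous answer warns MAILTO may not work on Synology, so the log file is the more reliable channel there):
0 2 * * 1 root tar -cvf /volume1/NetBackup/Backups/Monday.tgz /volume1/NetBackup/Backups/ns3268116.ovh.net/ > /volume1/NetBackup/tar.log 2>&1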
I agree about the really fussy nature of crontab on Synology's Linux OS.
I would certainly suggest creating the desired job as a .sh shell script and calling it via a cron task inserted using the GUI, as suggested here.
As of today (March 2017) this is the best method I have found, since working with crontab via the CLI is nearly a pain.

Details of the last run cron job on Unix-like systems?

I want to get the details of the last run cron job. If the job is interrupted due to some internal problems, I want to re-run the cron job.
Note: I don't have superuser privilege.
You can see the date, time, user and command of previously executed cron jobs using:
grep CRON /var/log/syslog
This will show all cron jobs. If you only wanted to see jobs run by a certain user, you would use something like this:
grep 'CRON.*(root)' /var/log/syslog
Note that cron logs at the start of a job, so you may want lengthy jobs to keep their own completion logs; if the system went down halfway through a job, the job would still appear in the log!
Edit: If you don't have root access, you will have to keep your own job logs. This can be done simply by tacking the following onto the end of your job command:
&& date > /home/user/last_completed
The file /home/user/last_completed would always contain the last date and time the job completed. You would use >> instead of > if you wanted to append completion dates to the file.
You could also achieve the same by putting your command in a small bash or sh script and have cron execute that file.
#!/bin/bash
[command]
date > /home/user/last_completed
The crontab for this would be:
* * * * * bash /path/to/script.bash
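To address the re-run requirement, you could extend this idea with a wrapper that re-runs the job whenever the last completion is too old; this is a sketch, and the 90-minute threshold, paths, and script name are all assumptions:
#!/bin/bash
# rerun_if_stale.sh: run the job if it has never completed,
# or if the last completion is more than 90 minutes old.
STAMP=/home/user/last_completed
if [ ! -f "$STAMP" ] || [ -n "$(find "$STAMP" -mmin +90)" ]; then
    [command] && date > "$STAMP"
fi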
/var/log/cron contains the cron job logs, but you need root privileges to read it.
On CentOS:
sudo grep CRON /var/log/cron
