I have a script at /etc/cron.daily/backup.sh. It is executable and runs fine when started manually, but it never runs from cron. I have read the manual and searched around, but I haven't found a solution.
ls -l /etc/cron.daily/
total 52
-rwxr-xr-x 1 root root 8686 2009-04-17 10:27 apt
-rwxr-xr-x 1 root root 314 2009-02-10 19:45 aptitude
-rwxr-xr-x 1 root root 103 2011-05-22 19:08 backup.sh
-rwxr-xr-x 1 root root 502 2008-11-05 03:43 bsdmainutils
-rwxr-xr-x 1 root root 89 2009-01-27 00:55 logrotate
-rwxr-xr-x 1 root root 954 2009-03-19 16:17 man-db
-rwxr-xr-x 1 root root 646 2008-11-05 03:37 mlocate
The cron job filename can't contain a period on certain Ubuntu versions. See this; in particular, this quote within it:
Although the directories contain periods in their names, run-parts
will not accept a file name containing a period and will fail silently
when encountering them
Strictly speaking, this is a problem with run-parts, which the Ubuntu cron invokes, and not with cron itself. Still, it's what bit me.
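If that's what's happening here, a possible fix (a sketch, assuming the paths from the question and the Debian/Ubuntu run-parts) is to rename the script so its name contains no dot, then let run-parts confirm it would be picked up:
# rename so run-parts will accept it (the name must not contain a dot)
sudo mv /etc/cron.daily/backup.sh /etc/cron.daily/backup
# list what run-parts would execute; backup should now appear
run-parts --test /etc/cron.daily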
Please check:
1a) Is the script executable, and does it have the correct owner/group settings?
1b) Does it start with the correct #! line, specifying the full path to the shell you're using,
e.g. #!/bin/bash?
2) Does the script generate an error while being executed?
E.g. can you write to a log file from it, and do you see the log messages? (A sketch of this follows below.)
Also: check the email inbox of the user who owns the crontab (e.g. root) -- errors are mailed to that user.
What does the output of ls -l /etc/cron.daily/ look like? Can you post that?
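For point 2, a minimal sketch of what such a self-logging script could look like (the log path is just an example of mine):
#!/bin/bash
# log start and end times so we can tell whether cron ran the script at all
echo "backup started: $(date)" >> /var/tmp/backup-debug.log
# ... the actual backup commands go here ...
echo "backup finished: $(date)" >> /var/tmp/backup-debug.log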
NOTE:
you can always create a crontab entry for this yourself, outside of those cron.xxx directories ;-)
See: man 5 crontab
10 1 * * * /somewhere/backup.sh >> /somewhere/backup.log 2>&1
this has the advantage that you get to pick the exact time when it runs (e.g. 1:10am here), and you can redirect STDERR and STDOUT to append to a log file for that particular script.
For testing purposes you could run it every 10 minutes, like this:
0,10,20,30,40,50 * * * * /somewhere/backup.sh >> /somewhere/backup.log 2>&1
Run touch /somewhere/backup.log first to make sure the log file exists.
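On crons that support step values (see the "step values" paragraph in man 5 crontab), the same test schedule can be written more compactly:
*/10 * * * * /somewhere/backup.sh >> /somewhere/backup.log 2>&1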
There are two directories that contain these files.
The first one is /usr/local/nagios/etc/hosts:
[root@localhost hosts]$ ll
total 12
-rw-rw-r-- 1 apache nagios 1236 Feb 7 10:10 10.80.12.53.cfg
-rw-rw-r-- 1 apache nagios 1064 Feb 27 22:47 10.80.12.62.cfg
-rw-rw-r-- 1 apache nagios 1063 Feb 22 12:02 localhost.cfg
And the second one is /usr/local/nagios/etc/services:
[root@localhost services]$ ll
total 20
-rw-rw-r-- 1 apache nagios 2183 Feb 27 22:48 10.80.12.62.cfg
-rw-rw-r-- 1 apache nagios 1339 Feb 13 10:47 Check usage _etc.cfg
-rw-rw-r-- 1 apache nagios 7874 Feb 22 11:59 localhost.cfg
And I have a script that goes through a file in the hosts directory and pastes some lines from that file into the corresponding file in the services directory.
The script is run like this:
./nagios-contacts.sh /usr/local/nagios/etc/hosts/10.80.12.62.cfg /usr/local/nagios/etc/services/10.80.12.62.cfg
How can I write another script that calls my script for every file in the hosts directory, passing along the file with the same name in the services directory?
In my script I'm pulling out contacts from 10.80.12.62.cfg in the hosts directory and appending them to the file with the same name in the services directory.
Don't use ls output as the input to a for loop; use the shell's built-in wildcards instead. See why it's not a good idea.
for f in /usr/local/nagios/etc/hosts/*.cfg
do
    basef=$(basename "$f")
    ./nagios-contacts.sh "$f" "/usr/local/nagios/etc/services/${basef}"
done
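One note from the listings above: hosts/ contains 10.80.12.53.cfg, which has no counterpart in services/. If your script should only run when both files exist, a small guard (my addition, not part of the original answer) handles that:
for f in /usr/local/nagios/etc/hosts/*.cfg
do
    basef=$(basename "$f")
    svc="/usr/local/nagios/etc/services/${basef}"
    # skip host files that have no matching services file
    [ -e "$svc" ] || continue
    ./nagios-contacts.sh "$f" "$svc"
done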
It sounds like you just need to do some iteration.
echo $(pwd)
for file in $(ls); do ./nagios-contacts.sh "$file"; done
So it will loop over all files in the current directory.
You can also modify it to use an absolute path:
abspath=$1
for file in $(ls "$abspath"); do ./nagios-contacts.sh "$abspath/$file"; done
which would loop over all files in a set directory, and then pass the abspath/filename into your script.
I was trying to redirect the output of the top command to a particular file every 5 minutes with the command below.
top -b -n 1 > /var/tmp/TOP_USAGE.csv.$(date +"%I-%M-%p_%d-%m-%Y")
-rw-r--r-- 1 root root 0 Dec 9 17:20 TOP_USAGE.csv.05-20-PM_09-12-2015
-rw-r--r-- 1 root root 0 Dec 9 17:25 TOP_USAGE.csv.05-25-PM_09-12-2015
-rw-r--r-- 1 root root 0 Dec 9 17:30 TOP_USAGE.csv.05-30-PM_09-12-2015
-rw-r--r-- 1 root root 0 Dec 9 17:35 TOP_USAGE.csv.05-35-PM_09-12-2015
Hence I made a very small (one-line) shell script for this, so that I can run it every 5 minutes via a cron job.
The problem is that when I run this script manually I can see the output in the file; however, when the script runs automatically, a file is generated every 5 minutes but it contains no data (i.e. the file is empty).
Can anyone please help me with this?
I have now modified the script as shown below, and it still behaves the same.
#!/bin/sh
PATH=$(/usr/bin/getconf PATH)
/usr/bin/top -b -n 1 > /var/tmp/TOP_USAGE.csv.$(date +"%I-%M-%p_%d-%m-%Y")
I met the same problem as you.
The top command must be run with the -b (batch) option, and its output saved to a variable before we use it.
The script is below:
# log the timestamp, then capture top's batch output and keep only the mysql lines
date >> /tmp/mysql-mem-moniter.log
MEM=$(/usr/bin/top -b -n 1 -u mysql)
echo "$MEM" | grep mysql >> /tmp/mysql-mem-moniter.log
Most likely the environment passed to your script from cron is too minimal. In particular, PATH may not be what you think it is (no profiles are read by scripts started from cron).
Place PATH=$(/usr/bin/getconf PATH) at the start of your script, then run it with
/usr/bin/env -i /path/to/script
Once that works without error, it's ready for cron.
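Once the env -i run works, a crontab entry for the 5-minute schedule could look like this (the script path is an assumption; use wherever you saved it):
*/5 * * * * /usr/local/bin/top_usage.sh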
What can I do to make this script run daily?
If I manually run the script, it works. I can see that it did what it's supposed to do. (backup files) However, it will not run as a cron.daily script. I've let it go for days without touching it -- and it never runs.
The actual script is here /var/www/myapp/backup.sh
There is a symlink to it here /etc/cron.daily/myapp_backup.sh -> /var/www/myapp/backup.sh
The cron log at /var/log/cron shows anacron running this script:
Aug 19 03:09:01 ip-123-456-78-90 anacron[31537]: Job `cron.daily' started
Aug 19 03:09:01 ip-123-456-78-90 run-parts(/etc/cron.daily)[31545]: starting myapp_backup.sh
Aug 19 03:09:01 ip-123-456-78-90 run-parts(/etc/cron.daily)[31559]: finished myapp_backup.sh
Yet there is no evidence that the script actually did anything.
Here is the security info on these files:
ls -la /etc/cron.daily
<snip>
lrwxrwxrwx 1 root root 25 Aug 12 21:18 myapp_backup.sh -> /var/www/myapp/backup.sh
</snip>
ls -la /var/www/myapp
<snip>
drwxr-xr-x 2 root root 4096 Aug 13 13:55 .
drwxr-xr-x 10 root root 4096 Jul 12 01:00 ..
-rwxr-xr-x 1 root root 407 Aug 12 23:37 backup.sh
-rw-r--r-- 1 root root 33 Aug 12 21:13 list.txt
</snip>
The file called list.txt is used by backup.sh.
The script just runs tar to create an archive.
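For context, a hypothetical sketch of what such a script might look like (the archive destination and options are my guesses, not the asker's actual code):
#!/bin/bash
# archive the paths listed in list.txt into a dated tarball
tar -czf /var/backups/myapp-$(date +%F).tar.gz -T /var/www/myapp/list.txt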
From the cron man page on a Debian/Ubuntu system:
the files under these directories have to pass some sanity checks including the following: be executable, be owned by root, not be writable by group or other and, if symlinks, point to files owned by root. Additionally, the file names must conform to the filename requirements of run-parts: they must be entirely made up of letters, digits and can only contain the special signs underscores ('_') and hyphens ('-'). Any file that does not conform to these requirements will not be executed by run-parts. For example, any file containing dots will be ignored.
So:
the file needs to be owned by root
if it is a symlink, the target file needs to be owned by root
if it is a symlink, the link name should NOT contain dots
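If your run-parts enforces those restrictions, the dot in myapp_backup.sh is enough to keep it from running. A sketch of the fix, assuming the paths from the question and the Debian/Ubuntu run-parts:
# rename the symlink so its name contains no dot (the target stays the same)
sudo mv /etc/cron.daily/myapp_backup.sh /etc/cron.daily/myapp_backup
# list what run-parts would execute; myapp_backup should now appear
run-parts --test /etc/cron.daily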
I had a similar situation with cron.hourly and awstats processing.
I THINK it is related to SELinux and anacron not having the same powers/permissions as cron.
The ACTUAL solution defeated me (so far).
MY WORKAROUND SOLUTION: Run the job via root's cron entries (crontab -e ) and simply schedule it hourly.
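As a sketch, the workaround entry in root's crontab looked roughly like this (the script path and log file are from my setup; adjust to yours):
# run at 5 minutes past every hour instead of relying on cron.hourly/run-parts
5 * * * * /usr/local/bin/awstats-update.sh >> /var/log/awstats-cron.log 2>&1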
I was just looking at our server root, and one of our old developers (who has since left) seems to have a daily cron job that writes a file to the server root every day.
I had a look at the daily cron jobs, and I guess this is the culprit, as it's the only wget (the others are PHP scripts which I know run silently):
wget http://www.example.com/?option=com_portal&view=ops&tmpl=component >/dev/null 2>&1
Each day the server root has a file like this written:
-rw-r--r-- 1 root root 11987 May 12 03:45 login.360
-rw-r--r-- 1 root root 11987 May 13 03:45 login.361
-rw-r--r-- 1 root root 11987 May 14 03:45 login.362
-rw-r--r-- 1 root root 11987 May 15 03:45 login.363
A new one appears every day; the content of each file is the HTML page source. How can I safely modify the cron job to stop output like this until I do some further investigation into removing the cron job altogether?
My question basically is: how can I still request the page but prevent the output from being written to the server root?
How about making wget dump the data to /dev/null?
wget -O/dev/null http://www.example.com/blab
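Applied to the entry from the question, that might look like this (quoting the URL also keeps the shell from treating the & characters in the query string as background operators):
wget -O /dev/null 'http://www.example.com/?option=com_portal&view=ops&tmpl=component' > /dev/null 2>&1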
My crontab isn't running and I'm trying to figure out why. I've created a symbolic link within /etc/cron.d to /var/www/mysite.crontab
user@ip-xxxxxxxxxx:/etc/cron.d$ ll
total 20
drwxr-xr-x 2 root root 4096 Apr 11 03:48 ./
drwxr-xr-x 96 root root 4096 Apr 16 00:50 ../
lrwxrwxrwx 1 root root 30 Apr 11 03:47 mysite.crontab -> /var/www/mysite.crontab
-rw-r--r-- 1 root root 124 Feb 27 2012 drupal7
-rw-r--r-- 1 root root 544 Sep 12 2012 php5
-rw-r--r-- 1 root root 102 Apr 2 2012 .placeholder
The actual cron file is...
#Purge old deals
4 1 * * * www-data wget -q -O- http://www.mysite.com/cron/clean > /dev/null 2>&1;
Oddly enough, the problem is with the name of the file. You are not permitted to use a . as part of the name of a file in the /etc/cron.d directory.
The logic for this is in the database.c file, in the function valid_name. Renaming the file to something like mysite_crontab should fix the issue.
In general, the filename should probably just be a simple name like mysite; the fact that it's in this directory already implies that it's a cron file.
The file being pointed to must also be owned by root; this is stated in the man page where support for the /etc/cron.d directory is described:
Support for /etc/cron.d is included in the cron daemon itself, which handles this location as the system-wide crontab spool. This directory can contain any file defining tasks following the format used in /etc/crontab, i.e. unlike the user cron spool, these files must provide the username to run the task as in the task definition.
Files in this directory have to be owned by root, do not need to be executable (they are configuration files, just like /etc/crontab) and must conform to the same naming convention as used by run-parts(8): they must consist solely of upper- and lower-case letters, digits, underscores, and hyphens. This means that they cannot contain any dots. If the -l option is specified to cron (this option can be setup through /etc/default/cron, see below), then they must conform to the LSB namespace specification, exactly as in the --lsbsysinit option in run-parts.
The intended purpose of this feature is to allow packages that require finer control of their scheduling than the /etc/cron.{hourly,daily,weekly,monthly} directories to add a crontab file to /etc/cron.d. Such files should be named after the package that supplies them.
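Putting both requirements together, a sketch of the fix for the layout in the question (names taken from the question; adjust as needed):
# give the entry a name without dots; the symlink keeps pointing at the same target
sudo mv /etc/cron.d/mysite.crontab /etc/cron.d/mysite
# make sure the file cron actually reads (the symlink target) is owned by root
sudo chown root:root /var/www/mysite.crontab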