Different timestamps when backup files using cron - linux

I need some help understanding how cron affects the timestamps of my backup files.
I have a shell script, running on RHEL 6.7 and Solaris 10, that backs up log files. It moves each log file to a backup directory and gzips it.
Here is the script:
#!/bin/bash
# Defined variable
dirLog=/app/rbt3/prod/cda/logs
dirBackup=/app/rbt3/prod/cda/logs/backup
# Change directory to CDA logfile
cd $dirLog
# Backup mechanism
for file in *.log.* ; do
#echo "AllFiles -> $file"
echo " Moving file $file to directory $dirBackup "
/bin/mv "$dirLog/$file" "$dirBackup"
echo " start gzip of file [$file]..... "
/bin/gzip "$dirBackup/$file"
echo " done gzip of file [$file]..... "
done
The script is registered in crontab to run every day at 1:20 AM.
20 1 * * * /app/prod/logs/backupLog.sh
Here are the backup files that cron created:
-rw-r--r-- 1 user3 user 36344 Nov 18 11:59 alarm.log.20161117.gz
-rw-r--r-- 1 user3 user 35085 Nov 19 11:59 rsync.log.20161117.gz
-rw-r--r-- 1 user3 user 35018 Nov 20 11:59 trace.log.20161117.gz
As far as I know, when we register a script in cron for a specific time, it runs and creates its files at exactly the time cron specifies (please correct me if I'm wrong). But in my case, the timestamps on the backup files differ from the cron schedule. Did I miss something?
Thanks.

When you move a file, its timestamps do not change, and gzip preserves the original file's timestamps on the compressed output. That is why you see timestamps that differ from the cron schedule.
$ ls -l
-rw-r--r-- 1 user3 user 36344 Nov 18 11:59 alarm.log.20161117.gz
-rw-r--r-- 1 user3 user 35085 Nov 19 11:59 rsync.log.20161117.gz
-rw-r--r-- 1 user3 user 35018 Nov 20 11:59 trace.log.20161117.gz
ls -l shows the last modification time of each log file, not the time the backup ran.
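A quick sketch (using a scratch directory under /tmp with a hypothetical file name) demonstrates that both mv and gzip preserve the original modification time, so the dates you see are the log files' last-write times, not the time cron ran:

```shell
mkdir -p /tmp/ts-demo/backup
cd /tmp/ts-demo

# Create a file and give it an "old" modification time (Nov 17 2016, 11:59).
touch -t 201611171159 app.log.20161117

# Move and compress it, as the backup script does.
mv app.log.20161117 backup/
gzip backup/app.log.20161117

# The .gz file still carries the original mtime, not the current time.
ls -l backup/
```

If you want the archives stamped with the time the backup ran instead, you can touch each .gz file after compressing it.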

Related

How to take incremental backup of a large file (100GB) with rsync?

I have a scenario where I have to take an incremental backup every 5 minutes of a large file which is around 100GB on the local machine if the content of the file changes.
Filename: example.txt
Backups: example.txt.00:05, example.txt.00:10, example.txt.00:15 and
so on.
What will be the most optimized way to do this?
If I opt for diff, then it will take a lot of time to check the content of the file.
I would prefer doing it with rsync but I am unsure about how it will manage the multiple files.
I figured this out from the man page of rsync.
-b, --backup make backups (see --suffix & --backup-dir)
--backup-dir=DIR make backups into hierarchy based in DIR
--suffix=SUFFIX backup suffix (default ~ w/o --backup-dir)
Script:
#!/bin/bash
while true
do
    timestamp=$(date +"%H:%M:%S")
    echo "$timestamp"
    rsync -avschz --backup --backup-dir=archive --suffix="-$timestamp" example.txt backup
    sleep 300
done
The above script will create an archive directory inside the backup directory and rename the files accordingly.
Output:
imohit:rsync-script ethicalmohit$ ls -l backup/archive/
total 88064
-rw-r--r-- 1 ethicalmohit staff 18874368 Mar 25 03:06 example.txt-03:15:41
-rw-r--r-- 1 ethicalmohit staff 12582912 Mar 25 03:17 example.txt-03:25:42
-rw-r--r-- 1 ethicalmohit staff 13631488 Mar 25 03:25 example.txt-03:30:42

Script that calls another script to execute on every file in a directory

There are two directories that contains these files:
First one /usr/local/nagios/etc/hosts
[root@localhost hosts]$ ll
total 12
-rw-rw-r-- 1 apache nagios 1236 Feb 7 10:10 10.80.12.53.cfg
-rw-rw-r-- 1 apache nagios 1064 Feb 27 22:47 10.80.12.62.cfg
-rw-rw-r-- 1 apache nagios 1063 Feb 22 12:02 localhost.cfg
And the second one /usr/local/nagios/etc/services
[root@localhost services]$ ll
total 20
-rw-rw-r-- 1 apache nagios 2183 Feb 27 22:48 10.80.12.62.cfg
-rw-rw-r-- 1 apache nagios 1339 Feb 13 10:47 Check usage _etc.cfg
-rw-rw-r-- 1 apache nagios 7874 Feb 22 11:59 localhost.cfg
And I have a script that goes through a file in the hosts directory and pastes some lines from that file into the corresponding file in the services directory.
The script is run like this:
./nagios-contacts.sh /usr/local/nagios/etc/hosts/10.80.12.62.cfg /usr/local/nagios/etc/services/10.80.12.62.cfg
How can I write another script that calls my script for every file in the hosts directory, running it against the file with the same name in the services directory?
In my script I'm pulling the contacts out of 10.80.12.62.cfg in the hosts directory and appending them to the file with the same name in the services directory.
Don't use ls output as input to a for loop; use the shell's built-in wildcards instead (parsing ls output is famously error-prone).
for f in /usr/local/nagios/etc/hosts/*.cfg
do
basef=$(basename "$f")
./nagios-contacts.sh "$f" "/usr/local/nagios/etc/services/${basef}"
done
It sounds like you just need to do some iteration.
echo "$(pwd)"
for file in *; do ./nagios-contacts.sh "$file"; done
This will loop over all files in the current directory.
You can also modify it as well by doing something more absolute.
abspath=$1
for file in "$abspath"/*; do ./nagios-contacts.sh "$file"; done
which would loop over all files in a set directory, and then pass the abspath/filename into your script.

Top Command Output is Empty when run from cron

I was trying to redirect the output of the top command to a timestamped file every 5 minutes with the command below.
top -b -n 1 > /var/tmp/TOP_USAGE.csv.$(date +"%I-%M-%p_%d-%m-%Y")
-rw-r--r-- 1 root root 0 Dec 9 17:20 TOP_USAGE.csv.05-20-PM_09-12-2015
-rw-r--r-- 1 root root 0 Dec 9 17:25 TOP_USAGE.csv.05-25-PM_09-12-2015
-rw-r--r-- 1 root root 0 Dec 9 17:30 TOP_USAGE.csv.05-30-PM_09-12-2015
-rw-r--r-- 1 root root 0 Dec 9 17:35 TOP_USAGE.csv.05-35-PM_09-12-2015
Hence I made a very small (one-line) shell script for this, so that I can run it every 5 minutes via a cron job.
The problem is that when I run the script manually, I can see the output in the file. However, when it runs automatically from cron, a file is generated every 5 minutes but contains no data (i.e. the file is empty).
Can anyone please help me with this?
I have now modified the script as below, and it is still the same.
#!/bin/sh
PATH=$(/usr/bin/getconf PATH)
/usr/bin/top -b -n 1 > /var/tmp/TOP_USAGE.csv.$(date +"%I-%M-%p_%d-%m-%Y")
I met the same problem as you. The -b option must be added to the top command, and its output must be saved to a variable before use.
The script is below:
date >> /tmp/mysql-mem-moniter.log
MEM=$(/usr/bin/top -b -n 1 -u mysql)
echo "$MEM" | grep mysql >> /tmp/mysql-mem-moniter.log
Most likely the environment passed to your script from cron is too minimal. In particular, PATH may not be what you think it is (no profiles are read by scripts started from cron).
Place PATH=$(/usr/bin/getconf PATH) at the start of your script, then run it with
/usr/bin/env -i /path/to/script
Once that works without error, it's ready for cron.
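Putting that advice together, a cron-safe version of the one-liner might look like the sketch below (the output path follows the question; the rest is an assumption):

```shell
#!/bin/sh
# Cron passes a minimal environment, so set PATH explicitly
# instead of relying on an interactive shell profile.
PATH=$(/usr/bin/getconf PATH)
export PATH

# Write top's batch-mode output to a timestamped file, as in the question.
outfile="/var/tmp/TOP_USAGE.csv.$(date +%I-%M-%p_%d-%m-%Y)"
top -b -n 1 > "$outfile"
```

With PATH set inside the script, it behaves the same whether it is started from a login shell or from cron's bare environment.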

anacron script in cron.daily not running via symlink

What can I do to make this script run daily?
If I manually run the script, it works. I can see that it did what it's supposed to do. (backup files) However, it will not run as a cron.daily script. I've let it go for days without touching it -- and it never runs.
The actual script is here /var/www/myapp/backup.sh
There is a symlink to it here /etc/cron.daily/myapp_backup.sh -> /var/www/myapp/backup.sh
The cron log at /var/log/cron shows anacron running this script:
Aug 19 03:09:01 ip-123-456-78-90 anacron[31537]: Job `cron.daily' started
Aug 19 03:09:01 ip-123-456-78-90 run-parts(/etc/cron.daily)[31545]: starting myapp_backup.sh
Aug 19 03:09:01 ip-123-456-78-90 run-parts(/etc/cron.daily)[31559]: finished myapp_backup.sh
Yet there is no evidence that the script actually did anything.
Here is the security info on these files:
ls -la /etc/cron.daily
<snip>
lrwxrwxrwx 1 root root 25 Aug 12 21:18 myapp_backup.sh -> /var/www/myapp/backup.sh
</snip>
ls -la /var/www/myapp
<snip>
drwxr-xr-x 2 root root 4096 Aug 13 13:55 .
drwxr-xr-x 10 root root 4096 Jul 12 01:00 ..
-rwxr-xr-x 1 root root 407 Aug 12 23:37 backup.sh
-rw-r--r-- 1 root root 33 Aug 12 21:13 list.txt
</snip>
The file called list.txt is used by backup.sh.
The script just runs tar to create an archive.
From the cron manpage of a debian/ubuntu system:
the files under these directories have to pass some sanity checks including the following: be executable, be owned by root, not be writable by group or other and, if symlinks, point to files owned by root. Additionally, the file names must conform to the filename requirements of run-parts: they must be entirely made up of letters, digits and can only contain the special signs underscores ('_') and hyphens ('-'). Any file that does not conform to these requirements will not be executed by run-parts. For example, any file containing dots will be ignored.
So:
the file needs to be owned by root
if it is a symlink, the target file needs to be owned by root
if it is a symlink, the link name must NOT contain dots
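On Debian/Ubuntu you can ask run-parts itself which files it would execute. This sketch (scratch directory and file names are hypothetical) shows a dotted name being silently skipped:

```shell
mkdir -p /tmp/rp-demo
printf '#!/bin/sh\necho ok\n' > /tmp/rp-demo/good_script
printf '#!/bin/sh\necho ok\n' > /tmp/rp-demo/bad.script
chmod 755 /tmp/rp-demo/good_script /tmp/rp-demo/bad.script

# --test prints what would run without running it;
# only good_script should appear, because bad.script contains a dot.
run-parts --test /tmp/rp-demo
```

Running the same command against /etc/cron.daily quickly shows whether your symlink passes run-parts' filename checks.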
I had a similar situation with cron.hourly and awstats processing.
I THINK it is related to SELinux and anacron not having the same powers/permissions as cron.
The ACTUAL solution defeated me (so far).
MY WORKAROUND SOLUTION: Run the job via root's cron entries (crontab -e ) and simply schedule it hourly.

cron job not started daily

I have a script at /etc/cron.daily/backup.sh. The file is executable and runs fine when started manually, but it never starts on its own. I have read the manual and searched around, but have not found a solution.
ls -l /etc/cron.daily/
total 52
-rwxr-xr-x 1 root root 8686 2009-04-17 10:27 apt
-rwxr-xr-x 1 root root 314 2009-02-10 19:45 aptitude
-rwxr-xr-x 1 root root 103 2011-05-22 19:08 backup.sh
-rwxr-xr-x 1 root root 502 2008-11-05 03:43 bsdmainutils
-rwxr-xr-x 1 root root 89 2009-01-27 00:55 logrotate
-rwxr-xr-x 1 root root 954 2009-03-19 16:17 man-db
-rwxr-xr-x 1 root root 646 2008-11-05 03:37 mlocate
The cron job filename can't have a period in it on certain Ubuntu versions. In particular, note this quote:
Although the directories contain periods in their names, run-parts
will not accept a file name containing a period and will fail silently
when encountering them
Properly, this is a problem with run-parts, which the Ubuntu cron runs, and not with cron itself. Still, it's what bit me.
Please check:
1a) Is the script executable and does it have the correct owner/group settings?
1b) Does it start with the correct #! line, and do you specify the full path to the shell you're using, e.g. #!/bin/bash?
2) Does the script generate an error while being executed? E.g. can you write to a log file from it, and do you see the log messages?
Also: check the email inbox of the user who owns the crontab -- errors are emailed to that user, e.g. root.
What does the output of ls -l /etc/cron.daily/ look like? Can you post it?
NOTE:
you can always create a crontab entry for this yourself, outside of those cron.xxx directory structure ;-)
See: man 5 crontab
10 1 * * * /somewhere/backup.sh >> /somewhere/backup.log 2>&1
This has the advantage that you get to pick the exact time when it runs (e.g. 1:10 AM here), and you can redirect STDERR and STDOUT to append to a log file for that particular script.
For testing purposes you could run it every 10 minutes, like this:
0,10,20,30,40,50 * * * * /somewhere/backup.sh >> /somewhere/backup.log 2>&1
Do touch /somewhere/backup.log first to make sure it exists.
