I have a specific set of logs for different processes. Consider:
Log_name1.Date.0.log (oldest on the current Date)----->
Log_name1.Date.1.log
Log_name1.Date.2.log
Log_name2.Date.0.log (oldest on the current Date)----->
Log_name2.Date.1.log
Log_name2.Date.2.log
Logs like these are added every day. Now I wish to implement something logrotate-like: all the logs for a specific date should be zipped together after 3 days, i.e. if logs are stored today, they must be zipped automatically 3 days later. All the different logs can be zipped together, but each day must get its own separate tar.gz. Can someone please help?
I'm not sure of your OS, but you can create a cron job (or scheduled task) that runs a script; this is an example of such a script on Linux:
#!/bin/sh
tipo=${PWD##*/}
bkp_dir="/home/USER/${tipo}-$(date +%Y%m%d)"

echo "BackUp From Folder: ${tipo}"
echo "Make BackUpFolder: ${bkp_dir}"
mkdir -p "$bkp_dir"

# archive the matching logs of every subdirectory into the backup folder
for dir in */
do
    base=$(basename "$dir")
    # the glob must stay outside the quotes so the shell expands it
    tar -zcvf "${bkp_dir}/${base}-$(date +%Y%m%d).tar.gz" "${dir}"Log_name1.*.log
done

sleep 5
clear
echo "BackUp Ready:"
ls -l "$bkp_dir"
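Note that the script above archives everything it finds on the day it runs, while the question asks for one archive per date, created 3 days later. Here is a minimal sketch of that variant, assuming GNU date and filenames with an ISO date as the second dot-separated field (Log_name1.2015-09-03.0.log); the log directory is hypothetical, so adjust both to your layout:
#!/bin/bash
# One tar.gz per day: archive every log whose embedded date is 3+ days old.
logdir="/var/log/myapp"
cutoff=$(date -d '3 days ago' +%s)

# collect the distinct dates that occur in the log filenames
for d in $(ls "$logdir"/*.log 2>/dev/null | awk -F. '{print $2}' | sort -u); do
    stamp=$(date -d "$d" +%s 2>/dev/null) || continue   # skip fields that are not dates
    if [ "$stamp" -le "$cutoff" ]; then
        tar -zcf "$logdir/logs-$d.tar.gz" "$logdir"/*."$d".*.log &&
            rm -f -- "$logdir"/*."$d".*.log             # delete originals only if tar succeeded
    fi
done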
The script in the file modBackup.sh does not run to completion when started by cron: the result is a corrupted tar.gz file that is about half the size of the one I get when I run the script manually. In any case its size is many times smaller than the manually produced one, and while it still contains some content, it cannot be opened normally; the archive is damaged.
file modBackup.sh:
#!/bin/sh
find /home/share/ -mmin -720 -type f -exec tar -rvf /mnt/archives/`date +%F`-modified.tar.gz "{}" +
The automatic run seems to get interrupted and never finishes.
When I run the script manually, it creates a genuine archive named [current date]-modified.tar.gz.
Here is the crontab -e:
00 18 * * 1-5 /home/myScripts/modBackup.sh
Edit:
There is no information in the logs except that crond has started: nothing in the mail log, nothing in the cron log, nothing in messages.
(I use a very old CentOS :( but I don't think this is the reason for the error.)
For testing only, I added %H%M to the file name in the script and did the following: I ran it manually (sh /home/myScripts/modBackup.sh) and set crontab -e to run the same command two minutes later.
After a few minutes, the two files had appeared and were growing at the same time, but then the one created by the cron job stopped growing.
I use the same GUI tool (Archive Manager) to open the archive in both cases.
The file created by the manual run opens fine, but the one from the cron job cannot be opened, even after I changed the extension; the error is 'unexpected EOF in archive'.
One suggestion is to include the user's environment context ($PATH and other critical environment variables) so the application can work under cron:
modBackup.sh:
#!/bin/sh
# pull in the user's environment; "." is the POSIX-sh spelling of bash's "source"
. ~/.profile
find /home/share/ -mmin -720 -type f -exec tar -rvf /mnt/archives/`date +%F`-modified.tar.gz "{}" +
I found that in the cron environment the find command misinterprets filenames containing certain characters, even after explicitly changing the encoding by adding export LANG=en_US.UTF-8; LC_CTYPE=... at the beginning of the script. Many other combinations and attempts brought no success.
That's why I dropped the find command and instead used tar's own option for archiving recently modified files. This works perfectly now:
fromDate=$(date --date='15 hours ago')
/bin/tar -N "$fromDate" -zcf /mnt/archives/$(date +%F-%H%M)-share.modified.tar.gz /home/share/
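On the environment point raised above, another approach is to pin down the few variables the script needs right at the top instead of sourcing a whole profile; a minimal sketch (the PATH and locale values are assumptions, so list whatever your commands actually need):
#!/bin/sh
# cron starts scripts with a very sparse environment; set it explicitly
PATH=/usr/local/bin:/usr/bin:/bin
LANG=en_US.UTF-8
LC_ALL=en_US.UTF-8
export PATH LANG LC_ALL

fromDate=$(date --date='15 hours ago')
/bin/tar -N "$fromDate" -zcf /mnt/archives/$(date +%F-%H%M)-share.modified.tar.gz /home/share/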
I'd like to monitor a directory for new files daily using a linux bash script.
New files are added to the directory every 4 hours or so, and I'd like to process them all at the end of the day.
By "process" I mean convert them to an alternative file type and then move them to another folder once converted.
I've looked at inotify to monitor the directory but can't tell if you can make this a daily thing.
Using inotify I have got this code working in a sample script:
#!/bin/bash
while IFS= read -r line    # -r keeps backslashes in filenames intact
do
    echo "close_write: $line"
done < <(inotifywait -mr -e close_write "/home/tmp/")
This does notify when new files are added and it is immediate.
I was considering using this, keeping track of the new files, and then processing them all at once at the end of the day.
I haven't done this before so I was hoping for some help.
Maybe something other than inotify will work better.
Thanks!
You can use a daily cron job: http://linux.die.net/man/1/crontab
You should definitely look into using a cron job. Edit your crontab and put this in:
0 0 * * * /path/to/script.sh
That means: run your script at midnight every day. Then in script.sh, all you would do is, for all the files, "convert them to an alternative file type then move them to another folder once converted", as sketched below.
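Something like this could serve as a sketch of script.sh; the directories and convert_cmd are placeholders (the question doesn't name the file type), so substitute your real paths and converter:
#!/bin/bash
# Run once a day by cron: convert everything in the watch directory,
# then move the converted output to another folder.
src="/home/tmp"
dst="/home/processed"

for f in "$src"/*; do
    [ -f "$f" ] || continue                          # skip subdirectories
    # convert_cmd is hypothetical -- replace with your real conversion tool
    convert_cmd "$f" "$dst/$(basename "$f")" && rm -- "$f"
done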
Your cron job (see the other answers on this page) should keep a list of the files you have already processed and then use comm -3 processed-list all-list to get the new files.
man comm
It's a better alternative to
awk 'FNR==NR{a[$0];next}!($0 in a)' processed-list all-list
and probably more robust than using find since you record the ones that you have actually processed.
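As a concrete illustration (paths hypothetical): comm needs sorted input, and comm -13 narrows the output to lines unique to the second file, i.e. the not-yet-processed ones:
# build today's sorted list and diff it against what was already processed
find /home/tmp -type f | sort > /tmp/all-list
touch /tmp/processed-list                        # first run: empty list
comm -13 /tmp/processed-list /tmp/all-list > /tmp/new-list

# ... process each file named in /tmp/new-list ...

# then record what has now been handled
sort -u /tmp/processed-list /tmp/new-list -o /tmp/processed-list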
To collect the files by the end of day, just use find:
find "$DIR" -daystart -mtime -1 -type f
Then as others pointed out, set up a cron job to run your script.
I'm relatively new to coding on linux.
I have the below script for moving my ERP log file.
!/bin/bash
#Andrew O. MBX 2015-09-03
#HansaWorld Script to periodically move the log file
_now=$(date +"%m_%d_%Y")
mv /u/OML_Server_72/hansa.log /u/HansaLogs/hansa_$now.log
The code runs but does not rename the log file with the date when it is moved.
I would also like to check when the file exceeds 90M in size, so it is moved automatically at the end of every day, via a cron job of some kind.
Help please.
After editing this is my new code.
#!/bin/bash
#Andrew O. MBX 2015-09-03
#HansaWorld Script to periodically move the log file
now=$(date +"%m_%d_%Y")
mv /u/OML_Server_72/hansa.log /u/HansaLogs/hansa$now.log
I wish to add code that checks whether hansa.log is over 90M and moves it only then; otherwise it should be left as it is.
cd /u
find . -name '*hansa.log*' -size +90000k -exec mv '{}' /u/HansaLogs \;
In addition to the other comments, there are a few other things to consider. tgo's logrotate suggestion is a good one. In Linux, if you are ever stuck on the use of a utility, the man pages (while a bit cryptic at first) provide concise usage information. To see the man pages available for a given utility, use man -k name (some distributions provide this search capability through a default alias), e.g.:
$ man -k logrotate
logrotate (8) - rotates, compresses, and mails system logs
logrotate.conf (5) - rotates, compresses, and mails system logs
Then if you want the logrotate page:
$ man 8 logrotate
or the conf page
$ man 5 logrotate.conf
There are several things you may want to change or consider regarding your script. First, while there is nothing wrong with a variable named now, you may run into confusion with the date command's built-in use of now. There is no conflict, but it would look strange to write now=$(date -d "now + 24 hours" "+%F %T"). (I recommend a name like tstamp, short for timestamp, instead.)
For maintainability, readability, etc., you may consider assigning your path components to variables that will help with readability later on (example below).
Finally, before moving, copying, deleting, etc... it is always a good idea to validate that the target file exists and to provide an error message if something is out of whack. A rewrite could be:
#!/bin/bash
#Andrew O. MBX 2015-09-03
#HansaWorld Script to periodically move the log file

tstamp=$(date +"%m_%d_%Y")
logdir="/u/HansaLogs"
logname="/u/OML_Server_72/hansa.log"

if [ -f "$logname" ]; then
    mv "$logname" "$logdir/hansa_${tstamp}.log"
else
    printf "error: file not found '%s'.\n" "$logname" >&2
    exit 1
fi
Note: the >&2 simply redirects the output of printf to stderr rather than stdout.
As for the find command, there is no need to cd and find ., since find takes the path as its first argument. Additionally, the -size option has built-in support for megabytes (M). A rewrite here could look like:
find /u -name "*hansa.log*" -size +90M -exec mv '{}' /u/HansaLogs \;
All in all, it looks like you will pick up shell programming without any problem. Just develop good habits early, they will save you a lot of grief later.
Hi guys, thanks for the help. So far I have come up with this code. I am stuck at creating a cron job to run it periodically, say every 22 hours.
#!/bin/bash
#Andrew O. MBX 2015-09-03
#HansaWorld Script to check if the log file exists before moving:

tstamp=$(date +"%m_%d_%Y")
logdir="/u/HansaLogs"
logname="/u/OML_Server_72/hansa.log"
minimumsize=90000                     # in bytes (90M would be 94371840)
actualsize=$(wc -c <"$logname")

if [ "$actualsize" -ge "$minimumsize" ]; then
    mv "$logname" "$logdir/hansa_${tstamp}.log"
else
    echo "size is under $minimumsize bytes"
    exit 1
fi
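Cron's time fields cannot express "every 22 hours" directly, so the usual compromise is a fixed daily time. A sketch, assuming the script above is saved as /u/scripts/hansa_move.sh (a hypothetical path) and made executable:
# crontab -e: minute hour day month weekday
0 23 * * * /u/scripts/hansa_move.sh >> /u/HansaLogs/hansa_move.cron.log 2>&1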
--- file makebackup.sh
#!/bin/bash
DATE=$(date +%F)    # $(...) captures the command's output; quotes alone would store the literal word "date"
mysqldump --all-databases | gzip -9 > /backup/temp_db.gz
tar -Pcf /backup/temp_ftp.tar /public_html/
tar -Pcf /backup/temp_backup.tar /home/temp_db.gz /backup/temp_ftp.tar
sleep 60 && /backup/upload.sh $DATE
--- file upload.sh
#!/usr/bin/expect -f
# connect via scp
spawn scp /backup/temp_backup.tar root@mybackup.com:/home/backup_$argv.tar
#######################
expect {
-re ".*es.*o.*" {
exp_send "yes\r"
exp_continue
}
-re ".*sword.*" {
exp_send "mypassword\r"
}
}
interact
Why does this not work? I don't want to use sleep; I need to know when the last tar has finished and then execute upload.sh. Instead, upload.sh always executes as soon as the last tar starts.
The && does not seem to do anything, even if I remove sleep 60.
As you say "it always executes as soon as the last tar file starts": normally that means there is an '&' at the end of the line. Or are you sure the tar is really working? Are you looking at an old tar.gz that was created early on? Make sure it is a new tar file of the correct size.
Edit: I'm not saying you have to delete files, just double-check that what is being put into the final tar makes sense.
Are you checking the sizes of the input files to your final tar command (/home/temp_db.gz and /backup/temp_ftp.tar)? Edit: by this I mean that an uncompressed tar file (temp_ftp.tar) should be just slightly larger than the sum of the sizes of all the files it contains. If you know that you have 1 MB of files composing temp_ftp.tar and the file is 1.1 MB, that is good; if it is 0.5 MB, that is bad. (Also consider gzipping the whole thing to reduce transmission time to your remote host.) Your compressed db file is harder to judge; presumably that part is working, but a file size of something like 25 bytes would indicate an error in creating the file.
Otherwise, what you are describing really seems impossible. It is one of these things, or something else is bollixing things up.
One way to debug how long the last tar is taking is to wrap the command in two date commands, i.e.
date
tar -Pcf /backup/temp_backup.tar /home/temp_db.gz /backup/temp_ftp.tar
rc=$?
date
printf "return code from quick tar was ${rc}\n"
Also, per your title, 'check previous step', I've added capturing the return code from tar and printing the value.
Again, to reinforce what you already know: in a Linux shell script there is no way (except for a background job started with the '&' character) for one command to start executing before the previous one has completed.
EDIT: ownership and permissions on your files might also be screwing things up. Use
ls -l /backup/temp_backup.tar /home/temp_db.gz /backup/temp_ftp.tar
to confirm that your user ID owns the files and that you can write to them. If you want, edit your posting to include that information.
Also, your headline says "cron": are you capturing all of the possible output of this script to help debug the situation? Turn on shell debugging with set -vx near the top of makebackup.sh, and add the '-v' option to your tar commands for verbose output.
Capture the cron output of your whole process like
59 23 31 12 * { /path/to/makebackup.sh 2>&1 ; } > /tmp/makebackup.`/bin/date +\%Y\%m\%d.\%H\%M\%S`.trace_log
And be sure you don't find any error messages.
(Crontab fields are min, hr, day, month, day-of-week (0-6 or *); change the date/time to meet your testing needs.)
Your expect script uses '\r'; don't you want '\n' in the Unix/Linux environment? If it is a Windows-based server, then you want '\r\n'.
Edit: does the expect script work? Have you proved to your satisfaction that files are being copied? Are they the same size on the backup site? Does the date change?
If you expect backups to save your systems someday, you have to develop a better understanding of how the whole process should work and whether it is working as expected. Depending on your situation and the availability of alternate computers, you should schedule a test of restoring your backups to see if they will really work. As you're using -P to preserve full-path info, you'll need to be careful not to overwrite your working system with old files.
To summarize my advice: double-check everything.
I hope this helps.
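For what it's worth, here is a minimal sketch of how the sequencing can be made explicit with && so each step runs only if the previous one succeeded; it assumes the same paths as the question (and that the /home/temp_db.gz in the final tar was meant to be /backup/temp_db.gz):
#!/bin/bash
set -o pipefail                 # make the mysqldump|gzip pipeline report failures

DATE=$(date +%F)

mysqldump --all-databases | gzip -9 > /backup/temp_db.gz &&
tar -Pcf /backup/temp_ftp.tar /public_html/ &&
tar -Pcf /backup/temp_backup.tar /backup/temp_db.gz /backup/temp_ftp.tar &&
/backup/upload.sh "$DATE" ||
{ echo "backup failed" >&2; exit 1; }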
Hello,
I keep my log files under /opt/project/logs/ and I want to copy them daily to /opt/bkp, compressing them along the way.
For this I have written the following, which works well:
#!/bin/bash
getdate(){
    date --date="$1 days ago" "+%Y_%m_%d"
}

rm -f "/opt/bkp/logs/myapp_log_$(getdate 365).gz"            # drop the year-old backup
gzip -c /opt/project/logs/myapp.log > "/opt/bkp/logs/myapp_log_$(date +%Y_%m_%d).gz"
: > /opt/project/logs/myapp.log                              # truncate the live log
However, it is not general: I will have several applications saving files with their own names (e.g. app1.log, app2.log) under the same /opt/project/logs/ folder. How can I turn this into a "function" where the script reads each file under /opt/project/logs/ and backs up every file that ends with the .log extension?
You could use the logrotate(8) tool that came with your distro. :) The manpage has an example that looks close to your need:
/var/log/news/* {
    monthly
    rotate 2
    olddir /var/log/news/old
    missingok
    postrotate
        kill -HUP `cat /var/run/inn.pid`
    endscript
    nocompress
}
Well, not the monthly bit, or restarting inn :) but I hope you get the idea that you could easily add a new config file to /etc/logrotate.d/ and not worry about it again. :)
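For this question's layout, a hypothetical /etc/logrotate.d/project file could look like the following (note that olddir requires the destination to be on the same filesystem as the logs, so treat /opt/bkp/logs as an assumption):
/opt/project/logs/*.log {
    daily
    rotate 365
    olddir /opt/bkp/logs
    compress
    dateext
    missingok
    notifempty
    copytruncate
}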
Have you considered using 'logrotate'? It will compress and prune logs for you, optionally kick processes that need kicking to close log files, make tea, etc etc. It's probably what your linux box uses for log management.
man logrotate
for more. The way you are going, you will have written logrotate by the time you get the functionality you want :)
I'd suggest using logrotate too, but can't resist writing this script :)
proc_logs() {
    for log in /opt/project/logs/*.log; do
        # compress each log into the backup folder, stamped with today's date
        gzip -c "$log" > "/opt/bkp/logs/$(basename "$log" ".log")_$(date +%Y_%m_%d).gz"
        : > "$log"    # truncate the original after archiving (touch alone would not empty it)
    done
}
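Save it with a call to proc_logs at the end (say as /opt/scripts/proc_logs.sh, a hypothetical path), make it executable, and a daily crontab entry covers every .log file without further changes:
0 0 * * * /opt/scripts/proc_logs.sh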