Logrotate to clean up date-stamped files - Linux

I'm currently trying to work out a method of tidying up Oracle recover log files that are created by cron...
Currently, our Oracle standby recover process is invoked by cron every 15 minutes using the following command:
0,15,30,45 * * * * /data/tier2/scripts/recover_standby.sh SID >> /data/tier2/scripts/logs/recover_standby_SID_`date +\%d\%m\%y`.log 2>&1
This creates files that look like:
$ ls -l /data/tier2/scripts/logs/
total 0
-rw-r--r-- 1 oracle oinstall 0 Feb 1 23:45 recover_standby_SID_010213.log
-rw-r--r-- 1 oracle oinstall 0 Feb 2 23:45 recover_standby_SID_020213.log
-rw-r--r-- 1 oracle oinstall 0 Feb 3 23:45 recover_standby_SID_030213.log
-rw-r--r-- 1 oracle oinstall 0 Feb 4 23:45 recover_standby_SID_040213.log
-rw-r--r-- 1 oracle oinstall 0 Feb 5 23:45 recover_standby_SID_050213.log
-rw-r--r-- 1 oracle oinstall 0 Feb 6 23:45 recover_standby_SID_060213.log
-rw-r--r-- 1 oracle oinstall 0 Feb 7 23:45 recover_standby_SID_070213.log
-rw-r--r-- 1 oracle oinstall 0 Feb 8 23:45 recover_standby_SID_080213.log
-rw-r--r-- 1 oracle oinstall 0 Feb 9 23:45 recover_standby_SID_090213.log
-rw-r--r-- 1 oracle oinstall 0 Feb 10 23:45 recover_standby_SID_100213.log
-rw-r--r-- 1 oracle oinstall 0 Feb 11 23:45 recover_standby_SID_110213.log
-rw-r--r-- 1 oracle oinstall 0 Feb 12 23:45 recover_standby_SID_120213.log
I basically want to delete files older than x days, which I thought logrotate would be perfect for...
I've configured logrotate with the following config file:
/data/tier2/scripts/logs/recover_standby_*.log {
daily
dateext
dateformat %d%m%Y
maxage 7
missingok
}
Is there something I'm missing to get the desired outcome?
I guess I could remove the date from the crontab log file name, and then have logrotate rotate that file, but then the date in the log file name won't reflect the day the logs were generated... i.e. recoveries on 010313 would end up in a file dated 020313, because logrotate fires on 020313 and rotates the file...
Any other ideas?
And thank you in advance for any responses.
Regards
Gavin

Logrotate removes rotated files based on their position in the lexically sorted list of rotated log file names, and also by file age (using the last modification time of the file).
rotate sets the maximum number of rotated files to keep. If there are more rotated log files than that, their names are sorted lexically and the lexically smallest ones are removed.
maxage defines another criterion for removing rotated log files: any rotated log file older than the given number of days is removed. Note that the age is taken from the file's last modification time, not from the file name.
dateformat allows a specific date format in rotated file names. The man page notes that the format must result in lexically correct sorting.
dateyesterday makes the dates in rotated file names one day earlier.
To keep a given number of days of daily rotated files (e.g. 7), set rotate to 7; you can ignore maxage if your files really are created and rotated every day.
If no logs are produced for a couple of days, e.g. for 14 days, the number of rotated log files will still be the same (7).
maxage improves the "logs not produced" scenario by always removing files that are too old: after 7 days of no log production there will be no rotated log files left.
You cannot use the dateformat the OP shows, as it is not lexically sortable. Messing with dateformat like that would probably result in removing rotated log files other than the ones you really wanted.
Tip: Run logrotate from the command line with the -d option to perform a dry run: it prints what logrotate would do without actually doing anything. Then perform a manual run with -v (verbose) so you can confirm that what it does is what you want.
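For example, assuming the rules live in a file such as /etc/logrotate.d/recover_standby (the path is an assumption; use wherever your config actually sits):
# dry run: report what would be done, change nothing
logrotate -d /etc/logrotate.d/recover_standby
# verbose manual run; add -f to force rotation regardless of the schedule
logrotate -v /etc/logrotate.d/recover_standby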
Solution: clean logs created by cron
The concept is:
Let cron create and update the log files, but make a small modification so the names it creates follow the standard logrotate rotated-file names produced by the default dateext:
/data/tier2/scripts/logs/recover_standby_SID.log-`date +\%Y\%m\%d`
Use logrotate only for removing log files that are too old:
aim at the (non-existent) log file /data/tier2/scripts/logs/recover_standby_SID.log
use missingok so the logrotate cleanup still happens
set rotate high enough to cover the number of log files to keep (at least 7 if there will be one "rotated" log file a day, but you can safely set it very high, e.g. 9999)
set maxage to 7. This removes files whose last modification time is more than 7 days old.
dateext is used just to ensure logrotate searches for older files that look like rotated ones.
The logrotate configuration file would look like:
/data/tier2/scripts/logs/recover_standby_SID.log {
daily
missingok
rotate 9999
maxage 7
dateext
}
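The matching crontab entry would then redirect into that date-stamped name instead of the original one (a sketch built from the OP's own crontab line; only the redirect target changes):
0,15,30,45 * * * * /data/tier2/scripts/recover_standby.sh SID >> /data/tier2/scripts/logs/recover_standby_SID.log-`date +\%Y\%m\%d` 2>&1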
Solution: rotate directly by logrotate once a day
I am not sure how the source recovery standby file is created, but I will assume that Oracle or some script of yours regularly or continually appends to a file /data/tier2/scripts/logs/recover_standby_SID.log.
The concept is:
rotate the file once a day by logrotate,
working directly with the log file that contains the recovery data, /data/tier2/scripts/logs/recover_standby_SID.log
daily causes rotation once a day (in terms of how cron understands "daily")
rotate must be set to 7 (or any higher number)
maxage set to 7 (days)
dateext to use the default logrotate date suffix
dateyesterday causes the date suffixes in rotated files to be one day back
missingok to clean older files even when no new content to rotate is present
The logrotate config would look like:
/data/tier2/scripts/logs/recover_standby_SID.log {
daily
missingok
rotate 7
maxage 7
dateext
dateyesterday
}
Note that you may need to play a bit with copytruncate and similar options, which relate to how the source log file is created by the external process and how that process reacts to the act of rotation.
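For instance, if the writing process keeps the file open and cannot be told to reopen it, a copytruncate variant of the config above might look like this (a sketch, not part of the original answer):
/data/tier2/scripts/logs/recover_standby_SID.log {
daily
missingok
copytruncate
rotate 7
maxage 7
dateext
dateyesterday
}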

You can use the find command to do that task easily! It will delete all files older than 7 days. Put it in the crontab and run it on a nightly basis:
$ cd /data/tier2/scripts/logs/
$ /usr/bin/find . -mtime +7 -name "*.log" -print -delete
Or, a better way:
$ /usr/bin/find /data/tier2/scripts/logs/ -mtime +7 -name "*.log" -print -delete;
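A crontab entry for this could look like the following (the 00:30 schedule is an assumption; pick whatever time suits you):
30 0 * * * /usr/bin/find /data/tier2/scripts/logs/ -mtime +7 -name "*.log" -delete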

(Updated)
Your options are:
As Satish answered, abandon logrotate and put a find script in cron
You could even use logrotate and put a find command in a postrotate script
Initially, I thought that changing the dateformat to match your logs might work, but as Reid Nabinger pointed out, that date format is not compatible with logrotate anyway. Recently, I tried to configure the same thing, but for Java-rotated logs that I wanted logrotate to delete. I tried the configuration below, but it kept trying to delete all the logs:
/opt/jboss/log/server.log.* {
missingok
rotate 0
daily
maxage 30
}
I ended up just implementing what Satish suggested - a simple find with rm script in cron.

(Unable to comment as not enough reputation)
I had a similar issue. By all accounts logrotate is useless on filenames with built-in datestamps.
If all else was equal I would probably go with find in a cron job.
For my own reasons I wanted to use logrotate and eventually found a way: https://stackoverflow.com/a/23108631
In essence it was a way of encapsulating the cron job in a logrotate file. Maybe not the prettiest or most efficient, but like I said, I had my reasons.
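A minimal sketch of that idea (the dummy log path and the find target are assumptions): rotate a small dummy file every day purely so that the postrotate script runs and does the real cleanup.
/var/log/dummy-cleanup.log {
daily
rotate 1
ifempty
create
postrotate
/usr/bin/find /data/tier2/scripts/logs/ -mtime +7 -name "*.log" -delete
endscript
}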

As per @Jan Vlcinsky's answer, you can let logrotate add the date - just use dateyesterday to get the right date.
Or, if you want to put in the date yourself, you can 'aim' at the name without the date, and then the names with the date will be cleaned up.
However, what I found is that if I don't have a log file there, logrotate doesn't do the cleanup of the files with dates.
But if you're prepared to have an empty log file lying around, then it can be made to work.
For example, to clean up /var/log/mylogfile.yyyymmdd.log after 7 days, touch /var/log/mylogfile.log, then configure logrotate as follows:
/var/log/mylogfile.log
{
daily
rotate 7
maxage 7
dateext
dateformat .%Y%m%d
extension .log
ifempty
create
}
This entry, combined with the existence of mylogfile.log, triggers logrotate to clean up old files, as if they had been created by logrotate.
daily, rotate plus maxage cause old log files to be deleted after 7 days (or 7 old log files, whichever comes first).
dateext, dateformat plus extension cause logrotate to match our filenames.
And ifempty plus create ensure that there continues to be an empty file there, or the log rotation would stop.
Another tip for testing: be prepared to edit /var/lib/logrotate.status to reset the 'last rotated' date, or logrotate won't do anything for you.
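To try it out, a rough test sequence (the config file path is an assumption) might be:
# create the empty anchor file the config points at
touch /var/log/mylogfile.log
# force a verbose run and watch which mylogfile.YYYYMMDD.log files get removed
logrotate -vf /etc/logrotate.d/mylogfile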

FYI, I know that this is an old question, but the reason it does not work for you is that your dateformat is not lexically sortable. From the manpage:
dateformat format_string
Specify the extension for dateext using the notation similar to strftime(3) function. Only %Y %m %d and %s specifiers are allowed. The default value is -%Y%m%d. Note that also the character separating log name from the extension is part of the dateformat string. The system clock must be set past Sep 9th 2001 for %s to work correctly. Note that the datestamps generated by this format must be lexically sortable (i.e., first the year, then the month, then the day. e.g., 2001/12/01 is ok, but 01/12/2001 is not, since 01/11/2002 would sort lower while it is later). This is because when using the rotate option, logrotate sorts all rotated filenames to find out which logfiles are older and should be removed.
The solution is either to change to a format that goes year-month-day, or to call an external process to perform the cleanup.
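As a hedged illustration of the first option, a lexically sortable variant of the stanza from the question would be the following (this only helps if the cron job also writes year-first date stamps into the matching file names):
/data/tier2/scripts/logs/recover_standby_*.log {
daily
dateext
dateformat %Y%m%d
maxage 7
missingok
}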

Related

Recursively replace linux file and folder names such as "%m-%d-%y.tar" with their actual creation month/day/year

I'm looking for something like this but with its original creation date instead of the current date.
Example: This folder (output below is from Linux command ls -ltr)
drwxrwxr-x 2 backup_user backup_user 4096 Apr 26 01:06 "%m-%d-%y"
would have its file name changed to "04-26-20".
Since some information is missing, I will make assumptions and show a possible solution approach in general.
As already mentioned in the comments, for a filesystem like ext3 there is no creation time. It might be possible to use the modification time, which can be gathered via the stat command, i.e.
MTIME=$(stat --format="%y" \"%m-%d-%y\" | cut -d " " -f 1)
... or even access time or change time.
The date in MTIME is given in the format %Y-%m-%d and can be converted for the new file name via
FNAME=$(date -d ${MTIME} +%m-%d-%y)
Then it is possible to rename the directory, i.e.
mv \"%m-%d-%y\" ${FNAME}
which will of course change the timestamps within the filesystem for the directory.
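Putting the steps together, a hedged sketch (GNU stat/date assumed; it also assumes the quotes shown by ls are just ls quoting and not part of the directory name):
#!/bin/bash
# rename the directory literally named %m-%d-%y to its modification date
DIR='%m-%d-%y'
MTIME=$(stat --format="%y" "$DIR" | cut -d ' ' -f 1)   # e.g. 2020-04-26
FNAME=$(date -d "$MTIME" +%m-%d-%y)                    # e.g. 04-26-20
mv -- "$DIR" "$FNAME"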

How to preserve timestamp of original file post zip compression?

I have a lot of files on our servers which we compress with a filter so that only files older than x days get compressed.
The zip command compresses the original, creates filename.zip and removes the original.
This has a small problem: the timestamp changes, since the compression job runs after x days.
So when we run a job to remove older files (which are by now zip files), not all files get removed, since the timestamp of the compressed file differs from that of the original.
While zipping, I would like the original timestamp of the file to be retained by the zip archive, even though the job runs at a later date.
One way of doing this would be to:
get the timestamp of each original file with a date command
compress the original and remove the original
insert the earlier stored timestamp onto the new zip file using "touch"
I am looking for a simpler solution.
Some old file I had:
$ ls -l foo
-rw-r--r-- 1 james james 120 Sep 5 07:28 foo
Zip and redate:
$ zip foo.zip foo && touch -d "$(date -R -r foo)" foo.zip
Check it out:
$ ls -l foo.zip
-rw-r--r-- 1 james james 120 Sep 5 07:28 foo.zip
Remove the original:
$ rm -i foo
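A hedged sketch that applies the same trick in bulk (the path and the 30-day threshold are assumptions):
#!/bin/bash
# compress files older than 30 days, keeping each original's mtime on the .zip
find /path/to/files -maxdepth 1 -type f -mtime +30 ! -name '*.zip' -print0 |
while IFS= read -r -d '' f; do
    zip -q "$f.zip" "$f" &&
    touch -d "$(date -R -r "$f")" "$f.zip" &&
    rm -f "$f"
done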
Yes, you can unzip a file and preserve the old timestamp from the original time it was created. The steps to do this are as follows:
Click on filename.zip, then Properties.
In the General tab, the security note says "This file came from another computer and might be blocked to help protect this computer". Click on the Unblock check box and click OK.
Extract the file and voilà, the extracted file has the date/time stamp from when the file was created/modified.

rsync only files younger than xy days

I am using rsync to copy photos from our satellite servers to the main server. The script doing it basically connects from PC to PC and executes rsync.
I have been trying to use find to determine the files younger than xy days (it will be days, but the number can vary), specifying the files with --files-from=<(). BUT the command find /var/dav/davserver/ -mtime -3 -type f -exec basename {} \; is very, very slow on some machines, and even makes rsync time out. Also, they are servers, so running this command every few minutes would cost processor power that I don't want to take away.
The second approach was to take advantage of the way we store those files, under a /var/dav/davserver/year/month/day/ directory. However, as I started to work on it, I realized that I would need to write quite some code to take care of month and year ends, all the more so because the number of days is not fixed (it can be more than 31 days, so this script could need to run through several months).
So I was wondering whether there is an easier way to achieve this without killing the source PCs' processors or writing a whole new library to take care of all the month/year ends.
EDIT:
I have prepared a script that generates the paths to the files for me. What I did is leave the handling of month/year ends to date.
#!/bin/bash
DATE_now=`date +"%Y-%m-%d"`
DATE_end=`date -d "-$1 days" +"%Y-%m-%d"`
echo "Date now: $DATE_now | Date end: $DATE_end"
start_d=`date +%s`
end_d=`date -d "-$1 days" +%s`
synced_day=$DATE_now
synced_day_s=$start_d
daycount=1
echo "" > /tmp/$2_paths
while [ $synced_day_s -ge $end_d ]; do
DAY=$(date -d "$synced_day" '+%d')
MONTH=$(date -d "$synced_day" '+%m')
YEAR=$(date -d "$synced_day" '+%Y')
SYNC_DIR="/var/dav/davserver/$YEAR/$MONTH/$DAY/**"
echo "Adding day ($synced_day) directory: \"$SYNC_DIR\" to synced paths | Day: $daycount"
echo $SYNC_DIR >> /tmp/$2_paths
synced_day=$(date -d "$synced_day -1 days" +"%Y-%m-%d")
synced_day_s=$(date -d "$synced_day" +%s)
daycount=$((daycount+1))
done
and count down the days using it, then just extract the needed info. This script gives me a list of directories to rsync:
rrr@rRr-kali:~/bash_devel$ bash date_extract.sh 8 Z00163
Date now: 2017-06-29 | Date end: 2017-06-21
Adding day (2017-06-29) directory: "/var/dav/davserver/2017/06/29/**" to synced paths | Day: 1
Adding day (2017-06-28) directory: "/var/dav/davserver/2017/06/28/**" to synced paths | Day: 2
Adding day (2017-06-27) directory: "/var/dav/davserver/2017/06/27/**" to synced paths | Day: 3
Adding day (2017-06-26) directory: "/var/dav/davserver/2017/06/26/**" to synced paths | Day: 4
Adding day (2017-06-25) directory: "/var/dav/davserver/2017/06/25/**" to synced paths | Day: 5
Adding day (2017-06-24) directory: "/var/dav/davserver/2017/06/24/**" to synced paths | Day: 6
Adding day (2017-06-23) directory: "/var/dav/davserver/2017/06/23/**" to synced paths | Day: 7
Adding day (2017-06-22) directory: "/var/dav/davserver/2017/06/22/**" to synced paths | Day: 8
rrr@rRr-kali:~/bash_devel$ cat /tmp/Z00163_paths
/var/dav/davserver/2017/06/29/**
/var/dav/davserver/2017/06/28/**
/var/dav/davserver/2017/06/27/**
/var/dav/davserver/2017/06/26/**
/var/dav/davserver/2017/06/25/**
/var/dav/davserver/2017/06/24/**
/var/dav/davserver/2017/06/23/**
/var/dav/davserver/2017/06/22/**
rrr@rRr-kali:~/bash_devel$
However, now my problem is to use this list. I have been trying many combinations of --include and --exclude options with both --include-files and --include-from, BUT I am getting only 2 results: either everything is rsynced, or nothing.
Since you already have files ordered by date (in directories), it's easy and efficient to just rsync those directories:
#!/bin/bash
maxage="45" # in days, from today
for ((d=0; d<=maxage; d++)); do
dir="/var/dav/davserver/$(date -d "-$d day" +"%Y/%m/%d")"
rsync -avrz server:"$dir" localdir
done
We're using date to calculate today - x days and iterate over all days from 0 to your maxage.
Edit: use arithmetic for loop instead of iterating over GNU seq range.
So, I have solved it with a combination of:
A script generating paths according to the actual date. Details are presented in my edit of the initial post. It simply uses date to go through previous days and manage month and year ends, and generates paths from those dates. However, radomir's solution is simpler, so I will use it. (It's basically the same as what I did, just a simpler way to write it down.)
Then I used a combination of --files-from=/tmp/files_list and the -r, a.k.a. --recursive, argument to use this list of paths properly.
(It was copying only empty directories without -r, and everything or nothing when I used --include-from instead of --files-from.)
Final rsync command is:
rsync --timeout=300 -Sazrv --force --delete --numeric-ids --files-from=/tmp/date_paths app_core@172.23.160.1:/var/dav/davserver/ /data/snapshots/
However, this solution does not delete old files on my side, despite the --delete argument. I will probably need an extra script for that.
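A hedged sketch of such an extra step (the 45-day threshold mirrors the maxage used above and is an assumption): since --delete only removes files missing from the directories actually being synced, old local data needs a separate prune, e.g. run from cron on the receiving side:
# remove local copies older than 45 days, then clean up any directories left empty
find /data/snapshots/ -type f -mtime +45 -delete
find /data/snapshots/ -type d -empty -delete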

uwsgi logrotate not working

I have set up the configuration file for uwsgi logrotate. When I tested it, it seemed it would work:
logrotate -dvf /etc/logrotate.d/uwsgi
reading config file /etc/logrotate.d/uwsgi
reading config info for "/var/log/uwsgi/*.log"
Handling 1 logs
rotating pattern: "/var/log/uwsgi/*.log" forced from command line (5 rotations)
empty log files are rotated, old logs are removed
considering log /var/log/uwsgi/uwsgi.log
log needs rotating
rotating log /var/log/uwsgi/uwsgi.log, log->rotateCount is 5
dateext suffix '-20131211'
glob pattern '-[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]'
glob finding old rotated logs failed
copying /var/log/uwsgi/uwsgi.log to /var/log/uwsgi/uwsgi.log-20131211
truncating /var/log/uwsgi/uwsgi.log
compressing log with: /bin/gzip
But the cron job was executed and nothing happened. What could be wrong? My entry is
"/var/log/uwsgi/*.log" {
copytruncate
daily
dateext
rotate 5
compress
}
In the cron log I can see:
Dec 11 03:45:01 myhost run-parts(/etc/cron.daily)[930]: finished logrotate
Can I get more details about "what happened" somewhere - a detailed output of the logrotate job?
I tried adding
missingok
and that seems to have worked.
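For reference, the resulting stanza with missingok added would look like this (a sketch; everything else is unchanged from the entry above):
"/var/log/uwsgi/*.log" {
copytruncate
daily
dateext
missingok
rotate 5
compress
}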

Send previous day file to server in unix

I'm developing a shell script to scp a.txt to different servers (box1 and box2); the script runs on the boxmain server. Below are the requirements:
my script will connect to db2 database and generate a.txt file in boxmain
a.txt will be scp'ed to box1 once the file is generated
The file generated on boxmain (a.txt) will be scp'ed to box2 on the next day, i.e. it will be an scp of the previous day's boxmain file
Note : box1,box2,boxmain are servers
I tried the below and was able to finish the first 2 tasks but am stuck on the 3rd. Please suggest how to achieve the third point. Thanks in advance.
db2 -tvf query.sql #creates a.txt
scp a.txt user@box1:/root/a.txt
now=$(date +"%m/%d/%Y")
cp a.txt a_$now.txt
my os version is AIX test 1 6
There is a slight problem with your question: using '/' in your generated filename makes it be interpreted not just as a filename but as a path containing directories, because '/' is the directory separator. It might be a good idea to use now=$(date +"%m-%d-%Y") instead of now=$(date +"%m/%d/%Y").
But to answer your actual problem, which I think boils down to this: how do you get date(1) to output yesterday's date on AIX?
The answer was found on The UNIX and Linux Forums: just set the environment variable describing your timezone to have +24 in it and you'll get yesterday's date from the output of date.
For example:
[user@foo ~]# date
Mon Nov 4 09:40:34 EET 2013
[user@foo ~]# TZ=EST+24 date
Sun Nov 3 07:40:36 EST 2013
Applying this to your problem, just set an appropriate value for TZ when you run the date command, i.e. use now=$(TZ=ZONE+24 date +"%m-%d-%Y") (note the corrected separator, and replace ZONE with your own timezone).
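Putting it together, a hedged sketch for the third requirement (box names and paths are taken from the question; the timezone is an assumption):
# on boxmain, the day after a.txt was generated
yday=$(TZ=EST+24 date +"%m-%d-%Y")   # yesterday's date; replace EST with your zone
scp "a_${yday}.txt" user@box2:/root/a.txt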
