I have the following files in someDir:
blacklistadm.out00009 blacklistadm.out00008 blacklistadm.out00007 blacklistadm.out00010 blacklistadm.out00025
I have the following log rotation pattern in /etc/logrotate.d/:
someDir/blacklistadm.out*[0-9]{
weekly
missingok
compress
sharedscripts
postrotate
rm -f someDir/blacklistadm.out*[0-9]
endscript
}
When the log rotation script runs, it somehow deletes all the files in someDir. What I want is to gzip each file and delete the original after compressing. I don't want the .gz files deleted.
The files are being deleted because your globbing is being used incorrectly.
blacklistadm.out*[0-9]
literally expands to any file starting with "blacklistadm.out", followed by any sequence of zero or more characters, and ending with a single character in the range 0-9.
This obviously globs onto everything: all your files start with "blacklistadm.out" and end in a digit, so when your postrotate script runs with an identical glob, it matches every file in the directory and deletes it.
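A quick way to see this is to recreate the situation in a scratch directory (a throwaway sketch; the directory is made up, the file names are from the question) and echo the glob instead of running rm:

```shell
# Recreate the files from the question in a throwaway directory
cd "$(mktemp -d)"
touch blacklistadm.out00007 blacklistadm.out00008 blacklistadm.out00009 \
      blacklistadm.out00010 blacklistadm.out00025

# Same pattern as the postrotate script, but echoed instead of removed:
echo blacklistadm.out*[0-9]
# every file matches, so `rm -f` on this glob would delete them all
```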
I am using logrotate version 3.12.3. How do I tell logrotate to exclude files that are already rotated/compressed?
For example, if I am rotating all files in /var/log as
"/var/log/*" {
compress
missingok
dateext
rotate 4
size=5000k
}
after a file is compressed, how do I tell logrotate to not rotate already rotated/compressed files? For example:
/var/log/file1
after logrotation it becomes
/var/log/file1.20211212.gz
I tried
tabooext + .gz
at the top of the definitions, but it doesn't seem to take effect.
From man logrotate
include file_or_directory
    Reads the file given as an argument as if it was included inline where the include directive appears. If a directory is given, most of the files in that directory are read in alphabetic order before processing of the including file continues. The only files which are ignored are files which are not regular files (such as directories and named pipes) and files whose names end with one of the taboo extensions, as specified by the tabooext directive.
If something like the pattern below worked, that would have been good:
/var/log/*[!".gz"] {
}
Thank you.
EDIT
Maybe do something like
/var/log/*[!.][!g][!z] {
..
}
but it skips a file named /var/log/test1gz. How do I match the . character with globbing?
Find all glob patterns that do not accept *.gz. Since you have three letters (., g, z), there are 2^3 = 8 possible combinations (3 characters, 2 states each: x and !x), minus the one that is *.gz itself.
The seven other possibilities are the following:
All files that don't have a dot as the third-to-last character but end with gz (e.g. filegz)
/var/log/*[!.]gz
All files whose two-letter extension doesn't begin with g but ends with z (e.g. file.7z)
/var/log/*.[!g]z
All files whose two-letter extension begins with g but doesn't end with z (e.g. file.gg)
/var/log/*.g[!z]
All files that don't end with .gz but do end with z (e.g. file.ezz)
/var/log/*[!.][!g]z
All files that don't end with .gz but have g as the second-to-last letter (e.g. file.cgi)
/var/log/*[!.]g[!z]
All files whose two-letter extension is not gz (e.g. file.hh)
/var/log/*.[!g][!z]
Finally, all files that don't end with .gz at all (e.g. file.txt)
/var/log/*[!.][!g][!z]
So this gives us:
/var/log/*[!.]gz
/var/log/*.[!g]z
/var/log/*.g[!z]
/var/log/*[!.][!g]z
/var/log/*[!.]g[!z]
/var/log/*.[!g][!z]
/var/log/*[!.][!g][!z]
{
...
}
The combinations can be generated with Python (the original snippet printed Python 2 style and built `![c]` instead of the glob class `[!c]`; fixed below):
import itertools as it

suffix = '.gz'
# two states per character: the literal character, or its negated class [!c]
states = ([c, '[!%s]' % c] for c in suffix)
for combo in it.product(*states):
    pattern = ''.join(combo)
    if pattern != suffix:  # skip the one combination that is plain .gz
        print(pattern)
Hope it helps!
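As a sanity check, the seven patterns can be exercised in a scratch directory (the file names here are my own examples, not from the question); only the .gz file should be skipped by every pattern:

```shell
# Create a few sample files; only file.gz should match none of the patterns
cd "$(mktemp -d)"
touch file.log file.txt file.gz

kept=""
for f in *[!.]gz *.[!g]z *.g[!z] *[!.][!g]z *[!.]g[!z] *.[!g][!z] *[!.][!g][!z]; do
    [ -e "$f" ] && kept="$kept $f"   # unmatched globs stay literal; -e filters them
done
echo "$kept"   # file.gz never appears
```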
You don't need to specify a complete folder; you can target a single file as well. First, create subfolders for each service so you have a better overview, and place your logs there. You shouldn't point a rule at the complete log folder, because other rotation scripts already operate there.
for example:
/var/log/yum.log {
missingok
notifempty
size 30k
yearly
create 0600 root root
}
I have set up the logrotate configuration file for uwsgi. When I tested it, it seemed to work:
logrotate -dvf /etc/logrotate.d/uwsgi
reading config file /etc/logrotate.d/uwsgi
reading config info for "/var/log/uwsgi/*.log"
Handling 1 logs
rotating pattern: "/var/log/uwsgi/*.log" forced from command line (5 rotations)
empty log files are rotated, old logs are removed
considering log /var/log/uwsgi/uwsgi.log
log needs rotating
rotating log /var/log/uwsgi/uwsgi.log, log->rotateCount is 5
dateext suffix '-20131211'
glob pattern '-[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]'
glob finding old rotated logs failed
copying /var/log/uwsgi/uwsgi.log to /var/log/uwsgi/uwsgi.log-20131211
truncating /var/log/uwsgi/uwsgi.log
compressing log with: /bin/gzip
But when the cron job ran, nothing happened. What could be wrong? My entry is
"/var/log/uwsgi/*.log" {
copytruncate
daily
dateext
rotate 5
compress
}
In the cron log I can see
Dec 11 03:45:01 myhost run-parts(/etc/cron.daily)[930]: finished logrotate
Can I get more details about "what happened" somewhere - a detailed output of the logrotate job?
I tried adding
missingok
and that seems to have worked.
I'm currently trying to work out a method of tidying up Oracle recover log files that are created by cron...
Currently, our Oracle standby recover process is invoked by cron every 15 minutes using the following command:
0,15,30,45 * * * * /data/tier2/scripts/recover_standby.sh SID >> /data/tier2/scripts/logs/recover_standby_SID_`date +\%d\%m\%y`.log 2>&1
This creates files that look like:
$ ls -l /data/tier2/scripts/logs/
total 0
-rw-r--r-- 1 oracle oinstall 0 Feb 1 23:45 recover_standby_SID_010213.log
-rw-r--r-- 1 oracle oinstall 0 Feb 2 23:45 recover_standby_SID_020213.log
-rw-r--r-- 1 oracle oinstall 0 Feb 3 23:45 recover_standby_SID_030213.log
-rw-r--r-- 1 oracle oinstall 0 Feb 4 23:45 recover_standby_SID_040213.log
-rw-r--r-- 1 oracle oinstall 0 Feb 5 23:45 recover_standby_SID_050213.log
-rw-r--r-- 1 oracle oinstall 0 Feb 6 23:45 recover_standby_SID_060213.log
-rw-r--r-- 1 oracle oinstall 0 Feb 7 23:45 recover_standby_SID_070213.log
-rw-r--r-- 1 oracle oinstall 0 Feb 8 23:45 recover_standby_SID_080213.log
-rw-r--r-- 1 oracle oinstall 0 Feb 9 23:45 recover_standby_SID_090213.log
-rw-r--r-- 1 oracle oinstall 0 Feb 10 23:45 recover_standby_SID_100213.log
-rw-r--r-- 1 oracle oinstall 0 Feb 11 23:45 recover_standby_SID_110213.log
-rw-r--r-- 1 oracle oinstall 0 Feb 12 23:45 recover_standby_SID_120213.log
I basically want to delete files older than x days, which I thought logrotate would be perfect for...
I've configured logrotate with the following config file:
/data/tier2/scripts/logs/recover_standby_*.log {
daily
dateext
dateformat %d%m%Y
maxage 7
missingok
}
Is there something I'm missing to get the desired outcome?
I guess I could remove the date from the crontab log file name and have logrotate rotate that file, but then the date in the file name won't reflect the day the logs were generated... i.e. recoveries on 010313 would end up in a file dated 020313, because logrotate fires on 020313 and rotates the file...
Any other ideas?
And thank-you in advance for any responses.
Regards
Gavin
Logrotate removes files according to their order in the lexically sorted list of rotated log file names, and also by file age (using the file's last modification time):
rotate is the maximal number of rotated files to keep. If there are more rotated log files than that, their names are lexically sorted and the lexically smallest ones are removed.
maxage defines another criterion for removing rotated log files: any rotated log file older than the given number of days is removed. Note that the age is detected from the file's last modification time, not from the file name.
dateformat allows specific formatting of the date in rotated file names. The man page notes that the format must result in lexically correct sorting.
dateyesterday allows using dates in log file names that are one day back.
To keep a given number of days of daily rotated files (e.g. 7), you must set rotate to 7, and you may ignore maxage if your files really are created and rotated every day.
If log creation stops for a couple of days, e.g. for 14 days, the number of rotated log files will still be the same (7).
maxage improves the situation in "logs not produced" scenarios by always removing files that are too old. After 7 days of no log production there will be no rotated log files left.
You cannot use dateformat as the OP shows, because it is not lexically sortable. Messing with dateformat would probably result in removing rotated log files other than the ones you really wanted.
Tip: run logrotate from the command line with the -d option to perform a dry run: you will see what logrotate would do, without it actually doing anything. Then perform a manual run with -v (verbose) to confirm that what is done is what you want.
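For example (a sketch; the config contents and paths here are made up for illustration), a dry run against a throwaway config prints the planned actions and leaves everything untouched:

```shell
# Build a scratch config and log file, then dry-run it
dir=$(mktemp -d)
cat > "$dir/rot.conf" <<EOF
$dir/app.log {
    daily
    rotate 3
    missingok
}
EOF
touch "$dir/app.log"

# -d implies a dry run: actions are printed, nothing is rotated or written
logrotate -d -s "$dir/status" "$dir/rot.conf"
```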
Solution: clean logs created by cron
The concept is:
Let cron create and update the log files, but with a small modification: name the files the way logrotate's default dateext would, i.e.
/data/tier2/scripts/logs/recover_standby_SID.log-`date +\%Y\%m\%d`
Use logrotate only for removing log files that are too old:
aim at the (possibly non-existing) log file /data/tier2/scripts/logs/recover_standby_SID.log
use missingok so the logrotate cleanup happens anyway
set rotate high enough to cover the number of log files to keep (at least 7 if there is one "rotated" log file per day, but you can safely set it very high, like 9999)
set maxage to 7. This removes files whose last modification time is more than 7 days ago.
dateext is used just to ensure logrotate searches for older files that look rotated.
Logrotate configuration file would look like:
/data/tier2/scripts/logs/recover_standby_SID.log {
daily
missingok
rotate 9999
maxage 7
dateext
}
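The matching crontab line would then redirect to a dateext-style name (a sketch based on the question's cron entry; only the redirect target changes):

```shell
# Crontab entry: append to a file named the way the default dateext would name it
# 0,15,30,45 * * * * /data/tier2/scripts/recover_standby.sh SID >> \
#     /data/tier2/scripts/logs/recover_standby_SID.log-`date +\%Y\%m\%d` 2>&1

# The redirect produces names such as:
echo "recover_standby_SID.log-$(date +%Y%m%d)"
```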
Solution: rotate directly by logrotate once a day
I am not sure how the source recovery standby file is created, but I will assume Oracle, or some script of yours, is regularly or continually appending to a file /data/tier2/scripts/logs/recover_standby_SID.log
The concept is:
rotate the file once a day by logrotate
working directly with log file containing recovery data /data/tier2/scripts/logs/recover_standby_SID.log
daily will cause rotation once a day (in terms of how cron understands daily)
rotate must be set to 7 (or any higher number).
maxage set to 7 (days)
dateext to use default logrotate date suffix
dateyesterday causes the date suffixes on rotated files to be one day back.
missingok to clean older files even when no new content to rotate is present.
Logrotate config would look like:
/data/tier2/scripts/logs/recover_standby_SID.log {
daily
missingok
rotate 7
maxage 7
dateext
dateyesterday
}
Note that you may need to play a bit with copytruncate and similar options, which relate to how the source log file is created by the external process and how that process reacts to the act of rotation.
You can use the find command to do that task easily! It deletes all files older than 7 days. Put it in crontab and run it on a nightly basis:
$ cd /data/tier2/scripts/logs/
$ /usr/bin/find . -mtime +7 -name "*.log" -print -delete
Or, better:
$ /usr/bin/find /data/tier2/scripts/logs/ -mtime +7 -name "*.log" -print -delete
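A sketch of the behaviour in a scratch directory (touch -d to backdate assumes GNU coreutils): only files older than 7 days are removed:

```shell
dir=$(mktemp -d)
touch "$dir/new.log"
touch -d '10 days ago' "$dir/old.log"    # backdate the modification time

find "$dir" -mtime +7 -name "*.log" -print -delete
ls "$dir"    # only new.log remains
```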
(Updated)
Your options are:
As Satish answered, abandon logrotate and put a find script in cron
You could even use logrotate and put the find script in the postrotate command
Initially, I thought that changing the dateformat to match your logs might work, but as Reid Nabinger pointed out, the date format was not compatible with logrotate anyway. Recently, I tried to configure the same thing, but for Java-rotated logs that I wanted logrotate to delete. I tried the configuration below, but it kept trying to delete all logs:
/opt/jboss/log/server.log.* {
missingok
rotate 0
daily
maxage 30
}
I ended up just implementing what Satish suggested - a simple find with rm script in cron.
(Unable to comment as not enough reputation)
I had a similar issue. By all accounts, logrotate is useless on file names with built-in datestamps.
If all else was equal I would probably go with find in a cron job.
For my own reasons I wanted to use logrotate and eventually found a way: https://stackoverflow.com/a/23108631
In essence it was a way of encapsulating the cron job in a logrotate file. Maybe not the prettiest or most efficient but like I said, I had my reasons.
As per @Jan Vlcinsky's answer, you can let logrotate add the date - just use dateyesterday to get the right date.
Or, if you want to add the date yourself, you can 'aim' at the name without the date, and the names with the date will be cleaned up.
However, I found that if the log file itself isn't there, logrotate doesn't clean up the dated files.
But if you're prepared to have an empty log file lying around, it can be made to work.
For example, to clean up /var/log/mylogfile.yyyymmdd.log after 7 days, touch /var/log/mylogfile.log, then configure logrotate as follows:
/var/log/mylogfile.log
{
daily
rotate 7
maxage 7
dateext
dateformat .%Y%m%d
extension .log
ifempty
create
}
This entry, combined with the existence of mylogfile.log, triggers logrotate to clean up old files, as if they had been created by logrotate.
daily, rotate and maxage cause old log files to be deleted after 7 days (or after 7 old log files, whichever comes first).
dateext, dateformat and extension cause logrotate to match our file names.
And ifempty and create ensure that an empty file continues to exist; otherwise the log rotation would stop.
Another tip for testing: be prepared to edit /var/lib/logrotate.status to reset the 'last rotated' date, or logrotate won't do anything for you.
FYI, I know this is an old question, but the reason it does not work for you is that your dateformat is not lexically sortable. From the man page:
dateformat format_string
    Specify the extension for dateext using the notation similar to strftime(3) function. Only %Y %m %d and %s specifiers are allowed. The default value is -%Y%m%d. Note that also the character separating log name from the extension is part of the dateformat string. The system clock must be set past Sep 9th 2001 for %s to work correctly. Note that the datestamps generated by this format must be lexically sortable (i.e., first the year, then the month then the day. e.g., 2001/12/01 is ok, but 01/12/2001 is not, since 01/11/2002 would sort lower while it is later). This is because when using the rotate option, logrotate sorts all rotated filenames to find out which logfiles are older and should be removed.
The solution is either to change to a format that goes year-month-day, or to call an external process to perform the cleanup.
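The man page's example can be checked directly: with a year-first format, lexical sort equals chronological sort; with a day-first format, it does not:

```shell
# year-first: chronological and lexical order agree
printf '%s\n' 20011201 20021101 | sort

# day-first: 01/11/2002 (the later date) sorts before 01/12/2001
printf '%s\n' 01122001 01112002 | sort
```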
I would like to set up a cron task that gzips files according to the following rules:
locate files in a folder named '/log' (which can be located anywhere in the filesystem), and
gzip files older than 2 days that have './log' in their path
I have written the script below - which does not work - am I close? What is required to make it work? Thanks.
/usr/bin/find ./logs -mtime +2 -name "*.log*"|xargs gzip
In my crontab, I call:
/usr/sbin/logrotate -s ~/.logrotate/status ~/.logrotate/logrotate.conf
In my ~/.logrotate/logrotate.conf:
# rotate log files weekly
weekly
# keep 4 weeks worth of backlogs
rotate 4
## compression
# gzip(1)
#compresscmd /usr/bin/gzip
#compressoptions -9
#uncompresscmd /usr/bin/gunzip
#compressext .gz
# xz(1)
compresscmd /usr/bin/xz
uncompresscmd /usr/bin/xzdec
compressext .xz
/home/h3xx/.log/*.log /home/h3xx/.log/jack/*.log {
# copy and truncate original (for always-open file handles
# [read: tail -f])
copytruncate
# enable compression
compress
}
/home/h3xx/.usage/*.db {
# back up databases
copy
# enable compression
compress
}
The -name argument takes a glob. Your command would only match files literally named .log. Try -name "*.log".
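Putting it together as a sketch in a scratch directory (touch -d and xargs -r assume GNU tools): find the old logs and hand them to gzip:

```shell
dir=$(mktemp -d)
touch -d '3 days ago' "$dir/app.log"    # old enough to compress
touch "$dir/fresh.log"                  # too new, left alone

# -print0 / -0 keeps file names with spaces intact; -r skips gzip when nothing matches
find "$dir" -mtime +2 -name "*.log" -print0 | xargs -0 -r gzip
ls "$dir"    # app.log.gz and fresh.log
```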