This is the log file that I want to monitor:
/test/James-2018-11-16_15215125111115-16.15.41.111-appserver0.log
I want Nagios to read this log file so I can monitor it for a specific string.
The issue is with 15215125111115: this is a random ID that gets generated each time.
Here is my script where Nagios checks for the logfile path:
Variables:
HOSTNAMEIP=$(/bin/hostname -i)
DATE=$(date +%F)
..
CHECK=$(/usr/lib64/nagios/plugins/check_logfiles/check_logfiles
--tag='failorder' --logfile=/test/James-${DATE}_-${HOSTNAMEIP}-appserver0.log
....
I am getting the following output in Nagios:
could not find logfile /test/James-2018-11-16_-16.15.41.111-appserver0.log
The number 15215125111115 is always generated randomly, but I don't know how to get Nagios to identify it. Is there a way to add a variable for this or something? I tried adding an asterisk "*" but that didn't work.
Any ideas would be much appreciated.
--tag failorder --type rotating::uniform --logfile /test/dummy \
--rotation "James-${DATE}_\d+-${HOSTNAMEIP}-appserver0.log"
If you add a "-v" you can see what happens inside. The type rotating::uniform tells check_logfiles that the rotation scheme makes no difference between the current log and rotated archives as far as the filename is concerned. (You frequently find something like xyz..log.) What check_logfiles does is look into the directory where the logfiles are supposed to be; from /test/dummy it only uses the directory part. Then it takes all the files inside /test and compares the filenames with the --rotation argument. The files which match are sorted by modification time, so check_logfiles knows which of the files in question was updated most recently, and the newest one is considered to be the current logfile. Inside this file check_logfiles then searches for the criticalpattern.
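For example, a minimal sketch of the full check, using the variables from the question; the --criticalpattern value ORDER_FAILED is just a placeholder for whatever string you actually want to alert on:

CHECK=$(/usr/lib64/nagios/plugins/check_logfiles/check_logfiles \
  --tag failorder --type rotating::uniform \
  --logfile /test/dummy \
  --rotation "James-${DATE}_\d+-${HOSTNAMEIP}-appserver0.log" \
  --criticalpattern 'ORDER_FAILED')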
Gerhard
I have the following files in my someDir:
blacklistadm.out00009 blacklistadm.out00008 blacklistadm.out00007 blacklistadm.out00010 blacklistadm.out00025
I have the following log rotation pattern in /etc/logrotate.d/:
someDir/blacklistadm.out*[0-9]{
weekly
missingok
compress
sharedscripts
postrotate
rm -f someDir/blacklistadm.out*[0-9]
endscript
}
When the log rotation script runs, it somehow deletes all the files in someDir. What I want is to gzip all the files and, after compressing, delete the originals. I don't want the .gz files to be deleted.
The files are being deleted because your glob is being used incorrectly.
blacklistadm.out*[0-9]
literally expands to any file whose name starts with "blacklistadm.out", followed by any sequence of zero or more characters, and ends with a single character in the range 0-9.
This obviously globs onto everything, because all your files start with "blacklistadm.out" and end in a digit, so when your postrotate script runs with an identical glob it matches every file in that directory and deletes it.
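You can see this for yourself by letting the shell expand the glob (assuming the files listed in the question):

$ cd someDir
$ echo blacklistadm.out*[0-9]
blacklistadm.out00007 blacklistadm.out00008 blacklistadm.out00009 blacklistadm.out00010 blacklistadm.out00025

Every single file matches, which is exactly what the rm -f in postrotate then deletes.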
I am trying to redirect the output of a continuously running program to a file, say error_log.
The command I am using looks like this, where myprogram.sh runs continuously, generating data and writing to /var/log/httpd/error_log:
myprogram.sh >> /var/log/httpd/error_log
Now I am using logrotate to rotate this file every hour. I am using the create directive in logrotate, so it renames the original file and creates a new one.
The logrotate config looks like this:
/var/log/httpd/error_log {
# copytruncate
create
rotate 4
dateext
missingok
ifempty
.
.
.
}
But here the redirection fails: after a rotation, myprogram.sh keeps writing through its old file descriptor to the renamed file. What I want is for myprogram.sh to write its data to the error_log file regardless of it being rotated by logrotate, i.e. to the newly created error_log file.
Any idea how to make the redirection work based on the file name and not the file descriptor?
OR
Any other way of doing it in bash?
If I understood your problem correctly, one solution (without modifying myprogram.sh) could be:
$ myprogram.sh | while true; do head -10 >> /var/log/httpd/error_log; done
Explaining:
myprogram.sh writes to stdout
We redirect this output to the while loop through a pipe |.
while true is an infinite loop that will never end, not even when myprogram.sh ends, which would otherwise break the pipe.
In each iteration the head command is called to append the first 10 lines read from the pipe to the end of the current /var/log/httpd/error_log file (which may be a different file from the previous iteration because of logrotate).
(You can change the number of lines written in each iteration.)
And another way is:
$ myprogram.sh | while read line; do echo "$line" >> /var/log/httpd/error_log; done
That's very similar to the first one, but this one ends the loop when myprogram.sh exits or closes its stdout.
It works line by line instead of in groups of lines.
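If you want to convince yourself that this keeps up with a rotation, here is a small hypothetical test (the file names are examples only, not the real error_log):

( while true; do date; sleep 1; done ) | \
  while read line; do echo "$line" >> /tmp/error_log; done &

mv /tmp/error_log /tmp/error_log-old   # simulate what logrotate's create directive does
touch /tmp/error_log

Because the >> redirection reopens /tmp/error_log by name on every iteration, the new lines land in the freshly created file rather than the renamed one.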
I need your help to understand logrotate behaviour.
logrotate.conf:
# see "man logrotate" for details
# rotate log files weekly
weekly
# keep 4 weeks worth of backlogs
rotate 4
# create new (empty) log files after rotating old ones
create
# use date as a suffix of the rotated file
dateext
# uncomment this if you want your log files compressed
#compress
# RPM packages drop log rotation information into this directory
include /etc/logrotate.d
# no packages own wtmp and btmp -- we'll rotate them here
/var/log/wtmp {
monthly
create 0664 root utmp
minsize 1M
rotate 1
}
/var/log/btmp {
missingok
monthly
create 0600 root utmp
rotate 1
}
# system-specific logs may be also be configured here.
In the logrotate.d directory, I have for example one file called consul-log:
/var/log/consul {
size 500M
missingok
rotate 0
compress
notifempty
copytruncate
}
In the /var/lib/logrotate.status file, I can see these lines (related to the consul log file):
logrotate state -- version 2
"/var/log/consul" 2016-1-25
My question is:
If I have rotate 4 inside logrotate.conf, but I have rotate 0 inside logrotate.d/consul-log, will logrotate use rotate 0 or rotate 4?
With respect to your question:
If I have rotate 4 inside logrotate.conf, but I have rotate 0 inside logrotate.d/consul-log, will logrotate use rotate 0 or rotate 4?
logrotate.conf is the configuration file for the system-wide defaults, and any other configuration file which you create in the logrotate.d folder overrides those defaults for the logs it defines.
Because of this directive:
# RPM packages drop log rotation information into this directory
include /etc/logrotate.d
For /var/log/consul, it will use rotate 0. Quote from the man page:
Each configuration file can set global options (local definitions override global ones, and later definitions override earlier ones) and specify logfiles to rotate.
In your case, rotate 0 is both local and later.
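If you want to double-check this on the host itself, a dry run prints how logrotate reads the configuration and what it would do for each log file, including /var/log/consul, without actually rotating anything (the path below is the usual default main config and may differ on your system):

$ logrotate -d /etc/logrotate.conf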
I'm currently trying to work out a method of tidying up Oracle recovery log files that are created by cron.
Currently, our Oracle standby recovery process is invoked by cron every 15 minutes using the following command:
0,15,30,45 * * * * /data/tier2/scripts/recover_standby.sh SID >> /data/tier2/scripts/logs/recover_standby_SID_`date +\%d\%m\%y`.log 2>&1
This creates files that look like:
$ ls -l /data/tier2/scripts/logs/
total 0
-rw-r--r-- 1 oracle oinstall 0 Feb 1 23:45 recover_standby_SID_010213.log
-rw-r--r-- 1 oracle oinstall 0 Feb 2 23:45 recover_standby_SID_020213.log
-rw-r--r-- 1 oracle oinstall 0 Feb 3 23:45 recover_standby_SID_030213.log
-rw-r--r-- 1 oracle oinstall 0 Feb 4 23:45 recover_standby_SID_040213.log
-rw-r--r-- 1 oracle oinstall 0 Feb 5 23:45 recover_standby_SID_050213.log
-rw-r--r-- 1 oracle oinstall 0 Feb 6 23:45 recover_standby_SID_060213.log
-rw-r--r-- 1 oracle oinstall 0 Feb 7 23:45 recover_standby_SID_070213.log
-rw-r--r-- 1 oracle oinstall 0 Feb 8 23:45 recover_standby_SID_080213.log
-rw-r--r-- 1 oracle oinstall 0 Feb 9 23:45 recover_standby_SID_090213.log
-rw-r--r-- 1 oracle oinstall 0 Feb 10 23:45 recover_standby_SID_100213.log
-rw-r--r-- 1 oracle oinstall 0 Feb 11 23:45 recover_standby_SID_110213.log
-rw-r--r-- 1 oracle oinstall 0 Feb 12 23:45 recover_standby_SID_120213.log
I basically want to delete files older than x days, which I thought logrotate would be perfect for...
I've configured logrotate with the following config file:
/data/tier2/scripts/logs/recover_standby_*.log {
daily
dateext
dateformat %d%m%Y
maxage 7
missingok
}
Is there something I'm missing to get the desired outcome?
I guess I could remove the date from the crontab log filename and then have logrotate rotate that file; however, then the date in the filename won't reflect the day the logs were generated, i.e. recoveries on 010313 would end up in a file dated 020313, because logrotate fires on 020313 and rotates the file...
Any other ideas?
And thank you in advance for any responses.
Regards
Gavin
Logrotate removes files according to their order in a lexically sorted list of rotated log file names, and also by file age (using the file's last modification time).
rotate is the maximum number of rotated files you may find. If there is a higher number of rotated log files, their names are lexically sorted and the lexically smallest ones are removed.
maxage defines another criterion for removing rotated log files: any rotated log file older than the given number of days is removed. Note that the age is detected from the file's last modification time, not from the file name.
dateformat allows specific formatting of the date in rotated file names. The man page notes that the format shall result in lexically correct sorting.
dateyesterday makes the date used in rotated log file names one day earlier.
To keep a given number of days of daily rotated files (e.g. 7), you must set rotate to 7, and you may ignore maxage if your files really are created and rotated every day.
If log creation does not happen for a couple of days, e.g. for 14 days, the number of rotated log files will still be the same (7).
maxage improves the situation in "logs not produced" scenarios by always removing files that are too old: after 7 days of no log production there will be no rotated log files present.
You cannot use dateformat as the OP shows, as it is not lexically sortable. Messing with dateformat would probably result in removing rotated log files other than the ones you really wanted.
Tip: Run logrotate from the command line with the -d option to perform a dry run: you will see what logrotate would do, but it doesn't actually do anything. Then perform a manual run using -v (verbose) so that you can confirm that what is done is what you want.
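For example (the config file name here is just an assumption, use whatever file holds your definition):

$ logrotate -d /etc/logrotate.d/recover_standby   # dry run: shows what would be done, changes nothing
$ logrotate -v /etc/logrotate.d/recover_standby   # real run with verbose output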
Solution: clean logs created by cron
The concept is:
Let cron create and update the log files, but make a small modification so that the files it creates follow logrotate's standard file names when using the default dateext:
/data/tier2/scripts/logs/recover_standby_SID.log-`date +\%Y\%m\%d`
Use logrotate only for removing log files that are too old:
aim it at the non-existent log file /data/tier2/scripts/logs/recover_standby_SID.log
use missingok to let the logrotate cleanup happen
set rotate high enough to cover the number of log files to keep (at least 7, if there will be one "rotated" log file a day, but you can safely set it very high, like 9999)
set maxage to 7; this will remove files whose last modification time is more than 7 days old
dateext is used just to ensure logrotate searches for older files that look like rotated ones.
Logrotate configuration file would look like:
/data/tier2/scripts/logs/recover_standby_SID.log {
daily
missingok
rotate 9999
maxage 7
dateext
}
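A hedged sketch of the matching crontab entry, adapted from the one in the question so the created files carry the default dateext-style name:

0,15,30,45 * * * * /data/tier2/scripts/recover_standby.sh SID >> /data/tier2/scripts/logs/recover_standby_SID.log-`date +\%Y\%m\%d` 2>&1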
Solution: rotate directly by logrotate once a day
I am not sure how the source recovery standby file is created, but I will assume Oracle or some script of yours is regularly or continually appending to the file /data/tier2/scripts/logs/recover_standby_SID.log.
The concept is:
rotate the file once a day with logrotate
work directly with the log file containing the recovery data, /data/tier2/scripts/logs/recover_standby_SID.log
daily will cause rotation once a day (on the daily logrotate run from cron)
rotate must be set to 7 (or any higher number)
maxage set to 7 (days)
dateext to use the default logrotate date suffix
dateyesterday used to make the date suffixes in rotated files one day earlier
missingok to clean older files even when no new content to rotate is present
Logrotate config would look like:
/data/tier2/scripts/logs/recover_standby_SID.log {
daily
missingok
rotate 7
maxage 7
dateext
dateyesterday
}
Note that you may need to play a bit with copytruncate and other similar options, which relate to how the source log file is created by the external process and how it reacts to the act of rotation.
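As a sketch only (not a tested setup), adding copytruncate to the config above makes logrotate copy the file and truncate the original in place, so a process holding the file open keeps writing to the same file:

/data/tier2/scripts/logs/recover_standby_SID.log {
    daily
    missingok
    copytruncate
    rotate 7
    maxage 7
    dateext
    dateyesterday
}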
You can use the find command to do that task easily! It will delete all files older than 7 days. Put it in crontab and run it on a nightly basis:
$ cd /data/tier2/scripts/logs/
$ /usr/bin/find . -mtime +7 -name "*.log" -print -delete
Or, a better way:
$ /usr/bin/find /data/tier2/scripts/logs/ -mtime +7 -name "*.log" -print -delete
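A hypothetical nightly crontab entry wrapping that command could look like:

0 1 * * * /usr/bin/find /data/tier2/scripts/logs/ -mtime +7 -name "*.log" -delete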
(Updated)
Your options are:
As Satish answered, abandon logrotate and put a find script in cron
You could even use logrotate and put a find command in a postrotate script (see the sketch just below this list)
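A hedged sketch of that second option (the log name here is a hypothetical log that logrotate already rotates; the find command reuses the paths from the question):

/var/log/some_rotating.log {
    daily
    postrotate
        # runs only after a rotation actually happens; clean up the dated cron logs
        /usr/bin/find /data/tier2/scripts/logs/ -mtime +7 -name "*.log" -delete
    endscript
}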
Initially, I thought that changing the dateformat to match your logs might work, but as Reid Nabinger pointed out, that date format was not compatible with logrotate anyway. Recently, I tried to configure the same thing, but for rotated Java logs that I wanted logrotate to delete. I tried the configuration below, but it kept trying to delete all the logs:
/opt/jboss/log/server.log.* {
missingok
rotate 0
daily
maxage 30
}
I ended up just implementing what Satish suggested: a simple find-and-rm script in cron.
(Unable to comment as not enough reputation)
I had a similar issue. By all accounts logrotate is useless for filenames with built-in datestamps.
If all else was equal I would probably go with find in a cron job.
For my own reasons I wanted to use logrotate and eventually found a way: https://stackoverflow.com/a/23108631
In essence it was a way of encapsulating the cron job in a logrotate file. Maybe not the prettiest or most efficient but like I said, I had my reasons.
As per @Jan Vlcinsky's answer, you can let logrotate add the date; just use dateyesterday to get the right date.
Or, if you want to put in the date yourself, you can 'aim' at the name without the date, and then the names with the date will be cleaned up.
However, what I found is that if I don't have a log file there, logrotate doesn't do the cleanup of the files with dates.
But if you're prepared to have an empty log file lying around, then it can be made to work.
For example, to clean up /var/log/mylogfile.yyyymmdd.log after 7 days, touch /var/log/mylogfile.log, then configure logrotate as follows:
/var/log/mylogfile.log
{
daily
rotate 7
maxage 7
dateext
dateformat .%Y%m%d
extension .log
ifempty
create
}
This entry, combined with the existence of mylogfile.log, triggers logrotate to clean up old files, as if they had been created by logrotate.
daily, rotate plus maxage cause old log files to be deleted after 7 days (or 7 old log files, whichever comes first).
dateext, dateformat plus extension cause logrotate to match our filenames.
And ifempty plus create ensure that there continues to be an empty file there, or the log rotation would stop.
Another tip for testing: be prepared to edit /var/lib/logrotate.status to reset the 'last rotated' date, or logrotate won't do anything for you.
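For example (the entry shown is hypothetical, in the format logrotate.status uses):

$ grep mylogfile /var/lib/logrotate.status
"/var/log/mylogfile.log" 2016-1-25

Set that date back, or delete the line, so the next logrotate run treats the log as due for rotation; alternatively, logrotate -f forces a run regardless of the recorded date.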
FYI, I know that this is an old question, but the reason why it does not work for you is that your dateformat is not lexically sortable. From the man page:
dateformat format_string
Specify the extension for dateext using the notation similar to strftime(3) function. Only %Y %m %d and %s specifiers are allowed. The default value is -%Y%m%d. Note that also the character separating log name from the extension is part of the dateformat string. The system clock must be set past Sep 9th 2001 for %s to work correctly. Note that the datestamps generated by this format must be lexically sortable (i.e., first the year, then the month, then the day. e.g., 2001/12/01 is ok, but 01/12/2001 is not, since 01/11/2002 would sort lower while it is later). This is because when using the rotate option, logrotate sorts all rotated filenames to find out which logfiles are older and should be removed.
The solution is either to change to a dateformat that goes year-month-day, or to call an external process to perform the cleanup.
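For example, the difference is just in the dateformat directive; this only helps if the cron-created file names are switched to put the year first as well, as discussed in the earlier answers:

# not lexically sortable (day-month-year), as in the original config:
dateformat %d%m%Y
# lexically sortable (year first), which logrotate can order correctly:
dateformat -%Y%m%d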