I have an application generating a really big log file every day (~800 MB a day), so I need to compress it. But since the compression takes time, I want logrotate to compress the file only after reloading the application (sending it a HUP signal).
/var/log/myapp.log {
    rotate 7
    size 500M
    compress
    weekly
    postrotate
        /bin/kill -HUP `cat /var/run/myapp.pid 2>/dev/null` 2>/dev/null || true
    endscript
}
Is it already the case that the compression takes place after the postrotate script (which would be counter-intuitive)?
If not, can anyone tell me whether it's possible to do this without an extra command script (an option or some trick)?
Thanks
Thomas
Adding this info here in case anyone else comes across this thread while searching for a way to run a script on a file once compression has completed.
As suggested above, using postrotate/endscript is no good for that.
Instead you can use lastaction/endscript, which does the job perfectly.
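As a sketch of that suggestion (the log path and the post-compression hook script are hypothetical, not from your setup):

```
/var/log/myapp.log {
    daily
    rotate 7
    compress
    lastaction
        # Hypothetical hook: per the comment above, this runs once after
        # rotation (and compression) for the matching logs has completed.
        /usr/local/bin/ship-logs.sh /var/log/myapp.log.1.gz
    endscript
}
```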
The postrotate script always runs before compression even when sharedscripts is in effect. Hasturkun's additional response to the first answer is therefore incorrect. When sharedscripts is in effect the only compression performed before the postrotate is for old uncompressed logs left lying around because of a delaycompress. For the current logs, compression is always performed after running the postrotate script.
The postrotate script does run before compression occurs. From the man page for logrotate:
The next section of the config files defines how to handle the log file
/var/log/messages. The log will go through five weekly rotations before
being removed. After the log file has been rotated (but before the old
version of the log has been compressed), the command /sbin/killall -HUP
syslogd will be executed.
In any case, you can use the delaycompress option to defer compression to the next rotation.
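As a sketch, adding delaycompress to the question's rule means the file rotated in this run is left uncompressed until the following rotation, so the HUP is handled long before that file is compressed:

```
/var/log/myapp.log {
    weekly
    rotate 7
    compress
    # Defer compression of the newest rotated file to the next rotation
    delaycompress
    postrotate
        /bin/kill -HUP `cat /var/run/myapp.pid 2>/dev/null` 2>/dev/null || true
    endscript
}
```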
@Hasturkun — one cannot add a comment unless their reputation is above 50.
To make sure of what logrotate will do, either test your configuration with -d (debug: shows what would be done without doing anything) together with -f (force a rotation), or run logrotate with the -v (verbose) flag.
With a configuration that uses a sharedscript for postrotate
$ logrotate -d -f <logrotate.conf file>
Shows the following steps:
rotating pattern: /tmp/log/messages /tmp/log/maillog /tmp/log/cron
...
renaming /tmp/log/messages to /tmp/log/messages.1
renaming /tmp/log/maillog to /tmp/log/maillog.1
renaming /tmp/log/cron to /tmp/log/cron.1
running postrotate script
<kill-hup-script executed here>
compressing log with: /bin/gzip
compressing log with: /bin/gzip
compressing log with: /bin/gzip
I've started to get some problems with corrupt gzip files. I'm not sure exactly when it happens, and the colleague who set up our storage has quit, so I'm not an expert in cron jobs etc., but this is how it looks today:
/var/spool/new_files/*.csv
{
    daily
    rotate 12
    missingok
    notifempty
    delaycompress
    compress
    sharedscripts
    postrotate
        service capture_data restart >/dev/null 2>&1 || true
    endscript
}
In principle, at midnight logrotate rotates all csv files in /var/spool/new_files/, renaming them (incrementing the number suffix by 1), gzips the one then named "2", and moves that archive to our long-term storage.
I don't know if the files are corrupt right after they have been gzipped or if this happens during the "transfer" to the storage. If I run zcat file_name | tail, I get an "invalid compressed data--length" error. This error happens randomly 1-3 times per month.
So the first thing I want to do is run gzip -k to keep the original, then:
1. Check whether the file is corrupt after it has been gzipped
   - Retry once
   - If this also fails, add an error to the logs and stop the cron job
2. If the gzip file is OK after creation, move it to long-term storage
3. Test that it is still OK there; if not:
   - Retry once
   - If this also fails, add an error to the logs and stop the cron job
4. Throw away the original file
Does the logic that I suggest make sense? Any suggestions how to add it to the cron job?
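The compress-verify-retry part of those steps could be sketched in shell like this (the function name and the logger tag are assumptions, and it requires gzip >= 1.6 for the -k flag):

```shell
#!/bin/sh
# Sketch: compress with gzip -k so the original is kept, verify the
# archive with gzip -t, retry once on failure, and only delete the
# original after a successful verification.

compress_and_verify() {
    src=$1
    attempt=0
    while [ "$attempt" -lt 2 ]; do
        rm -f "$src.gz"                 # discard any previous failed attempt
        if gzip -kf "$src" && gzip -t "$src.gz" 2>/dev/null; then
            rm -f "$src"                # archive verified: drop the original
            return 0
        fi
        attempt=$((attempt + 1))
    done
    # Both attempts failed: log it and leave the original in place
    logger -t csv-archive "gzip verification failed twice for $src"
    return 1
}

# Example: compress_and_verify /var/spool/new_files/some_file.csv.2
```

You could call such a function from the postrotate script or from a separate cron entry; disabling the cron job on failure is left out here, since that depends on how the job is installed.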
It seems OK... for the integrity check you could look at How to check if a Unix .tar.gz file is a valid file without uncompressing?
You can write a .sh script containing all the commands you need, and then schedule it by adding a crontab entry under:
/var/spool/cron/
If you want to run the .sh script as root, add or modify the /var/spool/cron/root file; crontabs run by other users can be added the same way.
the cron would be something like:
0 0 * * * sh <path to your sh>
I am trying to set up log rotation for the httpd server, so I wrote the following in /etc/logrotate.d/apache.conf:
/var/log/httpd/* {
    daily
    rotate 3
    size 20K
    compress
    delaycompress
}
What I understood from this file:
/var/log/httpd/* = where all the logs are stored and which files to rotate
daily = when to rotate the log
rotate = only 3 rotated logs should be kept
size = rotate only when the log file size meets this condition
compress = gzip the rotated logs
delaycompress = some kind of compression option, I don't know much about it
So I hit the apache server so that it generates a lot of logs.
After the logs are generated:
Where are my rotated logs stored, and how is rotation run only when the size condition matches?
Thanks for any guidance or help.
One more thing: when and how does logrotate run, and why do some people suggest using a cron job with logrotate?
Where are my rotated logs stored?
Unless you specify the olddir directive, they are rotated within the same directory in which they exist.
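For example, a minimal sketch using olddir (the archive directory is hypothetical; per the man page it must already exist and, without copy/copytruncate, be on the same physical device as the logs):

```
/var/log/httpd/*_log {
    daily
    rotate 3
    # Rotated copies land here instead of next to the live logs
    olddir /var/log/httpd/archive
}
```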
How is rotation run only when the size condition matches?
If you specify the size directive, logs are only rotated if they are larger than that size:
size size
Log files are rotated only if they grow bigger than size bytes.
Files that do not meet the size requirement are ignored (https://linux.die.net/man/8/logrotate).
Why do some people suggest using a cron job with logrotate?
logrotate is just an executable; it does not handle any facet of how or when it is executed. cron is typically how logrotate's execution is scheduled. For example, on CentOS, inside the /etc/cron.daily directory is an executable shell script:
#!/bin/sh
/usr/sbin/logrotate -s /var/lib/logrotate/logrotate.status /etc/logrotate.conf
EXITVALUE=$?
if [ $EXITVALUE != 0 ]; then
/usr/bin/logger -t logrotate "ALERT exited abnormally with [$EXITVALUE]"
fi
exit 0
This script is executed once per day by cron and is how logrotate's execution is actually initiated.
A couple of other problems:
/var/log/httpd/* - Unless you're rotating files out of the original directory with the olddir directive, never end your logrotate's directory definition with a wildcard (*). This definition is going to glom on to every file in that directory, including the files that you've already rotated. Logrotate has no way of keeping track of what files are actual logs and which are stored rotations. Your directory definition should be something like /var/log/httpd/*_log instead.
You should be reloading httpd after you rotate the log files, else it will probably continue to log into the rotated file because you never closed its handle.
sharedscripts
postrotate
/bin/systemctl reload httpd.service > /dev/null 2>/dev/null || true
endscript
Again this is a CentOS-specific example.
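Putting the glob fix and the reload together, one possible rule would look like this (a sketch based on your original directives; adjust the path suffix to match your actual log names):

```
/var/log/httpd/*_log {
    daily
    rotate 3
    size 20K
    compress
    delaycompress
    sharedscripts
    postrotate
        # Reload so httpd reopens its log file handles (CentOS-flavoured)
        /bin/systemctl reload httpd.service > /dev/null 2>/dev/null || true
    endscript
}
```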
So it seems setting the maximum log rotate size only works when it's set in uwsgi.ini and not /etc/logrotate.d/uwsgi (even though manually testing logrotate using /etc/logrotate.d/uwsgi file works, and I see no errors in the cron or logrotate status logs).
/var/log/uwsgi/*.log {
daily
missingok
dateext
rotate 7
size 100M
copytruncate
create
compress
}
Is there a setting that makes logrotate use the above instead of uwsgi.ini?
I see that you have copytruncate, which is correct. If you didn't, you would have to use postrotate to stop and restart uWSGI.
Since you mentioned that manually triggering /etc/logrotate.d/uwsgi works, this seems to be a problem with your cron job.
You can put this in /etc/cron.d/ then check the output of /tmp/logrotate.status after it runs (Modify the schedule expression to suit your debugging needs).
0 2 * * * root logrotate /etc/logrotate.d/uwsgi --state /tmp/logrotate.status
Check this crontab.guru link if you need help with the schedule expression.
The cause of the issue stemmed from SELinux security contexts: the log directory included in the original logrotate config didn't have the right security permissions. So when I kept doing ls -al, I couldn't find anything wrong, because that does not show the SELinux security context for each directory/file. I had to do ls -Z or something to that effect.
I want to clean a Docker container logs every day (no need to store/archive the data). I created a file called docker in /etc/logrotate.d and put the following inside:
/var/lib/docker/containers/*/*.log {
rotate 0 #do not keep archives
daily
missingok
copytruncate #continue working in the same log file
}
well, but it doesn't work. So, obviously something in my logrotate configuration isn't right. AFAIK, I don't need to setup crontab for this. Is my configuration wrong? Is there anything that I am missing?
Is there a way to run and test logrotate without having to wait a day?
rotate 0 is redundant here: 0 is the default, which means no rotated copies are kept and old logs are simply removed.
You can test it using,
$ logrotate -f /etc/logrotate.d/docker
It might be easier to test with rotate 1 first, and then with rotate 0, to check that logrotate is functioning correctly.
If logrotate is installed correctly, then one would have a crontab entry in /etc/cron.daily/logrotate. This file reads the configuration file /etc/logrotate.conf which includes anything under /etc/logrotate.d
HTH
The question's a bit ambiguous.
Here is the scenario:
I have logs with the following three extensions, but my current rule only applies to *.log files:
.1
.log
.txt
Plus, because Tomcat is rotating logs, I have the following:
.gz
I want to rotate all of these files, but I don't want to end up with any .gz.gz files. How do I do this?
Logrotate Rule for Tomcat
Currently, I have the following rule for the Tomcat logs:
% sudo cat /etc/logrotate.d/tomcat
# THIS FILE IS AUTOMATICALLY DISTRIBUTED BY PUPPET. ANY CHANGES WILL BE
# OVERWRITTEN.
/opt/apache-tomcat_1/logs/*.log {
compress
daily
delaycompress
maxage 60
maxsize 500M
minsize 1k
rotate 20
size 100k
}
To try to solve this, I could change the /opt/apache-tomcat_1/logs/*.log path to something like /opt/apache-tomcat_1/logs/*, but I wonder if that would re-compress or process the .gz files in the same way as the .log and .txt files.
Does logrotate have some way of knowing to leave existing .gz files well enough alone?
Other Files
The last time the /etc/cron.daily/logrotate got an update was about 12 days ago:
% sudo ls -lanh /etc/cron.daily/logrotate
-r-xr-xr-x 1 0 0 313 Jun 29 21:48 /etc/cron.daily/logrotate
Its contents are as follows:
#!/bin/sh
# THIS FILE IS AUTOMATICALLY DISTRIBUTED BY PUPPET. ANY CHANGES WILL BE
# OVERWRITTEN.
OUTPUT=$(/usr/sbin/logrotate /etc/logrotate.conf 2>&1)
EXITVALUE=$?
if [ $EXITVALUE != 0 ]; then
/usr/bin/logger -t logrotate "ALERT exited abnormally with [$EXITVALUE]"
echo "${OUTPUT}"
fi
exit $EXITVALUE
And, in case it's relevant:
% cat /etc/logrotate.conf
# THIS FILE IS AUTOMATICALLY DISTRIBUTED BY PUPPET.
# ANY CHANGES WILL BE OVERWRITTEN.
create
weekly
rotate 4
# configurable file rotations
include /etc/logrotate.d
I did a quick search online for this, and found some results. The askubuntu.com answer was closest, but I am still not sure whether logrotate will "rotate" .gz files created by another service, like Tomcat.
Nothing in those results answers this specific question about pre-existing .gz files created by another service (e.g. Tomcat) when * globbing is used in the logrotate path.
Right now I'm simply solving this with multiple paths/rules: https://v.gd/yNfAAu
But I'm curious. What script behavior intelligently makes logrotate ignore the existing .gz files, or process them differently, while still removing those that are sufficiently old or large? Does Logrotate have a way to do this already?
From "man logrotate":
You can use the tabooext directive to specify file extensions that you wish logrotate to ignore.
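Note that per the man page, tabooext modifies the list of extensions that are skipped when logrotate reads configuration files pulled in with the include directive. A minimal sketch in /etc/logrotate.conf:

```
# Skip any file ending in .gz when reading the included config directory
tabooext + .gz
include /etc/logrotate.d
```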