How does logrotate actually work? (Linux)

I am trying to set up log rotation for the httpd server, so I wrote a config file at /etc/logrotate.d/apache.conf:
/var/log/httpd/* {
daily
rotate 3
size 20K
compress
delaycompress
}
What I understood from this file:
/var/log/httpd/* = where all the logs are stored and which files you want logrotate to rotate
daily = when you want to rotate the logs
rotate = only 3 rotated logs should be kept
size = rotate when your log file size meets this condition
compress = compress (gzip) the rotated logs
delaycompress = some kind of compression option, I don't know much about it
So I hit the apache server so that it generates a lot of logs.
After the logs are generated:
Where are my logs stored? Does rotation run only when the size condition matches, or otherwise?
Thanks for any guidance or help.
One more thing: when and how does logrotate run, and why do some people suggest using a cron job with logrotate?

Where are my logs stored?
Unless you specify the olddir directive, they are rotated within the same directory in which they exist.
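For illustration, a sketch of a definition that rotates into a separate directory (the archive path here is just an assumption, and the directory must already exist on the same filesystem as the logs):
/var/log/httpd/*_log {
daily
rotate 3
olddir /var/log/httpd/archive
compress
}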
Does rotation run only when the size condition matches, or otherwise?
If you specify the size directive, logs are only rotated if they are larger than that size:
size size
Log files are rotated only if they grow bigger than size bytes.
Files that do not meet the size requirement are ignored (https://linux.die.net/man/8/logrotate).
Why do some people suggest using a cron job with logrotate?
logrotate is just an executable; it does not handle any facet of how or when it is executed. cron is typically how logrotate's execution is scheduled. For example, on CentOS, inside the /etc/cron.daily directory is an executable shell script:
#!/bin/sh
# -s points logrotate at its state file, which records when each log was last rotated
/usr/sbin/logrotate -s /var/lib/logrotate/logrotate.status /etc/logrotate.conf
EXITVALUE=$?
# log an alert via syslog if logrotate exited with an error
if [ $EXITVALUE != 0 ]; then
/usr/bin/logger -t logrotate "ALERT exited abnormally with [$EXITVALUE]"
fi
exit 0
This script is executed once per day by cron and is how logrotate's execution is actually initiated.
A couple of other problems:
/var/log/httpd/* - Unless you're rotating files out of the original directory with the olddir directive, never end your logrotate's directory definition with a wildcard (*). This definition is going to glom on to every file in that directory, including the files that you've already rotated. Logrotate has no way of keeping track of what files are actual logs and which are stored rotations. Your directory definition should be something like /var/log/httpd/*_log instead.
You should be reloading httpd after you rotate the log files, else it will probably continue to log into the rotated file because you never closed its handle.
sharedscripts
postrotate
/bin/systemctl reload httpd.service > /dev/null 2>/dev/null || true
endscript
Again this is a CentOS-specific example.
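Putting those two fixes together, a corrected version of the original definition might look something like this (the *_log glob assumes the stock httpd log names access_log and error_log):
/var/log/httpd/*_log {
daily
rotate 3
size 20K
compress
delaycompress
sharedscripts
postrotate
/bin/systemctl reload httpd.service > /dev/null 2>/dev/null || true
endscript
}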

Related

Check gzip is ok within a cron job

I've started to get some problems with corrupt gzip files. I'm not sure exactly when it happens, and the colleague who set up our storage has quit, so I'm not an expert in cron jobs etc., but this is how it looks today:
/var/spool/new_files/*.csv
{
daily
rotate 12
missingok
notifempty
delaycompress
compress
sharedscripts
postrotate
service capture_data restart >/dev/null 2>&1 || true
endscript
}
In principle, at midnight the script rotates all the csv files in /var/spool/new_files/, renames them (incrementing the numeric suffix by 1), gzips the one that is then named "2", and moves that to our long-term storage.
I don't know if the files are corrupt just after they have been gzipped or if this happens during the "transfer" to the storage. If I run zcat file_name | tail I get an invalid compressed data--length error. This error happens randomly 1-3 times per month.
So the first thing I want to do is run gzip -k and keep the original, then:
Check if the files are corrupt after they have been gzipped
Retry once
If this also fails, add an error to the logs
Stop the cron job
If the gzip file is ok after creation, move it to long-term storage
Test if they are ok there; if not:
Retry once
If this also fails, add an error to the logs
Stop the cron job
Throw away the original file
Does the logic that I suggest make sense? Any suggestions how to add it to the cron job?
It seems ok... for the integrity check you could look at this: How to check if a Unix .tar.gz file is a valid file without uncompressing it?
You can make a .sh script where you add all the commands you need, and then add that .sh script to the crontab under:
/var/spool/cron/
If you want to run the .sh script as root, just add or modify the /var/spool/cron/root file; in a similar way you can add cron jobs run by other users.
The cron entry would be something like:
0 0 * * * sh <path to your sh>
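As a rough sketch of what that .sh could contain for the verify/retry logic described in the question (the *.csv.2 pattern, the paths, and the long-term storage mount point are all assumptions, and this does not address how it interacts with logrotate's own compress directive):
#!/bin/sh
# Hypothetical wrapper: compress rotated CSVs, verify each archive, retry once,
# and stop if verification keeps failing. Paths and patterns are assumptions.
SRC_DIR=/var/spool/new_files
DEST_DIR=/mnt/long_term_storage

for f in "$SRC_DIR"/*.csv.2; do
    [ -e "$f" ] || continue
    gzip -kf "$f"                       # -k keeps the original while we verify
    if ! gzip -t "$f.gz"; then          # integrity check of the fresh archive
        rm -f "$f.gz"
        gzip -kf "$f"                   # retry once
        if ! gzip -t "$f.gz"; then
            logger -t csv-archive "gzip verification failed twice for $f"
            exit 1                      # stop here; the original stays in place
        fi
    fi
    dest="$DEST_DIR/$(basename "$f").gz"
    mv "$f.gz" "$dest"
    if gzip -t "$dest"; then            # verify again after the move
        rm -f "$f"                      # only now throw away the original
    else
        logger -t csv-archive "archive corrupt after the move: $dest"
        exit 1
    fi
done
The cron entry above would then simply call this script instead of listing the commands inline.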

Wipe clean logfile on daily basis with logrotate

I want to clean a Docker container logs every day (no need to store/archive the data). I created a file called docker in /etc/logrotate.d and put the following inside:
/var/lib/docker/containers/*/*.log {
rotate 0 #do not keep archives
daily
missingok
copytruncate #continue working in the same log file
}
Well, it doesn't work, so obviously something in my logrotate configuration isn't right. AFAIK, I don't need to set up a crontab for this. Is my configuration wrong? Is there anything that I am missing?
Is there a way to run and test logrotate without having to wait a day?
rotate 0 is redundant here; 0 is the default rotation count, which means no rotated copies are kept and old logs are simply removed.
You can test it using,
$ logrotate -f /etc/logrotate.d/docker
It might be better to test it first with rotate 1 and then with rotate 0 to check whether logrotate is functioning correctly.
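You can also do a dry run first with the -d flag, which prints what logrotate would do without actually touching any files:
$ logrotate -d /etc/logrotate.d/docker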
If logrotate is installed correctly, there will be a cron script at /etc/cron.daily/logrotate. That script runs against the configuration file /etc/logrotate.conf, which includes everything under /etc/logrotate.d.
HTH

Size rule not working in logrotate

I created a logrotate rule for our docker logs and it is working fine. Here is the configuration file:
root#aerogear:/var/lib/docker/containers/b8da13f8dc6cb642959103c23db2a02ef2c7291ae5f94625a92ac9329db1647e# cat /etc/logrotate.d/docker-container
/var/lib/docker/containers/*/*.log {
rotate 7
hourly
compress
size=100M
missingok
delaycompress
copytruncate
}
It seems that hourly logrotate is working fine. But because of some error, one log file grew up to 18G; the size=100M rule didn't catch it. Do you know any specific reason for that?
There shouldn't be an equal sign after size.
If you want to rotate both hourly and when the file grows bigger than 100M, then you should use maxsize instead of size.
So you should try maxsize 100M instead of size=100M.
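With that change, the definition above would read (everything else unchanged):
/var/lib/docker/containers/*/*.log {
rotate 7
hourly
compress
maxsize 100M
missingok
delaycompress
copytruncate
}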
As mentioned by Ronan, size and hourly are "contradictory". One or the other may be used, but it is likely that one has priority, so you need to use maxsize instead.
Next:
Normally, logrotate is run as a daily cron job. It will not modify a log more than once in one day unless the criterion for that log is based on the log's size and logrotate is being run more than once each day, or unless the -f or --force option is used.
(from logrotate manual page)
On a standard Ubuntu installation, logrotate runs once per day. If you look at the cron installation, you will see logrotate under the cron.daily directory:
prompt$ ls -l /etc/cron.daily
...
-rwxr-xr-x 1 root root 372 May 6 2015 logrotate
...
And there is nothing under cron.hourly:
prompt$ ls -l /etc/cron.hourly
(nothing)
This shows that an hourly directive in your config is not going to make anything happen more often than daily on a default setup. Of course, you can change that and get logrotate to run once an hour or even once a minute (in this last case, you need a crontab entry for root). But by default, a maxsize combined with an hourly or daily directive is not useful, since the file will be rotated on each daily run anyway, whatever its size.
Also, there have been versions of logrotate where the maxsize parameter did not work. This should not be an issue in newer versions.
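For example, a root crontab entry along these lines would run logrotate hourly (the binary path is the usual /usr/sbin/logrotate, but check it on your system):
0 * * * * /usr/sbin/logrotate /etc/logrotate.conf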
And as Ronan mentioned, no = sign between the option and value.

Does logrotate remove .gz files? That is, does logrotate with a * path rotate an existing .gz file?

// , The question's a bit ambiguous.
Here is the scenario:
I have logs with the following three extensions, but my current rule only applies to *.log files:
.1
.log
.txt
Plus, because Tomcat is rotating logs, I have the following:
.gz
I want to rotate all of these files, but I don't want to end up with any .gz.gz files. How do I do this?
Logrotate Rule for Tomcat
Currently, I have the following rule for the Tomcat logs:
% sudo cat /etc/logrotate.d/tomcat
# THIS FILE IS AUTOMATICALLY DISTRIBUTED BY PUPPET. ANY CHANGES WILL BE
# OVERWRITTEN.
/opt/apache-tomcat_1/logs/*.log {
compress
daily
delaycompress
maxage 60
maxsize 500M
minsize 1k
rotate 20
size 100k
}
To try to solve this, I could change the /opt/apache-tomcat_1/logs/*.log path to something like /opt/apache-tomcat_1/logs/*, but I wonder if that would re-compress or process the .gz files in the same way as the .log and .txt files.
Does logrotate have some way of knowing to leave existing .gz files well enough alone?
Other Files
The last time the /etc/cron.daily/logrotate got an update was about 12 days ago:
% sudo ls -lanh /etc/cron.daily/logrotate
-r-xr-xr-x 1 0 0 313 Jun 29 21:48 /etc/cron.daily/logrotate
Its contents are as follows:
#!/bin/sh
# THIS FILE IS AUTOMATICALLY DISTRIBUTED BY PUPPET. ANY CHANGES WILL BE
# OVERWRITTEN.
OUTPUT=$(/usr/sbin/logrotate /etc/logrotate.conf 2>&1)
EXITVALUE=$?
if [ $EXITVALUE != 0 ]; then
/usr/bin/logger -t logrotate "ALERT exited abnormally with [$EXITVALUE]"
echo "${OUTPUT}"
fi
exit $EXITVALUE
And, in case it's relevant:
% cat /etc/logrotate.conf
# THIS FILE IS AUTOMATICALLY DISTRIBUTED BY PUPPET.
# ANY CHANGES WILL BE OVERWRITTEN.
create
weekly
rotate 4
# configurable file rotations
include /etc/logrotate.d
I did a quick search online for this, and found some results. The askubuntu.com answer was closest, but I am still not sure whether logrotate will "rotate" .gz files created by another service, like Tomcat.
Nothing in those results answers this specific question about pre-existing .gz files created by another service (e.g. Tomcat) when * globbing is used in the logrotate path.
Right now I'm simply solving this with multiple paths/rules: https://v.gd/yNfAAu
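That multiple-pattern approach might look roughly like the sketch below (the .txt glob is an assumption; the actual rules are behind the link above). Since no pattern matches *.gz, the archives Tomcat has already compressed are simply never touched:
/opt/apache-tomcat_1/logs/*.log /opt/apache-tomcat_1/logs/*.txt {
compress
daily
delaycompress
maxage 60
maxsize 500M
minsize 1k
rotate 20
}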
But I'm curious. What script behavior intelligently makes logrotate ignore the existing .gz files, or process them differently, while still removing those that are sufficiently old or large? Does Logrotate have a way to do this already?
From "man logrotate"
You can use the variable "tabooext" to specify file extensions that you wish logrotate to ignore.

logrotate compress files after the postrotate script

I have an application generating a really big log file every day (~800MB a day), so I need to compress it, but since the compression takes time, I want logrotate to compress the file after reloading/sending the HUP signal to the application.
/var/log/myapp.log {
rotate 7
size 500M
compress
weekly
postrotate
/bin/kill -HUP `cat /var/run/myapp.pid 2>/dev/null` 2>/dev/null || true
endscript
}
Is it already the case that the compression takes place after the postrotate (which would be counter-intuitive)?
If not, can anyone tell me whether it's possible to do that without an extra command script (an option or some trick)?
Thanks
Thomas
Adding this info here in case anyone else comes across this thread while searching for a way to run a script on a file once compression has completed.
As suggested above, postrotate/endscript is no good for that.
Instead you can use lastaction/endscript, which does the job perfectly.
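A minimal sketch of that for the definition in the question (the shipping script is a placeholder, not a real command on your system):
/var/log/myapp.log {
rotate 7
size 500M
compress
weekly
postrotate
/bin/kill -HUP `cat /var/run/myapp.pid 2>/dev/null` 2>/dev/null || true
endscript
lastaction
# per the note above, this runs once after rotation and compression have finished
/usr/local/bin/ship-logs.sh /var/log/myapp.log*.gz || true
endscript
}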
The postrotate script always runs before compression even when sharedscripts is in effect. Hasturkun's additional response to the first answer is therefore incorrect. When sharedscripts is in effect the only compression performed before the postrotate is for old uncompressed logs left lying around because of a delaycompress. For the current logs, compression is always performed after running the postrotate script.
The postrotate script does run before compression occurs: from the man page for logrotate
The next section of the config file defines how to handle the log file /var/log/messages. The log will go through five weekly rotations before being removed. After the log file has been rotated (but before the old version of the log has been compressed), the command /sbin/killall -HUP syslogd will be executed.
In any case, you can use the delaycompress option to defer compression to the next rotation.
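For the configuration in the question, that just means adding one directive:
/var/log/myapp.log {
rotate 7
size 500M
compress
# keep the newest rotated log uncompressed until the next rotation
delaycompress
weekly
postrotate
/bin/kill -HUP `cat /var/run/myapp.pid 2>/dev/null` 2>/dev/null || true
endscript
}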
#Hasturkun - One cannot add a comment unless their reputation is first above 50.
To make sure of what logrotate will do, either test your configuration with -d (debug, which shows what would be done without doing anything) together with -f (force it to run), or execute logrotate with the -v (verbose) flag.
With a configuration that uses a shared postrotate script,
$ logrotate -d -f <logrotate.conf file>
shows the following steps:
rotating pattern: /tmp/log/messages /tmp/log/maillog /tmp/log/cron
...
renaming /tmp/log/messages to /tmp/log/messages.1
renaming /tmp/log/maillog to /tmp/log/maillog.1
renaming /tmp/log/cron to /tmp/log/cron.1
running postrotate script
<kill-hup-script executed here>
compressing log with: /bin/gzip
compressing log with: /bin/gzip
compressing log with: /bin/gzip
