Linux tail on rotating log file using busybox

In my bash script I am trying to monitor the output from the /var/log/messages log file - and continue even when the file rotates (is re-created and started again). I tried using tail -f filename but quickly realised this is no good once the file rotates.
So there are lots of answers for using tail -F filename or tail -f --retry filename (and a few other variants).
But on my embedded Linux I am using busybox which has a lightweight version of tail:
tail [OPTIONS] [FILE]...
Print last 10 lines of each FILE to standard output. With more than one
FILE, precede each with a header giving the file name. With no FILE, or
when FILE is -, read standard input.
Options:
-c N[kbm] Output the last N bytes
-n N[kbm] Print last N lines instead of last 10
-f Output data as the file grows
-q Never output headers giving file names
-s SEC Wait SEC seconds between reads with -f
-v Always output headers giving file names
If the first character of N (bytes or lines) is a '+', output begins with
the Nth item from the start of each file, otherwise, print the last N items
in the file. N bytes may be suffixed by k (x1024), b (x512), or m (1024^2).
So I can't do the usual tail -F ..., since that option is not implemented. The help text above is from the latest busybox version - mine is a bit older.
So I need another way of logging /var/log/messages since the file gets overwritten at a certain size.
I was hoping for a simple bash one-liner. I saw suggestions like inotifywait, but busybox does not have that. I looked at the busybox docs and there is an inotifyd applet, but my version does not have that particular command either. So I am wondering if there is a clever way of doing this with a combination of the simple commands I do have, like watch, tail -f, and cat/less/more... I can't quite figure out what to do with the limited set of commands available :(
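One workable approach with only busybox-style tools (sh, tail, ls, awk, sleep) is to restart tail whenever the file's inode changes, which is what happens when the log is re-created. This is a minimal sketch, not production code; the polling interval and the helper name follow_rotating are my own choices:

```shell
#!/bin/sh
# Follow a log file across rotations using only tail, ls, awk and sleep.
# follow_rotating is a hypothetical helper name; adjust to taste.
follow_rotating() {
    file=$1
    trap 'kill "$tailpid" 2>/dev/null; exit' TERM INT
    while :; do
        # wait for the file to (re)appear after a rotation
        while [ ! -e "$file" ]; do sleep 1; done
        inode=$(ls -i "$file" | awk '{print $1}')
        tail -f "$file" &          # prints the last 10 lines, then follows
        tailpid=$!
        # poll until the file vanishes or its inode changes (i.e. rotated)
        while [ -e "$file" ] &&
              [ "$(ls -i "$file" | awk '{print $1}')" = "$inode" ]; do
            sleep 1
        done
        kill "$tailpid" 2>/dev/null
    done
}

# usage (assuming the rotated file is /var/log/messages):
# follow_rotating /var/log/messages
```

Lines written in the second or so between the rotation and the restart of tail are still picked up, because the fresh tail -f first prints the last 10 lines of the new file before following it.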

How are the logs rotated? Are you using the logrotate utility?
If yes, have you tried adding your command to the postrotate section in the config file?
From man logrotate:
postrotate/endscript
The lines between postrotate and endscript (both of which must
appear on lines by themselves) are executed after the log file
is rotated. These directives may only appear inside of a log
file definition. See prerotate as well.
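If logrotate is driving the rotation, the rotation itself can kick whatever is reading the log. A hypothetical /etc/logrotate.d/ stanza (the size, rotate count, and the monitor restart command are illustrative, not from the question):

```
/var/log/messages {
    size 100k
    rotate 4
    postrotate
        # restart the (hypothetical) monitor so it re-opens the new file
        /etc/init.d/log-monitor restart
    endscript
}
```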

Related

How to store lines in a file as latest at top in linux

I have lines that store the logs of my script in a separate file, and they work perfectly, but there is an issue: the latest entry is stored at the bottom of the file, which is the default behaviour. What I want is to redirect output to the log file with the latest entry at the top.
Here are the lines I include at the top of my hooks script; they store the output of every command written below them in a separate log file:
#!/bin/bash
exec 3>&1 4>&2
trap 'exec 2>&4 1>&3' 0 1 2 3
exec 1>>/tmp/git_hooks_logs.log 2>&1
#create logs of every command written below these lines
The general consensus is that there is no reliable way to do this in a single operation, but there are several ways to do it with more than one operation, some of which can be entered on one line if that is important to you.
You might want to look at https://www.cyberciti.biz/faq/bash-prepend-text-lines-to-file/
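The usual workaround is the temp-file shuffle: write the new lines first, then the old file contents after them, and move the result back over the original. A minimal sketch (prepend is a hypothetical helper, LOGFILE an example path; this is not atomic if several writers run concurrently):

```shell
#!/bin/bash
# prepend: read new lines from stdin and place them above the existing
# contents of $LOGFILE (temp-file shuffle)
LOGFILE=${LOGFILE:-/tmp/git_hooks_logs.log}
prepend() {
    tmp=$(mktemp)
    # new lines from stdin first, then whatever the log already holds
    { cat -; cat "$LOGFILE" 2>/dev/null; } > "$tmp" && mv "$tmp" "$LOGFILE"
}

# usage:
# echo "newest entry" | prepend
```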

Grep errors from a file and put them in a new file but not override new file

I tried a bash script for this, as I want to put it into cron and run it every night to pull errors out of a file and write them to a file in the same directory.
My script just hangs and pulls out nothing:
#!/bin/bash
tail -f /var/log/syslog | grep -i "error" > /var/log/syserrorlog.log
When this runs I would like it to write to/update the same file rather than overwrite it.
Change your > to >> (the latter means append). Also, tail -f hangs by definition: it keeps monitoring the file for new data.
If your syslog is rotated every day, then you can simply use
#!/bin/bash
grep -i "error" /var/log/syslog >> /var/log/syserrorlog.log
If it is not rotated, you can add a grep to the pipeline which filters out the relevant date
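A sketch of that date filter, assuming the common syslog timestamp prefix "Mon dd" (note that date's %e pads single-digit days with a space, matching syslog's format); the sample file and output path here are made up so the example is self-contained:

```shell
#!/bin/bash
# filter only today's error lines out of a syslog-style file
syslog=$(mktemp)           # stand-in for /var/log/syslog
errlog="$syslog.err"       # stand-in for /var/log/syserrorlog.log

# build a tiny sample log ("Xxx  0" is a deliberately impossible date)
printf '%s\n' \
  "Xxx  0 00:00:01 host app: error: old failure" \
  "$(date +'%b %e') 12:00:00 host app: ERROR: disk full" \
  "$(date +'%b %e') 12:00:05 host app: all good" > "$syslog"

# keep only lines stamped with today's date, then only the error ones
today=$(date +'%b %e')
grep "^$today" "$syslog" | grep -i "error" >> "$errlog"
```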
Try using logrotate. This is a daemon used to periodically rotate logs (e.g. archive log files and then clear them every night). It supports many configuration options; one of them is postrotate, a custom script executed after rotation is done. Description copied from logrotate's man page:
postrotate/endscript
The lines between postrotate and endscript (both of which must appear on lines by themselves) are executed (using /bin/sh) after the
log file is rotated. These directives may only appear inside a log
file definition. Normally, the absolute path to the log file is passed
as first argument to the script. If sharedscripts is specified, whole
pattern is passed to the script. See also prerotate. See sharedscripts
and nosharedscripts for error handling.
Syslog is a standard daemon too, so it should have a configuration file in /etc/logrotate.d/. You could add your commands there.

Catalina.out not logging after edition by bash script

This is a tomcat7 installation with the default logging configuration; catalina.out is only rolled over when we restart the server. As it is a prod server we cannot restart it very often. We have a huge number of entries going to that file, which causes catalina.out to grow until it consumes the whole disk space within a few days.
As we don't want to change the logging configuration (it is puppetized and we would need to create devops tickets and all that slow stuff), I wrote a bash script, run every 5 minutes via crontab, that cuts the log file in half when a limit is reached. The script is like the following:
if [ "$catalinaSize" -gt "$catalinaThreshold" ]; then
  middle=$(wc -l "$catalinaLoc" | awk '{ print $1 }')
  middle=$(( middle / 2 ))
  sed -i -e "1,${middle}d" "$catalinaLoc"
  echo "+++ catalina.out was cut by half"
fi
Basically the script checks the current size of the file against a threshold value, then uses wc and awk to retrieve the number of lines in the file so it can use sed to cut the file in half.
I tested the script in other environments and it worked. The problem is that after several days of successful runs in production, catalina.out suddenly stopped receiving any log entries from Tomcat.
The explanation I can think of is that Tomcat is no longer able to write to the file because of the cut-by-half operation.
Is it possible to find out what is preventing Tomcat from writing to that file?
I suspect it is sed -i doing the damage: behind the scenes it writes the output stream to a temp file, then moves the temp file over the original name. The file handle held by Catalina therefore no longer points to any named file.
You'll have to find a way to actually edit the file, not replace it. This might be a valid replacement for sed:
printf "%s\n" "1,${middle}d" "wq" | ed "$catalinaLoc"
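You can see the file being replaced (and hence the broken handle) with a quick experiment; this assumes GNU sed, which writes a temp file and renames it over the original, giving the name a new inode:

```shell
#!/bin/bash
# show that sed -i replaces the file rather than editing it in place
f=$(mktemp)
printf 'a\nb\nc\nd\n' > "$f"
before=$(ls -i "$f" | awk '{print $1}')
sed -i -e '1,2d' "$f"            # delete the first half
after=$(ls -i "$f" | awk '{print $1}')
# $before and $after differ: a process holding the old file open keeps
# writing to the orphaned inode, not to the file now bearing the name
```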
Tangentially, an easier way to get the number of lines:
middle=$(( $(wc -l < "$catalinaLoc") / 2 ))
When you redirect to wc, it no longer prints out the filename.

How to make nohup.out update with perl script?

I have a perl script that copies a large amount of files. It prints some text to standard out and also writes a logfile. However, when running with nohup, both of these display a blank file:
tail -f nohup.out
tail -f logfile.log
The files don't update until the script is done running. Moreover, for some reason tailing the .log file does work if I don't use nohup!
I found a similar question for python (
How come I can't tail my log?)
Is there a similar way to flush the output in perl?
I would use tmux or screen, but they don't exist on this server.
From perldoc (IO::Handle):
HANDLE->autoflush( EXPR );
To disable buffering on standard output, that would be:
STDOUT->autoflush(1);

Shell Script - Linux

I want to write a very simple script which takes a process name and tails the most recent file whose name contains that process name.
I wrote something like this:
#!/bin/sh
tail $(ls -t *"$1"*| head -1) -f
My question:
Do I need the first line?
Why isn't ls -t *"$1"*| head -1 | tail -f working?
Is there a better way to do it?
1: The first line is a so-called shebang; here is the description:
In computing, a shebang (also called a hashbang, hashpling, pound bang, or crunchbang) refers to the characters "#!" when they are the first two characters in an interpreter directive as the first line of a text file. In a Unix-like operating system, the program loader takes the presence of these two characters as an indication that the file is a script, and tries to execute that script using the interpreter specified by the rest of the first line in the file.
2: tail can't take the filename from stdin: it either reads the text to print from stdin or takes the file as a parameter. See the man page.
3: No better solution comes to mind. But pay attention to filenames containing spaces: your current version does not handle them; you need to add quotes around the $() block.
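Putting the pieces together, a sketch with the quoting fixed (newest_match is a hypothetical helper; it prints the most recently modified match because ls -t sorts newest first):

```shell
#!/bin/sh
# newest_match: print the most recently modified filename containing $1
newest_match() {
    ls -t -- *"$1"* 2>/dev/null | head -n 1
}

# usage: follow the newest matching file (the quotes protect spaces)
# tail -f "$(newest_match "$1")"
```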
A note on the variables: $1 does contain the first argument (the process name you pass in), while $0 holds the name of the script itself, possibly including its path, so basename "$0" gives the script's own name, not the process name. Also note that ls -t lists the newest file first, so ls -t *"$1"* | head -1 picks the most recently modified match; use ls -rt | head -1 only if you actually want the oldest one.
You can omit the shebang if you run the script from a shell; in that case the contents will be executed by your current shell instance. In many cases this causes no problems, but it is still bad practice.
Following on from @theomega's answer and @Idan's question in the comments, the shebang is needed, among other things, because some UNIX / Linux systems have more than one command shell.
Each command shell has a different syntax, so the shebang provides a way to specify which shell should be used to execute the script, even if you don't specify it in your run command by typing (for example)
./myscript.sh
instead of
/bin/sh ./myscript.sh
Note that the shebang can also be used in scripts written in non-shell languages such as Perl; in that case you'd put
#!/usr/bin/perl
at the top of your script.
