Using Net::OpenSSH to tail the messages file and grep - linux

I am using Net::OpenSSH:
my $ssh = Net::OpenSSH->new("$linux_machine_host");
Using the SSH object, a few commands are executed multiple times over N hours.
At times I need to look for any error messages, such as Timeout, in the /var/adm/messages file.
My suggestion:
$ssh->capture2("echo START >> /var/adm/messages");
$ssh->capture2("some command which will be run in background for n hours");
$ssh->capture2("echo END >> /var/adm/messages");
Then read all lines between START and END and grep for the required error message.
$ssh->capture2('grep -A 100000 START /var/adm/messages | grep -B 100000 END');
Without writing START and END into the messages file, can I tail the /var/adm/messages file at some point and capture any new messages appearing afterwards?
Are there any Net::OpenSSH methods which would capture new lines and write them into a file?

You can read the messages file via SFTP (see Net::SFTP::Foreign):
# untested!
use Net::SFTP::Foreign::Constants qw(:flags);
...
my $sftp = $ssh->sftp;

# open the messages file creating it if it doesn't exist
# and move to the end:
my $fh = $sftp->open("/var/adm/messages",
                     SSH2_FXF_READ|SSH2_FXF_CREAT)
    or die $sftp->error;
seek($fh, 0, 2);

$ssh->capture2("some command which...");

# look for the size of /var/adm/messages now so that we
# can ignore any lines that may be appended while we are
# reading it:
my $end = (stat $fh)[7];

# and finally read any lines added since we opened it:
my @msg;
while (1) {
    my $pos = tell $fh;
    last if $pos < 0 or $pos >= $end;
    my $line = <$fh>;
    last unless defined $line;
    push @msg, $line;
}
Note that you are not taking into account that the messages file may be rotated. Handling that would require more convoluted approaches.
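For instance, one minimal, untested sketch (assuming the $sftp and $fh variables from the code above): SFTP file attributes carry no inode number, so the practical rotation signal is the remote file shrinking below the position you have already read; when that happens, reopen and start from the top of the new file.
my $last_pos = tell $fh;
my $attrs = $sftp->stat("/var/adm/messages")
    or die $sftp->error;
if ($attrs->size < $last_pos) {
    # the file is smaller than our read position: assume it was
    # rotated, reopen it and read the new file from the beginning
    $fh = $sftp->open("/var/adm/messages", SSH2_FXF_READ)
        or die $sftp->error;
    $last_pos = 0;
}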

Related

How to display content of a file which was added in the past 5 mins, without scanning the whole file, in Linux?

I have a DB error log file, and it will grow continuously.
Now I want to set up some error monitoring on that file, running every 5 minutes.
The problem is I don't want to scan the whole file every 5 minutes (when the monitoring cron job executes), because it may grow very big in the future. Scanning the whole (big) file every 5 minutes would consume considerably more resources.
So I just want to scan only the lines which were inserted/written to the log during the last 5-minute interval.
Each error recorded in the log has a timestamp prepended to it, like below:
180418 23:45:00 [ERROR] mysql got signal 11.
So I want to search with the pattern [ERROR] only on lines which were added in the last 5 mins (not the whole file) and write the output to another file.
Please help me here.
Feel free to ask if you need more clarification on my question.
I'm using RHEL 7 and I'm trying to implement the above monitoring through a bash shell script.
Serializing the Byte Offset
This picks up where the last instance left off; if you run it every 5 minutes, it'll scan 5 minutes of data.
Note that this implementation knowingly can scan data added during an invocation's run twice. This is a little sloppy, but it's much safer to scan overlapping data twice than to never read it at all, a risk you take if you rely on cron to run your program on schedule (likewise, sleeps can run over the requested time if the system is busy).
#!/usr/bin/env bash
file=$1; shift                    # first input: filename
grep_opts=( "$@" )                # remaining inputs: grep options
dir=$(dirname -- "$file")         # extract directory name to use for offset storage
basename=${file##*/}              # pick up file name w/o directory
size_file="$dir/.$basename.size"  # generate filename to use to store offset

if [[ -s $size_file ]]; then      # ...if we already have a file with an offset...
  old_size=$(<"$size_file")       # ...read it from that file
else
  old_size=0                      # ...otherwise start at the front.
fi

new_size=$(stat --format=%s -- "$file") || exit  # figure out current size
if (( new_size < old_size )); then
  old_size=0                      # file was truncated, so we can't trust old_size
elif (( new_size == old_size )); then
  exit 0                          # no new contents, so no point in trying to search
fi

# read starting at old_size and grep only that content
dd iflag=skip_bytes skip="$old_size" if="$file" | grep "${grep_opts[@]}"
pipe_status=( "${PIPESTATUS[@]}" )  # capture both exit statuses immediately
grep_retval=${pipe_status[1]}

# if the read failed, don't store an updated offset
(( pipe_status[0] != 0 )) && exit 1

# create a new tempfile to store offset in
tempfile=$(mktemp -- "${size_file}.XXXXXX") || exit

# write to that temporary file...
printf '%s\n' "$new_size" > "$tempfile" || { rm -f "$tempfile"; exit 1; }

# ...and if that write succeeded, overwrite the stored offset file.
mv -- "$tempfile" "$size_file" || exit

exit "$grep_retval"
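For the 5-minute cadence in the question, a crontab entry along these lines ties it together (the script path, log path, and output file here are hypothetical; everything after the filename is passed straight to grep):
*/5 * * * * /usr/local/bin/scan-new-lines.sh /var/log/mysqld.log -F '[ERROR]' >> /var/log/mysql-errors.log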
Alternate Mode: Bisect For The Timestamp
Note that this can miss content if you're relying on, say, cron to invoke your code every 5 minutes on-the-dot; storing byte offsets can thus be more accurate.
Using the bsearch tool by Ole Tange:
#!/usr/bin/env bash
file=$1; shift
start_date=$(date -d 'now - 5 minutes' '+%y%m%d %H:%M:%S')
byte_offset=$(bsearch --byte-offset "$file" "$start_date")
dd iflag=skip_bytes skip="$byte_offset" if="$file" | grep "$@"
Another approach could be something like this:
DB_FILE="FULL_PATH_TO_YOUR_DB_FILE"
current_db_size=$(du -b "$DB_FILE" | cut -f 1)

if [[ ! -e SOME_PATH_OF_YOUR_CHOICE/last_size_db_file ]] ; then
  tail --bytes "$current_db_size" "$DB_FILE" > SOME_PATH_OF_YOUR_CHOICE/log-file_$(date +%Y-%m-%d_%H-%M-%S)
else
  if [[ $(cat SOME_PATH_OF_YOUR_CHOICE/last_size_db_file) -gt $current_db_size ]] ; then
    previously_read_bytes=0
  else
    previously_read_bytes=$(cat SOME_PATH_OF_YOUR_CHOICE/last_size_db_file)
  fi
  new_bytes=$((current_db_size - previously_read_bytes))
  tail --bytes "$new_bytes" "$DB_FILE" > SOME_PATH_OF_YOUR_CHOICE/log-file_$(date +%Y-%m-%d_%H-%M-%S)
fi
printf '%s' "$current_db_size" > SOME_PATH_OF_YOUR_CHOICE/last_size_db_file
This prints all bytes of DB_FILE not previously printed to SOME_PATH_OF_YOUR_CHOICE/log-file_$(date +%Y-%m-%d_%H-%M-%S).
Note that $(date +%Y-%m-%d_%H-%M-%S) will be the current 'full' date at the time the log file is created.
You can make this a script and use cron to execute that script every five minutes; something like this:
*/5 * * * * PATH_TO_YOUR_SCRIPT
Here is my approach:
First, read the whole log once so far.
If you reach the end, collect and read new lines for a timespan (in my example 9 seconds, for faster testing, while my dummy server appends to the logfile every 3 seconds).
After the timespan, echo the cache, clear the cache (an array arr), loop and sleep for some time, so that this process doesn't consume all CPU time.
First, my dummy logfile writer:
#!/bin/bash
#
# dummy logfile writer
#
while true
do
  s=$(( $(date +%s) % 3600 ))
  echo $s server msg
  sleep 3
done >> seconds.log
Started via ./seconds-out.sh &.
Now the more complicated part:
#!/bin/bash
#
# consume a logfile as written so far. Then, collect every new line
# and show it in an interval of $interval
#
interval=9 # 9 seconds
#
printf -v secnow '%(%s)T' -1
start=$(( secnow % (3600*24*365) ))
declare -a arr
init=0
while true
do
  read -r line
  printf -v secnow '%(%s)T' -1
  now=$(( secnow % (3600*24*365) ))
  # consume every line created in the past
  if (( ! init ))
  then
    # assume reading a line might not take longer than a second (rounded to whole seconds)
    while (( ${#line} > 0 && (now - start) < 2 ))
    do
      read -r line
      start=$now
      echo -n "." # for debugging purposes, remove
      printf -v secnow '%(%s)T' -1
      now=$(( secnow % (3600*24*365) ))
    done
    init=1
    echo "init=$init" # for debugging purposes, remove
  # collect new lines, display them every $interval seconds
  else
    if (( ${#line} > 0 ))
    then
      echo -n "-" # for debugging purposes, remove
      arr+=("read: $line \n")
    fi
    if (( (now - start) > interval ))
    then
      echo -e "${arr[@]}"
      arr=()
      start=$now
    fi
  fi
  sleep .1
done < seconds.log
Output with the logfile generator appending every 3 seconds and having run for some time, then starting the read-seconds.sh script, with debugging output activated:
./read-seconds.sh
.......................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................init=1
---read: 1688 server msg
read: 1691 server msg
read: 1694 server msg
---read: 1697 server msg
read: 1700 server msg
read: 1703 server msg
----read: 1706 server msg
read: 1709 server msg
read: 1712 server msg
read: 1715 server msg
^C
Every dot represents a logfile line from the past, which is therefore skipped.
Every dash represents a logfile line that was collected.

How to tail a file in Perl when copytruncate is used

Problem
I have created a simple Perl script to read log files and process the data asynchronously.
The reading sub also checks for changes in the inode number, so a new filehandle is created when the logs rotate.
The problem I am facing is that when copytruncate is used in log rotation, the inode does not change when the file is rotated.
This shouldn't be an issue, as the script should just continue reading the file, but for some reason that I cannot immediately see, as soon as the logs rotate no new lines are ever read.
Question
How can I modify the below script (or completely scrap it and start again) to continuously tail a file which is logrotated using copytruncate, using Perl?
Code
use strict;
use warnings;
use threads;
use Thread::Queue;
use threads::shared;

my $logq = Thread::Queue->new();
my %Servers :shared;
my %servername :shared;

#########
# This sub just reads the data off the queue and processes it; I have
# reduced it to a simple print statement for simplicity.
# The sleep is to prevent it from eating CPU.
########
sub process_data
{
    while (sleep(5)) {
        if ($logq->pending())
        {
            while ($logq->pending() > 0) {
                my $data = $logq->dequeue();
                print "Data:$data\n";
            }
        }
    }
}

sub read_file
{
    my $myFile = $_[0];                 # Get the argument and assign to var.
    open(my $logfile, '<', $myFile) || die "error";    # open file
    my $Inode = (stat($logfile))[1];    # Get the current inode
    seek $logfile, 0, 2;                # Go to the end of the file
    for (;;) {
        while (<$logfile>) {
            chomp( $_ );
            $logq->enqueue( $_ );       # Add lines to queue for processing
        }
        sleep 5;
        if ($Inode != (stat($myFile))[1]) {
            close($logfile);
            while (! -e $myFile) {
                sleep 2;
            }
            open($logfile, '<', $myFile) || die "error";
            $Inode = (stat($logfile))[1];
        }
        # Above checks if the inode has changed and the file still exists
        seek $logfile, 0, 1;            # Clear the EOF flag
    }
}

my $thr1 = threads->create(\&read_file, "test");
my $thr4 = threads->create(\&process_data);
$thr1->join();
$thr4->join();
# Creating the threads; can add more log files for processing or multiple processing sections.
Possibly relevant info
Log config for logrotate contains
compress
compresscmd /usr/bin/bzip2
uncompresscmd /usr/bin/bunzip2
daily
rotate 5
notifempty
missingok
copytruncate
for this file.
Specs
GNU bash, version 3.2.57(1)-release (s390x-ibm-linux-gnu)
perl, v5.10.0
(If logrotate has a version and someone knows how to check it, then I will also add that.)
If any more info is needed, just ask.
So the reason that this was failing is pretty obvious when you look at copytruncate: it copies the original file and then truncates the current one.
Whilst this ensures that the inode is kept, it creates another problem.
As the current way I tail the file is by simply staying at the end and clearing the EOF flag, when the file is truncated the pointer stays at the position of the last line before truncation, which in turn means that no more lines will be read until the file grows back past that pointer.
The obvious solution then is to simply check the size of the file and reset the pointer if it is ever pointing past the end of the file.
I found it easier to just check that the file size never got smaller, though, using the two lines below.
my $fileSize=(stat($logfile))[7];
#Added after the inode is assigned
and changing
if($Inode != (stat($myFile))[1]){
to
if($Inode != (stat($myFile))[1] || (stat($myFile))[7] < $fileSize){
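For completeness, here is a minimal, untested sketch of the pointer-reset variant described above (it assumes the $logfile handle and the polling loop from the original script): if the file is ever smaller than the current read position, treat it as truncated and rewind.
my $size = (stat($logfile))[7];
if ($size < tell($logfile)) {
    # the file shrank below our read position, so it was truncated:
    # rewind and read the new contents from the top
    seek($logfile, 0, 0);
}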

grep lines before and after in aix/ksh shell

I want to extract lines before and after a matched pattern.
eg: if the file contents are as follows
absbasdakjkglksagjgj
sajlkgsgjlskjlasj
hello
lkgjkdsfjlkjsgklks
klgdsgklsdgkldskgdsg
I need to find hello and display the line before and after 'hello'.
the output should be
sajlkgsgjlskjlasj
hello
lkgjkdsfjlkjsgklks
This is possible with GNU grep, but I need a method that works in the AIX/ksh shell, where no GNU tools are installed.
sed -n '/hello/{x;G;N;p;};h' filename
(How it works: the trailing h saves each line into the hold space; on a match, x swaps the matching line with the held previous line, G appends the match back after it, N appends the following line, and p prints all three.)
I've found it is generally less frustrating to build the GNU coreutils once and benefit from many more features: http://www.gnu.org/software/coreutils/
Since you'll have Perl on the machine, you could use the following code, but you'd probably do better to install the GNU utilities. This has options -b n1 for lines before and -f n2 for lines following the match. It works with PCRE matches (so if you want case-insensitive matching, add an i after the regex instead of using a -i option). I haven't implemented -v or -l; I didn't need those.
#!/usr/bin/env perl
#
# @(#)$Id: sgrep.pl,v 1.7 2013/01/28 02:07:18 jleffler Exp $
#
# Perl-based SGREP (special grep) command
#
# Print lines around the line that matches (by default, 3 before and 3 after).
# By default, include file names if more than one file to search.
#
# Options:
# -b n1 Print n1 lines before match
# -f n2 Print n2 lines following match
# -n    Print line numbers
# -h    Do not print file names
# -H    Do print file names

use warnings;
use strict;
use constant debug => 0;
use Getopt::Std;

my(%opts);

sub usage
{
    print STDERR "Usage: $0 [-hnH] [-b n1] [-f n2] pattern [file ...]\n";
    exit 1;
}

usage unless getopts('hnf:b:H', \%opts);
usage unless @ARGV >= 1;
if ($opts{h} && $opts{H})
{
    print STDERR "$0: mutually exclusive options -h and -H specified\n";
    exit 1;
}
my $op = shift;
print "# regex = $op\n" if debug;

# print file names if -h omitted and more than one argument
$opts{F} = (defined $opts{H} || (!defined $opts{h} and scalar @ARGV > 1)) ? 1 : 0;
$opts{n} = 0 unless defined $opts{n};
my $before = (defined $opts{b}) ? $opts{b} + 0 : 3;
my $after  = (defined $opts{f}) ? $opts{f} + 0 : 3;
print "# before = $before; after = $after\n" if debug;

my @lines = ();   # Accumulated lines
my $tail  = 0;    # Line number of last line in list
my $tbp_1 = 0;    # First line to be printed
my $tbp_2 = 0;    # Last line to be printed

# Print lines from @lines in the range $tbp_1 .. $tbp_2,
# leaving $leave lines in the array for future use.
sub print_leaving
{
    my ($leave) = @_;
    while (scalar(@lines) > $leave)
    {
        my $line = shift @lines;
        my $curr = $tail - scalar(@lines);
        if ($tbp_1 <= $curr && $curr <= $tbp_2)
        {
            print "$ARGV:" if $opts{F};
            print "$curr:" if $opts{n};
            print $line;
        }
    }
}

# General logic:
# Accumulate each line at end of @lines.
# ** If current line matches, record range that needs printing
# ** When the line array contains enough lines, pop line off front and,
#    if it needs printing, print it.
# At end of file, empty line array, printing requisite accumulated lines.
while (<>)
{
    # Add this line to the accumulated lines
    push @lines, $_;
    $tail = $.;
    printf "# array: N = %d, last = $tail: %s", scalar(@lines), $_ if debug > 1;
    if (m/$op/o)
    {
        # This line matches - set range to be printed
        my $lo = $. - $before;
        $tbp_1 = $lo if ($lo > $tbp_2);
        $tbp_2 = $. + $after;
        print "# $. MATCH: print range $tbp_1 .. $tbp_2\n" if debug;
    }
    # Print out any accumulated lines that need printing
    # Leave $before lines in array.
    print_leaving($before);
}
continue
{
    if (eof)
    {
        # Print out any accumulated lines that need printing
        print_leaving(0);
        # Reset for next file
        close ARGV;
        $tbp_1 = 0;
        $tbp_2 = 0;
        $tail  = 0;
        @lines = ();
    }
}
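For the question's one-line-before/one-line-after case, the invocation (assuming the script is saved as sgrep.pl) would be:
perl sgrep.pl -b 1 -f 1 hello filename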
I had a situation where I was stuck with a slow telnet session on a tablet, believe it or not, and I couldn't write a Perl script very easily with that keyboard. I came up with this hacky maneuver that worked in a pinch for me with AIX's limited grep. This won't work well if your grep returns hundreds of lines, but if you just need one line and one or two above/below it, this could do it. First I ran this:
cat -n filename |grep criteria
By including the -n flag, I see the line number of the data I'm seeking, like this:
2543 my crucial data
Since cat gives the line number 2 spaces before and 1 space after, I could grep for the line number right before it like this:
cat -n filename |grep " 2542 "
I ran this a couple of times to give me lines 2542 and 2544, which bookended line 2543. Like I said, it's definitely fallible, like if you have reams of data that might have " 2542 " all over the place, but just to grab a couple of quick lines, it worked well.

Assigning a system command to an array in Perl

I'm attempting to assign the output of the Linux system command top -n1 to an array and then eliminate the first seven lines of the text that is written to a text file. When I go to print elements from my array, I get an error saying "use of uninitialized value". Can someone show me what I'm doing wrong when assigning my array to the system command? Thank you.
Edit: I might also add that I'm looking to delete the first seven lines by using array slicing.
sub processlist
{
    my $num_of_lines = 0;
    my @top_command = `top -bn1 >toptmp.txt`;
    # opening temp file, and output file.
    open(my $in, '<', "toptmp.txt") or die "Can't read the file: $!";    # file used for initial reading
    open(my $out, '>', "top.txt") or die "can't write to file: $!";
    print $top_command[0], "\n";
    # looping, deleting first 7 lines
    while (<$in>)
    {
        if ($num_of_lines > 6)    # starts writing to top.txt past line 7 (counting starts at 0)
        {
            print $out $_;
        }
        $num_of_lines++;
    }
    close $out;
    close $in;
    system("rm toptmp.txt");    # erasing tmp file.
}
Use instead
top -bn1 | tail -n +8
No need to reinvent the wheel when the tail command will already do what you want.
You are writing the top results to a file; if you want to capture them into the variable, you should not do that.
Use top -bn1 instead of top -bn1 >toptmp.txt
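Putting both suggestions together, a minimal sketch of the array-slicing version the asker was after (variable names are illustrative):
my @top_command = `top -bn1`;    # backticks capture stdout once the redirect is gone
my @processes = @top_command[7 .. $#top_command];    # slice off the first 7 header lines
print @processes;
This avoids the temporary file and the external rm entirely.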

A way to parse terminal output / input ? (.bashrc ?)

Is there a way to parse input and output from bash commands in an interactive terminal before they reach the screen ? I was thinking maybe something in .bashrc, but I'm new to using bash.
For example:
I type "ls /home/foo/bar/"
That gets passed through a script that replaces all instances of 'bar' with 'eggs'
"ls /home/foo/eggs/" gets executed
The output gets sent back to the replace script
The output of the script is sent to the screen
Yes. Here's something I wrote for my own use, to wrap old command-line Fortran programs that ask for file paths. It allows escaping back to the shell for e.g. running 'ls'. This only works one way, i.e. it intercepts user input and then passes it on to a program, but it gets you most of what you want. You can adapt it to your needs.
#!/usr/bin/perl
# shwrap.pl - Wrap any process for convenient escape to the shell.
# ire_and_curses, September 2006

use strict;
use warnings;

# Check args
my $executable = shift || die "Usage: shwrap.pl executable";
my @escape_chars = ('#');    # Escape to shell with these chars
my $exit = 'exit';           # Exit string for quick termination

open my $exe_fh, "|$executable @ARGV" or die "Cannot pipe to program $executable: $!";

# Set magic buffer autoflush on...
select((select($exe_fh), $| = 1)[0]);

# Accept input until the child process terminates or is terminated...
while ( 1 ) {
    chomp(my $input = <STDIN>);

    # End if we receive the special exit string...
    if ( $input =~ m/$exit/ ) {
        close $exe_fh;
        print "$0: Terminated child process...\n";
        exit;
    }

    foreach my $char ( @escape_chars ) {
        # Escape to the shell if the input starts with an escape character...
        if ( my ($command) = $input =~ m/^$char(.*)/ ) {
            system $command;
        }
        # Otherwise pass the input on to the executable...
        else {
            print $exe_fh "$input\n";
        }
    }
}
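A quick usage sketch (the wrapped program name is hypothetical): lines prefixed with # run in the local shell, and a line matching 'exit' terminates the wrapper.
$ ./shwrap.pl ./legacy_fortran_prog
#ls
exit
./shwrap.pl: Terminated child process...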
