How to read a continuous log file from the last read line | Linux Shell

Platform: RHEL7
Situation:
A JMeter report file is appended with new results every 5 minutes by a crontab script.
Another awk script looks for response times greater than 500 ms and sends email alerts.
Problem Statement:
The requirement is to scan only the newly added lines in the report file. Presently, the awk script reads the complete report every time and sends alerts even for older events:
awk -F "," '$4 != 200 || $14 > 500' results.jtl
Good to have: the awk script could read from the end of the file back to the line read last time. This would help in creating an alert for the latest event first.
Any suggestion would be a great help.
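A common approach (a sketch, not from the thread; the state-file name offset.state is an assumption) is to record how many lines were processed on each run and start the next run just past that point:

#!/bin/bash
# Process only the lines appended to results.jtl since the previous run.
STATE=offset.state    # hypothetical state file holding the last processed line count
LOG=results.jtl
last=$(cat "$STATE" 2>/dev/null || echo 0)
total=$(wc -l < "$LOG")
if [ "$total" -lt "$last" ]; then
    last=0            # the log was rotated or truncated; start over
fi
# tac reverses the new lines so the latest event is alerted on first
tail -n +"$((last + 1))" "$LOG" | tac | awk -F "," '$4 != 200 || $14 > 500'
echo "$total" > "$STATE"

Drop the tac stage if chronological order is fine; the rest covers the "only newly added lines" requirement.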

Any reason for not using:
Duration Assertion: to fail samplers whose response times are over 500 ms
If Controller: with the condition ${JMeterThread.last_sample_ok}, which checks whether the last sampler succeeded
SMTP Request Sampler: to send an email when there is a failure

Related

Why is a period at the beginning of a line ignored by sendEmail on Linux

I want to send email from the command line on Linux, so I chose sendEmail (a lightweight, command-line SMTP email client). However, I found that a period (.) at the beginning of a line gets dropped, and it really confused me.
-m MESSAGE message body
My command:
sendEmail -f sender@example.com -t receiver@example.com -u "Test mail" -s smtp.example.com -xu sender@example.com -xp sender_password -m ".Hello\n..Hello\nHello.world" -o tls=no
What I want to display is:
.Hello
..Hello
Hello.world
But the result is:
Hello
.Hello
Hello.world
Thanks a million.
This is a bug in the sendEmail client. In SMTP, a line containing nothing but a single period (.) indicates the end of the message (the DATA segment in SMTP). To avoid unintentionally terminating the transmission when a message contains such a line, an extra period has to be added to every line starting with a period before the message goes onto the wire, and stripped again on receipt. Taking care of this is the job of a proper SMTP client, so this is clearly faulty behavior in the client.
To work around the bug, add the extra leading period yourself, as shown below.
For details see RFC5321, sections 4.1.1.4 and 4.5.4.
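Concretely, here is the question's own command with the stuffing applied by hand (a sketch; the extra period on each dot-prefixed line is what gets stripped back off in transit):

sendEmail -f sender@example.com -t receiver@example.com -u "Test mail" -s smtp.example.com -xu sender@example.com -xp sender_password -m "..Hello\n...Hello\nHello.world" -o tls=no

For a message body generated elsewhere, sed 's/^\./../' performs the same stuffing mechanically.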

mail can't send messages: Process exited with a non-zero status

I wrote a bash script that sends out mail, but after 50 e-mails it starts saying "mail can't send messages: Process exited with a non-zero status". Can anyone help me solve this problem? The code I used is below if you want to take a look at it.
#!/bin/bash
# Declare variables.
emailBody=email_body.txt
emailList=email_list_delimiter.txt
# Send the mail inside a read-file loop.
while IFS= read -r emailTo; do
    mail -s "Hi, I'm looking for a position in IT Field." "$emailTo" < "$emailBody" &&
        echo "Success"
done < <(grep . "$emailList")
You are probably hitting a server-side limit on the number of messages you can send in a fixed time, or equivalently the number of connections allowed within a moving window of time.
If you can (i.e., the message is not personalized), it is best to send one message to multiple recipients rather than many messages, each to one recipient. Do that by putting your own e-mail address in the To field and Bcc'ing the whole recipient list in one go; check your mail command's man page for how to do that, as sketched below.
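A rough sketch, assuming a mailx-style mail client that supports -b for Bcc (check man mail on your system) and reusing the file names from the question; you@example.com is a placeholder for your own address:

# join the recipient list into one comma-separated Bcc string
bcc=$(grep . email_list_delimiter.txt | paste -sd, -)
mail -s "Hi, I'm looking for a position in IT Field." -b "$bcc" you@example.com < email_body.txt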

Bash Script Efficient For Loop (Different Source File)

First of all, I'm a beginner in bash scripting. I usually code in Java, but this particular task requires me to create some bash scripts on Linux. FYI, I've already made a working script, but I think it's not efficient enough given the large files I'm dealing with.
The problem is simple: I have 2 logs that I need to compare, making some corrections to one of them. I'll call them log A and log B. The 2 logs have different formats; here is an example:
01/04/2015 06:48:59|9691427842139609|601113090635|PR|30.00|8.3|1.7| <- log A
17978712 2015-04-01 06:48:44 601113090635 SUCCESS DealerERecharge. <- log B
17978714 2015-04-01 06:48:49 601113090635 SUCCESS DealerERecharge. <- log B
As you can see, there is a gap in the timestamps. The log B line that actually matches log A is the one with ID 17978714, because its time is the closest. The largest gap I've seen is 1 minute. I can't use simple range logic, because if more than one line in log B falls within the 1-minute range, all of those lines show up in my regenerated log.
The script I made contains a for loop which iterates over the timestamps of log A until it hits something in log B (the first hit is the closest one).
Inside the for loop I have this line of code, which makes the loop slow:
LINEOUTPUT=$(grep "Validation 1" "$File2" | grep "Validation 2" | grep "Timestamp From Log A")
I've read some examples using sed, but the problem is that I have 2 more validations to consider before matching on the timestamp. The validations work as filters to narrow down the exact match between log A and log B.
Additional info: I benchmarked the script by timing the loop. One thing I've noticed is that even though that version uses only one pipeline, each loop iteration is still slow.
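No answer is recorded in the thread, but a single-pass approach along the following lines is the usual fix: load log B into memory once, then look up the closest timestamp per log A line, instead of re-grepping the file on every iteration. This is only a sketch; it assumes GNU awk (for mktime()) and guesses the field layout from the sample lines, with the shared subscriber number standing in for the two extra validations as the join key:

awk '
function to_epoch(y, m, d, hms,    t) {    # hms is like "06:48:44"
    split(hms, t, ":")
    return mktime(y " " m " " d " " t[1] " " t[2] " " t[3])
}
NR == FNR {                                # first file: slurp log B
    split($2, d, "-")                      # $2 = "2015-04-01"
    key = $4                               # shared number, e.g. 601113090635
    n[key]++
    ts[key, n[key]]   = to_epoch(d[1], d[2], d[3], $3)
    line[key, n[key]] = $0
    next
}
{                                          # second file: each log A line
    split($0, f, "|")
    split(f[1], dt, " ")                   # "01/04/2015" and "06:48:59"
    split(dt[1], d, "/")                   # dd/mm/yyyy
    a = to_epoch(d[3], d[2], d[1], dt[2])
    best = ""; bestdiff = 61               # ignore matches over a minute away
    for (i = 1; i <= n[f[3]]; i++) {
        diff = ts[f[3], i] - a
        if (diff < 0) diff = -diff
        if (diff < bestdiff) { bestdiff = diff; best = line[f[3], i] }
    }
    if (best != "") print $0 " -> " best
}' logB logA

Reading log B once and doing the per-line matching in memory removes the repeated full-file grep, which is where the original loop spends its time.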

Need to capture the commands fired on Linux

I would like to capture all the commands fired by a user in a session. This is needed for auditing purposes.
I used something like the following:
LoggedIn=$(date +"%B-%d-%Y-%H:%M")   # was %M:%H, i.e. minutes:hours
HostName=$(hostname)
UNIX_USER=$(who am i | cut -d " " -f 1)
echo "Please enter a Change Request Number for which you are logging in: "
read CR_NUMBER
FileName=$HostName-$LoggedIn-$CR_NUMBER-$UNIX_USER
script "$FileName"
I have put this snippet in the .profile file, so that as soon as the user logs in to an SU account, the file is created. The plan is to push these files to a central repository where an auditor can look into them.
But there are a couple of problems with this.
The script command spools all the data from the session. For example, if a user cats a property file, all of that file's contents get appended to the audit file.
Unless the user runs the exit command, no data is spooled to the audit file; if the user happens to log out without running exit, the audit file will be empty.
Is there a better solution for auditing? The history file is not an option, since it does not tell me for which Change Request number (internal to my organisation) the commands were fired. Is there any way to capture just the commands fired, but not their output?
I think this software exactly matches your need:
https://github.com/a2o/snoopy
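If installing extra software is not an option, a rough bash-only approximation is to flush each command into the audit file the moment it is entered. That captures the commands without their output and survives a logout without exit (a sketch reusing the FileName variable from the question's snippet):

# make bash append every command line to the audit file as it is entered,
# rather than only writing history on a clean exit
HISTFILE="$FileName"
shopt -s histappend
PROMPT_COMMAND='history -a'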

Calculating the difference between the first words (timestamps) dynamically using perl

I have a program that keeps writing the ICMP echo requests received by a machine into a file.
I am using system("tcpdump 'icmp[0] == 8' | tee abc.txt") to do that.
So this process keeps going until I end the program manually.
Each line has the timestamp as its first word.
Now I want to calculate the frequency of the echo requests using a separate script, so that if it reaches a certain threshold, I can print an alert.
I tried to use grep -Eo '^[^ ]+' file
to get the timestamps into an array, but I don't know what to do after getting them there. grep keeps running in a while loop, since the file it reads from keeps being appended to indefinitely. (I won't be able to monitor the differences and print an alert if grep runs on like that, right?)
All I am trying to do is keep track of the frequency of ICMP echo requests coming in on my machine and print an alert message whenever that frequency crosses a threshold. Is there an alternative way?
All timestamps are saved in @arr:
perl -ne '$f{$_}++ or push @arr, $_ for /(\d+:\d+)/ }{ print "$_ [$f{$_} times]\n" for @arr' file
For constantly reading from the log file (seek $T,0,1 clears the EOF flag so the next read picks up freshly appended lines):
perl -e 'open$T,pop;while(1){while(<$T>){ ++$f{$_}>10 and print "[$f{$_}]$_" for /(\d+:\d+)/ }sleep 1;seek $T,0,1}' file
I am using
tcpstat -i eth1 -f 'icmp[0] == 8'
to get the request count. It gives me 3 more parameters, but I've got to research them a bit!
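For the alerting part, a crude shell-side sketch (the 10-per-5-seconds threshold is a made-up example; it counts new lines landing in the abc.txt capture file from the question):

#!/bin/bash
THRESHOLD=10   # max echo requests tolerated per window
WINDOW=5       # window length in seconds
prev=$(wc -l < abc.txt)
while sleep "$WINDOW"; do
    now=$(wc -l < abc.txt)
    if [ $((now - prev)) -gt "$THRESHOLD" ]; then
        echo "ALERT: $((now - prev)) ICMP echo requests in the last $WINDOW s"
    fi
    prev=$now
done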
