capture line and post it - linux

There is a log file from which I need to capture specific lines and send a specific word out of them to a URL.
This line does the job of tailing that log file and finding that word:
tail -f /var/log/mail.log | awk '/status=bounced/ { sub(/^to=</,"",$7); sub(/>,$/,"",$7); print $7}'
Now I need the result of $7 to be sent to some URL, presumably using curl.
Assume that this log file will only get bigger and that this script will need to run endlessly in the background.
What's the best way to put together a bash script that answers those needs?
Thanks!
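A minimal sketch of one way to wire this up: pipe the awk output into a while read loop that calls curl for each address. The endpoint URL and the address= field name below are placeholders, and fflush() is added so matches are not held back in awk's output buffer:
#!/bin/bash
# Follow the mail log forever (tail -F keeps following across log rotation),
# extract each bounced address, and POST it to a placeholder URL.
tail -F /var/log/mail.log |
  awk '/status=bounced/ { sub(/^to=</,"",$7); sub(/>,$/,"",$7); print $7; fflush() }' |
  while read -r addr; do
    curl -s --data-urlencode "address=${addr}" "https://example.com/bounce" > /dev/null
  done
Started with nohup (or wrapped in a systemd unit), this keeps running in the background as the log grows.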

Related

Powershell script to parse a log file and then append to a file

I am new to shell scripting. I am working on a PoC in which a script should read a log file and then append to an existing file for alerting purposes. It should work as described below.
There will be a predefined format according to which the script decides whether to append to the file or not. For example:
WWXXX9999XS message
**XXX** - a 3-letter acronym (application code), e.g. **tom** for the Tomcat application
**9999** - a 4-digit number in the range 1001-1999
**E or X** - for notification type X: if an open/active alert already exists for the same error code and the same message, no new alert is raised for it. Once you have closed the existing alert, an alarm is raised for the new error. If the message changes for the same error code, an alarm is raised even though an open/active alert is present.
The X option only drops duplicates on code and message; otherwise all alert mechanisms are the same.
**S** - the severity level, i.e. 2 or 3
**message** - any text that will be displayed
1. The script will examine the log file and look for an error like 'cloud server is down'; if it is a new alert, it appends 'wwclo1002X2 cloud server is down'.
2. If the same alert comes again, it should append 'wwclo1002E2 cloud server is down'.
There are some very handy commands you can use to do this type of file manipulation. I've updated this in response to your comment to add functionality that will check whether the error has already been appended to the new file.
My suggestion would be that there is enough functionality here to warrant saving it in a bash script.
My approach would be to use a combination of less, grep and >> to read and parse the file and then append to the new file. First, save the following into a bash script (e.g. a file named script.sh):
#!/bin/bash
# $1 = log file, $2 = string to search for, $3 = results file
result=$(less "$1" | grep "$2")   # matching lines from the log file
exists=$(less "$3" | grep "$2")   # matching lines already in the results file
if [[ "$exists" == "$result" ]]; then
    echo "error, already present in file"
    exit 1
else
    echo "$result" >> "$3"
    exit 0
fi
Then use this script, passing in the log file as the first argument, the string to search for as the second argument, and the target results file as the third argument, like this:
./script.sh <logFileName> "errorToSearchFor" <resultsTargetFileName>
Don't forget that to run the file you will need to change its permissions. You can do this using:
chmod u+x script.sh
Just to clarify, as you have mentioned you are new to scripting: the less command will output the entire file, the | (a pipe) will pass this output to the grep command, which will then search it for the expression in quotes and return all lines containing that expression. The output of the grep command is then appended to the results file with >>.
You may need to tailor the expression in quotes after grep to get exactly the output you want from the log file.
The filenames are just placeholders, be sure to update these with the correct file names. Hope this helps!
Note: updated > to >> (a single angle bracket overwrites, a double angle bracket appends).

grep specific part out of a line of text

This is my first question here, so please bear with me.
I have a large text file from which I need only one specific part of one line. I can grep the line, but I do not know how to get that specific part out of it.
Here is my text line (stored in output.txt):
><source src="https://download.foobar.com/content/mp4/web01/2017/05/08/24599/mp4_web01.mp4" type="video/mp4" data-label="Laag - 360p" /><source src="https://download.foobar.com/content/mp4/web02/2017/05/08/24599/mp4_web02.mp4" type="video/mp4" data-label="Hoog - 720p" /><source src="https://download.foobar.com/content/mp4/web03/2017/05/08/24599/mp4_web03.mp4" type="video/mp4" data-label="Normaal - 480p" /></video></div></div>
The part I need to extract from this line is:
https://download.foobar.com/content/mp4/web02/2017/05/08/24599/mp4_web02.mp4
Now I can do a grep like this, but that gives me back three lines:
grep -Po '><source src="\K[^"]+' output.txt
gives me:
https://download.foobar.com/content/mp4/web01/2017/05/08/24599/mp4_web01.mp4
https://download.foobar.com/content/mp4/web02/2017/05/08/24599/mp4_web02.mp4
https://download.foobar.com/content/mp4/web03/2017/05/08/24599/mp4_web03.mp4
I would like to get only the line I am looking for, without an extra sed command to remove the first and third lines of the results.
How can I grep the input line and get back only the intended line? I only need the link to the mp4_web02.mp4 file.
Can anyone help me get this into one grep command?
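One way to keep it to a single grep, assuming the web02 part of the path is what distinguishes the link you want, is to require that string inside the quoted src value:
grep -Po '><source src="\K[^"]*web02[^"]*' output.txt
Since [^"]* cannot match across the closing quote, only the source URL containing web02 is printed.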

how to filter the huge log file to just contain the useful messages

I have a huge log file. Every time I open the file it makes the system unresponsive. I only need to check the log messages that contain certain strings.
Is there a simple way to do it?
$cat testlogfile.txt | grep --color=auto TRACE > newlogfile.txt
For example, say your huge log file is called testlogfile.txt and you only need to check the log messages that contain "TRACE".
Try this command in a Linux terminal, in the directory where the huge log is.
You can then open newlogfile.txt, which only contains the lines with "TRACE".
If you would like to exclude the lines with "TRACE", try the -v option:
$cat testlogfile.txt | grep --color=auto -v TRACE > newlogfile.txt
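As a side note, grep can read the file directly, so the cat in these commands is not strictly needed; for example, the first command is equivalent to:
grep TRACE testlogfile.txt > newlogfile.txt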

egrep command with piped variable in ssh throwing No Such File or Directory error

OK, here I am again, struggling with ssh. I'm trying to retrieve some data from a remote log file based on tokens. I'm trying to pass multiple tokens in an egrep command via ssh:
IFS=$'\n'
commentsArray=($(ssh $sourceUser@$sourceHost "$(egrep "$v" /$INSTALL_DIR/$PROP_BUNDLE.log)"))
echo ${commentsArray[0]}
echo ${commentsArray[1]}
commax=${#commentsArray[@]}
echo $commax
where $v is something like the following, but its length is dynamic, meaning it can contain many file names separated by pipes:
UserComments/propagateBundle-2013-10-22--07:05:37.jar|UserComments/propagateBundle-2013-10-22--07:03:57.jar
The output which I get is:
oracle@172.18.12.42's password:
bash: UserComments/propagateBundle-2013-10-22--07:03:57.jar/New: No such file or directory
bash: line 1: UserComments/propagateBundle-2013-10-22--07:05:37.jar/nouserinput: No such file or directory
0
One thing worth noting is that my log file data contains spaces. So, in the code I've given, the actual comments I want to extract start after the jar file name, like: UserComments/propagateBundle-2013-10-22--07:03:57.jar/
The actual comment is 'New Life Starts here', but the logs show we only get as far as 'New' and then it breaks at the space. I tried setting IFS, but to no avail. Probably I need to set it on the remote side, but I don't know how to do that.
Any help?
Your command is trying to run egrep "$v" /$INSTALL_DIR/$PROP_BUNDLE.log on the local machine, and pass the result of that as the command to run via SSH.
I suspect that you meant for that command to be run on the remote machine. Remove the inner $() to get that to happen (and fix the quoting):
commentsArray=($(ssh $sourceUser@$sourceHost "egrep '$v' '/$INSTALL_DIR/$PROP_BUNDLE.log'"))
You should use fgrep to avoid regex special interpretation from your input:
commentsArray=($(ssh $sourceUser@$sourceHost "$(fgrep "$v" /$INSTALL_DIR/$PROP_BUNDLE.log)"))
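Since the comments contain spaces, a sketch of one further refinement (assuming bash 4+ on the local machine) is to read the remote output into the array with mapfile, so it splits on newlines rather than on every space:
# read one array element per line of remote output
mapfile -t commentsArray < <(ssh "$sourceUser@$sourceHost" "egrep '$v' '/$INSTALL_DIR/$PROP_BUNDLE.log'")
echo "${commentsArray[0]}"
echo "${#commentsArray[@]}"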

How can I replace a specific line by line number in a text file?

I have a 2GB text file on my Linux box that I'm trying to import into my database.
The problem I'm having is that the script that is processing this rdf file is choking on one line:
mismatched tag at line 25462599, column 2, byte 1455502679:
<link r:resource="http://www.epuron.de/"/>
<link r:resource="http://www.oekoworld.com/"/>
</Topic>
=^
I want to replace the </Topic> with </Line>. I can't do a search/replace on all lines, but I do have the line number, so I'm hoping there's some easy way to just replace that one line with the new text.
Any ideas/suggestions?
sed -i yourfile.xml -e '25462599s!</Topic>!</Line>!'
sed -i '25462599 s|</Topic>|</Line>|' nameoffile.txt
The tool for editing text files in Unix is called ed (as opposed to sed, which, as the name implies, is a stream editor).
ed was originally intended as an interactive editor, but it can also easily be scripted. The way ed works is that all commands take an address parameter. The way to address a specific line is just the line number, and the way to change the addressed line(s) is the s command, which takes the same regexp that sed would. So, to change the 42nd line, you would write something like 42s/old/new/.
Here's the entire command:
FILENAME=/path/to/whereever
LINENUMBER=25462599
ed -- "${FILENAME}" <<-HERE
${LINENUMBER}s!</Topic>!</Line>!
w
q
HERE
The advantage of this is that ed is standardized, while the -i flag to sed is a proprietary GNU extension that is not available on a lot of systems.
Use "head" to get the first 25462598 lines and use "tail" to get the remaining lines (starting at 25462601). Though... for a 2GB file this will likely take a while.
Also, are you sure the problem is just with that line and not somewhere earlier? (The error looks like an XML parse error, which might mean the actual problem is someplace else.)
My shell script:
#!/bin/bash
# Usage: script.sh <line number> <new content> <file>
awk -v line="$1" -v new_content="$2" '{
    if (NR == line) {
        print new_content;
    } else {
        print $0;
    }
}' "$3"
Arguments:
first: the line number you want to change
second: the text you want instead of the original line contents
third: the file name
This script prints its output to stdout, so you need to redirect it. Example:
./script.sh 5 "New fifth line text!" file.txt
You can improve it, for example, by checking that all your arguments have the expected values.
