I have a log file in this pattern:
IP - - [date] "command" response time
I want to search the log for lines which contain the IP 68.45.3.1 and part of the command, "/api/con".
So this is a correct result:
68.45.3.1 - - [05/Nov/2015:03:48:25 -0500] "GET /5.0/api/con/1" 20:01
How can I do it?
Try something like:
zgrep "^68\.45\.3\.1.*/api/con" access.log.*.gz
assuming, of course, that your files are named something like access.log.10.gz etc. (change the file name if this isn't the case). Note that the dots in the IP are escaped, since an unescaped . matches any character.
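To sanity-check the pattern before running it over the archives, you can feed the sample line from the question straight into grep (a minimal check only; access.log.*.gz above is just an assumed naming scheme):
echo '68.45.3.1 - - [05/Nov/2015:03:48:25 -0500] "GET /5.0/api/con/1" 20:01' \
  | grep '^68\.45\.3\.1.*/api/con'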
This is the log file that I want to monitor:
/test/James-2018-11-16_15215125111115-16.15.41.111-appserver0.log
I want Nagios to read this log file so I can monitor it for a specific string.
The issue is with 15215125111115: this is a random id that gets generated.
Here is my script where Nagios is checking for the logfile path:
Variables:
HOSTNAMEIP=$(/bin/hostname -i)
DATE=$(date +%F)
..
CHECK=$(/usr/lib64/nagios/plugins/check_logfiles/check_logfiles
--tag='failorder' --logfile=/test/james-${DATE}_-${HOSTNAMEIP}-appserver0.log
....
I am getting the following output in Nagios:
could not find logfile /test/James-2018-11-16_-16.15.41.111-appserver0.log
The number 15215125111115 is always generated randomly, but I don't know how to get Nagios to identify it. Is there a way to add a variable for this or something? I tried adding an asterisk ("*"), but that didn't work.
Any ideas would be much appreciated.
Try something like:
--tag failorder --type rotating::uniform --logfile /test/dummy \
--rotation "james-$(date +%F)_\d+-${HOSTNAMEIP}-appserver0.log"
If you add a "-v" you can see what happens inside. The type rotating::uniform tells check_logfiles that the rotation scheme makes no difference between the current log and rotated archives as far as the filename is concerned (you frequently find something like xyz.<timestamp>.log). What check_logfiles does is look into the directory where the logfiles are supposed to be. From /test/dummy it only uses the directory part. It then takes all the files inside /test and compares the filenames with the --rotation argument. The files which match are sorted by modification time, so check_logfiles knows which of the files in question was updated most recently, and the newest is considered to be the current logfile. Inside this file check_logfiles then searches for the criticalpattern.
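If you want to see which file that selection logic would pick, you can emulate it in the shell (a rough sketch only; check_logfiles is a Perl plugin, so the \d+ in the --rotation pattern is Perl regex syntax, which corresponds to [0-9]+ in grep -E). Also note the capitalisation: your example file starts with James while your script uses james.
# list the files in /test whose names match the rotation pattern,
# newest first, and take the first one as the "current" logfile
ls -t /test | grep -E "^james-$(date +%F)_[0-9]+-${HOSTNAMEIP}-appserver0\.log$" | head -n 1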
Gerhard
I am new to shell scripting. I am working on a PoC in which a script should read a log file and then append to an existing file for the purpose of alerting. It should work as per the below.
There is a predefined format according to which it will decide whether to append to the file or not. For example:
WWXXX9999XS message
**XXX** - is a 3-letter acronym (application code), e.g. **tom** for a Tomcat application
9999 - is a 4-digit number in the range 1001-1999
**E or X** - is the notification type. For notification X, if open/active alerts already exist for the same error code and the same message, new alerts will not be raised for the existing one. Once you have closed the existing alerts, it will raise an alarm for the new error. If there is any change in the message for the same error code, it will raise an alarm even though open/active alerts are present.
The X option only drops duplicates on code and message; otherwise all alert mechanisms are the same.
**S** - is the severity level, i.e. 2 or 3
**message** - is any text that will be displayed
1. The script will examine the log file and look for an error like "cloud server is down"; if it is a new alert, it should append 'wwclo1002X2 cloud server is down'.
2. If the same alert comes again, it should append 'wwclo1002E2 cloud server is down'.
There are some very handy commands you can use to do this type of file manipulation. I've updated this in response to your comment to allow functionality that checks whether the error has already been appended to the new file.
My suggestion would be that there is enough functionality here to warrant saving it in a bash script.
My approach would be to use a combination of less, grep and >> to read and parse the file and then append to the new file. First, save the following into a bash script (e.g. a file named script.sh):
#!/bin/bash
# $1 = log file, $2 = string to search for, $3 = results file
result=$(less "$1" | grep "$2")
exists=$(less "$3" | grep "$2")
if [[ "$exists" == "$result" ]]; then
    echo "error, already present in file"
    exit 1
else
    echo "$result" >> "$3"
    exit 0
fi
Then use this script, passing the log file as the first argument, the string to search for as the second argument, and the target results file as the third argument, like this:
./script.sh <logFileName> "errorToSearchFor" <resultsTargetFileName>
Don't forget that to run the file you will need to change its permissions - you can do this using:
chmod u+x script.sh
Just to clarify, as you have mentioned you are new to scripting: the less command outputs the entire file; the | (an unnamed pipe) passes this output to the grep command, which searches it for the expression in quotes and returns all lines containing that expression. The output of the grep command is then appended to the results file with >>.
You may need to tailor the expression in quotes after grep to get exactly the output you want from the log file.
The filenames are just placeholders; be sure to update these with the correct file names. Hope this helps!
Note: updated > to >> (a single angle bracket overwrites, a double angle bracket appends).
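Tying this back to the alert format in the question, here is a minimal sketch of the same idea with the alert code baked in (all names below - append_alert.sh, app.log, alerts.txt - are hypothetical placeholders):
#!/bin/bash
# usage: ./append_alert.sh <logfile> <alertcode> <phrase> <alertsfile>
# e.g.:  ./append_alert.sh app.log wwclo1002X2 "cloud server is down" alerts.txt
logfile=$1
code=$2
phrase=$3
alerts=$4

# only raise the alert if the phrase actually occurs in the log
if grep -q "$phrase" "$logfile"; then
    # skip if this exact coded alert line is already present
    if ! grep -qF "$code $phrase" "$alerts" 2>/dev/null; then
        echo "$code $phrase" >> "$alerts"
    fi
fi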
I have a file where each line begins with specific logging info. Here is an example:
12-May 02:01:18:INFO:root:restapid=>someurlhere
12-May 02:01:19:INFO:root:response=>loremipsum
I want to catch this time info and replace it with the current date & time. I'm able to get the target part by using egrep, but unfortunately I couldn't find a way to change it (probably because I'm not familiar with sed). How can I do that? My egrep solution is the following:
egrep '^[0-9]+-[A-Za-z]+ [0-9]+:[0-9]+:[0-9]+' weblog.api
If I can manage this part, I want to assign this command to a function (or alias) in my bashrc, and when I run it I want it to replace all the time info with the current time by calling something like
sample_alias weblog.api
The desired output format is the following (let's say the time right now is 05 Feb 05:02:03; I believe I can get the time info with a date "+%y%m%d%H%M"-style command):
05-Feb 05:02:03:INFO:root:restapid=>someurlhere
05-Feb 05:02:03:INFO:root:response=>loremipsum
cat v1
12-May 02:01:18:INFO:root:restapid=>someurlhere
12-May 02:01:19:INFO:root:response=>loremipsum
sed 's/[0-9]*-[A-Za-z]* [0-9]*:[0-9]*:[0-9]*/'"$(date +"%d-%b %H:%M:%S")"'/g' v1
05-Feb 06:21:30:INFO:root:restapid=>someurlhere
05-Feb 06:21:30:INFO:root:response=>loremipsum
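To wrap this into the bashrc function the question asks for, something like the following should work (a sketch only; sample_alias is the hypothetical name from the question, and -i assumes GNU sed for in-place editing):
# put this in ~/.bashrc, then run: sample_alias weblog.api
sample_alias() {
    sed -i 's/[0-9]*-[A-Za-z]* [0-9]*:[0-9]*:[0-9]*/'"$(date '+%d-%b %H:%M:%S')"'/g' "$1"
}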
I need help in matching a pattern and concatenating fields from two files.
e.g. I have the following contents in one file:
186.110.12.152 xxx
186.110.16.123 yyy
and the following contents in another file:
186.110.12.152 www.google.com
186.110.16.123 www.facebook.com
Now I need to get the user name at the beginning of the output.
If I search for xxx, I should get the output as:
xxx 186.110.12.152 www.google.com
Thanks in advance!!!!
Use the join command:
join firstfile secondfile > output.txt
And to get exactly the output format in the question, you need to format the output using the -o option:
join -o 1.2,2.1,2.2 firstfile secondfile | tee output.txt
The output will be:
xxx 186.110.12.152 www.google.com
yyy 186.110.16.123 www.facebook.com
The explanation of the above command is as follows:
-o is used to format the output of the join command.
1.2 signifies the first file's second column.
2.1 signifies the second file's first column.
2.2 signifies the second file's second column.
tee writes the output of the join command to a file as well as to standard output (i.e. the console).
output.txt records the output of the join command.
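One caveat: join expects both inputs to be sorted on the join field (here, the first column). If your files might not be sorted, you can sort them on the fly with bash process substitution (filenames are the same placeholders as above):
join -o 1.2,2.1,2.2 <(sort firstfile) <(sort secondfile) | tee output.txt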
Look up the join command; it is for joining files based on the contents of a column.
http://linux.die.net/man/1/join
I have been working on this for quite some time and decided to ask for some help. I'm trying to use a command to find multiple occurrences of a function (basically a string) within a directory (that has multiple files), and I would like to view only the file names in which the string is found.
Let's say the directory I want to search, filled with multiple .h and .cpp files, is:
~/Project/Files
and I am looking for occurrences of a function called 'doThis'.
So far I have tried:
grep -r doThis ~/Project/Files
But I get the path and where the string occurs in each file; I only need the file names.
Also, grep -f won't work because I get an error message saying "No such file or directory", and when using just grep I get an error message saying "path is a directory".
Any help would be great: Thanks guys!
Simply use the -l switch ;)
So:
grep -rl foobar dir
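Applied to the question's directory and function name, and optionally narrowed to just the .h and .cpp files (the --include flags are a GNU grep feature):
grep -rl doThis ~/Project/Files
# only look inside header and source files
grep -rl --include='*.h' --include='*.cpp' doThis ~/Project/Files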