How to use an if/else condition with grok patterns in Logstash

I have web and API logs combined, and I want to store them separately in Elasticsearch. So I want to write one configuration where, if the request is an API request, the "if" branch runs, and if it is a web request, the "else" branch runs.
Below are a few web and API log lines.
00:06:27,778 INFO [stdout] (ajp--0.0.0.0-8009-38) 00:06:27.777 [ajp--0.0.0.0-8009-38] INFO c.r.s.web.rest.WidgetController - Method getWidgetDetails() started to get widget details.
00:06:27,783 INFO [stdout] (ajp--0.0.0.0-8009-38) ---> HTTP GET http://api.survey.me/v1/getwidgetdetails?profileName=jeremy-steffens&profileLevel=INDIVIDUAL&companyProfileName=premier-nationwide-lending&hideHistory=true
00:06:27,817 INFO [stdout] (ajp--0.0.0.0-8009-38) <--- HTTP 200 http://api.survey.me/v1/getwidgetdetails?profileName=jeremy-steffens&profileLevel=INDIVIDUAL&companyProfileName=premier-nationwide-lending&hideHistory=true (29ms)
00:06:27,822 INFO [stdout] (ajp--0.0.0.0-8009-38) 00:06:27.822 [ajp--0.0.0.0-8009-38] INFO c.r.s.web.rest.WidgetController - Method getWidgetDetails() finished.
00:06:27,899 INFO [stdout] (ajp--0.0.0.0-8009-40) 00:06:27.899 [ajp--0.0.0.0-8009-40] INFO c.r.s.web.controller.LoginController - Inside initLoginPage() of LoginController
I tried to write a pattern, but it's not working; it only matches up to the thread name. After the thread the logs take multiple forms, so I can't write a single pattern without an if condition.
(?:%{TIME:CREATED_ON})(?:%{SPACE})%{WORD:LEVEL}%{SPACE}\[%{NOTSPACE}\]%{SPACE}\(%{NOTSPACE:THREAD}\)
Can anybody give me a suggestion?

You don't need an if/else condition to do this; you can use multiple patterns, one that matches the API log lines and another that matches the web log lines.
For the API log lines you can use the following pattern:
(?:%{TIME:CREATED_ON})(?:%{SPACE})%{WORD:LEVEL}%{SPACE}\[%{NOTSPACE}\]%{SPACE}\(%{NOTSPACE:THREAD}\)%{SPACE}(?:%{DATA})%{SPACE}\[%{DATA}\]%{SPACE}%{WORD}%{SPACE}%{GREEDYDATA:MSG}
And the parsed result will be something like this:
{
  "MSG": "c.r.s.web.controller.LoginController - Inside initLoginPage() of LoginController",
  "CREATED_ON": "00:06:27,899",
  "LEVEL": "INFO",
  "THREAD": "ajp--0.0.0.0-8009-40"
}
For the web lines you can use the following pattern:
(?:%{TIME:CREATED_ON})(?:%{SPACE})%{WORD:LEVEL}%{SPACE}\[%{NOTSPACE}\]%{SPACE}\(%{NOTSPACE:THREAD}\)%{SPACE}%{DATA}%{WORD:PROTOCOL}%{SPACE}%{WORD:MethodOrStatus}%{SPACE}%{GREEDYDATA:ENDPOINT}
And the result will be:
{
  "CREATED_ON": "00:06:27,783",
  "PROTOCOL": "HTTP",
  "ENDPOINT": "http://api.survey.me/v1/getwidgetdetails?profileName=jeremy-steffens&profileLevel=INDIVIDUAL&companyProfileName=premier-nationwide-lending&hideHistory=true",
  "LEVEL": "INFO",
  "THREAD": "ajp--0.0.0.0-8009-38",
  "MethodOrStatus": "GET"
}
To use multiple patterns in grok just do this:
grok {
  match => ["message", "pattern1", "pattern2"]
}
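With the two patterns above, that would look like this (a sketch; grok tries the patterns in order and stops at the first match, since break_on_match defaults to true, so the stricter API pattern goes first):
grok {
  match => ["message",
    "(?:%{TIME:CREATED_ON})(?:%{SPACE})%{WORD:LEVEL}%{SPACE}\[%{NOTSPACE}\]%{SPACE}\(%{NOTSPACE:THREAD}\)%{SPACE}(?:%{DATA})%{SPACE}\[%{DATA}\]%{SPACE}%{WORD}%{SPACE}%{GREEDYDATA:MSG}",
    "(?:%{TIME:CREATED_ON})(?:%{SPACE})%{WORD:LEVEL}%{SPACE}\[%{NOTSPACE}\]%{SPACE}\(%{NOTSPACE:THREAD}\)%{SPACE}%{DATA}%{WORD:PROTOCOL}%{SPACE}%{WORD:MethodOrStatus}%{SPACE}%{GREEDYDATA:ENDPOINT}"
  ]
}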
Or you can save your patterns to a file and use patterns_dir to point to the directory of the file.
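For example (a sketch; the directory path and the API_LOG/WEB_LOG pattern names are assumptions), each line of a file in that directory defines a named pattern in the form NAME PATTERN, which you can then reference like any built-in pattern:
grok {
  patterns_dir => ["/etc/logstash/patterns"]        # hypothetical pattern directory
  match => ["message", "%{API_LOG}", "%{WEB_LOG}"]  # names defined in the pattern file
}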
If you still want to use a conditional, just check for a string that distinguishes the two kinds of lines in the message, for example:
if "HTTP" in [message] {
grok { your grok for the web messages }
} else {
grok { your grok for the api messages }
}
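Since the question also asks how to store the two kinds of logs separately in Elasticsearch, one common approach (a sketch; the log_type field and index names are assumptions) is to add a marker field inside each branch of the conditional above and reference it in the output:
filter {
  if "HTTP" in [message] {
    mutate { add_field => { "log_type" => "web" } }   # hypothetical marker field
  } else {
    mutate { add_field => { "log_type" => "api" } }
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]                       # assumed host
    index => "%{log_type}-%{+YYYY.MM.dd}"             # per-type daily indices
  }
}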

Related

GROK pattern for optional field

I have a log string like:
2018-08-02 12:02:25.904 [http-nio-8080-exec-1] WARN o.s.w.s.m.s.DefaultHandlerExceptionResolver.handleTypeMismatch - Failed to bind request element
In the above string, [http-nio-8080-exec-1] is an optional field; it is present in some log statements but not in others.
I created a grok pattern with some references from the net:
%{TIMESTAMP_ISO8601:timestamp} (\[%{DATA:thread}\])? %{LOGLEVEL:level}%{SPACE}%{JAVACLASS:class}\.%{DATA:method} - %{GREEDYDATA:loggedString}
It doesn't seem to work if I remove the thread name string.
You need to make the space character following the thread name optional by moving it inside the optional group: (\[%{DATA:thread}\] )?
input:
2018-08-02 12:02:25.904 WARN o.s.w.s.m.s.DefaultHandlerExceptionResolver.handleTypeMismatch - Failed to bind request element
pattern:
%{TIMESTAMP_ISO8601:timestamp} (\[%{DATA:thread}\] )?%{LOGLEVEL:level}%{SPACE}%{JAVACLASS:class}\.%{DATA:method} - %{GREEDYDATA:loggedString}
output:
{
  "loggedString": "Failed to bind request element",
  "method": "handleTypeMismatch",
  "level": "WARN",
  "class": "o.s.w.s.m.s.DefaultHandlerExceptionResolver",
  "timestamp": "2018-08-02 12:02:25.904"
}
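For reference, against the original line with the thread present, the same pattern should additionally capture the thread field, roughly (a sketch of the expected output):
{
  "loggedString": "Failed to bind request element",
  "method": "handleTypeMismatch",
  "level": "WARN",
  "class": "o.s.w.s.m.s.DefaultHandlerExceptionResolver",
  "thread": "http-nio-8080-exec-1",
  "timestamp": "2018-08-02 12:02:25.904"
}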

How to remove filebeat tags like id, hostname, version, grok_failure message

I am new to ELK. My sample log looks like:
2017-01-05T14:28:00 INFO zeppelin IDExtractionService transactionId abcdef1234 operation extractOCRData received request duration 12344 exception error occured
My Filebeat configuration is below:
filebeat.prospectors:
- input_type: log
  paths:
    - /opt/apache-tomcat-7.0.82/logs/*.log
  document_type: apache-access
  fields_under_root: true

output.logstash:
  hosts: ["10.2.3.4:5044"]
And my logstash filter.conf file:
filter {
  grok {
    match => [ "message", "transactionId %{WORD:transaction_id} operation %{WORD:otype} received request duration %{NUMBER:duration} exception %{WORD:error}" ]
  }
}
filter {
  if "beats_input_codec_plain_applied" in [tags] {
    mutate {
      remove_tag => ["beats_input_codec_plain_applied"]
    }
  }
}
In the Kibana dashboard I can see the log output as below:
beat.name: ebb8a5ec413b
beat.hostname: ebb8a5ec413b
host: ebb8a5ec413b
tags:
beat.version: 6.2.2
source: /opt/apache-tomcat-7.0.82/logs/IDExtraction.log
otype: extractOCRData
duration: 12344
transaction_id: abcdef1234
#timestamp: April 9th 2018, 16:20:31.853
offset: 805,655
#version: 1
error: error
message: 2017-01-05T14:28:00 INFO zeppelin IDExtractionService transactionId abcdef1234 operation extractOCRData received request duration 12344 exception error occured
_id: 7X0HqmIBj3MEd9pqhTu9
_type: doc
_index: filebeat-2018.04.09
_score: 6.315
1. How do I remove the Filebeat fields like id, hostname, and version, and the grok_failure message?
2. How do I sort logs by timestamp? Newly generated logs are not appearing at the top of the Kibana dashboard.
3. Are any changes required in my grok filter?
You can remove the Filebeat fields by setting fields_under_root: false in the Filebeat configuration file. You can read about this option in the Filebeat documentation:
If this option is set to true, the custom fields are stored as top-level fields in the output document instead of being grouped under a fields sub-dictionary. If the custom field names conflict with other field names added by Filebeat, the custom fields overwrite the other fields.
You can check whether _grokparsefailure is in the tags with if "_grokparsefailure" in [tags] and remove it with remove_tag => ["_grokparsefailure"].
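Put together, a minimal filter sketch for that check and removal:
filter {
  if "_grokparsefailure" in [tags] {
    mutate {
      remove_tag => ["_grokparsefailure"]   # drop the grok failure marker tag
    }
  }
}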
Your grok filter seems to be alright.
Hope it helps.

grok filter for processing log4j logs pattern in Logstash

I am stuck finding a grok filter for processing the log4j conversion pattern %d{HH:mm:ss.SSS} %-5p [%t][%c] %m%n.
Here are some example log entries:
2018-02-12 12:10:03 INFO classname:25 - Exiting application.
2017-12-31 05:09:06 WARN foo:133 - Redirect Request : login
2015-08-19 08:07:03 INFO DBConfiguration:47 - Initiating DynamoDb Configuration...
2016-02-12 11:06:49 ERROR foo:224 - Error Code : 500
Can anyone help me find the right Logstash grok filter?
Here is a filter for your log4j pattern:
filter {
  mutate {
    gsub => ['message', "\n", " "]
  }
  grok {
    match => { "message" => "(?<date>[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2}) (?:%{LOGLEVEL:loglevel}) +(?:%{WORD:caller_class}):(?:%{NONNEGINT:caller_line}) - (?:%{GREEDYDATA:msg})" }
  }
}
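Applied to the first example line, the extracted fields should look roughly like this (a sketch of the expected output):
{
  "date": "2018-02-12 12:10:03",
  "loglevel": "INFO",
  "caller_class": "classname",
  "caller_line": "25",
  "msg": "Exiting application."
}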
However, this is specific to the above log.

Logstash custom log parsing

I need your help with custom log parsing through Logstash.
Here is the log format that I am trying to parse:
2015-11-01 07:55:18,952 [abc.xyz.com] - /Enter, G, _null, 2702, 2, 2, 2, 2, PageTotal_1449647718950_1449647718952_2_App_e9c00521-eeec-4d47-bf5b-b842ec14a4ff_178.255.153.2___, , , NEW,
And my Logstash conf file looks like below:
input {
  file {
    path => [ "/tmp/access.log" ]
  }
}
filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{GREEDYDATA:message}" }
  }
  date {
    match => ["timestamp","yyyy-MM-dd HH:mm:ss,SSSS"]
  }
}
For some reason, running the Logstash command with this conf file doesn't parse the logs, and I'm not sure what's wrong with the config. Any help would be highly appreciated.
bin/logstash -f conf/access_log.conf
Settings: Default filter workers: 6
Logstash startup completed
I have checked your grok match filter and it works fine in the Grok Debugger.
You don't have to use the date matcher because the grok matcher already correctly matches the TIMESTAMP_ISO8601 timestamp.
I think your problem is with the sincedb file.
Here is the documentation:
sincedb
In a few words, Logstash remembers that a file has already been read and doesn't read it again. It remembers this by recording the file's read position in the sincedb database.
If you would like to test your filter by reading the same file repeatedly, you could try:
input {
  file {
    path => [ "/tmp/access.log" ]
    sincedb_path => "/dev/null"
  }
}
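Note that by default the file input tails files, starting at the end; when replaying an existing file for testing you will usually also want start_position (a sketch under that assumption):
input {
  file {
    path => [ "/tmp/access.log" ]
    start_position => "beginning"   # read the whole file from the start
    sincedb_path => "/dev/null"     # don't remember the read position between runs
  }
}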
Regards

Logstash drop filter for event

In my log file I have entries like the following:
2014-06-25 12:36:18,176 [10] ((null)) INFO [s=(null)] [u=(null)] Hello from Serilog, running as "David"! [Program]
2014-06-25 12:36:18,207 [10] ((null)) WARN [s=(null)] [u=(null)] =======MyOwnLogger====== Hello from log4net, running as David! [MyOwnLogger]
2014-06-25 12:36:18,209 [10] ((null)) ERROR [s=(null)] [u=(null)] =======MyOwnLogger====== Hello from log4net, running as David! [MyOwnLogger]
which are of loglevel INFO, WARN and ERROR respectively.
What I would like to do is to only output to Elasticsearch those entries which are of ERROR level. Here is my Logstash configuration file:
input {
  file {
    path => "Somepath/*.log"
  }
}

# This filter doesn't work
filter {
  if [loglevel] != "error" {
    drop { }
  }
}

output {
  elasticsearch { host => localhost }
  stdout {}
}
Effectively, nothing currently gets sent to Elasticsearch. I know it is related to the filter because, when it's not there, all the entries get sent to Elasticsearch.
Try this grok filter; it works for me with your logs:
filter {
  grok {
    match => ["message","%{TIMESTAMP_ISO8601:logtime} \[%{NUMBER}\] \(\(%{WORD}\)\) %{WORD:loglevel} %{GREEDYDATA:other}"]
  }
  if [loglevel] != "ERROR" {
    drop {}
  }
}
First you need to grok the loglevel; only then can you use that field in an if/else condition to decide whether to drop the event. Your original filter dropped everything because nothing had extracted a [loglevel] field yet, so [loglevel] != "error" was true for every event. Note also that the extracted level is uppercase (ERROR), which is why the condition compares against "ERROR".
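Alternatively (a sketch, keeping the grok filter above in place), you can leave all events in the pipeline and gate only the Elasticsearch output with a conditional:
output {
  if [loglevel] == "ERROR" {
    elasticsearch { host => localhost }   # same output syntax as in the question
  }
  stdout {}
}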
