Logstash if field contains value

I'm using Filebeat to forward logs into Logstash.
I have filenames that contain "v2" in them, for example:
C:\logs\Engine\v2.latest.log
I'd like to apply a different grok pattern to these files.
I tried both of the following:
filter {
  if "v2" in [filename] {
    grok {
      .....
      .....
    }
  }
}

or:

filter {
  if [filename] =~ /v2/ {
    grok {
      .....
      .....
    }
  }
}

Well, my issue was that the "Filename" field was being generated AFTER the filter, so my syntax was correct but it simply wasn't catching anything because the field didn't exist yet. However, starting from version 6.7, Filebeat adds a "log.file.path" field, which is the "Filename" field I previously generated.
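With that field in place, the conditional can key off the file path directly. A minimal sketch, assuming Filebeat 6.7+ and the nested field notation [log][file][path] (adjust the field name to whatever your Filebeat version actually produces):

filter {
  # [log][file][path] is the field added by newer Filebeat versions;
  # the exact field layout in your events is an assumption here.
  if [log][file][path] =~ /v2/ {
    grok {
      # ... v2-specific pattern here ...
    }
  }
}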

Related

Logstash - how to split array by comma?

My message looks like this:
[Metric][methodName: someName][methodParams: [ClassName{field1="val1", field2="val2", field3="val3"}, ClassName{field1="val1", field2="val2", field3="val3"}, ClassName{field1="val1", field2="val2", field3="val3"}]]
Is there a way to split this log into smaller ones and filter them separately?
If the first option isn't possible, how can I parse out all elements of the array?
(?<nameOfClass>[A-Za-z]+)\{field1='%{DATA:textfield1}',\sfield2='%{DATA:textfield2}',\sfield3='%{DATA:textfield3}'\}
Since everything after methodParams: looks like JSON, you could use a JSON filter to parse it. Something like:
filter {
  # Parse the JSON out here using grok into a field called myjson
  grok {
    match => {
      "message" => "methodParams: %{GREEDYDATA:myjson}"
    }
  }
  # Then parse that field as JSON
  json {
    source => "myjson"
  }
}
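If you actually need separate events per array element (the first option in the question), one possible approach is to cut the array body out with grok, break it apart with the split filter, and then grok each piece. This is only a sketch under the assumption that elements are separated by "}, " exactly as in the sample message; the field name params and the terminator are mine, not part of the original answer:

filter {
  # Capture just the array body between "methodParams: [" and the closing "]]"
  grok {
    match => { "message" => "methodParams: \[%{GREEDYDATA:params}\]\]$" }
  }
  # One event per "ClassName{...}" entry; "}, " as separator is an assumption
  # based on the sample message.
  split {
    field      => "params"
    terminator => "}, "
  }
  # Pull the individual fields out of each piece (the closing brace may have
  # been consumed by the split, so it is not required here).
  grok {
    match => { "params" => '(?<nameOfClass>[A-Za-z]+)\{field1="%{DATA:textfield1}", field2="%{DATA:textfield2}", field3="%{DATA:textfield3}"' }
  }
}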

grok filtering pattern issue

I'm trying to match the log level of a log file with a grok filter, but I still get a _grokparsefailure. The problem is maybe the space between [ and the log level.
Example of a log line: 2017-04-21 10:12:03,004 [ INFO] Message
My filter:
filter {
  grok {
    match => {
      "log.level" => "\[ %{LOGLEVEL:loglevel}\]"
    }
  }
}
I also tried some other solutions without success:
"\[ *%{LOGLEVEL:loglevel}\]"
"\[%{SPACE}%{LOGLEVEL:loglevel}\]"
Thanks in advance for your help
The issue is with the match option in your filter: this option is a hash that tells the filter which field to look at and which pattern to match it against.
Your regex is fine (you can check with http://grokconstructor.appspot.com/do/match); the issue is with the field name: it should be message.
So in your case, your filter should look like this:
grok {
  match => {
    "message" => "\[ %{LOGLEVEL:loglevel}\]"
  }
}
The point is that the default field is message, and you need to match the whole string:
filter {
  grok {
    match => {
      "message" => "%{TIMESTAMP_ISO8601:logDate} \[ %{LOGLEVEL:loglevel}\]%{GREEDYDATA:messages}"
    }
  }
}
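A quick way to check the pattern locally (a minimal sketch): feed the sample line to a stdin input and inspect the parsed event with the rubydebug codec:

input { stdin { } }

filter {
  grok {
    match => {
      "message" => "%{TIMESTAMP_ISO8601:logDate} \[ %{LOGLEVEL:loglevel}\]%{GREEDYDATA:messages}"
    }
  }
}

output { stdout { codec => rubydebug } }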

Indexing logs into different types (schemas) in Elasticsearch based on matching patterns

For example, here is my log file:
[2016-10-18 12:05:53.228] log example
[2016-10-18 11:55:53.228] 19249060-91df-11e6-be68-753fa0e2c729 logg example
[2016-10-18 11:35:53.228] 19249060-91ff-11e6-be68-753fa0e2c729 loggg example /api/userbasic/userinfo?requestedUserId=19249060-91df-11e6-be68-753fa0e2c729
Here is the grok filter for my log; I have used multiple patterns:
filter {
  grok {
    match => [
      "message","\[%{TIMESTAMP_ISO8601:timestamp1}\] %{WORDS_EX:msg}",
      "message","\[%{TIMESTAMP_ISO8601:timestamp2}\] %{UUID:user_id1} %{WORDS_EX:msg2} %{URIPATHPARAM:path}",
      "message","\[%{TIMESTAMP_ISO8601:timestamp3}\] %{UUID:user_id2} %{WORDS_EX:msg3}"
    ]
  }
}
Now I want to index the logs into Elasticsearch with different types (schemas), like:
logstash/type1,
logstash/type2,
logstash/type3
Any help is appreciated!
First, there is a problem with your filter: the grok patterns are evaluated one by one, and as soon as one pattern matches, the others are not evaluated. So the patterns need to be sorted from the most specific (the one with %{URIPATHPARAM:path}) to the most general (the one with only %{WORDS_EX:msg}), like so:
"message","\[%{TIMESTAMP_ISO8601:timestamp2}\] %{UUID:user_id1} %{WORDS_EX:msg2} %{URIPATHPARAM:path}",
"message","\[%{TIMESTAMP_ISO8601:timestamp3}\] %{UUID:user_id2} %{WORDS_EX:msg3}",
"message","\[%{TIMESTAMP_ISO8601:timestamp1}\] %{WORDS_EX:msg}"
Then you can use the presence or absence of the various fields in conditionals, like so:
if [path] {
  elasticsearch {
    ...
  }
} else if [user_id2] {
  elasticsearch {
    ...
  }
} else {
  elasticsearch {
    ...
  }
}
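For the different "types" part, one way to sketch the full output section follows; the hosts, index names, and type names here are placeholders rather than part of the original answer, and note that document_type is deprecated in newer versions of the Elasticsearch output:

output {
  if [path] {
    elasticsearch {
      hosts         => ["localhost:9200"]         # placeholder host
      index         => "logstash-%{+YYYY.MM.dd}"
      document_type => "type1"                    # hypothetical type name
    }
  } else if [user_id2] {
    elasticsearch {
      hosts         => ["localhost:9200"]
      index         => "logstash-%{+YYYY.MM.dd}"
      document_type => "type2"
    }
  } else {
    elasticsearch {
      hosts         => ["localhost:9200"]
      index         => "logstash-%{+YYYY.MM.dd}"
      document_type => "type3"
    }
  }
}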

Parse a log using Logstash

I am using Logstash to parse a log file. A sample log line is shown below.
2011/08/10 09:51:34.450457,1.048908,tcp,213.200.244.217,47908, ->,147.32.84.59,6881,S_RA,0,0,4,244,124,flow=Background-Established-cmpgw-CVUT
I am using the following filter in my configuration file:
grok {
  match => ["message","%{DATESTAMP:timestamp},%{BASE16FLOAT:value},%{WORD:protocol},%{IP:ip},%{NUMBER:port},%{GREEDYDATA:direction},%{IP:ip2},%{NUMBER:port2},%{WORD:status},%{NUMBER:port3},%{NUMBER:port4},%{NUMBER:port5},%{NUMBER:port6},%{NUMBER:port7},%{WORD:flow}"]
}
It works well for error-free log lines. But when I have a line like the one below, it fails. Note that the second field is missing.
2011/08/10 09:51:34.450457,,tcp,213.200.244.217,47908, ->,147.32.84.59,6881,S_RA,0,0,4,244,124,flow=Background-Established-cmpgw-CVUT
I want to put a default value in my output JSON object if a value is missing. How can I do that?
Use (%{BASE16FLOAT:value})? for the second field to make it optional, i.e. the regex ()?.
Even if the second field is empty, the grok will still work.
So the entire grok pattern looks like this:
%{DATESTAMP:timestamp},(%{BASE16FLOAT:value})?,%{WORD:protocol},%{IP:ip},%{NUMBER:port},%{GREEDYDATA:direction},%{IP:ip2},%{NUMBER:port2},%{WORD:status},%{NUMBER:port3},%{NUMBER:port4},%{NUMBER:port5},%{NUMBER:port6},%{NUMBER:port7},%{WORD:flow}
Use it in your conf file. Now, if the value field is empty, it will simply be omitted from the output.
input {
  stdin {
  }
}

filter {
  grok {
    match => ["message","%{DATESTAMP:timestamp},%{DATA:value},%{WORD:protocol},%{IP:ip},%{NUMBER:port},%{GREEDYDATA:direction},%{IP:ip2},%{NUMBER:port2},%{WORD:status},%{NUMBER:port3},%{NUMBER:port4},%{NUMBER:port5},%{NUMBER:port6},%{NUMBER:port7},%{WORD:flow}"]
  }
}

output {
  stdout {
    codec => rubydebug
  }
}
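The question also asked for a default value when the field is missing; the answers above only make it optional. One possible follow-up, sketched here as an assumption rather than part of the original answers, is to test for the field and fill it in with mutate:

filter {
  # ... grok from above ...
  # If "value" did not match anything, set a default of "0.0"
  # (the default itself is just an example).
  if ![value] {
    mutate {
      add_field => { "value" => "0.0" }
    }
  }
}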

Extracting fields from the input file path in Logstash?

I want to read my log files from various directories, like Server1, Server2, ...
Server1 has subdirectories such as cron, auth, ...; inside each of these subdirectories is the respective log file.
So I am contemplating reading the files like this:
input {
  file {
    # path/to/folders/server1/cronLog/cron_log
    path => "path/to/folders/**/*_log"
  }
}
However, I am having difficulty filtering them, i.e. knowing for which server (Server1) and log type (cron) I must apply the grok pattern.
E.g. I thought of doing something like this:
if [path] =~ "auth"{
grok{
match => ["message", ***patteren****]
}
}else if [path] =~ "cron"{
grok{
match => ["message", ***pattern***]
}
Above, cron refers to the log file (not the cronLog directory).
But like this, I also want to filter on the server name, as every server will have cron, auth, etc. logs.
How do I filter on both?
Is there a way to grab the directory names from the path in the input? Like from here:
path => "path/to/folders/**/*_log"
How should I proceed? Any help is appreciated.
It's very straightforward, and almost exactly like in my other answer: you use grok on the path to extract the pieces you care about, and then you can do whatever you want from there.
filter {
  grok {
    match => { "path" => "path/to/folders/(?<server>[^/]+)/(?<logtype>[^/]+)/(?<logname>.*)" }
  }
  if [server] == "blah" && [logtype] =~ "cron" {
    grok {
      match => { "message" => "** pattern **" }
    }
  }
}
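Once server and logtype have been extracted, they can be reused anywhere field references work, for example in the output index name. A sketch with a hypothetical naming scheme and a placeholder host:

output {
  elasticsearch {
    hosts => ["localhost:9200"]                      # placeholder host
    # sprintf field references build one index per server/log type
    index => "%{server}-%{logtype}-%{+YYYY.MM.dd}"   # hypothetical scheme
  }
}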
