Logstash grok filter - name fields dynamically

I've got log lines in the following format and want to extract fields:
[field1: content1] [field2: content2] [field3: content3] ...
I neither know the field names, nor the number of fields.
I tried it with backreferences and the sprintf format but got no results:
match => [ "message", "(?:\[(\w+): %{DATA:\k<-1>}\])+" ] # not working
match => [ "message", "(?:\[%{WORD:fieldname}: %{DATA:%{fieldname}}\])+" ] # not working
This seems to work for only one field but not more:
match => [ "message", "(?:\[%{WORD:field}: %{DATA:content}\] ?)+" ]
add_field => { "%{field}" => "%{content}" }
The kv filter is also not appropriate because the content of the fields may contain whitespace.
Is there any plugin / strategy to fix this problem?

The Logstash ruby filter plugin can help you. :)
Here is the configuration:
input {
  stdin {}
}
filter {
  ruby {
    code => "
      fieldArray = event['message'].split('] [')
      for field in fieldArray
        field = field.delete '['
        field = field.delete ']'
        result = field.split(': ')
        event[result[0]] = result[1]
      end
    "
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
With your logs:
[field1: content1] [field2: content2] [field3: content3]
This is the output:
{
       "message" => "[field1: content1] [field2: content2] [field3: content3]",
      "@version" => "1",
    "@timestamp" => "2014-07-07T08:49:28.543Z",
          "host" => "abc",
        "field1" => "content1",
        "field2" => "content2",
        "field3" => "content3"
}
I have tried it with 4 fields, and it also works.
Please note that the event in the ruby code is the Logstash event. You can use it to access all of your event fields, such as message, @timestamp, etc.
Enjoy it!!!
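Note that this example uses the old event accessor syntax (event['field']), which was removed in Logstash 5. On Logstash 5 and later the same filter would look roughly like this (a sketch using the event.get/event.set API):
ruby {
  code => "
    fieldArray = event.get('message').split('] [')
    for field in fieldArray
      field = field.delete '['
      field = field.delete ']'
      result = field.split(': ')
      event.set(result[0], result[1])
    end
  "
}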

I found another way using regex:
ruby {
  code => "
    fields = event['message'].scan(/(?<=\[)\w+: .*?(?=\](?: |$))/)
    for field in fields
      field = field.split(': ')
      event[field[0]] = field[1]
    end
  "
}
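For reference, this is what the scan call yields when run on the sample line in a plain Ruby session (irb), so it is clear what the loop iterates over:
"[field1: content1] [field2: content2] [field3: content3]".scan(/(?<=\[)\w+: .*?(?=\](?: |$))/)
# => ["field1: content1", "field2: content2", "field3: content3"]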

I know that this is an old post, but I just came across it today, so I thought I'd offer an alternate method. Please note that, as a rule, I would almost always use a ruby filter, as suggested in either of the two previous answers. However, I thought I would offer this as an alternative.
If there is a fixed number of fields or a maximum number of fields (i.e., there may be fewer than three fields, but there will never be more than three fields), this can be done with a combination of grok and mutate filters, as well.
# Test message is: `[fieldname: value]`
# Store values in [@metadata] so we don't have to explicitly delete them.
grok {
  match => {
    "[message]" => [
      "\[%{DATA:[@metadata][_field_name_01]}:\s+%{DATA:[@metadata][_field_value_01]}\]( \[%{DATA:[@metadata][_field_name_02]}:\s+%{DATA:[@metadata][_field_value_02]}\])?( \[%{DATA:[@metadata][_field_name_03]}:\s+%{DATA:[@metadata][_field_value_03]}\])?"
    ]
  }
}
# Rename the fieldname, value combinations. I.e., if the following data is in the message:
#
# [foo: bar]
#
# It will be saved in the elasticsearch output as:
#
# {"foo":"bar"}
#
mutate {
  rename => {
    "[@metadata][_field_value_01]" => "[%{[@metadata][_field_name_01]}]"
    "[@metadata][_field_value_02]" => "[%{[@metadata][_field_name_02]}]"
    "[@metadata][_field_value_03]" => "[%{[@metadata][_field_name_03]}]"
  }
  tag_on_failure => []
}
For those who may not be as familiar with regex, the captures in ()? are optional regex matches, meaning that if there is no match, the expression won't fail. The tag_on_failure => [] option in the mutate filter ensures that no error will be appended to tags if one of the renames fails because there was no data to capture and, as a result, there is no field to rename.

Related

Using grok to create fields in sample log

FMT="1358 15:41:07W19/03/21 (A) Interlocking Link 116 Restored" STY="A" AMSEQ="LINKFAIL" AMSST="RTN" ALTID="1358" TS="20210319154107" CP="LOC A" CP="LOC X" MP="104.95" MP="104.95" EQ="MDIPRIMARYOFF" POS="TC-NORTH"
The log format is as above. I would like to capture the following fields using grok
Time - 15:41:07
Date - 19/03/21
Message - Interlocking Link 116 Restored
Location - Loc X
Can anyone help with creating a grok pattern that I can use in my Logstash filter to parse my logs?
I would not use grok to start with. This is key/value data, so a kv filter will get you started, then you can grok the parts of the FMT field out.
kv { include_keys => [ "FMT", "CP" ] target => "[@metadata]" }
mutate { add_field => { "Location" => "%{[@metadata][CP][1]}" } }
grok { match => { "[@metadata][FMT]" => "%{NUMBER} %{TIME:Time}W%{DATE_EU:Date} \(%{WORD}\) %{GREEDYDATA:Message}" } }
will result in
"Message" => "Interlocking Link 116 Restored",
"Date" => "19/03/21",
"Time" => "15:41:07",
"Location" => "LOC X",
Although having multiple CP fields feels fragile.
The include_keys option on the kv filter tells the filter to ignore other keys. Using target to put the fields under [@metadata] means they are available to other filters but are not sent to the output. If you add the remove_field option on the kv filter, it is only processed if the filter is able to parse the message, so if your kv data is invalid you will still have a [message] field on the event that you can look at.
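For testing, a minimal self-contained pipeline built around those three filters might look like this (the stdin/stdout plugins and the rubydebug codec are only there to try it out and are not part of the original answer). Because CP appears twice in the line, the kv filter collects its values into an array, which is why %{[@metadata][CP][1]} picks out the second occurrence:
input { stdin {} }
filter {
  kv { include_keys => [ "FMT", "CP" ] target => "[@metadata]" }
  mutate { add_field => { "Location" => "%{[@metadata][CP][1]}" } }
  grok { match => { "[@metadata][FMT]" => "%{NUMBER} %{TIME:Time}W%{DATE_EU:Date} \(%{WORD}\) %{GREEDYDATA:Message}" } }
}
output { stdout { codec => rubydebug } }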

logstash converting ruby code to logstash filters

I was wondering what would be the best way to implement the following task in Logstash.
I have the following field that contains multiple paths divided by ':':
my_field : "/var/log/my_custom_file.txt:/var/log/otherfile.log/:/root/aaa.jar"
I want to add a new field called "first_file" that will contain only the file name (without suffix) of the first path:
first_file : my_custom_file
I implemented it with the following ruby code:
code => 'event.set("first_file",event.get("[my_field]").split(":")[0].split("/")[-1].split(".")[0])'
How can I use logstash filters (add_field,split,grok) to do the same task ? I feel like using ruby code should be my last option.
You could do it using just grok, but I think it would be clearer to use mutate to pull out the first value
mutate { split => { "my_field" => ":" } }
mutate { replace => { "my_field" => "%{[my_field][0]}" } }
grok { match => { "my_field" => "/(?<my_field>[^/]+)\.%{WORD}$" } overwrite => [ "my_field" ] }
rather than
grok { match => { "my_field" => "/(?<my_field>[^/]+)\.%{WORD}:" } overwrite => [ "my_field" ] }
The (?<my_field>[^/]+) is a grok custom pattern (documented in the grok filter documentation) which creates a field called [my_field] from a sequence of one or more (+) characters which are not /.
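Putting the three filters together (a sketch; the sample value is taken from the question, and you could of course write the result to a new first_file field instead of overwriting my_field):
filter {
  mutate { split => { "my_field" => ":" } }
  mutate { replace => { "my_field" => "%{[my_field][0]}" } }
  grok { match => { "my_field" => "/(?<my_field>[^/]+)\.%{WORD}$" } overwrite => [ "my_field" ] }
}
# With my_field = "/var/log/my_custom_file.txt:/var/log/otherfile.log/:/root/aaa.jar"
# this leaves my_field = "my_custom_file"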
Yes, with a basic grok you could match every field in the value.
This kind of filter should work (put it in your Logstash configuration file); it extracts the "basename" of each file (the filename without extension or path):
filter {
  grok {
    match => { "my_field" => "%{GREEDYDATA}/%{WORD:filename}.%{WORD}:%{GREEDYDATA}/%{WORD:filename2}.%{WORD}:%{GREEDYDATA}/%{WORD:filename3}.%{WORD}" }
  }
}
You could be more strict in grok by using PATH in place of GREEDYDATA; I'll let you determine the approach that works best in your context.
You could debug the grok pattern with the online tool grokdebug.

how to write the logstash output config to get ABC,AD_EF,123?

my log format is:
XXX: 03-20 17:52:28: XXX. * 0 XXX [XXX] [X XX: X]:XXX\tABC:AD_EF:123\t0\tXXXXXXXXXXXXXXXX\tXXXXXXXXXXXXXXXXXXX
How do I write the Logstash output config to get ABC, AD_EF, 123?
output example:
good,ABC,DEF,123
output {
  file {
    path => "/xxx/xxx/xxx/output.txt"
    codec => plain {
      format => "good,ABC,DEF,123" # how to write this regular expression????
    }
    flush_interval => 0
  }
}
Your log output seems to have embedded tabs in it, and those tabs bracket your data. This is good, as it means the csv filter can pull that out for you.
filter {
  csv {
    separator => " "
    columns => [ 'garbage1', 'good', 'garbage2', 'garbage3', 'garbage4' ]
    source => "message"
  }
}
Note, that is the actual tab character in there, which is hard to represent here.
You would then output the content of the good field to your file.
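A sketch of that output, using the sprintf syntax of the plain codec (note that the good column still contains the colon-separated string "ABC:AD_EF:123", so a mutate gsub would be needed if you want commas instead of colons):
output {
  file {
    path => "/xxx/xxx/xxx/output.txt"
    codec => plain {
      format => "%{good}"
    }
    flush_interval => 0
  }
}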
Thanks for all the help, but maybe I made a mistake.
In the end, I found the answer to my question:
filter {
  grok {
    match => {
      "message" => "XXX\t(?<field1>\w+?):(?<field2>\w+?):(?<field3>\d+?)\t"
    }
  }
}
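With field1, field2, and field3 extracted by that grok, the file output from the question could then use sprintf references to produce a line like good,ABC,AD_EF,123 (a sketch):
output {
  file {
    path => "/xxx/xxx/xxx/output.txt"
    codec => plain {
      format => "good,%{field1},%{field2},%{field3}"
    }
    flush_interval => 0
  }
}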

Hard to stash a log file with different occurrence of order for a field using Logstash

I am trying to stash a log file to elasticsearch using Logstash. I am facing a problem while doing this.
If the log file contains log lines that all have the same form, like the ones below,
[12/Sep/2016:18:23:07] VendorID=5037 Code=C AcctID=5317605039838520
[12/Sep/2016:18:23:22] VendorID=9108 Code=A AcctID=2194850084423218
[12/Sep/2016:18:23:49] VendorID=1285 Code=F AcctID=8560077531775179
[12/Sep/2016:18:23:59] VendorID=1153 Code=D AcctID=4433276107716482
where the order of occurrence of the date, VendorID, Code, and AcctID fields never changes and no new element is added, then the filter (given below) in the config file works well.
\[%{MONTHDAY}/%{MONTH}/%{YEAR}:%{TIME}\] VendorID=%{INT:VendorID} Code=%{WORD:Code} AcctID=%{INT:AcctID}
But suppose the order changes, as in the example given below, or a new element is added to one of the log lines; then a _grokparsefailure occurs.
[12/Sep/2016:18:23:07] VendorID=5037 Code=C AcctID=5317605039838520
[12/Sep/2016:18:23:22] VendorID=9108 Code=A AcctID=2194850084423218
[12/Sep/2016:18:23:49] VendorID=1285 Code=F AcctID=8560077531775179
[12/Sep/2016:18:23:59] VendorID=1153 Code=D AcctID=4433276107716482
[12/Sep/2016:18:24:50] AcctID=3168124750473449 VendorID=1065 Code=L
[12/Sep/2016:18:24:50] AcctID=3168124750473449 VendorID=1065 Code=L
[12/Sep/2016:18:24:50] AcctID=3168124750473449 VendorID=1065 Code=L
Here in the example, the last three log lines differ from the first four in the order in which the fields occur. Because of this, the grok pattern, which was written for the first four lines, cannot parse the last three.
How should I handle this scenario when I come across it? Please help me solve this problem, and provide a link to any document that explains this in detail with examples.
Thank you very much in advance.
As correctly pointed out by baudsp, this can be achieved by multiple grok filters. The KV filter seems like a nicer option, but as for grok, this is one solution:
input {
  stdin {}
}
filter {
  grok {
    match => {
      "message" => ".*test1=%{INT:test1}.*"
    }
  }
  grok {
    match => {
      "message" => ".*test2=%{INT:test2}.*"
    }
  }
}
output {
  stdout { codec => rubydebug }
}
By applying two different grok filters, we can disregard the order in which the key/value pairs arrive. The patterns do not care about what comes before or after their respective test string; each one simply matches its own pattern wherever it occurs.
So, for these 2 strings:
test1=12 test2=23
test2=23 test1=12
You will get the correct output. Test:
artur@pandaadb:~/dev/logstash$ ./logstash-2.3.2/bin/logstash -f conf_grok_ordering/
Settings: Default pipeline workers: 8
Pipeline main started
test1=12 test2=23
{
       "message" => "test1=12 test2=23",
      "@version" => "1",
    "@timestamp" => "2016-12-21T16:48:24.175Z",
          "host" => "pandaadb",
         "test1" => "12",
         "test2" => "23"
}
test2=23 test1=12
{
       "message" => "test2=23 test1=12",
      "@version" => "1",
    "@timestamp" => "2016-12-21T16:48:29.567Z",
          "host" => "pandaadb",
         "test1" => "12",
         "test2" => "23"
}
Hope that helps
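Since the answer mentions that the kv filter can be a nicer option, a sketch of that approach for the VendorID/Code/AcctID lines from the question could look like the following (the timestamp part of the grok pattern is taken from the question itself, and kvpairs is just an illustrative field name). With the kv defaults of splitting fields on whitespace and values on =, the order of the key/value pairs no longer matters:
filter {
  grok { match => { "message" => "\[%{MONTHDAY}/%{MONTH}/%{YEAR}:%{TIME}\] %{GREEDYDATA:kvpairs}" } }
  kv { source => "kvpairs" }
}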

Can I use gsub to recursively replace all fieldnames with another field?

After changing my mapping in ElasticSearch to more definitively type the data I am inputting into the system, I have unwittingly made my new variables a nested object. Upon thinking about it more, I actually like the idea of those fields being nested objects because that way I can explicitly know if that src_port statistic is from netflow or from the ASA logs, as an example.
I'd like to use a mutate (gsub, perhaps?) to cause all of my fieldnames for a given type to be renamed to newtype.fieldname. I see that there is gsub which uses a regexp, and rename which takes the literal field name, but I would like to prevent having 30 distinct gsub/rename statements when I will be replacing all of the fields in that type with the "newtype" prefix.
Is there a way to do this?
Here is an example for your reference.
input {
  stdin {
    type => 'netflow'
  }
}
filter {
  mutate {
    add_field => { "%{type}.message" => "%{message}" }
    remove_field => ["message"]
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
In this example I have changed the message field name to type.message, then deleted the original message field. I think you can use this sample to do what you want.
Hope this can help you.
I have updated my answer!
Use the ruby plugin to do what you want!
Please note that Elasticsearch uses the @timestamp field for indexing, so I recommend not changing that field's name.
input {
  stdin {
    type => 'netflow'
  }
}
filter {
  ruby {
    code => "
      data = event.clone.to_hash;
      type = event['type']
      data.each do |k,v|
        if k != '@timestamp'
          newFieldName = type +'.'+ k
          event[newFieldName] = v
          event.remove(k)
        end
      end
    "
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
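As with the earlier ruby example, this uses the pre-5.x event syntax. On Logstash 5 and later, a roughly equivalent ruby filter using the event API might look like this (a sketch):
ruby {
  code => "
    type = event.get('type')
    # to_hash gives a Ruby hash snapshot of the event data,
    # so it is safe to iterate over it while modifying the event
    event.to_hash.each do |k, v|
      if k != '@timestamp'
        event.set(type + '.' + k, v)
        event.remove(k)
      end
    end
  "
}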
