Logstash: difference between add_field and replace

What is the difference between add_field and replace when configuring Logstash?
Semantically, they might be expected to do different things, but the current manual does not say how the two differ depending on the pre-existence of the relevant fields.
Example 1:
filter {
  mutate {
    add_field => { "foo_%{somefield}" => "Hello world, from %{host}" }
  }
}
The manual does not say what the expected behaviour is if the field already exists. Is the behaviour the same as replace?
Example 2:
filter {
  mutate {
    replace => { "message" => "%{source_host}: My new message" }
  }
}
Once again the manual does not say what the expected behaviour is if the field does not already exist. Is it the same as add_field?

Semantically, they might seem equivalent, but they are different: add_field might not always kick in.
replace is part of the mutate filter, and if you look at the source code of that plugin, you'll see that replace always sets the field, even if it doesn't exist:
def replace(event)
  @replace.each do |field, newvalue|
    event.set(field, event.sprintf(newvalue))
  end
end
add_field, however, is a generic configuration option of all input and filter plugins, and in the documentation of specific filter plugins you'll often see something like this (e.g. for the grok filter):
If this filter is successful, add any arbitrary fields to this event. [...]
This configuration is inherited from LogStash::Filters::Base, which is the superclass of all filter plugins. This means that in certain filter plugins, in order for add_field to actually add the field, the filter must be successful; otherwise no fields are added.
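For instance, a minimal sketch with a grok filter (the pattern and field names here are made up for illustration): add_field only fires when the match succeeds:
grok {
  match => { "message" => "%{IP:client}" }   # hypothetical pattern
  add_field => { "matched" => "true" }       # only added if the grok match succeeds
}
If the message contains no IP, grok fails, the event is tagged _grokparsefailure, and "matched" is never added.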
So in your case, mutate/replace will always set the field even if it doesn't exist, while mutate/add_field will add the specified fields only after all the operations of the mutate filter have run.
So if your mutate filter performs some specific operation like gsub and fails for some reason, no fields will be added as a result.
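To illustrate both behaviours in one block (a sketch; the field names are hypothetical):
mutate {
  replace   => { "status" => "ok" }         # always sets "status", whether it existed or not
  gsub      => [ "message", "\t", " " ]
  add_field => { "processed" => "true" }    # only added if every operation above succeeded
}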

Related

Logstash add field values

I have a Logstash filter set which sets a field Alert_level to an integer based on regex matching the message.
Example:
if [message] =~ /(?i)foo/ {mutate {add_field => { "Alert_level" => "3" }}}
if [message] =~ /(?i)bar/ {mutate {add_field => { "Alert_level" => "2" }}}
These cases are not mutually exclusive and will sometimes result in events with 2 or more values in Alert_level:
message => "foobar"
Alert_level => "2, 3"
I want to add up the values in Alert_level to a total integer, where the above example would result in this:
message => "foobar"
Alert_level => "5"
There is no math in logstash itself, but I like darth_vader's tag idea (if your levels are only hit once each).
You could set a tag for the alert levels, e.g. "alert_3", "alert_4", etc., and then drop into the ruby filter to loop across them, split out the numeric value, and add them together into a new field. (Using a sentinel prefix like "alert_" will prevent you from trying to add a "_grokparsefailure" or other non-alert tag).
There are other examples on SO for looping across fields in ruby.
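A minimal sketch of that approach, assuming tags like "alert_3" were added earlier and using the newer event get/set API:
ruby {
  code => '
    total = 0
    (event.get("tags") || []).each do |tag|
      # the sentinel prefix skips _grokparsefailure and other non-alert tags
      total += tag.sub("alert_", "").to_i if tag.start_with?("alert_")
    end
    event.set("Alert_level", total)
  '
}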
As I understood your question, you need the AND operator within your if to check both conditions:
if "foobar" in [message] and "5" in [Alert_level] {
  # do something
}

How to pull specific data out of a message in LogStash

I am taking log data from a custom application that has a well-defined format, and I am trying to pick out certain pieces of the data using the grok filter, but I am not having any luck. Here is a sample log:
- System.Data.SqlClient.SqlException (0x80131904): Arithmetic overflow error converting IDENTITY to data type int.
Arithmetic overflow occurred.
What I would like to do is extract the SqlException out of the string. Here is the grok that I am using:
grok {
  match => {
    "message" => [
      "(?m)%{DATE:TIMESTAMP_DATE}%{SPACE}%{TIME:TIMESTAMP_TIME}%{SPACE}%{WORD:LOG_LEVEL}%{SPACE}(?<THREAD>[^\s]+)%{SPACE}(?<HOST>[^\s]+)%{SPACE}%{GREEDYDATA:MESSAGE}",
      "(?<EXCEPTION>[.*]+)"
    ]
  }
}
I have tried several different ways, but I guess I am not completely understanding the documentation. What I would expect is that the event would include all of the fields I extract with the first pattern plus the result of the second pattern. In other words:
TIMESTAMP_DATE,TIMESTAMP_TIME,LOG_LEVEL,THREAD,HOST,MESSAGE,EXCEPTION
I am getting the other fields perfectly; it is just the additional matching that I am missing. Any help would be appreciated. Thanks.
If you specify multiple patterns, grok by default only checks the patterns until the first match is encountered. If you want to match against both patterns regardless of whether the first one matched or not, you can change the behaviour like this:
grok {
  break_on_match => false
  match => {
    "message" => [
      "(?m)%{DATE:TIMESTAMP_DATE}%{SPACE}%{TIME:TIMESTAMP_TIME}%{SPACE}%{WORD:LOG_LEVEL}%{SPACE}(?<THREAD>[^\s]+)%{SPACE}(?<HOST>[^\s]+)%{SPACE}%{GREEDYDATA:MESSAGE}",
      "(?<EXCEPTION>[.*]+)"
    ]
  }
}
Check out the docs under: https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html#plugins-filters-grok-break_on_match

How to define grok pattern for pipe delimited log message?

Setting up ELK is very easy until you hit the Logstash filter. I have a log delimited into 10 fields by pipes. Some fields may be blank, but I am sure there will always be 10 fields:
7/5/2015 10:10:18 AM|KDCVISH01|
|ClassNameUnavailable:MethodNameUnavailable|CustomerView|xwz261|ef315792-5c41-4bdf-aa66-73317e82e4d6|52|6182d1a1-7916-4874-995b-bc9a23437dab|<Exception>
afkh akla 487234 &*<Exception>
Q:
1- I am confused how a grok or regex pattern will pick only the field that I am looking for and not a similar match from another field. For example, what is the guarantee that the DATESTAMP pattern picks only the first value and not a timestamp present in the last field (buried in the stack trace)?
2- Is there a way to define positional mapping? For example, the 1st field is dateTime, the 2nd is machine name, the 3rd is class name, and so on. This will make sure I have fields displayed in Kibana whether or not the field value is present.
I know I am a little late, but here is a simple solution which I am using:
replace your | with a space
Option 1:
filter {
  mutate {
    gsub => ["message","\|"," "]
  }
  grok {
    match => ["message","%{DATESTAMP:time} %{WORD:MESSAGE1} %{WORD:EXCEPTION} %{WORD:MESSAGE2}"]
  }
}
Option 2: escaping the |
filter {
  grok {
    match => ["message","%{DATESTAMP:time}\|%{WORD:MESSAGE1}\|%{WORD:EXCEPTION}\|%{WORD:MESSAGE2}"]
  }
}
It is working fine; you can check it at http://grokdebug.herokuapp.com/.
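As an aside (not part of the answer above): since the fields are strictly positional, the csv filter with a custom separator is another way to get the positional mapping asked about in question 2. The column names below are guesses based on the sample:
filter {
  csv {
    separator => "|"
    columns => ["time", "machine", "extra", "method", "view", "user", "guid1", "code", "guid2", "exception"]
  }
}
Because the columns are fixed, blank fields still land in the right column names in Kibana.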

Logstash: Reading multiline data from optional lines

I have a log file which contains lines which begin with a timestamp. An uncertain number of extra lines might follow each such timestamped line:
SOMETIMESTAMP some data
extra line 1 2
extra line 3 4
The extra lines would provide supplementary information for the timestamped line. I want to extract the 1, 2, 3, and 4 and save them as variables. I can parse the extra lines into variables if I know how many of them there are. For example, if I know there are two extra lines, the grok filter below will work. But what should I do if I don't know, in advance, how many extra lines will exist? Is there some way to parse these lines one-by-one, before applying the multiline filter? That might help.
Also, even if I know I will only have 2 extra lines, is the filter below the best way to access them?
filter {
  multiline {
    pattern => "^%{SOMETIMESTAMP}"
    negate => "true"
    what => "previous"
  }
  if "multiline" in [tags] {
    grok {
      match => { "message" => "(?m)^%{SOMETIMESTAMP} %{DATA:firstline}(?<newline>[\r\n]+)%{DATA:secondline}(?<newline>[\r\n]+)%{DATA:thirdline}$" }
    }
  }
  # After this would be grok filters to process the contents of
  # 'firstline', 'secondline', and 'thirdline'. I would then remove
  # these three temporary fields from the final output.
}
(I separated the lines into separate variables since this allows me to do additional pattern matching on the contents of the lines separately, without having to refer to the entire pattern all over again. For example, based on the contents of the first line, I might want to present branching behavior for the other lines.)
Why do you need this?
Are you going to be inserting one single event with all of the values, or are they really separate events that just need to share the same timestamp?
If they all need to appear in the same event, you'll likely need to resort to a ruby filter to separate the extra lines into fields on the event that you can then work on further.
For example:
if "multiline" in [tags] {
  grok {
    match => { "message" => "(?m)^%{SOMETIMESTAMP} %{DATA:firstline}(?<newline>[\r\n]+)" }
  }
  ruby {
    code => '
      event["lines"] = event["message"].scan(/[^\r\n]+[\r\n]*/)
    '
  }
}
If they are really separate events, you could use the memorize plugin for logstash 1.5 and later.
This has changed over versions of ELK: direct event field references (i.e. event['field']) have been disabled in favor of the event get and set methods (e.g. event.get('field')).
filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:logtime} %{LOGLEVEL:level}%{DATA:firstline}" }
  }
  ruby {
    code => "event.set('message', event.get('message').scan(/[^\r\n]+[\r\n]*/))"
  }
}
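If you then want the pieces as individual fields rather than an array, a follow-up sketch (the line_0, line_1, ... field names are made up):
filter {
  ruby {
    code => '
      event.get("message").scan(/[^\r\n]+/).each_with_index do |line, i|
        event.set("line_#{i}", line)   # line_0 is the timestamped line, line_1 the first extra line, ...
      end
    '
  }
}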

Different structure in a few lines in my log file

My log file contains lines with a few different structures, and I cannot grok it. I don't know if we can test by line or by attribute; I'm still a beginner.
If you don't understand me, I can give you some examples:
input:
id=firewall action=bloc type=web
id=firewall fw="ER" type=filter
id=firewall fw="Az" tz="loo" action=bloc
Pattern:
id=%{WORD:id} ...
I thought of making some of the patterns optional by wrapping them in ( )?, but I don't know exactly how to do it.
You can use this site to test it: http://grokdebug.herokuapp.com/
Any help please? What should I do? :(
Logstash supports key-value parsing; take a look at http://logstash.net/docs/1.4.2/filters/kv.
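A minimal kv sketch for the sample input (the options shown are the defaults, which already fit it: pairs split on spaces, keys split from values on =):
filter {
  kv {
    field_split => " "   # split pairs on spaces (the default)
    value_split => "="   # split key from value on = (the default)
  }
}
For the first sample line this yields id => "firewall", action => "bloc", and type => "web".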
Or you could use multiple match values:
grok {
  patterns_dir => "./patterns"
  match => [
    "message", "%{BASE_PATTERN} %{EXTRA_PATTERN}",
    "message", "%{BASE_PATTERN}",
    "message", "%{SOME_OTHER_PATTERN}"
  ]
}
I'm not sure I understood your question well, but I will try to answer. I think the first thing you have to do is parse the different fields from your input. Here is an example of a pattern to parse your first input line:
PATTERN %{NOTSPACE} %{NOTSPACE} %{NOTSPACE} (in $LOGSTASH_HOME/pattern/extra)
Then in your Logstash configuration file:
filter {
  grok {
    patterns_dir => "$LOGSTASH_HOME/pattern"
    match => { "message" => "%{PATTERN}" }
  }
}
This will match your first line as 3 fields ("id=firewall", "action=bloc", "type=web"); you will have to adapt it if you have more than 3 fields.
And the last thing you seem to be looking for is splitting fields in a key-value scheme, so that id=firewall becomes id => "firewall". This can be done with the kv plugin. I have never used it, but I recommend the Logstash docs here.
If I did not understand your question, please be more clear.
