logstash if statement within grok statement

I'm creating a logstash grok filter to pull events out of a backup server, and I want to be able to test a field for a pattern, and if it matches the pattern, further process that field and pull out additional information.
To that end I'm embedding an if statement within the grok statement itself. This is causing the configtest to fail with "Error: Expected one of #, =>" right after the if.
This is the filter statement:
filter {
  grok {
    patterns_dir => "./patterns"
    # NetWorker logfiles have some unusual fields that include undocumented engineering codes and whatnot.
    # Time is in 12h format (ugh) so custom patterns need to be used.
    match => [ "message", "%{NUMBER:engcode1} %{DATESTAMP_12H:timestamp} %{NUMBER:engcode2} %{NUMBER:engcode3} %{NUMBER:engcode4} %{NUMBER:ppid} %{NUMBER:pid} %{NUMBER:engcode5} %{WORD:processhost} %{WORD:processname} %{GREEDYDATA:daemon_message}" ]
    # attempt to find completed savesets and pull that info from the daemon_message field
    if [daemon_message] =~ /done\ saving\ to\ pool/ {
      grok {
        match => [ "daemon_message", "%{WORD:savehost}\:%{WORD:saveset} done saving to pool \'%{WORD:pool}\' \(%{WORD:volume}\) %{WORD:saveset_size}" ]
      }
    }
  }
  date {
    # This is required to set the time from the logline on the timestamp, rather than have Logstash create its own.
    # Note the use of the trailing 'a' to denote AM or PM.
    match => ["timestamp", "MM/dd/yyyy HH:mm:ss a"]
  }
}
This block fails with the following:
$ /opt/logstash/bin/logstash -f ./networker_daemonlog.conf --configtest
Error: Expected one of #, => at line 12, column 12 (byte 929) after # Basic dumb simple networker daemon log grok filter for the NetWorker daemon.log
# no smarts to this and not really pulling any useful info from the files (yet)
filter {
grok {
... lines deleted ...
# attempt to find completed savesets and pull that info from the daemon_message field
if
I'm new to logstash, and I realise that using a conditional within the grok statement may not be possible, but I'd prefer doing conditional processing this way rather than adding extra match lines, as this would leave the daemon_message field intact for other uses while still pulling out the data I want.
ETA: I should also point out that totally removing the if statement allows the configtest to pass and the filter to parse logs.
Thanks in advance...

Conditionals go outside the filters, so something like:
if [field] == "value" {
  grok {
    ...
  }
}
would be correct. In your case, do the first grok, then test the result to decide whether to run the second, i.e.:
grok {
  match => [ "message", "%{NUMBER:engcode1} %{DATESTAMP_12H:timestamp} %{NUMBER:engcode2} %{NUMBER:engcode3} %{NUMBER:engcode4} %{NUMBER:ppid} %{NUMBER:pid} %{NUMBER:engcode5} %{WORD:processhost} %{WORD:processname} %{GREEDYDATA:daemon_message}" ]
}
if [daemon_message] =~ /done\ saving\ to\ pool/ {
  grok {
    match => [ "daemon_message", "%{WORD:savehost}\:%{WORD:saveset} done saving to pool \'%{WORD:pool}\' \(%{WORD:volume}\) %{WORD:saveset_size}" ]
  }
}
This really runs two regexps for each record that matches. Since grok will only create fields when the regexp matches, you could instead do this:
grok {
  match => [ "message", "%{NUMBER:engcode1} %{DATESTAMP_12H:timestamp} %{NUMBER:engcode2} %{NUMBER:engcode3} %{NUMBER:engcode4} %{NUMBER:ppid} %{NUMBER:pid} %{NUMBER:engcode5} %{WORD:processhost} %{WORD:processname} %{GREEDYDATA:daemon_message}" ]
}
grok {
  match => [ "daemon_message", "%{WORD:savehost}\:%{WORD:saveset} done saving to pool \'%{WORD:pool}\' \(%{WORD:volume}\) %{WORD:saveset_size}" ]
}
You'd have to measure the performance across your actual log files: this version runs fewer regexps, but the second one is more complicated.
If you really want to go nuts, you can do all of this in one grok{}, using the break_on_match feature.
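A minimal, untested sketch of that single-grok approach, reusing the patterns above (the nested (?<daemon_message>...) named capture is my own suggestion for keeping the field intact on the pool lines):
grok {
  patterns_dir => "./patterns"
  # break_on_match defaults to true: patterns are tried in order and grok
  # stops at the first one that matches, so the more specific pattern goes first.
  match => [
    "message", "%{NUMBER:engcode1} %{DATESTAMP_12H:timestamp} %{NUMBER:engcode2} %{NUMBER:engcode3} %{NUMBER:engcode4} %{NUMBER:ppid} %{NUMBER:pid} %{NUMBER:engcode5} %{WORD:processhost} %{WORD:processname} (?<daemon_message>%{WORD:savehost}\:%{WORD:saveset} done saving to pool \'%{WORD:pool}\' \(%{WORD:volume}\) %{WORD:saveset_size})",
    "message", "%{NUMBER:engcode1} %{DATESTAMP_12H:timestamp} %{NUMBER:engcode2} %{NUMBER:engcode3} %{NUMBER:engcode4} %{NUMBER:ppid} %{NUMBER:pid} %{NUMBER:engcode5} %{WORD:processhost} %{WORD:processname} %{GREEDYDATA:daemon_message}"
  ]
}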

Related

Getting optional field out of message in logstash grok filter

I'm trying to extract the number of ms from this log line:
20190726160424 [INFO]
[concurrent/forasdfMES-managedThreadFactory-Thread-10] -
Metricsdceptor: ## End of call: Historirrtory.getHistrrOrder took 2979
ms
The problem is that not all log lines contain that string. Now I want to extract it, optionally, into a duration field. I tried this, but nothing happened: no error, but also no result.
grok {
  match => ["message", "(took (?<duration>[\d]+) ms)?"]
}
What am I doing wrong?
Thanks guys!
A solution would be to apply the grok filter only to the log lines ending with ms. This can be done using conditionals in your configuration:
if [message] =~ /took \d+ ms$/ {
  grok {
    match => ["message", "took %{NUMBER:duration} ms"]
  }
}
I cannot explain why, but it works if you anchor it:
grok { match => { "message" => "(took (?<duration>\d+) ms)?$" } }
(A plausible reason: without an anchor, the fully optional group can succeed by matching the empty string at the very start of the line, capturing nothing; the trailing $ forces the engine to keep searching until it finds a real match at the end of the line.)

Hard to stash a log file with different occurrence of order for a field using Logstash

I am trying to stash a log file into elasticsearch using Logstash, and I am facing a problem while doing this.
If the log file has the same kind of log lines as the ones below,
[12/Sep/2016:18:23:07] VendorID=5037 Code=C AcctID=5317605039838520
[12/Sep/2016:18:23:22] VendorID=9108 Code=A AcctID=2194850084423218
[12/Sep/2016:18:23:49] VendorID=1285 Code=F AcctID=8560077531775179
[12/Sep/2016:18:23:59] VendorID=1153 Code=D AcctID=4433276107716482
where the order of occurrence of the date, VendorID, Code and AcctID fields does not change and no new element is added, then the filter (given below) in the config file works well.
\[%{MONTHDAY}/%{MONTH}/%{YEAR}:%{TIME}\] VendorID=%{INT:VendorID} Code=%{WORD:Code} AcctID=%{INT:AcctID}
Suppose the order changes as in the example given below, or a new element is added to one of the log lines; then a grokparsefailure occurs.
[12/Sep/2016:18:23:07] VendorID=5037 Code=C AcctID=5317605039838520
[12/Sep/2016:18:23:22] VendorID=9108 Code=A AcctID=2194850084423218
[12/Sep/2016:18:23:49] VendorID=1285 Code=F AcctID=8560077531775179
[12/Sep/2016:18:23:59] VendorID=1153 Code=D AcctID=4433276107716482
[12/Sep/2016:18:24:50] AcctID=3168124750473449 VendorID=1065 Code=L
[12/Sep/2016:18:24:50] AcctID=3168124750473449 VendorID=1065 Code=L
[12/Sep/2016:18:24:50] AcctID=3168124750473449 VendorID=1065 Code=L
Here in the example, the last three log lines differ from the first four in the order of occurrence of the fields. Because of this, the grok pattern cannot parse those three lines, as it was written for the first four.
How should I handle this scenario when I come across this case? Please help me solve this problem, and provide a link to any document with a detailed explanation and examples.
Thank you very much in advance.
As correctly pointed out by baudsp, this can be achieved with multiple grok filters. The kv filter seems like a nicer option (a sketch of it follows after the test output below), but as for grok, this is one solution:
input {
  stdin {}
}
filter {
  grok {
    match => {
      "message" => ".*test1=%{INT:test1}.*"
    }
  }
  grok {
    match => {
      "message" => ".*test2=%{INT:test2}.*"
    }
  }
}
output {
  stdout { codec => rubydebug }
}
By having 2 different grok filters applied, we can disregard the order of the incoming logs. The patterns specified basically do not care about what comes before or after the test1/test2 strings; each one just matches its respective pattern standalone.
So, for these 2 strings:
test1=12 test2=23
test2=23 test1=12
You will get the correct output. Test:
artur@pandaadb:~/dev/logstash$ ./logstash-2.3.2/bin/logstash -f conf_grok_ordering/
Settings: Default pipeline workers: 8
Pipeline main started
test1=12 test2=23
{
  "message" => "test1=12 test2=23",
  "@version" => "1",
  "@timestamp" => "2016-12-21T16:48:24.175Z",
  "host" => "pandaadb",
  "test1" => "12",
  "test2" => "23"
}
test2=23 test1=12
{
  "message" => "test2=23 test1=12",
  "@version" => "1",
  "@timestamp" => "2016-12-21T16:48:29.567Z",
  "host" => "pandaadb",
  "test1" => "12",
  "test2" => "23"
}
Hope that helps
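For completeness, a minimal untested sketch of the kv approach mentioned above, written against the question's log lines (the kvpairs field name is just an illustration):
filter {
  grok {
    # strip the leading timestamp, leaving the key=value pairs in one field
    match => { "message" => "\[%{MONTHDAY}/%{MONTH}/%{YEAR}:%{TIME:time}\] %{GREEDYDATA:kvpairs}" }
  }
  kv {
    # kv splits on whitespace and '=' by default, so the field order stops mattering
    source => "kvpairs"
    remove_field => ["kvpairs"]
  }
}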

Prevent _grokparsefailure if one of the multipe groks match

I have this grok:
grok {
  patterns_dir => "/etc/logstash/patterns/"
  break_on_match => false
  keep_empty_captures => true
  match => [
    "message", "(%{EXIM_DATE:exim_date} )(%{EXIM_PID:exim_pid} )(%{EXIM_MSGID:exim_msg_id} )(%{EXIM_FLAGS:exim_flags} )(%{GREEDYDATA})",
    "message", "(%{EXIM_MSGID} )(<= )(%{NOTSPACE:env_sender} )(%{EXIM_REMOTE_HOST} )?(%{EXIM_INTERFACE} )?(%{EXIM_PROTOCOL} )?(X=%{NOTSPACE:tls_info} )?(%{EXIM_MSG_SIZE} )?(%{EXIM_HEADER_ID} )?(%{EXIM_SUBJECT})",
    "message", "(%{EXIM_MSGID} )([=-]> )(%{NOTSPACE:env_rcpt} )(<%{NOTSPACE:env_rcpt_outer}> )?(R=%{NOTSPACE:exim_router} )(T=%{NOTSPACE:exim_transport} )(%{EXIM_REMOTE_HOST} )(X=%{NOTSPACE:tls_info} )?(QT=%{EXIM_QT:exim_qt})",
    "message", "(%{EXIM_DATE:exim_date} )(%{EXIM_PID:exim_pid} )(%{EXIM_MSGID:exim_msg_id} )(Completed )(QT=%{EXIM_QT:exim_qt})",
    "message", "(%{EXIM_DATE:exim_date} )(%{EXIM_PID:exim_pid} )(%{EXIM_MSGID:exim_msg_id} )?(%{EXIM_REMOTE_HOST} )?(%{EXIM_INTERFACE} )?(F=<%{NOTSPACE:env_sender}> )?(.+(rejected after DATA|rejected \(but fed to sa-learn\)|rejected [A-Z]+ (or [A-Z]+ %{NOTSPACE}?|<%{NOTSPACE:env_rcpt}>)?): (?<exim_rej_reason>.+))"
  ]
}
If I test the grok patterns individually, everything works as expected, but in production with multiple matches they do not. The result is OK; I get everything parsed, but I also get a _grokparsefailure tag every time, even when one of the 5 patterns matches. How do I prevent this?
Tag removal is not what I want, because if there is no match the tag should still be added so I can drop the message.
The reason you get a failure is that you have set break_on_match => false, which makes grok test every entry in your match. Entries that don't match then set the _grokparsefailure tag.
From the looks of it, your patterns are all mutually exclusive, so you wouldn't need to set break_on_match at all and would still retain the functionality.
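A minimal sketch of that change, assuming nothing else relies on break_on_match (the remaining patterns are elided here for brevity):
grok {
  patterns_dir => "/etc/logstash/patterns/"
  keep_empty_captures => true
  # break_on_match defaults to true: grok stops at the first pattern that
  # matches, so _grokparsefailure is only added when none of them match
  match => [
    "message", "(%{EXIM_DATE:exim_date} )(%{EXIM_PID:exim_pid} )(%{EXIM_MSGID:exim_msg_id} )(%{EXIM_FLAGS:exim_flags} )(%{GREEDYDATA})"
    # ... the remaining four patterns, unchanged ...
  ]
}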

How to modify #timestamp with an entry in logs using logstash

I have some logs which have only the time in their entries:
1. 17:20:45.331|ERR|....
2. 17:20:54.715|SYS|.....Logging started for [....] (Date=[07/28/2014], ...
3. 17:20:54.716|SYS....
and so on
I have the date in only one line of the logs. Based on that, I want to create a timestamp from the logging date in the logs plus the time in each entry.
I am able to get the time in each entry, and I can get log_message => " Logging started for [....] (Date=[07/28/2014], ..." as one entry.
Is it possible to get the date from this entry and modify every other entry's timestamp?
How can I add the time and the date together and modify the timestamp?
Any help will be appreciated, as I am new to logstash.
My filter in the logstash conf:
filter {
  grok {
    match => [ "message", "%{TIME:time}\|%{WORD:Message_type}\|%{GREEDYDATA:Component}\|%{NUMBER:line_number}\| %{GREEDYDATA:log_message}" ]
  }
  date {
    # need to modify this to use the date + %{time}
    match => ["timestamp", "dd/MMM/yyyy:HH:mm:ss Z"]
  }
}
The time field has milliseconds as well.
Your options are:
Change how things are logged to get the date included
Write something to fix the logs before they are picked up by logstash (i.e. something that looks for the entry and modifies the log)
Use the memorize plugin that I wrote (and submitted a pull request for, to try and get it into a future version).
The plugin is detailed in this answer. The caveat with this solution is that if the plugin misses the line that has the date, you'll have issues with the remainder of the file. This could happen if you restart logstash, so you'll need to add in some logic to handle this -- in this case below, I assume that if it hasn't seen the date, it's today.
An implementation using the memorize plugin would look like this:
filter {
  if [message] =~ /Date=/ {
    grok { match => [ "message", "Date=%{DATE:date}" ] }
  }
  # either add the date field to the saved data or pull the date from the saved data
  memorize { fields => ["date"] }
  # if we still don't have a date, let's just assume it's today
  if [date] == "" {
    ruby {
      code => 'event["date"] = Time.now.strftime("%m/%d/%Y")'
    }
  }
  if [message] !~ /Date=/ {
    # grok to parse the message
    grok {
      match => [ "message", "%{TIME:time}\|%{WORD:Message_type}\|%{GREEDYDATA:Component}\|%{NUMBER:line_number}\| %{GREEDYDATA:log_message}" ]
    }
    # now add in the date
    mutate {
      add_field => {
        datetime => "%{date} %{time}"
      }
    }
  }
}
(This example has not been tested, so there may be syntax/logic errors, but it should get you down the right path).
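To then set @timestamp from the combined field, a date filter along these lines could follow (the format string is my assumption based on the sample entries, which include milliseconds):
date {
  # e.g. "07/28/2014 17:20:45.331"
  match => ["datetime", "MM/dd/yyyy HH:mm:ss.SSS"]
}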

logstash config file - separating out user and message

New to logstash. I am trying to parse application log lines such as:
2014-11-05 16:59:36,779 ERROR DOMAINNAME\bob [This is an error. ]
My config file looks like this:
input {
  file {
    path => "C:/tmp/*.log"
  }
}
filter {
  grok {
    match => [
      "message", "%{TIMESTAMP_ISO8601:timestamp}\s*%{LOGLEVEL:level}\s*%{DATA:userAlias}\s*%{GREEDYDATA:message}"
    ]
    overwrite => [ "message" ]
  }
  if [level] =~ "INFO" {
    drop {
    }
  }
}
output {
  elasticsearch {
    host => "localhost"
    protocol => "http"
  }
}
The timestamp and level are parsed out fine, but the message displays in Kibana as:
message:
DOMAINNAME\bob [This is an error. ]
The grok pattern for DATA is .*?, so I would assume that it should handle the backslash and properly set userAlias to DOMAINNAME\bob and message to [This is an error. ]. But this isn't the case. What am I doing wrong here? Thanks.
The problem with your grok pattern is that .*? is non-greedy (it matches as little as possible, even nothing at all) and .* is greedy, so the latter "takes over" the part of the string that could have been matched by the preceding .*? pattern.
I suggest you avoid the DATA and GREEDYDATA patterns except for matching the remainder of the string (like your use of GREEDYDATA here). In this case you could e.g. use the NOTSPACE pattern to match the username. You could use an even more specific pattern that e.g. excludes characters that are invalid in usernames, but I don't see the point of that. This works:
"%{TIMESTAMP_ISO8601:timestamp}\s+%{LOGLEVEL:level}\s+%{NOTSPACE:userAlias}\s+%{GREEDYDATA:message}"
(I also took the liberty of replacing \s* with \s+ since the whitespace between the fields isn't optional.)
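Dropped into the original filter, only the match line changes:
filter {
  grok {
    match => [
      "message", "%{TIMESTAMP_ISO8601:timestamp}\s+%{LOGLEVEL:level}\s+%{NOTSPACE:userAlias}\s+%{GREEDYDATA:message}"
    ]
    overwrite => [ "message" ]
  }
}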
