Logstash Grok regex expression works fine alone but doesn't work when grouped with other grok expressions

My grok expression works fine when used on its own against the matching string, but when I combine it with other grok expressions to capture the other data present in the log line, it no longer matches the same string.
Case 1: The grok expression below works fine when run alone against the log string below, and the value is captured in the field targetMessage.
Log string: Tracking : sent request to msgDestination
Grok expression: (?<targetMessage>^Tracking : (?:received response from|sent request to) msgDestination$)
Case 2: When I run the expression together with the other data present in the log string, it doesn't work, i.e. the grok expression no longer matches the same string as above.
Log string:
2022-11-26 8:16:39,873 INFO [task.SomeTask] Tracking : sent request to msgDestination : MODULE1|SERVICE1|20220330051054|TASK1
Grok expression: %{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:loglevel} \[(?<classname>[^\]]+)\] (?<targetMessage>^Tracking : (?:received response from|sent request to) msgDestination$) : %{WORD:moduleName}\|%{WORD:service}\|%{INT:requestId}\|%{WORD:taskName}
Debug tool used: https://grokdebug.herokuapp.com/
Can anyone suggest what mistake I'm making here?

^ and $ anchor an expression to the start and end of a line respectively. You have both inside the targetMessage custom pattern, and that pattern sits in the middle of the line, so neither anchor matches. Remove both ^ and $.
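With both anchors removed, the full pattern from Case 2 becomes:
%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:loglevel} \[(?<classname>[^\]]+)\] (?<targetMessage>Tracking : (?:received response from|sent request to) msgDestination) : %{WORD:moduleName}\|%{WORD:service}\|%{INT:requestId}\|%{WORD:taskName}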

Related

Grok filter is not working when id has dashes

I have a sample input like below.
[2022-01-06 19:51:42,143] [http-nio-8080-exec-7] DEBUG [50a4f8740c30b9ca,c1b11682-1eeb-4538-b7f6-d0fb261b3e1d]
I implemented a grok filter to validate the text.
\[%{TIMESTAMP_ISO8601:timestamp}\] \[(?<threadname>[^\]]+)\] %{LOGLEVEL:logLevel} \[%{WORD:traceId},%{WORD:correlationId}\]
When I validate it, it says there are no matches.
But if I remove the - in the correlation id, the filter works fine. Is there any modification I can make to the filter so that it accepts a - in the correlation id?
Try this.
\[%{TIMESTAMP_ISO8601:timestamp}\] \[%{DATA:threadName}\] %{LOGLEVEL:logLevel} \[%{DATA:traceId},%{DATA:correlationId}\]
According to this, the %{WORD} pattern is defined by the regular expression \b\w+\b.
\w matches alphanumeric characters and the underscore.
\b matches a word boundary, which restricts the match to whole words only.
So if your original text contains a -, %{WORD} will never capture it.
You can try %{DATA} instead, as it matches .*?
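For reference, a minimal Logstash filter block using that corrected pattern might look like this (a sketch; the source field message is the Logstash default):
filter {
  grok {
    # DATA (.*?) tolerates the dashes that WORD (\b\w+\b) rejects
    match => { "message" => "\[%{TIMESTAMP_ISO8601:timestamp}\] \[%{DATA:threadName}\] %{LOGLEVEL:logLevel} \[%{DATA:traceId},%{DATA:correlationId}\]" }
  }
}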

How to prevent "Timeout executing grok" and _groktimeout tag

I have a log entry whose last part keeps changing depending on a few HTTPS conditions.
Sample logs:
INFO [2021-09-27 23:07:58,632] [dw-1001 - POST /abc/api/v3/pqr/options] [386512709095023:] [ESC[36mUnicornClientESC[0;39m]:
<"type": followed by 11000 characters including space words symbols <----- variable length.
Grok pattern:
%{LOGLEVEL:loglevel}\s*\[%{TIMESTAMP_ISO8601:date}\]\s*\[%{GREEDYDATA:requestinfo}\]\s*\[%{GREEDYDATA:logging_id}\:%{GREEDYDATA:token}\]\s*\[(?<method>[^\]]+)\]\:\s*(?<messagebody>(.|\r|\n)*)
This works fine if the variable part of the log is small, but when a large log is encountered, it throws an exception:
[2021-09-27T17:24:40,867][WARN ][logstash.filters.grok ] Timeout executing grok '%{LOGLEVEL:loglevel}\s*\[%{TIMESTAMP_ISO8601:date}\]\s*\[%{GREEDYDATA:requestinfo}\]\s*\[%{GREEDYDATA:logging_id}\:%{GREEDYDATA:token}\]\s*\[(?<method>[^\]]+)\]\:\s*(?<messagebody>(.|\r|\n)*)' against field 'message' with value 'Value too large to output (178493 bytes)! First 255 chars are: INFO [2021-09-27 11:50:14,005] [dw-398 - POST /xxxxx/api/v3/xxxxx/options] [e3acfd76-28a6-0000-0946-0c335230a57e:]
When that happens, CPU usage starts choking, the persistent queue grows, and there is lag in Kibana. Any suggestions?
Performance problems and timeouts in grok are usually not a problem when the pattern matches the message; they are a problem when the pattern fails to match.
The first thing to do is anchor your patterns if possible. This blog post has performance data on how effective this is. In your case, when the pattern does not match, grok will start at the beginning of the line to see if LOGLEVEL matches. If it does NOT match, then it will start at the second character of the line and see if LOGLEVEL matches. If it keeps failing to match, it will have to make thousands of attempts to match the pattern, which is really expensive. If you change your pattern to start with ^%{LOGLEVEL:loglevel}\s*\[ then the ^ means that grok only has to evaluate the match against LOGLEVEL at the start of each line of [message]. If you change it to \A%{LOGLEVEL:loglevel}\s*\[ then it will only evaluate the match at the very beginning of the [message] field.
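Applied to the pattern from the question, the fully anchored version only changes the start:
\A%{LOGLEVEL:loglevel}\s*\[%{TIMESTAMP_ISO8601:date}\]\s*\[%{GREEDYDATA:requestinfo}\]\s*\[%{GREEDYDATA:logging_id}\:%{GREEDYDATA:token}\]\s*\[(?<method>[^\]]+)\]\:\s*(?<messagebody>(.|\r|\n)*)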
Secondly, if possible, avoid GREEDYDATA except at the end of the pattern. When matching a 10 KB string against a pattern that has multiple GREEDYDATAs, if the pattern does not match then each GREEDYDATA will be tried against thousands of different substrings, resulting in millions of attempts to do the match for each event (it's not quite this simple, but failing to match does get very expensive). Try changing GREEDYDATA to DATA and if it still works then keep it.
Thirdly, if possible, replace GREEDYDATA/DATA with a custom pattern. For example, it appears to me that \[%{GREEDYDATA:requestinfo}\] could be replaced with \[(?<requestinfo>[^\]]+)\], and I would expect that to be cheaper when the overall pattern does not match.
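Putting the first three suggestions together, the pattern might become something like this (a sketch; the field layout is taken from the original pattern):
\A%{LOGLEVEL:loglevel}\s*\[%{TIMESTAMP_ISO8601:date}\]\s*\[(?<requestinfo>[^\]]+)\]\s*\[(?<logging_id>[^:\]]*):(?<token>[^\]]*)\]\s*\[(?<method>[^\]]+)\]:\s*(?<messagebody>(.|\r|\n)*)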
Fourthly, I would seriously consider using dissect rather than grok.
dissect { mapping => { "message" => "%{loglevel->} [%{date}] [%{requestinfo}] [%{logging_id}:%{token}] [%{method}]: %{messagebody}" } }
However, there is a bug in the dissect filter where if "->" is used in the mapping then a single delimiter does not match, multiple delimiters are required. Thus that %{loglevel->} would match against INFO [2021, but not against ERROR [2021. I usually do
mutate { gsub => [ "message", "\s+", " " ] }
and remove the -> to work around this. dissect is far less flexible and far less powerful than grok, which makes it much cheaper. Note that dissect will create empty fields, like grok with keep_empty_captures enabled, so you will get a [token] field that contains "" for that message.
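Putting the workaround together with the dissect mapping (a sketch; message is the default field name):
filter {
  # collapse runs of whitespace so the "->" padding modifier is not needed
  mutate { gsub => [ "message", "\s+", " " ] }
  dissect { mapping => { "message" => "%{loglevel} [%{date}] [%{requestinfo}] [%{logging_id}:%{token}] [%{method}]: %{messagebody}" } }
}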

Issues with pattern matching in Logstash

I'm having issues with pattern matching in Logstash.
Sample log pattern
[DEBUG] 2021-09-13T23:58:24.361 [http-nio-8080-exec-1] [FB-3D] localhost - [i.i.i.a.f.AuthFilter] :: doFilter :: formName B-3D
Grok Pattern that works
\s?\[%{DATA:loglevel}\] %{TIMESTAMP_ISO8601:logts} \[%{DATA:threadname}\] \[?%{DATA:formname}\] %{DATA:podname} %{DATA:filler1} \[%{DATA:classname}\] %{GREEDYDATA:fullmesg}
For the sample log mentioned above, the above grok pattern works fine. But I have some log files where the fourth field does not exist, not even as an empty []. I want to know how to handle this.
Sample log (which is not working using the above pattern)
[DEBUG] 2021-09-13T23:58:22.633 [http-nio-8080-exec-1] localhost - [i.i.i.a.f.AuthFilter] :: Requested going to check the
In the above case, the fourth field [?%{DATA:formname}] does not exist. Even with the optional condition included in the grok pattern for formname, it still does not work; it expects an empty [] field to be present. Is there a way to make the 4th field optional, i.e. a pattern that accommodates the line even if the field does not exist?
Any help on this is much appreciated.
Thanks in advance.
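A sketch of one possible approach (grok's Oniguruma regex engine supports optional groups): wrap the fourth field and its trailing space in ( ... )? so the pattern matches whether or not the field is present:
\s?\[%{DATA:loglevel}\] %{TIMESTAMP_ISO8601:logts} \[%{DATA:threadname}\] (\[%{DATA:formname}\] )?%{DATA:podname} %{DATA:filler1} \[%{DATA:classname}\] %{GREEDYDATA:fullmesg}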

Grok parsing issue when parsing a log line starting with [date] [hostname]

I am trying to parse the log below using grok:
[2018-10-06 12:04:03:0227] [MYMACHINENAME]
and the grok expression which I used is
/[%{DATESTAMP:date}/] /[%{WORD:data}%/]
This expression is not working. I tried replacing WORD with HOSTNAME, but even then it does not work, and if I try either of the matchers alone, it works.
Can anyone point me to good tutorial pages for learning grok expressions?
There are a few errors in your pattern.
First off, you escape characters using a backslash \, not a forward slash /. Second, you don't need the % to match the ] at the end.
Third, DATESTAMP doesn't match your date pattern; you need TIMESTAMP_ISO8601.
Your final pattern should become:
\[%{TIMESTAMP_ISO8601}\] \[%{WORD}\]
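With the field names from the question, that is:
\[%{TIMESTAMP_ISO8601:date}\] \[%{WORD:data}\]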
Regex pattern DATESTAMP is not correct for your string. Try using TIMESTAMP_ISO8601.
Here you can see all grok regex patterns: grok-patterns.

Logstash and Grok always show _grokparsefailure

I am using https://grokdebug.herokuapp.com/ to build grok filters for Logstash, but even though grokdebug shows a correctly parsed message, my Kibana shows _grokparsefailure.
Message: [2015-12-01 08:53:16] app.INFO: Calories 4 [] []
Pattern: %{SYSLOG5424SD} %{JAVACLASS}: %{WORD} %{INT} %{GREEDYDATA}
What am I doing wrong? Notice that the first filter, with the tag "google" and GREEDYDATA, works, and the second always fails.
OK, so I found the solution. The correct pattern is:
\[%{TIMESTAMP_ISO8601:timestamp}\] %{DATA}%{LOGLEVEL:level}: Calories %{WORD:calories_count} %{GREEDYDATA:msg}
Even though I used https://grokdebug.herokuapp.com to find the pattern, it was completely irrelevant.
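For completeness, the corresponding Logstash filter block would be something like this (a sketch; message is the default field name):
filter {
  grok {
    match => { "message" => "\[%{TIMESTAMP_ISO8601:timestamp}\] %{DATA}%{LOGLEVEL:level}: Calories %{WORD:calories_count} %{GREEDYDATA:msg}" }
  }
}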
