Creating custom grok filters - logstash

I need to find a grok pattern for files where the lines are of the format:
3 dbm.kfa 0 340220 7766754 93.9
3 swapper/3 0 340220 7766754 93.9
This is the grok pattern that I have done so far.
\s*%{INT:no1}\s*%{USERNAME:name}\s*%{INT:no2}\s*%{INT:no3}\s*%{INT:no4}\s*%{GREEDYDATA:ans}
The USERNAME field works for dbm.kfa but not for swapper/3, as the USERNAME pattern does not include the / character. I would like to create a custom pattern for this purpose, but have no idea how to create one.
Any help would be really appreciated. Thanks a lot !

To create a custom pattern you need to define it in an external file in the following format, and put that file in a directory that will be used only for pattern files.
PATTERN_NAME [regex for your pattern]
Then you will need to change your grok config to point to the pattern files directory.
grok {
patterns_dir => ["/path/to/patterns/dir"]
match => { "message" => "%{PATTERN_NAME:fieldName}" }
}
But in your specific case if you change %{USERNAME:name} to %{DATA:name} it should work.
For a better explanation of custom patterns you should read this part of the documentation.
You can also find all the core grok patterns that ship with logstash in this github repository; the most used ones are in the grok-patterns file.
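As a concrete sketch for the lines in the question, a pattern file, e.g. /path/to/patterns/dir/extra_patterns (the file name and the PROCNAME pattern name are only examples, not something that ships with logstash), could contain a name pattern that also allows the / character:
PROCNAME [a-zA-Z0-9._/-]+
and the grok filter would then become:
grok {
  patterns_dir => ["/path/to/patterns/dir"]
  match => { "message" => "\s*%{INT:no1}\s*%{PROCNAME:name}\s*%{INT:no2}\s*%{INT:no3}\s*%{INT:no4}\s*%{GREEDYDATA:ans}" }
}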

Related

Is there a way to use Grok or Regex Match in File Input Plugin

Is there a way to use grok or regex matching in the file input plugin in Logstash? And if yes, how?
Thanks for answering.
With your limited details, I assume you want to pick up specific files from your directories based on matching patterns. This can be handled in multiple ways:
Filename patterns (globs) are valid in the path option. For example, if you only want to include ".log" files from a directory, you can use:
input { file {
path => "/var/log/applicationDir/*.log"
} }
Exclusions are also supported (matched against the filename, not the full path). In Tail mode, you might want to exclude gzipped files:
input { file {
path => "/var/log/applicationDir/*"
exclude => "*.gz"
} }
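Note that grok itself is a filter plugin rather than an option of the file input, so if the goal is to parse the file contents with grok, that goes in a separate filter block. A minimal sketch, where the pattern and field names are only placeholders:
input { file {
path => "/var/log/applicationDir/*.log"
} }
filter { grok {
match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} %{GREEDYDATA:msg}" }
} }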

Logstash - match filename

I have two servers. The first one hosts the Elastic stack. Both servers have a file /var/log/commands.log which is configured in the same way and is shipped with Filebeat to Logstash.
Using grok, I tried parsing the data into custom fields using this statement:
if [log][file][path] == "/var/log/commands.log" {
grok{
match => { "message" => "*some grok stuff*"
}
}
}
The problem is that even though the file is /var/log/commands.log on both servers and they're configured the same, the if statement is skipped as if it were false.
I've noticed that if I ship the logs locally (without Filebeat, just input { file { path => "/var/log/commands.log" } }) it works for the local /var/log/commands.log file on the machine that hosts Logstash.
For reference, this is the full .conf file for logstash: https://pastebin.com/1QbnAG7G
This is how elastic sees the file path: https://i.imgur.com/5h9HXf2.png
Does anyone know why it skips the if statement? How can I make it filter by filename? Thanks ahead!
So, it looks like you're using =~ in your pastebin code. If that's the case, you'll be matching a regex. Is that what you meant to do?
If you're intending to use a regex, then you'd want something like this, probably:
if [log][file][path] =~ /\/var\/log\/commands\.log/
Example: https://regex101.com/r/nqGg9G/1
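Put together in the filter block, a minimal sketch could look like this, assuming the field really is [log][file][path] as in the screenshot (the grok pattern is only a placeholder for the original *some grok stuff*):
filter {
  # == requires the field to equal the string exactly;
  # =~ matches the regex anywhere in the value
  if [log][file][path] =~ /\/var\/log\/commands\.log/ {
    grok {
      match => { "message" => "%{GREEDYDATA:command}" }
    }
  }
}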

logstash filter , unfiltered lines

I am new to Logstash filters and am going through different blogs and links to understand them in detail. I have a few questions which are still unanswered.
If my log file has different log patterns, e.g.:
2017-01-30 14:30:58 INFO ThreadName:33 - {"t":1485786658088,"h":"abcd1234", "l":"INFO", "cN":"org.logstash.demo", "mN":"getNextvalue", "m":"fetching next value"}
2017-01-30 14:30:58 INFO AnotherThread:33 -my log pattern is different
I have the below filter, which successfully parses line 1 of the log:
grok
{
match => [ "message", "%{TIMESTAMP_ISO8601:LogDate} %{LOGLEVEL:loglevel} %{WORD:threadName}:%{NUMBER:ThreadID} - %{GREEDYDATA:Line}" ]
}
json
{
source => "Line"
}
What will happen to the lines which cannot be matched by the filter pattern?
Is there any way to capture all the lines which were not matched and send them to Elasticsearch?
Is there any good reading material where I can read about the input, filter, and output plugins, with examples?
To answer your questions:
The lines which cannot be parsed by grok would end up with a _grokparsefailure tag. Make sure you handle them, for example by dropping the lines which don't actually match the filter criteria (see the sketch below).
As far as I know you can't capture them separately and push them to ES. Maybe for this you can have multiple grok patterns, so that you can filter them out and send them to different ES indices afterwards.
I've added the links in the comment above.
This SO could come in handy. Hope it helps!
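For the first point, a minimal sketch of dropping the unmatched lines, assuming the default _grokparsefailure tag (the grok pattern is the one from the question):
filter {
  grok {
    match => [ "message", "%{TIMESTAMP_ISO8601:LogDate} %{LOGLEVEL:loglevel} %{WORD:threadName}:%{NUMBER:ThreadID} - %{GREEDYDATA:Line}" ]
  }
  # events that did not match carry the failure tag and are discarded
  if "_grokparsefailure" in [tags] {
    drop { }
  }
}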
As @darth_vader points out, you'll get a "_grokparsefailure" tag on each document that doesn't match your pattern(s) in a grok{} filter. However, how you handle this failure is up to you.
By default, all the events will fall through to your output{} section, which presumably would send them to elasticsearch. You could also have a conditional output{} section, which sends parsed logs to one output and unparsed logs to another (a file{} output, a different index, or...).
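A minimal sketch of such a conditional output{} section (the index names are only examples):
output {
  if "_grokparsefailure" in [tags] {
    # unparsed events go to their own index for later inspection
    elasticsearch { index => "logs-unparsed-%{+YYYY.MM.dd}" }
  } else {
    elasticsearch { index => "logs-parsed-%{+YYYY.MM.dd}" }
  }
}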
As for examples, the official doc tends to include incomplete fragments (at best), so you're probably going to find better examples in random internet blogs.

grok pattern for extracting info in logstash

I am using a grok pattern to extract some data from a file path, but it does not seem to work right.
path: /home/shard/logstash/test/12/23/abc_132.log
pattern: %{GREEDYDATA}/%{INT:group}/%{INT:id}/%{DATA:job_type}(_%{UUID:uuid})*\.log
I want to extract 132 as the uuid field, and it works OK when tested in the grok debugger [http://grokdebug.herokuapp.com/], but when applied in the logstash indexer it puts all of abc_132 under the job_type field.
What may be the issue here and how can I extract uuid (perhaps a different regex?).
You can try to get the uuid from the job_type by using the ruby filter
ruby {
code => "event['uuid'] = event['job_type'].split('_')[1]"
}
Hope this can help you.
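Note that event['field'] is the old Ruby event API; on Logstash 5.x and later the ruby filter has to use the get/set methods instead. A sketch of the equivalent:
ruby {
  # take the part after the underscore in job_type and store it as uuid
  code => "event.set('uuid', event.get('job_type').split('_')[1])"
}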

Extracting fields in Logstash

I am using Logstash (with Kibana as the UI). I would like to extract some fields from my logs so that I can filter by them on the LHS of the UI.
A sample line from my log looks like this:
2013-07-04 00:27:16.341 -0700 [Comp40_db40_3720_18_25] client_login=C-316fff97-5a19-44f1-9d87-003ae0e36ac9 ip_address=192.168.4.1
In my logstash conf file, I put this:
filter {
grok {
type => "mylog"
pattern => "(?<CLIENT_NAME>Comp\d+_db\d+_\d+_\d+_\d+)"
}
}
Ideally, I would like to extract Comp40_db40_3720_18_25 (the number of digits can vary, but will always be at least 1 in each section separated by _) and client_login (can also be client_logout). Then, I can search for CLIENT_NAME=Comp40... CLIENT_NAME=Comp55, etc.
Am I missing something in my config to make this a field that I can use in Kibana?
Thanks!
If you are having any difficulty getting the pattern to match correctly, using the Grok Debugger is a great solution.
For your given problem you could just separate out your search data into another variable, and save the additional varying digits in another (trash) variable.
For example:
(?<SEARCH_FIELD>Comp\d+)%{GREEDYDATA:trash_variable}]
(Please use the Grok Debugger on the above pattern)
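On recent Logstash versions the type/pattern options shown in the question were replaced by match, so a fuller sketch of the same idea, capturing both the client name and the login/logout event, could look like this (the field names are only examples; please verify it in the Grok Debugger against your own logs):
filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{ISO8601_TIMEZONE:tz} \[(?<CLIENT_NAME>Comp\d+_db\d+_\d+_\d+_\d+)\] %{WORD:event_type}=%{NOTSPACE:client_id} ip_address=%{IP:ip_address}" }
  }
}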
