Logstash pattern doesn't match in the expected way

I'm using Logstash to collect server.log files from several GlassFish domains. Unfortunately the log itself contains no domain name, but the path does.
So I tried to extract part of the filename and match it to the GlassFish domain. The problem is that the pattern I defined doesn't match the right part.
Here is the logstash.conf:
file {
  type => "GlassFish_Server"
  sincedb_path => "D:/logstash/.sincedb_GF"
  #start_position => beginning
  path => "D:/logdir/GlassFish/Logs/GF0/server.log"
}
grok {
  patterns_dir => "./patterns"
  match => [ 'path', '%{DOMAIN:Domain}' ]
}
I've created a custom pattern file and filled it with a regexp.
My custom pattern file:
DOMAIN (?:[a-zA-Z0-9_-]+[\/]){3}([a-zA-Z0-9_-]+)
And the result is:
"Domain" => "logdir/GlassFish/Logs/GF0"
I've tested my regexp on https://www.regex101.com/ and it works fine.
Using http://grokdebug.herokuapp.com/ to verify the pattern yields the same unwanted result.
What am I doing wrong? Does anybody have an idea how to get only the domain name "GF0", e.g. by modifying my pattern or using mutate in the logstash.conf?

I'm assuming that you're trying to strip out the GF0 portion from path? Note that %{DOMAIN:Domain} stores everything the DOMAIN pattern matched, so the unnamed capture group inside your custom pattern has no effect on the field value.
If that's the case and you know that the path will always be in the same format, you could just use something like this for the grok:
filter {
  grok {
    match => [ 'path', '(?i)/Logs/%{WORD:Domain}/' ]
  }
}
Not as elegant as a regexp, but it should work.
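
If you'd rather keep your directory-counting approach, you can name just the part you want with an Oniguruma capture directly in the match, so the capture group isn't silently ignored. A sketch, assuming the path layout shown above:

grok {
  match => [ 'path', '(?:[a-zA-Z0-9_-]+[\/]){3}(?<Domain>[a-zA-Z0-9_-]+)' ]
}

For D:/logdir/GlassFish/Logs/GF0/server.log this should yield "Domain" => "GF0".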

Converting Ruby code to Logstash filters

I was wondering what would be the best way to implement the following task in Logstash:
I have the following field that contains multiple paths divided by ':':
my_field : "/var/log/my_custom_file.txt:/var/log/otherfile.log/:/root/aaa.jar"
I want to add a new field called "first_file" that will contain only the file name (without suffix) of the first path:
first_file : my_custom_file
I implemented it with the following Ruby code:
code => 'event.set("first_file",event.get("[my_field]").split(":")[0].split("/")[-1].split(".")[0])'
How can I use Logstash filters (add_field, split, grok) to do the same task? I feel like using Ruby code should be my last option.
You could do it using just grok, but I think it would be clearer to use mutate to pull out the first value:
mutate { split => { "my_field" => ":" } }
mutate { replace => { "my_field" => "%{[my_field][0]}" } }
grok { match => { "my_field" => "/(?<my_field>[^/]+)\.%{WORD}$" } overwrite => [ "my_field" ] }
rather than
grok { match => { "my_field" => "/(?<my_field>[^/]+)\.%{WORD}:" } overwrite => [ "my_field" ] }
The (?<my_field>[^/]+) is a custom pattern (documented here) which creates a field called [my_field] from a sequence of one or more (+) characters that are not /.
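
Putting the three filters together (a sketch, using the field name from the question):

filter {
  # "/var/log/a.txt:/var/log/b.log:/root/c.jar" -> array of paths
  mutate { split => { "my_field" => ":" } }
  # keep only the first path
  mutate { replace => { "my_field" => "%{[my_field][0]}" } }
  # keep only the basename, without directory and extension
  grok {
    match => { "my_field" => "/(?<my_field>[^/]+)\.%{WORD}$" }
    overwrite => [ "my_field" ]
  }
}

To end up with a separate first_file field as in the question, you could first mutate { copy => { "my_field" => "first_file" } } and run the split/replace/grok against first_file instead.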
Yes, with a basic grok you could match every field in the value.
This kind of filter should work (put it in your Logstash configuration file); it extracts the "basename" of each file, i.e. the filename without extension and path:
filter {
  grok {
    match => { "my_field" => "%{GREEDYDATA}/%{WORD:filename}\.%{WORD}:%{GREEDYDATA}/%{WORD:filename2}\.%{WORD}:%{GREEDYDATA}/%{WORD:filename3}\.%{WORD}" }
  }
}
You could be stricter in grok by using PATH in place of GREEDYDATA; I'll let you determine the approach that works best in your context.
You can debug grok patterns with the online tool grokdebug.

How can I extract key-value pairs from a string

I am using the tool https://grokdebug.herokuapp.com/ to test my grok parser. The original string I have is something like:
2020-05-01 01:59:10 server1 17.5.36.8 POST /v1/user.aspx r=1000&11:59:11.219&Method=Start&Credentials=xxxxxx:++http://localhost/v1/user.aspx&Reque
I'd like to parse the data to:
{
  Method: Start,
  r: 1000,
  Credential: xxxxx
}
I am looking at the parser https://github.com/logstash-plugins/logstash-patterns-core/blob/master/patterns/grok-patterns but I can't find a good one to use.
I don't know how to achieve this using solely grok patterns. If you're also using Logstash, you can try the following solution:
The pattern:
%{TIMESTAMP_ISO8601:timestamp}\s%{GREEDYDATA:server}\s%{IP:ip}\s%{GREEDYDATA:request_type}\s%{PATH:path}\sr=%{NUMBER:r}&%{TIME:some_time}&Method=%{GREEDYDATA:method}&Credentials=%{GREEDYDATA:username}:%{GREEDYDATA:password}%{MY_URI:uri}
It works with the custom pattern:
MY_URI http://(?:%{USER}(?::[^@]*)?@)?(?:%{URIHOST})?(?:%{URIPATHPARAM})?
I used the custom pattern because the %{URIPROTO} from the original %{URI} pattern won't separate credentials from the uri properly. I assumed that credentials are given in the following format:
username:password
In case I'm wrong, please replace:
Credentials=%{GREEDYDATA:username}:%{GREEDYDATA:password}
with:
Credentials=%{GREEDYDATA:credentials}
If the :++ string separates credentials from uri, you can use it to apply the %{URI} instead of %{MY_URI}.
The pattern works in the Grok Debugger; however, the output is more readable in the Grok Constructor matcher.
Since you're only interested in some fields, use mutate's remove_field option to drop the rest, and its add_field option to enclose the result in a new field.
Your logstash.conf file may look like this (if you place the file with the custom pattern in the patterns directory alongside the config file):
# logstash.conf
…
filter {
  grok {
    patterns_dir => ["./patterns"]
    match => {
      "message" => "%{TIMESTAMP_ISO8601:timestamp}\s%{GREEDYDATA:server}\s%{IP:ip}\s%{GREEDYDATA:request_type}\s%{PATH:path}\sr=%{NUMBER:r}&%{TIME:some_time}&Method=%{GREEDYDATA:method}&Credentials=%{GREEDYDATA:username}:%{GREEDYDATA:password}%{MY_URI:uri}"
    }
  }
  mutate {
    add_field => { "result" => "Method: %{method}, r: %{r} Credential: %{username}" }
    remove_field => ["timestamp", "server", "ip", "request_type", "path", "some_time", "password", "uri", "method", "r", "username"]
  }
}
…
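
Alternatively, since everything after the request path is a &-separated list of key=value pairs, the kv filter is a natural fit for that part. A sketch, assuming you first grok the query string into a hypothetical field called query:

grok {
  # isolate everything after the request path (the field name "query" is made up here)
  match => { "message" => "user\.aspx\s+%{GREEDYDATA:query}" }
}
kv {
  # split "r=1000&...&Method=Start&..." into one field per key
  source => "query"
  field_split => "&"
  value_split => "="
}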

Parsing a log file with logs of different patterns in Logstash

I am new to Logstash and, for that matter, the ELK stack. A log file has several processes logging data to it, and each process writes logs with a different pattern. I want to parse this log file. Each log in this file starts with the grok pattern below:
%{SYSLOGTIMESTAMP:timestamp} %{SYSLOGHOST:logsource} %{SYSLOGPROG}: + %{SRCFILE:srcfile}:%{NUMBER:linenumber}
where SRCFILE is defined as
[a-zA-Z0-9._-]+
Please let me know how I can parse this file so that the different types of logs from each process can be parsed.
Since you're trying to read log files, you'll want to use the file input plugin to pick up one or more files from a given path. A basic input could look something like this:
input {
  file {
    path => "/your/path/*"
    exclude => "*.gz"
    start_position => "beginning"
    ignore_older => 0
    sincedb_path => "/dev/null"
  }
}
The above is just a sample for you to adapt; note that sincedb_path => "/dev/null" discards the remembered read position, so files are re-read from the beginning on every restart. Once the files are being processed line by line, you can use the grok filter to match the keywords from your log file. A sample filter could look something like this:
grok {
  patterns_dir => ["/pathto/patterns"]
  match => { "message" => "%{SYSLOGTIMESTAMP:timestamp} %{SYSLOGHOST:logsource} %{SYSLOGPROG}: + %{SRCFILE:srcfile}:%{NUMBER:linenumber}" }
}
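For this to work, SRCFILE must be defined in a file inside /pathto/patterns (any file name works), using the regex from the question:

SRCFILE [a-zA-Z0-9._-]+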
You might have to use several match patterns if different types of logs are printed within a single file, or each line could carry comma-separated values. Something like:
grok {
  match => { "message" => [
    "TYPE1,%{WORD:a1},%{WORD:a2},%{WORD:a3},%{POSINT:a4}",
    "TYPE2,%{WORD:b1},%{WORD:b2},%{WORD:b3},%{WORD:b4}",
    "TYPE3,%{POSINT:c1},%{WORD:c2},%{POSINT:c3},%{WORD:c4}" ]
  }
}
Grok tries the patterns in that array in order and stops at the first match (break_on_match defaults to true). And then you could play around with the message, since you've got all the values you need right within it. Hope it helps!

How to combine patterns to create a custom pattern in grok

I'm new to Logstash and grok and have a question regarding a pattern.
Jul 26 09:46:37
The above content consists of %{MONTH} %{MONTHDAY} %{TIME} and whitespace.
I need to know how to combine all of these into a single pattern, e.g. %{SAMPLE_TIMESTAMP}.
Thanks!
Quotes from the Grok Custom Patterns Docs (RTFM):
First, you can use the Oniguruma syntax for named capture which will let you match a piece of text and save it as a field:
(?<field_name>the pattern here)
...
Alternately, you can create a custom patterns file.
Create a directory called patterns with a file in it called extra (the file name doesn’t matter, but name it meaningfully for yourself)
In that file, write the pattern you need as the pattern name, a space, then the regexp for that pattern.
So you could create a pattern file that contained the line:
CUST_DATE %{MONTH} %{MONTHDAY} %{TIME}
Then use the patterns_dir setting in this plugin to tell logstash where your custom patterns directory is.
filter {
  grok {
    patterns_dir => ["./patterns"]
    match => { "message" => "%{CUST_DATE:datestamp}" }
  }
}
Would result in the field:
datestamp => "Jul 26 09:46:37"
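
For comparison, the first quoted option (an inline Oniguruma capture) achieves the same thing without a patterns file, since grok lets you reference %{...} patterns inside a named capture:

grok {
  match => { "message" => "(?<datestamp>%{MONTH} %{MONTHDAY} %{TIME})" }
}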
Filter
Use pattern_definitions to define your patterns inline:
filter {
  grok {
    pattern_definitions => { "MY_DATE" => "%{MONTH} %{MONTHDAY} %{TIME}" }
    match => { "message" => "%{MY_DATE:timestamp}" }
  }
}
Result
{
  "timestamp": "Jul 26 09:46:37"
}
Tested using Logstash 6.5
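
Either way, if the captured string is meant to become the event's @timestamp, you can hand the field to the date filter afterwards (a sketch; the format assumes a two-digit day of month):

date {
  match => [ "timestamp", "MMM dd HH:mm:ss" ]
}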

logstash config file - separating out user and message

New to logstash. I am trying to parse application log lines such as:
2014-11-05 16:59:36,779 ERROR DOMAINNAME\bob [This is an error. ]
My config file looks like this:
input {
  file {
    path => "C:/tmp/*.log"
  }
}
filter {
  grok {
    match => [
      "message", "%{TIMESTAMP_ISO8601:timestamp}\s*%{LOGLEVEL:level}\s*%{DATA:userAlias}\s*%{GREEDYDATA:message}"
    ]
    overwrite => [ "message" ]
  }
  if [level] =~ "INFO" {
    drop { }
  }
}
output {
  elasticsearch {
    host => "localhost"
    protocol => "http"
  }
}
The timestamp and level are parsed out fine, but the message displays in Kibana as:
message:
DOMAINNAME\bob [This is an error. ]
The grok pattern for DATA is .*?
so I would assume that it should handle the backslash \ and properly set
userAlias to DOMAINNAME\bob and
message to [This is an error. ]
But this isn't the case. What am I doing wrong here? Thanks.
The problem with your grok pattern is that .*? is non-greedy (it matches as little as possible) and .* is greedy, so the latter "takes over" the part of the string that could have been matched by the preceding .*? pattern.
I suggest you avoid the DATA and GREEDYDATA patterns except for matching the remainder of the string (like your use of GREEDYDATA here). In this case you could e.g. use the NOTSPACE pattern to match the username. You could use an even more specific pattern that e.g. excludes characters that are invalid in usernames, but I don't see the point of that. This works:
"%{TIMESTAMP_ISO8601:timestamp}\s+%{LOGLEVEL:level}\s+%{NOTSPACE:userAlias}\s+%{GREEDYDATA:message}"
(I also took the liberty of replacing \s* with \s+ since the whitespace between the fields isn't optional.)
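
With that pattern the sample line should parse into roughly the following fields (values as expected in the question):

timestamp => "2014-11-05 16:59:36,779"
level     => "ERROR"
userAlias => "DOMAINNAME\bob"
message   => "[This is an error. ]"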
