Logstash - filter based on a dedicated time field for the last 30 days

I have an application that receives logs remotely from IoT devices. Those logs carry timestamps from when the events really happened; I process and re-log them, and therefore have a dedicated time field, original-log-time, in my JSON logs.
So every one of my log files has the usual @timestamp field recording when the line was written to the file, and an original-log-time field that contains the original log time.
Now I would like to forward logs to Elasticsearch only if the original log time is less than 30 days in the past. The reason is that logs from each day get their own index, and only the last 30 indices are warm; the others are closed, and I don't want them to be reopened.
What I am trying:
if [original_log_time].compareTo(ctx._source.time.plusDays(30)) <= 0 {
  elasticsearch {
    hosts => ["${ELASTICSEARCH_HOSTS:?ELASTICSEARCH_HOSTS environment variable is required}"]
    index => "my-logs-" + original_log_time
    user => "${ELASTICSEARCH_USERNAME:}"
    password => "${ELASTICSEARCH_PASSWORD:}"
  }
}
But this leads to the following error:
Expected one of [, #, in, not , ==, !=, <=, >=, <, >, =~, !~, and, or, xor, nand, { at line 28, column 24 (byte 512) after output if [log_index_date]
I read about the Logstash ignore_older option, but it looks like I can't specify which timestamp it should take into account for that check. Is there a smarter solution?
UPDATE
As I got some errors saying that there is neither a compareTo nor a plusDays in Logstash, I tried another approach that I read about here: https://discuss.elastic.co/t/adding-1-day-to-the-date/129168
which was a filter
filter {
  date {
    match => [ "log_index_date", "dd.MM.yyyy" ]
    target => "log_index_plus_thirty"
  }
  ruby {
    code => 'event.set("log_index_plus_thirty", LogStash::Timestamp.new(event.get("log_index_plus_thirty")+86400*30))'
  }
}
with the following if condition:
if [@timestamp] <= [log_index_plus_thirty] {
  elasticsearch {
    hosts => ["${ELASTICSEARCH_HOSTS:?ELASTICSEARCH_HOSTS environment variable is required}"]
    index => "device-logs-" + log_index_date
    user => "${ELASTICSEARCH_USERNAME:}"
    password => "${ELASTICSEARCH_PASSWORD:}"
  }
}
but this complains about the _plus_thirty part of the if condition, as if that field did not exist.
Also, log_index_date is an optional field, so I am not sure whether that causes a problem as well?
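One possible approach (an untested sketch, not from the original question): parse original_log_time with a date filter, have a ruby filter tag events older than 30 days, and make the output conditional on that tag. Note that string concatenation in index => is not valid Logstash syntax; field interpolation uses the %{field} sprintf form. The dd.MM.yyyy pattern and the field names are taken from the snippets above; the too_old tag and the original_log_timestamp target are my own:
filter {
  # Parse the device-side time into a real timestamp field.
  date {
    match => [ "original_log_time", "dd.MM.yyyy" ]
    target => "original_log_timestamp"
  }
  # Tag events whose original time is more than 30 days in the past.
  ruby {
    code => '
      ts = event.get("original_log_timestamp")
      event.tag("too_old") if ts && ts.to_f < (Time.now.to_f - 30 * 86400)
    '
  }
}
output {
  if "too_old" not in [tags] {
    elasticsearch {
      hosts    => ["${ELASTICSEARCH_HOSTS:?ELASTICSEARCH_HOSTS environment variable is required}"]
      index    => "my-logs-%{log_index_date}"
      user     => "${ELASTICSEARCH_USERNAME:}"
      password => "${ELASTICSEARCH_PASSWORD:}"
    }
  }
}
This sidesteps the closed indices entirely, since events older than 30 days never reach the elasticsearch output.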

Related

multiline log (array) with logstash

I want to parse a logfile with Logstash that contains both single-line and multi-line entries (e.g. the first two lines below are one-line log entries, whereas the third has a multi-line entry):
ERROR - 2015-12-05 20:48:53 --> Could not find page
ERROR - 2015-12-05 20:48:53 --> Could not find VAR
ERROR - 2015-12-05 20:48:59 --> Array
(
[id] => 12344
[studentid] => 33
[fname] =>
[lname] =>
[address] => tokyo
)
This log entry is forwarded from the client (logstash-forwarder), which sets the type to "multilineclient".
filter {
  if [type] == "multilineclient" {
    multiline {
      pattern => "^ERROR"
      what => "previous"
    }
    grok {
      match => { "message" => "%{LOGLEVEL:loglevel}\s+%{TIMESTAMP_ISO8601:timestamp}\s+%{DATA:message}({({[^}]+},?\s*)*})?\s*$(?<stacktrace>(?m:.*))?" }
    }
    mutate {
      remove => [ "#loglevel" ]
    }
  }
}
I tried both the Grok Debugger and the grok constructor (but couldn't quite solve the issue with LOGLEVEL being the start of the log entry).
My multiline logs (the array) are parsed as separate messages:
message: [id] =>
message: [studentid] =>
message: [fname] =>
I was expecting this to come as single "message:"
Any suggestion?
The first step is to get multiline{} (codec or filter) working properly. When it does, you should end up with three documents based on your example.
Your multiline construct can be read as "when I find a line that begins with ERROR, keep it with the previous", which I don't think is what you want. Sounds like you should add the 'negate' option.
If that solves the multiline problem, then you should run one grok{} to pull the common stuff off the front (level, date, time). A second grok{} could then separate all the fields inside the parens from the rest. The data inside the parens could probably be fed to the kv{} ("key value") filter to produce fields from the key/value pairs.
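A sketch of that pipeline (untested and not part of the original answer; the body, summary, and pairs field names and the exact patterns are mine, and kv's value_split_pattern option requires a reasonably recent version of the kv filter):
filter {
  multiline {
    pattern => "^ERROR"
    negate => true
    what => "previous"
  }
  # First grok: pull the common prefix (level, timestamp) off the front.
  grok {
    match => { "message" => "(?m)%{LOGLEVEL:loglevel}\s+-\s+%{TIMESTAMP_ISO8601:timestamp}\s+-->\s+%{GREEDYDATA:body}" }
  }
  # Second grok: separate the parenthesised block from the rest.
  grok {
    match => { "body" => "(?m)%{DATA:summary}\(\s*(?<pairs>[^)]*)\)" }
    tag_on_failure => []
  }
  # Turn lines like "[id] => 12344" into id => 12344 fields.
  kv {
    source => "pairs"
    field_split => "\r\n"
    value_split_pattern => "=>"
    trim_key => "\[\] "
    trim_value => " "
  }
}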

How to pull specific data out of a message in LogStash

I am trying to take log data from a custom application that has a well defined format. I am trying to pick out certain pieces of the data using the grok filter, but I am not having any luck. Here is a sample log:
- System.Data.SqlClient.SqlException (0x80131904): Arithmetic overflow error converting IDENTITY to data type int.
Arithmetic overflow occurred.
What I would like to do is extract out the SqlException out of the string. Here is the grok that I am using:
grok {
  match => {
    "message" => [
      "(?m)%{DATE:TIMESTAMP_DATE}%{SPACE}%{TIME:TIMESTAMP_TIME}%{SPACE}%{WORD:LOG_LEVEL}%{SPACE}(?<THREAD>[^\s]+)%{SPACE}(?<HOST>[^\s]+)%{SPACE}%{GREEDYDATA:MESSAGE}",
      "(?<EXCEPTION>[.*]+)"
    ]
  }
}
I have tried several different ways, but I guess I am not completely understanding the documentation. What I would expect is that all of the fields extracted by the first pattern would be joined by the result of the second one. In other words:
TIMESTAMP_DATE, TIMESTAMP_TIME, LOG_LEVEL, THREAD, HOST, MESSAGE, EXCEPTION
I am getting the other fields perfectly; it is just the additional match that I am missing. Any help would be appreciated. Thanks.
If you specify multiple patterns, grok by default only checks the patterns until the first match is encountered. If you want to match against both patterns regardless of whether the first one matched or not, you can change the behaviour like this:
grok {
  break_on_match => false
  match => {
    "message" => [
      "(?m)%{DATE:TIMESTAMP_DATE}%{SPACE}%{TIME:TIMESTAMP_TIME}%{SPACE}%{WORD:LOG_LEVEL}%{SPACE}(?<THREAD>[^\s]+)%{SPACE}(?<HOST>[^\s]+)%{SPACE}%{GREEDYDATA:MESSAGE}",
      "(?<EXCEPTION>[.*]+)"
    ]
  }
}
Check out the docs under: https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html#plugins-filters-grok-break_on_match
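As an aside (my observation, not part of the original answer): (?<EXCEPTION>[.*]+) is a character class, so it matches only literal dots and asterisks and is unlikely to capture the exception name even with break_on_match => false. A pattern along these lines might work instead, assuming exception type names always end in "Exception":
grok {
  break_on_match => false
  match => {
    "message" => [
      "(?m)%{DATE:TIMESTAMP_DATE}%{SPACE}%{TIME:TIMESTAMP_TIME}%{SPACE}%{WORD:LOG_LEVEL}%{SPACE}(?<THREAD>[^\s]+)%{SPACE}(?<HOST>[^\s]+)%{SPACE}%{GREEDYDATA:MESSAGE}",
      "(?<EXCEPTION>[A-Za-z0-9.]+Exception)"
    ]
  }
}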

How to define grok pattern for pipe delimited log message?

Setting up ELK is very easy until you hit the Logstash filter. I have a pipe-delimited log with 10 fields. Some fields may be blank, but I am sure there will always be 10 fields:
7/5/2015 10:10:18 AM|KDCVISH01|
|ClassNameUnavailable:MethodNameUnavailable|CustomerView|xwz261|ef315792-5c41-4bdf-aa66-73317e82e4d6|52|6182d1a1-7916-4874-995b-bc9a23437dab|<Exception>
afkh akla 487234 &*<Exception>
Q:
1 - I am confused how a grok or regex pattern will pick only the field I am looking for and not a similar match from another field. For example, what is the guarantee that the DATESTAMP pattern picks only the first value and not a timestamp present in the last field (buried in the stack trace)?
2 - Is there a way to define a positional mapping? For example, the 1st field is dateTime, the 2nd is machine name, the 3rd is class name, and so on. This would make sure the fields are displayed in Kibana whether or not a value is present.
I know I am a little late, but here is a simple solution which I am using.
Option 1: replace your | with a space:
filter {
  mutate {
    gsub => ["message","\|"," "]
  }
  grok {
    match => ["message","%{DATESTAMP:time} %{WORD:MESSAGE1} %{WORD:EXCEPTION} %{WORD:MESSAGE2}"]
  }
}
Option 2: escaping the |:
filter {
  grok {
    match => ["message","%{DATESTAMP:time}\|%{WORD:MESSAGE1}\|%{WORD:EXCEPTION}\|%{WORD:MESSAGE2}"]
  }
}
It is working fine; check it at http://grokdebug.herokuapp.com/.
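For the positional-mapping part of the question (my addition, not from the original answer): the stock csv filter can treat | as the separator and assign a name to each of the 10 positions, which also keeps blank fields in place. A sketch; the column names are invented:
filter {
  csv {
    separator => "|"
    # One name per position; blank fields simply produce empty values.
    columns => ["datetime", "machine", "spare", "method", "view", "user", "request_id", "duration", "session_id", "exception"]
  }
}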

Logstash: Reading multiline data from optional lines

I have a log file containing lines that begin with a timestamp. An uncertain number of extra lines might follow each such timestamped line:
SOMETIMESTAMP some data
extra line 1 2
extra line 3 4
The extra lines would provide supplementary information for the timestamped line. I want to extract the 1, 2, 3, and 4 and save them as variables. I can parse the extra lines into variables if I know how many of them there are. For example, if I know there are two extra lines, the grok filter below will work. But what should I do if I don't know, in advance, how many extra lines will exist? Is there some way to parse these lines one-by-one, before applying the multiline filter? That might help.
Also, even if I know I will only have 2 extra lines, is the filter below the best way to access them?
filter {
  multiline {
    pattern => "^%{SOMETIMESTAMP}"
    negate => "true"
    what => "previous"
  }
  if "multiline" in [tags] {
    grok {
      match => { "message" => "(?m)^%{SOMETIMESTAMP} %{DATA:firstline}(?<newline>[\r\n]+)%{DATA:secondline}(?<newline>[\r\n]+)%{DATA:thirdline}$" }
    }
  }
  # After this would be grok filters to process the contents of
  # 'firstline', 'secondline', and 'thirdline'. I would then remove
  # these three temporary fields from the final output.
}
(I separated the lines into separate variables since this allows me to do additional pattern matching on the contents of the lines separately, without having to refer to the entire pattern all over again. For example, based on the contents of the first line, I might want to present branching behavior for the other lines.)
Why do you need this?
Are you going to be inserting one single event with all of the values, or are they really separate events that just need to share the same timestamp?
If they all need to appear in the same event, you'll likely need to resort to a ruby filter to separate the extra lines into fields on the event that you can then work on further.
For example:
if "multiline" in [tags] {
grok {
match => { "message" => "(?m)^%{SOMETIMESTAMP} %{DATA:firstline}(?<newline>[\r\n]+)" }
}
ruby {
code => '
event["lines"] = event["message"].scan(/[^\r\n]+[\r\n]*/);
'
}
}
If they are really separate events, you could use the memorize plugin for logstash 1.5 and later.
This has changed over versions of ELK: direct event field references (i.e. event['field']) have been disabled in favor of the event get and set methods (e.g. event.get('field')).
filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:logtime} %{LOGLEVEL:level}%{DATA:firstline}" }
  }
  ruby { code => "event.set('message', event.get('message').scan(/[^\r\n]+[\r\n]*/))" }
}
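If each captured line should instead become its own event, the stock split filter can fan the array out. A sketch (the lines field name is my choice):
filter {
  ruby { code => "event.set('lines', event.get('message').scan(/[^\r\n]+[\r\n]*/))" }
  # split emits one clone of the event per element of the array field.
  split { field => "lines" }
}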

Logstash: How to save an entry from earlier in a log for use across multiple lines later in the log?

So the format of my logs looks something like this:
02:00:30> First line of log for date of 2014-08-13
...
04:03:30> Every other line of log
My question is: how can I save the date from the first line to create the timestamp for the other lines in the files?
Is there a way to set some kind of "global" field that I can reuse for other lines?
I'm looking at historical logs so the current time isn't much use.
I posted a memorize filter that you could use to do that. It was posted here.
You'd use it like this:
filter {
  if [message] =~ /date of/ {
    grok {
      match => [ "message", "date of (?<date>\d\d\d\d-\d\d-\d\d)" ]
    }
  } else {
    # parse your log with grok or some other method that doesn't capture date
  }
  memorize {
    field => "date"
  }
}
So on the first line, because you extract a date, it'll memorize it... since it's not on the remaining lines, it'll add the memorized date to the events.
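If the memorize plugin isn't available, a similar effect is possible with a plain ruby filter that carries the last seen date in an instance variable. A sketch; it assumes events arrive in file order, so run with pipeline.workers set to 1:
filter {
  if [message] =~ /date of/ {
    grok {
      match => [ "message", "date of (?<date>\d\d\d\d-\d\d-\d\d)" ]
    }
  }
  ruby {
    # @last_date persists across events within this filter instance.
    code => '
      if event.get("date")
        @last_date = event.get("date")
      elsif @last_date
        event.set("date", @last_date)
      end
    '
  }
}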
