Kibana visualization by full path - IIS

I'm trying to build a visualization (e.g. a vertical bar chart) with the requested page on the x-axis. But if I create an X-axis bucket, setting the aggregation to Terms on the "page" field, it splits the path into folder, file name, and extension.
For example, if the path was /images/icons/up.png, my visualization is creating a bar chart with bars for "images", "icons", "up", and "png". I want the chart to use the full path (/images/icons/up.png) for the bars.
These are from IIS logs and I don't want to include the querystring.
grok {
  match => ["message", "%{TIMESTAMP_ISO8601:log_timestamp} %{WORD:serviceName} %{WORD:serverName} %{IP:serverIP} %{WORD:method} %{URIPATH:page} %{NOTSPACE:uriQuery} %{NUMBER:port} %{NOTSPACE:username} %{IPORHOST:clientIP} %{NOTSPACE:protocolVersion} %{NOTSPACE:userAgent} %{NOTSPACE:cookie} %{NOTSPACE:referer} %{NOTSPACE:requestHost} %{NUMBER:response} %{NUMBER:subresponse} %{NUMBER:win32response} %{NUMBER:bytesSent} %{NUMBER:bytesReceived} %{NUMBER:timetaken}"]
}
Thanks!

For this type of aggregation your string field should be defined as "not analyzed" in Elasticsearch; otherwise Elasticsearch automatically tokenizes the field.
In Kibana, when you use a Terms aggregation on an analyzed field you get a warning; do you see that?
If that does not solve your problem, can you post your Elasticsearch index mapping?
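For reference, a minimal sketch of such a mapping, assuming a pre-5.x index where the field produced by the grok pattern above is page (the iislogs type name is illustrative; on 5.x and later a keyword field achieves the same):
{
  "mappings": {
    "iislogs": {
      "properties": {
        "page": {
          "type": "string",
          "index": "not_analyzed"
        }
      }
    }
  }
}
Note that the classic default Logstash index template takes a similar approach by adding a not-analyzed .raw subfield to every string field, so aggregating on page.raw may already work without any mapping change.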

Related

Logstash Grok regex parsing

I am trying to parse a plaintext message using Grok; my goal is to explode the plaintext into a JSON log.
The message has a quite rigid format, as follows:
<timestamp> <loglevel> <greedydata> field1=value1, field2=value2, .... fieldN=valueN
Where the number of fields is not fixed.
Is it possible to capture every field=value pair using a named capturing group, so that the same "field" name can be used as the key in the output message?
Thanks
TL;DR - use dissect instead of grok
You want something like:
{
  "timestamp": <timestamp>,
  "loglevel": <loglevel>,
  "field1": value1,
  "field2": value2,
  ....
  "fieldN": valueN
}
Where the keys (field1, fieldN etc) are dynamic.
You cannot use grok to do this. Even using a pattern like this (then using array position indices) won't work:
( field[0-9]+=%{DATA:value})+$
You need to handle this a different way. Your options are:
handle this before it hits Logstash
use a ruby filter (sketched below)
use the dissect filter
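To illustrate the ruby-filter route, here is a minimal sketch; it assumes the Logstash 5+ event API, and the grok pattern and field names are illustrative:
filter {
  grok {
    # capture the fixed prefix; everything after the log level lands in 'pairs'
    match => ["message", "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:loglevel} %{GREEDYDATA:pairs}"]
  }
  ruby {
    # scan picks up only key=value tokens, so any free text before the
    # first pair is ignored; each pair becomes a top-level event field
    code => "
      event.get('pairs').scan(/(\w+)=([^,]+)/).each { |k, v| event.set(k, v.strip) }
      event.remove('pairs')
    "
  }
}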

Elasticsearch: How to get the length of a string field (before analysis)?

My index has a string field containing a variable-length random id. Obviously it shouldn't be analyzed.
But I didn't know much about Elasticsearch, especially back when I created the index.
Today I tried many ways to filter documents based on the length of the id, and finally got to this Groovy script:
doc['myfield'].values.size()
or
doc['myfield'].value.size()
Both return mysterious numbers; I think that's because the field got analyzed.
If that's really the case, is there any way to get the original length, or to fix the problem without rebuilding the whole index?
Use _source instead of doc. That uses the source of the document, i.e. the original text as it was indexed:
_source['myfield'].size()
If possible, try to re-index the documents to:
use doc[field] on a not-analyzed version of that field (a mapping for this is sketched below)
even better, find out the size of the field before you index the document and add its size as a regular field in the document itself
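A minimal sketch of the first option, assuming a pre-5.x multi-field mapping where myfield gets a not-analyzed raw subfield (the mytype type name is illustrative):
{
  "mappings": {
    "mytype": {
      "properties": {
        "myfield": {
          "type": "string",
          "fields": {
            "raw": {
              "type": "string",
              "index": "not_analyzed"
            }
          }
        }
      }
    }
  }
}
After re-indexing, doc['myfield.raw'].value.size() should return the length of the original string.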
Elasticsearch stores a string as tokens in the data structure (the field data cache) that scripts have access to.
So, assuming that your field is analyzed (i.e. not not_analyzed), doc['field'].values will look like this:
"In america" => [ "in", "america" ]
Hence what you get from doc['field'].values is an array, not a string.
The story doesn't change even if you have a single token or have the field as not_analyzed:
"america" => [ "america" ]
Now, to see the size of the first token, you can use the following request:
{
  "script_fields": {
    "test1": {
      "script": "doc['field'].values[0].size()"
    }
  }
}
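Wrapped in a full search request, that might look like the following (the myindex name is illustrative, and this assumes a pre-5.x cluster with inline Groovy scripting enabled):
POST /myindex/_search
{
  "query": { "match_all": {} },
  "script_fields": {
    "test1": {
      "script": "doc['field'].values[0].size()"
    }
  }
}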

How to define grok pattern for pipe delimited log message?

Setting up ELK is very easy until you hit the Logstash filter. I have a log delimited into 10 fields. Some fields may be blank, but I am sure there will be 10 fields:
7/5/2015 10:10:18 AM|KDCVISH01|
|ClassNameUnavailable:MethodNameUnavailable|CustomerView|xwz261|ef315792-5c41-4bdf-aa66-73317e82e4d6|52|6182d1a1-7916-4874-995b-bc9a23437dab|<Exception>
afkh akla 487234 &*<Exception>
Q:
1- I am confused how a grok or regex pattern will pick only the field I am looking for and not a similar match from another field. For example, what is the guarantee that the DATESTAMP pattern picks only the first value and not the timestamp present in the last field (buried in the stack trace)?
2- Is there a way to define a positional mapping? For example, the 1st field is dateTime, the 2nd is machine name, the 3rd is class name, and so on. This will make sure the fields are displayed in Kibana whether a field value is present or not.
I know I am a little late, but here is a simple solution which I am using: replace your | with a space.
Option 1:
filter {
  mutate {
    gsub => ["message","\|"," "]
  }
  grok {
    match => ["message","%{DATESTAMP:time} %{WORD:MESSAGE1} %{WORD:EXCEPTION} %{WORD:MESSAGE2}"]
  }
}
Option 2: escaping the |:
filter {
  grok {
    match => ["message","%{DATESTAMP:time}\|%{WORD:MESSAGE1}\|%{WORD:EXCEPTION}\|%{WORD:MESSAGE2}"]
  }
}
It is working fine; check it at http://grokdebug.herokuapp.com/.
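For the positional mapping asked about in question 2, another option (not shown in the answer above) is the csv filter, which assigns pipe-delimited fields by position even when some of them are blank; the column names below are illustrative:
filter {
  csv {
    separator => "|"
    # one name per position; a blank field simply yields an empty value
    columns => ["dateTime", "machineName", "appId", "methodInfo", "viewName", "userId", "requestId", "duration", "sessionId", "exception"]
  }
}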

logstash grok filter for logs with arbitrary attribute-value pairs

(This is related to my other question, logstash grok filter for custom logs.)
I have a logfile whose lines look something like:
14:46:16.603 [http-nio-8080-exec-4] INFO METERING - msg=93e6dd5e-c009-46b3-b9eb-f753ee3b889a CREATE_JOB job=a820018e-7ad7-481a-97b0-bd705c3280ad data=71b1652e-16c8-4b33-9a57-f5fcb3d5de92
14:46:17.378 [http-nio-8080-exec-3] INFO METERING - msg=c1ddb068-e6a2-450a-9f8b-7cbc1dbc222a SET_STATUS job=a820018e-7ad7-481a-97b0-bd705c3280ad status=ACTIVE final=false
I built a pattern that matched the first line:
%{TIME:timestamp} %{NOTSPACE:http} %{WORD:loglevel}%{SPACE}%{WORD:logtype} - msg=%{NOTSPACE:msg}%{SPACE}%{WORD:action}%{SPACE}job=%{NOTSPACE:job}%{SPACE}data=%{NOTSPACE:data}
But obviously that only works for lines that have data= at the end; it fails on the status= and final= at the end of the second line, or other attribute-value pairs on other lines. How can I set up a pattern that says that after a certain point there will be an arbitrary number of foo=bar pairs that I want to recognize and output as attribute/value pairs in the output?
You can change your grok pattern like this to have all the key value pairs in one field (kvpairs):
%{TIME:timestamp} %{NOTSPACE:http} %{WORD:loglevel}%{SPACE}%{WORD:logtype} - %{GREEDYDATA:kvpairs}
Afterwards you can use the kv filter to parse the key value pairs.
kv {
  source => "kvpairs"
  remove_field => [ "kvpairs" ] # Delete the field afterwards
}
Unfortunately, you have some simple values inside your kv pairs (e.g. CREATE_JOB). You could parse them with grok and use one kv filter for the values before and another kv filter for the values after those simple values.
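A sketch of that combination, assuming the bare token (e.g. CREATE_JOB) always sits between msg= and the remaining pairs; the rest field name is illustrative:
filter {
  grok {
    match => ["message", "%{TIME:timestamp} %{NOTSPACE:http} %{WORD:loglevel}%{SPACE}%{WORD:logtype} - %{GREEDYDATA:kvpairs}"]
  }
  grok {
    # pull out msg= and the bare action token, leaving only key=value pairs
    match => ["kvpairs", "msg=%{NOTSPACE:msg} %{WORD:action} %{GREEDYDATA:rest}"]
  }
  kv {
    source => "rest"
    remove_field => [ "kvpairs", "rest" ]
  }
}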

Logstash: How to remove part of field value (remove url query items)

Situation:
In my logs I have a field "Url". In some cases there are one or more query items in the url.
Desired situation:
I'm looking for a way to get rid of the query items in the url (to get a 'clean' url), in order to get a better analysis in Kibana (what are the most used pages, without query items in the url).
What I have done until now is to add a new field "url_nonquery" with the value of the existing "Url" field. Then I use the mutate split filter on this new field to split at the ? character. This results in an array: index 0 with the 'clean' url and index 1 with the query string. But now I can't figure out how to delete index 1.
Does someone have some ideas to help me further with this?
Thanks.
All you need to do is a grok filter like this:
filter {
  grok {
    match => [ "url", "%{URIPATH:url_nonquery}" ]
  }
}
This works even if there isn't a ? in the URL, whereas the split approach could be troublesome when the ? is missing.
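Alternatively, staying closer to the mutate approach from the question, a gsub sketch (assuming the field is named Url, as in the question) strips everything from the first ? onwards:
filter {
  mutate {
    # drop the query string: everything from the first ? to the end of the value
    gsub => ["Url", "\?.*", ""]
  }
}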
