Logstash: How to remove part of field value (remove url query items)

Situation:
In my logs I have a field "Url". In some cases there are one or more query items in the url.
Desired situation:
I'm looking for a way to get rid of the query items in the url (to get a 'clean' url), in order to have a better analysis in Kibana (what the most used pages are, without query items in the url).
What I have done until now is to add a new field "url_nonquery" with the value of the existing "Url" field. Then I use the mutate { split => ... } filter on this new field to split at the ? character. This results in an array: index 0 with the 'clean' url and index 1 with the query string. But now I can't figure out how to delete index 1.
Does anyone have any ideas to help me further with this?
Thanks.

All you need to do is use a grok filter like this:
filter {
grok { match => [ "url", "%{URIPATH:url_nonquery}" ] }
}
This would work even if there isn't a ? in the URL. The split method could be troublesome if you don't have a ? in your url.
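If you would rather stay with the mutate approach from the question, a minimal sketch along these lines should also work (field names follow the grok example above; adjust url/url_nonquery to whatever your events actually use, e.g. Url):
filter {
  mutate {
    # keep the original url and work on a copy
    add_field => { "url_nonquery" => "%{url}" }
  }
  mutate {
    # drop everything from the first question mark onward ([?] matches a literal ?)
    gsub => [ "url_nonquery", "[?].*", "" ]
  }
}
Two separate mutate blocks are used on purpose: within a single mutate, add_field is only applied at the end, so the copy would not exist yet when gsub runs.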

Related

Irrelevant results returned from view search in arangodb

We have a collection AbstractEvent with a field 'text', which contains 1~30 Chinese characters, and we want to perform a LIKE match with %keyword% with high performance (less than 0.3 seconds for more than 2 million records).
After a lot of effort, we decided to use a VIEW with the identity analyzer to do this:
FOR i IN AbstractEventView
SEARCH ANALYZER(i.text LIKE '%keyword%', 'identity')
LIMIT 10
RETURN i.text
And here is the definition of view AbstractEventView
{
  "name": "AbstractEventView",
  "type": "arangosearch",
  "links": {
    "AbstractEvent": {
      "analyzers": [
        "identity"
      ],
      "fields": {
        "text": {}
      }
    }
  }
}
However, the returned records contain irrelevant ones.
The following is an example:
FOR i IN AbstractEventView
SEARCH ANALYZER(i.text LIKE '%速%', 'identity')
LIMIT 10
RETURN i.text
and the result is
[
"全球经济增速虽军官下滑",
"油食用消费出现明显下滑",
"本次国家经济快速下行",
"这场所迅速爆发的情况",
"经济减速风景空间资本大规模流出",
"苜蓿草众人食品物资价格不稳定",
"荤菜价格快速走低",
"情况快速升级",
"情况快速进展",
"四季功劳增速断崖式回落后"
]
油食用消费出现明显下滑 and 苜蓿草众人食品物资价格不稳定 are irrelevant.
We've been struggling with this for days; can anyone help me out? Thanks.
PS:
Why do we not use a FULL-TEXT index?
A full-text index indexes fields as tokenized text, so we cannot match '货币超发' when the keyword is '货', because '货币' is recognized as a word.
Why do we not use FILTER with the LIKE operator directly?
Filtering without an index takes about 1 second, which is not acceptable.
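For reference, a plain collection scan is still handy as a cross-check, since it returns exactly the documents whose text contains the keyword, just slowly (collection and field names taken from the view definition above):
FOR i IN AbstractEvent
FILTER i.text LIKE '%速%'
LIMIT 10
RETURN i.text
Comparing its output with the view query's output shows which of the returned records really match the keyword.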

Elasticsearch: How to get the length of a string field(before analysis)?

My index has a string field containing a variable-length random id. Obviously it shouldn't be analysed.
But I don't know much about Elasticsearch, especially when I created the index.
Today I tried a lot to filter documents based on the length of the id, and finally I got this Groovy script:
doc['myfield'].values.size()
or
doc['myfield'].value.size()
Both return mysterious numbers; I think that's because the field got analysed.
If that's really the case, is there any way to get the original length or fix the problem without rebuilding the whole index?
Use _source instead of doc. That's using the source of the document, meaning the initial indexed text:
_source['myfield'].value.size()
If possible, try to re-index the documents so that you can:
use doc[field] on a not_analyzed version of that field (a mapping sketch for this follows below)
even better, find out the size of the field before you index the document and consider adding its size as a regular field in the document itself
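For the first re-index option, a multi-field mapping sketch like the one below would give you a not_analyzed copy of the field to run the script against (this assumes the pre-5.x string mappings implied by the question; the type name mytype and the sub-field name raw are only placeholders):
{
  "mappings": {
    "mytype": {
      "properties": {
        "myfield": {
          "type": "string",
          "fields": {
            "raw": { "type": "string", "index": "not_analyzed" }
          }
        }
      }
    }
  }
}
After re-indexing, doc['myfield.raw'].value.size() returns the length of the whole original value rather than the length of a single token.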
Elasticsearch stores a string as tokens in the data structure (the field data cache) that we have script access to.
So assuming that your field is not not_analyzed, doc['field'].values will look like this:
"In america" => [ "in" , "america" ]
Hence what you get from doc['field'].values is an array and not a string.
Now the story doesn't change even if you have a single token or have the field set to not_analyzed.
"america" => [ "america" ]
Now, to see the size of the first token, you can use the following request:
{
  "script_fields": {
    "test1": {
      "script": "doc['field'].values[0].size()"
    }
  }
}
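For completeness, a full search body around that script field might look like this (the field name field is just the placeholder from the snippet above, and dynamic Groovy scripting is assumed to be enabled, as in older Elasticsearch versions):
{
  "query": { "match_all": {} },
  "script_fields": {
    "test1": {
      "script": "doc['field'].values[0].size()"
    }
  }
}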

How can you parse 2 properties out of 1 value with grok in logstash?

Some context:
I want to parse the following log statement using grok in logstash
07:51:45,729 TRACE [com.company.Class] (ajp-/1.2.3.4:8080-251) USERID called path: /url and took: 1000 ms
I am now using the following syntax to parse the complete message:
%{DATA:time}\s%{DATA:level}\s%{DATA:class}\s%{DATA:thread}\s%{DATA:userid}\s.*path:\s%{DATA:url}\s.*:\s%{NUMBER:duration:int}\sms
This gives me all the properties that I have defined.
My question:
I want to parse this part (ajp-/1.2.3.4:8080-251) into a 'thread' property and an ip property.
The result needs to be:
thread: (ajp-/1.2.3.4:8080-251)
ip: 1.2.3.4
How can I do this?
Thanks
Just add a second grok filter after your working one. Do not put this in your existing grok filter because it will finish after the first match.
Example:
grok {
match => [ 'thread', '%{IP:ip}' ]
}
This takes your previously extracted field thread => "(ajp-/1.2.3.4:8080-251)" and adds a new field ip => "1.2.3.4".
Apart from that, I would recommend being more specific with your pattern. You used DATA every time, which is rather imprecise. Start with something like this:
%{TIME:timestamp} %{WORD:method} \[%{JAVACLASS:class}\] \(%{DATA:thread}\) %{NUMBER:userid} %{DATA}%{URIPATH:uri}%{DATA}
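Putting the two together, a sketch of the complete filter section for the sample line could look like this (I swapped %{NUMBER:userid} for %{WORD:userid} and spelled out the literal text so the pattern also matches the literal USERID placeholder in the example; note that thread is captured here without the surrounding parentheses):
filter {
  grok {
    match => [ "message", "%{TIME:timestamp} %{WORD:level} \[%{JAVACLASS:class}\] \(%{DATA:thread}\) %{WORD:userid} called path: %{URIPATH:url} and took: %{NUMBER:duration:int} ms" ]
  }
  grok {
    # second pass: pull the ip out of the thread field
    match => [ "thread", "%{IP:ip}" ]
  }
}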

How to define grok pattern for pipe delimited log message?

Setting up ELK is very easy until you hit the Logstash filter. I have a log delimited into 10 fields. Some fields may be blank, but I am sure there will always be 10 fields:
7/5/2015 10:10:18 AM|KDCVISH01|
|ClassNameUnavailable:MethodNameUnavailable|CustomerView|xwz261|ef315792-5c41-4bdf-aa66-73317e82e4d6|52|6182d1a1-7916-4874-995b-bc9a23437dab|<Exception>
afkh akla 487234 &*<Exception>
Q:
1- I am confused about how a grok or regex pattern will pick only the field that I am looking for and not a similar match from another field. For example, what is the guarantee that the DATESTAMP pattern picks only the first value and not the timestamp present in the last field (buried in the stack trace)?
2- Is there a way to define positional mapping? For example, the 1st field is dateTime, the 2nd is machine name, the 3rd is class name and so on. This will make sure I have the fields displayed in Kibana whether or not a field value is present.
I know I am a little late, but here is a simple solution which I am using:
replace your | with a space
option 1:
filter {
  mutate {
    gsub => ["message","\|"," "]
  }
  grok {
    match => ["message","%{DATESTAMP:time} %{WORD:MESSAGE1} %{WORD:EXCEPTION} %{WORD:MESSAGE2}"]
  }
}
option 2: escaping the |
filter {
  grok {
    match => ["message","%{DATESTAMP:time}\|%{WORD:MESSAGE1}\|%{WORD:EXCEPTION}\|%{WORD:MESSAGE2}"]
  }
}
It is working fine; check it here: http://grokdebug.herokuapp.com/
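If positional mapping (your question 2) is the main concern, the dissect filter is worth a look: it splits purely by position on the | delimiter, so every event ends up with the same 10 fields even when some of them are blank. A sketch (the field names are only guesses based on the sample line, and this assumes | never appears inside the exception text):
filter {
  dissect {
    mapping => {
      "message" => "%{log_time}|%{machine}|%{field3}|%{class_method}|%{view}|%{user}|%{request_id}|%{code}|%{session_id}|%{exception}"
    }
  }
}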

Multi-value fields only store last value

I'm relatively new to ELK and grok. I'm trying to parse a log file that may contain 1 or more repetitions of the same value. For example the log file could contain:
value1;value2;value3;
value1;
value1;value2;value3;value4;........value900;
For this example, I'm using the following grok pattern:
((?<tag>[a-z0-9]*)[;])+
This appears to work properly and parses each value. The problem is that the "tag" field only contains the last value (i.e. value900). All of the previous values seem to be overwritten.
Is there a way to gather all of the values and store them into an array instead of only getting the last value?
Simply use mutate:
mutate {
split => ["tag",";"]
}
This will split the value that's in the tag field into an array. So just match the whole string in your grok ((?<tag>[a-z0-9;]+)) and then split it from there.
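A complete filter sketch combining the two steps (field and sample format as in the question) might look like this:
filter {
  grok {
    match => [ "message", "(?<tag>[a-z0-9;]+)" ]
  }
  mutate {
    split => [ "tag", ";" ]
  }
}
For a line like value1;value2;value3; this leaves tag as the array ["value1", "value2", "value3"].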
