How to compare the value of two fields in Kibana - dsl

I have two kinds of logs in ES (rtmp and apache); apache has clientip.raw and rtmp has ipclient.raw. The problem is: how can I show in my Kibana panel only the documents that satisfy the condition "ipclient" = "clientip"?
I tried writing this in my search bar, but it doesn't work:
{
  "query": {
    "filtered": {
      "filter": {
        "script": {
          "script": "doc['clientip.raw'].value == doc['ipclient.raw'].value"
        }
      }
    }
  }
}

You can write the query below:
{"constant_score":{"filter":{"script" : { "script" : "doc['clientip.raw'].value == doc['ipclient.raw'].value"}}}}
You may see an error while using the above query, such as:
ScriptException[scripts of type [inline], operation [search] and lang [groovy] are disabled]
To solve this error, edit your elasticsearch.yml file and add the following property at the end:
script.inline: on
Then restart your Elasticsearch node or cluster and run the same query in Kibana, which will fetch the desired records.
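For reference, the same filter can be tested outside Kibana as a full search request. This is a minimal sketch, assuming a logstash-* index pattern (adjust to your own indices), and it only makes sense for documents that actually contain both fields:
POST /logstash-*/_search
{
  "query": {
    "constant_score": {
      "filter": {
        "script": {
          "script": "doc['clientip.raw'].value == doc['ipclient.raw'].value"
        }
      }
    }
  }
}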

Related

How to add/update data at nested level in existing Index using Logstash mutate plugin

I have multiple Logstash pipelines set up on a server that feed data into an index. Every pipeline adds a bunch of fields at the top level of the index, along with their nested levels.
I already have kpi1 and kpi2 values inside metrics => data, with metrics being a nested array. Now I need to add a new pipeline that feeds the value of kpi3. Here is the filter section in the new pipeline I created:
filter {
  ruby {
    code => "
      event.set('kpi3', event.get('scoreinvitation'))
    "
  }
  mutate {
    # Rename the properties according to the document schema.
    rename => { "kpi3" => "[metrics][data][kpi3]" }
  }
}
It overwrites the metrics section (maybe because it is an array?). Here is my mapping:
"metrics" : {
"type" : "nested",
"properties" : {
"data" : {
"properties" : {
"kpi1" : {
....
}
}
}
"name" : {
"type" : "text",
....
}
}
}
How can I keep the existing fields (and values) and still add the new fields inside metrics => data? Any help is appreciated.
The Logstash pipeline looks good; however, your mapping doesn't make much sense to me, if I'm understanding your requirement correctly.
The metrics property doesn't have to be of type nested. In fact, metrics is just a JSON namespace that contains sub-fields and sub-objects.
Try the following mapping instead:
"metrics": {
"properties": {
"data": {
"properties": {
"kpi1": {
# if you want to assign a value to the kpi1 field, it must have a type
}
}
},
"name": {
"type": "text"
}
}
}
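On the Logstash side, an alternative worth trying is to write the new field directly into the nested path, instead of creating a top-level kpi3 and renaming it. This is a sketch, assuming metrics arrives in the event as a plain object (not an array of objects):
filter {
  ruby {
    # Set kpi3 directly under metrics.data so the kpi1/kpi2
    # values already present in the event are left untouched.
    code => "
      event.set('[metrics][data][kpi3]', event.get('scoreinvitation'))
    "
  }
}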

logstash filter for Java logs

I am trying to write a logstash filter for my Java logs so that I can insert them into my database cleanly.
Below is an example of my log format:
FINE 2016-01-28 22:20:42.614+0000 net.myorg.crypto.CryptoFactory:getInstance:73:v181328
AppName : MyApp AssocAppName:
Host : localhost 127.000.000.001 AssocHost:
Thread : http-bio-8080-exec-5[23]
SequenceId: -1
Logger : net.myorg.crypto.CryptoFactory
Message : ENTRY
---
FINE 2016-01-28 22:20:42.628+0000 net.myorg.crypto.CryptoFactory:getInstance:75:v181328
AppName : MyApp AssocAppName:
Host : localhost 127.000.000.001 AssocHost:
Thread : http-bio-8080-exec-5[23]
SequenceId: -1
Logger : net.myorg.crypto.CryptoFactory
Message : RETURN
---
My logstash-forwarder config is pretty simple. It just includes all the logs in the directory (they all have the same format as above):
"files": [
{
"paths": [ "/opt/logs/*.log" ],
"fields": { "type": "javaLogs" }
}
]
The trouble I'm having is on the logstash side. How can I write a filter in logstash to match this log format?
Using something like this gets me close:
filter {
  if [type] == "javaLogs" {
    multiline {
      pattern => "^%{TIMESTAMP_ISO8601}"
      negate => true
      what => "previous"
    }
  }
}
But I want to break each line of the log entry down into its own field in Logstash, for example creating fields like AppName, AssocHost, Host, Thread, etc.
I think the answer is using grok.
Joining them with multiline (the codec or filter, depending on your needs) is a great first step.
Unfortunately, your pattern says "if the log entry doesn't start with a timestamp, join it with the previous entry".
Note that none of your log entries actually start with a timestamp; they start with a log level such as FINE.
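Anchoring the pattern on the leading level word instead should join the entries correctly. A sketch, assuming every entry's first line is a level such as FINE followed by the ISO8601 timestamp; the grok here only parses that header line, and the remaining "Key : value" lines could be picked apart with further grok patterns or a kv filter:
filter {
  if [type] == "javaLogs" {
    # Join every line that does NOT start with "<LEVEL> <timestamp>"
    # onto the previous line, so each entry becomes a single event.
    multiline {
      pattern => "^%{WORD} %{TIMESTAMP_ISO8601}"
      negate => true
      what => "previous"
    }
    # Parse the header line of the joined event.
    grok {
      match => {
        "message" => "^%{WORD:level} %{TIMESTAMP_ISO8601:timestamp} %{NOTSPACE:source}"
      }
    }
  }
}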

ElasticSearch Custom Script for Ordering Performance

I wrote a simple scoring script based on a document parameter, like below:
POST /_scripts/groovy/CustomScoring
{
  "script": "(_source.ProductHits == null ? 0.1 : (_source.ProductHits[myval] == null ? 0.2 : _source.ProductHits[myval]))"
}
When I use this custom script to sort search results like this:
POST /ecs/product/_search
{
  "query": {
    "bool": {
      "must": [
        {
          "function_score": {
            "query": { "match_all": {} },
            "script_score": {
              "script_id": "CustomScoring",
              "lang": "groovy",
              "params": {
                "myval": "iphone"
              }
            }
          }
        }
      ]
    }
  }
}
It takes 800 ms to run on 50,000 documents (versus the initial run time, which was around 1 ms).
How can I optimize this groovy function?
Can Elasticsearch use some kind of caching for this function?
P.S. When I tried some complex formulas based on doc.some_param.value and built-in functions like log, it took 40 ms instead, which is still reasonable.
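That P.S. likely points at the answer: _source.ProductHits makes Elasticsearch load and parse the stored source of every document, while doc[...] reads cached doc values / field data, which is why the doc-based formulas are so much faster. Below is a sketch of the same logic on doc values, assuming each product's hit count is indexed as a numeric sub-field of ProductHits; note it can no longer distinguish a missing ProductHits object from a missing key, so both fall back to 0.2:
POST /_scripts/groovy/CustomScoring
{
  "script": "f = 'ProductHits.' + myval; !doc.containsKey(f) || doc[f].empty ? 0.2 : doc[f].value"
}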

Elasticsearch: how to get matching types list?

My Elasticsearch index has 10 types in it. When searching for the term "test", I want to get all the documents that match that query, plus a list of all the types that have at least one match for it.
I know I can get this list by going over all the results, but I guess there's a better way.
Thanks!
Since facets have been deprecated (https://www.elastic.co/guide/en/elasticsearch/reference/current/search-facets.html) and replaced with aggregations, here is the solution for aggregations:
{
  "query": {
    ...
  },
  "aggs": {
    "your_aggregation_name": {
      "terms": {
        "field": "_type"
      }
    }
  }
}
Link to documentation: https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-bucket-terms-aggregation.html
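For the question's example, searching for "test" and collecting the matching types in one request might look like this (a sketch; _all assumes the default catch-all field is still enabled):
{
  "query": {
    "match": { "_all": "test" }
  },
  "aggs": {
    "matched_types": {
      "terms": { "field": "_type" }
    }
  }
}
The hits section then contains the matching documents, and each bucket under matched_types names a type together with its doc_count.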
I just managed to do that with Elasticsearch facets, as described here:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-facets.html#_facet_filter
In short, you add this to your query:
"facets" : { "facet_name" : { "terms" : {"field" : "_type"} } }
Hope this helps someone.

ElasticSearch and Couchdb view

I'm new to the whole Elasticsearch and CouchDB setup. I just got a river going between Elasticsearch and a database I have in CouchDB. If I have a view in a database, is there a way to index just that view? For example, I have a database named "Movies" with a view called "Action" and another called "byActor".
I was thinking that I could do an index and point it to that, like below, but that doesn't seem to work.
{
  "type" : "couchdb",
  "couchdb" : {
    "host" : "localhost",
    "port" : 5984,
    "db" : "Movies",
    "filter" : null
  },
  "index" : {
    "index" : "Action",
    "bulk_size" : "100",
    "bulk_timeout" : "10ms"
  }
}
I think I may not understand what "index" is exactly, because when I run http://localhost:9200/Movies/Action/_search?pretty=true, nothing is returned.
Edit: Looking around more, it seems like this isn't the way to do it. "Index" just seems to be how ES indexes? Anyway, I'm reading that mapping might accomplish this. Is that true?
Indexing views is not yet supported in the CouchDB river. See this pull request.
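For context, the index block in a CouchDB river configuration names the destination Elasticsearch index and type, not a CouchDB view, which is why /Movies/Action/_search returned nothing. A typical configuration looks something like this (the movies/movie names here are illustrative):
{
  "type" : "couchdb",
  "couchdb" : {
    "host" : "localhost",
    "port" : 5984,
    "db" : "Movies",
    "filter" : null
  },
  "index" : {
    "index" : "movies",
    "type" : "movie",
    "bulk_size" : "100",
    "bulk_timeout" : "10ms"
  }
}
The indexed documents would then be searchable at http://localhost:9200/movies/movie/_search.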
