How are maxDocs and docFreq calculated in Elasticsearch TF-IDF search?

In total we have 6 documents, and 3 of them contain the text 'fox'. When we search using:
GET /my_index1/doc/_search?explain
{ "query": { "term": { "text": "fox" } } }
Please find the explanation below
{
  "text": "fox",
  "value": 1,
  "description": "idf(docFreq=1, maxDocs=2)",
  "details": []
}
How is maxDocs calculated? I have 6 documents and 3 of them contain 'fox', so why does maxDocs show 2? And how is docFreq calculated?
Edit: here are the indexed documents:
PUT /my_index1/doc/1
{ "text" : "fox" }
PUT /my_index1/doc/2
{ "text" : "fox" }
PUT /my_index1/doc/3
{ "text" : "document contain the fox trem to analyse the values" }
PUT /my_index1/doc/4
{ "text" : "hello" }
PUT /my_index1/doc/5
{ "text" : " this the the document trem to analyse the values" }
PUT /my_index1/doc/6
{ "text" : " this the the document trem to analyse the values" }

Related

Elasticsearch Search/filter by occurrence or order in an array

I have a data field in my index, and I want only doc 2 as a result, i.e. logically where 'b' comes before 'a' in the array field data.
doc 1:
data = ['a','b','t','k','p']
doc 2:
data = ['p','b','i','o','a']
Currently, I am running a terms must query on [a, b] and then checking the order in another code snippet.
Please suggest a better way around this.
My understanding is that the only way to do that would be to make use of span queries; however, they won't be applicable to an array of values.
You would need to concatenate the values into a single text field with whitespace as the delimiter, re-ingest the documents, and make use of a span_near query on that field.
Please find the below mapping, sample document, the query and response:
Mapping:
PUT my_test_index
{
  "mappings": {
    "properties": {
      "data": {
        "type": "text"
      }
    }
  }
}
Sample Documents:
POST my_test_index/_doc/1
{ "data": "a b" }
POST my_test_index/_doc/2
{ "data": "b a" }
Span Query:
POST my_test_index/_search
{
  "query": {
    "span_near": {
      "clauses": [
        { "span_term": { "data": "a" } },
        { "span_term": { "data": "b" } }
      ],
      "slop": 0,        <--- only `a b` would match; `a c b` won't
      "in_order": true  <--- `a` must come first, then `b`
    }
  }
}
Note that slop controls the maximum number of intervening unmatched positions permitted.
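As an illustrative variant (not part of the original answer), raising slop to 1 would also match a document like "a c b", since one unmatched position may then sit between the two terms; the response below is still for the original slop 0 query:
POST my_test_index/_search
{
  "query": {
    "span_near": {
      "clauses": [
        { "span_term": { "data": "a" } },
        { "span_term": { "data": "b" } }
      ],
      "slop": 1,
      "in_order": true
    }
  }
}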
Response:
{
  "took" : 0,
  "timed_out" : false,
  "_shards" : {
    "total" : 1,
    "successful" : 1,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : {
      "value" : 1,
      "relation" : "eq"
    },
    "max_score" : 0.36464313,
    "hits" : [
      {
        "_index" : "my_test_index",
        "_type" : "_doc",
        "_id" : "1",
        "_score" : 0.36464313,
        "_source" : {
          "data" : "a b"
        }
      }
    ]
  }
}
Let me know if this helps!

Logstash: Renaming nested fields based on some condition

I am trying to rename nested fields in Elasticsearch while migrating to Amazon Elasticsearch.
In each document, I want to make the following changes:
1. If the value field contains JSON, rename value to value-keyword and remove "value-whitespace" and "value-standard" if present.
2. If the value field is longer than 15 characters, rename value to value-standard.
"_source": {
"applicationid" : "appid",
"interactionId": "716bf006-7280-44ea-a52f-c79da36af1c5",
"interactionInfo": [
{
"value": """{"edited":false}""",
"value-standard": """{"edited":false}""",
"value-whitespace" : """{"edited":false}"""
"title": "msgMeta"
},
{
"title": "msg",
"value": "hello testing",
},
{
"title": "testing",
"value": "I have a text that can be done and changed only the size exist more than 20 so we applied value-standard ",
}
],
"uniqueIdentifier": "a21ed89c-b634-4c7f-ca2c-8be6f31ae7b3",
}
}
The end result should be:
"_source": {
  "applicationid" : "appid",
  "interactionId": "716bf006-7280-44ea-a52f-c79da36af1c5",
  "interactionInfo": [
    {
      "value-keyword": """{"edited":false}""",
      "title": "msgMeta"
    },
    {
      "title": "msg",
      "value": "hello testing"
    },
    {
      "title": "testing",
      "value-standard": "I have a text that can be done and changed only the size exist more than 20 and so we applied value-standard "
    }
  ],
  "uniqueIdentifier": "a21ed89c-b634-4c7f-ca2c-8be6f31ae7b3"
}
For 2), you can do it like this:
filter {
  if [_source][interactionInfo][2][value] =~ /.{15,15}/ {
    mutate {
      rename => ["[_source][interactionInfo][2][value]", "[_source][interactionInfo][2][value-standard]"]
    }
  }
}
The regex .{15,15} matches any run of 15 characters; since the pattern is unanchored, it matches every string at least 15 characters long. If the field is shorter than 15 characters, the regex doesn't match and the mutate#rename isn't applied.
For 1), one possible solution would be trying to parse the field with the json filter and if there's no _jsonparsefailure tag, rename the field.
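A minimal sketch of that idea for a single array element (the field paths and the [@metadata] target are illustrative assumptions, not from the original answer):
filter {
  json {
    source => "[interactionInfo][0][value]"
    target => "[@metadata][parsed]"
  }
  if "_jsonparsefailure" not in [tags] {
    mutate {
      rename => ["[interactionInfo][0][value]", "[interactionInfo][0][value-keyword]"]
      remove_field => ["[interactionInfo][0][value-standard]", "[interactionInfo][0][value-whitespace]"]
    }
  }
}
Looping over every element of interactionInfo this way is awkward in pure Logstash configuration, which is what motivates the ruby filter below.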
Found the solution for this one. I used a ruby filter in Logstash to check each document, including the nested documents.
Here is the ruby code:
require 'json'

def register(param)
end

def filter(event)
  infoarray = event.get("interactionInfo")
  infoarray.each { |x|
    # If the value is longer than 15 characters, rename it to value-keyword
    # and drop the analyzed variants.
    if x.include?("value")
      value = x["value"]
      if value.length > 15
        apply_only_keyword(x)
      end
    end
    # If the value parses as JSON, do the same.
    if x.include?("value")
      value = x["value"]
      if validate_json(value)
        apply_only_keyword(x)
      end
    end
  }
  event.set("interactionInfo", infoarray)
  return [event]
end

def validate_json(value)
  return false if value.nil?
  JSON.parse(value)
  return true
rescue JSON::ParserError => e
  return false
end

def apply_only_keyword(x)
  x["value-keyword"] = x["value"]
  x.delete("value")
  x.delete("value-standard") if x.include?("value-standard")
  x.delete("value-whitespace") if x.include?("value-whitespace")
end
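For reference, a script in this register/filter form is typically wired into the pipeline with the ruby filter's path option (the file path here is hypothetical):
filter {
  ruby {
    path => "/etc/logstash/scripts/rename_value_fields.rb"
  }
}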

Searching after indexing in ElasticSearch

I want to index 1 billion records. Each record has 2 attributes (attribute1 and attribute2).
Records that have the same value in attribute1 must be merged. For example, given these two records:
attribute1 | attribute2
1          | 4
1          | 6
my Elasticsearch document must be:
{
  "attribute1": "1",
  "attribute2": "4,6"
}
Due to the huge amount of data, I have to read a bulk (about 1000 records), merge the records based on the above rule (in memory), search for them in Elasticsearch, merge them with the search results, and then index/reindex them.
In summary, I have to search and then index, bulk by bulk.
I implemented this rule, but in some cases Elasticsearch does not return all results and some documents end up indexed in duplicate.
After each index request I refresh Elasticsearch so that it is ready for the next search, but in some cases that doesn't work.
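For reference, the explicit refresh between the index and search steps looks like this (it is needed here because refresh_interval is set to -1 in the settings below, which disables automatic refreshes):
POST /test_index/_refresh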
My index settings are as follows:
{
  "test_index": {
    "settings": {
      "index": {
        "refresh_interval": "-1",
        "translog": {
          "flush_threshold_size": "1g"
        },
        "max_result_window": "1000000",
        "creation_date": "1464577964635",
        "store": {
          "throttle": {
            "type": "merge"
          }
        }
      },
      "number_of_replicas": "0",
      "uuid": "TZOse2tLRqGk-vHRMGc2GQ",
      "version": {
        "created": "2030199"
      },
      "warmer": {
        "enabled": "false"
      },
      "indices": {
        "memory": {
          "index_buffer_size": "40%"
        }
      },
      "number_of_shards": "5",
      "merge": {
        "policy": {
          "max_merge_size": "2g"
        }
      }
    }
  }
}
How can I resolve this problem?
Is there any other setting that would handle this situation?
In your bulk commands, you need to use the index operation for the first occurrence and then update with a script to update your attribute2 property:
{ "index" : { "_index" : "test_index", "_type" : "test_type", "_id" : "1" } }
{ "attribute1" : "1", "attribute2": [4] }
{ "update" : { "_index" : "test_index", "_type" : "test_type", "_id" : "1" } }
{ "script" : { "inline": "ctx._source.attribute2 += attr2", "params" : {"attr2" : 6}}}
After the first index operation your document will look like:
{
  "attribute1": "1",
  "attribute2": [4]
}
After the second update operation, your document will look like:
{
  "attribute1": "1",
  "attribute2": [4, 6]
}
Note that it is also possible to only use update operations with doc_as_upsert and script.
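A sketch of that variant (the original note mentions doc_as_upsert, which pairs with a doc block; with a script the analogous mechanism is an upsert block that supplies the initial document when none exists yet, so every occurrence can be sent as the same update operation):
{ "update" : { "_index" : "test_index", "_type" : "test_type", "_id" : "1" } }
{ "script" : { "inline": "ctx._source.attribute2 += attr2", "params" : { "attr2" : 4 } }, "upsert" : { "attribute1" : "1", "attribute2" : [4] } }
{ "update" : { "_index" : "test_index", "_type" : "test_type", "_id" : "1" } }
{ "script" : { "inline": "ctx._source.attribute2 += attr2", "params" : { "attr2" : 6 } }, "upsert" : { "attribute1" : "1", "attribute2" : [6] } }
When the document is absent, the upsert body is indexed as-is and the script is skipped; when it exists, the script appends the new value.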

Elastic.co/Elastic search - Relevance feedback with multiple Boosting Queries

I'm trying to implement relevance feedback for Elastic Search (Elastic.co).
I'm aware of boosting queries, which allow for the specification of positive and negative terms, the idea being to discount documents matching the negative terms without excluding them, as a boolean must_not would.
However, I'm trying to achieve tiered boosting, of both positive and negative terms.
That is, I want to take a list of binned positive and negative terms and generate a query such that there are different positive and negative boost tiers, each containing their own query terms.
something like (pseudo query):
query {
  { terms: [very relevant terms],   pos_boost: 3 }
  { terms: [relevant terms],        pos_boost: 2 }
  { terms: [irrelevant terms],      neg_boost: 0.6 }
  { terms: [very irrelevant terms], neg_boost: 0.3 }
}
My question is whether or not this can be achieved with nested boosting queries, or if I'm better off with multiple should clauses.
My concern is that I'm not sure if a boost of 0.2 in the should clause of a bool query still gives the document a positive increase in the score or not, as I want to discount the document, rather than provide any increase in score.
With boosting queries, the concern is that I can't control the degree to which positive terms are weighted.
Any help, or suggestions for other implementations, would be greatly appreciated. (What I really wanted to do was create a language model for relevant documents and use that to rank, but I don't see how that can easily be achieved in elastic.)
It seems you can combine a bool query with boosted clauses, tweaking the boost values:
POST so/boost/ {"text": "apple computers"}
POST so/boost/ {"text": "apple pie recipe"}
POST so/boost/ {"text": "apple tree garden"}
POST so/boost/ {"text": "apple iphone"}
POST so/boost/ {"text": "apple company"}
GET so/boost/_search
{
  "query": {
    "bool": {
      "must": [
        { "match": { "text": "apple" } }
      ],
      "should": [
        { "match": { "text": { "query": "pie", "boost": 2 } } },
        { "match": { "text": { "query": "tree", "boost": 2 } } },
        { "match": { "text": { "query": "iphone", "boost": -0.5 } } }
      ]
    }
  }
}
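Note that newer Elasticsearch versions reject negative boost values on query clauses; if yours does, the dedicated boosting query achieves the same discounting effect (a sketch along the lines of the example above):
GET so/boost/_search
{
  "query": {
    "boosting": {
      "positive": { "match": { "text": "apple" } },
      "negative": { "match": { "text": "iphone" } },
      "negative_boost": 0.5
    }
  }
}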
Alternately, if you want to encode your language model into your collection at index-time, you can try the approach described here: Elasticsearch: Influence scoring with custom score field in document
To boost Elasticsearch documents (a priority-based search query) with a custom/variable boost value at query time, i.e. conditional boosting.
Java coding example:
customerKeySearch = QueryBuilders.constantScoreQuery(QueryBuilders.termQuery("keys.type", "xxx"));
customerTypeSearch = QueryBuilders.constantScoreQuery(QueryBuilders.termQuery("keys.keyValues.value", "xxxx"));
keyValueQuery = QueryBuilders.boolQuery().must(customerKeySearch).must(customerTypeSearch).boost(2f);

customerKeySearch = QueryBuilders.constantScoreQuery(QueryBuilders.termQuery("keys.type", "xxx"));
customerTypeSearch = QueryBuilders.constantScoreQuery(QueryBuilders.termQuery("keys.keyValues.value", "xxxx"));
keyValueQuery = QueryBuilders.boolQuery().must(customerKeySearch).must(customerTypeSearch).boost(6f);
Description and search query:
Elasticsearch applies its own coordination factor during score calculation, so we need to disable that mechanism by calling disableCoord(true) on the BoolQueryBuilder in Java for the custom boosts to take full effect.
The following bool query boosts documents in the Elasticsearch index based on the boost value:
{
  "bool" : {
    "should" : [
      {
        "bool" : {
          "must" : [
            {
              "constant_score" : {
                "query" : { "term" : { "keys.type" : "XXX" } }
              }
            },
            {
              "constant_score" : {
                "query" : { "term" : { "keys.keyValues.value" : "XXXX" } }
              }
            }
          ],
          "boost" : 2.0
        }
      },
      {
        "bool" : {
          "must" : [
            {
              "constant_score" : {
                "query" : { "term" : { "keys.type" : "XXX" } }
              }
            },
            {
              "constant_score" : {
                "query" : { "term" : { "keys.keyValues.value" : "500072388315" } }
              }
            }
          ],
          "boost" : 6.0
        }
      },
      {
        "bool" : {
          "must" : [
            {
              "constant_score" : {
                "query" : { "term" : { "keys.type" : "XXX" } }
              }
            },
            {
              "constant_score" : {
                "query" : { "term" : { "keys.keyValues.value" : "XXXXXX" } }
              }
            }
          ],
          "boost" : 10.0
        }
      }
    ],
    "disable_coord" : true
  }
}

Elasticsearch predictive search solution

I am trying to build a predictive drop-down search. How can I make the search always match from left to right,
given documents like "I_kimchy park" and "park"?
If I search for "par", I want to get only "park" in return, but here I am getting both documents. How can I treat the white space as a character?
POST /test1
{
  "settings": {
    "analysis": {
      "analyzer": {
        "autocomplete": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": [ "standard", "lowercase", "stop", "kstem", "edgeNgram", "whitespace" ]
        }
      },
      "filter": {
        "ngram": {
          "type": "edgeNgram",
          "min_gram": 2,
          "max_gram": 15,
          "token_chars": [ "letter", "digit" ]
        }
      }
    }
  }
}
PUT /test1/tweet/_mapping
{
  "tweet": {
    "properties": {
      "user": {
        "type": "string",
        "index_analyzer": "autocomplete",
        "search_analyzer": "autocomplete"
      }
    }
  }
}
POST /test1/tweet/1
{"user" : "I_kimchy park"}
POST /test1/tweet/3
{ "user" : "park"}
GET /test1/tweet/_search
{
  "query": {
    "match_phrase_prefix": {
      "user": "park"
    }
  }
}
That happens because your standard tokenizer splits the user field on white space. You can use the keyword tokenizer to treat the whole string as a single value (a single token).
Please keep in mind that this change may affect other functionality that uses this field. You may have to add a dedicated "not tokenized" user field for this purpose.
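A minimal sketch of that change, reusing the settings from the question (and assuming the custom edge-ngram filter is meant to be referenced by its defined name, ngram):
POST /test1
{
  "settings": {
    "analysis": {
      "analyzer": {
        "autocomplete": {
          "type": "custom",
          "tokenizer": "keyword",
          "filter": [ "lowercase", "ngram" ]
        }
      },
      "filter": {
        "ngram": {
          "type": "edgeNgram",
          "min_gram": 2,
          "max_gram": 15
        }
      }
    }
  }
}
With the keyword tokenizer, "I_kimchy park" stays a single token, so its edge n-grams all start at the beginning of the full string and a search for "par" matches "park" but not "I_kimchy park".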
