ElasticSearch fuzzy percolator query does not match - elasticsearch-2.0

Please explain the following problem to me:
The query with fuzziness 0 doesn't match. Why?
I have the mapping:
$ curl -XGET 'http://localhost:9200/words/_mapping?pretty'
{
"words" : {
"mappings" : {
".percolator" : {
"properties" : {
"category" : {
"type" : "string",
"index" : "not_analyzed"
},
"fuzziness" : {
"type" : "long"
},
"list" : {
"type" : "string",
"index" : "not_analyzed"
},
"query" : {
"type" : "object",
"enabled" : false
}
}
},
"query_doc" : {
"properties" : {
"category" : {
"type" : "string",
"index" : "not_analyzed"
},
"text" : {
"type" : "string"
}
}
}
}
}
}
I have the percolator queries:
$ curl 'http://localhost:9200/words/.percolator/_search?pretty=true&q=*:*'
{
"took" : 4,
"timed_out" : false,
"_shards" : {
"total" : 1,
"successful" : 1,
"failed" : 0
},
"hits" : {
"total" : 2,
"max_score" : 1.0,
"hits" : [ {
"_index" : "words",
"_type" : ".percolator",
"_id" : "id4_0",
"_score" : 1.0,
"_source" : {
"query" : {
"fuzzy" : {
"text" : {
"fuzziness" : 0,
"value" : "Banes"
},
"category" : "cuba"
}
}
}
}, {
"_index" : "words",
"_type" : ".percolator",
"_id" : "id4_1",
"_score" : 1.0,
"_source" : {
"query" : {
"fuzzy" : {
"text" : {
"fuzziness" : 1,
"value" : "Banes"
},
"category" : "cuba"
}
}
}
} ]
}
}
When I run the percolate request, only the query with fuzziness 1 matches:
$ curl 'http://localhost:9200/words/query_doc/_percolate?pretty' -d '
{
"doc": {
"text": "Just Banes"
}
}'
{
"took" : 2,
"_shards" : {
"total" : 1,
"successful" : 1,
"failed" : 0
},
"total" : 1,
"matches" : [ {
"_index" : "words",
"_id" : "id4_1"
} ]
}
What is wrong? Could someone explain this?
Thanks

Related

ElasticSearch can't get multiple suggester values from the same document

Can you please help me?
I have a problem with the Completion Suggester in ElasticSearch.
Example: I have this mapping:
PUT music
{
"mappings": {
"properties": {
"suggest": {
"type": "completion"
},
"title": {
"type": "keyword"
}
}
}
}
and index multiple suggestions for a document as follows:
PUT music/_doc/1?refresh
{
"suggest": [
{
"input": "Nirva test",
"weight": 10
},
{
"input": "Nirva hola",
"weight": 3
}
]
}
Querying: you can run this request in Kibana:
POST music/_search?pretty
{
"suggest": {
"song-suggest": {
"prefix": "nirv",
"completion": {
"field": "suggest"
}
}
}
}
In the result I retrieve only the first value, not both.
I did the test in the Kibana Dev Tools console too, and this is the result:
{
"took" : 1,
"timed_out" : false,
"_shards" : {
"total" : 1,
"successful" : 1,
"skipped" : 0,
"failed" : 0
},
"hits" : {
"total" : {
"value" : 0,
"relation" : "eq"
},
"max_score" : null,
"hits" : [ ]
},
"suggest" : {
"song-suggest" : [
{
"text" : "nir",
"offset" : 0,
"length" : 3,
"options" : [
{
"text" : "Nirvana test",
"_index" : "music",
"_type" : "_doc",
"_id" : "1",
"_score" : 10.0,
"_source" : {
"suggest" : [
{
"input" : "Nirvana test",
"weight" : 10
},
{
"input" : "Nirvana best",
"weight" : 3
}
]
}
}
]
}
]
}
}
Expected result:
"suggest" : {
"song-suggest" : [
{
"text" : "nirvana",
"offset" : 0,
"length" : 7,
"options" : [
{
"text" : "Nirvana test",
"_index" : "music",
"_type" : "_doc",
"_id" : "1",
"_score" : 10.0,
"_source" : {
"suggest" : [
{
"input" : "Nirvana test",
"weight" : 10
},
{
"input" : "Nirvana best",
"weight" : 3
}
]
}
}
]
},
{
"text" : "nirvana b",
"offset" : 0,
"length" : 9,
"options" : [
{
"text" : "Nirvana best",
"_index" : "music",
"_type" : "_doc",
"_id" : "1",
"_score" : 3.0,
"_source" : {
"suggest" : [
{
"input" : "Nirvana test",
"weight" : 10
},
{
"input" : "Nirvana best",
"weight" : 3
}
]
}
}
]
}
]
}
This is the default behavior of the current implementation. You can check #31738. Below is one of the comments explaining why it returns only one document/suggestion:
The completion suggester is document-based by design so we cannot
return one entry per matching suggestion. It is documented that it
returns documents not suggestions and a single input can be indexed in
multiple suggestions (if you have synonyms in your analyzer for
instance) so it is not trivial to differentiate a match from its
variations. Also the completion suggester does not visit all
suggestions to select the top N, it has a special structure (a
weighted FST) that can visit suggestions in the order of their scores
and early terminates the query once enough documents have been found.
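If you need one entry per input, a minimal workaround sketch (not taken from the linked issue, just an illustration of the document-based behavior; the document IDs below are illustrative) is to index each input as its own document, so each one comes back as a separate option:
PUT music/_doc/1?refresh
{
  "suggest": {
    "input": "Nirvana test",
    "weight": 10
  }
}
PUT music/_doc/2?refresh
{
  "suggest": {
    "input": "Nirvana best",
    "weight": 3
  }
}
The same song-suggest request then returns two options, one per matching document.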

findOne returning full document

This is the data stored in the MongoDB database, in a collection named padlos:
{
"_id" : ObjectId("60ffb89473dc672be32909c2"),
"courses" : [
{
"_id" : ObjectId("60ffb89473dc672be32909c3"),
"course" : "BCA",
"semesters" : [
{
"_id" : ObjectId("60ffb89473dc672be32909c4"),
"sem" : 1,
"subjects" : [
{
"_id" : ObjectId("60ffb89473dc672be32909c5"),
"subject" : "C++",
"units" : [
{
"_id" : ObjectId("60ffb89473dc672be32909c6"),
"unit" : 1,
"topics" : [
{
"_id" : ObjectId("60ffb89473dc672be32909c7"),
"topic" : "Basics"
},
{
"_id" : ObjectId("60ffb89473dc672be32909c8"),
"topic" : "DataTypes"
}
]
}
]
},
{
"_id" : ObjectId("60ffb89473dc672be32909c9"),
"subject" : "IC & IT",
"units" : [
{
"_id" : ObjectId("60ffb89473dc672be32909ca"),
"unit" : 1,
"topics" : [
{
"_id" : ObjectId("60ffb89473dc672be32909cb"),
"topic" : "Basics"
},
{
"_id" : ObjectId("60ffb89473dc672be32909cc"),
"topic" : "DataTypes"
}
]
}
]
}
]
},
{
"_id" : ObjectId("60ffb89473dc672be32909cd"),
"sem" : 2,
"subjects" : [
{
"_id" : ObjectId("60ffb89473dc672be32909ce"),
"subject" : "Java",
"units" : [
{
"_id" : ObjectId("60ffb89473dc672be32909cf"),
"unit" : 1,
"topics" : [
{
"_id" : ObjectId("60ffb89473dc672be32909d0"),
"topic" : "Basics"
},
{
"_id" : ObjectId("60ffb89473dc672be32909d1"),
"topic" : "DataTypes"
}
]
}
]
},
{
"_id" : ObjectId("60ffb89473dc672be32909d2"),
"subject" : "SQL",
"units" : [
{
"_id" : ObjectId("60ffb89473dc672be32909d3"),
"unit" : 1,
"topics" : [
{
"_id" : ObjectId("60ffb89473dc672be32909d4"),
"topic" : "Basics"
},
{
"_id" : ObjectId("60ffb89473dc672be32909d5"),
"topic" : "DataTypes"
}
]
}
]
}
]
}
]
}
],
"__v" : 0
}
This is the code that I am using to query and fetch the data:
app.get('/',function(req, res)
{
PadhloSchema.findOne(
{"courses.course" : "BCA" , "courses.semesters.sem" : 1, "courses.semesters.subjects.subject" :"C++"},
function(err, courses){
if(!err)
{
res.send(courses);
}
else
{
res.send(err);
}
})
});
I only want to fetch data where the course is BCA, the semester (sem) is 1, and the subject is C++, and the response should look like this:
{
"_id" : ObjectId("60ff08a977ec48b84ec07b46"),
"subject" : "C++",
"units" : [
{
"_id" : ObjectId("60ff08a977ec48b84ec07b47"),
"unit" : 1,
"topics" : [
{
"_id" : ObjectId("60ff08a977ec48b84ec07b48"),
"topic" : "Basics"
},
{
"_id" : ObjectId("60ff08a977ec48b84ec07b49"),
"topic" : "DataTypes"
}
]
}
]
}
But instead I am getting all the data back in the response:
{
"_id" : ObjectId("60ffb89473dc672be32909c2"),
"courses" : [
{
"_id" : ObjectId("60ffb89473dc672be32909c3"),
"course" : "BCA",
"semesters" : [
{
"_id" : ObjectId("60ffb89473dc672be32909c4"),
"sem" : 1,
"subjects" : [
{
"_id" : ObjectId("60ffb89473dc672be32909c5"),
"subject" : "C++",
"units" : [
{
"_id" : ObjectId("60ffb89473dc672be32909c6"),
"unit" : 1,
"topics" : [
{
"_id" : ObjectId("60ffb89473dc672be32909c7"),
"topic" : "Basics"
},
{
"_id" : ObjectId("60ffb89473dc672be32909c8"),
"topic" : "DataTypes"
}
]
}
]
},
{
"_id" : ObjectId("60ffb89473dc672be32909c9"),
"subject" : "IC & IT",
"units" : [
{
"_id" : ObjectId("60ffb89473dc672be32909ca"),
"unit" : 1,
"topics" : [
{
"_id" : ObjectId("60ffb89473dc672be32909cb"),
"topic" : "Basics"
},
{
"_id" : ObjectId("60ffb89473dc672be32909cc"),
"topic" : "DataTypes"
}
]
}
]
}
]
},
{
"_id" : ObjectId("60ffb89473dc672be32909cd"),
"sem" : 2,
"subjects" : [
{
"_id" : ObjectId("60ffb89473dc672be32909ce"),
"subject" : "Java",
"units" : [
{
"_id" : ObjectId("60ffb89473dc672be32909cf"),
"unit" : 1,
"topics" : [
{
"_id" : ObjectId("60ffb89473dc672be32909d0"),
"topic" : "Basics"
},
{
"_id" : ObjectId("60ffb89473dc672be32909d1"),
"topic" : "DataTypes"
}
]
}
]
},
{
"_id" : ObjectId("60ffb89473dc672be32909d2"),
"subject" : "SQL",
"units" : [
{
"_id" : ObjectId("60ffb89473dc672be32909d3"),
"unit" : 1,
"topics" : [
{
"_id" : ObjectId("60ffb89473dc672be32909d4"),
"topic" : "Basics"
},
{
"_id" : ObjectId("60ffb89473dc672be32909d5"),
"topic" : "DataTypes"
}
]
}
]
}
]
}
]
}
],
"__v" : 0
}
Please help and tell me what I am doing wrong. I am new to MongoDB.
I am using the latest MongoDB version, 5.0.1.
UPDATE after a comment by Rahul Soni:
app.get('/',function(req, res)
{
PadhloSchema.find(
{
courses : {course : "BCA",semesters : {sem : 1 ,subjects :{subject : "C++"}}}
},
function(err, courses){
if(!err)
{
res.send(courses);
}
else
{
res.send(err);
}
})
});
But now it is giving me an empty array:
[]
MongoDB returns data at the document level. https://docs.mongodb.com/manual/tutorial/query-array-of-documents/
Basically, it is working as expected. You are searching for the document and you are getting precisely that. If you are looking for a subdocument, you will need to unwind it like so:
db.tmp.aggregate([
  { $unwind: "$courses" },
  { $unwind: "$courses.semesters" },
  { $unwind: "$courses.semesters.subjects" },
  { $match: { "courses.course": "BCA", "courses.semesters.sem": 1, "courses.semesters.subjects.subject": "C++" } }
]).pretty()
NB: I have simply added your document to a database and am showing the result at the mongo shell level. Node.js would work similarly.
{
"_id" : ObjectId("60ffd533bccc96b9985944a9"),
"courses" : {
"_id" : ObjectId("60ffb89473dc672be32909c3"),
"course" : "BCA",
"semesters" : {
"_id" : ObjectId("60ffb89473dc672be32909c4"),
"sem" : 1,
"subjects" : {
"_id" : ObjectId("60ffb89473dc672be32909c5"),
"subject" : "C++",
"units" : [
{
"_id" : ObjectId("60ffb89473dc672be32909c6"),
"unit" : 1,
"topics" : [
{
"_id" : ObjectId("60ffb89473dc672be32909c7"),
"topic" : "Basics"
},
{
"_id" : ObjectId("60ffb89473dc672be32909c8"),
"topic" : "DataTypes"
}
]
}
]
}
}
}
}
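For the Node.js side, here is a minimal sketch of the same pipeline through Mongoose inside the Express route (the model name PadhloSchema is taken from the question; the final $replaceRoot stage is an extra, illustrative step that shapes the response like the desired subject subdocument):
app.get('/', function (req, res) {
  PadhloSchema.aggregate([
    { $unwind: "$courses" },
    { $unwind: "$courses.semesters" },
    { $unwind: "$courses.semesters.subjects" },
    { $match: {
        "courses.course": "BCA",
        "courses.semesters.sem": 1,
        "courses.semesters.subjects.subject": "C++"
    } },
    // optional: keep only the matching subject subdocument
    { $replaceRoot: { newRoot: "$courses.semesters.subjects" } }
  ], function (err, subjects) {
    if (!err) {
      res.send(subjects);
    } else {
      res.send(err);
    }
  });
});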

Elasticsearch search explain query with fieldNorm

{ "query": { "match" : { "name": "abcd" } } }
Hits: {
"index" : "indexuniq_v5",
"_type" : "Playlist",
"_id" : "49680",
"_score" : 0.5945348,
"_source" : {
"name" : "abcd",
"ratin" : 4
}
}, {
"index" : "indexuniq_v5",
"_type" : "Playlist",
"_id" : "49682",
"_score" : 0.5945348,
"_source" : {
"name" : "abcdeert",
"ratin" : 4
}
}
If I look at _explain, all the score details have equal values, including the fieldNorm.
Why are the scores equal?
https://www.elastic.co/guide/en/elasticsearch/guide/current/scoring-theory.html

AQL Query Really Slow (~20 seconds)

The following query is taking around 20 seconds to execute:
FOR p IN PATHS(locations, connections, "outbound", { maxLength: 1 }) FILTER p.source._key == "26094" RETURN p.vertices[*].name
I believe this is a simple query (and the database is not that big) and it should execute fairly quickly... I must be doing something wrong... Here is the query result:
==> [object ArangoQueryCursor - count: 286, hasMore: false]
The locations (vertices) collection has 23753 documents, and the connections (edges) collection has 123414 documents.
I tried to filter by _id as well but the performance is somewhat the same.
Is there anything I could do to get a better performance?
Here is the query's .explain() report:
{
"plan" : {
"nodes" : [
{
"type" : "SingletonNode",
"dependencies" : [ ],
"id" : 1,
"estimatedCost" : 1,
"estimatedNrItems" : 1
},
{
"type" : "CalculationNode",
"dependencies" : [
1
],
"id" : 2,
"estimatedCost" : 2,
"estimatedNrItems" : 1,
"expression" : {
"type" : "function call",
"name" : "PATHS",
"subNodes" : [
{
"type" : "array",
"subNodes" : [
{
"type" : "collection",
"name" : "locations"
},
{
"type" : "collection",
"name" : "connections"
},
{
"type" : "value",
"value" : "outbound"
},
{
"type" : "object",
"subNodes" : [
{
"type" : "object element",
"name" : "maxLength",
"subNodes" : [
{
"type" : "value",
"value" : 1
}
]
}
]
}
]
}
]
},
"outVariable" : {
"id" : 2,
"name" : "2"
},
"canThrow" : true
},
{
"type" : "EnumerateListNode",
"dependencies" : [
2
],
"id" : 3,
"estimatedCost" : 102,
"estimatedNrItems" : 100,
"inVariable" : {
"id" : 2,
"name" : "2"
},
"outVariable" : {
"id" : 0,
"name" : "p"
}
},
{
"type" : "CalculationNode",
"dependencies" : [
3
],
"id" : 4,
"estimatedCost" : 202,
"estimatedNrItems" : 100,
"expression" : {
"type" : "compare ==",
"subNodes" : [
{
"type" : "attribute access",
"name" : "_key",
"subNodes" : [
{
"type" : "attribute access",
"name" : "source",
"subNodes" : [
{
"type" : "reference",
"name" : "p",
"id" : 0
}
]
}
]
},
{
"type" : "value",
"value" : "26094"
}
]
},
"outVariable" : {
"id" : 3,
"name" : "3"
},
"canThrow" : false
},
{
"type" : "FilterNode",
"dependencies" : [
4
],
"id" : 5,
"estimatedCost" : 302,
"estimatedNrItems" : 100,
"inVariable" : {
"id" : 3,
"name" : "3"
}
},
{
"type" : "CalculationNode",
"dependencies" : [
5
],
"id" : 6,
"estimatedCost" : 402,
"estimatedNrItems" : 100,
"expression" : {
"type" : "expand",
"subNodes" : [
{
"type" : "iterator",
"subNodes" : [
{
"type" : "variable",
"name" : "1_",
"id" : 1
},
{
"type" : "attribute access",
"name" : "vertices",
"subNodes" : [
{
"type" : "reference",
"name" : "p",
"id" : 0
}
]
}
]
},
{
"type" : "attribute access",
"name" : "name",
"subNodes" : [
{
"type" : "reference",
"name" : "1_",
"id" : 1
}
]
}
]
},
"outVariable" : {
"id" : 4,
"name" : "4"
},
"canThrow" : false
},
{
"type" : "ReturnNode",
"dependencies" : [
6
],
"id" : 7,
"estimatedCost" : 502,
"estimatedNrItems" : 100,
"inVariable" : {
"id" : 4,
"name" : "4"
}
}
],
"rules" : [
"move-calculations-up",
"move-filters-up",
"move-calculations-up-2",
"move-filters-up-2"
],
"collections" : [
{
"name" : "connections",
"type" : "read"
},
{
"name" : "locations",
"type" : "read"
}
],
"variables" : [
{
"id" : 0,
"name" : "p"
},
{
"id" : 1,
"name" : "1_"
},
{
"id" : 2,
"name" : "2"
},
{
"id" : 3,
"name" : "3"
},
{
"id" : 4,
"name" : "4"
}
],
"estimatedCost" : 502,
"estimatedNrItems" : 100
},
"warnings" : [ ],
"stats" : {
"rulesExecuted" : 21,
"rulesSkipped" : 0,
"plansCreated" : 1
}
}
PATHS() will build all paths of the graph and then post-filter the results using the FILTER on the _key attribute. This may create a huge result set first (for all paths) before filtering out all non-matches.
If all that's required is to find connected vertices at depth 1, I think it will be more efficient to do something like the following.
Querying using TRAVERSAL:
This is more efficient because it will build only those paths in the graph that start at the specified start vertex:
FOR p IN TRAVERSAL(locations, connections, "1", "outbound", { minDepth: 1, maxDepth: 1, paths: true })
RETURN p.path.vertices[*].name
Querying direct neighbors using NEIGHBORS:
This may be even slightly more efficient because it will construct a smaller intermediate result.
Additionally, it won't return the start vertex (26094), but all vertices directly connected to it:
FOR p IN NEIGHBORS(locations, connections, "26094", "outbound")
RETURN p.vertex.name
Querying the edges directly (not using graph functions):
Finally, you can query the edge collection directly.
Again, this won't return the start vertex (26094), but all vertices directly connected to it:
FOR edge IN connections
FILTER edge._from == "locations/26094"
FOR vertex IN locations
FILTER vertex._id == edge._to
RETURN vertex.name

Query Rule for non-indexed attribute FILTER

I observe an enormous runtime difference between these two AQL statements on a data set with about 20 million records:
FOR e IN EAll
FILTER e.lastname == "Kmp" // <-- skip-index
FILTER e.lastpaff != "" // <-- no index
RETURN e
// runs in less than a second
AND
FOR e IN EAll
FILTER e.lastpaff != "" // <-- no index
FILTER e.lastname == "Kmp" // <-- skip-index
RETURN e
// needs about a minute to execute.
In addition to being indexed (or not), the selectivity of the two attributes differs greatly: the indexed attribute is highly selective, whereas the non-indexed attribute only filters out about 50%.
Is it possible that there is not yet an optimization rule for this? I am currently using ArangoDB 2.4.0.
DETAILS:
There is a skiplist index on the indexed attribute (which appears to be used in execution plan 1).
Here are the execution plans, in which only the order of the filters is changed:
FAST QUERY:
arangosh [Uni]> stmt.explain()
{
"plan" : {
"nodes" : [
{
"type" : "SingletonNode",
"dependencies" : [ ],
"id" : 1,
"estimatedCost" : 1,
"estimatedNrItems" : 1
},
{
"type" : "IndexRangeNode",
"dependencies" : [
1
],
"id" : 8,
"estimatedCost" : 170463.32,
"estimatedNrItems" : 170462,
"database" : "Uni",
"collection" : "EAll",
"outVariable" : {
"id" : 0,
"name" : "i"
},
"ranges" : [
[
{
"variable" : "i",
"attr" : "lastname",
"lowConst" : {
"bound" : "Kmp",
"include" : true,
"isConstant" : true
},
"highConst" : {
"bound" : "Kmp",
"include" : true,
"isConstant" : true
},
"lows" : [ ],
"highs" : [ ],
"valid" : true,
"equality" : true
}
]
],
"index" : {
"type" : "skiplist",
"id" : "13295598550318",
"unique" : false,
"fields" : [
"lastname"
]
},
"reverse" : false
},
{
"type" : "CalculationNode",
"dependencies" : [
8
],
"id" : 5,
"estimatedCost" : 340925.32,
"estimatedNrItems" : 170462,
"expression" : {
"type" : "compare !=",
"subNodes" : [
{
"type" : "attribute access",
"name" : "lastpaff",
"subNodes" : [
{
"type" : "reference",
"name" : "i",
"id" : 0
}
]
},
{
"type" : "value",
"value" : ""
}
]
},
"outVariable" : {
"id" : 2,
"name" : "2"
},
"canThrow" : false
},
{
"type" : "FilterNode",
"dependencies" : [
5
],
"id" : 6,
"estimatedCost" : 511387.32,
"estimatedNrItems" : 170462,
"inVariable" : {
"id" : 2,
"name" : "2"
}
},
{
"type" : "ReturnNode",
"dependencies" : [
6
],
"id" : 7,
"estimatedCost" : 681849.3200000001,
"estimatedNrItems" : 170462,
"inVariable" : {
"id" : 0,
"name" : "i"
}
}
],
"rules" : [
"move-calculations-up",
"move-filters-up",
"move-calculations-up-2",
"move-filters-up-2",
"use-index-range",
"remove-filter-covered-by-index"
],
"collections" : [
{
"name" : "EAll",
"type" : "read"
}
],
"variables" : [
{
"id" : 0,
"name" : "i"
},
{
"id" : 1,
"name" : "1"
},
{
"id" : 2,
"name" : "2"
}
],
"estimatedCost" : 681849.3200000001,
"estimatedNrItems" : 170462
},
"warnings" : [ ],
"stats" : {
"rulesExecuted" : 19,
"rulesSkipped" : 0,
"plansCreated" : 1
}
}
SLOW QUERY:
arangosh [Uni]> stmt.explain()
{
"plan" : {
"nodes" : [
{
"type" : "SingletonNode",
"dependencies" : [ ],
"id" : 1,
"estimatedCost" : 1,
"estimatedNrItems" : 1
},
{
"type" : "EnumerateCollectionNode",
"dependencies" : [
1
],
"id" : 2,
"estimatedCost" : 17046233,
"estimatedNrItems" : 17046232,
"database" : "Uni",
"collection" : "EAll",
"outVariable" : {
"id" : 0,
"name" : "i"
},
"random" : false
},
{
"type" : "CalculationNode",
"dependencies" : [
2
],
"id" : 3,
"estimatedCost" : 34092465,
"estimatedNrItems" : 17046232,
"expression" : {
"type" : "compare !=",
"subNodes" : [
{
"type" : "attribute access",
"name" : "lastpaff",
"subNodes" : [
{
"type" : "reference",
"name" : "i",
"id" : 0
}
]
},
{
"type" : "value",
"value" : ""
}
]
},
"outVariable" : {
"id" : 1,
"name" : "1"
},
"canThrow" : false
},
{
"type" : "FilterNode",
"dependencies" : [
3
],
"id" : 4,
"estimatedCost" : 51138697,
"estimatedNrItems" : 17046232,
"inVariable" : {
"id" : 1,
"name" : "1"
}
},
{
"type" : "CalculationNode",
"dependencies" : [
4
],
"id" : 5,
"estimatedCost" : 68184929,
"estimatedNrItems" : 17046232,
"expression" : {
"type" : "compare ==",
"subNodes" : [
{
"type" : "attribute access",
"name" : "lastname",
"subNodes" : [
{
"type" : "reference",
"name" : "i",
"id" : 0
}
]
},
{
"type" : "value",
"value" : "Kmp"
}
]
},
"outVariable" : {
"id" : 2,
"name" : "2"
},
"canThrow" : false
},
{
"type" : "FilterNode",
"dependencies" : [
5
],
"id" : 6,
"estimatedCost" : 85231161,
"estimatedNrItems" : 17046232,
"inVariable" : {
"id" : 2,
"name" : "2"
}
},
{
"type" : "ReturnNode",
"dependencies" : [
6
],
"id" : 7,
"estimatedCost" : 102277393,
"estimatedNrItems" : 17046232,
"inVariable" : {
"id" : 0,
"name" : "i"
}
}
],
"rules" : [
"move-calculations-up",
"move-filters-up",
"move-calculations-up-2",
"move-filters-up-2"
],
"collections" : [
{
"name" : "EAll",
"type" : "read"
}
],
"variables" : [
{
"id" : 0,
"name" : "i"
},
{
"id" : 1,
"name" : "1"
},
{
"id" : 2,
"name" : "2"
}
],
"estimatedCost" : 102277393,
"estimatedNrItems" : 17046232
},
"warnings" : [ ],
"stats" : {
"rulesExecuted" : 19,
"rulesSkipped" : 0,
"plansCreated" : 1
}
}
Indeed, conditions like the following disabled the usage of indexes even though an index could be used:
FILTER doc.indexedAttribute != ... FILTER doc.indexedAttribute == ...
Interestingly an index is used when the two conditions are put into the same FILTER condition and combined with &&:
FILTER doc.indexedAttribute != ... && doc.indexedAttribute == ...
Though these two statements are equivalent, they trigger slightly different code paths. The former AND-combines two existing FILTER ranges; the latter produces a range from a single FILTER. The AND-combination of FILTER ranges was overly defensive and rejected both sides even if only a single side (in this case the one with the non-equality operator) could not be used for an index scan.
This has been fixed in the 2.4 branch, and the fix will be contained in 2.4.2. A workaround for now is to combine the two FILTER statements into a single one.
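Applied to the statements from the question, that workaround would look like this (a sketch, assuming the same behavior holds when the two conditions are on different attributes, as above):
FOR e IN EAll
FILTER e.lastname == "Kmp" && e.lastpaff != ""
RETURN e
With both conditions in a single FILTER, the skiplist index on lastname can be used again, and the non-indexed lastpaff condition is applied on top of the index results.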
