I am trying to search results for the negation of a particular id in Solr. I have found that this can be done in two ways:
(1) fq=userid:(-750376)
(2) fq=-userid:750376
Both are working fine and both give correct results. But can anyone tell me which of the two is the better way? Which one should I prefer?
You can find out what query the fq parameter's value is parsed into by turning on debugQuery (add the parameter debug=true). Then, in the Solr response, there should be an entry "parsed_filter_queries" under "debug", and the entry should show the string representation of the parsed filter query (or queries) being used.
In your case, both forms of fq should be parsed into the same query, i.e. a boolean query with a single clause stating that the term userid:750376 must not occur. Therefore, which form you use does not matter, at least in terms of correctness or performance.
To us the two queries look a little different, but to Solr both are the same.
First, Solr parses the query you provide, then searches for results. In your case, for both of the queries below, Solr's "parsed_filter_queries" entry is -userid:750376:
fq=userid:(-750376)
fq=-userid:750376
You can check this by enabling debugQuery from the Admin UI. You can also pass debugQuery=true with the query. Hope this helps.
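For example (the collection name here is hypothetical), a request like the following returns a debug section in the response, and both forms of the fq should appear identically under "parsed_filter_queries":
http://localhost:8983/solr/mycollection/select?q=*:*&fq=-userid:750376&debugQuery=true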
In my application, I have a collection of 50 million documents. I am doing a LIKE-style search and then counting the results on a particular field (i.e. Patientfirstname). I also created an index on the Patientfirstname field; it improved performance, but the query still takes a lot of time:
db.patients.find({"Patientfirstname":{"$regex":"Testuser"}}).count() // without an index: 40 sec
db.patients.find({"Patientfirstname":{"$regex":"Testuser"}}).count() // with an index on Patientfirstname: 31 sec
I tried a different approach (aggregate), but the response is still very slow:
db.patients.aggregate([
  {$match:{"Patientfirstname":{"$regex":"Testuser"}}},
  {$project:{"Patientfirstname":1,"_id":1}},
  {$group:{_id:"$Patientfirstname", count:{$sum:1}}},
  {$sort:{"count":-1}}
])
This query also takes about the same time to fetch the results: 31 sec.
I tried another approach, but the results are not correct:
select only that field from the entire collection, then apply the LIKE-style search and count the result.
db.patients.find({},{Patientfirstname:1,_id:1}).count({"Patientfirstname":{"$regex":"Testuser"}})
Applying a filter inside count() is not working; the count of the entire collection is returned.
Please help me make this query fetch results faster. Thanks in advance.
So here is the deal:
As rightly pointed in the comments, $regex is an operator that would not perform well with or without indexes. Here is the reason why:
Queries without indexes are slow because they are executed using a COLLSCAN, which is essentially an iteration over all 50 million documents on disk, one by one, filtering the data and returning only the documents that match. Disks being an inherently slow piece of hardware does not help the situation either.
Now, when indexed, MongoDB creates a B-tree in RAM, but the $regex operator, being not very selective in nature, forces a complete tree scan (as opposed to the reduced/partial tree scan possible for equalities or ranges) of the index B-tree, which is nearly as bad as a collection scan itself. The only reason you gain 9 seconds is that this tree scan happens in RAM rather than on disk.
Having said that, there are a few alternatives to it:
Optimize your $regex. From the MongoDB Documentation itself:
For case sensitive regular expression queries, if an index exists for the field, then MongoDB matches the regular expression against the values in the index, which can be faster than a collection scan. Further optimization can occur if the regular expression is a "prefix expression", which means that all potential matches start with the same string. This allows MongoDB to construct a "range" from that prefix and only match against those values from the index that fall within that range.
A regular expression is a "prefix expression" if it starts with a caret (^) or a left anchor (\A), followed by a string of simple symbols. For example, the regex /^abc.*/ will be optimized by matching only against the values from the index that start with abc.
Additionally, while /^a/, /^a./, and /^a.$/ match equivalent strings, they have different performance characteristics. All of these expressions use an index if an appropriate index exists; however, /^a./ and /^a.$/ are slower. /^a/ can stop scanning after matching the prefix.
Case insensitive regular expression queries generally cannot use indexes effectively. The $regex implementation is not collation-aware and is unable to utilize case-insensitive indexes.
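For example, if "Testuser" is really a prefix of the names being matched, anchoring the regex lets MongoDB use the index as a range scan (a minimal sketch on the collection from the question):
db.patients.find({"Patientfirstname": {"$regex": "^Testuser"}}).count()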
Create a Text Index - this tokenizes your text field and enables faster text-based searches.
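A minimal sketch on the collection from the question (note that $text matches whole tokens/stems rather than arbitrary substrings, so its results can differ from a contains-style $regex):
db.patients.createIndex({"Patientfirstname": "text"})
db.patients.find({$text: {$search: "Testuser"}}).count()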
If you are deployed on MongoDB Atlas, you can use Atlas Search, which is a Lucene-based text search engine (works almost like Elasticsearch on steroids). It offers significantly greater performance and functionality such as fuzzy text search, text autocomplete, etc.
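A sketch of what that could look like, assuming an Atlas Search index (here using the default index name "default") has been created on the Patientfirstname field:
db.patients.aggregate([
  {$search: {index: "default", text: {query: "Testuser", path: "Patientfirstname"}}},
  {$count: "count"}
])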
I'm working with "edismax" and "function-query" parsers in Solr and have difficulty in understanding whether the query time taken by "function-query" makes sense. The query I'm trying to optimize looks as follows:
q={!func}sum($q1,$q2,$q3), where q1, q2, and q3 are edismax queries.
The QTime returned by the individual edismax queries is well under 50ms, but the function query seems to be the rate-determining step, since the combined query above takes around 200-300ms. I also analyzed the performance of a function query using only constants.
The QTime results for different q are as follows:
097ms for q={!func} sum(10,20)
109ms for q={!func} sum(10,20,30)
127ms for q={!func} sum(10,20,30,40)
145ms for q={!func} sum(10,20,30,40,50)
Does this trend make sense? Are function-queries expected to be this slow?
What makes edismax queries so much faster?
What can I do to optimize my original query (which has edismax subqueries q1,q2,q3) to work under 100ms?
A func query enumerates all docs, and thus doesn't provide any selectivity. You probably don't need to evaluate it on docs that don't match the dismaxes, e.g.:
q=+{!v=$q1} +{!v=$q2} +{!v=$q3} {!func v='sum($q1,$q2,$q3)'}
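Spelled out as full request parameters (the field names and terms here are hypothetical), that might look like:
q=+{!v=$q1} +{!v=$q2} +{!v=$q3} {!func v='sum($q1,$q2,$q3)'}
q1={!edismax qf=title}solr
q2={!edismax qf=body}search
q3={!edismax qf=tags}lucene
The three mandatory +{!v=$qN} clauses restrict the result set first, so the function query only needs to be evaluated on documents that already match all three edismax queries.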
Ok, so I am using many fields with qf, like:
[qf] => frpId^5 fundraise_title^3 fundraiser_display_name^3 charity_name^2 participantFname^2 participantLname^2 participantEmail^1 groupName^3 fundraise_text^ fundraiseTitleExact^15 fundraiserDisplayNameExact^15 charityNameExact^15 participantFnameExact^10 participantLnameExact^10 groupNameExact^10 all^
but I really want exact matches on the field fundraiseTitleExact to be on top.
With this qf setup, they are at position 32.
Let's say that I am boosting fundraiseTitleExact like:
[qf] => frpId^5 fundraise_title^3 fundraiser_display_name^3 charity_name^2 participantFname^2 participantLname^2 participantEmail^1 groupName^3 fundraise_text^ fundraiseTitleExact^15000000000000000 fundraiserDisplayNameExact^15 charityNameExact^15 participantFnameExact^10 participantLnameExact^10 groupNameExact^10 all^
But even now the fundraiseTitleExact exact match is only at position 27 (5 positions up) and will not go any higher.
How can I prioritise this field over the rest?
This looks more like a tuning problem; however, you have several options:
Tune your relevancy by modifying all the boosts until you get the expected results (I would advise working with lower boosts than the ones in your question and then increasing the boost of the most important field);
If you are using the edismax query parser, you probably want to check the bq and bf parameters in order to boost your term (see the sketch after this list);
If worst comes to worst, you could use the Query Elevation Component to put some entries at the top of the list.
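For example, with edismax a boost query on the exact-match field could look like the following (the query term is hypothetical; bq adds to the score of documents whose exact field matches, without filtering anything out):
defType=edismax
q=london marathon
qf=fundraise_title^3 fundraiseTitleExact^15
bq=fundraiseTitleExact:"london marathon"^100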
I advise reading the following books to deepen your knowledge of Solr boosting and relevancy mechanisms:
Solr in Action
Relevant Search
The following filter query returns zero results (using *:* as query):
-startDate:[* TO *] OR startDate:[* TO NOW/DAY+1DAY]
But if I filter only by:
-startDate:[* TO *]
I get 3 results.
If I filter only by:
startDate:[* TO NOW/DAY+1DAY]
I get 161 results.
Why is the combined FQ returning zero results? What I want is the filter to return any doc whose start date is null or start date is before today.
EDIT:
I'm using Solr 4.2.1.2013.03.26.08.26.55
EDIT:
Well, strange as it may sound, a colleague suggested putting parentheses around the two parts, like this:
(-startDate:[* TO *]) OR (startDate:[* TO NOW/DAY+1DAY])
And somehow it worked. I'm still curious why that made a difference. Hope someone can shed some light.
Thanks!
Solr supports pure negative queries. It does this, essentially, by expanding the pure negative into something like:
*:* -startDate:[* TO *]
However, when you combine it into a BooleanQuery, I don't believe this logic is applied anymore. A negative query does not, in Lucene, fetch anything; rather, it filters out matches brought in by other, positive query clauses. This differs from SQL queries, which in a sense start with an implicit *:*, or a full table of results, and allow you to pare it down.
I believe your OR is effectively being ignored, since it doesn't, strictly speaking, make sense in this context. Generally, OR is just syntactic sugar, I believe (field:this OR field:that is equivalent to field:this field:that).
So, in effect, your query is startDate:[* TO NOW/DAY+1DAY] -startDate:[* TO *], which makes the results you see less surprising. When you wrap each part in parentheses, each subquery is treated separately, and you regain access to Solr's support for lone negative queries.
A much better idea is to store a default value if you need to search for unset/null values. *:*, and by extension pure negative queries like this, has to scan the entire index, and so performs very poorly. Providing a default value will improve performance and prevent this sort of confusing situation.
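For example (a sketch; the sentinel date is an arbitrary choice), if documents with no real start date are indexed with a far-past default such as startDate:1970-01-01T00:00:00Z, a single positive range clause covers both cases:
fq=startDate:[* TO NOW/DAY+1DAY]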
I used femtoRgon's answer and was able to construct a query that included a range and blank values.
The following includes all docs with a StartDate on or after 1/1/2014 and all docs without a StartDate.
(StartDate:[2014-01-01T00:00:00Z TO *]) OR (-StartDate:([* TO *]) AND *:*)
The magic is (-StartDate:([* TO *]) AND *:*). This will select the docs without a StartDate.
Pure negative queries don't work, because they are omitting results from nothing.
Try:
*:* AND -startDate:[* TO *]
When you query with -startDate:[* TO *] you get documents which do not have any data for the startDate field.
When you query for startDate:[* TO NOW/DAY+1DAY] you get documents which have a value less than or equal to NOW/DAY+1DAY in the startDate field.
You could try -startDate:* OR startDate:[* TO NOW/DAY+1DAY]. The first part selects documents that do not have a value, and the second part selects documents having a value less than or equal to NOW/DAY+1DAY in the startDate field.
Here's a text with ambiguous words:
"A man saw an elephant."
Each word has attributes: lemma, part of speech, and various grammatical attributes depending on its part of speech.
For "saw" it is like:
{lemma: see, pos: verb, tense: past}, {lemma: saw, pos: noun, number: singular}
All these attributes come from third-party tools; Lucene itself is not involved in the word disambiguation.
I want to perform a query like "pos=verb & number=singular" and NOT to get "saw" in the result.
I thought of encoding the distinct grammatical annotations into strings like "l:see;pos:verb;t:past|l:saw;pos:noun;n:sg" and searching for the regexp "pos\:verb[^\|]+n\:sg", but I definitely can't afford regexp queries due to performance issues.
Maybe some hacks with posting list payloads can be applied?
UPD: A draft of my solution
Here are the specifics of my project: there is a fixed maximum number of parses a word can have (say, 8).
So, I thought of storing the parse number in each attribute's payload and using this payload at the posting-list intersection stage.
E.g., we have a posting list for 'pos = Verb' like ...|...|1.1234|...|..., and a posting list for 'number = Singular': ...|...|2.1234|...|...
While processing a query like 'pos = Verb AND number = Singular', the 'x.1234' entries would be accepted at all stages of posting-list processing until the intersection stage, where they would be rejected because their parse numbers do not correspond.
I think this is a pretty compact solution, but how hard would it be to incorporate into Lucene?
So... the cheater way of doing this is (indeed) to control how you build the Lucene index.
When constructing the Lucene index, modify each word before Lucene indexes it so that it includes all the necessary attributes of the word. If you index things this way, you must do lookups the same way.
One way:
This means for each type of query you do, you must also build an index in the same way.
Example:
saw becomes noun-saw -- index it as that.
saw also becomes verb-past-see -- index it as that.
saw also becomes noun-singular-saw -- index it as that.
The other way:
If you want attribute-based lookup in a single index, you'd probably have to do something like permutation completion on the word 'saw', so that instead of just noun-saw you'd index all possible permutations of the attributes needed in a big logic statement.
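For instance (these token forms are hypothetical), the noun parse of "saw" might be expanded into every attribute combination you want to be able to query:
noun-saw
singular-saw
noun-singular-saw
A query for pos=noun & number=singular then becomes a single-term lookup of noun-singular-saw, and it never matches the verb parse.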
Not sure if this is a good answer, but that's all I could think of.