Multiple queries in Solr - search

My problem: I have n fields (say around 10) in Solr that are searchable; they are all indexed and stored. I would like to first run a query on my whole index of, say, 5000 docs, which will hit around 500 docs on average. Next I would like to query with a different set of keywords on those 500 docs, and NOT on the whole index.
So the first time I send a query, a score is generated; the second time I run a query, the new score should be based only on the 500 documents from the previous query. In other words, Solr should treat only those 500 docs as the whole index.
To summarise: an index of 5000 will be filtered to 500 and then to 50 (5000 > 500 > 50). It's basically filtering, but I would like to do it inside Solr.
I have a reasonable basic knowledge and am still learning.
Update: if represented mathematically, it would look like this:
results1=f(query1)
results2=f(query2, results1)
final_results=f(query3, results2)
I would like this to be accomplished programmatically, and the end user will only see the 50 final results. So faceting is not an option.

Two likely implementations occur to me. The simplest approach would be to just add the first query to the second query:
+(first query) +(new query)
This is a good approach if the first query, which you want to filter on, changes often. If the first query is something like a category of documents, or something similar where you can benefit from reuse of the same filter, then a filter query is the better approach, using the fq parameter, something like:
q=field:query2&fq=categoryField:query1
Filter queries cache a set of document IDs to filter against, so for commonly used searches (categories, common date ranges, etc.) a significant performance benefit can be gained. For uncommon searches or user-entered search strings, though, caching the results may just incur needless overhead and pollute the cache with a useless result set.
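To make the two approaches concrete for the asker's first two stages, a sketch (the keywords field name is hypothetical):
q=+(keywords:query1) +(keywords:query2)
q=keywords:query2&fq=keywords:query1
Both return the same document set, but only the second caches the query1 result set for reuse, and only the second keeps query1 from influencing the relevance score.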

Filter queries (fq) are specifically designed to do quick restriction of the result set by not doing any score calculation.
So, if you put your first query into fq parameter and your second score-generating query in the normal 'q' parameter, it should do what you ask for.
See also a question discussing this issue from the opposite direction.
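Applied to the asker's 5000 > 500 > 50 narrowing, each completed stage moves into its own fq parameter; a sketch, with a hypothetical field name:
select?q=keywords:query2&fq=keywords:query1
select?q=keywords:query3&fq=keywords:query1&fq=keywords:query2
Multiple fq parameters are ANDed together, and each one is cached independently in the filter cache, so the earlier stages cost almost nothing on repeat requests.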

I believe you want to use a nested query like this:
text:"roses are red" AND _query_:"type:poems"
You can read more about nested queries here:
http://searchhub.org/2009/03/31/nested-queries-in-solr/
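For the asker's case, the first-stage query could be nested alongside the second-stage query like this (a sketch; field names hypothetical, and {!lucene} simply selects the standard query parser for the nested clause):
q=keywords:(query2) AND _query_:"{!lucene df=keywords}query1"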

You should take a look at "faceted search" in Solr: http://wiki.apache.org/solr/SolrFacetingOverview. This will help you with this kind of "iterative" search.

Related

Using top with Azure Search Suggestions

I am building a search page with Azure Search. On my page, I have a search box. I want to provide suggestions to the users. In an attempt to do this, I'm using the Suggestions endpoint on my index. At this time, I have a request that includes the following query string:
search=sta&suggesterName=sites&$top=3
My question is, how does $top determine which three results to return? Is it just the first three matches it encounters when going through the search index, or something else? Based on the URL structure, I don't think a scoring profile is being used, so I ruled out relevancy. But then I started reading about the minimumCoverage parameter and I got confused.
If the suggest endpoint just returns the first [top] matches it encounters, then why is the minimumCoverage field even needed?
In general, $top will give you the top N results based on whatever order the rest of the query specifies. For queries with no $orderby, the sort order is descending by relevance score. This applies to both Suggest and Search.
Note that just because you don't have a scoring profile (such as with Suggest), that doesn't mean Azure Search doesn't calculate relevance scores for each document. Scoring profiles can influence the score, but they do not completely define it.
For queries with an $orderby, the order of results is defined first by the fields in the $orderby, and then by score if there are any ties to be broken.
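For example, adding a sort field to the suggest request from the question overrides the default score ordering (a sketch; lastUpdated is a hypothetical sortable field in the index):
search=sta&suggesterName=sites&$top=3&$orderby=lastUpdated desc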
minimumCoverage has nothing to do with ordering or $top. It has to do with the way search queries are distributed. Every query is executed concurrently against different subsets of the index (this happens regardless of whether or not you have multiple search units). Sometimes one of these subsets fails to execute the query for whatever reason, usually when your search service is under heavy load. The minimumCoverage parameter provides a way to relax the rule that normally says "X% of the index must successfully execute the query in order to consider the overall query a success" (X is 100 by default for Search and 80 by default for Suggest). This is a way to trade off completeness of search results for higher availability in case of heavy load or partial outages.
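As a sketch, the request from the question could demand full coverage instead of the Suggest default of 80 (the value is a percentage; lowering it instead trades completeness for availability):
search=sta&suggesterName=sites&$top=3&minimumCoverage=100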

Fastest way to search a SQL Server table (or indexed view) column with "like '%search%'"?

Suppose there's a table with columns (UserID, FieldID, Value), with half a million records. I want to see if some search term T(N) occurs anywhere in each Value (i.e. Value.Contains( T(N) ) ).
I think I'm just hitting a wall volume-wise; there are too many values to sift through. I don't think a full-text index will help, because it only handles StartsWith-style matches on individual words, not occurrences anywhere within the string.
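For reference, the pattern being described presumably boils down to something like this (table and parameter names are assumed); the leading wildcard is exactly what prevents any ordinary index, full-text or otherwise, from seeking:
SELECT UserID, FieldID, Value
FROM UserFieldValues                  -- hypothetical table name
WHERE Value LIKE '%' + @T + '%';      -- leading % forces a full scan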
Is there a good approach to indexing this kind of data for such a search in SQL Server?
A half-million records is not terribly large, although I don't know the size of the field contents. A couple of ideas follow; this was too long for a comment, or else I would have posted it as one.
You could implement a full-text search engine like Elastic, Solr, etc., and use it as a sidecar. If, when you are doing text searches, you are not otherwise making much use of the other data, this might be easy enough. Note that you could put other data for searching into Elastic or Solr, but I'm not sure if you'd want to duplicate all your data, and those tools aren't really great as a transactional data store.
Another option for volumes this small, assuming you only need basic "contains" searching: create two more tables: keywords and keyword_index (or whatever). When saving, tokenize your text content and write out any new keywords to keywords table and then add the data to the join table. Index everything, and then do your search off the keywords table, joining back to the master via the intermediate keyword_index table.
This is fairly hackish, and getting your keyword handling really dialed in (for stemming, etc) may be a pain. It is a reasonable quick & dirty solution for smaller-scale needs though.
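A minimal T-SQL sketch of that layout, using the table names suggested above (the master table and its row_id key column are assumptions):
CREATE TABLE keywords (
    keyword_id INT IDENTITY PRIMARY KEY,
    keyword    NVARCHAR(200) NOT NULL UNIQUE    -- one row per distinct token
);
CREATE TABLE keyword_index (
    keyword_id INT NOT NULL REFERENCES keywords (keyword_id),
    row_id     INT NOT NULL,                    -- key of the master-table row
    PRIMARY KEY (keyword_id, row_id)
);
-- "contains word" search: an index seek on keyword, then join back to the data
SELECT m.UserID, m.FieldID, m.Value
FROM keywords k
JOIN keyword_index ki ON ki.keyword_id = k.keyword_id
JOIN UserFieldValues m ON m.row_id = ki.row_id  -- hypothetical master table
WHERE k.keyword = @T;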

What indexer do I use to find the list in the collection that is most similar to my list?

Let's say I have my list of ingredients:
{'potato','rice','carrot','corn'}
and I want to return lists from a database that are most similar to mine:
{'beans','potato','oranges','lettuce'},
{'carrot','rice','corn','apple'},
{'onion','garlic','radish','eggs'}
My query would return this first:
{'carrot','rice','corn','apple'}
I've used Solr, and have looked at CloudSearch, ElasticSearch, Algolia, Searchify and Swiftype. These engines only seem to let me put in one query string and then filter by other facets.
In a real scenario my search list will be about 200 items long and will be matching against about a million lists in my database.
What technology should I use to accomplish what I want to do?
Should I look away from search indexers and towards database-esque things like Mongo, MapReduce, Hadoop...? All I know are the names of these technologies; I just need someone to point me towards the right technology path to explore for this.
With so much data I can't really loop through it; I need to query everything at once.
I wonder what keeps you from trying it with Solr, as Solr provides much of what you need. You can declare the field as type="string" multiValued="true" and save each list item as a value. Then, when querying, you specify each of the items in the list as a search term for that field, and Solr will, by default, return the closest matches first.
If you need exact control over what counts as a match (e.g. at least 40% of the terms from the search list have to be in a matching list), you can use the mm (minimum should match) parameter of the eDisMax query parser; cf. the Solr Wiki.
Having said that, I must add that I've never searched with 200 query terms (do I understand correctly that the list whose contents should be matched will contain about 200 items?) and do not know how well that performs. But setting up a test core and filling it with random lists using a script should not take more than a few hours, so it should be possible to evaluate the performance of this approach without investing too much time.
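A sketch of both halves, assuming a Solr version with the eDisMax parser, an ingredients field, and single-token list items (40%25 is the URL-encoded 40%):
Schema:
<field name="ingredients" type="string" indexed="true" stored="true" multiValued="true"/>
Query (best-matching lists first, keeping only lists that share at least 40% of the terms):
select?defType=edismax&qf=ingredients&mm=40%25&q=potato+rice+carrot+corn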

Reduce the query time in Solr

I am using Solr for searching. My index is growing larger hour by hour, so query time is also increasing. Many people have suggested sharding. Is this the last option? What should I do now?
Before rushing into sharding, which will definitely make your search faster, you might have a look at your schema and see if you can do any optimisations there.
Use stop words: stop words are very common words that can inflate the index size unnecessarily. Filter them out wherever they are not needed for matching.
Avoid synonyms with the 'expand' option if you can; those also grow the index enormously.
Avoid using n-grams with a large size range; a wide range generates far too many term combinations.
Use query filters (fq parameter) when you just need a filter. Filter queries are faster than normal queries, and they don't apply any scoring. It is just a filter. So if you need to AND queries together, put the filter queries in the fq parameter.
Run "Optimise Index" from time to time to get rid of deleted docs in the index, and to reduce index size.
Use debugQuery=on and see if you can spot anything that is taking a long time.
Try the documentCache if you have large documents.
Try the filterCache if you have repeated filter queries.
Try the queryResultCache if you have repeated queries (a configuration sketch follows this list).
If none of the above results in any performance gains, then you might consider sharding/distributed search.
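The three caches mentioned above live in solrconfig.xml; a sketch with illustrative sizes (tune them against the hit ratios reported on the admin stats page):
<filterCache class="solr.FastLRUCache" size="512" initialSize="512" autowarmCount="128"/>
<queryResultCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="32"/>
<documentCache class="solr.LRUCache" size="512" initialSize="512"/>
Note that documentCache takes no autowarmCount; it cannot be autowarmed because internal document IDs change between index versions.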

SOLR - How to have facet counts restricted to rows returned in resultset

/select/?q=*:*&rows=100&facet=on&facet.field=category
I have around 100,000 documents indexed, but I return only 100 of them using rows=100. The facet counts returned for category, however, are counts over all of the indexed documents.
Can we somehow restrict the facets to the result set returned? i.e 100 rows only?
I don't think it is possible in any direct manner, as was pointed out by Pascal.
I can see two ways to achieve this:
Method I: do the counting yourself by visiting the 100 results returned. This is easy and fast for categorical fields, but harder for text fields that need to be tokenized, etc.
Method II: do two passes:
Do a normal query without facets (you only need to request doc ids at this point)
Collect all the IDs of the documents returned
Do a second query for all fields and facets, adding a filter to restrict the result to those IDs collected in step 2. Something like:
select/?q=*:*&facet=on&facet.field=category&fq=id:(312 OR 28 OR 1231 ...)
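The first pass (steps 1 and 2) only needs the IDs, so restrict the field list to keep the response small; a sketch in the same style:
select/?q=<your query>&fl=id&rows=100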
Method I is far more efficient, and I would recommend it for non-textual fields. Method II is computationally expensive but has the advantage of working for all types of fields.
Sorry, but I don't think it is possible. The facets are always based on all the documents matching the query.
Not a real answer but maybe better than nothing: the results grouping feature (check out from trunk!):
http://wiki.apache.org/solr/FieldCollapsing
where facet.field=category is then similar to group.field=category, and you will get only as many groups ('facet hits') as you specified!
If you always execute the same query (q=*:*), maybe you can use facet.limit, for example:
select/?q=*:*&rows=100&facet=on&facet.field=category&facet.limit=100
Tell us if the order Solr uses in the facets is the same as in the query.
