I want to build a smart search with Algolia. The point is to use keywords to rank the results. Let's say a user types "smartphone blue cheap good camera". This should find all blue smartphones and order them by price and camera characteristics.
The idea is to somehow map those keywords to a ranking formula.
Does anyone know if this is possible with Algolia and, if so, what is the best way to achieve the desired result?
To automatically detect and filter by facet values (like blue, good camera), you could use Query Rules, in particular Dynamic Filtering.
However, that shouldn't be necessary. If you include the color (containing for instance the blue value) and characteristics (containing for instance the good camera value) attributes in your searchableAttributes list, then the search request will return relevant results based on purely textual relevance matched in those attributes.
On the other hand, sorting strategies affect Algolia indices at build time. Therefore, to change the sorting strategy based on the query (e.g. sort results by ascending price if the search query contains cheap), you will need to set up a replica index whose results are sorted by price. On the frontend, when you detect a relevant keyword (e.g. cheap), you can decide whether to send the search query to the primary index or to the sorted replica.
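Here is a minimal sketch of that routing logic using the Algolia Python client. The index names, the keyword map, and the assumption that a "products_price_asc" replica sorted by ascending price already exists are all illustrative, not part of the original answer:

```python
# Sketch: route the query to a price-sorted replica when a sort keyword is detected.
# Assumes a primary index "products" with a replica "products_price_asc" sorted by
# ascending price, and that color/characteristics are in searchableAttributes.
from algoliasearch.search_client import SearchClient

PRIMARY_INDEX = "products"
SORT_KEYWORDS = {"cheap": "products_price_asc"}  # keyword -> replica index name

def pick_index(query: str) -> str:
    """Return the replica name if the query contains a sort keyword, else the primary."""
    tokens = query.lower().split()
    for keyword, replica in SORT_KEYWORDS.items():
        if keyword in tokens:
            return replica
    return PRIMARY_INDEX

client = SearchClient.create("YOUR_APP_ID", "YOUR_SEARCH_ONLY_KEY")  # placeholder credentials
query = "smartphone blue cheap good camera"
index = client.init_index(pick_index(query))
results = index.search(query)  # textual relevance handles "blue" / "good camera"
```

In practice you may also want to strip the detected keyword from the query text before sending it, so that it does not interfere with textual matching.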
The Azure Search docs state that:
A high cardinality field consists of a facetable or filterable field that has a significant number of unique values, and as a result, consumes significant resources when computing results
But it's not clear on whether this poor performance is limited to when the fields are specifically used in a filter/facet query, or whether it also affects performance when the field is queried against using search terms.
Can anyone with some deeper Azure Search knowledge weigh in?
After getting clarification from Microsoft, I can confirm that the answer is "no, performance is only affected when using the field in a facet/filter".
This poor performance is limited to when the fields are specifically used in a filter/facet query. The searchable terms will not be affected.
Fields that work best in faceted navigation have low cardinality: a small number of distinct values that repeat throughout documents in your search corpus (for example, a list of colors, countries/regions, or brand names).
If a field has a significant number of unique values, it will consume significant resources when computing facet navigation, because each distinct value becomes its own facet and must be counted.
At query time, a filter parser accepts criteria as input, converts the expression into atomic Boolean expressions represented as a tree, and then evaluates the filter tree over filterable fields in an index.
If a field has a significant number of unique values, the filter tree will be deep and consume significant computing resources, because each unique value has to be evaluated in the filter and there is no cached result for duplicate values to reduce the work.
Searchable fields are not affected by a significant number of unique values, because searchable fields are backed by an inverted index that accelerates queries.
When you load the index, each field's inverted index is populated with all of the unique, tokenized words from each document, with a map to corresponding document IDs. For example, when indexing a hotels data set, an inverted index created for a City field might contain terms for Seattle, Portland, and so forth. Documents that include Seattle or Portland in the City field would have their document ID listed alongside the term.
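As a rough illustration of the structure described above (a toy sketch, not Azure Search's actual implementation), here is an inverted index in Python that maps each token to the IDs of the documents containing it:

```python
from collections import defaultdict

# Toy documents, loosely following the hotels example above.
docs = {
    1: {"City": "Seattle"},
    2: {"City": "Portland"},
    3: {"City": "Seattle"},
}

# Build an inverted index: token -> set of document IDs.
inverted_index = defaultdict(set)
for doc_id, doc in docs.items():
    for token in doc["City"].lower().split():
        inverted_index[token].add(doc_id)

# A term lookup is now a single dictionary access, regardless of how many
# unique values exist across the corpus.
print(inverted_index["seattle"])  # {1, 3}
```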
I reached out to MS as well; this is the answer that I got:
“High cardinality” means different things to filterable vs searchable fields. Cardinality for filterable fields amounts to the uniqueness of the full value of the field. For searchable fields, it’s about the aggregate number of indexed terms that results from writing a document to the index. Complex custom analyzers, for example, can bloat the index by producing several tokens for each word in a string. Inverted indexes scale really well, so I wouldn’t be too concerned about having a high number of unique words in the index. But, this should help understand the unit of scale each.
This mention in the documentation is primarily to raise awareness about what contributes to query performance and why they may see reduced performance as they add additional fields to the filter clause. I will add…You can improve the performance of individual queries by scaling up the number of partitions in your service. Going from 1 to 2 not only doubles the storage available to your service, it also doubles the amount of compute power available to execute queries. The data workload is divided roughly equally between each partition. It doesn’t usually equate to exactly twice the performance for your queries, but it can have a significant impact if you are seeing slow queries.
I have a database of product information indexed by name, type, manufacturer, etc. Users often submit search queries whose results would be contained neatly in one or more facets. When this situation arises, I would like for Solr to parse the query and apply the relevant facets.
For example, searching shoes should return results in the shoe category. More ambitiously, searching plaid shirt should query plaid on items in the shirt category.
Is it possible to configure Solr to do this?
Thanks in advance.
Asking Solr to do what you want is a tall order. Your best bet would be to store categories in a field that is weighted very highly. For example, if you have a category field with the value "shoes", a hit on that field will increase the relevance of documents in that category, so they show up first. The same goes for the second example.
As for faceting, your question is not clear on how you want to apply faceting.
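One way to apply the heavy field weighting suggested above is the (e)dismax parser's qf parameter. The following is only a sketch, issued with Python's requests against a local Solr core named "products"; the core name, field names, and boost factors are assumptions:

```python
import requests

# Sketch: weight the category field much higher than the free-text fields so
# that a query like "plaid shirt" strongly prefers documents in the "shirt"
# category. Core name, field names, and boosts are illustrative.
params = {
    "q": "plaid shirt",
    "defType": "edismax",
    "qf": "name^1 description^1 category^10",  # heavy boost on category hits
    "wt": "json",
}
resp = requests.get("http://localhost:8983/solr/products/select", params=params)
print(resp.json()["response"]["numFound"])
```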
I know that ElasticSearch uses relevance ranking algorithms such as Lucene's tf/idf, length normalization, and a couple of other algorithms to rank term queries applied to textual fields (i.e. searching for the words "medical" AND "journal" in the "title" and "body" fields).
My question is how does ElasticSearch rank and retrieve results of a filter or range query (i.e. age=25, or weight>60)?
I know these types of queries just filter documents based on the condition(s). But let's say I have 200 documents whose age field value is 25. Which of those documents will be retrieved as the top 10 results?
Does ElasticSearch retrieve them by the order it indexed them?
From the Elasticsearch documentation:
Filters: As a general rule, filters should be used instead of queries:
for binary yes/no searches
for queries on exact values
Queries: As a general rule, queries should be used instead of filters:
for full text search
where the result depends on a relevance score
So when running a search such as "age=25, or weight>60" you should be using a filter.
However - Filters do not affect the scoring - i.e. if you only used a filter your search results would all have the same score.
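For example, here is a sketch of expressing both conditions in filter context with the elasticsearch-py client. The index and field names are illustrative, and the exact call signature depends on the client version (older versions take the query under a body= argument instead):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # placeholder endpoint

# Both conditions sit in the bool query's filter context: they only include or
# exclude documents and do not contribute to the relevance score, so all hits
# come back with the same score. Adjust the boolean logic to your needs.
resp = es.search(
    index="people",  # illustrative index name
    query={
        "bool": {
            "filter": [
                {"term": {"age": 25}},
                {"range": {"weight": {"gt": 60}}},
            ]
        }
    },
    size=10,
)
print(resp["hits"]["total"])
```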
There is a range query - this is a query that would affect score and I would guess that it scores documents based on things like the document timestamp (most recent gets a higher score).
You'd need to explore the documentation further and dig into the Lucene documentation to understand exactly how and why a document got its score - but as above, you may be better off using filters, which don't affect scoring.
I need to know how Lucene orders the records in a result set if I use composite queries.
It looks like it sorts results by the "score" value for exact queries and lexicographically for range queries. But what if I have a query that looks like
q = type:TAG OR type:POST AND date:[111 TO 999]
You are mixing together logical search and scoring. When you pass a query like date:[111 TO 999], Lucene searches for all documents with the date in the specified range. But you give it no advice on how to sort them - is date 111 more preferable for you than 555? Is 701 better than 398? Lucene has no idea, so the score is the same for all found documents. Just to impose some order, Lucene sorts results lexicographically, but that's mostly an implementation detail, not a key idea.
On the other hand, if you pass some other parameters with a query - be it keywords or tags - Lucene can apply its similarity algorithm and assign different scores to different docs in the results. You can find more on Lucene's scoring here.
So, to give you a short answer: Lucene sorts results by score, and only when the score for two documents is the same does it fall back to other sorting options, such as lexicographical order.
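The effective ordering described above can be pictured as a two-level sort key: score first (descending), then a tie-breaker for equal scores. A purely illustrative Python sketch (not Lucene code; the document IDs, scores, and dates are made up):

```python
# Hypothetical hits: (doc_id, score, date). The term clauses produced real
# scores; the range-only matches all share the same constant score.
hits = [
    ("doc_c", 1.7, 450),
    ("doc_a", 1.0, 300),
    ("doc_b", 1.0, 120),
    ("doc_d", 1.0, 999),
]

# Sort by score descending; break ties with a secondary key (here the doc id,
# standing in for the lexicographic tie-break mentioned above).
ranked = sorted(hits, key=lambda hit: (-hit[1], hit[0]))
for doc_id, score, date in ranked:
    print(doc_id, score, date)
```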
I've recently started experimenting with Solr. My data is indexed and searchable. My problem is in the sorting. I have three fields: Author, Title, Sales.
I would like to search against the author & title fields, but have the sales value influence the score so that matches with higher sales move toward the top, even if the initial match score is not the highest.
Simply sorting by sales does not produce valid results: an item with a near-zero score for the search term but a lot of overall sales could end up above a perfect match for the term that has never been sold.
I am seeing results that, while great term matches, are not necessarily the product I want showing at the top of the list.
If you're using the dismax handler, you can add a boost function (bf) with the field you want to boost on, e.g.
http://...?q=foo&bf="fieldValue(sales)^1.5"
...to make the value of the sales figure give a bump. You can, of course, make the function more complex if you want to munge the sales data in some way.
More info is easily found.
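As a concrete sketch of the request above, here is the same idea expressed through the edismax parser with Python's requests. The core name, query fields, and the field(sales) function are assumptions for illustration; check the function query syntax supported by your Solr version:

```python
import requests

# Sketch: search author/title but let the sales figure nudge the score upward.
# bf adds the value of the boost function to the relevance score; bq (boost
# query) or the multiplicative boost parameter are alternatives depending on
# how strong an effect you want.
params = {
    "q": "foo",
    "defType": "edismax",
    "qf": "author title",
    "bf": "field(sales)^1.5",  # additive boost from the numeric sales field
    "wt": "json",
}
resp = requests.get("http://localhost:8983/solr/books/select", params=params)
for doc in resp.json()["response"]["docs"]:
    print(doc.get("title"), doc.get("sales"))
```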
You may also just want to do this at index time since the sales data isn't going to be changing on the fly.
You can also use Index-time boosting.
And here's detailed info on using function queries to influence scoring.