PouchDB get documents by ID with certain string in them - couchdb

I would like to get all documents that contain a certain string in their ID, but I can't seem to find a solution for it.
For example, I have the following doc IDs:
vw_10
vw_11
bmw_12
vw_13
bmw_14
volvo_15
vw_16
How can I get allDocs for just the IDs with the string vw_ in them?

Use the batch fetch API:
db.allDocs({startkey: "vw_", endkey: "vw_\ufff0"})
Note: \ufff0 is a very high Unicode code point, used here as a sentinel to mark the end of the range over ordered string keys, so the query covers every ID starting with vw_.
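For the IDs in the question, a minimal sketch (assuming a PouchDB handle called db) might look like this:
// Fetch every doc whose _id starts with "vw_". \ufff0 acts as a high sentinel
// so the key range covers the whole prefix.
db.allDocs({
  startkey: 'vw_',
  endkey: 'vw_\ufff0',
  include_docs: true
}).then(function (result) {
  result.rows.forEach(function (row) {
    console.log(row.id, row.doc);
  });
}).catch(function (err) {
  console.log(err);
});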

You can use the PouchDB Find plugin API, which IMO is far more sophisticated than allDocs for querying. The plugin has a regex search operator which allows you to do exactly this.
db.find({selector: {name: {$regex: '^vw_'}}});
It's in BETA at the time of writing, but we are about to ship a production app with it; that's how stable it has been so far. See https://github.com/nolanlawson/pouchdb-find for more on PouchDB Find.
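A rough sketch, assuming the plugin is registered and the documents have a name field to match on (depending on the plugin version, $regex matching may happen in memory rather than through the index):
// Create an index on the field, then query it with a regex selector.
db.createIndex({index: {fields: ['name']}}).then(function () {
  return db.find({selector: {name: {$regex: '^vw_'}}});
}).then(function (result) {
  console.log(result.docs);
});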

You had better have a view keyed on the field you want to search. This ensures that the key is indexed; otherwise, the search might be too slow.
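A hypothetical sketch of such a view in PouchDB (the design-document and view names are placeholders):
// The map function emits the field you want to search on, so it becomes an indexed key.
var ddoc = {
  _id: '_design/search',
  views: {
    by_name: {
      map: function (doc) {
        if (doc.name) {
          emit(doc.name, null);
        }
      }.toString()
    }
  }
};
db.put(ddoc).then(function () {
  // Query the indexed key with the same prefix-range trick as allDocs.
  return db.query('search/by_name', {startkey: 'vw_', endkey: 'vw_\ufff0', include_docs: true});
}).then(function (result) {
  console.log(result.rows);
});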

Related

Honoring previous searches in the next search result in Solr

I am using Solr for searching. I want to improve my search result quality based on previously searched terms. Suppose I have two products in my index named 'Jewelry Crystal' (say it belongs to Group 1) and 'Compound Crystal' (say it belongs to Group 2). Now, if we query for 'Crystal', both products will come back.
Say I had previously searched for 'Jewelry Ornament' and then searched for 'Crystal'; I would expect only one result ('Jewelry Crystal') to come back. There is no point in showing the 'Compound Crystal' product to a person looking for jewelry-type products.
Is there any way in Solr to honor this kind of behavior, or is there any other method to achieve this?
First of all, there is nothing built into Solr to achieve this. What you need is some kind of user session, which Solr does not support, or client-side storage such as a cookie holding the preceding query.
But to achieve the boost you can use a runtime Boost Query.
Assuming you're using the edismax QueryParser, you can add the following to your Solr query:
q=Crystal&bq=(Jewelry Ornament)
See http://wiki.apache.org/solr/ExtendedDisMax#bq_.28Boost_Query.29
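Put together as a full request against a hypothetical local Solr instance, with the previously searched terms carried over into bq (optionally with a boost factor), that could look like:
http://localhost:8983/solr/select?defType=edismax&q=Crystal&bq=(Jewelry+Ornament)^2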

Misconceptions about search indexing? (Haystack/Whoosh)

I'm using haystack with whoosh for development purposes.
I want search results based on django models to be filtered by the user that created them.
Please see my other post Filter haystack result with SearchQuerySet for details.
Basically I had to add User to my search index. But I noticed that when I manually change the user_id of a record, search breaks. After thinking about it, this even makes sense. But does this mean I have to rebuild the index after each field update in each model? Surely that doesn't scale at all?
I thought the engine would find the object by ID, then look it up in the database and return a current instance for further processing such as filtering. It seems like everything is cached in the index, so it must be synchronized in real time for search results to show up? Am I missing something here?
This documentation helped shed some light:
http://docs.haystacksearch.org/dev/searchindex_api.html

How to implement faceted search suggestion with number of relevant items in Solr?

Hi
I have a very specific need in my company for the system's search engine, and I can't seem to find a solution.
We have a Solr index of items, all of which have the same fields, one of the fields being "Type" (and of course "Title", "Text", and so on).
What I need is: given an item Type and a query string, I need to return a list of search suggestions, each also saying how many items of the correct type that suggestion would return.
Something like: if the original string is "goo", I'll get
Goo 10
Google 52
Goolag 2
and so on.
Now, how do I do it?
I don't want to re-query Solr for each different suggestion, but if there is no other way, I just might.
Thanks in advance
You can try edge n-gram tokenization:
http://search.lucidimagination.com/search/document/CDRG_ch05_5.5.6
You can try facets. Take a look at my more detailed description ('Autocompletion').
This was implemented at http://jetwick.com with Solr. It now uses ElasticSearch, but the Solr sources are still available and the idea is identical: https://github.com/karussell/Jetwick
The SpellCheckComponent of Solr (which gives the suggestions) has extended results that can give the frequency of every suggestion in the index - http://wiki.apache.org/solr/SpellCheckComponent#Extended_Results.
However, the .Net component SolrNet, does not currently seem to support the extendedResults option: "All of the SpellCheckComponent parameters are supported, except for the extendedResults option" - http://code.google.com/p/solrnet/wiki/SpellChecking.
This can be implemented using a facet field query with a prefix set. You can test it using the XML handler like this:
http://localhost:8983/solr/select/?rows=0&facet=true&facet.field=type&f.type.prefix=goo

Is there any way to search through CouchDB documents for substring

CouchDB lets you search values starting from a startkey, for an exact key-value pair, etc.
But is there any way to search for a substring in a specified field?
The problem is this. Our news database consists of about 40,000 news documents. Say they have title, content and url fields. We want to find news documents which have "restaurant" in their title. Is there any way to do it?
The View Collation wiki page says nothing about this :( It seems strange to me that there's no tool to handle this problem, and all I can do is parse JSON results with Python, PHP or something else. In MySQL it's simply the LOCATE() function.
Use couchdb-lucene.
Be careful here. Lucene is not always the best answer.
If you're only searching one limited field, and only for a word like "restaurant", then Lucene, which is really meant for tokenizing large texts/documents, can be overkill; you can get the same effect by splitting the title.
function (doc) {
  // Emit each word of the title as a separate key so titles can be matched word by word
  var stringarray = doc.title.split(" ");
  for (var idx in stringarray) {
    emit(stringarray[idx], doc);
  }
}
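Once that map function is saved as a view in a design document, you can query it for a single word over HTTP; the database, design-document and view names below are placeholders:
GET /newsdb/_design/search/_view/by_title_word?key="restaurant"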
Also, neither Lucene nor CouchDB supports substring search where the substring is not at the beginning of a word.

WildcardQuery error in Solr

I use Solr to search for documents, and when trying to search using the query "id:*", I get a query parser exception telling me that it cannot parse a query with * or ? as the first character.
HTTP Status 400 - org.apache.lucene.queryParser.ParseException: Cannot parse 'id:*': '*' or '?' not allowed as first character in WildcardQuery
type Status report
message org.apache.lucene.queryParser.ParseException: Cannot parse 'id:*': '*' or '?' not allowed as first character in WildcardQuery
description The request sent by the client was syntactically incorrect (org.apache.lucene.queryParser.ParseException: Cannot parse 'id:*': '*' or '?' not allowed as first character in WildcardQuery).
Is there any patch for getting this to work with just * ? Or is it very costly to do such a query?
If you want all documents, do a query on *:*
If you want all documents with a certain field (e.g. id) try id:[* TO *]
Lucene doesn't allow you to start WildcardQueries with an asterisk by default, because those are incredibly expensive queries and will be very, very, very slow on large indexes.
If you're using the Lucene QueryParser, call setAllowLeadingWildcard(true) on it to enable it.
If you want all of the documents with a certain field set, you are much better off querying or walking the index programmatically than using QueryParser. You should really only use QueryParser to parse user input.
id:[a* TO z*] id:[0* TO 9*] etc.
I just did this in lukeall on my index and it worked, therefore it should work in Solr which uses the standard query parser. I don't actually use Solr.
In plain Lucene there's a good reason why you'd never query for every document: to run a query you have to open a new IndexReader("DirectoryName") and apply the query to it. So you can skip applying a query entirely and use the IndexReader methods numDocs() to get a count of all the documents, and document(int n) to retrieve any of them.
If you are just trying to get all documents, Solr does support the *:* query. It's the only time I know of that Solr will let you begin a query with an *. I'm sure you've probably seen this as the default query in the Solr admin page.
If you are trying to do a more specific query with an * as the first character, like say id:*456 then one of the best ways I've seen is to index that field twice. Once normally (field name: id), and once with all the characters reversed (field name: reverse_id). Then you could essentially do the query id:456 by sending the query reverse_id:654 instead. Hope that makes sense.
You can also search the Solr user group mailing list at http://www.mail-archive.com/solr-user@lucene.apache.org/ where questions like this come up quite often.
The following Solr issue is a request to be able to configure the default Lucene query parser.
https://issues.apache.org/jira/browse/SOLR-218
In this issue you can find a description of how to 'patch' Solr. This modification would allow you to start queries with a *.
Jonas Salk: I've basically updated only one Java file: SolrQueryParser.java.
public SolrQueryParser(IndexSchema schema, String defaultField) {
  ...
  setAllowLeadingWildcard(true);
  setLowercaseExpandedTerms(true);
  ...
}
...
public SolrQueryParser(QParser parser, String defaultField, Analyzer analyzer) {
  ...
  setAllowLeadingWildcard(true);
  setLowercaseExpandedTerms(true);
  ...
}
I'm not sure if setLowercaseExpandedTerms is needed...
I'm assuming with id:* you're just trying to match all documents, right?
I've never used Solr before, but in my Lucene experience, when ingesting data we've added a hidden field to every document; then, when we need to return every record, we search for the constant string in that field that is the same for every record.
If you can't add a field like that in your situation, you could use a RegexQuery with a regex that would match anything that could be found in the id field.
Edit: actually answering the question. I've never heard of a patch to get that to work, but I would be surprised if it could even be made to work reasonably well. See this question for a reason why unconstrained PrefixQueries can cause a problem.
Actually, I have been using a workaround for this. I append a character to the id, e.g. A1, A2, etc.
With such values in the field, it is possible to search using the query id:A*
But I would love to find out whether a true solution exists.
