MongoDB full-text search AND condition - node.js

I'm trying to use a text index in MongoDB:
{$text: {$search: 'sport hockey'}}
It uses an OR condition when searching by phrase, i.e. 'sport' OR 'hockey'. For example, the result can be the following list of titles:
Sport something today...
Sport is hockey
Hockey players
The most relevant document is in the second position (actually I'd like to exclude all other results and keep only this one: 'Sport is hockey').
Is it possible to use AND condition in $text?
Quotation marks are not suitable because {$text: {$search: "\"sport hockey\""}} performs an exact-phrase match.

Try including textScore in your output, then sort by it to get the most relevant documents (optionally use limit to get the top n relevant documents).
db.your_collection.find(
  {
    $text: { $search: "sport hockey" }
  },
  {
    score: { $meta: "textScore" }
  }
).sort(
  {
    score: { $meta: "textScore" }
  }
).limit(2)
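Since the question mentions Node.js, a rough equivalent with the official MongoDB driver might look like the sketch below (the connection string, database name 'mydb' and collection name 'articles' are placeholders):

const { MongoClient } = require('mongodb');

// Sketch: text search sorted by relevance, returning the top two documents.
async function topRelevant(searchText) {
  const client = await MongoClient.connect('mongodb://localhost:27017');
  try {
    const articles = client.db('mydb').collection('articles');
    return await articles
      .find(
        { $text: { $search: searchText } },
        { projection: { score: { $meta: 'textScore' } } }
      )
      .sort({ score: { $meta: 'textScore' } })
      .limit(2)
      .toArray();
  } finally {
    await client.close();
  }
}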

Text search does not provide this out of the box, but there are two possible workarounds.
Use the pre-2.6 solution for text search: break your text field into individual words and save those words as an array in the same document. You can then perform a $all query on that array (sketched below).
Perform separate text searches for each word and compute the set intersection on the application layer.
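A minimal sketch of the first workaround with the Node.js driver; the 'title' and 'titleWords' field names are made up for illustration:

// Store the individual lowercase words next to the original text when saving.
async function saveWithWords(collection, doc) {
  const titleWords = doc.title.toLowerCase().split(/\s+/).filter(Boolean);
  await collection.insertOne({ ...doc, titleWords });
}

// A $all query then requires every word to be present, i.e. an AND condition.
async function findAllWords(collection, words) {
  return collection
    .find({ titleWords: { $all: words.map(w => w.toLowerCase()) } })
    .toArray();
}

For example, findAllWords(db.collection('articles'), ['sport', 'hockey']) would only return documents whose titleWords array contains both words.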

Use it as follows:
db.your_collection.find({$text:{$search:' "sport" "hockey" '}})
"text" type search word must be included!

For anyone coming back to this thread, the required "AND" search behavior can be achieved with the following find query:
{$text:{$search:"\"sport\" \"hockey\""}}
Note how the individual words are escaped.
In Node.js, the following can be done to build the desired search text for an "AND" condition:
searchText = '"' + userInput.split(" ").join('" "') + '"';
then
collectionObj.find({$text:{$search:searchText}})
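Putting it together, a small sketch of a helper that also strips any quotes the user might type (the function name is made up for illustration):

// Build a $text search string in which every word is individually quoted,
// so the text search requires all words to be present.
function buildAndSearchText(userInput) {
  return userInput
    .replace(/"/g, '')        // drop any quotes in the raw input
    .split(/\s+/)
    .filter(Boolean)
    .map(word => '"' + word + '"')
    .join(' ');
}

// Usage:
// collectionObj.find({ $text: { $search: buildAndSearchText('sport hockey') } })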

Related

mongoDB/mongoose: Text search does not work on searches of two or fewer characters

I have a model I want to perform a text search on with mongoose.
let models = await MyModel.find({$text: {$search: '"go"'}});
However, I have noticed that when I search for strings of two or fewer characters, no results are returned.
How can I search for strings with two or fewer characters?
You can try using an aggregation query with a regex, as follows:
let searchText = "go";
searchText = new RegExp(searchText, "i");
let models = await MyModel.aggregate([{ $match: { text: { $regex: searchText } } }]);
Here, text is the field the search will be applied to. You can add $or conditions to search in more than one field.
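For instance, a sketch searching two fields at once (the second field name 'title' is made up for illustration):

// Case-insensitive regex match across multiple fields.
const searchText = new RegExp("go", "i");
let models = await MyModel.aggregate([
  {
    $match: {
      $or: [
        { text: { $regex: searchText } },
        { title: { $regex: searchText } }
      ]
    }
  }
]);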

How to match different instances of the same query in Elasticsearch?

Example 1:
My query term is "abc".
My query structure is like this:
{
  "query": {
    "query_string": {
      "query": "abc",
      "fields": ["field1", "field2", "field3"]
    }
  },
  "size": 50,
  "highlight": {
    "fields": {
      "field1": {},
      "field2": {},
      "field3": {}
    }
  }
}
It matches the following instances:
abc abcs abc_def_ghi
But it does not match def_abc or def_abc_ghi.
Basically instances where abc is in the middle of a string.
Example 2:
In the same example above, if my query is abc_def
It does not match abc_def_ghi, although abc_def is present.
I have tried phrase_prefix and it solves Example 2, but it misses Example 1's problem.
Any help would be appreciated.
For these use cases you should use a wildcard or a regular expression in the query.
If you are using a term query, you can use a wildcard query or a regexp query instead.
As the name suggests, phrase_prefix is like a poor man's autocomplete: it matches fields that start with the given phrase, which in your case covers abc, abcs and abc_def_ghi. Since the field does not start with abc in def_abc and def_abc_ghi, those won't match with phrase_prefix.
Try using character filters, specifically the Pattern Replace Character Filter, to replace _ with a space while analyzing the field (check this answer). Your field would then be tokenized as [def, abc, ghi] instead of a single token like [def_abc_ghi]. You can then search it using cross_fields on the analyzed field, which should satisfy all of the cases you mentioned.
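A minimal sketch of such index settings, written as a JavaScript object you would pass when creating the index (the analyzer, char filter and field names are made up for illustration):

// pattern_replace character filter: turn '_' into a space before tokenization,
// so 'def_abc_ghi' produces the tokens [def, abc, ghi].
const indexSettings = {
  settings: {
    analysis: {
      char_filter: {
        underscore_to_space: {
          type: 'pattern_replace',
          pattern: '_',
          replacement: ' '
        }
      },
      analyzer: {
        underscore_analyzer: {
          type: 'custom',
          tokenizer: 'standard',
          char_filter: ['underscore_to_space']
        }
      }
    }
  },
  mappings: {
    properties: {
      field1: { type: 'text', analyzer: 'underscore_analyzer' }
    }
  }
};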

How to search full or partial match with first name and last name in mongodb

How can I search for a full or partial match on first name and last name in MongoDB?
I tried using this,
{"name":{ $regex: str, $options: 'i'}}
But this only works for a full match of the string.
Can I use a regex for a partial match?
For this type of search it is better to create a text index. The mongo shell command to create a text index for the name field:
db.colectionName.createIndex( { name: "text" } );
Then you can search using $text and $search:
var text = 'John Test';
db.collectionName.find({ $text: { $search: text } });
For this query you will get results if the name contains john or test, and it is case-insensitive.
I had to do this for my project; how does this work for you? {"name": new RegExp('^' + name, 'i')} You may need to escape the string first: str.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
Try this
{'name': {'$regex': '.*str.*', $options: 'i'}}
I have responded to a similar question here; combining a text index and a regex pattern makes it work nicely. Note that a text index searches by whole terms, so if you try to find padavan by supplying pad, you won't get what you expect with only a text index in place.
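A rough sketch of that combination in Node.js, assuming a collection with a text index on a 'name' field (the function name is made up): try the text index first for whole-term matches, then fall back to a case-insensitive regex for partial matches.

async function searchByName(collection, str) {
  // Whole-term matches via the text index.
  const byText = await collection.find({ $text: { $search: str } }).toArray();
  if (byText.length > 0) return byText;

  // Fall back to a partial, case-insensitive match; escape regex metacharacters first.
  const escaped = str.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
  return collection.find({ name: { $regex: escaped, $options: 'i' } }).toArray();
}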

Elasticsearch Completion Suggester field contains comma separated values

I have a field containing comma-separated values that I want to run suggestions on.
{
    "description" : "Breakfast,Sandwich,Maker"
}
Is it possible to get only the applicable token when performing suggest-as-you-type?
For example:
When I type break, how can I get only Breakfast and not Breakfast,Sandwich,Maker?
I have tried using a comma tokenizer but it does not seem to help.
As described in the documentation, you can provide multiple possible inputs by indexing like this:
curl -X PUT 'localhost:9200/music/song/1?refresh=true' -d '{
    "description" : "Breakfast,Sandwich,Maker",
    "suggest" : {
        "input": [ "Breakfast", "Sandwich", "Maker" ],
        "output": "Breakfast,Sandwich,Maker"
    }
}'
This way, the suggester will match any word in the list as input.
Getting only the corresponding word back as the suggestion from Elasticsearch is not possible, but as a workaround you could split the suggested string outside Elasticsearch and keep only the token that has the input as a prefix.
EDIT: a better solution would be to use an array instead of comma-separated values, but it doesn't meet your specs... (look at this: Elasticsearch autocomplete search on array field)
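A tiny sketch of that client-side workaround in Node.js (the function name is made up for illustration):

// Split the comma-separated suggestion and keep only the token
// that starts with what the user has typed so far.
function pickSuggestion(suggestion, typed) {
  const prefix = typed.toLowerCase();
  return suggestion
    .split(',')
    .map(token => token.trim())
    .find(token => token.toLowerCase().startsWith(prefix));
}

// pickSuggestion('Breakfast,Sandwich,Maker', 'break') returns 'Breakfast'.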

elasticsearch: phrase search for two adjacent words in any order (analyzed)

The problem is to do a phrase search for two adjacent words in any order, with word analysis.
E.g. in Sphinx extended syntax the query string can be written as WordToBeAnalyzed1 NEAR/1 WordToBeAnalyzed2. Both words are then analyzed and the search engine finds either "Word1 Word2" or "Word2 Word1", where each word can be in any form (e.g. "fox jumps", "jumping fox", "foxes jumped", and so on).
Reading the ES docs, I could not find a way to express the same search in the ES query DSL.
When querying with match_phrase and slop, I can query the phrase "WordToBeAnalyzed1 WordToBeAnalyzed2" with a "slop": 2 param to also match the same words in reverse order. But it will also match undesirable variants such as "Word1 SlopWord1 Word2" and "Word1 SlopWord1 SlopWord2 Word2".
I also tried to use span_near query with the in_order param, but
span queries are term-level queries, so they have no analysis phase
I would be glad if anyone can point me to a way to solve this problem.
What about running the query terms through an explicit request to the _analyze API first, and then building the span_near query from the resulting tokens?
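A rough sketch of that idea, assuming the @elastic/elasticsearch client with v8-style responses, an index named 'docs' and a 'content' field (all of these names are placeholders; adjust the response handling for older client versions):

const { Client } = require('@elastic/elasticsearch');
const client = new Client({ node: 'http://localhost:9200' });

async function adjacentInAnyOrder(word1, word2) {
  // Run both words through the field's analyzer to get the indexed terms.
  const analyzed = await client.indices.analyze({
    index: 'docs',
    field: 'content',
    text: word1 + ' ' + word2
  });
  const terms = analyzed.tokens.map(t => t.token);

  // span_near with slop 0 and in_order false: both terms adjacent, in either order.
  return client.search({
    index: 'docs',
    query: {
      span_near: {
        clauses: terms.map(token => ({ span_term: { content: token } })),
        slop: 0,
        in_order: false
      }
    }
  });
}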
This will work:
{
  "query": {
    "bool": {
      "must": [
        {
          "query_string": {
            "query": "*hello* *there*",
            "fields": ["subject"],
            "default_operator": "and"
          }
        }
      ]
    }
  }
}
