We are building a search engine with Elasticsearch for internal use in our company. There is a single input field where users can enter their search terms, Google-style, so it should be possible to search on several words separated by spaces.
Because users can search on names, and names can be spelled in several different ways, we would like to treat certain character sequences as equal.
For example the name "Heymans" can be written like "Hymans", "Heimans", "Hijmans", ...
If a user searches on "Hijmans", "Heymans" should be found, preferably with the same score as a search on "Heymans".
Is it possible to set "ei", "ij", "ey" as equal values?
We know that there is the synonym feature, but if we do it that way, the scores are very low.
We do not want to set "Hymans", "Heimans", "Hijmans" as synonyms, because there are other names with the same problem...
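For illustration, this is the kind of character mapping we have in mind: a custom analyzer with a mapping char filter that folds the digraphs together at both index and search time (a sketch only, untested; the analyzer and filter names are made up):
{
  "settings": {
    "analysis": {
      "char_filter": {
        "fold_digraphs": {
          "type": "mapping",
          "mappings": ["ij => ei", "ey => ei"]
        }
      },
      "analyzer": {
        "name_analyzer": {
          "type": "custom",
          "char_filter": ["fold_digraphs"],
          "tokenizer": "standard",
          "filter": ["lowercase"]
        }
      }
    }
  }
}
A single "y", as in "Hymans", would still need its own mapping, and a blanket "y => ei" would affect every word containing a "y", so this can only be a starting point.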
Thanks for the help!
I'm looking into ArangoSearch for the first time and it looks quite capable.
However, in all the tutorials, despite the ability to tell it to index all fields, one cannot do a 'blind' search across all fields of a document. Take the example below:
FOR d in myView SEARCH d.text IN ["quick", "brown"] RETURN d
I don't seem to have the ability to just search d entirely without specifying each individual field I want to include in my search. Is that correct? If so, why, and are there workarounds? I'm dealing with a lot of different collections with a lot of different fields that can contain a relevant term; it would be a shame if I had to tabulate all of them to make an expansive search.
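For context, "indexing all fields" here refers to the includeAllFields option in the view's link definition, along these lines (a sketch; the collection and analyzer names are placeholders):
{
  "links": {
    "myCollection": {
      "includeAllFields": true,
      "analyzers": ["text_en"]
    }
  }
}
That indexes every attribute, but the SEARCH expression above still has to name d.text explicitly.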
I am trying to search for a term in Solr in the Title that contains only the string 1604-04. But the results come back with anything that contains 1604 or 04. What would the syntax be to force Solr to search on the exact string 1604-04?
You can also use the Classic Tokenizer. The Classic Tokenizer preserves the same behavior as the Standard Tokenizer, with the following exception:
Words are split at hyphens, unless there is a number in the word, in which case the token is not split and the numbers and hyphen(s) are preserved.
This means that if someone searches for 1604-04, this tokenizer won't break the search string into two tokens.
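A field type using it might look like this in schema.xml (a sketch; the type name is arbitrary):
<fieldType name="text_classic" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.ClassicTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
With this analysis, "1604-04" stays a single token, so an ordinary term query on the field matches it exactly.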
If you want exact matches only, use a string field, or a text field with a KeywordTokenizer as the tokenizer. Both keep the value intact as one single token instead of breaking it up into multiple tokens.
The difference is that with a TextField and a KeywordTokenizer you can still apply other filters, such as a LowerCaseFilter, while a string field stores the value verbatim with no further processing possible.
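For example, a sketch of such a field type (the name is arbitrary):
<fieldType name="string_lowercase" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
Here the whole field value remains one token, but it is still lowercased, so matching becomes case-insensitive while staying exact otherwise.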
Your analyzer is splitting "1604-04" into two terms, "1604" and "04". You've received answers on how to change your analysis to stop doing that.
Changing your analysis may not be the best solution (I can't be entirely sure based on what you've written). Using a phrase query would be the usual way to do this. You can use a phrase query by wrapping the value in quotes:
field:"1604-04"
This will still analyze and split it into two terms, but it will look for those terms in sequence. So, that query would match "1604-04" and "1604 04", but not "1604 some other stuff 04".
I am using Solr 6.0.0
I am using the data-driven configuration. Most of the configuration is standard.
I have a document in Solr with
name:"aquickbrownfox"
Now if I do a fuzzy search like:
name:aquickbrownfo~0.7
OR
name:aquickbrownf~0.7
It lists out the record in the results.
But if I do a search like:
name:aquickbrown~0.7
It does not list the record.
Does it have something to do with maxEdits in solrconfig.xml, which is set to 2?
I tried increasing it, but then I could not create a collection with this configuration. It gave an error:
ERROR: Error CREATEing SolrCore 'my-search': Unable to create core
[my-search] Caused by: Invalid maxEdits
A maximum of 2 edits seems like a serious limitation. I wonder what the use of passing a fractional value after the ~ operator is, then.
My use case:
I have a contact database. I am supposed to detect duplicates based on three parameters: name, email and phone. So I rely on Solr for fuzzy search. Email and phone are relatively easy to handle with simple assumptions. Name seems to be trickier. For each word in the name, I plan to do a fuzzy search. I expected the optional parameter after ~ to work without the maxEdits limitation.
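For illustration, the per-word queries I have in mind look something like this (the names are made up):
name:john~2 AND name:smith~2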
The documentation no longer suggests using a fractional value after the tilde - see http://lucene.apache.org/core/4_6_0/queryparser/org/apache/lucene/queryparser/classic/package-summary.html#Fuzzy_Searches for more information.
However, you are correct that only 2 changes are allowed to be made to the search string in order to carry out a fuzzy search. I would guess this limitation strikes a balance between efficiency and usefulness.
The maxEdits parameter in solrconfig.xml applies to the DirectSpellChecker configuration, and doesn't affect your searching, unless you're using the spell checker.
For your use case, your best approach may be to index the name field twice, using different field configurations: one with a simple set of analyzers and filters (e.g. StandardTokenizerFactory, StandardFilterFactory, LowerCaseFilterFactory), and the other with a phonetic matcher such as the Beider-Morse filter. You can use the first field to carry out fuzzy searches, and the second to look for names which may be spelled differently but sound the same as the name being checked.
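A sketch of what that double indexing could look like in schema.xml (field and type names are placeholders, untested):
<field name="name" type="text_general" indexed="true" stored="true"/>
<field name="name_phonetic" type="text_phonetic" indexed="true" stored="false"/>
<copyField source="name" dest="name_phonetic"/>

<fieldType name="text_phonetic" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.BeiderMorseFilterFactory" nameType="GENERIC" ruleType="APPROX" concat="true" languageSet="auto"/>
  </analyzer>
</fieldType>
You would then run fuzzy queries against name and phonetic lookups against name_phonetic, and combine the results.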
I have a fairly simple index where all 10 or so fields are searchable strings and my searchMode is "all".
For the sake of simplicity, let's say I issue the following search:
-(x|y|z)
And I get all documents that do not have x, y or z in them.
Let's say I issue the following search:
(i+j)
And I get all docs that contain the terms i and j.
And let's say there is a decent overlap between the docs that are returned by the two searches.
I would have thought that in "all" searchMode if I issue the following:
(i+j) -(x|y|z)
I would receive the subset of i and j that do not contain x, y or z. In other words the results of the combined query would not contain any entries from the results of the individual query -(x|y|z).
But that's not the case.
Either I am misunderstanding the functionality or I am receiving wrong results.
Can someone help explain this to me?
Thanks
Azure Search should give consistent answers for this; if it doesn't, let us know.
In this case it was an issue with escaping "+" in URLs (see comments). Search text in the URL query string needs to be escaped (e.g. + should show up as %2B). It's best to use a library function to escape all of the input search text rather than special-casing any particular character; most environments have functions for this, and they know which characters need escaping.
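As a sketch (in JavaScript; the service URL and API version are placeholders), escaping the whole search text at once looks like this:
// encodeURIComponent escapes '+' as '%2B', along with the other
// characters that are unsafe in a URL query string.
var searchText = '(i+j) -(x|y|z)';
var url = 'https://myservice.search.windows.net/indexes/myindex/docs' +
    '?api-version=2016-09-01' +
    '&search=' + encodeURIComponent(searchText);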
I admit that I haven't searched extensively in the SO database. I tried reading up on the natural npm package, but it doesn't seem to provide this feature. I would like to know if the requirement below is somewhat possible.
I have a database with a list of all the cities of a country. I also have ratings of these cities (best place to live, worst place to live, best-rated city, worst-rated city, etc.). Now, from the user interface, I would like to let the user enter free text, and from that I should be able to search my database.
E.g. "Best place to live in California"
or "places near California"
or "places in California"
From the above sentence, I want to extract only the nouns (maybe), as these will be the names of cities or countries that I can search for.
Then extract 'best', which means I can sort in a particular order, etc.
Any suggestions or directions to look for?
I risk the chance that this question will be marked as 'debatable', but the reason I posted it is to get some direction to proceed.
[I came across this question whilst looking for some use cases to test a module I'm working on. Obviously the question is a little old, but since my module addresses the question I thought I might as well add some information here for future searchers.]
You should be able to do what you want with a POS chunker. I've recently released one for Node that is modelled on the chunkers provided by the NLTK (Python) and Stanford NLP (Java) libraries (the chunk() and TokensRegex() methods, respectively).
The module processes strings that already contain parts-of-speech, so first you'll need to run your text through a parts-of-speech tagger, such as pos:
var pos = require('pos');
var words = new pos.Lexer().lex('Best place to live in California');
var tags = new pos.Tagger()
.tag(words)
.map(function(tag){return tag[0] + '/' + tag[1];})
.join(' ');
This will give you:
Best/JJS place/NN to/TO live/VB in/IN California/NNP ./.
Now you can use pos-chunker to find all proper nouns:
var chunker = require('pos-chunker');
var places = chunker.chunk(tags, '[{ tag: NNP }]');
This will give you:
Best/JJS place/NN to/TO live/VB in/IN {California/NNP} ./.
Similarly you could extract verbs to understand what people want to do ('live', 'swim', 'eat', etc.):
var verbs = chunker.chunk(tags, '[{ tag: VB }]');
Which would yield:
Best/JJS place/NN to/TO {live/VB} in/IN California/NNP ./.
You can also match words, sequences of words and tags, use lookahead, group sequences together to create chunks (and then match on those), and other such things.
You probably don't have to identify what is a noun. Since you already have a list of city and country names that your system can handle, you just have to check whether the user input contains one of these names.
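As a sketch (the city list is a stand-in for your database):
var cities = ['California', 'Texas', 'New York'];

// Return every known place name that occurs in the user's free text.
function findPlaces(input) {
  var lower = input.toLowerCase();
  return cities.filter(function (city) {
    return lower.indexOf(city.toLowerCase()) !== -1;
  });
}

findPlaces('Best place to live in California'); // ['California']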
Well, firstly you'll need a way to identify nouns. There is no core Node module or anything that can do this for you. You need to loop through all the words in the string and compare them against some kind of dictionary database, so you can look up each word and check whether it's a noun.
I found this API, which looks pretty promising. You query the API for a word and it sends you back a blob of data like this:
<?xml version="1.0" encoding="UTF-8"?>
<results>
  <result>
    <term>consistent, uniform</term>
    <definition>the same throughout in structure or composition</definition>
    <partofspeech>adj</partofspeech>
    <example>bituminous coal is often treated as a consistent and homogeneous product</example>
  </result>
</results>
You can see that it includes a partofspeech member which tells you that the word "consistent" is an adjective.
Another (and better) option, if you have control over the text being stored, is to use some kind of markup language to identify the important parts of the string before you save it. Something like BBCode. I even found a BBCode node module that will help you do this.
Then you can save your strings to the database like this:
Best place to live in [city]California[/city] or places near [city]California[/city] or places in [city]California[/city].
or
My name is [first]Alex[/first] [last]Ford[/last].
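Reading the tagged values back out is then straightforward; a minimal sketch using a regular expression rather than any particular BBCode module:
// Collect every value wrapped in [tag]...[/tag] from a stored string.
function extractTag(text, tag) {
  var re = new RegExp('\\[' + tag + '\\](.*?)\\[/' + tag + '\\]', 'g');
  var values = [];
  var match;
  while ((match = re.exec(text)) !== null) {
    values.push(match[1]);
  }
  return values;
}

extractTag('Best place to live in [city]California[/city].', 'city'); // ['California']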
If you're letting users type whole sentences and then trying to figure out which parts of those sentences are data you should use in your app, you're making things unnecessarily hard on yourself. Either ask them to enter the important pieces of data into their own text boxes, or give them a formatting language such as the aforementioned BBCode syntax so they can identify the important bits for you. The job of finding out which parts of a string are important is going to be a huge one, I think.