Xapian vs Apache Solr - search

I'm trying to get a good natural language search going in a website, and trying to understand the advantages of Apache Solr vs Xapian. Xapian seems easier to set up. Do both offer good natural language searches? Any insight appreciated.

Xapian is more like Lucene, a library that you integrate with your application. If you have a C++ app, then Xapian might be a better match. If you have a Java application, Lucene is almost certainly the best choice.
If you want a search server, then compare Omega (built on Xapian) to Solr (built on Lucene). I have not used Omega or Xapian, but Solr has a few features that I have come to depend on, especially the per-field analysis chains. That is a brilliant idea, and one that I wish I had thought of when I was working on Ultraseek.
It is quite easy to extend the Solr analysis chain with your own Java class. I expect that would be more difficult in C++ with Omega/Xapian.
The two engines use different underlying relevance models: Xapian is a probabilistic engine, while Lucene is a vector space engine. I have seen both models tuned to perform well, so that might not be a reason to decide.
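For a feel of the difference, here is a toy sketch in Python (not code from either engine; the corpus and constants are made up) contrasting a vector-space style score, TF-IDF cosine similarity, with a probabilistic style score, Okapi BM25:

    import math
    from collections import Counter

    # Toy corpus, purely illustrative.
    docs = [
        "open source search engine library",
        "probabilistic information retrieval engine",
        "vector space model for text search",
    ]
    query = "search engine".split()
    tokenized = [d.split() for d in docs]
    N = len(tokenized)
    avgdl = sum(len(d) for d in tokenized) / N
    df = Counter(t for d in tokenized for t in set(d))

    def idf(term):
        # Simple inverse document frequency, shared by both scorers.
        return math.log((N + 1) / (df[term] + 1)) + 1

    def tfidf_cosine(doc):
        # Vector-space flavour: cosine similarity between TF-IDF vectors.
        dvec = {t: c * idf(t) for t, c in Counter(doc).items()}
        qvec = {t: c * idf(t) for t, c in Counter(query).items()}
        dot = sum(dvec.get(t, 0.0) * w for t, w in qvec.items())
        norm = (math.sqrt(sum(w * w for w in dvec.values())) *
                math.sqrt(sum(w * w for w in qvec.values())))
        return dot / norm if norm else 0.0

    def bm25(doc, k1=1.2, b=0.75):
        # Probabilistic flavour: Okapi BM25 with its usual constants.
        tf = Counter(doc)
        score = 0.0
        for t in query:
            f = tf[t]
            score += idf(t) * (f * (k1 + 1)) / (f + k1 * (1 - b + b * len(doc) / avgdl))
        return score

    for d in tokenized:
        print(" ".join(d), round(tfidf_cosine(d), 3), round(bm25(d), 3))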
The Solr/Lucene community is large and very helpful.

Related

How to develop a Decision Support System

I would like to develop a decision support system for diagnosing disease. I am a newbie at programming. Can anyone suggest which programming language is most suitable?
That really depends on what you want to do exactly (do you want to use prebuilt libraries?).
I see you added Weka as a tag. Java is a good option, since it's versatile, powerful, fast, and fairly easy to use, it can be deployed as a web service, and you can use the Weka library for quickly building trees.
But really, any other programming language (C++, Python, MATLAB) is perfectly capable of building a tree.
You have to design a mathematical decision model. To do that, you need a background in statistics, data mining, or even fuzzy logic.
Then you code the decision-making algorithm to get the final results. I suggest including a validation phase to ensure your program is useful and correct.
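To make that concrete, here is a minimal sketch in Python using scikit-learn (rather than Weka) that fits a small decision tree and then validates it on held-out data. The symptom features and labels are invented purely for illustration.

    # Sketch only: train a decision tree on toy diagnostic data and validate it.
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    # Each row: [fever, cough, fatigue] as 0/1 indicators (hypothetical features).
    X = [[1, 1, 0], [1, 0, 1], [0, 1, 0], [0, 0, 0],
         [1, 1, 1], [0, 1, 1], [1, 0, 0], [0, 0, 1]]
    y = ["flu", "flu", "cold", "healthy", "flu", "cold", "cold", "healthy"]

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0)

    clf = DecisionTreeClassifier(max_depth=3, random_state=0)
    clf.fit(X_train, y_train)

    # Validation phase: check the model against data it has not seen.
    print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))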

how to design a full-text indexing system?

Lucene is a great open-source indexing library. My problem is not how to use this kind of indexing tool, but how to learn and understand the way such tools are designed.
Maybe I should read the source code of Lucene, but I can't seem to find any tutorial about how this great work is done.
So, is there any other way, or a book, that can help me gain a concrete understanding of how to design such an indexing system?
Thank you.
The science behind Lucene is called Information Retrieval. Once you start appreciating the algorithms and data structures behind Information Retrieval, Lucene or Sphinx become merely tools for solving your tasks. The very first thing to study is the inverted index data structure.
A great book about Information Retrieval algorithms and data structures can be found here: http://nlp.stanford.edu/IR-book/ This Stanford text is a good starting point for learning how Information Retrieval systems are designed.
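To give a flavour of what the book covers, here is a deliberately simplified inverted index in Python; real engines like Lucene add ranking, compression, skip lists, and on-disk segment files on top of this basic idea:

    from collections import defaultdict

    def build_inverted_index(docs):
        # Map each term to the sorted list of document ids containing it.
        index = defaultdict(set)
        for doc_id, text in enumerate(docs):
            for term in text.lower().split():
                index[term].add(doc_id)
        return {term: sorted(ids) for term, ids in index.items()}

    def search_and(index, query):
        # Intersect postings lists: documents containing every query term.
        postings = [set(index.get(t, [])) for t in query.lower().split()]
        return sorted(set.intersection(*postings)) if postings else []

    docs = ["Lucene is an indexing library",
            "Sphinx is a full text search engine",
            "an inverted index maps terms to documents"]
    index = build_inverted_index(docs)
    print(search_and(index, "index terms"))  # doc 2 is the only one with both terms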

keyword search in sites

I am planning to build a small social networking site. What is the best way to support keyword search over the content? I am looking for opinions, considering that the content can grow to a few TBs in size.
thanks,
GL
You should definitely use Solr/Lucene to index the content; it gives efficient keyword search, and it is also very easy to implement faceted search on top of Solr if you have such a feature in mind.
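For illustration, a keyword-plus-facet query against Solr over HTTP might look roughly like this (a sketch only: the core name "posts" and the fields "text" and "tag" are hypothetical and assume a matching schema has been set up):

    import requests

    params = {
        "q": "text:holiday photos",   # keyword search over an assumed "text" field
        "rows": 10,
        "facet": "true",
        "facet.field": "tag",         # faceted counts over an assumed "tag" field
        "wt": "json",
    }
    resp = requests.get("http://localhost:8983/solr/posts/select", params=params)
    data = resp.json()
    for doc in data["response"]["docs"]:
        print(doc.get("id"), doc.get("text"))
    print(data["facet_counts"]["facet_fields"]["tag"])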
Have you looked at Apache Lucene?
It's a high-performance, full-featured text search engine library written entirely in Java. It is a technology suitable for nearly any application that requires full-text search, especially cross-platform.

Search term suggestions

This question has been asked in various ways before, but I'm wondering if people who have experience with automatic search term suggestion could offer advice on the most useful and efficient approaches. Here's the scenario:
I'm just starting on a website for a book that is a dictionary of terms (roughly 1,000 entries, with 300-word explanations on average), many of which are fairly obscure, and it is likely that many visitors to the site would not know how to spell the words. The publisher wants to make full-text search available for every entry. So, I'm hoping to implement a search engine with spelling correction. The main site will probably be done in a PHP framework (or possibly Django) with a MySQL database.
Can anyone with experience in this area give advice on the following:
With a set corpus of this nature, should I be using something like Lucene or Sphinx for the search engine?
As far as I can tell, neither of these has a built-in suggestion function. So it seems I will need to integrate one or more of the following. What are the advantages / disadvantages of:
Suggestion requests through Google's search API
A phonetic comparison algorithm like metaphone() in PHP
A spell checking system like Aspell
A simpler spelling script such as Peter Norvig's
A Levenshtein function
I'm concerned about the specificity of my corpus, and don't want Google to start suggesting things that have nothing to do with this book. I'm also not sure whether I should try to use both a metaphone comparison and a Levenshtein comparison, or some other combination of techniques to capture both typos and attempts at phonetic spelling.
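For reference, the sort of combination I have in mind looks something like this rough Python sketch: rank the dictionary's own headwords by edit distance and give a small boost to phonetic matches. The sound_key function is only a stand-in for a real phonetic algorithm such as Metaphone, and the headwords are made up.

    def levenshtein(a, b):
        # Classic dynamic-programming edit distance.
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                  # deletion
                               cur[j - 1] + 1,               # insertion
                               prev[j - 1] + (ca != cb)))    # substitution
            prev = cur
        return prev[-1]

    def sound_key(word):
        # Placeholder for a real phonetic algorithm (Metaphone/Soundex):
        # keep the first letter, drop later vowels, collapse repeats.
        w = word.lower()
        key = w[0]
        for ch in w[1:]:
            if ch not in "aeiou" and ch != key[-1]:
                key += ch
        return key

    def suggest(query, headwords, limit=5):
        # Rank headwords by edit distance, boosting ones that "sound like" the query.
        q_key = sound_key(query)
        scored = []
        for w in headwords:
            score = levenshtein(query.lower(), w.lower())
            if sound_key(w) == q_key:
                score -= 1
            scored.append((score, w))
        return [w for _, w in sorted(scored)[:limit]]

    headwords = ["chiaroscuro", "sfumato", "impasto", "pentimento", "gesso"]
    print(suggest("kiaroscuro", headwords))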
You might want to consider Apache Solr, which is a web service encapsulation of Lucene, and runs in a J2EE container like Tomcat. You'll get term suggestion, spell check, porting, stemming and much more. It's really very nice.
See here for a full listing of its features relating to queries.
There are Django and PHP libraries for Solr.
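Whichever client library you use, the underlying spellcheck request is just HTTP. Roughly, it looks like this (a sketch in Python; it assumes the SpellCheckComponent has been enabled in solrconfig.xml, and the core name "dictionary" is invented):

    import requests

    params = {
        "q": "kiaroscuro",
        "spellcheck": "true",
        "spellcheck.collate": "true",   # ask Solr for a rewritten, corrected query
        "wt": "json",
    }
    resp = requests.get("http://localhost:8983/solr/dictionary/select", params=params)
    print(resp.json().get("spellcheck", {}).get("suggestions", []))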
I wouldn't recommend using Google Suggest for such a specialised corpus anyway, and with Solr you won't need it.
Hope this helps.

Natural Language Processing Package

I have started working on a project which requires Natural Language Processing. We have to do spell checking as well as mapping sentences to phrases and their synonyms. I first thought of using GATE, but I am confused about what to use. I found an interesting post here which got me even more confused.
http://lordpimpington.com/codespeaks/drupal-5.1/?q=node/5
Please help me decide what suits my purpose best. I am working on a web application which will use this NLP tool as a service.
You didn't really give much info, but try this: http://www.nltk.org/
I don't think NLTK does spell checking (I could be wrong on this), but it can do part-of-speech tagging for text input.
For finding/matching synonyms you could use something like WordNet http://wordnet.princeton.edu/
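A minimal sketch of both ideas in Python with NLTK (it assumes the relevant NLTK corpora have already been downloaded):

    import nltk
    from nltk.corpus import wordnet

    # One-time downloads on a fresh install (uncomment as needed):
    # nltk.download("punkt"); nltk.download("averaged_perceptron_tagger"); nltk.download("wordnet")

    tokens = nltk.word_tokenize("The patient reported a severe headache")
    print(nltk.pos_tag(tokens))  # part-of-speech tags, e.g. ('severe', 'JJ')

    # Synonym candidates for a word, via WordNet synsets.
    synonyms = {lemma.name() for synset in wordnet.synsets("severe")
                for lemma in synset.lemmas()}
    print(synonyms)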
If you're doing something really domain-specific, I would recommend coming up with your own ontology for domain-specific terms.
If you are using Python you can develop a spell checker with Python Enchant.
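Something like this (a sketch only, assuming the en_US dictionary is installed for pyenchant):

    import enchant

    d = enchant.Dict("en_US")
    for word in ["language", "langauge"]:
        if d.check(word):
            print(word, "looks fine")
        else:
            print(word, "-> suggestions:", d.suggest(word))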
NLTK is good for developing a sentiment analysis system too. I have some prototypes of this as well.
Jaggu
If you are using deep-learning-based models and you have sufficient data, you can implement task-specific models for any purpose. With the development of deep-learning-based language models, you can use word-embedding-based models together with lexicon resources to obtain synonyms and antonyms. You can also follow the links below for more resources.
https://stanfordnlp.github.io/CoreNLP/
https://www.nltk.org/
https://wordnet.princeton.edu/
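As a rough sketch of the embedding-based synonym idea, using gensim's downloader to fetch a small pretrained GloVe model; nearest neighbours in embedding space are only synonym candidates and usually need filtering against a lexicon such as WordNet:

    import gensim.downloader as api

    # "glove-wiki-gigaword-50" is a small pretrained model shipped via gensim's
    # downloader; the first call fetches it over the network.
    vectors = api.load("glove-wiki-gigaword-50")
    for word, score in vectors.most_similar("illness", topn=5):
        print(word, round(score, 3))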

Resources