I have been trying to reverse engineer Twitter's live search, and maybe we could discuss it here. I am talking about the feature where tweets show up in results as recently as "1 sec ago". I am trying to understand how the following might happen -
There must be some layer between the moment the user tweets and the moment the index is updated. Is this layer MySQL, or some other caching layer (memcached, Cassandra)? Maybe...
Indexing - How might the index updates be happening? Surely they can't be rebuilding a new index from scratch each time?
Indexing - There must be a distributed index here. How do you update all the indexes without serving stale data from one index and the latest data from another?
Indexing - Or does it even matter if something like this happens? Honestly I don't think so :) Which user would notice...
Anybody have anything interesting to add or discuss? I am just trying to understand...
Interesting indeed, but I guess it's more of an "architecture" question, and not really a programming question.
But FYI there's a lot of information at high scalability: posts tagged with twitter
Do they keep all tweets? My guess is that they just throw them away after a while, and surely they don't need ACID properties?
And I wouldn't trust those timestamps if I were you :)
SITUATION
I have a database with 2,000,000 cities. All of them have coordinates for the city center, and most of them also have GeoJSON boundaries. I'm trying to implement a geocoding service that finds the cities whose boundaries contain a given point, using node.js, mongodb, redis, memcached (and golang if necessary, though I'm totally new to it).
PROBLEM
I know how to work with points (lat and lng) since both MongoDB and Redis support geoindexes but I've never seen anything about polygons.
I guess MongoDB won't really help because of its speed (since it works on disk), but any in-memory database should be able to deal with this problem. The thing is, I can't even think of a way to implement it.
I'd be happy if someone could point me to how to do it. Thanks.
You may implement a point-in-polygon algorithm yourself. I've done something similar on https://api.3geonames.org
First do a bounding-box check to identify candidate polygons, then run a point-in-polygon (PIP) test. https://en.wikipedia.org/wiki/Point_in_polygon
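A minimal sketch of that two-step check in Java, assuming each city boundary is a single GeoJSON-style outer ring stored as [lng, lat] pairs (the class and method names are just illustrative):

// Ray-casting point-in-polygon with a bounding-box pre-check.
// The double[][] ring is assumed to be [ [lng, lat], [lng, lat], ... ]
// (one outer ring, no holes), which is how GeoJSON stores coordinates.
public final class PointInPolygon {

    public static boolean inBoundingBox(double[][] ring, double lng, double lat) {
        double minLng = Double.MAX_VALUE, minLat = Double.MAX_VALUE;
        double maxLng = -Double.MAX_VALUE, maxLat = -Double.MAX_VALUE;
        for (double[] p : ring) {
            minLng = Math.min(minLng, p[0]); maxLng = Math.max(maxLng, p[0]);
            minLat = Math.min(minLat, p[1]); maxLat = Math.max(maxLat, p[1]);
        }
        return lng >= minLng && lng <= maxLng && lat >= minLat && lat <= maxLat;
    }

    public static boolean contains(double[][] ring, double lng, double lat) {
        if (!inBoundingBox(ring, lng, lat)) {
            return false; // cheap rejection before the exact test
        }
        boolean inside = false;
        for (int i = 0, j = ring.length - 1; i < ring.length; j = i++) {
            double xi = ring[i][0], yi = ring[i][1];
            double xj = ring[j][0], yj = ring[j][1];
            // Count crossings of a horizontal ray going right from the point.
            boolean crosses = ((yi > lat) != (yj > lat))
                    && (lng < (xj - xi) * (lat - yi) / (yj - yi) + xi);
            if (crosses) {
                inside = !inside;
            }
        }
        return inside;
    }
}

In practice you would run the bounding-box filter against whatever store holds the 2M cities and only run contains() on the handful of candidates it returns.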
geo.lua (https://github.com/RedisLabs/geo.lua) works with the requirements you have here, but it's not very performant (not sure what has changed since I last checked).
So, a little bit on my problem.
TL;DR
Can I use machine learning instead of ElasticSearch to find results based on the user's text input? Is it a good idea?
I am working on a car spare parts project, and we have split the car into 300 parts that we store in the database, with some data for each part (weight, availability, etc.).
When the customer types in a description of the part, we need to be able to classify it and map it to one of the parts in our database.
Currently, people on our team manually map the customer's text to parts in our database; we want to automate that process.
We tried using MongoDB text search, but it was often inaccurate since parts have different names in different parts of the country.
So we wanted something that returns more accurate results and improves as we accumulate more data. We immediately considered TensorFlow. After some research, and after working through part of Google's Machine Learning Crash Course, I got to the point where it says:
Models can't learn from string values, so you'll have to perform some feature engineering to convert those values to something numeric
That would be fine if we had a limited number of string-valued features, but we don't know what text the user will input.
So, my questions are:
1- Can we use Machine Learning to map text input by the user with some documents on our database?
2- If we can do that, is it a good idea to favor it over other search tools like ElasticSearch?
3- Can ElasticSearch improve its results the more data we have? How?
4- How would you go about this problem?
Note: I'd be doing this in Node.js, and since TensorFlow.js is new, I am inclined to go for other solutions; but if push comes to shove and the results are much better, I would definitely go that way.
TL;DR: Yes and yes.
TS;WM:
This is a perfectly suited problem for machine learning. Especially so if you have a database of past customer texts that have already been mapped to parts; ideally, you have hundreds of texts mapped to each part. If you have that, you can design and train a network. And models can learn from string values with some feature engineering - it's not that bad.
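As a rough illustration of the kind of feature engineering meant here - not a recommendation of any particular model, and the bucket count and tokenisation are arbitrary choices - a free-text part description can be hashed into a fixed-length numeric vector that a network can consume:

import java.util.Locale;

// Hashing-trick bag-of-words: maps arbitrary text to a fixed-length numeric
// vector, so the model never has to deal with raw strings.
public final class TextFeaturizer {

    private static final int BUCKETS = 1024; // illustrative size

    public static double[] featurize(String text) {
        double[] features = new double[BUCKETS];
        for (String token : text.toLowerCase(Locale.ROOT).split("\\W+")) {
            if (token.isEmpty()) continue;
            int bucket = Math.floorMod(token.hashCode(), BUCKETS);
            features[bucket] += 1.0; // term count in that bucket
        }
        return features;
    }
}

The resulting vectors, paired with the already-mapped part labels, are exactly the kind of numeric training data the quoted sentence is asking for.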
I'm not sure ElasticSearch would improve much on the network. I don't know much about auto parts trading, but as a wild guess, "the large round thingy that helps change direction" would never be mapped to "steering wheel" by ES but could be learned easily by a network - provided there are at least some examples of people using that text to specify steering wheel.
You can, but don't necessarily have to, use TensorFlow.js for your network. The model could run on your server as a web service; you'd just send the customer's text to it and it would send back its recommendations of part SKUs and names.
(not sure if this is the right forum for this question)
I am very curious about how search works on major sites, say YouTube/Quora/StackExchange.
And I'm NOT looking for an answer like "They use the Lucene search engine". I want to understand exactly how the indexing works there.
Is there a different index for text search than for the autocomplete feature?
Is it done in the background, like map reduce?
How exactly does map reduce help deliver results? (I know that it counts words in each document but what happens after that when I search for a keyword?)
I also heard that Google stopped using map reduce and is now using Cloud Dataflow - how does that work?
Help Please :-)
I voted to close because I think your question is too broad; each bullet could form the basis of its own SO question. That said, I'll take a crack at answering how SolrCloud attempts to solve each of the problems you are asking about:
Is there a different index for text search than for the autocomplete feature?
The short answer is "yes". Solr has several options for implementing an autocomplete feature and all of them rely on either building a separate index or being supplied a separate dictionary. You can also roll your own in an even more sophisticated fashion as the blog post "Super flexible AutoComplete with Solr" demonstrates.
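To illustrate the "separate index or dictionary" idea - this is not Solr's actual suggester API, just the general shape of the approach - a toy autocomplete can be served from its own sorted dictionary, completely apart from the main search index:

import java.util.ArrayList;
import java.util.List;
import java.util.SortedSet;
import java.util.TreeSet;

// Autocomplete served from its own structure, separate from the main index.
// A sorted set gives cheap prefix lookups; real suggesters (Solr's Suggester
// component, Lucene's suggest module) are far more sophisticated.
public final class PrefixSuggester {

    private final SortedSet<String> dictionary = new TreeSet<>();

    public void add(String term) {
        dictionary.add(term.toLowerCase());
    }

    public List<String> suggest(String prefix, int max) {
        String p = prefix.toLowerCase();
        List<String> out = new ArrayList<>();
        // tailSet(p) starts at the first term >= the prefix; stop as soon as
        // a term no longer shares the prefix or we have enough suggestions.
        for (String term : dictionary.tailSet(p)) {
            if (!term.startsWith(p) || out.size() >= max) break;
            out.add(term);
        }
        return out;
    }
}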
Is it done in the background like map reduce?
Generally speaking, no. SolrCloud is based on the idea of shards with leaders and replicas: a shard is a subset of your overall index, comprised of a leader and possibly one or more replicas.
Queries are executed against all shard leaders, with one particular shard assigned to act as the aggregator of the other shards' responses. But unlike map reduce, where the individual node responses contain all the data the reducing node needs, the aggregating Solr shard may make multiple requests back to the other shards - to figure out sort order, for example.
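A rough sketch of that scatter-gather pattern, with hypothetical Shard and Hit types standing in for SolrCloud's real internals: each shard returns its local top hits and the aggregator merges them by score.

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Scatter-gather in miniature: ask every shard for its local top-k,
// then merge by score and keep the global top-k.
record Hit(String docId, float score) {}

interface Shard {
    List<Hit> topK(String query, int k);
}

final class Aggregator {
    static List<Hit> search(List<Shard> shards, String query, int k) {
        List<Hit> merged = new ArrayList<>();
        for (Shard shard : shards) {
            merged.addAll(shard.topK(query, k)); // scatter
        }
        merged.sort(Comparator.comparingDouble(Hit::score).reversed()); // gather
        return merged.subList(0, Math.min(k, merged.size()));
    }
}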
How exactly does map reduce help deliver results? (I know that it counts words in each document but what happens after that when I search for a keyword?)
See my response to your previous question. In short, the query is executed against each shard, aggregated by one of those shards, and returned to the requestor. The useful magic that people most often associate with Solr - Lucene, really - is Term Frequency / Inverse Document Frequency (TF-IDF) indexing, usually combined with stemming for text searches. That is not exactly what happens under the hood, and you can vary what's actually done via configuration, but it gives a fairly good idea of what's being done.
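For a back-of-the-envelope sense of the TF-IDF idea (Lucene's real scoring is more involved, and newer versions default to BM25), here is a toy weight function:

import java.util.List;
import java.util.Map;

// Toy TF-IDF: a term's weight grows with how often it appears in a document
// and shrinks with how many documents in the corpus contain it at all.
final class TfIdf {
    static double weight(String term, Map<String, Integer> docTermCounts,
                         List<Map<String, Integer>> corpus) {
        double tf = docTermCounts.getOrDefault(term, 0);
        long docsWithTerm = corpus.stream().filter(d -> d.containsKey(term)).count();
        double idf = Math.log((double) corpus.size() / (1 + docsWithTerm));
        return tf * idf;
    }
}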
Other searching - on dates, numbers, or simple textual values - is done in a fashion similar to database indexing. That is a simplification; if you want to understand it more fully, read the Javadoc on NumericRangeQuery for an in-depth explanation.
I also heard that Google stopped using map reduce and is now using Cloud Dataflow - how does that work?
If I knew the answer to that, I would probably be working for Google and not answering StackOverflow questions :). Seriously, whatever they've built is new PhD-level work that, as far as I know, they haven't even released a research paper on - which is what they did with map reduce, and that is what led to Yahoo building Hadoop.
Somebody posted a question an hour or so ago about the Drupal search engine; it went roughly like this:
I know Drupal should index anything that is returned by node_view(), but this is not happening for my custom content. Also: are there better alternatives to Drupal's built-in functionality?
As the question was removed while I was answering, and I didn't want to throw away 20 minutes of my life for nothing ;) I thought I would re-create the question here. Hope this is fine by the rules of SO! :)
The Drupal search engine is probably not the most celebrated feature of Drupal, but it is fairly solid, sophisticated and reliable. There are plenty of modules that enhance or replace it, but - at least in my experience - there is no commonly accepted "better way" to manage searching and indexing.
However, for very big and busy sites, people prefer to use external tools altogether, like a Google search box, or even dedicated software or hardware such as Solr/Lucene or the Google Search Appliance (GSA).
The link I provided above - however - sorts the search-related modules by descending usage statistics, so you will find the most commonly used ones on the first page. One that I personally like for English-language sites is the Porter stemmer module, which indexes words by their stem (e.g. highness, highest and higher will all be returned as matches for the word "high").
That was the general information on search and Drupal. As for your problem, there are a number of things you could check to track it down:
Has your cron.php been executed lately? Indexing is done as part of the cron run, so if you do not have a crontab set up, or if you haven't run it by hand, your node will likely not have been indexed yet.
Are the settings correct? Settings for the search module are located at http://example.com/admin/settings/search : is your minimum word length sufficient for your needs (the default is 3 letters)?
Has 100% of the site been indexed? (You can check that from the settings page.) If it has not, and running cron.php doesn't solve the matter, look further down.
Does a re-index solve the problem? Especially if you inserted data by means of SQL queries directly on the Drupal tables, chances are Drupal hasn't realised that the content of the node has changed and therefore hasn't updated the index.
Is the node you are trying to find visible? Search results for unpublished nodes, or nodes that require higher permissions than yours to be viewed, are not returned, AFAIK.
As for the "stuck indexing": that happened to me once as well. It turned out there was some PHP code within a node body that triggered a PHP exception when the node was being indexed; as a result the indexing process halted and none of the following nodes were indexed either.
Hope this helps. Good luck!
Can someone please let me know how to implement a "Did you mean" feature in Lucene.net?
Thanks!
You should look into the SpellChecker module in the contrib dir. It's a port of Java Lucene's SpellChecker module, so its documentation should be helpful.
(From the javadocs:)
Example Usage:
import java.io.File;
import org.apache.lucene.search.spell.LuceneDictionary;
import org.apache.lucene.search.spell.PlainTextDictionary;
import org.apache.lucene.search.spell.SpellChecker;

SpellChecker spellchecker = new SpellChecker(spellIndexDirectory);
// To index a field of a user index:
spellchecker.indexDictionary(new LuceneDictionary(my_lucene_reader, a_field));
// To index a file containing words:
spellchecker.indexDictionary(new PlainTextDictionary(new File("myfile.txt")));
// Ask for the 5 terms closest to the (possibly misspelt) input word:
String[] suggestions = spellchecker.suggestSimilar("misspelt", 5);
AFAIK Lucene supports fuzzy search, meaning that if you use something like:
field:stirng~0.5
(that's a tilde sign)
it will match "string". The float is how "tolerant" the search is, where 1.0 is an exact match and 0.0 matches just about everything.
Different parsers will, however, implement this differently.
A fuzzy search is much slower than a wildcard search (stri*), so use it with caution. In your case, one would assume that if you find no matches on a regular search, you run a fuzzy search to see what it finds, and present a "did you mean" based on the results somehow.
It might be useful to cache this sort of lookup for very common misspellings, for performance reasons.
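A rough sketch of that fallback using Lucene's FuzzyQuery (the field name and the searcher setup are assumptions for the example):

import org.apache.lucene.index.Term;
import org.apache.lucene.search.FuzzyQuery;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.search.TopDocs;

// Run an exact term query first and only fall back to a fuzzy query
// when it comes back empty; the fuzzy hits feed the "did you mean" UI.
final class DidYouMean {
    static TopDocs searchWithFallback(IndexSearcher searcher, String userInput) throws Exception {
        TopDocs exact = searcher.search(new TermQuery(new Term("name", userInput)), 10);
        if (exact.scoreDocs.length > 0) {
            return exact;
        }
        // Terms up to 2 edits away from what the user typed.
        return searcher.search(new FuzzyQuery(new Term("name", userInput), 2), 10);
    }
}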
Google's "Did you mean?" is (probably; they're secretive, of course) implemented by consulting their query log. Look to see if people who searched for the query you're processing searched for something very similar soon after; if so, it indicates they made a mistake, and realized what they ought to be searching for.
Since you probably don't have a huge query log, you could approximate it. Take the query, split up the terms, see if there are any similar terms in the database (by edit distance, whatever); replace your terms with those nearby terms, and rerun the query. If you get more hits, that was probably a better query. Suggest it to the user. (And since you've already got the hits, and most people only look at the top 2 results, show them those.)
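A sketch of that approximation - find the known term with the smallest edit distance to what the user typed and swap it in before re-running the query; where the list of known terms comes from (your index's term dictionary, for instance) is up to you:

import java.util.Collection;

// Pick the known term closest (by Levenshtein edit distance) to what the
// user typed; if nothing is close enough, return null and keep the original.
final class NearestTerm {

    static String closest(String typed, Collection<String> knownTerms, int maxDistance) {
        String best = null;
        int bestDist = maxDistance + 1;
        for (String candidate : knownTerms) {
            int d = editDistance(typed, candidate);
            if (d < bestDist) {
                bestDist = d;
                best = candidate;
            }
        }
        return best;
    }

    // Classic dynamic-programming Levenshtein distance.
    static int editDistance(String a, String b) {
        int[][] dp = new int[a.length() + 1][b.length() + 1];
        for (int i = 0; i <= a.length(); i++) dp[i][0] = i;
        for (int j = 0; j <= b.length(); j++) dp[0][j] = j;
        for (int i = 1; i <= a.length(); i++) {
            for (int j = 1; j <= b.length(); j++) {
                int cost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                dp[i][j] = Math.min(Math.min(dp[i - 1][j] + 1, dp[i][j - 1] + 1),
                                    dp[i - 1][j - 1] + cost);
            }
        }
        return dp[a.length()][b.length()];
    }
}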
Take a look at the Google Code project called semanticvectors.
There's a decent amount of discussion on the Lucene mailing lists about doing the kind of functionality you're after using it - however, it is written in Java.
You will probably have to parse your search logs and apply some machine learning algorithms to them to build a feature like this!