Hybrid recommender in spark - apache-spark

I am trying to build a hybrid recommender using prediction.io which functions as a layer on top of spark/mllib under the hood.
I'm looking for a way to incorporate a boost based on tags in the ALS algorithm when doing a recommendation request.
Using content information to improve collaborative filtering seems like such a common approach, yet I cannot find any documentation on combining a collaborative algorithm (e.g. ALS) with a content-based measure.
Any examples or documentation on incorporating content similarity with collaborative filtering for either mllib (spark) or mahout (hadoop) would be greatly appreciated.

This PredictionIO Template uses Mahout's Spark version of Correlators so it can make use of multiple actions to recommend to users or find similar items. It allows you to include multiple categorical tag-like content to boost or filter recs.
http://templates.prediction.io/PredictionIO/template-scala-parallel-universal-recommendation
The v0.2.0 branch also has date-range filtering, and popular-item backfill is in development.
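For reference, a query against that template can pass tag-like fields with a bias to boost or filter results. Below is a minimal sketch using the PredictionIO Python SDK; the engine URL, the "tags" field name and the bias values are assumptions based on the template's documented query format, so check the template docs for your version.

    import predictionio

    # Hypothetical engine endpoint; adjust host/port to your deployed engine.
    engine = predictionio.EngineClient(url="http://localhost:8000")

    # Ask for 10 recommendations for a user, boosting items tagged "electronics".
    # A bias > 1 boosts matching items; a negative bias acts as a filter
    # (field name and bias values here are illustrative, not canonical).
    result = engine.send_query({
        "user": "u-123",
        "num": 10,
        "fields": [
            {"name": "tags", "values": ["electronics"], "bias": 1.05}
        ]
    })
    print(result)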

Related

Tool for storing information about tables, their sources and ETL for DWH

I'm searching for a tool for storing documentation about tables, data sources, ETL processes, etc. for my DWH.
I've seen some presentations on YouTube, but I've found that most companies use a custom in-house system or something like a wiki with plain-text descriptions.
I think that is not very useful for analysts, managers and other users trying to find what they need and how to use the data to calculate the statistics that suit them.
Can you please suggest what I could use for this case? What should I read?
While Airflow ships with some support for Apache Atlas, in my opinion one of the best data-lake metadata management tools right now is Lyft's Amundsen,
and they've also released lyft/amundsendatabuilder, the introduction of which says
Amundsen Databuilder is a data ingestion library, which is inspired by
Apache Gobblin. It could be used in an orchestration
framework (e.g. Apache Airflow) to build data for Amundsen. You could
use the library either with an ad-hoc Python script (example) or
inside an Apache Airflow DAG (example).
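To give a feel for the Airflow route, here is a minimal sketch of a DAG that wraps a metadata-ingestion step. The run_metadata_ingestion function is a hypothetical placeholder where you would build and launch an amundsendatabuilder job (extractor -> transformer -> loader -> publisher, as in the library's own examples), and the operator import path assumes a pre-2.0 Airflow layout.

    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python_operator import PythonOperator  # pre-Airflow-2.0 import path


    def run_metadata_ingestion():
        # Hypothetical placeholder: build and launch an amundsendatabuilder job here
        # (extractor -> transformer -> loader -> publisher), as in the library's examples.
        pass


    dag = DAG(
        dag_id="amundsen_metadata_ingestion",
        start_date=datetime(2020, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    )

    ingest = PythonOperator(
        task_id="ingest_table_metadata",
        python_callable=run_metadata_ingestion,
        dag=dag,
    )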

Building a collaborative filtering recommendation engine using Spark mlLib

I am trying to build a recommendation engine based on collaborative filtering using Apache Spark. I have been able to run recommendation_example.py on my data, with quite good results (MSE ~ 0.9). Some of the specific questions that I have are:
How to make recommendations for users who have not done any activity on the site? Isn't there some API call for popular items, which would give me the most popular items based on user actions? One way to do this is to identify the popular items ourselves, catch the java.util.NoSuchElementException, and return those popular items.
How to reload the model after some data has been added to the input file? I am trying to reload the model using another function that tries to save the model, but it fails with org.apache.hadoop.mapred.FileAlreadyExistsException. One way to do this is to listen for the incoming data on a parallel thread, save it using model.save(sc, "target/tmp/<some target>"), and then reload the model after significant data has been received. I am lost here on how to achieve that.
It would be very helpful, if I could get some direction here.
For the first part, you can count, for each item_id, the number of times it appears; Spark's map and reduceByKey functions work well for that. Then take the top 10/20 items with the highest counts. You can also weight items by recency.
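A minimal PySpark sketch of that counting idea, assuming the input is a plain text file with one "userId,itemId,rating" line per interaction (the path and delimiter are assumptions):

    from pyspark import SparkContext

    sc = SparkContext(appName="PopularItemsFallback")

    # Assumed input format: "userId,itemId,rating" per line; adjust the parsing to your data.
    lines = sc.textFile("data/ratings.csv")

    item_counts = (lines
        .map(lambda line: (line.split(",")[1], 1))   # (item_id, 1) per interaction
        .reduceByKey(lambda a, b: a + b))            # (item_id, total interactions)

    # Top 20 items by interaction count, to serve when ALS has nothing for a user.
    popular_items = item_counts.takeOrdered(20, key=lambda kv: -kv[1])
    print(popular_items)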
For the second part, you can save the model under a new name every time. I generally build a folder name on the fly from the current date and time and use that same name to reload the model from the saved folder. You will always have to train the model again on the past data plus the newly received data, and then use that model to predict.
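A sketch of that save-under-a-new-name idea with pyspark.mllib's ALS; the rank, iteration count and base path are placeholders:

    import time
    from pyspark.mllib.recommendation import ALS, MatrixFactorizationModel

    def retrain_and_save(sc, ratings_rdd):
        # Retrain on the full history (old + newly received data), then save under a
        # timestamped path so FileAlreadyExistsException is never triggered.
        model = ALS.train(ratings_rdd, rank=10, iterations=10, lambda_=0.01)
        path = "target/tmp/als_model_%s" % time.strftime("%Y%m%d_%H%M%S")
        model.save(sc, path)
        return path

    # Later, reload the most recently saved model:
    # model = MatrixFactorizationModel.load(sc, saved_path)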
Independent of platforms like Spark, there are some very good link-prediction techniques (for example, non-negative matrix factorization) that predict links between two sets.
Other very effective recommendation techniques are:
1. Thompson Sampling, 2. MAB (Multi-Armed Bandits). A lot depends on the raw dataset and how it is distributed. I would recommend applying the above methods on 5% of the raw dataset, building a hypothesis, using A/B testing, predicting links and moving on from there.
Again, all these techniques are independent of the platform. I would also recommend starting from scratch instead of using platforms like Spark, which are only worthwhile for large datasets. You can always move to those platforms later for scalability.
Hope it helps!
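To make the Thompson Sampling suggestion above concrete, here is a minimal Beta-Bernoulli sketch in plain Python; the items and the click/no-click feedback model are purely illustrative:

    import random

    # "Success" here means the user clicked/consumed the recommended item.
    items = ["item_a", "item_b", "item_c"]
    successes = {i: 1 for i in items}   # Beta prior alpha = 1
    failures = {i: 1 for i in items}    # Beta prior beta = 1

    def pick_item():
        # Sample a plausible click-through rate per item and show the best sample.
        samples = {i: random.betavariate(successes[i], failures[i]) for i in items}
        return max(samples, key=samples.get)

    def record_feedback(item, clicked):
        if clicked:
            successes[item] += 1
        else:
            failures[item] += 1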

Where to find the spark.ml DataFrame implementation of collaborative filtering

I am just going over the spark.ml tutorials, but I didn't find official documentation about collaborative filtering. Where can I find an implementation of collaborative filtering that uses DataFrames?
You can find more information about collaborative filtering for the latest Spark release (1.6.0) in the location below.
You can also refer to this link to get more info about the same.
There is a PR pending regarding documentation for spark.ml's collaborative filtering here: https://github.com/apache/spark/pull/10411
Unfortunately, there hasn't been much interest on spark committers' side lately despite a few pings.
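Until that documentation lands, here is a minimal sketch of the DataFrame-based ALS in pyspark.ml (available since around Spark 1.4); the column names, hyperparameters and toy data are placeholders:

    from pyspark import SparkContext
    from pyspark.sql import SQLContext
    from pyspark.ml.recommendation import ALS

    sc = SparkContext(appName="MlAlsExample")
    sqlContext = SQLContext(sc)  # Spark 1.x entry point; newer releases use SparkSession

    ratings = sqlContext.createDataFrame(
        [(0, 0, 4.0), (0, 1, 2.0), (1, 1, 3.0), (1, 2, 4.0), (2, 0, 1.0)],
        ["userId", "itemId", "rating"])

    als = ALS(userCol="userId", itemCol="itemId", ratingCol="rating",
              rank=10, maxIter=10, regParam=0.1)
    model = als.fit(ratings)

    # transform() adds a "prediction" column with the reconstructed ratings.
    model.transform(ratings).show()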

mongodb approximate string matching

I am trying to implement a search engine for my recipes-website using mongo db.
I am trying to display the search suggestions in type-ahead widget box to the users.
I am even trying to support misspelled queries (Levenshtein distance).
For example: whenever users type 'pza', type-ahead should display 'pizza' as one of the suggestion.
How can I implement such functionality using mongodb?
Please note, the search should be instantaneous, since the results will be fetched by the type-ahead widget. The collections I would run search queries over have at most 1 million entries.
I thought of implementing the Levenshtein distance algorithm myself, but this would slow down performance, as the collection is huge.
I read that FTS (Full Text Search) in Mongo 2.6 is quite stable now, but my requirement is approximate matching, not FTS; FTS won't return 'pizza' for 'pza'.
Please recommend an efficient way to do this.
I am using the Node.js MongoDB native driver.
The text search feature in MongoDB (as at 2.6) does not have any built-in features for fuzzy/partial string matching. As you've noted, the use case currently focuses on language & stemming support with basic boolean operators and word/phrase matching.
There are several possible approaches to consider for fuzzy matching depending on your requirements and how you want to qualify "efficient" (speed, storage, developer time, infrastructure required, etc):
Implement support for fuzzy/partial matching in your application logic using some of the readily available soundalike and similarity algorithms. Benefits of this approach include not having to add any extra infrastructure and being able to closely tune matching to your requirements (a small sketch of this approach follows the options below).
For some more detailed examples, see: Efficient Techniques for Fuzzy and Partial matching in MongoDB.
Integrate with an external search tool that provides more advanced search features. This adds some complexity to your deployment and is likely overkill just for typeahead, but you may find other search features you would like to incorporate elsewhere in your application (e.g. "like this", word proximity, faceted search, ..).
For example see: How to Perform Fuzzy-Matching with Mongo Connector and Elastic Search. Note: ElasticSearch's fuzzy query is based on Levenshtein distance.
Use an autocomplete library like Twitter's open source typeahead.js, which includes a suggestion engine and query/caching API. Typeahead is actually complementary to any of the other backend approaches, and its (optional) suggestion engine Bloodhound supports prefetching as well as caching data in local storage.
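To illustrate the first option (fuzzy matching in application logic): the question uses the Node.js driver, but the idea carries over to any language. Below is a hedged Python sketch with pymongo and the standard library's difflib; the database, collection and field names are assumptions, and the first-letter prefilter is just one cheap way to narrow candidates.

    import difflib
    from pymongo import MongoClient

    # Hypothetical collection of recipes with a "name" field.
    recipes = MongoClient()["cookbook"]["recipes"]

    def suggest(query, limit=5):
        # Narrow candidates cheaply first (here: names sharing the query's first letter),
        # then rank the survivors by a similarity ratio in application code.
        candidates = [doc["name"] for doc in
                      recipes.find({"name": {"$regex": "^" + query[0], "$options": "i"}},
                                   {"name": 1}).limit(5000)]
        return difflib.get_close_matches(query, candidates, n=limit, cutoff=0.4)

    # suggest("pza") -> e.g. ["pizza", ...] if present in the collection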
The best option for this would be Elasticsearch's fuzzy query:
https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-fuzzy-query.html
It supports the Levenshtein distance algorithm out of the box and has additional features which can be useful for your requirements, e.g.:
- more like this
- powerful facets / aggregations
- autocomplete
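A sketch of such a fuzzy query through the official Python client; the index and field names are assumptions, and the exact client call style varies between client versions. Note that 'pza' -> 'pizza' is an edit distance of 2, so AUTO fuzziness on a 3-character term would be too strict here.

    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")

    # Fuzzy term query on a hypothetical "name" field of a "recipes" index.
    # fuzziness=2 allows up to two edits, enough for "pza" -> "pizza".
    response = es.search(index="recipes", body={
        "query": {
            "fuzzy": {
                "name": {"value": "pza", "fuzziness": 2}
            }
        }
    })

    for hit in response["hits"]["hits"]:
        print(hit["_score"], hit["_source"]["name"])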

Text recommendation with Lucene/solr/mahout

I'm working on a project where I need to implement an article/news recommendation engine.
I'm thinking of combining different methods (item-based, user-based, and model-based CF) and have a question regarding the tool to use.
From my research Lucene is definitely the tool for text processing but for the recommendation part, it's not so clear.
If I want to implement an item CF on articles based on text similarity :
- I've seen case studies using Mahout but also Solr (http://fr.slideshare.net/lucenerevolution/building-a-realtime-solrpowered-recommendation-engine); as it's really close to a search problem, I would think that Solr is maybe better, am I right?
- What are the differences in terms of processing time between the two tools (I think Mahout is more batch-oriented and Solr more real-time)?
- Can I get a text distance directly from Lucene (it's not really clear to me what the added value of Solr is compared to Lucene)?
- For more advanced methods (models based on matrix factorization), I would use Mahout, but is there any SVD-like function in Solr for concept/tag discovery?
Thanks for your help.
It depends on your requirements: if you only need an offline recommendation function, Mahout is good. For online use, I am still testing it too. In fact, I have tested Lucene and Mahout together and they work fine. For Solr, I'm not so sure; all I know is that it uses Lucene as its core, so all the heavy lifting is still done by Lucene. In my case, I combined Mahout and Lucene in my Java program: Lucene does the preprocessing and the primitive similarity calculations, and the result is then sent to Mahout to be analysed further.
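Not Lucene or Mahout, but to show the text-similarity piece of that pipeline in isolation, here is a small scikit-learn sketch computing TF-IDF cosine similarity between articles, which is the kind of "text distance" an item-based content recommender can be built on (the articles here are dummy data):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    articles = {
        "a1": "spark mllib collaborative filtering with als",
        "a2": "building a recommendation engine with mahout and lucene",
        "a3": "mongodb text search and fuzzy matching",
    }

    ids = list(articles)
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(articles[i] for i in ids)

    # Pairwise cosine similarity between articles: the "text distance" for item-item CF.
    sim = cosine_similarity(tfidf)

    def similar_articles(article_id, top_n=2):
        idx = ids.index(article_id)
        ranked = sorted(enumerate(sim[idx]), key=lambda p: -p[1])
        return [(ids[j], score) for j, score in ranked if j != idx][:top_n]

    print(similar_articles("a2"))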
