MC-Stan on Spark? - apache-spark

I hope to use MC-Stan on Spark, but Google turns up no related pages.
I wonder whether this approach is even possible on Spark, so I would appreciate it if someone could let me know.
Moreover, I also wonder what the widely used approach is for running MCMC on Spark. I hear Scala is widely used, but I need a language that has a decent MCMC library, such as MC-Stan.

Yes, it's certainly possible, but it requires a bit more work. Stan (and every popular MCMC tool I know of) is not designed to run in a distributed setting, via Spark or otherwise. In general, distributed MCMC is an area of active research. For a recent review, I'd recommend section 4 of Patterns of Scalable Bayesian Inference (PoFSBI). There are multiple ways you might want to split up a big MCMC computation, but one of the more straightforward ones is to split up the data and run an off-the-shelf tool like Stan, with the same model, on each partition. Each partition then produces a subposterior, and the subposteriors can be reduce'd together to form the full posterior. PoFSBI discusses several ways of combining such subposteriors.
I've put together a very rough proof of concept using pyspark and pystan (Python is the common language with the most Stan and Spark support). It's a rough and limited implementation of the weighted-average consensus algorithm from PoFSBI, running on the tiny 8-schools dataset. I don't think this example would be very useful in practice, but it should give some idea of what is necessary to run Stan as a Spark program: partition the data, run Stan on each partition, and combine the subposteriors.
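For a flavor of that structure, here is a stripped-down sketch assuming pyspark and pystan 2.x. Note that it uses a trivial normal-mean model rather than the 8-schools model, and the shard count, data, and precision weighting are illustrative simplifications, not the actual proof of concept:

```python
import numpy as np
import pystan
from pyspark import SparkContext

# Illustrative model: estimate a normal mean (NOT the 8-schools model).
MODEL = """
data { int<lower=0> N; vector[N] y; }
parameters { real mu; }
model { y ~ normal(mu, 1); }
"""

def run_stan_on_shard(rows):
    """Fit the same Stan model to one data shard; emit its mu draws."""
    y = np.array(list(rows))
    sm = pystan.StanModel(model_code=MODEL)  # compiled per shard: slow, but simple
    fit = sm.sampling(data={"N": len(y), "y": y}, iter=2000, chains=1)
    yield fit.extract()["mu"]

sc = SparkContext(appName="consensus-mcmc-sketch")
y = np.random.normal(3.0, 1.0, size=10000)        # toy data
subposteriors = (sc.parallelize(y.tolist(), 4)    # 4 shards
                   .mapPartitions(run_stan_on_shard)
                   .collect())

# Weighted-average consensus: weight each subposterior's draws by its
# precision (inverse sample variance), then average draw-by-draw.
n = min(len(d) for d in subposteriors)
draws = np.stack([d[:n] for d in subposteriors])  # (shards, draws)
w = 1.0 / draws.var(axis=1)
consensus = (w[:, None] * draws).sum(axis=0) / w.sum()
print("consensus posterior mean of mu:", consensus.mean())
```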

Related

Dynamic Topic Modeling with Gensim / which code?

I want to use Dynamic Topic Modeling by Blei et al. (http://www.cs.columbia.edu/~blei/papers/BleiLafferty2006a.pdf) for a large corpus of nearly 3800 patent documents.
Does anybody have experience in using the DTM in the gensim package?
I identified two models:
models.ldaseqmodel – Dynamic Topic Modeling in Python Link
models.wrappers.dtmmodel – Dynamic Topic Models (DTM) Link
Which one did you use, or, if you used both, which one is "better"? In other words, which one did/do you prefer?
Both packages work fine, and are pretty much functionally identical. Which one you might want to use depends on your use case. There are small differences in the functions each model comes with, and small differences in the naming, which might be a little confusing, but for most DTM use cases, it does not matter very much which you pick.
Are the model outputs identical?
Not exactly. They are, however, very, very close to identical (98%+). I believe most of the differences come from slightly different handling of the probabilities in the generative process. So far, I've not come across a case where a difference in the sixth or seventh digit after the decimal point has any significant meaning. Interpreting the topics your model finds matters much more than one version finding a higher topic loading for some word by 0.00002.
The big difference between the two models: dtmmodel is a python wrapper for the original C++ implementation from blei-lab, which means python will run the binaries, while ldaseqmodel is fully written in python.
Why use dtmmodel?
the C++ code is faster than the python implementation
supports the Document Influence Model from Gerrish/Blei 2010 (potentially interesting for your research; see this paper for an implementation)
Why use ldaseqmodel?
easier to install (simple import statement vs downloading binaries)
can use sstats from a pretrained LDA model - useful with LdaMulticore
easier to understand the workings of the code
I mostly use ldaseqmodel, but that's for convenience. Native DIM support would be great to have, though.
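For reference, a minimal invocation of the wrapper looks roughly like this; it assumes you have downloaded and compiled the dtm binary from blei-lab, and the path, toy corpus, and topic count below are placeholders:

```python
from gensim.corpora import Dictionary
from gensim.models.wrappers import DtmModel

# Tiny made-up corpus: two documents per time slice.
docs = [["battery", "cell"], ["battery", "anode"],
        ["sensor", "signal"], ["sensor", "network"]]
dictionary = Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]

# dtm_path must point at the compiled blei-lab binary (placeholder path).
model = DtmModel("/path/to/dtm-binary", corpus=corpus,
                 time_slices=[2, 2], num_topics=2, id2word=dictionary)
print(model.show_topic(topicid=0, time=0, topn=5))
```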
What should you do?
Try each of them out, say, on a small sample set and see what the models return. 3800 documents isn't a huge corpus (assuming the patents aren't hundreds of pages each), and I assume that after preprocessing (removing stopwords, images and metadata) your dictionary won't be too large either (lots of standard phrases and legalese in patents, I'd assume). Pick the one that works best for you or has the capabilities you need.
Full analysis might take hours anyway; if you let your code run overnight, there is little practical difference: after all, do you care whether it finishes at 3am or 5am? If runtime is critical, I would assume dtmmodel will be more useful.
For implementation examples, you might want to take a look at these notebooks: ldaseqmodel and dtmmodel.
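And here is the equivalent pure-Python route, a minimal ldaseqmodel sketch; the tiny corpus, slice sizes, and topic count are again made-up placeholders (think patents grouped by filing year):

```python
from gensim.corpora import Dictionary
from gensim.models import LdaSeqModel

# Toy corpus: four tokenized "patents" in two time slices.
docs = [["battery", "cell", "charge"],
        ["battery", "anode", "charge"],
        ["sensor", "wireless", "signal"],
        ["sensor", "network", "signal"]]
time_slice = [2, 2]                 # number of docs per slice, in order

dictionary = Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]

ldaseq = LdaSeqModel(corpus=corpus, id2word=dictionary,
                     time_slice=time_slice, num_topics=2)
print(ldaseq.print_topics(time=0))  # topic-word distributions, first slice
```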

How can I know if Apache Spark is the right tool?

Just wondering, is there a list of questions to ask ourselves in order to know whether Spark is the right tool or not?
Once again I spent part of the week implementing a POC with Apache Spark in order to compare its performance against pure Python code, and I was baffled when I saw a 1/100 ratio (in favor of Python).
I know that Spark is a "big data" tool and everyone keeps saying "Spark is the right tool for processing TB/PB of data", but I don't think that is the only thing to take into account.
In brief, my question is: given small data as input, how can I know whether the computation will be heavy enough that Spark can actually improve things?
I'm not sure if there is such a list, but if there was, the first question would probably be
Does your data fit on a single machine?
And if the answer is 'Yes', you do not need Spark.
Spark is designed to process data that cannot be handled by a single machine, as an alternative to Hadoop, in a fault-tolerant manner.
There are lots of overheads associated with operating in a distributed manner, such as fault tolerance and networking, which cause the apparent slowdown compared to traditional tools on a single machine.
Just because Spark can be used as a parallel processing framework on a small dataset does not mean it should be used in such a way. You will get faster results and less complexity by using, say, Python, and parallelizing your processing using threads.
Spark excels when you have to process a dataset that does not fit onto a single machine, when the processing is complex and time-consuming and the probability of encountering an infrastructure issue is high enough and a failure would result in starting again from scratch.
Comparing Spark to native Python is like comparing a locomotive to a bicycle. A bicycle is fast and agile, until you need to transport several tonnes of steel from one end of the country to the other: then it is not so much fun.
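To make the "just use Python" suggestion above concrete, here is a toy sketch of parallelizing a small-data job with the standard library instead of Spark; the workload function is hypothetical:

```python
from multiprocessing import Pool

def expensive(x):
    # Stand-in for a CPU-bound per-record computation.
    return sum(i * i for i in range(x % 1000))

if __name__ == "__main__":
    data = list(range(1000000))
    with Pool() as pool:            # one worker process per core
        results = pool.map(expensive, data, chunksize=10000)
    print(len(results))
```

No cluster, no serialization across the network, no job scheduler: for data that fits in memory, this overhead-free approach is usually what wins the benchmark.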

Scala and AKKA for a simulation

Should I learn Scala and Akka for a simulation project? Are these technologies a good fit / worth the investment? The task (https://www.dropbox.com/s/3lby24y26wp60to/assignment.pdf?dl=0) is to perform an event-based simulation of an IoT edge data center and implement some scheduling algorithms.
If yes, which libraries would you suggest? https://github.com/scalation/scalation does not seem to be a parallel library.
This is an opinion-based question and should not be asked here.
Anyway, I'm going to try to give you some pointers. Akka is a generic framework: you can build anything with it, but nothing in particular is an immediate fit (OK, some things fit better than others, but still).
In your case, while Akka is a valid fit (actors = agents), I'd look more into specialized software for ABM (agent-based modeling); you can find a massive list here.
In particular, I recommend NetLogo: its syntax is a bit counterintuitive if you have never used something akin to Lisp or another language with immutable variables ("let" etc.), but once you get the hang of it, it's very powerful for the effort required.
And, if you come from a CS background, it should be super easy for you (it's normally used by non-CS people in various fields, and it's designed to be easy).
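Whatever tool you pick, the event-driven core of such a simulator is small. Here is a sketch in Python for brevity (in Scala/Akka each server would become an actor); the single-server setup and all rates are made up:

```python
import heapq
import random

# One server with a FIFO queue; a scheduling policy would plug in
# at the point where the queue is consulted. Rates are arbitrary.
ARRIVAL_RATE, SERVICE_RATE = 1.0, 1.2
events = [(0.0, "arrival")]         # min-heap of (time, kind)
clock, busy, waiting, done = 0.0, False, 0, 0

while done < 100:                   # simulate 100 completed jobs
    clock, kind = heapq.heappop(events)
    if kind == "arrival":
        heapq.heappush(events, (clock + random.expovariate(ARRIVAL_RATE), "arrival"))
        if busy:
            waiting += 1            # the scheduling decision point
        else:
            busy = True
            heapq.heappush(events, (clock + random.expovariate(SERVICE_RATE), "departure"))
    else:                           # departure
        done += 1
        if waiting:
            waiting -= 1
            heapq.heappush(events, (clock + random.expovariate(SERVICE_RATE), "departure"))
        else:
            busy = False

print("simulated %d jobs in %.1f time units" % (done, clock))
```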

MapReduce - Anything else except word-counting?

I have been looking at MapReduce and reading through various papers about it and its applications, but, to me, it seems that MapReduce is only suitable for a very narrow class of scenarios that ultimately amount to word-counting.
If you look at the original paper, Google's employees provide "various" potential use cases, like "distributed grep", "distributed sort", "reverse web-link graph", "term-vector per host", etc.
But if you look closer, all those problems boil down to simply "counting words" - that is, counting the number of occurrences of something in a chunk of data, then aggregating/filtering and sorting that list of occurrences.
There also are some cases where MapReduce has been used for genetic algorithms or relational databases, but they don't use the "vanilla" MapReduce published by Google. Instead they introduce further steps along the Map-Reduce chain, like Map-Reduce-Merge, etc.
Do you know of any other (documented?) scenarios where "vanilla" MapReduce has been used to perform more than mere word-counting? (Maybe for ray-tracing, video-transcoding, cryptography, etc. - in short anything "computation heavy" that is parallelizable)
Atbrox has been maintaining a list of MapReduce/Hadoop algorithms in academic papers. Here is the link. All of these could be applied for practical purposes.
MapReduce is good for problems that can be considered to be embarrassingly parallel. There are a lot of problems that MapReduce is very bad at, such as those that require lots of all-to-all communication between nodes. E.g., fast Fourier transforms and signal correlation.
There are projects using MapReduce for parallel computations in statistics. For instance, Revolution Analytics has started an RHadoop project for use with R. Hadoop is also used in computational biology and in other fields with large datasets that can be analyzed by many discrete jobs.
I am the author of one of the packages in RHadoop, and I wrote several of the examples distributed with the source and used in the tutorial: logistic regression, linear least squares, matrix multiplication, etc. There is also a paper I would like to recommend, http://www.mendeley.com/research/sorting-searching-simulation-mapreduce-framework/, which seems to strongly support the equivalence of MapReduce with classic parallel programming models such as PRAM and BSP. I often write MapReduce algorithms as ports of PRAM algorithms; see for instance blog.piccolboni.info/2011/04/map-reduce-algorithm-for-connected.html. So I think the scope of MapReduce is clearly more than "embarrassingly parallel", but not infinite. I have myself experienced some limitations, for instance in speeding up some MCMC simulations, though of course it could have been me not using the right approach.
My rule of thumb is the following: if the problem can be solved in parallel in O(log(N)) time on O(N) processors, then it is a good candidate for MapReduce, with O(log(N)) jobs and constant time spent in each job. Other people, and the paper I mentioned, seem to focus more on the O(1)-jobs case. When you go beyond O(log(N)) time, the case for MR seems to get a little weaker, but some limitations may be inherent in the current implementation (high job overhead) rather than fundamental. It's a pretty fascinating time to be working on charting the MR territory.
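To make the "more than word counting" point concrete, here is a single-machine sketch of the map/reduce decomposition behind the linear least squares example mentioned above: each mapper emits the sufficient statistics (X'X, X'y) for its shard, the reducer sums them, and the final solve is trivial. The shard count and synthetic data are made up for illustration:

```python
import numpy as np
from functools import reduce

# Synthetic regression data with known coefficients.
rng = np.random.default_rng(0)
X = rng.normal(size=(10000, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=10000)

def mapper(shard):
    Xs, ys = shard
    return Xs.T @ Xs, Xs.T @ ys      # per-shard sufficient statistics

def reducer(a, b):
    return a[0] + b[0], a[1] + b[1]  # the statistics simply add up

shards = zip(np.array_split(X, 8), np.array_split(y, 8))
XtX, Xty = reduce(reducer, map(mapper, shards))
print(np.linalg.solve(XtX, Xty))     # recovers roughly [1.0, -2.0, 0.5]
```

Nothing here is a word count, yet it fits the vanilla map/shuffle/reduce mold exactly; the same pattern covers matrix multiplication and logistic regression (with an outer iteration loop).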

Examples of simple stats calculation with hadoop

I want to extend an existing clustering algorithm to cope with very large data sets and have redesigned it in such a way that it is now computable with partitions of data, which opens the door to parallel processing. I have been looking at Hadoop and Pig and I figured that a good practical place to start was to compute basic stats on my data, i.e. arithmetic mean and variance.
I've been googling for a while, but maybe I'm not using the right keywords and I haven't really found anything which is a good primer for doing this sort of calculation, so I thought I would ask here.
Can anyone point me to some good samples of how to calculate mean and variance using hadoop, and/or provide some sample code.
Thanks
Pig Latin has an associated library of reusable code called PiggyBank that has numerous handy functions. Unfortunately, it didn't have variance the last time I checked, but maybe that has changed. If nothing else, it might provide examples to get you started on your own implementation.
I should note that variance is difficult to compute in a numerically stable way over huge data sets, so take care!
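For reference, the stable, combinable statistic that warning alludes to (Welford's streaming update plus Chan et al.'s merge rule) looks roughly like this; the partition contents are illustrative, and in Hadoop terms partial_stats would run in the mappers and combine in the reducer:

```python
def partial_stats(xs):
    """Welford's one-pass stats for a partition: (count, mean, M2)."""
    n, mean, m2 = 0, 0.0, 0.0
    for x in xs:
        n += 1
        delta = x - mean
        mean += delta / n
        m2 += delta * (x - mean)
    return n, mean, m2

def combine(a, b):
    """Merge two partial results exactly (Chan et al.'s formula)."""
    n1, mean1, m21 = a
    n2, mean2, m22 = b
    n = n1 + n2
    delta = mean2 - mean1
    mean = mean1 + delta * n2 / n
    m2 = m21 + m22 + delta * delta * n1 * n2 / n
    return n, mean, m2

# Two "mapper" outputs merged in a "reducer":
n, mean, m2 = combine(partial_stats([1.0, 2.0, 3.0]),
                      partial_stats([4.0, 5.0, 6.0]))
print(mean, m2 / (n - 1))            # mean 3.5, sample variance 3.5
```

The naive sum-of-squares formula E[x^2] - E[x]^2 cancels catastrophically on large data; this formulation avoids that and, crucially, is associative, which is exactly what a reduce step needs.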
You might double-check and see whether your clustering code can drop into Cascading. It's quite trivial to add new functions, do joins, etc., with your existing Java libraries.
http://www.cascading.org/
And if you are into Clojure, you might watch these github projects:
http://github.com/clj-sys
They are layering new algorithms implemented in Clojure over Cascading (which in turn is layered over Hadoop MapReduce).
