Where to download a trained LDA model for Gensim? - nlp

I want to use an LDA (Latent Dirichlet Allocation) model for an NLP task.
Training such a model from the Wikipedia corpus takes about 5 to 6 hours, and the wiki corpus is about 8 GB (see the tutorial).
Rather than doing that, is there somewhere I can download a pre-built LDA model and use it directly with Gensim?
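(For reference, if you do obtain or train such a model, a minimal sketch of reloading and querying a gensim LdaModel looks like the following; the file names and the dictionary are illustrative assumptions, not artifacts that ship anywhere.)

from gensim.models import LdaModel
from gensim.corpora import Dictionary

# hypothetical paths -- replace with wherever your trained artifacts live
lda = LdaModel.load('wiki_lda.model')            # reload a previously saved model
dictionary = Dictionary.load('wiki_dict.dict')   # the Dictionary used at training time

bow = dictionary.doc2bow("some new document text".lower().split())
print(lda.get_document_topics(bow))              # topic distribution for the unseen document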

Related

How to post-train BERT model on custom dataset

I want to get BERT word embeddings that will be used in another downstream task later. I have a corpus for my custom dataset and want to further pre-train the pre-trained Hugging Face BERT base model. I think this is called post-training. How can I do this using Hugging Face transformers? Can I use transformers.BertForMaskedLM?
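A minimal sketch of such continued masked-language-model pre-training with Hugging Face transformers might look like the following; the corpus file name, block size, and training arguments are illustrative assumptions, and the exact data-loading utility varies across transformers versions:

from transformers import (BertTokenizerFast, BertForMaskedLM, DataCollatorForLanguageModeling,
                          LineByLineTextDataset, Trainer, TrainingArguments)

tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased')
model = BertForMaskedLM.from_pretrained('bert-base-uncased')

# corpus.txt is a hypothetical file with one sentence or document per line
dataset = LineByLineTextDataset(tokenizer=tokenizer, file_path='corpus.txt', block_size=128)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

args = TrainingArguments(output_dir='bert-post-trained', num_train_epochs=1, per_device_train_batch_size=8)
Trainer(model=model, args=args, data_collator=collator, train_dataset=dataset).train()

model.save_pretrained('bert-post-trained')   # embeddings can then be extracted from this checkpoint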

Fine tune doc2vec on gensim

I am new to NLU and I am doing a project on document embeddings. I want to fine-tune the doc2vec model in gensim on my small dataset to see if it helps with document clustering. I read the tutorial on the website, but it does not mention anything about fine-tuning. Where can I find doc2vec pretrained on Wikipedia, Twitter, or any other big dataset?
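For reference, a minimal sketch of training a gensim Doc2Vec model on a small corpus (rather than fine-tuning a pretrained one) might look like this; the toy documents and parameters are illustrative assumptions:

from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# toy corpus for illustration only
docs = ["the cat sat on the mat", "dogs chase cats", "clustering groups similar documents"]
tagged = [TaggedDocument(words=d.lower().split(), tags=[str(i)]) for i, d in enumerate(docs)]

model = Doc2Vec(vector_size=50, min_count=1, epochs=40)
model.build_vocab(tagged)
model.train(tagged, total_examples=model.corpus_count, epochs=model.epochs)

vector = model.infer_vector("a new unseen document".split())   # embedding usable for clustering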

Is Google word2vec pertrained model CBOW or skipgram

Is Google's pretrained word2vec model CBOW or skip-gram?
We load the pretrained model with:
import gensim.models.keyedvectors as word2vec
model = word2vec.KeyedVectors.load_word2vec_format('GoogleNews-vectors-negative300.bin.gz', binary=True)
How can we specifically load a pretrained CBOW or skip-gram model?
The GoogleNews word-vectors were trained by Google on a proprietary corpus, but Google never explicitly described all the training parameters used. (They're not encoded in the file.)
It's been asked a number of times on the Google Group devoted to the word2vec-toolkit code, without a definitive answer. For example, there's a response from word2vec author Mikolov saying that he doesn't remember the training parameters. Elsewhere, another poster thinks one of the word2vec papers implies skip-gram was used, but as that passage doesn't precisely match other aspects (like vocabulary size) of the released GoogleNews vectors, I wouldn't be completely confident of that.
As Google hasn't been clear, and in any case hasn't released alternate versions based on different training modes, if you want to run any tests or make any conclusions about the different modes, you'll have to use other vector-sets, or train your own vectors in varying ways.
Late to the party, but Mikolov describes the hyperparameters here. The Google News pretrained vectors were trained using CBOW. I believe that's the only option for you to load; there is no pretrained skip-gram version available.

How to create gensim word2vec model using pre trained word vectors?

I have created word vectors using a distributed word2vec algorithm. Now I have the words and their corresponding vectors. How do I build a gensim word2vec model using these words and vectors?
I am not sure if you created the word2vec model using gensim or some other tool, but if I understand your question correctly you just want to load the word2vec model using gensim. This is done as follows:
import gensim
# WORD2VEC_PATH is the path to your word-vector file on disk
model = gensim.models.KeyedVectors.load_word2vec_format(WORD2VEC_PATH, binary=True)  # binary=False if the vectors are stored in plain-text format
If, however, what you want to do is train a word2vec model from scratch (i.e. from raw text) using purely gensim, here is a tutorial on how to train a word2vec model using gensim.
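As a rough sketch of that from-scratch route (the toy sentences and parameters below are illustrative, and the keyword names assume gensim 4.x):

import gensim

# toy tokenized corpus for illustration
sentences = [["hello", "world"], ["machine", "learning", "with", "gensim"]]

model = gensim.models.Word2Vec(sentences, vector_size=100, window=5, min_count=1, sg=1)  # sg=1 -> skip-gram, sg=0 -> CBOW
model.wv.save_word2vec_format('my_vectors.txt', binary=False)  # can later be reloaded with load_word2vec_format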

Online learning of LDA model in Spark

Is there a way to train an LDA model in an online-learning fashion, i.e. loading a previously trained model and updating it with new documents?
Answering myself : it is not possible as of now.
Actually, Spark has two implementations for LDA model training, and one of them is OnlineLDAOptimizer. This approach is specifically designed to incrementally update the model with mini-batches of documents.
The optimizer implements the Online Variational Bayes LDA algorithm, which processes a subset of the corpus on each iteration and updates the term-topic distribution adaptively.
Original Online LDA paper: Hoffman, Blei and Bach, "Online Learning for Latent Dirichlet Allocation." NIPS, 2010.
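For illustration, selecting that online optimizer through the DataFrame-based pyspark.ml API looks roughly like this (the input DataFrame and the parameter values are assumptions):

from pyspark.ml.clustering import LDA

# `dataset` is assumed to be a DataFrame with a "features" column of term-count vectors
lda = LDA(k=10, maxIter=20, optimizer="online")   # online variational Bayes
model = lda.fit(dataset)
model.describeTopics().show()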
Unfortunately, the current mllib API does not allow loading a previously trained LDA model and adding a batch of documents to it.
Some mllib models support an initialModel as a starting point for incremental updates (see KMeans or GMM), but LDA does not currently support that. I filed a JIRA for it: SPARK-20082. Please upvote ;-)
For the record, there's also a JIRA for streaming LDA: SPARK-8696.
I don't think such a thing exists. LDA is a probabilistic parameter-estimation algorithm (a very simplified explanation of the process here: LDA explained), and adding even a few documents would change all previously computed probabilities, so the model would literally have to be recomputed.
I don't know about your use case, but you could consider updating by batch, if your model converges in a reasonable time, and discarding some of the oldest documents at each re-computation to make the estimation faster.
