I am pretty new to ML, so I am having some difficulty understanding how I could use Spark's machine learning libraries with time-series data that reflects a sequence of events.
I have a table that contains this info:
StepN#, element_id, Session_id
where StepN# is the position at which each element appears in the sequence, element_id is the element that was clicked, and session_id identifies the user session in which this happened.
The table contains multiple sessions, and each session consists of a sequence of elements, i.e. one session spans multiple rows. Every session also has the same starting and ending point.
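For example, a single session might look like this (the element ids are made up):

StepN#  element_id   session_id
1       home         42
2       search_box   42
3       result_link  42
4       checkout     42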
My objective is to train a model that uses the observed element sequences to predict the element that is most likely to be clicked next; that is, I need to predict the next event given the previous events.
(In other words, I need to average users' click behaviour for a specific workflow so that the model can predict the next most relevant click based on that average.)
From the papers and examples I have found online, I understand that this makes sense when there is a single sequence of events meant to be used as input for training the model.
In my case, though, I have multiple sessions/instances of events (all starting at the same point) and I would like to train an averaging model. I find it challenging to understand how that could be approached using, for example, an HMM in Spark. Is there a practical example or tutorial that covers this case?
Thank you for spending the time to read my post. Any ideas would be appreciated!
This can also be solved with frequent pattern mining. Check this: https://spark.apache.org/docs/latest/mllib-frequent-pattern-mining.html
In this setting, you can find items that frequently occur together. In the first step you teach the model what is frequent; then, at prediction time, the model sees some events and predicts the events that most commonly follow them.
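If you go down that road, Spark's PrefixSpan (sequential pattern mining, from the page linked above) is probably the closest fit, since it keeps the order of clicks. A minimal PySpark sketch, with made-up toy sessions:

from pyspark.sql import SparkSession
from pyspark.mllib.fpm import PrefixSpan

spark = SparkSession.builder.appName("click-sequences").getOrCreate()
sc = spark.sparkContext

# made-up toy data: each session is a sequence of single-element itemsets
sessions = sc.parallelize([
    [["start"], ["search"], ["results"], ["end"]],
    [["start"], ["search"], ["filter"], ["end"]],
    [["start"], ["search"], ["results"], ["end"]],
])

# find click sub-sequences that appear in at least 50% of sessions
model = PrefixSpan.train(sessions, minSupport=0.5, maxPatternLength=5)
for seq in model.freqSequences().collect():
    print(seq.sequence, seq.freq)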
I'm attempting to iteratively train a model on multiple texts supplied by myself. However, I keep running into an issue when I train the model more than once:
ValueError: You must specify either total_examples or total_words, for proper learning-rate and progress calculations. If you've just built the vocabulary using the same corpus, using the count cached in the model is sufficient: total_examples=model.corpus_count.
I'm currently initiating my model like this:
model = Word2Vec(sentences, min_count=0, workers=cpu_count())
model.build_vocab(sentences, update=False)
model.save('firstmodel.model')
model = Word2Vec.load('firstmodel.model')
and subsequently training it iteratively like this:
model.build_vocab(sentences, update = True)
model.train(sentences, totalexamples=model.corpus_count, epochs=model.epochs)
What am I missing here?
Somehow, it worked when I just trained one other model, so not sure why it doesn't work beyond two models...
First, the error message says you need to supply either the total_examples or total_words parameter to train() (so that it has an accurate estimate of the total training-corpus size).
Your code, as currently shown, only supplies totalexamples – a parameter name missing the necessary _. Correcting this typo should remedy the immediate error.
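Concretely, the corrected incremental call would look like this (a minimal sketch, reusing the sentences and model file name from your question):

from gensim.models import Word2Vec

model = Word2Vec.load('firstmodel.model')
model.build_vocab(sentences, update=True)
model.train(sentences,
            total_examples=model.corpus_count,  # note the underscore
            epochs=model.epochs)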
However, some other comments on your usage:
Repeatedly calling train() with different data is an expert technique that is highly subject to error or other problems. It's not the usual way of using Word2Vec, nor the way most published results were reached. You can't count on it always improving the model with new words; it might make the model worse, because new training sessions update some-but-not-all words and alter the (usual) property that the vocabulary has one consistent set of word frequencies from a single corpus. The best course is to call train() once, with all available data, so that the full vocabulary, word frequencies, and equally-trained word vectors are achieved in a single consistent session.
min_count=0 is almost always a bad idea with word2vec: words with few examples in the corpus should be discarded. Trying to learn word vectors for them not only gives weak vectors for those words, but also dilutes/distracts the model from achieving better vectors for the surrounding, more common words.
A workers count as high as your local cpu_count() only reliably helps up to about 4-12 workers, depending on other parameters and the efficiency of your corpus reading; beyond that, more workers can hurt, due to inefficiencies in the Python GIL and Gensim's corpus-to-worker handoffs. Finding the actual best count for your setup is, unfortunately, still a matter of trial and error. But if you've got 16 (or more) cores, your setting is almost sure to do worse than a lower workers number.
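Putting those points together, a single consistent training session might look roughly like this (the corpus and parameter values below are only placeholders):

from gensim.models import Word2Vec

# toy stand-in corpus; in practice this would be all of your data, combined up front
all_sentences = [["the", "cat", "sat"], ["the", "dog", "ran"], ["a", "cat", "ran"]] * 100

model = Word2Vec(
    all_sentences,
    min_count=5,   # discard rare words rather than min_count=0
    workers=4,     # more than roughly 4-12 workers rarely helps, and can hurt
)
model.save('fullmodel.model')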
I was running some tests training custom models with CRF. Since I don't have a proper training file, I would like to start by making my own list of 5 tags and maybe only 10 words, with the plan to keep improving the model with more incoming data in the future. But the results I get contain plenty of false positives (it tags many words which have nothing to do with the original ones in the training file), I imagine because the models created are probabilistic and take into consideration more than just the separate words.
Let's say I want to train CoreNLP to detect a small list of words without caring about the context. Are there some special settings for that? If not, is there a way to calculate how much data is needed to get an accurate model?
After some tests and research, I found that a really good option for my case is RegexNER, which works in a deterministic way and can also be combined with NER. So far I have tried it with a smaller set of rules and it does the job pretty well. The next step is to determine how scalable and usable it is in a high-traffic stress scenario (the one I'm interested in) and to compare it with other Python-based solutions.
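For anyone landing here later: as far as I understand it, RegexNER is driven by a tab-separated mapping file (each line is a token pattern, the NER class to assign, optionally the classes it may overwrite, and a priority), which you point CoreNLP at via the regexner.mapping property. The entries below are made up for illustration:

aspirin	DRUG	MISC	1.0
acetylsalicylic acid	DRUG	MISC	1.0
blood pressure monitor	DEVICE	MISC	1.0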
I'm investigating various NLP algorithms and tools to solve the following problem; NLP newbie here, so pardon my question if it's too basic.
Let's say I have a messaging app where users can send text messages to one or more people. When the user types a message, I want the app to suggest who the potential recipients of that message might be.
If user "A" sends a lot of text messages regarding "cats" to user "B" and some messages to user "C" and sends a lot of messages regarding "politics" to user "D", then next time user types the message about "cats" then the app should suggest "B" and "C" instead of "D".
So I'm doing some research on topic modeling and word embeddings and see that LDA and Word2Vec are the 2 probable algorithms I can use.
Wanted to pick your brain on which one you think is more suitable for this scenario.
One idea I have is to extract topics from the previous messages using LDA and rank the recipients based on the number of times a topic has been discussed with them (i.e. messages sent) in the past. If I have this mapping from each topic to a list of users sorted by how often you talk to them about it, then when the user types a message I can again run topic extraction on it, predict what the message is about, look up the mapping to see who the possible recipients are, and show them to the user.
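A rough sketch of what I have in mind, using gensim (the message history and recipients below are made up):

from collections import Counter, defaultdict
from gensim import corpora
from gensim.models import LdaModel

# made-up history of (recipient, tokenized message) pairs
history = [
    ("B", ["my", "cat", "sleeps", "all", "day"]),
    ("C", ["cats", "are", "great", "pets"]),
    ("D", ["the", "election", "results", "surprised", "me"]),
]

dictionary = corpora.Dictionary(tokens for _, tokens in history)
corpus = [dictionary.doc2bow(tokens) for _, tokens in history]
lda = LdaModel(corpus, id2word=dictionary, num_topics=2, passes=10)

# map each topic to the recipients who received messages about it, with counts
topic_recipients = defaultdict(Counter)
for (recipient, _), bow in zip(history, corpus):
    top_topic = max(lda.get_document_topics(bow), key=lambda t: t[1])[0]
    topic_recipients[top_topic][recipient] += 1

# at send time: infer the new message's topic and rank candidate recipients
new_bow = dictionary.doc2bow(["look", "at", "this", "cat"])
new_topic = max(lda.get_document_topics(new_bow), key=lambda t: t[1])[0]
print(topic_recipients[new_topic].most_common())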
Is this a good approach? Or is Word2Vec (or doc2vec or lda2vec) better suited for this problem, where we could find similar messages using vector representations of words, a.k.a. word embeddings? Do we really need to extract topics from the messages to predict the recipients, or is that not necessary here? Any other algorithms or techniques you think would work best?
What are your thoughts and suggestions?
Thanks for the help.
Since you are purely looking at topic extraction from previous messages, in my opinion LDA would be the better choice. LDA describes the statistical relationships of word co-occurrences; the semantics of the words are mostly ignored (if you are looking for that, you might want to rethink). I would also suggest having a look at a hybrid approach. I have not tried it myself, but it looks quite interesting:
lda2vec new hybrid approach
Also, if you happen to try it out, would love to know your findings.
I think you're looking for recommender systems (Netflix movie suggestions, Amazon purchase recommendations, etc.) or network analysis (Facebook friend recommendations), which can use topic modeling as an attribute. I'll try to break them down:
Network Analysis:
FB friends are nodes of a network whose edges are friendship relationships. Typical analyses calculate betweenness centrality, find shortest paths between nodes (often stored as lists of edges), and compute closeness centrality, which is based on the sum of shortest-path distances from a node to every other node.
Recommender Systems:
Recommends what is popular, or looks at similar users and suggests things the user might be interested in. Similarity is often measured with cosine similarity, i.e. the angle between two vectors; vectors pointing in the same direction are the most similar.
LDA:
A topic modeler for text data. It returns topics of interest and might be used as a nested component within the algorithms above.
Word2Vec:
Word2Vec itself is a neural network that learns a dense vector for each word; strictly speaking, it is not a required step for LDA. What LDA does need as a pre-processing step is a bag-of-words representation: map each word to an integer id (say 324) and count how often it occurs. For example, the sentence
This is a sentence is.
becomes [(1, 1), (2, 2), (3, 1), (4, 1)], where word id 2 ("is") appears twice.
You may still want word2vec embeddings as a separate pre-processing/feature step for the approaches above.
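A tiny illustration of that bag-of-words step in plain Python (word ids assigned in order of first appearance):

from collections import Counter

tokens = "this is a sentence is".split()
word_ids = {word: i for i, word in enumerate(dict.fromkeys(tokens), start=1)}
counts = Counter(tokens)

bow = [(word_ids[w], counts[w]) for w in word_ids]
print(bow)  # [(1, 1), (2, 2), (3, 1), (4, 1)]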
I hope this helps :)
I'm looking to use some tweets about measles / the MMR vaccine to see how sentiment about vaccination changes over time. I plan on creating the training set from the corpus of data I currently have (unless someone has a recommendation on where I can get similar data).
I would like to classify a tweet as either: Pro-vaccine, Anti-Vaccine, or Neither (these would be factual tweets about outbreaks).
So the question is: how big is big enough? I want to avoid overfitting (so I'll do a train/test split), but as I include more and more tweets, the number of features that need to be learned increases dramatically.
I was thinking 1000 tweets (333 of each). Any input is appreciated here, and if you could recommend some resources, that would be great too.
More is always better. 1000 tweets for a 3-way split seems quite ambitious; I would even consider 1000 per class quite low for a 3-way split on tweets. Label as many as you can within a feasible amount of time.
Also, it might be worth taking a cascaded approach (especially with so little data): first label a set as vaccine vs. non-vaccine, and within the vaccine subset have a pro vs. anti set.
In my experience, trying to model a catch-all "neutral" class that contains everything that is not explicitly "pro" or "anti" is quite difficult, because there is so much noise. Especially with simpler models such as Naive Bayes, I have found the cascaded approach to work quite well.
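A minimal sketch of that cascade with scikit-learn and Naive Bayes (the toy tweets and labels are invented):

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# invented toy data; in practice these would be your labelled tweets
all_tweets = ["vaccines save lives", "get your mmr shot", "vaccines cause harm",
              "measles outbreak reported", "nice weather today"]
is_vaccine = [1, 1, 1, 0, 0]

vaccine_tweets = ["vaccines save lives", "get your mmr shot", "vaccines cause harm"]
stance = ["pro", "pro", "anti"]

# stage 1: vaccine-related vs. everything else
stage1 = make_pipeline(CountVectorizer(), MultinomialNB()).fit(all_tweets, is_vaccine)
# stage 2: pro vs. anti, trained only on the vaccine-related subset
stage2 = make_pipeline(CountVectorizer(), MultinomialNB()).fit(vaccine_tweets, stance)

def classify(tweet):
    if stage1.predict([tweet])[0] == 0:
        return "neither"
    return stage2.predict([tweet])[0]

print(classify("new mmr vaccine study"))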
Okay, so usually topic models (such as LDA, pLSI, etc.) are used to infer topics that may be present in a set of documents, in an unsupervised fashion. I would like to know if anyone has any ideas as to how I can shoehorn my problem into an LDA framework, as there are very good tools available to solve LDA problems.
For the sake of being thorough, I have the following pieces of information as input:
A set of documents (segments of DNA from one organism, where each segment is a document)
A document can only have one topic in this scenario
A set of topics (segments of DNA from other organisms)
Words in this case are triplets of bases (for now)
The question I want to answer is: For the current document, what is its topic? In other words, for the given DNA segment, which other organism (same species) did it most likely come from? There could have been mutations and such since the exchange of segments occurred, so the two segments won't be identical.
The main difference between this and the classical LDA model is that I know the topics ahead of time.
My initial idea was to take a pLSA model (http://en.wikipedia.org/wiki/PLSA) and just set the topic nodes explicitly, then perform standard EM learning (if only there were a decent library that could handle Bayesian parameter learning with latent variables...), followed by inference using whatever algorithm (which shouldn't matter, because the model is a polytree anyway).
Edit: I think I've solved it, for anyone who might stumble across this. I figured out that you can use labelled LDA and just assign every label to every document. Since each label has a one-to-one correspondence with a topic, you're effectively saying to the algorithm: for each document, choose the topic from this given set of topics (the label set), instead of making up your own.
I have a similar problem, and just thought I'd add the solutions I'm going with for completeness's sake.
I also have a set of documents (PDF documents anywhere from 1 to 200 pages), though mine are regular English text data.
A set of known topics (mine include subtopics, but I won't address that here). Unlike the previous example, I may desire multiple topic labels.
Words (standard English, though named entities and acronyms are included in my corpus)
LDA-esque approach: Guided LDA
Guided LDA lets you seed words for your LDA categories. If you have n topics for your final decision, you just create your GuidedLDA model with n seed topics, each of which contains the keywords that make up its topic name. E.g.: I want to cluster into the known topics "biochemistry" and "physics", so I seed my GuidedLDA with d = {0: ['biochemistry'], 1: ['physics']}. You can incorporate other guiding words if you can identify them; the guidedLDA implementation I'm using (Python version) also makes it relatively easy to identify the top n words for a given topic. You can run GuidedLDA once with only the basic seed words, then use the top-n-words output to find more words to add to the topics. These top n words are also potentially helpful for the other approach I mention below.
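A minimal sketch of the seeding step with the guidedlda Python package (the documents and seed words are invented, and the API details are from memory, so double-check against the package's README):

import numpy as np
import guidedlda
from sklearn.feature_extraction.text import CountVectorizer

docs = ["protein folding and enzyme kinetics in the cell",
        "quantum mechanics and particle physics experiments",
        "dna replication and protein synthesis"]

vec = CountVectorizer()
X = vec.fit_transform(docs).toarray()
word2id = vec.vocabulary_

# one seed-word list per known topic
seed_topic_list = [["protein", "enzyme", "dna"],   # topic 0: biochemistry
                   ["quantum", "particle"]]        # topic 1: physics
seed_topics = {word2id[w]: t for t, words in enumerate(seed_topic_list) for w in words}

model = guidedlda.GuidedLDA(n_topics=2, n_iter=100, random_state=7, refresh=20)
model.fit(X, seed_topics=seed_topics, seed_confidence=0.15)

# top words per topic, handy for spotting additional seed words
vocab = np.array(vec.get_feature_names_out())
for t, dist in enumerate(model.topic_word_):
    print(t, vocab[np.argsort(dist)][-5:][::-1])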
Non-LDA-esque approach: ~KNN
What I've ended up doing is using a word embedding model (word2vec has been superior to alternatives for my case) to create a "topic vector" for every topic based on the words that make up the topic/subtopic. Eg: I have a category Biochemistry with a subcategory Molecular Biology. The most basic topic vector is just the word2vec vectors for Biochemistry, Molecular, and Biology all averaged together.
For every document I want to determine a topic for, I turn it into a "document vector" (same dimension and embedding model as my topic vectors). I've found that just averaging all the word2vec vectors in the doc has been the best solution for me so far, after a bit of preprocessing like removing stopwords. Then I just find the k closest topic vectors to the input document vector.
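Roughly, the scoring step looks like this (the pretrained-vectors file and the topic word lists below are placeholders):

import numpy as np
from gensim.models import KeyedVectors

kv = KeyedVectors.load("my_embeddings.kv")  # placeholder: your pretrained word2vec vectors

def avg_vector(words):
    vecs = [kv[w] for w in words if w in kv]
    return np.mean(vecs, axis=0)

# placeholder topic definitions: topic name -> words that make up its vector
topics = {
    "biochemistry": ["biochemistry"],
    "molecular biology": ["biochemistry", "molecular", "biology"],
    "physics": ["physics"],
}
topic_vecs = {name: avg_vector(words) for name, words in topics.items()}

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def k_closest_topics(doc_tokens, k=2):
    doc_vec = avg_vector(doc_tokens)  # doc_tokens: preprocessed, stopwords removed
    scores = {name: cosine(doc_vec, vec) for name, vec in topic_vecs.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]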
I should note that there's some ability to hand-tune this by changing the words that make up the topic vectors. One way to potentially identify further keywords is to use the GuidedLDA model I mentioned earlier.
I would note that when I was testing these two solutions on a different corpus with labeled data (which I didn't use aside from evaluating accuracy and such) this ~KNN approach proved better than the GuidedLDA approach.
Why not simply use a supervised topic model? Jonathan Chang's lda package in R has an slda function that is quite nice. There is also a very helpful demo; just install the package and run demo(slda).