How to find the similarity between sentences? - nlp

How to find the semantic similarity between any two given sentences?
Eg:
what movies did ron howard direct?
movies directed by ron howard.
I know it's a hard problem, but I would like to ask the views of experts.
I don't know how to use the Parts of Speech to achieve this.
http://nlp.stanford.edu:8080/parser/index.jsp

It's a broad problem. I would personally go for cosine similarity.
You need to convert your sentences into vectors. For converting a sentence into a vector you can consider several rules, like the number of occurrences, word order, synonyms, etc. Then take the cosine distance as mentioned here.
You can also explore Elasticsearch for finding associated words. You can create your custom analyzers, stemmers, tokenizers, filters (like synonyms), etc., which can be very helpful in finding similar sentences. Elasticsearch also provides a more_like_this query, which finds similar documents using tf-idf scores.
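As a starting point, here is a minimal sketch of the idea (bag-of-words counts only, no word order, synonyms, or stemming, so "direct" and "directed" will not match):

```python
import math
from collections import Counter

def cosine_similarity(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(count * b[word] for word, count in a.items())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b)

s1 = Counter("what movies did ron howard direct".split())
s2 = Counter("movies directed by ron howard".split())
print(round(cosine_similarity(s1, s2), 3))  # only "movies", "ron", "howard" overlap
```

Adding stemming (so "direct"/"directed" collapse to one token) or synonym expansion before counting would raise the score for this pair.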

Related

Using BERT and cosine similarity to identify similar documents

We have a news website where we have to match news articles to a particular user.
For the matching, we can use only the user's textual information, for example the user's interests or a brief description about them.
I was thinking of treating both the user's textual information and the news text as documents and computing document similarity.
In this way, I hope that if my profile contains a sentence like "I loved the speech of the president in Chicago last year" and a news article says "Trump is going to speak in Illinois", I can have a match (the example is purely casual).
I first tried to embed my documents using TF-IDF and then ran k-means to see if there was anything that made sense, but I didn't like the results much.
I think the problem derives from the poor embedding that TF-IDF gives me.
Thus I was thinking of using BERT embeddings to retrieve the embeddings of my documents and then using cosine similarity to check the similarity of two documents (a document about the user profile and a news article).
Is this an approach that could make sense? BERT can be used to retrieve the embedding of a sentence, but is there a way to embed an entire document?
What would you advise?
Thank you
BERT is trained on pairs of sentences, so it is unlikely to generalize to much longer texts. Also, BERT requires memory that is quadratic in the length of the text, so using very long texts might cause memory issues. In most implementations, it does not accept sequences longer than 512 subwords.
Making pre-trained Transformers work efficiently for long texts is an active research area; you can have a look at a paper called DocBERT to get an idea of what people are trying. But it will take some time until there is a nicely packaged working solution.
There are also other methods for document embedding, for instance Gensim implements doc2vec. However, I would still stick with TF-IDF.
TF-IDF is typically very sensitive to data pre-processing. You certainly need to remove stopwords, in many languages it also pays off to do lemmatization. Given the specific domain of your texts, you can also try expanding the standard list of stop words by words that appear frequently in news stories. You can get further improvements by detecting and keeping together named entities.
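A minimal from-scratch sketch of the TF-IDF plus cosine-similarity route (the stop-word list, documents, and raw-count tf are all toy assumptions here; in practice you would use something like sklearn's TfidfVectorizer plus a real stop-word list and lemmatizer):

```python
import math
from collections import Counter

# Toy stop-word list, invented for this sketch.
STOP_WORDS = {"i", "the", "in", "of", "a", "to", "will", "next", "last", "year"}

def tokenize(text):
    return [w for w in text.lower().split() if w not in STOP_WORDS]

def tfidf_vectors(docs):
    """Return one {word: tf * idf} dict per document."""
    tokenized = [tokenize(d) for d in docs]
    n = len(docs)
    df = Counter(w for toks in tokenized for w in set(toks))
    return [{w: tf * math.log(n / df[w]) for w, tf in Counter(toks).items()}
            for toks in tokenized]

def cosine(a, b):
    dot = sum(v * b.get(w, 0.0) for w, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "I loved the speech of the president in Chicago last year",  # user profile
    "The president will speak in Chicago next month",            # related news
    "New pizza restaurant opens downtown",                       # unrelated news
]
vectors = tfidf_vectors(docs)
print(cosine(vectors[0], vectors[1]), cosine(vectors[0], vectors[2]))
```

The profile shares "president" and "chicago" with the first news item, so that pair scores above zero, while the unrelated item scores exactly zero; this is the kind of lexical match TF-IDF can and cannot give you.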

Calculating the similarity between two vectors

I ran LDA over a corpus of documents with topic_number=5. As a result, I have five vectors of words, where each word is associated with a weight or degree of importance, like this:
Topic_A = {(word_A1,weight_A1), (word_A2, weight_A2), ... ,(word_Ak, weight_Ak)}
Topic_B = {(word_B1,weight_B1), (word_B2, weight_B2), ... ,(word_Bk, weight_Bk)}
.
.
Topic_E = {(word_E1,weight_E1), (word_E2, weight_E2), ... ,(word_Ek, weight_Ek)}
Some of the words are common between documents. Now I want to know how I can calculate the similarity between these vectors. I can calculate cosine similarity (and other similarity measures) by programming it from scratch, but I was thinking there might be an easier way to do it. Any help would be appreciated. Thank you in advance for spending time on this.
I am programming with Python 3.6 and the gensim library (but I am open to any other library).
I know someone else has asked a similar question (Cosine Similarity and LDA topics), but because he didn't get an answer, I am asking it again.
After LDA you have topics characterized as distributions over words. If you plan to compare these probability vectors (weight vectors if you prefer), you can simply use any cosine similarity implementation for Python, sklearn's for instance.
However, this approach will only tell you which topics assign, in general, similar probabilities to the same words.
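Concretely, the step that often trips people up is aligning the per-topic {word: weight} dicts over a shared vocabulary before taking the cosine; a sketch with invented toy weights (sklearn's cosine_similarity does the same once you have the aligned vectors):

```python
import math

# Invented toy topic weights for illustration.
topic_a = {"election": 0.30, "president": 0.25, "vote": 0.20, "party": 0.25}
topic_b = {"president": 0.35, "election": 0.30, "policy": 0.20, "senate": 0.15}

# Align both topics over the union vocabulary; missing words get weight 0.
vocab = sorted(set(topic_a) | set(topic_b))
vec_a = [topic_a.get(w, 0.0) for w in vocab]
vec_b = [topic_b.get(w, 0.0) for w in vocab]

def cosine(u, v):
    dot = sum(x * y for x, y in zip(u, v))
    return dot / (math.sqrt(sum(x * x for x in u)) * math.sqrt(sum(y * y for y in v)))

print(round(cosine(vec_a, vec_b), 3))
```

Only the shared words ("election", "president") contribute to the dot product, which is exactly the "similar probabilities on the same words" behaviour described above.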
If you want to measure similarities based on semantic information instead of word occurrences, you may want to use word vectors (as those learned by Word2Vec, GloVe or FastText).
These methods learn low-dimensional vectors that represent words and encode certain semantic information. They're easy to use in Gensim, and the typical approach is to load a pre-trained model, learned from Wikipedia articles or news.
If you have topics defined by words, you can represent those words as vectors and take the average of the cosine similarities between the words in two topics (we did this for a workshop). There are some sources using these word vectors (also called word embeddings) to represent topics or documents in some way. For instance, this one.
There are some recent publications combining Topic Models and Word Embeddings, you can look for them if you're interested.

Why are TF-IDF vocabulary words represented as axes/dimensions?

I want an intuitive way of understanding why each word in a TF-IDF vocabulary is represented as a separate dimension.
Why can't I just add the TF-IDF values of all the words together and use that sum as a representation of the document?
I have a basic understanding of why we do this. Apples =/= Oranges.
But apparently I don't know it well enough to convince someone else!
Ultimately all of NLP is arbitrary. If you wanted to add up the tf-idf values for all words in a phrase/sentence/document and found the resulting number useful for some task, you would be free to do so. But that number probably won't be very useful for most standard NLP tasks such as search, summarization, or sentiment analysis. It's hard to represent the meaning of a phrase/sentence/document with a single number.
By representing a phrase/sentence/document as a vector which has a separate row for each word in your vocabulary, you can leverage vector/matrix algebra to represent some standard operations you might want to do when solving NLP problems. For example, you could compute the cosine similarity between the vectors representing 2 documents and use that to judge how similar those 2 documents are.
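A tiny demonstration of why the single number fails where the vector succeeds (the tf-idf weights are invented for illustration): two documents with no words in common can have identical weight sums, yet their vectors are orthogonal, so cosine similarity correctly reports them as unrelated.

```python
# Invented tf-idf weights: two documents with disjoint vocabularies,
# deliberately chosen so their weight *sums* are equal.
doc_a = {"apples": 2.0, "fresh": 1.0}
doc_b = {"oranges": 1.0, "juicy": 2.0}

# The single-number representation cannot tell the documents apart:
same_sum = sum(doc_a.values()) == sum(doc_b.values())  # True

# The vector representation can: align both over a shared vocabulary...
vocab = sorted(set(doc_a) | set(doc_b))
va = [doc_a.get(w, 0.0) for w in vocab]
vb = [doc_b.get(w, 0.0) for w in vocab]

# ...and their dot product is 0, i.e. cosine similarity 0 (orthogonal).
dot = sum(x * y for x, y in zip(va, vb))
print(same_sum, dot)
```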
Something else you might be interested in: There is an NLP concept called word2vec which lets you represent every word as a different vector of numbers and then lets you add/subtract them to discover semantic relations between them.
For example, it might say
king - man + woman ≈ queen
You can read more about this at https://blog.acolyer.org/2016/04/21/the-amazing-power-of-word-vectors/

Gensim: What is difference between word2vec and doc2vec?

I'm kind of a newbie and not a native English speaker, so I have some trouble understanding Gensim's word2vec and doc2vec.
I think both give me the words most similar to a query word via most_similar() (after training).
How can I tell whether I should use word2vec or doc2vec?
Could someone explain the difference in a few words, please?
Thanks.
In word2vec, you train to find word vectors and then run similarity queries between words. In doc2vec, you tag your text, and you also get tag vectors. For instance, suppose you have documents from different authors and use the authors as tags on the documents. Then, after doc2vec training, you can use the same vector arithmetic to run similarity queries on author tags, i.e. who are the most similar authors to AUTHOR_X? If two authors generally use the same words, their vectors will be closer. AUTHOR_X is not a real word that is part of your corpus, just a tag you determine, so you don't need to have it in your text or insert it manually. Gensim allows you to train doc2vec with or without word vectors (i.e. if you only care about tag similarities).
Here is a good presentation on word2vec basics and how they use doc2vec in an innovative way for product recommendations (related blog post).
If you tell me what problem you are trying to solve, maybe I can suggest which method would be more appropriate.

How to find the most meaningful words in the text with using word2vec?

So, for instance, I type some sentence with some semantic meaning as input, and as output I get a list of the closest (in cosine distance) words (mostly single words).
But I want to understand which cluster my sentence belongs to, compute how far each word is from it, and eliminate non-meaningful words from the sentence.
For example:
"I want to buy a pizza";
"pizza": 0.99123
"buy": 0.7834
"want": 0.1443
How can such a requirement be achieved out of the box, without any C coding?
Maybe I need to compute the cosine distance for this?
Thank you!
It seems like you need topic modeling instead of word2vec. Word2vec is used to capture local information; it is not a good idea to use it directly to classify or cluster words or sentences.
Another aspect is stop-word removal, since you mention non-meaningful words. By the way, they are not meaningless; they are just not aligned with any topic, which is why you think of them as non-meaningful.
I believe you should use the LDA topic modeling approach, and you don't need to implement anything yourself since there are many implementations of LDA out there.

Resources