finding semantic similarity between 2 statements - python-3.x

I am currently working on a small application in Python that has search functionality (currently using difflib), but I want to build semantic search that can return the top 5 or 10 results from my database based on user-inputted text, similar to how the Google search engine works. I found some solutions here.
The problem is that those solutions focus on cases like the two statements below, which are semantically different, and I don't care about that distinction; it makes things harder than I need. I would also prefer the solution to be a pretrained neural network model or a library that I can use easily.
Pete and Rob have found a dog near the station.
Pete and Rob have never found a dog near the station
I also found some solutions that use gensim and GloVe embeddings, but they find similarity between words, not sentences.
What I want
Suppose my db has the statement display classes, and user inputs like show, showed, displayed, displayed class, show types, etc. should all be treated as the same. And if the two statements above are also treated as the same, I don't mind. displayed and displayed class already show up in my difflib results.
Points to note
Matching is against a fixed set of statements, but the user-inputted statement can differ
It must work for whole statements, not just single words

I think it is not a gensim embedding but a word2vec embedding. Whatever it is, what you need is tensorflow_hub.
The Universal Sentence Encoder encodes text into high-dimensional vectors that can be used for text classification, semantic similarity, clustering and other natural language tasks.
I believe what you need here is text classification or semantic similarity, because you want to find the nearest top 5 or 10 statements for a given user statement.
It is easy to use, but the size of the model is ≈ 1 GB. It works with words, sentences, phrases, or short paragraphs. The input is variable-length English text and the output is a 512-dimensional vector. You can find more information about it here.
Code
import tensorflow_hub as hub
import numpy as np
# Load model. It will download first time.
module_url = "https://tfhub.dev/google/universal-sentence-encoder-large/5"
model = hub.load(module_url)
# data[0] is your actual statement; the rest are the user inputs to compare against
data = ["display classes", "show", "showed", "displayed class", "show types"]
# encode into 512-dimensional vectors
vecs = model(data)
# similarity between the first statement and all statements (inner product)
dists = np.inner(vecs[0], vecs)
# print the scores
print(dists)
Output
array([0.9999999 , 0.5633253 , 0.46475542, 0.85303843, 0.61701006],dtype=float32)
Conclusion
The first value, 0.9999999, is the similarity between display classes and itself; the second, 0.5633253, is the similarity between display classes and show; and the last, 0.61701006, is the similarity between display classes and show types.
Using this, you can compute the similarity between the given input and each statement in your db, then rank the statements by that score (higher means more similar).
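For example, here is a minimal sketch of that ranking step, building on the code above (db_statements and user_input are hypothetical placeholders for your own data):
import tensorflow_hub as hub
import numpy as np

model = hub.load("https://tfhub.dev/google/universal-sentence-encoder-large/5")

# Hypothetical data: statements stored in the db plus one user query.
db_statements = ["display classes", "list users", "delete record", "show types"]
user_input = "show class"

# Encode the query and the db statements in one batch.
vecs = model([user_input] + db_statements)
scores = np.inner(vecs[0], vecs[1:])

# Rank db statements by similarity, highest first, and keep the top 3.
for idx in np.argsort(-scores)[:3]:
    print(db_statements[idx], float(scores[idx]))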

You can use WordNet to find synonyms and then use those synonyms to find similar statements.
import nltk
from nltk.corpus import wordnet as wn

nltk.download('wordnet')

def get_syn_list(gword):
    # Collect synonyms of a word across nouns, verbs, adjectives and adverbs.
    syn_list = []
    try:
        syn_list.extend(wn.synsets(gword, pos=wn.NOUN))
        syn_list.extend(wn.synsets(gword, pos=wn.VERB))
        syn_list.extend(wn.synsets(gword, pos=wn.ADJ))
        syn_list.extend(wn.synsets(gword, pos=wn.ADV))
    except:
        print("Something Wrong Happened")
    # Keep the first lemma name of each synset.
    syn_words = []
    for i in syn_list:
        syn_words.append(i.lemmas()[0].name())
    return syn_words
Now split each statement in your db into words, like this:
stat = ["display classes"]
syn_dict = {}
for i in stat:
tmp = []
for x in i.split(" "):
tmp.extend(get_syn_list(x))
syn_dict[i] = set(tmp)
Now that you have the synonyms, just compare them with the user-inputted text. Use a lemmatizer before comparing words so that displayed becomes display; a sketch of that comparison follows below.
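Here is a minimal sketch of that comparison, assuming the syn_dict built above and NLTK's WordNetLemmatizer (statement_matches is a hypothetical helper name):
import nltk
from nltk.stem import WordNetLemmatizer

nltk.download('omw-1.4')  # extra lemma data needed by recent NLTK versions

lemmatizer = WordNetLemmatizer()

def statement_matches(user_text, statement, syn_dict):
    # Lemmatize user words as verbs so that "displayed" becomes "display".
    user_words = {lemmatizer.lemmatize(w, pos='v') for w in user_text.lower().split()}
    db_synonyms = {w.lower() for w in syn_dict[statement]}
    return bool(user_words & db_synonyms)

# Should print True if "display" is among the collected synonyms.
print(statement_matches("displayed class", "display classes", syn_dict))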

You can use spaCy.
This answer is adapted from https://medium.com/better-programming/the-beginners-guide-to-similarity-matching-using-spacy-782fc2922f7c
import spacy
nlp = spacy.load("en_core_web_lg")
doc1 = nlp("display classes")
doc2 = nlp("show types")
print(doc1.similarity(doc2))
Output
0.6277548513279427
Edit
Run the following command, which will download the model.
!python -m spacy download en_core_web_lg

Related

How to determine which topic each test tweet is addressing?

I implemented LDA using Python and modeled topics for a set of tweets. I am trying to map each tweet to a topic to see which topic the tweet belongs to, but I couldn't find any help online.
I found that this can be done for NMF, but I couldn't find any functions or specific options in Python for this particular case. I'm using gensim to generate the topics with LDA.
Using your trained model to get the most-associated topics for any particular text is covered in the Gensim docs for the LdaModel class, under 'Usage Examples' - https://radimrehurek.com/gensim/models/ldamodel.html#usage-examples - but the treatment there may be a little unintuitive, because:
it calls any text you might be analyzing an 'unseen' document, but the same process works for documents that were part of the training corpus; and
like many other Gensim classes, the analysis is done via Python's indexed-lookup idiom: bracketed [ ] access (the shortcut for calling .__getitem__()), using the right representation of the text as if it were a lookup key, even though in truth it is an argument to the model's analysis rather than a key that looks up some stored response.
So this part of its examples is what you need to follow:
Query, the model using new, unseen documents
>>> # Create a new corpus, made of previously unseen documents.
>>> other_texts = [
... ['computer', 'time', 'graph'],
... ['survey', 'response', 'eps'],
... ['human', 'system', 'computer']
... ]
>>> other_corpus = [common_dictionary.doc2bow(text) for text in other_texts]
>>>
>>> unseen_doc = other_corpus[0]
>>> vector = lda[unseen_doc] # get topic probability distribution for a document
Just remember:
the other_texts & unseen_document in this example can also be repeats from the training corpus, to ask the trained model what it thinks those doc's toppics should be; and
you need to convert any text to a bag-of-words representation using the exact same dictionary as was used for training documents, so that word-indexes and frequency-weightings are proper for the bag-of-words representations used for lookups.
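For example, a minimal sketch of mapping each document to its single most probable topic, assuming the lda model and other_corpus bag-of-words list from the docs example above:
# Assumes `lda` is the trained LdaModel and `other_corpus` holds the
# bag-of-words documents, as in the Gensim usage example above.
for i, bow in enumerate(other_corpus):
    topic_probs = lda[bow]  # list of (topic_id, probability) pairs
    best_topic, best_prob = max(topic_probs, key=lambda pair: pair[1])
    print(f"doc {i}: topic {best_topic} (p={best_prob:.3f})")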

What is nlp in spacy?

Usually we start from:
nlp = spacy.load('en_core_web_sm') # or medium, or large
or
nlp = English()
then:
doc = nlp('my text')
Then we can do a lot of fun things with that, even without knowing the nature of the first line.
But what exactly is nlp? What is going on under the hood? Is nlp a pretrained model, as understood in machine learning, and therefore some big file located somewhere on the disk?
I came across an explanation that nlp is an 'object containing a processing pipeline', but that only explains a little.
You can always check the type of any Python object:
import spacy

nlp = spacy.load('en_core_web_sm') # or medium, or large
print(type(nlp))
print(dir(nlp)) # view a list of attributes
You will get something like this (depending on the passed arguments)
<class 'spacy.lang.en.English'>
You are right, it is something like a 'pretrained' model, as it contains the vocabulary, binary weights, etc.
Please check the official documentation:
https://spacy.io/api/language
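For instance, a minimal sketch of inspecting the pipeline the loaded object carries (assuming en_core_web_sm is installed; the components listed in the comment are typical but depend on the model version):
import spacy

nlp = spacy.load('en_core_web_sm')

# The Language object wraps a shared vocabulary plus an ordered processing pipeline.
print(nlp.pipe_names)  # e.g. ['tok2vec', 'tagger', 'parser', 'attribute_ruler', 'lemmatizer', 'ner']
print(nlp.vocab)       # shared Vocab (strings, lexemes, vectors)
print(nlp.meta['name'], nlp.meta['version'])  # metadata of the packaged model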
You could infer what nlp() is by exploring it. For example:
import spacy
from spacy import displacy
nlp = spacy.load("en_core_web_lg")
text = "Elon Musk 889-888-8888 elonpie#tessa.net Jeff Bezos (345)123-1234 bezzi#zonbi.com Reshma Saujani example.email#email.com 888-888-8888 Barkevious Mingo"
text = nlp(text)
print(text)
This will print the exact same text. On the other hand, if you do:
for word in text.ents:
    print(word.text, word.label_)
you will get the entities of the string:
Elon Musk PERSON
889-888 CARDINAL
Jeff Bezos PERSON
345)123 CARDINAL
Reshma Saujani PERSON
It is indeed a large pre-trained model for the English language with many components (parser, lemmatizer, tagger, and the named-entity recognizer demonstrated above). Hope this helps clarify your question.

SBERT's Interpretability: can we better understand the final results?

I'm familiar with SBERT and its pre-trained models, and they are amazing! But at the same time, I want to understand how the results are calculated, and I can't find anything more specific on their website.
For example, I have a document and I want to find other documents that are similar to it. I used 2 documents containing 200-250 words each (I changed model.max_seq_length to 350 so the model can handle bigger texts), and in the end we can see that the cosine similarity is 0.79. Is that all we can see? Is there a way to extract the main phrases/keywords that made the model return this high similarity value?
Thanks in advance!
Have you tried a simple word-count comparison between the two documents and other random documents? Or tf-idf, if the two documents are part of a bigger corpus?
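A minimal sketch of that tf-idf baseline, assuming scikit-learn and a hypothetical list of documents:
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical corpus: the two documents of interest plus some others.
docs = [
    "text of the first document ...",
    "text of the second document ...",
    "text of a random comparison document ...",
]

tfidf = TfidfVectorizer(stop_words='english').fit_transform(docs)

# Cosine similarity of the first document against all documents.
print(cosine_similarity(tfidf[0], tfidf).round(2))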
Another thing you can do is look inside the stored_embeddings matrix (see the code below and here) in which SBERT encodes your sentences (e.g. for 20,000 documents you'll get a 20,000 x 384 matrix), after having saved it into a pickle file like:
from sentence_transformers import SentenceTransformer
import pickle

# `sentences` is your list of documents; the model name below is just one example of a 384-dimensional SBERT model.
model = SentenceTransformer('all-MiniLM-L6-v2')
embeddings = model.encode(sentences)

with open('embeddings.pkl', "wb") as fOut:
    pickle.dump({'sentences': sentences, 'embeddings': embeddings}, fOut, protocol=pickle.HIGHEST_PROTOCOL)

with open('embeddings.pkl', "rb") as fIn:
    stored_data = pickle.load(fIn)
    stored_embeddings = stored_data['embeddings']
The stored_embeddings variable can be handled as a NumPy matrix and can therefore be indexed to access single elements. You can go through the 384 individual dimensions column by column (for a big matrix I suggest you don't enumerate() over everything, it will take forever) and compare the values the two documents take in one particular dimension. You can see which dimensions have the highest values or variance, for example.
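For example, a minimal sketch of that per-dimension comparison, assuming the stored_embeddings matrix from the code above with the two documents of interest in rows 0 and 1:
import numpy as np

# Assumes stored_embeddings is the (n_docs, 384) matrix loaded above.
doc_a, doc_b = stored_embeddings[0], stored_embeddings[1]

# Per-dimension contribution to the dot product (and hence to the similarity score).
contrib = doc_a * doc_b

# The ten dimensions that contribute most to the similarity.
top_dims = np.argsort(-contrib)[:10]
print(top_dims, contrib[top_dims])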
I'm not saying that what you find will be interpretable, but at least you can try and see what you get.

Translating text from english to Italian using hugging face Helsinki models not fully translating

I'm a newbie going through the Hugging Face library, trying out the translation models for a data-entry task and translating text from English to Italian.
The code I tried, based on the documentation:
from transformers import MarianTokenizer, MarianMTModel
from typing import List
#src = 'en' # source language
#trg = 'it' # target language
#saved the model locally.
#model_name = f'Helsinki-NLP/opus-mt-{src}-{trg}'
#model.save_pretrained("./model_en_to_it")
#tokenizer.save_pretrained("./tokenizer_en_to_it")
model = MarianMTModel.from_pretrained('./model_en_to_it')
tokenizer = MarianTokenizer.from_pretrained('./tokenizer_en_to_it')
# Next, iterate over the 'english_text' column of the dataset,
# translate each text from English to Italian, and append the
# translated text to the list 'italian'.
italian = []
for i in range(len(dataset['english_text'])):
    batch = tokenizer(dataset['english_text'][i],
                      return_tensors="pt", truncation=True,
                      padding=True)
    gen = model.generate(**batch)
    italian.append(tokenizer.batch_decode(gen, skip_special_tokens=True))
Two concerns here:
It translates and appends only part of the text, i.e. it truncates the paragraph if it exceeds a certain length. How can I translate text of any length?
I have about 10k rows of data and it is taking a very long time.
Even if only one of these problems can be solved, that would be helpful. I would love to learn.
Virtually all current MT systems are trained on single sentences, not paragraphs. If your input text is in paragraphs, you need to do sentence splitting first. Any NLP library will do (e.g., NLTK, spaCy, Stanza). Having multiple sentences in a single input will lead to worse translation quality (because this is not what the model was trained for). Moreover, the complexity of the Transformer model is quadratic with respect to the input length (this does not fully hold when everything is parallelized on a GPU), so it gets very slow with very long inputs.
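A minimal sketch of that preprocessing step, assuming NLTK's sentence tokenizer and the tokenizer/model already loaded in the question:
import nltk
from nltk.tokenize import sent_tokenize

nltk.download('punkt')

paragraph = "First sentence of the paragraph. Second sentence. Third one."

# Split the paragraph into sentences and translate them as one batch.
sentences = sent_tokenize(paragraph)
batch = tokenizer(sentences, return_tensors="pt", truncation=True, padding=True)
gen = model.generate(**batch)

# Re-join the translated sentences into one paragraph.
translation = " ".join(tokenizer.batch_decode(gen, skip_special_tokens=True))
print(translation)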

How to find word that are similar to two keywords using FastText?

I am trying to find words that are similar to two different words. I know that I can find the most similar word with FastText, but I was wondering if there is a way to find a keyword that is similar to two keywords. For example, "apple" is similar to "orange" and also similar to "kiwi". So, what I want to do is: if I have the two words "orange" and "kiwi", I would like to get a suggestion of the keyword "apple" or any other fruit. Is there a way to do this?
I don't think there is an out-of-the-box function for this.
In any case, you can think about this simple approach:
Load a pretrained embedding (available here)
Get a decent number of nearest neighbors for each word of interest
Search for intersections between the nearest neighbors of the two words
A small note: this is a crude approach. If necessary, more sophisticated operations can be performed using the cosine similarities.
Code example:
import fasttext

# load the pretrained model
# (in this example I use the Italian model)
model = fasttext.load_model('./ml_models/cc.it.300.bin')

# get the nearest neighbors of the words of interest (100 neighbors)
arancia_nn = model.get_nearest_neighbors('arancia', k=100)
kiwi_nn = model.get_nearest_neighbors('kiwi', k=100)

# keep only the word sets (discard the cosine similarities)
arancia_nn_words = set([el[1] for el in arancia_nn])
kiwi_nn_words = set([el[1] for el in kiwi_nn])

# compute the intersection
common_similar_words = arancia_nn_words.intersection(kiwi_nn_words)
Example output (in Italian):
{'agrume',
'agrumi',
'ananas',
'arance',
'arancie',
'arancio',
'avocado',
'banana',
'ciliegia',
'fragola',
'frutta',
'lime',
'limone',
'limoni',
'mandarino',
'mela',
'mele',
'melograno',
'melone',
'papaia',
'papaya',
'pera',
'pompelmi',
'pompelmo',
'renetta',
'succo'}
I have used the Gensim W2V implementation for such computations for years now, but Gensim also has a FastText implementation: https://radimrehurek.com/gensim/models/fasttext.html
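As a sketch of that Gensim route (the model path below is a hypothetical placeholder for a Facebook-format .bin file), most_similar accepts several positive words at once, which matches the "similar to two keywords" requirement directly:
from gensim.models.fasttext import load_facebook_vectors

# Hypothetical path to a pretrained FastText model in Facebook .bin format.
wv = load_facebook_vectors('./ml_models/cc.en.300.bin')

# Words closest to the mean of the two keyword vectors.
print(wv.most_similar(positive=['orange', 'kiwi'], topn=10))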
