Finding the Start and End char indices in spaCy - python-3.x

I am training a custom model in spaCy to extract custom entities. Since I need to provide training data that consists of my entities along with their index locations, I wanted to understand if there's a faster way to assign the index values for the keywords I am looking for in a particular sentence of my training data.
An example of my training data:
TRAIN_DATA = [
    ('Behaviour Skills include Communication, Conflict Resolution, Work Life Balance',
     {'entities': [(25, 37, 'BS'), (40, 60, 'BS'), (62, 79, 'BS')]})
]
Now, to pass the index locations for specific keywords in my training data, I am counting them manually.
For example, in the first line, which says "Behaviour Skills include Communication ...", I manually calculate the index location of the word "Communication", which is 25, 37.
I am sure there must be a way to identify these indices other than counting them manually. Any ideas how I can achieve this?

Using str.find() can help here. However, you have to loop through both the sentences and the keywords:
keywords = ['Communication', 'Conflict Resolution', 'Work Life Balance']
texts = ['Behaviour Skills include Communication, Conflict Resolution, Work Life Balance',
         'Some sentence where lower case conflict resolution is included']
LABEL = 'BS'

TRAIN_DATA = []
for text in texts:
    entities = []
    t_low = text.lower()
    for keyword in keywords:
        k_low = keyword.lower()
        begin = t_low.find(k_low)  # index of the substring if found, -1 otherwise
        if begin != -1:
            end = begin + len(keyword)
            entities.append((begin, end, LABEL))
    TRAIN_DATA.append((text, {'entities': entities}))
Output:
[('Behaviour Skills include Communication, Conflict Resolution, Work Life Balance',
{'entities': [(25, 38, 'BS'), (40, 59, 'BS'), (61, 78, 'BS')]}),
('Some sentence where lower case conflict resolution is included',
{'entities': [(31, 50, 'BS')]})]
I added str.lower() just in case you might need it.
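Note that str.find() only returns the first match, so a keyword that appears more than once in a sentence is only annotated once. A minimal sketch of a variant using re.finditer (not part of the original answer) that collects every occurrence:
import re

def find_all_entities(text, keywords, label):
    # Collect (start, end, label) for every occurrence of every keyword, case-insensitively
    entities = []
    for keyword in keywords:
        for match in re.finditer(re.escape(keyword), text, flags=re.IGNORECASE):
            entities.append((match.start(), match.end(), label))
    return entities

find_all_entities('Communication skills and communication tools', ['Communication'], 'BS')
# => [(0, 13, 'BS'), (25, 38, 'BS')]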

Is it possible to get classes on the WordNet dataset?

I am playing with WordNet and trying to solve an NLP task.
I was wondering if there is any way to get a list of words belonging to some large set, such as "animals" (i.e. dog, cat, cow, etc.), "countries", "electronics", etc.
I believe that it should be possible to somehow get this list by exploiting hypernyms.
Bonus question: do you know of any other way to classify words into very large classes, besides "noun", "adjective" and "verb"? For example, classes like "prepositions", "conjunctions", etc.
Yes, you just check if the category is a hypernym of the given word.
from nltk.corpus import wordnet as wn

def has_hypernym(word, category):
    # Assume the category always uses the most popular sense
    cat_syn = wn.synsets(category)[0]
    # For the input, check all senses
    for syn in wn.synsets(word):
        for match in syn.lowest_common_hypernyms(cat_syn):
            if match == cat_syn:
                return True
    return False

has_hypernym('dog', 'animal')     # => True
has_hypernym('bucket', 'animal')  # => False
If the broader word (the "category" here) is the lowest common hypernym, that means it's a direct hypernym of the query word, so the query word is in the category.
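Going the other direction (the list of category members the question asks for), you could enumerate the hyponyms of the category synset. A rough sketch, assuming NLTK's Synset.closure() and lemma_names():
# Collect every lemma under a category synset by walking its hyponym tree
animal = wn.synset('animal.n.01')
animal_words = {lemma
                for syn in animal.closure(lambda s: s.hyponyms())
                for lemma in syn.lemma_names()}
'dog' in animal_words     # => True
'bucket' in animal_words  # => False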
Regarding your bonus question, I have no idea what you mean. Maybe you should look at NER or open a new question.
With some help from polm23, I found this solution, which exploits similarity between words and prevents wrong results when the class name is ambiguous.
The idea is that WordNet can be used to compare a list of words with the string animal and compute a similarity score. From the nltk.org documentation:
Wu-Palmer Similarity: Return a score denoting how similar two word senses are, based on the depth of the two senses in the taxonomy and that of their Least Common Subsumer (most specific ancestor node).
def keep_similar(words, similarity_thr):
    # Keep only the words whose Wu-Palmer similarity to 'animal' exceeds the threshold
    w2 = wn.synset('animal.n.01')
    return [word for word in words
            if wn.synset(word + '.n.01').wup_similarity(w2) > similarity_thr]
For example, if word_list = ['dog', 'car', 'train', 'dinosaur', 'London', 'cheese', 'radon'], the corresponding scores are:
dog: 0.875
car: 0.4444444444444444
train: 0.5
dinosaur: 0.7
London: 0.3333333333333333
cheese: 0.3076923076923077
radon: 0.3076923076923077
This can easily be used to generate a list of animals by setting a proper value of similarity_thr.
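For instance, using the scores above, a threshold of 0.6 would keep only 'dog' and 'dinosaur' (the threshold value here is just an illustration):
word_list = ['dog', 'car', 'train', 'dinosaur', 'London', 'cheese', 'radon']
keep_similar(word_list, 0.6)  # => ['dog', 'dinosaur']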

NLP: Error/unknown/misspelled text detection model for patient medical text files

I have a few patient medical record text files which I got from the internet, and I want to identify the files that are of bad quality (misspelled words, special characters between words, erroneous words) versus files of good quality (clean text). I want to build an error detection model using text mining/NLP.
1) Can someone please help me with the approach and solution for feature extraction and model selection?
2) Is there any medical corpus of medical records to identify the misspelled/erroneous words?
If your goal is to simply correct these misspelled words to improve performance on whatever downstream task you want to do, then I can suggest a simple approach which has worked sufficiently well for me.
First, tokenize your text (I recommend scispacy for medical text).
Identify possible "bad quality" words simply by counting each unique word in your corpus, e.g. all words that occur <= 3 times.
Add the words that occur > 3 times in your corpus (we assume these are all correctly spelled) to a regular English dictionary. If your corpus is large, this is perfectly adequate for capturing medical terms. Otherwise use a medical dictionary, e.g. UMLS, or https://github.com/glutanimate/wordlist-medicalterms-en, to add the medical words not in a regular dictionary.
Use pyspellchecker to identify the misspellings, using the Levenshtein distance algorithm to compare against our dictionary.
Replace the typos with what pyspellchecker thinks they should be.
A basic example:
import spacy
import scispacy
from collections import Counter
from spellchecker import SpellChecker

nlp = spacy.load('en_core_sci_md')  # sciSpaCy

# corpus is assumed to be an iterable of document strings
word_freq = Counter()
for doc in corpus:
    tokens = nlp.tokenizer(doc)
    tokenised_text = ""
    for token in tokens:
        tokenised_text = tokenised_text + token.text + " "
    word_freq.update(tokenised_text.split())

infreq_words = [word for word in word_freq.keys()
                if word_freq[word] <= 3 and not word[0].isdigit()]
freq_words = [word for word in word_freq.keys() if word_freq[word] > 3]

# write the frequent (assumed correct) words out so they can be loaded as a dictionary
add_to_dictionary = " ".join(freq_words)
with open("medical_dict.txt", "w+") as f:
    f.write(add_to_dictionary)

spell = SpellChecker()
spell.distance = 1  # set the distance parameter to just 1 edit away - much quicker
spell.word_frequency.load_text_file('medical_dict.txt')

misspelled = spell.unknown(infreq_words)

misspell_dict = {}
for word in misspelled:
    if word != spell.correction(word):
        misspell_dict[word] = spell.correction(word)
print(list(misspell_dict.items())[:10])
I would also recommend using regular expressions to fix any other "bad quality" words which can be systematically corrected.
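As a hedged illustration of that last point (not from the original answer), a couple of regex substitutions that could systematically clean such artefacts, assuming the noise looks like special characters glued between words or repeated punctuation:
import re

def clean_text(text):
    # Replace special characters sandwiched between letters with a space, e.g. "blood#pressure"
    text = re.sub(r'(?<=[A-Za-z])[#@*_|]+(?=[A-Za-z])', ' ', text)
    # Collapse runs of repeated punctuation, e.g. "!!!" -> "!"
    text = re.sub(r'([!?.,])\1+', r'\1', text)
    return text

clean_text('Patient has high blood#pressure!!!')  # => 'Patient has high blood pressure!'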
You can use BioBERT to do a contextual spelling check.
Link: https://github.com/dmis-lab/biobert

Custom NER in spaCy throwing IndexError: list index out of range

I am trying to create a custom NER using spaCy but, while training, I am getting the following error:
gold = GoldParse(doc, entities=entity_offsets)
File "gold.pyx", line 565, in spacy.gold.GoldParse.__init__
IndexError: list index out of range
Any idea as to how I can fix this?
The most common resolution that came up after some Googling was to trim leading and trailing whitespace in the training data. So I used the code below to trim it off, but it was of no use.
import re

invalid_span_tokens = re.compile(r'\s')
cleaned_data = []
for text, annotations in data:
    entities = annotations['entities']
    valid_entities = []
    for start, end, label in entities:
        valid_start = start
        valid_end = end
        # advance the start index past any leading whitespace
        while valid_start < len(text) and invalid_span_tokens.match(text[valid_start]):
            valid_start += 1
        # pull the end index back past any trailing whitespace
        while valid_end > 1 and invalid_span_tokens.match(text[valid_end - 1]):
            valid_end -= 1
        valid_entities.append([valid_start, valid_end, label])
    cleaned_data.append([text, {'entities': valid_entities}])
Ah, so words is a keyword argument on the GoldParse object. It lets you specify the gold-standard tokenization if it doesn't match spaCy's tokenization. Assuming your input looks like this:
text = 'helloworld'
words = ['hello', 'world']
tags = ['INTJ', 'NOUN']
You can do the following:
from spacy.tokens import Doc
from spacy.gold import GoldParse

doc = Doc(nlp.vocab, words=words)  # build the Doc from your own tokens
gold = GoldParse(doc, words=words, tags=tags)
nlp.update([doc], [gold])
Alternatively, you can also use the new "simple training style" and just pass in the text as a string, and the annotations as a dictionary:
nlp.update([text], [{'words': words, 'tags': tags}])
In general, we'd recommend using the simple style, as it removes one layer of abstraction, and lets you get rid of the Doc and GoldParse imports. But ultimately, the style you choose depends on your personal preference.
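For the NER case in the question, the same simple style would look roughly like this (a sketch assuming spaCy v2's dict-based annotation format; the text and offsets here are only for illustration):
text = 'Behaviour Skills include Communication'
annotations = {'entities': [(25, 38, 'BS')]}  # (start_char, end_char, label)
nlp.update([text], [annotations])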

Some divergence issues with Word2Vec in Gensim when using high alpha values

I am implementing word2vec in gensim on a corpus of nested lists (a collection of tokenized words in sentence form) with 408226 sentences (lists) and a total of 3150546 words or tokens.
I am getting meaningful results (in terms of the similarity between two words using model.wv.similarity) with the chosen values of size = 200, window = 15, min_count = 5, iter = 10 and alpha = 0.5. All words are lemmatized, and the resulting model has a vocabulary of 32716.
The results obtained with the default alpha, size and window are meaningless for my data when computing the similarity values. However, a higher alpha of 0.5 gives me meaningful similarity scores between two words. Yet when I compute the top-n similar words, the results are again meaningless. Do I need to change all the parameters used in the initial training process?
I am still unable to figure out why the model behaves well with such a high alpha value when computing the similarity between two words of the corpus, yet is meaningless when computing the top-n similar words (with scores) for an input word. Why is this the case?
Is it diverging from the optimal solution? How can I check this?
Any idea why this is the case is deeply appreciated.
Note: I'm using Python 3.7 on a Windows machine with the Anaconda prompt and giving input to the model from a file.
This is what I have tried.
import warnings
warnings.filterwarnings(action='ignore', category=UserWarning, module='gensim')
from gensim.models import Word2Vec
import ast

path = "F:/Folder/"

def load_data():
    global Sentences
    Sentences = []
    for file in ['data_d1.txt', 'data_d2.txt']:
        with open(path + file, 'r', encoding='utf-8') as f1:
            Sentences.extend(ast.literal_eval(*f1.readlines()))

load_data()

def initialize_word_embedding():
    model = Word2Vec(Sentences, size=200, window=15, min_count=5, iter=10, workers=4)
    print(model)
    print(len(model.wv.vocab))
    print(model.wv.similarity(w1='structure', w2='_structure_'))
    similarities = model.wv.most_similar('system')
    for word, score in similarities:
        print(word, score)

initialize_word_embedding()
An example of the Sentences list is as follows:
[['scientist', 'time', 'comet', 'activity', 'sublimation', 'carbon', 'dioxide', 'nears', 'ice', 'system'], ['inconsistent', 'age', 'system', 'year', 'size', 'collision'], ['intelligence', 'system'], ['example', 'application', 'filter', 'image', 'motion', 'channel', 'estimation', 'equalization', 'example', 'application', 'filter', 'system']]
data_d1.txt and data_d2.txt each contain a nested list (a list of lists of lemmatized, tokenized words). I have preprocessed the raw data, saved it to a file, and am now feeding it in as input. For lemmatizing the tokens, I have used the popular WordNet lemmatizer.
I need the word-embedding model to calculate the similarity between two words and to compute the most similar words of a given input word. I am getting meaningful scores from the model.wv.similarity() method, whereas when calculating the most_similar() words of a word (say, system as shown above), I am not getting the desired results.
I am guessing the model is diverging from the global minimum because of the high alpha value.
I am confused about what the dimension size and window should be to induce meaningful results, as there are no rules for how to choose the size and window.
Any suggestion is appreciated. The total number of sentences and words is specified above in the question.
The results I am getting without setting alpha = 0.5 are shown below.
Edit in response to a recent comment:
Results:
Word2Vec(vocab=32716, size=200, alpha=0.025)
The similarity between set and _set_ is: 0.000269373188960656
This is meaningless for me as it is a very low value. But I am getting 71% by setting alpha to 0.5, which seems meaningful to me, as the word set is the same for both domains.
Explanation: The word set should be the same for both domains (as I am comparing the data of two domains that share the word). Don't be confused by the word _set_: it is the same word as set, but I have added an underscore at the start and end to distinguish the two domains.
The top 10 words along with scores of _set_ are:
_niche_ 0.6891741752624512
_intermediate_ 0.6883598566055298
_interpretation_ 0.6813371181488037
_printer_ 0.675414502620697
_finer_ 0.6625382900238037
_pertinent_ 0.6620787382125854
_respective_ 0.6619025468826294
_converse_ 0.6610435247421265
_developed_ 0.659270167350769
_tent_ 0.6588765382766724
Whereas, the top 10 words for set are:
cardinality 0.633270263671875
typereduction 0.6233855485916138
zdzisław 0.619156002998352
crisp 0.6165326833724976
equivalenceclass 0.605925977230072
pawlak 0.6058803200721741
straight 0.6045454740524292
culik 0.6040038466453552
rin 0.6038737297058105
multisets 0.6035065650939941
Why is the cosine similarity value almost 0.00 for the word set across the two different datasets?

How to get the most representative features in the following tfidf model?

Hello, I have the following list:
listComments = ["comment1","comment2","comment3",...,"commentN"]
I created a tfidf vectorizer to get a model from my comments as follows:
from sklearn.feature_extraction.text import TfidfVectorizer

tfidf_vectorizer = TfidfVectorizer(min_df=10, ngram_range=(1, 3), analyzer='word')
tfidf = tfidf_vectorizer.fit_transform(listComments)
Now, in order to understand more about my model, I would like to get the most representative features. I tried:
print("these are the features :",tfidf_vectorizer.get_feature_names())
print("the vocabulary :",tfidf_vectorizer.vocabulary_)
and this is giving me a list of words that I think that my model is using for the vectorization:
these are the features : ['10', '10 days', 'red', 'car',...]
the vocabulary : {'edge': 86, 'local': 96, 'machine': 2,...}
However, I would like to find a way to get the 30 most representative features, i.e. the words that achieve the highest values in my tfidf model, the words with the highest inverse document frequency. I was reading the documentation but was not able to find such a method. I really appreciate help with this issue; thanks in advance.
If you want to get a list of the vocabulary sorted by idf scores, you can use the idf_ attribute and argsort it:
import numpy as np

# create an array of feature names
feature_names = np.array(tfidf_vectorizer.get_feature_names())
# get the feature order by descending idf score
idf_order = tfidf_vectorizer.idf_.argsort()[::-1]
# produce the sorted feature names
feature_names[idf_order]
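To get just the 30 most representative features the question asks for, slice the order array before indexing (a small extension of the snippet above):
# top 30 feature names by idf score
top_30 = feature_names[idf_order[:30]]
print(top_30)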
If you would like to get a sorted list of tfidf scores for each document, you can do a similar thing; note that the argsort result has to be reversed per row so each document's features come out in descending order.
# get the per-document feature order by descending tfidf score
tfidf_order = tfidf.toarray().argsort(axis=1)[:, ::-1]
# produce the words, one row of sorted feature names per document
feature_names[tfidf_order]
