What is the best way to overcome wrong entity recognition with spaCy? - python-3.x

I'm testing the following sentence to extract entity values:
s = "Height: 3m, width: 4.0m, others: 3.4 m, 4m, 5 meters, 10 m. Quantity: 6."
sent = nlp(s)
for ent in sent.ents:
    print(ent.text, ent.label_)
And got some misleading values:
3 CARDINAL
4.0m CARDINAL
3.4 m CARDINAL
4m CARDINAL
5 meters QUANTITY
10 m QUANTITY
6 CARDINAL
Namely, the number 3 is not paired with its unit m. This is the case for many of my examples, so I can't rely on this engine when I want to separate meters from quantities.
Should I do this manually?

One potential difficulty in your example is that it's not very close to natural language. The pre-trained English models were trained on ~2m words of general web and news text, so they're not always going to perform perfectly out-of-the-box on text with a very different structure.
While you could update the model with more examples of QUANTITY in your specific texts, I think that a rule-based approach might actually be a better and more efficient solution here.
The example in this blog post is actually very close to what you're trying to do:
import spacy
from spacy.pipeline import EntityRuler

nlp = spacy.load("en_core_web_sm")

weights_pattern = [
    {"LIKE_NUM": True},
    {"LOWER": {"IN": ["g", "kg", "grams", "kilograms", "lb", "lbs", "pounds"]}}
]
patterns = [{"label": "QUANTITY", "pattern": weights_pattern}]

ruler = EntityRuler(nlp, patterns=patterns)
nlp.add_pipe(ruler, before="ner")

doc = nlp("U.S. average was 2 lbs.")
print([(ent.text, ent.label_) for ent in doc.ents])
# [('U.S.', 'GPE'), ('2 lbs', 'QUANTITY')]
The statistical named entity recognizer respects pre-defined entities and will "predict around" them. So if you're adding the EntityRuler before it in the pipeline, your custom QUANTITY entities will be assigned first and will be taken into account when the entity recognizer predicts labels for the remaining tokens.
Note that this example is using the latest version of spaCy, v2.1.x. You might also want to add more patterns to cover different constructions. For more details and inspiration, check out the documentation on the EntityRuler, combining models and rules and the token match pattern syntax.
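For the measurements in your question, the same idea applies with length units. A minimal sketch (the unit list is just an illustration; note that token patterns match the number and the unit as separate tokens, so cases written as "3m" without a space may need an extra pattern or a tokenizer tweak):
length_pattern = [
    {"LIKE_NUM": True},
    {"LOWER": {"IN": ["m", "meter", "meters", "metre", "metres", "cm", "mm", "km"]}}
]
# Reuse the ruler from above and register the additional pattern.
ruler.add_patterns([{"label": "QUANTITY", "pattern": length_pattern}])

doc = nlp("Height: 3 m, width: 4.0 m, others: 3.4 m, 5 meters, 10 m. Quantity: 6.")
print([(ent.text, ent.label_) for ent in doc.ents])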

Related

Transformers get named entity prediction for words instead of tokens

This is a very basic question, but I spent hours struggling to find the answer. I built a NER model using Hugging Face transformers.
Say I have the input sentence
input = "Damien Hirst oil in canvas"
I tokenize it to get
tokenizer = transformers.BertTokenizer.from_pretrained('bert-base-uncased')
tokenized = tokenizer.encode(input) #[101, 12587, 7632, 12096, 3514, 1999, 10683, 102]
Feeding the tokenized sentence to the model gives predicted tags for the tokens:
['B-ARTIST' 'B-ARTIST' 'I-ARTIST' 'I-ARTIST' 'B-MEDIUM' 'I-MEDIUM'
'I-MEDIUM' 'B-ARTIST']
The prediction comes as output from the model; it assigns a tag to each token.
How can I recombine this data to obtain tags for words instead of tokens? So I would know that
"Damien Hirst" = ARTIST
"Oil in canvas" = MEDIUM
There are two questions here.
Annotating Token Classification
A common sequence-tagging scheme, especially in Named Entity Recognition, is that the first token of a span with tag X gets B-X and every following token inside the span gets I-X.
The problem is that most annotated datasets are tokenized by whitespace! For example:
[CLS] O
Damien B-ARTIST
Hirst I-ARTIST
oil B-MEDIUM
in I-MEDIUM
canvas I-MEDIUM
[SEP] O
where O indicates that it is not a named entity, B-ARTIST is the beginning of a sequence of tokens labelled as ARTIST and I-ARTIST is inside the sequence - and similarly for MEDIUM.
At the moment I posted this answer, there is an example of NER in huggingface documentation here:
https://huggingface.co/transformers/usage.html#named-entity-recognition
The example doesn't exactly answer the question here, but it can add some clarification. A similar style of named entity labels for this example could be as follows:
label_list = [
    "O",         # not a named entity
    "B-ARTIST",  # beginning of an artist name
    "I-ARTIST",  # an artist name
    "B-MEDIUM",  # beginning of a medium name
    "I-MEDIUM",  # a medium name
]
Adapt Tokenizations
With all that said about the annotation scheme, BERT and several other models use a different (subword) tokenization, so we have to align these two tokenizations.
In this case with bert-base-uncased, the expected outcome is like this:
damien B-ARTIST
hi I-ARTIST
##rst I-ARTIST
oil B-MEDIUM
in I-MEDIUM
canvas I-MEDIUM
In order to get this done, you can go through each token in the original annotation, tokenize it, and add its label again:
tokens_old = ['Damien', 'Hirst', 'oil', 'in', 'canvas']
labels_old = ["B-ARTIST", "I-ARTIST", "B-MEDIUM", "I-MEDIUM", "I-MEDIUM"]
label2id = {label: idx for idx, label in enumerate(label_list)}

tokens, labels = zip(*[
    (token, label)
    for token_old, label in zip(tokens_old, labels_old)
    for token in tokenizer.tokenize(token_old)
])
When you add [CLS] and [SEP] to the tokens, their "O" labels must be added to labels as well.
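For instance, something along these lines (just a sketch of the idea, using the string labels from the snippet above):
# prepend/append the special tokens together with their "O" labels
tokens = ["[CLS]", *tokens, "[SEP]"]
labels = ["O", *labels, "O"]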
With the code above, it is possible to get into a situation where a beginning tag like B-ARTIST gets repeated when the first word of a span is split into pieces. According to the description in the huggingface documentation, you can encode these labels with -100 so that they are ignored:
https://huggingface.co/transformers/custom_datasets.html#token-classification-with-w-nut-emerging-entities
Something like this should work:
tokens, labels = zip(*[
    (token, label2id[label] if (label[:2] != "B-" or i == 0) else -100)
    for token_old, label in zip(tokens_old, labels_old)
    for i, token in enumerate(tokenizer.tokenize(token_old))
])
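For the example above (using the subword split shown earlier and the label2id mapping built from label_list), the result comes out roughly as follows; -100 would only appear if a word labelled B-* were split into several pieces:
print(tokens)  # ('damien', 'hi', '##rst', 'oil', 'in', 'canvas')
print(labels)  # (1, 2, 2, 3, 4, 4), i.e. B-ARTIST, I-ARTIST, I-ARTIST, B-MEDIUM, I-MEDIUM, I-MEDIUM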

Is it possible to get classes on the WordNet dataset?

I am playing with WordNet and trying to solve an NLP task.
I was wondering if there exists any way to get a list of words belonging to some large sets, such as "animals" (e.g. dog, cat, cow etc.), "countries", "electronics" etc.
I believe that it should be possible to somehow get this list by exploiting hypernyms.
Bonus question: do you know any other way to classify words into very large classes besides "noun", "adjective" and "verb"? For example, classes like "prepositions", "conjunctions", etc.
Yes, you just check if the category is a hypernym of the given word.
from nltk.corpus import wordnet as wn

def has_hypernym(word, category):
    # Assume the category always uses the most popular sense
    cat_syn = wn.synsets(category)[0]
    # For the input, check all senses
    for syn in wn.synsets(word):
        for match in syn.lowest_common_hypernyms(cat_syn):
            if match == cat_syn:
                return True
    return False

has_hypernym('dog', 'animal')     # => True
has_hypernym('bucket', 'animal')  # => False
If the broader word (the "category" here) is the lowest common hypernym, that means it's a hypernym (an ancestor) of the query word, so the query word is in the category.
Regarding your bonus question, I have no idea what you mean. Maybe you should look at NER or open a new question.
With some help from polm23, I found this solution, which exploits similarity between words and prevents wrong results when the class name is ambiguous.
The idea is that WordNet can be used to compare a list of words with the string animal and compute a similarity score. From the nltk.org webpage:
Wu-Palmer Similarity: Return a score denoting how similar two word senses are, based on the depth of the two senses in the taxonomy and that of their Least Common Subsumer (most specific ancestor node).
def keep_similar(words, similarity_thr):
    # Keep the words whose first noun sense is similar enough to 'animal'
    w2 = wn.synset('animal.n.01')
    return [word for word in words
            if wn.synset(word + '.n.01').wup_similarity(w2) > similarity_thr]
For example, if word_list = ['dog', 'car', 'train', 'dinosaur', 'London', 'cheese', 'radon'], the corresponding scores are:
dog       0.875
car       0.4444444444444444
train     0.5
dinosaur  0.7
London    0.3333333333333333
cheese    0.3076923076923077
radon     0.3076923076923077
This can easily be used to generate a list of animals by setting a proper value of similarity_thr.
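For example, with the scores above, a threshold of 0.6 keeps exactly the two animals in the list:
word_list = ['dog', 'car', 'train', 'dinosaur', 'London', 'cheese', 'radon']
keep_similar(word_list, 0.6)  # => ['dog', 'dinosaur']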

nltk latent semantic analysis copies the first topics over and over

This is my first attempt at Natural Language Processing, so I started with Latent Semantic Analysis and used this tutorial to build the algorithm. After testing it, I see that it only picks up the terms of the first topic and repeats the same terms over and over for the other topics.
I tried feeding it the documents found HERE too, and it does exactly the same thing: it repeats the values of the same topic several times in the other ones.
Could anyone help explain what is happening? I've been searching all over and everything seems exactly like in the tutorials.
from sklearn.feature_extraction.text import TfidfVectorizer as tfdv  # assuming tfdv refers to TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

testDocs = [
    "The Neatest Little Guide to Stock Market Investing",
    "Investing For Dummies, 4th Edition",
    "The Little Book of Common Sense Investing: The Only Way to Guarantee Your Fair Share of Stock Market Returns",
    "The Little Book of Value Investing",
    "Value Investing: From Graham to Buffett and Beyond",
    "Rich Dad's Guide to Investing: What the Rich Invest in, That the Poor and the Middle Class Do Not!",
    "Investing in Real Estate, 5th Edition",
    "Stock Investing For Dummies",
    "Rich Dad's Advisors: The ABC's of Real Estate Investing: The Secrets of Finding Hidden Profits Most Investors Miss",
]
stopwords = ['and', 'edition', 'for', 'in', 'little', 'of', 'the', 'to']
ignorechars = ''',:'!'''

# First we apply the standard SKLearn algorithm to compare with.
for element in testDocs:
    #tokens.append(tokenizer.tokenize(element.lower()))
    element = element.lower()  # note: this does not actually modify testDocs
print(testDocs)

# Vectorize the features.
vectorizer = tfdv(max_df=0.5, min_df=2, max_features=8, stop_words='english', use_idf=True)  #, ngram_range=(1,3))
# Store the values in matrix X.
X = vectorizer.fit_transform(testDocs)

# Apply LSA.
lsa = TruncatedSVD(n_components=3, n_iter=100)
lsa.fit(X)

# Get a list of the terms in the order it was decomposed.
terms = vectorizer.get_feature_names()
print("Terms decomposed from the document: " + str(terms))
print()

# Print the matrix of concepts. Each number represents how important the term is
# to the concept, and the position relates to the position of the term.
print("Number of components in element 0 of matrix of components:")
print(lsa.components_[0])
print("Shape: " + str(lsa.components_.shape))
print()

for i, comp in enumerate(lsa.components_):
    # Attach each of the terms to the respective component. zip creates tuples from the 2 sequences.
    termsInComp = zip(terms, comp)
    # Sort the terms according to their weight in the component.
    sortedTerms = sorted(termsInComp, key=lambda x: x[1], reverse=True)
    print("Concept %d" % i)
    for term in sortedTerms:
        print(term[0], end="\t")
    print()

find bigram using gensim

This is my code:
from gensim.models import Phrases
documents = ["the mayor of new york was there the hill have eyes","the_hill have_eyes new york mayor was present"]
sentence_stream = [doc.split(" ") for doc in documents]
bigram = Phrases(sentence_stream, min_count=1)
sent = ['the', 'mayor', 'of', 'new_york', 'was', 'there', 'the_hill', 'have_eyes']
print(bigram[sent])
I want it to detect "the_hill_have_eyes", but the output is
['the', 'mayor', 'of', 'new_york', 'was', 'there', 'the_hill', 'have_eyes']
Phrases is a purely-statistical method for combining some unigram-token-pairs to new bigram-tokens. If it's not combining two unigrams you think should be combined, it's because the training data and/or chosen parameters (like threshold or min_count) don't imply that pairing should be combined.
Note especially that:
even when Phrases-combinations prove beneficial for downstream classification or info-retrieval steps, they may not intuitively/aesthetically match the "phrases" we as human readers would like to see
since Phrases requires bulk statistics for good results, it requires a lot of training data – you are unlikely to see impressive or representative results from tiny toy-sized training data
In particular with regard to that last point and your example, the interpretation of min_count in the Phrases default scoring means that even min_count=1 isn't low enough to create bigrams for which there is only a single example in the training corpus.
So, if you expand your training corpus a bit, you may be able to create the results you want. But you should still be aware that this method's only value comes from training on larger, realistic corpora, so anything you see in tiny contrived examples may not generalize to real uses.
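As a rough illustration (a toy corpus invented here for the sketch; exact scores depend on the gensim version and the default scorer), giving the pair several co-occurrences and lowering threshold can be enough for it to be merged:
from gensim.models import Phrases

# Toy training data in which "the_hill have_eyes" co-occurs several times.
sentence_stream = [
    ["the_hill", "have_eyes", "is", "a", "film"],
    ["i", "watched", "the_hill", "have_eyes"],
    ["the_hill", "have_eyes", "was", "remade"],
]
bigram = Phrases(sentence_stream, min_count=1, threshold=1)

sent = ["the", "mayor", "of", "new_york", "was", "there", "the_hill", "have_eyes"]
print(bigram[sent])
# With enough co-occurrences relative to the threshold, "the_hill" and
# "have_eyes" should come out merged as "the_hill_have_eyes".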
What you want is not actually bigrams but "fourgrams".
This can be achieved by doing something like this (my old piece of code I wrote some months ago):
from gensim.test.utils import datapath
from gensim.models.word2vec import Text8Corpus
from gensim.models.phrases import Phrases, Phraser

# read the txt file
sentences = Text8Corpus(datapath('testcorpus.txt'))
phrases = Phrases(sentences, min_count=1, threshold=1)
bigram = Phraser(phrases)
sent = [u'trees', u'graph', u'minors']
# look for words in "sent"
print(bigram[sent])
# output: [u'trees_graph', u'minors']

# to create the bigrams
bigram_model = Phrases(unigram_sentences)
# apply the trained model to a sentence
for unigram_sentence in unigram_sentences:
    bigram_sentence = u' '.join(bigram_model[unigram_sentence])
# get a trigram model out of the bigram
trigram_model = Phrases(bigram_sentences)
So here you have a trigram model (detecting 3 words together), and you get the idea of how to implement fourgrams.
Hope this helps. Good luck.
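To make the chaining explicit, here is a rough sketch (assuming unigram_sentences is an iterable of token lists; each Phrases pass can merge a pair of existing tokens, so stacking passes builds longer and longer compounds):
bigram = Phraser(Phrases(unigram_sentences, min_count=1, threshold=1))
bigram_sentences = [bigram[s] for s in unigram_sentences]

trigram = Phraser(Phrases(bigram_sentences, min_count=1, threshold=1))
trigram_sentences = [trigram[s] for s in bigram_sentences]

# Each further pass can merge already-merged tokens into longer compounds,
# which is how four-word phrases eventually appear.
fourgram = Phraser(Phrases(trigram_sentences, min_count=1, threshold=1))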

How do I get the correct NER using SpaCy from text like "F.B.I. Agent Peter Strzok, Who Criticized Trump in Texts, Is Fired"?

How do I get the correct NER using SpaCy from text like "F.B.I. Agent Peter Strzok, Who Criticized Trump in Texts, Is Fired - The New York Times SectionsSEARCHSkip to contentSkip to site."
here "Criticized Trump" is recognized as person instead of "Trump" as person.
How to pre-process and lower case the text like "Criticized" or "Texts" from the above string to overcome above issue or any other technique to do so.
import spacy
from spacy import displacy
from collections import Counter
import en_core_web_sm
nlp = en_core_web_sm.load()
from pprint import pprint
sent = ("F.B.I. Agent Peter Strzok, Who Criticized Trump in Texts, Is Fired - The New York Times SectionsSEARCHSkip to contentSkip to site")
doc = nlp(sent)
pprint([(X, X.ent_iob_, X.ent_type_) for X in doc])
Result from the above code:
"Criticized Trump" as 'PERSON' and "Texts" as 'GPE'
Expected result should be:
"Trump" as 'PERSON' instead of "Criticized Trump" as 'PERSON' and "Texts" as '' instead of "Texts" as 'GPE'
You can add more examples of Named Entities to tune the NER model. Here you have all the information needed for the preparation of training data: https://spacy.io/usage/training. You can use Prodigy (an annotation tool from the spaCy creators, https://prodi.gy) to mark Named Entities in your data.
Indeed, you can pre-process the text using POS tagging in order to lowercase words like "Criticized" or "Texts", which are not proper nouns.
Proper capitalization (lower vs. upper case) will help the NER tagger.
sent = "F.B.I. Agent Peter Strzok, Who Criticized Trump in Texts, Is Fired - The New York Times SectionsSEARCHSkip to contentSkip to site"
doc = nlp(sent)
words = []
spaces = []
for a in doc:
if a.pos_ != 'PROPN':
words.append( a.text.lower() )
else:
words.append(a.text)
spaces.append(a.whitespace_)
spaces = [len(sp) for sp in spaces]
docNew = Doc(nlp.vocab, words=words, spaces=spaces)
print(docNew)
# F.B.I. Agent Peter Strzok, who criticized Trump in texts, is fired - the New York Times SectionsSEARCHSkip to contentskip to site
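A possible follow-up (just a sketch; the exact entities you get back depend on the model version) is to run the normalized text through the pipeline again and inspect the entities:
doc2 = nlp(docNew.text)
print([(ent.text, ent.label_) for ent in doc2.ents])
# The hope is that "Trump" is now tagged as PERSON on its own and
# "texts" no longer comes out as GPE.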
