NLP: Curating definitional summaries for a specific term from a textbook

I would like to be able to curate definitional summaries for a specific term from a textbook.
For example, from a Biology textbook, I would like to be able to form a concise summary for the word "mitochondria". I have tried this by first parsing the textbook for all sentences that contain the word "mitochondria" and feeding those sentences through summarization algorithms such as TextRank and LexRank, but those algorithms were not very good at picking out "definitional" sentences.
By definitional summaries, I mean sentences that are useful as far as a definition goes. For example, the sentence "The mitochondria is the powerhouse of the cell" would be a definitional sentence, while the sentence "Fungal cells also contain mitochondria and a complex system of internal membranes, including the endoplasmic reticulum and Golgi apparatus" is not really pertinent to the definition of the mitochondria.
Any help or leads would be very much appreciated.

There isn't really a straightforward way to do this, but you do have some options:
Just use a regex for "mitochondria is". It is the stupidest possible thing, but given a textbook it might prove satisfactory. It's simple enough that testing should be easy, and at worst it provides a baseline to compare alternatives against.
Run a parser (example: Stanford Parser) on each sentence with the word "mitochondria", and extract sentences where mitochondria is the subject. This would eliminate the negative example you gave. You would have to tune this, perhaps restricting main verbs, accounting for coordinators, and so on. (Both of these options are sketched in the code after this list.)
Use Information Extraction (example: Stanford OpenIE) to get a list of facts about mitochondria (like is-in(mitochondria, cell)) and do something with that.
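A minimal sketch of options 1 and 2, using spaCy as a stand-in for the Stanford Parser; the example sentences, the regex pattern, and the subject-dependency check are my assumptions, not part of the answer above:

```python
import re
import spacy

term = "mitochondria"
sentences = [
    "The mitochondria is the powerhouse of the cell.",
    "Fungal cells also contain mitochondria and a complex system of internal membranes.",
]

# Option 1: the dumb regex baseline for "<term> is/are ..."
pattern = re.compile(rf"\b{term}\b\s+(is|are)\b", re.IGNORECASE)
regex_hits = [s for s in sentences if pattern.search(s)]

# Option 2: keep only sentences where the term is the grammatical subject
nlp = spacy.load("en_core_web_sm")

def term_is_subject(sentence, term):
    doc = nlp(sentence)
    return any(tok.text.lower() == term and tok.dep_ in ("nsubj", "nsubjpass")
               for tok in doc)

parse_hits = [s for s in sentences if term_is_subject(s, term)]
print(regex_hits)
print(parse_hits)
```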

This is a very open-ended question, but I can try to point out how I would approach it.
One way would be to use some kind of vector representation for text (word2vec or sent2vec come to mind).
Then, by encoding each sentence as the average of its word vectors and checking the cosine similarity between that vector and the vector for the term you seek, you could get something close to the definitional sentences you are after.
Even computing the cosine similarity between the averaged sentences you get out of the summarization algorithm and the term might help you judge how close you are.
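A rough sketch of that idea with gensim word vectors; the toy sentences, hyperparameters, and averaging scheme are assumptions (you would train on the full textbook or use pretrained vectors):

```python
import numpy as np
from gensim.models import Word2Vec

# Candidate sentences that contain the term, already tokenized
candidates = [
    ["the", "mitochondria", "is", "the", "powerhouse", "of", "the", "cell"],
    ["fungal", "cells", "also", "contain", "mitochondria"],
]
model = Word2Vec(candidates, vector_size=50, min_count=1, epochs=50)

def avg_vector(tokens, wv):
    vecs = [wv[t] for t in tokens if t in wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(wv.vector_size)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

term_vec = model.wv["mitochondria"]
# Rank candidate sentences by similarity of their averaged vector to the term vector
ranked = sorted(candidates,
                key=lambda s: cosine(avg_vector(s, model.wv), term_vec),
                reverse=True)
print(ranked[0])
```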

Related

Use the polarity distribution of a word to detect the sentiment of new words

I have just started a project in NLP. Suppose I have a graph for each word that shows the polarity distribution of sentiments for that word in different sentences. I want to know what I can use to recognize the sentiment of new words. If you have any other approach in mind, I would be happy to hear it.
I apologize for any possible errors in my writing. Thanks a lot.
Assuming you've got some words that have been hand-labeled with positive/negative sentiments, but then you encounter some new words that aren't labeled:
If you encounter the new words totally alone, outside of any context, there's not much you can do. (Maybe you could go out and try to find extra texts with those new words, such as via dictionaries or the web, then use those larger texts in the next approach.)
If you encounter the new words inside texts that also include some of your hand-labeled words, you could try guessing that the new words are most like the words you already know that are closest-to, or used-in-the-same-places. This would leverage what's called "the distributional hypothesis" – words with similar distributions have similar meanings – that underlies a lot of computer natural-language analysis, including word2vec.
One simple thing to try along these lines: across all your texts, for every unknown word U, tally up the counts of all neighboring words within N positions. (N could be 1, or larger.) From that, pick the top 5 words occurring most often near the unknown word, look up your prior labels, and average them together (perhaps weighted by the number of occurrences).
You'll then have a number for the new word.
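A rough sketch of that count-and-average idea; `word_sentiment` is an assumed dict of hand-labeled words mapped to scores in [-1, 1]:

```python
from collections import Counter

def guess_sentiment(unknown_word, tokenized_texts, word_sentiment, window=2, top_k=5):
    """Average the labels of the top_k labeled words seen near unknown_word."""
    neighbor_counts = Counter()
    for tokens in tokenized_texts:
        for i, tok in enumerate(tokens):
            if tok != unknown_word:
                continue
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            for j in range(lo, hi):
                if j != i and tokens[j] in word_sentiment:
                    neighbor_counts[tokens[j]] += 1
    top = neighbor_counts.most_common(top_k)
    if not top:
        return None  # never saw a labeled neighbor
    total = sum(c for _, c in top)
    return sum(word_sentiment[w] * c for w, c in top) / total  # count-weighted average
```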
Alternatively, you could train a word2vec set-of-word-vectors on all of your texts, including the unknown & known words. Then, ask that model for the N most-similar neighbors to your unknown word. (Again, N could be small or large.) Then, from among those neighbors with known labels, average them together (again perhaps weighted by similarity) to get a number for the previously unknown word.
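And a sketch of that word2vec variant, with a toy corpus and toy labels standing in for your real data:

```python
from gensim.models import Word2Vec

tokenized_texts = [["the", "service", "was", "terrible", "and", "slow"],
                   ["a", "great", "and", "friendly", "experience"]]
word_sentiment = {"terrible": -1.0, "slow": -0.5, "great": 1.0, "friendly": 0.8}

model = Word2Vec(tokenized_texts, vector_size=50, window=5, min_count=1, epochs=50)

def guess_sentiment_w2v(unknown_word, model, word_sentiment, topn=20):
    if unknown_word not in model.wv:
        return None
    labeled = [(w, sim) for w, sim in model.wv.most_similar(unknown_word, topn=topn)
               if w in word_sentiment and sim > 0]
    if not labeled:
        return None
    total = sum(sim for _, sim in labeled)
    return sum(word_sentiment[w] * sim for w, sim in labeled) / total  # similarity-weighted

print(guess_sentiment_w2v("slow", model, word_sentiment))
```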
I wouldn't particularly expect either of these techniques to work very well. The idea that individual words can have a specific sentiment is somewhat weak, given that in actual language their meaning is heavily modified, or even reversed, by the surrounding grammar/context. But in each case these simple calculate-from-neighbors techniques are probably better than random guesses.
If your real aim is to calculate the overall sentiment of longer texts, like sentences, paragraphs, reviews, etc., then you should discard your labels of individual words and acquire/create labels for full texts, then apply real text-classification techniques to those larger texts. A simple word-by-word approach won't do very well compared to other techniques – as long as those techniques have plenty of labeled training data.

Identifying bigrams using Gensim Phraser that contain the word "not," for sentiment analysis

I am working on a sentiment analysis project where I am analyzing a corpus of documents, and I am specifically not removing the word "not" as a stopword, so that I can use it to determine if a text agrees or disagrees with something. For instance, there is a difference between "not effective" and "effective" when discussing the COVID vaccine.
However, my phraser is not identifying any bigrams with the word "not." I presume this is because that token occurs in such large numbers (particularly because I expanded contractions, so "isn't" -> "is not") that the scoring function simply scores all bigrams with "not" too low. This would be because the standard phrase scoring function is:
score = (bigram_count - min_count) * len_vocab / (worda_count * wordb_count)
(where min_count is a hyperparameter)
So, since "not" occurs many thousands of times in the corpus, worda_count will be very large, leading to a large denominator and dropping the score considerably.
Is there a way to get around this, so "not" bigrams are scored effectively?
I can think of a few options off the top of my head:
Write my own scoring function that effectively has two scoring formulas: the standard scoring formula, and a different one if the first word is "not".
I could include "not" in a list of connector_words, but gensim.models.phrases.Phraser specifically indicates that these connector words cannot be at the beginning or end of a phrase.
As you've discovered, the Phrases functionality in Gensim is pretty crude: it only combines words based on a meaning-oblivious statistical analysis. It's more likely to be helpful in promoting certain noun-phrases ('new_york') or idioms than generic syntactical reversals-of-meaning (as with an added 'not'). So whether you'll want to use it at all, I'm not sure.
You could try the most simpleminded thing possible: preprocess to always attach 'not' to the following word. Maybe it'll help!
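The simpleminded version, in case it's useful, is pure preprocessing and nothing gensim-specific:

```python
def attach_not(tokens):
    """Glue 'not' onto the token that follows it, e.g. ['is', 'not', 'effective']
    becomes ['is', 'not_effective']."""
    out, skip = [], False
    for i, tok in enumerate(tokens):
        if skip:
            skip = False
            continue
        if tok == "not" and i + 1 < len(tokens):
            out.append("not_" + tokens[i + 1])
            skip = True
        else:
            out.append(tok)
    return out

print(attach_not(["the", "vaccine", "is", "not", "effective"]))
# ['the', 'vaccine', 'is', 'not_effective']
```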
You could also try some expensive grammar-aware preprocessing - the sort that labels words with parts-of-speech, & further identifies which other words/word-ranges a particular 'not' modifies. That might allow you to conditionally connect the 'not' to later words – maybe even non-contiguous words – & perhaps that will provide a lift to downstream sentiment-analysis.
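One possible shape for that grammar-aware version, using spaCy's dependency parse as an assumed stand-in (the 'neg' label marks negation in spaCy's English models):

```python
import spacy

nlp = spacy.load("en_core_web_sm")

def attach_negation(text):
    """Fold each negation marker into the word it modifies, even if non-adjacent."""
    doc = nlp(text)
    negated_heads = {tok.head.i for tok in doc if tok.dep_ == "neg"}
    out = []
    for tok in doc:
        if tok.dep_ == "neg":
            continue  # the marker itself is absorbed into its head
        word = tok.text.lower()
        out.append("not_" + word if tok.i in negated_heads else word)
    return out

print(attach_negation("The vaccine was not considered effective."))
# e.g. ['the', 'vaccine', 'was', 'not_considered', 'effective', '.']
```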

Bytes vs Characters vs Words - which granularity for n-grams?

At least 3 types of n-grams can be considered for representing text documents:
byte-level n-grams
character-level n-grams
word-level n-grams
It's unclear to me which one should be used for a given task (clustering, classification, etc). I read somewhere that character-level n-grams are preferred to word-level n-grams when the text contains typos, so that "Mary loves dogs" remains similar to "Mary lpves dogs".
Are there other criteria to consider for choosing the "right" representation?
Evaluate. The criterion for choosing the representation is whatever works.
Indeed, character level (which is not the same as bytes, unless you only care about English) is probably the most common representation, because it is robust to spelling differences (which do not need to be errors; if you look at history, spelling changes). So for spelling-correction purposes, this works well.
On the other hand, the Google Books n-gram viewer uses word-level n-grams on its books corpus, because the goal there is not to analyze spelling but term usage over time; e.g. "child care", where the individual words aren't as interesting as their combination. Word-level n-grams have also been shown to be very useful in machine translation, in what is often referred to as the "refrigerator magnet" model.
If you are not processing international language, bytes may be meaningful, too.
I would outright discard byte-level n-grams for text-related tasks, because bytes are not a meaningful representation of anything.
Of the 2 remaining levels, character-level n-grams will need much less storage space and will, consequently, hold much less information. They are usually used in tasks such as language identification, writer identification (i.e. fingerprinting) and anomaly detection.
As for word-level n-grams, they may serve the same purposes, and much more, but they need much more storage. For instance, you'll need up to several gigabytes to represent in memory a useful subset of English word 3-grams (for general-purpose tasks). Yet, if you have a limited set of texts you need to work with, word-level n-grams may not require so much storage.
As for the issue of errors, a sufficiently large word n-gram corpus will also include and represent them. Besides, there are various smoothing methods to deal with sparsity.
The other issue with n-grams is that they will almost never be able to capture the whole needed context, so they can only approximate it.
You can read more about n-grams in the classic Foundations of Statistical Natural Language Processing.
I use character ngrams on small strings, and word ngrams for something like text classification of larger chunks of text. It is a matter of which method will preserve the context you need more or less...
In general, for classification of text, word ngrams will help a bit with word-sense disambiguation, where character ngrams would be easily confused and your features could be completely ambiguous. For unsupervised clustering, it will depend on how general you want your clusters, and on what basis you want docs to converge. I find stemming, stopword removal, and word bigrams work well in unsupervised clustering tasks on fairly large corpora.
Character ngrams are great for fuzzy string matching of small strings.
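A toy illustration of why character n-grams tolerate typos (the example strings come from the question above):

```python
def char_ngrams(text, n=3):
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def jaccard(a, b):
    return len(a & b) / len(a | b)

# The single-character typo barely moves the trigram overlap,
# while the misspelled word itself matches nothing at word level.
print(jaccard(char_ngrams("Mary loves dogs"), char_ngrams("Mary lpves dogs")))  # ~0.62
```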
I like to think of a set of grams as a vector, and imagine comparing vectors with the grams you have, then ask yourself if what you are comparing maintains enough context to answer the question you are trying to answer.
HTH

Part-of-speech tagging: tagging unknown words

In a part-of-speech tagger, the most probable tag sequence for a given sentence is determined using an HMM by
P(T*) = argmax_T P(Word/Tag) * P(Tag/TagPrev)
But when 'Word' did not appear in the training corpus, P(Word/Tag) is ZERO for all possible tags, which leaves no room for choosing the best one.
I have tried a few ways:
1) Assigning a small constant probability to all unknown words, P(UnknownWord/AnyTag) ~ epsilon. This effectively ignores P(Word/Tag) for unknown words by giving them a constant probability, so the decision for an unknown word is made by the prior probabilities alone. As expected, it is not producing good results.
2) Laplace Smoothing
I am confused by this; I don't know what the difference between (1) and this is. My understanding of Laplace smoothing is that it adds a constant (lambda) to all unknown & known words, so all unknown words get a constant probability (a fraction of lambda) and the probabilities of known words stay the same relative to each other, since every word's probability is increased by lambda.
Is Laplace smoothing the same as the previous approach?
*) Is there any better way of dealing with unknown words?
Your two approaches are similar, but, if I understand correctly, they differ in one key way. In (1) you are assigning extra mass to counts of unknown words and in (2) you are assigning extra mass to all counts. You definitely want to do (2) and not (1).
One of the problems with Laplace smoothing is that it gives too much of a boost to unknown words and drags down the probabilities of high-probability words too much (relatively speaking). Your version (1) would actually worsen this problem. Basically, it would over-smooth.
Laplace smoothing works OK for an HMM, but it's not great. Most people do add-one smoothing, but you could experiment with things like add-one-half or whatever.
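For concreteness, here is a sketch of what add-k (Laplace when k = 1) smoothing of the emission probabilities looks like; the count dictionaries and the extra vocabulary slot for unseen words are assumptions of this sketch:

```python
def emission_prob(word, tag, emit_counts, tag_counts, vocab_size, k=1.0):
    """P(word | tag) with add-k smoothing.

    emit_counts[(word, tag)] and tag_counts[tag] are raw training counts;
    vocab_size + 1 reserves one slot for unseen words, so an unknown word
    gets k / (tag_counts[tag] + k * (vocab_size + 1)) rather than zero.
    """
    count = emit_counts.get((word, tag), 0)
    return (count + k) / (tag_counts[tag] + k * (vocab_size + 1))
```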
If you want to move beyond this naive approach to smoothing, check out "one-count smoothing", as described in the Appendix of Jason Eisner's HMM tutorial. The basic idea here is that for unknown words more probability mass should be given to tags that appear with a wider variety of low frequency words. For example, since the tag NOUN appears on a large number of different words and DETERMINER appears on a small number of different words, it is more likely that an unseen word will be a NOUN.
If you want to get even fancier, you could use a Chinese Restaurant Process model taken from non-parametric Bayesian statistics to put a prior distribution on unseen word/tag combinations. Kevin Knight's Bayesian inference tutorial has details.
I think the HMM-based TnT tagger provides a better approach to handle unknown words (see the approach in TnT tagger's paper).
The accuracy results (for known words and unknown words) of TnT and other two POS and morphological taggers on 13 languages including Bulgarian, Czech, Dutch, English, French, German, Hindi, Italian, Portuguese, Spanish, Swedish, Thai and Vietnamese, can be found in this article.

Financial news headers classification to positive/negative classes

I'm doing a small research project where I should try to split financial news article headers into positive and negative classes. For classification I'm using an SVM approach. The main problem I see now is that not a lot of features can be produced for ML. News article headers contain a lot of Named Entities and other "garbage" elements (from my point of view, of course).
Could you please suggest ML features which can be used for training? Current results are: precision = 0.6, recall = 0.8
Thanks
The task is not trivial at all.
The straightforward approach would be to find or create a training set. That is a set of headers with positive news and a set of headers with negative news.
You turn the training set into a TF-IDF representation and then you train a linear SVM to separate the two classes. Depending on the quality and size of your training set you can achieve something decent – though I'm not sure about reaching a 0.7 break-even point.
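A bare-bones version of that baseline with scikit-learn; the two example headers and labels are placeholders for your labeled training set:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

headers = ["Company X posts record quarterly profits",
           "Company Y files for bankruptcy after weak sales"]
labels = [1, 0]  # 1 = positive, 0 = negative

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), sublinear_tf=True),
                    LinearSVC())
clf.fit(headers, labels)
print(clf.predict(["Markets rally as profits surge"]))
```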
Then, to get better results, you need to go for NLP approaches. Try using a part-of-speech tagger to identify adjectives (trivial), and then score them using some sentiment DB like SentiWordNet.
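A hedged sketch of that tag-then-score idea with NLTK's SentiWordNet interface (requires the punkt, averaged_perceptron_tagger, wordnet and sentiwordnet data packages; the scoring scheme here is my own simplification):

```python
import nltk
from nltk.corpus import sentiwordnet as swn

def adjective_polarity(header):
    score = 0.0
    for word, tag in nltk.pos_tag(nltk.word_tokenize(header)):
        if not tag.startswith("JJ"):  # keep adjectives only
            continue
        synsets = list(swn.senti_synsets(word, "a"))
        if synsets:  # average positive-minus-negative score over the word's senses
            score += sum(s.pos_score() - s.neg_score() for s in synsets) / len(synsets)
    return score

print(adjective_polarity("Weak demand triggers terrible quarterly results"))
```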
There is an excellent overview of Sentiment Analysis by Bo Pang and Lillian Lee that you should read.
How about these features?
1. Length of the article header in words
2. Average word length
3. Number of words in a dictionary of "bad" words, e.g. dictionary = {terrible, horrible, downturn, bankruptcy, ...}. You may have to generate this dictionary yourself.
4. Ratio of words in that dictionary to total words in the header
5. Similar to 3, but the number of words in a "good" dictionary of words, e.g. dictionary = {boon, booming, employment, ...}
6. Similar to 4, but using the "good"-word dictionary
7. Time of the article's publication
8. Date of the article's publication
9. The medium through which it was published (you'll have to do some subjective classification)
10. A count of certain punctuation marks, such as the exclamation point
If you're allowed access to the actual article, you could use surface features from it, such as its total length and perhaps even the number of responses or the level of opposition to that article. You could also look at other dictionaries online, such as Ogden's 850-word Basic English list, and see whether bad/good articles would be likely to draw many words from those. I agree that it seems difficult to come up with a long list (e.g. 100 features) of useful features for this purpose.
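A few of those features sketched as a simple extractor; the word lists are placeholders you would curate yourself:

```python
import string

BAD_WORDS = {"terrible", "horrible", "downturn", "bankruptcy"}
GOOD_WORDS = {"boon", "booming", "employment", "growth"}

def header_features(header):
    words = [w.strip(string.punctuation).lower() for w in header.split()]
    n_bad = sum(w in BAD_WORDS for w in words)
    n_good = sum(w in GOOD_WORDS for w in words)
    n = max(len(words), 1)
    return {
        "n_words": len(words),
        "avg_word_len": sum(len(w) for w in words) / n,
        "n_bad": n_bad, "bad_ratio": n_bad / n,
        "n_good": n_good, "good_ratio": n_good / n,
        "n_exclaim": header.count("!"),
    }

print(header_features("Bankruptcy fears trigger market downturn!"))
```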
iliasfl is right, this is not a straightforward task.
I would use a bag of words approach but use a POS tagger first to tag each word in the headline. Then you could remove all of the named entities - which as you rightly point out don't affect the sentiment. Other words should appear frequently enough (if your dataset is big enough) to cancel themselves out from being polarised as either positive or negative.
One step further, if you still aren't close, could be to select only the adjectives and verbs from the tagged data, as those are the words that tend to convey emotion or mood.
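A possible preprocessing step along those lines, using spaCy (an assumption on my part; any tagger/NER combination would do): drop named-entity tokens, then keep only adjectives and verbs before building the bag of words.

```python
import spacy

nlp = spacy.load("en_core_web_sm")

def preprocess(header, keep_pos=("ADJ", "VERB")):
    doc = nlp(header)
    return [tok.lemma_.lower() for tok in doc
            if tok.ent_type_ == ""                 # drop named-entity tokens
            and not tok.is_stop and not tok.is_punct
            and tok.pos_ in keep_pos]

print(preprocess("Acme Corp shares plunge after disappointing earnings"))
# e.g. ['plunge', 'disappointing']
```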
I wouldn't be too disheartened by your precision and recall figures though; an F number of 0.8 and above is actually quite good.
