Are there any known ways (above and beyond statistical analysis, but not necessarily excluding it as part of the solution) to relate sentences or concepts to one another using Natural Language Processing? Thus far I've only worked with NLTK and Stanford NLP for my project, but I am open to alternative open-source solutions.
As an example, take the following George Orwell essay (http://orwell.ru/library/essays/wiw/english/e_wiw). Suppose I gave the application the sentence
"What are George Orwell's opinions on writers?"
or perhaps
"George Orwell believes writers enjoy writing to express their creativity, to make a point and for their egos."
That might yield lines from the essay such as
"The aesthetic motive is very feeble in a lot of writers, but even a pamphleteer or writer of textbooks will have pet words and phrases which appeal to him for non-utilitarian reasons; or he may feel strongly about typography, width of margins, etc."
or
"Serious writers, I should say, are on the whole more vain and self-centered than journalists, though less interested in money."
I understand that this is not easy and I may not achieve much accuracy, but I was hoping for ideas on what already exists and what I could try to get started, or at least to get the best results possible based on what is already known and available.
The simplest way of doing this might be to use a distance function (such as cosine similarity) between your query sentence and the sentence pool. It's easy to implement: create a vocabulary from the text collection and represent each sentence as a vector. You can use TF-IDF for the values in the vector, calculate the cosine similarity between sentences, and take the highest-scoring sentence with respect to your query sentence.
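A minimal sketch of that idea, assuming scikit-learn is acceptable alongside NLTK (the sentences below are just placeholders for the essay's sentence pool):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Placeholder sentence pool (e.g. sentences split out of the essay) and a query.
    sentences = [
        "The aesthetic motive is very feeble in a lot of writers.",
        "Serious writers are on the whole more vain and self-centered than journalists.",
    ]
    query = "George Orwell believes writers enjoy writing to express their creativity."

    # Build one TF-IDF space over the pool plus the query, then compare the query
    # (last row) against every pool sentence and take the best match.
    vectors = TfidfVectorizer(stop_words="english").fit_transform(sentences + [query])
    scores = cosine_similarity(vectors[-1], vectors[:-1]).ravel()
    print(sentences[scores.argmax()], scores.max())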
Or you can build an index from your corpus using, for example, Lucene and let it do the work for you.
You may also consider using LSA (Latent Semantic Analysis), which lets you measure the similarity between sentences.
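If you try LSA, one common recipe is TF-IDF followed by a truncated SVD; a rough sketch with scikit-learn (the number of components is an arbitrary choice):

    from sklearn.decomposition import TruncatedSVD
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def lsa_scores(sentences, query, n_components=100):
        """Project TF-IDF vectors into a low-rank 'topic' space and compare there."""
        tfidf = TfidfVectorizer(stop_words="english").fit_transform(sentences + [query])
        # The number of components must stay below the matrix dimensions.
        svd = TruncatedSVD(n_components=min(n_components, min(tfidf.shape) - 1))
        reduced = svd.fit_transform(tfidf)
        # Compare the query (last row) against every pool sentence in the reduced space.
        return cosine_similarity([reduced[-1]], reduced[:-1]).ravel()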
From what I understand from your question (and also your comment), you are more interested in understanding the meaning of individual sentences and then relating them to each other. A statistical approach, in my opinion, is more about "getting a feel" for a sentence than understanding it, so I would suggest a deep-parsing approach.
Deep parse the sentence, understand what roles the words play in it, work out the subject-verb-object structure (left-to-right parsing and similar techniques), and then build a vocabulary that helps you categorise the nouns and verbs.
e.g.
"Serious writers, I should say, are on the whole more vain and self-centered than journalists, though less interested in money."
Parsing this sentence lets you understand that its subject is "serious writers" ("serious" being an adjective modifying "writers"). The verbs are "are" (current state) and "interested". Each verb then points to more vocabulary, including adjectives. If you arrange this vocabulary in the right way (and keep building it), I think you should get somewhere with your problem.
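As a rough illustration of what a deep parse gives you, here is a dependency-parse sketch using spaCy (one possible parser among many, not something the question's NLTK/Stanford setup requires; the exact labels depend on the model):

    import spacy

    nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

    doc = nlp("Serious writers, I should say, are on the whole more vain "
              "and self-centered than journalists, though less interested in money.")

    for token in doc:
        # e.g. nsubj = nominal subject, acomp = adjectival complement, amod = adjective modifier
        if token.dep_ in ("nsubj", "acomp", "amod", "conj"):
            print(f"{token.text:15} {token.dep_:8} -> {token.head.text}")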
Related
I wrote some code to get the sentiment (i.e. good, bad, or average) of a sentence by matching its adjectives against my predefined sets of good, bad, and average words. But for negation (a sentence containing "not"), my code is not able to assign the correct sense (good, bad, or average) to the sentence.
For example, take the sentence "Bob is best boy in the school." Since this sentence contains the adjective "best", which matches the good set, the good sense is assigned to it.
But for the negated sentence "Bob is not best boy in the school", the only adjective is still "best", which matches the good set, so the good sense is assigned again. Here "not" turns the sense to bad, but my code is not able to handle the "not" in the sentence.
How can I solve this negation problem?
"not" is a word to negate expressions in language. To use the term of "negation" would be better for the problem.
To handle "negation", one would use negation triggers (e.g. not, never) and their scopes in sentences. In "Bob is not best boy in the school" example, "best boy in the school" is scope of "not". Scope of negation could be detected via some basic rules or via heuristics using syntactic parse trees as well.
For sentiment analysis, if a sentiment-laden term falls in the scope of a negation trigger, one can invert or dampen the sentiment value of that term, or simply flag it.
The case you mentioned needs to be treated differently, however. A superlative adjective in the scope of a negation may be handled using the adjective's antonym scale:
worst - bad - neutral - good - best
So these terms are "scaled" and negation conveys semantics this way:
"not best" implies one of "worst - bad - neutral - good", however in general between bad and good also other context of the sentence must be examined
"not good" implies one of "bad - neutral"
This concept is something I took from Grice's scalar implicature. You can look that up for more detail.
In conclusion, for a simple solution: if you use sentiment association scores for those kinds of adjectives (e.g. best: +4), I suggest not inverting the score directly by multiplying it by -1 when it is under the scope of a negation, but multiplying it by -0.5 to land on an in-between association.
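A toy sketch of that dampening idea (the trigger list, the fixed-window scope rule, and the scores are all illustrative assumptions, not a real lexicon):

    NEGATION_TRIGGERS = {"not", "never", "no", "n't"}
    SENTIMENT = {"best": 4.0, "good": 2.0, "bad": -2.0, "worst": -4.0}  # toy lexicon

    def sentence_score(tokens, scope_size=4):
        """Sum sentiment scores, dampening terms that fall in a negation scope."""
        score = 0.0
        negated_until = -1
        for i, tok in enumerate(t.lower() for t in tokens):
            if tok in NEGATION_TRIGGERS:
                # Crude scope rule: the next few tokens are treated as negated.
                negated_until = i + scope_size
                continue
            if tok in SENTIMENT:
                value = SENTIMENT[tok]
                # Dampen rather than fully invert, per the -0.5 suggestion above.
                score += value * -0.5 if i <= negated_until else value
        return score

    print(sentence_score("Bob is best boy in the school".split()))      # 4.0
    print(sentence_score("Bob is not best boy in the school".split()))  # -2.0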
Hope that helps, cheers.
The approach you are taking to sentiment analysis is very basic. You need to use some good algorithms for sentiment analysis; good starting points are support vector machines and random forests, which can give you good results without huge amounts of training data. If you care about very good accuracy, use deep neural nets. Some good options for datasets are mentioned below.
Huge n-grams dataset from Google: storage.googleapis.com/books/ngrams/books/datasetsv2.html
http://www.sananalytics.com/lab/twitter-sentiment/
http://inclass.kaggle.com/c/si650winter11/data
http://nlp.stanford.edu/sentiment/treebank.html
It is because of the problem you are facing that people started using statistics for NLP. There are several other steps involved before you apply these algorithms, such as sentence tokenization, word tokenization, lexical analysis, etc.
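For reference, the kind of pipeline meant here can be sketched with scikit-learn roughly as follows; the four training sentences are placeholders for one of the datasets above:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    # Placeholder training data: texts with 1 = positive, 0 = negative labels.
    texts = ["I loved this movie", "Terrible, a waste of time",
             "Great acting and a great story", "Not good at all"]
    labels = [1, 0, 1, 0]

    # TF-IDF over unigrams and bigrams (so "not good" is a feature) feeding a linear SVM.
    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
    model.fit(texts, labels)
    print(model.predict(["not a great film"]))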
I am a graduate student focusing on ML and NLP. I have a lot of data (8 million lines), and the text is usually badly written and contains many spelling mistakes.
So I must do some text cleaning and vectorizing. To do so, I considered two approaches:
First approach: clean the text by replacing misspelled words using the hunspell package (a spell checker and morphological analyzer), then tokenize, then convert sentences to vectors using TF-IDF.
The problem here is that Hunspell sometimes fails to provide the correct word and replaces the misspelled word with another word that doesn't have the same meaning. Furthermore, Hunspell does not recognize acronyms or abbreviations (which are very important in my case) and tends to replace them.
Second approach: tokenize, then use an embedding method (like word2vec) to convert words into vectors without cleaning the text.
I need to know if there is some (theoretical or empirical) way to compare these two approaches :)
Please do not hesitate to respond if you have any ideas to share; I'd love to discuss them with you.
Thank you in advance
I post this here just to summarise the comments in longer form and give you a bit more commentary. Not sure it will answer your question; if anything, it should show you why you should reconsider it.
Points about your question
Before I talk about your question, let me point out a few things about your approaches. Word embeddings are essentially mathematical representations of meaning based on word distribution. They are the epitome of the phrase "You shall know a word by the company it keeps". In this sense, you will need very regular misspellings in order to get something useful out of a vector-space approach. Something that could work out, for example, is US vs. UK spelling, or shorthands like w8 vs. full forms like wait.
Another point I want to make clear (or perhaps you should) is that you are not looking to build a machine learning model here. You could consider the word embeddings you would generate a sort of machine learning model, but they are not: they are just a way of representing words with numbers.
You already have the answer to your question
You yourself have pointed out that using hunspell introduces new mistakes. That will no doubt also be the case with your other approach. If this is just a preprocessing step, I suggest you leave it at that. It is not something you need to prove. If for some reason you do want to dig into the problem, you could evaluate the effects of your methods through an external task, as #lenz suggested.
How does external evaluation work?
When a task is too difficult to evaluate directly, we use another task that depends on its output to draw conclusions about its success. In your case, it seems you should pick a task that depends on individual words, like document classification. Let's say you have some sort of labels associated with your documents, say topics or types of news. Predicting these labels could be a legitimate way of evaluating the efficiency of your approaches. It is also a chance for you to see whether they do more harm than good by comparing against the baseline of "dirty" data. Remember that it's about relative differences; the absolute performance on the task is of no importance.
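A sketch of what that comparison could look like with scikit-learn; documents, labels, and clean_with_hunspell are hypothetical names standing in for your own data and cleaning function:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline

    def evaluate(preprocess, documents, labels):
        """Cross-validated document classification on top of one preprocessing variant."""
        cleaned = [preprocess(doc) for doc in documents]
        pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
        return cross_val_score(pipeline, cleaned, labels, cv=5).mean()

    # Compare the two variants on the same labelled data; only the relative
    # difference matters, not the absolute scores.
    # score_hunspell = evaluate(clean_with_hunspell, documents, labels)
    # score_raw      = evaluate(lambda doc: doc, documents, labels)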
I have some texts in different languages, potentially with some typos or other mistakes, and I want to retrieve their vocabulary. I'm not experienced with NLP in general, so I may use some words improperly.
By vocabulary I mean a collection of words of a single language in which every word is unique and inflections for gender, number, or tense are not considered (e.g. think, thinks and thought are all considered think).
This is the overall problem, so let's reduce it to retrieving the vocabulary of one language, English for example, without mistakes.
I think there are (at least) three different approaches and maybe the solution consists of a combination of them:
search a database of words stored in relation to each other. So I could search for thought (as a verb) and read the associated information that thought is an inflection of think
compute the "base form" (a word without inflections) of a word by processing the inflected form. Maybe it can be done with stemming?
use a service via some API. Yes, I accept this approach too, but I'd prefer to do it locally
For a first approximation, it's not necessary for the algorithm to distinguish between nouns and verbs. For instance, if the text contained the word thought both as a noun and as a verb, it could be considered already present in the vocabulary at the second match.
We have reduced the problem to retrieving the vocabulary of an English text without mistakes, and without considering the words' tags.
Any ideas about how to do that? Or just some tips?
Of course, if you have suggestions about this problem with the other constraints as well (mistakes and multiple languages, not only Indo-European ones), they would be much appreciated.
You need lemmatization - it's similar to your 2nd item, but not exactly (difference).
Try the nltk lemmatizer for Python, or Stanford NLP / Clear NLP for Java. Actually, nltk uses WordNet, so it is really a combination of the 1st and 2nd approaches.
To cope with mistakes, use spelling correction before lemmatization. Take a look at related questions or Google for appropriate libs.
About the part-of-speech tag - unfortunately, the nltk lemmatizer doesn't consider the POS tag (or context in general) on its own, so you should provide it with the tag, which can be found with nltk's POS tagging. Again, this is already discussed here (and in related/linked questions). I'm not sure about Stanford NLP here - I guess it should consider context - but NLTK, as noted, does not. As far as I can see from this code snippet, Stanford doesn't use POS tags, while Clear NLP does.
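Assuming the relevant NLTK data is downloaded, supplying the POS tag looks roughly like this; the Penn-to-WordNet tag mapping is a small helper you write yourself:

    from nltk import pos_tag, word_tokenize
    from nltk.corpus import wordnet
    from nltk.stem import WordNetLemmatizer

    def to_wordnet_pos(penn_tag):
        """Map Penn Treebank tags from nltk.pos_tag to WordNet POS constants."""
        if penn_tag.startswith("V"):
            return wordnet.VERB
        if penn_tag.startswith("J"):
            return wordnet.ADJ
        if penn_tag.startswith("R"):
            return wordnet.ADV
        return wordnet.NOUN

    lemmatizer = WordNetLemmatizer()
    tokens = word_tokenize("He thought about his thoughts")
    print([lemmatizer.lemmatize(word, to_wordnet_pos(tag))
           for word, tag in pos_tag(tokens)])
    # e.g. ['He', 'think', 'about', 'his', 'thought']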
About other languages - Google for lemmatization models; the algorithm for most languages (at least within the same family) is almost the same, and the differences are in the training data. Take a look here for an example for German; it is a wrapper around several lemmatizers, as far as I can see.
However, you can always use a stemmer at the cost of precision, and stemmers are more readily available for different languages.
Are there any resources on determining the countability of nouns? Either some way to work it out, or a dictionary that records whether a noun is likely to be countable or not?
I'm not interested in whether the noun can be countable, but rather in whether it is likely to be countable. For instance, rice can become rices, which means it can be countable, but in most cases it won't be.
This is a tough one. Many English words can be both (beer, time, glass, language, etc.) depending on the context/meaning.
Figuring out (un)countability from the word alone or from a regular dictionary is impossible or impractical.
You can try to figure it out from a large text corpus by seeing how the word is used:
if there's a plural form or not
if there's an indefinite article before it or none
if it's used with many/few, much/little, a piece of(?), etc
But many words can function as both nouns and adjectives, and that complicates matters. For example, in "an air pump", "air" functions as an adjective and "an" refers to "pump", not to "air".
Likewise, many words can function as both nouns and verbs and have identical forms. For example, in she pressures him, pressures isn't a plural of pressure.
Also, some uncountable nouns can have an indefinite article before them when they are made more specific, e.g. knowledge vs a good practical knowledge.
You can gather statistics from an analyzed corpus and, based on them, judge whether a word is more likely to be countable or uncountable.
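A rough sketch of that corpus-statistics idea over the Brown corpus via NLTK; the cues (a naive plural form and much/little determiners) are only illustrative:

    from nltk.corpus import brown  # requires: nltk.download("brown")

    def countability_cues(noun):
        """Tally crude corpus cues for one noun: plural uses vs. mass-style determiners."""
        plural_form = noun + "s"  # naive pluralisation, good enough for 'rice'
        plural_hits, mass_hits = 0, 0
        for sent in brown.tagged_sents():
            for i, (word, tag) in enumerate(sent):
                w = word.lower()
                if w == plural_form and tag.startswith("NNS"):
                    plural_hits += 1
                elif w == noun and i > 0 and sent[i - 1][0].lower() in ("much", "little"):
                    mass_hits += 1
        return plural_hits, mass_hits

    # Few plural hits relative to mass-determiner hits suggests a mass-noun reading.
    print(countability_cues("rice"))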
There are several existing English lexica that contain information about count/mass/etc. distinctions, none of which quite agree with each other because they focus on slightly different distinctions and it's a complicated task. Two are ComLex and CUVPlus (which I can't find a download link for at the moment, although you can find it mentioned in many places).
Check out the work by Timothy Baldwin and Francis Bond in 2003 on learning noun countability from corpora. If you have many occurrences of an unfamiliar noun in a corpus, you can do fairly well at figuring out whether it can possibly be a count noun, a mass noun, etc.; however, individual instances can still be quite difficult to classify. If you have the sentence "the wug was white" and, according to your lexicon, "wug" can be either count or mass, there isn't enough information in the immediate context to help you classify it.
I'm not sure if there is an 'official' dictionary saying if a noun is likely to be countable or not, but I can come up with two ways you could go about this:
Either assume that a noun is likely to be uncountable if somebody has put it in a "list of mass nouns" or "list of uncountable nouns" (you find quite a lot if you Google those phrases, for example this).
Or do a little corpus study and see how often the word is used in which way: searching for "rice" in the Corpus of Contemporary American English gives 22,265 hits, while the word "rices" is found only 69 times.
It depends on the context and on whether the noun can have a plural on its own. Different senses of the same word may differ, e.g.:
expectation: the feeling vs. what is being expected
salt: table salt vs. a type of chemical compound
Our API, GlobalNLP, returns the countability of nouns (among other things) in a particular context in this method: https://nlp.linguasys.com/docs/services/53fccbb15cfea30d9c48f8d6/operations/542a6da01c78d80a3cd6692a
I am trying to find words (specifically physical objects) related to a single word. For example:
Tennis: tennis racket, tennis ball, tennis shoe
Snooker: snooker cue, snooker ball, chalk
Chess: chessboard, chess piece
Bookcase: book
I have tried to use WordNet, specifically the meronym semantic relationship; however, this method is not consistent, as the results below show (a sketch of the lookup follows them):
Tennis: serve, volley, foot-fault, set point, return, advantage
Snooker: nothing
Chess: chess move, checkerboard (whose own meronym relationships shows ‘square’ & 'diagonal')
Bookcase: shelve
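For reference, a minimal reproduction of that lookup with NLTK's WordNet interface (naively taking the first noun sense, which is part of why the results are inconsistent):

    from nltk.corpus import wordnet as wn  # requires the WordNet data: nltk.download("wordnet")

    for name in ("tennis", "snooker", "chess", "bookcase"):
        synsets = wn.synsets(name, pos=wn.NOUN)
        if not synsets:
            print(name, [])
            continue
        sense = synsets[0]  # naive sense choice
        parts = sense.part_meronyms() + sense.member_meronyms() + sense.substance_meronyms()
        print(name, [m.lemma_names()[0] for m in parts])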
Weighting of terms will eventually be required, but that is not really a concern now.
Anyone have any suggestions on how to do this?
Just an update: Ended up using a mixture of both Jeff's and StompChicken's answers.
The quality of information retrieved from Wikipedia is excellent, specifically how (unsurprisingly) there is so much relevant information (in comparison to some corpora where terms such as 'blog' and 'ipod' do not exist).
The range of results from Wikipedia is the best part. The software is able to match terms such as (lists cut for brevity):
golf: [ball, iron, tee, bag, club]
photography: [camera, film, photograph, art, image]
fishing: [fish, net, hook, trap, bait, lure, rod]
The biggest problem is classifying certain words as physical artefacts; default WordNet is not a reliable resource as many terms (such as 'ipod', and even 'trampolining') do not exist in it.
I think what you are asking for is a source of semantic relationships between concepts. For that, I can think of a number of ways to go:
Semantic similarity algorithms. These algorithms usually perform a tree walk over the relationships in Wordnet to come up with a real-valued score of how related two terms are. These will be limited by how well WordNet models the concepts that you are interested in. WordNet::Similarity (written in Perl) is pretty good.
Try using OpenCyc as a knowledge base. OpenCyc is an open-source version of Cyc, a very large knowledge base of 'real-world' facts. It should have a much richer set of semantic relationships than WordNet does. However, I have never used OpenCyc, so I can't speak to how complete it is or how easy it is to use.
n-gram frequency analysis. As mentioned by Jeff Moser. A data-driven approach that can 'discover' relationships from large amounts of data, but can often produce noisy results.
Latent Semantic Analysis. A data-driven approach similar to n-gram frequency analysis that finds sets of semantically related words.
[...]
Judging by what you say you want to do, I think the last two options are more likely to be successful. If the relationships are not in Wordnet then semantic similarity won't work and OpenCyc doesn't seem to know much about snooker other than the fact that it exists.
I think a combination of both n-grams and LSA (or something like it) would be a good idea. N-gram frequencies will find concepts tightly bound to your target concept (e.g. tennis ball) and LSA would find related concepts mentioned in the same sentence/document (e.g. net, serve). Also, if you are only interested in nouns, filtering your output to contain only nouns or noun phrases (by using a part-of-speech tagger) might improve results.
In the first case, you are probably looking for n-grams where n = 2. You can get them from places like Google or create your own from all of Wikipedia.
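As a small sketch of the n = 2 idea, NLTK's collocation finder can rank bigrams containing a target word; the Brown corpus here is only a stand-in for a Wikipedia dump or the Google n-grams:

    from nltk.collocations import BigramAssocMeasures, BigramCollocationFinder
    from nltk.corpus import brown  # stand-in corpus; swap in your own text

    target = "tennis"
    finder = BigramCollocationFinder.from_words(w.lower() for w in brown.words())
    finder.apply_freq_filter(3)  # drop very rare pairs

    # Rank bigrams by pointwise mutual information and keep those containing the target.
    measures = BigramAssocMeasures()
    related = [(pair, score) for pair, score in finder.score_ngrams(measures.pmi)
               if target in pair]
    print(related[:10])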
For more information, check out this related Stack Overflow question.