Concept extraction using WordNet - nlp

I wish to know how I can use WordNet to extract concepts from a text document. Earlier I used a bag-of-words approach to measure similarity between text documents, but now I want to use the semantic information of the text, and therefore I want to extract concepts from the document. I understand WordNet offers synsets that contain the synonyms of a given word.
However, what I am trying to work out is how I can use this information to define a concept in the textual data. I wonder whether I need to define the list of concepts separately and manually before using the synsets, and then compare those concepts with the synsets.
Any suggestion or link is appreciated.

I think you'll find that there are too many concepts out there for enumerating all of them yourself to be practical. Instead, you should consider using a pre-existing source of knowledge such as Wikidata, Wikipedia, Freebase, the content of Tweets, the web at large, or some other source as the basis for constructing your concepts. You may find clustering algorithms useful for defining these. In terms of synonyms... words related to a concept may not necessarily be synonymous (e.g. both love and hate may be connected to the same concept regarding an intensity of emotion towards someone else) and some words could belong to multiple concepts (e.g. wedding could be in both the love and in the marriage concept), so I'd suggest having some linkage from synset to concept that isn't strictly 1:1.
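As a concrete illustration of a synset-to-concept linkage that isn't strictly 1:1, here is a minimal sketch using NLTK's WordNet interface; the concept labels and the mapping itself are invented for the example, and in practice they would come from your knowledge source or a clustering step:

```python
# pip install nltk ; then: python -c "import nltk; nltk.download('wordnet')"
from collections import defaultdict
from nltk.corpus import wordnet as wn

# Hypothetical many-to-many mapping from WordNet synset names to concept labels.
SYNSET_TO_CONCEPTS = {
    "love.n.01": {"emotion_intensity", "romance"},
    "hate.n.01": {"emotion_intensity"},
    "wedding.n.01": {"romance", "marriage"},
    "marriage.n.01": {"marriage"},
}

def concepts_for(word):
    """Collect every concept linked to any synset of the given word."""
    found = defaultdict(set)
    for syn in wn.synsets(word):
        for concept in SYNSET_TO_CONCEPTS.get(syn.name(), ()):
            found[concept].add(syn.name())
    return dict(found)

print(concepts_for("wedding"))  # e.g. {'romance': {'wedding.n.01'}, 'marriage': {...}}
```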

Related

Best lexicons for sentence vs document level analysis

What are the best lexicons for document-level and sentence-level analysis? I'm currently using VADER for sentence-level analysis, but I'm worried that when I move to the document level, VADER may not perform as well as others.
Similar question to the post here, but more specific.
In addition to the sentiment lexica listed in the linked post, I can recommend the AFINN sentiment lexicon.
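For instance, a minimal sketch of lexicon-based scoring with AFINN via the afinn Python package (assuming that package fits your setup; the example sentences are invented):

```python
# pip install afinn
from afinn import Afinn

afinn = Afinn()  # loads the default English AFINN word list

# Positive totals lean positive, negative totals lean negative.
print(afinn.score("This is utterly excellent!"))    # e.g.  3.0
print(afinn.score("This was a horrible mistake."))  # e.g. -3.0
```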
For sentiment analysis, depending only on lexica may not be the best solution, especially at the document level. Language is so flexible that attributes and notions other than sentiment-laden vocabulary affect semantics deeply.
Some of the core notions are contrastive discourse markers (especially at the document level), negation, and modality.
contrastive discourse markers
Within documents there are opinions that have both pros and cons, and we tie those together via markers like 'however', 'nevertheless', etc. to convey meaning or an idea. For a bag-of-words approach, the sentences below are treated the same, yet if people were asked to annotate their sentiment with one label, they might not annotate them with the same one:
The laptop has amazing features, but its screen is killing me.
The laptop's screen is killing me, but it has amazing features.
In general, we evaluate these kinds of sentences or paragraphs with the sentiment of the subclause after 'but'. Other contrastive discourse markers have their own semantics as well. This is studied in an area called discourse analysis.
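As a quick check of how a sentence-level tool handles this, here is a minimal sketch with NLTK's VADER wrapper (assuming NLTK and its vader_lexicon resource are installed); whether the two laptop sentences get different compound scores is exactly what you would be inspecting:

```python
# pip install nltk ; then: python -c "import nltk; nltk.download('vader_lexicon')"
from nltk.sentiment.vader import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

sentences = [
    "The laptop has amazing features, but its screen is killing me.",
    "The laptop's screen is killing me, but it has amazing features.",
]

for s in sentences:
    # polarity_scores returns neg/neu/pos plus a normalized 'compound' score in [-1, 1].
    print(analyzer.polarity_scores(s), s)
```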
negation and modality
These notions change semantics as well, so they cannot be overlooked at either level. There are studies and papers that use negation and modality triggers together with sentiment lexica. You can google 'negation and modality in sentiment analysis' to see what you can do.
Finally, what I can suggest is that if you have a domain-specific dataset, you may build your own lexicon using distant supervision.
Hope this helps,
Cheers

NLP of Legal Texts?

I have a corpus of a few hundred thousand legal documents (mostly from the European Union) – laws, commentary, court documents, etc. I am trying to algorithmically make some sense of them.
I have modeled the known relationships (temporal, this-changes-that, etc.). But at the single-document level, I wish I had better tools to allow fast comprehension. I am open to ideas, but here's a more specific question:
For example: are there NLP methods to determine the relevant/controversial parts of documents as opposed to boilerplate? The recently leaked TTIP papers are thousands of pages with data tables, but one sentence somewhere in there may destroy an industry.
I have played around with Google's new Parsey McParseface and other NLP solutions in the past, but while they work impressively well, I am not sure how good they are at isolating meaning.
In order to make sense out of documents you need to perform some sort of semantic analysis. You have two main possibilities, each with an example:
Use Frame Semantics:
http://www.cs.cmu.edu/~ark/SEMAFOR/
Use Semantic Role Labeling (SRL):
http://cogcomp.org/page/demo_view/srl
Once you are able to extract information from the documents then you may apply some post-processing to determine which information is relevant. Finding which information is relevant is task related and I don't think you can find a generic tool that extracts "the relevant" information.
I see you have an interesting use case. You've also mentioned the presence of a corpus (which is a really good plus). Let me relate a solution that I had sketched for extracting the crux from research papers.
To make sense out of documents, you need triggers to tell (or train) the computer what to look for. At the most basic level you can approach this as a supervised learning problem with a simple text classification setup. But this needs prior work, and initially help from domain experts, to discern the "triggers" in the textual data. There are tools to extract the gist of sentences - for example, take the noun phrases in a sentence, assign weights based on co-occurrences, and represent them as vectors. This is your training data; a rough sketch is shown below.
This can be a really good start to incorporating NLP into your domain.
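A minimal sketch of that noun-phrase-to-vector step, using spaCy and scikit-learn as my choice of tools (the example sentences are invented, and proper co-occurrence weighting is left as a follow-up):

```python
# pip install spacy scikit-learn ; python -m spacy download en_core_web_sm
import spacy
from sklearn.feature_extraction.text import CountVectorizer

nlp = spacy.load("en_core_web_sm")

docs = [
    "The Commission shall adopt implementing acts on tariff quotas.",
    "Member States shall notify the Commission of any tariff measures.",
]

# Represent each document by its noun phrases (a crude stand-in for "triggers").
noun_phrase_docs = [
    " ".join(chunk.text.lower() for chunk in nlp(d).noun_chunks)
    for d in docs
]

# Bag-of-words counts over the concatenated noun phrases; these count vectors
# are the kind of training data the answer describes.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(noun_phrase_docs)
print(vectorizer.get_feature_names_out())
print(X.toarray())
```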
Don't use triggers. What you need is word sense disambiguation and domain adaptation. You want to make sense of what is in the documents, i.e. understand the semantics to figure out the meaning. You can build a legal ontology of terms in SKOS or JSON-LD format, represent it ontologically in a knowledge graph, and use it with dependency parsing like TensorFlow/Parsey McParseface. Or you can stream your documents in using a kappa-based architecture - something like Kafka-Flink-Elasticsearch with added intermediate NLP layers using CoreNLP/TensorFlow/UIMA, caching your indexing setup between Flink and Elasticsearch using Redis to speed up the process. To understand relevancy you can apply specific cases of boosting in your search. Furthermore, apply sentiment analysis to work out intent and truthfulness.
Your use case is one of information extraction, summarization, and semantic web/linked data. As the EU has a different legal system, you would need to generalize first on what really is a legal document, then narrow it down to specific legal concepts as they relate to a topic or region. You could also use topic modelling techniques here, such as LDA or Word2Vec/Sense2Vec. Lemon might also help with converting lexical to semantic and semantic to lexical representations, i.e. NLP -> ontology and ontology -> NLP. Essentially, feed the clustering into your classification for named entity recognition. You can also use the clustering to assist you in building out the ontology, or to see which word vectors are in a document or set of documents using cosine similarity. But in order to do all that, it would be best to visualize the word sparsity of your documents first. Something like commonsense reasoning + deep learning might help in your case as well.
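To give a flavor of the topic-modelling route this answer mentions, here is a minimal LDA sketch with gensim (my choice of library, not necessarily the answerer's); the toy token lists stand in for your tokenized legal documents:

```python
# pip install gensim
from gensim import corpora, models

# Toy stand-ins for tokenized legal documents.
texts = [
    ["tariff", "quota", "import", "duty", "regulation"],
    ["court", "judgment", "appeal", "ruling", "regulation"],
    ["tariff", "duty", "import", "trade", "agreement"],
]

dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(text) for text in texts]

# Two topics is arbitrary for this toy corpus; tune num_topics on real data.
lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, random_state=0, passes=10)
for topic_id, topic in lda.print_topics():
    print(topic_id, topic)
```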

How can I determine the exact meaning of a word in a sentence? [duplicate]

I am using WordNet for finding synonyms of ontology concepts. How can I choose the appropriate sense for my ontology concept? E.g. there is an ontology concept "conference"; it has the following synsets in WordNet:
The noun conference has 3 senses (first 3 from tagged texts)
(12) conference -- (a prearranged meeting for consultation or exchange of information or discussion (especially one with a formal agenda))
(2) league, conference -- (an association of sports teams that organizes matches for its members)
(2) conference, group discussion -- (a discussion among participants who have an agreed (serious) topic)
Now the 1st and 3rd synsets have the appropriate sense for my ontology concept. How can I choose only these two from WordNet?
The technology you're looking for is in the direction of semantic disambiguation / representation.
The most "traditional approach" is Word Sense Disambiguation (WSD), take a look at
https://en.wikipedia.org/wiki/Word-sense_disambiguation
https://stackoverflow.com/questions/tagged/word-sense-disambiguation
Anyone know of some good Word Sense Disambiguation software?
Then comes the next generation of Word Sense induction / Topic modelling / Knowledge representation:
https://en.wikipedia.org/wiki/Word-sense_induction
https://en.wikipedia.org/wiki/Topic_model
https://en.wikipedia.org/wiki/Knowledge_representation_and_reasoning
Then comes the most recent hype:
Word embeddings, vector space models, neural nets
Sometimes people skip the semantic representation and go directly to text similarity, comparing pairs of sentences for their differences/similarities before getting to the ultimate aim of the text processing.
Take a look at Normalize ranking score with weights for a list of STS-related work.
In the other direction, there's
ontology creation (Cyc, Yago, Freebase, etc.)
semantic web (https://en.wikipedia.org/wiki/Semantic_Web)
semantic lexical resources (WordNet, Open Multilingual WordNet, etc.)
Knowledge base population (http://www.nist.gov/tac/2014/KBP/)
There's also a recent task on ontology induction / expansion:
http://alt.qcri.org/semeval2015/task17/
http://alt.qcri.org/semeval2016/task13/
http://alt.qcri.org/semeval2016/task14/
Depending on the ultimate task, one of the above technologies may help.
You can also try Babelfy, which provides Word Sense Disambiguation and Named Entity Disambiguation.
Demo:
http://babelfy.org/
API:
http://babelfy.org/guide
Take a look at this list: 100 Best GitHub: Word-sense Disambiguation
and search by WordNet - there are several appropriate libraries.
I didn't use any of them, but this one seems promising, because it is based on a classic yet effective idea (namely, the Lesk algorithm) upgraded with modern word-embedding methods. Actually, before finding it, I was going to suggest trying almost the same ideas.
Note also that all these methods try to find the meaning (a WordNet synset, in your case) that is most similar to the context of the current word/collocation, so it is crucial to have the context of the words you're trying to disambiguate. For example, the words usually come from some text, and most libraries rely on that.
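As a small illustration of the classic Lesk idea with NLTK (a minimal sketch; the context sentence is invented for the example):

```python
# pip install nltk ; then: python -c "import nltk; nltk.download('wordnet'); nltk.download('punkt')"
from nltk import word_tokenize
from nltk.wsd import lesk

context = "The researchers presented their paper at the annual conference on computational linguistics."

# Lesk picks the synset whose gloss overlaps most with the context words.
sense = lesk(word_tokenize(context), "conference", pos="n")
print(sense, "-", sense.definition())
```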


Performing semantic analysis in text

I want to perform semantic analysis on some text, similar to YAGO [1]. But I have no structure in the text to identify entities and relationships. One way is to use POS tagging and then identify subjects and predicates in the sentences [2]. But I still cannot establish what relationships exist between them.
How should I go about this?
For example:
Albert Einstein was born in 1879.
Should result in:
AlbertEinstein BORNIN 1879
subject relation predicate
My aim is to look for better approaches to find subjects, predicates and relationships in raw text.
What you are trying to do is essentially Natural Language Understanding, a subfield of Natural Language Processing, which again is a subfield of Computational Linguistics ~ often thought of as the engineering arm.
You could do semantic parsing or relation extraction. Either is fine for this task. I decided to read through Suchanek et al. (2007) and you will realise that it is ontology-based: the relations are extracted into a predefined ontological template whose axioms are predefined (e.g. BORNIN). I personally think this is far too restrictive for general intelligence, but it works great for weak-AI problems [narrow domains]. Much more interesting work has been happening over the years, such as ontology-driven information extraction, where the algorithms are trained on the ontology rather than having a corpus annotated by an ontology. One PhD study that comes to mind is the McDowell thesis, and the Yildiz & Miksch (2007) paper.
Regardless, and without going off topic, there is a really interesting open-source, GUI-driven Python project called iepy, currently being developed by a firm called Machinalis and built on Django. It allows for rule-based and machine-learning-based information extraction. I highly recommend you check it out - tried and tested by myself. Also, I'm not affiliated with this company.
https://github.com/machinalis/iepy
According to the documentation:
IEPY is an open source tool for Information Extraction focused on Relation Extraction.
To give an example of Relation Extraction, if we are trying to find a birth date in: "John von Neumann (December 28, 1903 – February 8, 1957) was a Hungarian and American pure and applied mathematician, physicist, inventor and polymath." then IEPY's task is to identify "John von Neumann" and "December 28, 1903" as the subject and object entities of the "was born in" relation.
It's aimed at: users needing to perform Information Extraction on a large dataset, and scientists wanting to experiment with new IE algorithms.
The task you are attempting to solve is called relation extraction, while semantic analysis has a much broader meaning (honestly, I can't say for sure what it means now).
Relation extraction is an open research problem, so I suggest reviewing the field - for example, start from chapter 2.3 of the Mining Text Data book or the A Review of Relation Extraction paper (which is a little older - 2007). Then continue by following citing or cited-by links; finally, try to implement the approach that looks most promising: for example, if you know that your data is rather formal (all sentences are short and share a similarly strict structure), then try something like pattern-based approaches, and so on. A toy pattern-based sketch is shown below.
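For concreteness, here is a toy pattern-based sketch (the regex and the relation name are invented for the example and only cover the simplest "X was born in YEAR" phrasing):

```python
import re

# Very naive pattern: a capitalized name followed by "was born in" and a 4-digit year.
BORN_IN = re.compile(r"([A-Z][a-z]+(?:\s[A-Z][a-z]+)*)\s+was\s+born\s+in\s+(\d{4})")

text = "Albert Einstein was born in 1879. Marie Curie was born in 1867."

for subject, year in BORN_IN.findall(text):
    print(f"BORNIN({subject.replace(' ', '')}, {year})")
# BORNIN(AlbertEinstein, 1879)
# BORNIN(MarieCurie, 1867)
```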
Stanford parser can do it :) You need to look at the dependency parser though. Have a look at the bottom of this page: http://nlp.stanford.edu/software/lex-parser.shtml:
subject: nsubj(snapped, rain),
or direct object: dobj(shut, hub)
...
Or have a look at this page (Stanford Dependencies): http://nlp.stanford.edu/software/stanford-dependencies.shtml
And to understand the annotations have a look at this: http://nlp.stanford.edu/software/dependencies_manual.pdf
And for your particular example, use the Stanford "collapsed" dependency representation, which for a given sentence will produce predicates like born_in(Einstein, 1879) - very similar to what you want.
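As a rough illustration of pulling such triples out of a dependency parse (using spaCy here rather than the Stanford parser, simply because it is easy to run; the subject/prep/object heuristic is a simplification):

```python
# pip install spacy ; python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Albert Einstein was born in 1879.")

for token in doc:
    if token.dep_ in ("nsubj", "nsubjpass"):  # subject of the main verb
        verb = token.head
        subject = " ".join(t.text for t in token.subtree)
        # Look for a prepositional object attached to the verb, e.g. "in 1879".
        for prep in verb.children:
            if prep.dep_ == "prep":
                for obj in prep.children:
                    if obj.dep_ == "pobj":
                        print(f"{subject} | {verb.lemma_}_{prep.text} | {obj.text}")
# prints something close to: Albert Einstein | bear_in | 1879
```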
