What is the NLTK equivalent of the UIMA CAS (Common Analysis Structure)? - nlp

In UIMA, the CAS (Common Analysis Structure) plays a major role in structuring an NLP application. It allows the metadata that one component adds to be passed on to the next component. For example, sentence boundaries from a sentence tokenizer can be added to the CAS and used by the subsequent word tokenizer.
What is the equivalent data structure in NLTK?

In short, there is no equivalent concept to the CAS (Common Analysis System) in NLTK. The latter uses much simpler means of representing texts than does UIMA. In NLTK, texts are simply lists of words, whereas in UIMA you have very complex (and heavy-weight) data structures defined as part of the CAS for the purpose of describing the input data and its flow through a UIMA system.
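For illustration, here is a minimal sketch (not from the original answer) of what the sentence-splitter to word-tokenizer hand-off looks like in NLTK: plain Python strings and lists are passed between functions, with no shared CAS object. It assumes the punkt tokenizer models have been downloaded.

```python
# Sentence boundaries and word tokens in NLTK: just plain lists passed along,
# no shared annotation structure like the UIMA CAS.
import nltk
from nltk.tokenize import sent_tokenize, word_tokenize

# nltk.download('punkt')  # uncomment on first run to fetch the tokenizer models

text = "Bob's cat Charlie is chasing a mouse. The mouse escapes."

sentences = sent_tokenize(text)                 # sentence boundaries as a list of strings
tokens = [word_tokenize(s) for s in sentences]  # each sentence becomes a list of word tokens

print(sentences)
print(tokens)
```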
That being said, I view the two of them as serving quite different purposes anyway. If I were to name a Java equivalent of NLTK, I would choose the OpenNLP toolkit rather than UIMA. The former offers a number of machine-learning-based NLP algorithms (as does NLTK, among other things), while the latter is a component-based framework not just for NLP but for unstructured data in general. That is, it defines a general model for building applications that work with unstructured data.

Related

NLP of Legal Texts?

I have a corpus of a few hundred thousand legal documents (mostly from the European Union) – laws, commentary, court documents, etc. I am trying to make some algorithmic sense of them.
I have modeled the known relationships (temporal, this-changes-that, etc.). But at the single-document level, I wish I had better tools for fast comprehension. I am open to ideas, but here is a more specific question:
For example: are there NLP methods to identify the relevant/controversial parts of documents as opposed to boilerplate? The recently leaked TTIP papers are thousands of pages with data tables, but one sentence somewhere in there may destroy an industry.
I have played around with Google's new Parsey McParseface and other NLP solutions in the past, and while they work impressively well, I am not sure how good they are at isolating meaning.
In order to make sense of documents you need to perform some sort of semantic analysis. You have two main possibilities, each with example tools:
Use Frame Semantics:
http://www.cs.cmu.edu/~ark/SEMAFOR/
Use Semantic Role Labeling (SRL):
http://cogcomp.org/page/demo_view/srl
Once you are able to extract information from the documents, you can apply some post-processing to determine which information is relevant. Deciding which information is relevant is task-specific, and I don't think you will find a generic tool that extracts "the relevant" information.
I see you have an interesting use case. You've also mentioned the presence of a corpus (which is a really good plus). Let me relate a solution I once sketched for extracting the crux of research papers.
To make sense of documents, you need "triggers": cues that you tell (or train) the computer to look for. At the most basic level, you can approach this as a supervised learning problem, with a simple text-classification implementation. But this needs prior work, and initially some help from domain experts to discern the triggers in the textual data. There are tools to extract the gist of sentences; for example, take the noun phrases in a sentence, assign weights based on co-occurrences, and represent them as vectors. This becomes your training data.
This can be a really good start to incorporating NLP into your domain.
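As an illustration of the supervised approach described above, here is a minimal, hypothetical sketch: the labelled sentences are invented purely for illustration and would in practice come from domain experts, and the classifier choice is just one reasonable option.

```python
# Toy "relevant vs. boilerplate" sentence classifier: expert-labelled sentences
# are vectorised (unigrams + bigrams) and fed to a plain logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

sentences = [
    "This Regulation shall enter into force on the twentieth day following publication.",
    "Import tariffs on category X goods are abolished with immediate effect.",
    "Done at Brussels, in duplicate, in the English language.",
    "Each party may terminate data-transfer agreements without judicial review.",
]
labels = [0, 1, 0, 1]  # 0 = boilerplate, 1 = relevant (labels made up for illustration)

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(sentences, labels)

print(clf.predict(["The committee shall adopt implementing acts on tariff categories."]))
```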
Don't use triggers. What you need is word sense disambiguation and domain adaptation: you want to make sense of what is in the documents, i.e. understand the semantics, to figure out the meaning. You could build a legal ontology of terms in SKOS or JSON-LD format, represent it in a knowledge graph, and use it together with dependency parsing such as TensorFlow/Parsey McParseface. Or you could stream your documents through a Kappa-style architecture, something like Kafka-Flink-Elasticsearch with intermediate NLP layers based on CoreNLP/TensorFlow/UIMA, caching the indexing step between Flink and Elasticsearch with Redis to speed things up. To handle relevancy, you can apply boosting to specific cases in your search. Furthermore, apply sentiment analysis to work out intent and truthfulness.

Your use case is one of information extraction, summarization, and semantic web/linked data. As the EU has a different legal system, you would need to generalize first on what really constitutes a legal document, then narrow it down to specific legal concepts as they relate to a topic or region. You could also use topic modelling techniques such as LDA or Word2Vec/Sense2Vec here. Lemon might also help with converting between the lexical and semantic levels, i.e. NLP -> ontology and ontology -> NLP. Essentially, feed the clustering into your named entity recognition classification. You can also use the clustering to help build out the ontology, or to see which word vectors occur in a document or set of documents using cosine similarity. But in order to do all that, it would be best to first visualize the word sparsity of your documents. Something like commonsense reasoning plus deep learning might help in your case as well.
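As a rough sketch of the topic-modelling idea mentioned above, here is a toy LDA example using scikit-learn; the document texts and the number of topics are placeholders, not values tuned for a legal corpus.

```python
# Toy LDA topic model: count-vectorise a handful of (fake) legal snippets and
# print the top terms for each latent topic.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "regulation tariff import duty customs border",
    "court judgment appeal ruling procedure hearing",
    "data protection transfer consent processor controller",
]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(X)

terms = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top_terms = [terms[j] for j in topic.argsort()[-5:]]
    print(f"topic {i}: {top_terms}")
```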

What is the most efficient way of storing language models in NLP applications?

How are language models (such as n-gram models) usually stored and updated? What kind of structure is most efficient for storing these models in databases?
The most common data structures in language models are tries and hash tables. You can take a look at Kenneth Heafield's paper on his language model toolkit, KenLM, for more detailed information about the data structures used by his software and related packages.
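To make the two structures concrete, here is a toy Python sketch (not how KenLM is actually implemented) that stores trigram counts both in a hash table keyed on the full n-gram and in a trie built from nested dictionaries.

```python
# Two toy storage schemes for n-gram counts:
#  - hash table: tuple of words -> count
#  - trie: word -> word -> ... -> {"count": n}
from collections import defaultdict

sentence = "the cat sat on the mat".split()
order = 3

hash_table = defaultdict(int)
trie = {}

for i in range(len(sentence) - order + 1):
    ngram = tuple(sentence[i:i + order])

    hash_table[ngram] += 1                    # one lookup on the whole n-gram

    node = trie                               # walk/extend the trie one word at a time
    for word in ngram:
        node = node.setdefault(word, {})
    node["count"] = node.get("count", 0) + 1

print(hash_table[("the", "cat", "sat")])      # lookup via hash table
print(trie["the"]["cat"]["sat"]["count"])     # lookup via trie
```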
For speech recognition and some other applications, it's common to represent n-gram models as finite state transducers. I don't know that FSTs are the most efficient storage structure, but there are very simple (and mathematically clean) ways of combining them with other portions of a speech recognition model.
See the OpenFST library and the OpenGRM tools (built on top of OpenFST) for language-model construction, pruning, evaluation, etc. Mohri et al., 2002 is a good introduction, along with the other papers linked from the OpenFST and OpenGRM sites.

How apache UIMA is different from Apache Opennlp

I have been doing some capability testing with Apache OpenNLP, which has capabilities for sentence detection, tokenization, and named entity recognition. When I started looking at the UIMA documentation, the UIMA home page mentions: "language identification" => "language specific segmentation" => "sentence boundary detection" => "entity detection (person/place names etc.)".
This suggests that I can use UIMA to do the same tasks as OpenNLP. What added features does each have? I am new to this area; please help me understand the uses and capabilities of both.
As I understand the question, you are asking for the differences between the feature sets of Apache UIMA and Apache OpenNLP. Their feature sets barely have anything in common as these two projects have very different aims.
Apache UIMA is an open source implementation of the UIMA specification. The latter defines a conceptual framework for augmenting unstructured information (such as natural language produced by humans) with structured metadata so that computers can work with it.
As an example of an application working with unstructured information, let us take an application that takes natural language text as input and marks all named entities in the given text, e.g.
Input text = "Bob's cat Charlie is chasing a mouse."
Result = "<NE>Bob</NE>'s cat <NE>Charlie</NE> is chasing a mouse."
To identify the named entities in this example (i.e. Bob and Charlie), several steps of natural language processing have to be performed. Without going into detail about what each of the steps does, a hypothetical system for named entity recognition might involve the following steps:
Data preparation
Sentence splitting
Tokenization
Token lemmatization
Part-of-speech tagging
Phrase detection
Classifying phrases as named entities or not
As you can see, such applications can be modelled very intuitively as sequences of components, and this is exactly what UIMA does: it models applications dealing with unstructured information as pipelines of components (called analytics in UIMA parlance). As you can imagine, many of the pipeline components listed above can be reused for other tasks, so UIMA's architecture design emphasizes reusability of components.
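To make the pipeline idea concrete, here is a toy Python sketch (not UIMA's actual API) in which each component reads the text plus the annotations accumulated so far and contributes its own, much like analytics writing into a shared CAS.

```python
# Toy component pipeline over a shared annotation store (a plain dict standing
# in for a CAS-like structure).  The "NE detector" is a deliberately naive
# placeholder heuristic, not a real recognizer.
import re

def sentence_splitter(text, annotations):
    annotations["sentences"] = re.split(r"(?<=[.!?])\s+", text)
    return annotations

def tokenizer(text, annotations):
    annotations["tokens"] = [s.split() for s in annotations["sentences"]]
    return annotations

def naive_ne_detector(text, annotations):
    # placeholder heuristic: capitalised, non-sentence-initial tokens
    annotations["entities"] = [
        tok for sent in annotations["tokens"] for tok in sent[1:] if tok[:1].isupper()
    ]
    return annotations

pipeline = [sentence_splitter, tokenizer, naive_ne_detector]

annotations = {}
text = "Bob's cat Charlie is chasing a mouse."
for component in pipeline:
    annotations = component(text, annotations)

print(annotations["entities"])
```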
To avoid confusion, the UIMA standard itself doesn't provide any specific components, but defines an infrastructure for UIM (Unstructured Information Management) applications, e.g. workflows, data types, inter-component communication, and so on.
Apache OpenNLP, on the other hand, does exactly that: it provides concrete implementations of NLP algorithms for very specific tasks (sentence splitting, POS-tagging, etc.). The source of your confusion might be that it is possible to write Apache UIMA components that wrap OpenNLP tools; the OpenNLP project actually provides such components.
Whether you want to use the UIMA framework for your UIM applications depends on the size of the project. If it is small, I would go without UIMA and just use OpenNLP directly, as UIMA is rather heavyweight and only adds complexity and (for small applications) unnecessary overhead. Also, due to its complexity, it takes a good amount of time to learn how to use it.
Summing up, Apache UIMA and Apache OpenNLP solve different problems, but since both deal with unstructured information, they can be combined profitably.

Grammatical framework GF and owl

I am interested in the field of Computational Linguistics and NLP. I have read a lot about Grammatical Framework (GF), which is divided into abstract syntax and concrete syntax, and I know a little bit about OWL, RDF, and WordNet. I am confused about the differences between the two technologies.
Can we use GF rather than OWL as syntax builders?
Can we eliminate Parser by using GF?
Does GF contain all terms, so we don't need to use WordNet?
One of the formal definitions of Grammatical Framework is:
Grammatical Framework (GF), grammaticalframework.org, is a multilingual grammar formalism based on the idea of a shared abstract syntax and mappings between the abstract syntax and concrete languages. GF has hundreds of users all over the world.
The way GF is connected to the Semantic Web is through lemon:
Lemon is a proposed model for modelling lexicons and machine-readable dictionaries, linked to the Semantic Web and the Linked Data cloud. It was designed to meet the following challenges:
RDF-native form to enable leverage of existing Semantic Web technologies (SPARQL, OWL, RIF etc.).
Linguistically sound structure based on LMF to enable conversion to existing offline formats.
Separation of the lexicon and ontology layers, to ensure compatibility with existing OWL models.
Linking to data categories, in order to allow for arbitrarily complex linguistic description.
So to answer your first question: GF and OWL complement each other. GF is essentially a set of grammatical rules that can be mapped between languages, but depending on the task at hand, you can use GF to develop powerful Semantic Web tools. For example, GF can be used to verbalise ontologies, as has been demonstrated in the lemon papers.
For the second question, yes. Since the intermediate level of GF is a set of logical rules, you don't need a separate parser anymore; the morphology and basic syntax mapping can be enough (again, it depends on your goal; as the definition says, GF covers basic syntax).
As for WordNet:
WordNet® is a large lexical database of English. Nouns, verbs, adjectives and adverbs are grouped into sets of cognitive synonyms (synsets), each expressing a distinct concept. Synsets are interlinked by means of conceptual-semantic and lexical relations.
WordNet can be perceived as an ontology, but it is not. It cannot even be called a linguistic ontology. Having hypernym and hyponym relations does not make a dataset into an ontology.
What lemon or OntoLex are trying to achieve is an ontology that can be used for linguistic purposes, such as annotation, corpus studies, or modelling dictionaries. However, the power of WordNet lies in its synsets (words from the same lexical category that are roughly synonymous are grouped into synsets), whereas the power of RDF/OWL lies in inference.
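As a quick way to inspect the synsets and hypernym relations discussed above, here is a small example using NLTK's WordNet interface; it assumes the WordNet data has been downloaded.

```python
# Browse a few WordNet synsets and their hypernyms via NLTK.
from nltk.corpus import wordnet as wn

# nltk.download('wordnet')  # uncomment on first run to fetch the WordNet data

for synset in wn.synsets("court")[:3]:
    print(synset.name(), "-", synset.definition())
    print("  hypernyms:", [h.name() for h in synset.hypernyms()])
```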
In the four years since this question was first asked, there have been some updates in GF. Most importantly, there is now a WordNet ported into GF, currently for 13 languages, with full inflection tables. You can find the repository at https://github.com/GrammaticalFramework/gf-wordnet#readme and a multilingual web interface at http://www.grammaticalframework.org/~krasimir/gf-wordnet.html, which can show, for example, full English and Finnish inflection tables.

Is NER necessary for Coreference resolution?

... or is gender information enough?
More specifically, I'm interested in knowing if I can reduce the number of models loaded by the Stanford Core NLP to extract coreferences. I am not interested in actual named entity recognition.
Thank you
According to the EMNLP paper that describes the coref system packaged with Stanford CoreNLP, named entity tags are only used in the following coref annotation passes: precise constructs, relaxed head matching, and pronoun matching (Raghunathan et al., 2010).
You can specify which passes to use with the dcoref.sievePasses configuration property. If you want coreference but not NER, you should be able to run the pipeline without NER and specify that the coref system should only use the annotation passes that don't require NER labels.
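As a rough sketch of what that configuration might look like, here is an example using the third-party pycorenlp wrapper against a locally running CoreNLP server. The exact sieve names (and whether dcoref will run without the ner annotator on your version) should be checked against the CoreNLP documentation; the values below are assumptions for illustration.

```python
# Request deterministic coreference without the NER annotator through a CoreNLP
# server (assumed to be running on localhost:9000), restricting the sieve passes.
from pycorenlp import StanfordCoreNLP

nlp = StanfordCoreNLP('http://localhost:9000')

text = "Bob's cat Charlie is chasing a mouse. He is fast."

output = nlp.annotate(text, properties={
    # note: no 'ner' in the annotator list (per the answer above)
    'annotators': 'tokenize,ssplit,pos,lemma,parse,dcoref',
    # sieve names are assumptions; verify against your CoreNLP version's docs
    'dcoref.sievePasses': 'MarkRole,DiscourseMatch,ExactStringMatch',
    'outputFormat': 'json',
})

for chain in output.get('corefs', {}).values():
    print([mention['text'] for mention in chain])
```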
However, the resulting coref annotations will take a hit on recall. So, you might want to run some experiments to determine whether the degraded quality of the annotations is a problem for whatever you are using them for downstream.
In general, yes. First, you need named entities because they serve as candidate antecedents, i.e. the targets to which the pronouns refer. Many (most?) systems perform entity recognition and type classification in one step. Second, the semantic category (e.g. person, organization, location) of the entities is important for constructing accurate coreference chains.
