There are the phonetic, syntactic, semantic, phonological, acoustic, linguistic, and language levels.
Are there any other levels?
What's the order from the bottom up?
And what are they really about?
Language admits a lot of diversity, but it also obeys a lot of rules (though often loose ones, with plenty of exceptions). So in a particular language, certain sounds are more likely to follow other sounds, certain words are more likely to follow others, and so on. The levels are basically levels of modeling.
The acoustic level tries to determine which acoustic signals are useful for distinguishing human speech. It tries to answer questions like, "Is this background noise or a speech sound?"
The phonological level is based on which sounds are most likely to combine when reconstructing an acoustic signal into a sequence of phonemes. I think this is essentially the same as the phonetic level.
The language level determines what kind of accent the speaker has, which dialect they use, and so on.
At the syntactic level, you're looking at which words are likely to appear together based on the syntax of the sentence. This rules out candidate words the phonological level would have suggested but that would produce ungrammatical sentences.
The linguistic level, as I understand it, is more a matter of picking the right word (for example, choosing between the homophones our and hour) based on the context.
At the semantic level, the system attempts to model the meaning of the sentence and rule out analyses that don't conform to the grammatical relations of verbs and prepositions. For example, the verb disappear takes no direct object, so if something fills that slot, there is probably an error.
The order really depends on the application: some levels may be collapsed into each other, and some may not be used at all. A conceptual hierarchy that makes sense to me is acoustic < phonological = phonetic < language < syntactic < linguistic < semantic.
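For concreteness, here is a minimal sketch of the kind of statistical modeling these levels rely on: using bigram counts from a corpus to pick between the homophones our and hour given the preceding word. The Brown corpus and the plain bigram approach are just illustrative assumptions, not the only (or best) way to model this.

```python
# A minimal sketch of the "linguistic" disambiguation step described above:
# choose between homophones by asking which word is more likely after the
# preceding context word. Corpus choice and the bigram model are assumptions.
import nltk
from nltk.corpus import brown

nltk.download("brown", quiet=True)

# Conditional frequency of word_i given word_{i-1}, estimated from Brown.
cfd = nltk.ConditionalFreqDist(nltk.bigrams(w.lower() for w in brown.words()))

def pick_homophone(prev_word, candidates):
    """Return the candidate seen most often after prev_word in the corpus."""
    return max(candidates, key=lambda w: cfd[prev_word][w])

print(pick_homophone("an", ["our", "hour"]))  # expected: "hour"
print(pick_homophone("in", ["our", "hour"]))  # expected: "our"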
What are the best lexicons for document-level and sentence-level analysis? I'm currently using VADER for sentence-level analysis, but I'm worried that when I move to the document level, VADER may not perform as well as others.
This is a similar question to the post here, but more specific.
In addition to the sentiment lexica listed in the linked post, I can recommend the AFINN sentiment lexicon.
For sentiment analysis, depending only on lexica may not be the best solution, especially at the document level. Language is so flexible that attributes and notions other than sentiment-laden vocabulary deeply affect semantics.
Some of the core notions are contrastive discourse markers (especially for the document level), negation, and modality.
contrastive discourse markers
There are opinions that have both pros and cons within documents, and we tie those together via markers like 'however', 'nevertheless', etc. to convey meaning or an idea. For a bag-of-words approach, the sentences below are treated the same, yet if people were to annotate their sentiment with one label, they might not annotate them with the same one:
The laptop has amazing features, but its screen is killing me.
The laptop's screen is killing me, but it has amazing features.
In general, we evaluate these kinds of sentences or paragraphs with the sentiment of the subclause after 'but'. Other contrastive discourse markers have their own semantics as well. This is studied in an area called discourse analysis.
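As a concrete illustration, here is a minimal sketch of the "score the subclause after 'but'" heuristic using VADER, the lexicon you already use. The split-on-'but' rule and scoring only the final clause are simplifying assumptions, not a general solution.

```python
# A minimal sketch: score only the clause after the last "but" (if any).
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

def clause_aware_score(sentence):
    """Score the clause after the last 'but' if present, else the whole sentence."""
    parts = sentence.lower().split(" but ")
    focus = parts[-1] if len(parts) > 1 else sentence
    return analyzer.polarity_scores(focus)["compound"]

s1 = "The laptop has amazing features, but its screen is killing me."
s2 = "The laptop's screen is killing me, but it has amazing features."
print(clause_aware_score(s1))  # negative: focuses on the screen complaint
print(clause_aware_score(s2))  # positive: focuses on the amazing features
```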
negation and modality
These notions change semantics as well, so they cannot be overlooked at either level. There are studies and papers that used negation and modality triggers together with sentiment lexica. You can google 'negation and modality in sentiment analysis' to see what you can do.
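To make the trigger idea concrete, here is a minimal sketch that combines a toy lexicon with negation and modality trigger lists. The trigger lists and the flip/dampen rules are illustrative assumptions, not a validated method.

```python
# A minimal sketch of lexicon scoring adjusted by negation and modality triggers.
NEGATORS = {"not", "no", "never", "n't"}
MODALS = {"might", "may", "could", "should", "would"}
LEXICON = {"amazing": 0.8, "killing": -0.6, "great": 0.7}  # toy lexicon

def adjusted_score(sentence):
    tokens = sentence.lower().split()
    score = sum(LEXICON.get(t, 0.0) for t in tokens)
    if NEGATORS & set(tokens):
        score = -score          # crude polarity flip under negation
    if MODALS & set(tokens):
        score *= 0.5            # dampen sentiment under uncertainty/modality
    return score

print(adjusted_score("the screen is not great"))        # flipped to negative
print(adjusted_score("the battery might be amazing"))   # dampened positive
```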
Finally, if you have a domain-specific dataset, you may build your own lexicon using distant supervision.
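As a rough sketch of distant supervision, assuming your documents come with noisy labels such as star ratings, you could score each word by the average rating of the documents it appears in. The toy data and the scoring rule below are assumptions for illustration only.

```python
# A minimal distant-supervision sketch: derive word polarities from star ratings.
from collections import defaultdict

reviews = [  # toy "distantly labelled" data: text plus a noisy star-rating label
    ("amazing features and great battery", 5),
    ("the screen is killing me", 1),
    ("great screen but poor battery", 3),
]

totals, counts = defaultdict(float), defaultdict(int)
for text, stars in reviews:
    for word in set(text.split()):
        totals[word] += stars
        counts[word] += 1

# A word's polarity = average rating of the reviews containing it, centred on 3.
lexicon = {w: totals[w] / counts[w] - 3.0 for w in totals}
print(sorted(lexicon.items(), key=lambda kv: kv[1]))
```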
Hope this helps,
Cheers
Functional decomposition is not used when applied to use case modeling.
This phrase is copied from a question paper. What is the meaning of that phrase?
This is not the same as the question "What is functional decomposition?"
Use cases deal with synthesizing functions, not decomposing them. A technician decomposes a system to find the pieces it is built of. A business analyst tries to stack the single parts of a system into piles so that they form around a single goal. A use case describes a unique single piece of added value a system gives to an actor. Indeed, functional decomposition is what most people try when describing use cases: finding bits and pieces. They all (like me) come from the programming guild, where this is the daily job. But use-case synthesis works in exactly the opposite direction!
I recommend reading Bittner/Spence who explain this very complex approach in detail and very nicely.
I have a corpus of a few hundred thousand legal documents (mostly from the European Union): laws, commentary, court documents, etc. I am trying to make some algorithmic sense of them.
I have modeled the known relationships (temporal, this-changes-that, etc.). But at the single-document level, I wish I had better tools for fast comprehension. I am open to ideas, but here's a more specific question:
For example: are there NLP methods to determine the relevant/controversial parts of documents as opposed to boilerplate? The recently leaked TTIP papers are thousands of pages with data tables, but one sentence somewhere in there may destroy an industry.
I played around with Google's new Parsey McParseface and other NLP solutions in the past, but while they work impressively well, I am not sure how good they are at isolating meaning.
In order to make sense of documents, you need to perform some sort of semantic analysis. You have two main possibilities, each with an example:
Use Frame Semantics:
http://www.cs.cmu.edu/~ark/SEMAFOR/
Use Semantic Role Labeling (SRL):
http://cogcomp.org/page/demo_view/srl
Once you are able to extract information from the documents, you may apply some post-processing to determine which information is relevant. Finding which information is relevant is task-dependent, and I don't think you can find a generic tool that extracts "the relevant" information.
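As a lightweight stand-in for the extraction step (shallower than SEMAFOR frames or full SRL), a dependency parse already surfaces who-did-what-to-whom triples that you can post-process for relevance. Here is a minimal sketch with spaCy; the model name and the dependency heuristics are assumptions for illustration.

```python
# A rough stand-in for semantic extraction: (subject, verb, object) triples
# from a dependency parse, to be filtered for relevance afterwards.
import spacy

nlp = spacy.load("en_core_web_sm")

def svo_triples(text):
    triples = []
    for sent in nlp(text).sents:
        for token in sent:
            if token.pos_ == "VERB":
                subjects = [c for c in token.children if c.dep_ in ("nsubj", "nsubjpass")]
                objects = [c for c in token.children if c.dep_ in ("dobj", "attr")]
                for s in subjects:
                    for o in objects:
                        triples.append((s.text, token.lemma_, o.text))
    return triples

print(svo_triples("The regulation prohibits member states from subsidising steel exports."))
```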
I see you have an interesting use case. You've also mentioned the presence of a corpus (which is a really good plus). Let me relate a solution I sketched for extracting the crux from research papers.
To make sense of documents, you need triggers to tell (or train) the computer what to look for. You can approach this with a supervised learning algorithm, implemented at the most basic level as a simple text classification problem. But this needs prior work, with initial help from domain experts to discern the "triggers" in the textual data. There are tools to extract the gist of sentences: for example, take the noun phrases in a sentence, assign weights based on co-occurrences, and represent them as vectors. This is your training data.
This can be a really good start to incorporating NLP into your domain.
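Here is a minimal sketch of that supervised classifier, assuming a small expert-labelled set of clauses marked relevant vs. boilerplate. The example data, labels, and model choice are illustrative assumptions, not a recommendation for your exact corpus.

```python
# A minimal text-classification sketch: relevant vs. boilerplate clauses.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

clauses = [
    "This Regulation shall enter into force on the twentieth day following publication.",
    "The producer shall be liable for all damages arising from non-compliance.",
    "Done at Brussels, for the Commission.",
    "Imports of the products listed in Annex I shall be prohibited.",
]
labels = ["boilerplate", "relevant", "boilerplate", "relevant"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(clauses, labels)

print(model.predict(["All existing tariffs on steel shall be suspended."]))
```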
Don't use triggers. What you need is word sense disambiguation and domain adaptation. You want to make sense of what is in the documents, i.e. understand the semantics to figure out the meaning.
You can build a legal ontology of terms in SKOS or JSON-LD format, represent it in a knowledge graph, and use it together with dependency parsing such as TensorFlow/Parsey McParseface. Or you can stream your documents in using a kappa-based architecture (something like Kafka-Flink-Elasticsearch) with intermediate NLP layers using CoreNLP/TensorFlow/UIMA, and cache your indexing setup between Flink and Elasticsearch using Redis to speed up the process. To understand relevancy, you can apply boosting for specific cases in your search. Furthermore, apply sentiment analysis to work out intent and truthfulness.
Your use case is one of information extraction, summarization, and semantic web/linked data. As the EU has a different legal system, you would need to generalize first on what really constitutes a legal document and then narrow it down to specific legal concepts as they relate to a topic or region. You could also use topic modelling techniques such as LDA or Word2Vec/Sense2Vec here. Lemon might also help with converting lexical representations to semantic ones and back, i.e. NLP -> ontology and ontology -> NLP.
Essentially, feed the clustering into your classification or named entity recognition. You can also use the clustering to help build out the ontology, or to see which word vectors appear in a document or set of documents using cosine similarity. But to do all that, it would be best to first visualize the word sparsity of your documents. Something like commonsense reasoning + deep learning might help in your case as well.
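As a minimal sketch of the topic-modelling suggestion, here is gensim LDA on a toy corpus, used to get a coarse view of what each document is about before narrowing down to specific legal concepts. The toy corpus and parameter choices are assumptions for illustration only.

```python
# A minimal LDA sketch: coarse topics over a toy pre-tokenised corpus.
from gensim import corpora, models

docs = [
    ["tariff", "import", "steel", "duty"],
    ["court", "ruling", "appeal", "judgment"],
    ["tariff", "export", "subsidy", "steel"],
    ["appeal", "court", "procedure", "judgment"],
]

dictionary = corpora.Dictionary(docs)
bow_corpus = [dictionary.doc2bow(doc) for doc in docs]

lda = models.LdaModel(bow_corpus, num_topics=2, id2word=dictionary,
                      passes=10, random_state=0)
for topic_id, words in lda.print_topics():
    print(topic_id, words)
```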
What is the relation between semantic interoperability and an upper ontology?
A couple of points:
You don't mention how it will automatically perform the validity checking.
I don't agree with this statement:
"This tells us that it is actually impossible to define most English words using English alone."
An ontology should do precisely that; the tricky bit is to work out which one to use (there is a list of 18 top-level upper ontologies at http://www.acutesoftware.com.au/aikif/ontology.html).
You need to clarify exactly what outputs this system is going to produce. I understand the idea of mapping the entities, regulations, etc., but what are you going to do with this and how will the information be used?
The Penn Treebank tagset has a separate tag TO for the word 'to', irrespective of whether it's used in the preposition sense (such as I went to school) or the infinitive sense (such as I want to eat). What purpose does this serve from an overall NLP perspective? Just tagging the infinitival 'to' separately makes intuitive sense, but I don't see the logic behind combining an infinitive and a preposition in a single tag.
Thanks, and apologies if this doesn't fit the Stack Overflow guidelines.
Different corpora provide different levels of granularity. Compare this, for instance, to the British National Corpus, which includes three different tags for to.
I believe this may have come about as a property of the corpus tagging practice rather than from a specific NLP performance purpose. It's not that unlikely that it was a design decision in the POS guidelines for the Penn Treebank Project. (I am contacting the authors of this paper for further clarification.)
In order for the POS tagset not to have a single dedicated tag for the word "to", it would sometimes need to tag "to" as a preposition and sometimes with a different tag for the infinitive marker. For this to happen, a human tagger would have had to disambiguate between both roles of "to". Some tricky cases (which require grammaticality judgments) may need extra human time to disambiguate, which may also lead to some mistagging given the size of the corpus. This tradeoff may have erred on the side of efficiency and correctness if the information gain (from having "to" disambiguated) was estimated to be small, or if the potential tagging errors were estimated to be too many.
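As a small check of this behaviour, the default NLTK tagger (trained with Penn Treebank tags) assigns TO to "to" in both its prepositional and infinitival uses, so downstream code cannot distinguish the two from the tag alone. The exact resource name below may differ slightly across NLTK versions.

```python
# Demonstrate that the Penn Treebank-style tagger gives "to" the TO tag in
# both "went to school" (prepositional) and "want to eat" (infinitival).
import nltk

nltk.download("averaged_perceptron_tagger", quiet=True)  # name may vary by NLTK version

for sentence in ["I went to school", "I want to eat"]:
    print(nltk.pos_tag(sentence.split()))
# Expected: ('to', 'TO') appears in both outputs.
```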