I have two corpora. If one has a larger vocabulary size than the other, does that mean its language is more complex?
Apart from the complexity of the language, what else can affect the size of the vocabulary in a corpus?
No. Language consists of a lot more than just vocabulary. If the grammatical structures are convoluted, then even a smaller vocabulary can lead to very complex sentences.
In order to answer the second part properly, you'd first need to define what exactly you mean by 'complexity'. This is not a measure that can easily be quantified (unlike, e.g., sentence length).
Most reading comprehension measures combine the length of words and sentences, on the assumption that longer words and longer sentences are harder to understand; however, shorter words tend to have more different meanings, and are arguably harder to understand if their meaning is not clear from the context.
Update after clarification: The size of the vocabulary depends on various factors, such as:
active vocabulary of the author: if I write a text in my native language (where my vocabulary is large), the number of different words I use will be larger. If I write in a foreign language where I don't know that many words, it will of course be smaller.
the language itself: a bit of an anomaly, but English has a much larger vocabulary than some other languages, due to its history. There are many near-synonyms, so it's easier to use a wider range of words. Other languages are more limited.
topic: this is probably the biggest factor, as a very limited, technical topic will result in a more limited vocab. Wikipedia in general uses a broad range of words, but if you only take the articles on animals, the vocab will be more restricted.
style: similar to (1), I have an influence on the vocab size by how I write. By limiting my vocab, I can make a text more 'plain' (and leave more to the reader's imagination).
Apart from what Oliver has mentioned, from my professional experience the size of the vocabulary in a corpus often depends on the following:
How exactly do you tokenize and count vocabulary in your corpora?
For example, if you count each part of a compound as a separate token, you will get slightly different numbers than if you counted each compound noun as one token (see the sketch after this list).
Elaborating on the issue of "topic" mentioned by Oliver above: each particular topic has its own set of terminology (knitting vs. aerospace engineering), but the total term density will depend on the author's vocabulary.
Inclusion of loanwords
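To make the tokenization point concrete, here is a small sketch (my own illustration, not from the original answer) showing that the vocabulary count for the same text changes with the tokenization scheme; it assumes NLTK with the 'punkt' data package installed.

    # Sketch: the same text yields different vocabulary sizes under different
    # tokenization schemes (hyphenated compounds and punctuation are the usual culprits).
    from nltk.tokenize import word_tokenize  # needs the 'punkt' NLTK data package

    text = "The service-desk agent restarted the service desk server."

    vocab_whitespace = set(text.lower().split())            # keeps "server." and "service-desk"
    vocab_nltk = {w.lower() for w in word_tokenize(text)}    # splits off the final period

    print(len(vocab_whitespace), sorted(vocab_whitespace))
    print(len(vocab_nltk), sorted(vocab_nltk))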
As to your first question of language complexity: every language's complexity is relative to the issue at hand. If we are developing an English-Japanese translator, the Japanese language is VERY complex; if a Chinese person is learning Japanese, it is MODERATELY complex. If we are comparing inflectional morphology, Russian and German are more complex than English. Basically, there are many ways of looking at the issue of language complexity, depending on the participants' perspectives.
I have a database of several thousands of utterances. Each record (utterance) is a text representing a problem description, which a user has submitted to a service desk. Sometimes also the service desk agent's response is included. The language is highly technical, and it contains three types of tokens:
words and phrases in Language 1 (e.g. English)
words and phrases in Language 2 (e.g. French, Norwegian, or Italian)
machine-generated output (e.g. listing of files using unix command ls -la)
These languages are densely mixed: in one conversation, a sentence in Language 1 is often followed by a sentence in Language 2, so it is impossible to divide the data into two separate sets of utterances, one per language.
The task is to find similarities between the records (problem descriptions). The purpose of this exercise is to understand whether some bugs submitted by users are similar to each other.
Q: What is the standard way to proceed in such a situation?
In particular, the problem lies in the fact that the words come from two different corpora, and in addition some technical tokens (like filenames, OS paths, or application names) will not be found in either.
I don't think there's a "standard way" - just things you could try.
You could look into word embeddings that are aligned between languages, so that similar words across multiple languages have similar vectors. Then methods that build a summary vector for a text from its word vectors (like a simple average of all of a text's word vectors), or pairwise comparisons based on word vectors (like Word Mover's Distance), may still work with mixed-language texts, even mixes of languages within one text.
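As a rough sketch of the "average of aligned word vectors" idea (the embedding file names below are placeholders, and gensim is my own choice of library here, not something prescribed above):

    # Sketch: average cross-lingually aligned word vectors per text, then compare
    # texts with cosine similarity. The .vec file names are placeholders for
    # whatever aligned embeddings you actually have.
    import numpy as np
    from gensim.models import KeyedVectors

    en_vecs = KeyedVectors.load_word2vec_format("wiki.en.align.vec")  # placeholder path
    fr_vecs = KeyedVectors.load_word2vec_format("wiki.fr.align.vec")  # placeholder path

    def text_vector(tokens, *vector_sets):
        """Average the vectors of all tokens found in any of the aligned vector sets."""
        found = [vs[t] for t in tokens for vs in vector_sets if t in vs]
        return np.mean(found, axis=0) if found else None

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    v1 = text_vector("the printer ne fonctionne pas".split(), en_vecs, fr_vecs)
    v2 = text_vector("printer does not work".split(), en_vecs, fr_vecs)
    print(cosine(v1, v2))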
That a single text, presumably about a single (or closely related) set of issues, has mixed language may be a blessing rather than a curse: classifiers or embeddings trained on such texts might be able to learn the cross-language correlations of words with shared topics. You could also consider augmenting your texts with synthetic auto-translated text for any monolingual passages, to ensure downstream embeddings and comparisons get closer to your ideal of language-obliviousness.
Thank you for the suggestions. After several experiments I developed a method which is simple and works pretty well. Rather than using existing corpora, I built my own corpus from all the utterances in my multilingual database, without translating them. The database has 130,000 utterances, comprising 3.5 million words (in three languages: English, French and Norwegian) and 150,000 unique words. Phrase similarity based on the meaning space constructed this way works surprisingly well. I have tested the method in production and the results are good. I also see plenty of room for improvement and will continue to polish it. I wrote an article, An approach to categorize multi-lingual phrases, describing all the steps in more detail. Critique and suggestions for improvement are welcome.
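The linked article describes the actual steps; purely as an illustration of the general idea (an embedding space trained directly on the untranslated multilingual utterances, with utterances compared via averaged word vectors), a toy version might look like the following. The use of gensim's Word2Vec here is my assumption, not necessarily what the article uses.

    # Toy sketch: train an embedding space on your own multilingual utterances
    # (no translation) and compare utterances by the average of their word vectors.
    # Word2Vec is an assumed choice; see the linked article for the real method.
    import numpy as np
    from gensim.models import Word2Vec

    utterances = [
        "cannot login to vpn mot de passe refuse".split(),
        "vpn password rejected after reset".split(),
        "printer out of toner".split(),
    ]

    model = Word2Vec(sentences=utterances, vector_size=100, window=5, min_count=1, epochs=50)

    def utterance_vector(tokens):
        vecs = [model.wv[t] for t in tokens if t in model.wv]
        return np.mean(vecs, axis=0)

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    print(cosine(utterance_vector(utterances[0]), utterance_vector(utterances[1])))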
I have some texts in different languages, potentially with some typos or other mistakes, and I want to retrieve their vocabulary. I'm not experienced with NLP in general, so I may use some terms improperly.
By vocabulary I mean a collection of words of a single language in which every word is unique and inflections for gender, number, or tense are not considered (e.g. think, thinks and thought are all counted as think).
That is the full problem, so let's reduce it to retrieving the vocabulary of one language, English for example, and without mistakes.
I think there are (at least) three different approaches and maybe the solution consists of a combination of them:
search a database of words stored in relation to each other. So, I could look up thought (as a verb) and read the associated information that thought is an inflection of think
compute the "base form" (a word without inflections) of a word by processing the inflected form. Maybe it can be done with stemming?
use a service via some API. Yes, I'd accept this approach too, but I'd prefer to do it locally
For a first approximation, it's not necessary that the algorithm distinguishes between nouns and verbs. For instance, if the text contained the word thought both as a noun and as a verb, it could be considered already present in the vocabulary at the second match.
So we have reduced the problem to retrieving the vocabulary of an English text without mistakes, and without considering the part-of-speech tags of the words.
Any ideas about how to do that? Or just some tips?
Of course, if you have suggestions about this problem with the other constraints as well (mistakes and multiple languages, not only Indo-European ones), they would be much appreciated.
You need lemmatization - it's similar to your 2nd approach, but not exactly the same (difference).
Try the nltk lemmatizer for Python, or Stanford NLP / ClearNLP for Java. Actually, nltk uses WordNet, so it is really a combination of your 1st and 2nd approaches.
To cope with mistakes, use spelling correction before lemmatization. Take a look at related questions or Google for appropriate libraries.
About the part-of-speech tag: unfortunately, the nltk lemmatizer doesn't consider the POS tag (or context in general) on its own, so you should provide it with the tag, which you can obtain via nltk's POS tagging. Again, this is already discussed here (and in related/linked questions). I'm not sure about Stanford NLP here - I would guess it considers context, but as far as I can see from this code snippet, Stanford doesn't use POS tags, while ClearNLP does.
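For reference, a minimal sketch of POS-aware lemmatization with NLTK (it assumes the 'punkt', 'averaged_perceptron_tagger' and 'wordnet' data packages are installed):

    # Sketch: feed the WordNet lemmatizer the POS tag obtained from nltk's tagger.
    import nltk
    from nltk.corpus import wordnet
    from nltk.stem import WordNetLemmatizer

    lemmatizer = WordNetLemmatizer()

    def to_wordnet_pos(treebank_tag):
        """Map a Penn Treebank tag to the POS constants the WordNet lemmatizer expects."""
        if treebank_tag.startswith("J"):
            return wordnet.ADJ
        if treebank_tag.startswith("V"):
            return wordnet.VERB
        if treebank_tag.startswith("R"):
            return wordnet.ADV
        return wordnet.NOUN  # default

    tokens = nltk.word_tokenize("She thought the thoughts were worth thinking")
    for token, tag in nltk.pos_tag(tokens):
        print(token, lemmatizer.lemmatize(token.lower(), to_wordnet_pos(tag)))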
As for other languages: Google for lemmatization models, since the algorithm is almost the same for most languages (at least within the same family); the differences lie in the training data. Take a look here for an example for German; as far as I can see, it is a wrapper around several lemmatizers.
However, you can always use a stemmer at the cost of precision, and stemmers are more readily available for different languages.
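For example, NLTK's SnowballStemmer already ships with rules for a number of languages (a quick sketch; no particular output is claimed here):

    # Sketch: language-specific stemmers available out of the box in NLTK.
    from nltk.stem.snowball import SnowballStemmer

    print(SnowballStemmer.languages)      # the languages with bundled rules
    german = SnowballStemmer("german")
    print(german.stem("Kinder"), german.stem("gelaufen"))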
I would like to compare how complex (varied or predictable) my three corpora are. They are from different topics, so some vocabulary is different and some is the same. Looking at one of the data sets it's clear that the syntax is more difficult than in the other two, sentences are longer, etc. I built word N-gram language models using the SRILM toolkit (I'm new to language modelling) with the idea that I can then compare these models. One measure mentioned in relation to language models is perplexity. I'm confused about the following question: can I just use perplexities of the three LMs directly as a measure of how varied the corpora are? The vocabulary and the sizes of the corpora are different, so now I think that this won't be a good comparison. I also built LMs from POS-Tags but the quality of the POS-Tagging result is not good because the language is from fora, has spelling mistakes, ungrammatical sentences and so on. What measures could be used to compare the complexity of corpora from different domains? I'd appreciate your advice.
[I'm new to Stack Exchange. I posted this on Cross Validated, but I think this may be a more appropriate forum.]
"I also built LMs from POS-Tags but the quality of the POS-Tagging result is not good because the language is from fora, has spelling mistakes, ungrammatical sentences and so on."
Aside from it being noisy, like you pointed out, you should think carefully about whether particular linguistic features are useful in your analysis. Does one corpus having proportionally more nouns move you toward what you want to learn about the corpora? Maybe in something like authorship attribution, but I can't really think of anywhere else that's effective.
If data sparsity is an issue, LSI can help by collapsing related terms together. This could also help with the spelling issues, collapsing poorly spelt words with their correct counterparts if they appear in similar contexts.
"The vocabulary and the sizes of the corpora are different, so now I think that this won't be a good comparison."
It's not the end of the world. Having more data is always better, but you can work with what you have.
If you haven't chosen a language model yet, there's a few decisions you have to make:
Are you going to smooth the data? How?
Are you going to use an advanced technique to better exploit the data, such as Latent Semantic Indexing (LSI)?
You mention that you have a language model; I'm assuming your language model is a probability distribution such that P(N-gram|topic). If this is correct, you've already normalized the data, so the probability distributions should be readily comparable. Having more data would give you a more reliable result, but if your corpora are "big enough" to sample each topic reliably, you can move right on to the comparison.
As for comparison, try the KL-Divergence. KL-Divergence is "a measure of the information lost when Q is used to approximate P." Less loss means that the corpora are more similar. If you want a symmetric comparison, a "cheap" way to do it is to add D(P||Q) + D(Q||P). Note, though:
The KL divergence is only defined if Q(i)=0 ⇒ P(i)=0, for all i (absolute continuity).
So you'll have to smooth, in some manner.
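A small sketch of what that could look like for unigram distributions (my own illustration, not from the answer: add-one smoothing over a shared vocabulary, plus the symmetrised sum mentioned above):

    # Sketch: symmetrised KL divergence between add-one-smoothed unigram
    # distributions of two corpora over their shared vocabulary.
    from collections import Counter
    from scipy.stats import entropy   # entropy(p, q) computes D(P||Q)

    def smoothed_dist(counts, vocab, alpha=1.0):
        total = sum(counts.values()) + alpha * len(vocab)
        return [(counts.get(w, 0) + alpha) / total for w in vocab]

    corpus_a = "the cat sat on the mat".split()
    corpus_b = "the dog lay on the rug".split()

    vocab = sorted(set(corpus_a) | set(corpus_b))
    p = smoothed_dist(Counter(corpus_a), vocab)
    q = smoothed_dist(Counter(corpus_b), vocab)

    print(entropy(p, q) + entropy(q, p))   # D(P||Q) + D(Q||P)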
Are there any known ways (above and beyond statistical analysis, but not necessarily excluding it as part of the solution) to relate sentences or concepts to one another using Natural Language Processing? Thus far I've only worked with NLTK and Stanford NLP to aid in my project, but I am open to alternative open source solutions.
As an example take the following George Orwell essay (http://orwell.ru/library/essays/wiw/english/e_wiw). Suppose I gave the application the sentence
"What are George Orwell's opinions on writers."
or perhaps
"George Orwell believes writers enjoy writing to express their creativity, to make a point and for their egos."
Might yield lines from the essay like
"The aesthetic motive is very feeble in a lot of writers, but even a pamphleteer or writer of textbooks will have pet words and phrases which appeal to him for non-utilitarian reasons; or he may feel strongly about typography, width of margins, etc."
or
"Serious writers, I should say, are on the whole more vain and self-centered than journalists, though less interested in money."
I understand that this is not easy and I may not achieve much accuracy, but I was hoping for ideas on what already exists and what I could try to start off, or at least get the best results possible based on what is already known and out there.
The simplest way of doing this might be to use a distance function (such as cosine similarity) between your query sentence and the sentence pool. It's easy to implement: create a vocabulary from the text collection and represent each sentence as a vector over it. You can use TF-IDF weights as the values in the vector, calculate the cosine similarity between sentences, and return the highest-scoring sentence with respect to your query sentence.
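A minimal sketch of that approach with scikit-learn (the sentence pool below is just a placeholder for your extracted essay sentences):

    # Sketch: TF-IDF vectors over the sentence pool, cosine similarity to the query.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    sentences = [
        "Serious writers are on the whole more vain and self-centered than journalists.",
        "The aesthetic motive is very feeble in a lot of writers.",
        "All writers are vain, selfish, and lazy.",
    ]
    query = "What are George Orwell's opinions on writers?"

    vectorizer = TfidfVectorizer()
    sentence_vectors = vectorizer.fit_transform(sentences)   # one row per pool sentence
    query_vector = vectorizer.transform([query])

    scores = cosine_similarity(query_vector, sentence_vectors)[0]
    best = scores.argmax()
    print(sentences[best], scores[best])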
Or you can build an index from your corpus using, for example, Lucene, and let it do the work for you.
You may also consider using LSA (Latent Semantic Analysis) where you can get the similarity between sentences.
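Continuing the sketch above, the LSA variant just inserts a truncated SVD step before the comparison (TruncatedSVD is one common way to do LSA; the tiny rank is only for the toy example):

    # Sketch: reduce the TF-IDF matrix with truncated SVD (LSA) and compare in
    # the reduced space, reusing the variables from the previous snippet.
    from sklearn.decomposition import TruncatedSVD

    svd = TruncatedSVD(n_components=2)              # tiny rank for a tiny toy corpus
    lsa_sentences = svd.fit_transform(sentence_vectors)
    lsa_query = svd.transform(query_vector)
    print(cosine_similarity(lsa_query, lsa_sentences)[0])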
From what I understand from your question (and also your comment), you are more interested in understanding the meaning of individual sentences and then relating them to each other by proximity. A statistical approach, in my opinion, is more about "getting a feel" for a sentence than understanding it. I would suggest a deep parsing approach.
Deep parse the sentence, understand what roles the words play in it, identify the subject-verb-object structure (using left-to-right parsing and similar techniques), and then build a vocabulary that helps you categorise the nouns and verbs.
e.g.
"Serious writers, I should say, are on the whole more vain and self-centered than journalists, though less interested in money."
Parsing this sentence lets you see that its subject is "serious writers" (serious being an adjective modifying writers). Its verbs are "are" (current state) and "interested". Each verb then points to further vocabulary, including adjectives. If you arrange this vocabulary in the right way (and keep building it), I think you should get somewhere with your problem.
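If you want to experiment with this, a dependency parser gives you most of these roles for free. Here is a hedged sketch using spaCy (my own choice of tool, not something the answer prescribes; it assumes the en_core_web_sm model is installed):

    # Sketch: pull out the grammatical subject and its predicate with spaCy's
    # dependency parse.
    import spacy

    nlp = spacy.load("en_core_web_sm")
    doc = nlp("Serious writers, I should say, are on the whole more vain "
              "and self-centered than journalists, though less interested in money.")

    for token in doc:
        if token.dep_ in ("nsubj", "nsubjpass"):                 # grammatical subject
            print("subject phrase:", " ".join(t.text for t in token.subtree))
            print("head verb/copula:", token.head.lemma_)
            print("predicative adjectives:",
                  [c.text for c in token.head.children if c.dep_ == "acomp"])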
Natural Language Processing (NLP), especially for English, has evolved to the point where stemming would become an archaic technology if "perfect" lemmatizers existed, because stemmers change the surface form of a word/token into meaningless stems.
Then again, the definition of a "perfect" lemmatizer is questionable, because different NLP tasks require different levels of lemmatization, e.g. converting words between verb/noun/adjective forms.
Stemmers
[in]: having
[out]: hav
Lemmatizers
[in]: having
[out]: have
So the question is: are English stemmers of any use at all today, given that we have a plethora of lemmatization tools for English?
If not, then how should we move on to build robust lemmatizers that can take on the "nounify", "verbify", "adjectify" and "adverbify" preprocesses?
How could the lemmatization task be easily scaled to other languages that have morphological structures similar to English?
Q1: "[..] are English stemmers any useful at all today? Since we have a plethora of lemmatization tools for English"
Yes. Stemmers are much simpler, smaller, and usually faster than lemmatizers, and for many applications, their results are good enough. Using a lemmatizer for that is a waste of resources. Consider, for example, dimensionality reduction in Information Retrieval. You replace all drive/driving with driv in both the searched documents and the query. You do not care if it is drive or driv or x17a$ as long as it clusters inflectionally related words together.
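A toy inverted index makes the point: the index keys only need to be consistent, not meaningful (Porter's stemmer is just one arbitrary choice here, and the exact stem string it produces is irrelevant).

    # Sketch: an inverted index keyed on stems conflates inflectionally related
    # words; whether the key looks like "drive", "driv" or "x17a$" is irrelevant.
    from collections import defaultdict
    from nltk.stem import PorterStemmer

    stem = PorterStemmer().stem
    docs = {1: "driving to work", 2: "she drives a truck", 3: "a pleasant drive"}

    index = defaultdict(set)
    for doc_id, text in docs.items():
        for word in text.split():
            index[stem(word)].add(doc_id)

    print(index[stem("drive")])   # the query stem matches the drive/drives/driving documents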
Q2: "[..]how should we move on to build robust lemmatizers that can take on nounify, verbify, adjectify, and adverbify preprocesses?
What is your definition of a lemma, does it include derivation (drive - driver) or only inflection (drive - drives - drove)? Does it take into account semantics?
If you want to include derivation (which most people would say includes verbing nouns etc.), then keep in mind that derivation is far more irregular than inflection. There are many idiosyncrasies, gaps, etc. Do you really want change (as in change trains) and change (as in coins) to have the same lemma? If not, where do you draw the boundary? How about nerve - unnerve, earth - unearth - earthling, ...? It really depends on the application.
If you take semantics into account (bank would be labeled as bank-money or bank-river depending on context), how deep do you go (do you distinguish bank-institution from bank-building)? Some applications may not care about this at all, some might want to distinguish basic senses, and some might want it fine-grained.
Q3: "How could the lemmatization task be easily scaled to other languages that have similar morphological structures as English?"
What do you mean by "similar morphological structures as English"? English has very little inflectional morphology. There are good lemmatizers for languages of other morphological types (truly inflectional, agglutinative, template, ...).
With the possible exception of agglutinative languages, I would argue that a lookup table (say, a compressed trie) is the best solution, possibly with some backup rules for unknown words such as proper names. The lookup is followed by some kind of disambiguation, ranging from trivial (take the first candidate, or take the first one consistent with the word's POS tag) to much more sophisticated. The more sophisticated disambiguations are usually supervised stochastic algorithms (e.g. TreeTagger or Faster), although combinations of machine learning and manually created rules have been used too (see e.g. this).
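For illustration only, a toy version of the lookup-table-plus-trivial-disambiguation idea might look as follows (the table entries and the backup rule are made up):

    # Toy sketch: full-form lookup table, "take the first candidate" (optionally
    # filtered by POS tag) disambiguation, and a crude rule-based fallback for
    # unknown words. The entries below are illustrative only.
    FULL_FORM_TABLE = {
        "drove":  [("drive", "VBD")],
        "better": [("good", "JJR"), ("well", "RBR")],
        "mice":   [("mouse", "NNS")],
    }

    def lemmatize(word, pos_tag=None):
        candidates = FULL_FORM_TABLE.get(word.lower())
        if candidates:
            if pos_tag:                            # prefer a candidate with a matching tag
                for lemma, tag in candidates:
                    if tag == pos_tag:
                        return lemma
            return candidates[0][0]                # trivial disambiguation: take the first one
        if word.lower().endswith("s"):             # crude backup rule for unknown words
            return word.lower()[:-1]
        return word.lower()

    print(lemmatize("drove"), lemmatize("better", "RBR"), lemmatize("servers"))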
Obviously, for most languages you do not want to create the lookup table by hand, but instead generate it from a description of the morphology of that language. For inflectional languages, you can go the engineering way of Hajic for Czech or Mikheev for Russian, or, if you are daring, you can use two-level morphology. Or you can do something in between, such as Hana (myself). (Note that these are all full morphological analyzers that include lemmatization as one of their features.) Or you can learn the lemmatizer in an unsupervised manner a la Yarowsky and Wicentowski, possibly with manual post-processing, correcting the most frequent words.
There are way too many options and it really all depends on what you want to do with the results.
One classical application of either stemming or lemmatization is the improvement of search engine results: By applying stemming (or lemmatization) to the query as well as (prior to indexing) to all tokens indexed, users searching for, say, "having" are able to find results containing "has".
(Arguably, verbs are somewhat uncommon in most search queries, but the same principle applies to nouns, especially in languages with a rich noun morphology.)
For the purpose of search result improvement, it is not actually important whether the stem (or lemma) is meaningful ("have") or not ("hav"). It only needs to be able to represent the word in question and all its inflectional forms. In fact, some systems use numbers or other kinds of ID strings instead of either the stem or the lemma (or base form, or whatever it may be called).
Hence, this is an example of an application where stemmers (by your definition) are as good as lemmatizers.
However, I am not quite convinced that your (implied) definitions of "stemmer" and "lemmatizer" are generally accepted. I am not sure there is any generally accepted definition of these terms, but the way I define them is as follows:
Stemmer: A function that reduces inflectional forms to stems or base forms, using rules and lists of known suffixes.
Lemmatizer: A function that performs the same reduction, but using a comprehensive full-form dictionary to be able to deal with irregular forms.
Based on these definitions, a lemmatizer is essentially a higher-quality (and more expensive) version of a stemmer.
The answer is highly dependent on the task or specific field of study within the Natural Language Processing (NLP) that we are talking about.
It is worth pointing out that for some specific tasks, such as sentiment analysis (a popular sub-field of NLP), it has been shown that using a stemmer or lemmatizer as a preprocessing feature when training a machine learning model does not have a noticeable effect on the model's accuracy, no matter how good the tool is. Even though it may improve performance a little, there are more important features, such as dependency parses, with considerably more potential to be exploited in such systems.
It is also important to mention that the characteristics of the language we are working on should be taken into consideration.
Stemming just removes or truncates the last few characters of a word, often leading to incorrect meanings and spellings. Lemmatization considers the context and converts the word to its meaningful base form, which is called a lemma. Sometimes the same word can have multiple different lemmas, so we should identify the part-of-speech (POS) tag for the word in that specific context. Here are some examples to illustrate the differences and use cases, followed by a short snippet you can run yourself:
If you lemmatize the word 'Caring', it would return 'Care'. If you stem it, it may return 'Car', which is erroneous.
If you lemmatize the word 'Stripes' in verb context, it would return 'Strip'. If you lemmatize it in noun context, it would return 'Stripe'. If you just stem it, it would just return 'Strip'.
You would get the same results whether you lemmatize or stem words such as walking, running or swimming: they reduce to walk, run and swim.
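To see these differences for yourself (a small sketch; the exact stems depend on which stemmer you choose, and the lemmatizer needs the NLTK 'wordnet' data package):

    # Sketch: compare two stemmers with POS-aware lemmatization on the examples above.
    from nltk.stem import LancasterStemmer, PorterStemmer, WordNetLemmatizer

    porter, lancaster, lemmatizer = PorterStemmer(), LancasterStemmer(), WordNetLemmatizer()

    for word in ("caring", "stripes", "walking", "running", "swimming"):
        print(word,
              porter.stem(word),
              lancaster.stem(word),
              lemmatizer.lemmatize(word, pos="v"),   # verb reading
              lemmatizer.lemmatize(word, pos="n"))   # noun reading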
Lemmatization is computationally expensive since it involves look-up tables and the like. If you have a large dataset and performance is an issue, go with stemming; remember that you can also add your own rules to a stemmer. If accuracy is paramount and the dataset isn't huge, go with lemmatization.