How to choose suffixes for smoothing in part-of-speech tagging - nlp

I am building a part-of-speech tagger and I am handling unknown words with suffixes.
The main issue is how I should decide on the suffixes: should the suffix length be fixed in advance (like Weischedel's approach), or should I take the last few letters of each word (like Samuelsson's approach)?
Which approach would be better?

Quick googling suggests that the Weischedel approach is sufficient for English, which has only rudimentary morphological inflection. The Samuelsson approach seems to work better (which makes sense intuitively) when it comes to processing inflecting languages.
A Resource-light Approach to Morpho-syntactic Tagging (Google Books, p. 9) quote:
To handle unknown words Brants (2000) uses Samuelsson's (1993) suffix analysis, which seems to work best for inflected languages.
(This is not in a direct comparison to Weischedel's approach, though.)
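For a concrete picture of the Samuelsson-style suffix analysis (as used in Brants' TnT), here is a minimal Python sketch. It estimates P(tag | suffix) from rare training words and interpolates successively longer suffixes; the function names, the rare-word threshold and the smoothing weight theta are illustrative choices, not the exact values from TnT or either paper.

```python
# Sketch of suffix-based handling of unknown words (Samuelsson/Brants style).
# Tag distributions are collected from the suffixes of *rare* training words,
# since unknown words behave most like rare ones.
from collections import Counter, defaultdict

def build_suffix_model(tagged_words, max_suffix_len=5, rare_threshold=10):
    """tagged_words: list of (word, tag) pairs from the training corpus."""
    tagged_words = list(tagged_words)
    word_freq = Counter(word for word, _ in tagged_words)
    tag_freq = Counter()
    suffix_tag_freq = defaultdict(Counter)
    for word, tag in tagged_words:
        if word_freq[word] > rare_threshold:      # frequent words are skipped
            continue
        tag_freq[tag] += 1
        for i in range(1, min(max_suffix_len, len(word)) + 1):
            suffix_tag_freq[word[-i:]][tag] += 1  # count every suffix length
    return tag_freq, suffix_tag_freq

def tag_probs(word, tag_freq, suffix_tag_freq, max_suffix_len=5, theta=0.1):
    """P(tag | word) for an unknown word, refined from short to long suffixes."""
    total = sum(tag_freq.values())
    probs = {t: c / total for t, c in tag_freq.items()}      # back-off: P(tag)
    for i in range(1, min(max_suffix_len, len(word)) + 1):
        counts = suffix_tag_freq.get(word[-i:])
        if not counts:
            break                                 # no rare word had this suffix
        n = sum(counts.values())
        probs = {t: (counts.get(t, 0) / n + theta * p) / (1 + theta)
                 for t, p in probs.items()}
    return probs
```

A Weischedel-style alternative would essentially look up a pre-decided set of endings instead of interpolating over suffix lengths.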

Related

Anyone know of any real systems using Computational Semantics with Lambda Calculus?

I was wondering if Computational Semantics is actually used in any real-world system? (Simple examples here and here). I would like to see how an actual system works.
It seems like there are a bunch of issues with actually using Computational Semantics in any real world system:
It seems just labeling sentences with part-of-speech tags is error prone.
But you also need a reliable parse tree which is error prone and there can be many valid trees for one sentence.
Finding what pronouns are referring to what entities is error prone.
Word disambiguation is also another source of errors and multiple meanings could be valid in the same context.
Any context-free-grammar of English I can find seems to be incomplete.
Finally, after all these sources of error are dodged, we can convert the sentence to FOL with Computational Semantics!
Also, I can't seem to figure out how to deal with prepositions in Computational Semantics.
Is this really just an academic exercise or is Computational Semantics actually useful?
There are several better approaches to natural language than simple lambda calculus and context-free grammars, e.g. HPSG, Montague Grammar, TAG, ...
Word disambiguation can be handled by Markov chains, for example.
Siri, Google Now, Cortana and IBM Watson are some examples of real-world systems.
Google Translate is another application that uses Computational Semantics.
I believe (but don't quote me on this) that the technology spun out of the now-defunct Natural Language Theory and Technology group at the Palo Alto Research Center (PARC, formerly Xerox PARC) utilizes the lambda calculus to provide inferences about textual entailments. I only worked there for a summer (as a freshman, so I was wonderfully ignorant of most of the goings-on there).
Anyway, that technology was developed over 30 years, and then Powerset bought the rights to all of it for $15 million, attempting to disrupt smart search in general. Then Bing came along, swallowed it up, and went on to absorb the entire research group as well. The principal investigators now work solely as adjunct professors at Stanford. Sad.

Unicode characters

In my application I have Unicode strings and I need to tell which language each string is in.
I want to do this by narrowing down the list of possible languages by determining which range the characters of the string fall into.
I have the ranges from http://jrgraphix.net/research/unicode_blocks.php
and the possible languages from http://unicode-table.com/en/
The problem is that the algorithm has to detect all languages. Does someone know of a wider mapping of Unicode ranges to languages?
Thanks
Wojciech
This is not really possible, for a couple of reasons:
Many languages share the same writing system. Look at English and Dutch, for example. Both use the Basic Latin alphabet. By only looking at the range of code points, you simply cannot distinguish between them.
Some languages use more characters, but there is no guarantee that a specific piece of text contains them. German, for example, uses the Basic Latin alphabet plus "ä", "ö", "ü" and "ß". While these letters are not particularly rare, you can easily create whole sentences without them. So, a short text might not contain them. Thus, again, looking at code points alone is not enough.
Text is not always "pure". An English text may contain French letters because of a French loanword (e.g. "déjà vu"). Or it may contain foreign words, because the text is talking about foreign things (e.g. "Götterdämmerung is an opera by Richard Wagner...", or "The Great Wall of China (万里长城) is..."). Looking at code points alone would be misleading.
To sum up, no, you cannot reliably map code point ranges to languages.
What you could do: Count how often each character appears in the text and heuristically compare with statistics about known languages. Or analyse word structures, e.g. with Markov chains. Or search for the words in dictionaries (taking inflection, composition etc. into account). Or a combination of these.
But this is hard and a lot of work. You should rather use an existing solution, such as those recommended by deceze and Esailija.
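For illustration only, here is a minimal Python sketch of the character-frequency idea. The two tiny "profiles" and the similarity measure are made-up placeholders; a real system would learn full frequency tables from large corpora, or better, use one of the existing libraries mentioned above.

```python
# Toy language guesser based on character-frequency "fingerprints".
# The profiles below are tiny, hand-picked placeholders for illustration only.
from collections import Counter

# Hypothetical per-language profiles: relative frequencies of a few letters.
PROFILES = {
    "english": {"e": 0.127, "t": 0.091, "a": 0.082, "o": 0.075, "n": 0.067},
    "german":  {"e": 0.174, "n": 0.098, "i": 0.076, "s": 0.072, "r": 0.070},
}

def fingerprint(text):
    letters = [c.lower() for c in text if c.isalpha()]
    counts = Counter(letters)
    total = len(letters) or 1
    return {c: n / total for c, n in counts.items()}

def guess_language(text):
    fp = fingerprint(text)
    # Smaller total absolute difference to a profile = better match.
    scores = {
        lang: sum(abs(fp.get(ch, 0.0) - freq) for ch, freq in profile.items())
        for lang, profile in PROFILES.items()
    }
    return min(scores, key=scores.get)

print(guess_language("Der schnelle braune Fuchs springt über den faulen Hund."))
```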
I like the suggestion of using something like Google Translate, as they will be doing all the work for you.
You might be able to build a rule-based system that gets you part of the way there. Build heuristic rules for languages and see if that is sufficient. Certain Tibetan characters do indicate Tibetan, and there are unique characters in many languages that will be a giveaway. But as the other answer pointed out, a limited sample of text may not be that accurate, as you may not have a clear indicator.
Languages will however differ in the frequencies that each character appears, so you could have a basic fingerprint of each language you need to classify and make guesses based on letter frequency. This probably goes a bit further than a rule-based system. Probably a good tool to build this would be a text classification algorithm, which will do all the analysis for you. You would train an algorithm on different languages, instead of having to articulate the actual rules yourself.
A much more sophisticated version of this is presumably what Google does.
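As a rough sketch of that text-classification route, here is what a character n-gram classifier might look like with scikit-learn (assuming you can use it); the four training sentences are placeholders and far too small for a real model.

```python
# Sketch: train a character n-gram classifier to identify the language.
# The training data is a tiny placeholder; real models need thousands of
# sentences per language.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = [
    "the quick brown fox jumps over the lazy dog",
    "a journey of a thousand miles begins with a single step",
    "der schnelle braune fuchs springt über den faulen hund",
    "aller anfang ist schwer",
]
train_labels = ["en", "en", "de", "de"]

model = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(1, 3)),  # character 1-3 grams
    MultinomialNB(),
)
model.fit(train_texts, train_labels)

print(model.predict(["guten morgen, wie geht es dir"]))
```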

Elementary Sentence Construction

I am working on an NLP project and I am looking for a parser, written in C#, to construct simple sentences from complex ones, since sentences may have a complex grammatical structure with multiple embedded clauses.
Any help?
Text summarisation and sentence simplification are very much an open research area. Wikipedia has articles about both; you can start from there. Beware: this is a hard problem, and chances are the state-of-the-art systems are far worse than you might expect. There isn't an off-the-shelf piece of software that you can just grab to solve all your problems. You will have some success with the more basic sentences, but performance will degrade as your sentences get more complex. Have a look at the articles referenced on Wikipedia or google around to get an idea of what is possible. My impression is that most readily available software packages are for academic purposes and might take a bit of work to get running.
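To make the difficulty concrete, here is a deliberately naive Python sketch (the question asked for C#, but the point is language-independent) that just splits at a few clause markers. It produces fragments rather than grammatical simple sentences, which is exactly why parser-based approaches are needed.

```python
# Deliberately naive "simplifier": split a sentence at a few coordinating
# conjunctions and relative pronouns. Real simplification needs a full
# syntactic parse; this only shows why the naive approach breaks down.
import re

CLAUSE_MARKERS = r"\b(?:and|but|because|although|which|while)\b"

def naive_split(sentence):
    parts = re.split(CLAUSE_MARKERS, sentence)
    return [p.strip(" ,.;") for p in parts if p.strip(" ,.;")]

print(naive_split(
    "The committee, which met on Tuesday, rejected the proposal "
    "because the budget was incomplete."
))
# The middle fragment loses its subject ("met on Tuesday, rejected the
# proposal"); recovering it requires a parse tree, not string splitting.
```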

Text comparison algorithm

We have a requirement in our project to compare two texts (update1, update2) and come up with an algorithm to determine how many words and how many sentences have changed.
Are there any algorithms that I can use?
I am not even looking for code. If I know the algorithm, I can code it in Java.
Typically this is accomplished by finding the Longest Common Subsequence (commonly called the LCS problem). This is how tools like diff work. Of course, diff is a line-oriented tool, and it sounds like your needs are somewhat different. However, I'm assuming that you've already constructed some way to compare words and sentences.
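As a rough illustration of the LCS idea at the word level, here is a plain dynamic-programming version in Python (not one of the optimized algorithms mentioned in the other answers); words outside the common subsequence are reported as removed or added.

```python
# Word-level diff via the Longest Common Subsequence (plain dynamic programming).
def lcs_table(a, b):
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if a[i] == b[j]
                                else max(dp[i][j + 1], dp[i + 1][j]))
    return dp

def word_diff(old_text, new_text):
    a, b = old_text.split(), new_text.split()
    dp = lcs_table(a, b)
    i, j, removed, added = len(a), len(b), [], []
    while i > 0 and j > 0:                    # walk the table backwards
        if a[i - 1] == b[j - 1]:
            i, j = i - 1, j - 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            removed.append(a[i - 1]); i -= 1
        else:
            added.append(b[j - 1]); j -= 1
    removed.extend(reversed(a[:i]))
    added.extend(reversed(b[:j]))
    return list(reversed(removed)), list(reversed(added))

removed, added = word_diff("the quick brown fox", "the slow brown fox jumps")
print(removed, added)   # ['quick'] ['slow', 'jumps']
```

Counting changed sentences works the same way, just with sentences instead of words as the sequence elements.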
An O(NP) Sequence Comparison Algorithm is used by Subversion's diff engine.
For your information, there are implementations in various programming languages by myself on the following GitHub page:
https://github.com/cubicdaiya/onp
Some kind of diff variant might be helpful, e.g. wdiff
If you decide to devise your own algorithm, you're going to have to address the situation where a sentence has been inserted. For example for the following two documents:
The men are bad. I hate the men
and
The men are bad. John likes the men. I hate the men
Your tool should be able to look ahead to recognise that in the second, "I hate the men" has not been replaced by "John likes the men" but instead is untouched, and a new sentence has been inserted before it. That is, it should report the insertion of a sentence, not the changing of four words followed by a new sentence.
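As a quick check of that behaviour, comparing at the sentence level (rather than word by word) already gives the right answer for this example. A sketch using Python's standard difflib, with a deliberately crude sentence splitter:

```python
# Sentence-level comparison with Python's standard difflib. Treating whole
# sentences as the comparison units makes the new sentence show up as a
# single insertion rather than a run of word-level edits.
import difflib

def sentences(text):
    # Crude splitter, good enough for this example only.
    return [s.strip() for s in text.replace(". ", ".\n").splitlines() if s.strip()]

old = "The men are bad. I hate the men"
new = "The men are bad. John likes the men. I hate the men"

sm = difflib.SequenceMatcher(None, sentences(old), sentences(new))
for op, i1, i2, j1, j2 in sm.get_opcodes():
    print(op, sentences(old)[i1:i2], sentences(new)[j1:j2])
# Prints an 'equal' block, one 'insert' of ['John likes the men.'],
# then another 'equal' block.
```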
The specific algorithm used by diff and most other comparison utilities is Eugene Myers' An O(ND) Difference Algorithm and Its Variations. There's a Java implementation of it available in the java-diff-utils package.
Here are two papers that describe other text comparison algorithms that should generally output 'better' (e.g. smaller, more meaningful) differences:
Tichy, Walter F., "The String-to-String Correction Problem with Block Moves" (1983). Computer Science Technical Reports. Paper 378.
Paul Heckel, "A Technique for Isolating Differences Between Files", Communications of the ACM, April 1978, Volume 21, Number 4
The first paper cites the second and mentions this about its algorithm:
Heckel [3] pointed out similar problems with LCS techniques and proposed a linear-time algorithm to detect block moves. The algorithm performs adequately if there are few duplicate symbols in the strings. However, the algorithm gives poor results otherwise. For example, given the two strings aabb and bbaa, Heckel's algorithm fails to discover any common substring.
The first paper was mentioned in this answer and the second in this answer, both to the similar SO question:
Is there a diff-like algorithm that handles moving block of lines? - Stack Overflow
The difficulty comes when comparing large files efficiently and with good performance. I therefore implemented a variation of Myers' O(ND) diff algorithm, which performs quite well and accurately (and supports filtering based on regular expressions):
The algorithm can be tested out here: becke.ch compare tool web application
And there is a little more information on the home page: becke.ch compare tool
The most famous algorithm is the O(ND) Difference Algorithm, which is also used in the Notepad++ compare plugin (written in C++) and in GNU diff(1). You can find a C# implementation here:
http://www.mathertel.de/Diff/default.aspx

Libraries or tools for generating random but realistic text

I'm looking for tools for generating random but realistic text. I've implemented a Markov Chain text generator myself and while the results were promising, my attempts at improving them haven't yielded any great successes.
I'd be happy with tools that consume a corpus or that operate based on a context-sensitive or context-free grammar. I'd like the tool to be suitable for inclusion into another project.
Most of my recent work has been in Java so a tool in that language is preferred, but I'd be OK with C#, C, C++, or even JavaScript.
This is similar to this question, but larger in scope.
Extending your own Markov chain generator is probably your best bet, if you want "random" text. Generating something that has context is an open research problem.
Try (if you haven't):
Tokenising punctuation separately, or including punctuation in your chain if you're not already. This includes paragraph marks (see the sketch after this list).
If you're using a 2- or 3-history Markov chain, try resetting to a 1-history one when you encounter full stops or newlines.
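A minimal sketch of a word-level chain that keeps punctuation and paragraph breaks as tokens of their own; the order, the tokeniser and the corpus file name are all illustrative choices.

```python
# Minimal word-level Markov chain that treats punctuation marks and paragraph
# breaks as ordinary tokens, as suggested above.
import random
import re
from collections import defaultdict

def tokenize(text):
    # Words, single punctuation marks, and paragraph breaks as separate tokens.
    return re.findall(r"\n\n|\w+|[^\w\s]", text)

def build_chain(tokens, order=2):
    chain = defaultdict(list)
    for i in range(len(tokens) - order):
        key = tuple(tokens[i:i + order])
        chain[key].append(tokens[i + order])
    return chain

def generate(chain, length=100, order=2):
    state = random.choice(list(chain.keys()))
    out = list(state)
    for _ in range(length):
        choices = chain.get(tuple(out[-order:]))
        if not choices:
            break
        out.append(random.choice(choices))
    return " ".join(out)

corpus = open("corpus.txt", encoding="utf-8").read()   # any plain-text corpus
chain = build_chain(tokenize(corpus))
print(generate(chain))
```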
Alternatively, you could use WordNet in two passes with your corpus:
Analyse sentences to determine common sequences of word types, i.e. nouns, verbs, adjectives, and adverbs. WordNet includes these. Everything else (pronouns, conjunctions, whatever) is excluded, but you could essentially pass those straight through.
This would turn "The quick brown fox jumps over the lazy dog" into "The [adjective] [adjective] [noun] [verb(s)] over the [adjective] [noun]"
Reproduce sentences by randomly choosing a template sentence and replacing [adjective], [noun] and [verb] with actual adjectives, nouns and verbs.
There are quite a few problems with this approach too: for example, you need context from the surrounding words to know which homonym to choose. Looking up "quick" in WordNet yields the stuff about being fast, but also the bit of your fingernail.
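A rough sketch of the two-pass template idea, using NLTK's tagger for the analysis pass rather than WordNet lookups (so it sidesteps the homonym problem described above rather than solving it); it assumes the punkt and averaged_perceptron_tagger data packages are installed.

```python
# Two-pass template generation: (1) turn corpus sentences into part-of-speech
# templates, collecting the words seen in each open-class slot; (2) regenerate
# sentences by refilling those slots at random.
import random
from collections import defaultdict
import nltk

OPEN_CLASS = {"NN", "NNS", "JJ", "RB", "VB", "VBD", "VBG", "VBN", "VBP", "VBZ"}

def analyse(sentences):
    templates, lexicon = [], defaultdict(set)
    for sent in sentences:
        tagged = nltk.pos_tag(nltk.word_tokenize(sent))
        template = []
        for word, tag in tagged:
            if tag in OPEN_CLASS:
                template.append(tag)        # open-class word becomes a slot
                lexicon[tag].add(word)
            else:
                template.append(word)       # closed-class words pass through
        templates.append(template)
    return templates, lexicon

def generate(templates, lexicon):
    words = []
    for slot in random.choice(templates):
        if slot in OPEN_CLASS and lexicon[slot]:
            words.append(random.choice(sorted(lexicon[slot])))
        else:
            words.append(slot)
    return " ".join(words)

sents = ["The quick brown fox jumps over the lazy dog.",
         "A hungry cat watches the small bird."]
templates, lexicon = analyse(sents)
print(generate(templates, lexicon))
```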
I know this doesn't solve your requirement for a library or a tool, but might give you some ideas.
I've used many data sets for this purpose, including Wikinews articles.
I've extracted text from them using this tool:
http://alas.matf.bg.ac.rs/~mr04069/WikiExtractor.py

Resources