Can someone tell me if the library pysimilar compares lexical or semantic similarity of input text?
I want to find the lexical similarity between two sentences. Any better suggestion or library is greatly appreciated.
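For reference, a purely lexical comparison can be done with the standard library alone; the sketch below uses difflib only to illustrate what lexical (surface-form) similarity means, and is not a claim about how pysimilar works internally:

```python
# Lexical similarity: compares surface strings only, with no notion of meaning.
from difflib import SequenceMatcher

def lexical_similarity(a: str, b: str) -> float:
    # Ratio of matching character subsequences, in [0.0, 1.0].
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

print(lexical_similarity("I have to go to school today",
                         "I must go to school today"))
```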
I'm making a program that provides English sentences for the user to study further.
For example:
First, I provide a sentence such as "I have to go to school today" to the user.
Then, if the user wants to learn more sentences like that, I find sentences that have high grammatical similarity to that sentence.
I think the only way to provide such sentences is to calculate similarity.
Is there a way to calculate grammatical similarity between two sentences?
Or is there a better way to design this algorithm?
Any advice or suggestions would be appreciated. Thank you.
My approach to this problem would be to do part-of-speech tagging using a tool like NLTK and compare the resulting tag or tree structure of your phrase with those in your database.
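A minimal sketch of the POS-tagging idea with NLTK (the tagger models must be downloaded first; comparing the tag sequences with a simple ratio is just one possible similarity measure, not the only one):

```python
import nltk
from difflib import SequenceMatcher

# One-time downloads for the tokenizer and tagger models.
nltk.download("punkt")
nltk.download("averaged_perceptron_tagger")

def pos_tags(sentence: str) -> list:
    # Returns the sequence of POS tags, e.g. ['PRP', 'VBP', 'TO', 'VB', ...]
    return [tag for _, tag in nltk.pos_tag(nltk.word_tokenize(sentence))]

def grammar_similarity(s1: str, s2: str) -> float:
    # Compare the two tag sequences; 1.0 means identical surface structure.
    return SequenceMatcher(None, pos_tags(s1), pos_tags(s2)).ratio()

print(grammar_similarity("I have to go to school today",
                         "She wants to visit her friend tomorrow"))
```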
Alternatively, if you already have a training dataset, you could use WEKA to apply a machine learning approach to relate the phrases.
You can parse your sentence as either a constituent or dependency tree and use these representations to formulate some form of query that you can use to find candidate sentences with similar structures.
You can check out this tool from Stanford NLP:
Tregex is a utility for matching patterns in trees, based on tree relationships and regular expression matches on nodes (the name is short for "tree regular expressions"). Tregex comes with Tsurgeon, a tree transformation language. Also included from version 2.0 on is a similar package which operates on dependency graphs (class SemanticGraph), called Semgrex.
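As one concrete illustration of the dependency-tree route (spaCy is not mentioned above and is only an assumption here; Tregex/Semgrex would be the Stanford-side equivalent), you could reduce each sentence to a set of dependency triples and compare those sets:

```python
# Assumes: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

def dep_triples(sentence: str) -> set:
    # Each token yields (dependency label, head POS, token POS),
    # a crude structural fingerprint of the sentence.
    doc = nlp(sentence)
    return {(tok.dep_, tok.head.pos_, tok.pos_) for tok in doc}

def structural_similarity(s1: str, s2: str) -> float:
    a, b = dep_triples(s1), dep_triples(s2)
    return len(a & b) / len(a | b) if (a | b) else 0.0  # Jaccard overlap

print(structural_similarity("I have to go to school today",
                            "She needs to walk to work now"))
```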
I want to use a CRF for sentence-level sentiment classification (positive or negative). But I am lost on how to create even a very simple feature for this using either CRFsuite or CRF++. I've been trying for a few days; can anyone suggest how to design a simple feature I can use as a starting point to understand how to use the tools?
Thanks.
You could start by providing gazetteers containing words separated by sentiment (e.g. positive adjectives, negative nouns, etc.) and then use the CRF to label relevant portions of the sentences. Using gazetteers you can also provide lists of other words which won't be labeled themselves, but could help identify sentiment terms. You could also use WordNet instead of gazetteers. Your gazetteer features could be binary, i.e. gazetteer matched or not matched. Check out http://crfpp.googlecode.com for more examples and references.
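A minimal sketch of a per-token feature function along these lines, in the dict-feature style used by python-crfsuite / sklearn-crfsuite (the gazetteer contents and feature names here are made up for illustration; CRF++ would instead take a column-formatted training file plus a template):

```python
# Hypothetical gazetteers; in practice these would be much larger lists
# (or lookups into WordNet or another external lexicon).
POSITIVE = {"good", "great", "excellent", "love"}
NEGATIVE = {"bad", "terrible", "awful", "hate"}

def token_features(tokens, i):
    word = tokens[i].lower()
    return {
        "word": word,
        "in_positive_gazetteer": word in POSITIVE,   # binary gazetteer feature
        "in_negative_gazetteer": word in NEGATIVE,
        "prev_word": tokens[i - 1].lower() if i > 0 else "BOS",
        "next_word": tokens[i + 1].lower() if i < len(tokens) - 1 else "EOS",
    }

def sentence_features(tokens):
    return [token_features(tokens, i) for i in range(len(tokens))]

# Each token then needs a label (e.g. POS/NEG/O) so the sequences can be fed to
# sklearn_crfsuite.CRF().fit(X, y) or a pycrfsuite.Trainer.
print(sentence_features("The movie was great but the ending was awful".split()))
```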
I hope this helps!
I am looking to measure the level of confusion in series of short texts. I was wondering if anyone knows if there exists a "Confusion" dictionary similar to the sentiment dictionaries that are easily available. Basically, I'm looking for a collection of words or phrases that signal confusion. Any help would be greatly appreciated. Thanks.
I am trying to come up with a way to convert unguided speech to text, and I am trying to use Sphinx for this. What I mean by unguided speech to text is that the speaker is not bound to speak from a definite set of sentences; rather, he might speak any sentence. So it's not possible for me to have a grammar file where each word is one of the alternatives pre-written in the grammar file. I understand that I would have to train Sphinx somehow to do this.
But I am a beginner with Sphinx. How do I start training Sphinx to convert unguided speech? Is it possible to achieve unguided conversion with Sphinx?
The task you are attempting is, as of right now, not possible to complete with satisfying accuracy.
As for a Sphinx-based solution: you will have to create a dictionary with all the words to be recognized. There is no other way.
Once you have the dictionary, you can generate a simple n-gram model based on it, with only unigrams: each unigram is one word. The probability of each may be the same, or you may attempt some statistical analysis of the words that will be used.
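A minimal sketch of what generating such a uniform unigram model might look like, written directly in the ARPA format that Sphinx accepts (the word list here is a placeholder; in practice you would normally build the model with the CMU tooling or a toolkit like SRILM rather than writing the file by hand):

```python
import math

def write_uniform_unigram_arpa(words, path):
    # Every word gets the same probability; ARPA files store log10 probabilities.
    logprob = math.log10(1.0 / len(words))
    with open(path, "w") as f:
        f.write("\\data\\\n")
        f.write(f"ngram 1={len(words)}\n\n")
        f.write("\\1-grams:\n")
        for w in words:
            f.write(f"{logprob:.4f}\t{w}\n")
        f.write("\n\\end\\\n")

# Placeholder vocabulary; the real one would come from your pronunciation dictionary.
write_uniform_unigram_arpa(["<s>", "</s>", "hello", "world", "sphinx"], "uniform.lm")
```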
Let's say I have a text in English with a word missing somewhere in it.
I have a list of candidate words from a dictionary with no other information. These candidate words are selected by some other, rather inaccurate, algorithm. I would like to use WordNet and the context around the missing word to assign probabilities to candidate words.
There is an obvious ad hoc approach that came to mind: extract "interesting" words surrounding the missing word, calculate their semantic similarity with every candidate word according to some metric, and assign probabilities to the candidate words based on the average score.
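A minimal sketch of that idea using NLTK's WordNet interface (the choice of path similarity and of simple averaging are assumptions; other metrics such as Wu-Palmer or Lin similarity, or a proper probability model over candidates, may work better):

```python
from nltk.corpus import wordnet as wn
# nltk.download("wordnet") is required once.

def word_similarity(w1, w2):
    # Maximum path similarity over all synset pairs; 0.0 if nothing is comparable.
    scores = [s1.path_similarity(s2) or 0.0
              for s1 in wn.synsets(w1) for s2 in wn.synsets(w2)]
    return max(scores, default=0.0)

def score_candidates(context_words, candidates):
    # Average similarity of each candidate to the "interesting" context words,
    # normalised so the scores behave like pseudo-probabilities.
    scores = {c: sum(word_similarity(c, ctx) for ctx in context_words) / len(context_words)
              for c in candidates}
    total = sum(scores.values()) or 1.0
    return {c: s / total for c, s in scores.items()}

print(score_candidates(["money", "loan", "interest"], ["bank", "river", "tree"]))
```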
However, I was unable to find any useful research papers on this problem.
So, what I'm asking is: are you aware of any research (papers) on this problem, what do you think of my proposal, and do you have a better idea?
You can start from Experiments: Enriching indirect answers. A good article is Semantic web access prediction using WordNet.