NL training without input - nlp

I'm a beginner at developing with Bixby, and I was trying to create and modify the sample Dice capsule that they have on their GitHub. I wanted to add an extra feature of snake-eyes to it, which is basically an extra boolean concept that is true for a 1-1 roll on a pair of dice.
Another difference is that I am not taking any input, unlike the sample capsule. In the sample capsule, the user gives as input the number of dice to be rolled and the number of sides each die has. I have restricted the number of dice to 2 with 6 sides each, so there is no point in taking input, and it basically boils down to a simple dice rolling application which spits out the rolls and the sum.
In the simulator, I am able to get the output perfectly fine by using the "intent { goal : .... }" syntax, but I want to train the model to be able to run on a prompt like "roll die", although there is no concept to match the roll command to!
Is there any way to do this?
Thanks in advance!

https://bixbydevelopers.com/dev/docs/get-started/quick-start#add-training-examples
You can see here that even the "roll 2 6-sided dice" utterance isn't being trained to a goal like a "roll" command.
Instead, it is trained to the goal RollResultConcept.
Bixby will automatically call the RollDice action to produce the RollResultConcept - so you don't need to train on RollDice yourself.

Related

Time Series Forecasting in Tensorflow 2.0 - How to predict using the last of the Validation Dataset?

I'm (desperately) trying to figure out Tensorflow 2.0 without much luck so far, but I think I'm close with what I need right now.
I've followed the doc here to make a simple network to forecast stock data (not weather data), and what I'd like to do now is, forecast the future using the latest/most recent section of the validation dataset. I'm hoping someone else has read through it already and can help me here.
The code to predict the future using the validation dataset looks like this:
for x, y in val_data_multi.take(3):
    multi_step_plot(x[0], y[0], multi_step_model.predict(x)[0])
...where, to the best of my knowledge, it takes a random chunk (3 separate times), in my case a 20 row x 9 column section, from val_data_multi (a "Repeat dataset" type), and then uses the multi_step_plot function to spit out a plot with the predicted values based on that random section of the validation dataset. But what if I don't want to just take a random validation section, and instead want to use the bottom of my actual dataset? If I have recent stock data at the bottom of my validation dataset and I want to forecast the future that hasn't happened yet, how can I take a 20x9 section from the bottom of that set, rather than having it "take" a random section to predict with?
As a pseudo code attempt to explain what I'm trying to do, I was trying something like:
for x, y in val_data_multi[-20:].take(1): #.take(3)
...to try and make it take one section starting 20 rows up from the bottom, with all columns. But of course this didn't work, failing with TypeError: 'RepeatDataset' object is not subscriptable.
I hope that makes sense, and if it helps I can post my code, but I'm just using what's already shown on that page, with some modifications to use a stock dataset, that's all.
I was able to find a much better guide from this Github repo:
https://github.com/Hvass-Labs/TensorFlow-Tutorials/blob/master/23_Time-Series-Prediction.ipynb
...which goes into much better detail about what I'm looking to do and made it very easy to understand. Thanks!
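For reference, here is a rough sketch (not from the tutorial) of predicting from the most recent window instead of a random one. It assumes dataset is the full, already-normalized (rows x features) NumPy array that val_data_multi was built from, and past_history = 20 as in the tutorial; those names are assumptions and may not match your code:

    import numpy as np

    past_history = 20
    last_window = dataset[-past_history:]              # most recent 20 rows, shape (20, 9)
    last_window = np.expand_dims(last_window, axis=0)  # add batch dimension -> (1, 20, 9)

    future = multi_step_model.predict(last_window)[0]  # forecast from the latest data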

Doubts regarding `Understanding Keras LSTMs`

I am new to LSTMs, and while going through Understanding Keras LSTMs I had some silly doubts related to a beautiful answer by Daniel Moller.
Here are some of my doubts:
There are 2 ways specified under the Achieving one to many section where it’s written that we can use stateful=True to recurrently take the output of one step and serve it as the input of the next step (needs output_features == input_features).
In the One to many with repeat vector diagram, the repeated vector is fed as input at every time step, whereas in One to many with stateful=True the output is fed as input in the next time step. So, aren't we changing the way the layers work by using stateful=True?
Which of the above 2 approaches (using the repeat vector OR feeding the previous time-step output as the next input) should be followed when building an RNN?
Under the One to many with stateful=True section, to change the behaviour of one to many, in the code for the manual loop for prediction, how will we know the steps_to_predict variable, since we don't know the output sequence length in advance?
I also did not understand the way the entire model uses the last_step output to generate the next_step output. It has confused me about the working of the model.predict() function. I mean, doesn't model.predict() predict the entire output sequence at once, rather than looping through the number of output sequences (whose value I still don't know) to be generated and doing model.predict() to predict a specific time-step output in a given iteration?
I couldn't fully understand the Many to many case. Any other link would be helpful.
I understand that we use model.reset_states() to make sure that a new batch is independent of the previous batch. But do we manually create batches of sequences such that one batch follows another batch, or does Keras in stateful=True mode automatically divide the sequence into such batches?
If it's done manually then, why would anyone divide the dataset into such batches in which a part of a sequence is in one batch and the other in the next batch?
At last, what are the practical implementations or examples/use-cases where stateful=True would be used (because this seems to be something unusual)? I am learning LSTMs and this is the first time I've been introduced to stateful in Keras.
Can anyone help me in explaining my silly questions so that I can be clear on LSTM implementation in Keras?
EDIT: Asking some of these for clarification of the current answer and some for the remaining doubts
A. So, basically stateful lets us keep OR reset the inner state after every batch. Then, how would the model learn if we keep on resetting the inner state again and again after each batch trained? Does resetting truly mean resetting the parameters (used in computing the hidden state)?
B. In the line If stateful=False: automatically resets inner state, resets last output step, what did you mean by resetting the last output step? I mean, if every time step produces its own output, then what does resetting the last output step mean, and why only the last one?
C. In response to Question 2 and the 2nd point of Question 4, I still didn't get your "manipulate the batches between each iteration" and the need for stateful (last line of Question 2), which only resets the states. I got to the point that we don't know the input for every output generated in a time step.
So, you break the sequences into sequences of only one step and then use new_step = model.predict(last_step), but then how do you know how long you need to keep doing this (there must be a stopping point for the loop)? Also, please explain the stateful part (in the last line of Question 2).
D. In the code under One to many with stateful=True, it seems that the for loop (manual loop) used for predicting the next word is only used at test time. Does the model incorporate that itself at train time, or do we need to manually use this loop at train time as well?
E. Suppose we are doing some machine translation job. I think the breaking of sequences will occur after the entire input (the language to translate) has been fed to the input time steps, and then the generation of outputs (the translated language) at each time step will take place via the manual loop, because now we have run out of inputs and are starting to produce the output at each time step using the iteration. Did I get it right?
F. As the default working of LSTMs requires the 3 things mentioned in the answer, in the case of breaking of sequences, are current_input and previous_output fed with the same vectors, because their values are the same when no current input is available?
G. Under the many to many with stateful=True under the Predicting: section, the code reads:
predicted = model.predict(totalSequences)
firstNewStep = predicted[:,-1:]
Since the manual loop for finding the very next word in the current sequence hasn't been used up to this point, how do I know the number of time steps that have been predicted by model.predict(totalSequences), so that the last step from predicted (predicted[:,-1:]) can later be used for generating the rest of the sequences? I mean, how do I know how many steps have been produced by predicted = model.predict(totalSequences) before the manual for loop (used later)?
EDIT 2:
I. In the answer to D, I still didn't get how I will train my model. I understand that using the manual loop (during training) can be quite painful, but if I don't use it, how will the model get trained in the circumstances where we want the 10 future steps and cannot output them at once because we don't have the necessary 10 input steps? Will simply using model.fit() solve my problem?
II. D answer's last para, You could train step by step using train_on_batch only in the case you have the expected outputs of each step. But otherwise I think it's very complicated or impossible to train..
Can you explain this in more detail?
What does step by step mean? Whether or not I have the output for the later sequences, how will that affect my training? Do I still need the manual loop during training? If not, will the model.fit() function work as desired?
III. I interpreted the "repeat" option as using the repeat vector. Wouldn't using the repeat vector be good only for the one to many case and not suitable for the many to many case, because the latter will have many input vectors to choose from (to be used as a single repeated vector)? How will you use the repeat vector for the many to many case?
Question 3
Understanding question 3 is sort of a key to understanding the others, so let's try it first.
All recurrent layers in Keras perform hidden loops. These loops are totally invisible to us, but we can see the results of each iteration at the end.
The number of invisible iterations is equal to the time_steps dimension. So, the recurrent calculations of an LSTM happen regarding the steps.
If we pass an input with X steps, there will be X invisible iterations.
Each iteration in an LSTM will take 3 inputs:
The respective slice of the input data for this step
The inner state of the layer
The output of the last iteration
So, take the following example image, where our input has 5 steps:
What will Keras do in a single prediction?
Step 0:
Take the first step of the inputs, input_data[:,0,:] a slice shaped as (batch, 2)
Take the inner state (which is zero at this point)
Take the last output step (which doesn't exist for the first step)
Pass through the calculations to:
Update the inner state
Create one output step (output 0)
Step 1:
Take the next step of the inputs: input_data[:,1,:]
Take the updated inner state
Take the output generated in the last step (output 0)
Pass through the same calculation to:
Update the inner state again
Create one more output step (output 1)
Step 2:
Take input_data[:,2,:]
Take the updated inner state
Take output 1
Pass through:
Update the inner state
Create output 2
And so on until step 4.
Finally:
If stateful=False: automatically resets inner state, resets last output step
If stateful=True: keep inner state, keep last output step
You will not see any of these steps. It will look like just a single pass.
But you can choose between:
return_sequences = True: every output step is returned, shape (batch, steps, units)
This is exactly many to many. You get the same number of steps in the output as you had in the input
return_sequences = False: only the last output step is returned, shape (batch, units)
This is many to one. You generate a single result for the entire input sequence.
Now, this answers the second part of your question 2: Yes, predict will compute everything without you noticing. But:
The number of output steps will be equal to the number of input steps
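A minimal sketch of the two return_sequences options above, with made-up layer sizes and shapes purely for illustration:

    import numpy as np
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import LSTM

    x = np.random.rand(4, 5, 2)  # (batch=4, steps=5, features=2), made-up data

    many_to_many = Sequential([LSTM(3, return_sequences=True, input_shape=(5, 2))])
    print(many_to_many.predict(x).shape)   # (4, 5, 3): one output per input step

    many_to_one = Sequential([LSTM(3, return_sequences=False, input_shape=(5, 2))])
    print(many_to_one.predict(x).shape)    # (4, 3): only the last step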
Question 4
Now, before going to the question 2, let's look at 4, which is actually the base of the answer.
Yes, the batch division should be done manually. Keras will not change your batches. So, why would I want to divide a sequence?
1. The sequence is too big, and one batch doesn't fit in the computer's or the GPU's memory.
2. You want to do what is happening in question 2: manipulate the batches between each step iteration.
Question 2
In question 2, we are "predicting the future". So, what is the number of output steps? Well, it's the number you want to predict. Suppose you're trying to predict the number of clients you will have based on the past. You can decide to predict for one month in the future, or for 10 months. Your choice.
Now, you're right to think that predict will calculate the entire thing at once, but remember question 3 above where I said:
The number of output steps is equal to the number of input steps
Also remember that the first output step is the result of the first input step, the second output step is the result of the second input step, and so on.
But we want the future, not something that matches the previous steps one by one. We want the result step to follow the "last" step.
So, we face a limitation: how to define a fixed number of output steps if we don't have their respective inputs? (The inputs for the distant future are also future, so, they don't exist)
That's why we break our sequence into sequences of only one step. So predict will also output only one step.
When we do this, we have the ability to manipulate the batches between each iteration. And we have the ability to take output data (which we didn't have before) as input data.
And stateful is necessary because we want each of these steps to be connected as a single sequence (don't discard the states).
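A rough sketch of this one-step-at-a-time idea; stateful_model and last_known_step are hypothetical names (a model built with stateful=True, batch size 1 and one step per input, and the last real data point shaped (1, 1, features)):

    import numpy as np

    predicted_steps = []
    current_step = last_known_step
    for _ in range(10):                               # however many future steps you want
        out = stateful_model.predict(current_step)    # the state carries over between calls
        current_step = out.reshape(1, 1, -1)          # feed the output back as the next input
        predicted_steps.append(current_step[0, 0])

    stateful_model.reset_states()                     # done with this sequence
    future = np.stack(predicted_steps)                # (10, features)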
Question 5
The best practical application of stateful=True that I know is the answer of question 2. We want to manipulate the data between steps.
This might be a dummy example, but another application is if you're for instance receiving data from a user on the internet. Each day the user uses your website, you give one more step of data to your model (and you want to continue this user's previous history in the same sequence).
Question 1
Then, finally question 1.
I'd say: always avoid stateful=True, unless you need it.
You don't need it to build a one to many network, so, better not use it.
Notice that the stateful=True example for this is the same as the predict the future example, but you start from a single step. It's hard to implement, it will have worse speed because of manual loops. But you can control the number of output steps and this might be something you want in some cases.
There will be a difference in calculations too. And in this case I really can't answer if one is better than the other. But I don't believe there will be a big difference. But networks are some kind of "art", and testing might bring funny surprises.
Answers for EDIT:
A
We should not mistake "states" with "weights". They're two different variables.
Weights: the learnable parameters, they're never reset. (If you reset the weights, you lose everything the model learned)
States: current memory of a batch of sequences (relates to which step on the sequence I am now and what I have learned "from the specific sequences in this batch" up to this step).
Imagine you are watching a movie (a sequence). Every second makes you build memories like the name of the characters, what they did, what their relationship is.
Now imagine you get a movie you never saw before and start watching the last second of the movie. You will not understand the end of the movie because you need the previous story of this movie. (The states)
Now imagine you finished watching an entire movie. Now you will start watching a new movie (a new sequence). You don't need to remember what happened in the last movie you saw. If you try to "join the movies", you will get confused.
In this example:
Weights: your ability to understand and interpret movies, ability to memorize important names and actions
States: on a paused movie, states are the memory of what happened from the beginning up to now.
So, states are "not learned". States are "calculated", built step by step regarding each individual sequence in the batch. That's why:
resetting states means starting new sequences from step 0 (starting a new movie)
keeping states means continuing the same sequences from the last step (continuing a movie that was paused, or watching part 2 of that story)
States are exactly what make recurrent networks work as if they had "memory from the past steps".
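A tiny sketch of how this looks in Keras; the shapes and chunk variables below are illustrative assumptions, not from the answer:

    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import LSTM

    # 1 sequence per batch, 10 steps per call, 3 features (made-up numbers)
    model = Sequential([LSTM(16, stateful=True, batch_input_shape=(1, 10, 3))])

    # Feed consecutive chunks of the SAME movie -> the states are kept between calls:
    # out1 = model.predict(chunk_steps_0_to_9)
    # out2 = model.predict(chunk_steps_10_to_19)   # continues from where chunk 1 stopped

    # Start a NEW movie -> forget everything about the previous one:
    model.reset_states()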
B
In an LSTM, the last output step is part of the "states".
An LSTM state contains:
a memory matrix updated every step by calculations
the output of the last step
So, yes: every step produces its own output, but every step uses the output of the last step as state. This is how an LSTM is built.
If you want to "continue" the same sequence, you want memory of the last step results
If you want to "start" a new sequence, you don't want memory of the last step results (these results will keep stored if you don't reset states)
C
You stop when you want. How many steps in the future do you want to predict? That's your stopping point.
Imagine I have a sequence with 20 steps. And I want to predict 10 steps in the future.
In a standard (non stateful) network, we can use:
input 19 steps at once (from 0 to 18)
output 19 steps at once (from 1 to 19)
This is "predicting the next step" (notice the shift = 1 step). We can do this because we have all the input data available.
But when we want the 10 future steps, we cannot output them at once because we don't have the necessary 10 input steps (these input steps are future, we need the model to predict them first).
So we need to predict one future step from existing data, then use this step as input for the next future step.
But I want all these steps to be connected. If I use stateful=False, the model will see a lot of "sequences of length 1". No, we want one sequence of length 30.
D
This is a very good question and you got me ....
The stateful one to many was an idea I had when writing that answer, but I never used this. I prefer the "repeat" option.
You could train step by step using train_on_batch only in the case you have the expected outputs of each step. But otherwise I think it's very complicated or impossible to train.
E
That's one common approach.
Generate a condensed vector with a network (this vector can be a result, or the states generated, or both things)
Use this condensed vector as the initial input/state of another network, generate step by step manually, and stop when an "end of sentence" word or character is produced by the model.
There are also fixed size models without the manual loop. You suppose your sentence has a maximum length of X words. The result sentences that are shorter than this are completed with "end of sentence" or "null" words/characters. A Masking layer is very useful in these models.
F
You provide only the input. The other two things (last output and inner states) are already stored in the stateful layer.
I made the input = last output only because our specific model is predicting the next step. That's what we want it to do. For each input, the next step.
We taught this with the shifted sequence in training.
G
It doesn't matter. We want only the last step.
The number of sequences is kept by the first :.
And only the last step is considered by -1:.
But if you want to know, you can print predicted.shape. It is equal to totalSequences.shape in this model.
Edit 2
I
First, we can't use "one to many" models to predict the future, because we don't have data for that. There is no possibility to understand a "sequence" if you don't have the data for the steps of the sequence.
So, this type of model should be used for other types of applications. As I said before, I don't really have a good answer for this question. It's better to have a "goal" first, then we decide which kind of model is better for that goal.
II
With "step by step" I mean the manual loop.
If you don't have the outputs of later steps, I think it's impossible to train. It's probably not a useful model at all. (But I'm not the one that knows everything)
If you have the outputs, yes, you can train the entire sequences with fit without worrying about manual loops.
III
And you're right about III. You won't use repeat vector in many to many because you have varying input data.
"One to many" and "many to many" are two different techniques, each one with their advantages and disadvantages. One will be good for certain applications, the other will be good for other applications.

Number of training samples for text classification task

Suppose you have a set of transcribed customer service calls between customers and human agents, where on average each call's length is 7 minutes. Customers will mostly call because of issues they have with the product. Let's assume that a human can assign one label per axis per call:
Axis 1: What was the problem from the customer's perspective?
Axis 2: What was the problem from the agent's perspective?
Axis 3: Could the agent resolve the customer's issue?
Based on the manually labeled texts you want to train a text classifier that shall predict a label for each call for each of the three axes. But the labeling of recordings takes time and costs money. On the other hand you need a certain amount of training data to get good prediction results.
Given the above assumptions, how many manually labeled training texts would you start with? And how do you know that you need more labeled training texts?
Maybe you've worked on a similar task before and can give some advice.
UPDATE (2018-01-19): There's no right or wrong answer to my question. Ok, ideally, somebody worked on exactly the same task, but that's very unlikely. I'll leave the question open for one more week and then accept the best answer.
This would be tricky to answer but I will try my best based on my experience.
In the past, I have performed text classification on 3 datasets; the number in brackets indicates how big each dataset was: restaurant reviews (50k sentences), reddit comments (250k sentences) and developer comments from issue tracking systems (10k sentences). Each of them had multiple labels as well.
In each of the three cases, including the one with 10k sentences, I achieved an F1 score of more than 80%. I am stressing this dataset specifically because I was told by some that it is on the small side.
So, in your case, assuming you have at least 1000 instances (calls that include conversation between customer and agent) of average 7-minute calls, this should be a decent start. If the results are not satisfying, you have the following options:
1) Use different models (MNB, Random Forest, Decision Tree, and so on, in addition to whatever you are using); a quick comparison sketch follows this list.
2) If point 1 gives more or less similar results, check the ratio of instances of all the classes you have (the 3 axes you are talking about here). If they do not share a good ratio, get more data, or try out different balancing techniques if you cannot get more data.
3) Another way would be to classify at the sentence level rather than the message or conversation level, to generate more data and individual labels for sentences rather than for the message or the conversation itself.
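As an illustration only (not part of the answer above), a quick way to compare a few such baselines would be something like the following scikit-learn sketch, where texts and labels are hypothetical lists of call transcripts and their labels for one axis:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import cross_val_score

    models = {
        "MNB": MultinomialNB(),
        "RandomForest": RandomForestClassifier(),
        "DecisionTree": DecisionTreeClassifier(),
    }
    for name, clf in models.items():
        pipe = make_pipeline(TfidfVectorizer(), clf)          # bag-of-words features + model
        scores = cross_val_score(pipe, texts, labels, cv=5, scoring="f1_macro")
        print(name, scores.mean())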

I need a function that describes a set of sequences of zeros and ones?

I have multiple sets with a variable number of sequences. Each sequence is made of 64 numbers that are either 0 or 1 like so:
Set A
sequence 1: 0,0,0,0,0,0,1,1,0,0,0,0,1,1,1,1,0,0,0,1,1,1,0,0,0,1,1,1,0,0,0,0,1,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,1,1,1,0,0,0,0,0
sequence 2:
0,0,0,0,1,1,1,1,0,0,0,1,1,1,0,0,0,0,1,1,0,0,0,0,0,1,1,0,0,0,0,0,1,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0
sequence 3:
0,0,0,0,0,1,1,1,0,0,0,1,1,1,0,0,0,1,1,1,0,0,0,0,1,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,1,1,1,1,1,1,1,0
...
Set B
sequence1:
0,0,0,0,0,1,1,1,0,0,0,1,1,1,0,0,0,1,1,1,0,0,0,0,1,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1
sequence2:
0,0,0,0,0,1,1,1,0,0,0,1,1,1,0,0,0,1,1,1,0,0,0,0,1,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,1,1,1,1,1,1,0
...
I would like to find a mathematical function that describes all possible sequences in the set, maybe even predicts more, and that does not match the sequences in the other sets.
I need this because I am trying to recognize different gestures in a mobile app based on the cells in a grid that have been touched (1 touch/ 0 no touch). The sets represent each gesture and the sequences a limited sample of variations in each gesture.
Ideally the function describing the sequences in a set would allow me to test user touches against it to determine which set/gesture they are part of.
I searched for a solution, either using Excel or Mathematica, but being very ignorant about both and mathematics in general I am looking for the direction of an expert.
Suggestions for basic documentation on the subject are also welcome.
It looks as if you are trying to treat what is essentially 2D data in 1D. For example, let s1 represent the first sequence in set A in your question. Then the command
ArrayPlot[Partition[s1, 8]]
produces this picture:
The other sequences in the same set produce similar plots. One of the sequences from the second set produces, in response to the same operations, the picture:
I don't know what sort of mathematical function you would like to define to describe these pictures, but I'm not sure that you need to if your objective is to recognise user gestures.
You could do something much simpler, such as calculate the 'average' picture for each of your gestures. One way to do this would be to calculate the average value for each of the 64 pixels in each of the pictures. Perhaps there are 6 sequences in your set A describing gesture A. Sum the sequences element-by-element. You will now have a sequence with values ranging from 0 to 6. Divide each element by 6. Now each element represents a sort of probability that a new gesture, one you are trying to recognise, will touch that pixel.
Repeat this for all the sets of sequences representing your set of gestures.
To recognise a user gesture, simply compute the difference between the sequence representing the gesture and each of the sequences representing the 'average' gestures. The smallest (absolute) difference will direct you to the gesture the user made.
I don't expect that this will be entirely foolproof, it may well result in some user gestures being ambiguous or not recognisable, and you may want to try something more sophisticated. But I think this approach is simple and probably adequate to get you started.
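As a rough Python sketch of this "average picture" approach (gesture_sets and new_sequence are hypothetical names for your data, not from the answer):

    import numpy as np

    # gesture_sets: dict mapping a gesture name to a list of 64-element 0/1 sequences
    averages = {name: np.mean(np.array(seqs), axis=0)      # per-cell touch "probability"
                for name, seqs in gesture_sets.items()}

    def classify(new_sequence):
        new_sequence = np.asarray(new_sequence)
        # the gesture whose average picture has the smallest absolute difference wins
        return min(averages, key=lambda name: np.abs(new_sequence - averages[name]).sum())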
In Mathematica the following expression will enumerate all the possible combinations of {0,1} of length 64.
Tuples[{1, 0}, {64}]
But there are 2^64, or 18446744073709551616, of them, so I'm not sure what use that will be to you.
Maybe you just wanted the unique sequences contained in each set; in that case all you need is the Mathematica Union[] function applied to the set. If you have the sets grouped together in a list in Mathematica, say mySets, then you can apply the Union operator to every set in the list by using the map operator.
Union /@ mySets
If you want to do some type of prediction a little more information might be useful.
Thank you for the clarifications.
Machine Learning
The task you want to solve falls under the disciplines known by a variety of names, but probably most commonly as Machine Learning or Pattern Recognition and if you know which examples represent the same gestures, your case would be known as supervised learning.
Question: In your case do you know which gesture each example represents ?
You have a series of examples for which you know a label (the form of gesture it is), from which you want to train a model and use that model to assign an unseen example to one of a finite set of classes. In your case, one of a number of gestures. This is typically known as classification.
Learning Resources
There is a very extensive background of research on this topic, but a popular introduction to the subject is Pattern Recognition and Machine Learning by Christopher Bishop.
Stanford has a series of machine learning video lectures (Stanford ML) available on the web.
Accuracy
You might want to consider how you will determine the accuracy of your system at predicting the type of gesture for an unseen example. Typically you train the model using some of your examples and then test its performance using examples the model has not seen. Two of the most common methods used to do this are 10-fold Cross Validation and repeated 50/50 holdout. Having a measure of accuracy enables you to compare one method against another to see which is superior.
Have you thought about what level of accuracy you require in your task, is 70% accuracy enough, 85%, 99% or better?
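For instance, a minimal sketch of both evaluation schemes with scikit-learn, where X and y are hypothetical placeholders (a feature matrix such as your 64-element gesture vectors, and the gesture labels):

    from sklearn.model_selection import cross_val_score, train_test_split
    from sklearn.svm import SVC
    from sklearn.metrics import accuracy_score

    clf = SVC()

    # 10-fold cross validation
    print(cross_val_score(clf, X, y, cv=10).mean())

    # one repetition of a 50/50 holdout
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5)
    print(accuracy_score(y_test, clf.fit(X_train, y_train).predict(X_test)))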
Machine learning methods are typically quite sensitive to the specific type of data you have and the number of examples you have to train the system with; the more examples, generally the better the performance.
You could try the method suggested above and compare it against a variety of well proven methods, amongst which would be Random Forests, support vector machines and Neural Networks. All of which and many more are available to download in a variety of free toolboxes.
Toolboxes
Mathematica is a wonderful system, is infinitely flexible and my favourite environment, but out of the box it doesn't have a great deal of support for machine learning.
I suspect you will make a great deal of progress more quickly by using a custom toolbox designed for machine learning. Two of the most popular free toolboxes are WEKA and R both support more than 50 different methods for solving your task along with methods for measuring the accuracy of the solutions.
With just a little data reformatting, you can convert your gestures to a simple file format called ARFF, load them into WEKA or R and experiment with dozens of different algorithms to see how each performs on your data. The explorer tool in WEKA is definitely the easiest to use, requiring little more than a few mouse clicks and typing some parameters to get started.
Once you have an idea of how well the established methods perform on your data you have a good starting point to compare a customised approach against should they fail to meet your criteria.
Handwritten Digit Recognition
Your problem is similar to a very well researched machine learning problem known as handwritten digit recognition. The methods that work well on this public data set of handwritten digits are likely to work well on your gestures.

Can I use NLTK to determine if a comment is a positive one or a negative one?

Can you show me a simple example using http://www.nltk.org/code to determine if a string is about a happy or upset mood?
NLTK cannot do this out of the box, but if you are looking for some related research in that area, take a look at this paper on Offensive Language Detection. The same methods could be adapted to classify comments as happy/unhappy rather than offensive/unoffensive. The primary software package used in this project for text classification is called WEKA; it uses multiple classifiers, trained on previous examples, to determine whether language is offensive or not (and in this method uses a tunable threshold).
Pattern is worth a test drive too: you can see two opinion mining experiments right on the project homepage.
http://www.clips.ua.ac.be/pages/pattern-examples-100days
http://www.clips.ua.ac.be/pages/pattern-examples-elections
Nopey.
This is a task far beyond the capabilities of NLTK or any grammatical parser that is known or can be realistically imagined. Look at the NLTK Book to see what sorts of tasks it can accomplish which are far, far from your stated purpose.
As a cheap example:
I really enjoyed using your paper to train my dog.
Parse that up with NLTK and you can get
[('I', 'PRP'), ('really', 'RB'), ('enjoyed', 'VBD'),
('using', 'VBG'), ('your', 'PRP$'), ('paper', 'NN'),
('to', 'TO'), ('train', 'VB'), ('my', 'PRP$'), ('dog', 'NN')]
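That tagging can be reproduced with a couple of NLTK calls, shown here as a minimal sketch (assuming the standard tokenizer and tagger data have been downloaded):

    import nltk
    # nltk.download('punkt'); nltk.download('averaged_perceptron_tagger')  # one-time setup

    sentence = "I really enjoyed using your paper to train my dog."
    print(nltk.pos_tag(nltk.word_tokenize(sentence)))  # similar to the list above, plus the final period token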
Where the parse tree would tell me that 'enjoyed' is the central (past-tense) verb of the simple sentence. To enjoy something is good. To train something is generally a good thing. Gerunds, nouns, comparatives, and such are relatively neutral. So give this a Good score of 0.90.
Except I really mean that I either hit my dog with your paper or let it excrete on the paper which you'd probably consider a not Good thing.
Hire a person for this recognition task.
Added for those who imagine that even trained classifiers are of much use:
Classify this real entry from a real customer review corpus using any classifier you like trained on any dataset you like:
This camera keeps on autofocussing in auto mode with a buzzing sound which can't be stopped. It would be really good if they have given an option to stop this autofocussing. If you want to have the date and time on the image, it's only through their software which reads the image's date and time from the image's meta-data. So if you use your card reader and copy images - you got to once again open them through their software to put the date and time. In that too, there isn't a direct way to add date and time - you got to say 'print images' to a different directory in which there is an option to specify the date and time. Even the slightest of the shakes totally distorts your image. Indoor images weren't so clear. You got to have flash 'on' to get it even though your room is well lit. The lens cap is a really annoying. the movie clips taken will always have some 'noise' in it - you can't avoid that.
The worst mood classification I obtained was "totally equivocal" yet humans can easily determine that this is anything but complimentary. This wasn't a randomly picked datum, rather one that was selected for negative bias without "hate" or "suxz" or similar.
You're looking for a technique that uses a machine learning classifier to determine whether a piece of text is positive or negative. There have been various attempts at this by a number of research teams (e.g. http://research.yahoo.com/pub/2387 and http://lingcog.iit.edu/doc/appraisal_sentiment_cikm.pdf); we can get about 80% to 90% accuracy at determining whether a product review is positive or negative.
Due to the brevity of your question, it's not obvious to me whether determining whether a product review is positive or negative is the same task you're trying to accomplish, or merely a related task, but I'd suggest starting simple with bag-of-words classification with a Bayesian classifier (which NLTK should be able to handle), and then improve your techniques from there depending on how the accuracy turns out.
Unfortunately, I've never used NLTK (nor Python for that matter) so I can't give you a code example of how to use NLTK for this.
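As a rough illustration only (not from the answer above, just one possible starting point), a minimal bag-of-words Naive Bayes sketch with NLTK might look like this, where labeled_texts is a hypothetical list of (text, label) pairs such as [("great product", "pos"), ...]:

    import nltk
    from nltk.classify import NaiveBayesClassifier

    def bag_of_words(text):
        # simplest possible feature extraction: each lowercase token is a feature
        return {word: True for word in nltk.word_tokenize(text.lower())}

    train_data = [(bag_of_words(text), label) for text, label in labeled_texts]
    classifier = NaiveBayesClassifier.train(train_data)
    print(classifier.classify(bag_of_words("I really enjoyed this")))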
