Difference between IBM Watson Conversation and Natural Language Classifier? - nlp

What is the difference between IBM Watson Conversation and the Natural Language Classifier?
Conversation: https://www.ibm.com/watson/developercloud/conversation.html
Natural Language Classifier: https://console.ng.bluemix.net/catalog/services/natural-language-classifier/

The Natural Language Classifier service supports classification-only use cases (for example, routing calls in a call center). Watson Conversation includes similar intents, but also helps you to build and train a bot with entities and dialog to simulate conversation.
Both services have a graphical UI available. However, the Conversation UI tool offers more features.

To my understanding, Conversation provides you with a UI and functionality to wire up all the things you would have to do manually with Natural Language Classifier. Within Conversation, you also have the ability to create a dialog/conversational workflow based on your defined intents and entities, which you can create within the same UI.
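To make the difference concrete, here is roughly the shape of what each service hands back for the same user input (a simplified sketch from memory; the field names and values are illustrative, so check the current API reference):

```python
# Simplified, illustrative response shapes (not exact API output).

# Natural Language Classifier: classification only - you get classes back
# and have to build everything else (dialog, entities) yourself.
nlc_response = {
    "top_class": "book_ticket",
    "classes": [
        {"class_name": "book_ticket", "confidence": 0.93},
        {"class_name": "cancel_booking", "confidence": 0.05},
    ],
}

# Conversation: intents (similar to NLC classes) plus entities plus the
# next dialog output, driven by the dialog you build in the UI.
conversation_response = {
    "intents": [{"intent": "book_ticket", "confidence": 0.93}],
    "entities": [{"entity": "destination", "value": "Paris"}],
    "output": {"text": ["Sure - what date would you like to travel?"]},
}
```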

Related

Is there a way to use inner features of an application through Bixby?

I want Bixby to access the inner features of the application, for example to compose a message and send it in a chat app. Is there a way to do so?
Sure, you can! As long as the application has REST (or SOAP) endpoints that can be invoked, it can be called from Bixby.
Having said that, Bixby has many built-in features that allow developers to create rich, natural conversational experiences. As a general guide, the data-intensive and computationally complex parts of your capsule should run on an external REST endpoint, while the conversational experience (and the associated logic) should be driven from within Bixby. Hope this helps.
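As a sketch of that split, the "heavy" part could be as simple as a small REST service that the capsule calls over HTTP (Python/Flask here purely for illustration; the /compose-message route and its payload are made up, and the Bixby capsule side is not shown):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical endpoint for the data-intensive / heavy-lifting part
# (e.g. talking to the chat app's own backend). The Bixby capsule would
# only drive the conversation and call this endpoint over HTTP with the
# recipient and message text it collected from the user.
@app.route("/compose-message", methods=["POST"])
def compose_message():
    payload = request.get_json(force=True)
    recipient = payload.get("recipient")
    text = payload.get("text")
    # ... call the chat application's internal API here ...
    return jsonify({"status": "sent", "recipient": recipient, "text": text})

if __name__ == "__main__":
    app.run(port=5000)
```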

Should I build my own NLP engine for a rare language, or use cloud services (Azure, GCP) for chatbots and translation?

So, I would like to build a chatbot for a language that is not widely supported (i.e. Google/Azure don't support building chatbots in it, only translation). Translation works well from that language to English (and vice versa).
So, which is easier:
1. To build a new NLP engine for that specific language to recognize the context of the user's question when using the chatbot?
2. To translate user questions to English and then internally use a chatbot engine that supports English (the chatbot of course still needs to be programmed), i.e. the Azure/GCP engines. Once the context is recognized, the answer can be translated back into the user's language.
Method 2 seems easier (the cloud APIs/services are already available), but I'm not sure how well context recognition works in practice when it is chained behind translation.
Here you go for method 2 --> multilingual conversations, which translate the input and then identify the intent.
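A rough sketch of method 2's flow, with the translation and classification calls left as placeholders (translate and classify_and_answer are hypothetical stand-ins for whichever cloud translation and English-only NLU services you pick):

```python
def translate(text, source_lang, target_lang):
    """Placeholder for a cloud translation call (e.g. the Google/Azure
    translation services mentioned in the question)."""
    raise NotImplementedError

def classify_and_answer(english_text):
    """Placeholder for an English-only chatbot/NLU engine; returns the
    bot's answer in English once the intent/context is recognized."""
    raise NotImplementedError

def answer_user(question, user_lang):
    # 1. Rare language -> English
    english_question = translate(question, source_lang=user_lang, target_lang="en")
    # 2. Recognize intent/context and get the English answer
    english_answer = classify_and_answer(english_question)
    # 3. English -> back to the user's language
    return translate(english_answer, source_lang="en", target_lang=user_lang)
```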

Is there any Azure based service similar to IBM Watson Knowledge Studio?

I tried to find an Azure service similar to IBM Watson Knowledge Studio, but I failed. I'm looking for something I can train to analyze texts and extract entities, relations between entities, and entity-related sentiment.
Do you know if there is anything in Azure I could use to do that?
Yes, there is a remote similarity between Azure Language Understanding (LUIS) and IBM Watson Knowledge Studio (WKS), but they are not substitutes.
The notable differences:
LUIS is for building chatbots - conversations with utterance data. WKS is for general unstructured text, usually much larger in volume than utterances. In this respect, LUIS is competing with IBM Conversation service, not with WKS and the IBM Watson services that run the WKS custom models - Watson Natural Language Understanding and Watson Discovery.
Because LUIS is built for processing utterances, it has much lower limits compared to WKS. For example, LUIS limits the input text to 500 characters, while WKS processes text of up to 40,000 characters. LUIS also limits Simple entities to 30, which may be OK for processing targeted utterances, but not for building a high-quality model for processing large documents and complex domains.
LUIS supports only customization of Entity Type mentions (in various forms, like Simple, Hierarchical, Composite, RegEx, List, etc. - very similar to the WKS Entity types). WKS (and the runtime services that use the WKS custom models), on the other hand, supports Entity Relations - an important feature that helps you extract insights from a client-specific corpus that you cannot get with Entity mentions alone.
LUIS supports only a fraction of the languages that WKS supports. And the LUIS language support is partial - see https://learn.microsoft.com/en-us/azure/cognitive-services/luis/luis-supported-languages
LUIS, similarly to the IBM Conversation service, is a runtime NLP service that allows customization in its tooling. WKS, on the other hand, is a stand-alone customization SaaS offering that was specifically designed to organize a team of domain subject matter experts (SMEs) and cognitive solution developers to transfer the SME domain knowledge into a custom model, which is then deployed to and used by IBM Watson runtime services like Natural Language Understanding and Discovery. In other words, while LUIS and IBM Conversation provide tooling for customizing the solution directly, WKS provides a separate environment with a built-in methodology for managing customization projects and for annotation skill building.
LUIS, to my understanding, is offered as a multi-tenant public service. WKS is offered in both multi-tenant and isolated configurations. In that respect, WKS is suitable not only for the general public, but also for projects with sensitive client data.
In conclusion, there's no WKS equivalent (substitute) that I'm aware of. LUIS may be considered as the Azure alternative to IBM Conversation service, if your solution is built in Azure, but LUIS is not a substitute for IBM Watson Knowledge Studio.
So it's very important to consider your use case (application domain) when choosing the platform on which to build your solution.
Hope this helps.
Did you look at the Azure AI gallery? One common approach is to customise the solutions there for your particular requirements. Here's a search of all text-related items, which you can for example refine to be Microsoft content only.
I'm not aware of a single service that maps directly; the Text Analytics API for example just does language and phrase detection and sentiment analysis.
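For reference, a minimal sketch of that Text Analytics functionality with the azure-ai-textanalytics Python SDK (a 5.x-style client from memory; the endpoint and key are placeholders, and the method names may differ in other SDK versions):

```python
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

client = TextAnalyticsClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",  # placeholder
    credential=AzureKeyCredential("<your-key>"),                      # placeholder
)

docs = ["The hotel staff were lovely, but the pool was freezing."]

# Roughly the pieces Text Analytics covers out of the box: entities,
# key phrases and sentiment - but no trainable relation extraction as in WKS.
entities = client.recognize_entities(docs)[0].entities
key_phrases = client.extract_key_phrases(docs)[0].key_phrases
sentiment = client.analyze_sentiment(docs)[0].sentiment

print(sentiment, key_phrases, [(e.text, e.category) for e in entities])
```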
Have a look at Luis.ai... it should do what you need.

Difference between Rasa Core and Rasa NLU

I tried to understand the difference between Rasa core and Rasa NLU from the official documentation, but I don't understand much. What I understood is that Rasa core is used to guide the flow of the conversation, while Rasa NLU is used to process the text to extract information (entities).
There are examples of building chatbots with Rasa Core as well as with Rasa NLU. I couldn't understand what the difference between the two approaches is and when to adopt one instead of the other.
Could you please help me to understand this better?
You got it right. Both work together, but they have distinct goals. In simple terms, Rasa Core handles the conversation flow, utterances, and actions, while Rasa NLU extracts entities and intents.
About your second question:
The first example shows the entire workflow to create the bot: it shows how to set up the domain and the stories. Those are features from Rasa Core, not Rasa NLU. In item 2 of that example (called "Define an interpreter"), the author explicitly says he is using Rasa NLU as the interpreter (but you could even use another entity-extraction framework).
The second example (the Rasa NLU one) shows how to train the entity and intent extractor only. There is no information about domains and stories, no information about the conversational flow; it is a pure NLU example (even though it uses the default run method from Rasa Core to run the bot).
When I started studying Rasa it was a bit hard to understand the concepts for developing bots, but as you start coding it gets clearer. No matter which platform you use, NLU will be handling entities and intents, while the conversational flow will be something else.
It is even possible to use one library to handle the core of your bot and another one to handle NLU.
I would like to note that, unlike most tools you can use to build the core of your bot, Rasa Core uses machine learning to better generalize the dialogue flow. Instead of writing code for each possible node of your conversation, you provide a dataset of possible conversational paths and train the core to generalize from it. This is a very cool and powerful feature :)
Hope it helps.
A very layman's description for starters: Rasa NLU is the interpreter which understands the input. Basically, it figures out entities and labels the intent.
Rasa Core does the rest of the work you want your bot to do, the flow of conversation being the most important thing.
For example, you say "Hello" to the bot. Rasa NLU will understand the input's intent as a greeting and Rasa Core will tell the bot to reply with a greeting.
The reply would be a greeting if you trained your bot for it, or it could be anything else as well.
To explain in simple terms Rasa NLU uses NLP (Natural Language Processing) to understand what you tell the bot.
It understands what you say and matches it to some intent that you have defined.
Rasa Core on the other hand handles the conversation flow. The stories markdown file lists the intents and the actions for them. Hence when the NLU gives the intent, the Core performs the action corresponding to it and the bot replies with that action.
@trinca's answer is correct. I'll just rephrase the points a bit.
"Second thing, there are examples to build chatbots in Rasa Core as well as Rasa NLU; both can be used to build chatbots, but I couldn't understand what the difference between the two approaches is and when to follow which one."
No, NLU and Core are not different approaches; rather, they are different components of a dialog manager engine.
RASA NLU is an intent/entity classifier:
You train the classifier offline with a number of example sentences, each tagged with its intent (and entities).
Afterward, at run time, you submit an incoming sentence to the classifier and get back an intent tag and a list of possible entities related to that intent, as the result of the classification.
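That offline-training / runtime-classification loop is not Rasa-specific; as a toy illustration only (scikit-learn here, not Rasa's actual pipeline), an intent classifier boils down to something like this:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Offline: example sentences, each tagged with its intent (tiny toy dataset).
sentences = [
    "hello there", "hi, good morning",
    "what's the weather like tomorrow", "will it rain in New York",
    "book me a ticket to Paris", "I want to reserve a seat",
]
intents = ["greet", "greet", "ask_weather", "ask_weather", "book_ticket", "book_ticket"]

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(sentences, intents)

# Run time: an incoming sentence comes back tagged with one of the trained intents.
print(model.predict(["hi, is it going to rain tomorrow?"]))
```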
RASA Core is a (probabilistic) dialog manager:
It decides/guesses the next probable "state" (again, just an intent) of the chatbot conversation. It is trained offline with a RASA speciality: "stories". These are possible sequences of intents, following example conversations that developers submit in the training phase.
Afterward, at run time, when a user submits a sentence (and the corresponding intent has been guessed by the previously mentioned NLU component), RASA Core guesses the "probable" next state of the conversation (an intent).
Notes:
IMO you can't build a chatbot with just the NLU component (an intent classifier), which many competitors propose as "the solution" for building bots, because with just the intent classifier (the NLU) you can only manage "stateless" dialogs (single-turn volleys without any context of the conversation).
At the end of the day, RASA is the winner in comparison with the other frameworks mentioned (these are often just channel gateways/intent classifiers) because of the dialog manager component and the "stories" way of designing/developing a conversation, without hardcoded rules (if/then).
Rasa Core:
Rasa Core is the component in Rasa that handles dialogue management. Dialogue management is responsible for keeping a record of the conversation context and choosing the next actions accordingly.
Rasa NLU:
Rasa NLU is responsible for intent recognition and entity extraction.
Example
For example, if the user input is "What's the weather like tomorrow in New York?", Rasa NLU needs to extract that the user's intent is asking about the weather, together with the corresponding entity names and types: the date is "tomorrow" and the location is "New York".
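For that sentence, the parsed result Rasa NLU produces looks roughly like this (the layout follows the usual parse output, but the exact fields vary between Rasa versions and the confidence value is invented):

```python
# Approximate shape of Rasa NLU's parse result for the example sentence.
parsed = {
    "text": "What's the weather like tomorrow in New York?",
    "intent": {"name": "ask_weather", "confidence": 0.95},
    "entities": [
        {"entity": "date", "value": "tomorrow", "start": 24, "end": 32},
        {"entity": "location", "value": "New York", "start": 36, "end": 44},
    ],
}
```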

How to implement a bot engine like Wit.ai for an on-premise solution?

I want to build a chatbot for a customer service application. I tried SaaS services like Wit.Ai, Motion.Ai, Api.Ai, LUIS.ai, etc. These cognitive services find the "intent" and "entities" when trained with a model of typical interactions.
I need to build the chatbot as an on-premise solution, without using any of these SaaS services.
e.g. a typical conversation would be as follows:
Can you book me a ticket?
Is my ticket booked?
What is the status of my booking BK02?
I want to cancel the booking BK02.
Book the tickets
The Stanford NLP toolkit looks promising, but there are licensing constraints. Hence I started experimenting with OpenNLP. I assume there are two OpenNLP tasks involved:
Use 'Document Categorizer' to find out the intent
Use 'Named Entity Recognition' to find out entities
Once the context is identified, I will call my application APIs to build the response.
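Roughly, the flow I have in mind is the following (sketched in Python for brevity; classify_intent and extract_entities stand in for the trained Document Categorizer and NER models, and the booking functions are placeholders for my application's APIs):

```python
# Placeholders for the two trained models and my application's APIs.
def classify_intent(text): ...          # Document Categorizer
def extract_entities(text): ...         # Named Entity Recognition
def book_ticket(entities): ...
def cancel_booking(booking_id): ...
def get_booking_status(booking_id): ...

def handle_message(text):
    intent = classify_intent(text)       # e.g. "booking_status"
    entities = extract_entities(text)    # e.g. {"booking_id": "BK02"}

    # Route the recognized intent to the application's own APIs.
    if intent == "book_ticket":
        return book_ticket(entities)
    if intent == "cancel_booking":
        return cancel_booking(entities.get("booking_id"))
    if intent == "booking_status":
        return get_booking_status(entities.get("booking_id"))
    return "Sorry, I did not understand that."
```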
Is this the right approach?
How good is OpenNLP at parsing text?
Can I use Facebook's fastText library for intent identification?
Is there any other open source library which can be helpful in building the BOT?
Will "SyntaxNet" be useful for my adventure?
I prefer to do this in Java, but I'm open to a Node or Python solution too.
PS - I am new to NLP.
Have a look at this. It says it is open-source language understanding for bots and a drop-in replacement for popular NLP tools like wit.ai, api.ai or LUIS:
https://rasa.ai/
Have a look at my other answer for a plan of attack when using Luis.ai:
Creating an API for LUIS.AI or using .JSON files in order to train the bot for non-technical users
In short, use Luis.ai and set up some intents; start with one or two and train them based on your domain. I am using ASP.NET to call the Cognitive Services API as outlined above, then customize the response via some jQuery... you could search a list of your rules in a JavaScript array when each intent or action is raised by the response from LUIS.
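The same call works from any stack, since a published LUIS app is just a REST endpoint; for example, a minimal Python sketch (region, app ID and key are placeholders, and the v2.0 URL/response format here is from memory, so verify it against the current docs):

```python
import requests

# Placeholders: region, app ID and subscription key of a published LUIS app.
LUIS_ENDPOINT = "https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/<app-id>"
LUIS_KEY = "<subscription-key>"

def query_luis(utterance):
    resp = requests.get(
        LUIS_ENDPOINT,
        params={"subscription-key": LUIS_KEY, "q": utterance, "verbose": "true"},
    )
    resp.raise_for_status()
    result = resp.json()
    # v2.0 responses carry a topScoringIntent plus any recognized entities.
    return result["topScoringIntent"]["intent"], result.get("entities", [])

print(query_luis("Can you book me a ticket?"))
```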
If your bot is English-based, then I would use OpenNLP's sentence parser to dump the customer input into a database (I do this today). I then use the OpenNLP tokenizer and push the keywords (minus the stop words) and parts of speech into a database table for keyword analysis. I have a custom sentiment model built for OpenNLP that will tag each sentence with a positive, negative, or neutral sentiment... You can then use this to identify negative customer service feedback. To build your own sentiment model, have a look at SentiWord.net and download their domain-agnostic data file to build and train an OpenNLP model, or have a look at this Node version:
https://www.npmjs.com/package/sentiword
Hope that helps.
I'd definitely recommend Rasa; it's great for your use case. It works on-premise easily, handles intents and entities for you, and on top of that it has a friendly community too.
Check out my repo for an example of how to build a chatbot with Rasa that interacts with a simple database: https://github.com/nmstoker/lockebot
I tried RASA, but one glitch I found was its inability to answer unmatched/untrained user texts.
Now I'm using ChatterBot and I'm totally in love with it.
Use ChatterBot, and host it locally using "flask-chatterbot-master".
Links:
ChatterBot Installation: https://chatterbot.readthedocs.io/en/stable/setup.html
Host Locally using - flask-chatterbot-master: https://github.com/chamkank/flask-chatterbot
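For completeness, a first local ChatterBot bot is only a few lines (a minimal sketch assuming the ChatterBot 1.x API; the flask-chatterbot project above essentially wraps the same get_response call behind a small web UI):

```python
from chatterbot import ChatBot
from chatterbot.trainers import ListTrainer

bot = ChatBot("CustomerServiceBot")

# Train on a tiny list of alternating statements and responses.
trainer = ListTrainer(bot)
trainer.train([
    "Can you book me a ticket?",
    "Sure, which date would you like to travel?",
    "What is the status of my booking BK02?",
    "Your booking BK02 is confirmed.",
])

print(bot.get_response("Is my ticket booked?"))
```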
Cheers,
Ratnakar
With the help of the RASA and Botkit frameworks we can build an on-premise chatbot and NLP engine for any channel. Please follow this link for end-to-end steps on building one. An awesome blog that helped me create one for my office:
https://creospiders.blogspot.com/2018/03/complete-on-premise-and-fully.html
First of all, any chatbot is a program that runs alongside an NLP component; it's the NLP that brings the knowledge to the chatbot, and the NLP in turn relies on machine learning techniques.
There are a few reasons why on-premise chatbots are less common:
We need to build the infrastructure
We need to train the model often
But cloud-based NLP may not provide the data privacy and security you need, and the flexibility to include your own business logic is very limited.
Altogether, going on-premise or to the cloud depends on the needs and the use case of your requirements.
However, please refer to these links for end-to-end knowledge on building an on-premise chatbot in very few steps, easily and fully customisable:
Complete On-Premise and Fully Customisable Chat Bot - Part 1 - Overview
Complete On-Premise and Fully Customisable Chat Bot - Part 2 - Agent Building Using Botkit
Complete On-Premise and Fully Customisable Chat Bot - Part 3 - Communicating to the Agent that has been built
Complete On-Premise and Fully Customisable Chat Bot - Part 4 - Integrating the Natural Language Processor NLP
Disclaimer: I am the author of this package.
Abodit NLP (https://nlp.abodit.com) can do what you want but it's .NET only at present.
In particular you can easily connect it to databases and can provide custom Tokens that are queries against a database. It's all strongly-typed and adding new rules is as easy as adding a method in C#.
It's also particularly adept at turning date time expressions into queries. For example "next month on a Thursday after 4pm" becomes ((((DatePart(year,[DATEFIELD])=2019) AND (DatePart(month,[DATEFIELD])=7)) AND (DatePart(dw,[DATEFIELD])=4)) AND DatePart(hour,[DATEFIELD])>=16)
