How to use custom logic with Chatbot frameworks - dialogflow-es

I am working on a chatbot that I have implemented with Dialogflow (Dialogflow ES). I found that Dialogflow has the following pros:
Easy to use
Good at Intent classification
Good at extracting Entities (prebuilt/custom)
Conversations can be chained to a certain extent using input/output contexts and lifespan
But in my use case there are certain situations where human-level judgment is required, and that cannot be done using Dialogflow alone. Can we add our own custom logic to process certain user requests in Dialogflow, or in any other chatbot framework that provides more flexibility?

You're a bit vague about what you mean by "custom logic", but it sounds like fulfillment is what you're looking for.
With this, you can enable fulfillment on your Intents so they send JSON to code that you run (either a webhook you host yourself or code deployed through the inline editor, which manages the deployment for you). Your code can apply your business logic to determine the response, including what replies to send, which Output Contexts are set, and any parameters stored in those Contexts.
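For illustration, a minimal webhook sketch in Python/Flask could look like the following; the "order.status" intent and the order_id parameter are hypothetical, so substitute your own intent and parameter names.

```python
# A minimal sketch of a Dialogflow ES fulfillment webhook (Flask).
# The intent name "order.status" and the "order_id" parameter are hypothetical.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def webhook():
    req = request.get_json(force=True)
    intent = req["queryResult"]["intent"]["displayName"]
    params = req["queryResult"].get("parameters", {})
    session = req["session"]

    if intent == "order.status":                      # your business logic goes here
        order_id = params.get("order_id", "unknown")
        reply = f"Order {order_id} is on its way."
        # Set an Output Context so the next turn knows which order was discussed.
        contexts = [{
            "name": f"{session}/contexts/order-followup",
            "lifespanCount": 3,
            "parameters": {"order_id": order_id},
        }]
    else:
        reply = "Sorry, I can't help with that yet."
        contexts = []

    return jsonify({"fulfillmentText": reply, "outputContexts": contexts})
```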

Related

How to add multiple QnA bots to Enterprise Assistant?

Our teams create individual bots for themselves and now we want to integrate them into the Enterprise Assistant.
What I need help with is Multi Bot Orchestration.
Has anyone been able to do this efficiently?
Basically, the user asks a question in the Enterprise Assistant and then the bot gets the answer from the respective child/skill QnA bot.
I am able to add skills like calendar, people, SAP, etc. but dealing with QnA bots is proving to be an impossible challenge.
To create a multi-bot orchestration, we need to include multiple levels of dispatchers, plus pipelines for exception handling and fallbacks. It needs an expert-system-style agent that can coordinate multiple NLP models. A chatbot is designed around NLP modelling, and the NLP implementation depends on the number of operations the enterprise application has to handle. The following are the different NLP models required to handle a complete enterprise application:
General QnA
Billing information
Calendar
Leave Planning
Employee storage disk
On demand service to the clients
Fault tolerance
Saving the QnA pattern of chat between bot and human
Calculating the accuracy
Taking users own request and performing NLP
There are many such requirements when implementing a bot. All the models are trained and connected to the dispatcher.
Bot orchestration consists of five different components:
Ingress Pipeline
Egress Pipeline
Dispatcher
Exception pipeline
Fallback pipeline
Ingress Pipeline:
The ingress pipeline handles a few things, such as the following (a minimal sketch appears after the list):
Language detect and translate
Entity recognition
User input redaction
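As an illustration only, an ingress stage might look like this sketch; it assumes the langdetect package for language detection, uses a simple regex for redaction, and leaves translation as a stub.

```python
# A minimal, illustrative ingress stage: redact obvious PII, detect the
# language, and (optionally) translate before handing off to the dispatcher.
import re
from langdetect import detect  # pip install langdetect

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def translate_to_english(text: str) -> str:
    # Placeholder: plug in your translation service here.
    return text

def ingress(user_text: str) -> dict:
    redacted = EMAIL.sub("[email]", user_text)   # user input redaction
    lang = detect(redacted)                      # language detection
    text = translate_to_english(redacted) if lang != "en" else redacted
    return {"text": text, "lang": lang}
```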
Egress Pipeline:
The following are the requirements and operations in the egress pipeline:
3rd party tools for analytics
Managed responses (default responses)
Data service redaction
Dispatcher:
The dispatcher has two parts:
Policies and rules
NLU (Natural Language Understanding) strategy
Policies and Rules
The policies and rules of the organization must be trained into the model, because if the user asks for the financial status of the organization, the bot must understand which policies and rules it has to follow in order to answer the question.
Natural Language Understanding
Natural Language Understanding (NLU) is the computer's ability to understand what we say: working out the meaning of what the user types in his or her own words.
The dispatcher communicates with both the ingress and egress pipelines, as it is the parent handler for all the operations and model training results.
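As an illustration of the routing idea (not any particular product's API), a dispatcher can score the utterance against each child QnA bot and fall back when nothing is confident enough; the child-bot interface and the threshold below are assumptions.

```python
# Illustrative dispatcher: route to the most confident child QnA bot,
# otherwise hand off to the fallback pipeline.
from typing import Callable, Dict, Tuple

# Assumed interface: each child bot returns (confidence, answer) for a phrase.
ChildBot = Callable[[str], Tuple[float, str]]

def dispatch(text: str, bots: Dict[str, ChildBot], threshold: float = 0.6) -> str:
    best_conf, best_answer = 0.0, ""
    for name, bot in bots.items():
        conf, answer = bot(text)
        if conf > best_conf:
            best_conf, best_answer = conf, answer
    if best_conf < threshold:
        # Fallback pipeline: ask the user to rephrase or escalate to a human.
        return "I'm not sure which team can answer that. Could you rephrase?"
    return best_answer
```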
To address exceptions and rollbacks, we need exception handlers, which are also expert-system agents; each agent has some expertise in handling a class of exceptions.
Example:
Question: Give the details of pithon projects of this monthh.
Handled exceptions: in the above question, "python" was spelt "pithon" and "month" was spelt "monthh". The NLP and NLU layers have to recognize these cases and handle the misspelt words as exceptions.
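As a rough sketch of handling that case, unknown tokens can be snapped to the closest known vocabulary entry before intent matching; the vocabulary below is a made-up example.

```python
# Illustrative spelling correction using only the standard library.
import difflib

VOCAB = ["python", "java", "month", "week", "projects", "details"]

def correct(text: str) -> str:
    fixed = []
    for token in text.lower().split():
        match = difflib.get_close_matches(token, VOCAB, n=1, cutoff=0.8)
        fixed.append(match[0] if match else token)
    return " ".join(fixed)

print(correct("Give the details of pithon projects of this monthh"))
# -> "give the details of python projects of this month"
```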
The following link explains the architecture of multi-bot orchestration in more detail:
https://servisbot.com/multi-bot-orchestration-architecture/

Single dialogflow agent for several customers

I have done a POC on Dialogflow to programmatically create agents and intents. It's working well. However, I soon realized I can only create one agent per project. My use case is to use Dialogflow for multiple customers, each with their own FAQ, so keeping one agent per customer would make sense, but creating a separate project for each customer doesn't seem like an ideal choice. I am looking for some guidance on using one agent for multiple customers while making sure there is no conflict. Is this achievable? One way I can think of is to use a fulfillment service: when a user asks a question, I'll pass the customer context along with the question to the fulfillment service, and use that context to find the answer specific to that customer.
You can achieve a multiple-agents-in-one-project setup if you use Dialogflow CX. The limit is 100 agents per project for Dialogflow CX.
The downside is that if you created an agent in Dialogflow ES, you cannot migrate it to Dialogflow CX, as CX introduces new concepts like flows, pages, and state handlers. See the comparison between ES and CX.
CX agent -> This is an advanced agent type that is suitable for large or very complex agents. Flows and pages are the building blocks of conversation design, and state handlers are used to control conversation paths. The CX agent type is summarized in Dialogflow CX basics.
Also, Dialogflow CX is quite new and its feature set is not yet as robust as Dialogflow ES's. You can read through this article comparing the limitations of CX and ES.
But if you are using features that only Dialogflow ES has, the only option is to have one agent per project.
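If you do stay on a single ES agent, the fulfillment idea described in the question could look roughly like this sketch: the client sends a customer identifier in the detectIntent queryParams payload, and the webhook reads it from originalDetectIntentRequest.payload to answer from that customer's FAQ. The payload key and the FAQ store are assumptions.

```python
# Illustrative webhook for a single shared ES agent serving several customers.
# The client must send {"customer_id": ...} in QueryParameters.payload, which
# Dialogflow forwards to the webhook as originalDetectIntentRequest.payload.
from flask import Flask, request, jsonify

app = Flask(__name__)

FAQS = {  # hypothetical per-customer answer store
    "acme":   {"what is your refund policy": "Acme refunds within 30 days."},
    "globex": {"what is your refund policy": "Globex refunds within 14 days."},
}

@app.route("/webhook", methods=["POST"])
def webhook():
    req = request.get_json(force=True)
    payload = req.get("originalDetectIntentRequest", {}).get("payload", {})
    customer = payload.get("customer_id")
    question = req["queryResult"]["queryText"].strip().lower().rstrip("?")
    answer = FAQS.get(customer, {}).get(question, "Sorry, I don't have that answer.")
    return jsonify({"fulfillmentText": answer})
```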

Dialogflow Integrations (Using the fulfillment webhook model versus API interactions model)

I am trying to understand the different models for building a bot using Dialogflow and came across these two methods:
Fulfillment model (with webhook enabled) documentation here
API Interactions documentation here
I understand that both of these models have their own pros and cons, and I understand how they both work. Most online examples show the fulfillment method (I guess that's more common?).
However, I would still like to ask: what reasons are there to choose one over the other? If anyone has used either model before, what limitations are there?
P.S.: I've looked through quite a number of tutorials and read through the Dialogflow documentation.
Integration by fulfillment is indeed the default approach, because you use Dialogflow to design your conversation flow and (big bonus) manage the integration with the various channels (i.e. Telegram, Facebook).
It is the easiest way to design a fully fledged conversation; you only need to worry about the POST hooks that are sent to your backend, either to save data or to alter the conversation (add contexts or trigger events).
Important remark: all user traffic (who says what) goes via Dialogflow cloud
API interaction becomes a good option when you already have an existing frontend (say an existing application or website) and you want to plug in Dialogflow's NLP capabilities.
I have done something like that to create an FAQ chatbot that called Dialogflow to identify which intent matched a given phrase, while the bot itself was deployed in MS Teams.
The architecture would indeed look like the one in the documentation: the MS Teams ecosystem is the "End-User" part, and my Java app ("Your System") uses the API to call Dialogflow.
Important remark: only given statements (the ones you send) go to Dialogflow cloud
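For reference, a minimal sketch of the API-interaction model with the google-cloud-dialogflow Python client might look like this; the project and session IDs are placeholders, and your own application decides what to do with the matched intent.

```python
# Illustrative API-interaction call: your own frontend (e.g. a Teams bot)
# sends the phrase to Dialogflow purely for intent matching.
from google.cloud import dialogflow_v2 as dialogflow

def match_intent(project_id: str, session_id: str, text: str):
    client = dialogflow.SessionsClient()
    session = client.session_path(project_id, session_id)
    response = client.detect_intent(
        request={
            "session": session,
            "query_input": {"text": {"text": text, "language_code": "en"}},
        }
    )
    result = response.query_result
    return result.intent.display_name, result.intent_detection_confidence

# Example (placeholder IDs):
print(match_intent("my-gcp-project", "user-123", "How do I reset my password?"))
```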

Is there a way to use inner features of an application through Bixby?

I want Bixby to access the inner features of an application, for example to compose a message and send it in a chat app. Is there a way to do so?
Sure, you can! As long as the application has REST (or SOAP) endpoints that can be invoked, it can be called from Bixby.
Having said that, Bixby has many built-in features that allow developers to create rich, natural conversational experiences. As a general guide, the data-intensive and computationally complex parts of your capsule should run on an external REST endpoint, while the conversational experience (and the associated logic) should be driven from within Bixby. Hope this helps.

Chatbot - Possible to call Watson API to respond user queries?

A chatbot has been developed using IBM Bluemix to respond to queries from grade one students.
Suppose a question is raised: "What is the life cycle of a leaf?" As of now, the chatbot has no entities related to leaf, life cycle, etc.
The chatbot identifies the above query as irrelevant. In this case, is it possible to call any Watson knowledge API to answer such queries?
Or can we make third-party searches (Google/Bing)?
Or is the only option to teach the chatbot more relevant entities?
You can use the Watson Discovery tool:
https://www.ibm.com/watson/services/discovery/
As @Rabindra said, you can use Discovery. IBM developers built one example using the Conversation and Discovery services in Java, and I built one example in Node.js based on the conversation-simple example. You can read the README to understand how it works.
Basically, you need to know that this example has one action variable used to call Discovery when the workspace doesn't have the "relevant information" to answer the user; the Discovery service is then called to get relevant answers.
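As a rough sketch of that flow with the ibm-watson Python SDK: call the Conversation/Assistant workspace first, and if the top intent is weak, query Discovery with the same question. The credentials, IDs, and the 0.5 confidence threshold are assumptions, and you would also need to set the service URLs for your region.

```python
# Illustrative Assistant -> Discovery fallback (ibm-watson SDK).
from ibm_watson import AssistantV1, DiscoveryV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

assistant = AssistantV1(version="2021-06-14",
                        authenticator=IAMAuthenticator("ASSISTANT_APIKEY"))
discovery = DiscoveryV1(version="2019-04-30",
                        authenticator=IAMAuthenticator("DISCOVERY_APIKEY"))

def answer(workspace_id, environment_id, collection_id, question):
    resp = assistant.message(workspace_id=workspace_id,
                             input={"text": question}).get_result()
    intents = resp.get("intents", [])
    if intents and intents[0]["confidence"] >= 0.5:
        return resp["output"]["text"][0]
    # No confident intent: fall back to Discovery for long-tail questions.
    docs = discovery.query(environment_id=environment_id,
                           collection_id=collection_id,
                           natural_language_query=question,
                           count=1).get_result()
    results = docs.get("results", [])
    return results[0].get("text", "No answer found.") if results else "No answer found."
```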
You can see more in this video from the official IBM Watson channel.
See more about Discovery Service.
See the API Reference for using Discovery Service.
You can also check the Entity Linking service from Bing: https://azure.microsoft.com/en-us/services/cognitive-services/entity-linking-intelligence-service/. It is in preview for now, so you will get limited queries per second, but it is free to use.
