I got started with Dialogflow two weeks ago and have a good basic understanding of how everything works. I'm trying to do the following:
I have one agent, which should take a query from the user (more like a keyword in a sentence, for example "I'd like to buy a phone" or "I'd like to get a loan").
This query should go to a system which has a list of chatbots registered. It has to find the best-matching agent for the query.
My question is, how do I redirect from the initial chatbot (which listens to the query) to the 'final' agent?
Is there a blog post, documentation, or something similar for this? Unfortunately, I was not able to find anything.
Thanks in advance!
You can use your base agent's webhook as a dispatcher to relay a message to another agent, the same way you would do it with curl.
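For instance, a minimal Python/Flask sketch of such a dispatcher, assuming the v1 /query endpoint; the client access tokens and the "topic" routing parameter are made-up placeholders:

```python
import uuid

import requests
from flask import Flask, jsonify, request

app = Flask(__name__)

# Map a keyword detected by the base agent to the client access token
# of the specialised agent that should actually answer (placeholders).
CLIENT_ACCESS_TOKENS = {
    "phone": "TOKEN_OF_PHONE_AGENT",
    "loan": "TOKEN_OF_LOAN_AGENT",
}

@app.route("/webhook", methods=["POST"])
def dispatch():
    req = request.get_json()
    query = req["result"]["resolvedQuery"]             # v1 webhook payload
    topic = req["result"]["parameters"].get("topic")   # e.g. "phone" or "loan"
    token = CLIENT_ACCESS_TOKENS.get(topic)

    if not token:
        return jsonify({"speech": "Sorry, I have no agent for that."})

    # Relay the original query to the matched agent, exactly as with curl.
    resp = requests.post(
        "https://api.dialogflow.com/v1/query?v=20150910",
        headers={"Authorization": "Bearer " + token},
        json={"query": query, "lang": "en", "sessionId": str(uuid.uuid4())},
    ).json()

    # Return the specialised agent's answer as this agent's fulfillment.
    answer = resp["result"]["fulfillment"]["speech"]
    return jsonify({"speech": answer, "displayText": answer})
```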
We took another approach, where we're using a middleware in front of Dialogflow that does the dispatching based on the base agent's response.
Once the base agent returns another agent to our middleware, we use the event system to query that specific agent's welcome intent and return it to the UI.
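For the event part, something along these lines (again Python and the v1 /query endpoint; the WELCOME event name and the token are placeholders):

```python
import uuid

import requests

def welcome_from(agent_token, session_id=None):
    """Trigger the handed-off agent's welcome intent via an event."""
    resp = requests.post(
        "https://api.dialogflow.com/v1/query?v=20150910",
        headers={"Authorization": "Bearer " + agent_token},
        json={
            "event": {"name": "WELCOME"},   # fires that agent's welcome intent
            "lang": "en",
            "sessionId": session_id or str(uuid.uuid4()),
        },
    ).json()
    # The middleware returns this text to the UI as the new agent's greeting.
    return resp["result"]["fulfillment"]["speech"]
```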
I'm back again with a question about NLP. I made my own back-end, which on one side can connect to websites, the Google Assistant and Facebook Messenger, and on the other end to Dialogflow. On the side, it logs interactions and does some other database work.
Now, I'm trying to connect this back-end to Alexa. I made a project which calls my endpoint. This project has one intent with a parameter that should capture the raw user input, send it to my back-end for processing, and then return the parsed response. It feels like there is no real way to collect and send the raw user input so I can process it myself (on Dialogflow) instead of using the Amazon way of mapping intents and such.
I know Dialogflow can export to Alexa, but this is not an option for me. I really hope one of you can point me in the right direction.
I just need a way to collect the raw user input, and respond in an Alexa accepted response format.
For Actions on Google, for example, I'm using a Custom Project Action Package.
Thanks a lot in advance!
To capture arbitrary user input, you can use sys.any in Google Assistant (Dialogflow) and AMAZON.SearchQuery in Amazon Alexa.
In Alexa, you have to add a carrier phrase to use AMAZON.SearchQuery, and you can't combine it with any other slot in the same sample utterance.
So there are some limitations. I hope this answer helps you.
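For illustration, a rough Python/Flask sketch of an endpoint that reads the AMAZON.SearchQuery slot (named "rawInput" here, a made-up name) and answers in the Alexa response format; request signature verification and the call to your own back-end are omitted or stubbed:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def ask_my_backend(text):
    # Placeholder for the call to your own back-end / Dialogflow.
    return "You said: " + text

@app.route("/alexa", methods=["POST"])
def alexa_endpoint():
    body = request.get_json()
    # For an IntentRequest, the raw utterance (minus the carrier phrase,
    # e.g. "process {rawInput}") arrives as the slot value.
    slots = body["request"]["intent"]["slots"]
    raw_text = slots["rawInput"]["value"]

    # Wrap the back-end's answer in the Alexa response format.
    return jsonify({
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": ask_my_backend(raw_text)},
            "shouldEndSession": False,
        },
    })
```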
Google Assistant will search the internet or Wikipedia for questions it doesn't have an answer to. Is it possible for my agent to do the same? Rather than saying "Sorry, I don't understand", it would make the agent appear more intelligent.
Yes!
1. Enable the webhook for the fallback intent.
2. Call a search engine API like Google Search or DuckDuckGo (free, but with limited results).
3. Parse the service response and create a reply.
4. Update contexts as required.
5. Send the reply back to the user (see the sketch below).
All this should happen within 5 seconds!
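A rough sketch of such a fallback webhook (Python/Flask, the DuckDuckGo Instant Answer API, and v1-style webhook fields; adapt to your own setup):

```python
import requests
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/fallback", methods=["POST"])
def fallback():
    # 1. The fallback intent forwards the unmatched user query here.
    query = request.get_json()["result"]["resolvedQuery"]

    # 2. Call a search API (DuckDuckGo Instant Answers is free but limited).
    answer = requests.get(
        "https://api.duckduckgo.com/",
        params={"q": query, "format": "json", "no_html": 1},
        timeout=3,   # leave room inside the 5-second webhook limit
    ).json()

    # 3. Parse the service response and create a reply.
    reply = answer.get("AbstractText") or "Sorry, I couldn't find anything."

    # 4./5. Update contexts as needed and send the reply back.
    return jsonify({"speech": reply, "displayText": reply, "contextOut": []})
```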
That said, it would be better to build the conversation agent with the correct use of intents rather than overriding the fallback as I just described. You should follow best practices when developing your agent.
I am trying to create a chatbot application where users can create their own bot, like Botengine. After searching Google, I saw that I need some NLP API to process users' queries. Following the wit.ai basic example, I can set and get data. Now I'm confused: how am I going to create a bot engine?
As far as I understand the flow, here is an example for pizza delivery:
1. The user enters a welcome message, e.g. "Hi", "Hello"...
2. The welcome reply, saved by the bot owner, is fetched from my database.
3. The user enters a query, and I hit the wit.ai API to process it. Example: the user's query is "What kinds of pizza are available in your store?" and wit.ai responds with the details of the intent "pizza_type".
4. Then I look up the intent returned by wit.ai in my database.
So, is that the right flow for creating a chatbot? Am I heading in the right direction? Could anyone give me a link or an example I can go through? I want to build this application using Node.js. I have also found some examples in node-wit, but I can't figure out how to implement this.
Thanks
What you need is a webhook. You need to call different APIs based on the user's intent. You can distinguish between intents using the parameters available in the request. Check this out: Creating nodejs webhook for dialogflow
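The general shape, sketched in Python here even though you'll likely write it in Node.js (the action names and handler functions are made up, and the payload fields follow the v1 webhook format):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def list_pizza_types(params):
    return "We have margherita, pepperoni and veggie."

def order_pizza(params):
    return "Your {} pizza is on its way.".format(params.get("pizza_type", ""))

# Route each intent's "action" field to its own handler.
HANDLERS = {
    "pizza.types": list_pizza_types,
    "pizza.order": order_pizza,
}

@app.route("/webhook", methods=["POST"])
def webhook():
    result = request.get_json()["result"]
    handler = HANDLERS.get(result["action"])
    reply = handler(result["parameters"]) if handler else "Sorry, I can't help with that."
    return jsonify({"speech": reply, "displayText": reply})
```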
I'm coding a bot using PHP BotMan for complexity reasons and using the Dialogflow query API to extract and manipulate the information from the response. I've seen examples and hints from people here and on the Dialogflow forum suggesting the use of contexts or events, some of them mixing both. What is the better way to handle this?
The flow of the application is:
1. The user messages the bot.
2. The bot queries Dialogflow (with text and/or an #event?).
3. A reply is processed internally, or Dialogflow's slot-filling* request is returned.
4. The bot replies to the user with the last reply or asks them to fill the slot.
Also, how can I be sure that a slot-filling process is finished, with "actionIncomplete" only having two values, NULL or TRUE? The Dialogflow query response doesn't show which slot-filling parameters are required or not…
Thanks for the help!!
*Slot filling is when Dialogflow sends a text response requesting the required parameters to finish an intent, adding the replied values to the context.
I was trying something similar to your scenario; here are a few points I found helpful:
When slot filling with a webhook, I can't use the "Required" parameters field, since I have to control the input parameters via the webhook (querying the database to provide options). Which means the actionIncomplete field is not useful anymore.
I personally prefer to use contexts, as they let you add/remove parameters, which gives you more control.
Hence the dialog was designed to use the webhook to check all required parameters before moving on to the next conversation step, and to pop up a quick-replies menu to ease and restrict the possible input from users (sketched below).
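A rough sketch of that idea, with the webhook itself deciding whether all required values have been collected instead of relying on actionIncomplete (the parameter names are made up; v1-style webhook fields):

```python
REQUIRED = ("pizza_type", "size", "address")  # hypothetical required slots

def handle(result):
    """Build a v1 webhook response from a Dialogflow query result."""
    params = result.get("parameters", {})
    missing = [name for name in REQUIRED if not params.get(name)]

    if missing:
        # Ask for the next missing value and keep what we have in a context.
        return {
            "speech": "What {} would you like?".format(missing[0]),
            "contextOut": [
                {"name": "order_slots", "lifespan": 5, "parameters": params}
            ],
        }

    # Everything collected: move on to the next step of the conversation.
    return {
        "speech": "Ordering a {} {} pizza to {}.".format(
            params["size"], params["pizza_type"], params["address"]
        )
    }
```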
HTH.
I'm just getting started with the Assistant features on the RPi. I was able to implement everything successfully up to this point, and I'm wondering a few things.
Scenario:
User: "Hey Google, please turn on my living room lights."
My code in hotword.py: has a function to perform that same action, based on ON_RECOGNIZING_SPEECH_FINISHED.
RPi/Google Home: I am not sure how it should respond to that.
I was able to capture the query asked by the user using ON_RECOGNIZING_SPEECH_FINISHED and its text argument (a str), and use it in my logic to perform the task. However, at the same time, "OK Google" responds with its own answer.
To mitigate this problem, I created a Google Action; now it understands my query and responds with the intent from API.AI. However, it doesn't act on turning the lights on. So I'm wondering how I can read the response from Google Home/API.AI as text and change my code to act on it locally.
appreciate it.
You will not get the response as text.
To get the response to the client app, use a webhook in API.AI and send a message to the client app using FCM.
Read the FCM message in the client app and perform the corresponding actions.
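A rough sketch of that flow, as a Python webhook using the legacy FCM HTTP API (the server key, device token and payload fields are placeholders):

```python
import requests
from flask import Flask, jsonify, request

app = Flask(__name__)

FCM_SERVER_KEY = "YOUR_FCM_SERVER_KEY"          # placeholder
DEVICE_TOKEN = "REGISTRATION_TOKEN_OF_THE_PI"   # placeholder

@app.route("/webhook", methods=["POST"])
def webhook():
    result = request.get_json()["result"]

    # Push the recognised action (e.g. "lights.on") as an FCM data message
    # that the client app on the Pi can read and act on locally.
    requests.post(
        "https://fcm.googleapis.com/fcm/send",
        headers={"Authorization": "key=" + FCM_SERVER_KEY},
        json={
            "to": DEVICE_TOKEN,
            "data": {
                "action": result["action"],
                "parameters": result["parameters"],
            },
        },
    )

    reply = "Okay, turning on the living room lights."
    return jsonify({"speech": reply, "displayText": reply})
```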
I finally was able to figure out multiple ways. I answered this in another Stack question; find more details in this post.
There are multiple ways to handle this, since Google doesn't give us the voice transcript; we let Google say our own transcript instead, which is kind of a solution for now.