I was hoping to get guidance on how I might use Dialogflow to shorten the process of getting information from an action.
For example, I would like to provide the following command:
"Ok Google, Ask my test app what is the capital city of the US."
However, I currently need to say:
"Ok, Google, open my test app"
I would then need to wait for a response before providing the name of the country that I need the capital city for.
I'm finding the guidance from the Google documentation difficult to follow.
Do I need to create an implicit invocation in order to give the parameter with the launch command?
Yes, if you want to trigger intents from the launch command, you have to add those intents as a deep link / implicit invocation. To do this, the only thing you have to do is create an intent which can handle the parameter and then add it to the implicit invocations under the Actions on Google integration section in Dialogflow.
You don't have to create any intent specially for implicit invocation, you can just use the ones you have already created for normal conversations.
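For example, here is a minimal Node.js fulfillment sketch under assumed names (the capital_city action and the country parameter are illustrative, not from the original question); the same intent serves both normal conversation and the deep link:

// Minimal fulfillment sketch using the actions-on-google v1 Node.js client.
// Assumes a Dialogflow intent whose action is "capital_city" and which has
// a "country" parameter (e.g. of type @sys.geo-country).
const { DialogflowApp } = require('actions-on-google');

exports.webhook = (request, response) => {
  const app = new DialogflowApp({ request, response });

  const actionMap = new Map();
  actionMap.set('capital_city', (app) => {
    const country = app.getArgument('country'); // parameter parsed by Dialogflow
    // A real lookup would go here; this reply is a placeholder.
    app.tell('You asked for the capital city of ' + country + '.');
  });
  app.handleRequest(actionMap);
};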
I've written a guide to implicit invocations / deep links for Google Assistant if you need more help or information. Here is a video with the end result.
I am completely new to the "Actions on Google" world, but by following some tutorials (like this) I have already achieved good results.
My test
Using Google Assistant and/or a Google Home Mini, I send my commands to a personal Node.js server online.
To do this:
I created a new project on https://console.actions.google.com/
selected the conversational option
selected the create action / custom intent option
from Dialogflow, I personalized the Default Welcome Intent and created a new intent with the Fulfillment option set to Enable webhook call for this intent
And obviously, from Dialogflow > Fulfillment, I enabled the Webhook option (with the URL of my Node.js app), not the Inline editor.
This procedure works: when my app recognizes my custom intent, the answer is sent to my Node.js app online.
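For reference, a minimal sketch of what such a Node.js endpoint can look like, assuming Express (the /webhook path and the reply text are placeholders, not from the original post):

// Minimal sketch of a Dialogflow (v1-style) webhook endpoint in Node.js/Express.
const express = require('express');
const app = express();
app.use(express.json());

app.post('/webhook', (req, res) => {
  // Dialogflow v1 sends the matched intent under result.metadata.intentName.
  const intent = req.body.result ? req.body.result.metadata.intentName : 'unknown';
  // Reply in the v1 webhook response format: "speech" and "displayText".
  res.json({ speech: 'Handled ' + intent, displayText: 'Handled ' + intent });
});

app.listen(3000);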
My problem
The procedure works, but I always have to perform two steps before I can carry out my action:
1) Hey Google, talk with "nameofmyapp"
2) Say the command
My goal
Execute my command directly, without always having to perform these two steps.
Absolutely! Google calls this "deep linking". With this, you'll be able to do something like
Hey Google, ask nameofmyapp to command
See the documentation for details, but in short you'll
Make sure you have an Intent for the command in Dialogflow, with several possible phrases that can be used to trigger it.
These phrases should be what you'd say under "command" in the example above - you'd omit the "to" part.
Go to the Integrations section in Dialogflow, under the Google Assistant integration.
In the Implicit invocation section, select the Intent that you'd like to allow as a deep-linked Intent.
If the command takes action and then should quit, make sure either you have set this in Dialogflow or your fulfillment calls app.close();
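Putting those steps together, a minimal fulfillment sketch (the run_command action name is illustrative, and this uses the v1 Node.js client library):

// Sketch: a deep-linkable intent handler that performs the command and then quits.
const { DialogflowApp } = require('actions-on-google');

exports.webhook = (request, response) => {
  const app = new DialogflowApp({ request, response });

  const actionMap = new Map();
  actionMap.set('run_command', (app) => {
    // ...perform the command here...
    // In the v1 client, tell() sends a final response and closes the conversation;
    // use ask() instead if the conversation should stay open.
    app.tell('Done.');
  });
  app.handleRequest(actionMap);
};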
I've set up a simple QnA bot which is linked to a QnA service. Within the QnA service I have set up some questions which have follow-up prompts (dependents), e.g. how do I get to a campus: via bus, via train, etc. (see image in link). Within the QnA Maker testing function you can just click a button called "enable multi-turn", which provides functional buttons telling you what can/should be asked next via the dependents of the answer (see image in link).
However, when the bot is used within a channel or in the emulator, nothing of the sort appears (see image), which is a bit odd. And obviously I want to implement this functionality in the bot, as it makes life much easier for the users.
I am new to the whole bot thing (I started last month), so I have browsed the internet to see what I could find, but I could not see anything outside of writing the questions within the bot itself (see Microsoft's documentation), which makes using QnA Maker pretty much pointless.
What I think I need to do is intercept the message from QnA Maker as it replies to the user, look at the JSON received to find whether it has any dependents, and if so run a different dialog which gets the contextual dependents' names and runs a simple for loop generating cards for each dependent, then send the message to the user with the generated cards. However, I'm not sure how to intercept the JSON and look for any dependents, or whether there is a button in Azure which just does this.
There is this experimental sample that has been released by the Bot Framework team which demonstrates how to handle follow-up prompts.
You can download it (you will have to download the whole repo) then plug in your details to the appsettings.json file and you should be able to test it using the Bot Framework Emulator - these were the only steps that I had to perform.
The key part is this method which checks to see if the result contains any prompts and returns the response accordingly - it is called inside the FuctionDialog.
If you're only ever going to implement a single level of prompts - i.e. you have a question which shows prompts, and when you click on one of these prompts it displays an answer rather than taking you to another prompt - then it is possible to take the guts of the logic from the ProcessAsync method (checking for prompts), along with the required classes from the Models folder and the CardHelper class, and get this to work in your existing application. You won't have to worry about the QnABotState, because you'll only be going a single level deep, so you don't need to track where you are in the series of prompts. e.g.
// Query the QnA Maker service with the user's utterance.
var query = turnContext.Activity.Text;
var qnaResult = await _qnaService.QueryQnAServiceAsync(query, new QnABotState());
var qnaAnswer = qnaResult[0].Answer;
var prompts = qnaResult[0].Context?.Prompts;

IActivity outputActivity;
if (prompts == null || prompts.Length < 1)
{
    // No follow-up prompts: reply with plain text.
    outputActivity = MessageFactory.Text(qnaAnswer);
}
else
{
    // Follow-up prompts exist: render them as buttons on a hero card.
    outputActivity = CardHelper.GetHeroCard(qnaAnswer, prompts);
}

await turnContext.SendActivityAsync(outputActivity);
Could someone please advise where we add the code mentioned above? I am a rookie and have very basic knowledge of programming. I am using Visual Studio with C# for this. How and where do I add this code to make it work? I am also not diving too deep - just trying to make some simple logic where a user clicks on a few follow-up prompts and is taken to the required information. Would really appreciate it if someone could help. Thanks
The first picture shows the starting follow-up prompt.
The second picture shows what follows the first follow-up prompt.
I have been working with Google Dialogflow to create a Google Assistant experience.
My GA Action raises support tickets, and those tickets are created in our system via an API.
We ask the user to describe the issue they are facing, and we have used a fallback intent to capture the issue/ticket description. (Since the reply can be any free text, is this the best way to capture free text?)
Once the user gives a description, a webhook is called and the results are sent to our backend to capture.
We have noticed that when the user uses the words "not working" as part of the issue description, it always calls the welcome intent instead of going to the follow-up intent. If the user describes the issue without using those words, it works fine. Below are 2 different responses.
I personally feel that this is a bug in GA. Is there any way to solve it?
I think you're doing some things wrong. I don't have enough information to understand 100% what you are doing, but I will try to give you some general advice:
A fallback intent is used to 'fall back' on when a user asks something that is not covered by any of your other intents. That's why your fallback intent has 'input.unknown' set as its action: it will be triggered when the user gives some input that is unknown to your application. For example, I don't think your '(Pazo) Support Action' will provide an answer if the user asks to book a plane to Iceland, so that's where your fallback intent comes in, to give an answer such as 'Sorry, I can't answer that question. Pazo is here to give you support in... What can I do for you?'
Your user can either register a complaint or raise a support ticket, if I'm getting this right? I recommend you make two separate intents: one to handle the complaints and one to handle the support tickets, as in the sketch below.
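A minimal sketch of that split, using the v1 Node.js client library (the action names are illustrative; set them on the matching Dialogflow intents):

// Sketch: separate handlers for complaints and support tickets, plus the fallback.
const { DialogflowApp } = require('actions-on-google');

exports.webhook = (request, response) => {
  const app = new DialogflowApp({ request, response });

  const actionMap = new Map();
  actionMap.set('complaint.register', (app) => {
    app.ask('Please describe your complaint.');
  });
  actionMap.set('ticket.raise', (app) => {
    app.ask('Please describe the issue for your support ticket.');
  });
  actionMap.set('input.unknown', (app) => {
    // Fallback: the user said something no other intent matches.
    app.ask("Sorry, I can't answer that. You can register a complaint or raise a support ticket.");
  });
  app.handleRequest(actionMap);
};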
Before developing advanced actions with a separate webhook and a lot of logic calling an API etc., I recommend going through the documentation of Actions on Google:
https://developers.google.com/actions/extending-the-assistant
During our testing, we were unable to complete at least one of the behaviors or actions advertised by your app. Please make sure that a user can complete all core conversational flows listed in your registration information or recommended by your app.
Thank you for submitting your assistant app for review!
During testing, your app was unable to complete a function detailed in the app’s description. The reviewer interacted with the app by saying: “how many iphones were sold in the UK?” and the app replied “I didn't get that. Can you try with other question?" and left the conversation.
How can I resolve the above point to get my Google Assistant action approved?
Without seeing the code in question or the intent you think should be handling this in Dialogflow, it is pretty difficult - but we can generalize.
It sounds like you have two issues:
Your fallback intent that generated the "I didn't get that" message is closing the conversation. This means that either the "close conversation" checkbox is checked in Dialogflow, you're using the app.tell() method when you should be using app.ask() instead, or the JSON you're sending back has close conversation set to true; a short sketch illustrating this follows the next point.
You don't have an intent to handle the question about how many iPhones were sold in the UK. This could be because you just don't list anything like that as a sample phrase, or the two parameters (the one for object type and the one for location) aren't using entity types that would match.
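To illustrate the first issue, a minimal sketch with the v1 Node.js client library (input.unknown is Dialogflow's default fallback action; the reply text is illustrative):

// ask() keeps the conversation open for another turn;
// tell() sends a final response and ends the conversation.
const { DialogflowApp } = require('actions-on-google');

exports.webhook = (request, response) => {
  const app = new DialogflowApp({ request, response });

  const actionMap = new Map();
  actionMap.set('input.unknown', (app) => {
    // app.tell("I didn't get that.") here would end the conversation after the fallback.
    app.ask("I didn't get that. Can you try another question?");
  });
  app.handleRequest(actionMap);
};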
It means that somewhere, either in your app description or in a Dialogflow intent (they have full access to see what's in your intents), you hinted that “how many iphones were sold in the UK?” would be a valid question. Try changing the description/intents to properly match the restrictions of your app.
I’m building an agent on API.ai where I ask the user a question. I’m not expecting them to answer the question back to my agent, but they may wish to follow up on this question later by asking for more information. If I ‘end the conversation’ in my intent, they can’t then say something such as ‘tell me more’ without invoking my action again from scratch (in which case all context is lost); but similarly, if they don’t say anything, then (on Google Home at least) the question gets repeated, as it's expecting a response.
Is there any way I could do this?
Actions are conversational experiences. Typically your app would ask a question and the user would provide a response. Once the user exits your app, the conversational context goes back to the assistant.
If you want to provide a quick way to let the user engage with your app again, then consider implementing support for deep links: https://developers.google.com/actions/apiai/define-actions#define_additional_actions
In addition to what Leon has said, you could also manage the context of the user yourself (instead of relying on API.AI's Contexts) and key off the anonymous userid that you get with each request.
This way they can deep-link back to ask you a followup question, and you know "who" is returning and where the conversation last stood when you gave a reply.
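A minimal sketch of that idea with the v1 Node.js client library (the in-memory store and the action names are illustrative; production code would use a database):

// Sketch: persist where the conversation left off, keyed by the anonymous user ID.
const { DialogflowApp } = require('actions-on-google');
const lastTopicByUser = {}; // illustrative in-memory store

exports.webhook = (request, response) => {
  const app = new DialogflowApp({ request, response });

  const actionMap = new Map();
  actionMap.set('ask.question', (app) => {
    const userId = app.getUser().userId; // anonymous ID sent with each request
    lastTopicByUser[userId] = 'the question we just asked';
    app.tell('Here is your question. Come back and say "tell me more" at any time.');
  });
  actionMap.set('followup.more', (app) => {
    // Deep-linked follow-up: restore the saved context for this user.
    const topic = lastTopicByUser[app.getUser().userId];
    app.ask(topic ? 'Following up on ' + topic + '...' : "We haven't spoken yet.");
  });
  app.handleRequest(actionMap);
};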
I understand that what you basically want is to create an intent in which what the user will say is not predictable (they may not say anything at all).
In that case, you can simply end the response with a prompt to make it explicit: "...Do you want to ask something more?". If the user says "no", end the conversation in a different intent; otherwise, carry on with the flow.
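A sketch of that pattern with the v1 Node.js client library (the action names are illustrative):

// End each answer with a reprompt; close only when the user says "no".
const { DialogflowApp } = require('actions-on-google');

exports.webhook = (request, response) => {
  const app = new DialogflowApp({ request, response });

  const actionMap = new Map();
  actionMap.set('question.answer', (app) => {
    app.ask('Here is your answer. Do you want to ask something more?');
  });
  actionMap.set('confirm.no', (app) => {
    app.tell('Okay, goodbye!'); // tell() ends the conversation
  });
  app.handleRequest(actionMap);
};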