Actions on Google - SendTyping response - dialogflow-es

I am trying to improve the user experience of my application. One UX improvement would be to reply to the user right after they trigger the intent. What I would like to do is simulate a "Typing" reply, so the experience on a voice-enabled device would be:
User: Who's the league leader in wins?
Assistant: Give me a second to do some investigation
Assistant: Ok, John Doe is currently leading with 10 wins.
Is there a way to send multiple responses for one request?
I am using API.AI and a webhook for fulfillment. I know that I can send multiple items in one response, but I would need to send multiple responses.

At this point, no, this isn't possible.
However, at Google I/O they indicated that they understand this is something people are looking for. Some solutions that address it have already been announced (such as notifications), and there may be others coming as well.
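For what it's worth, the two lines can at least be combined into a single turn. Here is a minimal sketch using the v1 actions-on-google Node.js client library (the action name and wiring are illustrative); note that both simple responses are spoken back-to-back in the same reply, with no way to insert a real delay between them:

```javascript
const express = require('express');
const bodyParser = require('body-parser');
const ApiAiApp = require('actions-on-google').ApiAiApp;

const server = express().use(bodyParser.json());

server.post('/webhook', (request, response) => {
  const app = new ApiAiApp({ request, response });

  // Two simple responses in one RichResponse: the Assistant reads them
  // back-to-back in the same reply, without a pause between them.
  function leagueLeader(app) {
    app.ask(app.buildRichResponse()
      .addSimpleResponse('Give me a second to do some investigation.')
      .addSimpleResponse('Ok, John Doe is currently leading with 10 wins.'));
  }

  // 'league.leader' is an illustrative action name set in API.AI.
  app.handleRequest(new Map([['league.leader', leagueLeader]]));
});

server.listen(8080);
```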

Related

How to ask "Was this helpful?" in DialogFlow at the end of conversation after rendering the response from Intent

So I have a flow prepared.
User: I would like to book an appointment
Bot: Sure. Does 3pm work for you?
User: Yes
Bot: Great. Appointment has been set. (Response from Fulfillment)
Bot: Anything else you need help with? Yes | No (How to achieve this)
I have tried triggering a followupEvent, but that won't display any response until the chain of intents is complete.
When the followupEventInput parameter is set for a WebhookResponse, Dialogflow ignores the fulfillmentText, fulfillmentMessages, and payload fields. When Dialogflow receives a webhook response that includes an event, it immediately triggers the corresponding intent in which it was defined.
I have end intents ready with responses for Yes and No, but I need help triggering them.
An intent shouldn't be used as a step in your flow or be tied to a single response; it's intended to represent a category of phrases your user might say to accomplish a certain goal in your conversation. The "was this helpful" prompt isn't triggered by any user phrase but rather prompts the user to continue the conversation, which shows that it shouldn't be a separate intent.
Having the "was this helpful" phrase available to multiple intents is a good choice, so it can be used throughout your conversation, but I would recommend saving the phrase in a file, an API, or a CMS and retrieving the response via code.
I'm not a PHP developer, but I expect it to be along the lines of: responseService.getResponse("requestFeedbackPrompt");
This allows you to retrieve the "was this helpful" phrase throughout your code without making the mistake of creating a separate intent for it, which will cause problems later on with keeping state.
If you decide to go with a single intent for this, you will quickly find it difficult to keep track of context, state, and which step of the conversation you are in, as multiple intents will pass through this generic intent.
What would you do if you needed a different variant of the "was this helpful" response? With the single-intent approach, you would end up creating an intent for each variation, and you would have to align the conversation flow and state accordingly every time.
If you use the service, you just call responseService.getResponse("OtherFeedbackPrompt");
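As a rough illustration of that suggestion (in Node.js rather than PHP, with illustrative keys and phrases), the service could be as small as:

```javascript
// responseService.js - a minimal sketch of the shared response service.
// The keys and the in-memory store are illustrative; in practice the
// phrases might live in a file, a CMS, or behind an API as suggested above.
const prompts = {
  requestFeedbackPrompt: 'Anything else you need help with?',
  OtherFeedbackPrompt: 'Was this helpful?'
};

function getResponse(key) {
  // Fall back to a generic prompt for unknown keys rather than failing.
  return prompts[key] || 'Is there anything else I can do for you?';
}

module.exports = { getResponse };
```

Any intent handler can then append responseService.getResponse('requestFeedbackPrompt') to its own fulfillment text, which keeps the prompt out of your intent definitions entirely.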
I have something similar in one of my bots, and I have taken a different approach to those mentioned.
My bot asks if there's anything it can help with at the end of an acknowledgement fulfilment.
The customer then has the option to respond with Yes or No.
Within the page that asks the question I have created routes.
One route for Yes and another for No.
The Yes route directs customers back to the point where they can start making selections. The No route provides a fulfilment to the customer and ends the session. I have used Yes and No intents for these.

The words "not working" always trigger the default intent in Google Assistant

I have been working with Google Dialogflow to create a Google Assistant experience.
My GA Action raises support tickets, which are created in our system via an API.
We ask the user to describe the issue they are facing. We have used a fallback intent to capture the issue/ticket description (since the reply can be any free text, is this the best way to capture free text?).
Once the user gives a description, a webhook is called and the results are sent to our backend.
We have noticed that when the user includes the words "not working" in the issue description, the welcome intent is always called instead of the follow-up intent. If the user describes the issue without using those words, it works fine.
I personally feel that this is a bug in GA. Is there any way to solve it?
I think you're doing some things wrong. I don't have enough information to understand 100% what you are doing, but I will try to give you some general advice:
A fallback intent is used to 'fall back' on when a user asks something that isn't covered by any of your other intents. That's why your fallback intent has 'input.unknown' set as its action: it is triggered when the user gives input that is unknown to your application. For example, I don't think your '(Pazo) Support Action' will provide an answer if the user asks to book a plane to Iceland, so that's when your fallback intent comes in to give an answer such as 'Sorry, I can't answer that question. Pazo is here to give you support in... What can I do for you?'
Your user can either register a complaint or raise a support ticket, if I'm getting this right? I recommend you make two separate intents: one to handle the complaints and one to handle the support tickets.
Before developing advanced actions with a separate webhook and a lot of logic calling an API and so on, I recommend going through the documentation for Actions on Google:
https://developers.google.com/actions/extending-the-assistant
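To make the fallback behaviour concrete, here is a rough sketch of a Dialogflow v1 (API.AI) webhook that branches on the action field; the endpoint path and the reply wording are illustrative:

```javascript
const express = require('express');
const app = express().use(express.json());

app.post('/webhook', (req, res) => {
  // In the v1 request format, the matched intent's action is at result.action.
  const action = req.body.result && req.body.result.action;

  if (action === 'input.unknown') {
    // Fallback intent: steer the user back to what the Action can do
    // instead of leaving them at a dead end.
    const reply = "Sorry, I can't answer that. I can raise a support " +
                  'ticket or register a complaint. What can I do for you?';
    res.json({ speech: reply, displayText: reply });
    return;
  }

  // Other actions would be handled here.
  res.json({ speech: 'OK.', displayText: 'OK.' });
});

app.listen(8080);
```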

How to ensure my Google Home Assistant application is not rejected?

During our testing, we were unable to complete at least one of the behaviors or actions advertised by your app. Please make sure that a user can complete all core conversational flows listed in your registration information or recommended by your app.
Thank you for submitting your assistant app for review!
During testing, your app was unable to complete a function detailed in the app's description. The reviewer interacted with the app by saying: "how many iphones were sold in the UK?"; the app replied "I didn't get that. Can you try with other question?" and left the conversation.
How can I resolve the above point so that my Google Assistant Action is approved?
Without seeing the code in question or the intent you think should be handling this in Dialogflow, it is pretty difficult - but we can generalize.
It sounds like you have two issues:
Your fallback intent that generated the "I didn't get that" message is closing the conversation. This means that either the "close conversation" checkbox is checked in Dialogflow, you're using the app.tell() method when you should be using app.ask() instead, or the JSON you're sending back has close conversation set to true (the two calls are contrasted in the sketch after these points).
You don't have an intent to handle the question about how many iPhones were sold in the UK. This could be because you don't list anything like it as a sample phrase, or because the two parameters (one for the object type and one for the location) aren't using entity types that would match.
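A sketch of the ask/tell difference with the v1 actions-on-google client library (the handler name and prompt are illustrative):

```javascript
const ApiAiApp = require('actions-on-google').ApiAiApp;

function unknownInput(app) {
  // ask() sends the prompt and leaves the microphone open, so the
  // conversation continues.
  app.ask("I didn't get that. Try asking something like " +
          "'how many iPhones were sold in the UK?'");

  // app.tell(...) would say the message and then CLOSE the conversation,
  // which matches what the reviewer observed.
}
```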
It means that somewhere, either in your app description or in a Dialogflow intent (the reviewers have full access to see what's in your intents), you hinted that "how many iphones were sold in the UK?" would be a valid question. Try changing the description/intents to properly match the restrictions of your app.

Return response to Google Assistant via API

I have an Actions on Google project that uses API.AI for its actions. This is working well, and I can see requests and responses appear in the Google Assistant interface (on mobile and in the simulator).
One of my use cases for API.AI needs to be broken into 2 parts, in that we have to inform the user that processing has started and then inform them again once it has completed (without them re-prompting for the output).
I'm trying to find a way to inform a user on the Google Assistant when the processing is completed, but have failed so far. Something like this:
User: I would like to see if my loan request is approved
Google Assistant: Hold on, let me check and let you know.
.... (Makes a webservice call to the backend asynchronously)
.... After few seconds ...
.... Postback to google assistant from the webservice
Google Assistant: Thanks for holding, your request is approved.
I'm not sure how to do the "postback to Google Assistant" call. I have tried getting the SessionId from the API.AI call and then using that to make an event request, but that doesn't seem to send the response to the Assistant. Google Assistant seems to use the formats defined in https://developers.google.com/actions/reference/rest/Shared.Types/AppRequest, but I'm unsure how to get the ConversationToken and use it to send the response back to the user.
Short answer: you can't do that.
Slightly longer answer: At least right now, there is no good way to send a notification. Your Action can only respond to a specific statement from the user. You can say something like "ask again in a minute and I should have a result for you", but that isn't a great experience. At Google I/O 2017, they announced that notifications would be coming to the Google Home at some point... but gave neither a time frame nor any information about an API.
Long, but probably still unsatisfying answer: You can look into Transactions, which let the user initiate a purchase or request of some sort and then "check out". Once they have checked out, you would confirm that a transaction is being processed with an OrderUpdate and then send updates with the status of the "order". These status updates can turn into notifications, or users can query the state of the order at any time. Transactions don't require payment, so this may work depending on your needs.
However, there are a few things to note. This is still in developer preview, so things may change in the future. It also doesn't work on all surfaces where the Assistant runs, so while it does work on Assistant on phones, it does not work on the Google Home right now.

Control your device with custom commands using Actions on Google

I am just getting started with Assistant features on the RPi. I have been able to implement everything successfully up to this point, and I am wondering a few things.
Scenario:
User: Hey Google, "please turn on my living room lights"
My code in hotword.py: has a function to perform the same action, based on ON_RECOGNIZING_SPEECH_FINISHED
RPi/Google Home: "I am not sure how to respond to that"
I was able to capture the query the user asked via ON_RECOGNIZING_SPEECH_FINISHED (the recognized text in the event args) and use it in my logic to perform the task. However, at the same time, "OK Google" responds with the answer above ("I am not sure how to respond to that").
To mitigate this problem, I created a Google Action; now it understands my query and responds with the intent from API.AI. However, it doesn't act to turn the lights on. So I am wondering how I can read the response from Google Home/API.AI as text and change my code to act on it locally.
I'd appreciate any help.
You will not get the response as text.
To get the response to your client app, use a webhook in API.AI and send a message to the client app using FCM (see the sketch below).
Read the FCM message in the client app and perform the corresponding actions.
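As a rough sketch of that flow, the webhook could forward the recognized command to the Pi over FCM's legacy HTTP API. The server key, device token, and command payload shape are placeholders for values from your own Firebase project:

```javascript
const https = require('https');

const FCM_SERVER_KEY = process.env.FCM_SERVER_KEY; // Firebase server key (placeholder)
const DEVICE_TOKEN = process.env.DEVICE_TOKEN;     // the Pi app's FCM registration token (placeholder)

function sendCommandToPi(command) {
  const body = JSON.stringify({
    to: DEVICE_TOKEN,
    data: { command }   // e.g. { command: 'living_room_lights_on' }
  });

  // POST to the FCM legacy send endpoint, authorized with the server key.
  const req = https.request({
    hostname: 'fcm.googleapis.com',
    path: '/fcm/send',
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': 'key=' + FCM_SERVER_KEY
    }
  }, res => res.resume()); // drain the response; errors are ignored in this sketch

  req.end(body);
}
```

The client app on the Pi then listens for the FCM data message and switches the lights itself, so no spoken transcript is needed.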
I was finally able to figure out multiple ways and answered this in another Stack Overflow question; find more details in that post.
There are multiple ways to handle this. Since Google doesn't give us the voice transcript, we let Google say our transcript, which is a sort-of solution for now.
