I have a mobile application with the Google Assistant SDK integrated, and I want the Assistant to re-prompt with something like "Hey, I am still here, can I help?" when there is no input from the user. What would be the best approach to do that?
I already went through a couple of links on Stack Overflow and GitHub where I read that it is "not possible" to re-prompt on a mobile device, as the Assistant closes the microphone if there is no response from the user. What would be the best way forward?
Can I increase the timeout before the Assistant closes the microphone?
I did figure out a way to keep the microphone always "ON", but it produces a "SERVICE UNAVAILABLE" error after a few seconds, and then the user has to restart the service, which is not a good UX. Is there a way to mitigate the error? My investigation suggests it could be caused by the audio buffer.
I read that we can use a "Media Response" so that media is played while the user does not respond, but is this the best approach?
As a follow-up: if I go with the "Media Response" approach, is there a way to explicitly close the conversation after, say, 30 seconds of no user response?
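To make the "Media Response" part of the question concrete, here is a rough, untested sketch of the kind of Dialogflow webhook response I imagine it would need: play a quiet/silent audio track as a Media Response so the conversation stays open, and close the conversation when the media-status callback reports that the track has finished. The silence URL, the intent names, and the Flask wrapper are all my own assumptions.

```python
# Rough, untested sketch: a Dialogflow webhook that returns a Media Response so the
# conversation stays open while silence plays, and closes the conversation when the
# media finishes. URLs and intent names below are placeholders.
from flask import Flask, request, jsonify

app = Flask(__name__)

SILENCE_URL = "https://example.com/30-seconds-of-silence.mp3"  # assumed asset

@app.route("/webhook", methods=["POST"])
def webhook():
    body = request.get_json(silent=True) or {}
    intent = body.get("queryResult", {}).get("intent", {}).get("displayName", "")

    if intent == "media.status":  # hypothetical intent bound to the actions_intent_MEDIA_STATUS event
        # The silent track finished without the user saying anything: close the conversation.
        return jsonify({
            "payload": {"google": {
                "expectUserResponse": False,
                "richResponse": {"items": [
                    {"simpleResponse": {"textToSpeech": "Okay, talk to you later."}}
                ]}
            }}
        })

    # Default case: re-prompt, then play roughly 30 seconds of silence as a Media Response.
    return jsonify({
        "payload": {"google": {
            "expectUserResponse": True,
            "richResponse": {
                "items": [
                    {"simpleResponse": {"textToSpeech": "Hey, I am still here, can I help?"}},
                    {"mediaResponse": {
                        "mediaType": "AUDIO",
                        "mediaObjects": [{"name": "waiting", "contentUrl": SILENCE_URL}]
                    }}
                ],
                "suggestions": [{"title": "Stop"}]
            }
        }}
    })
```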
Some of the links I went through -
Follow up intent for NO INPUT not firing with dialogflow
Reprompt user if no response in google action?
Can you please suggest a good approach to tackle this problem?
My apologies if the question sounds silly; I am new to Dialogflow.
EDIT: I also came across "continued conversation". Is it possible to enable continued conversation from the Assistant SDK? I searched but didn't find any documentation on that.
I have a smart home Dialogflow webhook working from the Google Actions test console, but when I speak to a Google Home device, there is no sign that my intents are being recognized. E.g., when I enter "Home temperature?" in the console, I can see it calling my webhook, executing my script, and responding with "The temperature is 72 degrees."
But when I say "Hey Google, home temperature" to my Google Home device, it says my Nest device is not registered, or something like that. I.e., it responds as it would if I did not have smart home action intents registered with Google Actions.
I am unable to find anything in the docs or by web searches that says what I am supposed to do to get my Google Assistant devices to recognize my custom intent phrases.
Does anyone have this working? A Smart Home integration is not supposed to require a lead-in, like "Hey Google, ask whoever, home temperature", right? That is only for "conversation mode" integrations, correct? My understanding is that "Smart Home" mode does not require a lead-in. Please correct me if that is incorrect.
Either way, my voice requests through my Google Home are not recognized.
Please, any advice for what I am missing or how I can troubleshoot this?
Thanks!
P.S. I'm new to Stack Overflow, and I didn't find this "dialogflow" group until posting in another group. So I am reposting here. Sorry if this is redundant. I could not find how to delete the original post...
It sounds like I was wrong about the "Hey Google, talk to ..." requirement for Dialogflow.
The "Smart Home" mode does not preclude this. You cannot just say, "Hey Google, home temperature?", you have to say, "Hey Google, ask [my dialogflow app], home temperature?"
Furthermore, unless you Publish your app, the response will always say, "Alright, here's the test version of [my dialogflow app]...
Between the two, it pretty much ruins it for me... Off to the drawing board.
I have been working with Google Dialogflow to create a Google Assistant experience.
My GA Action raises support tickets, and those tickets are created in our system via an API.
We ask the user to describe the issue they are facing, and we have used a fallback intent to capture the issue/ticket description (since the reply can be any free text, is this the best way to capture free text?).
Once the user gives a description, a webhook is called and the results are sent to our backend.
We have noticed that when the user uses the words "not working" as part of the issue description, it always calls the welcome intent instead of going to the follow-up intent. If the user describes the issue without using those words, it works fine. Below are 2 different responses.
I personally feel that this is a bug in GA; is there any way to solve it?
I think you're doing some things wrong. I don't have enough information to understand 100% what you are doing, but I will try to give you some general advice:
A fallback intent is used to 'fall back' on when the user asks something that is not covered by any of your other intents. That's why your fallback intent has 'input.unknown' set as its action: it is triggered when the user gives input that is unknown to your application. For example, I don't think your '(Pazo) Support Action' will provide an answer if the user asks to book a plane to Iceland; that's when your fallback intent comes in to give an answer such as 'Sorry, I can't answer that question. Pazo is here to give you support in... What can I do for you?'
Your user can either register a complaint or raise a support ticket, if I'm getting this right? I recommend making two separate intents: one to handle complaints and one to handle support tickets (a rough routing sketch is below).
Before developing advanced actions with a separate webhook and a lot of logic for calling an API, etc., I recommend going through the documentation of Actions on Google:
https://developers.google.com/actions/extending-the-assistant
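To make the two-intent suggestion concrete, here is a rough sketch of an api.ai (v1-style) webhook that routes a dedicated "describe issue" intent and a separate complaint intent, with everything else falling through to the fallback. The intent actions, the @sys.any "description" parameter, and the create_ticket() call are placeholders I made up for illustration:

```python
# Rough illustration only: an api.ai (v1-style) webhook that routes two separate
# intents instead of catching everything in the fallback intent. Action names,
# parameter names, and the backend call are assumptions.
from flask import Flask, request, jsonify

app = Flask(__name__)

def create_ticket(description):
    # Placeholder for the real API call that raises a ticket in your system.
    return "TICKET-123"

@app.route("/webhook", methods=["POST"])
def webhook():
    result = (request.get_json(silent=True) or {}).get("result", {})
    action = result.get("action", "")
    params = result.get("parameters", {})

    if action == "support.ticket.describe":
        # A dedicated intent with an @sys.any parameter carries the free-text
        # description, rather than relying on the fallback intent.
        ticket_id = create_ticket(params.get("description", result.get("resolvedQuery", "")))
        speech = "Thanks, I have raised ticket %s for you." % ticket_id
    elif action == "complaint.register":
        speech = "Okay, I have registered your complaint."
    elif action == "input.unknown":
        speech = ("Sorry, I can't answer that question. "
                  "Pazo is here to help with complaints and support tickets. What can I do for you?")
    else:
        speech = "What can I do for you?"

    return jsonify({"speech": speech, "displayText": speech, "source": "pazo-support-webhook"})
```

The idea is that a dedicated intent with an @sys.any parameter, rather than the fallback intent, carries the free-text description, so ordinary phrases such as "not working" are less likely to be re-matched against other intents.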
I have an Actions on Google project that uses api.ai for its actions. This is working well, and I can see requests/responses appear on the Google Assistant interface (on mobile and in the simulator).
One of my use cases for api.ai needs to be broken into two parts, in that we have to inform the user that processing has started and then inform them again once it is completed (without them re-prompting for the output).
I'm trying to find a way to inform the user on the Google Assistant when the processing is completed, but have failed so far. Something like this:
User: I would like to see if my loan request is approved
Google Assistant: Hold on, let me check and let you know.
.... (Makes a web service call to the backend asynchronously)
.... After a few seconds ...
.... Postback to Google Assistant from the web service
Google Assistant: Thanks for holding, your request is approved.
I'm not sure how to do the "postback to Google Assistant" call. I have tried to get the SessionId from the Api.AI call and then use that to make an event request, but that doesn't seem to send the response to the Assistant. Google Assistant seems to be using the formats defined in https://developers.google.com/actions/reference/rest/Shared.Types/AppRequest, but I'm unsure how to get the ConversationToken and use it to send the response back to the user.
Short answer: you can't do that.
Slightly longer answer: At least right now, there is no good way to send a notification. Your Action can only respond to a specific statement from the user. You can say something like "ask again in a minute and I should have a result for you", but that isn't a great experience. At Google I/O 2017, they announced that notifications would be coming to the Google Home at some point... but gave neither a time frame nor any information about an API.
Long, but probably still unsatisfying, answer: You can look into Transactions, which let the user initiate a purchase or request of some sort and then "check out". Once they have checked out, you would confirm that a transaction is being processed with an OrderUpdate, and you can then send updates with the status of the "order". These status updates can turn into notifications, or users can query the state of the order at any time. Transactions don't require payment, so this may work depending on your needs.
However, there are a few things to note. This is still in developer preview, so things may change in the future. It also doesn't work on all surfaces where the Assistant runs, so while it does work on Assistant on phones, it does not work on the Google Home right now.
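To illustrate the "ask again in a minute" workaround mentioned above, here is a rough sketch under my own assumptions (an api.ai v1-style webhook, an in-memory job store, and made-up action names): the first intent starts the backend check and stores a job id in an outgoing context, and a follow-up intent reads that context to report the result when the user asks again.

```python
# Sketch of the "ask again in a minute" pattern, since the Assistant cannot be pushed
# a message: start the work asynchronously, hand the user a reference, and let them
# ask for the result later. All names here are assumptions.
import threading, uuid
from flask import Flask, request, jsonify

app = Flask(__name__)
JOBS = {}  # job_id -> result; use a real store (Redis, DB) outside a demo

def check_loan_request(job_id):
    # Placeholder for the slow backend call.
    JOBS[job_id] = "approved"

@app.route("/webhook", methods=["POST"])
def webhook():
    result = (request.get_json(silent=True) or {}).get("result", {})
    action = result.get("action", "")

    if action == "loan.check.start":
        job_id = uuid.uuid4().hex[:8]
        JOBS[job_id] = None
        threading.Thread(target=check_loan_request, args=(job_id,)).start()
        speech = "I'm checking on that. Ask me again in a minute and I should have a result."
        # Keep the job id in an outgoing context so the follow-up request carries it.
        return jsonify({"speech": speech, "displayText": speech,
                        "contextOut": [{"name": "loan_job", "lifespan": 5,
                                        "parameters": {"job_id": job_id}}]})

    if action == "loan.check.result":
        contexts = {c.get("name"): c for c in result.get("contexts", [])}
        job_id = contexts.get("loan_job", {}).get("parameters", {}).get("job_id")
        status = JOBS.get(job_id)
        speech = ("Thanks for holding, your request is %s." % status) if status \
            else "Still working on it, please ask again shortly."
        return jsonify({"speech": speech, "displayText": speech})

    return jsonify({"speech": "Sorry, I didn't get that."})
```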
I'm just getting started with Assistant features on the RPi. I have been able to implement things successfully up to this point and am wondering a few things.
Scenario:
User: Hey Google, "please turn on my living room lights"
My code in hotword.py: has a function to perform the same action, based on ON_RECOGNIZING_SPEECH_FINISHED
RPi/Google Home: I am not sure how to respond to that
I was able to capture the query asked by the user using ON_RECOGNIZING_SPEECH_FINISHED and its text argument, and use it in my logic to perform the task. However, at the same time, "OK Google" is responding with its own answer.
To mitigate this problem, I created a Google Action; now it understands my query and responds with the intent from api.ai. However, it doesn't act to turn the lights ON. So I am wondering how I can read the response from Google Home/api.ai as text and change my code to act on it locally.
appreciate it.
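For reference, a minimal sketch of the local handling described in the question, modeled on the google-assistant-library hotword sample; the light-switching helper, the device model id, the credentials path, and the use of stop_conversation() to keep the Assistant from speaking its own answer are assumptions on my part:

```python
# Sketch based on the google-assistant-library hotword sample. turn_on_lights() is
# hypothetical; stop_conversation() is used on the assumption that it prevents the
# Assistant from also speaking its own answer for a locally handled query.
from google.assistant.library import Assistant
from google.assistant.library.event import EventType
import google.oauth2.credentials, json

def turn_on_lights():
    print("Switching living room lights ON")  # e.g. drive a GPIO pin here

def process_event(assistant, event):
    if event.type == EventType.ON_RECOGNIZING_SPEECH_FINISHED:
        text = event.args["text"].lower()
        if "turn on" in text and "lights" in text:
            assistant.stop_conversation()  # handle it locally instead of letting Google answer
            turn_on_lights()

def main():
    with open("/home/pi/credentials.json") as f:  # path is an assumption
        credentials = google.oauth2.credentials.Credentials(token=None, **json.load(f))
    with Assistant(credentials, "my-rpi-device-model") as assistant:
        for event in assistant.start():
            process_event(assistant, event)

if __name__ == "__main__":
    main()
```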
You will not get the response as text.
To get the response to the client app, use a webhook in API.AI and send a message to the client app using FCM.
Read the FCM message in the client app and perform the corresponding actions.
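A rough sketch of that webhook-to-FCM flow, assuming the firebase-admin SDK on the webhook side and a topic that the client app subscribes to; the action name, topic, and payload key are placeholders:

```python
# Rough sketch of the webhook -> FCM idea above, using the firebase-admin SDK.
# The service-account path, topic name, and "action" payload key are assumptions;
# the client (e.g. the RPi app) subscribes to the same topic and switches the lights on receipt.
import firebase_admin
from firebase_admin import credentials, messaging
from flask import Flask, request, jsonify

cred = credentials.Certificate("service-account.json")  # path is an assumption
firebase_admin.initialize_app(cred)

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def webhook():
    result = (request.get_json(silent=True) or {}).get("result", {})
    if result.get("action") == "lights.on":  # hypothetical api.ai action name
        messaging.send(messaging.Message(
            data={"action": "lights_on"},
            topic="rpi-living-room",  # the client app listens on this topic
        ))
        speech = "Okay, turning on the living room lights."
    else:
        speech = "Sorry, I can't do that yet."
    return jsonify({"speech": speech, "displayText": speech})
```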
I finally was able to figure out multiple ways to do this; I answered it in another Stack Overflow question, so find more details in that post.
There are multiple ways to handle this. Since Google doesn't give us the voice transcript, we let Google say our transcript, which is kind of a solution for now.
In a custom action for Google Home, can I create an intent where the assistant keeps listening for 10 minutes, waiting for a custom keyword without answering?
I looked into the docs and couldn't find an answer, but I guess what I'm looking for is some kind of parameter that prevents the default answering behavior (when the user stops talking, the Assistant answers back) and locks the Assistant in listening mode.
Not really. The Assistant is designed for conversational interaction, and it isn't much of a conversation if it just sits there. It also raises privacy issues - Google is very concerned about the perception of having a permanently open mic recording everything and sending it to some third party.
I understand the use case, however. One thing you might consider is returning a small, quiet beep to indicate you're still listening but haven't heard anything to trigger on yet. You'd do this for both a fallback event (for when people are speaking but don't say the keyword) and a reprompt event. I haven't tested this sort of approach, however.
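As an untested sketch of that idea, a Dialogflow webhook could answer both the fallback intent and an intent bound to the actions_intent_NO_INPUT event with SSML that plays a short, quiet beep and keeps the mic open; the beep URL and intent names below are placeholders:

```python
# Untested sketch of the "quiet beep" idea: answer both the fallback intent and an
# intent bound to the actions_intent_NO_INPUT event with SSML containing a short beep.
# The beep URL and intent names are placeholders.
from flask import Flask, request, jsonify

app = Flask(__name__)

BEEP_SSML = ('<speak><audio src="https://example.com/quiet-beep.ogg">'
             'still listening</audio></speak>')

def beep_response():
    # Keep the conversation open and play the beep instead of a spoken answer.
    return jsonify({
        "payload": {"google": {
            "expectUserResponse": True,
            "richResponse": {"items": [{"simpleResponse": {"ssml": BEEP_SSML}}]}
        }}
    })

@app.route("/webhook", methods=["POST"])
def webhook():
    intent = (request.get_json(silent=True) or {}) \
        .get("queryResult", {}).get("intent", {}).get("displayName", "")
    if intent in ("Default Fallback Intent", "no.input"):  # "no.input" bound to actions_intent_NO_INPUT
        return beep_response()
    return jsonify({"fulfillmentText": "Okay."})
```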