I have a very basic/general question:
Is there any way for me to interrupt Google Home while it's talking? I have developed an app that asks the user a question, but I am running into a usability issue: users often predict the question and start answering before the mic activates, so the app doesn't pick up their response and keeps waiting. The user, meanwhile, gets confused because they don't know the app is waiting for a response.
Users can interrupt Google's reply at any time by saying "Hey Google" or "OK Google" (or the equivalent hotword phrase for their locale). If your action is running, whatever they say after the hotword will be sent to you.
Related
I have a mobile application with the Google Assistant SDK integrated, and I want to re-prompt the user with something like "Hey, I am still here, can I help?" on 'no-input' from the user. I would like to know the best approach for doing that.
I have already been through a couple of links on Stack Overflow and GitHub, where I read that it is "not possible" to re-prompt on a mobile device because the Assistant closes the microphone if there is no response from the user. What would be the best way forward?
Can I increase the timeout before the Assistant closes the microphone?
I did figure out a way to keep the microphone always on, but it produces a "SERVICE UNAVAILABLE" error after a few seconds, and then the user has to restart the service, which is poor UX. Is there a way to mitigate the error? My investigation suggests it could be caused by the audio buffer.
I read that we can use a "Media Response" so that media plays while the user isn't responding, but is this the best approach?
As a follow-up: if I go with the "Media Response" approach, is there a way to explicitly close the conversation after, say, 30 seconds of no user response?
Some of the links I went through -
Follow up intent for NO INPUT not firing with dialogflow
Reprompt user if no response in google action?
Can you please suggest a good approach to tackle this problem?
My apologies if the question sounds silly; I am new to Dialogflow.
EDIT: I also came across "continued conversation". Is it possible to enable continued conversation from the Assistant SDK? I searched but didn't find any documentation on it.
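To make the intended behaviour concrete, here is a small sketch of the reprompt-then-close logic being asked about. This is plain JavaScript with no SDK calls; the function name and wording are made up, and it assumes a surface that actually delivers no-input events (a regular Actions on Google webhook rather than the embedded Assistant SDK, which, as noted above, may simply close the mic):

```javascript
// Sketch: decide whether to reprompt or close on a no-input event.
// repromptCount is how many no-input events have already fired for
// this turn; Actions on Google surfaces typically deliver at most three.
function handleNoInput(repromptCount) {
  if (repromptCount < 2) {
    // First and second no-input: nudge the user and keep listening.
    return { action: 'reprompt', speech: 'Hey, I am still here, can I help?' };
  }
  // Final no-input: give up politely and end the conversation.
  return { action: 'close', speech: 'Okay, talk to you later.' };
}
```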
I had submitted my app for review, but Google replied with a "mic" issue. Google replied:
During our testing, we found that your app would sometimes leave the mic open for the user without any prompt. Make sure that your app always says something before leaving the mic open for the user, so that the user knows what they can say. This is particularly important when your app is first triggered.
Some points:
1. app.ask() leaves the mic open.
2. app.tell() ends the conversation.
I have also enabled the "Set this intent as end of conversation" toggle.
Any suggestions?
My app is one-to-one: if the user asks "my address", the address is shown; if "show me directions to PLACE_NAME", directions are shown. But after that the mic opens. How do I close it?
-----UPDATED-------
function someName(app)
{
  // ---code-----
  app.ask('Alright, your address is ' + user_address);
}
I don't want to use app.tell(), as it closes the app.
Any other suggestions for this one-to-one Q&A conversation?
If you are doing fulfillment through a webhook, the "end conversation" toggle is ignored in favor of what you send from your webhook.
You don't show any code, but as noted:
If you use app.ask() or one of the variants of it, the message will be sent to the user and the microphone will be left open. In this case, you should make sure it is clear what you're expecting from the user - in other words, ask a question or prompt them.
If you use app.tell(), the message will be sent to the user and the microphone will be closed. This will end this conversation.
It sounds like, in your case, you should be using app.tell().
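To make the difference concrete, here is a tiny mock of the legacy app object. This is not the real actions-on-google library, just enough plain JavaScript to illustrate the mic behaviour described above: ask() expects a reply, tell() closes the conversation.

```javascript
// Minimal mock of the legacy actions-on-google "app" object.
// It only records which method was called and whether the mic
// would be left open; the real library does much more.
class MockApp {
  constructor() {
    this.response = null;
    this.micOpen = null;
  }
  ask(text)  { this.response = text; this.micOpen = true; }   // keeps mic open
  tell(text) { this.response = text; this.micOpen = false; }  // ends conversation
}

// One-shot Q&A handler: answer the question and close the mic.
function handleMyAddress(app, userAddress) {
  app.tell('Alright, your address is ' + userAddress);
}

const app = new MockApp();
handleMyAddress(app, '221B Baker Street');
```

With tell(), the response above is spoken and the mic stays closed, which is exactly the behaviour the reviewer was asking for in a one-to-one exchange.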
I have an Actions on Google project that uses API.AI for its actions. This is working well, and I can see requests/responses appear in the Google Assistant interface (on mobiles and in the simulator).
One of my use cases for API.AI needs to be broken into two parts, in that we have to inform the user that processing has started and then inform them again once it's completed (without them re-prompting for the output).
I'm trying to find a way to inform the user on the Google Assistant when processing is completed, but have failed so far. Something like this:
User: I would like to see if my loan request is approved
Google Assistant: Hold on, let me check and let you know.
.... (Makes a webservice call to the backend asynchronously)
.... After a few seconds ...
.... Postback to google assistant from the webservice
Google Assistant: Thanks for holding, your request is approved.
I'm not sure how to do the "postback to Google Assistant" call. I tried getting the SessionId from the API.AI call and then using it to make an event request, but that doesn't seem to send the response to the Assistant. Google Assistant seems to use the formats defined in https://developers.google.com/actions/reference/rest/Shared.Types/AppRequest, but I'm unsure how to get the ConversationToken and use it to send the response back to the user.
Short answer: you can't do that.
Slightly longer answer: At least right now, there is no good way to send a notification. Your Action can only respond to a specific statement from the user. You can say something like "ask again in a minute and I should have a result for you", but that isn't a great experience. At Google I/O 2017, they announced that notifications would be coming to the Google Home at some point... but gave neither a time frame nor any information about an API.
Long, but probably still unsatisfying, answer: You can look into Transactions, which let the user initiate a purchase or request of some sort and then "check out". Once they have checked out, you would confirm that a transaction is being processed with an OrderUpdate, and then you can send updates with the status of the "order". These status updates can turn into notifications, or users can query the state of the order at any time. Transactions don't require payment, so this may work depending on your needs.
However, there are a few things to note. This is still in developer preview, so things may change in the future. It also doesn't work on all surfaces where the Assistant runs, so while it does work on Assistant on phones, it does not work on the Google Home right now.
Just getting started with Assistant features on an RPi. I was able to implement successfully up to this point and am wondering a few things.
Scenario:
user: hey google, "please turn on my living room lights"
my code in hotword.py: has a function to perform the same action based on ON_RECOGNIZING_SPEECH_FINISHED
RPi/Google Home: I am not sure how to respond to that
I was able to capture the user's query using ON_RECOGNIZING_SPEECH_FINISHED = Args.text(str) and use it in my logic to perform the task. However, at the same time, "OK Google" is responding with its own answer.
To mitigate this problem, I created a Google Action; now it understands my query and responds with the intent from API.AI. However, it doesn't act to turn the lights on. So I'm wondering how I can read the response from Google Home/API.AI as text and change my code to act on it locally.
Appreciate it.
You will not get the response as text.
To get the response to a client app, use a webhook in API.AI and send a message to the client app using FCM.
Read the FCM message in the client app and perform the corresponding actions.
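As a sketch of that flow: the webhook builds a data-only FCM message for the user's device, and the client app acts on the command locally. The helper below only constructs the message object; sending it would use something like admin.messaging().send(...) from the firebase-admin SDK. The token value and command names are made up for illustration:

```javascript
// Build a data-only FCM message for the webhook to send.
// Data-only (no "notification" field) so the client app can
// receive it silently and perform the action itself, e.g.
// turning the lights on.
function buildFcmCommand(deviceToken, command) {
  return {
    token: deviceToken,           // the client app's FCM registration token
    data: { command: command },   // e.g. 'living_room_lights_on'
  };
}

const msg = buildFcmCommand('device-token-123', 'living_room_lights_on');
```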
I was finally able to figure out multiple ways to handle this. I answered this in another Stack Overflow question; find more details in this post.
There are multiple ways to handle this: since Google doesn't give you the voice transcript, letting Google speak your own transcript is something of a solution for now.
In a custom action for Google Home, can I create an intent where the assistant keeps listening for 10 minutes, waiting for a custom keyword without answering?
I looked into the docs and I couldn't find an answer but I guess that what I'm looking for is some kind of parameter that prevents the default answering behavior (when the user stops talking, the assistant answers back) and locks the assistant in listening mode.
Not really. The Assistant is designed for conversational interaction, and it isn't much of a conversation if it just sits there. It also raises privacy issues: Google is very concerned about the perception of having a permanently open mic recording everything and sending it to some third party.
I understand the use case, however. One thing you might consider is to return a small, quiet, beep to indicate you're still listening, but haven't heard anything to trigger on yet. You'd do this as both a fallback event (for when people don't say the keyword, but are speaking) and as a reprompt event. I haven't tested this sort of approach, however.
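A sketch of what that quiet-beep reprompt could look like as an SSML response. The audio URL is a placeholder; you would host your own short sound, and SSML audio sources generally must be HTTPS:

```javascript
// SSML response that plays a short sound instead of speech,
// signalling "still listening". The src URL is a placeholder;
// the fallback text inside <audio> is spoken if the file
// cannot be played.
function quietBeepReprompt() {
  return '<speak>' +
         '<audio src="https://example.com/short-beep.ogg">beep</audio>' +
         '</speak>';
}
```

You would return this from both the fallback handler and the no-input reprompt handler, so silence and off-keyword speech get the same unobtrusive cue.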