I have an Actions on Google project that uses API.AI for its actions. This is working well and I can see requests/responses appear in the Google Assistant interface (on mobiles and in the simulator).
One of my use cases for API.AI needs to be broken into two parts: we have to inform the user that the processing has started, and then inform them again once it's completed (without them re-prompting for the output).
I'm trying to find a way to inform the user on the Google Assistant when the processing is completed, but have failed so far. Something like this:
User: I would like to see if my loan request is approved
Google Assistant: Hold on, let me check and let you know.
.... (Makes a webservice call to the backend asynchronously)
.... After few seconds ...
.... Postback to google assistant from the webservice
Google Assistant: Thanks for holding, your request is approved.
I'm not sure how to make the "postback to Google Assistant" call. I have tried to get the SessionId from the API.AI call and then use that to make an event request, but that doesn't seem to send the response to the Assistant. Google Assistant seems to be using the formats defined in https://developers.google.com/actions/reference/rest/Shared.Types/AppRequest, but I'm unsure how to get the ConversationToken and use it to send the response back to the user.
Short answer: you can't do that.
Slightly longer answer: At least right now, there is no good way to send a notification. Your Action can only respond to a specific statement from the user. You can say something like "ask again in a minute and I should have a result for you", but that isn't a great experience. At Google I/O 2017, they announced that notifications would be coming to the Google Home at some point... but gave neither a time frame nor any information about an API.
Long, but probably still unsatisfying answer: You can look into Transactions, which let the user initiate a purchase or request of some sort and then "check out". Once they have checked out, you would confirm that a transaction is being processed with an OrderUpdate and can then send updates with the status of the "order". These status updates can turn into notifications, or users can query the state of the order at any time. Transactions don't require payment, so this may work depending on your needs.
However, there are a few things to note. This is still in developer preview, so things may change in the future. It also doesn't work on all surfaces where the Assistant runs: while it does work with the Assistant on phones, it does not work on the Google Home right now.
Update: June 30, 2020
After more testing, I have details that might help someone recognize my problem.
The issue seems to be that Slack is sending data to Azure Bot Services, but that data isn't being forwarded to my code. I've been able to use the Bot Framework Emulator without any problems, and the Azure Web Chat works fine.
I know that the Slack configuration for the OAuth Redirect URL is correct (I was able to add my bot to Slack) and the Request URL for Events is correct (they sent the 'challenge' and it's verified). I've subscribed to the exact Scopes and Events that are in the Microsoft documentation and I've verified that the Interactivity and Events options are enabled.
When a user types text in my bot's Slack channel, my app receives a "message" activity and my code can send a response, so it looks like Microsoft can communicate end-to-end for normal messages. However, I do not receive any data when users first join my bot (like a ConversationUpdate) or when they click a button in a dialog. I can see Slack sending data when a button is pressed; it just never arrives.
As a test, I copied the Messaging Endpoint from my Azure bot settings and pasted it into Slack's Interactivity "Request URL". Now when I click a button in Slack, I can see the data that Slack is sending (sadly in a format that my code can't handle).
Original Post
I have a Bot Framework app (v4) that I've written in nodejs. It works well and I have an ActivityHandler that responds to people being added to a conversation and when they send messages. I was able to get pro-active messaging functioning and everything was great until I tried to get interactivity working.
I started off using some sample button code from Microsoft's documentation:
const { MessageFactory } = require('botbuilder');

let reply = MessageFactory.suggestedActions(['Red', 'Yellow', 'Blue'], 'What is the best color?');
await turnContext.sendActivity(reply);
This works fine in the emulator, but in Slack it renders as a bulleted list. It looks like that's the way that "suggested actions" are handled in Slack.
I changed my code to use a "hero card":
const { CardFactory, MessageFactory } = require('botbuilder');

let card = CardFactory.heroCard(
    'What is the best color?',
    undefined,
    CardFactory.actions([
        {
            type: 'imBack',
            title: 'Color Red',
            value: 'Red Value'
        }
    ])
);
let reply = MessageFactory.attachment(card);
await turnContext.sendActivity(reply);
This works okay in the emulator, except my app thinks the user typed "Red Value", and the button stays on-screen and is still clickable. I might be able to work around that, but the button doesn't work at all in Slack. It renders fine, but I don't get a notification in my app.
Clicking the button shows an HTTP request to:
https://{MY_SLACK}.slack.com/api/chat.attachmentAction?_x_id=f8d003c3-1592436018.632&_x_csid=NcWi3y50lFU&slack_route={OTHER_SLACK_STUFF}
And I can see that the request POSTs all sorts of data including:
payload: {"actions":[{"id":"1","name":"imBack","text":"Color Red","type":"button","value":"Red Value","style":"default"}],"attachment_id":"2","callback_id":"{MAGIC_NUMBER}:{TEAM_ID}","channel_id":"{CHANNEL_ID}","message_ts":"1592435983.056000","prompt_app_install":false,"team_id":"{TEAM_ID}"}
I'm not sure how to see anything useful in the Azure Portal - the analytics option for my bot doesn't seem to work and the activities option only says "Write a Bot Service". I don't see any sign of the message going from Slack to Azure.
I'm developing locally and configured ngrok so that my messaging endpoint in Azure could be set to https://69fe1382ce17.ngrok.io/api/messages. On the Slack side of things, I've configured the Interactivity Request URL to be https://slack.botframework.com/api/Actions, and the Event Subscription Request URL is https://slack.botframework.com/api/Events/{MY_BOT_NAME}.
What I would like is a set of buttons with different options and when the user clicks one, my bot gets some sort of "value" instead of message text. I'd also like for the button to go away so the user can't send repeated commands. It would be nice if the hero card collapsed with just the prompt being displayed.
Are there any interactive options that work for Slack and other channels?
Thanks!
Lee
I know linking to another site with no additional detail is frowned upon, but I don't have enough expertise to answer your question. I suspect the link here might move you in the right direction:
Choice Prompts are not translated over to Slack format #3974
Good luck!
Your question is multifaceted so I'll try to break it down into smaller pieces.
What's the deal with suggested actions in Slack?
Suggested actions are not supported in Slack, but the Bot Builder SDK thinks they are. This is a longstanding bug. I've just reported it again on the docs page you linked: https://github.com/MicrosoftDocs/bot-docs/issues/1742
This means you would encounter problems if you were trying to have the choice factory automatically generate the right kind of choices for your channel. You're not doing that, so you should be fine. Hero cards are supposed to work in Slack.
Why aren't hero cards working in Slack?
First I need to mention that hero cards only work with the Slack connector and not the Slack adapter. You seem to be using the connector so you should be fine.
I suspect your problem is related to how you've configured your bot's settings on the Slack side. There is a step in the Bot Framework doc that seems to be important if you want to get buttons to work. If you've followed the doc exactly and you still can't get buttons to work, it may be worthwhile to dig into the Slack API documentation.
How do I only allow a button to be clicked once?
You can update or delete the activity. There's no easy way to do this, but if you voice your support for my cards library then it can be done for you automatically.
The Slack connector actually puts a lot of relevant information in the incoming activity's channel data, and you can use that to figure out what activity the incoming activity came from. That would take some experimentation on your part.
There's another approach that works on more channels than just Slack. It's real complicated, but if you wanna tackle this then here are the basic steps (there's a sketch at the end of this answer):
You need to put an ID in the action data to help your bot identify the action.
You need to save the activity ID that gets returned when you send the action to Slack.
You need to associate the returned activity ID with the ID you put in the action data.
You need to retrieve the activity ID using the action data ID when the user clicks the button.
You need to use that activity ID to update or delete the activity.
Unfortunately there's no centralized guide to help you do this, but there are many examples explaining it scattered across Stack Overflow. Here is a good one: https://stackoverflow.com/a/55174866/2122672
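To make those five steps concrete, here is a minimal sketch using the Bot Framework v4 Node.js SDK. It's an illustration under my own assumptions: the in-memory pendingPrompts map and the "color:" value prefix are things I made up for the example, not part of the SDK.

const { ActivityHandler, CardFactory, MessageFactory } = require('botbuilder');

// Maps the ID we embed in the action data to the ID of the activity
// that carried the card, so we can delete the card later.
const pendingPrompts = new Map();

class PromptBot extends ActivityHandler {
    constructor() {
        super();
        this.onMessage(async (context, next) => {
            const text = context.activity.text || '';
            if (text.startsWith('color:')) {
                // Steps 4-5: retrieve the saved activity ID and delete the card.
                const [choice, promptId] = text.split('|');
                const activityId = pendingPrompts.get(promptId);
                if (activityId) {
                    await context.deleteActivity(activityId);
                    pendingPrompts.delete(promptId);
                }
                await context.sendActivity(`You picked ${choice.replace('color:', '')}.`);
            } else {
                // Step 1: embed our own ID in the action data.
                const promptId = Date.now().toString();
                const card = CardFactory.heroCard('What is the best color?', undefined, [
                    { type: 'imBack', title: 'Color Red', value: `color:red|${promptId}` }
                ]);
                // Steps 2-3: save the activity ID that sendActivity returns,
                // keyed by the ID we put in the action data.
                const sent = await context.sendActivity(MessageFactory.attachment(card));
                pendingPrompts.set(promptId, sent.id);
            }
            await next();
        });
    }
}

Note that an in-memory map only works for a single process; in a real bot you'd persist the mapping in your state storage.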
I'm back again with a question about NLP. I made my own back-end, which on one side can connect to websites, the Google Assistant, and Facebook Messenger, and on the other end to Dialogflow. On the side, it logs interactions and does some other database stuff.
Now, I'm trying to connect this back-end to Alexa. I made a project which calls my endpoint. This project has one intent with a parameter which should capture the raw user input, send it to my back-end for processing, and then parse and send the response back. It feels like there is no real way to collect and send the raw user input so I can process it myself (in Dialogflow) instead of using the Amazon way of mapping intents and such.
I know Dialogflow can export to Alexa, but this is not an option for me. I really hope one of you can point me in the right direction.
I just need a way to collect the raw user input, and respond in an Alexa accepted response format.
For Actions on Google for example, I'm using a Custom Project Action Package.
Thanks a lot in advance!
To accept or get arbitrary user input, you can use @sys.any in Google Assistant (Dialogflow) and AMAZON.SearchQuery in Amazon Alexa.
In Alexa, you have to add a carrier phrase to use AMAZON.SearchQuery, and you can't combine any other slot with AMAZON.SearchQuery in the same utterance.
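For example, a minimal sketch of an interaction model using AMAZON.SearchQuery might look like the following; the intent name, slot name, and invocation name are just illustrative, and "search for" / "tell me about" are the required carrier phrases around the slot:

{
  "interactionModel": {
    "languageModel": {
      "invocationName": "my backend",
      "intents": [
        {
          "name": "RawInputIntent",
          "slots": [
            { "name": "query", "type": "AMAZON.SearchQuery" }
          ],
          "samples": [
            "search for {query}",
            "tell me about {query}"
          ]
        }
      ],
      "types": []
    }
  }
}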
So there are also some limitations. I hope this answer will help you.
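As for responding in an Alexa-accepted format, your endpoint just returns JSON like this minimal sketch (the text is illustrative; set shouldEndSession to true when you want the conversation to end):

{
  "version": "1.0",
  "response": {
    "outputSpeech": {
      "type": "PlainText",
      "text": "Here is what my back-end found."
    },
    "shouldEndSession": false
  }
}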
During our testing, we were unable to complete at least one of the behaviors or actions advertised by your app. Please make sure that a user can complete all core conversational flows listed in your registration information or recommended by your app.
Thank you for submitting your assistant app for review!
During testing, your app was unable to complete a function detailed in the app's description. The reviewer interacted with the app by saying: "how many iphones were sold in the UK?" and the app replied "I didn't get that. Can you try with other question?" and left the conversation.
How can I resolve the above point so that my Google Assistant action gets approved?
Without seeing the code in question or the intent you think should be handling this in Dialogflow, it is pretty difficult - but we can generalize.
It sounds like you have two issues:
Your fallback intent that generated the "I didn't get that" message is closing the conversation. This means that either the "close conversation" checkbox is checked in Dialogflow, you're using the app.tell() method when you should be using app.ask() instead, or the JSON you're sending back has close conversation set to true. (See the sketch after this list for the ask/tell difference.)
You don't have an intent to handle the question about how many iPhones were sold in the UK. This could be because you don't list anything like that as a sample phrase, or because the two parameters (the one for the object type and the one for the location) aren't using entity types that would match.
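To illustrate the first point, here is a minimal sketch of a fallback handler using the legacy actions-on-google Node.js client library; the exported function name and the reply wording are assumptions for the example:

const { ApiAiApp } = require('actions-on-google');

exports.webhook = (request, response) => {
  const app = new ApiAiApp({ request, response });

  const fallback = (app) => {
    // app.ask() keeps the mic open so the user can rephrase;
    // app.tell() would end the conversation, which is what the
    // reviewer reported as "left conversation".
    app.ask("I didn't get that. Can you try another question?");
  };

  const actionMap = new Map();
  actionMap.set('input.unknown', fallback); // Dialogflow's default fallback action
  app.handleRequest(actionMap);
};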
It means that somewhere, either in your app description or in a Dialogflow intent (they have full access to see what's in your intents), you hinted that "how many iphones were sold in the UK?" would be a valid question. Try changing the description/intents to properly match what your app can actually do.
I'm just getting started with the Assistant features on the RPi. I've been able to implement everything successfully up to this point and am wondering a few things.
Scenario:
User: "Hey Google, please turn on my living room lights"
My code in hotword.py: has a function to perform that action based on ON_RECOGNIZING_SPEECH_FINISHED
RPi/Google Home: I am not sure how to respond to that
I was able to capture the query asked by the user via ON_RECOGNIZING_SPEECH_FINISHED (its text argument) and use it in my logic to perform the task. However, at the same time, "OK Google" is responding to the query with its own answer.
To mitigate this problem, I created a Google Action; now it understands my query and responds with the intent from API.AI. However, it doesn't act on turning the lights on. So I'm wondering how I can read the response from Google Home/API.AI as text and change my code to act on it locally.
appreciate it.
You will not get the response as text.
To get the response to a client app, use a webhook in API.AI and send a message to the client app using FCM (Firebase Cloud Messaging).
Read the FCM message in the client app and perform the corresponding actions.
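A minimal sketch of the webhook side in Node.js, assuming the firebase-admin SDK and an already-known device token (the token lookup and the payload shape are illustrative, not prescribed by API.AI):

const admin = require('firebase-admin');
admin.initializeApp();

// Called from the API.AI webhook handler once the intent is recognized.
async function notifyDevice(deviceToken, command) {
  // Push the recognized command to the Raspberry Pi over FCM; the
  // client app listens for this data message and switches the
  // lights locally.
  await admin.messaging().send({
    token: deviceToken,
    data: { command } // e.g. { command: 'living_room_lights_on' }
  });
}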
I finally was able to figure out multiple ways to do this and answered it in another Stack Overflow question; find more details in this post.
There are multiple ways to handle this. Since Google doesn't give you the voice transcript, letting Google speak our own transcript is a sort of workaround for now.
Is it somehow possible to call a web service which can ask things back and receive the answer?
Let me explain:
At home, I have a media center with some movies on it. Its content changes over time, of course: files get added, removed, renamed, and so on.
Now I'd like to say, for example, "Hey Google, play Wizard of Oz" and then Wizard of Oz should play on my TV.
Since I know how to develop things in .NET, the web service running at home already exists and works fine; movies start. And I guess that, thanks to API.AI, I should be able to connect it to Google Home via the webhook function.
But what if there are multiple results and I want to ask, which result should be picked? For example:
User says "Play Star Wars"
Google Home calls my web service, which checks my disk and finds out that there are multiple Star Wars movies.
Now the user needs to be asked: "There are multiple results. Which one would you like to see? Star Wars: A New Hope, Star Wars: The Empire Strikes Back, ..."
The user now answers "Star Wars: A New Hope"
Google Home calls the web service again with that info, and after success it replies "Okay, playing Star Wars: A New Hope."
I haven't found out how to do that with API.AI. As I understand it, API.AI calls the web service with some parameters (JSON), sends the response text received from the web service back to Google Home, and then just ends.
Or did I miss something? Do you guys have any idea how I could achieve this scenario?
Or can we somehow develop our own private services, like the ones listed in the Google Home app (Akinator, Domino's, CNBC, ...), or is that only possible as a partner? That would be nice, actually.
Thanks in advance!
"As I understand, API.AI calls the web service with some parameters (JSON), sends the response text received from the web service back to Google Home and then just ends."
The bot is still in control unless you send from your web service:
data: {
  google: {
    expect_user_response: false
  }
}
or check the "Set this intent as end of conversation" box in the intent pane in API.AI.
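Conversely, to keep the conversation open and ask a follow-up question (for example, which Star Wars movie to play), your webhook response can look like this minimal sketch (API.AI v1 webhook format; the wording is illustrative):

{
  "speech": "There are multiple results. Which one would you like to see?",
  "displayText": "There are multiple results. Which one would you like to see?",
  "data": {
    "google": {
      "expect_user_response": true
    }
  }
}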
If you are using the ActionsSdkAssistant, make sure that you are using the right method: ask vs. tell.
https://developers.google.com/actions/reference/ActionsSdkAssistant#ask
https://developers.google.com/actions/reference/ActionsSdkAssistant#tell
You need to study the API and the API.AI webhook request/response format, and implement it. Take a look at this tutorial. Then, of course, you will have to poke a hole in your firewall to be able to receive the calls from Google, or use ngrok or the BST proxy.