My expectation, and my experience with most skills in the store, is that I should be able to pick a skill I've never used or enabled before and simply say "Open «skill invocation name»" to launch it on an Alexa device.
I've recently deployed a skill, and I'm finding that I can use it only via the:
"Enable «skill invocation name» skill" phrase
However, it's not opening via the:
"Open «skill invocation name»" phrase
Is it possible that it takes time for Alexa to index the name, so that this shortened "Open «skill invocation name»" invocation phrase starts working for users who have not previously enabled or invoked the skill?
Open "<< skill invocation name >>"
should work fine out of the box. As far as I know, it is only one-shot invocations that can take some time to train after a skill has recently been deployed or updated.
Ask "<< skill invocation name >>" to do X
Not an answer, but you could try all the possible variations for launching a skill from this list https://developer.amazon.com/en-US/docs/alexa/custom-skills/understanding-how-users-invoke-custom-skills.html#no-intent and see which do and don't work for you.
Is your skill name difficult to pronounce, so that Alexa misunderstands it? If so, you can check what Alexa thinks you are saying in the Alexa app; for instructions, see https://help.xappmedia.com/hc/en-us/articles/360032800651-Checking-your-Utterance-History-in-the-Alexa-App.
I had the same problem/question about a year ago and asked the Alexa developer support. Their answer was basically that a user always needs to enable a skill for the first time via voice or in the app.
The shortcut of using "open" directly for the very first invocation is not guaranteed to work. I think of it as an undocumented feature under test that is not available all of the time.
I was hoping to get some guidance on how I can use Dialogflow to shorten the process of getting information from an Action.
For example, I would like to provide the following command:
"Ok Google, Ask my test app what is the capital city of the US."
However, I currently need to say:
"Ok, Google, open my test app"
I would then need to wait for a response before providing the name of the country that I need the capital city for.
I'm finding the guidance from the Google documentation difficult to follow.
Do I need to create an implicit invocation in order to give the parameter with the launch command?
Yes. If you want to trigger intents from the launch command, you have to add those intents as deep links / implicit invocations. To do this, create an intent that can handle the parameter, and then add it to the implicit invocations under the Actions on Google integration section in Dialogflow.
You don't have to create any intent specifically for implicit invocation; you can just use the ones you have already created for normal conversation.
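For illustration, a fulfillment handler for such an intent could look roughly like this, using the actions-on-google Node.js client library. The intent name get_capital, the country parameter, and the lookup table are all made up here; the deep link itself is configured in the Dialogflow console under the Actions on Google integration, not in code.

// Hypothetical fulfillment sketch using the actions-on-google Node.js client library.
// Assumes a Dialogflow intent named "get_capital" with a parameter called "country";
// that intent would also be listed under "Implicit invocation" in the Actions on Google
// integration so "Ask my test app what is the capital of France" deep-links straight to it.
import { dialogflow } from 'actions-on-google';
import * as functions from 'firebase-functions';

const app = dialogflow();

// Toy lookup table just for illustration; a real Action would call an API or database.
const capitals: Record<string, string> = {
  'United States': 'Washington, D.C.',
  'France': 'Paris',
};

app.intent('get_capital', (conv, params) => {
  const country = params.country as string;
  const capital = capitals[country];
  if (capital) {
    conv.ask(`The capital of ${country} is ${capital}. Anything else?`);
  } else {
    conv.ask(`Sorry, I don't know the capital of ${country}. Try another country?`);
  }
});

export const fulfillment = functions.https.onRequest(app);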
I've written a guide to implicit invocations / deep links for Google Assistant if you need more help or information. Here is a video with the end result.
I have been working with Google Dialogflow to create a Google Assistant experience.
My GA Action raises support tickets, and those tickets are created in our system via an API.
We ask the user to describe the issue they are facing, and we have used a fallback intent to capture the issue/ticket description (since the reply can be any free text, is this the best way to capture free text?).
Once the user gives a description, a webhook is called and the result is sent to our backend to be captured.
We have noticed that when the user includes the words "not working" as part of the issue description, it always calls the welcome intent instead of going to the follow-up intent. If the user describes the issue without using those words, it works fine. Below are 2 different responses.
I personally feel that this is a bug in Google Assistant. Is there any way to solve it?
I think you're doing some things wrong. I don't have enough information to understand 100% what you are doing, but I will try to give you some general advice:
A fallback intent is used to 'fall back' on when a user asks something that is not covered by any of your other intents. That's why your fallback intent has 'input.unknown' set as its action: it will be triggered when the user gives some input that is unknown to your application. For example, I don't think your '(Pazo) Support Action' will provide an answer if the user asks to book a plane to Iceland, so that's when your fallback intent comes in to give an answer such as 'Sorry, I can't answer that question. Pazo is here to give you support in... What can I do for you?'
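That said, a fallback intent can work for capturing free text at one specific step (such as the issue description) if it is scoped with an input context so it only fires right after you've asked for the description. Here is a rough, untested sketch with the actions-on-google client library; the intent name, the ticket API URL, and the payload field are all invented for illustration.

// Rough, untested sketch: a follow-up fallback intent, scoped to the "describe your issue"
// step via an input context, captures whatever the user said and forwards it to a backend.
// The intent name and the ticket API URL are invented; uses the global fetch of Node 18+.
import { dialogflow } from 'actions-on-google';
import * as functions from 'firebase-functions';

const app = dialogflow();

app.intent('Describe Issue - fallback', async (conv) => {
  const description = conv.query; // the user's raw utterance, whatever it was
  await fetch('https://example.com/api/tickets', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ description }),
  });
  conv.close(`Thanks, I've raised a ticket for: "${description}".`);
});

export const fulfillment = functions.https.onRequest(app);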
Your user can either register a complaint or raise a support ticket, if I'm getting this right? I recommend you make two separate intents: one to handle the complaints and one to handle the support tickets.
Before developing advanced Actions with a separate webhook and a lot of logic calling an API and so on, I recommend going through the Actions on Google documentation:
https://developers.google.com/actions/extending-the-assistant
I am working on a simple custom fact skill for Amazon Alexa and trying to learn more about how to make my own skills!
When I am using the "Test" function in the developer console, asking Alexa "Alexa, open [invocation name]" works fine, and she will present a fact. However, saying "Alexa, open [invocation name] and tell me something" will result in "Hmm, I'm not sure". "Tell me something" is one of my sample utterances. Nothing besides the initial invocation is working. I used the template provided in the Alexa skill kit to build my skill.
Alexa, Open [invocation name]
should open your skill.
Alexa, Ask [invocation name] to [utterance]
should be the right phrasing if you are directly asking it to tell you something (a one-shot invocation).
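To make that one-shot phrasing work, the skill also needs a handler for the intent behind "tell me something", because a one-shot invocation skips the launch request and goes straight to that intent. Here is a minimal, hypothetical sketch with ask-sdk-core; "TellMeSomethingIntent" is an assumed name that would be mapped to the "tell me something" sample utterance in the interaction model.

// Minimal ask-sdk-core sketch (not the asker's actual code). A one-shot phrase like
// "Alexa, ask <invocation name> to tell me something" skips the launch request and goes
// straight to the matching intent, so that intent needs its own handler.
import * as Alexa from 'ask-sdk-core';

const LaunchRequestHandler: Alexa.RequestHandler = {
  canHandle(handlerInput) {
    return Alexa.getRequestType(handlerInput.requestEnvelope) === 'LaunchRequest';
  },
  handle(handlerInput) {
    // "Alexa, open <invocation name>" lands here.
    return handlerInput.responseBuilder
      .speak('Here is a fact: honey never spoils.')
      .getResponse();
  },
};

const TellMeSomethingIntentHandler: Alexa.RequestHandler = {
  canHandle(handlerInput) {
    return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
      && Alexa.getIntentName(handlerInput.requestEnvelope) === 'TellMeSomethingIntent';
  },
  handle(handlerInput) {
    // "Alexa, ask <invocation name> to tell me something" lands here directly.
    return handlerInput.responseBuilder
      .speak('Here is a fact: octopuses have three hearts.')
      .getResponse();
  },
};

export const handler = Alexa.SkillBuilders.custom()
  .addRequestHandlers(LaunchRequestHandler, TellMeSomethingIntentHandler)
  .lambda();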
During our testing, we were unable to complete at least one of the behaviors or actions advertised by your app. Please make sure that a user can complete all core conversational flows listed in your registration information or recommended by your app.
Thank you for submitting your assistant app for review!
During testing, your app was unable to complete a function detailed in the app’s description. The reviewer interacted with the app by saying: “how many iphones were sold in the UK?” and app replied “I didn't get that. Can you try with other question?" and left conversation.
How can I resolve the above point so that my Google Assistant Action gets approved?
Without seeing the code in question, or the intent you think should be handling this in Dialogflow, it is pretty difficult to say exactly, but we can generalize.
It sounds like you have two issues:
Your fallback intent that generated the "I didn't get that" message is closing the conversation. This means that either the "close conversation" checkbox is checked in Dialogflow, you're using the app.tell() method when you should be using app.ask() instead, or the JSON you're sending back has close conversation set to true (see the sketch below).
You don't have an intent to handle the question about how many iPhones were sold in the UK. This could be because you just don't list anything like that as a sample phrase, or the two parameters (the one for object type and the one for location) aren't using entity types that would match.
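On the first point, the same distinction exists in the newer actions-on-google client library as conv.close() versus conv.ask(). A rough illustration of a fallback handler that keeps the conversation open instead of ending it; "Default Fallback Intent" is Dialogflow's standard fallback intent name, and the suggested reply text is just an example.

// Illustration only, using the newer actions-on-google client library, where conv.close()
// ends the conversation (like app.tell()) and conv.ask() keeps the mic open (like app.ask()).
import { dialogflow } from 'actions-on-google';
import * as functions from 'firebase-functions';

const app = dialogflow();

app.intent('Default Fallback Intent', (conv) => {
  // conv.close(...) here would end the conversation after the error message,
  // which is what the reviewer saw. Asking again keeps the conversation open.
  conv.ask("I didn't get that. You can ask things like: how many iPhones were sold in the UK?");
});

export const fulfillment = functions.https.onRequest(app);

The close/tell variant should be reserved for the turn where the conversation is genuinely finished.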
It means that somewhere, either in your app description or in a Dialogflow intent (the reviewers have full access to see what's in your intents), you hinted that “how many iphones were sold in the UK?” would be a valid question. Try changing the description/intents to properly match the restrictions of your app.
In a custom action for Google Home, can I create an intent where the assistant keeps listening for 10 minutes, waiting for a custom keyword without answering?
I looked into the docs and I couldn't find an answer but I guess that what I'm looking for is some kind of parameter that prevents the default answering behavior (when the user stops talking, the assistant answers back) and locks the assistant in listening mode.
Not really. The Assistant is designed more for conversational interaction, and it isn't much of a conversation if it just sits there. It also raises privacy issues: Google is very concerned about the perception of having a permanently open mic recording everything and sending it to some third party.
I understand the use case, however. One thing you might consider is returning a small, quiet beep to indicate you're still listening but haven't heard anything to trigger on yet. You'd do this both as a fallback intent (for when people are speaking but don't say the keyword) and as a reprompt for the no-input event. I haven't tested this sort of approach, however.
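If you wanted to experiment with that, an untested sketch with the actions-on-google library might look like the following. "Keyword" is a hypothetical intent matching your custom keyword, "Default Fallback Intent" is Dialogflow's standard fallback, "No Input" is assumed to be a Dialogflow intent attached to the actions_intent_NO_INPUT event, and the audio URL is a placeholder.

// Untested sketch of the "quiet beep" idea with the actions-on-google library.
import { dialogflow } from 'actions-on-google';
import * as functions from 'firebase-functions';

const app = dialogflow();

// A short, quiet beep wrapped in SSML, used to signal "still listening" without speaking.
const beep = '<speak><audio src="https://example.com/quiet-beep.ogg"></audio></speak>';

app.intent('Keyword', (conv) => {
  conv.close('Heard the keyword, stopping now.');
});

// The user said something that isn't the keyword: beep and keep the mic open.
app.intent('Default Fallback Intent', (conv) => {
  conv.ask(beep);
});

// The user said nothing and the no-input event fired: beep instead of a spoken reprompt.
app.intent('No Input', (conv) => {
  conv.ask(beep);
});

export const fulfillment = functions.https.onRequest(app);

Even then, the Assistant ends the conversation on its own after a few consecutive no-input reprompts, so this still won't keep it listening for anything close to 10 minutes.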