Alexa Custom Skill Sample Utterances

I am working on a simple custom fact skill for Amazon Alexa and trying to learn more about how to make my own skills!
When I use the "Test" function in the developer console, saying "Alexa, open [invocation name]" works fine, and she will present a fact. However, saying "Alexa, open [invocation name] and tell me something" results in "Hmm, I'm not sure". "Tell me something" is one of my sample utterances. Nothing besides the initial invocation is working. I used the template provided in the Alexa Skills Kit to build my skill.

"Alexa, open [invocation name]" should open your skill.
"Alexa, ask [invocation name] to [utterance]" is the right form when you are directly asking the skill to do something. For your example, that would be "Alexa, ask [invocation name] to tell me something".
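For the one-shot form to work, "tell me something" must also be attached as a sample utterance to an intent in the skill's interaction model. A minimal sketch of the relevant JSON, assuming a hypothetical fact intent named GetNewFactIntent and the invocation name "my fact skill":

```json
{
  "interactionModel": {
    "languageModel": {
      "invocationName": "my fact skill",
      "intents": [
        {
          "name": "GetNewFactIntent",
          "slots": [],
          "samples": [
            "tell me something",
            "tell me a fact"
          ]
        }
      ]
    }
  }
}
```

With that in place, "Alexa, ask my fact skill to tell me something" should route directly to GetNewFactIntent.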

Related

Open invocation name - Not working for first launch

My expectation and experience with most skills in the store is that I should be able to choose a skill that I've never used or enabled before, and simply say "Open «skill invocation name»" to launch the skill on an Alexa device.
I've recently deployed a skill, and I'm finding that I can use it only via the "Enable «skill invocation name» skill" phrase.
However, it's not opening via the "Open «skill invocation name»" phrase.
Is it possible that it takes time for Alexa to index the name for this shortened "Open «skill invocation name»" invocation phrase for users who have not previously enabled or invoked a skill?
Open "<< skill invocation name >>"
should work fine out of the box. As far as I know it is only one shot invocations that can take some time to train after a Skill has been recently deployed/updated.
Ask "<< skill invocation name >>" to do X
Not an answer, but I guess you can try all possible variations for launching a skill from this list https://developer.amazon.com/en-US/docs/alexa/custom-skills/understanding-how-users-invoke-custom-skills.html#no-intent and see which work and which don't for you.
Is your skill name difficult to pronounce, so that Alexa misunderstands it? If so, you can check what Alexa thinks you are saying in the Alexa app; for instructions, see https://help.xappmedia.com/hc/en-us/articles/360032800651-Checking-your-Utterance-History-in-the-Alexa-App.
I had the same problem/question about a year ago and asked the Alexa developer support. Their answer was basically that a user always needs to enable a skill for the first time via voice or in the app.
The shortcut of using "open" directly for the very first invocation is not guaranteed to work. I think of it as an undocumented feature under test that is not available all of the time.

Dialogflow works in test console, but not via Google Assistant

I have a smart home Dialogflow webhook working from the Google Actions test console, but when I speak to a Google Home device, there is no sign that my intents are being recognized. E.g., I enter “Home temperature?” in the console, and I can see it calling my webhook, executing my script, and responding with “The temperature is 72 degrees.”
But when I say “Hey Google, Home temperature” to my Google Home device, it says my Nest device is not registered, or something like that. I.e., it responds exactly as it would if I did not have smart home action intents registered with Google Actions.
I am unable to find anything in the docs, or by web searches, that says what I am supposed to do to get my Google Assistant devices to recognize my custom intent phrases.
Does anyone have this working? The Smart Home integration is not supposed to require a lead-in, like “Hey Google, ask whoever, Home temperature”, right? That is only for “conversation mode” integrations, correct? My understanding is that “Smart Home” mode does not require a lead-in. Please correct me if that is incorrect…
Either way, my voice requests through my Google Home are not recognized.
Please, any advice for what I am missing or how I can troubleshoot this?
Thanks!
P.S. I'm new to Stack Overflow, and I didn't find this "dialogflow" group until posting in another group. So I am reposting here. Sorry if this is redundant. I could not find how to delete the original post...
It sounds like I was wrong about the "Hey Google, talk to ..." requirement for Dialogflow.
The "Smart Home" mode does not preclude this. You cannot just say, "Hey Google, home temperature?", you have to say, "Hey Google, ask [my dialogflow app], home temperature?"
Furthermore, unless you Publish your app, the response will always say, "Alright, here's the test version of [my dialogflow app]...
Between the two, it pretty much ruins it for me... Off to the drawing board.

Is it possible to create a one-shot app with Actions on Google?

I am completely new to the "Actions on Google" world, but following some tutorials (like this) I have already achieved good results.
My test
Using Google Assistant and/or a Google Home Mini, I send my commands to a personal Node.js server online.
To do this:
I have created a new project on https://console.actions.google.com/
selected the conversational option
selected the create action / custom intent option
from Dialogflow, I have personalized the Default Welcome Intent and created a new intent with the Fulfillment option set to Enable webhook call for this intent
And obviously, from Dialogflow > Fulfillment, I have enabled the Webhook option (with the URL of my Node.js app), not the Inline editor.
This procedure works: when my app recognizes my custom intent, the call is sent to my Node.js app online.
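For context, a minimal sketch of what such a webhook endpoint can look like, assuming Express on the Node.js side; the intent name and reply text are placeholders, not taken from the question:

```javascript
// Minimal Dialogflow (v2) webhook sketch using Express.
const express = require('express');

const server = express();
server.use(express.json());

server.post('/webhook', (req, res) => {
  // Dialogflow v2 sends the matched intent under queryResult.
  const intent = req.body.queryResult.intent.displayName;

  if (intent === 'MyCustomIntent') { // hypothetical intent name
    res.json({ fulfillmentText: 'Command received and executed.' });
  } else {
    res.json({ fulfillmentText: "Sorry, I can't handle that intent." });
  }
});

server.listen(3000);
```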
My problem
The procedure works, but I always have to do 2 steps before I can perform my action:
1) Hey Google, talk with "nameofmyapp"
2) Say the command
My goal
Execute my command directly, without always having to do these two steps.
Absolutely! Google calls this "deep linking". With this, you'll be able to do something like
Hey Google, ask nameofmyapp to command
See the documentation for details, but in short you'll
Make sure you have an Intent for the command in Dialogflow, with several possible phrases that can be used to trigger it.
These phrases should be what you'd say under "command" in the example above - you'd omit the "to" part.
Go to the Integrations section in Dialogflow, under the Google Assistant integration.
In the Implicit invocation section, select the Intent that you'd like to allow as a deep-linked Intent.
If the command takes action and then should quit, make sure that you have either set this in Dialogflow or that your fulfillment calls app.close().
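As a sketch of that last point: with the actions-on-google v2 client library, the handler receives a conversation object whose close() method speaks a final response and ends the conversation, so a deep-linked invocation behaves like a one-shot command. The intent name and response text below are placeholders:

```javascript
// Sketch using the actions-on-google client library (v2 style).
const { dialogflow } = require('actions-on-google');
const express = require('express');

const app = dialogflow();

app.intent('MyCommandIntent', (conv) => {
  // close() speaks the answer and then ends the conversation.
  conv.close('Done. Your command has been executed.');
});

// Mount the fulfillment handler on an Express server.
express().use(express.json()).post('/fulfillment', app).listen(3000);
```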

How to keep the mic open in Dialogflow

I'm using Dialogflow to create a chatbot. I want to keep the mic open throughout a conversation so the user doesn't have to press the mic button every time.
Is this possible?
Thanks!
If you are referring to the Dialogflow test console (the "Try it now" prompt), it doesn't seem you can.
But I will assume that you are referring to the Google Assistant integration.
First of all, you will have to end each answer with an additional question, e.g. "Can I help you with anything else?"; this will be verified when you deploy your bot to production in the Actions on Google console. For each intent where you want to keep the microphone listening, you will have to make sure that the "Set this intent as end of conversation" option is disabled.
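On the webhook side, the same distinction shows up in the actions-on-google v2 client library: ask() keeps the microphone open for a follow-up, while close() ends the conversation. A minimal sketch with placeholder intent names:

```javascript
const { dialogflow } = require('actions-on-google');
const app = dialogflow();

app.intent('AnswerSomething', (conv) => {
  // ask() sends a response and leaves the microphone open for a reply.
  conv.ask('Here is your answer. Can I help you with anything else?');
});

app.intent('Goodbye', (conv) => {
  // close() sends a response and ends the conversation (the mic closes).
  conv.close('Okay, goodbye!');
});
```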

Alexa Node Skill Questions / Prompts

So after scouring the web, I am unable to find an answer to my problem.
Basically, I want to produce the following result in Alexa, and I want to know if it's possible, and what direction I should be looking in to achieve it.
Skill / Intent Init
"Hey Alexa.. ask to find a restaurant near me"
Prompt
"What's your favorite cuisine?"
Response
"Italian"
Prompt
"Are you looking to spend a lot of money?"
Response
"No"
The intent logic goes somewhere in the middle of this
"Okay I found a restaurant near you called
This looks like a fairly standard Alexa custom skill. Most of the Alexa examples and tutorials would show you how to do this. I suggest looking at the Amazon developer site for their Alexa custom skill examples and tutorials, or just searching on "Alexa tutorial".
You will be collecting 3 bits of information:
The user's location
The type of food
The price range
These will need to be persisted between questions, so look at examples that either use a database to store the info (DynamoDB is about the easiest to use) or that persist information in the session object (this would be my recommendation).
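As an illustration of the session-object approach, here is a sketch using ask-sdk-core (the ASK SDK v2 for Node.js); the intent and slot names ('CuisineIntent', 'cuisine') are assumptions for illustration:

```javascript
const Alexa = require('ask-sdk-core');

const CuisineIntentHandler = {
  canHandle(handlerInput) {
    return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
      && Alexa.getIntentName(handlerInput.requestEnvelope) === 'CuisineIntent';
  },
  handle(handlerInput) {
    // Store the answer in the session so later turns can read it.
    const attributes = handlerInput.attributesManager.getSessionAttributes();
    attributes.cuisine = Alexa.getSlotValue(handlerInput.requestEnvelope, 'cuisine');
    handlerInput.attributesManager.setSessionAttributes(attributes);

    // Ask the next question; the open session awaits the user's reply.
    return handlerInput.responseBuilder
      .speak('Are you looking to spend a lot of money?')
      .reprompt('Do you want to spend a lot of money?')
      .getResponse();
  },
};
```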
You can either ask the user for their location using the built-in city slot type, or obtain the address of the Alexa device using the Device Address API.
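And a sketch of the Device Address API route, again with ask-sdk-core; 'FindRestaurantIntent' is hypothetical, serviceClientFactory is only available when the skill builder is configured with .withApiClient(new Alexa.DefaultApiClient()), and the user must have granted the address permission:

```javascript
const Alexa = require('ask-sdk-core');

const DeviceLocationHandler = {
  canHandle(handlerInput) {
    return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
      && Alexa.getIntentName(handlerInput.requestEnvelope) === 'FindRestaurantIntent';
  },
  async handle(handlerInput) {
    const deviceId = Alexa.getDeviceId(handlerInput.requestEnvelope);
    const client = handlerInput.serviceClientFactory.getDeviceAddressServiceClient();
    const address = await client.getFullAddress(deviceId);

    return handlerInput.responseBuilder
      .speak(`Searching near ${address.city}. What's your favorite cuisine?`)
      .reprompt("What's your favorite cuisine?")
      .getResponse();
  },
};
```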
Good luck. I hope this helps give you some pointers on how/where to start.
