I am making a mobile app for a college project that will feature some games. I was thinking about how I could make it better, and I thought of using my Amazon Echo that has been collecting dust since I bought it :D
I had an idea of saying something like "Alexa, show me only FPS games", and in my app I grab that input and filter the app to only show FPS games. But the question is, how do I grab Alexa's input? What's the simplest way, and is it even possible?
I had an idea that maybe I can grab Alexa's input in the form of JSON and then program against it, but is that possible?
I have never programmed Alexa skills, so I have no clue where to start with this; any directions would be pretty helpful! Also, keep in mind that I am a student who doesn't have much programming experience, but I am willing to do the research.
Thanks a lot, cheers!
Traditional Alexa skills use Lambda functions that respond to events from the Alexa Skills Kit. The flow of events looks like the following:
Echo device -> Alexa -> AWS Lambda -> Alexa -> Echo Device
Lambda functions are not Alexa-only components, though, meaning you can program them to do whatever you want. Want to record metrics to a database before Alexa responds? Don't want Alexa to respond at all? That's entirely up to you.
For the use case you described, you could write a Lambda function that filters a list of games for any spoken keyword, pushes that list to a client, and then ends the Alexa conversation: a "one-way" Alexa skill.
Echo device -> Alexa -> AWS Lambda
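As a rough sketch of that one-way flow, the Lambda could look something like this. The intent name, the `genre` slot, and the `GAMES` list are all made-up placeholders; in a real skill you'd push the filtered list to your app (via a queue, database, or push notification) rather than just speaking the count:

```python
# Hypothetical sketch of an Alexa-triggered Lambda that filters a game list.
# The slot name ("genre") and GAMES data are placeholders for illustration.

GAMES = [
    {"title": "Space Blaster", "genre": "FPS"},
    {"title": "Farm Tycoon", "genre": "simulation"},
    {"title": "Corridor Strike", "genre": "FPS"},
]

def filter_games(genre):
    """Return only the games matching the spoken genre (case-insensitive)."""
    return [g for g in GAMES if g["genre"].lower() == genre.lower()]

def lambda_handler(event, context):
    """Entry point for the Alexa Skills Kit request."""
    slots = event["request"]["intent"]["slots"]
    genre = slots["genre"]["value"]  # e.g. "FPS"
    matches = filter_games(genre)
    # Here you would push `matches` to your mobile app, then end the
    # session so the conversation stays "one-way".
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {
                "type": "PlainText",
                "text": f"Showing {len(matches)} {genre} games.",
            },
            "shouldEndSession": True,
        },
    }
```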
With that being said, you don't really need to use Alexa for this. There are plenty of other speech-to-text services that can accomplish it (Amazon Transcribe, Watson Speech to Text, Google Speech Recognition). Additionally, you could probably plug those in without writing any server-side code, so that's a plus.
I want to send "voice" commands to my home alexa device to do, well, anything that I "ask" it to.
So I want to interact with my Alexa device via Python. I feel like I'm going around in circles trying to get this to work. At one point I tried using gTTS to convert my text to an audio file and sending that audio file to an Alexa endpoint, but it doesn't do anything. I have even created a "product" and gotten my product ID, client ID, client secret, and refresh token, and have been attempting to use those.
Is this possible? I need some hope here. I'm feeling a little down like this isn't possible.
If it is possible am I going down the right track?
Your home device is only meant to take commands via the mic. You could, however, create a DIY hardware or software device that consumes the Alexa Voice Service (AVS) and feed your audio to that instead.
I'm back again with a question about NLP. I made my own back-end, which on one side can connect to websites, the Google Assistant, and Facebook Messenger, and on the other end to Dialogflow. On the side, it logs interactions and does some other database stuff.
Now I'm trying to connect this back-end to Alexa. I made a project which calls my endpoint. This project has one intent with a parameter which should capture the raw user input; I send that to my back-end, process it, and parse and send back the response. I feel like there is no real way to collect and send the raw user input so I can process it myself (in Dialogflow) instead of using the Amazon way of mapping intents and such.
I know Dialogflow can export to Alexa, but this is not an option for me. I really hope one of you can point me in the right direction.
I just need a way to collect the raw user input, and respond in an Alexa accepted response format.
For Actions on Google for example, I'm using a Custom Project Action Package.
Thanks a lot in advance!
To accept or capture arbitrary user input, you can use the sys.any entity in Google Assistant (Dialogflow) and the AMAZON.SearchQuery slot type in Amazon Alexa.
In Alexa, you have to add a carrier phrase to your sample utterances to use AMAZON.SearchQuery, and you can't combine any other slot with AMAZON.SearchQuery in the same utterance.
So there are some limitations. I hope this answer helps you.
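As a sketch, an interaction model using AMAZON.SearchQuery might look like the following. The invocation name, intent name, and slot name are placeholders; note the carrier words ("search for", "find") in the sample utterances, which Alexa requires around a SearchQuery slot:

```json
{
  "interactionModel": {
    "languageModel": {
      "invocationName": "my backend",
      "intents": [
        {
          "name": "RawInputIntent",
          "slots": [
            { "name": "query", "type": "AMAZON.SearchQuery" }
          ],
          "samples": [
            "search for {query}",
            "find {query}"
          ]
        }
      ]
    }
  }
}
```

Your webhook then reads the `query` slot's value as the (near-raw) user utterance and can forward it to Dialogflow for processing.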
I have developed a complex dialogflow bot, which uses complex dialogs to get data from user and then store it in database. And then give responses to users by querying db.
Can I use the same logic/webhooks/code from an Alexa skill? I don't want to write such complex logic again for the Alexa skill.
What I want is: whenever an Alexa intent is invoked by the user, I want to forward that intent to my Dialogflow webhook to handle it. Is it possible? If so, can you please provide any documentation/examples/tutorials etc.?
My dialogflow model consists of 4 slot types:
Date
Number
any
some custom slots
I am certain this is not possible straight away, as the REST API of Dialogflow is different from that of Alexa. Also, Alexa is not fully supported for integration in Dialogflow the way Facebook or Slack are. If your code is well written and the business logic is separate from the platform request/response mapping, then you will be able to use the same business logic in your Alexa webhook code. You just need to write the code for consuming Alexa's REST API in this case.
Yes, this is possible. While Dialogflow and Alexa have different webhook JSON formats, fundamentally they both do the same thing. You will need to handle parsing the JSON to get what you need, and then formatting the response, so each uses their particular format - but the logic that you are using should still be sound and available to both.
Dialogflow lets you export the model into an Alexa compatible format that you can paste into the Alexa Skills Kit. This helps at least a bit.
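To make the "separate your business logic from the platform mapping" advice concrete, here is a minimal sketch that normalizes both webhook formats into one internal shape. The field paths follow the Alexa Skills Kit request format and the Dialogflow v2 WebhookRequest; the `handle` function stands in for your actual business logic:

```python
# Sketch: normalize Alexa and Dialogflow webhook requests into one internal
# shape so the same business logic can serve both platforms.

def parse_alexa(body):
    """Extract intent name and slot values from an Alexa request."""
    intent = body["request"]["intent"]
    slots = {name: s.get("value") for name, s in intent.get("slots", {}).items()}
    return {"intent": intent["name"], "params": slots}

def parse_dialogflow(body):
    """Extract intent name and parameters from a Dialogflow v2 WebhookRequest."""
    result = body["queryResult"]
    return {"intent": result["intent"]["displayName"],
            "params": result.get("parameters", {})}

def handle(normalized):
    # Placeholder for your shared business logic; same for both platforms.
    return f"You asked for {normalized['intent']}"

def alexa_response(text):
    """Wrap text in the Alexa response envelope."""
    return {"version": "1.0",
            "response": {"outputSpeech": {"type": "PlainText", "text": text}}}

def dialogflow_response(text):
    """Wrap text in the Dialogflow fulfillment envelope."""
    return {"fulfillmentText": text}
```

Each platform's endpoint then becomes a thin wrapper: parse, call `handle`, and format the reply in that platform's envelope.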
So after scavenging the web, I am unable to find an answer to my problem.
Basically, I want to produce the following result in Alexa, and I want to know if it's possible and what direction I should be looking in to achieve it.
Skill / Intent Init
"Hey Alexa.. ask to find a restaurant near me"
Prompt
"What's your favorite cuisine?"
Response
"Italian"
Prompt
"Are you looking to spend a lot of money?"
Response
"No"
The intent logic goes somewhere in the middle of this
"Okay I found a restaurant near you called
This looks like a fairly standard Alexa custom skill. Most of the Alexa examples and tutorials would show you how to do this. I suggest looking at the Amazon developer site for their Alexa custom skill examples and tutorials, or just searching on "Alexa tutorial".
You will be collecting three pieces of information:
The user's location
The type of food
The price preference
These will need to be persisted between questions, so look at examples that either use a database to store the info (DynamoDB is about the easiest to use) or that persist information in the session object (this would be my recommendation).
You can either ask the user for their location using the built-in city slot type, or obtain the address of the Alexa device using the Device Address API.
Good luck. I hope this helps give you some pointers on how/where to start.
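To illustrate the session-object approach, here is a minimal sketch of one turn of that dialog. The handler and slot names are illustrative, not from an actual skill; the key point is that whatever you put in `sessionAttributes` comes back on the user's next request:

```python
# Sketch of persisting answers across turns in the Alexa session object.
# Handler and slot names are illustrative placeholders.

def build_response(speech, session_attributes, end_session=False):
    """Wrap speech text in the Alexa response format, carrying attributes forward."""
    return {
        "version": "1.0",
        "sessionAttributes": session_attributes,
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": end_session,
        },
    }

def handle_cuisine_intent(event):
    """Store the cuisine answer in the session, then ask the next question."""
    attrs = event["session"].get("attributes") or {}
    attrs["cuisine"] = event["request"]["intent"]["slots"]["cuisine"]["value"]
    # Alexa echoes sessionAttributes back on the next request,
    # so the cuisine survives into the price question.
    return build_response("Are you looking to spend a lot of money?", attrs)
```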
I'm trying to develop a skill for amazon alexa whereby the application leads the user into a new state.
"User input" -> "Speak" -> "Ask Question" -> "User input" .... etc
is the most obvious way of going about this; however, this means I have to rather bluntly mash together "speak" and "ask question".
Is there another way to chain events for Amazon Alexa? Say, for example, emit some speech and then go to another handler? (I know that I can emit("handlerName") and switch to another handler, but I can't do that AND make Alexa speak before the switch happens.)
The best way to "chain events" and maintain state is by using the session object in the Alexa API's request and response structures. Store a variable in the session attributes indicating the current step in your flow.
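As a sketch, you can drive the whole dialog from a single "step" attribute, so each turn both speaks and moves the conversation to the next state without handler-to-handler hops. The step names and questions are made up for illustration:

```python
# Sketch: a tiny dialog state machine driven by a "step" value stored in
# sessionAttributes. Step names and prompts are illustrative placeholders.

STEPS = {
    "ASK_CUISINE": ("What's your favorite cuisine?", "ASK_PRICE"),
    "ASK_PRICE": ("Are you looking to spend a lot of money?", "DONE"),
}

def next_turn(session_attributes):
    """Speak the current step's question and advance the stored step."""
    step = session_attributes.get("step", "ASK_CUISINE")
    question, next_step = STEPS[step]
    session_attributes["step"] = next_step
    return {
        "version": "1.0",
        "sessionAttributes": session_attributes,
        "response": {
            "outputSpeech": {"type": "PlainText", "text": question},
            "shouldEndSession": False,
        },
    }
```

Because Alexa sends the session attributes back with the next request, each incoming turn knows exactly where it is in the flow, and "speak, then move on" happens in a single response.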