Connecting Alexa to my own NodeJS back-end - node.js

I'm back again with a question about NLP. I made my own back-end, which on one side can connect to websites, the Google Assistant and Facebook Messenger, and on the other side to Dialogflow. On the side, it also logs interactions and does some other database stuff.
Now I'm trying to connect this back-end to Alexa. I made a project which calls my endpoint. This project has one intent with a parameter that should capture the raw user input; I want to send that input to my back-end, process it, and parse and send the response back. I feel like there is no real way to collect and send the raw user input so I can process it myself (on Dialogflow) instead of using the Amazon way of mapping intents and slots.
I know Dialogflow can export to Alexa, but this is not an option for me. I really hope one of you can point me in the right direction.
I just need a way to collect the raw user input, and respond in an Alexa accepted response format.
For Actions on Google for example, I'm using a Custom Project Action Package.
Thanks a lot in advance!

To accept or capture arbitrary user input, you can use @sys.any in Google Assistant (Dialogflow) and AMAZON.SearchQuery in Amazon Alexa.
In Alexa, you have to add a carrier phrase to the sample utterance in order to use AMAZON.SearchQuery, and you can't combine any other slot with AMAZON.SearchQuery in the same utterance.
So there are some limitations. I hope this answer helps you; a rough sketch of the Alexa side follows.
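For example, a minimal Node.js handler using the ask-sdk-core library might look like the sketch below. It assumes an intent named CatchAllIntent with a single slot named query of type AMAZON.SearchQuery (trained with a carrier phrase such as "tell my assistant {query}"), a hypothetical back-end URL, and Node 18+ for the global fetch; adapt these to your own project.

// Sketch only: CatchAllIntent, the "query" slot and the back-end URL are assumptions.
const Alexa = require('ask-sdk-core');

// Hypothetical helper: forward the raw text to your own NLP back-end and return its reply.
async function forwardToBackend(text) {
  const res = await fetch('https://example.com/nlp', {       // Node 18+ global fetch
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ text }),
  });
  return res.json(); // assumed shape: { speech: '...', endSession: true|false }
}

const CatchAllIntentHandler = {
  canHandle(handlerInput) {
    return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
      && Alexa.getIntentName(handlerInput.requestEnvelope) === 'CatchAllIntent';
  },
  async handle(handlerInput) {
    // The AMAZON.SearchQuery slot carries the (more or less) raw user utterance.
    const rawText = Alexa.getSlotValue(handlerInput.requestEnvelope, 'query');
    const reply = await forwardToBackend(rawText);
    return handlerInput.responseBuilder
      .speak(reply.speech)
      .withShouldEndSession(reply.endSession)
      .getResponse();
  },
};

exports.handler = Alexa.SkillBuilders.custom()
  .addRequestHandlers(CatchAllIntentHandler)
  .lambda();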

Related

Basic Concept of Chatbot using Wit.ai

I am trying to create a chatbot application where users can create their own bot, like Botengine. After searching on Google, I saw that I need some NLP API to process the user's query. As per the wit.ai basic example, I can set and get data. Now I am confused: how am I going to create a bot engine?
So, as far as I understand the flow, here is an example for pizza delivery:
The user enters a welcome message, e.g. "Hi" or "Hello".
The welcome reply has been saved by the bot owner in my database.
The user enters some query; I then hit the wit.ai API to process that query. Example: the user's query is "What kind of pizzas are available in your store?" and wit.ai responds with the details of the intent "pizza_type".
Then I search my database for the intent returned by Wit.
So, is that the right flow to create a chatbot? Am I going in the right direction? Could anyone give me a link or an example I can go through? I want to create this application using Node.js. I have also found some examples with node-wit, but can't figure out how to implement this.
Thanks
What you need is a webhook. You need to call different APIs based on the user's intent. I believe you can distinguish between different intents using the parameters available in the request. Check this out: Creating nodejs webhook for dialogflow
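As a rough illustration of that flow in Node.js (using express and node-wit): the /webhook route, the request shape and the intent-to-handler map below are placeholders, and the exact shape of the Wit.ai response (an intents array vs. intent entities) depends on the API version you use.

// Sketch only: the route, request shape and handler map are assumptions.
const express = require('express');
const { Wit } = require('node-wit');

const app = express();
app.use(express.json());

const witClient = new Wit({ accessToken: process.env.WIT_TOKEN });

// Map Wit.ai intents to your own handlers (e.g. database lookups).
const intentHandlers = {
  pizza_type: async () => 'We have margherita, pepperoni and veggie pizzas.',
  greeting: async () => 'Hi! How can I help you today?',
};

app.post('/webhook', async (req, res) => {
  const userText = req.body.message;                        // assumed request shape
  const witResponse = await witClient.message(userText, {});
  // Newer Wit.ai responses expose an `intents` array; adjust for your API version.
  const intent = witResponse.intents && witResponse.intents[0]
    ? witResponse.intents[0].name
    : null;
  const handler = intentHandlers[intent];
  const reply = handler ? await handler() : "Sorry, I didn't get that.";
  res.json({ reply });
});

app.listen(3000);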

How to create a chatbot with Facebook Messenger-like templates

I'm trying to create a chatbot for use in a chat app I've created. I basically need the chatbot to send me replies that have message templates like the ones in Facebook Messenger. For example, if I type in "what's the weather like", I want my chatbot's reply to look like Facebook's media template, linked here: Media Template
Does anyone have any tutorials or links I can follow?
Thank you in advance.
Cheers!
Usually, the workflow of a chat application is as follows:
Message providers (Facebook, Twitter, Slack, etc.) receive messages from the user.
The message is sent to the configured endpoint (your web server) according to the settings provided in the Facebook developer page (reference).
In the web server, you classify the intent, prepare the response according to the request, and send the response back.
So in the third point, the web server gives responses based on the platform you are responding to (reference). Since in your case it's your own platform, you need to design your own UI based on the response format, or you can use some predefined HTML templates.
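For instance, your webhook could return a small, self-defined "template" payload that your own chat UI knows how to render; the route and field names below are purely illustrative (this is not a Messenger API), so treat the whole response shape as an assumption.

// Sketch only: the response format is something you define yourself for your own UI.
const express = require('express');
const app = express();
app.use(express.json());

app.post('/webhook', (req, res) => {
  const text = (req.body.message || '').toLowerCase();      // assumed request shape

  if (text.includes('weather')) {
    // A media-template-like payload; your front end decides how to render it.
    return res.json({
      type: 'media_template',
      title: 'Weather in your city',
      subtitle: 'Sunny, 24°C',
      imageUrl: 'https://example.com/weather.png',
      buttons: [{ label: 'Full forecast', url: 'https://example.com/forecast' }],
    });
  }

  // Fall back to a plain text reply.
  res.json({ type: 'text', text: "Sorry, I can't help with that yet." });
});

app.listen(3000);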
I hope this answer gives you some direction to work with.

Port existing custom chatbot as Google Assistant action

We have a framework that implements chatbot / voice assistant logic for handling complex conversations in the health domain. Everything is implemented on our server side. This gives us full control of how responses are generated.
The channel (such as Alexa or Facebook Messenger cloud) calls our webhook:
When the user sends a message, the platform sends these to our webhook: a hashed user ID and the message text (a chat message or transcribed voice).
Our webhook responds with an appropriately structured response, which includes text to be displayed or spoken, possibly choice buttons, some images, etc. It also includes a flag indicating whether the current session has finished or further user input is expected.
Integrating a new channel involves converting the returned response into the form expected by that channel and setting some flags (has voice, has display, etc.).
This simple framework has worked so far for Facebook Messenger, Cortana, Alexa (a little bit of hacking was needed to bypass its intent and slot recognition), and our web chatbot.
We wanted to write a thin layer of support for a Google Assistant action.
Is there any way of passing all the input from the Assistant user intact into a webhook such as the one described above, taking full control of how responses are generated and how the end of the conversation is determined?
I'd rather not delve into the cumbersome ways API.AI has of structuring a conversation, which seem fine for trivial scenarios such as ordering an Uber but very bad for longer conversations.
Since you already have a Natural Language Understanding layer for your system, you don't need API.AI/Dialogflow, and you can skip this layer completely. (The NLU is useful, even for large and extensive conversations, but doesn't make sense in your case where you've already defined the conversation through other means.)
You'll need to use the Actions SDK (sometimes known as actions.json after the configuration file it uses) to define triggering phrases, but after that you'll get all the text that the user says as part of your conversation through a webhook that delivers JSON to you. You'll reply with JSON that contains the text/audio response, images on cards, possibly suggestion chips, etc.
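A minimal sketch with the actions-on-google Node.js client library and the Actions SDK (not Dialogflow) might look like this; handleTurn() is a hypothetical stand-in for your existing server-side conversation engine, and the /fulfillment route is an assumption.

// Sketch only: handleTurn() and the route are assumptions.
const express = require('express');
const { actionssdk } = require('actions-on-google');

const app = actionssdk();

// Hypothetical stand-in for your server-side conversation framework.
async function handleTurn(userText) {
  return { text: `You said: ${userText}`, endOfConversation: false };
}

// First turn of the conversation (a triggering phrase matched).
app.intent('actions.intent.MAIN', (conv) => {
  conv.ask('Hi, this is your health assistant. How can I help?');
});

// Every subsequent turn: you receive the raw text the user said.
app.intent('actions.intent.TEXT', async (conv, input) => {
  const reply = await handleTurn(input);
  if (reply.endOfConversation) {
    conv.close(reply.text);
  } else {
    conv.ask(reply.text);
  }
});

const server = express();
server.use(express.json());
server.post('/fulfillment', app);
server.listen(3000);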

Return response to Google Assistant via API

I have an Actions on Google project that uses api.ai for its actions. This is working well, and I can see requests/responses appear on the Google Assistant interface (on mobiles and in the simulator).
One of my use cases for api.ai needs to be broken into two parts, in that we have to inform the user that processing has started and then inform them again once it has completed (without them re-prompting for the output).
I'm trying to find a way to inform the user on the Google Assistant when the processing is completed, but have failed so far. Something like this:
User: I would like to see if my loan request is approved
Google Assistant: Hold on, let me check and let you know.
.... (Makes a webservice call to the backend asynchronously)
.... After few seconds ...
.... Postback to google assistant from the webservice
Google Assistant: Thanks for holding, your request is approved.
I'm not sure how to do the "postback to Google Assistant" call. I have tried to get the SessionId from the API.AI call and then use that to make an event request, but that doesn't seem to send the response to the Assistant. Google Assistant seems to be using the formats defined in https://developers.google.com/actions/reference/rest/Shared.Types/AppRequest, but I'm unsure how to get the ConversationToken and use it to send the response back to the user.
Short answer: you can't do that.
Slightly longer answer: At least right now, there is no good way to send a notification. Your Action can only respond to a specific statement from the user. You can say something like "ask again in a minute and I should have a result for you", but that isn't a great experience. At Google I/O 2017, they announced that notifications would be coming to the Google Home at some point... but gave neither a time frame nor any information about an API.
Long, but probably still unsatisfying answer: You can look into Transactions, which let the user initiate a purchase or request of some sort and then "check out". Once they have checked out, you would confirm that a transaction is being processed with an OrderUpdate, and you can then send updates with the status of the "order". These status updates can turn into notifications, or users can query the state of the order at any time. Transactions don't require payment, so this may work depending on your needs.
However, there are a few things to note. This is still in developer preview, so things may change in the future. It also doesn't work on all surfaces where the Assistant runs, so while it does work on Assistant on phones, it does not work on the Google Home right now.
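Very roughly, and with the caveat above that this API was still in developer preview: with the actions-on-google client library, the in-dialog confirmation looked something like the sketch below. The intent name, order ID and state values are assumptions, and asynchronous status updates sent after the conversation has ended go through a separate REST call rather than this webhook response.

// Rough, preview-era sketch: the intent name, order id and states are assumptions.
const { dialogflow, OrderUpdate } = require('actions-on-google');

const app = dialogflow();

app.intent('transaction.decision.complete', (conv) => {
  const orderId = 'loan-request-42';  // hypothetical id generated by your back-end
  // Confirm that the "order" (here: the loan request) is being processed.
  conv.ask(new OrderUpdate({
    actionOrderId: orderId,
    orderState: {
      label: 'Request received',
      state: 'CREATED',
    },
    updateTime: new Date().toISOString(),
  }));
  conv.ask('Thanks, your loan request is being processed. I will keep you posted.');
});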

Control your device with custom commands using Actions in Google

Just getting started with Assistant features on the RPi. I am able to successfully implement things up to this point and am wondering a few things.
Scenario:
User: "Hey Google, please turn on my living room lights"
My code in hotword.py has a function to perform the same action, based on ON_RECOGNIZING_SPEECH_FINISHED.
RPi/Google Home: I am not sure how to respond to that.
I was able to capture the query asked by the user using ON_RECOGNIZING_SPEECH_FINISHED = Args.text(str) and use it in my logic to perform the task. However, at the same time, "OK Google" is responding with its own answer.
To mitigate this problem, I created a Google Action; now it understands my query and responds with the intent from api.ai. However, it doesn't act on turning the lights ON. So I'm wondering how I can read the response from Google Home/api.ai as text and change my code to act on it locally.
I'd appreciate it.
You will not get the response as text.
To get the response to your client app, use a webhook in API.AI and send a message to the client app using FCM.
Read the FCM message in the client app and perform the corresponding actions.
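A rough sketch of that approach with Node.js, a Dialogflow (API.AI) fulfillment webhook and firebase-admin follows; the intent name, the device-token lookup and the command payload are all assumptions, and the request fields follow the Dialogflow v2 fulfillment format.

// Sketch only: intent name, token lookup and payload fields are assumptions.
const express = require('express');
const admin = require('firebase-admin');

admin.initializeApp(); // assumes GOOGLE_APPLICATION_CREDENTIALS is configured

const app = express();
app.use(express.json());

// Hypothetical lookup; replace with your own device registry.
async function getDeviceToken(sessionId) {
  return 'device-registration-token';
}

app.post('/webhook', async (req, res) => {
  const intent = req.body.queryResult.intent.displayName;   // Dialogflow v2 request
  const params = req.body.queryResult.parameters;

  if (intent === 'turn_on_lights') {
    // Push the command to the Raspberry Pi client over FCM.
    const deviceToken = await getDeviceToken(req.body.session);
    await admin.messaging().send({
      token: deviceToken,
      data: { command: 'lights_on', room: String(params.room || 'living room') },
    });
  }

  // What the Assistant says back to the user.
  res.json({ fulfillmentText: 'Okay, turning on the lights.' });
});

app.listen(3000);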
I finally was able to figure out multiple ways and answered this in another Stack Overflow question; find more details in this post.
There are multiple ways to handle this, since Google doesn't give us the voice transcript; letting Google say our own transcript is kind of a solution for now.
