I am trying to build a Dialogflow application with an integration to Google Assistant (Actions on Google). What I need is periodic execution of my script at a certain time on a chosen Google Home device - I managed to do that through Routines.
Unfortunately, setting up a Routine is not as easy as I expected (you have to go through several clicks and type a custom action name). Then I found that it is possible to ask the user for this in the Assistant (Routine Suggestions) and let them set it up in fewer steps.
But my implementation is not working:
exports.dialogflowFirebaseFulfillment = functions.https.onRequest((request, response) => {
  const agent = new WebhookClient({ request, response });
  ...
  function scheduleRoutine(agent) {
    const intent = agent.arguments.get('UPDATE_INTENT');
    agent.add(new RegisterUpdate({
      intent: intent,
      frequency: 'ROUTINES'
    }));
  }
  let intentMap = new Map();
  ...
  intentMap.set('setup_update', scheduleRoutine);
  agent.handleRequest(intentMap);
});
Because I am using WebhookClient, I am not able to call conv.arguments.get('UPDATE_INTENT') as in the example. I can still reach that part of the code through fulfillment, but it leads to this error:
TypeError: Cannot read property 'get' of undefined
at scheduleRoutine (/user_code/index.js:71:34)
Has anybody already implemented Routine Suggestions with Dialogflow?
Are you trying to use RegisterUpdate from the actions-on-google library? You cannot mix features from that library with the dialogflow-fulfillment library; they are incompatible.
If you want to use features specific to Actions on Google, you must use the actions-on-google library for your webhook.
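For illustration, here is a minimal sketch of the same handler written entirely with the actions-on-google library, where conv.arguments is available. The intent name setup_update is taken from your code; the rest follows the library's documented RegisterUpdate usage, so treat it as a starting point rather than a drop-in fix:
const functions = require('firebase-functions');
const { dialogflow, RegisterUpdate } = require('actions-on-google');

const app = dialogflow();

app.intent('setup_update', (conv) => {
  // With this library the Actions on Google arguments are exposed on conv
  const intent = conv.arguments.get('UPDATE_INTENT');
  conv.ask(new RegisterUpdate({
    intent: intent,
    frequency: 'ROUTINES',
  }));
});

// The app itself is the request handler, so it is passed straight to onRequest
exports.dialogflowFirebaseFulfillment = functions.https.onRequest(app);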
Related
First off, I'm new to Alexa skill development, so I have much to learn. I've been banging my head off the desk trying to figure this out. I've found various tutorials and have gone over the information provided by Amazon for accessing the Customer Profile API via an Alexa skill but still can't manage to obtain the customer's phone number.
I'm using the AWS console's in-line code editor (Cloud9). Most, if not all, instructions use modules like 'axios', 'request', or 'https', which I don't think is possible there unless you use the ask-cli (please correct me if I'm wrong). Also, I followed a tutorial that had me use Skillinator.io to create an AWS Lambda template based on the skill's JSON in the Amazon Developer console. The format of the code in the Customer Profile API tutorials does not match what the Skillinator.io tool provided. The way the intent handlers are set up is different, which I believe is where my confusion comes from. Here's an example:
Skillinator.io code:
const handlers = {
  'LaunchRequest': function () {
    welcomeOutput = 'Welcome to the Alexa Skills Kit!';
    welcomeReprompt = 'You can say, Hello!';
    this.emit(':ask', welcomeOutput, welcomeReprompt);
  },
};
Tutorial code:
const LaunchRequestHandler = {
  canHandle(handlerInput) {
    return handlerInput.requestEnvelope.request.type === 'LaunchRequest';
  },
  handle(handlerInput) {
    const speechText = 'Welcome to the Alexa Skills Kit!';
    return handlerInput.responseBuilder
      .speak(speechText)
      .reprompt(speechText)
      .withSimpleCard('Hello World', speechText)
      .getResponse();
  }
};
Can anyone shed some light and help me understand why there is a difference in the way the handlers are formatted, and how (if possible) to create the request to the Customer Profile API?
I've already completed the steps for the necessary permissions/account linking.
Thanks in advance.
EDIT:
I've learned that the difference in syntax is due to different versions of the SDK: Skillinator uses 'alexa-sdk' (v1) and the various tutorials use 'ask-sdk' (v2).
I'm still curious whether using modules like 'axios' or 'request' is possible via the in-line code editor in the AWS console, and whether it's even possible to access the Customer Profile API using SDK v1.
I've decided to answer the question with what I've learned in hopes that others won't waste as much time as I have trying to understand it.
Basically, it is possible to use the above-mentioned modules with SDK v1, but not through the AWS console's in-line code editor alone: you must create a .zip file of your code and any necessary modules and upload that .zip to Lambda.
I've edited my original answer to include my findings on the difference in syntax in the intent handlers.
From what I can tell (and please correct me if I'm wrong), it is not possible to access the Customer Profile API using SDK v1.
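For anyone on v2 landing here: below is a hedged sketch of reading the phone number with 'ask-sdk-core' (v2) plus 'axios', uploaded to Lambda as a .zip with node_modules. The intent name PhoneNumberIntent is made up for illustration; the endpoint path and authorization header follow Amazon's documented Customer Profile API:
const Alexa = require('ask-sdk-core');
const axios = require('axios');

const PhoneNumberIntentHandler = {
  canHandle(handlerInput) {
    return handlerInput.requestEnvelope.request.type === 'IntentRequest'
      && handlerInput.requestEnvelope.request.intent.name === 'PhoneNumberIntent';
  },
  async handle(handlerInput) {
    const { apiEndpoint, apiAccessToken } = handlerInput.requestEnvelope.context.System;
    try {
      // Documented Customer Profile API path for the mobile number
      const { data } = await axios.get(
        `${apiEndpoint}/v2/accounts/~current/settings/Profile.mobileNumber`,
        { headers: { Authorization: `Bearer ${apiAccessToken}` } }
      );
      return handlerInput.responseBuilder
        .speak(`Your number is ${data.phoneNumber}`)
        .getResponse();
    } catch (err) {
      // A 403 typically means the user has not granted the permission yet
      return handlerInput.responseBuilder
        .speak('Please grant the phone number permission in the Alexa app.')
        .getResponse();
    }
  },
};

exports.handler = Alexa.SkillBuilders.custom()
  .addRequestHandlers(PhoneNumberIntentHandler)
  .lambda();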
Is it possible to structure a conversation so that the bot initiates it, using Dialogflow in a Web Demo integration?
The objective is to say something like “Hi, I’m a bot, I can do x” to establish that it’s a chatbot rather than a human.
Can anyone suggest an approach for this?
You can set a welcome intent, then send a /query request containing an event parameter. Set the event parameter to WELCOME, and your chatbot will respond with whatever conversation opener you set.
More info here: https://dialogflow.com/docs/events
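For example, here is a hedged sketch in Node.js using axios against the (legacy) v1 /query endpoint; the client access token and session id are placeholders:
const axios = require('axios');

// Send the WELCOME event instead of user text so the bot speaks first
axios.post('https://api.dialogflow.com/v1/query?v=20150910', {
  event: { name: 'WELCOME' },
  lang: 'en',
  sessionId: 'a-unique-session-id'
}, {
  headers: { Authorization: 'Bearer YOUR_CLIENT_ACCESS_TOKEN' }
}).then((res) => console.log(res.data.result.fulfillment.speech));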
If you are using something other than the API for interacting with your Dialogflow agent (Slack, Facebook Messenger, etc.) you will need to add an appropriate event under "intents" in your console (such as the "Facebook Welcome" event).
For interacting with your Dialogflow agent via the API, see below.
In the API interaction quickstart documentation, Dialogflow gives you the SessionsClient's detectIntent method for sending messages to your bot.
Each language has a different solution. But on an abstract level, you want to change the request object that you send to Dialogflow to include a "Welcome" event (no input message required), as Omegastick described.
For example, in Node.js, your request object would look like this:
// The text query request.
const request = {
  session: sessionPath,
  queryInput: {
    event: {
      name: "Welcome",
      languageCode: languageCode
    }
  },
};
This assumes you have an appropriate intent set up in your Dialogflow console to handle Welcome events. One is provided by default, which you can inspect.
You can also add contexts, so that your agent gives a different greeting message based on some condition.
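To make that snippet self-contained, here is a hedged sketch of wiring it into detectIntent with the official Node.js client; the package name, project id, and session id are assumptions based on the v2 quickstart:
const dialogflow = require('dialogflow');

const sessionClient = new dialogflow.SessionsClient();
const sessionPath = sessionClient.sessionPath('my-project-id', 'my-session-id');
const languageCode = 'en-US';

const request = {
  session: sessionPath,
  queryInput: {
    event: { name: 'Welcome', languageCode: languageCode }
  }
};

// detectIntent resolves with an array whose first element is the response
sessionClient.detectIntent(request)
  .then(([response]) => console.log(response.queryResult.fulfillmentText));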
Following this guide:
https://actions-on-google.github.io/actions-on-google-nodejs/
I created an action for Dialogflow:
import { dialogflow, Image, Conversation, BasicCard } from 'actions-on-google';
const app = dialogflow();
app.intent('test', (conv, input) => {
  conv.contexts.set('i_see_context_in_web_demo', 1);
  conv.ask(`i see this only into actions on google simulator`);
  conv.ask(new Image({
    url: 'https://developers.google.com/web/fundamentals/accessibility/semantics-builtin/imgs/160204193356-01-cat-500.jpg',
    alt: 'cat',
  }));
});
I then activated the Web Demo integration.
I saw that the Web Demo integration does not show the cards or images. My hypothesis is that it shows only plain text, no rich responses.
I understand that it only handles JSON like this:
{
"fulfillmentText": "Welcome!",
"outputContexts": []
}
But I did not find any method in the library to populate fulfillmentText.
Can you help me?
You're using the actions-on-google library, which is specifically designed to send messages that will be used by the Google Assistant. The Web Demo uses the generic messages that are available for Dialogflow. The actions-on-google library does not send these generic messages.
If you want to be able to create messages that are usable by both, you'll need to look into the dialogflow fulfillment library, which can create messages that are usable by the Google Assistant as well as other platforms. Be aware, however, that not all rich messages are available on all platforms, but that the basic text responses should be.
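As an illustration, here is a hedged sketch of the same 'test' intent rewritten with dialogflow-fulfillment; whether a given platform actually renders the card still varies, but the plain text response should appear everywhere, including the Web Demo:
const functions = require('firebase-functions');
const { WebhookClient, Card } = require('dialogflow-fulfillment');

exports.dialogflowFirebaseFulfillment = functions.https.onRequest((request, response) => {
  const agent = new WebhookClient({ request, response });

  function test(agent) {
    // Plain text becomes fulfillmentText and shows up in the Web Demo
    agent.add('You should see this text on every platform');
    // Rich responses render only where the platform supports them
    agent.add(new Card({
      title: 'cat',
      imageUrl: 'https://developers.google.com/web/fundamentals/accessibility/semantics-builtin/imgs/160204193356-01-cat-500.jpg'
    }));
  }

  const intentMap = new Map();
  intentMap.set('test', test);
  agent.handleRequest(intentMap);
});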
You also don't need to use a library - you can create the JSON response yourself.
I'm trying to make an app using Dialogflow which finds a specific object in a specific place.
This is a generic example.
The user would say something like "Where do I find Dog in Europe" and the app would reply with "Dog can be found in Europe via: breeding, finding it out in the wild or by buying it",
considering "Dog" as input1 and "Europe" as input2.
Ideally the app should be able to cross-reference input1 and input2 to find the correct response. Can I implement a database-like structure to do this?
You can't access a database from Dialogflow directly, but you can build your own fulfillment backend that can do anything you want. It communicates with Dialogflow via HTTP requests/responses in the Dialogflow webhook format.
Here is an example fulfillment that reads data from a Firebase database: https://github.com/actions-on-google/dialogflow-updates-nodejs
You can't access a database directly in Dialogflow, but you can build your own fulfillment back end. I have been using Airtable as a database, with Integromat and webhooks to query the database and parse the results back to Dialogflow. As a novice coder, I found this to be the simplest way.
KaySubb is right: you can make a fulfillment that reads data from a Firebase database (or Firestore).
You can do this by turning on fulfillment at the bottom of the intent page.
First, go to https://console.firebase.google.com/ (log in with your Google account) and you should be able to see your Google Cloud Platform project.
To use Firebase, you need to install its CLI first. Get Node.js, as you need npm. Whichever OS you're on, open a command line or terminal and type:
npm install -g firebase-tools
then type:
firebase login
this will authenticate your login and connect your project when you deploy.
Then go to the directory in which you want to create your project and run:
firebase init functions
Select your project and choose JavaScript, and install all dependencies.
Now go to the functions folder and open the index.js file. Here you can write whatever code you need in JavaScript.
Write your functions and type:
firebase deploy
in the command line, opened in the project directory. When it completes, it will give you a link. Use this as the webhook URL in Dialogflow (it should start with https://us-central1). If you see only one link, which says console.firebase.google.com..., then open that link in a browser, click on "Functions" on the left side of the screen, and get the function's URL from there.
This should get you started with Firebase; now you can link your project to Firebase fulfillment. There is a great Firestore explanation here:
https://www.youtube.com/watch?v=kdk6MhhI8oc
But I'll give you a brief explanation:
At the top of your index.js file you will need:
const functions = require('firebase-functions');
const admin = require('firebase-admin');
// Initialize the Admin SDK with the Cloud Functions default credentials
admin.initializeApp(functions.config().firebase);
const firestore = admin.firestore();
The basic code is here:
exports.webhook = functions.https.onRequest((request, response) => {
  // request.body.result is the Dialogflow v1 webhook request format
  switch (request.body.result.action) {
    case 'saveData':
      let params = request.body.result.parameters;
      // .add() on a collection creates a document with an auto-generated id;
      // use .doc('docName').set({...}) to write to a specific document instead
      firestore.collection('colName').add({
        name: params.name,
        age: params.age
      }).then(() => {
        response.send({
          speech: `this is a response for "${params.name}".`
        });
      })
      .catch((e) => {
        console.log('Error getting documents', e);
        response.send({
          speech: `Sorry, something has gone wrong. Try again and if the problem persists, please report it.`
        });
      });
      break;
    default:
  }
});
I'll explain what it does:
You need the switch to decide which intent's action to handle. request.body.result.action returns the action name (you set this in Dialogflow just above the parameters).
Once that is decided, request.body.result.parameters gives you the parameters from the intent, and params.______ gives you each individual parameter.
I would definitely recommend reading the official documentation:
https://firebase.google.com/docs/firestore/quickstart
to help understand the data structure and design the ideal database for you. Essentially, a collection is a list, and within it each doc is one entry. You can name them yourself or use the entries from the parameters.
response.send is what the bot will reply to the user; I've also shown how to use the parameters in the response.
.catch will just store any errors in the log. You can read the log at console.firebase.google.com...: open your project, click on "Functions", and there will be a place to read the logs. You can check any errors encountered there.
default: will output whatever default response you wrote in Dialogflow at the bottom of the intent.
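Since the original question was about looking data up rather than saving it, here is a hedged sketch of an extra case for the same switch; the collection layout (documents keyed by animal name, one field per place) is made up for illustration:
case 'findAnimal':
  let p = request.body.result.parameters;
  // Look up the document for the animal, then read the field for the place
  firestore.collection('animals').doc(p.animal).get()
    .then((doc) => {
      const info = doc.exists ? doc.data()[p.place] : null;
      response.send({
        speech: info
          ? `${p.animal} can be found in ${p.place} via: ${info}.`
          : `Sorry, I don't know where to find ${p.animal}.`
      });
    });
  break;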
Hope this helps; comment with any questions. I have gone through a huge amount as concisely as I could. This will take some time to get used to and become good at, so follow the docs and the YouTube videos if you have a lot of trouble!
If you're having even more trouble, there is a Slack community that helps people that I can direct you to.
I am building an Alexa skill in node using the alexa-sdk. I am using a dialog model to handle the user interaction. I am having some trouble passing the flow along to new request types, such as from the launch request to an intent request.
Below is an example of my handlers and what I ideally want. My specific use case is that I would like to ask the user some questions and then send them to different intents based on their answers. In those intents I would like to have access to the request object, as if they had entered that intent originally, so the dialog model can do its work.
const handlers = {
  'LaunchRequest': function () {
    this.emit('Entry'); // this does not do what I want
  },
  'Entry': function () {
    let request = this.event.request; // this is the launch request object.
    // I would like to get the request object for Entry, as if the user started here,
    // ask some questions, potentially passing the torch to a new intent based on the answers
  }
};
So, is there any way to "call" an intent as if the user had originally made a request to that intent? Sorry if I missed something obvious in the documentation; I searched around pretty thoroughly, I think, but there is A LOT of documentation. PS: I could manually construct the request object, of course, but I feel I really shouldn't have to.
I am pretty sure there is no way yet to call an intent as you are asking.
If you go through the syntax description of dialog directives here, it says:
Note that you cannot change intents when returning a Dialog directive, so the intent name and set of slots must match the intent sent to your skill.
By returning a dialog directive you are able to 'elicit' or 'confirm' slots or intents, or even let a delegate handle your dialog for you, with prompts and reprompts set in the Skill Builder.
As far as I know, the only solution to trigger a specific intent is to have the user invoke it. You can guide the user into saying a specific utterance to trigger your intent.
As for saving older requests, you can use session attributes. Just build a response after your LaunchRequest with a session attribute containing the whole request:
"sessionAttributes": {
"oldRequest": this.event.request
}
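For illustration, here is a hedged sketch of that idea with alexa-sdk v1, where this.attributes is serialized into sessionAttributes automatically; the intent name EntryIntent is made up:
const handlers = {
  'LaunchRequest': function () {
    // Saved into sessionAttributes on the next response
    this.attributes.oldRequest = this.event.request;
    this.emit(':ask', 'What would you like to do?', 'Please tell me what to do.');
  },
  'EntryIntent': function () {
    // The original LaunchRequest, restored from the session
    const oldRequest = this.attributes.oldRequest;
    this.emit(':tell', 'I still remember your launch request type: ' + oldRequest.type);
  }
};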