I'm trying to configure my VUI to repeat a sentence in Dialogflow when it is prompted to do so. A little backstory: I am helping develop a social robot for the elderly, so repeating a sentence is a much-needed feature. I just started on this project, the previous developer responsible for this is gone and not reachable, and I have no experience in Node.js.
I am looking to use the multivocal library for this. These are the steps I have done so far:
Created an intent called 'Repeat'.
Added training phrases for the intent
Added 'multivocal.repeat' as an action
Enabled webhook call for this intent
In Node.js, added 'intentMap.set('Repeat', repeat);' to my existing intentMap:
// Run the proper function handler based on the matched Dialogflow intent name
let intentMap = new Map();
intentMap.set('Game_Rock_Paper_Scissors - Result', game_stone_paper_scissors);
intentMap.set('Game_Rock_Paper_Scissors - Result - yes', game_stone_paper_scissors_again);
intentMap.set('Conversation tree', conversation_tree);
intentMap.set('Question date', question_date);
intentMap.set('Question time', question_time);
intentMap.set('Music', music);
intentMap.set('Repeat', repeat);
Next, under the 'Fulfillment' tab, I want to enter the function in the Inline Editor's 'index.js' tab and make sure in 'package.json' that the multivocal library is installed and usable. So far, this is what is in my package.json:
"name": "dialogflowFirebaseFulfillment",
"description": "This is the default fulfillment for a Dialogflow agents using Cloud Functions for Firebase",
"version": "0.0.1",
"private": true,
"license": "Apache Version 2.0",
"author": "Google Inc.",
"engines": {
"node": "8"
},
"scripts": {
"start": "firebase serve --only functions:dialogflowFirebaseFulfillment",
"deploy": "firebase deploy --only functions:dialogflowFirebaseFulfillment"
},
"dependencies": {
"actions-on-google": "^2.2.0",
"firebase-admin": "^5.13.1",
"firebase-functions": "^2.0.2",
"dialogflow": "^0.6.0",
"dialogflow-fulfillment": "^0.5.0"
}
}
So, how do I install the library? The README says 'You can install it using npm install --save multivocal.' But where would I run this? Do I put this somewhere in index.js or package.json?
Also, in this example it shows the index.js code to be:
const Color = require('./color.js');
Color.init();
const Multivocal = require('multivocal');
exports.webhook = Multivocal.processFirebaseWebhook;
Should I just add this to the end of my index.js? Or should it be wrapped in a function? Sorry for being so inexperienced, but I hope I explained it clearly.
Some answers about using multivocal with the Dialogflow Inline Editor:
How do I include this in the package.json file?
The directions for using npm are if you're writing the fulfillment locally and not using the Editor.
You need to add this line in the dependencies section:
"multivocal": "^0.14.0"
and Dialogflow / Firebase Cloud Functions will take care of importing the library. You won't need the "actions-on-google", "dialogflow", or "dialogflow-fulfillment" libraries, so the section can look something like this:
"dependencies": {
"firebase-admin": "^5.13.1",
"firebase-functions": "^2.0.2",
"multivocal": "^0.14.0"
}
How do I write my index.js?
The simple example assumes that you can put your configuration and code in a separate file ("color.js" in the example). Since you can't do that with the Inline Editor, the general boilerplate of your code will be something like this:
// Import the library on the first line, so you can call methods on it to setup your configuration
const Multivocal = require('multivocal');
// Your configuration and code go here
// Setup the webhook as the final line
exports.dialogflowFirebaseFulfillment = Multivocal.processFirebaseWebhook;
Where do the Intent Handler registration and functions go?
Unlike the dialogflow-fulfillment library, multivocal doesn't use an explicit intentMap. It maintains one itself, so to register an Intent Handler function you would use something like
Multivocal.addIntentHandler( intentName, handlerFunction );
BUT keep in mind that handler functions are also a little different.
What? How are they different?
Multivocal has a lot of things handled through configuration, rather than code. So there is no direct counterpart to the agent.add() function call that you'd have with dialogflow-fulfillment or the actions-on-google library.
Instead, your Intent Handler function should perform any logic, database calls, or whatever to get values that will be used in responses and save them in the Environment. Every Handler should return a Promise that contains the Environment.
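For illustration, a handler following that pattern might look something like this sketch - the loadFavoriteColor() call and the "color" value are hypothetical placeholders for your own logic, not part of the multivocal API:
// Hedged sketch of an Intent Handler: compute a value, store it in the
// Environment, and return a Promise that resolves to the Environment.
function handleColorFavorite( env ){
  // loadFavoriteColor() stands in for your own database call or logic
  return loadFavoriteColor()
    .then( color => {
      env.color = color; // values in the Environment can be used by Response templates
      return env;
    });
}
Multivocal.addIntentHandler( 'color.favorite', handleColorFavorite );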
You should also set configuration for your Intent - the most common configuration is to set the possible "Response" templates. The simplest response templates just include text and indicate where to insert values from the Environment. It is also best practice to prompt the user for what they can do next, so we might set up a "Suffix" template to use by default, or one to use for specific Intents, Actions, or Outents.
So if you had an Intent named "color.favorite", and had a value in the environment called "color" (which your handler may have loaded from a database), the configuration for this response in English may look something like this. It also includes a default suffix to prompt the user for what they can do next.
const config = {
Local: {
en: {
Response: {
"Intent.color.favorite": [
"{{color}} is one of my favorite colors as well.",
"Oh yes, {{color}} can be quite striking.",
"I can certainly understand why you like {{color}}."
]
},
Suffix: {
Default: [
"What other color do you like?"
"Tell me another color."
]
}
}
}
}
and you would register this configuration with
new Multivocal.Config.Simple( config );
You can (and are expected to) register multiple configurations, although you can combine them in one object. So the Response section above could contain response sections for each of your Intents, by name.
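As a sketch (the Intent names and template text here are just illustrative), such a combined configuration could look like:
const combinedConfig = {
  Local: {
    en: {
      Response: {
        "Intent.color.favorite": [
          "{{color}} is one of my favorite colors as well."
        ],
        "Intent.color.dislike": [
          "I'm not that fond of {{color}} either."
        ]
      }
    }
  }
};
new Multivocal.Config.Simple( combinedConfig );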
Ok, but how do I handle a "repeat" Intent?
All you need to do is provide an Intent that has its "Action" set to "multivocal.repeat" in the Dialogflow UI and that has the webhook enabled. Multivocal has already registered a handler and configuration for this Action.
If you want to change the possible responses, you can add configuration for the "multivocal.repeat" Action, which may look something like this:
const enRepeat = [
"Sorry about that, let me try again.",
"I said:"
];
const config = {
Local: {
en: {
Response: {
"Action.multivocal.repeat": enRepeat
}
}
}
}
and then either combine this configuration with other configurations you've written, or load it as above.
To emphasize - there is no need for you to write any code for this, just some optional configuration.
How do I work with node modules?
tl;dr: How do I look at a node module I've installed and know where to go and what I'm looking for?
If I use npm i googleapis, for example, it downloads the node module for Google's APIs, but how do I browse the module and work out what's useful for me?
To try to eliminate any ambiguity from the question, I'll use this use case:
I'm developing a Discord bot and I want to add statistics to one of the commands. Here is the supplied code from Google:
<script src="https://apis.google.com/js/api.js"></script>
<script>
/**
* Sample JavaScript code for youtube.channels.list
* See instructions for running APIs Explorer code samples locally:
* https://developers.google.com/explorer-help/code-samples#javascript
*/
function authenticate() {
return gapi.auth2.getAuthInstance()
.signIn({scope: "https://www.googleapis.com/auth/youtube.readonly"})
.then(function() { console.log("Sign-in successful"); },
function(err) { console.error("Error signing in", err); });
}
function loadClient() {
gapi.client.setApiKey("YOUR_API_KEY");
return gapi.client.load("https://www.googleapis.com/discovery/v1/apis/youtube/v3/rest")
.then(function() { console.log("GAPI client loaded for API"); },
function(err) { console.error("Error loading GAPI client for API", err); });
}
// Make sure the client is loaded and sign-in is complete before calling this method.
function execute() {
return gapi.client.youtube.channels.list({
"part": [
"snippet,contentDetails,statistics"
],
"id": [
"UC_x5XG1OV2P6uZZ5FSM9Ttw"
]
})
.then(function(response) {
// Handle the results here (response.result has the parsed body).
console.log("Response", response);
},
function(err) { console.error("Execute error", err); });
}
gapi.load("client:auth2", function() {
gapi.auth2.init({client_id: "YOUR_CLIENT_ID"});
});
</script>
<button onclick="authenticate().then(loadClient)">authorize and load</button>
<button onclick="execute()">execute</button>
Now Google offers a supported library which means I can replace the external script tags and import from the node package, specifically the parts I need.
So I'll need to import or require whatever gives me access to things like:
gapi.auth2.getAuthInstance
gapi.client.setApiKey
gapi.client.youtube.channels.list
For someone who is new to Node.js, instead of copying and pasting from every piece of documentation and hoping it works, how do I comfortably look at a node package and find the things I need and can use?
Edit #1
I think my use of the Google APIs case threw off the direction of the question and changed the scope of what I asked, so I'm going to correct that as best I can.
The assumption should be made that there is no documentation for the package, whether because it's poorly written, doesn't exist at all, or is down for an extended period during time-sensitive development.
At that point, is there any possible way to look at the node_modules folder and the specific package that needs to be worked with, and work out what's going on? Is there any way to look at the structure of a package and recognise "well, most likely what I need is in this folder or file"?
That's what documentation is for.
When someone writes an API for a package, they should document the exported functions well enough to make things clear for the consumer.
The best way to get the documentation for node packages is to search for the package at www.npmjs.com.
For the Google APIs you can go to that page here to see the "get started" guide and some examples, and you can go here to see the full, detailed APIs of that package.
Answering Edit #1
Well, in that case it could be a difficult task, depending on how structured and organized the package is.
Since we are talking about Node.js, you should look in the package.json file and search for the path of the main file: "main": "<PATH HERE>".
Then you can go to that main file and try to locate what exactly is being exported. You can search for module.exports or the export keyword.
Everything that is explicitly exported is intended to be used as an API.
I'm not familiar with any way other than going deeper into the package's files and identifying what exactly is being exported.
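As a quick illustration of that approach (the package name 'some-package' is just a placeholder), you can inspect a package's entry point and its exported names from a Node REPL or a small script:
// Sketch: locate a package's main file and list what it exports.
// Note: packages that define the newer "exports" field may block requiring
// their package.json directly; most older packages allow it.
const pkg = require('some-package/package.json');
console.log(pkg.main);         // relative path of the main file, e.g. "lib/index.js"
const api = require('some-package');
console.log(Object.keys(api)); // top-level names the package exports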
I have set up Dialogflow CX and Dialogflow Messenger on my web site and I want to execute commands with Google Tag Manager (GTM).
Basically, what I want to do is: if a user scrolls more than 75% of a page vertically, GTM should trigger this example (taken from https://cloud.google.com/dialogflow/cx/docs/concept/integration/dialogflow-messenger#rendercustomcard):
const dfMessenger = document.querySelector('df-messenger');
const payload = [
{
"type": "info",
"title": "Info item title",
"subtitle": "Info item subtitle",
"image": {
"src": {
"rawUrl": "https://example.com/images/logo.png"
}
},
"actionLink": "https://example.com"
}];
dfMessenger.renderCustomCard(payload);
This code snippet works fine if I embed it in my web page, and also when GTM triggers and embeds the tag after a scroll. But when I try the other types of cards (the List type is what I would like to use in my case), I get the following in my browser's console: "DfMessenger: Could not render undefined".
Any clue whether this is due to me triggering things from GTM, or any ideas what I could test?
Posting this answer from @Per Olsson as a community wiki answer:
I figured out what was wrong by using const dfMessenger = document.querySelector('df-messenger'); dfMessenger.addEventListener('df-request-sent', function (event) { console.log(event) }); and comparing the logged objects, and I found a misspelled word. Everything works, but you have to be really careful with spelling. I still think the documentation is a bit poor, but that is not for this forum.
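For reference, a cleaned-up sketch of that debugging approach looks like this - it simply logs each request df-messenger sends so the payload can be compared against the documented card format and misspellings spotted:
const dfMessenger = document.querySelector('df-messenger');
dfMessenger.addEventListener('df-request-sent', function (event) {
  console.log(event); // inspect the logged object for typos in your card payload
});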
I have an application that runs a Jest test suite from the command line, then takes the JSON output, parses it and fills a table in a database based on the output file. The application runs the shell command:
npm run all
and in the package.json file the all script looks like this:
"scripts": {
"all": "../node_modules/.bin/jest --json --outputFile=testResults.json",`
......
}
So I get the testResults.json file and I am able to parse it - so far so good.
But during the test run I would like to add some extra data to the output - something like details: where the problem is, how to fix it, some troubleshooting information, etc. For example, to put one more field in:
require('testResults.json').testResults[x].assertionResults[y].details
You see, the details property is not part of the JSON output file format. But can I create it from within the test case (pseudo example)?
test('Industry code should match ind_full_code', async () => {
const result = await stageDb.query(QUERY);
// And here I want to add this custom information to some global property available?
reporter.thisTestCase.assertionResults.details = "Here is what you should do to fix this ...." // <- Ideally this is how easy I imagine it to be.
expect(result.results).toEqual([]);
}, 2 * 100 * 1000)
I just want to give a little bit more information to the QA or whomever on test failure.
In other words I need the option to change the output from within the test case.
I've been looking into custom reporters, but their listeners are passed the same information as the JSON reporter.
I've found a need for a similar feature in Jest. The ability to add documentation to a test is rarely supported by test frameworks.
However, I found a way to do this with the soon-to-be-default runner, Jest Circus. I then made my own Jest Circus environment. A custom Jest Circus environment provides more test events/lifecycles and access to the actual test code that is being run.
// Example of a custom Jest Circus environment
import NodeEnvironment from 'jest-environment-node';
import { Circus } from '@jest/types';

export default class MyCustomNodeEnvironment extends NodeEnvironment {
  handleTestEvent(event: Circus.Event, state: Circus.State) {
    if (event.name === 'test_fn_start') {
      console.log(event.test.toString());
      // will log the actual test code.
    }
  }
}
// jest.config.js
module.exports = {
  testEnvironment: '<rootDir>/my-custom-environment.js',
  testRunner: 'jest-circus/runner'
};
I then used regex patterns to find comments in the test functions and add them to the Allure report (Allure report demo).
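A rough sketch of that idea, building on the handleTestEvent hook above (the // @details comment convention and the regex are assumptions for illustration, not part of Jest's API), might look like:
// Inside handleTestEvent: pull a specially formatted comment out of the test
// source so it can be forwarded to a reporter instead of just logged.
if (event.name === 'test_fn_start') {
  const source = event.test.toString();
  const match = source.match(/\/\/\s*@details\s*(.*)/);
  if (match) {
    console.log('details:', match[1]); // e.g. attach match[1] to your report here
  }
}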
If you'd like to create your own Jest environment and implement this yourself, I've made a template repo, or if you prefer, a gist of a basic Jest Circus environment.
If you like how Allure reports look, you should check out my open source project, jest-circus-allure-environment.
I have an intent named "intent.address" with the action name "address_action" and the training phrase "My address". When this intent is triggered, a response comes from my webhook saying "Alright! Your address is USER_ADDRESS".
I used app.ask() here.
What I want is that when this response comes from the webhook, another intent named "intent.conversation" (event name "actions_intent_CONFIRMATION", also with the webhook enabled) gets triggered, which asks the user to confirm whether to continue or not.
For example :
Alright your address is USER_ADDRESS
then next
Do you want to ask for the address/directions again?
Intents do not reflect what your webhook says; they reflect what the user says. Intents are what the user intends to do - what they want.
So no, you can't just trigger another Intent this way. There are a few ways to do what you're asking, however.
Using the Confirmation helper with the actions-on-google v1 node.js library
If you really want to use the Confirmation helper, you need to send back JSON, since the node.js v1 library doesn't support sending this helper directly. Your response will need to look something like:
{
"data": {
"google": {
"expectUserResponse": true,
"systemIntent": {
"intent": "actions.intent.CONFIRMATION",
"data": {
"#type": "type.googleapis.com/google.actions.v2.ConfirmationValueSpec",
"dialogSpec": {
"requestConfirmationText": "Please confirm your order."
}
}
}
}
}
}
If you're not already doing JSON in your responses, then you probably don't want to go this route.
Using the Confirmation helper with the actions-on-google v2 node.js library
If you've already switched to v2 of this library, then you have the ability to send a Confirmation with something like this:
const { dialogflow, Confirmation } = require('actions-on-google');
const app = dialogflow();

app.intent('ask_for_confirmation_detail', (conv) => {
  conv.ask("Here is some information.");
  conv.ask(new Confirmation('Can you confirm?'));
});
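To then read the user's answer, a sketch of a follow-up handler could look like this (the intent name 'confirmation_result' is a placeholder for an intent in your agent that has the actions_intent_CONFIRMATION event attached):
app.intent('confirmation_result', (conv, params, confirmationGranted) => {
  // The third argument is the boolean result of the Confirmation helper.
  if (confirmationGranted) {
    conv.ask('Great, here is your address again.');
  } else {
    conv.ask('Okay, is there anything else I can help with?');
  }
});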
Or you can just use the Dialogflow way of doing this
In this scenario - you don't use the Confirmation helper at all, since it is pretty messy.
Instead, you include your question as part of the response, and you add two Followup Intents - one to handle what to do if they say "yes", and one if they say "no".
I am writing a Node.js application using the tslint:recommended rule set and it warns me about the usage of console.log in my code:
src/index.ts[1, 1]: Calls to 'console.log' are not allowed.
What else should I be using? Should I use some sort of system.out equivalent?
Here's my tslint.json:
{
"extends": "tslint:recommended"
}
It is stated in the ESLint documentation for the no-console rule:
When Not To Use It
If you’re using Node.js, however, console is used to output information to the user and so is not strictly used for debugging purposes. If you are developing for Node.js then you most likely do not want this rule enabled.
So it is valid to deviate from the recommended ruleset here, and hence I adapted my tslint.json to match:
{
"extends": "tslint:recommended",
"rules": {
"no-console": false
}
}
Faster alternative to console.log:
process.stdout.write("text");
You will need to insert newlines yourself. Example:
process.stdout.write("new line\n");
Difference between "process.stdout.write" and "console.log" in node.js?