What is the best way to implement a Dialogflow v2 webhook in Java? - dialogflow-es

I would like to know the best way to implement a Dialogflow v2 webhook in Java, because there are several libraries available today.
I have identified the following libraries:
Dialogflow API Client Library for Java :
https://developers.google.com/api-client-library/java/apis/dialogflow/v2
Google Cloud Java Client for Dialogflow :
https://github.com/googleapis/google-cloud-java/tree/master/google-cloud-clients/google-cloud-dialogflow
Actions on Google Java/Kotlin Client Library :
https://github.com/actions-on-google/actions-on-google-java
I already had a good experience with the Dialogflow API Client Library for Java, which facilitates the creation of rich messages. But is it really the best choice?
What is the best solution in terms of:
features (rich messages, ...)
practicality
durability
performance
Edit:
After some tests, the Actions on Google Java/Kotlin Client Library seems to be the best choice in terms of features and is very practical for Google Actions.
Using the ResponseBuilder, you can achieve the same things as with the Dialogflow API Client Library for Java:
// Inside an intent handler of a class extending DialogflowApp (actions-on-google-java).
ResponseBuilder responseBuilder = getResponseBuilder(request);

// Build a raw Dialogflow webhook response carrying quick replies.
WebhookResponse webhookResponse = new WebhookResponse();
List<IntentMessage> fulfillmentMessages = Lists.newArrayList();

IntentMessage message = new IntentMessage();
IntentMessageQuickReplies quickReplies = new IntentMessageQuickReplies();
quickReplies.setQuickReplies(Lists.newArrayList("a", "b", "c"));
message.setQuickReplies(quickReplies);
fulfillmentMessages.add(message);

webhookResponse.setFulfillmentMessages(fulfillmentMessages);

// Attach the raw webhook response to the builder. The property is internal in the
// Kotlin sources, hence the mangled setter name when called from Java.
responseBuilder.setWebhookResponse$actions_on_google(webhookResponse);

Related

Is accessing the Customer Profile API possible using the Cloud9 code editor in the AWS Lambda web console? If so, how?

First off, I'm new to Alexa skill development, so I have much to learn. I've been banging my head off the desk trying to figure this out. I've found various tutorials and have gone over the information provided by Amazon for accessing the Customer Profile API via an Alexa skill but still can't manage to obtain the customer's phone number.
I'm using the AWS console in-line code editor (Cloud9). Most, if not all, instructions use modules like 'axios', 'request', or 'https', which I don't think is possible unless you use the ask-cli (please correct me if I'm wrong). Also, I followed a tutorial to initially create the skill, which had me use Skillinator.io to create an AWS Lambda template based on the skill's JSON in the Amazon Developer console. The format of the code in the Customer Profile API tutorials does not match what was provided by the Skillinator.io tool. The way the intent handlers are set up is different, which is where I believe my confusion is coming from. Here's an example:
Skillinator.io code:
const handlers = {
    'LaunchRequest': function () {
        const welcomeOutput = 'Welcome to the Alexa Skills Kit!';
        const welcomeReprompt = 'You can say, Hello!';
        this.emit(':ask', welcomeOutput, welcomeReprompt);
    },
};
Tutorial code:
const LaunchRequestHandler = {
    canHandle(handlerInput) {
        return handlerInput.requestEnvelope.request.type === 'LaunchRequest';
    },
    handle(handlerInput) {
        const speechText = 'Welcome to the Alexa Skills Kit!';
        return handlerInput.responseBuilder
            .speak(speechText)
            .reprompt(speechText)
            .withSimpleCard('Hello World', speechText)
            .getResponse();
    }
};
Can anyone shed some light and help me understand why there is a difference in the way the handlers are formatted, and how (if possible) to create the request to the Customer Profile API?
I've already completed the steps for the necessary permissions/account linking.
Thanks in advance.
EDIT:
I've learned that the difference in syntax is due to the different versions of the SDK: Skillinator generates 'alexa-sdk' (v1) code, while the various tutorials use 'ask-sdk' (v2).
I'm still curious whether using modules like 'axios' or 'request' is possible via the in-line code editor in the AWS console, and whether it's even possible to access the Customer Profile API using SDK v1.
I've decided to answer the question with what I've learned, in the hope that others won't waste as much time as I have trying to understand it.
Basically, it is possible to use the above-mentioned modules with SDK v1 and the AWS console's in-line code editor, but you must create a .zip file of your code and any necessary modules and upload that .zip to Lambda.
I've edited my original answer to include my findings on the difference in syntax in the intent handlers.
From what I can tell (and please correct me if I'm wrong), it is not possible to access the Customer Profile API using SDK v1.
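For anyone on SDK v2, here is a minimal sketch (the intent name and handler wiring are assumptions, not from my skill) of calling the Customer Profile API with Node's built-in https module, using the apiAccessToken and apiEndpoint that Alexa puts in the request envelope. No extra modules need to be bundled for this:

// Hedged sketch: fetch the customer's mobile number inside an ask-sdk v2 handler.
// The endpoint path and token locations follow the Customer Profile API docs;
// "PhoneNumberIntent" is a hypothetical intent name.
const https = require('https');

function getMobileNumber(apiEndpoint, apiAccessToken) {
    const options = {
        hostname: apiEndpoint.replace('https://', ''),
        path: '/v2/accounts/~current/settings/Profile.mobileNumber',
        headers: { Authorization: `Bearer ${apiAccessToken}` },
    };
    return new Promise((resolve, reject) => {
        https.get(options, (res) => {
            let body = '';
            res.on('data', (chunk) => (body += chunk));
            res.on('end', () => resolve(JSON.parse(body)));
        }).on('error', reject);
    });
}

const PhoneNumberIntentHandler = {
    canHandle(handlerInput) {
        return handlerInput.requestEnvelope.request.type === 'IntentRequest'
            && handlerInput.requestEnvelope.request.intent.name === 'PhoneNumberIntent';
    },
    async handle(handlerInput) {
        const { apiEndpoint, apiAccessToken } = handlerInput.requestEnvelope.context.System;
        const profile = await getMobileNumber(apiEndpoint, apiAccessToken);
        return handlerInput.responseBuilder
            .speak(`Your phone number ends in ${profile.phoneNumber.slice(-4)}.`)
            .getResponse();
    }
};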

Dialogflow Telephony integration is interpreting SSML response from webhook as normal text

I am using the dialogflow-fulfillment Node.js library to send responses (e.g. agent.add("<speak>hello</speak>")) back to the Dialogflow agent. It works fine with the Dialogflow agent and the Google simulator. However, when I use the same response with the Telephony integration, it does not recognize it as SSML and reads it aloud as "greater than speak less than ... hello less than slash ... greater than". Also, I checked the SDK's supported platforms, and it looks like version 0.6.1 does not support the Telephony platform yet.
You are correct that the client API doesn't include methods for the telephony gateway, so you'll need to craft the JSON response yourself. This is an example of what you can put for "fulfillmentMessages":
fulfillmentMessages: [
  {
    platform: 'TELEPHONY',
    telephonySynthesizeSpeech: {
      ssml: `<speak>YOUR MESSAGE GOES HERE</speak>`
    }
  }
]
Here's the link to the relevant API v2 beta 1 documentation (scroll down to TelephonySynthesizeSpeech): https://cloud.google.com/dialogflow-enterprise/docs/reference/rpc/google.cloud.dialogflow.v2beta1#telephonysynthesizespeech
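For context, here is a minimal sketch of a hand-rolled webhook that returns such a payload so the Telephony gateway synthesizes the SSML instead of reading it literally. The use of Express, the route name, and the message text are assumptions for illustration, not part of the original answer:

// Hedged sketch: an Express webhook returning the raw fulfillmentMessages payload
// shown above, targeted at the TELEPHONY platform.
const express = require('express');
const app = express();
app.use(express.json());

app.post('/webhook', (req, res) => {
    res.json({
        fulfillmentMessages: [
            {
                platform: 'TELEPHONY',
                telephonySynthesizeSpeech: {
                    ssml: '<speak>Hello from the telephony gateway.</speak>'
                }
            }
        ]
    });
});

app.listen(3000);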

Dialogflow V2: How to set fulfillmentText via the Node.js library?

Following this guide:
https://actions-on-google.github.io/actions-on-google-nodejs/
I created an action for DialogFlow
import { dialogflow, Image, Conversation, BasicCard } from 'actions-on-google';
const app = dialogflow();
app.intent('test', (conv, input) => {
    conv.contexts.set('i_see_context_in_web_demo', 1);
    conv.ask(`i see this only into actions on google simulator`);
    conv.ask(new Image({
        url: 'https://developers.google.com/web/fundamentals/accessibility/semantics-builtin/imgs/160204193356-01-cat-500.jpg',
        alt: 'cat',
    }));
});
I then activated the Web Demo integration.
I saw that the Web Demo integration does not show cards or images. My hypothesis is that it only shows plain text, no rich responses.
I understand that it only handles JSON like this:
{
  "fulfillmentText": "Welcome!",
  "outputContexts": []
}
But I did not find any method in the library to set fulfillmentText.
Can you help me?
You're using the actions-on-google library, which is specifically designed to send messages that will be used by the Google Assistant. The Web Demo uses the generic messages that are available for Dialogflow. The actions-on-google library does not send these generic messages.
If you want to be able to create messages that are usable by both, you'll need to look into the dialogflow fulfillment library, which can create messages that are usable by the Google Assistant as well as other platforms. Be aware, however, that not all rich messages are available on all platforms, but that the basic text responses should be.
You also don't need to use a library - you can create the JSON response yourself.
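As an illustration of that last point, here is a minimal hand-rolled webhook sketch that sets fulfillmentText directly, which is what the Web Demo reads. Express, the route name, and the reply text are assumptions, not from the answer:

// Hedged sketch: building the generic Dialogflow v2 webhook response by hand,
// so fulfillmentText is populated for integrations like the Web Demo.
const express = require('express');
const app = express();
app.use(express.json());

app.post('/fulfillment', (req, res) => {
    // queryResult.intent.displayName is part of the standard v2 webhook request.
    const intentName = req.body.queryResult.intent.displayName;
    res.json({
        fulfillmentText: `You triggered the "${intentName}" intent.`,
        outputContexts: []
    });
});

app.listen(8080);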

Create bot that supports two LUIS apps in Microsoft Bot Framework

I need to make a bilingual bot using Node.js and Microsoft Bot Framework. The bot uses LUIS for natural language.
I use the standard way to plug in LUIS:
// Create bot, send welcome message:
let bot = new builder.UniversalBot(connector, NoneIntentHandler);
// Plug in LUIS:
bot.recognizer(new builder.LuisRecognizer(config.luis.url));
However, I need to support two languages, English and Chinese. It's not a problem for me to detect a language. I have two separate LUIS apps, one for English and one for Chinese, and they return the same intents and entities.
But the problem is how to dynamically switch between the two apps, depending on the language of the user's input. bot.recognizer doesn't accept two URLs or any other parameters, so it seems there is no built-in support for that.
Is there some way to dynamically kill and recreate the bot object with another recognizer? Or reassign the recognizer depending on the LUIS language? Or any other way to do it?
You can try the following:
var recognizer1 = new builder.LuisRecognizer('<model 1>');
var recognizer2 = new builder.LuisRecognizer('<model 2>');
var intents = new builder.IntentDialog({ recognizers: [recognizer1, recognizer2] });
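For completeness, a sketch of how this could be wired up in botbuilder v3. The intent name, config keys, and reply texts are assumptions; bot is the UniversalBot from the question:

// Hedged sketch: routing every utterance through both LUIS apps via one IntentDialog.
// With the default (parallel) recognize order, the highest-scoring intent wins,
// so the English and Chinese models can coexist without recreating the bot.
const builder = require('botbuilder');

const recognizerEn = new builder.LuisRecognizer(config.luis.urlEnglish); // assumed config keys
const recognizerZh = new builder.LuisRecognizer(config.luis.urlChinese);

const intents = new builder.IntentDialog({ recognizers: [recognizerEn, recognizerZh] });

intents.matches('Greeting', (session) => {
    session.send('Hello! 你好!');
});

intents.onDefault((session) => {
    session.send("Sorry, I didn't get that.");
});

bot.dialog('/', intents);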

Chrome text-to-speech API doesn't work

I'm trying the Chrome text-to-speech API, but even the demo provided by Google
https://developer.chrome.com/trunk/extensions/examples/extensions/ttsdemo/ttsdemo.html
doesn't work for me; I can't hear any sound. Do you?
I don't think it is a problem with my browser, because translate.google.com (which I guess is based on the same technology) works for me when I try the listening mode.
Any idea?
Thanks
As of Chrome 33, Chrome's speech synthesis API is available in JavaScript.
Quick example:
window.speechSynthesis.speak(
    new SpeechSynthesisUtterance('Oh why hello there.')
);
Details:
HTML5 Rocks: Introduction to the Speech Synthesis API
Hi, Eugenio.
This API is only available to extensions. You can port your logic to run inside an extension (people would have to install it, of course), create an extension that exposes the functions to the "outside world" (people would still need to install the extension for your app to work correctly), or simply use a client-side synthesizer (speak.js, for example).
You can also use the Web Audio API (or even <audio> tags) with calls to the Google Translate TTS endpoint, but that's not a public API and it has no guarantees. It can simply stop working because of some limitation on Google's side, and they can change the API or endpoints at any time. If it's only for testing, that'll probably do, but if it's a bigger project (or a commercial one), I strongly advise against that option.
Good luck.
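As an illustration of the extension route mentioned above (the file name is an assumption), chrome.tts can be called from an extension's background script once the "tts" permission is declared in manifest.json:

// background.js of a Chrome extension whose manifest.json declares the "tts" permission.
// chrome.tts is only available to extensions, not to regular web pages.
chrome.tts.speak('Hello from a Chrome extension!', {
    rate: 1.0,
    onEvent: (event) => {
        if (event.type === 'error') {
            console.error('TTS error:', event.errorMessage);
        }
    }
});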
Today (October 2015), about 55% of devices support the Speech Synthesis API: http://caniuse.com/#feat=speech-synthesis
Here's the example:
// Create the utterance object
var utterance = new SpeechSynthesisUtterance();
utterance.text = 'Hello, World!';
// optional parameters
utterance.lang = 'en-GB'; // language, default is 'en-US'
utterance.volume = 0.5; // volume, from 0 to 1, default is 1
utterance.rate = 0.8; // speaking rate, default is 1
// speak it!
window.speechSynthesis.speak(utterance);
Just to add some links, because I also got lost looking for the right information.
You can use Chrome's so-called Speech Synthesis API; see this demo: https://www.audero.it/demo/speech-synthesis-api-demo.html
Further info:
Talking Web Pages and the Speech Synthesis API
Getting Started with the Speech Synthesis API
Using HTML5 Speech Recognition and Text to Speech
Hope that helps, and hope the links will survive the future.
