How can I use SSML in Dialogflow Fulfillment (Dutch language) - dialogflow-es

Is there a simple way to use SSML in the fulfillment section using Actions on Google functions? I tried all sorts of coding, but with no good results. I'm using Dutch as the default language.
In the example below, Google Assistant spells out each '<' character:
// Handle the Dialogflow intent named 'favorite color'.
// The intent collects a parameter named 'color'.
app.intent('favoriete kleur', (conv, {color}) => {
  const luckyNumber = color.length;
  const audioSound = 'https://www.example.com/MY_MP3_FILE.mp3'; // AoG currently only supports MP3!
  if (conv.user.storage.userName) {
    conv.ask(`<speak>${conv.user.storage.userName}, je geluksnummer is ${luckyNumber}<audio src="${audioSound}"><desc>Geluid wordt afgespeeld</desc></audio></speak>`); // Audio should have description
    conv.ask(new Suggestions('Paars', 'Geel', 'Oranje'));
  } else {
    conv.ask(`<speak>Je geluksnummer is ${luckyNumber}<audio src="${audioSound}"><desc>Geluid wordt afgespeeld</desc></audio></speak>`);
    conv.ask(new Suggestions('Paars', 'Geel', 'Oranje'));
  }
});
Please find the environment settings below.
index.js initialization:
'use strict';

// Import the Dialogflow module and response creation dependencies
// from the Actions on Google client library.
const {
  dialogflow,
  BasicCard,
  Permission,
  Suggestions,
  Carousel,
  MediaObject,
  SimpleResponse,
  Table,
  Button,
  // Image,
} = require('actions-on-google');

// Import the firebase-functions package for deployment.
const functions = require('firebase-functions');

// Instantiate the Dialogflow client.
const app = dialogflow({debug: true});
package.json:
{
  "name": "codelab-level-three",
  "description": "Actions on Google Codelab Level 3",
  "author": "Google Inc",
  "private": true,
  "scripts": {
    "lint": "eslint .",
    "serve": "firebase serve --only functions",
    "shell": "firebase experimental:functions:shell",
    "start": "npm run shell",
    "deploy": "firebase deploy --only functions",
    "logs": "firebase functions:log"
  },
  "dependencies": {
    "actions-on-google": "^2.0.0",
    "firebase-admin": "~5.8.1",
    "firebase-functions": "^0.8.1",
    "i18n": "^0.8.3"
  },
  "devDependencies": {
    "eslint": "^4.19.0",
    "eslint-config-google": "^0.9.1"
  }
}
The produced payload looks like this:
"status": 200,
"headers": {
"content-type": "application/json;charset=utf-8"
},
"body": {
"payload": {
"google": {
"expectUserResponse": true,
"richResponse": {
"items": [
{
"simpleResponse": {
"textToSpeech": "<speak>Paul, je geluksnummer is 5<audio src=\"https://www.example.com/MY_MP3_FILE.mp3\"><desc>Geluid wordt afgespeeld</desc></audio></speak>"
}
}
],
"suggestions": [
{
"title": "Paars"
},
{
"title": "Geel"
},
{
"title": "Oranje"
}
]
}
}
}
}
}

The most likely issue is that you're not including a closing </speak> tag. So you should probably write it as something like
conv.ask(`<speak>Je geluksnummer is ${luckyNumber}.` +
  `<audio src="${audioSound}"></audio></speak>`);

The answer to my problem was easy: just read the f...ing manual. :-) Although both .OGG and .MP3 are supported, the website must serve the files over HTTPS. Websites that are not secured (HTTP) are not supported. Below is an example function you can use to test:
app.intent('favoriete muziek', conv => {
  const Optie = conv.parameters.optie;
  // Taalspecifieke meldingen (language-specific messages)
  const SoundLib = {
    '1': {
      description: 'Simple sound using .ogg',
      audiosound: 'https://actions.google.com/sounds/v1/alarms/alarm_clock.ogg',
      audiotext: 'You should hear an audio alarm signal',
    },
    '2': {
      description: 'Music MP3 via HTTP',
      audiosound: 'http://storage.googleapis.com/automotive-media/Jazz_In_Paris.mp3',
      audiotext: 'You should hear a Jazz record called "Jazz in Paris" ',
    },
    '3': {
      description: 'Longer MP3 file via HTTP',
      audiosound: 'http://www.navyband.navy.mil/anthems/anthems/netherlands.mp3',
      audiotext: 'You should hear now the Dutch National Anthem',
    },
    '4': {
      description: 'short MP3 audio via HTTPS',
      audiosound: 'https://ia802508.us.archive.org/5/items/testmp3testfile/mpthreetest.mp3',
      audiotext: 'You should hear a short spoken intro text',
    },
  };
  const Sound = SoundLib[Optie];
  const spraakzin = "<speak>This text is using <say-as interpret-as='verbatim'>SSML</say-as> followed by an audio file in a SimpleResponse box: <audio src='" + Sound.audiosound + "'>The audio file could not be processed</audio></speak>";
  if (!conv.surface.capabilities.has("actions.capability.MEDIA_RESPONSE_AUDIO")) {
    conv.ask("Media response via audio is not supported on this device.");
    return;
  }
  conv.ask(new SimpleResponse({
    speech: spraakzin,
    text: Sound.audiotext,
  }));
});
More info can be found here: SSML examples - look at the prerequisites for AUDIO.

To add to what Prisoner already said, there are some other problems.
app.intent('favoriete kleur', (conv, {color}) => {
  const luckyNumber = color.length;
  const audioSound = 'https://actions.google.com/sounds/v1/cartoon/clang_and_wobble.mp3'; // AoG currently only supports MP3!
  if (conv.user.storage.userName) {
    conv.ask(`<speak>${conv.user.storage.userName}, je geluksnummer is <audio src="${audioSound}"><desc>${luckyNumber}</desc></audio></speak>`); // Audio should have description
    conv.ask(new Suggestions('Paars', 'Geel', 'Oranje'));
  } else {
    conv.ask(`<speak>Je geluksnummer is <audio src="${audioSound}"><desc>${luckyNumber}</desc></audio></speak>`);
    conv.ask(new Suggestions('Paars', 'Geel', 'Oranje'));
  }
});
AoG currently only supports MP3 as an audio format. See https://developers.google.com/actions/assistant/responses#media_responses. Sorry, I was wrong. This only goes for Media Responses, NOT for embedded audio in SSML.
I removed the concatenations in the code above. They are counterproductive and make things more difficult to read than necessary. (Opinionated)
Audio output, when it is not only a sound effect but carries text, should contain a description, which will also be printed on the screen. The example in the code supplied should be okay.
But yes, the cause of your original problem is that you're not closing the audio tags. Actions on Google is pretty unforgiving about unclosed tags.
Hope that helped.

Related

Webhooks Directus 9 - send an email when a user creates a record in a table

I created the "mission" collection. I want to send an email to a personalized recipient for each new recording on the mission table.
According to the Directus documentation, I saw that this is possible via webHooks.
However, I don't quite understand the logic, especially since the Directus administration interface already has a page to add webhooks and link them to the collection concerned.
Can you tell me where I should start to achieve my POC?
I also put some screenshots of the architecture of my app; you can tell me whether this is really how it should be or not. I have doubts.
{
  "name": "test1-directus",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1",
    "start": "directus start"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "dependencies": {
    "directus": "^9.0.0-rc.91",
    "mysql": "^2.18.1",
    "nodemailer": "^6.6.3"
  }
}
I created a project with the command: npx create-directus-project test1directus
My project is running on port 8055 with a reverse proxy setting on nginx.
Is everything OK or did I miss a step?
Thank you in advance for your help.
I found this example to put in extensions/hooks/sync-with-external/index.js.
After several modifications, this error persists:
An error was thrown while executing hook "items.create"
Cannot destructure property 'mission' of 'undefined' as it is undefined.
The console.log doesn't show me anything.
const axios = require("axios");

module.exports = function registerHook({ services, exceptions }) {
  const { MailService } = services;
  const { ServiceUnavailableException, ForbiddenException } = exceptions;
  return {
    // Force everything to be admin-only at all times
    "items.*": async function ({ item, accountability }) {
      if (accountability.admin !== true) throw new ForbiddenException();
    },
    // Sync with external recipes service, cancel creation on failure
    "items.create": async function (input, { mission, schema }) {
      console.log(items);
      if (mission !== "recipes") return input;
      const mailService = new MailService({ schema });
      try {
        await axios.post("https://example.com/items", input);
        await mailService.send({
          to: "pseudo.pseudo#gmail.com",
          template: {
            name: "item-created",
            data: {
              collection: mission,
            },
          },
        });
      } catch (error) {
        throw new ServiceUnavailableException(error);
      }
      input[0].syncedWithExample = true;
      return input;
    },
  };
};
You can now use Directus Flows from Settings > Flows.
Read the docs here: https://docs.directus.io/configuration/flows
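If you'd rather keep a code hook, below is a minimal sketch using the stable Directus 9 extension API (the filter/action registrars, which replaced the registerHook object style used in release candidates like rc.91, so an upgrade would be needed). Treat it as an illustration to verify against your Directus version: the mission.items.create event scope and the MailService options are assumptions based on the Directus docs, while the template name and addresses come from your code.
// extensions/hooks/sync-with-external/index.js
// Sketch only: assumes a Directus release with the stable filter/action hook API.
const axios = require("axios");

module.exports = ({ action }, { services, exceptions }) => {
  const { MailService } = services;
  const { ServiceUnavailableException } = exceptions;

  // Scope the event to the "mission" collection so the handler
  // only runs when a mission item is created.
  action("mission.items.create", async (meta, { schema }) => {
    const mailService = new MailService({ schema });
    try {
      await axios.post("https://example.com/items", meta.payload);
      await mailService.send({
        to: "pseudo.pseudo#gmail.com", // address as written in the question
        template: {
          name: "item-created",
          data: { collection: meta.collection },
        },
      });
    } catch (error) {
      throw new ServiceUnavailableException(error);
    }
  });
};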

Rich response fulfillment Dialogflow Messenger

I'm using the Dialogflow Messenger integration. Custom payloads are working from the Dialogflow platform, but I don't know how to use them from a webhook.
Here is my JavaScript code that doesn't work:
const functions = require('firebase-functions');
const { dialogflow } = require('actions-on-google');
const { Card, Suggestion } = require('dialogflow-fulfillment');
const { WebhookClient } = require('dialogflow-fulfillment');

const app = dialogflow();

const response = {
  "fulfillment_messages": [{
    "payload": {
      "richContent": [
        [{
          "type": "chips",
          "options": [{
            "text": "Empezar!"
          }]
        }]
      ]
    }
  }]
};

const numero = (conv) => {
  conv.ask(response);
};

app.intent("numero", numero);

exports.dialogflowFirebaseFulfillment = functions.https.onRequest(app);
The intent is detected correctly but the code for the rich response is not working, the chatbot response is: 'Cannot display response in Dialogflow simulator. Please test on the Google Assistant simulator instead.'.
I want to know how to use all possible custom payloads (Info response, Description response, Button response...).
Your JSON response in the webhook should be:
const response = {
  "fulfillment_messages": [{
    "payload": {
      "richContent": [
        [{
          "type": "chips",
          "options": [{
            "text": "Empezar!"
          }]
        }]
      ]
    }
  }]
};
Your response in Dialogflow Messenger would look like this.
Here is another Stack Overflow post that might help:
How to show Rich Response Buttons ('Chips') Using Dialogflow Fulfillment?
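For the webhook side, one pattern that is commonly used (and shown in the linked post) is to build the response with the dialogflow-fulfillment library, which your code already imports, instead of passing raw JSON to the actions-on-google conv.ask(). A minimal sketch; treat agent.UNSPECIFIED and the Payload options as assumptions to verify against your dialogflow-fulfillment version:
const functions = require('firebase-functions');
const { WebhookClient, Payload } = require('dialogflow-fulfillment');

exports.dialogflowFirebaseFulfillment = functions.https.onRequest((request, response) => {
  const agent = new WebhookClient({ request, response });

  const richContent = {
    richContent: [
      [{
        type: 'chips',
        options: [{ text: 'Empezar!' }],
      }],
    ],
  };

  function numero(agent) {
    // sendAsMessage puts the payload into fulfillmentMessages,
    // which is where Dialogflow Messenger reads rich content from.
    agent.add(new Payload(agent.UNSPECIFIED, richContent, {
      rawPayload: true,
      sendAsMessage: true,
    }));
  }

  const intentMap = new Map();
  intentMap.set('numero', numero);
  agent.handleRequest(intentMap);
});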

actions-on-google account linking SignIn status is always "ERROR"

I got this sample code from the Actions on Google docs for account linking with a Google account. The signin.status is always "ERROR". I have tried the Actions console simulator, the Google Assistant app on my phone, and a Google Home Mini with personal results on, but the result is the same in all cases.
const express = require('express');
const bodyParser = require('body-parser');
const {actionssdk, SignIn} = require('actions-on-google');

const app = actionssdk({
  // REPLACE THE PLACEHOLDER WITH THE CLIENT_ID OF YOUR ACTIONS PROJECT
  clientId: <client_id>,
});

// Intent that starts the account linking flow.
app.intent('actions.intent.MAIN', (conv) => {
  conv.ask(new SignIn('To get your account details'));
});

// Create an Actions SDK intent with the `actions_intent_SIGN_IN` event.
app.intent('actions.intent.SIGN_IN', (conv, params, signin) => {
  console.log(signin);
  if (signin.status === 'OK') {
    const payload = conv.user.profile.payload;
    conv.ask(`I got your account details, ${payload.name}. What do you want to do next?`);
  } else {
    conv.ask(`I won't be able to save your data, but what do you want to do next?`);
  }
});

app.intent('actions.intent.TEXT', (conv) => {
  conv.close("bye");
});

// Run server
const expressApp = express().use(bodyParser.json());
expressApp.post('/', function(req, res) {
  app(req, res);
});
expressApp.listen(8080, () => {console.log("listening");});
This is the signin object that gets returned:
{ '#type': 'type.googleapis.com/google.actions.v2.SignInValue',
  status: 'ERROR' }
EDIT
My actions.json is as follows:
{
  "actions": [
    {
      "description": "Default Welcome Intent",
      "name": "MAIN",
      "fulfillment": {
        "conversationName": "fulfilment function"
      },
      "intent": {
        "name": "actions.intent.MAIN",
        "trigger": {
          "queryPatterns": [
            "talk to Care Cat"
          ]
        }
      }
    },
    {
      "description": "Everything Else Intent",
      "name": "allElse",
      "fulfillment": {
        "conversationName": "fulfilment function"
      },
      "intent": {
        "name": "actions.intent.TEXT"
      }
    }
  ],
  "conversations": {
    "fulfilment function": {
      "name": "fulfilment function",
      "url": <url>
    }
  },
  "locale": "en"
}
Could it be because it is still a test app which is not published yet?
Can someone help me with this?
In your Google Cloud Platform account, check your IAM settings and enable the Dialogflow API Admin role.
Documentation for more details: https://cloud.google.com/dialogflow/docs/access-control

Firebase function for nodemailer deployed but no logs and not working correctly with database

I have set up a Firebase function with nodemailer to grab the input from my Firebase database connected to my contact form and email it to me.
I have successfully deployed the function and can see it showing up in my Firebase console. However, I do not receive any errors in the console, nor do I see any logs or information in the functions section of the console. It simply doesn't work right now.
This is the first time I am doing this, and I have looked at almost all similar questions on SO, but none of them have given me any clues as to what I am doing wrong.
This is my functions package.json:
{
  "name": "functions",
  "description": "Cloud Functions for Firebase",
  "scripts": {
    "lint": "eslint .",
    "serve": "firebase serve --only functions",
    "shell": "firebase functions:shell",
    "start": "npm run shell",
    "deploy": "firebase deploy --only functions",
    "logs": "firebase functions:log"
  },
  "engines": {
    "node": "8"
  },
  "dependencies": {
    "firebase-admin": "~7.0.0",
    "firebase-functions": "^2.3.0",
    "nodemailer": "^6.1.1"
  },
  "devDependencies": {
    "eslint-plugin-promise": "^4.0.1",
    "firebase-functions-test": "^0.1.6"
  },
  "private": true
}
And this is the index.js code inside the functions folder:
const functions = require("firebase-functions");
const admin = require("firebase-admin");
const nodemailer = require("nodemailer");

const gmailEmail = "k****l#gmail.com";
const gmailPassword = functions.config().gmail.pass;

admin.initializeApp();

var goMail = function(message) {
  const transporter = nodemailer.createTransport({
    service: "gmail",
    auth: {
      user: gmailEmail,
      pass: gmailPassword
    }
  });
  const mailOptions = {
    from: gmailEmail, // sender address
    to: "****l#gmail.com", // list of receivers
    subject: "!", // Subject line
    text: "!" + message, // plain text body
    html: "!" + message // html body
  };
  const getDeliveryStatus = function(error, info) {
    if (error) {
      return console.log(error);
    }
    console.log("Message sent: %s", info.messageId);
  };
  transporter.sendMail(mailOptions, getDeliveryStatus);
};

exports.onDataAdded = functions.database
  .ref("/messages/{messageId}")
  .onCreate(function(snap, context) {
    const createdData = snap.val();
    var name = createdData.name;
    var email = createdData.email;
    var number = createdData.number;
    var message = createdData.message;
    goMail(name, email, number, message);
  });
I'm not sure if my setup is wrong or if I'm doing something wrong with the nodemailer code in index.js.
Thanks for the help in advance.
As explained in the comment, since your Cloud Function is triggered by a background event, you must return a promise to indicate that the asynchronous tasks are finished.
So in the goMail function you should return the promise returned by the sendMail() method with:
...
return transporter.sendMail(mailOptions); //Remove the callback
And you should return, in the Cloud Function itself, the promise returned by the goMail function with:
return goMail(...)
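Put together, a minimal sketch of the corrected index.js, keeping goMail's single message parameter from the question (note that the original call goMail(name, email, number, message) passes four arguments to a one-parameter function, so only the first would be used):
const goMail = function(message) {
  const transporter = nodemailer.createTransport({
    service: "gmail",
    auth: { user: gmailEmail, pass: gmailPassword }
  });
  const mailOptions = {
    from: gmailEmail,
    to: "****l#gmail.com",
    subject: "!",
    text: "!" + message,
    html: "!" + message
  };
  // Return the promise instead of passing a callback,
  // so the caller can wait for delivery to finish.
  return transporter.sendMail(mailOptions);
};

exports.onDataAdded = functions.database
  .ref("/messages/{messageId}")
  .onCreate(function(snap, context) {
    const createdData = snap.val();
    // Return the promise so Cloud Functions doesn't
    // terminate the instance before the email is sent.
    return goMail(createdData.message);
  });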

How to connect Watson Assistant v2 with the Alexa Skills Kit in AWS Lambda?

Hello, I want to connect my Watson Assistant to an Alexa device. For this I need the Alexa Skills Kit and AWS Lambda. But I can't connect Watson because I have a problem with my promises, and I can't see the logs of my code in the Amazon developer console. My assistant works in a Node.js application.
Here is the header of my Watson setup:
const assistant = new AssistantV2({
  version: '2019-02-28',
  iam_apikey: 'apiSecretKey',
  url: 'https://gateway-lon.watsonplatform.net/assistant/api'
});

const assistant_id = "assistantIDSecret";
Here is some code that I tried:
const MyNameIsIntentHandler = {
  canHandle(handlerInput) {
    return handlerInput.requestEnvelope.request.type === 'IntentRequest'
      && handlerInput.requestEnvelope.request.intent.name === 'SearchIntent';
  },
  async handle(handlerInput) {
    assistant.createSession({
      assistant_id: assistant_id
    })
    .then(res => {
      session_id = res.session_id;
    })
    .catch(err => {
      console.log(err);
    });
    assistant.message({
      assistant_id: assistant_id,
      session_id: session_id,
      input: {
        'message_type': 'text',
        'text': "hello"
      }
    })
    .then(res => {
      console.log(JSON.stringify(res, null, 2));
      speechText = res.output.generic.response.text;
    })
    .catch(err => {
      speechText = err;
    });
  }, function(err){
    speechText = "Problem with Api call";
  });
    return handlerInput.responseBuilder
      .speak(speechText)
      .getResponse();
  },
};
I tried to replace then/catch with await:
try {
  let res = await assistant.createSession({
    assistant_id: assistant_id
  });
  session_id = res.session_id;
  let message = await assistant.message({
    assistant_id: assistant_id,
    session_id: session_id,
    input: {
      'message_type': 'text',
      'text': "hello"
    }
  });
  speechText = message.output.generic.response.text;
} catch (err) {
  speechText = err;
}
The result in speechText should be "Good day to you", a response that comes from Watson, but right now Alexa says "Sorry, I can't understand the command. Please say again."
Do you have any other ways to try this? Thank you!
Sounds like you have managed to call out to Watson Assistant, and if the response configured in your dialog node was "Good day to you", which is what you have received, then that connection is working. If I remember right, however, the response that Alexa is expecting is a JSON object, not a string. So you need to format the response to meet Alexa's needs.
A quick look at this site: https://developer.amazon.com/docs/custom-skills/request-and-response-json-reference.html
indicates that the following is a good example of the required response JSON packet.
{
  "version": "string",
  "sessionAttributes": {
    "key": "value"
  },
  "response": {
    "outputSpeech": {
      "type": "PlainText",
      "text": "Plain text string to speak",
      "playBehavior": "REPLACE_ENQUEUED"
    },
    "reprompt": {
      "outputSpeech": {
        "type": "PlainText",
        "text": "Plain text string to speak",
        "playBehavior": "REPLACE_ENQUEUED"
      }
    },
    "shouldEndSession": true
  }
}
Note: I cannot confirm this, as I have never had an Alexa Skill taken to production (I have only built them as demos under the development environment and shared them with a limited few). But I have been informed that Amazon is not happy for its skills to offload work to Watson, which is a shame.
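For the Lambda side, here is a minimal sketch of the handler with the promise handling fixed, assuming the AssistantV2 client from the question. Note that output.generic is an array of response items, so the text most likely lives at output.generic[0].text rather than output.generic.response.text; verify against your SDK version:
const MyNameIsIntentHandler = {
  canHandle(handlerInput) {
    return handlerInput.requestEnvelope.request.type === 'IntentRequest'
      && handlerInput.requestEnvelope.request.intent.name === 'SearchIntent';
  },
  async handle(handlerInput) {
    let speechText;
    try {
      // Wait for the session before sending the message,
      // otherwise session_id is still undefined.
      const session = await assistant.createSession({ assistant_id: assistant_id });
      const res = await assistant.message({
        assistant_id: assistant_id,
        session_id: session.session_id,
        input: { 'message_type': 'text', 'text': 'hello' }
      });
      speechText = res.output.generic[0].text;
    } catch (err) {
      console.log(err);
      speechText = 'Problem with API call';
    }
    // responseBuilder wraps speechText in the JSON envelope Alexa expects.
    return handlerInput.responseBuilder
      .speak(speechText)
      .getResponse();
  },
};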
