Is there a way to influence how user voice input is interpreted? - dialogflow-es

We have an Action on Google where a user needs to say one of these answers: 'High', 'Rising', 'Low' or 'Falling'.
But when the user says "high", it is often recognised as "hi", and "low" as "hello".
I found that Leon Nicholls uses speechBiasing here: https://github.com/entertailion/Magnificent-Escape-Action/blob/4258a544789624b82253b4d29355a7519aab4179/game.js
So I added this before calling conv.ask(...):
conv.speechBiasing = ['High', 'Rising', 'Low', 'Falling'];
This resulted in the following:
"speechBiasingHints": [
"High",
"Rising",
"Low",
"Falling"
],
Unfortunately, the user's answer still shows on the SmartScreen as "hi" and not "high".
Is there another way to influence how the user voice input is interpreted?

If you want to force a specific intent to be selected as a response, you can use possibleIntents: [] (doc) in addition to speechBiasingHints: [].
You can also use follow-up intents as described here. Note that although the documentation implements this in Dialogflow, you can recreate the logic in code if you're not using Dialogflow.
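For illustration, a minimal sketch of the raw conversation response with both fields set might look like this (field names follow the Actions SDK conversation webhook JSON; the prompt wording and the use of actions.intent.TEXT are assumptions, not taken from the question):

// Sketch only: combining possibleIntents with speechBiasingHints in the raw
// Actions on Google conversation response.
const expectedInput = {
  inputPrompt: {
    richInitialPrompt: {
      items: [
        { simpleResponse: { textToSpeech: 'Is the value high, rising, low or falling?' } }
      ]
    }
  },
  // Restrict which intent may handle the next user turn.
  possibleIntents: [{ intent: 'actions.intent.TEXT' }],
  // Bias speech recognition toward the answers you expect.
  speechBiasingHints: ['High', 'Rising', 'Low', 'Falling']
};

console.log(JSON.stringify({ expectUserResponse: true, expectedInputs: [expectedInput] }, null, 2));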

Related

not able to extract entities from user phrases when uploading intent using api

I am trying to build an intent using the Dialogflow API.
Intent name: makePizza
Phrases: ['I want to order pizza', 'I want to order small pizza']
Response: ['Your pizza is on its way']
After uploading the intent, it looks like this:
But if I create the intent from the console and add the phrase I want to order small pizza, it automatically detects that the keyword small is the size parameter:
The size entity is already added in the agent.
I understand that this can be achieved using the code below:
training_phrases_parts = [
    {
        'type': 'EXAMPLE',
        'parts': [
            {'text': 'i want to order '},
            # '@size' references the entity type; 'size' is the parameter alias
            {'text': 'small', 'entity_type': '@size', 'alias': 'size'},
            {'text': ' pizza'},
        ]
    }
]
But that is not doable at scale, because there will be many intents, each with many user phrases (which may or may not contain parameters). Please suggest how to make this generic; I was not able to.
Is there any way to achieve this after the intents are uploaded to Dialogflow, such as detecting the entities from the user phrases? Any other suggestion is welcome!
Note 1: I tried to upload the related parameters along with the intent as well, but that did not do the trick either.
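For illustration, the kind of generic splitting I am after might look roughly like this (a Node.js sketch; the entityValues map and the @ prefix on the entity type are assumptions, and the part shape simply mirrors the snippet above):

// Sketch only: split a training phrase into annotated parts by searching it
// for known entity values. The entityValues map has to be maintained by hand.
function buildParts(phrase, entityValues) {
  const parts = [];
  let rest = phrase;
  while (rest.length > 0) {
    // Find the earliest known entity value in the remaining text.
    let earliest = null;
    for (const value of Object.keys(entityValues)) {
      const idx = rest.toLowerCase().indexOf(value.toLowerCase());
      if (idx !== -1 && (earliest === null || idx < earliest.idx)) {
        earliest = { idx, value };
      }
    }
    if (earliest === null) {
      parts.push({ text: rest });
      break;
    }
    if (earliest.idx > 0) {
      parts.push({ text: rest.slice(0, earliest.idx) });
    }
    const matched = rest.slice(earliest.idx, earliest.idx + earliest.value.length);
    const annotation = entityValues[earliest.value];
    parts.push({ text: matched, entity_type: annotation.entity_type, alias: annotation.alias });
    rest = rest.slice(earliest.idx + earliest.value.length);
  }
  return parts;
}

// Example: annotate the "small" keyword against the size entity.
console.log(buildParts('i want to order small pizza', {
  small: { entity_type: '@size', alias: 'size' }
}));
// -> [ { text: 'i want to order ' },
//      { text: 'small', entity_type: '@size', alias: 'size' },
//      { text: ' pizza' } ]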

Multiple output text value setting in Watson Conversation

I have the following node in the conversation. I want to raise the action and, based on that, call an API. In the success scenario I will show output.text[0], and in the error scenario output.text[1]:
{
  "output": {
    "text": {
      "values": [
        "I want to get this with success scenario",
        "I want to get this with error scenario"
      ],
      "selection_policy": "sequential"
    },
    "action": "MyAction"
  }
}
But when I access this conversation node in node.js, it always gives the 1st value, i.e. 'I want to get this with success scenario'. It never gives 'I want to get this with error scenario'.
How can I resolve this issue?
This is complicated to answer because it depends on your business rules inside your chatbot.
But if you want to answer with one message according to the condition...
You can see in your output that the selection_policy is sequential; in other words, the 1st phrase will be shown first, and only if the user hits the condition again will the second message be shown.
"The order in which values in the array are returned depends on the attribute selection_policy."
The better way to solve this is to create two conditions inside the node flow, one for each phrase.
For example, the first condition flow:
if the bot recognizes successScenario, respond with "I want to get this with success scenario"
And the second condition flow:
if the bot recognizes errorScenario, respond with "I want to get this with error scenario"
Your Node.js app will then get the value according to the condition.
See the official documentation about this here; search for selection_policy.
I have resolved the issue in the following way:
{
  "output": {
    "text": {
      "values": [
        {
          "successMsg": "success Message",
          "errorMsg": "error message"
        }
      ]
    },
    "action": "MyAction"
  }
}
The reason for not going with a new node is that, with this action, I need to call the API and, based on whether the API call succeeds or fails, display the corresponding message to the user. This saves one round trip because there is no need to send the result status back to the conversation service; the app can decide what to present to the user from this node itself.
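A rough Node.js sketch of that client-side selection (assuming the node's output object above reaches the app unchanged; pickMessage and apiSucceeded are illustrative names, not part of the Watson SDK):

// Sketch only: choose between successMsg and errorMsg in the app, based on
// whether the external API call triggered by "MyAction" succeeded.
function pickMessage(watsonResponse, apiSucceeded) {
  const output = watsonResponse.output;
  if (output.action === 'MyAction') {
    const messages = output.text.values[0]; // { successMsg, errorMsg }
    return apiSucceeded ? messages.successMsg : messages.errorMsg;
  }
  // Otherwise fall back to whatever text the node produced.
  return output.text;
}

// Usage after calling the external API for "MyAction":
// const reply = pickMessage(response, /* apiSucceeded */ true);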

LUIS: Action Parameter cannot be passed (with Dialog Execution)

Using LUIS and its "Dialog Execution" under Action Binding, I expect to be able to provide the required parameter (of an Action), so that the Action can be triggered or the Dialog can be continued.
As far as I understand, once we have been asked to provide the Parameter, we should provide it in the follow-up query call. For example:
First query:
https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/...?subscription-key=...&q=what are the available items
Then it asks me "Under what category?" (expecting me to provide the required parameter), like this:
Then I provided it in the follow-up query:
https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/...?subscription-key=...&q=electronics&contextId=d754ce3...
But then it seems the value is still not accepted, and it still shows as null, like this:
So the Parameter is not captured, and the Action can never be triggered. (Nor can I reach the next Parameter, if there is one.)
Am I doing something wrong, or what seems to be the problem?
(Below is the screenshot of that Intent with the "Action Parameters")
I have experienced this before (in fact it still happens), even in Microsoft's official LUIS API example demos.
For example, in their Weather Bot, just try something like:
You: What will the weather be tomorrow?
Bot: Where would you like the weather?
You: Singapore
Bot:
{
  "name": "location",
  "required": true,
  "value": null
}
Now try again, like:
You: What will the weather be tomorrow?
Bot: Where would you like the weather?
You: in Singapore
Bot:
{
  "name": "location",
  "required": true,
  "value": [
    {
      "entity": "singapore",
      "type": "builtin.geography.country"
    }
  ]
}
Conclusion?
Prepositions! (in, at, on, by, under, ...) In some cases, LUIS still doesn't understand the Entity input unless the proper preposition is provided.
I'm pretty sure this is the reason for your case. Try again with a preposition.
(This problem took me 1-2 weeks to realise. I hope Microsoft improves LUIS in all these aspects soon.)
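If you are calling the endpoint from code, a rough Node.js sketch of the follow-up query with a preposition could look like this (the app id, key and contextId placeholders are assumptions; the endpoint shape is copied from the URLs in the question):

// Sketch only: re-issue the follow-up query, this time with a preposition.
const https = require('https');

const appId = 'YOUR_APP_ID';          // placeholder
const key = 'YOUR_SUBSCRIPTION_KEY';  // placeholder
const contextId = 'CONTEXT_ID';       // contextId returned by the first query

// Note the preposition: "in electronics" rather than just "electronics".
const q = encodeURIComponent('in electronics');
const url = 'https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/' + appId +
            '?subscription-key=' + key + '&q=' + q + '&contextId=' + contextId;

https.get(url, (res) => {
  let body = '';
  res.on('data', (chunk) => { body += chunk; });
  res.on('end', () => {
    // Inspect the action parameters to see whether the value was captured.
    console.log(JSON.stringify(JSON.parse(body), null, 2));
  });
});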

scope of questions in api.ai

Can anyone suggest how to limit the scope of questions in api.ai? For example, I want to ask the user "how many books can you carry at a time?"; the user can reply with any positive integer, and then my bot replies "good, you can still better than others!". But without any reference, if the user directly writes a positive integer at the very start, the bot also replies "good, you can still better than others!" instead of "I didn't get it" (or the default response). That answer should only come after the previous question has been asked. How can I do this?
==== Case 1 ====
Bot: how many books can you carry at a time?
User: 5
Bot: good, you can still better than others!
==== Case 2 ====
(without any reference, the user gives input at the very start of the conversation)
User: 5
Bot: good, you can still better than others!
Thanks In Advance.
You should use a required parameter instead of putting numbers in User says:
In your intent, configure your action to have one required parameter, numBooks. Have the prompt for that parameter be "how many books can you carry at a time?". Then, for that intent, have the response be "good, you can still better than others!". Finally, in the User says section, add anything you want the user to say to trigger the intent, for example "hi". Save your intent. Now, whenever a user says "hi", the bot will ask the question and the conversation will begin. But if the user randomly sends a number, it will respond with the fallback intent.

What are the possible kinds of webhooks Trello can send? What attributes come in each?

I'm developing an app that is tightly integrated with Trello and uses Trello webhooks for a lot of things. However, I can't find anywhere in Trello's developer documentation which "actions" may trigger a webhook and what data comes with each of them.
In fact, in my experience, the data that comes with each webhook is somewhat random. For example, while most webhooks contain the shortLink of the card that is the target of some action, some do not, in a totally unpredictable way. Also, creating cards from checklists doesn't seem to trigger the same webhook that is triggered when a card is created normally, and so on.
So, is that documented somewhere?
After fighting against these issues and my raw memory of what data should come in each webhook, along with the name of each different action, I decided to document this myself and released it as a set of JSON files (constantly updated as I find new webhooks out there) showing samples of the data each webhook will send to your endpoint:
https://github.com/fiatjaf/trello-webhooks
For example, when a board is closed, a webhook will be sent with:
{
  "id": "55d7232fc3597726f3e13ddf",
  "idMemberCreator": "50e853a3a98492ed05002257",
  "data": {
    "old": {
      "closed": false
    },
    "board": {
      "shortLink": "V50D5SXr",
      "id": "55af0b659f5c12edf972ac2e",
      "closed": true,
      "name": "Communal Website"
    }
  },
  "type": "updateBoard",
  "date": "2015-08-21T13:10:07.216Z",
  "memberCreator": {
    "username": "fiatjaf",
    "fullName": "fiatjaf",
    "avatarHash": "d2f9f8c8995019e2d3fda00f45d939b8",
    "id": "50e853a3a98492ed05002257",
    "initials": "F"
  }
}
In fact, what comes is a JSON object like {"model": ..., "action": ...the data you see above...}, but I've removed that wrapper for the sake of brevity and I'm showing only what comes inside the "action" key.
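If it helps, a rough Node.js/Express sketch of an endpoint consuming these payloads could look like this (the route path and port are arbitrary choices; the { model, action } wrapper and the action.type values follow the samples in the repo):

// Sketch only: dispatch on the Trello action type.
const express = require('express');
const app = express();
app.use(express.json());

// Trello checks the callback URL with a HEAD request when the webhook is created.
app.head('/trello-webhook', (req, res) => res.sendStatus(200));

app.post('/trello-webhook', (req, res) => {
  const action = req.body.action || {};
  switch (action.type) {
    case 'createCard':
      console.log('card created:', action.data.card && action.data.card.name);
      break;
    case 'updateBoard':
      console.log('board updated:', action.data.board && action.data.board.name);
      break;
    default:
      console.log('unhandled action type:', action.type);
  }
  res.sendStatus(200);
});

app.listen(3000);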
Based on fiatjaf's repo, I gathered and summarized all* the webhook types:
addAttachmentToCard
addChecklistToCard
addLabelToCard
addMemberToBoard
addMemberToCard
commentCard
convertToCardFromCheckItem
copyCard
createCard
createCheckItem
createLabel
createList
deleteAttachmentFromCard
deleteCard
deleteCheckItem
deleteComment
deleteLabel
emailCard
moveCardFromBoard
moveCardToBoard
moveListFromBoard
moveListToBoard
removeChecklistFromCard
removeLabelFromCard
removeMemberFromBoard
removeMemberFromCard
updateBoard
updateCard
updateCheckItem
updateCheckItemStateOnCard
updateChecklist
updateComment
updateLabel
updateList
hope it helps!
*I don't know whether that list includes all the available webhook types because, as I said, it's based on fiatjaf's repo, which was created 2 years ago.
