The Google Assistant does not show the suggestion chips sent in the webhook response:
{
  "fulfillmentText": "Some text",
  "payload": {
    "google": {
      "expectUserResponse": true,
      "richResponse": {
        "items": [
          {
            "simpleResponse": {
              "textToSpeech": "What number?"
            }
          }
        ],
        "suggestions": [
          {
            "title": "One"
          },
          {
            "title": "Two"
          }
        ]
      }
    }
  },
  "followupEventInput": {
    "name": "numbers",
    "parameters": {
      "param1": "this is it"
    }
  }
}
The interesting thing is that if I remove the "followupEventInput" field, the suggestion chips are displayed.
Can someone give me a hint about this behaviour?
The JSON you're sending back doesn't do what you likely want it to do.
Including followupEventInput means that the named event is triggered immediately, so the rest of your reply (including the suggestions) is never sent back. Instead, the reply from the Intent that handles the followup event is what gets sent.
It sounds like you want to send back a reply and then, no matter what the user says or selects, their message is sent to a specific action. Keep in mind that Dialogflow Intents are triggered based on the user's actions and shaped based on the contexts that might be set.
In this case, it sounds like you may want to set an outputContext to influence which Intents will be examined when collecting the user's response. You can then have an Intent that takes this as an input Context and matches the possible phrases. If you truly want to get whatever the user says in the reply, you can use a Fallback Intent with the input Context set appropriately.
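For example, here is a minimal sketch of a v2 webhook response that sets an output Context instead of triggering a followup event (the context name "awaiting-number" and the project/session placeholders are hypothetical):

{
  "fulfillmentText": "Some text",
  "payload": {
    "google": {
      "expectUserResponse": true,
      "richResponse": {
        "items": [
          { "simpleResponse": { "textToSpeech": "What number?" } }
        ],
        "suggestions": [
          { "title": "One" },
          { "title": "Two" }
        ]
      }
    }
  },
  "outputContexts": [
    {
      "name": "projects/your-project-id/agent/sessions/your-session-id/contexts/awaiting-number",
      "lifespanCount": 2,
      "parameters": { "expecting": "number" }
    }
  ]
}

Because there is no followupEventInput, the rich response (and its suggestion chips) is actually delivered, and the context steers which Intents can match the user's next utterance.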
While you can redirect to another Intent to send output, usually this is unnecessary. Remember that Intents best represent the user's input rather than your agent's output. Particularly if you're using your webhook to generate and send a reply - just send the reply.
I use LUIS as the language understanding service for our chat bot built with the Microsoft Bot Framework, and I am observing some strange behavior:
I added a string "what is my deductible?" to the intent "Deductible".
A user sends "what is my deductible" string ---> LUIS returns the desired intent "Deductible". OK!
A user sends "what is my deductibl?" OR "what is my deductible" (misspelling in the first case, lack of a question mark in the second case) ---> LUIS returns some other intents (which are not related to deductible AT ALL). NOT OK!
Also, I don't see any utterances like those in "Review Endpoint Utterances" section so I could reassign the utterances to the desired intent. NOT OK AT ALL!
Any ideas on how to fix this and improve recognition for utterances with misspellings, missing punctuation, and also - this is very important - synonyms of the words?
To improve recognition of utterances with misspellings, missing punctuation, and similar variations, you need to enable data alteration (spell correction) and add utterance patterns using the correct syntax.
For misspellings, enable spell checking on the prediction endpoint so LUIS corrects the query before matching intents.
The response below is from the V2 prediction endpoint (note the alteredQuery field):
{
  "query": "Book a flite to London?",
  "alteredQuery": "Book a flight to London?",
  "topScoringIntent": {
    "intent": "BookFlight",
    "score": 0.780123
  },
  "entities": []
}
The response below is from the V3 prediction endpoint:
{
  "query": "Book a flite to London?",
  "prediction": {
    "normalizedQuery": "book a flight to london?",
    "topIntent": "BookFlight",
    "intents": {
      "BookFlight": {
        "score": 0.780123
      }
    },
    "entities": {}
  }
}
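These altered/normalized queries only appear when spell checking is enabled on the prediction request. As a rough sketch (the region, app ID, and key placeholders are assumptions), a V2 prediction call with spell checking enabled looks like:

https://<region>.api.cognitive.microsoft.com/luis/v2.0/apps/<appId>
    ?q=Book a flite to London
    &spellCheck=true
    &bing-spell-check-subscription-key=<bing-key>
    &subscription-key=<luis-key>

The Bing Spell Check key is separate from the LUIS subscription key; see the data alteration documentation linked below for details.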
For guidance on authoring utterances and patterns with valid syntax, see the following resources:
https://blog.botframework.com/2018/06/07/improving-accuracy-in-luis-with-patterns/
https://learn.microsoft.com/en-us/azure/cognitive-services/luis/luis-how-to-model-intent-pattern
https://learn.microsoft.com/en-us/azure/cognitive-services/luis/luis-concept-data-alteration?tabs=V3
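For the missing question mark case specifically, the pattern syntax's optional text (square brackets) helps. A hypothetical example, assuming a benefitType entity that matches words like "deductible":

what is my {benefitType}[?]

This single pattern matches both "what is my deductible" and "what is my deductible?".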
While using a Dialogflow mega-agent, I want to be able to query one specific sub-agent at a time, according to a user setting. I have 3 sub-agents. Each sub-agent has similar training phrases but should request different data. The user should be able to select which sub-agent they want to use, and the query should go only to the selected sub-agent.
To specify one or more sub-agents for a detect intent request, set the subAgents field of QueryParameters.
For example:
{
  "queryInput": {
    "text": {
      "text": "reserve a meeting room for six people",
      "languageCode": "en-US"
    }
  },
  "queryParams": {
    "subAgents": [
      {"project": "projects/sub-agent-1-project-id"},
      {"project": "projects/sub-agent-2-project-id"}
    ]
  }
}
For more information on Dialogflow ES mega agents, see the Mega agents documentation.
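Since you want the query to go to just the selected sub-agent, list only that sub-agent's project in subAgents. A minimal Node.js sketch (the @google-cloud/dialogflow client usage and the project/session IDs here are assumptions, not code from the question):

const dialogflow = require('@google-cloud/dialogflow');

const sessionClient = new dialogflow.SessionsClient();

// Route a query to a single, user-selected sub-agent of the mega agent.
async function detectIntentForSubAgent(userQuery, subAgentProjectId) {
  const sessionPath = sessionClient.projectAgentSessionPath(
    'mega-agent-project-id', // hypothetical mega-agent project
    'some-session-id'        // hypothetical session ID
  );
  const [response] = await sessionClient.detectIntent({
    session: sessionPath,
    queryInput: {
      text: { text: userQuery, languageCode: 'en-US' },
    },
    queryParams: {
      // Only the selected sub-agent is listed, so intent matching
      // is restricted to that sub-agent.
      subAgents: [{ project: `projects/${subAgentProjectId}` }],
    },
  });
  return response.queryResult;
}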
We have built a Teams app that can be used in group chat, so basically any user can do:
#
On the server side, we want to get the sending user and respond to the sent text based on who sent it. The code to get the users in the conversation looks like this:
// Get a connector client for the conversation's service URL
const connector = context.adapter.createConnectorClient(context.activity.serviceUrl);
// Fetch every member of the conversation
const response = await connector.conversations.getConversationMembers(context.activity.conversation.id);
functions.logger.log("conversation members are:", response);
The call returns an array of all the users in the conversation, with the structure below:
[
  {
    "id": "29:1a-Xb7uPrMwC2XqjMEHCC7ytV2xb2VUCqTA-n_s-k5ZyMCTKIL-ku2XkgbE167D_5ZbmVaqQxJGIQ13vypSqu-A",
    "name": "Neeti Sharma",
    "objectId": "718ab805-860c-43ec-8d4e-4af0c543df75",
    "givenName": "Neeti",
    "surname": "Sharma",
    "email": "xxx#xxxx.xxx",
    "userPrincipalName": "xxxx#xxxx.xxx",
    "tenantId": "xxx-xx-xx-xxxxxx-x",
    "userRole": "user"
  },
  {
    ...
  }
]
The above response does not indicate who is the sender of the message in the group chat. How do we find that?
I'm not sure of the exact syntax for Node (I work mostly in C#), but basically on the context.activity object there is a from property (i.e. context.activity.from), which is of type ChannelAccount (DotNet reference here, but it's very similar for Node). That will give you, at least, Name and AadObjectId. What you're using right now is getConversationMembers, which gives you everyone in the entire Channel, not just that particular message/thread.
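In Node, a minimal sketch (reusing the functions.logger call style from the question; the field names follow the Node ChannelAccount shape):

// The sender of the current message is in context.activity.from
const sender = context.activity.from; // ChannelAccount
functions.logger.log("sender id:", sender.id);
functions.logger.log("sender name:", sender.name);
functions.logger.log("sender AAD object id:", sender.aadObjectId);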
turnContext.Activity.From.Id is also unique to each user, so you can use that property too. Email is tough to get in any event other than the MembersAdded event.
I'm writing my first NodeJS app for Google Home (using DialogFlow - formerly API.ai).
I'm looking at the doc on this page: https://developers.google.com/actions/reference/v1/dialogflow-webhook
but I don't see any way to set session variables.
My current test program sets speech like this:
speechText = "I'm not sure that character exists!";
callback(null, {"speech": speechText});
In DialogFlow, my JSON after running looks like this, and it looks like maybe the "contexts" is where the session state would go?
{
  "id": "3a66f4d1-830e-48fb-b72d-12711ecb1937",
  "timestamp": "2017-11-24T23:03:20.513Z",
  "lang": "en",
  "result": {
    "source": "agent",
    "resolvedQuery": "test word",
    "action": "MyAction",
    "actionIncomplete": false,
    "parameters": {
      "WordNumber": "400"
    },
    "contexts": [],
    "metadata": {
      "intentId": "a306b829-7c7a-46fb-ae1d-2feb1c309124",
      "webhookUsed": "true",
      "webhookForSlotFillingUsed": "false",
      "webhookResponseTime": 752,
      "intentName": "MyIntentName"
    },
    "fulfillment": {
      "messages": [{
        "type": 0,
        "speech": ""
      }]
    },
    "score": 1
  },
  "status": {
    "code": 200,
    "errorType": "success",
    "webhookTimedOut": false
  },
  "sessionId": "fe0b7d9d-7a55-45db-9be9-75149ff084fe"
}
I just noticed from a chat bot course that I bought that you can set up Contexts like this, but I'm still not sure exactly how the contexts get set and passed back and forth between the response of one call of my program and the request in the next call (defined via "webhook").
When I added the contexts above, Dialogflow would no longer recognize my utterance and was giving me the DefaultFallback response. When I remove them, my AWS Lambda gets called.
For starters, the documentation page you're looking at refers to a deprecated version of the API. The page that covers the current version of the API (v2) is https://developers.google.com/actions/dialogflow/webhook. The deprecated version will only be supported for another 6 months or so.
You're on the right track using Contexts! If you were using Google's actions-on-google node.js library, there would be some additional options - but they all use Contexts behind the scenes. (And since they do use Contexts behind the scenes - you should make sure you pick Context names that are different from theirs.) You can also use the sessionId and keep track of things in a local data store (such as DynamoDB) indexed against that sessionId. But enough about other options...
A Context consists of three elements:
A name.
A lifetime - the number of user messages for which this context will be sent back to you. (But see below about re-sending contexts.)
An object of key-value strings.
You'll set any contexts in the JSON that you return as an additional parameter named contextOut. This will be an array of contexts. So your response may look something like this:
var speechText = "I'm not sure that character exists!";
// A context that carries session state for the next few exchanges
var sessionContext = {
  name: "session_variables",
  lifespan: 5,
  parameters: {
    "remember": "one",
    "something": "two"
  }
};
var contextOut = [sessionContext];
var response = {
  speech: speechText,
  contextOut: contextOut
};
callback(null, response);
This will include a context named "session_variables" that stores two session variables. It will be returned for the next 5 messages the user sends to your webhook. You can, however, add it to every message you send, and the latest lifetime and parameters will be the ones sent back next time.
You'll get these contexts in the JSON sent to you in the result.contexts array.
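For example, after the response above, the next request to your webhook would include something like this (a sketch - note the lifespan has counted down by one):

{
  "result": {
    "contexts": [
      {
        "name": "session_variables",
        "lifespan": 4,
        "parameters": {
          "remember": "one",
          "something": "two"
        }
      }
    ]
  }
}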
The "Context" field on the Intent screen is used for an additional purpose in Dialogflow beyond just preserving session information. This indicates that the Intent is only triggered if the specified Context exists (lifetime > 0) when the phrase tries to be matched with it (or when handling a fallback intent). If you're using a webhook, the "Context Out" field is ignored if you send back contexts yourself.
This lets you do things like ask a particular question and set a Context (possibly with parameters) to indicates that some answers should be understood as being replies to the question you just asked.
I am searching for a way to store conditions in MongoDB, to be queried and checked, and then do something as a result of the condition check.
First off, here is an example of the event object that I am considering:
{
  "name": "My Event",
  "created": 1490726092221,
  "startDate": 1490726092221,
  "endDate": 1490726097810,
  "notifications": [
    {
      "message": "{event.Name} Created", // message template
      "status": 0, // 0=initialized 1=failed 2=sent
      "sendDate": null, // date that the notification was sent
      "sentTo": ["c2a34dfg32c1d4583e73a123"], // members to send the notification to
      "criteria": {
        "script": "event.created >= 0 && this.status < 2"
      }
    }
  ],
  "members": [
    {
      "_id": "c2a34dfg32c1d4583e73a123" // reference to the user
    }
  ]
}
The use case: I want to have customizable notifications for an event. So if an event is scheduled, it could have notifications for when it is created, when the event start date is within a few days, when a member joins, etc. While I could code all of these as JavaScript functions and map to them with an enum in the notification criteria, or have hooks for when certain events happen, this seems like a rigid approach.
What I am envisioning is possibly a scripting language that can be stored as a string on the document, then queried and evaluated, returning a boolean that decides whether to trigger the notification.
The script would need to have the event as an input variable, as well as a few special input variables to be available to the script.
This could be done with javascript and eval() but that scares me. Are there any other tools that can be used for this use case? Or, are there any suggestions for a better approach to this problem?
Sounds like you are building a workflow engine with a MongoDB back end. I would start by researching the options listed in this answer: Workflow engine in Javascript
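If you do stay with stored JavaScript snippets, Node's built-in vm module is a less alarming option than a bare eval(), though it is still not a hard security boundary against hostile scripts. A minimal sketch, assuming the event document shape above (the function and variable names are hypothetical):

const vm = require('vm');

function checkCriteria(notification, event) {
  // Expose only what the stored script may touch. Top-level "this"
  // inside runInNewContext is the sandbox object, so "this.status"
  // resolves to the status value below.
  const sandbox = { event: event, status: notification.status };
  try {
    // e.g. "event.created >= 0 && this.status < 2"
    return Boolean(
      vm.runInNewContext(notification.criteria.script, sandbox, { timeout: 100 })
    );
  } catch (err) {
    // Treat a broken or timed-out script as "condition not met".
    return false;
  }
}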