I am building a bot for Google Assistant. I have enabled the fulfillment section for some intents, and Dialogflow sends the request to the fulfillment URL. The URL is executed and a hard-coded response is returned, which I can see in the Assistant simulator. Everything works fine except one thing: the request is empty. I can't access fields that are supposed to be present in the request.
I have accessed the same URL with a POST request from Python code, and it displays the parameters, so there are no issues in the code itself. I think I am missing some configuration option.
I was expecting the POST body in the following format:
POST body:
{
  "responseId": "ea3d77e8-ae27-41a4-9e1d-174bd461b68c",
  "session": "projects/your-agents-project-id/agent/sessions/88d13aa8-2999-4f71-b233-39cbf3a824a0",
  "queryResult": {
    "queryText": "user's original query to your agent",
    "parameters": {
      "param": "param value"
    },
    "allRequiredParamsPresent": true,
    "fulfillmentText": "Text defined in Dialogflow's console for the intent that was matched",
    "fulfillmentMessages": [
      {
        "text": {
          "text": [
            "Text defined in Dialogflow's console for the intent that was matched"
          ]
        }
      }
    ],
    "outputContexts": [
      {
        "name": "projects/your-agents-project-id/agent/sessions/88d13aa8-2999-4f71-b233-39cbf3a824a0/contexts/generic",
        "lifespanCount": 5,
        "parameters": {
          "param": "param value"
        }
      }
    ],
    "intent": {
      "name": "projects/your-agents-project-id/agent/intents/29bcd7f8-f717-4261-a8fd-2d3e451b8af8",
      "displayName": "Matched Intent Name"
    },
    "intentDetectionConfidence": 1,
    "diagnosticInfo": {},
    "languageCode": "en"
  },
  "originalDetectIntentRequest": {}
}
But when I print the POST data using print(request.POST), the request shows up empty, with none of these fields present.
One more thing: does Dialogflow append the action name to the end of the fulfillment URL? If so, I will have to handle the logic separately. I have done it without considering the action name, but a lot of my stuff is hacked together, so I just want to be sure.
On another note, is Dialogflow good enough? It has worked fine on a few examples similar to what it was trained on. How many training samples does it need to work properly? What is the underlying algorithm used in Dialogflow? Or should I use the fulfillment URL and handle everything on my own? I am inclined towards the latter; I do not have too much faith in existing chatbots.
Any help is appreciated.
If the Fallback Intent is the one being triggered, then you wouldn't get any parameters since this means that nothing else matched.
Got it. Using request.body solved the problem: Dialogflow sends the payload as raw JSON rather than form data, so it never shows up in request.POST. I parsed request.body with json.loads and accessed the parameters.
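A minimal sketch of that fix, assuming a Django view (the view name and response text are illustrative):

import json

from django.http import JsonResponse

def fulfillment(request):
    # Dialogflow POSTs raw JSON, so request.POST is empty; read the body instead.
    req = json.loads(request.body)

    params = req["queryResult"]["parameters"]
    intent_name = req["queryResult"]["intent"]["displayName"]

    # Reply in the v2 webhook response format.
    return JsonResponse({"fulfillmentText": "Matched %s with %s" % (intent_name, params)})

As for the earlier question: Dialogflow does not append the action to the fulfillment URL; the matched intent and action arrive inside the JSON body, so you can route on them there.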
I have an application within Watson Assistant that consumes many services from other endpoints, and I would like to call this conversation (from Watson) within a Google Assistant conversation for a certain intent. For example, I will develop a rich conversation on Google Assistant, and in one of the options I will call Watson's conversation.
I tried the following, but it didn't work. Does anyone know of an example that could help me?
{"locale": "pt-BR",
"actions": [
{
"description": "Launch intent",
"name": "MAIN",
"fulfillment": {
"conversationName": "mainConversation"
},
"intent": {
"name": "actions.intent.MAIN"
}
},
{
"description": "Direct access",
"name": "BUY",
"fulfillment": {
"conversationName": "ExampleAction"
},
"intent": {
"name": "com.example.ExampleAction.BUY",
"trigger": {
"queryPatterns": [
"teste",
"azul",
"start"
]
}
}
}
],
"conversations": {
"mainConversation": {
"name": "mainConversation",
"url": "https://us-central1-ericanovo-798cc.cloudfunctions.net/webhook",
"fulfillmentApiVersion": 2
},
"BUY": {
"name": "ExampleAction",
"url": "https://orquestrador-sulamerica-teste.mybluemix.net/api/v1/chat/google?externaltoken=574213c0-e904-11e9-9970-ff484aa25334",
"fulfillmentApiVersion": 2
}
}
}
Thanks!
That won't work because the webhook for everything published under the same project has to be the same URL. You are expected to handle all the Intents and "actions" at that webhook.
In your case, you would also need to make sure the request is formatted the way the Watson API would be expecting it. The Assistant will send it using the Conversation Webhook Format, and it sounds like you would send it using Watson's Analyze Text API.
You're not showing any of your code, so it is difficult to be sure, but the incoming request will be in a JSON format that you can parse. You can then use a library in Node (such as request-promise) to make the calls to Watson. Based on the result from Watson, you'd format the results as a response and return it to the Assistant.
It isn't clear why you'd need multiple webhooks specifically, although it is certainly possible that some Intents may make different API calls than others.
Keep in mind that your custom Intents will only be valid on invocation. Subsequent Intents will all be TEXT Intents.
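As an illustration only, here is a rough sketch of that proxy flow in Python (the answer describes the same idea in Node with request-promise); the Watson URL is a placeholder, and the exact request and response shapes should be verified against the Conversation Webhook and Watson documentation:

import requests  # stand-in HTTP client; request-promise plays this role in Node

WATSON_URL = "https://example.com/watson-endpoint"  # hypothetical placeholder

def handle_assistant_request(app_request):
    # Conversation Webhook format: the user's utterance arrives under rawInputs.
    query = app_request["inputs"][0]["rawInputs"][0]["query"]

    # Forward the text to Watson in whatever shape its API expects
    # (this assumes a Watson-style {"input": {"text": ...}} body).
    watson_reply = requests.post(WATSON_URL, json={"input": {"text": query}}).json()

    # Repackage Watson's answer (assuming its output.text list) as a
    # Conversation Webhook response for the Assistant.
    return {
        "expectUserResponse": True,
        "expectedInputs": [{
            "inputPrompt": {"richInitialPrompt": {"items": [{
                "simpleResponse": {"textToSpeech": watson_reply["output"]["text"][0]}
            }]}},
            "possibleIntents": [{"intent": "actions.intent.TEXT"}],
        }],
    }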
I am trying to build a Slack app using AWS Lambda and Node.js. The issue I am facing is that I don't understand what format the Slack bot needs the JSON payload from my AWS Lambda code to be in so that it can display it.
I followed the tutorial video suggested by Slack linked here. In the video, the following JSON object is created and returned from the AWS Lambda.
const response = {
  statusCode: 200,
  body: "Sample Response",
};
The Slack bot posts the text in the 'body' property (i.e. 'Sample Response' in this case) as a response, and this works well. But I need more flair than plain text, so I looked into their Block Kit UI builder. There seems to be no documentation on how to use its output with a similar 'response' JSON object. How exactly am I supposed to use the JSON object created by the UI builder?
I don't know much about web development, so sorry if this seems like a very basic question. I wish there were a sample Slack app on their website that showed this.
The following may work for you (I use a similar one in production):
{
  "channel": "your-channel-name",
  "username": "channel-username",
  "attachments": [
    {
      "title": "some-title",
      "fallback": "some message",
      "text": "some text",
      "fields": [
        {
          "title": "sub-title",
          "value": "sub-title-value",
          "short": true
        },
        {
          "title": "some-other-title",
          "value": "some-value"
        }
      ],
      "color": "red"
    }
  ],
  "icon_emoji": "gun"
}
This link or this one may provide some extra information.
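If you specifically want Block Kit output rather than the legacy attachments shown above, a hedged sketch of the Lambda handler (written in Python for brevity, although the tutorial uses Node; the block content is illustrative) would wrap the Block Kit JSON in the response body:

import json

def lambda_handler(event, context):
    # The JSON produced by the Block Kit UI builder goes under the "blocks"
    # key of the Slack message payload.
    message = {
        "blocks": [
            {
                "type": "section",
                "text": {"type": "mrkdwn", "text": "*Sample Response* with more flair"},
            }
        ]
    }
    # The Lambda response still uses statusCode/body, but the body is now the
    # serialized message object rather than plain text.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(message),
    }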
I want to use the Permission helper intent from Dialogflow to get the user's name. My webhook, written in C#, sends the following JSON to Dialogflow to do this:
https://github.com/dialogflow/fulfillment-webhook-json/blob/master/responses/v2/ActionsOnGoogle/AskForPermission.json
{
  "payload": {
    "google": {
      "expectUserResponse": true,
      "systemIntent": {
        "intent": "actions.intent.PERMISSION",
        "data": {
          "@type": "type.googleapis.com/google.actions.v2.PermissionValueSpec",
          "optContext": "To deliver your order",
          "permissions": [
            "NAME",
            "DEVICE_PRECISE_LOCATION"
          ]
        }
      }
    }
  }
}
I also created the following intent:
But when I send this response to Dialogflow, nothing happens. In the diagnostic info it does say 'Webhook execution successful', and I see my response coming in.
I thought I would be able to answer yes or no and then receive the data in the next request from Dialogflow under the originalDetectIntentRequest property.
It looks like you're testing it through the Dialogflow "try it now" sidebar.
Permission requests require the Actions on Google Simulator, and only work for Actions on Google. You can click on the "See how it works in the Google Assistant" link that is also on the right side-bar to go to the simulator to test it.
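For reference, once the helper runs on a real Assistant surface and the user grants permission, the granted data should come back in the next webhook request, along the lines of this hedged Python sketch (the field names assume the Actions on Google v2 payload):

def extract_granted_data(req):
    # req: parsed JSON of the next Dialogflow webhook request.
    payload = req["originalDetectIntentRequest"]["payload"]

    # With the NAME permission granted, the profile carries the display name.
    display_name = payload["user"]["profile"]["displayName"]

    # With DEVICE_PRECISE_LOCATION granted, device coordinates are included.
    coordinates = payload["device"]["location"]["coordinates"]

    return display_name, coordinates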
I have an API.ai agent that sends a request (coming from the user) to a webhook that needs a lot of processing (more than 5 seconds) to produce the answer. As far as I know, there is no way to increase the response timeout in API.ai.
So, I have created two intents. The first one simply calls my webhook to start processing the result, and at the same time the webhook replies to the user, "Your request is under processing...".
The second intent has an event and action. The purpose of the new event is just to display the result to the user.
Once the result is ready, my backend application sends a curl request to trigger the event in the second intent, with the necessary parameter modifications such as sessionId, v, and timezone.
I have received the following JSON from API.AI (I created an example to simplify my case):
{ "id": "de31ee96-c42f-4f2d-8461-ee39279ec2ed", "timestamp": "2017-09-27T13:39:46.932Z", "lang": "en", "result": {
"source": "agent",
"resolvedQuery": "custom_event",
"action": "test",
"actionIncomplete": false,
"parameters": {
"user_name": "Sam"
},
"contexts": [
{
"name": "welcoming-followup",
"parameters": {
"name.original": "",
"user_name": "Sam",
"name": "",
"user_name.original": ""
},
"lifespan": 2
}
],
"metadata": {
"intentId": "c196a388-16ac-4966-b55c-7cd999a7d680",
"webhookUsed": false,
"webhookForSlotFillingUsed": "false",
"intentName": "Welcoming"
},
"fulfillment": {
"speech": "Hello Sam",
"messages": [
{
"type": 0,
"speech": "Hello Sam"
}
]
},
"score": 1.0 }, "status": {
"code": 200,
"errorType": "success" }, "sessionId": "67cb28fd-6871-750c-d668-d0b736b763ec" }
Here is the curl statement sent by my backend:
curl -X POST -H "Content-Type: application/json; charset=utf-8" -H "Authorization: Bearer I INSERTED THE CORRECT CODE HERE" --data '{"event": {"name": "custom_event", "data": {"name": "Sam"}}, "timezone": "America/New_York", "lang": "en", "sessionId": "a6ac2555-4b19-40f8-92ec-397f6a042dde"}' "https://api.api.ai/v1/query?v=20150910"
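For comparison, a hedged Python version of the same trigger (the access token is a placeholder):

import requests

# Trigger the custom event on the v1 /query endpoint, mirroring the curl above.
resp = requests.post(
    "https://api.api.ai/v1/query",
    params={"v": "20150910"},
    headers={"Authorization": "Bearer <client access token>"},  # placeholder
    json={
        "event": {"name": "custom_event", "data": {"name": "Sam"}},
        "timezone": "America/New_York",
        "lang": "en",
        "sessionId": "a6ac2555-4b19-40f8-92ec-397f6a042dde",
    },
)
print(resp.json())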
As shown in the JSON above, the API.ai agent received the trigger successfully. But the response that I specified in the "Response" section does not appear to the user.
I attached a screenshot for the second intent in the API.ai agent.
Note: I tried the agent in the developer console, the Web Demo, and Slack. None of them showed me (as a user) the specified response.
Am I doing something wrong?
API.AI is not really meant to handle event-driven activities. It is meant to be the intermediary in a conversation - so the normal pattern is:
User says something
API.AI processes this, possibly with a webhook, and sends a response.
Devices such as Google Home do not have a way to get a notification, so unless the user says something (step 1), you will never get to step 2.
When you try to trigger it manually, API.AI treats your trigger as step 1 and replies to your trigger. It has no way to send that reply back to the Assistant because it isn't having a conversation with the Assistant at that moment; it is having a conversation with whatever triggered it manually.
There isn't really a good way to do what you want right now. We know notifications are coming to the Assistant eventually (it was announced at I/O 2017), but we don't know if it will have an API or what it will look like. The Transaction API does have notifications as part of it, but Transactions are meant for activities where you are purchasing or reserving something. If you need to, you can use something like Firebase Cloud Messaging to let your user know they can ask for the result, but that's a sub-optimal experience.
I'm new to API.ai. I read the docs, but I didn't understand how API.ai works with many parameters.
I'll try to explain with an example:
I have management software that manages members/actions/projects, where I can get the actions of any member on any project using the normal interface.
Let's replace this with a smart bot, where the chat would run as I expect below:
USER: I want to see my actions for [ANY PROJECT NAME HERE]
BOT: Your action is XXXXXX.
OR
USER: Give me all the members of the project [ANY PROJECT NAME]
BOT: Members are "1-2-3-4-5-..."
I think you get what I mean; if you need more, I can explain further. How can I make API.ai understand this?
For API.ai to 'remember' values (i.e. store and retrieve information such as the names of projects, actions, and team members), you will need to connect API.ai to a webhook/database of your own; there isn't any way for API.ai to do this on its own.
Once you connect API.ai to a custom webhook/database, you can use the variables that API.ai parses for you to run your query. You simply need to build the intents corresponding to the search and the parameters involved.
Here's how the process would flow:
User asks "I want to see my actions for [ANY PROJECT NAME HERE]"
API.ai recognizes this as the intent 'search-action' for $project_name, which you have set up in API.ai beforehand
Your custom webhook receives a JSON request from API.ai that in this case would look like this:
{
  "id": "REDACTED",
  "timestamp": "2017-04-19T03:18:18.028Z",
  "lang": "en",
  "result": {
    "source": "agent",
    "resolvedQuery": "I want to see my actions for project Unicorn",
    "action": "search-action",
    "actionIncomplete": false,
    "parameters": {
      "project_name": "project Unicorn"
    },
    "contexts": [],
    "metadata": {
      "intentId": "REDACTED",
      "webhookUsed": "false",
      "webhookForSlotFillingUsed": "false",
      "intentName": "Search - Actions"
    },
    "fulfillment": {
      "speech": "",
      "messages": [
        {
          "type": 0,
          "speech": ""
        }
      ]
    },
    "score": 1
  },
  "status": {
    "code": 200,
    "errorType": "success"
  },
  "sessionId": "REDACTED"
}
So, your webhook has logic that recognizes that when result.action is 'search-action', it should run a database search for actions in the project given by result.parameters.project_name (see the sketch after the next step).
Your webhook fulfills the API.ai request or, alternatively, sends a message to the messaging platform directly (e.g. Facebook Messenger).
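A minimal sketch of that webhook logic in Python, using the v1 speech/displayText response keys that match the request format above (the database lookup is a hypothetical stand-in):

def handle_apiai_request(req):
    # Route an API.ai v1 webhook request based on its action.
    result = req["result"]

    if result["action"] == "search-action":
        project = result["parameters"]["project_name"]
        actions = find_actions_for(project)  # hypothetical database lookup
        reply = "Your actions for %s: %s" % (project, ", ".join(actions))
        # v1 webhook fulfillment response.
        return {"speech": reply, "displayText": reply, "source": "my-webhook"}

    # Otherwise fall back to whatever API.ai already planned to say.
    return {}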