My initial Play directive works fine and my Next intent works fine, but ENQUEUE does not: nothing plays after my first song finishes.
Here is the request I am receiving from Alexa:
{
"Token": "31|f2a55190-c12b-4a28-8e57-bfe1c36581f5|838",
"OffsetInMilliseconds": 0,
"Type": "AudioPlayer.PlaybackNearlyFinished",
"Context": null,
"RequestId": "amzn1.echo-api.request.aa961d0b-cc62-4921-a674-b8c2e00e0d22",
"Timestamp": "2017-02-19T04:34:33Z"
}
Below is the response that I am sending
{
"Card": null,
"OutputSpeech": null,
"Reprompt": null,
"ShouldEndSession": true,
"Directives": [{
"type": "AudioPlayer.Play",
"playBehavior": "ENQUEUE",
"audioItem": {
"stream": {
"token": "32|f2a55190-c12b-4a28-8e57-bfe1c36581f5|839",
"expectedPreviousToken": "32|f2a55190-c12b-4a28-8e57-bfe1c36581f5|838",
"url": "https://www.example.com/music/test/mysong.mp3",
"offsetInMilliseconds": 0
}
}
}]
}
I can't figure out why the next song is not playing.
Thanks.
I worked on a skill that heavily uses audio directives, so here are my two cents: make sure expectedPreviousToken is correct. It must be the exact token of the stream that just played, i.e. the token in the PlaybackNearlyFinished request. In the JSON you posted, the request token starts with "31|" but your expectedPreviousToken starts with "32|"; when they don't match, Alexa ignores the ENQUEUE directive.
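For reference, here is a minimal sketch of a PlaybackNearlyFinished handler in Node.js that reuses the incoming token as expectedPreviousToken. The helper name, next-track values and the raw (lower-case) Alexa request fields are illustrative, not taken from your code.

// Sketch only: build an ENQUEUE response from a PlaybackNearlyFinished request.
function buildEnqueueResponse(request, nextToken, nextUrl) {
  // Token of the stream that is about to finish, taken from the incoming request.
  const previousToken = request.token;

  return {
    version: "1.0",
    response: {
      shouldEndSession: true,
      directives: [{
        type: "AudioPlayer.Play",
        playBehavior: "ENQUEUE",
        audioItem: {
          stream: {
            token: nextToken,
            // Must exactly match the token of the currently playing stream,
            // otherwise Alexa ignores the ENQUEUE directive.
            expectedPreviousToken: previousToken,
            url: nextUrl,
            offsetInMilliseconds: 0
          }
        }
      }]
    }
  };
}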
I am trying to integrate a Dialogflow bot with Hangouts Chat (for G Suite). I have enabled the integration on Dialogflow and the basic intents are working fine.
In order to perform backend operations using fulfillment, I have created a firebase cloud function and added this as the webhook URL on DialogFlow fulfillment page.
I have written the cloud function code to identify the intent, and to generate the Webhook response format for a simple text response. This is working, and I am seeing the firestore data being modified in response to the intent.
However, for a more complicated intent, I wish to use more of the dynamic card-based responses that Chat offers. To achieve this, I have looked at the documentation for the Dialogflow card response.
I saw the following code at https://cloud.google.com/dialogflow/docs/integrations/hangouts. When I paste it into the Dialogflow intent editor UI under the Hangouts custom payload (after disabling the webhook integration), it works:
{
"hangouts": {
"header": {
"title": "Pizza Bot Customer Support",
"subtitle": "pizzabot#example.com",
"imageUrl": "..."
},
"sections": [{
"widgets": [{
"keyValue": {
"icon": "TRAIN",
"topLabel": "Order No.",
"content": "12345"
}
},
{
"keyValue": {
"topLabel": "Status",
"content": "In Delivery"
}
}]
},
{
"header": "Location",
"widgets": [{
"image": {
"imageUrl": "https://dummyimage.com/600x400/000/fff"
}
}]
},
{
"header": "Buttons - i could leave the header out",
"widgets": [{
"buttons": [{
"textButton": {
"text": "OPEN ORDER",
"onClick": {
"openLink": {
"url": "https://example.com/orders/..."
}
}
}
}]
}]
}]
}
}
This is exactly what I need, but I need to produce this response from the webhook, and I can't work out the correct response format to map between the two.
When I try to return the same payload from the webhook, I get no reply in Hangouts Chat. When I check the History section in the Dialogflow UI, this is the response structure shown in the raw interaction log:
{
"queryText": "<redacted>",
"parameters": {},
"intent": {
"id": "<redacted>",
"displayName": "<redacted>",
"priority": 500000,
"webhookState": "WEBHOOK_STATE_ENABLED"
},
"intentDetectionConfidence": 1,
"diagnosticInfo": {
"webhook_latency_ms": 284
},
"languageCode": "en",
"slotfillingMetadata": {
"allRequiredParamsPresent": true
},
"id": "<redacted>",
"sessionId": "<redacted>",
"timestamp": "2020-07-30T12:05:29.094Z",
"source": "agent",
"webhookStatus": {
"webhookUsed": true,
"webhookPayload": {
"hangouts": {
"header": {
"subtitle": "pizzabot#example.com",
"title": "Pizza Bot Customer Support",
"imageUrl": "..."
},
"sections": [
{
"widgets": [
{
"keyValue": {
"content": "12345",
"topLabel": "Order No.",
"icon": "TRAIN"
}
},
{
"keyValue": {
"topLabel": "Status",
"content": "In Delivery"
}
}
]
},
{
"widgets": [
{
"image": {
"imageUrl": "https://dummyimage.com/600x400/000/fff"
}
}
],
"header": "Location"
},
{
"widgets": [
{
"buttons": [
{
"textButton": {
"text": "OPEN ORDER",
"onClick": {
"openLink": {
"url": "https://example.com/orders/..."
}
}
}
}
]
}
],
"header": "Buttons - i could leave the header out"
}
]
}
},
"webhookStatus": {
"message": "Webhook execution successful"
}
},
"agentEnvironmentId": {
"agentId": "<redacted>",
"cloudProjectId": "<redacted>"
}
}
I also found this page in the Chat docs, which explains how to show an interactive card-based UI: https://developers.google.com/hangouts/chat/how-tos/cards-onclick. However, I'm not able to understand how to integrate it with the webhook.
UPDATE
I followed the tutorial at https://www.leeboonstra.com/Bots/custom-payloads-rich-cards-dialogflow/ and was able to get the card response to show up using the sample code it provides. It uses the deprecated dialogflow-fulfillment library (https://github.com/dialogflow/dialogflow-fulfillment-nodejs). Here is the code that makes it work:
// Payload comes from the deprecated dialogflow-fulfillment library
const { Payload } = require('dialogflow-fulfillment');

let payload = new Payload("hangouts", json, {
  rawPayload: true,
  sendAsMessage: true,
});
agent.add(payload);
Here the json variable holds the hangouts JSON structure I mentioned earlier. So now I'm able to map to the correct response format using the deprecated API. However, I'm not able to get the button to send the right response back to my backend. Here is the buttons field that I modified in the previous JSON:
"buttons": [
{
"textButton": {
"text": "Click Me",
"onClick": {
"action": {
"actionMethodName": "snooze",
"parameters": [
{
"key": "time",
"value": "1 day"
},
{
"key": "id",
"value": "123456"
}
]
}
}
}
}
]
As far as I know, responding to a Google Chat (formerly Hangouts Chat) button isn't possible when using the direct Dialogflow integration.
The problem is that a button click can be handled in one of two ways:
An event is sent back to the bot code indicating the click.
The onClick.openLink.url property is used, as most of your tests show. This takes the person who clicks the button to the URL in question, but once there, they're taken out of the bot flow.
However, the documentation for the Hangouts Chat integration with Dialogflow doesn't provide any information about how the click event is passed to Dialogflow, and the last time I tested it, it wasn't passed at all.
You can write your own integration using Google Chat's API on something like Cloud Functions or Apps Script, and have your script call Dialogflow's Detect Intent API to determine which Intent would be triggered by the user (and determine replies or call the webhook for additional processing). Under this scheme, you can choose how to handle the onClick event, as sketched below. Making your own integration also gives you a way to use Incoming Webhooks, which isn't possible with the Dialogflow integration.
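As a rough illustration of that approach, here is a minimal sketch of a Chat bot endpoint (for example a Cloud Function) that handles card clicks itself and forwards plain messages to Dialogflow via the @google-cloud/dialogflow client. The project ID, reply wording and export name are placeholders.

// Sketch only: Google Chat endpoint that handles CARD_CLICKED events itself
// and routes text messages through Dialogflow's Detect Intent API.
const dialogflow = require('@google-cloud/dialogflow');
const sessionsClient = new dialogflow.SessionsClient();

exports.chatBot = async (req, res) => {
  const event = req.body;

  // Button clicks arrive as CARD_CLICKED events carrying the action you defined.
  if (event.type === 'CARD_CLICKED') {
    const { actionMethodName, parameters } = event.action;
    // Handle the click here (update Firestore, etc.) and reply directly.
    return res.json({ text: `Got click: ${actionMethodName} ${JSON.stringify(parameters)}` });
  }

  // Plain messages go through Dialogflow to find the matching intent.
  const sessionId = event.space.name.replace('spaces/', '');
  const sessionPath = sessionsClient.projectAgentSessionPath('my-gcp-project', sessionId); // placeholder project ID
  const [response] = await sessionsClient.detectIntent({
    session: sessionPath,
    queryInput: { text: { text: event.message.text, languageCode: 'en' } },
  });

  // Return either the fulfillment text or your own card JSON.
  res.json({ text: response.queryResult.fulfillmentText });
};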
I am using the REST API to gather some information from Azure DevOps. I want to get the full build results, including every stage, but this does not appear to be covered in the documentation, and the simple build API call only gives me limited data. Is there any way to collect stage-wise information, such as whether a stage succeeded and its start and end times?
I will be grateful for any help.
You should first call this URL:
https://dev.azure.com/<YourOrg>/<Your-project>/_apis/build/builds/<buildid>?api-version=5.1
In _links you will find the timeline link:
"_links": {
"self": {
"href": "https://dev.azure.com/thecodemanual/4fa6b279-3db9-4cb0-aab8-e06c2ad550b2/_apis/build/Builds/460"
},
"web": {
"href": "https://dev.azure.com/thecodemanual/4fa6b279-3db9-4cb0-aab8-e06c2ad550b2/_build/results?buildId=460"
},
"sourceVersionDisplayUri": {
"href": "https://dev.azure.com/thecodemanual/4fa6b279-3db9-4cb0-aab8-e06c2ad550b2/_apis/build/builds/460/sources"
},
"timeline": {
"href": "https://dev.azure.com/thecodemanual/4fa6b279-3db9-4cb0-aab8-e06c2ad550b2/_apis/build/builds/460/Timeline"
},
"badge": {
"href": "https://dev.azure.com/thecodemanual/4fa6b279-3db9-4cb0-aab8-e06c2ad550b2/_apis/build/status/30"
}
},
and there you will find what you are looking for:
{
"previousAttempts": [],
"id": "67c760f8-35f0-533f-1d24-8e8c3788c96d",
"parentId": null,
"type": "Stage",
"name": "A",
"startTime": "2020-04-24T08:42:37.2133333Z",
"finishTime": "2020-04-24T08:42:46.9933333Z",
"currentOperation": null,
"percentComplete": null,
"state": "completed",
"result": "succeeded",
"resultCode": null,
"changeId": 12,
"lastModified": "0001-01-01T00:00:00",
"workerName": null,
"order": 1,
"details": null,
"errorCount": 0,
"warningCount": 0,
"url": null,
"log": null,
"task": null,
"attempt": 1,
"identifier": "A"
},
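For completeness, here is a minimal sketch (Node.js 18+ with the global fetch, authenticated with a personal access token; the organization, project and build ID values are placeholders) that pulls the Timeline and lists the per-stage results:

// Sketch: fetch the build timeline and print per-stage results.
// Org, project, build ID and the PAT environment variable are placeholders.
const org = 'my-org';
const project = 'my-project';
const buildId = 460;
const pat = process.env.AZDO_PAT; // personal access token

const auth = Buffer.from(`:${pat}`).toString('base64');
const url = `https://dev.azure.com/${org}/${project}/_apis/build/builds/${buildId}/timeline?api-version=5.1`;

fetch(url, { headers: { Authorization: `Basic ${auth}` } })
  .then((res) => res.json())
  .then((timeline) => {
    // Timeline records cover stages, phases, jobs and tasks; keep only the stages.
    const stages = timeline.records.filter((r) => r.type === 'Stage');
    for (const s of stages) {
      console.log(`${s.name}: ${s.result} (${s.startTime} -> ${s.finishTime})`);
    }
  });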
You can also refer to the API below; this REST API was grabbed from the browser's Network tab.
GET https://dev.azure.com/{org}/{pro}/_build/results?buildId={id}&__rt=fps&__ver=2
Stage results are represented by numbers, e.g. 0 -> completed, 5 -> canceled, etc.
The disadvantage of this API is that the returned content cannot be read intuitively. In contrast, the workaround provided by Krzysztof Madej is more convenient and intuitive.
I believe I have a syntax issue in getting my Autopilot response to work. My program works, but after Autopilot asks the question, it does not give the user much time to say their response before stopping/hanging up the call.
Is there a way to add a timeout or pause? I have tried the syntax for this, but it does not work. This is what I have:
"actions": [
{
"collect": {
"name": "user_input",
"questions": [
{
"question": "Welcome to the modem status check line?",
"name": "search",
"type": "Twilio.NUMBER"
}
],
"on_complete": {
"redirect": {
"method": "POST",
"uri": "https://website......"
}
}
}
}
]
}
When I add the following
{
"listen":true
}
anywhere in this JSON, it does not work and gives me the error:
.actions[0].collect.questions[0] should NOT have additional properties
I have also tried timeout: 3 and it does not work either.
I have also tried
{
"listen": true
}
and
"listen": {
before my task.
Twilio developer evangelist here.
You can't use the Listen attribute in a Collect flow, and there is no easy way to add a timeout or pause. You can, however, add a Validate attribute to your Collect questions, as shown below, and increase max_attempts so your Autopilot bot repeats the question or asks the user to say their response again.
I'm not sure why the call is hanging up so quickly for you, though: when I use my bots over a phone call, the call stays open for quite a long time waiting for the user's response.
exports.handler = function(context, event, callback) {
const responseObject = {
"actions": [
{
"collect": {
"name": "collect_clothes_order",
"questions": [
{
"question": "What is your first name?",
"name": "first_name",
"type": "Twilio.FIRST_NAME"
},
{
"question": "What type of clothes would you like?",
"name": "clothes_type",
"type": "CLOTHING",
"validate": {
"on_failure": {
"messages": [
{
"say": "Sorry, that's not a clothing type we have. We have shirts, shoes, pants, skirts, and dresses."
}
],
"repeat_question": true
},
"on_success": {
"say": "Great, I've got your the clothing type you want."
},
"max_attempts": {
"redirect": "task://collect_fallback",
"num_attempts": 3
}
}
}
],
"on_complete": {
"redirect": "https://rosewood-starling-9398.twil.io/collect"
}
}
}
]
};
callback(null, responseObject);
};
Let me know if this helps at all!
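For reference, the collect_fallback task referenced by task://collect_fallback could be as simple as another Function that re-prompts and keeps listening; a minimal sketch (the wording is up to you) follows:

// Sketch of a collect_fallback handler: apologise, then hand control back to
// Autopilot so it keeps listening for the caller.
exports.handler = function(context, event, callback) {
  const responseObject = {
    "actions": [
      {
        "say": "Sorry, I didn't catch that. Let's try again."
      },
      {
        "listen": true
      }
    ]
  };
  callback(null, responseObject);
};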
I am trying to set up a bot using Dialogflow with a webhook. My webhook returns the response for a basic text message along with outputContexts (in particular, I am interested in the parameters passed back from the webhook). This works. But when I use the V2 basic card response together with outputContexts, the Actions on Google simulator says, "My test app isn't responding right now. Try again soon." It works if I remove outputContexts from the response. Please help.
Steps I have tested:
1. Dialogflow testing
Basic message (fulfillmentText) and outputContexts: works fine
Card and outputContexts: not working
Card and followupEvent: works
2. Actions on Google
Basic message (fulfillmentText) and outputContexts: works fine
Card and outputContexts: not working
Card and followupEvent: not working
Here is the response I am sending:
{
"payload": {
"google": {
"expectUserResponse": true,
"richResponse": {
"items": [
{
"simpleResponse": {
"textToSpeech": "This is sample Response"
}
},
{
"basicCard": {
"title": "testbot",
"formattedText": "This is sample Response",
"image": {
"url": "example.com/image.png",
"accessibilityText": "samplebot"
},
"buttons": [
{
"title": "example",
"openUrlAction": {
"url": "http://example.com"
}
}
]
}
}
]
}
}
},
"outputContexts": [{
"name": "projects/<projectid>/agent/sessions/<sessionid>/contexts/<contextname>",
"lifespanCount": 1,
"parameters": {
"param1": "123",
"param2": "456"
}
}]
}
I assume you've used the sample code in your action. However, unless you change the url fields, your action cannot resolve the imageUrl and openUrlAction.
If you replace the url fields with actual links (not "http://example.com"), your app will respond properly.
Also make sure you've imported the necessary classes, e.g.:
const { dialogflow, BasicCard, Image, Button } = require('actions-on-google');
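If you build the response with the actions-on-google library rather than hand-writing the JSON, a rough sketch of the same card plus an output context could look like the following (the intent name, context name and URLs are placeholders):

// Sketch: build the basic card and set an outgoing context via the
// actions-on-google Dialogflow client. Names and URLs are illustrative.
const { dialogflow, BasicCard, Image, Button } = require('actions-on-google');

const app = dialogflow();

app.intent('sample_intent', (conv) => {
  // Parameters to carry forward; read them back later with conv.contexts.get().
  conv.contexts.set('contextname', 1, { param1: '123', param2: '456' });

  conv.ask('This is sample Response');
  conv.ask(new BasicCard({
    title: 'testbot',
    text: 'This is sample Response',
    image: new Image({
      url: 'https://example.com/image.png', // must be a reachable HTTPS image URL
      alt: 'samplebot',
    }),
    buttons: new Button({
      title: 'example',
      url: 'https://example.com',
    }),
  }));
});

exports.fulfillment = app; // wire this into your HTTP framework (Express, Cloud Functions, ...)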
I've just started building an app with Dialogflow. I have a service written in Java hosted in the cloud (not using Firebase). Basically, I receive the data from the agent and send the response back as JSON. For a simple query it works as expected: if I say "My name is X", the service responds with "Hello X" and that is played back in the response. The JSON response is sent as
{speech: "Hello X", type:"0"}
Now I want to fetch the user's location, so I need to ask the user for permission to access it. I have a separate intent that does not have any training phrases; it has the event actions_intent_PERMISSION.
I am sending the following response:
{
"conversationToken": "[\"_actions_on_google_\"]",
"expectUserResponse": true,
"expectedInputs": [
{
"inputPrompt": {
"richInitialPrompt": {
"items": [
{
"simpleResponse": {
"textToSpeech": "PLACEHOLDER_FOR_PERMISSION"
}
}
]
}
},
"possibleIntents": [
{
"intent": "actions.intent.PERMISSION",
"inputValueData": {
"#type": "type.googleapis.com/google.actions.v2.PermissionValueSpec",
"optContext": "To locate you",
"permissions": [
"NAME"
]
}
}
],
"speechBiasingHints": [
"$geo_city",
"$event_category",
"$event_date"
]
}
],
"responseMetadata": {
"status": {},
"queryMatchInfo": {
"queryMatched": true,
"intent": "1ec64dc5-a6f4-44f6-8483-633b8638c729"
}
}
}
But I am getting a 400 Bad Request response. Is there anything I am doing wrong here, or am I missing anything?
There are three issues.
The first is that the actions_intent_PERMISSION event is sent to your agent after the user responds to a permission request, so this should not be the intent that triggers the request itself.
Second, you're asking for the user's name, but not their location. You want either DEVICE_COARSE_LOCATION or DEVICE_PRECISE_LOCATION.
The third, and much bigger, issue is that the JSON you're sending is the format used by the Actions SDK. Since you're using Dialogflow, you'll be using a different response format: the basic Dialogflow response, plus Actions on Google-specific content in the data.google JSON property.
Your response should look something more like this:
{
"data": {
"google": {
"expectUserResponse": true,
"systemIntent": {
"intent": "actions.intent.PERMISSION",
"data": {
"#type": "type.googleapis.com/google.actions.v2.PermissionValueSpec",
"optContext": "To locate you",
"permissions": [
"NAME",
"DEVICE_PRECISE_LOCATION"
]
}
}
}
}
}
Dialogflow also has some other examples of requests and replies that should help for the other parts of your conversation.
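Once the permission is granted, the follow-up request (the one matched by your intent with the actions_intent_PERMISSION event) carries the location on the Actions on Google side of the payload. As a rough sketch only, assuming the Dialogflow v1 webhook request format that matches the data.google response above (verify the field paths against a request actually logged by your service), reading it could look like this:

// Sketch: handler for the intent triggered by the actions_intent_PERMISSION event.
// Field paths are assumptions based on the Dialogflow v1 / Actions on Google format;
// check them against a real logged request before relying on them.
function handlePermissionResult(requestBody) {
  const device =
    requestBody.originalRequest &&
    requestBody.originalRequest.data &&
    requestBody.originalRequest.data.device;

  if (device && device.location && device.location.coordinates) {
    const { latitude, longitude } = device.location.coordinates;
    return { speech: `You are at ${latitude}, ${longitude}` };
  }

  // Permission was declined or the location was not included.
  return { speech: "I couldn't get your location." };
}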