Custom Next Intent is not matched while the Media Player is playing - node.js

I made a simple Media Player using MediaResponse.
Google automatically handles play, pause, stop, and resume.
Since Google does not support a next function yet, I created a custom Next intent to handle it.
After saying "talk to Dr. Media" and opening the action, two cases left me confused:
The MediaResponse shows correctly but is not playing yet (the Play button has not been pressed).
Case 1 - If I then say "next" (or "next song"), it matches the phrases defined in my custom intent, and the Next function works.
Case 2 - If I press the Play button to start the audio and then say "next" (or "next song"), it does not match my defined phrases (the Assistant often just says "Okay" and nothing happens), so the Next function does not work. (See the images below.)
In case 2, how can I catch the phrases "next", "next song", "play next", ... while the Media Player is playing? (In other words, these phrases do not trigger my custom intent.)
Please help me, thanks.
P.S.: This happens on mobile phones.
actions_intent_NEXT.json
{
"id": "01055a59-26a0-4f39-a770-6bc5404482d9",
"name": "actions_intent_NEXT",
"auto": true,
"contexts": [
"actions_capability_screen_output",
"actions_capability_media_response_audio"
],
"responses": [
{
"resetContexts": false,
"affectedContexts": [
{
"name": "actions_capability_screen_output",
"parameters": {},
"lifespan": 5
},
{
"name": "actions_capability_media_response_audio",
"parameters": {},
"lifespan": 5
}
],
"parameters": [
{
"id": "2ce8514c-a346-4251-9a35-cab32d2c9d7c",
"required": false,
"dataType": "#nextPlay",
"name": "nextPlay",
"value": "$nextPlay",
"isList": false
}
],
"messages": [
{
"type": 0,
"speech": []
}
],
"defaultResponsePlatforms": {},
"speech": []
}
],
"priority": 500000,
"cortanaCommand": {
"navigateOrService": "NAVIGATE",
"target": ""
},
"webhookUsed": true,
"webhookForSlotFilling": false,
"lastUpdate": 1542252675,
"fallbackIntent": false,
"events": [
{
"name": "actions_intent_NEXT"
}
],
"userSays": [
{
"id": "e28a4087-2bf9-494d-9790-c7347f870ee4",
"data": [
{
"text": "next song",
"alias": "nextPlay",
"meta": "#nextPlay",
"userDefined": true
}
],
"isTemplate": false,
"count": 0,
"updated": 1542249008,
"isAuto": false
},
{
"id": "7a8cf2a2-d131-490d-977d-1212f4642f52",
"data": [
{
"text": "next",
"alias": "nextPlay",
"meta": "#nextPlay",
"userDefined": true
}
],
"isTemplate": false,
"count": 0,
"updated": 1542248700,
"isAuto": false
}
],
"followUpIntents": [],
"liveAgentHandoff": false,
"endInteraction": false,
"templates": []
}
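For reference, a minimal fulfillment handler for this intent with the actions-on-google Node.js client library might look like the sketch below. The getNextTrack() helper, the track URL, and the export name are placeholders, not part of the original question:

'use strict';

const functions = require('firebase-functions');
const { dialogflow, MediaObject, Suggestions } = require('actions-on-google');

const app = dialogflow();

// Placeholder: look up the next track for this user/session.
const getNextTrack = (conv) => ({
  name: 'Next track',
  url: 'https://example.com/audio/next.mp3'
});

// Fires when the actions_intent_NEXT event or one of the training phrases matches.
app.intent('actions_intent_NEXT', (conv) => {
  const track = getNextTrack(conv);
  conv.ask('Playing the next song.');
  conv.ask(new MediaObject({ name: track.name, url: track.url }));
  conv.ask(new Suggestions(['next', 'pause'])); // suggestion chips are required on screen devices
});

exports.dialogflowFulfillment = functions.https.onRequest(app);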

Related

Alexa CBT Test: Failed Test due to DeepQuery=True

My colleagues and I have been working to fix a reported issue on our Amazon Alexa CBT Test regarding the value “DeepQuery=true”.
Our code has been modified so that every state change is reported automatically, and all of the interfaces we use have the properties "proactivelyReported" and "retrievable" set to true.
As has been suggested by the WWA-Support we used the Smart Home Debugger of the Developer Console to validate the ReportEvents (e.g. Discovery or ChangeReport) and we checked the state of our device on the “View Device State” page (both pages are referenced on: https://developer.amazon.com/en-US/docs/alexa/smarthome/debug-your-smart-home-skill.html).
For debugging purposes we scaled our device capabilities down to just the PowerController. The AddOrUpdateReport of Alexa.Discovery now looks exactly as expected/documented to us. The same goes for the ChangeReport, which we send proactively right after the AddOrUpdateReport (sample reports for both are provided at the end).
Unfortunately we are still faced with the issue that "DeepQuery=true" is shown on the "View Device State" page.
If we set the interface property "retrievable" to false, "DeepQuery" becomes false, but the Alexa app no longer retains the current state of the device. In this configuration the Alexa app can only be used to send commands, which in turn causes other test cases to fail.
Does anyone know how to solve this issue?
How can we set “proactivelyReported” and “retrievable” to true and have “DeepQuery=false”?
Any help would be greatly appreciated, and I will gladly provide more information if needed.
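For reference, with "retrievable" set to true Alexa can send a ReportState directive that has to be answered with a StateReport. A minimal Node.js Lambda sketch of such a handler is shown below; the getPowerState() lookup and the messageId generation are placeholders, not our actual implementation:

// Sketch: answering Alexa's ReportState directive with a StateReport.
const getPowerState = async (endpointId) => 'ON'; // placeholder for the real state store

exports.handler = async (event) => {
  const { header, endpoint } = event.directive;

  if (header.namespace === 'Alexa' && header.name === 'ReportState') {
    return {
      event: {
        header: {
          namespace: 'Alexa',
          name: 'StateReport',
          payloadVersion: '3',
          messageId: 'msg-' + Date.now(),           // should be a fresh UUID
          correlationToken: header.correlationToken // echo the token back
        },
        endpoint: { endpointId: endpoint.endpointId },
        payload: {}
      },
      context: {
        properties: [
          {
            namespace: 'Alexa.PowerController',
            name: 'powerState',
            value: await getPowerState(endpoint.endpointId),
            timeOfSample: new Date().toISOString(),
            uncertaintyInMilliseconds: 500
          }
        ]
      }
    };
  }
  // ... other directives (Discovery, PowerController, etc.) handled elsewhere
};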
Sample AddOrUpdateReport from Smart Home Debugger
{
"header": {
"namespace": "SkillDebugger",
"name": "CaptureDebuggingInfo",
"messageId": "05b030fb-6393-4ae0-80d0-47fc27876f0e"
},
"payload": {
"skillId": "amzn1.ask.skill.055ca62d-3cf8-4f51-a683-9a98b36f4637",
"timestamp": "2021-09-09T13:28:21.629Z",
"dialogRequestId": null,
"skillRequestId": null,
"type": "SmartHomeAddOrUpdateReportSuccess",
"content": {
"addOrUpdateReport": {
"event": {
"header": {
"namespace": "Alexa.Discovery",
"name": "AddOrUpdateReport",
"messageId": "2458b969-7c3e-47e2-ab0b-6e13a999be76",
"payloadVersion": "3"
},
"payload": {
"endpoints": [
{
"manufacturerName": "Our Company Name",
"description": "Our Product Name",
"endpointId": "device--cb12b420-1171-11ec-81f3-cb34e87ea438",
"friendlyName": "Lampe 1",
"capabilities": [
{
"type": "AlexaInterface",
"version": "3",
"interface": "Alexa.PowerController",
"properties": {
"supported": [
{
"name": "powerState"
}
],
"proactivelyReported": true,
"retrievable": true
}
},
{
"type": "AlexaInterface",
"interface": "Alexa",
"version": "3"
}
],
"displayCategories": [
"LIGHT"
],
"connections": [],
"relationships": {},
"cookie": {}
}
],
"scope": null
}
}
}
}
}
}
Sample ChangeReport from Smart Home Debugger
{
"header": {
"namespace": "SkillDebugger",
"name": "CaptureDebuggingInfo",
"messageId": "194a96a1-6747-46ba-8751-5c9ef715fd34"
},
"payload": {
"skillId": "amzn1.ask.skill.055ca62d-3cf8-4f51-a683-9a98b36f4637",
"timestamp": "2021-09-09T13:28:23.227Z",
"dialogRequestId": null,
"skillRequestId": null,
"type": "SmartHomeChangeReportSuccess",
"content": {
"changeReport": {
"event": {
"header": {
"namespace": "Alexa",
"name": "ChangeReport",
"messageId": "8972e386-9622-40e6-85e7-1a7d81c79c8a",
"payloadVersion": "3"
},
"endpoint": {
"scope": null,
"endpointId": "device--cb12b420-1171-11ec-81f3-cb34e87ea438"
},
"payload": {
"change": {
"cause": {
"type": "APP_INTERACTION"
},
"properties": [
{
"namespace": "Alexa.PowerController",
"name": "powerState",
"value": "ON",
"timeOfSample": "2021-09-09T13:28:18.088Z",
"uncertaintyInMilliseconds": 500
}
]
}
}
},
"context": {
"properties": []
}
}
}
}
}

How to index complex types into Edm.ComplexType with Azure Cognitive Search

I am indexing data produced by a custom skill into an Azure Search index. This custom skill produces complex data which I want to preserve in the index.
Source data is coming from blob storage, and I am constrained to the REST API unless there is a very solid argument for using the .NET SDK.
Current code
The following is a brief rundown of what I currently have. I cannot change the index's fields or the format of the data produced by the endpoint used by the custom skill.
Complex data
The following is an example of complex data produced by the custom skill (in the correct value/recordId/etc. format):
{
"field1": 0.135412,
"field2": 0.123513,
"field3": 0.243655
}
Custom skill
Here is the custom skill which creates said data:
{
"#odata.type": "#Microsoft.Skills.Custom.WebApiSkill",
"uri": "https://myfunction.azurewebsites.com/api,
"httpHeaders": {},
"httpMethod": "POST",
"timeout": "PT3M50S",
"batchSize": 1,
"degreeOfParallelism": 5,
"name": "MySkill",
"context": "/document/mycomplex
"inputs": [
{
"name": "text",
"source": "/document/content"
}
],
"outputs": [
{
"name": "field1",
"targetName": "field1"
},
{
"name": "field2",
"targetName": "field2"
},
{
"name": "field3",
"targetName": "field3"
}
]
}
I have attempted several variations, notably using the ShaperSkill with each field as an input and the output "targetName" set to "mycomplex" (with the appropriate context).
Indexer
Here is the indexer's output field mapping for the skill:
{
"sourceFieldName": "/document/mycomplex,
"targetFieldName": "mycomplex"
}
I have tried several variations, such as "sourceFieldName": "/document/mycomplex/*".
Search index
And this is the targeted index field:
{
"name": "mycomplex",
"type": "Edm.ComplexType",
"fields": [
{
"name": "field1",
"type": "Edm.Double",
"retrievable": true,
"filterable": true,
"sortable": true,
"facetable": false,
"searchable": false
},
{
"name": "field2",
"type": "Edm.Double",
"retrievable": true,
"filterable": true,
"sortable": true,
"facetable": false,
"searchable": false
},
{
"name": "field3",
"type": "Edm.Double",
"retrievable": true,
"filterable": true,
"sortable": true,
"facetable": false,
"searchable": false
}
]
}
Result
The result is usually an error similar to: Could not map output field 'mycomplex' to search index. Check your indexer's 'outputFieldMappings' property.
This may be a mistake with the context of your skill. Instead of setting the context to /document/mycomplex, can you try setting it to /document? You can then add a ShaperSkill, also with its context set to /document and its output field named mycomplex, to generate the expected complex type shape.
Example skills:
"skills":
[
{
"#odata.type": "#Microsoft.Skills.Custom.WebApiSkill",
"uri": "https://myfunction.azurewebsites.com/api,
"httpHeaders": {},
"httpMethod": "POST",
"timeout": "PT3M50S",
"batchSize": 1,
"degreeOfParallelism": 5,
"name": "MySkill",
"context": "/document"
"inputs": [
{
"name": "text",
"source": "/document/content"
}
],
"outputs": [
{
"name": "field1",
"targetName": "field1"
},
{
"name": "field2",
"targetName": "field2"
},
{
"name": "field3",
"targetName": "field3"
}
]
},
{
"#odata.type": "#Microsoft.Skills.Util.ShaperSkill",
"context": "/document",
"inputs": [
{
"name": "field1",
"source": "/document/field1"
},
{
"name": "field2",
"source": "/document/field2"
},
{
"name": "field3",
"source": "/document/field3"
}
],
"outputs": [
{
"name": "output",
"targetName": "mycomplex"
}
]
}
]
Please refer to the documentation on shaper skill for specifics.
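With that shape in place, the indexer's output field mapping should be able to stay a simple one-to-one mapping from the shaped node to the complex field. A sketch, using the field names from the question:

"outputFieldMappings": [
  {
    "sourceFieldName": "/document/mycomplex",
    "targetFieldName": "mycomplex"
  }
]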

Mark GetStream activity as seen and read

I want to apply mark_seen and mark_read for a specific activity of getstream.io, but the counter is still the same. Here is my code:
const notification = client.feed('NOTIFICATION', '1');
notification.get({ mark_seen: ['1111111-f22d-333-33-0000001f'] }); //getstream activity id
Update:
Here is my get stream response:
{
"results": [
{
"activities": [
{
"actor": "user:1",
"foreign_id": "POST:50",
"id": "00000000-0000-11e0-8000-800001be0000",
"object": "POST:50",
"origin": null,
"target": "USER:1",
"time": "2018-01-12T14:08:03.000000",
"verb": "CREATE"
}
],
"activity_count": 1,
"actor_count": 1,
"created_at": "2018-01-12T14:08:12.324882",
"group": "user:1_POST:50",
"id": "111111bb-f1a1-11e1-1111-111111111b11.user:1_POST:50",
"is_read": false,
"is_seen": false,
"updated_at": "2018-01-12T14:08:12.324882",
"verb": "CREATE"
}
],
"next": "",
"duration": "25.85ms",
"unseen": 3,
"unread": 3
}
I am using activity.id to mark my activity as read, but it's not working.
However, it works (and decrements the counter) when I use the group's id instead.
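In other words, mark_seen and mark_read expect the id of the notification group, not the id of an activity inside it. A minimal sketch using the group id from the response above:

const notification = client.feed('NOTIFICATION', '1');

// Pass the notification group id (results[].id), not the activity id.
notification.get({
  mark_seen: ['111111bb-f1a1-11e1-1111-111111111b11.user:1_POST:50'],
  mark_read: ['111111bb-f1a1-11e1-1111-111111111b11.user:1_POST:50']
});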

List_card in AoG passing item title to the next query instead of key

In an app, I'm returning messages of type list_card with an option key for an intent.
Here is the json of a sample query:
{
"id": "275212ef-cf97-4576-afa7-facfbc044ada",
"timestamp": "2017-07-17T17:36:03.655Z",
"lang": "en",
"result": {
"source": "agent",
"resolvedQuery": "who is Sneha",
"action": "cp.name_search",
"actionIncomplete": false,
"parameters": {
"keyword": "Sneha"
},
"contexts": [
{
"name": "cpname_search-followup",
"parameters": {
"keyword.original": "Sneha",
"keyword": "Sneha"
},
"lifespan": 2
},
{
"name": "cpuid_search-followup",
"parameters": {
"keyword.original": "Sneha",
"keyword": "Sneha"
},
"lifespan": 1
}
],
"metadata": {
"intentId": "86bd1a17-8e9a-4956-b270-5fb4ac952f5f",
"webhookUsed": "true",
"webhookForSlotFillingUsed": "false",
"webhookResponseTime": 135,
"intentName": "cp.name_search"
},
"fulfillment": {
"speech": "Searching...",
"source": "agent",
"messages": [
{
"type": "simple_response",
"platform": "google",
"textToSpeech": "Here are the search results. \nWant anything else?"
},
{
"type": "list_card",
"platform": "google",
"title": "Search results",
"items": [
{
"optionInfo": {
"key": "uid 72",
"synonyms": []
},
"title": "Sneha Vasista",
"description": "Srinivas Institute of Technology",
"image": {
"url": "//www.curlpad.com/assets/img/custom_images/user.png"
}
},
{
"optionInfo": {
"key": "uid 2053",
"synonyms": []
},
"title": "Sneha Bhat",
"description": "Canara Engineering College",
"image": {
"url": "//www.curlpad.com/assets/img/custom_images/user.png"
}
},
{
"optionInfo": {
"key": "uid 2114",
"synonyms": []
},
"title": "Sneha Sajan",
"description": "P.A College of Engineering",
"image": {
"url": "//www.curlpad.com/assets/img/custom_images/user.png"
}
},
{
"optionInfo": {
"key": "uid 2320",
"synonyms": []
},
"title": "Sneha ",
"description": "sdit",
"image": {
"url": "//www.curlpad.com/assets/img/custom_images/user.png"
}
},
{
"optionInfo": {
"key": "uid 2363",
"synonyms": []
},
"title": "Sneha ",
"description": "Srinivas School of Engineering, Mukka",
"image": {
"url": "//www.curlpad.com/assets/img/custom_images/user.png"
}
}
]
},
{
"type": "0",
"speech": "Here are the search results."
}
]
},
"score": 1
},
"status": {
"code": 200,
"errorType": "success"
},
"sessionId": "e6aa9e52-a9e1-481a-adb5-476c5b386e02"
}
Now the problem is, when I tap a list item in the AoG simulator, it passes the title of the item to the next query.
But while testing in the API.AI simulator, it behaves correctly and passes the key to the next query.
What can be the problem here?
Any hints?
If you're using API.AI, the selection will arrive at your Intent as an actions_intent_OPTION Event.
One good solution is to have the Intent that sends the list set an Output Context. Then create a Fallback Intent for that Context with actions_intent_OPTION as its Event and your desired action; it should handle both the voice and tap responses.
You will find your option key at ["originalRequest"]["data"]["inputs"][0]["arguments"][0]["textValue"] instead of in a parameter.
You can also see the selected value in the actions_intent_option context.
What you need to do is set up a fallback intent directly under your current intent.
For example, if you display the list from the default welcome intent, you can do the following.
Click "Add follow-up intent" and choose fallback.
Don't forget to set the action and enable the webhook in the fallback intent.
Now, you should be able to retrieve your answer from the fallback intent using the following code.
const param = app.getContextArgument('actions_intent_option','OPTION').value;
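Put together, a fallback-intent handler along these lines should work. This is only a sketch using the actions-on-google v1 Node.js client; the action name list.option.fallback and the export name are assumptions:

const { ApiAiApp } = require('actions-on-google');

exports.fulfillment = (request, response) => {
  const app = new ApiAiApp({ request, response });

  // Handler for the fallback intent that carries the actions_intent_OPTION event.
  const optionFallback = (app) => {
    // The tapped item's key arrives as the OPTION argument, not as a regular parameter.
    const key = app.getContextArgument('actions_intent_option', 'OPTION').value;
    app.ask('You selected ' + key);
  };

  const actionMap = new Map();
  actionMap.set('list.option.fallback', optionFallback); // action name is an assumption
  app.handleRequest(actionMap);
};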

Is context telemetry "grouped" during the sampling of request telemetry?

Is context telemetry "grouped" during the sampling of request telemetry?
For example, the data below contains a request which has a sample count of 10 ("count": 10), meaning that it is being used to represent 9 other "similar" requests; 90% of the telemetry has actually been discarded.
Does Application Insights only sample data together when the context data is exactly the same for the requests? For example, can I assume that the other 9 requests were also from 41.191.204.0 and have a custom field company of value 22f0141f-b3dc-53e1-86b8-dd0727c14497?
{
"request": [
{
"id": "bs6o2dRoL/Q=",
"name": "GET /api/resources",
"count": 10,
"responseCode": 200,
"success": true,
"url": "https://example.com/api/resources",
"urlData": {
"base": "/api/resources",
"host": "example.com",
"hashTag": "",
"protocol": "https"
},
"durationMetric": {
"value": 1073743.0,
"count": 11.0,
"min": 97613.0,
"max": 97613.0,
"stdDev": 0.0,
"sampledValue": 97613.0
}
}
],
"internal": {
"data": {
"id": "8cbd12ec-9780-11e6-b38b-c5e9335e7642",
"documentVersion": "1.61"
}
},
"context": {
"application": {
"version": "1.0.16286.5"
},
"data": {
"eventTime": "2016-10-21T11:21:16.942Z",
"isSynthetic": false,
"samplingRate": 9.09090909090909
},
"device": {
"type": "PC",
"osVersion": "Windows 10",
"roleInstance": "RD0003FF727A10",
"deviceName": "Other",
"deviceModel": "Other",
"browser": "Chrome",
"browserVersion": "Chrome 53.0",
},
"user": {
"isAuthenticated": false
},
"session": {
"isFirst": false
},
"operation": {
"id": "bs6o2dRoL/Q=",
"parentId": "bs6o2dRoL/Q=",
"name": "GET Resources/GetResourceAsync [id]"
},
"location": {
"clientip": "41.191.204.0",
"continent": "Africa",
"country": "South Africa",
"province": "Eastern Cape"
},
"custom": {
"dimensions": [
{
"company": "22f0141f-b3dc-53e1-86b8-dd0727c14497"
},
{
"factor": "100"
}
]
}
}
}
Application Insights does not group telemetry events based on the context, but based on the Operation ID. This is synchronized between the SDK sampling and the server-side sampling to make sure you can navigate between related page views and requests.
So if you want to make sure some events are grouped together in sampling, set their OperationId to be the same.
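For example, with the Node.js applicationinsights SDK an operation id can be stamped onto individual telemetry items through tag overrides, so related items share the same sampling decision. A sketch; the id value and item names are arbitrary:

const appInsights = require('applicationinsights');
appInsights.setup('<instrumentation key>').start();
const client = appInsights.defaultClient;

// Items that carry the same operation id are kept or discarded together by sampling.
const operationId = 'my-shared-operation-id';

client.trackEvent({
  name: 'related-event',
  tagOverrides: { [client.context.keys.operationId]: operationId }
});
client.trackTrace({
  message: 'related-trace',
  tagOverrides: { [client.context.keys.operationId]: operationId }
});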
See here for full details on how Application Insights implements its sampling.
Hope this helps,
Asaf