Alexa intent schema issues

I've been playing with Alexa skills and looking to do some basic home automation. I have defined the following basic intent schema to start:
{
  "intents": [
    {
      "intent": "Lock",
      "slots": [
        {
          "name": "Door",
          "type": "AMAZON.LITERAL"
        }
      ]
    },
    {
      "intent": "Unlock",
      "slots": [
        {
          "name": "Door",
          "type": "AMAZON.LITERAL"
        }
      ]
    }
  ]
}
And then the sample utterances:
Lock lock {slot value|Door}
Lock lock door {slot value|Door}
Lock lock the door {slot value|Door}
Unlock unlock {slot value|Door}
Unlock unlock door {slot value|Door}
Unlock unlock the door {slot value|Door}
The idea being that the door names would have to be freeform, since they won't be known ahead of time. However, when I try out a phrase like:
lock door front
It finds the right intent, but the "Door" slot value contains extra words:
"intent": {
"name": "Lock",
"slots": {
"Door": {
"name": "Door",
"value": "door front"
}
}
}
Is this normal, or is it a byproduct of using AMAZON.LITERAL? I've also tried a custom slot type, but multiple-word device names don't seem to work well with it, and it always uses just the last word in that case.

I would define the utterances as ending with the word 'door':
Lock lock {slot value|Door} door
So the user will have to say:
Alexa, ask Lock lock kitchen door
So you would most likely receive only one word as the door name. Then you parse the string. You might want to test not for exact equality, but for inclusion.
I have to admit that I never use the LITERAL type, as it is not advised by Amazon's tutorials, so I would define a custom type and list the possible values for the door names.
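For illustration, the custom-type route could look like this (the type name LIST_OF_DOORS and its values are made up; note that with a custom slot type the utterances reference the slot as {Door} instead of the {value|Door} literal syntax):

Custom slot type LIST_OF_DOORS:
front
back
kitchen
garage

Intent schema slot:
{
  "name": "Door",
  "type": "LIST_OF_DOORS"
}

Sample utterances:
Lock lock the {Door} door
Unlock unlock the {Door} door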
Turning lights/thermostats on and off is a different story. You have to use the Alexa Smart Home API for that. Then 'turn on/off', 'set value', etc. become reserved keywords for Alexa. There will be no intents like those in your question in the Smart Home API, no utterances and no custom slot types. All you need is to implement processing of Discovery requests and Control requests. I think the user sets device names in the official apps/accounts of the device vendor, and when Alexa discovers a device (via a Discovery request), the skill just fetches the device descriptions from the vendor's server and provides them to Alexa. That is how Alexa knows the names of the available devices.
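For a rough idea of the shape, a Discovery response looked roughly like this in the v2 Smart Home payload format (a from-memory sketch; the appliance values are made up, and the format has since been revised, so check the current docs):

{
  "header": {
    "namespace": "Alexa.ConnectedHome.Discovery",
    "name": "DiscoverAppliancesResponse",
    "payloadVersion": "2"
  },
  "payload": {
    "discoveredAppliances": [
      {
        "applianceId": "front-door-lock",
        "manufacturerName": "ExampleVendor",
        "modelName": "Lock-1",
        "version": "1.0",
        "friendlyName": "Front Door",
        "friendlyDescription": "Smart lock on the front door",
        "isReachable": true,
        "actions": ["setLockState", "getLockState"]
      }
    ]
  }
}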

Alexa stops with the first match it finds, so you need to move the more general utterances after the more specific ones.
Lock lock door {slot value|Door}
Lock lock {slot value|Door}
This way "lock door front" matches with slot = "front".
If you have any utterances with no slots, be sure to put them LAST.

Update: The comment I made about LITERAL going away is dated. Thanks for pointing it out. LITERAL will remain. But Alexa (and Lex) do return slot values not in the slot list. I see this often. It's nice.
For those who may stumble across this question, know that skills using AMAZON.LITERAL will no longer be approved starting in December 2016. You should use custom slots instead. Interestingly, the documentation says that even when using custom slots you can receive words not defined in the custom list, as with a literal. I've not tested this, but it could come in handy.

Related

Terraform providers - how would you represent a resource that doesn't have clearly defined CRUD operations?

For work I'm learning Go and Terraform. I read in their tutorial how the different contexts are defined, but I'm not clear on exactly when these different contexts are called and what triggers them.
From looking at the Hashicups example it looks like when you put this:
resource "hashicups_order" "new" {
items {
coffee {
id = 3
}
quantity = 2
}
items {
coffee {
id = 2
}
quantity = 2
}
}
in your Terraform file, Terraform is going to look at hashicups_order, strip the hashicups prefix, and look for a resource called order. The order resource provides the following contexts:
func resourceOrder() *schema.Resource {
  return &schema.Resource{
    CreateContext: resourceOrderCreate,
    ReadContext:   resourceOrderRead,
    UpdateContext: resourceOrderUpdate,
    DeleteContext: resourceOrderDelete,
What isn't clear to me is what triggers each context. From that example, it seems like increasing the value of quantity would trigger the update context, and if this were the first run and no previous state existed, it would trigger create, etc.
However, in my case the resource is a server, and one API resource I want to present to the user is server power control. You would never "create/destroy" this resource... or would you? You could read the current power state and you could update the power state but, at least intuitively, you wouldn't create or destroy it. I'm having trouble wrapping my head around how this would be modeled in Terraform/Go. I conceptually understand the coffee resource in the example, but I'm having trouble making the leap to imagining what that looks like for something like a server power capability, or other things without a clear mapping to the different CRUD operations.
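For what it's worth, one common way to square this with Terraform's model (my own sketch, not from the HashiCups tutorial; resourceServerPower, apiClient, SetPower and GetPower are all hypothetical names) is to treat the power setting itself as the managed resource: Create applies the desired state for the first time and records an ID, while Delete merely removes the setting from Terraform state without touching the server:

package provider

import (
  "context"

  "github.com/hashicorp/terraform-plugin-sdk/v2/diag"
  "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

// resourceServerPower models a server's power state as a Terraform-managed
// setting rather than a real object on the server.
func resourceServerPower() *schema.Resource {
  return &schema.Resource{
    CreateContext: resourceServerPowerCreate,
    ReadContext:   resourceServerPowerRead,
    UpdateContext: resourceServerPowerUpdate,
    DeleteContext: resourceServerPowerDelete,
    Schema: map[string]*schema.Schema{
      "server_id":   {Type: schema.TypeString, Required: true, ForceNew: true},
      "power_state": {Type: schema.TypeString, Required: true}, // "on" or "off"
    },
  }
}

// Create runs on the first apply: it applies the desired power state and
// "adopts" the setting into state by assigning an ID.
func resourceServerPowerCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
  // err := meta.(*apiClient).SetPower(ctx, d.Get("server_id").(string), d.Get("power_state").(string))
  d.SetId(d.Get("server_id").(string))
  return resourceServerPowerRead(ctx, d, meta)
}

// Read runs on refresh/plan: it reports the real power state so Terraform
// can detect drift and decide whether Update is needed.
func resourceServerPowerRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
  // state, _ := meta.(*apiClient).GetPower(ctx, d.Id())
  // d.Set("power_state", state)
  return nil
}

// Update runs when the configured power_state differs from what Read saw.
func resourceServerPowerUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
  // err := meta.(*apiClient).SetPower(ctx, d.Id(), d.Get("power_state").(string))
  return resourceServerPowerRead(ctx, d, meta)
}

// Delete doesn't power anything off; it just stops managing the setting by
// removing it from Terraform state.
func resourceServerPowerDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
  d.SetId("")
  return nil
}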

Is there a way to influence how user voice input is interpreted?

We have an Action on Google where the user needs to say one of these answers: 'High', 'Rising', 'Low' or 'Falling'.
But when the user says "high", it is often recognised as "hi", and "low" as "hello".
I found that @Leon Nicholls uses speechBiasing here: https://github.com/entertailion/Magnificent-Escape-Action/blob/4258a544789624b82253b4d29355a7519aab4179/game.js
So I added this before calling conv.ask(...):
conv.speechBiasing = ['High', 'Rising', 'Low', 'Falling'];
This resulted in:
"speechBiasingHints": [
"High",
"Rising",
"Low",
"Falling"
],
Unfortunately, the user's answer is still showing on SmartScreen as "hi" and not "high".
Is there another way to influence how the user voice input is interpreted?
If you want to force a specific intent to be selected as a response, you can use possibleIntents: [] (doc) in addition to speechBiasingHints: [].
You can also use follow-up intents as described here. Note that although the implementation in the documentation is done in Dialogflow, you can recreate the logic in code if you're not using Dialogflow.
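If it helps to see where these live, in the raw conversation webhook response both fields sit inside each ExpectedInput; a trimmed sketch (the inputPrompt contents are elided):

{
  "expectUserResponse": true,
  "expectedInputs": [
    {
      "inputPrompt": { "richInitialPrompt": { "...": "..." } },
      "possibleIntents": [
        { "intent": "actions.intent.TEXT" }
      ],
      "speechBiasingHints": ["High", "Rising", "Low", "Falling"]
    }
  ]
}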

LUIS: Action Parameter cannot be passed (with Dialog Execution)

By using LUIS and its "Dialog Execution" under Action Binding, I'm expecting to be able to provide the required parameter (of an Action), so that the Action can be triggered, or the Dialog can be continued.
As far as I understand, once we have been asked to provide the Parameter, we should provide it in the follow-up query call. For example:
First query:
https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/...?subscription-key=...&q=what are the available items
Then it asks me "Under what category?" (expecting me to provide the required parameter).
Then i provided it in the follow-up query:
https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/...?subscription-key=...&q=electronics&contextId=d754ce3...
But then it seems the value is still not accepted, and it is still showing as null.
So the Parameter is not captured, and the Action can never be triggered. (Nor can I reach the next Parameter, if there is any.)
Am I doing something wrong with it, or what seems to be the problem?
(Below is the screenshot of that Intent with the "Action Parameters")
I have experienced this before. (In fact, it still happens.) Even in Microsoft's official LUIS API example demos, it still happens.
For example, in their Weather Bot there, just try something like:
You: What will the weather be tomorrow?
Bot: Where would you like the weather?
You: Singapore
Bot:
{
  "name": "location",
  "required": true,
  "value": null
}
Now try again, like:
You: What will the weather be tomorrow?
Bot: Where would you like the weather?
You: in Singapore
Bot:
{
  "name": "location",
  "required": true,
  "value": [
    {
      "entity": "singapore",
      "type": "builtin.geography.country"
    }
  ]
}
Conclusion?
Prepositions! (in, at, on, by, under, ...) In some cases, LUIS still doesn't understand the Entity input unless the proper preposition is provided.
I'm pretty sure this is the reason for your case. Try again with a preposition.
(This problem took me 1~2 weeks to realise. I hope Microsoft can improve LUIS in all these aspects asap.)

Embedding questionnaire scoring data into the FHIR Questionnaire/Response?

We have a system where citizens download a Questionnaire from a server, fill it in and submit a QuestionnaireResponse back to the server, which stores it. In our case, these are simple questions about how you're feeling and your symptoms. A health worker can then access the QuestionnaireResponse. The health workers don't want the answers, but a score which has been calculated based on the answers.
Some vendors (non-FHIR) allow creating a form and a scoring system at the same time. If we wanted to support this within FHIR, I'm assuming we would have to embed the scoring information inside the Questionnaire (or potentially a separate resource, but that would perhaps introduce some redundancy).
Is this best solved with extensions to the Questionnaire resource, another resource, or some other mechanism? And what would be the best way (architecturally) to implement the actual scoring? Would it best be a separate application which subscribes to the QuestionnaireResponses, downloads the Questionnaire, extracts the scoring system, evaluates it and then writes the score back into the QuestionnaireResponse?
Are there other standards we should be looking to for help on this?
And for those especially interested, here's a really simplified Questionnaire resource. Typically it'd have more questions, of course. Right now we've put the score into the 'code', which doesn't seem like a good idea.
{
  "resourceType": "Questionnaire",
  "id": "1140",
  "meta": {
    "versionId": "11",
    "lastUpdated": "2016-06-14T13:01:47.000+00:00"
  },
  "text": {
    "status": "generated",
    "div": "<div><!-- Snipped for Brevity --></div>"
  },
  "status": "published",
  "date": "2016",
  "group": {
    "linkId": "group1",
    "title": "HelsaMi Hjertesvikt",
    "concept": [
      {
        "system": "unknown",
        "code": "unknown",
        "display": "Hjertesvikt"
      }
    ],
    "group": [
      {
        "linkId": "group2",
        "question": [
          {
            "linkId": "Feeling",
            "text": "How do you feel today?",
            "type": "choice",
            "option": [
              {
                "system": "unknown",
                "code": "3",
                "display": "Good"
              },
              {
                "system": "unknown",
                "code": "2",
                "display": "Medium"
              },
              {
                "system": "unknown",
                "code": "1",
                "display": "Bad"
              }
            ]
          }
        ]
      }
    ]
  }
}
Would an extension, for example, look like this (embedded into each option)?
"extension": [{
"url": "http://example.com/scoring",
"valueInteger": 10
}
]
The score would simply be another answer to a "special" question. The question would have an extension that defines how the score is calculated. The question would likely be "read only" and could be hidden. You could even have multiple such questions, for example one per section to provide a sub-calculation and then one for the overall questionnaire to total it up. As well, look at the coded ordinal extension for the Coding data type, as it may be helpful for capturing scores for individual question answers.
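For illustration, that could look roughly like this in the Questionnaire (the hidden and ordinal extension URLs below are the registered FHIR extensions as I recall them, and the score-calculation URL is made up, since the answer only says "an extension that defines how the score is calculated"; verify both against your FHIR version):

An option carrying its score:
{
  "system": "unknown",
  "code": "3",
  "display": "Good",
  "extension": [
    {
      "url": "http://hl7.org/fhir/StructureDefinition/ordinalValue",
      "valueDecimal": 3
    }
  ]
}

A hidden "special" question holding the calculated score:
{
  "linkId": "TotalScore",
  "text": "Total score",
  "type": "decimal",
  "extension": [
    {
      "url": "http://hl7.org/fhir/StructureDefinition/questionnaire-hidden",
      "valueBoolean": true
    },
    {
      "url": "http://example.com/fhir/score-calculation",
      "valueString": "sum of ordinalValue across all answered options"
    }
  ]
}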

What are the possible kinds of webhooks Trello can send? What attributes come in each?

I'm developing an app that is tightly integrated with Trello and uses Trello webhooks for a lot of things. However, I can't find anywhere in Trello's developer documentation which "actions" may trigger a webhook and what data will come with each of them.
In fact, in my experience, the data that comes with each webhook is kinda random. For example, while most webhooks contain the shortLink of the card that is the target of some action, some do not, in a totally unpredictable way. Also, creating cards from checklists doesn't seem to trigger the same webhook that is triggered when a card is created normally, and so on.
So, is that documented somewhere?
After fighting against these issues, armed only with my raw memory of what data should come in each webhook and the name of each different action, I decided to document this myself and released it as a (constantly updating as I find new webhooks out there) set of JSON files showing samples of the data each webhook will send to your endpoint:
https://github.com/fiatjaf/trello-webhooks
For example, when a board is closed, a webhook will be sent with
{
  "id": "55d7232fc3597726f3e13ddf",
  "idMemberCreator": "50e853a3a98492ed05002257",
  "data": {
    "old": {
      "closed": false
    },
    "board": {
      "shortLink": "V50D5SXr",
      "id": "55af0b659f5c12edf972ac2e",
      "closed": true,
      "name": "Communal Website"
    }
  },
  "type": "updateBoard",
  "date": "2015-08-21T13:10:07.216Z",
  "memberCreator": {
    "username": "fiatjaf",
    "fullName": "fiatjaf",
    "avatarHash": "d2f9f8c8995019e2d3fda00f45d939b8",
    "id": "50e853a3a98492ed05002257",
    "initials": "F"
  }
}
In fact, what comes is a JSON object like {"model": ..., "action": ... the data you see up there ...}, but I've removed those outer keys for the sake of brevity and am showing only what comes inside the "action" key.
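Given that wrapper, a receiving endpoint only needs to decode the "action" key and route on its "type". A minimal sketch in Go (the field names come from the payload above; the route and port are arbitrary):

package main

import (
  "encoding/json"
  "log"
  "net/http"
)

// trelloWebhook mirrors the wrapper Trello sends: {"model": ..., "action": ...}.
type trelloWebhook struct {
  Action struct {
    Type string          `json:"type"`
    Date string          `json:"date"`
    Data json.RawMessage `json:"data"` // shape varies per action type
  } `json:"action"`
}

func handler(w http.ResponseWriter, r *http.Request) {
  var hook trelloWebhook
  if err := json.NewDecoder(r.Body).Decode(&hook); err != nil {
    http.Error(w, "bad payload", http.StatusBadRequest)
    return
  }
  // Route on the action type, e.g. "updateBoard", "createCard", ...
  log.Printf("got %s at %s", hook.Action.Type, hook.Action.Date)
  w.WriteHeader(http.StatusOK)
}

func main() {
  http.HandleFunc("/trello", handler)
  log.Fatal(http.ListenAndServe(":8080", nil))
}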
Based on @fiatjaf's repo, I gathered and summarized all* the webhook types.
addAttachmentToCard
addChecklistToCard
addLabelToCard
addMemberToBoard
addMemberToCard
commentCard
convertToCardFromCheckItem
copyCard
createCard
createCheckItem
createLabel
createList
deleteAttachmentFromCard
deleteCard
deleteCheckItem
deleteComment
deleteLabel
emailCard
moveCardFromBoard
moveCardToBoard
moveListFromBoard
moveListToBoard
removeChecklistFromCard
removeLabelFromCard
removeMemberFromBoard
removeMemberFromCard
updateBoard
updateCard
updateCheckItem
updateCheckItemStateOnCard
updateChecklist
updateComment
updateLabel
updateList
hope it helps!
*I don't know if that list includes all the available webhook types because, as I already said, it's based on fiatjaf's repo, created 2 years ago.
