How to make TAG_ALPHA_IDENTIFIER empty so the user is not asked for confirmation - javacard

My wallet applet needs to perform actions like PLAY TONE, etc., but the handset first asks the user for a "Yes or No?" confirmation. AFAIK, it is TAG_ALPHA_IDENTIFIER that is responsible for that. However, if I try the code below, it still asks for user confirmation, just with "#" as the text. How do I get rid of the user confirmation entirely?
Attempt 1. Failed with NullPointerException
proHdlr.appendTLV(ToolkitConstants.TAG_ALPHA_IDENTIFIER, null, (short)0, (short)0);
proHdlr.send();
Attempt 2. Prompts '##'
proHdlr.appendTLV(ToolkitConstants.TAG_ALPHA_IDENTIFIER, (byte)0, (byte)0);
proHdlr.send();
Attempt 3. Prompts '#'
proHdlr.appendTLV(ToolkitConstants.TAG_ALPHA_IDENTIFIER, (byte)0);
proHdlr.send();
Attempt 4. Prompts Default Text
byte[] ALPHA_MSG = {};
proHdlr.appendTLV(ToolkitConstants.TAG_ALPHA_IDENTIFIER, ALPHA_MSG, (short)0, (short)ALPHA_MSG.length);
proHdlr.send();
According to ETSI TS 102 223, section "8.2 Alpha identifier", the structure is:

Description            Length
Alpha identifier tag   1
Length (X)             Y
Alpha identifier       X
The documentation also mentions a "Default text"; however, since "5.3.7 Text attributes" requires the Alpha identifier to be present, the Default text should not get in the way, right?
In the same document, section "6.4.5 PLAY TONE" (page 45) says:
if the alpha identifier is provided by the UICC and is a null data object (i.e. length = '00' and no value part), the terminal should not give any information to the user;
That's what I need. How do I do this in Java with ProactiveHandler? All my Google searches end up with some text/menu title for the alpha identifier.
How do I get rid of the user confirmation and perform the proactive action without it?

a) Try to pass no data at all, i.e. leave out the proHdlr.appendTLV(ToolkitConstants.TAG_ALPHA_IDENTIFIER, ...) line entirely; see the sketch after this list.
b) The behavior might be phone-related, or more specifically modem-related. Try a MediaTek-based phone, a Qualcomm-based phone and an iPhone, and compare the results.
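
A minimal sketch of suggestion (a), assuming the sim.toolkit API of 3GPP TS 43.019 (the tone and duration values are illustrative; adjust the constants if your card uses the uicc.toolkit package instead):

ProactiveHandler proHdlr = ProactiveHandler.getTheHandler();
proHdlr.init(ToolkitConstants.PRO_CMD_PLAY_TONE, (byte) 0x00,
        ToolkitConstants.DEV_ID_EARPIECE);
// Tone '10' = general beep (terminal proprietary tone)
proHdlr.appendTLV(ToolkitConstants.TAG_TONE, (byte) 0x10);
// Duration: time unit '01' = seconds, time interval = 2
proHdlr.appendTLV(ToolkitConstants.TAG_DURATION, (byte) 0x01, (byte) 0x02);
// No TAG_ALPHA_IDENTIFIER is appended at all; per TS 102 223 the terminal
// behavior is then terminal-dependent, hence suggestion (b) above.
proHdlr.send();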


Let Alexa ask the user a follow up question (NodeJS)

Background
I have an Intent that fetches some data from an API. This data contains an array; I iterate over the first 10 entries of that array and read the results back to the user. However, the array is almost always bigger than 10 entries. I am using Lambda for my backend and NodeJS as my language.
Note that I am just starting out on Alexa and this is my first skill.
What I want to achieve is the following:
When the user triggers the intent and the first 10 entries have been read to the user, Alexa should ask "Do you want to hear the next 10 entries?" or something similar. The user should be able to reply with either yes or no. It should then read the next entries, i.e. access the array again.
I am struggling with the Alexa implementation of this dialog.
What I have tried so far: I stumbled across this post, but I couldn't get it to work and I didn't find any other examples.
Any help or further pointers are appreciated.
That tutorial gets the concept right, but glosses over a few things.
1: Add the yes and no intents to your model. They're "built in" intents, but you have to add them to the model (and rebuild it).
2: Add your new intent handlers to the list in the .addRequestHandlers(...) function call near the bottom of the base skill template. This is often forgotten and is not mentioned in the tutorial.
3: Use const sessionAttributes = handlerInput.attributesManager.getSessionAttributes(); to get your stored session attributes object and assign it to a variable. Make changes to that object's properties, then save it with handlerInput.attributesManager.setSessionAttributes(sessionAttributes);
You can add any valid property name and the values can be a string, number, boolean, or object literal.
So assume your launch handler greets the customer and immediately reads the first 10 items, then asks if they'd like to hear 10 more. You might store sessionAttributes.num_heard = 10.
Both the YesIntent and LaunchIntent handlers should simply pass a num_heard value to a function that retrieves the next 10 items and feeds it back as a string for Alexa to speak.
You just increment sessionAttributes.num_heard by 10 each time that yes intent runs and then save it with handlerInput.attributesManager.setSessionAttributes(sessionAttributes).
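A minimal sketch of such a YesIntent handler, assuming the ASK SDK v2 for Node.js (getNextTenItems() is a hypothetical helper that formats the next chunk of your API data as a spoken string; remember to register the handler in .addRequestHandlers(...) as noted in point 2):

const YesIntentHandler = {
  canHandle(handlerInput) {
    const request = handlerInput.requestEnvelope.request;
    return request.type === 'IntentRequest'
      && request.intent.name === 'AMAZON.YesIntent';
  },
  handle(handlerInput) {
    const sessionAttributes = handlerInput.attributesManager.getSessionAttributes();
    const speech = getNextTenItems(sessionAttributes.num_heard); // hypothetical helper
    sessionAttributes.num_heard += 10; // advance the cursor
    handlerInput.attributesManager.setSessionAttributes(sessionAttributes);
    return handlerInput.responseBuilder
      .speak(speech + ' Do you want to hear the next 10 entries?')
      .reprompt('Do you want to hear the next 10 entries?')
      .getResponse();
  }
};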
What you need to do is something called "Paging".
Let's imagine that you have a stock of data, where each page contains 10 entries:
page 1: 1-10, page 2: 11-20, page 3: 21-30, and so on.
When you fetch your data from the DB you can set those limits; in SQL this is implemented with LIMIT <offset>, <count>. But how do you get those values from the page index?
Well, a simple calculation can help you:
let page = 1   // your page index, managed by your client frontend
let chunk = 10 // entries per page
let _start = page * chunk - (chunk - 1) // first entry of the page (1-based)
let _end = _start + (chunk - 1)         // last entry of the page
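Note that SQL's LIMIT offset is zero-based, so the offset for a page is simply (page - 1) * chunk. A sketch against a hypothetical entries table:

-- page 2, chunk 10: skip the first 10 rows and take 10 (entries 11-20)
SELECT * FROM entries LIMIT 10, 10;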
Hope this helped you :)

FIXED -- Voice prompt for Measurement concept from Library capsule does not work for Bixby runtime version 6

I have a very simple action that takes a single parameter of type measurement.Length. If the user does not provide a value for this parameter, I want to prompt the user for the missing value.
To do this I designed my action like below:
action (AddDistance) {
  description (adding exercise)
  type (Search)
  collect {
    input (distance) {
      type (measurement.Length)
      min (Required) max (One)
    }
  }
  output (Result)
}
I have added an NL training [g:Result] add distance of (2 km)[v:measurement.Length], which trained perfectly, and giving this utterance works fine.
Now if I give an utterance like "add distance", Bixby does not prompt for the missing input value but instead fails with this error:
TypeError: Cannot read property 'display' of undefined
What should I do to show a prompt for this measurement.Length concept from the Library capsule?
N.B. the voice prompt works on runtime version 5 with no issues.
This is an important change: for runtime versions above 5, in order to use the default dialog (which many library capsules rely on), the developer needs to include the following override in capsule.bxb:
runtime-version (6) {
  overrides {
    no-fallback-dialog-in-views (false)
  }
}
The original bug causing the JS error has been fixed, but the override is required to render the library dialog.
Library capsules already have the input view and voice training handled. In your capsule, change distance to Required in the action model, and add a training example for the missing-distance utterance (e.g. "add distance").
The input view is then triggered, and voice input can be used.
It runs fine through to the result view, as confirmed in the debugger.
BTW, to replace the default "I need ..." message, add a dialog model like this:
dialog (Elicitation) {
  match: measurement.Length
  template("How far would you like to run")
}

Where is scripted the Dalaran Well teleport (game object)?

When you try to reach the Dalaran Well in Dalaran, you are teleported to the sewers.
It is using this game object: Doodad_Dalaran_Well_01 (id = 193904)
Where is it scripted? How?
I've found nothing in the smart_scripts table, and nothing in the core about this specific id, so I'm curious, because this type of teleport is much nicer than clicking on a game object.
This gameobject is a unique case because it works like instance teleports do. If you check the gameobject_template table, you will see that it has several Data columns whose meaning differs based on the type of the gameobject.
The gameobject you are referring to is the well itself, but the portal gameobject inside the well applies a dummy spell to the player to tell the core that the player has been teleported (spell ID 61652).
For the specific case of the Dalaran well, its type is 30, which, as the documentation says, means GAMEOBJECT_TYPE_AURAGENERATOR. As soon as the player is in range, a dummy aura is cast on him to notify the core that this areatrigger has been activated (you could do stuff when the player gets hit by the dummy spell).
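You can verify this yourself with a quick query (a sketch against a TrinityCore-style world database; column names may vary between core versions):

SELECT entry, type, name FROM gameobject_template WHERE entry = 193904;
-- expect type = 30, i.e. GAMEOBJECT_TYPE_AURAGENERATOR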
The trick here is a bunny, but not the bunny itself, since it is there mostly to mark an areatrigger. If you use the command .go gobject 61148 you can check him out; he's inside the well.
Areatriggers are DBC objects that are also present in our database in world.areatrigger. You can check the columns here. When the player enters the radius box specified on the areatrigger, another thing kicks in in the core: world.areatrigger_teleport.
If you run the following query you will be able to check the position where the trigger will teleport the player to.
SELECT * FROM areatrigger_teleport WHERE `Name` LIKE '%Dalaran Well teleporter%';

Getting arguments/parameters values from api.ai

I'm now stuck on the problem of getting the user input (what the user says) in my index.js. For example, the user says: "please tell me if {animals} can live between temperature {x} to {y}". I want to get the exact values (as strings) for animals, x and y so that I can check whether that is possible on my own server. I am wondering how to do that, since the entities need to map to some exact key values if I annotate these three parameters with some entity category.
The methods of ApiAiApp are very limited: https://developers.google.com/actions/reference/nodejs/ApiAiApp
And from my perspective, none of the listed methods work in this case.
Please help!
Generally, API.AI entities are for some set of known values, rather than for listening for any value and validating in the webhook. First, I'd identify the kinds of entities you expect to validate against. For the temperatures (x and y), I'd use API.AI's system entities; calling getArgument() for those parameters (as explained in the other answer) should return the exact number value.
For the animals, I'd use API.AI's developer entities. You can upload them in the Entity console using JSON or CSV. You can enable API.AI's automated expansion to allow the user to speak animals which you don't support, and getArgument() in the webhook will then return the new value recognized by API.AI; you can use this to validate and respond with an appropriate message. For each animal you can also specify synonymous names, and when any of these is spoken, getArgument() will return the canonical entity value for that animal.
Extra tip: if you expect the user might speak more than one animal, make sure to check the Is List box in the parameter section of the API.AI intent.
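For reference, a developer entity uploaded as JSON has roughly this shape (the animal values and synonyms here are purely illustrative):

{
  "name": "animals",
  "entries": [
    { "value": "penguin", "synonyms": ["penguin", "emperor penguin"] },
    { "value": "camel", "synonyms": ["camel", "dromedary"] }
  ]
}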
If "animals", "x", and "y" are defined as parameters in your Intent in API.AI, then you can use the getArgument() method defined in ApiAiApp.
So the following should work:
function tempCheck(app) {
  var animals = app.getArgument('animals');
  var x = app.getArgument('x');
  var y = app.getArgument('y');
  // Check the range against your own server here, then respond, e.g.:
  app.tell('Checking whether ' + animals + ' can live between ' + x + ' and ' + y + '.');
}

Cucumber "OR" clause?

Is it possible to specify some kind of "OR" (alternative) clause in Cucumber?
I.e. if I have two valid responses to some event I would like my test to pass if either of them happens.
Something like this:
"When I press a button"
"Then I should see the text 'Boo'"
"Or I should see the text 'Foo'"
My particular scenario is a login screen. When I try to log in with some random password, I should see an error message "invalid password" if the server is working or a message "network error" if it is not.
You can't really define OR functionality in Gherkin, but you can pass in a list and check that one of the values in the list matches what was returned.
Define list:
Then the greeting service response will contain one of the following messages
|Hello how are you doing?|
|Welcome to the front door!|
|How has your day been?|
|Come right on in!|
Check list:
#Then("the get messages service response will contain one of the following messages")
public void text_matching_one_of_the_following(List<String> greetingMessages){
boolean success = false;
for(String message : greetingMessages){
assertTrue(textMatchesResponse(message));
}
}
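Applying this list approach to the login scenario from the question could look like this (the step wording is illustrative):

When I log in with a random password
Then the error message will be one of the following
|invalid password|
|network error|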
OR is not supported. You can use Given, When, Then, And and But. Please refer to http://docs.behat.org/en/v2.5/guides/1.gherkin.html
But perhaps you could make use of the But keyword to achieve what you are looking for.
