How to change speech pronunciation -- conditional SSML? - bixby

I have the following view code
template ("Here's an AltBrain by #{value(this.author)} on #{value(this.name)}") {
  speech ("Here's an AltBrain by #{value(this.author)} on #{value(this.name)}")
}
When it encounters value(this.author) = "GitLab" it butchers the pronunciation to "Gitlib." How can I correct this?
I see that I could use SSML's <speak> and <sub> tags, like this:
<speak> ... <sub alias="Git Lab"> ... </sub> ... </speak>
but how do I do this dynamically, so that it adjusts this.author if and only if it is a term that has a different pronunciation?
Note that as I continue to explore this I realize that there is a fundamental problem which is that everywhere Bixby encounters "GitLab" she is going to mispronounce it. We need a mechanism for a global change in that, like a dialog file. Is there such a thing?

First, it sounds like a TTS bug that requires a fix from the Bixby platform. I will file a ticket for that.
However, there are cases where developers may want the display and speech to be different. One trick is to use the value dialog: in the display template, use raw, and in speech, use value.
Here is the value dialog:
dialog (Value) {
  match: TextSource (this)
  if (this == 'GitLab') {
    template ("Git Lab")
  } else {
    template ("#{raw(this)}")
  }
}
Here is how to take advantage of the value dialog.
message {
  template ("From #{raw(action.question.source)}, #{value(action.question.textQuestion)}") {
    speech ("From #{value(action.question.source)}, #{value(action.question.textQuestion)}")
  }
}
For the complete capsule to test, download it from GitHub. I made the capsule for another purpose, but try the utterance "try one question" and see the input-view.

This is a hideous hack, and provides only a partial solution.
if (exists(this.author) && this.author == 'GitLab') {
  template ("Here's an AltBrain by GitLab on #{value(this.name)}") {
    speech ("Here's an AltBrain by Git Lab on #{value(this.name)}")
  }
} else ...

Related

What is the best practice to avoid utterance conflicts in an Alexa Skill

In the screenshot below, I have got an utterance conflict, which is obvious because I am using similar patterns of samples in both the utterances.
My question is, the skill I am developing requires similar kind of patterns in multiple utterances and I cannot force users to say something like “Yes I want to continue”, or “I want to store…”, something like this.
In such a scenario, what is the best practice to avoid utterance conflicts while still having multiple similar patterns?
I can use a single utterance and based on what a user says, I can decide what to do.
Here is an example of what I have in my mind:
User says something against {note}
In the skill I check this:
if (this.$inputs.note.value === "no") {
  // auto route to stop intent
} else if (this.$inputs.note.value === "yes") {
  // stays inside the same intent
} else {
  // does the database stuff and saves the value,
  // then asks the user whether they want to continue
}
The above loop continues until the user says “no”.
But is this the right way to do it? If not, what is the best practice?
Please suggest.
The issue is really that for those two intents you have slots with no context around them. I'm also assuming you're using these slots as catch-all slots meaning you want to capture everything the person says.
From experience: this is very difficult/annoying to implement and will not result in a good user experience.
For the HaveMoreNotesIntent, what you want to do is have a separate YesIntent and NoIntent and then route the user to the correct function/intent based on the intent history (aka context). You just have to enable this in your config file.
YesIntent() {
  console.log(this.$user.$context.prev[0].request.intent);
  // Check if the last intent was either of the following
  if (
    ['TutorialState.TutorialStartIntent', 'TutorialLearnIntent'].includes(
      this.$user.$context.prev[0].request.intent
    )
  ) {
    return this.toStateIntent('TutorialState', 'TutorialTrainIntent');
  } else {
    return this.toStateIntent('TutorialState', 'TutorialLearnIntent');
  }
}
Alternatively, if you are inside a state, you can define yes and no intents inside that state that will only apply in that state.
ISPBuyState: {
  async _buySpecificPack() {
    console.log('_buySpecificPack');
    this.$speech.addText(
      'Right now I have a "sports expansion pack". Would you like to hear more about it?'
    );
    return this.ask(this.$speech);
  },
  async YesIntent() {
    console.log('ISPBuyState.YesIntent');
    this.$session.$data.productReferenceName = 'sports';
    return this.toStatelessIntent('buy_intent');
  },
  async NoIntent() {
    console.log('ISPBuyState.NoIntent');
    return this.toStatelessIntent('LAUNCH');
  },
  async CancelIntent() {
    console.log('ISPBuyState.CancelIntent()');
    return this.toStatelessIntent('LAUNCH');
  }
}
I hope this helps!

How to make Bixby ask for input without the user providing it

I want to make Bixby ask for input values when the user just states what he/she wants to do (without giving any input values).
For example,
user: I want to search something
Bixby: What do you want to search?
user: *possible-input-value*
Is this possible? If so, how can I implement this?
That's easy in Bixby. If you make an input to your action required, it will prompt the user for input. Let's say you have an action like this:
action (FindSomething) {
  type (Search)
  description (Search for something)
  collect {
    input (search) {
      type (Search)
      min (Required) max (One) // Force Bixby to prompt for an input.
    }
  }
  output (viv.core.Text) // some result
}
And you have a search concept defined like this:
text (Search) {
  description (Search term)
}
You can provide an input view for the user to enter the term (via screen).
input-view {
  match: Search (search)
  message {
    template ("What do you want to search?")
  }
  render {
    form {
      elements {
        text-input {
          id (search)
          label (Search Term)
          type (Search)
          max-length (50)
          value ("#{raw(search)}")
        }
      }
      on-submit {
        goal: Search
        value: viv.core.FormElement(search)
      }
    }
  }
}
In addition to Pete's response, you need to enable this for voice input (UI-only input will not pass capsule review for submission to the marketplace). To do so, you need to create natural-language training for Search.
Since you are asking for input at a prompt, you need to create a training entry that will be used when prompting for Search.
Training source for this would look like:
[g:Search:prompt] (sample search text)[v:Search]
Or add the entry in the Training UI.
Definitely check out the sample code at https://github.com/bixbydevelopers for more examples. A simple example of input would be in https://github.com/bixbydevelopers/capsule-sample-fact - note the training that uses tags
In addition to Pete's response, I would recommend taking a look at the design principles for Bixby development. These principles will guide you in making a targeted capsule that solves the use case you would like to address.

Need to force Bixby to interpret -- in speech as pause not minus

I have text output that contains "--John Doe" to indicate that the source of the quotation is John Doe. Bixby is reading it as "minus John Doe". I want to read it as "pause John Doe".
I have enclosed the speech() contents in <speak> tags.
dialog (Result) {
  match: Content (text)
  if ($handsFree) {
    template ("\n\n") {
      speech ("<speak>#{value(text)}</speak>")
    }
  } else {
  }
}
Conversation pane in debug:
Dialog/<speak>I claim to be an average man of less than average ability. I have not the shadow of a doubt that any man or woman can achieve what I have, if he or she would make the same effort and cultivate the same hope and faith. --Mohandas Karamchand Gandhi</speak>
Template
<speak>#{value(text)}</speak>
It pronounces the -- as "minus". I want it to be a pause.
Bixby supports limited SSML and I don't think the <break/> tag is supported yet (you could give it a try), but that is the tag you would want. Inside this tag, you can specify how long you want to break for, e.g. <break time="1s"/> or <break time="500ms"/>. So applying this to your example:
<speak>I claim to be an average man of less than average ability. I have not the shadow of a doubt that any man or woman can achieve what I have, if he or she would make the same effort and cultivate the same hope and faith. <break time="1s"/> Mohandas Karamchand Gandhi</speak>
In your action JS, you would need to have something like
let quote = 'I claim to be an average man of less than average ability. I have not the shadow of a doubt that any man or woman can achieve what I have, if he or she would make the same effort and cultivate the same hope and faith. --Mohandas Karamchand Gandhi';
quote = quote.replace('--', '<break time="1s"/>');
To replace the -- with the appropriate SSML tag.
The documentation doesn't say Bixby supports this tag yet. About a month ago, some of the Bixby staff said in Slack that more SSML support is "coming very soon", but I don't think it has arrived yet.
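One caveat with the snippet above: String.prototype.replace with a string pattern only replaces the first occurrence, so a quote containing several double dashes would be handled only partially. A minimal sketch using a global regular expression instead (the helper name toSsmlPauses is hypothetical):

```javascript
// Replace every "--" with an SSML break tag and wrap the result
// in <speak> tags. A string pattern in replace() would only swap
// the first occurrence; the /g flag handles all of them.
function toSsmlPauses(text) {
  return '<speak>' + text.replace(/--/g, '<break time="1s"/>') + '</speak>';
}

var quote = 'Be the change. --Mohandas Karamchand Gandhi';
console.log(toSsmlPauses(quote));
// <speak>Be the change. <break time="1s"/>Mohandas Karamchand Gandhi</speak>
```

This assumes the break tag is actually honored by the TTS engine, which, per the discussion above, may not be the case yet.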
User error. 😉 Replace the two "minuses" with — (em-dash). Maybe that'll help?
There is no quick way, and SSML would not help in this case. You should use different content for display and speech.
Define a structure with display and speech properties. TextDisplay and TextSpeech are just Text primitive concepts.
structure (MyStruct) {
  property (display) {
    type (TextDisplay) min (Required) max (One)
  }
  property (speech) {
    type (TextSpeech) min (Required) max (One)
  }
}
When creating such a concept, make sure to do the replacement in your JS script, so that myStruct.display = "xyz -- John" and myStruct.speech = "xyz .. John".
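The replacement in the action JS might be sketched like this (a plain helper function for illustration; the name buildMyStruct and the "--" to ".." substitution are assumptions about your content):

```javascript
// Split one source string into display and speech variants.
// The display keeps the literal "--", while the speech swaps it
// for "..", which Bixby reads as a short pause.
function buildMyStruct(text) {
  return {
    display: text,                      // e.g. "xyz -- John"
    speech: text.replace(/--/g, '..')   // e.g. "xyz .. John"
  };
}

var myStruct = buildMyStruct('xyz -- John');
console.log(myStruct.display); // "xyz -- John"
console.log(myStruct.speech);  // "xyz .. John"
```

The returned object would then be handed back as the MyStruct output of the action.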
Define the view file; the trick is that each dot in the speech will create a little pause. You can control the length of the pause by adding more dots.
result-view {
  match: MyStruct (this)
  message {
    template ("#{value(this.display)}") {
      speech ("#{value(this.speech)}")
    }
  }
}
If you use 。 (the ideographic full stop), it is read as a pause, like this:
template ("Hello #{value(text)}") {
  speech ("Hello。 #{value(text)}")
}
Also, quote.replace() is not a good approach. Instead, try adding a source property to your Content structure.
Content.model.bxb
structure (Content) {
  property (quote) {
    type (viv.core.Text)
    min (Required)
  }
  property (source) {
    type (viv.core.Text)
    min (Required)
  }
}
and your dialog
dialog (Result) {
  match: Content (content)
  if ($handsFree) {
    template ("#{value(content.quote)} -- #{value(content.source)}") {
      speech ("#{value(content.quote)} 。。 #{value(content.source)}")
    }
  }
}

Groovy help... About def edit and controllers

What does def edit = {} contain by default? You see, I was following a book, but it turns out to be using an older version, which is why some of the code doesn't work. I have this piece of code:
def edit = {
    def user = User.get(params.id)
    if (session?.user?.id == null) {
        flash.message = "You have to login first before editing your stuff."
        redirect(action: 'login')
        return
    } else if (session?.user?.id != params.id) {
        flash.message = "You can only edit yourself."
        redirect(action: list)
        return
    } else {
        // What should I put here?
    }
}
It's already functional. If the user clicks on edit without logging in, then he's redirected to a login page. Otherwise, if he did log in, then he's only allowed to edit himself. What should I put in the "else" clause? It should allow the user to edit his stuff, but I don't really know how to implement what I want. :(
It would be great if someone could share the default edit snippet.
I'm a bit new to all these, so go easy on me.
If you're talking about Grails, back up your UserController and try grails generate-controller - it will give you the complete text of default actions.
I also suggest that you look through scaffolding chapter - it's a great point to start.
the default edit action should look like this (pseudo-code, it depends on the actual domain class you create the code upon):
def edit = {
    def <domain>Instance = <DomainClass>.get(params.id)
    if (!<domain>Instance) {
        flash.message = "${message(code: 'default.not.found.message', args: [message(code: '<DomainClass>.label', default: '<DomainClass>'), params.id])}"
        redirect(action: "list")
    }
    else {
        return [<domain>Instance: <domain>Instance]
    }
}
By the way, most of the time you don't have to program security checks explicitly in the controller code; check out the Grails Spring Security plugin for that purpose.

Why is the command button not being displayed in my emulator?

I have already added five commands to a form and I want to add a sixth, but it does not display the sixth.
I am posting my code below.
public Command getOk_Lastjourney() {
    if (Ok_Lastjourney == null) {
        // write pre-init user code here
        Ok_Lastjourney = new Command("Last Journey", Command.OK, 0);
        // write post-init user code here
    }
    return Ok_Lastjourney;
}

public Form getFrm_planjourney() {
    if (frm_planjourney == null) {
        // write pre-init user code here
        frm_planjourney = new Form("Plan journey", new Item[] { getTxt_From(), getTxt_To(), getCg_usertype(), getCg_userpref(), getCg_searchalgo() });
        frm_planjourney.addCommand(getExt_planjourney());
        frm_planjourney.addCommand(getOk_planjourney());
        frm_planjourney.addCommand(getOk_planFare());
        frm_planjourney.addCommand(getOk_planDistance());
        frm_planjourney.addCommand(getOk_planTime());
        frm_planjourney.addCommand(getOk_planRoute());
        frm_planjourney.setCommandListener(this);
        // write post-init user code here
        System.out.println("Appending.....");
        System.out.println("Append completed...");
        System.out.println(frm_planjourney.size());
        frm_planjourney.setItemStateListener(this);
    }
    return frm_planjourney;
}
Given the System.out.println calls, I assume you were debugging with the emulator, right? In that case, it would be really helpful to provide a screenshot showing what exactly "does not display the sixth" looks like.
Most likely you just have too many commands to fit in the allocated area, so some of them are not shown until you scroll. There is also a chance that the sixth command was reassigned to some other soft button and you didn't notice. Or there's something else; it's hard to tell with the details you provided.
A general note: handling six actions with commands might not be the best choice in MIDP UI. For stuff like that, consider using the LCDUI List API instead. IMPLICIT lists allow for a more reliable and user-friendly design than commands.
