How can Bixby read text that is not displayed?

I would like to have a Bixby result-view read text that is different from what is displayed.
I see that I can create a dialog and match it to the results, or I can also put it in the message field of a result-view. Both of these options let you specify a "template", which has a speech key option, but I have not been able to get that working, so I can't tell what the speech key actually does. It might be what I am looking for, but I cannot find any examples of the syntax, and it usually complains about missing values.
Is it possible to have different displayed vs. spoken text? Even if this functionality is not what the speech key is for, can you explain and give an example of what usage of the speech key would look like, just so I understand going forward?

Is it possible to have different displayed vs. spoken text?
Yes, that is exactly what the speech key option is for. What trouble are you running into with it?
Here's a sample use case.
content {
  template ("This is the displayed text.") {
    speech ("This is the text that Bixby will read.")
  }
}
I see that I can create a dialog and match it to the results, or I can also put it in the message field of a result-view.
Both ways are legitimate, but note that it is significantly harder to override the dialog generated in the message field of a result-view.
Instead, if you use normal dialog files (such as dialog (Result)), you can write different match patterns to create different dialog for different situations.

Related

How should I enable a text field before entering text using the Blue Prism tool?

I'm inputting text into a textbox, which is working fine. But there is an existing watermark in the textbox, so the entered text is treated as the watermark, and clicking on the 'Next' button results in an error.
Can somebody help with how to enable the text field before entering text using the Blue Prism tool?
Can you send a key event (like the SPACE key) into that field? It may cause the watermark to clear itself.
How do you clear that field in the first place?
Quoting directly from the browser automation guide, which you should read before going further:
Using a Write stage to write to an HTML element such as a text field does not always work properly. For example, you might try writing to a username field, only to see a message appear on the web site saying something like “Please enter a username”, even though you can see that the value has been correctly written to the field. This can happen because the data validation functions used by the web form might be expecting keystrokes, and the Write stage has “fooled” it. To get around this, you will need to use a Navigate stage to call the Send Keys action instead of using a Write stage.
Some websites have maximum character limits imposed on some text fields. Using a Write stage can sometimes fool the website into allowing too many characters into the field, because the Write stage “sets” the field value rather than keying characters into it. This is important to bear in mind because if a field has been filled with more characters than the website would usually allow, the website could produce an error when you try to submit or post the data.
To get around this, use Send Keys; it is likely the best way to get past the website's validation.
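Blue Prism is configured visually rather than in code, but the same distinction between "setting" a field value and keying characters into it shows up in any browser automation stack. Here is a minimal Puppeteer (TypeScript) sketch, purely for illustration; the URL and selector are made up, and nothing below is Blue Prism itself.

import puppeteer from "puppeteer";

async function run(): Promise<void> {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto("https://example.com/login"); // hypothetical page

  // "Write stage" style: set the value directly, so keystroke-based
  // validation on the page may never fire.
  await page.$eval("#username", (el: any, value: any) => { el.value = value; }, "alice");

  // "Send Keys" style: type character by character, triggering the same
  // key events a real user would produce.
  await page.click("#username", { clickCount: 3 }); // select any existing text
  await page.type("#username", "alice");

  await browser.close();
}

run().catch(console.error);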

Building a response in Dialogflow using multiple responses

Please excuse me if this is a really basic question - I'm very much still learning, and I just cannot find a solution.
I'm trying to use the standard basic text responses in Dialogflow, which, from what I understand, should work.
What I want to do is have a set statement ("Okay, let's see what I can find"), then a random pick from a list, then another set statement, essentially stacking the responses in Dialogflow (see screenshot).
It works absolutely fine in Dialogflow's test console; however, it doesn't do what I want when I take it into the Actions on Google simulator.
Have I made a stupid error, missed a toggle switch somewhere, or am I trying to do something unsupported?
To surface the text responses defined in Dialogflow's Default response tab, go to the Google Assistant response tab and turn on the switch labelled "Use response from the DEFAULT tab as the first response".

While using azure OCR as a web service, if I know the orientation of text, what param should I use?

I am using the Microsoft Azure OCR web service. When I set the flag "detectOrientation" to true, it sometimes gives weird results (it tries to identify vertical text, even though I want it to read horizontal text). So I want to set the orientation myself, since I know it is "Up". Even if I set "detectOrientation" to false, it returns the same result.
Surprisingly, if I use the Microsoft demo page, https://azure.microsoft.com/en-in/services/cognitive-services/computer-vision/, it returns the correct result. It might be doing some pre/post-processing or adding some flags, but it does not reveal this information. I have reported this issue to Microsoft many times but got no reply.
You can't set the orientation manually, as the parameter detectOrientation is a boolean (true/false), as stated here.
The response from the demo page is not the result of the Computer Vision API's OCR operation; it is the result of using the Computer Vision API's Recognize Text operation and then Get Recognize Text Operation Result to retrieve the outcome of that operation.
The response of the OCR operation includes the following:
textAngle
orientation
language
regions
  lines
    words
      boundingBox
      text
While the response from Get Recognize Text Operation Result includes the following:
Status Code
Lines
  Words
    BoundingBox
    Text
If you compare the results of the demo page you'll find that they match the Recognize Text, not the OCR.
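For reference, a raw OCR call looks roughly like the following TypeScript sketch. The region, the v2.0 API path, and the key handling are assumptions here, so check them against the current Computer Vision documentation.

import fetch from "node-fetch"; // or the global fetch in newer Node versions

const endpoint = "https://westus.api.cognitive.microsoft.com"; // placeholder region
const key = process.env.AZURE_CV_KEY ?? "";

async function ocr(imageUrl: string) {
  // detectOrientation is a boolean: it only asks the service to detect and
  // correct rotation; it does not let you force a particular orientation.
  const res = await fetch(
    `${endpoint}/vision/v2.0/ocr?language=unk&detectOrientation=true`,
    {
      method: "POST",
      headers: {
        "Ocp-Apim-Subscription-Key": key,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ url: imageUrl }),
    }
  );
  // The body contains textAngle, orientation, language and regions > lines > words.
  return res.json();
}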
Surprisingly, if I use the Microsoft demo page, https://azure.microsoft.com/en-in/services/cognitive-services/computer-vision/, it returns the correct result.
On the demo page, as stated, they don't use the OCR operation of the web service but the newer Recognize Text API operation.
Switch to this one, and your results will be consistent.
And to answer your other question about passing the orientation: no, there is no such parameter.
I believe the detectOrientation parameter just detects the orientation of all the text in the image; it is not a setting that lets you choose which text to read based on its orientation, which is how you're trying to use it.
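If you do switch to the Recognize Text flow, note that it is asynchronous: you POST the image, read the Operation-Location header from the response, and then poll Get Recognize Text Operation Result until the status reports success. A hedged sketch, reusing the endpoint, key, and fetch setup from the OCR example above (again, the v2.0 path and the mode parameter are assumptions to verify against the docs):

async function recognizeText(imageUrl: string) {
  // Step 1: submit the image; the interesting part of the response is the
  // Operation-Location header pointing at the pending operation.
  const submit = await fetch(
    `${endpoint}/vision/v2.0/recognizeText?mode=Printed`,
    {
      method: "POST",
      headers: {
        "Ocp-Apim-Subscription-Key": key,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ url: imageUrl }),
    }
  );
  const operationUrl = submit.headers.get("operation-location");
  if (!operationUrl) throw new Error("No Operation-Location header returned");

  // Step 2: poll Get Recognize Text Operation Result until it finishes.
  for (let attempt = 0; attempt < 10; attempt++) {
    await new Promise((resolve) => setTimeout(resolve, 1000));
    const poll = await fetch(operationUrl, {
      headers: { "Ocp-Apim-Subscription-Key": key },
    });
    const result: any = await poll.json();
    if (/succeeded/i.test(result.status)) return result; // lines > words with text
    if (/failed/i.test(result.status)) throw new Error("Recognition failed");
  }
  throw new Error("Timed out waiting for the recognition result");
}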

How to correctly utilize Zip Code entity for DialogFlow?

I'm currently trying to use the built-in entity '#sys.zip-code' from DialogFlow (formerly API.AI) for capturing zip codes. However, so far it does not seem to recognize any actual zip codes except those which I explicitly set through training. It also does not recognize the '5 digit' pattern as a possible match if #sys.phone-numbers is used in another intent (e.g. 54545 gets recognized as a phone number rather than a zip).
Should I upload a list of known zip codes through the training section to get this working? Or is there something I'm missing from the built-in functionality? I haven't seen a ton of info online on how to best utilize this entity, so I figured I'd ask here before coming up with a custom solution.
Thanks in advance!
I think the best way is to prompt the user with something like "Could I get your name and zip code?". The intent which I have created contains multiple combinations of "User says" phrases. They are as below:
#"#sys.given-name #sys.zip-code"
#"#sys.zip-code #sys.given-name"
#"#sys.given-name"
#"#sys.zip-code"
and I also have required parameters set up to pick these values, with prompt messages.
I have attached a picture of the setup that I iterated on.
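If you also handle this intent through a fulfillment webhook, the captured values arrive as parameters named after the entities above. Here is a small sketch using the dialogflow-fulfillment Node library; the intent name and parameter keys ("given-name", "zip-code") are assumptions that need to match your agent's configuration.

import express from "express";
import { WebhookClient } from "dialogflow-fulfillment";

const app = express();
app.use(express.json());

app.post("/fulfillment", (request, response) => {
  const agent = new WebhookClient({ request, response });

  // "Name and Zip" is a hypothetical intent name; the parameters come from
  // the sys.given-name and sys.zip-code entities in the training phrases.
  function nameAndZip(agent: WebhookClient): void {
    const name = agent.parameters["given-name"];
    const zip = agent.parameters["zip-code"];
    agent.add(`Thanks ${name}, let me look up ${zip}.`);
  }

  const intentMap = new Map<string, (agent: WebhookClient) => void>();
  intentMap.set("Name and Zip", nameAndZip);
  agent.handleRequest(intentMap);
});

app.listen(3000);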

How can I configure the QnAMaker tool to modify the list-style buttons in Skype?

I have made a bot using QnA Maker and Node JS which is running on Skype.
When the user inputs a word which has multiple matches in the FAQ link or document uploaded to QnA Maker, it shows choice buttons using the QnAMakerTools module from Node. My question is: when the multiple matches have the same initial words, then because of the size of the choice buttons in Skype, half of the text gets hidden. For example, I have three matches like
Whom should I contact for parking?
Whom should I contact for canteen?
Whom should I contact for Stationery?
It shows in Skype as
Whom should I contact for...
Whom should I contact for...
Whom should I contact for...
If the option text is too long, parts of it get hidden. What can I do about this?
First of all, there is a limitation on the maximum number of characters in Skype, so that's something you will have to live with. However, you can implement some custom logic to change the text being shown.
The current logic that you are seeing is in the QnAMakerTools file.
The way to go here is probably providing your own QnAMakerTools implementation (it needs to follow this interface).
The QnAMakerDialog receives an IQnAMakerOptions parameter. One of the properties of that interface is feedbackLib, which is basically the QnAMakerTools instance that the dialog will later use to disambiguate the question, as you can see here.
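One practical approach for the text itself is to strip the prefix shared by the candidate questions before presenting them as choices, so the distinctive part fits inside Skype's button limit. The helper below is plain TypeScript and only illustrates the trimming; wiring it into your own QnAMakerTools implementation passed as feedbackLib still has to follow the interface linked above.

// Given candidate questions that share a long common prefix, keep only the
// part that differs so it fits in Skype's choice-button limit.
function distinctiveChoices(questions: string[], maxLength = 20): string[] {
  if (questions.length < 2) return questions;

  // Length of the prefix shared by all questions.
  let prefixLen = 0;
  while (
    prefixLen < questions[0].length &&
    questions.every((q) => q[prefixLen] === questions[0][prefixLen])
  ) {
    prefixLen++;
  }

  // Cut at the last space inside the prefix so no word is split.
  const cut = questions[0].lastIndexOf(" ", prefixLen) + 1;

  return questions.map((q) => {
    const tail = q.slice(cut).trim();
    return tail.length > maxLength ? tail.slice(0, maxLength - 1) + "…" : tail;
  });
}

// The three example questions above become "parking?", "canteen?", "Stationery?".
console.log(
  distinctiveChoices([
    "Whom should I contact for parking?",
    "Whom should I contact for canteen?",
    "Whom should I contact for Stationery?",
  ])
);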
