Apologies for the naive question.
In the Dialogflow v2 APIs, there are two similar rich response options: Basic card and Card. From the object definitions, they look almost identical.
Does anyone know when to use one versus the other? When should I use Basic card, and when should I use Card?
The interfaces of most Google Cloud services are defined in Protobuf messages, which are published in the googleapis repo on Github. You can thus peek directly under the hood of Dialogflow, where you find these two definitions for Card and BasicCard:
// The card response message.
message Card {
  // Optional. Contains information about a button.
  message Button {
    // Optional. The text to show on the button.
    string text = 1;

    // Optional. The text to send back to the Dialogflow API or a URI to
    // open.
    string postback = 2;
  }

  // Optional. The title of the card.
  string title = 1;

  // Optional. The subtitle of the card.
  string subtitle = 2;

  // Optional. The public URI to an image file for the card.
  string image_uri = 3;

  // Optional. The collection of card buttons.
  repeated Button buttons = 4;
}

// The basic card message. Useful for displaying information.
message BasicCard {
  // The button object that appears at the bottom of a card.
  message Button {
    // Opens the given URI.
    message OpenUriAction {
      // Required. The HTTP or HTTPS scheme URI.
      string uri = 1;
    }

    // Required. The title of the button.
    string title = 1;

    // Required. Action to take when a user taps on the button.
    OpenUriAction open_uri_action = 2;
  }

  // Optional. The title of the card.
  string title = 1;

  // Optional. The subtitle of the card.
  string subtitle = 2;

  // Required, unless image is present. The body text of the card.
  string formatted_text = 3;

  // Optional. The image for the card.
  Image image = 4;

  // Optional. The collection of card buttons.
  repeated Button buttons = 5;
}
The only differences seem to be:
The buttons on a Card can send a text back to your agent, while on a BasicCard they always open an external URL.
A BasicCard can have formatted text instead of an image, although I can't find any information about what kind of formatting they are referring to (HTML? Markdown?).
The image of a BasicCard can have an accessibility_text, which is used by certain devices without a screen (e.g. screen readers).
An important difference that is not apparent from the protobufs, but from the Dialogflow documentation, is that Card is a generic rich message that can be used both on Actions on Google and on other integrations such as Facebook Messenger, Twitter, Slack, etc. BasicCard is an Actions on Google-specific type that does not work on any other platform.
Unless you really need the formatted text, you are probably better advised to use the more generic Card, because it doesn't break when you decide to integrate your agent with another platform. Keep in mind, though, that each platform has its own limitations for rich messages, so how platform-agnostic your card really is depends on the data you fill it with.
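To make that concrete, here is a sketch of a webhook response carrying the generic Card in Node.js. The field names mirror the Card protobuf above (camelCase in the v2 JSON representation); the title, image URL, and postback text are made-up examples, not anything from a real agent:

```javascript
// Sketch: build a Dialogflow v2 fulfillment response with a generic Card.
// Card.Button.postback is sent back to the agent as if the user typed it,
// which is the key difference from BasicCard's open-URL-only buttons.
function buildCardResponse() {
  return {
    fulfillmentMessages: [
      {
        card: {
          title: "Room 42",                         // Card.title
          subtitle: "Available today",              // Card.subtitle
          imageUri: "https://example.com/room.png", // Card.image_uri
          buttons: [
            { text: "Book it", postback: "book room 42" } // Card.Button
          ]
        }
      }
    ]
  };
}
```

Because this is the generic Card, the same response object should render (within each platform's limits) on Messenger, Slack, and the other integrations, not just Actions on Google.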
I have a teams bot (nodejs) that renders an adaptive card with some table data. We want to provide a richer data viewing experience by iframing a javascript widget inside of a task module that will display the data using interactive D3 charts.
Basically the adaptive card will have a "see more" button, which will invoke a task module containing the html contents.
The part I can't figure out is how to access the data from the HTML inside the task module. I realize that there is a global object called microsoftTeams that contains metadata and context, but it doesn't seem specific to the adaptive card that invoked the task module; it holds more global Teams information such as the user and conversation metadata.
I was able to insert the data into the taskInfo object when invoking the task module as a custom param. So my question is, is there a way to access the taskInfo object from inside the HTML iframe?
I am using query strings to get data from the server to the client (there might be a better way; hopefully #Saonti-MSFT comes back with an easier/cleaner one).
This is my task object:
task: {
  type: "continue",
  value: {
    url: `${process.env.HostName}?message=${JSON.stringify(Buffer.from(message).toString("base64"))}`,
    width: 500,
    height: 736
  },
}
On the client side I then get the search query and convert it back as follows:
const urlParams = new URLSearchParams(window.location.search);
const message = urlParams.get('message');
// Needed to remove the quotes and convert spaces to "+" as this was getting lost
const data = atob(message.replace(/"/g,"").replace(/\s/g, "+"));
NOTE: I was only doing this for a single string, but I don't see why you couldn't use an object if you converted it to a string on the server and then parsed it on the client.
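That object round trip can be sketched as follows (the payload and URL here are illustrative; using encodeURIComponent on the base64 string also avoids the quote-stripping and "+"-to-space workarounds from the snippet above):

```javascript
// Server side: pack an object into a URL-safe query parameter.
const payload = { title: "Q3 revenue", rows: [[1, 2], [3, 4]] };
const encoded = encodeURIComponent(
  Buffer.from(JSON.stringify(payload)).toString("base64")
);
const url = `https://example.com/taskmodule?message=${encoded}`;

// Client side: read the parameter back and rebuild the object.
// URLSearchParams undoes the percent-encoding; in the browser you would
// use atob instead of Buffer to decode the base64.
const params = new URLSearchParams(url.split("?")[1]);
const decoded = JSON.parse(
  Buffer.from(params.get("message"), "base64").toString("utf8")
);
```

After this, `decoded` is structurally identical to `payload`, so any JSON-serializable data can ride along with the task module URL.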
Hope this helps
You can check out this Sample. It includes HTML pages which can be customized. You can create your own web pages and integrate them into the task module.
I want the functionality of Dialogflow custom payload chips, but I want their width to cover the full line.
Summary:
I want a button that covers the full width of the chatbot and, upon click, returns the value contained in it as a message.
I am using a custom payload in the Dialogflow console.
Does anyone know, or have a clue about, implementing phone call functionality in a Bixby capsule? For example, in the Yelp capsule, a user presses a 'Call Business' button at the bottom and the capsule initiates a phone call with the business.
I have extensively been looking at the Developer's Guide (Contact library):
https://bixbydevelopers.com/dev/docs/dev-guide/developers/library.contact
But it seems like they don't have an action call for a phone call.
compound-card {
  content {
    single-line {
      image {
        url ("../assets/ic_btn_call.png")
      }
      spacer
      text {
        value ("Call #{value(phoneNumber)}")
        style (Title_XS)
      }
    }
  }
  on-click {
    intent {
      // goal: Call action not implemented
      value {
        $expr (phoneNumber)
      }
    }
  }
}
An alternative solution:
Although it is not documented, you can use app-launch to take advantage of Android's built-in handling of tel: URIs.
Add the following code to your view file:
app-launch {
  payload-uri ("tel: 1-800-726-7864") // Samsung number
}
Do a private submission and load the revision onto your phone.
When you reach the view with app-launch, you will be redirected to the phone call with the number.
You will need to confirm the call by pressing the green dial icon.
Some of Bixby's early adopters helped us explore and develop new features. As a part of this partnership, they have access to newer features.
The ability to make a call is one such feature which is being developed comprehensively and will soon be available to all of our developer community!
Please follow this feature request https://support.bixbydevelopers.com/hc/en-us/community/posts/360029568074-Allow-access-to-phone-dialer for updates on release date and to share your comments or thoughts.
Is there any way I can make Bixby read the information on my card? For example, if my card has a title, date, and description, then after Bixby reads the message "Here is what I found", it should read something like "India vs Australia, Green Park, Kanpur, 10th March, 1:30 PM".
Is it possible to add speech in a result-view? I am showing about 6 cards in one result and want Bixby to read all of them one by one, with a 3-4 second pause between cards.
I am showing a compound-card in my result-view, using single-line. Adding speech in the template results in nothing. Here are a few lines of my code:
list-of (all) {
  has-details (false)
  where-each (single) {
    compound-card {
      content {
        single-line {
          if (exists(single.Name)) {
            text {
              value {
                template ("#{value(single.Name)}") {
                  speech ("#{value(single.Name)}")
                }
              }
              style (Title_S)
            }
          }
        }
      }
    }
  }
}
You would need to define the speech key of your dialog template (documentation link). This can differ from what the dialog template text says, so you can customize it as needed. Your speech did not work because it needs to be part of the dialog template, not the layout.
Adding more information after initial question was modified:
Hands-free List Navigation would be the correct way for you to enable a voice output for every entry in a summary mode result-view. This allows you to choose a few different ways the summary content will be read to the user and how the user can select one of the options.
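As a rough sketch only (the concept name Restaurant is a placeholder for whatever concept your `single` items are, and the exact file layout is my assumption, so check it against the Bixby dialog documentation), a result dialog that separates displayed text from spoken output might look like:

```
dialog (Result) {
  match: Restaurant (this)
  template ("#{value(this.Name)}") {
    // The speech key is what Bixby reads aloud; it may differ from the text.
    speech ("#{value(this.Name)}")
  }
}
```

The point is that speech lives in a dialog file matched against your concept, not inside the layout's text template as in the code above.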
I have a view which receives emails from the web; I catch those and send them to a website. The problem is that the rich text containing the body of the email loses its attributes (color, bold text, etc.), and images are lost or, if I use embedObject, they lose their position (they are only placed at the beginning or end of the email).
There is a lot of information on the web but few examples that work. Something with the MIME format, converting to HTML or XML, are the options, but I cannot get them to work.
Looking at the properties of the document in the view, the MIME format splits the whole email into a bunch of parts, many of them Body items. One of the Body items contains all the HTML code of the email, but I cannot access it. That is, I think, the easiest solution, and my question: how can I access it, put it into a string, and send it to the web?
Does anyone have any tips or another solution?
P.S.: In the client view the email is perfectly formatted with images in place; I just cannot put the view on the web.
(Screenshot: the Body parts, including the HTML one.)
The base of my Java agent is the following code:
public void NotesMain() {
    try {
        // I create a doc and use replaceItemValue to copy the other parts of the email.
        RichTextItem rich = (RichTextItem) doc.getFirstItem("Body");
        // getFormattedText strips all formatting, so only plain text survives
        String string = rich.getFormattedText(false, 0, 0);
        rich.appendText(string);
    } catch (Exception e) {
        e.printStackTrace();
    }
}
This code essentially converts MIME/rich text to plain text.
Just readdress (change the SendTo field) and forward the existing document instead of creating a new document and copying items.
If you don't want to alter the existing document, that's okay. You just call the Send() method without calling the Save() method.
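If you do want to pull the HTML part out yourself instead of forwarding, here is a sketch using the MIME classes of the Notes Java API. Treat it as an outline under assumptions: the agent class name is made up, the item is assumed to be "Body", and ConvertMime must be turned off before the document is opened for getMIMEEntity to see the raw parts.

```java
import lotus.domino.*;

public class GetHtmlBody extends AgentBase {
    public void NotesMain() {
        try {
            Session session = getSession();
            // Keep Domino from converting the MIME parts to rich text
            session.setConvertMime(false);
            Document doc = session.getAgentContext().getDocumentContext();

            // Walk the Body MIME entities and keep the text/html one
            MIMEEntity entity = doc.getMIMEEntity("Body");
            String html = null;
            while (entity != null) {
                if ("text".equals(entity.getContentType())
                        && "html".equals(entity.getContentSubType())) {
                    html = entity.getContentAsText();
                }
                entity = entity.getNextEntity();
            }
            // html now holds the message body markup, ready to send to the web
            System.out.println(html);
        } catch (NotesException e) {
            e.printStackTrace();
        }
    }
}
```

This reads the same Body part you saw in the document properties; from there you can put the string wherever your website needs it.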