RichResponse vs BasicCard order - node.js

I'm facing a rather annoying issue with the Actions on Google SDK.
I want to send these things to the user, in this order:
A basic card
A text message
A suggestion chip
I simply did this:
let richResponse = assistant.buildRichResponse();
richResponse.addBasicCard( ... );
richResponse.addSimpleResponse( ... );
richResponse.addSuggestions( ... );
The problem is that no matter the order set in my code, Google will always send the simple response before the card.
If I log the JSON before sending it, the card is indeed AFTER the message.
I tried simply switching them in the JSON before sending it, but then the assistant crashes.
All in all, I see no option to achieve what I want :/
If I could send a one-item carousel I wouldn't need all that, but it's apparently impossible to send such a carousel - the assistant crashes on that too.
If I could add buttons with a JSON payload instead of an external URL in the BasicCard I could also work around all these issues, but that's not possible either... I feel quite stuck.
Does anyone have a workaround?
Regards

The RichResponse object requires that the first item in the response be a SimpleResponse object, so you need some text first.
However, you are allowed to have two SimpleResponse objects, so you can try adding a SimpleResponse, then the card, then another SimpleResponse, and then the suggestions.
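Something like this should keep the pieces in the order you want (a sketch, untested; the card text, prompts, and chips are placeholders):
let richResponse = assistant.buildRichResponse();
richResponse.addSimpleResponse('Here is what I found:'); // text first, as required
richResponse.addBasicCard(assistant.buildBasicCard('Card body text')
    .setTitle('Card title'));
richResponse.addSimpleResponse('Want to know more?');    // the second text bubble
richResponse.addSuggestions(['Yes', 'No']);
assistant.ask(richResponse);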
It isn't clear how being able to send a one-item carousel would let you work around this (although I agree it should be possible). You would still need a SimpleResponse that appears before the carousel.
It isn't clear what you mean by "buttons with JSON". In this sense, suggestion chips work exactly the same way carousel options do - they send something back to your webhook (options send their tag, while suggestion chips send their contents).
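For example, a webhook can handle both the same way (a sketch, untested; the option tag and chip text are made up):
function handleSelection(assistant) {
    const option = assistant.getSelectedOption(); // the tag of a tapped option, or null
    const text = assistant.getRawInput();         // a tapped suggestion chip arrives here
    if (option === 'MORE_INFO' || text === 'More info') {
        // same handling either way
    }
}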

How does a Gmail message Id or ThreadId map to the new Gmail UI?

Edit: addressing the first comment below and for clarity, this isn't a code question. The question is simply:
What do I put into the URI querystring of the new Gmail UI to view a draft message created by the Gmail API?
Despite this not really being a code question, I'm asking on Stack Overflow as it's Google's preferred platform for Gmail API questions.
--
If I view a draft message in the new Gmail UI, the URI is something like this:
https://mail.google.com/mail/u/1/?zx=iij9apqgzdf4#drafts?compose=jrjtXSqXwlFGnSGCQgDCdnHGVFdlpFMgzsCNgpQstQLxdLCMkjKstBmWZkCmjhWTQnpsZCJF
I can't see any way to create such a link from the Id or ThreadId of a message created via the Gmail API.
Previously, one could do this:
https://mail.google.com/mail/u/1/?zx=ov61dxfbrcga#drafts?compose=1631caae9dbb074d
where the value of "compose" is the Id.
How can the same thing be accomplished in the new UI?
I've been encountering the same problem and have had some success with it, though there are still some issues I can't get past.
Good news: The new compose parameter format is some kind of "base40" encoding. I searched the Gmail source for a restricted alphabet string, and found and deobfuscated the bit of code doing this encoding/decoding: https://gist.github.com/danrouse/52212f0de2fbfe33cfc56583f20ccb74
This code includes an encode and decode function which should work for Gmail-format query parameters.
Bad news: the values it encodes to open draft emails do not appear to be available from the Gmail API. Specifically, they look like this:
thread-f:NEW_THREAD_ID+msg-a:DRAFT_ID -- while the draft ID is the same as it was before, the thread ID does not appear to match any of the IDs that the Gmail API returns.
Interestingly, if you inspect the subject row in the Gmail UI, it has dataset attributes including all of both the old-format and new-format IDs - but it's still unclear how to get the new ones programmatically.
Thanks to @frank-szilinski - he pointed out that the old format is now translated, i.e. this now works again:
https://mail.google.com/mail/ca/u/1/#drafts/1661237c4db71ace
It doesn't seem to work when the Gmail tab isn't already open, however.
Building on @kremonte's gist and @chris-wood's comments, I made a Rails gem that correctly creates the open-the-draft-inside-Gmail URL.
It's here - https://github.com/GoodMeasuresLLC/gmail_compose_encoder
It's for the use case of "my code created a draft (prepopulated with some text, of course) and now I want to open the draft in compose mode so that my user can review it before hitting send".
How to get the URL for a draft
If, for example, you use a list request, from which you get your draft objects:
{
  "id": string,
  "message": {
    object (Message)
  }
}
You can take this id and put it into a URL in this format:
mail.google.com/mail/#inbox?compose=[id]
E.g.
mail.google.com/mail/#inbox?compose=3doinm3d08932d
This will open up Gmail with the relevant draft open.
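With the googleapis Node.js client that might look like this (a sketch, untested; assumes an already-authorized OAuth2 client in auth):
const { google } = require('googleapis');

async function draftUrls(auth) {
    const gmail = google.gmail({ version: 'v1', auth });
    const res = await gmail.users.drafts.list({ userId: 'me' });
    // build the compose URL from each draft's id, per the format above
    return (res.data.drafts || []).map(d => 'mail.google.com/mail/#inbox?compose=' + d.id);
}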
I was struggling because I wanted it to work with multiple accounts, but the authuser parameter did not help.
Inserting the email address instead of the integer after the u/ path component solved the problem.
https://mail.google.com/mail/u/{email_address}/#drafts?compose={message_id}
The message id is the one provided by the API.
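In code that's just string interpolation (emailAddress and messageId being values you already have):
const url = `https://mail.google.com/mail/u/${emailAddress}/#drafts?compose=${messageId}`;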

BotFramework: Create Suggested Actions without text attribute

I'm creating a bot in DirectLine. I'm trying to use SuggestedActions to display a suggested action, and I don't want to include the text attribute. When I run my code without the text attribute, I see a blank message being displayed. How can I avoid that?
My code
var msg = new builder.Message(session)
    .suggestedActions(
        builder.SuggestedActions.create(
            session, [
                builder.CardAction.imBack(session, "disconnect", "Disconnect")
            ]
        ));
session.send(msg);
The output I'm getting:
Per my understanding, you want a button that is pinned to the bottom and always displayed, to remind your agent that he can disconnect the conversation at any time.
However, based on my testing and understanding, there are two reasons why this is probably not a good idea:
SuggestedActions are based on Messages in the Bot Framework, and a bot application is fundamentally conversational. Every message between user and bot, as rendered in the various channels, is always contained in a text box, as shown in your capture. We cannot bypass this behavior.
Per your requirements, I think you want this button to stay displayed until the agent clicks it. I didn't find any such feature in the Bot Framework; you would need to send this message alongside every message from the bot, which is not graceful and carries unpredictable risk.
My suggestion is to create a triggerAction to handle global disconnect requests. See https://learn.microsoft.com/en-us/bot-framework/nodejs/bot-builder-nodejs-dialog-actions for more info.
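A sketch of what that could look like (untested; the dialog name and disconnect behavior are placeholders):
bot.dialog('disconnect', function (session) {
    // end the conversation when the user asks to disconnect
    session.endConversation("You have been disconnected.");
}).triggerAction({
    matches: /^disconnect$/i // fires from anywhere in the conversation
});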

How to send multiple statements in a Google Assistant app?

I am creating a Google Assistant app that tells quotes. I am currently using API.AI with the ApiAi Node.js webhook. I want my response to work like this:
Innovation is the only way to win.
By Steve Jobs
Want one more?
Note that the three lines are separate lines. I know this is possible if I just use API.AI's UX without a webhook (using multiple Simple Responses), but I cannot figure out how to do it combined with a webhook.
I tried:
assistant.ask("Innovation is the only way to win.");
assistant.ask("By Steve Jobs");
assistant.ask("Want one more?");
But it seems to speak only the first sentence. I also tried replacing it with:
assistant.tell("Innovation is the only way to win.");
assistant.tell("By Steve Jobs");
assistant.ask("Want one more?");
But it exits just after the first statement. How can I do this?
Both ask() and tell() take their parameters and send back a response. The only difference is that ask() keeps the conversation going, expecting the user to say something back, while tell() indicates the conversation is over. If you think of this in terms of a web server, both ask() and tell() send back the equivalent of a page and then close the connection, but ask() has included a form on the page, while tell() has not.
Both of them can take a RichResponse object, which may include one or two strings or SimpleResponse objects which will be rendered as chat bubbles. You can't do three, however, at least not according to the documentation. So it sounds like your best bet will be to include one SimpleResponse with the quote and attribution, and the second with the prompt for another.
This also sounds like a case where you want the audio to be different than the displayed text. In this case, you'd want to build the SimpleResponse so it has both speech fields and displayText fields.
That might look something like this (though I haven't tested the code):
var simpleResponse = {
    speech: 'Steve Jobs said "Innovation is the only way to win."',
    displayText: '"Innovation is the only way to win." -- Steve Jobs'
};
var richResponse = assistant.buildRichResponse();
richResponse.addSimpleResponse(simpleResponse);
richResponse.addSimpleResponse('Do you want another?');
assistant.ask(richResponse);
This will also let you do things like add cards in the middle of these two blurbs that could, for example, contain a picture of the person in question. To do this, you'd call the richResponse.addBasicCard() method with a BasicCard object. This might even be better visually than including the quote attribution on a second line.
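For instance (again untested; the image URL and card text are placeholders), the card could carry the attribution and a picture between the two bubbles:
var richResponse = assistant.buildRichResponse();
richResponse.addSimpleResponse('Innovation is the only way to win.');
richResponse.addBasicCard(
    assistant.buildBasicCard('Co-founder of Apple')
        .setTitle('Steve Jobs')
        .setImage('https://example.com/steve-jobs.jpg', 'Steve Jobs'));
richResponse.addSimpleResponse('Do you want another?');
assistant.ask(richResponse);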
As for design - keep in mind that you're designing for a wide range of devices. Trying to focus on line formatting when display modes differ (and are sometimes non-existent) is questionable design. Don't focus on what the conversation will look like; focus on how much it feels like a conversation your user would have with another person. Remember that voice is the primary channel here, with visuals intended to supplement the conversation, not rule it.
From what I can gather from the documentation, both .tell() and .ask() send back a complete response, so you only get one per turn. Try putting all of your statements into one string. As far as I can tell, .ask() doesn't actually affect the tone of the speech; it just tells the Assistant to wait for input.
assistant.ask("Innovation is the only way to win. By Steve Jobs. Want one more?");

API.AI with Google Assistant - phone number capture problems

We are trying to capture a phone number - and actually many other numbers, like amounts, ZIP codes, etc. We are using Google Home.
The below urls are JSON payloads we received on the fulfillment side. The entity name is TheNumber.
One JSON payload is from when we set up the entity as @sys.number; the other is from when it was @sys.phone-number.
https://s3.amazonaws.com/xapp-bela/gh/number-test.json
https://s3.amazonaws.com/xapp-bela/gh/phone-number-test.json
The first problem is that the Google Assistant really struggles to recognize number sequences like phone numbers or ZIP codes. But even when it gets them right (according to the originalRequest in the JSON payload), the entity still has the wrong value when it arrives at the fulfillment side.
I guess my question is: what am I doing wrong? Is anybody else seeing the same problems?
Not sure this will help, since this is more about talking to the Google Home device, but... I too was having a similar issue with a long number. If you use @sys.number-sequence as part of your intent's context, the device will let you recite much longer numbers without interrupting you. In your Node.js code, you can grab the argument for that number sequence for use in your Google Home agent.
if (assistant.getArgument('number-sequence') != null) {
    // <do something>
}

Parse text of a received SMS - smstools3 & eventhandler

I have searched this forum (and others) and I can't find how exactly the eventhandler of smstools works. How does it know whether it's receiving or sending, in order to take an action? I think it's better to explain what I want.
I want to use the eventhandler in this scenario:
I am using an IDS which sends information by SMS via smstools. Everything is OK so far; I am receiving what I need.
The problem is that when smstools receives an SMS, I want to check whether it's from the correct phone number (mine, for example - a list of numbers would be even better).
If it's the correct number, I want to read the text (the text will be simple, like "yes" or "no") and take an action depending on it.
I will really appreciate any answer.
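For reference, smsd runs the configured eventhandler (eventhandler = /path/to/script in smsd.conf) with the event type (RECEIVED, SENT, FAILED, ...) as the first argument and the path to the SMS file as the second; that file holds header lines such as From:, then a blank line, then the message body. The handler can be any executable; a minimal Node.js sketch (untested; the whitelist and actions are placeholders):
#!/usr/bin/env node
const fs = require('fs');

const [eventType, smsFile] = process.argv.slice(2);
if (eventType !== 'RECEIVED') process.exit(0); // only act on incoming SMS

const allowed = ['31612345678']; // hypothetical whitelist of trusted senders

const raw = fs.readFileSync(smsFile, 'utf8');
const [headers, ...bodyParts] = raw.split('\n\n');
const from = (headers.match(/^From:\s*(\S+)/m) || [])[1];
const body = bodyParts.join('\n\n').trim().toLowerCase();

if (allowed.includes(from)) {
    if (body === 'yes') {
        // take the "yes" action
    } else if (body === 'no') {
        // take the "no" action
    }
}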
