Parse text of a received SMS - SMSTools3 eventhandler

I have searched this forum (and others) and I can't find how exactly the eventhandler of SMSTools works. How does it know whether a message is being received or sent in order to take an action? I think it's better to explain what I want.
I want to use the eventhandler in this scenario:
I am using an IDS which sends information by SMS via smstools. Everything is OK so far; I am receiving what I need.
The problem is that when smstools receives an SMS, I want to check whether it's from the correct phone number (mine, for example, although a list of numbers would be better).
If it's the correct number, I want to look at the text (the text will be simple, like "yes" or "no") and take an action depending on it.
I would really appreciate any answer.
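For reference, SMSTools3 runs the script named by the eventhandler option in smsd.conf, passing the event type (RECEIVED, SENT, FAILED, ...) as the first argument and the path of the SMS file as the second; the file holds header lines such as From:, then a blank line, then the message body. A minimal sketch in Node.js (the whitelist, the number format, and the actions are placeholders, not tested against a live modem):

#!/usr/bin/env node
// eventhandler.js -- wired up in smsd.conf as:
//   eventhandler = /usr/local/bin/eventhandler.js
const fs = require('fs');

const [eventType, smsFile] = process.argv.slice(2);
const ALLOWED = ['40712345678']; // trusted senders; the exact format depends on your operator

if (eventType === 'RECEIVED' && smsFile) {
  const raw = fs.readFileSync(smsFile, 'utf8');
  const sep = raw.indexOf('\n\n'); // headers end at the first blank line
  if (sep !== -1) {
    const from = (raw.slice(0, sep).match(/^From:\s*(\S+)/m) || [])[1];
    const body = raw.slice(sep).trim().toLowerCase();
    if (from && ALLOWED.includes(from)) {
      if (body === 'yes') {
        // take the "yes" action here
      } else if (body === 'no') {
        // take the "no" action here
      }
    }
  }
}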

Related

RichResponse vs BasicCard order

I'm facing a quite annoying issue with the Actions on Google SDK.
I want to send the user these things, in this order:
A basic card
A text
A suggestion chip
I simply did this:
let richResponse = assistant.buildRichResponse();
richResponse.addBasicCard( ... );
richResponse.addSimpleResponse( ... );
richResponse.addSuggestions( ... );
Problem is, no matter the order set in my code, Google will always send the simple response before the card.
If I log the JSON before sending it, the card is indeed AFTER the message.
I tried simply switching them in the JSON before sending it, but then the assistant simply crashes.
All in all, I see no option to achieve what I want. :/
If I could send a one-item carousel I wouldn't need all that, but it's apparently impossible to send such a carousel because the assistant also crashes.
If I could add buttons with a JSON payload instead of an external URL in a BasicCard, I could also work around all these issues, but that's not possible either... I feel quite stuck.
Does anyone have a workaround?
Regards
The RichResponse object requires that the first item in the response be a SimpleResponse object, so you need some text first.
However, you are allowed to have two SimpleResponse objects, so you can try adding a SimpleResponse, the card, another SimpleResponse, and then the suggestions.
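That ordering might look something like this (an untested sketch using the same v1-style SDK methods as the question; the text, card contents, and suggestion labels are placeholders):

let richResponse = assistant.buildRichResponse();
richResponse.addSimpleResponse('Here is the card you asked for:'); // required first item
richResponse.addBasicCard(assistant.buildBasicCard('Card body text').setTitle('Card title'));
richResponse.addSimpleResponse('And here is the text that follows it.');
richResponse.addSuggestions(['Option A', 'Option B']);
assistant.ask(richResponse);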
It isn't clear how being able to send a one-item carousel would let you work around this (although I agree it would be useful). You would still need a SimpleResponse that appears before the option.
It isn't clear what you mean by "buttons with JSON". In this sense, suggestion chips work exactly the same way options do: they send something back to your webhook (options send the tag, while suggestion chips send their contents).

How to send multiple statements in a Google Assistant app?

I am creating a Google Assistant app for telling quotes. I am currently using Api.ai with an ApiAi Node.js webhook. I want my response to be like this:
Innovation is the only way to win.
By Steve Jobs
Want one more?
Note that the three lines are separate lines. I know it is possible if I just use api.ai's UX without a webhook (using multiple Simple Responses), but I cannot figure out how to do it combined with a webhook.
I tried:
assistant.ask("Innovation is the only way to win.");
assistant.ask("By Steve Jobs");
assistant.ask("Want one more?");
But it seems to speak only the first sentence. I also tried replacing it with:
assistant.tell("Innovation is the only way to win.");
assistant.tell("By Steve Jobs");
assistant.ask("Want one more?");
But it exits just after the first statement. How do I do it?
Both ask() and tell() take their parameters and send back a response. The only difference is that ask() keeps the conversation going, expecting the user to say something back, while tell() indicates the conversation is over. If you think of this in terms of a web server, both ask() and tell() send back the equivalent of a page and then close the connection, but ask() has included a form on the page, while tell() has not.
Both of them can take a RichResponse object, which may include one or two strings or SimpleResponse objects which will be rendered as chat bubbles. You can't do three, however, at least not according to the documentation. So it sounds like your best bet will be to include one SimpleResponse with the quote and attribution, and the second with the prompt for another.
This also sounds like a case where you want the audio to be different from the displayed text. In this case, you'd want to build the SimpleResponse so it has both speech and displayText fields.
That might look something like this (though I haven't tested the code):
var simpleResponse = {
  speech: 'Steve Jobs said "Innovation is the only way to win."',
  displayText: '"Innovation is the only way to win." -- Steve Jobs'
};
var richResponse = assistant.buildRichResponse();
richResponse.addSimpleResponse(simpleResponse);
richResponse.addSimpleResponse('Do you want another?');
assistant.ask(richResponse);
This will also let you do things like add cards in the middle of these two blurbs that could, for example, contain a picture of the person in question. To do this, you'd call the richResponse.addBasicCard() method with a BasicCard object. This might even be better visually than including the quote attribution on a second line.
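That might look roughly like this (an untested sketch; the image URL and alt text are placeholders):

richResponse.addBasicCard(
  assistant.buildBasicCard('"Innovation is the only way to win."')
    .setTitle('Steve Jobs')
    .setImage('https://example.com/steve-jobs.jpg', 'Photo of Steve Jobs')
);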
As for design: keep in mind that you're designing for a wide range of devices. Trying to control line formatting when display modes differ (and are sometimes non-existent) is questionable design. Don't focus on what the conversation will look like; instead, focus on how much it feels like a conversation your user would have with another person. Remember that voice is the primary medium here, with visuals intended to supplement the conversation, not rule it.
From what I can gather from the documentation, both .tell and .ask each send a single, complete response. Try putting all of your statements into one string. As far as I can tell, .ask doesn't actually affect the speech itself; it just tells the Assistant to wait for input.
assistant.ask("Innovation is the only way to win. By Steve Jobs. Want one more?");

Sending specific words to webhook

I'm trying to make an agent that can give me details about movies.
For example, the user says "Tell me about (movie-name)", which sends a POST request to my API with the (movie-name), which then returns the response.
However, I don't understand how to grab the movie name from the user's speech without creating a movieName entity with a list of all the movies out there. I just want to grab the next word the user says after "tell me about" and store it as a parameter. How do I go about achieving that?
Yes, you must create a movieName entity, but you do not need to create a list of all movies. Maybe you are experienced with Alexa, which requires a list of suggested values, but in api.ai you don't need to do that.
I find that api.ai is not very good at figuring out which words are part of a free-form entity like movieName, but hopefully adding enough user expressions will help it with that.
Edit: the entity I was thinking of is '#sys.any', but maybe it would be better to use a list of movie names with the 'automated expansion' feature. I haven't tried that, but it sounds like the way Alexa's custom slots work, which is actually a lot more flexible (just using the list as a guideline) than people seem to think.
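On the webhook side, whatever api.ai extracts for that entity arrives as a parameter. With the ApiAi Node.js client the question mentions, reading it might look like this (the parameter name movieName is an assumption and has to match the name defined in your intent):

// inside your intent handler
const movieName = assistant.getArgument('movieName'); // the value api.ai extracted
// look the title up in your movie API, then respond with the result
assistant.ask('Here is what I found about ' + movieName + '.');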

WATIR: how to drive Outlook Web Access

Since the emails load dynamically, how do you find a specific email that contains a button back to your site? This is like signing up at a site: the customer receives an email to confirm.
Thanks for the support
BigD
OWA, bless MS's little hearts (at least in the circa-2003 version I'm looking at here), uses frames, so first of all brush up on those or you are going to be hating life. The list of incoming messages is in a frame named 'viewer'. The message summaries are contained in a table, lacking any useful means to identify it, inside a div of class 'msgViewerCont' with an ID of 'dvContents'. So to see if a message exists, you want to look for a row in that table which contains the subject you expect to see.
(Be careful using ID values on OWA... apparently nobody in the group that developed it read the part of the HTML standard that specifies that ID values are supposed to be unique; they re-use them all over the page.)
Presuming you know the subject of the message you are about to receive, and also that you keep that mail account cleared out so that it will be the ONLY message there with that subject line, you can check whether it exists using
subject = Regexp.new("subject you are looking for")
browser.frame(:name, 'viewer').div(:id, 'dvContents').table(:index, 1).row(:text, subject).exists?
To click on it, use .click instead of .exists?.
Once you've clicked it, OWA will refresh the PreviewPane iframe; inside that iframe is another one that has the message body in it.
All those frames are nested inside the viewer frame. Welcome to nested-frame hell; hope you enjoy your stay. (Like I said, bone up on frames, you're in for a fun ride.)

Designing one EVERYTHING search box (date + address + keywords)

I'm storing information about local "events". They are described by three things: address, date, keywords (tags). I want to have only one search box for at least the address and keywords. The date might go in a separate field. I'm assuming that most people will search for events taking place "today", so this filter won't get that much traffic.
I need those addresses to be correct (because I'm geocoding them afterwards), so I need to validate them before the form is submitted and display a "did you mean" list if a user made a typo there. I can't do live search here. I can do a live search on keywords. Keep in mind that a user can make a typo there too, and I want to catch that.
Is there a clever way to design the input's parser in this case, so it can guess which part is supposed to be the address and which the keywords?
OR
Is there a way to actually parse the query as the user is entering it? Maybe I should show autocomplete hints for keywords after the first 3 characters are entered, and if the user declines them, assume that it's part of an address they're typing.
What do you think?
Take a look at DocumentCloud's VisualSearch:
http://documentcloud.github.com/visualsearch/#demo
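A rough sketch of the heuristic described in the question, in client-side JavaScript (KNOWN_TAGS and the token rule are assumptions; a real implementation would match tags against the server and geocode the remainder):

const KNOWN_TAGS = ['concert', 'festival', 'workshop']; // in practice, fetched from the server

function parseQuery(input) {
  const keywords = [];
  const addressParts = [];
  for (const token of input.trim().split(/\s+/)) {
    // Tokens that prefix-match a known tag are treated as keywords;
    // everything else is assumed to be part of the address.
    if (KNOWN_TAGS.some((t) => t.startsWith(token.toLowerCase()))) {
      keywords.push(token);
    } else {
      addressParts.push(token);
    }
  }
  return { keywords, address: addressParts.join(' ') };
}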
