Unable to test buttons in cards in Botium Box community edition - direct-line-botframework

Hi,
I am testing a chatbot over the Direct Line / Bot Framework connector, as in the title. When I click one of the buttons in the adaptive card, the respective text is displayed in the card, but when I run the same step in Live Chat I get an AJAX error.
The output data is linked internally in the JSON object. Do I need to change any advanced settings in Botium Box community edition to test this bot? In this scenario the user only clicks a button and the bot responds with the respective adaptive card; the user never enters any text.
Can we test this type of scenario in Botium Box community edition?
The buttons are taking a null value.
Thank you.

The asserting problem should be fixed with the next release. (There will be licence changes in the next version; most importantly, you will have to renew it monthly.)
I suppose you set the DIRECTLINE3_BUTTON_TYPE capability to "text", but you are sending JSON as the button click. (Either send text as the button click, or set the DIRECTLINE3_BUTTON_TYPE capability to "event".) Details: how button clicks are handled depends on your backend, and you have to configure Botium accordingly. See also this short article.
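
A minimal sketch of setting that capability outside Botium Box, assuming the botium-core npm package with the Direct Line 3 connector installed (the secret is a placeholder); in Botium Box the same capabilities go into the connector's advanced settings:

```javascript
// Minimal sketch: configure Botium to send button clicks as "event" activities.
// Assumes the botium-core npm package and the Direct Line 3 connector are
// installed; the secret below is a placeholder.
const { BotDriver } = require('botium-core')

const driver = new BotDriver({
  PROJECTNAME: 'Adaptive Card Buttons',
  CONTAINERMODE: 'directline3',
  DIRECTLINE3_SECRET: '<your directline secret>',
  // Send button clicks as Bot Framework "event" activities instead of plain text
  DIRECTLINE3_BUTTON_TYPE: 'event',
  // Activity field that carries the button payload (assumption: "name" is the default)
  DIRECTLINE3_BUTTON_VALUE_FIELD: 'name'
})
```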

Related

Can we read text from the selected objects in perception simulation

I am automating a HoloLens application using perception simulation. In one of the scenarios, I need to perform a click action on specific objects based on their names. So, is it possible to read the text of the selected objects? (Note: I select the objects using a right hand / left hand move, and the selected object is highlighted with a distinguishing color.)
It seems that you want to build test automation for your app or the file explorer based on the HoloLens 2 emulator, and your requirement is to automatically tap an object with a matching name in the emulator.
If so, the emulator does not support recognizing text or directly returning data from the application's memory. However, you can provide more information about your business request and submit a feature request via the Feedback Hub so it can be considered in future releases of the HoloLens 2 emulator.
For how to post a feedback request, you can follow this doc: Send feedback to Microsoft with the Feedback Hub app.
Outside the field of HoloLens app development, you can write your own desktop program to capture the view in the emulator window, then use OCR to recognize the text on the screen, and finally drive your input to the simulator according to the result. However, this is not a simple approach; a rough sketch of the OCR step is below.
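
A rough sketch of the OCR step only, assuming the emulator view has already been captured to a screenshot file ("emulator-capture.png" is a hypothetical path) and using the tesseract.js npm package as one possible OCR library:

```javascript
// Rough sketch of the OCR step only. Assumes the emulator window has already
// been captured to a screenshot file; "emulator-capture.png" is a hypothetical
// path, and tesseract.js is just one of several OCR libraries you could use.
const Tesseract = require('tesseract.js')

async function findObjectLabel (screenshotPath, objectName) {
  // Recognize all text in the captured emulator view
  const { data } = await Tesseract.recognize(screenshotPath, 'eng')
  // Look for a recognized word matching the object name; each word carries a
  // bounding box you could translate into simulated input coordinates
  return data.words.find(word => word.text === objectName)
}

findObjectLabel('emulator-capture.png', 'Settings')
  .then(word => console.log(word ? word.bbox : 'not found'))
```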

How to use google assistant link out suggestion in DialogFlow?

I have two Google Assistant responses:
a simple response, because I must provide one for Google Assistant
a link out suggestion response, which I need to display
When I test it, I get just the simple response.
Could you please suggest what I should do to get the link out suggestion response?
You have sent a screenshot of the speech interactions of your conversation. Suggestion chips are only shown in the visual display section of the simulator. This can be found on the left side of the web page, either under the Suggestion section or in the visual display of your device.
If you do not see anything on the left side, check whether you have set your simulator to a platform that supports visuals during its conversation, for instance:
Phone
Smart Display (only normal suggestions will show on smart displays)
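
If the issue is on the fulfillment side instead, here is a minimal sketch of sending both responses, assuming a webhook built with the actions-on-google v2 Node.js library (the intent name and URL are placeholders):

```javascript
// Minimal sketch of returning both a simple response and a link out suggestion
// from a Dialogflow fulfillment, assuming the actions-on-google v2 Node.js
// library; the intent name and URL are placeholders.
const { dialogflow, SimpleResponse, LinkOutSuggestion } = require('actions-on-google')

const app = dialogflow()

app.intent('Default Welcome Intent', (conv) => {
  // The simple response is required for every Google Assistant turn
  conv.ask(new SimpleResponse({
    speech: 'Here is the page you asked for.',
    text: 'Here is the page you asked for.'
  }))
  // The link out suggestion chip only renders on surfaces with a visual display
  conv.ask(new LinkOutSuggestion({ name: 'Open site', url: 'https://example.com' }))
})

// "app" is a standard (request, response) handler; mount it on Express or
// export it through firebase-functions as your webhook endpoint.
module.exports = app
```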

Google actions simulator does not work for standard Google Assistant features

I have built an action with the Actions-on-Google (2.5.0) and dialogflow-fulfillment (0.6.1) Node.js libraries. I cannot test my app in the Dialogflow test console because I return a conv object, which is not supported there. Now I cannot test it in the Google Actions simulator either. This is the error I get:
Invocation Error
You cannot use standard Google Assistant features in the Simulator. If you want to try them, use Google Assistant on your phone or other compatible devices.
I'd like to use the simulator, so I can debug better.
It is as the error message says: the simulator lacks many features that normal Assistant surfaces (speaker, Assistant app) have, and it can even sometimes give you completely wrong error messages. There is really no way around testing your app on real devices.
You can, however, view the same logs that you see in the simulator in Google Stackdriver Logging. To activate this, go to the settings of your Dialogflow agent, select the "General" tab and activate the "Log interactions to Google Cloud" option. Then click on the link below the button to get to the logs. The default view will probably show you only the Actions-on-Google logs, i.e. the requests between your users and AoG. To see the requests between Dialogflow and your webhook, click on the dropdown arrow in the filter box, select "Convert to advanced filter" and set the filter to resource.type="global".
If you have multiple Actions projects that use the same display name, the simulator chooses one at random. For consistent testing results, use unique names or release channels for each Action.
Reference Link: https://support.google.com/actions-console/answer/9613473?hl=en
As for how to set or change the display name: go to the Develop tab of the Actions console and edit the display name there.
You should definitely be able to test your action in the Actions simulator. Note that the interaction models of the Dialogflow and Actions simulators are different. In Dialogflow, you can send commands directly to your agent. In the Actions simulator, you first need to invoke your Action.
At the bottom of the screen, you'll see a suggested input like "talk to my test app".
You'll need to send this, or a similar command, first. That will then invoke your action, and you'll be able to send commands to it afterwards. You will see that it is invoked by a banner at the top of the simulator.
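
For context on the conv object mentioned in the question, here is a minimal sketch of how a webhook built with dialogflow-fulfillment (0.6.1) typically returns Actions on Google responses; conv is only available for requests coming from Assistant, which is also why the Dialogflow test console cannot render it (the intent name is a placeholder):

```javascript
// Minimal sketch of a webhook that returns an Actions on Google response via
// the dialogflow-fulfillment (0.6.1) library named in the question; the intent
// name is a placeholder.
const { WebhookClient } = require('dialogflow-fulfillment')

exports.handler = (request, response) => {
  const agent = new WebhookClient({ request, response })

  function welcome (agent) {
    // agent.conv() is only available for Actions on Google requests; the
    // Dialogflow test console sends plain Dialogflow requests, so conv is null
    // there and conv-based responses cannot be rendered.
    const conv = agent.conv()
    if (conv) {
      conv.ask('Welcome! Ask me anything.')
      agent.add(conv)
    } else {
      agent.add('Welcome! Ask me anything.')
    }
  }

  const intentMap = new Map()
  intentMap.set('Default Welcome Intent', welcome)
  agent.handleRequest(intentMap)
}
```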

Where is AUTHORIZE and PREVIEW button in API.AI integration panel

I am trying to test my agent on a real device, following the instructions from the
Official Google video.
However, my panel for integrating Actions on Google doesn't look similar to the one shown in the video.
I see neither the AUTHORIZE nor the PREVIEW button, and I cannot set the invocation name or the TTS voice either.
I attached my panel that I see. Is there anything missing?
My Action on Google dialog:
That video predates recent changes in the API.AI Actions on Google screen.
The name and voice are now set in the Actions Console, but neither are required to do testing.
If you're willing to accept the default voice for testing, you can click on the "Test" button in the screen you're referencing.
You can then go to the Simulator (there will be a link provided) or ask any Assistant device (such as Home) to start your action with "Talk to my test app".
They launched this new platform at Google I/O.
Now you have the invocation name and everything else in the Actions on Google Console.
It's a pretty intuitive platform; the only annoying thing is that you need to fill in all of the App information before testing. You can see that the simulator is in this console as well.
Whenever you modify things in API.AI, click the UPDATE button (the one in your screenshot), then TEST. Then you can test in the console simulator.

how pocket chrome extension integrate buttons on Twitter for one-click saving

The Pocket Chrome extension has a feature described as "Integrated buttons on Twitter.com and Google Reader for one-click saving".
Here's a screenshot of my Twitter timeline.
Every tweet shows a bunch of buttons (Reply, Retweet, Favorite, etc.) when we hover over it. With the Pocket extension enabled, we also get an integrated Pocket button and can save the tweet to Pocket with one click (red line).
I wonder how that can be implemented, since I intend to add my own buttons that will sync tweets to other services.
Any ideas or links would be helpful.
Thanks
To modify the content of a web page, you have to use content scripts. In your content scripts, you call DOM functions to add those share buttons (depending on how the page is generated and the structure of the page, this can be very easy or very tricky), and you add a click event listener to the added buttons. In the event handler, you obtain the tweet text and send a message to the background page. Your background page handles these messages and makes the appropriate XHRs to share the tweet with other services. A rough sketch of that flow is below.
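
A rough sketch of that flow, assuming Manifest V2-style content and background scripts, a hypothetical ".tweet" / ".tweet-text" DOM structure (the real Twitter markup changes frequently and has to be inspected in devtools), and a placeholder endpoint; fetch is used here in place of a raw XHR:

```javascript
// content-script.js -- rough sketch; the ".tweet" / ".tweet-text" selectors are
// hypothetical and must be adapted to the actual page markup.
document.querySelectorAll('.tweet').forEach((tweet) => {
  const button = document.createElement('button')
  button.textContent = 'Save'
  button.addEventListener('click', () => {
    const textNode = tweet.querySelector('.tweet-text')
    const text = textNode ? textNode.textContent : ''
    // Hand the tweet text to the background page, which is allowed to make
    // cross-origin requests to your own service
    chrome.runtime.sendMessage({ type: 'SAVE_TWEET', text })
  })
  tweet.appendChild(button)
})

// background.js -- receives the message and forwards the tweet to your service;
// "https://example.com/save" is a placeholder endpoint.
chrome.runtime.onMessage.addListener((message) => {
  if (message.type === 'SAVE_TWEET') {
    fetch('https://example.com/save', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ text: message.text })
    })
  }
})
```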
