How should a jump between skills be handled in Botium Box (testing Watson Assistant)?

I am trying to understand how a jump between skills should be handled in Botium Box. I am testing the dialogues of my Watson Assistant skills and I am noticing something weird.
Unlike jumps within a single skill, where each row of my input test file (an Excel file, in my case) contains the text of a different node, when I jump between skills the text of the nodes seems to be concatenated. To make the test case pass, I would therefore have to write all the text in a single row of my Excel file, which is very difficult to maintain.
If that is the case, how do I concatenate normal text and utterance variables? Is there a command for that? Or am I missing something in the configuration of my Botium Box?

While I don't know where the described behaviour is coming from, here are some comments:
Usually, one Watson Assistant is linked to one Dialog skill (and, for Plus plans, optionally a search skill). Botium can either connect to
an assistant by using the Watson Assistant V2 SDK
or a skill by using the Watson Assistant V1 SDK with the Skill legacy mode
If you plan to use multiple skills in your chatbot, then you have to develop some code to switch between the assistants - this is called an Orchestrator in IBM terms, and you can find example code by IBM here.
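As a rough illustration of what such an orchestrator does, here is a minimal sketch of the routing idea (this is not IBM's sample code; the skill names and the "transfer_to" field are made up for the example):

```python
# Minimal sketch of an orchestrator's routing logic. It tracks which
# skill currently "owns" the conversation and forwards each user
# message to that skill's client.

class Orchestrator:
    def __init__(self, skills, default_skill):
        self.skills = skills          # name -> callable(message) -> reply dict
        self.active = default_skill   # skill currently owning the conversation

    def handle(self, message):
        reply = self.skills[self.active](message)
        # A skill can hand the conversation over by setting a
        # (hypothetical) "transfer_to" field in its reply.
        target = reply.get("transfer_to")
        if target in self.skills:
            self.active = target
            reply = self.skills[self.active](message)
        return reply

# Toy skills standing in for real Watson Assistant calls:
billing = lambda m: {"text": "billing: " + m}
support = lambda m: ({"text": "transferring", "transfer_to": "billing"}
                     if "bill" in m else {"text": "support: " + m})

orch = Orchestrator({"billing": billing, "support": support}, "support")
print(orch.handle("hello")["text"])    # handled by the support skill
print(orch.handle("my bill")["text"])  # support hands over to billing
```

In a real deployment the toy lambdas would be replaced by calls to the Watson Assistant SDK, and the hand-over signal would live in the skill's context or output, per IBM's example code.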
In this case, the best option you have in Botium is to use the Generic HTTP/JSON Connector to connect to the API of the Orchestrator, instead of going directly to Watson APIs.
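A Botium configuration for that setup could look roughly like this (the endpoint URL, body template, and JSONPath below are placeholders — they depend entirely on your Orchestrator's API):

```json
{
  "botium": {
    "Capabilities": {
      "PROJECTNAME": "orchestrator-test",
      "CONTAINERMODE": "simplerest",
      "SIMPLEREST_URL": "https://my-orchestrator.example.com/api/message",
      "SIMPLEREST_METHOD": "POST",
      "SIMPLEREST_BODY_TEMPLATE": "{ \"text\": \"{{msg.messageText}}\" }",
      "SIMPLEREST_RESPONSE_JSONPATH": "$.output.text"
    }
  }
}
```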
I wrote about a similar topic - how to choose a Botium connector - in my blog.

Florian, thanks for your reply. The concept of the orchestrator is clear to me, and I am actually using it, as well as the HTTP/JSON Connector.
My question is more about how to write test cases in Excel files that include both normal text and utterance variables. For example, if I have this in the utterances file:
[screenshot: utterance file]
Can I have in my test case something like:
[screenshot: test case with text and utterance variable]
If that is the case, what is the keyword needed to concatenate the text ("hello") and the utterance variable (GREETING)?

Related

Cannot create intent with certain trigger phrases

It seems Google Assistant is unable to handle certain trigger phrases in the intent. The ones I have come across are the following:
Send message to scott
chat with q
Send text to felix
It seems to work fine inside the Dialogflow simulator. However, it doesn't work at all in the Action Console Simulator or on a real device like a Google Home Mini. The Action Console Simulator says "You cannot use standard Google Assistant features in the Simulator. If you want to try them, use Google Assistant on your phone or other compatible devices", and a real device gives an error "I am sorry, i cannot help you ..", exits completely, and leaves the device in a funky state. It doesn't seem to trigger any fallback intent. I have tried adding an input context, but it makes no difference.
It's very easy to reproduce. Just create a demo action with an intent for the above phrases along with "communicate with penny", invoke your app and then try the above phrases after the welcome message. It will only work if you say "communicate with ..".
Is this a known issue/limitation? Is there a list of phrases that we cannot use to trigger an intent?
The Actions on Google libraries (Node.js, Java) include a limited feature set that allows third-party developers to build Actions for the Google Assistant.
Standard features available in the Google Assistant (like "text Mary 'Hello, world!'") won't be available in your action until you build that feature, using fulfillment.
Rather than looking for a list of phrases you can't use, review the documentation for invocation to see what you can use. Third-party Actions for the Google Assistant are invoked with phrases like "Hey Google, talk to <your app name>".
To learn how to get started building for the Google Assistant, check out Google's codelabs at https://codelabs.developers.google.com/codelabs/actions-1/#0
If you've already reviewed Google's Actions on Google Codelabs through level three, consider updating your question to include a list of your intents and a code sample of your fulfillment, so other Stack Overflow users can understand how your project behaves.
See also, What topics can I ask about here?, and How do I ask a good question?

Is there a way now to get requests in an Action in two different languages?

Is it possible to receive requests in two different languages in one Action, now that the Google Assistant is bilingual?
It is possible for an Action to be written to work in multiple languages and locales. There's guidance in the documentation on extending a single Dialogflow agent as well. However, I don't believe the Action itself will be multilingual; it will depend on which invocation phrase you use.
If you say "Talk to my test app", you'll get the English version.
If you say "parler avec mon application de test", you'll get the French version.
If your Action has a fulfillment, you'll be able to get the current locale by getting conv.user.locale.
Yes, you can.
You set up multiple languages at console.actions.google.com.
You then add a second language to your project.
Under your project name you now have multiple language markers.
You have to set up intent verbal triggers for both the main language and also the secondary language in dialogflow.
It is in this way that the system recognises what language is being spoken and kicks off the appropriate intent.
Note: If you use webhook functions they too will have to be updated to support multilingual functionality. I have implemented i18n as my framework.
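To make the lookup idea behind such a multilingual webhook concrete, here is a minimal sketch in Python (illustrative only — a real fulfillment would use the Actions library and read the locale from the request, e.g. conv.user.locale in the Node.js client; the message keys and strings here are made up):

```python
# Sketch of an i18n message lookup keyed by locale, with a fallback
# to a default language when the locale is not supported.

MESSAGES = {
    "en": {"welcome": "Welcome to my test app!"},
    "fr": {"welcome": "Bienvenue dans mon application de test !"},
}

def localize(key, locale, default_lang="en"):
    # "fr-FR" -> "fr"; fall back to the default language if unknown.
    lang = locale.split("-")[0]
    table = MESSAGES.get(lang, MESSAGES[default_lang])
    return table.get(key, MESSAGES[default_lang][key])

print(localize("welcome", "fr-FR"))  # French variant
print(localize("welcome", "de-DE"))  # falls back to English
```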
There is a great tutorial at:
https://medium.com/voiceano/publishing-bilingual-actions-for-google-assistant-61c326d1b79

Importing dialogflow intents and entities to IBM conversation workspace

I have an exported intents & entities zip file from a Dialogflow workspace and tried to import it into an IBM Watson workspace, but it's not working. Is there any way to do this?
Are there any methods we could use to migrate intents from Dialogflow to an IBM Watson workspace?
Check out a web app called QBox; its primary purpose is to benchmark your training data, but it also lets you benchmark against multiple providers. If you run a test by uploading your Dialogflow training data and select IBM Watson as output, you'll get an option on the result page to download the training data in IBM Watson format.
Select Watson after you've uploaded your training data:
Run the test, wait a few minutes, and on the results page use the menu on the top right to get your Watson formatted training data:
(Disclaimer: this is a tool I work on, so I have not provided a link, but if you Google the name 'Qbox' along with the term 'chatbot', you should find it!)
DialogFlow is a different product from Watson Assistant. Watson Assistant only allows you to import a workspace that was exported from Watson Assistant (using the export workspace option in the UI), so Assistant is not able to import the dialog from DialogFlow.
You can download Intents and Entities in Watson Assistant in CSV format, see https://console.bluemix.net/docs/services/conversation/intents.html#defining-intents and https://console.bluemix.net/docs/services/conversation/entities.html#defining-entities
You can import entities and intents in a CSV format, but they must be in the specific format that Assistant requires. See the previous 2 links.
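For illustration, the intents CSV that Watson Assistant accepts is a simple two-column layout: example text in the first column, intent name in the second. A sketch of producing it (the intents and examples below are made up):

```python
# Write intent examples into the two-column CSV layout
# (example text, intent name) used for Watson Assistant intent import.
import csv
import io

intents = {
    "GetPrice": ["how much does it cost", "what is the price"],
    "GetColor": ["what colors are available"],
}

buf = io.StringIO()
writer = csv.writer(buf)
for intent, examples in intents.items():
    for example in examples:
        writer.writerow([example, intent])

print(buf.getvalue())
```

A converter from a Dialogflow export would walk the exported JSON files and feed each training phrase into rows like these; the exact export layout depends on your Dialogflow version.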
There is also this open source project Watson Assistant Workbench (WAW) that allows you to use a few additional formats to create your dialog with Watson Assistant.
Watson Assistant Workbench supports the WA JSON format, custom XML format and even MS XLS format which you can use to define your dialog. Then WAW can process all the data and generate a JSON that can be imported into WA.
This might simplify the conversion process, as you now "only" need to convert the Dialogflow format into WAW XML or XLS (where, for example, you don't need to work with unique IDs of the dialog nodes as in the WA JSON format).
Just sharing in case it is interesting: the link to Watson Assistant Workbench on GitHub.

How To construct Intents in Dialogflow

I'm creating a chatbot with Dialogflow to identify questions about a store and its products and answer them accordingly. But when constructing intents I came across this problem. The approaches I can think of are as follows.
1st Approach
Create multiple intents
GetPrice, GetColor, GetAvailability, GetType, GetStoreName, GetStoreContact
The difficulty I found in this approach is that I have to create dozens of intents for all product types and for all types of questions about the store.
The advantage is that I can train the intents separately.
2nd Approach
Create 2 intents
ProductQuestions, StoreQuestions
The training has to be done for all the question types of the 1st approach within those two intents.
Which approach should I take? Which will be more scalable in the future?
Most logic for conversation design can be based on your personal preferences. If you're looking for best practices, check out Google's documentation here:
https://developers.google.com/actions/assistant/best-practices
In my opinion, you should go with the 1st approach. It is more flexible and scalable.
You would need to create many intents, for sure, but you would be able to tell exactly what the user wants to know.
With the 2nd approach, you would end up hand-coding much of the work that Dialogflow is meant to do for you.
Try making conversation flow chart before designing the intents.
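To illustrate why the 1st approach stays manageable on the fulfillment side, here is a sketch where each intent maps to one small handler (the intent names follow the question; the handlers and parameters are made up):

```python
# Each fine-grained intent gets its own small handler, so adding a new
# question type means adding one entry, not retraining a catch-all intent.

HANDLERS = {
    "GetPrice": lambda params: f"The {params['product']} costs $10.",
    "GetColor": lambda params: f"The {params['product']} comes in red and blue.",
}

def fulfill(intent, params):
    handler = HANDLERS.get(intent)
    if handler is None:
        return "Sorry, I can't help with that yet."
    return handler(params)

print(fulfill("GetPrice", {"product": "mug"}))
```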
Using Dialogflow:
Workflow:
Open the Actions Console.
Click on Add/import project.
Type in a Project name, like "actions-codelab". This name is for your own internal reference; later on, you can set an external name for your project.
Click Create Project.
Rather than pick a category, click Skip on the upper-right corner.
Click Build > Actions in the left nav.
Click Add your first Action.
Select at least one language for your Action, followed by Update. For this codelab, we recommend only selecting English.
On the Custom intent card, click Build. This will open the Dialogflow Console in another tab.
2. Test with Dialogflow:
Dialogflow generates and uploads an Action package to your actions project automatically when you test it. To test your Action:
Make sure the Web & App Activity, Device Information, and Voice & Audio Activity permissions are enabled on the Activity controls page for your Google account.
Click on Integrations in the Dialogflow console's left navigation.
Click on the Google Assistant card to bring up the integration screen and click TEST. Dialogflow uploads your Action package to Google's servers, so you can test the latest version in the simulator.
In the Actions console simulator, enter "talk to my test app" in the Input area of the simulator to test your Action. If you have already specified an invocation name and saved your invocation information, you can start the conversation by saying "talk to <your invocation name>" instead.
Note: If you don't see a TEST button, you need to click on the AUTHORISE button first to give Dialogflow access to your Google account and Actions project.
For more information refer below link:
https://codelabs.developers.google.com/codelabs/actions-1/index.html#0

How can I get intent confidence in the IBM Watson Assistant user interface?

I can get intent confidence via the JSON response object in back-end languages like Node or Python, but I can't get it in the browser-based IBM Watson Assistant user interface. Is there a way to do that?
You can use <? intents ?> to get the information on the intents.
The object is read/writable in dialog.
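For example, a dialog node response can echo the top intent and its confidence using the expression language:

```
Detected intent <? intents[0].intent ?> with confidence <? intents[0].confidence ?>.
```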
The Watson Assistant online tool for editing workspaces and dialog elements has the "Try it out" panel. However, it does not yet have the capability to explore the JSON structure returned by the message API.
What I use is this tool, which allows you to test a conversation, see and edit the context, and inspect the confidence levels. There is also another browser-based tool you could use. Neither of them is official.
You can also use your browser's network tools to see the JSON of the whole message being passed around.
Typically you right-click anywhere, click "Inspect element", and then go to the Network tab.
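If you just want the numbers, you can also pull them out of the message response programmatically. A sketch in Python, using a hand-written sample in the shape of the V1 message API output (not a live call — a real script would obtain this dict from the Watson SDK):

```python
# Read intent confidences out of a Watson Assistant message response.
# The "intents" list is sorted by confidence in the API output.

response = {
    "intents": [
        {"intent": "greeting", "confidence": 0.97},
        {"intent": "goodbye", "confidence": 0.02},
    ],
}

for intent in response["intents"]:
    print(f"{intent['intent']}: {intent['confidence']:.2f}")
```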