I have an exported intents & entities zip file from a Dialogflow workspace and tried to import it into an IBM Watson workspace, but it's not working. Is there any way we could do this?
Are there any methods we could use to migrate intents from Dialogflow to an IBM Watson workspace?
Check out a web app called QBox. Its primary purpose is to benchmark your training data, but it also lets you benchmark against multiple providers. If you run a test by uploading your Dialogflow training data and select IBM Watson as the output, you'll get an option on the result page to download the training data in IBM Watson format.
Select Watson after you've uploaded your training data:
Run the test, wait a few minutes, and on the results page use the menu on the top right to get your Watson formatted training data:
(Disclaimer: this is a tool I work on, so I have not provided a link, but if you Google the name 'Qbox' along with the term 'chatbot', you should find it!)
Dialogflow is a different product from Watson Assistant. Watson Assistant only allows you to import a workspace that was exported from Watson Assistant (using the export workspace option in the UI), so Assistant is not able to import the dialog from Dialogflow.
You can download intents and entities from Watson Assistant in CSV format; see https://console.bluemix.net/docs/services/conversation/intents.html#defining-intents and https://console.bluemix.net/docs/services/conversation/entities.html#defining-entities
You can also import entities and intents in CSV format, but they must be in the specific format that Assistant requires; see the previous two links.
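For instance, the intent CSV is simply one example utterance per line followed by the intent name (the utterances and intent names here are made up):

```csv
I want to book a flight,book_flight
what is the weather like,get_weather
```

and the entity CSV lists the entity, a value, and that value's synonyms on each line:

```csv
city,New York,NYC,Big Apple
city,Los Angeles,LA
```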
There is also an open-source project, Watson Assistant Workbench (WAW), that allows you to use a few additional formats to create your dialog with Watson Assistant.
Watson Assistant Workbench supports the WA JSON format, a custom XML format, and even the MS XLS format, any of which you can use to define your dialog. WAW can then process all the data and generate a JSON file that can be imported into WA.
This might simplify the conversion process, as you now "only" need to convert the Dialogflow format into WAW XML or XLS (where, for example, you don't need to work with the unique IDs of the dialog nodes as you do in the WA JSON format).
Just sharing in case this is interesting: the link to the Watson Assistant Workbench repository on GitHub.
Related
I have some entities and intent sentences that I could use right away. It is tedious to input them by hand in the UI. I've seen that the output is just a bunch of JSON files, although there are some IDs that were generated by the Dialogflow UI.
The question is: can Dialogflow be used like a regular programming language, and can I somehow package the JSON files into a zip that can be imported? Is the process streamlined with any tool?
Following up on my previous response, these APIs can also be used to create the agent from scratch, as long as you have a GCP project you can link your agent to.
If you're referring to the JSON schema of the exported agent: currently, there is no JSON schema for the exported agent zip file in the Dialogflow documentation, as it is not intended to be edited or replicated. The exported agent zip file is meant to be a backup of your agent for future use. You can use this exported agent to replicate the current agent as a new agent. For restoring and importing agents, you need to upload a zip file which contains the agent.json file as well as the intents and entities folders.
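For reference, an exported ES agent zip typically unpacks into a layout roughly like the following (file names here are illustrative):

```text
agent.json
package.json
intents/
  book_flight.json
  book_flight_usersays_en.json
entities/
  city.json
  city_entries_en.json
```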
As a best practice, it is better to create the agent using the Dialogflow APIs if you opt to use JSON files. By following the JSON representation of each type (agent, intents, entities, etc.), you can be sure that you are providing the correct and required fields.
Dialogflow has APIs you can call to create and update your agent programmatically. Each Dialogflow edition offers its own API methods; a short sketch using the Node.js client library follows the documentation lists below.
For Dialogflow Trial and Essentials Edition, you can check the following documentation:
Setup
Dialogflow V2 API reference
Dialogflow V2Beta1 reference
Client Libraries supported
For the Dialogflow CX Edition, here is the documentation that will be helpful in creating your agent programmatically:
Setup
Dialogflow CX V3 API reference
Currently available Dialogflow CX client libraries
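As a concrete illustration, here is a minimal sketch of creating one intent through the V2 (ES) API with the official Node.js client library, @google-cloud/dialogflow. The project ID, intent name, and training phrases are placeholders, and it assumes GOOGLE_APPLICATION_CREDENTIALS points at a service account key for your GCP project:

```js
// Sketch: create a single intent with the Dialogflow V2 (ES) API.
const dialogflow = require('@google-cloud/dialogflow');

async function createGreetingIntent() {
  const client = new dialogflow.IntentsClient();
  // 'my-gcp-project' is a placeholder for your own project ID.
  const parent = client.projectAgentPath('my-gcp-project');

  const intent = {
    displayName: 'greeting',
    trainingPhrases: [
      { type: 'EXAMPLE', parts: [{ text: 'hello there' }] },
      { type: 'EXAMPLE', parts: [{ text: 'good morning' }] },
    ],
    messages: [{ text: { text: ['Hi! How can I help you?'] } }],
  };

  const [response] = await client.createIntent({ parent, intent });
  console.log(`Created intent: ${response.name}`);
}

createGreetingIntent().catch(console.error);
```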
I am trying to understand how a jump between skills should be handled in Botium Box. I am testing the dialogues of my Watson Assistant skills, and I am noticing something weird.
Unlike jumps within one skill, where each row of my input test file (an Excel file in my case) contains the text of a different node, when I do a jump between skills the text of the nodes seems to be concatenated. Therefore, to make the test case pass, I would have to write all the text in one row of my Excel file, which is very difficult to maintain.
If that is the case, how do I concatenate normal text and utterance variables? Is there a command for that? Or am I missing something in the configuration of my Botium Box?
While I don't know where the described behaviour is coming from, here are some comments:
Usually, one Watson Assistant is linked to one Dialog skill (and, for Plus plans, optionally a search skill). Botium can either connect to
an assistant by using the Watson Assistant V2 SDK
or a skill by using the Watson Assistant V1 SDK with the Skill legacy mode
If you plan to use multiple skills in your chatbot, then you have to develop some code to switch between the assistants - this is called an Orchestrator in IBM terms, and you can find example code by IBM here.
In this case, the best option you have in Botium is to use the Generic HTTP/JSON Connector to connect to the API of the Orchestrator, instead of going directly to Watson APIs.
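For illustration, a minimal botium.json sketch for that setup might look like the following; the orchestrator URL, request body template, and response JSONPath are assumptions about your API and need to be adapted to whatever the orchestrator actually exposes:

```json
{
  "botium": {
    "Capabilities": {
      "PROJECTNAME": "Orchestrator API",
      "CONTAINERMODE": "simplerest",
      "SIMPLEREST_URL": "https://my-orchestrator.example.com/message",
      "SIMPLEREST_METHOD": "POST",
      "SIMPLEREST_BODY_TEMPLATE": "{ \"text\": \"{{msg.messageText}}\" }",
      "SIMPLEREST_RESPONSE_JSONPATH": "$.output.text[*]"
    }
  }
}
```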
I wrote about a similar topic - how to choose a Botium connector - in my blog.
Florian, thanks for your reply. The concept of the orchestrator is clear to me, and I am actually using it as well as the HTTP/JSON Connector.
My question is more about how to write test cases in Excel files that include both normal text and utterance variables. For example, if I have in the utterances file:
utterance file
Can I have in my test case something like:
test case with text and utterance variable
If that is the case, what is the keyword needed to concatenate the text ("hello") and the utterance variable (GREETING)?
I'm working on a music player for a Google Assistant Action. Are there pre-built agents with intents and training phrases available for languages other than English?
It seems possible to upload a JSON file with intents.
Are there resources available for Spanish and/or German intents?
I would suggest that you take the pre-built intents, fetch the training phrases from each intent, translate them into the desired language, and then compile intents of your own.
This process requires interacting with Dialogflow using its REST APIs.
This reference page will help you understand the different required APIs.
Also, as you said, it is possible to upload a JSON file of intents, so you could convert the translated intents into JSON files and upload them manually.
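As a rough sketch of that approach, the snippet below lists every intent of an agent together with its training phrases using the @google-cloud/dialogflow Node.js client (the project ID is a placeholder); the output could then be translated and turned into new intents:

```js
// Sketch: dump each intent's display name and training phrases so they
// can be exported and translated.
const dialogflow = require('@google-cloud/dialogflow');

async function dumpTrainingPhrases() {
  const client = new dialogflow.IntentsClient();
  const parent = client.projectAgentPath('my-gcp-project'); // placeholder project ID

  // INTENT_VIEW_FULL is required; otherwise training phrases are omitted.
  const [intents] = await client.listIntents({
    parent,
    intentView: 'INTENT_VIEW_FULL',
  });

  for (const intent of intents) {
    const phrases = (intent.trainingPhrases || []).map((tp) =>
      tp.parts.map((p) => p.text).join('')
    );
    console.log(intent.displayName, phrases);
  }
}

dumpTrainingPhrases().catch(console.error);
```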
I can get intent confidence via the JSON response object in back-end languages like Node or Python, but I can't get it in the browser-based IBM Watson Assistant user interface. Is there a way to do that?
You can use <? intents ?> to get the information on the intents.
The object is read/writable in dialog.
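For example, assuming you just want to surface the top intent and its confidence while testing, you can put an expression like this directly into a dialog node's text response:

```
Top intent: <? intents[0].intent ?> (confidence <? intents[0].confidence ?>)
```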
The Watson Assistant online tool for editing workspaces and dialog elements has the "Try it out" panel. However, it does not yet have the capability to explore the JSON structure returned by the message API.
What I use is this tool, which allows you to test a conversation, see and edit the context, and inspect the confidence levels. There is also another, browser-based, tool you could use. Neither of them is official.
You can also use your browser's network tools to see the JSON of the whole message being passed around.
Typically you right-click anywhere, click "Inspect element", and then go to the Network tab.
This is my first experience using IBM Watson, and I am stuck integrating Watson Conversation with the Speech to Text and Text to Speech API services on the Node.js platform.
I am done with the conversation part but can't find a method to make:
input speech => output of STT => input of conversation => output of conversation => input to TTS => output speech
I have tried multiple approaches but still can't get even 1% success. I have followed multiple GitHub repos, even the one with the most forks, https://github.com/watson-developer-cloud/node-sdk, as well as multiple TJBot recipes, etc., but still no results.
Can anyone here guide me with the right method?
The error with this link is attached below.
Does this help? I think this demo is similar to what you are doing. It uses STT, TTS, and Conversation, I believe.
https://github.com/watson-developer-cloud/speech-javascript-sdk/tree/master/examples
https://speech-dialog.mybluemix.net/
https://github.com/nfriedly/speech-dialog
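Building on those examples, here is a minimal Node.js sketch of the whole chain using the current ibm-watson SDK (Speech to Text -> Assistant V2 -> Text to Speech). The environment variable names, assistant ID, and audio file names are placeholders, and the original question targets the older Conversation/V1 API, so treat this only as a rough outline:

```js
// Rough sketch: speech in -> STT -> Assistant -> TTS -> speech out.
// All credentials and IDs come from placeholder environment variables.
const fs = require('fs');
const SpeechToTextV1 = require('ibm-watson/speech-to-text/v1');
const AssistantV2 = require('ibm-watson/assistant/v2');
const TextToSpeechV1 = require('ibm-watson/text-to-speech/v1');
const { IamAuthenticator } = require('ibm-watson/auth');

const stt = new SpeechToTextV1({
  authenticator: new IamAuthenticator({ apikey: process.env.STT_APIKEY }),
  serviceUrl: process.env.STT_URL,
});
const assistant = new AssistantV2({
  version: '2021-06-14',
  authenticator: new IamAuthenticator({ apikey: process.env.ASSISTANT_APIKEY }),
  serviceUrl: process.env.ASSISTANT_URL,
});
const tts = new TextToSpeechV1({
  authenticator: new IamAuthenticator({ apikey: process.env.TTS_APIKEY }),
  serviceUrl: process.env.TTS_URL,
});

async function speechToSpeech(inputFile, outputFile) {
  // 1. Speech to Text: transcribe the user's audio file.
  const sttRes = await stt.recognize({
    audio: fs.createReadStream(inputFile),
    contentType: 'audio/wav',
  });
  const userText = sttRes.result.results[0].alternatives[0].transcript;

  // 2. Assistant: send the transcript and read back the first text reply.
  const assistantId = process.env.ASSISTANT_ID;
  const session = await assistant.createSession({ assistantId });
  const msg = await assistant.message({
    assistantId,
    sessionId: session.result.session_id,
    input: { message_type: 'text', text: userText },
  });
  const reply = msg.result.output.generic[0].text;

  // 3. Text to Speech: synthesize the reply into an audio file.
  const ttsRes = await tts.synthesize({
    text: reply,
    accept: 'audio/wav',
    voice: 'en-US_AllisonV3Voice',
  });
  ttsRes.result.pipe(fs.createWriteStream(outputFile));
}

speechToSpeech('question.wav', 'answer.wav').catch(console.error);
```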
There are some great examples that you can download and play around with on the Watson Starter Kits page.
Create a few of them and download the code, then plunder what you need for your app, or use one of the starter kits as the beginning of your app.
Starter kits on the page linked above that I think can help:
Watson Speech to Text Basic
Watson Assistant Basic
Watson Text to Speech Basic
Each of the starter kits listed above is available in Node and has a README.md file to help you set everything up.