I'm trying to implement an integration between Dialogflow and Cortana using https://dialogflow.com/docs/integrations/cortana-exporter
I'd like to know whether anyone has succeeded, because I'm having considerable difficulty getting the C# Visual Studio code to work.
Is the available example correct, and does it integrate properly?
Thanks
I have used Dialogflow for developing the app for Google Assistant. I have created intents and entities in the Dialogflow web GUI and I'm using a webhook response for further conversation.
Now I want to build a chatbot that is part of an existing Android or iOS app, reusing the code I already wrote for Dialogflow. What do I need to be aware of when I do so? It looks like I can either use the SDK for that platform or make calls to the Dialogflow REST API. Which is faster, and are there any tradeoffs? Can I use the Dialogflow NLP without going over the network?
Note: Dialogflow API V1 is deprecated and will be shut down on October 23rd, 2019.
That means the official JavaScript, native Android, native iOS, and Cordova clients will stop working, since they all use V1. There's no word on if or when these clients will be upgraded to V2.
So the best bet right now is to use the REST APIs.
There are a few things to be aware of when moving from fulfillment that was built for Actions on Google to using it to also provide responses for other platforms. Actions on Google expects responses to be formatted slightly differently, and if you're using AoG-specific types (such as a SimpleResponse or Card object), they might not appear in other Dialogflow integrations. So you'll need to go over your webhook code to make sure what you send back works across platforms. Your logic and the Dialogflow UI builder should remain pretty much the same; it is just your backend that might need some work.
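For a concrete picture, here's a minimal sketch of such a cross-platform webhook in Node with Express; the route, port, and reply text are my own placeholders, but fulfillmentText is the V2 field that all integrations can render:

```js
// Minimal Dialogflow V2 fulfillment webhook sketch (Node + Express).
// fulfillmentText is the platform-neutral reply field every integration
// can show, unlike AoG-only types such as SimpleResponse or Card.
const express = require('express');
const app = express();
app.use(express.json());

app.post('/webhook', (req, res) => {
  // V2 webhook requests carry the matched intent under queryResult.
  const intentName = req.body.queryResult.intent.displayName;
  res.json({
    fulfillmentText: `You reached the "${intentName}" intent.`
  });
});

app.listen(3000);
```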
To make the call, as you say, you can either do the REST call yourself or use the SDK built by Dialogflow. The SDK will be slightly faster, since it uses protocol buffers instead of REST, but the difference will likely be fairly slight in most cases. If you're planning to stream audio, you will likely need either the SDK or your own protobuf implementation, because REST doesn't handle that as well. If you're just sending text and are more comfortable with REST APIs, then that is a perfectly reasonable approach.
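If you go the REST route, a V2 detectIntent call looks roughly like the sketch below; the project ID, session ID, and the access token (which you'd normally mint from a service account) are placeholders:

```js
// Sketch of a Dialogflow V2 detectIntent call over REST (Node + node-fetch).
const fetch = require('node-fetch');

const PROJECT_ID = 'my-project';   // placeholder: your GCP project ID
const SESSION_ID = 'my-session';   // placeholder: any unique string per user
const ACCESS_TOKEN = '...';        // placeholder: OAuth token from a service account

async function detectIntent(text) {
  const url = `https://dialogflow.googleapis.com/v2/projects/${PROJECT_ID}` +
              `/agent/sessions/${SESSION_ID}:detectIntent`;
  const res = await fetch(url, {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${ACCESS_TOKEN}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      queryInput: { text: { text, languageCode: 'en' } }
    })
  });
  const json = await res.json();
  return json.queryResult; // intent, parameters, fulfillmentText, etc.
}
```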
There is no "local Dialogflow" library. All calls have to go over the network. There are other libraries that do Speech-to-Text and NLP locally if that is what you need.
I just started on a project in Dialogflow, and I was wondering: is it possible to link my Dialogflow agent to a specific desktop application? If so, what is the solution?
For example:
By saying "launch app", it will open up the desktop application "app"
While this is certainly something that Dialogflow's APIs can help with, it isn't a feature provided by Dialogflow itself. Dialogflow's NLP runs in the cloud; there is nothing local that it can "do".
However, you can create a launcher app that does this sort of thing by opening the microphone and sending either the stream or a speech-to-text version to Dialogflow through the Detect Intent API. Dialogflow can determine an Intent that would handle this and pass that information back to your launcher, and your launcher can then locate the app and start it.
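Here's a rough sketch of that launcher idea in Node, using the official dialogflow SDK package; the launch.app intent name, the app parameter, and the APPS map are illustrative assumptions, not anything Dialogflow ships:

```js
// Hypothetical launcher sketch: send the (already transcribed) utterance
// to the Detect Intent API, then start the matched app locally.
const dialogflow = require('dialogflow');
const { spawn } = require('child_process');

// Map of launchable apps -- entirely an assumption for this example.
const APPS = { app: '/usr/bin/app' };

async function handleUtterance(text) {
  const sessionClient = new dialogflow.SessionsClient();
  const session = sessionClient.sessionPath('my-project-id', 'launcher-session');
  const [response] = await sessionClient.detectIntent({
    session,
    queryInput: { text: { text, languageCode: 'en' } }
  });
  const result = response.queryResult;
  // "launch.app" and the "app" parameter are names you would define
  // yourself in the Dialogflow console.
  if (result.intent && result.intent.displayName === 'launch.app') {
    const appName = result.parameters.fields.app.stringValue;
    const path = APPS[appName];
    if (path) spawn(path, [], { detached: true, stdio: 'ignore' }).unref();
  }
}
```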
I'm not sure how practical this would be, however. Microsoft already has this feature built-in with Cortana, and Google is building the Assistant into ChromeOS which will do this as well. While I'm not aware of Apple doing this, I may just have missed an announcement that Siri does this as well. And if there isn't someone who is doing this for Linux using some local speech-to-text libraries, it sounds like the perfect opportunity to do so.
You might try the various Dialogflow clients available on their GitHub page; the Java Client v2 may be a helpful starting point. However, you will need to write your own UI code and consume the Dialogflow API yourself.
I am facing a weird issue. I am trying to create a custom model in IBM Watson Natural Language Understanding on the Lite plan, but no "Launch tool" option is shown for creating a custom model. To be clear, ideally the page should look like this, as described in all the tutorials,
but what I am getting is
I have tried every possibility, and there is no way to navigate to the annotator tool page. Can somebody please help?
Your first picture looks like Watson Knowledge Studio. Watson Knowledge Studio is a different service, which you can also create from the IBM Cloud Catalog. Please check it:
https://www.ibm.com/watson/services/knowledge-studio/
This is my first experience using IBM Watson, and I am stuck integrating Watson Conversation with the Speech to Text and Text to Speech API services on the Node.js platform.
I'm done with the Conversation part, but I can't find a way to chain:
input speech => STT output => Conversation input => Conversation output => TTS input => output speech
I have tried multiple approaches but still can't get even 1% success. I've followed multiple GitHub repos, including the most-forked one, https://github.com/watson-developer-cloud/node-sdk, and multiple TJBot recipes, etc., but still no results.
Can anyone here guide me with the right method?
The error with this link is attached below.
Does this help? I think this demo is similar to what you are doing; I believe it uses STT, TTS, and Conversation.
https://github.com/watson-developer-cloud/speech-javascript-sdk/tree/master/examples
https://speech-dialog.mybluemix.net/
https://github.com/nfriedly/speech-dialog
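To make the chain concrete, here's a minimal server-side sketch with the watson-developer-cloud Node SDK (v3-era API); the credentials, workspace ID, and file names are placeholders, and error handling is omitted for brevity:

```js
// STT -> Conversation -> TTS chain (watson-developer-cloud Node SDK).
const fs = require('fs');
const SpeechToTextV1 = require('watson-developer-cloud/speech-to-text/v1');
const ConversationV1 = require('watson-developer-cloud/conversation/v1');
const TextToSpeechV1 = require('watson-developer-cloud/text-to-speech/v1');

const stt = new SpeechToTextV1({ username: '...', password: '...' });
const conversation = new ConversationV1({
  username: '...', password: '...', version_date: '2017-05-26'
});
const tts = new TextToSpeechV1({ username: '...', password: '...' });

// 1. Input speech -> text.
stt.recognize(
  { audio: fs.createReadStream('input.wav'), content_type: 'audio/wav' },
  (err, sttRes) => {
    const userText = sttRes.results[0].alternatives[0].transcript;

    // 2. Text -> Conversation reply.
    conversation.message(
      { workspace_id: 'WORKSPACE_ID', input: { text: userText } },
      (err2, convRes) => {
        const reply = convRes.output.text.join(' ');

        // 3. Reply -> output speech.
        tts.synthesize({ text: reply, accept: 'audio/wav' })
          .pipe(fs.createWriteStream('reply.wav'));
      }
    );
  }
);
```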
There are some great examples that you can download and play around with on the Watson Starter Kits page.
Create a few of them, download the code, and then plunder what you need for your app, or use one of the starter kits as the beginning of your app.
Starter kits on the page linked above that I think can help:
Watson Speech to Text Basic
Watson Assistant Basic
Watson Text to Speech Basic
Each of the starter kits listed above is available in Node and has a README.md file to help you set everything up.
I am still trying to understand chatbots. I have already made a chatbot which is integrated into Skype. I have SharePoint Online, where users search for FAQs. If they don't find an answer, they ask the bot, which sends the request to LUIS and QnA Maker.
QnA Maker then sends a response back by looking it up in its database. I upload FAQs from SharePoint to QnA Maker using SharePoint workflows. But I want to write my own logic and get rid of QnA Maker.
What are the ways to do that? Any good tutorials? I also want to understand how the flow happens: for example, if we don't use QnA Maker, do we fire queries against SharePoint based on what the user asked? I don't see how that can work, because if the user makes a typo we won't get anything back from SharePoint. Any tips on how to implement this without QnA Maker would be highly appreciated.
The FAQ bot generator is a subset of the main Microsoft Bot Framework, so you should do some research on the Bot Framework itself. The link above takes you right to the documentation overview of the framework, and from there you can get into developing one. It has links to a few sample projects as well as a large number of code snippets within some of the article explanations, plus a full setup guide that walks you through the initial setup, so it should be easy to get a basic echo bot running. If you are not a programmer, though, you should stick to the FAQ generator.
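To give a feel for it, the classic echo bot in Node with the botbuilder v3 SDK is only a few lines; the app ID and password come from your bot registration, and the port is a placeholder:

```js
// Minimal echo bot (Bot Framework, botbuilder v3 style, Node + restify).
const restify = require('restify');
const builder = require('botbuilder');

const server = restify.createServer();
server.listen(process.env.PORT || 3978);

// Credentials come from your bot registration in the Azure portal.
const connector = new builder.ChatConnector({
  appId: process.env.MICROSOFT_APP_ID,
  appPassword: process.env.MICROSOFT_APP_PASSWORD
});
server.post('/api/messages', connector.listen());

// Echo every incoming message back to the user.
const bot = new builder.UniversalBot(connector, (session) => {
  session.send('You said: %s', session.message.text);
});
```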
I suggest you use either Node.js or C# to develop the bot, since these are directly supported by the framework. I am personally using C# to build my bot from the ground up; its purpose is to be used within a customer-facing Android/iOS app that helps with questions, checking the status of different things, and even paying bills.
Just remember that you will need to set up your cloud hosting manually. I host mine in Azure alongside a web interface I built for it (you can build the website inside your bot if you are using C#; just replace the default.htm file in the web.config with the main page of the interface).
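On the original question of dropping QnA Maker: one approach to the typo problem is to fuzzy-match the user's question against your own FAQ list before querying SharePoint. Here's a minimal sketch in Node, assuming a hypothetical in-memory FAQ array and using plain Levenshtein distance:

```js
// Hedged sketch of QnA-Maker-free matching: pick the FAQ entry whose
// question is closest to the user's (possibly misspelled) text.
const faqs = [
  { q: 'How do I reset my password?', a: 'Use the "Forgot password" link.' },
  { q: 'Where can I see my invoices?', a: 'Under Billing > Invoices.' }
];

// Classic dynamic-programming edit distance between two strings.
function levenshtein(a, b) {
  const d = Array.from({ length: a.length + 1 },
    (_, i) => [i, ...Array(b.length).fill(0)]);
  for (let j = 1; j <= b.length; j++) d[0][j] = j;
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      d[i][j] = Math.min(
        d[i - 1][j] + 1,                                   // deletion
        d[i][j - 1] + 1,                                   // insertion
        d[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1)  // substitution
      );
    }
  }
  return d[a.length][b.length];
}

// Return the FAQ whose question has the smallest edit distance.
function bestFaq(userText) {
  const t = userText.toLowerCase();
  return faqs.reduce((best, f) => {
    const score = levenshtein(t, f.q.toLowerCase());
    return !best || score < best.score ? { ...f, score } : best;
  }, null);
}
```

A real implementation would add a distance threshold so unrelated questions fall through to a "no answer found" path instead of returning the least-bad match.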