I want to know if chatbase.com supports languages other than English. My bot talks in Portuguese.
I looked through the chatbase.com documentation but couldn't find anything about language support there.
Also, my chatbot is built on IBM Watson technology. Does Chatbase work properly with it?
All Chatbase standard reports support UTF-8 characters. Some of the advanced features, such as clustering, are currently only available for English and Spanish.
You can set up a custom server-to-server integration with any platform. Please see this guide for details on how to format your messages into a JSON payload to upload to Chatbase.
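As a rough sketch of what such an upload can look like, here is a Node.js snippet assuming the generic message endpoint (https://chatbase.com/api/message) and the field names of Chatbase's generic message API; double-check both against the guide before relying on them:

```js
// Sketch: send one user message to Chatbase's generic message API.
// The endpoint and field names below follow the generic message API as I
// understand it; verify them against the integration guide.
const https = require('https');

const payload = JSON.stringify({
  api_key: 'YOUR_CHATBASE_API_KEY', // from your Chatbase dashboard
  type: 'user',                     // 'user' or 'agent'
  user_id: 'unique-user-id',
  time_stamp: Date.now(),           // milliseconds since epoch
  platform: 'IBM Watson',           // free-form platform label
  message: 'Olá, preciso de ajuda', // UTF-8 text (e.g. Portuguese) is fine
  intent: 'ajuda',                  // intent that handled the message (optional)
  version: '1.0'
});

const req = https.request({
  hostname: 'chatbase.com',
  path: '/api/message',
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Content-Length': Buffer.byteLength(payload)
  }
}, res => {
  res.setEncoding('utf8');
  res.on('data', body => console.log(res.statusCode, body));
});

req.on('error', console.error);
req.end(payload);
```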
Google Cloud's Text-to-Speech API has a WaveNet model whose output, in my opinion, sounds way better than the standard speech. This model can be used in Dialogflow agents (Settings > Speech > Text To Speech), which results in the generated speech being included in the DetectIntentResponse. However, I can find no way to use this speech with the Actions on Google integration, i.e. in an actual Google Assistant app. Have I overlooked this, or is this really not possible, and if so, does anyone know when they plan to enable it?
In the Actions console, going to the Invocation page lets you select a TTS voice.
All of the voices can be demoed on the Languages & Locales page of the docs, and the vast majority of them use WaveNet voices.
Right now I have some problems with API queries for an unsupported language. I know that the Cambodian language is not supported in Dialogflow. When I type it in the test console within Dialogflow it works fine, but when I try to send it through the API it doesn't work, because the resolved query comes back as unreadable characters. Is there any possible way to send it through the API in a language that is not yet supported?
Thanks
When you say Cambodian language, do you mean Khmer? Google Cloud's Translate API (https://cloud.google.com/translate/) supports Khmer, which could help with this.
As far as Dialogflow is concerned, these pages of documentation should help with fulfillment:
https://dialogflow.com/docs/reference/language
https://dialogflow.com/docs/agents/multilingual
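If the agent's primary language is English, one possible workaround is to translate the Khmer text with the Translate API before calling Dialogflow. Below is a minimal Node.js sketch against the Translate v2 REST endpoint; the API key is a placeholder, and the Dialogflow call itself is left out:

```js
// Sketch: translate Khmer ('km') to English before sending it to an
// English Dialogflow agent. Assumes a Translate API key; the Dialogflow
// detect-intent call is omitted here.
const https = require('https');
const querystring = require('querystring');

function translateKhmer(text, apiKey, callback) {
  const qs = querystring.stringify({
    key: apiKey,
    q: text,
    source: 'km',
    target: 'en',
    format: 'text'
  });
  https.get(`https://translation.googleapis.com/language/translate/v2?${qs}`, res => {
    let body = '';
    res.on('data', chunk => (body += chunk));
    res.on('end', () => {
      const data = JSON.parse(body);
      callback(null, data.data.translations[0].translatedText);
    });
  }).on('error', callback);
}

translateKhmer('សួស្តី', 'YOUR_API_KEY', (err, english) => {
  if (err) return console.error(err);
  console.log(english); // pass this English text on to Dialogflow
});
```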
I just started on a project in Dialogflow and I was wondering: is it possible to link my Dialogflow agent to a specific desktop application? And if it is possible, what is the solution?
For example:
By saying "launch app", it will open up the desktop application "app".
While this is certainly something that Dialogflow's APIs can help with - this isn't a feature provided by Dialogflow itself. Dialogflow's NLP runs in the cloud - there is nothing local that it can "do".
However, you can create a launcher app that does this sort of thing by opening the microphone and sending either the stream or a speech-to-text version to Dialogflow through the Detect Intent API. Dialogflow can determine an Intent that would handle this and pass that information back to your launcher, and your launcher can then locate the app and start it.
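A minimal sketch of that launcher idea, taking already-recognized text as input and using the dialogflow Node.js client for the Detect Intent call plus child_process to start the program. The project ID, the "launch.app" intent, its "app" parameter, and the executable table are all assumptions for illustration:

```js
// Sketch: send recognized text to Dialogflow; if the matched intent is a
// hypothetical "launch.app" intent, start the requested program locally.
const dialogflow = require('dialogflow');
const { spawn } = require('child_process');

const APPS = { calculator: 'calc.exe', notepad: 'notepad.exe' }; // example mapping

async function handleUtterance(text) {
  const sessionClient = new dialogflow.SessionsClient();
  const session = sessionClient.sessionPath('my-gcp-project', 'launcher-session');

  const [response] = await sessionClient.detectIntent({
    session,
    queryInput: { text: { text, languageCode: 'en-US' } }
  });

  const result = response.queryResult;
  if (result.intent && result.intent.displayName === 'launch.app') {
    // Assumes the intent defines an "app" parameter holding the app name.
    const appField = result.parameters.fields.app;
    const executable = appField && APPS[appField.stringValue];
    if (executable) spawn(executable, [], { detached: true });
  }
}

handleUtterance('launch calculator').catch(console.error);
```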
I'm not sure how practical this would be, however. Microsoft already has this feature built-in with Cortana, and Google is building the Assistant into ChromeOS which will do this as well. While I'm not aware of Apple doing this, I may just have missed an announcement that Siri does this as well. And if there isn't someone who is doing this for Linux using some local speech-to-text libraries, it sounds like the perfect opportunity to do so.
You may try using the different Dialogflow clients available on their GitHub page. Java Client 2 may be a helpful starting point. However, you will be required to write your own UI code and consume the Dialogflow API.
This is my first experience using IBM Watson, and I am stuck integrating Watson Conversation with the Speech to Text and Text to Speech API services on the Node.js platform.
I'm done with the conversation part, but I can't find a method to build this pipeline:
input speech => output of STT => input of Conversation => output of Conversation => input of TTS => output speech
I have tried multiple ways but still can't get even 1% success. I have followed multiple GitHub repos, including the most-forked one, https://github.com/watson-developer-cloud/node-sdk, and multiple TJBot recipes, etc., but still no results.
Can anyone here guide me to the right method?
Does this help? I think this demo is similar to what you are doing. It uses STT, TTS, and Conversation, I believe.
https://github.com/watson-developer-cloud/speech-javascript-sdk/tree/master/examples
https://speech-dialog.mybluemix.net/
https://github.com/nfriedly/speech-dialog
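If you would rather chain the services yourself with the node-sdk linked in the question, a minimal file-in, file-out sketch looks roughly like this; the credentials, workspace_id, and file names are placeholders:

```js
// Sketch: WAV file -> Speech to Text -> Conversation -> Text to Speech -> WAV file.
// Uses the watson-developer-cloud Node SDK; all credentials are placeholders.
const fs = require('fs');
const SpeechToTextV1 = require('watson-developer-cloud/speech-to-text/v1');
const ConversationV1 = require('watson-developer-cloud/conversation/v1');
const TextToSpeechV1 = require('watson-developer-cloud/text-to-speech/v1');

const stt = new SpeechToTextV1({ username: 'STT_USER', password: 'STT_PASS' });
const conversation = new ConversationV1({
  username: 'CONV_USER',
  password: 'CONV_PASS',
  version_date: ConversationV1.VERSION_DATE_2017_05_26
});
const tts = new TextToSpeechV1({ username: 'TTS_USER', password: 'TTS_PASS' });

// 1. Speech to Text: transcribe the user's spoken question
stt.recognize(
  { audio: fs.createReadStream('question.wav'), content_type: 'audio/wav' },
  (err, sttResult) => {
    if (err) return console.error(err);
    const text = sttResult.results[0].alternatives[0].transcript;

    // 2. Conversation: send the transcript as the user's input
    conversation.message(
      { workspace_id: 'YOUR_WORKSPACE_ID', input: { text } },
      (err, convResult) => {
        if (err) return console.error(err);
        const reply = convResult.output.text.join(' ');

        // 3. Text to Speech: pipe the synthesized reply to a file
        tts
          .synthesize({ text: reply, voice: 'en-US-AllisonVoice', accept: 'audio/wav' })
          .pipe(fs.createWriteStream('answer.wav'));
      }
    );
  }
);
```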
There are some great examples that you can download and play around with on the Watson Starter Kits page.
Create a few of them, download the code, and then plunder what you need for your app, or use one of the starter kits as the beginning of your app.
Starter kits on the page linked above that I think can help:
Watson Speech to Text Basic
Watson Assistant Basic
Watson Text to Speech Basic
Each of the starter kits listed above is available in Node and has a README.md file to help you set everything up.
I have been struggling for a while and have been looking through many examples of how to enable the mic in a browser with Node.js. I have seen several JavaScript examples, but I can't get the spoken content out of them and store it in variables. How can I enable the mic using Node.js? Will I need a specific npm package? I am currently working with the IBM Watson Speech to Text API. Any help is appreciated! Thanks in advance!
You will need to enable the mic in the browser using a client side library.
Use the Speech-to-Text SDK here:
https://github.com/watson-developer-cloud/speech-javascript-sdk
And a working example here:
https://watson-speech.mybluemix.net/microphone-streaming.html
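A minimal browser-side sketch with that SDK, assuming your Node.js server exposes a (hypothetical) /api/token route that returns a Speech to Text auth token:

```js
// Browser-side sketch using the speech-javascript-sdk linked above.
// The /api/token route is an assumption: your own server must return a
// Watson Speech to Text auth token from it.
fetch('/api/token')
  .then(res => res.text())
  .then(token => {
    const stream = WatsonSpeech.SpeechToText.recognizeMicrophone({
      token: token,
      objectMode: true // emit result objects instead of raw text
    });

    stream.on('data', data => {
      // Interim and final transcripts arrive here; store what you need.
      const transcript = data.results[0].alternatives[0].transcript;
      console.log(data.results[0].final ? 'final:' : 'interim:', transcript);
    });

    stream.on('error', err => console.error(err));

    // Stop recording after 10 seconds in this sketch.
    setTimeout(() => stream.stop(), 10000);
  });
```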
Please be aware that streaming microphone input will not work on any version of Safari. You will need to use Firefox, Chrome, or IE for streaming microphone input into Watson Speech to Text. There's a YouTube tutorial on building a simple Bluemix app using Speech to Text (see Chapter 3): Youtube Tutorial. The supporting code is in a public git repo: Zero To Cognitive Repo.