Microsoft has provided some sample applications for Speech To Text (and vice versa) on GitHub.
I am using the demo-RollerSkill sample code. I have hosted this application in our environment and created a new bot using dev.BotFramework.com.
As you may know, this application is written in C#/.NET.
I am facing the following challenge:
The application's voice commands (Speech To Text and vice versa) work fine in the emulator, but in the WebChat channel the mic icon does not show.
So do voice commands currently support only a Node.js bot?
Any quick help is appreciated.
Thank you.
Related
I just wanted to know if we can somehow integrate Alexa or Google Assistant into a website I made. I don't want to build completely new skills or apps. I just want to know: if I type something, can I get a reply from Google Assistant or Alexa and show it on the website?
I think Google has a Google Assistant SDK, but it's written in Python, and even the Node.js one depends on a Python environment.
So is there any chance I can do this?
No.
The only way to initiate a conversation with the smart speaker is by voice. There is no server-side activation, sorry.
I'm doing a project where I use a NodeMCU board to control the lights in my house, and I'm using a service called Blynk, which provides an app-like environment to interact with them and turn the lights on and off with virtual buttons; for each button there is a webhook that can be used.
So I want to integrate that webhook with custom Google Assistant commands, but as I scoured the internet I only found IFTTT, which I already know about. Is there any way to trigger webhooks with a voice command without using IFTTT, using the Google Actions console instead?
If yes, can someone please explain how to do that?
Thanks in advance.
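One IFTTT-free route is to create an Actions on Google project, attach a Dialogflow agent with intents for the commands, and point its fulfillment at a small service that calls the Blynk webhook directly. Below is a minimal sketch of that fulfillment's core logic; the "LightsOn"/"LightsOff" intent names, virtual pin V1, and the legacy Blynk REST URL format are assumptions, so substitute the actual webhook URL from your Blynk app (Node 18+ for the global fetch):

```typescript
// Sketch: core of a Dialogflow fulfillment that triggers a Blynk webhook.
// BLYNK_TOKEN, the intent names, and the URL format below are assumptions.
const BLYNK_TOKEN = "YOUR_BLYNK_AUTH_TOKEN"; // placeholder

async function onIntent(intentName: string): Promise<string> {
  // Hypothetical intents "LightsOn" / "LightsOff" map to a 1/0 write.
  const value = intentName === "LightsOn" ? 1 : 0;

  // Blynk's legacy HTTP API: write `value` to virtual pin V1.
  await fetch(`http://blynk-cloud.com/${BLYNK_TOKEN}/update/V1?value=${value}`);

  return value ? "Turning the lights on." : "Turning the lights off.";
}
```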
My development team has built an Android app, for which a chatbot is to be developed, so I chose the Dialogflow platform to create the chatbot. The APIs for the app screens have already been created by the development team. For the chatbot in Dialogflow, after creating all the necessary intents, is it enough to just enter the API URL in the webhook URL section, or do I need to add some logic in the inline editor? (The APIs are written in Python and connected to a MySQL DB.)
As a beginner to Dialogflow, I couldn't move forward. Can anyone please help me out? Thanks in advance.
If you have made the intents for your chatbot, you will still need to write code that chooses what to do for each intent. Dialogflow's documentation can explain the details of what you need to do.
The documentation includes an overview diagram of all the components. To connect your chatbot to your MySQL API you will have to write a webhook service. This service can be an API that you host somewhere, or code that you write in Dialogflow's inline editor. There you can program which APIs to call for each intent and how to present the output to the user. More info about that can be found here.
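To make that concrete, here is a minimal sketch of such a webhook service, assuming the Dialogflow ES fulfillment request format; the Express server, the "GetOrderStatus" intent, the orderId parameter, and the API URL are hypothetical stand-ins for your team's actual Python API:

```typescript
// Minimal Dialogflow ES fulfillment webhook (sketch).
// The intent name, parameter, and API_BASE URL are hypothetical.
import express from "express";

const API_BASE = "http://your-python-api.example.com"; // placeholder

const app = express();
app.use(express.json());

app.post("/webhook", async (req, res) => {
  // Dialogflow sends the matched intent and parameters in queryResult.
  const intent = req.body.queryResult?.intent?.displayName;
  const params = req.body.queryResult?.parameters ?? {};

  let fulfillmentText = "Sorry, I can't help with that yet.";

  if (intent === "GetOrderStatus") {
    // Call the existing API, which in turn queries MySQL.
    const r = await fetch(`${API_BASE}/orders/${params.orderId}`);
    const order = await r.json();
    fulfillmentText = `Your order is currently ${order.status}.`;
  }

  // Dialogflow shows/speaks this text back to the user.
  res.json({ fulfillmentText });
});

app.listen(3000);
```

So entering the URL in the webhook section is only half of it: the service behind that URL has to speak Dialogflow's request/response format, as above.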
I have a chatbot developed using the Microsoft Bot Framework. In the chatbot we have an option for the customer to ask a question, and we need to make that voice enabled. We are currently using Node.js and Azure for development. I wanted to know how we can achieve this.
In Google Chrome the mic can be enabled easily, and Chrome will also transcribe the voice for you. First follow the link below and check whether you are using the same framework: https://learn.microsoft.com/en-us/azure/bot-service/bot-service-channel-connect-webchat-speech?view=azure-bot-service-3.0
After that, the following link can also help with enabling voice: Can't Chrome's speechSynthesis work offline?
You can enable voice in Firefox as well, using the Bing speech service. This link will help you: https://github.com/Microsoft/BotFramework-WebChat/issues/1141
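For reference, enabling speech in the classic (v3) Web Chat control described in that first link comes down to passing speechOptions when you embed the control; a sketch, with the subscription key, Direct Line secret, and element ID as placeholders:

```typescript
// Sketch: speech-enabling the classic (v3) Web Chat control.
// Globals come from botchat.js and CognitiveServices.js on the page.
declare const BotChat: any;
declare const CognitiveServices: any;

const speechOptions = {
  speechRecognizer: new CognitiveServices.SpeechRecognizer({
    subscriptionKey: "YOUR_SPEECH_SUBSCRIPTION_KEY" // placeholder
  }),
  speechSynthesizer: new CognitiveServices.SpeechSynthesizer({
    gender: CognitiveServices.SynthesisGender.Female,
    subscriptionKey: "YOUR_SPEECH_SUBSCRIPTION_KEY", // placeholder
    voiceName: "Microsoft Server Speech Text to Speech Voice (en-US, JessaRUS)"
  })
};

BotChat.App({
  directLine: { secret: "YOUR_DIRECT_LINE_SECRET" }, // placeholder
  user: { id: "user" },
  bot: { id: "bot" },
  speechOptions // this is what makes the mic icon appear
}, document.getElementById("bot"));
```

The same approach works regardless of whether the bot itself is written in Node.js or C#, since speech is handled on the client side.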
I just started on a project in Dialogflow and I was wondering: is it possible to link my Dialogflow agent to a specific desktop application? And if it is possible, what is the solution?
For example:
By saying "launch app", it will open up the desktop application "app".
While this is certainly something that Dialogflow's APIs can help with, this isn't a feature provided by Dialogflow itself. Dialogflow's NLP runs in the cloud; there is nothing local that it can "do".
However, you can create a launcher app that does this sort of thing by opening the microphone and sending either the audio stream or a speech-to-text version of it to Dialogflow through the Detect Intent API. Dialogflow can determine an intent that would handle this and pass that information back to your launcher, and your launcher can then locate the app and start it.
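A minimal sketch of that launcher-side call, using the official @google-cloud/dialogflow Node.js client; the project ID, session ID, the "LaunchApp" intent, and its "app" parameter are assumptions for illustration:

```typescript
// Sketch: a launcher calling Dialogflow's Detect Intent API with
// already-transcribed text, then starting the matched application.
import { SessionsClient } from "@google-cloud/dialogflow";
import { spawn } from "child_process";

async function handleUtterance(text: string): Promise<void> {
  const sessions = new SessionsClient(); // auth via GOOGLE_APPLICATION_CREDENTIALS
  const session = sessions.projectAgentSessionPath(
    "my-project-id",   // placeholder project ID
    "launcher-session" // placeholder session ID
  );

  const [response] = await sessions.detectIntent({
    session,
    queryInput: { text: { text, languageCode: "en-US" } },
  });

  const result = response.queryResult;
  // Hypothetical "LaunchApp" intent with an "app" parameter.
  if (result?.intent?.displayName === "LaunchApp") {
    const appName = result.parameters?.fields?.app?.stringValue;
    if (appName) {
      spawn(appName, { detached: true }); // start the local application
    }
  }
}
```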
I'm not sure how practical this would be, however. Microsoft already has this feature built-in with Cortana, and Google is building the Assistant into ChromeOS which will do this as well. While I'm not aware of Apple doing this, I may just have missed an announcement that Siri does this as well. And if there isn't someone who is doing this for Linux using some local speech-to-text libraries, it sounds like the perfect opportunity to do so.
You may try the various Dialogflow clients available on their GitHub page. The Java Client 2 may be helpful to start your work. However, you will be required to write your own UI code and consume the Dialogflow API yourself.