Explore voice capability in a Microsoft chat bot - Azure

I have a chat bot developed using the Microsoft Bot Framework. In the chat bot we have an option for the customer to ask a question, and we need to make that voice enabled. We are currently using Node.js and Azure for development. I wanted to know how we can achieve this.

On Google Chrome, the mic can be enabled easily, and Chrome will also transcribe the voice for you. First, follow the link below and check whether you are using the same framework: https://learn.microsoft.com/en-us/azure/bot-service/bot-service-channel-connect-webchat-speech?view=azure-bot-service-3.0
After that, the following link can also help with enabling voice: Can't Chrome's speechSynthesis work offline?
You can enable voice on Firefox as well, using Bing's speech service. This link will help you: https://github.com/Microsoft/BotFramework-WebChat/issues/1141
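If you just need basic voice input and output in the browser (the Chrome route mentioned above), a minimal sketch could look like the following. It uses the standard Web Speech API for recognition and speechSynthesis for reading replies aloud; the `directLine` client, the user id, and the mic button id are assumptions about your own Web Chat setup, not something taken from the linked docs.

    // Browser-side sketch: speech recognition via the Web Speech API (Chrome),
    // with the recognized text forwarded to the bot over Direct Line.
    const recognition = new (window.SpeechRecognition || window.webkitSpeechRecognition)();
    recognition.lang = 'en-US';
    recognition.interimResults = false;

    recognition.onresult = (event) => {
      const transcript = event.results[0][0].transcript;
      // Send the recognized question to the bot as a normal text activity
      // (assumes `directLine` was created with botframework-directlinejs / Web Chat).
      directLine.postActivity({
        from: { id: 'user1' },
        type: 'message',
        text: transcript,
      }).subscribe();
    };

    // Read a bot reply aloud with the browser's built-in speech synthesis
    function speak(text) {
      const utterance = new SpeechSynthesisUtterance(text);
      window.speechSynthesis.speak(utterance);
    }

    // Start listening when the user clicks a mic button (button id is hypothetical)
    document.getElementById('mic-button').addEventListener('click', () => recognition.start());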

Related

How to integrate dialogflow with instagram

I wanted to create an Instagram chatbot, but I can't find a way to connect Dialogflow and Instagram. Is there a way to do so, even if it means using another chatbot platform rather than Dialogflow?
The only way I found was by using the following Android app:
https://play.google.com/store/apps/details?id=tkstudio.autoresponderforig&hl=en
This requires your phone to be on and connected to the internet at all times to work, but until they release an official API, I think this is all there is.

Watson Assistant with Instagram

I'm trying to develop a chatbot for Instagram Direct. Does anybody know if it is possible to integrate Watson Assistant (a chatbot) with Instagram Direct?
It is possible to create chatbots on Instagram. Whether you can use Watson Assistant or not depends mostly on the Instagram API, which you should check.
But definitely yes, there are some IG chatbots nowadays; for example, there are Spanish-language IG chatbots.
As far as I know Instagram does not publish APIs for chatbot development, so I don't believe any platforms really support chatbots on Instagram, including Watson.

Enabling the microphone in a browser using node.js and capturing the information spoken

I have been struggling for a while and have been looking through many examples of how to enable the mic in a browser with Node.js. I have seen several JavaScript examples, but I can't get the spoken content out of them and store it in a variable. How can I enable the mic using Node.js? Will I need a specific npm package? I am currently working with the IBM Watson Speech to Text API. Any help is appreciated! Thanks in advance!
You will need to enable the mic in the browser using a client-side library.
Use the Speech-to-Text SDK here:
https://github.com/watson-developer-cloud/speech-javascript-sdk
And a working example here:
https://watson-speech.mybluemix.net/microphone-streaming.html
Please be aware that streaming from the microphone will not work on any version of Safari; you will need to use Firefox, Chrome, or IE to stream microphone audio into Watson Speech to Text. There's a YouTube tutorial on building a simple Bluemix app using Speech to Text (see Chapter 3): YouTube Tutorial. The supporting code is in a public Git repo: Zero To Cognitive Repo.
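As a rough sketch of how the browser side could look with that SDK, assuming your Node.js server exposes a token endpoint such as /api/speech-to-text/token that returns an IAM access token and the service URL (that endpoint name is made up for illustration):

    // Runs in the browser, not in Node: the mic can only be opened client side.
    const SpeechToText = require('watson-speech/speech-to-text');

    fetch('/api/speech-to-text/token') // hypothetical token endpoint on your server
      .then((response) => response.json())
      .then(({ accessToken, url }) => {
        // Prompts for mic permission and streams audio to Watson Speech to Text
        const stream = SpeechToText.recognizeMicrophone({
          accessToken,
          url,
          objectMode: true, // emit parsed result objects instead of raw text
        });

        stream.on('data', (data) => {
          // Store the latest transcript in a variable for further processing
          const transcript = data.results[0].alternatives[0].transcript;
          console.log(transcript);
        });

        stream.on('error', (err) => console.error(err));
      });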

Using Spotify with the new Google Assistant SDK?

Is it possible? I can't figure out how. I can only find instructions detailing how to set up Spotify using the Google Home app.
Per the release notes, playing music is not currently supported in the developer preview.

Possible to return an image in the Google Actions webhook response?

From the docs it seems like SpeechResponse is the only documented type of response you can return:
https://developers.google.com/actions/reference/conversation#SpeechResponse
Is it possible to load an image or some other type of media into the Assistant conversation via API.AI or the Actions SDK? It seems like this is supported with API.AI for FB and other messengers:
https://docs.api.ai/docs/rich-messages#image
Thanks!
As of today, the Google Actions SDK supports Conversation Actions for building a voice UI, integrated with Google Home.
The API.AI integration with Google Actions can be checked out here, and it currently shows no support for images in the response.
When they provide an integration with Google Allo, they might start supporting images, videos, etc. in the messaging interface.
That feature seems to be present now. You can look it up in the docs at https://developers.google.com/actions/assistant/responses
Note: images are only supported on devices with a visual output, so Google Home would obviously not be able to display them, but devices with a screen do support a card with an image (see the sketch at the end of this thread).
Pro tip: yes, you can.
What you want to do is represent your image/video as a URL within API.AI and render that URL as an image/video within your app.
See the working example.
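To make the card-with-an-image idea concrete, here is a minimal sketch of a Dialogflow (API.AI) fulfillment using the actions-on-google Node.js client library. The intent name, image URL, and the Express wiring are placeholders and assumptions about your own project, not something prescribed by the docs linked above.

    const express = require('express');
    const bodyParser = require('body-parser');
    const { dialogflow, BasicCard, Image } = require('actions-on-google');

    const app = dialogflow();

    app.intent('show_image', (conv) => {
      // Only attach the card on surfaces that can actually display it
      const hasScreen = conv.surface.capabilities.has('actions.capability.SCREEN_OUTPUT');
      if (!hasScreen) {
        conv.ask('Sorry, this device has no screen to show the image on.');
        return;
      }
      // A rich response must be accompanied by a simple (spoken) response
      conv.ask('Here is the picture you asked for.');
      conv.ask(new BasicCard({
        title: 'Example card',
        image: new Image({
          url: 'https://example.com/picture.png', // placeholder URL
          alt: 'An example picture',
        }),
      }));
    });

    // One way to host the webhook; Cloud Functions or another server work too
    express().use(bodyParser.json(), app).listen(3000);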
