Building a Cortana skill to take a picture - Node.js

I am trying to build a Cortana skill that can take a picture using the Surface camera. How do I do that? Currently my skill can answer questions using the Bot Framework and Node.js. The code looks like this:
bot.dialog('ScanCardDialog', function (session) {
    // <what needs to be done to take a picture goes here>
}).triggerAction({ matches: /(\w)+ (card)/i });

The only way you can do that is with a companion UWP app that takes the picture. Cortana skills can deep-link to a UWP app, but they cannot access the device camera yet. Hope this helps!!
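For illustration, here is a minimal sketch of what that deep link could look like in a botbuilder v3 dialog. It assumes your companion UWP app registers a custom protocol (the name my-camera-app: below is hypothetical); Cortana launches the app when the button's URI is opened.

var builder = require('botbuilder');

bot.dialog('ScanCardDialog', function (session) {
    // Reply with a card whose button opens the companion app's protocol URI.
    // 'my-camera-app:scan' is a placeholder for whatever protocol your UWP app registers.
    var card = new builder.HeroCard(session)
        .title('Scan your card')
        .text('Open the camera app to take a picture.')
        .buttons([
            builder.CardAction.openUrl(session, 'my-camera-app:scan', 'Open camera')
        ]);
    session.endDialog(new builder.Message(session).addAttachment(card));
}).triggerAction({ matches: /(\w)+ (card)/i });

The UWP app itself would handle the protocol activation, take the picture, and hand the result back to your service if needed.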

Related

How to build a searchable list with scroll in an adaptive card, like WhoBot, in the MS Teams Bot Framework

I am developing an MS Teams bot using the Bot Framework with Node.js.
After the user asks a question, my bot has a list of items to display, as WhoBot does (shown in the image below). Does anybody know how to build this kind of clean, scrollable list of results? Or can someone point me to a tutorial or page on how to make this? Please help me!
This is the list card implementation. You can find the related docs here: https://learn.microsoft.com/en-us/microsoftteams/platform/task-modules-and-cards/cards/design-effective-cards#lists. As for the search logic, you will need to implement it in your application's backend and pass the resulting items to the list card.
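As a rough sketch (botbuilder v3, Node.js), a list card is sent as a raw attachment with the Teams-specific content type. The results array below is illustrative; you would build it from your backend search:

var builder = require('botbuilder');

function sendListCard(session, results) {
    // 'results' is assumed to be an array of { title, subtitle, id } objects from your backend search.
    var card = {
        contentType: 'application/vnd.microsoft.teams.card.list',
        content: {
            title: 'Search results',
            items: results.map(function (r) {
                return {
                    type: 'resultItem',
                    title: r.title,
                    subtitle: r.subtitle,
                    tap: { type: 'imBack', value: 'select ' + r.id }
                };
            })
        }
    };
    session.send(new builder.Message(session).addAttachment(card));
}

Teams renders the items as a list; tapping an item sends the imBack value back to your bot, where you can handle the selection.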

Explore voice capability in a Microsoft chat bot

I have a chat bot developed using the Microsoft Bot Framework. In the chat bot there is an option for customers to ask a question, and we need to make that voice-enabled. We are currently using Node.js and Azure for development. I wanted to know how we can achieve this.
In Google Chrome, the mic can be enabled easily, and Chrome can also handle the speech recognition for you. First, follow the link below and check whether you are using the same framework: https://learn.microsoft.com/en-us/azure/bot-service/bot-service-channel-connect-webchat-speech?view=azure-bot-service-3.0
After that, the following link can also help with enabling voice: Can't Chrome's speechSynthesis work offline?
You can enable voice on Firefox as well using Bing translation. This link will help you: https://github.com/Microsoft/BotFramework-WebChat/issues/1141
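For reference, a minimal sketch of wiring up browser-based speech in the v3 Web Chat control that the first link covers; the Direct Line secret, user/bot ids, and element id are placeholders:

// index.html, assuming botchat.js (Web Chat v3) is already loaded on the page
var botConnection = new BotChat.DirectLine({ secret: 'YOUR_DIRECT_LINE_SECRET' });

BotChat.App({
    botConnection: botConnection,
    user: { id: 'user1' },
    bot: { id: 'mybot' },
    speechOptions: {
        speechRecognizer: new BotChat.Speech.BrowserSpeechRecognizer(),   // browser's Web Speech API (works in Chrome)
        speechSynthesizer: new BotChat.Speech.BrowserSpeechSynthesizer()
    }
}, document.getElementById('bot'));

If you need speech outside Chrome, a Cognitive Services speech key can be plugged in via speechOptions instead of the browser-based recognizer.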

Take panoramic photos in Android

I want to make an Android application that allows the user to take panoramic pictures. I have been searching for several hours for a library, some sample code, or a tutorial, but I haven't found anything very useful. Some applications, like Cardboard Camera or the standard Android camera, can do this. Is there a way to call these applications' functions, or some API? It would still be fine if my app simply used an external app to take the photo. Please help me, thank you :)

Detect audio from the user and convert it to text to command AI bots in Unity

I am making a game where I want to command the AI using words I speak.
For example, I can say "go" and the AI bot moves a certain distance.
The problem is that I am looking for an asset, and no provider can guarantee me that it is possible.
What are the difficulties in doing this?
I am a programmer, so if someone suggests a way to handle it, I can do it.
Should I keep a mic listener on all the time, read the audio, and then pass it to some external SDK that can convert my voice to text?
These are the asset providers I have contacted:
https://www.assetstore.unity3d.com/en/#!/content/73036
https://www.assetstore.unity3d.com/en/#!/content/45168
https://www.assetstore.unity3d.com/en/#!/content/47520
and a few more!
If someone just explains the steps I need to follow, I can try it for sure.
I am currently using this external API for pretty much the same thing: https://api.ai/
It comes with a Unity SDK that works quite well:
https://github.com/api-ai/api-ai-unity-sample#apiai-unity-plugin
You have to connect an audio source to the SDK and tell it to start listening. It will then convert your voice audio to text, and it can even detect pre-selected intents from your voice audio / text.
You can find all the steps for integrating the Unity plugin in the api.ai Unity SDK documentation on GitHub.
EDIT: It's free too, btw :)
If you want to recognize speech offline without sending data to a server, you can try this plugin:
https://github.com/dimixar/unity3DPocketSphinx-android-lib
It uses the open-source speech recognition engine CMUSphinx.

Chrome-based slideshow app for Chromecast

I am trying to write a simple Chrome app to play a sequence of online pictures on my Chromecast device.
I have looked at some examples, but couldn't find anything I could tweak to get the simple behavior I need. Maybe someone here could help by providing directions or advice on getting started with developing something like this for Chromecast.
UPDATE:
To give you a better idea of the specifics, let me add some more details to my requirements.
It needs to be controlled from Chrome.
I want to pass a playlist with tens to hundreds of images so it can cycle through them in a loop.
After receiving the playlist, the Chromecast device should be able to continue on its own, without continuously asking for the next image.
This is actually similar to the Backdrop feature Google is planning to introduce, but I wanted to write something myself.
Thanks
If you don't want to develop your own Cast receiver, you can use the media namespace channel and the Styled Media Receiver to display one photo at a time:
https://developers.google.com/cast/docs/styled_receiver
You will have to add the logic to advance from photo to photo in your sender app.
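For example, with the Chrome sender API that advancing logic could look roughly like this (a sketch that assumes you already have a connected chrome.cast session and an array of image URLs):

// Assumes the Cast sender API is initialized and 'session' came from chrome.cast.requestSession().
var photos = ['https://example.com/1.jpg', 'https://example.com/2.jpg'];  // placeholder URLs
var index = 0;

function showNextPhoto(session) {
    var mediaInfo = new chrome.cast.media.MediaInfo(photos[index], 'image/jpeg');
    var request = new chrome.cast.media.LoadRequest(mediaInfo);
    session.loadMedia(request,
        function () {
            index = (index + 1) % photos.length;
            setTimeout(function () { showNextPhoto(session); }, 10000);  // advance after 10 s
        },
        function (e) { console.log('Load failed', e); });
}

Note that with this approach the sender keeps driving the slideshow, which does not meet the "continue on its own" requirement; that is what the custom receiver option below addresses.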
If you are willing to develop your own custom receiver, then you can start with this Cast sample app:
https://github.com/googlecast/CastHelloText-android
It allows you to send messages to a custom receiver. You can use that to send the URLs of the photos and then you can add JavaScript logic in the receiver to play a slideshow.
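A bare-bones custom receiver along those lines might look like this (a sketch only; the urn:x-cast namespace is a made-up example, and the page is assumed to contain an <img id="slide"> element):

// Receiver page script (Cast Receiver API v2, the same API the sample above uses)
window.onload = function () {
    var NAMESPACE = 'urn:x-cast:com.example.slideshow';  // hypothetical namespace, must match the sender
    var manager = cast.receiver.CastReceiverManager.getInstance();
    var bus = manager.getCastMessageBus(NAMESPACE, cast.receiver.CastMessageBus.MessageType.JSON);
    var urls = [], index = 0, timer = null;

    bus.onMessage = function (event) {
        // The sender sends the whole playlist once; after that the receiver loops on its own.
        urls = event.data.urls || [];
        index = 0;
        if (timer) { clearInterval(timer); }
        timer = setInterval(function () {
            if (urls.length === 0) { return; }
            document.getElementById('slide').src = urls[index];
            index = (index + 1) % urls.length;
        }, 10000);  // 10 seconds per image
    };

    manager.start();
};

This is the part that satisfies the "continue on its own" requirement: the sender only delivers the playlist, and the receiver cycles the images indefinitely.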
Just to let you know, I tried various options and ended up writing a custom receiver and a Chrome sender application. It was really straightforward and exactly what I wanted.
See the links above for guidance and examples.
