Cortana-Activate app using text - winrt-xaml

I need help understanding how I can search for or activate my app using Cortana.
I have integrated the VCD file and am able to launch my application using speech.
I would also like to activate my app when the user types a keyword into Windows search, instead of using speech.
Any pointers?
Thanks,

Remember, Cortana keys off the moniker you selected for your app: just as you can say "MyApp do my command", you can also type "MyApp do my command".
So here are the things to remember:
Cortana works fine without a microphone
Typing to Cortana follows the same rules as speaking
When your app is activated, you can check how it was activated (voice or text)
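As a sketch, a minimal VCD command set along those lines might look like the fragment below (the "MyApp" prefix and the command name are placeholders, not anything from your project). The point is that the same phrases Cortana matches by voice are also matched when the user types them into Cortana's text box:

```xml
<?xml version="1.0" encoding="utf-8"?>
<VoiceCommands xmlns="http://schemas.microsoft.com/voicecommands/1.2">
  <CommandSet xml:lang="en-us" Name="MyAppCommandSet_en-us">
    <!-- The moniker users say OR type before every command -->
    <CommandPrefix>MyApp</CommandPrefix>
    <Example>MyApp do my command</Example>
    <Command Name="doMyCommand">
      <Example>do my command</Example>
      <ListenFor>do my command</ListenFor>
      <Feedback>Doing your command</Feedback>
      <Navigate/>
    </Command>
  </CommandSet>
</VoiceCommands>
```

When the app is then activated, the activation arguments carry a semantic property telling you whether the command arrived by voice or by text, so you can branch on that in your activation handler.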
Best of luck!

Related

Linking DialogFlow to a Desktop Application

I just started a project in Dialogflow and I was wondering: is it possible to link my Dialogflow agent to a specific desktop application? If so, what is the solution?
For example:
By saying "launch app", it will open up the desktop application "app"
While this is certainly something that Dialogflow's APIs can help with, it isn't a feature provided by Dialogflow itself. Dialogflow's NLP runs in the cloud; there is nothing local that it can "do".
However, you can create a launcher app that does this sort of thing by opening the microphone and sending either the stream or a speech-to-text version to Dialogflow through the Detect Intent API. Dialogflow can determine an Intent that would handle this and pass that information back to your launcher, and your launcher can then locate the app and start it.
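The launcher side of that idea can be sketched roughly as follows. This is a minimal local skeleton only: the intent names and executables in the mapping are made-up placeholders, and the actual call to the Detect Intent API (which would supply `intent_name`) is left out and assumed to happen elsewhere.

```python
import subprocess

# Hypothetical mapping from Dialogflow intent display names to local
# commands. Both the intent names and the executables are placeholders;
# a real launcher would load these from its own configuration.
APP_COMMANDS = {
    "launch.notepad": ["notepad.exe"],
    "launch.calculator": ["calc.exe"],
}

def command_for_intent(intent_name):
    """Look up the local command registered for a detected intent name."""
    return APP_COMMANDS.get(intent_name)

def handle_detected_intent(intent_name):
    """Launch the app mapped to the intent; return True if one was found.

    intent_name is assumed to come back from a Detect Intent call made
    elsewhere (e.g. response.query_result.intent.display_name).
    """
    cmd = command_for_intent(intent_name)
    if cmd is None:
        return False
    subprocess.Popen(cmd)  # fire-and-forget launch of the local app
    return True
```

The cloud round-trip (capturing audio, sending it to Detect Intent, reading the matched intent out of the response) is the part Dialogflow's client libraries handle; the lookup-and-launch step above is what you would have to build yourself.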
I'm not sure how practical this would be, however. Microsoft already has this feature built-in with Cortana, and Google is building the Assistant into ChromeOS which will do this as well. While I'm not aware of Apple doing this, I may just have missed an announcement that Siri does this as well. And if there isn't someone who is doing this for Linux using some local speech-to-text libraries, it sounds like the perfect opportunity to do so.
You may try the various Dialogflow clients available on their GitHub page; the Java Client 2 may be a helpful starting point. However, you will need to write your own UI code and consume the Dialogflow API yourself.

How can I write code in Dialogflow that opens the camera and takes a picture?

I'm new to chatbot programming. I would like to reproduce exactly what the command "OK Google, take a picture" does: Android opens the camera and takes the picture after 3 seconds. Dialogflow is a service from Google, so I'm thinking there may be a library with an example of this; if not, how should I search for a way to add this command to my Action?
PS: I'm building a location-and-opinion Action that receives a place and the user's opinion about it, so I would like to ask whether the user wants to take a picture of the place, but I don't know how to search for this!
Unfortunately, there is currently no library and no direct support for this type of thing. The Assistant does not give Action developers access to the camera. In fact, most of the work your Action does is on a cloud-based server, not on the device itself.
You can, in some cases, use something like the Android Link helper, but this requires the user to have installed your app on their phone, and doesn't quite do what it sounds like you want.

actions on google demo code is not working on my android phone

I followed the https://developers.google.com/actions/dialogflow/first-app tutorial and built my first Google Action. The Action works fine in my laptop browser in the test environment (I followed the "Preview the App" section of the tutorial). However, when I try to use this Action on my phone by saying "OK Google, talk to my first app", it doesn't work. Is it supposed to work on my phone as well? I am logged in with the same Gmail account on my phone.
PS - I have posted the same question in the "Actions on Google" Google+ community as well, but I am not sure whether that community is meant for such questions, so I am posting it here too.
Thanks in advance!
Once you have enabled testing through the simulator, it should be available on all devices (mobile, speakers like Google Home, etc) with the same account your simulator is running in. Double check to make sure they're the same account.
In your case, however, you may be using the wrong invocation phrase. The phrase you say must exactly match the invocation name configured for your Action.
If you haven't set a name in the configuration, then that phrase will be
Talk to my test app
Update
As you note in the comments - you also need to make sure you're running the Google Assistant, and not one of the other voice search components. The Google Assistant requires:
Android 6.0 or higher
Google app 6.13 or higher
Google Play services
1.5 GB of memory
720p screen resolution
Phone set to a supported language
As you probably already know, you have to enable Web & App Activity, Device Information, and Voice & Audio Activity on the Activity controls page.
Then you also have to make sure that the language of your agent is the same as the one used by your Google Assistant.
I solved it thanks to this last step.
Hope this helps

Google Actions/Home/Assistant - Custom App Name not recognized. Mistaken for another word

I've been trying to figure out how to resolve this.
I have an app built via API.AI for Google Assistant on Google Home, and if I "type" my app name into Google Assistant in test mode, it works. For example, typing "Hey Google, let me talk to Simonee" gets the reply "Sure, here is Simonee", and then the app kicks in.
However, if I speak it over the mic, no matter how I try, Google Assistant thinks I'm saying "cinnamon". Is there any way to register the name of the app on Google Home, or to tell it the pronunciation so it knows to kick off your app, so that the name of the app overrides a similar word?
Thanks.
If you're still testing, there isn't much that you can do. Adding a shortcut through the Google Home app might help.
However, when you submit your app for review, one of the things you need to do is specify the invocation name, which can be different from the name of the app itself. This exists to deal with pronunciation issues, and is why they suggest you record the invocation name rather than typing it in. For very complicated pronunciations, you may also wish to specify in the notes how it is pronounced and why; this will help them tune the recognizer to capture your name correctly.

Cortana skill not working on Windows 10 or iOS

I have added a Cortana skill using the Microsoft Bot Framework. My invocation phrase is "My Skill". When I try to talk to Cortana on iOS or Windows, it doesn't invoke the skill; instead, it keeps directing me to Bing results. Here are the sentences I said to Cortana:
Start My Skill
Ask My Skill to
Tell My Skill that
Any advice?
A friend fixed this issue. At the moment, Cortana skills only support English (United States). To fix it, go to the Cortana settings (note: not the OS language setting) and, under Language, select English (United States). It should then work; note that for apps on the phone, a restart will be needed.
You might find a better success rate for skill invocation by following the Cortana Invocation Name Guidelines. More specifically, the docs have a short (but helpful) section on Invocation Name Recommendations.
Following this, the emphasis and intonation you place on the skill invocation name will also help determine whether your skill activates or you end up with a Bing search for "ask my skill...".
I added myself to the Test Group and then joined the group using the Group Access URL. Then it worked.
It also helped to use a very distinct invocation name.
