After I entered the App ID and password, I selected the Language Understanding Intelligent Service template and clicked the Create button. It shows a loading page and then gets stuck on this step forever.
Can someone advise why this happens?
Thanks,
Jack
I'm looking for a basic ability for the bot to learn from the questions and answers the agents provide, so they can be used as suggested replies that the chatbot offers to users before connecting them to a live agent. I found that QnA Maker does something similar; can I integrate it here?
Go to https://language.cognitive.azure.com/
Click on Create
Choose Custom Question Answering
After creating, go to that project
Click on Edit Knowledge Base and create question and answer pairs.
After creating the knowledge base, go to Deploy knowledge base
Click on Deploy
Create a bot and attach the existing knowledge base to the bot before going to the Live Chat
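Once the knowledge base is deployed, the bot (or your own agent-assist code) queries it over REST. Below is a minimal sketch in Python, assuming an Azure AI Language resource with Custom Question Answering enabled; the endpoint, key, and project name are placeholders, and the API version shown is the 2021-10-01 question-answering runtime.

```python
# Minimal sketch: query a deployed Custom Question Answering project over REST.
# ENDPOINT, API_KEY, and PROJECT are placeholders for your own Language resource.
import requests

ENDPOINT = "https://<your-language-resource>.cognitiveservices.azure.com"
API_KEY = "<your-language-resource-key>"
PROJECT = "<your-project-name>"

def ask_knowledge_base(question: str, top: int = 3) -> list:
    """Return candidate answers for a user question from the deployed project."""
    url = f"{ENDPOINT}/language/:query-knowledgebases"
    params = {
        "projectName": PROJECT,
        "deploymentName": "production",   # created by the Deploy step above
        "api-version": "2021-10-01",
    }
    headers = {"Ocp-Apim-Subscription-Key": API_KEY}
    body = {"question": question, "top": top}
    resp = requests.post(url, params=params, headers=headers, json=body, timeout=10)
    resp.raise_for_status()
    return resp.json().get("answers", [])

if __name__ == "__main__":
    for answer in ask_knowledge_base("What are your opening hours?"):
        print(round(answer["confidenceScore"], 2), answer["answer"])
```

The returned answers include confidence scores, so you can surface the top matches to the live agent as suggested replies and only hand off to a human when no answer clears your threshold.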
I'm creating a chatbot with Dialogflow to identify questions about the store and its products and answer accordingly. But when constructing intents I came across this problem. The approaches I think I can take are as follows.
1st Approach
Create multiple intents
GetPrice, GetColor, GetAvailability, GetType, GetStoreName, GetStoreContact
The difficulty I found with this approach is that I have to create dozens of intents to cover all product types and all types of questions about the store.
The advantage is that I can train the intents separately.
2nd Approach
Create 2 intents
ProductQuestions, StoreQuestions
The training for all of the question types from the 1st approach has to be done within those 2 intents.
Which approach should I take, and which will be more scalable in the future?
Most logic for conversation design can be based on your personal preferences. If you're looking for best practices, check out Google's documentation here:
https://developers.google.com/actions/assistant/best-practices
In my opinion you should go with the 1st approach. It is more flexible and scalable.
You would need to make many intents for sure, but you would be able to tell exactly what the user wants to know.
With the 2nd approach, you would end up doing by hand many of the things you are using Dialogflow for in the first place.
Try making a conversation flow chart before designing the intents.
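Whichever way you split the intents, the code that consumes the Dialogflow result looks the same: you branch on the matched intent's display name and read its extracted parameters. Here is a minimal sketch with the google-cloud-dialogflow Python client; the project ID, intent names, and parameter names are illustrative assumptions only.

```python
# Minimal sketch: detect the intent for a user utterance and branch on the
# matched intent name. Intent and parameter names are illustrative only.
from google.cloud import dialogflow  # pip install google-cloud-dialogflow

def handle_user_text(project_id: str, session_id: str, text: str) -> str:
    session_client = dialogflow.SessionsClient()
    session = session_client.session_path(project_id, session_id)

    query_input = dialogflow.QueryInput(
        text=dialogflow.TextInput(text=text, language_code="en")
    )
    response = session_client.detect_intent(
        request={"session": session, "query_input": query_input}
    )
    result = response.query_result

    intent = result.intent.display_name   # e.g. "GetPrice" with the 1st approach
    params = dict(result.parameters)      # e.g. {"product": "red shoes"}

    if intent == "GetPrice":
        return f"Let me check the price of {params.get('product')}."
    if intent == "GetStoreContact":
        return "You can reach the store at ..."
    # Otherwise fall back to whatever static response was configured in the console.
    return result.fulfillment_text
```

With the 2nd approach, the display name only tells you "ProductQuestions", so all the branching above has to be rebuilt on top of the parameters, which is exactly the duplicated work mentioned above.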
Using Dialogflow:
1. Workflow:
Open the Actions Console.
Click on Add/import project.
Type in a Project name, like "actions-codelab". This name is for your own internal reference; later on, you can set an external name for your project.
Click Create Project.
Rather than pick a category, click Skip on the upper-right corner.
Click Build > Actions in the left nav.
Click Add your first Action.
Select at least one language for your Action, followed by Update. For this codelab, we recommend only selecting English.
On the Custom intent card, click Build. This will open the Dialogflow Console in another tab.
2. Test with Dialogflow:
Dialogflow generates and uploads an Action package to your actions project automatically when you test it. To test your Action:
Make sure the Web & App Activity, Device Information, and Voice & Audio Activity permissions are enabled on the Activity controls page for your Google account.
Click on Integrations in the Dialogflow console's left navigation.
Click on the Google Assistant card to bring up the integration screen and click TEST. Dialogflow uploads your Action package to Google's servers, so you can test the latest version in the simulator.
In the Actions console simulator, enter "talk to my test app" in the Input area to test your Action. If you have already specified an invocation name and saved your invocation information, you can start the conversation by saying "talk to <your invocation name>" instead.
Note: If you don't see a TEST button, you need to click on the AUTHORISE button first to give Dialogflow access to your Google account and Actions project.
For more information, refer to the link below:
https://codelabs.developers.google.com/codelabs/actions-1/index.html#0
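The simulator exercises the static responses defined in the Dialogflow console. If your Action needs dynamic answers, the usual next step is to enable fulfillment and point it at a webhook. The sketch below shows the general shape of a Dialogflow ES webhook using Flask; the route, intent name, and reply text are assumptions for illustration.

```python
# Minimal sketch of a Dialogflow ES fulfillment webhook. Dialogflow POSTs the
# matched intent and parameters; the response needs a fulfillmentText field.
from flask import Flask, request, jsonify  # pip install flask

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def webhook():
    body = request.get_json(silent=True) or {}
    query_result = body.get("queryResult", {})
    intent = query_result.get("intent", {}).get("displayName", "")
    params = query_result.get("parameters", {})

    if intent == "GetStoreHours":  # assumed intent name
        reply = "We are open from 9am to 6pm."
    else:
        reply = f"Sorry, I don't have an answer for '{query_result.get('queryText', '')}'."

    return jsonify({"fulfillmentText": reply})

if __name__ == "__main__":
    app.run(port=8080)
```

Expose the endpoint over HTTPS (for example through a tunnelling tool during development), enable the webhook under Fulfillment in the Dialogflow console, and the simulator will then return whatever the webhook sends back.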
I've been trying to figure out how to resolve this.
I have an app built with api.ai for Google Assistant on Google Home, and if I "type" my request to Google Assistant in test mode, it works. For example, "Hey Google, let me talk to Simonee" gets the reply "Sure, here is Simonee", and then the app kicks in.
However, if I speak it over the mic, no matter how I try, Google Assistant thinks I'm saying "cinnamon". Is there any way to register the name of the app on Google Home, or to tell it the pronunciation so it knows to kick off your app, so that the name of the app overrides a similar-sounding word?
Thanks.
If you're still testing, there isn't much that you can do. Adding a shortcut through the Google Home app might help.
However, when you submit your app for review, one of the things you need to do is specify the invocation name, which can be different than the name of the app itself. This is to deal with pronunciation issues, and is why they suggest you record the invocation name, rather than typing it in. For very complicated pronunciations, you may wish to also specify in the notes how it is pronounced and why - this will help them shape the recognizer to capture your name correctly.
I'm working for a client on their new website.
It's my last assignment with them, so another developer is going to continue my work after my final delivery.
To speed up the hand-over, my client has asked me to give the next developer access to the source code. But as my work is not finished yet (98% of the website is done), I risk the new developer stealing the code and the client refusing to pay me.
Is there any tool that lets me securely give the new developer access to view my code, ask questions, etc. without allowing him to steal it (at least not easily)?
Thank you.
No, this seems more like a legal/contract issue. If the other developer can see the source, he could always duplicate it.
In short, no: if they can see the source code, they can duplicate it.
However, if your client simply wants the new developer to have an idea of how your code is structured, you could send them UML diagrams of the Class Hierarchy and the flow of the site.
I hope this helps.
I am using oForms to create a contact-us form in Orchard.
But it is not working properly. I referred to "http://extendorchard.co.uk/tutorial-oforms" and did the same.
But now I am getting this error:
Serial number needed! Invalidated oForms install is a fully functional Orchard module which has No limitations. However for a small fee you can remove this text and the link on the front end, and HELP us to continue the development and support of this module. Please click here to get the serial number: http://extendorchard.co.uk/license-oforms.
You don't need to use oForms to create a contact form. This post gives you a step-by-step walkthrough on how to build a contact form using just the built-in features of Orchard.
https://web.archive.org/web/20170604194143/http://devdirective.com:80/post/160/how-to-create-custom-forms-in-orchard-cms-with-email-and-recaptcha