I'm a bot developer and I just migrated our QnA from preview to GA. But then I realised I could no longer find the answers with the questions I was asking previously. I did some research on my side and found that the scores from preview and GA are way too different for the same question. I have attached a few screenshots of this issue from both preview and GA.
1. Screenshot: This is the Question and Answer from QnA preview
Then I ask "explain braveheart". Even though I don't have exactly "Explain braveheart" in my question, QnA should understand what I'm trying to ask. In preview, since we can't see the score in the Test portal, I wrote a simple program to extract the score in code, like below:
2. Screenshot: QnA maker preview works great for this question in my code
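(The program itself isn't shown in the screenshots; the following is a minimal sketch of such a score-extraction call, assuming the generateAnswer REST endpoint. The host, knowledge base ID, and endpoint key below are placeholders.)

    import requests

    # Placeholders: replace with your own runtime host, KB id, and endpoint key.
    HOST = "https://<your-qna-app>.azurewebsites.net/qnamaker"
    KB_ID = "<knowledge-base-id>"
    ENDPOINT_KEY = "<endpoint-key>"

    def get_top_score(question: str) -> float:
        # Query the knowledge base and return the confidence score of the best match.
        resp = requests.post(
            f"{HOST}/knowledgebases/{KB_ID}/generateAnswer",
            headers={"Authorization": f"EndpointKey {ENDPOINT_KEY}"},
            json={"question": question, "top": 1},
        )
        resp.raise_for_status()
        answers = resp.json().get("answers", [])
        return answers[0]["score"] if answers else 0.0

    print(get_top_score("explain braveheart"))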
Then I migrated my QnA from preview to GA, and the score changed dramatically, like below:
3. Screenshot: The same question and answers in QnA Maker GA
Then, this is the result when I test the same question within QnA Maker GA:
4. Screenshot: The confidence score is much lower than in preview
I would like to know why QnA GA behaves differently from preview; it is quite important for us to manage our questions and answers. Moreover, if they perform differently, we can't simply migrate the QnA KB from preview to GA. This is a huge difference from what the MS QnA documentation describes.
Related
I'm developing an Azure Bot using Bot Framework Composer.
I've got my QnA knowledge base set up with a number of Context-Only questions.
These questions work perfectly when testing in the QnA portal.
I've tested with both my original QnA knowledge base & also the Bot generated knowledge bases.
However, when testing via the emulator or working with the bot in a live environment, it bypasses the Context-Only element entirely.
I need the Context-Only elements to work as we have a number of identical departments in different locations - so the same question will require a different answer depending on where our users are based.
Not sure what more info to provide, but if anyone has any insight I'd gratefully welcome it.
I reproduced the thread and tested it in the QnA emulator and web chat. In both cases it worked, and I got a response from the bot.
1. Go to https://language.azure.com/
2. Choose Custom question answering
3. Click on “Open custom question answering”
4. Click on “Create new project”
5. Click on “Edit knowledge base”
6. Click on “Add question pair”
7. Tested in Studio; it worked well.
8. Click on “Deploy knowledge base”
9. Click on “Create a bot”
10. Go to “Test in web chat” and test there. It worked for me.
You can also create synonyms alongside the context-based elements. With context in place, the model can handle short-form follow-up questions rather than requiring the complete pattern of the question.
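(Synonyms can also be managed programmatically. Below is a minimal sketch, assuming the classic QnA Maker v4.0 alterations endpoint; the resource name, authoring key, and example word pairs are placeholders. Custom question answering offers an equivalent synonyms API.)

    import requests

    # Placeholders: your QnA Maker resource endpoint and authoring key.
    ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
    AUTHORING_KEY = "<authoring-key>"

    # Each alterations group lists terms the service should treat as interchangeable.
    payload = {
        "wordAlterations": [
            {"alterations": ["dept", "department"]},
            {"alterations": ["hr", "human resources"]},
        ]
    }

    resp = requests.put(
        f"{ENDPOINT}/qnamaker/v4.0/alterations",
        headers={"Ocp-Apim-Subscription-Key": AUTHORING_KEY},
        json=payload,
    )
    resp.raise_for_status()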
Managed to resolve this by adding my bot to the Bot Framework Emulator and adding the QnA knowledge base as a service.
This allowed me to trace the QnA pairs, publish and train within the emulator, and drive the questions down the correct Context-Only route.
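(For anyone hitting the same problem from code: Context-Only pairs only match when the runtime request carries the conversation context. Below is a minimal sketch, assuming the QnA Maker generateAnswer endpoint; the host, IDs, key, and example questions are placeholders.)

    import requests

    HOST = "https://<your-qna-app>.azurewebsites.net/qnamaker"
    KB_ID = "<knowledge-base-id>"
    ENDPOINT_KEY = "<endpoint-key>"

    # Without the "context" block, Context-Only QnA pairs are skipped entirely.
    body = {
        "question": "What are the opening hours?",
        "top": 1,
        "context": {
            "previousQnAId": 42,  # hypothetical id of the previously matched QnA pair
            "previousUserQuery": "Tell me about the London office",
        },
    }

    resp = requests.post(
        f"{HOST}/knowledgebases/{KB_ID}/generateAnswer",
        headers={"Authorization": f"EndpointKey {ENDPOINT_KEY}"},
        json=body,
    )
    print(resp.json()["answers"][0]["answer"])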
I'm looking for a basic ability for the bot to learn from the questions and answers that agents provide, which can then be used as suggested replies the chatbot offers to users before connecting them to a live agent. I've read that QnA Maker does something similar; can I integrate it here?
1. Go to https://language.cognitive.azure.com/
2. Click on Create
3. Choose Custom Question Answering
4. After creating, go to that project
5. Click on Edit Knowledge Base and create question-and-answer pairs
6. After creating the knowledge base, deploy it by clicking on Deploy
7. Create a bot and attach the existing knowledge base to it before going to the Live Chat
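(To surface knowledge-base answers as suggestions from code, the deployed project can be queried over REST. Below is a minimal sketch, assuming the custom question answering query API; the endpoint, key, project name, and example question are placeholders.)

    import requests

    # Placeholders: your Language resource endpoint, key, and project name.
    ENDPOINT = "https://<your-language-resource>.cognitiveservices.azure.com"
    KEY = "<resource-key>"
    PROJECT = "<project-name>"

    def suggest_answers(question: str, top: int = 3):
        # Return up to `top` candidate answers with confidence scores,
        # suitable for showing as suggestions before handing off to a live agent.
        resp = requests.post(
            f"{ENDPOINT}/language/:query-knowledgebases",
            params={
                "projectName": PROJECT,
                "deploymentName": "production",
                "api-version": "2021-10-01",
            },
            headers={"Ocp-Apim-Subscription-Key": KEY},
            json={"question": question, "top": top},
        )
        resp.raise_for_status()
        return [(a["answer"], a["confidenceScore"]) for a in resp.json()["answers"]]

    for answer, score in suggest_answers("how do I reset my password?"):
        print(f"{score:.2f} {answer}")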
I have a QnA Maker knowledge base and a Dialogflow knowledge base, and I am trying to develop an FAQ bot. I need to know which is better to use: the Dialogflow knowledge base or QnA Maker. Can someone tell me which is better?
Both options are perfectly acceptable, though QnA Maker is the easier method; note that a QnA Maker bot is itself built on a knowledge base. The best solution is to use a knowledge base to create the FAQ bot.
Refer to the links below to create the knowledge base and the FAQ bot.
To create the knowledge base, see:
https://learn.microsoft.com/en-us/azure/cognitive-services/qnamaker/quickstarts/create-publish-knowledge-base
To create the bot, use the link below, but complete the procedure in the first link beforehand:
https://learn.microsoft.com/en-us/azure/cognitive-services/qnamaker/tutorials/create-faq-bot-with-azure-bot-service
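(Once both procedures are complete, wiring the published knowledge base into a bot looks roughly like the sketch below, which assumes the Bot Framework Python SDK, botbuilder-ai; the knowledge base ID, endpoint key, and host are placeholders taken from the KB's Publish page.)

    from botbuilder.core import ActivityHandler, TurnContext
    from botbuilder.ai.qna import QnAMaker, QnAMakerEndpoint

    class FaqBot(ActivityHandler):
        def __init__(self):
            # Placeholders: these values come from the "Publish" page of your KB.
            self.qna = QnAMaker(QnAMakerEndpoint(
                knowledge_base_id="<knowledge-base-id>",
                endpoint_key="<endpoint-key>",
                host="https://<your-qna-app>.azurewebsites.net/qnamaker",
            ))

        async def on_message_activity(self, turn_context: TurnContext):
            # Ask the knowledge base for the best answer to the user's message.
            results = await self.qna.get_answers(turn_context)
            if results:
                await turn_context.send_activity(results[0].answer)
            else:
                await turn_context.send_activity("Sorry, I couldn't find an answer.")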
From the Azure QnA Maker documentation:
The precise answering feature, introduced in QnA Maker managed (Preview), allows you to get a precise short answer from the best candidate answer passage in the knowledge base for any user query. This feature uses a deep learning model at runtime that understands the intent of the user query and detects the precise short answer from the answer passage, if a short answer is present as a fact in the answer passage.
This feature is on by default in the test pane, so that you can test the functionality specific to your scenario.
In the QnA Maker portal (qnamaker.ai), when you open the test pane, you will see a Display short answer option at the top, selected by default. When you enter a query in the test pane, you will see a short answer along with the answer passage, if a short answer is present in the answer passage (see this image for context).
Now, what I want to do is disable the displaying of the short answer from the actual chatbot itself (so that only the long answer is displayed), not just in the test pane in qnamaker.ai.
In knowledge base creation in qnamaker.ai, I created a QnA pair with "Hello" as the question and "Hello 123" as the answer. Saving, training, and publishing the knowledge base pushes the changes, and the endpoint becomes available for use in my bot.
Testing this new QnA pair from the Azure portal via the Test in Web Chat feature in my QnA web app bot displays some weird behaviour: supplying the bot with the phrase "Hello" returns a short answer "123" and a long answer "Hello 123", and the long answer seems to be formatted in some weird way. Supplying the bot with the phrase "123" returns only the full answer "Hello 123" (see here).
Displaying both short and long answers may be disruptive and confusing for the user. This seems to happen for almost all QnA pairs that I've tested. Is there some sort of configuration setting to disable this behaviour?
Managed to find a solution to this issue in a question on the Microsoft Tech Community. This is the reply that solves it:
If you navigate to the bot's app service in the Azure portal, go to the configuration settings, and add the key-value pair EnablePreciseAnswer:false, this will remove the precise answer, or short answer, from the response. You will need to save the change and restart the app service for the change to take effect.
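(If you prefer scripting it, the same setting can be applied with the Azure CLI, along the lines of: az webapp config appsettings set --resource-group <rg> --name <bot-app-service> --settings EnablePreciseAnswer=false, followed by az webapp restart --resource-group <rg> --name <bot-app-service>. The resource group and app service names here are placeholders.)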
I have deployed a qnamaker bot to Microsoft Teams but the dialog buttons don't show up anymore.
In the QnA Maker site, the buttons work.
In Teams, the buttons don't appear ;(
Does anyone have any ideas as to why this happens?
Is there anything I can do to solve this?
#Ceal clem Are you trying to use Suggested actions? If so, suggested actions are not supported in Teams. Could you please try using Cards to show the buttons?
From the documentation here: https://learn.microsoft.com/en-us/azure/cognitive-services/QnAMaker/how-to/multiturn-conversation#enable-multi-turn-during-testing-of-follow-up-prompts
You need to add this sample to enable prompts in the client.
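(The linked sample essentially renders the follow-up prompts as card buttons, which do work in Teams. Below is a rough sketch of that idea using the Bot Framework Python SDK; it assumes result is one entry from the generateAnswer "answers" array.)

    from botbuilder.core import CardFactory, MessageFactory
    from botbuilder.schema import ActionTypes, CardAction, HeroCard

    def answer_with_prompts(result: dict):
        # result: one entry from the QnA Maker generateAnswer "answers" array.
        prompts = result.get("context", {}).get("prompts", [])
        if not prompts:
            return MessageFactory.text(result["answer"])
        # Hero card buttons render in Teams, unlike suggested actions.
        card = HeroCard(
            text=result["answer"],
            buttons=[
                CardAction(
                    type=ActionTypes.im_back,
                    title=p["displayText"],
                    value=p["displayText"],
                )
                for p in prompts
            ],
        )
        return MessageFactory.attachment(CardFactory.hero_card(card))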