The LUIS pricing page (https://azure.microsoft.com/en-in/pricing/details/cognitive-services/language-understanding-intelligent-services/) lists, under Standard - Web/Container:

Text Requests: €1.265 per 1,000 transactions
Speech Requests: €4.639 per 1,000 transactions
What does "speech requests" refer to?
Does this mean it's possible to send audio to LUIS instead of text? I can find only API and SDK methods that accept text input. Where is the corresponding documentation?
Yes, it is possible. The documentation includes a tutorial that walks you through using the Speech SDK to develop a C# console application that derives intents from user utterances captured through your device's microphone.
LUIS integrates with the Speech Services to recognize intents from
speech. You don't need a Speech Services subscription, just LUIS.
The "Speech Requests" price covers the speech transcriptions you perform through LUIS.
The Speech SDK is also available in many languages and for many platforms.
I have an Azure Bot Service integrated with a LUIS application in which I have some intents.
I want to be able to map the same example utterances to different intents in LUIS.
But there is a limitation: the same example can't be labeled with multiple intents.
So I created another LUIS application with the same example under a different intent name.
My problem now is that I have to connect a single bot service to two different LUIS applications.
Can I do this in Azure Bot Service with Node.js?
Yes, it is absolutely possible to make use of multiple LUIS applications in one bot. The dispatch tool helps to determine which LUIS model best matches the user input. The dispatch tool does this by creating a single LUIS app to route user input to the correct model. Also, ensure that you do not have overlapping intents and make use of prediction data from LUIS to determine if your intents are overlapping.
The dispatch model is used when:
Your bot consists of multiple modules and you need help routing users' utterances to those modules and evaluating the bot integration.
You want to evaluate the quality of intent classification for a single LUIS model.
You want to create a text classification model from text files.
Refer to this sample, which is an NLP with Dispatch bot dealing with multiple LUIS models, and also this documentation, which provides more information about using multiple LUIS and QnA Maker models.
Hope this helps!!
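The routing that the dispatch app performs can be sketched as plain logic: the parent "dispatch" model scores the utterance, and its top intent names the child LUIS app that should handle it. The intent names, scores, and app names below are hypothetical, not output from a real dispatch app.

```python
# Minimal sketch of the dispatch pattern: the top-scoring intent of the
# parent dispatch model selects which child LUIS app handles the utterance.

def route(dispatch_prediction, child_apps):
    """Pick the child LUIS app named by the dispatch model's top intent."""
    top_intent = max(dispatch_prediction, key=dispatch_prediction.get)
    return child_apps.get(top_intent, "fallback")

# Scores shaped like a dispatch prediction (hypothetical values).
prediction = {"l_HomeAutomation": 0.92, "l_Weather": 0.05, "None": 0.03}
child_apps = {"l_HomeAutomation": "home-automation-app",
              "l_Weather": "weather-app"}

print(route(prediction, child_apps))  # -> home-automation-app
```

Anything the dispatch model cannot attribute to a child app falls through to the `"fallback"` branch, which is where you would put your None-intent handling.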
I have a fully mature agent in DialogFlow with utterances, intents, and entities. Is there a way to quickly bring this information over into a new app in LUIS? I really don't want to re-enter it. I know DialogFlow allows a zip export. Is this format recognized by LUIS?
This format is not recognized by LUIS, but both DialogFlow and LUIS have APIs: you can use them to extract the utterances from DialogFlow and then POST them into LUIS.
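As a rough sketch of the conversion step, the snippet below reshapes one DialogFlow-style intent into the list-of-labeled-utterances shape that LUIS accepts for batch labeling. The DialogFlow field names ("userSays", "data") follow the v1 zip export and may differ in your export version; treat them as assumptions to check against your own files.

```python
import json

# Hedged sketch: flatten a DialogFlow-style intent export into LUIS-style
# labeled utterances ({"text", "intentName", "entityLabels"}). Entity
# labels are left empty here; mapping DialogFlow entities would need the
# character offsets LUIS expects.

def dialogflow_to_luis(intent_name, dialogflow_intent):
    utterances = []
    for phrase in dialogflow_intent.get("userSays", []):
        text = "".join(part["text"] for part in phrase.get("data", []))
        utterances.append({"text": text,
                           "intentName": intent_name,
                           "entityLabels": []})
    return utterances

df_intent = {"userSays": [{"data": [{"text": "book a flight to "},
                                    {"text": "Paris"}]}]}
print(json.dumps(dialogflow_to_luis("BookFlight", df_intent), indent=2))
```

The resulting list is what you would POST to the LUIS authoring API's batch-labeling endpoint, one chunk at a time.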
I am developing an Azure bot and intend to link it to the Cortana channel, but I am not sure whether speech-to-text needs to be one of the services used to create that bot, or whether the Cortana client handles the communication between Cortana and the bot as text.
Responding via speech is completely under your control. The entire Cortana experience can be driven via text entry in Windows 10 and the mobile app. However, your skill may not pass certification if published, because of screen-less devices like the Invoke and the best practice that a request triggered by voice should be answered with voice. You can pull DeviceInfo and fail gracefully if there is no display.
I have developed a bot using the Microsoft Azure Bot Framework, and now I am trying to see whether there is a way to view the utterances that were missed, i.e., not mapped to an intent. I looked around but couldn't find anything related to this. Is it possible to see a log of missed utterances?
There is no built-in capability in the Bot Framework or LUIS for this. You will have to log the utterances that go through the None intent somewhere, or use an analytics service (like Application Insights) to get the information you are looking for.
Some initial information at https://learn.microsoft.com/en-us/bot-framework/portal-analytics-overview
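The logging approach can be sketched in a few lines: whenever a prediction comes back with None as the top intent (or with a low score), record the utterance before the bot responds. The prediction fields and the 0.5 threshold below are illustrative, not part of any LUIS contract.

```python
import logging

logging.basicConfig(level=logging.INFO)
missed_log = logging.getLogger("missed_utterances")

# Sketch of the answer's suggestion: record any utterance whose top-scoring
# intent is "None", or whose top score falls below a confidence threshold,
# so the misses can be reviewed later.

def record_if_missed(utterance, top_intent, score, threshold=0.5):
    if top_intent == "None" or score < threshold:
        missed_log.info("missed: %r (intent=%s, score=%.2f)",
                        utterance, top_intent, score)
        return True
    return False

record_if_missed("do you sell gift cards", "None", 0.81)  # logged
record_if_missed("book a flight", "BookFlight", 0.93)     # not logged
```

In a real bot you would call this from your intent-handling middleware and point the logger at a durable sink (a table, a file, or Application Insights) instead of the console.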
LUIS provides a feature called Active Learning where it tracks utterances it is relatively unsure of, and places them in a list called Suggested Utterances. More information can be found here: https://learn.microsoft.com/en-us/azure/cognitive-services/luis/label-suggested-utterances
In the active learning process, LUIS examines all the utterances that
have been sent to it, and calls to your attention the ones that it
would like you to label. LUIS identifies the utterances that it is
relatively unsure of and asks you to label them. Suggested utterances
are the utterances that your LUIS app suggests for labeling.
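The idea behind those suggestions can be sketched as a simple uncertainty check: flag an utterance for review when the model's top score is low, or when the top two intents score nearly the same. The thresholds below are illustrative, not LUIS's actual internals.

```python
# Sketch of the active-learning selection idea: an utterance "needs review"
# if the winning intent's score is weak, or if the runner-up is close
# enough that the classification is ambiguous.

def needs_review(scores, min_top=0.6, min_margin=0.2):
    ranked = sorted(scores.values(), reverse=True)
    top = ranked[0]
    margin = top - ranked[1] if len(ranked) > 1 else top
    return top < min_top or margin < min_margin

print(needs_review({"BookFlight": 0.45, "CancelFlight": 0.40}))  # True
print(needs_review({"BookFlight": 0.95, "CancelFlight": 0.03}))  # False
```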
I have an MS Bot client that users interact with. The bot uses LUIS models for NLP. For any queries tagged with the None intent, we should involve a human agent immediately. What is the best way to transfer the conversation from the bot client to a human agent?
I think the None intent is not the right place to divert a chat to a human assistant. You should use text analytics to find the sentiment of the user's replies, and if one scores negative, redirect the chat to a human assistant.
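The sentiment-threshold handoff described above can be sketched as follows. The `get_sentiment` stub is a stand-in for a call to the Text Analytics sentiment API, which returns a score between 0 (negative) and 1 (positive); the word list and the 0.3 threshold are illustrative assumptions.

```python
# Sketch of sentiment-based handoff: score each user reply; when the
# sentiment drops below a threshold, escalate to a human agent.

def get_sentiment(text):
    # Stand-in scorer: a real bot would call the Text Analytics service
    # and use the 0..1 sentiment score it returns.
    negative_words = {"angry", "useless", "terrible", "frustrated"}
    hits = sum(word in negative_words for word in text.lower().split())
    return max(0.0, 0.5 - 0.2 * hits)

def next_step(user_reply, threshold=0.3):
    if get_sentiment(user_reply) < threshold:
        return "handoff_to_human"
    return "continue_with_bot"

print(next_step("this bot is useless and I am frustrated"))  # handoff_to_human
print(next_step("please book me a flight"))                  # continue_with_bot
```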
The Microsoft Bot Framework provides a way to transfer from bot to human when needed, or when the bot is unable to understand.
This changes depending on the chatbot platform; in Dialogflow, for example, JSON needs to be used: in that JSON you can specify the human agent ID and set up something like a "Talk to Human" button to perform the transfer.