I have created a basic Azure 'Custom Question Answering' bot (https://learn.microsoft.com/en-us/azure/cognitive-services/language-service/question-answering/quickstart/sdk?pivots=studio). I created the bot through Language Studio:
(Screenshot: bot creation through Language Studio)
I want to add authentication so that only users who are part of my Azure AD tenant are able to interact with the bot. I've tried following the tutorials listed below:
https://learn.microsoft.com/en-us/azure/bot-service/bot-builder-authentication?view=azure-bot-service-4.0&tabs=singletenant%2Caadv2%2Ccsharp
https://learn.microsoft.com/en-us/microsoftteams/platform/bots/how-to/authentication/add-authentication?tabs=dotnet%2Cdotnet-sample
I've not been able to follow these tutorials, as they assume the bot is built from either of the following code bases:
https://github.com/microsoft/BotBuilder-Samples/tree/master/samples/csharp_dotnetcore/46.teams-auth
https://github.com/Microsoft/BotBuilder-Samples/tree/main/samples/csharp_dotnetcore/18.bot-authentication
Whereas the bot that I deployed through Language Studio looks like it is built from the following framework:
https://github.com/microsoft/BotBuilder-Samples/tree/main/samples/csharp_dotnetcore/12.customQABot
How can I add authentication to the custom question answering bot I deployed through Azure Language Studio (Cognitive Services)? Currently anyone would be able to interact with my bot.
Thanks
To add authentication, first install the Question Answering client library for Python:
pip install azure-ai-language-questionanswering
Authenticate the client
Get an API key for your Cognitive Services resource:
az cognitiveservices account keys list --resource-group <resource-group-name> --name <resource-name>
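That command prints JSON containing a key1 and a key2 field. As an illustration, here is a small helper to pull the primary key out of that output (the function name is my own, not part of any SDK):

```python
import json

def extract_primary_key(az_output: str) -> str:
    """Pull "key1" out of the JSON printed by
    `az cognitiveservices account keys list`."""
    return json.loads(az_output)["key1"]

# The CLI prints JSON of the form {"key1": "...", "key2": "..."}:
print(extract_primary_key('{"key1": "abc123", "key2": "def456"}'))  # abc123
```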
Instantiate a QuestionAnsweringClient:
from azure.core.credentials import AzureKeyCredential
from azure.ai.language.questionanswering import QuestionAnsweringClient

# Replace {myaccount} with your resource name and {api-key} with the key
# retrieved above.
endpoint = "https://{myaccount}.api.cognitive.microsoft.com"
credential = AzureKeyCredential("{api-key}")
client = QuestionAnsweringClient(endpoint, credential)
For further reference, check the links below:
https://pypi.org/project/azure-ai-language-questionanswering/
https://learn.microsoft.com/en-us/azure/cognitive-services/authentication?tabs=powershell
Since these are all samples, they're all designed to do one thing for simplicity's sake. What you need to do is actually straightforward (in concept, if not always in practice). You should look at the code and docs of the auth samples and integrate the authentication sections with your CQA bot. Alternatively, you could integrate the CQA sections with an auth bot. Regardless of which base you start with, the goal is the same. Combine the two.
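Conceptually, whichever base you start from, the combined bot ends up checking that the caller's AAD token belongs to your tenant before answering. Below is a minimal, illustrative sketch of just that tenant check (the function names are my own, and this only decodes the token payload — production code must also verify the token's signature against AAD's published keys, which the Bot Framework auth samples handle for you):

```python
import base64
import json

def payload_of(jwt_token: str) -> dict:
    """Decode the (unverified) payload segment of a JWT."""
    segment = jwt_token.split(".")[1]
    segment += "=" * (-len(segment) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(segment))

def is_allowed_tenant(jwt_token: str, tenant_id: str) -> bool:
    """True if the token's 'tid' claim matches our AAD tenant."""
    return payload_of(jwt_token).get("tid") == tenant_id
```

In the combined bot, a check like this (on top of full token validation) would gate the call into the CQA answering logic, so users outside your tenant never reach it.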
I'm working on a project to integrate a healthcare assistant bot into a mobile application built with React Native. I saw that Microsoft offers a Health Bot, which fits my project, so I would like to use it. I created an account on Azure and created my bot, but I can't figure out how to configure it and integrate it into my project.
In the Microsoft documentation I see that you have to configure Direct Line to use a Microsoft bot, but I can't enable this on mine; I don't have the option. Moreover, it's treated as a SaaS offering rather than as a bot application on Azure, so I don't have the same options and I don't really understand why (I tried with the CLI without success, so I suspect Direct Line can't be configured on a Health Bot).
Then I found this https://github.com/Microsoft/HealthBotContainerSample/tree/live_agent_handoff
The README.md indicates that we must deploy the bot, which I did, and I also set the required variables. But I don't know what to do next to integrate it into my application. I also wonder how to account for the scenarios created in Azure.
If someone could enlighten me on how to proceed with this integration, I would be grateful. I also saw there is a module (react-gifted-chat) for building chat bots, but every tutorial I find uses Direct Line, so I wonder whether that's possible or whether I have to go through a WebView.
Thank you in advance!
Microsoft has a doc, titled “Embed a health bot instance in your application”, that should answer your questions. You can find the doc here.
It includes:
GitHub samples
Code examples
Steps for securing communication
Information on how to set up Direct Line
A link and steps for setting up WebChat as an iframe or web page element (in a div, for example)
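On the "securing communication" point: Direct Line lets your server exchange the channel secret for a short-lived token, so the secret itself never reaches the client app. As a rough sketch (the endpoint is the standard Direct Line 3.0 token API; the helper name is my own, and actually sending the request requires a real secret):

```python
import urllib.request

DIRECT_LINE_TOKEN_URL = "https://directline.botframework.com/v3/directline/tokens/generate"

def build_token_request(secret: str) -> urllib.request.Request:
    """Build the POST that trades a Direct Line secret for a user token."""
    return urllib.request.Request(
        DIRECT_LINE_TOKEN_URL,
        method="POST",
        headers={"Authorization": f"Bearer {secret}"},
    )

# urllib.request.urlopen(build_token_request("YOUR_SECRET")) would return
# JSON containing a "token" field to hand to the embedded WebChat/iframe.
```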
Hope this helps!
So I've been reviewing the DialogFlow documentation and wondering if it's possible to use the API fully programmatically and create agents via the API as well. A sample use case would be a user on my platform being able to create their own bot. I'm not able to find this functionality listed in their docs and wanted to double-check with the community here.
You can now create and update agents with the API. See the REST and RPC documentation.
You can't create an agent through the API, but once it's been created in the UI, it can be edited through the API. Users will need to grant your service account the Dialogflow editor IAM role and then tell you their project ID.
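As a sketch of what the programmatic route looks like: if I recall the Dialogflow ES v2 REST API correctly, agents are created/updated via the `projects.setAgent` method (a POST to `/v2/projects/{project}/agent`); the field names below come from the v2 Agent resource, and both they and the endpoint should be verified against the current REST reference:

```python
import json
import urllib.request

def build_set_agent_request(project_id: str, access_token: str,
                            display_name: str) -> urllib.request.Request:
    """Build the POST that creates/updates the agent on a GCP project
    (Dialogflow ES v2 `projects.setAgent`)."""
    body = {
        "displayName": display_name,
        "defaultLanguageCode": "en",
        "timeZone": "America/New_York",
    }
    return urllib.request.Request(
        f"https://dialogflow.googleapis.com/v2/projects/{project_id}/agent",
        data=json.dumps(body).encode(),
        method="POST",
        headers={
            "Authorization": f"Bearer {access_token}",
            "Content-Type": "application/json",
        },
    )
```

As the answers note, the caller's service account still needs the appropriate IAM role on the target project before such a request would succeed.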
I'm trying to create a bot in a SharePoint site. To do that I need to retrieve the user info using the implicit flow authentication method; however, I haven't found any example using .NET — all the examples are in Node.js.
Can someone show me how to do that please ?
There are a number of .NET examples online:
https://stephaneeyskens.wordpress.com/2017/01/04/microsoft-bot-framework-transparent-authentication-with-the-webchat-control/
https://github.com/Ellerbach/SharePointBot
https://www.rickvanrousselt.com/contextual-authentication-webchat-control/
https://www.linkedin.com/pulse/chat-bot-luis-sharepoint-part-one-akshay-deshmukh/
However, they are a little dated. Just this week, Microsoft released a new integrated OAuth feature for bots.
One of the supported service providers is SharePoint.
I am trying to write an app that, upon receiving the credentials of an Azure user, will be able to show him various pieces of information using the Azure billing apis.
However, the GitHub sample below, which shows how to use one of those APIs, lists a series of steps that must be carried out in the Azure portal in order for things to work, and these steps need to be done by the user himself. Specifically, step 1 covers registering an app and configuring it so that it has access and permissions to use the APIs.
Only after those steps, will I be able to access the billing apis and retrieve his information.
Seeing as how I don't want the user to have to do anything after he gives me his username and password, is there an API or some other automated way with which I can register my app to view his account?
https://github.com/Azure-Samples/billing-dotnet-usage-api
Ideally, I would want some sort of imaginary code that maybe looks like:
someObj obj = someAPI.loginToAzureWithCredentials("123456", "someUserName")
obj.registerApplication();
Obviously the "code" is very lacking in detail, but it's just to emphasize what I'm searching for.
I own a LUIS.ai application which I use for my chat bot.
I want my QA team to be able to train my LUIS.ai application, so that my bot becomes smarter.
How do I grant permission for other users to train my app?
There is now an option to set other users as "Collaborators", and they can then train and modify the LUIS.ai app as needed.
The accounts can be independent and don't need to be from same Azure Active Directory or otherwise linked.
Basic description by Microsoft is here.
You add a collaborator at
https://www.luis.ai/applications/yourAppId/0.1/settings
and the collaborator will then see the app in their own UI.
I don't think that it's currently possible.
The easiest workaround I can think of is to create a shared account, export your LUIS application from your account, and import it into the new shared account. Keep in mind that the keys of your LUIS app will likely change, so you will likely need to update your bot too.
Also, you can see whether the Cognitive Services API is suitable for your scenario. If it is, there are a bunch of operations available there.
You can use the Cognitive Services API for this.
Link - https://dev.projectoxford.ai/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c45
I haven't actually used it, but you can have a look. Hope this helps.