Is there any Azure-based service similar to IBM Watson Knowledge Studio?

I tried to find an Azure service similar to IBM Watson Knowledge Studio, but I failed. I'm looking for something I can train to analyze text and retrieve entities, relations between entities, and entity-related sentiments.
Do you know if there is anything in Azure I could use to do that?

Yes, there is a remote similarity between Azure Language Understanding (LUIS) and IBM Watson Knowledge Studio (WKS), but they are not substitutes.
The notable differences:
LUIS is for building chatbots - conversations with utterance data. WKS is for general unstructured text, usually much larger in volume than utterances. In this respect, LUIS is competing with IBM Conversation service, not with WKS and the IBM Watson services that run the WKS custom models - Watson Natural Language Understanding and Watson Discovery.
Because LUIS is built for processing utterances, it has much lower limits compared to WKS. For example, LUIS limits the input text to 500 characters, while WKS processes text of up to 40,000 characters. LUIS also limits Simple entities to 30, which may be OK for processing targeted utterances, but not for building a high-quality model for large documents and complex domains.
LUIS supports only customization of Entity Type mentions (in various forms, like Simple, Hierarchical, Composite, RegEx, List, etc. - very similar to the WKS Entity types). WKS (and the runtime services that use WKS custom models), on the other hand, also supports Entity Relations - an important feature that lets you extract insights from a client-specific corpus that you cannot get with Entity mentions alone.
LUIS supports only a fraction of the languages that WKS supports. And the LUIS language support is partial - see https://learn.microsoft.com/en-us/azure/cognitive-services/luis/luis-supported-languages
LUIS, similarly to the IBM Conversation service, is a runtime NLP service that allows customization in its tooling. WKS, on the other hand, is a standalone customization SaaS offering, specifically designed to organize a team of domain subject matter experts (SMEs) and cognitive solution developers so they can transfer the SMEs' domain knowledge into a custom model that is then deployed to and used by IBM Watson runtime services, like Natural Language Understanding and Discovery. In other words, while LUIS and IBM Conversation provide tooling for customizing the solution directly, WKS provides a separate environment with a built-in methodology for managing customization projects and for annotation skill building.
LUIS, to my understanding, is offered as a multi-tenant public service. WKS is offered in both multi-tenant and isolated configurations. In that respect, WKS is suitable not only for the general public, but also for projects with sensitive client data.
In conclusion, there's no WKS equivalent (substitute) that I'm aware of. LUIS may be considered as the Azure alternative to IBM Conversation service, if your solution is built in Azure, but LUIS is not a substitute for IBM Watson Knowledge Studio.
So it's very important to consider your use case (application domain) when choosing which platform to build your solution on.
Hope this helps.

Did you look at the Azure AI gallery? One common approach is to customise the solutions there for your particular requirements. Here's a search of all text-related items, which you can for example refine to be Microsoft content only.
I'm not aware of a single service that maps directly; the Text Analytics API for example just does language and phrase detection and sentiment analysis.
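To give a concrete sense of what that covers, here is a minimal sketch of calling the Text Analytics sentiment endpoint from Python - the region, API version, and key below are placeholder assumptions you would swap for your own:

```python
import requests

# Placeholders - substitute your own Cognitive Services region and key.
ENDPOINT = "https://westus.api.cognitive.microsoft.com/text/analytics/v2.0/sentiment"
SUBSCRIPTION_KEY = "your-text-analytics-key"

def get_sentiment(texts):
    """Score a list of strings; the service returns 0 (negative) to 1 (positive)."""
    documents = {
        "documents": [
            {"id": str(i), "language": "en", "text": t}
            for i, t in enumerate(texts, start=1)
        ]
    }
    response = requests.post(
        ENDPOINT,
        headers={"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY},
        json=documents,
    )
    response.raise_for_status()
    return {doc["id"]: doc["score"] for doc in response.json()["documents"]}

print(get_sentiment(["The new dashboard is great", "Checkout keeps failing"]))
```

Note that the entities it recognises are fixed; there is no way to teach it your own entity types, which is the gap relative to WKS.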

Have a look at Luis.ai... it should do what you need.

Related

How to add multiple QnA bots to Enterprise Assistant?

Our teams create individual bots for themselves and now we want to integrate them into the Enterprise Assistant.
What I need help with is Multi Bot Orchestration.
Has anyone been able to do this efficiently?
Basically, the user asks a question in the Enterprise Assistant and then the bot gets the answer from the respective child/skill QnA bot.
I am able to add skills like calendar, people, SAP, etc. but dealing with QnA bots is proving to be an impossible challenge.
To create a multi-bot orchestration, we need to include multiple levels of dispatchers, plus pipelines for exception handling and fallbacks. It needs an expert-system-style agent that can handle multiple NLP models. A chatbot is designed around NLP modelling, and the NLP implementation depends on the number of operations the enterprise application must handle. The following are the different NLP models required for a complete enterprise application:
General QnA
Billing information
Calendar
Leave Planning
Employee storage disk
On demand service to the clients
Fault tolerance
Saving the QnA pattern of chat between bot and human
Calculating the accuracy
Taking the user's own request and performing NLP
There are many such requirements to implement in a bot. All the models are trained and connected to the dispatcher.
Bot orchestration itself consists of five different pieces:
Ingress Pipeline
Egress Pipeline
Dispatcher
Exception pipeline
Fallback pipeline
Ingress Pipeline:
The ingress pipeline detects and handles a few things, like:
Language detect and translate
Entity recognition
User input redaction
Egress Pipeline:
The following are the requirements and operations in the egress pipeline:
3rd party tools for analytics
Managed responses (default responses)
Data service redaction
Dispatcher:
The dispatcher has two different parts:
Policies and rules,
NLU (Natural Language Understanding) strategy.
Policies and Rules
The organization's policies and rules must be trained into the model, because if a user asks for the financial status of the organization, the bot must understand which policies and rules it has to follow when answering the question.
Natural Language Understanding
Natural Language Understanding (NLU) is the computer's ability to understand what we say: extracting the context of what the user types in his or her own words.
The dispatcher communicates with both the ingress and egress pipelines, as it is the parent handler for all operations and model-training results.
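To make the routing idea concrete, here is a minimal sketch of such a dispatcher in Python; all the class and skill names are hypothetical, and the keyword scoring stands in for whatever NLU strategy each child bot really uses:

```python
class Skill:
    """One child bot: reports a confidence score and handles the utterance."""

    def __init__(self, name, keywords):
        self.name = name
        self.keywords = set(keywords)

    def score(self, text):
        # Crude confidence: fraction of this skill's keywords found in the text.
        words = set(text.lower().split())
        return len(words & self.keywords) / len(self.keywords)

    def handle(self, text):
        return f"[{self.name}] handling: {text}"


class Dispatcher:
    """Routes an utterance to the most confident skill, else to the fallback."""

    def __init__(self, skills, threshold=0.3):
        self.skills = skills
        self.threshold = threshold

    def dispatch(self, text):
        scored = [(skill.score(text), skill) for skill in self.skills]
        best_score, best_skill = max(scored, key=lambda pair: pair[0])
        if best_score < self.threshold:
            return "[fallback] Sorry, I didn't get that - routing to general QnA."
        return best_skill.handle(text)


dispatcher = Dispatcher([
    Skill("calendar", ["meeting", "schedule", "calendar"]),
    Skill("leave", ["leave", "vacation", "holiday"]),
])
print(dispatcher.dispatch("schedule a meeting for tomorrow"))  # -> calendar skill
print(dispatcher.dispatch("what is the capital of France"))    # -> fallback
```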
To address exceptions and rollbacks, we need exception handlers that are themselves expert-system agents; each agent has some expertise in handling a class of exceptions.
Example:
Question: Give the details of pithon projects of this monthh.
Handled Exceptions: In the question above, "python" is spelt "pithon" and "month" is spelt "monthh". The NLP and NLU layers have to recognise these clauses and handle the misspelt words in the exception pipeline.
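As an illustration of that exception handler, here is a small sketch using only Python's standard library; the domain vocabulary is made up, and a real system would lean on its trained NLU model rather than plain string similarity:

```python
import difflib

# Hypothetical domain vocabulary the exception handler corrects against.
VOCABULARY = ["python", "java", "month", "week", "projects", "details"]

def correct(text, cutoff=0.8):
    """Replace each word with its closest vocabulary match, if one is close enough."""
    corrected = []
    for word in text.lower().split():
        matches = difflib.get_close_matches(word, VOCABULARY, n=1, cutoff=cutoff)
        corrected.append(matches[0] if matches else word)
    return " ".join(corrected)

print(correct("Give the details of pithon projects of this monthh"))
# -> "give the details of python projects of this month"
```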
The following link explains the architecture of multi-bot orchestration in detail:
https://servisbot.com/multi-bot-orchestration-architecture/

Custom entity recognition using Azure Text Analytics API?

Is it possible to define custom/special entities to be used for entity recognition within the Azure Text Analytics API?
NER (Named Entity Recognition) discovers a wide range of entities, but for our purposes we're focusing on some model-specific entities (e.g. brand and product names) that we need to relate to the overall sentiment. General NER might not be enough, since we're looking for very specific appreciation/criticism terms during topic generation.
The theme has already been raised in different flavors, with no answers so far:
in 2016 it seemed to be an "upcoming" feature: Customizing the Named Entity Recogntition model in Azure ML
in 2018 someone was searching for a much more specialized version capable of locating the spatial positions of custom entities within documents: Documentation / Examples for Custom Entity Detection (Azure NLP / Text Analytics)

Decision path for Azure Service Fabric Programming Models

Background
We are looking at porting a 'monolithic' 3-tier web app to a microservices architecture. The web app displays listings to a consumer (think Craigslist).
The backend consists of a REST API that calls into a SQL DB and returns JSON for a SPA app to build a UI (there's also a mobile app). Data is written to the SQL DB via background services (ftp + worker roles). There's also some pages that allow writes by the user.
Information required:
I'm trying to figure out how (if at all) Azure Service Fabric would be a good fit for a microservices architecture in my scenario. I know the pros/cons of microservices vs monolith, but I'm trying to figure out how the various microservice programming models apply to our current architecture.
Questions
Is Azure Service Fabric a good fit for this? If not, any other recommendations? Currently I'm leaning towards a bunch of OWIN-based .NET websites, split up by area/service, each hosted on its own machine and tied together by an API gateway.
Which Service Fabric programming model would I go for? Stateless services with their own backing DB? I can't see how the Stateful or Actor model would help here.
If I went with Stateful services/Actors, how would I go about updating data as part of a maintenance/ad-hoc admin request? Traditionally we would simply log in to the DB and update the data, and the API would return the new data - but if it's persisted in-memory/across nodes in a cluster, how would we update it? Would I have to expose all of this via methods on the service? Similarly, how would I import my existing SQL data into a stateful service?
For the Stateful services/Actor model, how can I 'see' the data visually, with an object explorer/UI? Our data is our gold, and I'm concerned about the lack of control/visibility of it in the Reliable Services model.
Basically, is there some documentation on the decision path towards which programming model to go for? I could model a "listing" as an Actor and have millions of those - sure, but I could also have a Stateful service that stores the listing locally, or a Stateless service that fetches it from the DB. How does one decide which is the best approach for a given use case?
Thanks.
What is it about your current setup that isn't meeting your requirements? What do you hope to gain from a more complex architecture?
Microservices aren't a magic bullet. You mainly get four benefits:
You can scale and distribute pieces of your overall system independently. Service Fabric has very sophisticated tools and advanced capabilities for this.
You can deploy and upgrade pieces of your overall system independently. Service Fabric again has advanced capabilities for this.
You can have a polyglot system - each service can be written in a different language/platform.
You can use conflicting dependencies - each service can have its own set of dependencies, like different framework versions.
All of this comes at a cost, introducing complexity and new ways your system can fail. For example: your fast, compile-time-checked, in-proc method calls now become slow (by comparison) failure-prone network calls. And these costs are not specific to Service Fabric, by the way - this is just what happens when you go from in-proc method calls to cross-machine I/O, no matter what platform you use. The decision path here is a pro/con list specific to your application and your requirements.
To answer your Service Fabric questions specifically:
Which programming model do you go for? Start with stateless services with ASP.NET Core. It's going to be the simplest translation of your current architecture that doesn't require mucking around with your data layer.
Stateful has a lot of great uses, but it's not necessarily a replacement for your RDBMS. A good place to start is hot data that can be stored in simple key-value pairs, is accessed frequently and needs to be low-latency (you get local reads!), and doesn't need to be data-mined. Some examples include user session state, cache data, and a "snapshot" of the most recent items in a data stream (like the most recent stock quote in a stream of stock quotes).
Currently the only way to see or query your data is programmatically directly against the Reliable Collection APIs. There is no viewer or "management studio" tool. You have to write (and secure) an API in each service that can display and query data.
Finally, the actor model is a very niche model. It serves specific purposes but if you just treat it as a data store it will not work for you. Like in your example, a listing per actor probably wouldn't work because you can't query across that list, or even have multiple users reading the same listing simultaneously.

How to implement a BOT engine like WIT.AI for an on-premise solution?

I want to build a chatbot for a customer service application. I tried SaaS services like Wit.Ai, Motion.Ai, Api.Ai, LUIS.ai, etc. These cognitive services find the "intent" and "entities" when trained with a typical interaction model.
I need to build a chatbot for an on-premise solution, without using any of these SaaS services.
E.g. a typical conversation would be as follows:
Can you book me a ticket?
Is my ticket booked?
What is the status of my booking BK02?
I want to cancel the booking BK02.
Book the tickets
The Stanford NLP toolkit looks promising, but there are licensing constraints, hence I started experimenting with OpenNLP. I assume there are two OpenNLP tasks involved:
Use 'Document Categorizer' to find out the intent
Use 'Named Entity Recognition' to find out entities
Once the context is identified, I will call my application APIs to build the response.
Is this the right approach?
How good is OpenNLP at parsing text?
Can I use Facebook's fastText library for intent identification?
Is there any other open-source library that could help in building the bot?
Will "SyntaxNet" be useful for my adventure?
I prefer to do this in Java, but I'm open to a Node or Python solution too.
PS - I am new to NLP.
Have a look at this. It describes itself as open-source language understanding for bots and a drop-in replacement for popular NLP tools like wit.ai, api.ai, or LUIS:
https://rasa.ai/
Have a look at my other answer for a plan of attack when using Luis.ai:
Creating an API for LUIS.AI or using .JSON files in order to train the bot for non-technical users
In short, use Luis.ai and set up some intents - start with one or two - and train it based on your domain. I am using ASP.NET to call the Cognitive Services API as outlined above, then customize the response via some jQuery... you could search a list of your rules in a JavaScript array when each intent or action is raised by the response from Luis.
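For reference, here is a rough sketch of querying a published LUIS app over REST from Python rather than ASP.NET - the region, app ID, and key are placeholders, and the response shape shown is that of the v2.0 endpoint:

```python
import requests

# Placeholders - substitute your own region, app ID, and subscription key.
REGION = "westus"
APP_ID = "your-luis-app-id"
KEY = "your-luis-subscription-key"

def get_top_intent(utterance):
    """Send an utterance to LUIS; return its top-scoring intent and any entities."""
    url = f"https://{REGION}.api.cognitive.microsoft.com/luis/v2.0/apps/{APP_ID}"
    response = requests.get(url, params={"subscription-key": KEY, "q": utterance})
    response.raise_for_status()
    result = response.json()
    return result["topScoringIntent"]["intent"], result.get("entities", [])

intent, entities = get_top_intent("book me a ticket for tomorrow")
print(intent, entities)
```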
If your bot is English-based, then I would use OpenNLP's sentence parser to dump the customer input into a database (I do this today). I then use the OpenNLP tokenizer and push the keywords (less the stop words) and parts of speech into a database table for keyword analysis. I have a custom sentiment model built for OpenNLP that tags each sentence with a positive, negative, or neutral sentiment... you can then use this to identify negative customer service feedback. To build your own sentiment model, have a look at SentiWordNet and download their domain-agnostic data file to build and train an OpenNLP model, or have a look at this Node version:
https://www.npmjs.com/package/sentiword
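For a rough sense of that pipeline outside Java, here is a sketch of the same flow (sentence split, tokenize, drop stop words, POS-tag) using Python's NLTK as a stand-in for OpenNLP:

```python
import nltk
from nltk.corpus import stopwords

# One-time downloads of the models/corpora NLTK needs for this flow.
for pkg in ["punkt", "averaged_perceptron_tagger", "stopwords"]:
    nltk.download(pkg, quiet=True)

STOP_WORDS = set(stopwords.words("english"))

def keywords_with_pos(feedback):
    """Sentence-split, tokenize, drop stop words, and POS-tag customer input."""
    rows = []
    for sentence in nltk.sent_tokenize(feedback):
        tokens = [t for t in nltk.word_tokenize(sentence) if t.isalpha()]
        keywords = [t for t in tokens if t.lower() not in STOP_WORDS]
        rows.extend(nltk.pos_tag(keywords))
    return rows  # e.g. [("delivery", "NN"), ("late", "JJ"), ...]

print(keywords_with_pos("The delivery was late. Support never replied to my email."))
```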
Hope that helps.
I'd definitely recommend Rasa; it's great for your use case, works on-premise easily, handles intents and entities for you, and on top of that it has a friendly community too.
Check out my repo for an example of how to build a chatbot with Rasa that interacts with a simple database: https://github.com/nmstoker/lockebot
I tried Rasa, but one glitch I found was its inability to answer unmatched/untrained user texts.
Now I'm using ChatterBot and I'm totally in love with it.
Use "ChatterBot", and host it locally using "flask-chatterbot-master".
Links:
ChatterBot Installation: https://chatterbot.readthedocs.io/en/stable/setup.html
Host Locally using - flask-chatterbot-master: https://github.com/chamkank/flask-chatterbot
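A minimal sketch of that setup (the training pairs here are made up, and this assumes the ChatterBot 1.x API):

```python
from chatterbot import ChatBot
from chatterbot.trainers import ListTrainer

bot = ChatBot("SupportBot")  # uses a local SQLite store by default

# Train on a small, made-up list of alternating question/answer pairs.
trainer = ListTrainer(bot)
trainer.train([
    "Can you book me a ticket?",
    "Sure - which date would you like to travel?",
    "What is the status of my booking?",
    "Please share your booking reference and I will check.",
])

# ChatterBot returns its closest learned match, even for unseen phrasing.
print(bot.get_response("Book me a ticket"))
```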
Cheers,
Ratnakar
With the help of the Rasa and Botkit frameworks, we can build an on-premise chatbot and NLP engine for any channel. Please follow this link for end-to-end steps on building one - an awesome blog that helped me create one for my office:
https://creospiders.blogspot.com/2018/03/complete-on-premise-and-fully.html
First of all, any chatbot is a program that runs alongside an NLP engine; it's the NLP that brings knowledge to the chatbot, and NLP in turn rests on machine-learning techniques.
There are a few reasons why on-premise chatbots are less common:
We need to build the infrastructure
We need to train the model often
But using cloud-based NLP may not provide the data privacy and security you need, and the flexibility to include your own business logic is very limited.
Altogether, going on-premise or to the cloud depends on your needs and use case.
However, please refer to these links for end-to-end knowledge on building an on-premise, fully customisable chatbot in a few easy steps:
Complete On-Premise and Fully Customisable Chat Bot - Part 1 - Overview
Complete On-Premise and Fully Customisable Chat Bot - Part 2 - Agent Building Using Botkit
Complete On-Premise and Fully Customisable Chat Bot - Part 3 - Communicating to the Agent that has been built
Complete On-Premise and Fully Customisable Chat Bot - Part 4 - Integrating the Natural Language Processor NLP
Disclaimer: I am the author of this package.
Abodit NLP (https://nlp.abodit.com) can do what you want but it's .NET only at present.
In particular you can easily connect it to databases and can provide custom Tokens that are queries against a database. It's all strongly-typed and adding new rules is as easy as adding a method in C#.
It's also particularly adept at turning date time expressions into queries. For example "next month on a Thursday after 4pm" becomes ((((DatePart(year,[DATEFIELD])=2019) AND (DatePart(month,[DATEFIELD])=7)) AND (DatePart(dw,[DATEFIELD])=4)) AND DatePart(hour,[DATEFIELD])>=16)

What makes up an application (in the traditional sense) in SharePoint? [Not to be confused with a Web Application]

[NOTE: I am not using the word 'Application' to mean what SharePoint terminology calls a 'Web Application'. 'Application' is used here in the general sense.]
In mainstream application development platforms (like ASP.NET or Java EE) there is a clear concept of an application and application boundaries.
What is a SharePoint application made up of? Sites? Lists? Features? Libraries?
A SharePoint solution can include more than one Feature. Can a set of related Features be called an application? Is there a way to define which Features are related or inter-dependent?
This is not just a theoretical question. Boundaries around an application are needed for measuring resource utilization, controlling access, assuring SLAs (performance, availability, etc.), change control, application ownership, application life-cycle management, and more.
One definition can be based on hosting location: is it hosted on the SharePoint server? Server-side, it could be a solution built on the SharePoint object model; client-side, it could be based on the client object model, SOAP web services, or WCF services. Further, an application built on these paradigms could be a web, Windows, or console application, or a web service. Performance and availability depend on the host variables too.
In SharePoint there is a clear concept of boundaries, depending on your classifications and definitions. Your confusion stems from comparing technologies with a product. ASP.NET and J2EE are technologies; SharePoint is a product built on a stack of technologies. Any large-scale product built on many underlying technologies is no different from SharePoint.
From a dev point of view, boundaries could be drawn around the representation of data, the management of content, or both - a direct derivative of the scope of your solution. Inter-dependency of features and customisations does not by itself delimit a single application. Extending SharePoint could involve depending on existing features or building new ones. It is the designer's prerogative to define boundaries based on scope and on whether to reuse existing functionality. My definition of an application would be a business case plus a technical solution.
You didn't clarify what kind of resource utilisation you want to measure - is it instrumentation or project management? Change management is part of project management for development efforts. I am not even talking about Services here. In this sense, any custom solution that aims to modify the default or currently customised deployment of SharePoint will have a boundary defined by the changes it brings in.
