Wit.ai intent issues - nlp

I am having real difficulty understanding the intent part of Wit.ai. When I go to the Understanding tab inside an app, it shows intent as a user-defined entity.
Do we need to create intents as a user-defined entity, or as a separate built-in entity? Just today, while going through their HTTP API docs, I was taken first to version 20141022 and then to version 20160526 within 15 minutes (does that mean the HTTP API versions are not stable?). In the older version I see a separate API for posting an intent, which is deprecated in the newer version.
I also went through some of the apps in the Explore section, and in each of them intent is a different property altogether.
How should I treat the intent?
Considering the newer versions of the API: if I treat intent as a user-defined entity and then add another entity with the trait search strategy, Wit.ai internally removes my expression for the other entity. I also need some help understanding how this flow works.

In the new version of Wit.ai, intent is no longer a special, built-in entity. It's just a user-defined entity that is created for you when you create an app. Its search strategy is normally trait.
If you add another entity with the trait strategy, it will be completely orthogonal to intent. What do you mean by "Wit.ai internally removes my expression"?
Don't hesitate to explain what exactly you are trying to achieve.
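To make this concrete, here is a small sketch of how a /message response in the newer API versions can be read, with intent treated as just another entity next to a custom trait entity. The response object and the entity names (intent, sentiment) are hand-written samples, not output from a real app:

```javascript
// Sketch: parsing a wit.ai /message response where "intent" is an
// ordinary user-defined entity, not a special built-in one.
// The response shape and entity names below are hypothetical samples.

const sampleResponse = {
  _text: "book me a flight",
  entities: {
    intent: [{ value: "book_flight", confidence: 0.97 }],
    sentiment: [{ value: "neutral", confidence: 0.81 }] // another trait entity
  }
};

// Both trait entities live side by side in `entities`;
// one does not replace the other.
function topEntityValue(response, name) {
  const candidates = response.entities[name] || [];
  return candidates.length ? candidates[0].value : null;
}

const intent = topEntityValue(sampleResponse, "intent");       // "book_flight"
const sentiment = topEntityValue(sampleResponse, "sentiment"); // "neutral"
```

If an expression really does disappear from one entity after training another, that is worth checking in the Understanding tab rather than in code, since the two traits are stored independently.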

Related

Is there a way to program intent and is it possible to call different models on different steps in DialogFlow?

I'm currently building a chatbot and came across Dialogflow, which provides a lot of help in this area. My question: is it possible to have multiple contexts active at once, and to call my own NLP model (exposed via an API) multiple times? Or do I have to build my own platform, since Dialogflow can't call multiple webhooks at once?
Example:
I have a model to classify the initial intent,
I have a regression model to do something else if the intent is XXX.
First, remember that an Intent represents what the user says or does and not how you react to that.
Within that, yes, it is perfectly feasible to have multiple Contexts active at once. The lifespan of a Context determines how many rounds of the conversation it will be active for. All of the Input Contexts for an Intent must be active for that Intent to be considered for matching.
While Dialogflow only lets you register one webhook for all of the Intents, it provides information to that webhook about which Intent was triggered (along with which Contexts are active, parameter values, etc). Based on that, or any other information you wish, you can choose which code or handler to execute. In this way, you can certainly make multiple calls to other APIs if that makes sense - as long as you return within the timeout period (5-7 seconds).
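The dispatch step described above can be sketched as follows; the intent names, handlers, and simplified request body are invented for illustration, loosely following the V2 WebhookRequest layout:

```javascript
// Sketch: one webhook, many intents. Dialogflow tells the webhook which
// intent matched; the webhook picks the handler. Intent names and the
// request shape here are illustrative, not from a real agent.

const handlers = {
  "classify.initial": (params) => `classified: ${params.text}`,
  "regress.followup": (params) => `regression on: ${params.value}`
};

function handleWebhook(requestBody) {
  const intentName = requestBody.queryResult.intent.displayName;
  const handler = handlers[intentName];
  if (!handler) return { fulfillmentText: "No handler for " + intentName };
  // Each handler could call out to a different external model/API here,
  // as long as the combined work finishes within the webhook timeout.
  return { fulfillmentText: handler(requestBody.queryResult.parameters) };
}

const demo = handleWebhook({
  queryResult: {
    intent: { displayName: "classify.initial" },
    parameters: { text: "hello" }
  }
});
```

So the single registered webhook is not a limitation on how many models you call; it is just a single entry point in front of your own routing.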

How to clear the session between two consecutive intents?

We are using bot-trees-master, which uses bot-graph-dialog to load graph-based dialogs dynamically. We created a scenario that loads multiple intents (whose intent scores nearly match). The problem I am facing: when I request a single intent, I get the bot response and can move on to the next intent. But when I request multiple intents, the bot gives its response, and when I then request another intent, the bot repeats the same multiple-intents response. When the bot enters the multiple-intent handler, it does not clear the session and never exits that handler. I have tried using session.endConversation().
To understand about bot graph dialog:
https://www.microsoft.com/developerblog/2016/11/11/extending-microsofts-bot-framework-with-graph-baseddialogs-/
Can somebody help with this? Thank you.
The builder.IntentDialog() used in this project is no longer considered a best practice, and further support is not expected. Instead, you will want to use bot.dialog() to create your dialog flows with a .triggerAction(). The trigger matches on phrases, to which you can assign a score. Unfortunately, this means you will need to rework the bot-trees project to fit the new model.
You can find information on managing conversations here, including creating triggers.
If you need a more robust system, you can integrate LUIS into your project. Information regarding recognizing intents with LUIS can be found here.
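To illustrate the idea of score-based triggers, here is a tiny self-contained model of how .triggerAction()-style matching behaves; it is not the real botbuilder SDK, and the dialog names, phrases, and scores are invented:

```javascript
// Self-contained model of score-based trigger matching, in the spirit of
// botbuilder's .triggerAction({ matches }) -- NOT the real SDK, just the idea.
// Each dialog registers a matcher and a score; the best-scoring dialog wins,
// and each turn starts fresh, so one matched dialog cannot "stick" the way
// a shared multi-intent handler can.

const dialogs = [];

function dialog(name, handler, { matches, score }) {
  dialogs.push({ name, handler, matches, score });
}

function route(utterance) {
  let best = null;
  for (const d of dialogs) {
    if (d.matches.test(utterance) && (!best || d.score > best.score)) best = d;
  }
  return best ? best.handler(utterance) : "Sorry, I didn't understand.";
}

dialog("greeting", () => "Hello!", { matches: /\b(hi|hello)\b/i, score: 0.9 });
dialog("help", () => "How can I help?", { matches: /\bhelp\b/i, score: 1.0 });
```

The key difference from a single IntentDialog handler is that routing is re-evaluated on every utterance, which avoids the "stuck in the multiple-intent handler" behavior described above.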

Dialogflow: reference to output context in intent (ie what's this NodeJS Client Library for?)

In my NodeJS Dialogflow fulfillment, I want to reference an output context parameter from an intent from 2 requests ago within the session.
The queryResult of the latest request doesn't have that data, and the samples only seem to process WebhookRequest and WebhookResponse
(reference: https://dialogflow.com/docs/reference/api-v2/rest/v2beta1/WebhookResponse ).
If I can access https://dialogflow.com/docs/reference/api-v2/rest/v2beta1/projects.agent.sessions.contexts/get I may be able to do it. But I don't quite understand whether that implies mixing https://github.com/dialogflow/fulfillment-webhook-nodejs/blob/master/functions/index.js with this client library:
https://github.com/googleapis/nodejs-language .
In other words, it's not clear to me what the purpose of https://github.com/googleapis/nodejs-language is. Is nodejs-language intended to substitute for actions-on-google fulfillments (in the format of https://github.com/dialogflow/fulfillment-webhook-nodejs/blob/master/functions/index.js )?
There is a lot going on here, and it isn't quite clear why you think things fit together the way you do.
The nodejs-language library is used to access Google's Natural Language API that runs as part of the Google Cloud Machine Learning API family. This is a completely separate product from the Google Assistant, Actions on Google, and Dialogflow systems. It is meant as an API for people who are looking for a pre-trained AI that can do things like sentiment and syntax analysis. It is not meant as a substitute for any part of the AoG or Dialogflow platform.
As long as the context set two requests ago was set with a lifespan of more than 2, and wasn't cleared in between, it should still be valid and sent to your fulfillment webhook. Since it sounds like you're using Dialogflow V2, you should be able to get all the currently valid contexts in the request that is sent to your fulfillment webhook by looking at the queryResult.outputContexts field of the request body.
If you're using the fulfillment-webhook-nodejs library that you referenced in your post, this should be available to you in the inputContexts variable.
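Under those assumptions, a webhook can read the surviving context roughly like this; the context name (order) and parameter (item) are hypothetical, and the request body is a hand-written sample following the V2 WebhookRequest layout:

```javascript
// Sketch: reading a parameter set on a context two requests ago from a
// Dialogflow V2 WebhookRequest body. Any context whose lifespan has not
// expired still arrives in queryResult.outputContexts.

function getContextParam(requestBody, shortName, paramName) {
  const contexts = requestBody.queryResult.outputContexts || [];
  // Context names are full resource paths; match on the trailing segment.
  const ctx = contexts.find((c) => c.name.endsWith("/contexts/" + shortName));
  return ctx && ctx.parameters ? ctx.parameters[paramName] : undefined;
}

const sampleRequest = {
  queryResult: {
    outputContexts: [
      {
        name: "projects/my-project/agent/sessions/abc/contexts/order",
        lifespanCount: 3, // set with lifespan > 2, so it survived two turns
        parameters: { item: "book" }
      }
    ]
  }
};

const item = getContextParam(sampleRequest, "order", "item");
```

No extra client library is needed for this; the context data is already in the webhook request.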

How to ensure my Google Home Assistant application is not rejected?

During our testing, we were unable to complete at least one of the behaviors or actions advertised by your app. Please make sure that a user can complete all core conversational flows listed in your registration information or recommended by your app.
Thank you for submitting your assistant app for review!
During testing, your app was unable to complete a function detailed in the app’s description. The reviewer interacted with the app by saying: “how many iphones were sold in the UK?” and the app replied “I didn't get that. Can you try with other question?” and left the conversation.
How can I resolve the above issue so that my Google Assistant action gets approved?
Without seeing the code in question or the intent you think should be handling this in Dialogflow, it is pretty difficult - but we can generalize.
It sounds like you have two issues:
Your fallback intent that generated the "I didn't get that" message is closing the conversation. This means that either the "close conversation" checkbox is checked in Dialogflow, you're using the app.tell() method when you should be using app.ask() instead, or the JSON you're sending back has close conversation set to true.
You don't have an intent to handle the question about how many iPhones were sold in the UK. This could be because you just don't list anything like that as a sample phrase, or the two parameters (the one for object type and the one for location) aren't using entity types that would match.
It means that somewhere, either in your app description or in a Dialogflow intent (the reviewers have full access to see what's in your intents), you hinted that “how many iphones were sold in the UK?” would be a valid question. Try changing the description/intents to properly match the restrictions of your app.
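To illustrate the first point: in the webhook response JSON, the difference between app.ask() and app.tell() roughly comes down to whether the conversation is kept open. The sketch below models that with a simplified response object (the real Actions on Google response has more fields than shown here):

```javascript
// Sketch: why app.ask() vs app.tell() matters for a fallback intent.
// Conceptually, ask() keeps the conversation open (expectUserResponse: true)
// while tell() closes it. This response object is deliberately simplified.

function fallbackResponse(keepOpen) {
  return {
    expectUserResponse: keepOpen, // true = re-prompt, false = end conversation
    speech: "I didn't get that. Can you try another question?"
  };
}

// A fallback that leaves the conversation (what the reviewer saw):
const rejectedBehavior = fallbackResponse(false);
// A fallback that re-prompts so the user can rephrase:
const fixedBehavior = fallbackResponse(true);
```

A reviewer hitting a re-prompting fallback can try again, whereas a fallback that closes the conversation reads as a failed core flow.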

wit.ai message api call from java

I am calling the wit.ai message API from Java. It works, and I can get the intent and entities for the message as per my story definition.
However, I have a doubt: when I add multiple stories to my bot on the wit.ai platform, I need to use the same wit.ai URL. In that case, how exactly do I extract the intents and entities from the wit response?
A message from the user could belong to any story. To extract an entity value from the wit.ai response, I need to specify the entity name (in the JSON response) in my method, let's say entities.[0].value.
Any idea how I can do that in Java? I also don't understand how the custom actions defined in a story are supposed to be implemented and used from Java.
I tried a sample POC in Node.js (following the messenger.js file) and it works fine for the custom actions defined, but I don't understand how to do it in Java.
Any help/pointers in this regard are greatly appreciated.
Thanks.
I am looking for the same thing. Though the HTTP API docs describe how to call a conversation involving actions, there is no description of how to create actions in Java. It is well documented for Node.js but not for Java. Let me know if you make any breakthrough. I have written to the support team but have yet to hear back.
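Regardless of the client language, the /message response is plain JSON, so the extraction logic is the same; the sketch below is in JavaScript, but a Java client would walk the same structure with any JSON library (e.g. Jackson or Gson). The story intents and entity names here are invented:

```javascript
// Sketch: handling responses that may come from any of several stories.
// Read the intent first, then decide which entities to look for.
// Intent/entity names are hypothetical; the layout follows the older
// /message response shape where everything lives under `entities`.

function firstValue(entities, name) {
  const list = entities[name] || [];
  return list.length ? list[0].value : null;
}

function interpret(response) {
  const entities = response.entities || {};
  const intent = firstValue(entities, "intent");
  switch (intent) {
    case "get_weather":
      return { intent, location: firstValue(entities, "location") };
    case "book_flight":
      return { intent, destination: firstValue(entities, "destination") };
    default:
      return { intent: null }; // unknown story: fall back gracefully
  }
}

const result = interpret({
  entities: {
    intent: [{ value: "get_weather", confidence: 0.93 }],
    location: [{ value: "Berlin", confidence: 0.88 }]
  }
});
```

In Java the equivalent would be a switch on the intent value followed by per-intent entity lookups, so one URL can serve every story.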
