Botman not listening to Dialogflow action - nlp

I am working on a chatbot with Botman. I want to integrate Dialogflow's NLP, so I'm calling the middleware and one of its actions. The problem is that Botman is not hearing it; I just keep getting this error:
This is my intent's action name
This is the way I'm calling the middleware
I'm using my Client access token. I tried giving the action different names, such as 'input.automovil', 'automovil', and (.*), but it still fails, and I haven't found enough examples.

The documentation is out of date: ApiAi has been renamed to Dialogflow.
Replace
use BotMan\BotMan\Middleware\ApiAi;
with
use BotMan\BotMan\Middleware\Dialogflow;
and
$dialogflow = ApiAi::create('your-key')->listenForAction();
with
$dialogflow = Dialogflow::create('your-key')->listenForAction();

Try changing your lines 27 to 33 to the below:
$botman->hears('automovil', function (BotMan $bot) {
    // The incoming message matched the "automovil" action on Dialogflow.
    // Retrieve Dialogflow information:
    $extras = $bot->getMessage()->getExtras();
    $apiReply = $extras['apiReply'];
    $apiAction = $extras['apiAction'];
    $apiIntent = $extras['apiIntent'];

    $bot->reply($apiReply);
})->middleware($dialogflow);

Related

Should we have a separate function handler for every intent if there are more than 200 intents?

What if I have more than 100 intents, including follow-up intents? Should we write a separate handler for each intent and call a common function from each handler? Is that correct?
We want a common function that takes the intent name as a parameter, because all we do is fetch the response from the database.
Should we register a parameterized function in the intent map, or have a separate handler function for each of these intents that calls a common parameterized function? Please suggest.
Yes, using parameterized functions or classes is good practice. With this setup you can easily reuse any required logic when two intents perform similar actions in the webhook.
If you require slightly different behaviour, you can pass values through the parameters; an example of this would be a function that ends the conversation.
app.intent("Stop Conversation", (conv) => {
    const message = "Okay, have a nice day";
    endConversation(conv, message);
});

app.intent("Cancel Reservation", (conv) => {
    const message = "Okay, I will cancel your reservation. Have a nice day.";
    endConversation(conv, message);
});

function endConversation(conv, message) {
    conv.close(message);
}
You could choose to go for one single handler that looks up the intent name and then fetches the response, but this can cause some issues when working with Helper intents. These Helper intents require extra parameters that normal intents do not use, so you will have to account for them in your common handler or write separate handlers for them. If you do not need these intents, then there isn't any harm in using a single handler.
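A single handler of that style can be sketched in plain Node, with a Map standing in for the database fetch (the intent names and replies here are illustrative, not from the original code):

```javascript
// Sketch: one common handler that looks up the reply by intent name.
// The Map stands in for a database fetch; intent names are made up.
const responses = new Map([
  ["Stop Conversation", "Okay, have a nice day"],
  ["Cancel Reservation", "Okay, I will cancel your reservation."],
]);

function handleIntent(intentName) {
  // Fall back to a default reply for intents we do not know about.
  return responses.get(intentName) || "Sorry, I did not understand that.";
}
```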
One extra thing to note: having 100 intents is quite a lot. Remember that intents should be used to indicate what your user says, not as a step in your flow. Usually this means that you only have one intent to handle "yes" input from your users, and you use context to determine which step of the conversation you are in.
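That context-driven pattern can be sketched as a single "yes" handler whose reply depends on the active context (the context names and replies below are hypothetical):

```javascript
// Sketch: one "yes" intent whose effect depends on the active context
// (context names and replies are hypothetical).
function handleYes(activeContext) {
  switch (activeContext) {
    case "awaiting_cancel_confirmation":
      return "Okay, your reservation is cancelled.";
    case "awaiting_order_confirmation":
      return "Great, your order has been placed.";
    default:
      return "Sorry, what would you like to confirm?";
  }
}
```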
If you are using the actions-on-google or dialogflow-fulfillment libraries, then yes, having an Intent Handler for each Intent and having those handlers call other functions with the parameters you want is the best approach.
However... if you're not using these libraries, you certainly have other options available.
For example, using multivocal you can set builder functions that extract parameters into the request environment and make the database call. If you set the "Action" field in Dialogflow you can (but don't have to) use this as the basis for an Action Handler.
If you just want to stick to your own libraries, you can parse the JSON yourself and make whatever function calls based on whatever values you wish.
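A rough sketch of that library-free approach, reading the intent name straight from the Dialogflow v2 WebhookRequest JSON (the handler table and replies are invented for illustration):

```javascript
// Sketch: route a raw Dialogflow v2 webhook body without any helper library.
// queryResult.intent.displayName holds the intent the user matched.
const handlers = {
  "Stop Conversation": () => ({ fulfillmentText: "Okay, have a nice day" }),
};

function route(body) {
  const intentName = body.queryResult.intent.displayName;
  const handler = handlers[intentName];
  return handler ? handler() : { fulfillmentText: "Sorry, I did not get that." };
}
```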

List option in dialog flow for google assistant not giving correct results

I have been exploring Dialogflow for the last 6-7 days and have created a bot which has a menu in the form of a List.
After going through a lot of articles, I learned that we need to have the event actions_intent_OPTION in one of the intents for the List to work properly, and that on top of that we need a handler/intent for actions_intent_OPTION. This intent is triggered once the user taps one of the options in the List.
Now I am struggling with defining the handler for the event actions_intent_OPTION. I have defined an intent named "actions_intent_OPTION-handler", but I am not able to find the code for the fulfillment section of Dialogflow that will identify the option selected by the user and call the intent associated with that option.
I am not from a coding background, and I tried one piece of code (index.js); when deployed it doesn't give any error, but when executed in the simulator it throws the error "Failed to parse Dialogflow response into AppResponse because of empty speech response."
Reiterating my requirement: I am looking for sample code that can capture the option selected by the user (from the list) and trigger the already defined intent.
Details about the bot, list, and intents are attached herewith.
This is the list defined by me; currently I am trying to write code to capture the Payment Due Date option (which has the text Payment Due Date Electricity defined in the list).
Code in fulfillment section
Intents defined
Note - Intent which needs to be called is "1.1 - ElectricityDetails - DueDate"
Here is the code. Please don't ask me why I have used certain pieces of code, as I am a newbie :).
'use strict';
const functions = require('firebase-functions');
const {WebhookClient} = require('dialogflow-fulfillment');
const {dialogflow} = require('actions-on-google');
const app = dialogflow({debug: true});
//const agent = new WebhookClient({ request, response });
let intentMap = new Map();
app.intent('actions_intent_OPTION-handler', (conv, params, option) => {
if (!option) {
conv.ask('You did not select any item from the list or carousel');
} else if (option === 'Payment Due Date Electricity') {
//conv.ask('You are great');
//intentMap.set('Default Welcome Intent', welcome);
intentMap.set('1.1 - ElectricityDetails - DueDate',option);
} else {
conv.ask('You selected ' + option);
}
});
//agent.handleRequest(intentMap);
exports.dialogflowFirebaseFulfillment = functions.https.onRequest(app);
You have a few issues here, so it is difficult to tell exactly which one is the real problem, but they all boil down to this statement in your question:
Please don't ask me why I have used certain pieces of code
We understand that you're new - we've all been there! But copying code without understanding what it is supposed to do is a risky path.
That said, there are a few things about your design and code that jump out at me as issues:
Mixing libraries
You seem to be loading both the actions-on-google library and the dialogflow-fulfillment library. While most of what you're doing is with the actions-on-google library, the intentMap is what is used by the dialogflow-fulfillment library.
You can't mix the two. Pick one and understand how to register handlers and how those handlers are chosen.
Register handlers with actions-on-google
If you're using the a-o-g library, you'll typically create the app object with something like
const app = dialogflow();
and then register each handler with something like
app.intent( 'intent name', conv => {
// handler code here
});
You'll register the app to handle the request and response with something like
exports.dialogflowFirebaseFulfillment = functions.https.onRequest(app);
Register handler with dialogflow-fulfillment
The dialogflow-fulfillment approach is similar, but it suggests creating a Map that maps from Intent Name to handler function. Something like this:
let intentMap = new Map();
intentMap.set( 'intent name', handlerFunction );
where handlerFunction is the name of a function you want to use as the handler. It might look something like:
function handlerFunction( agent ){
// Handler stuff here
}
You can then create an agent, set the request and response objects it should use, and tell it to use the map to figure out which Intent Handler to call with something like
exports.dialogflowFirebaseFulfillment = functions.https.onRequest((request, response) => {
    const agent = new WebhookClient({ request, response });
    agent.handleRequest(intentMap);
});
Intents represent what the user does, not what you do with it
Remember that Intents represent a user's action.
What you do based on that action depends on a lot of things. In your case, once they have selected an option, you want to reply the same way as if they had triggered it with a particular Intent. Right?
You don't do that by trying to trigger that Intent.
What you do is you have both Handlers call a function that does what you want. There is nothing fancy about this - both are just calling the same function, just like lots of other code that can call common functions.
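In plain terms, that pattern looks something like this, using the intent names from the question (the reply text is a made-up placeholder for whatever the real response builder does):

```javascript
// Sketch: two intent handlers delegating to one shared reply function.
// The reply text is a placeholder; in the real bot this would build the
// actual due-date response.
function dueDateReply() {
  return "Here is your electricity payment due date.";
}

// Both the option handler and the direct intent handler call the same function,
// so selecting the list option and matching the intent reply identically.
const handlers = {
  "actions_intent_OPTION-handler": dueDateReply,
  "1.1 - ElectricityDetails - DueDate": dueDateReply,
};
```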
Don't try to dynamically register handlers
Related to the previous issue, trying to register a new Handler inside an existing Handler won't do what you want. By that time, it is too late, and the handlers are already called.
There may be situations where this makes sense - but they are very few, far between, and a very very advanced concept. In general, register all your handlers in a central place as I outlined above.

How to send accessToken in Detect Intent Text API in Dialogflow

I'm developing my chatbot backend using the detect intent API in Python; once the intent is recognized, it goes to my webhook and gets the appropriate data.
Here is my detect_intent code to get data from text:
import json

import dialogflow
from google.protobuf.json_format import MessageToDict


def detect_intent_texts(project_id, session_id, text, language_code='en-US'):
    session_client = dialogflow.SessionsClient.from_service_account_file(
        'my_service_account_json.json')
    session = session_client.session_path(project_id, session_id)
    print('Session path: {}\n'.format(session))

    text_input = dialogflow.types.TextInput(text=text, language_code=language_code)
    query_input = dialogflow.types.QueryInput(text=text_input)
    response = session_client.detect_intent(session=session, query_input=query_input)

    data = MessageToDict(response.query_result)
    print(json.dumps(data, indent=4))
    return parse_response(data)
How can I send access_token with it so that my webhook can identify which user is accessing the bot?
P.S. My webhook is looking for the access token at this path:
req.get("originalDetectIntentRequest").get("payload").get("user").get("accessToken")
Everything under the "payload" attribute is platform dependent. The Assistant platform puts user information under here, for example, and this is what your webhook is currently trying to process.
If you wanted to put the access token in the same place, you can pass a query_params named parameter to the call to detect_intent() along with the text you're querying, and anything else that may be relevant. (Reference) That parameter can be a dict and, if so, has to have the same field names as a QueryParameters object.
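A sketch of that approach, assuming the token is placed under payload.user.accessToken so the webhook's existing lookup keeps working (the token value is of course hypothetical, and the detect_intent call is shown commented because it needs live credentials):

```python
# Sketch: pass the access token through query_params so Dialogflow forwards it
# to the webhook under originalDetectIntentRequest.payload.
access_token = "user-token-123"  # hypothetical value
query_params = {"payload": {"user": {"accessToken": access_token}}}

# The detect_intent call from the question would then become:
# response = session_client.detect_intent(
#     session=session, query_input=query_input, query_params=query_params)

# Webhook side: the same path the question's code already reads.
req = {"originalDetectIntentRequest": {"payload": query_params["payload"]}}
token = req["originalDetectIntentRequest"]["payload"]["user"]["accessToken"]
```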

How can I get confidence level value in ms bot framework?

I want to access the confidence level from LUIS in the middleware so I can route low-confidence responses to humans instead of the bot.
The value I am looking for is this one (gets logged with emulator):
Library("*")recognize() recognized: Hallo(0.8215488)
Is this even possible in the middleware, or does that happen afterwards?
I tried finding it in the "session" but haven't found it yet.
When using an IntentDialog from the botbuilder library, you can specify the intentThreshold property, which sets the minimum score needed to trigger the recognition of an intent. See the following link for reference: https://docs.botframework.com/en-us/node/builder/chat-reference/interfaces/_botbuilder_d_.iintentdialogoptions.html#intentthreshold
If the user's input is not recognized by your LUIS models, or the score is below that intentThreshold value, the onDefault handler of the IntentDialog will handle it. So it is here that you can add your logic to hand the customer conversation over from the bot to a human:
let recognizer = new builder.LuisRecognizer(models);
let minimumScore = 0.3;

let intentArgs = {};
intentArgs.recognizers = [recognizer];
intentArgs.intentThreshold = minimumScore;

var intents = new builder.IntentDialog(intentArgs)
    .onDefault((session) => {
        // Add logic here to hand the conversation over to a human
    });

library.dialog('options', intents);

Using the Bot Framework to send message on the user's behalf

I'm currently attempting to accept voice input from the user, feed it into the Bing Speech API to get text, and pass that text as a user response. I've gotten as far as receiving the text back from Bing, but I'm not sure how to send that text as a user response. I've been scouring GitHub, so any feedback is appreciated. Relevant code is below:
function (session) {
    var bing = new client.BingSpeechClient('mykey');
    var results = {};
    var wave = fs.readFileSync('./new.wav');

    const text = bing.recognize(wave).then(result => {
        console.log('Speech To Text completed');
        console.log(result.header.lexical);
        console.log('\n');
        results.response = result.header.lexical;
    });
}]
You should use session.send.
I recommend you take a look at the intelligence-SpeechToText sample, where a similar scenario is shown.
Update: Figured it out (sorta). In order to send this user input back, I had to use another card. Within the context of the card, I'm able to use the imBack function.
