I created one parameter named "Purpose" under Actions and Parameters. I marked it as required, and for the prompt I entered "What is the purpose?". The entity type I am trying is #sys.any.
No matter what I answer after the prompt, such as "Child protective services" or "Child protective service", the simulator replies "I missed that, can you say that again" or "Sorry, I couldn't understand".
This was working two weeks ago, and suddenly it started behaving like this in Dialogflow. I also tried another approach, creating a user-defined entity, but nothing helps.
Has there been any update to Dialogflow, and do I have to change anything to make this work?
It's a bug! Since yesterday, Google Assistant has no longer been recognizing intents and parameters properly. Lots of people are facing this problem.
I already opened an issue and am waiting for a solution.
_DM
Related
I'm having an issue when attempting to enter specific intents based on the value of a property.
I currently have a question that gets asked, which then fires off to the Microsoft Translator via an HTTP request, and from there it fires off to the LUIS API with that text.
After that, I would like to enter an intent based on the top intent that the LUIS API Call brought back.
I have the Translator and the LUIS API bringing back values, and I can output these using Send Responses:
However, when I attempt to call an intent based on the value of the property, I just get an Object Reference error:
Is what I'm trying to do possible and if so am I going about this entirely the wrong way causing more issues for myself?
Thanks in advance
I'm trying to understand exactly what you are trying to achieve. Do I summarize it correctly as follows?
You start a main dialog. In that dialog you take some user input.
You translate the input, and manually send the translated text off to LUIS for intent recognition.
Based on the recognized intent, you want to start a specific subdialog.
I don't believe you can just 'call an intent'. An intent is the result of a LUIS or RegEx recognizer, which is processed automatically by Bot Framework. The recognizer runs on every user input, so there is no need to call LUIS yourself via an HTTP request. The recognizer (LUIS or RegEx) is configured on the main dialog properties in Bot Framework Composer:
Although in this case it looks like you are manually doing the LUIS intent recognition, because you want to do translation upfront. To achieve that scenario with the built-in recognizer, you would need a translation middleware. There is a short discussion going on here on Github about translation middleware for Bot Framework Composer, although the sample code is not ready yet.
While there are no code samples for the translation middleware yet, I believe what could already help you today is to start a subdialog based on the recognized intent, similar to what you already show in your screenshots.
Basically, instead of "Send a response" at the end of your dialog, you would have something like the following:
My sample here uses user input instead of the recognized intent. You would replace the user input with your intent variable instead. Based on the recognized intent, you would be able to spin up a specific dialog to handle that recognized intent.
The result would look something like:
About triggers, what you currently configured in your screenshot shows "no editor for null". I believe this might cause the "object reference" issue. Normally it should display a trigger phrase. For example, the below means:
If user inputs the text "triggerphrase"
And the dialog variable 'topintent' was previously set to 'test', then run this trigger.
This issue started this morning (21 June 2019) and is affecting ALL our Dialogflow agents. Previously they had been working fine, though we had observed this behaviour occasionally over the past month and found it difficult to reproduce.
Now we can reliably reproduce it and it has hammered all our voice work.
Our webhook returns a piece of JSON like this to trigger an event to move the user to the next intent:
"followupEventInput": {
"name": "Textbox",
"languageCode": "en-AU"
}
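As a concrete sketch, a minimal v2 webhook response carrying that followupEventInput could be built like this in plain Node.js (the event name "Textbox" and the language code are taken from the fragment above; the HTTP plumbing is omitted):

```javascript
// Build a Dialogflow v2 webhook response that triggers a follow-up event,
// matching the JSON fragment above. The surrounding Express handler and
// res.json(...) call are omitted for brevity.
function buildFollowupResponse(eventName, languageCode) {
  return {
    followupEventInput: {
      name: eventName,
      languageCode: languageCode,
    },
  };
}

// The webhook would serialize this object as its HTTP response body:
const body = JSON.stringify(buildFollowupResponse('Textbox', 'en-AU'));
```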
The problem is that if we use events more than twice after the initial trigger, the user is just given a message "Sorry, I can't help" and the Agent is forcibly closed.
Example conversation:
"Talk to Foobar Toys"
"Welcome to Foobar Toys. How can I help you?" (Start app)
"I'd like to know about Lego"
"Do you want to know about Technic, or Star Wars lego?" (Invocation started)
"Technic"
"Are you interested in sets or minifigs?" (Interaction 1)
"sets"
"What kind of sets?" (Interaction 2)
"cars"
"Sorry, I can't help." (Failure after interaction 2.)
This behaviour is very similar to what we'd see if we were triggering a default fallback intent all the time, but we aren't.
The interactions are all intents triggered by events.
If we DO happen to trigger a fallback intent or help text, the counter resets and we can keep going until we next hit this.
A LOT of our workflows involve more than 2 interactions. So this is a pretty big deal. Any advice appreciated. I've spent a day or two trying to work out a scenario in which this doesn't happen for us with no luck at all.
So, we've worked out what caused this, and have managed to work around it.
Our agent was composed of several intents that each had a required input parameter called "input". Triggering of the intents via our webhook was done (sometimes) by use of a follow-up event. In Firebase this is achieved by using a statement like:
agent.setFollowupEvent('message');
where "message" is the name of the event that is linked to your intent.
It seems that by taking the workflow out of the hands of the Dialogflow core, we somehow led it to think that it wasn't managing to match any intents, even though our code was effectively telling it which intent to send the conversation to.
Our workaround for now is to have a single intent that matches on sys.any and not pass back followup events any more.
If anyone is interested, I have a very simple workflow+firebase that reproduces this issue.
Added later - Response from Google
"it seems that the cause of the issue is the slot filling using #sys.any as an entity. Please don't use #sys.any on slot filing as to this is not a standard practice on using #sys.any."
Here was my setup and my hacky fix:
intent1, triggered by event "eventIntent1", with parameter 'value' of type #sys.number. The intent gets one number and stores it in the conversation context. If it doesn't have four numbers yet, it calls itself through followup("eventIntent1") to get another number.
Desired conversation:
assistant: "give me a number"
user: "1"
assistant: "give me a number"
user: "2"
assistant: "give me a number"
user: "3"
assistant: "give me a number"
user: "4"
assistant: "You gave me 1 2 3 4"
Actual conversation:
assistant: "give me a number"
user: "1"
assistant: "give me a number"
user: "2"
assistant: "give me a number"
user: "3"
assistant: "Sorry, I can't help"
Fix:
The fix was to set up another intent called "intent2", triggered by an event "eventIntent2". The slot filling for both is identical (the logic above), except that intent1 calls "eventIntent2" for a followup, while intent2 calls "eventIntent1" for a followup. This tricks Dialogflow into not having the same intent called twice in a row, and it allowed me to record an unlimited number of values.
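A minimal sketch of this alternating-event workaround, with the bookkeeping reduced to plain functions (the event names follow the example above; in a real fulfillment the collected numbers would live in a conversation context rather than a local object):

```javascript
// Each handler hands off to the *other* intent's event, so Dialogflow
// never sees the same event fired twice in a row.
function nextFollowupEvent(currentEvent) {
  return currentEvent === 'eventIntent1' ? 'eventIntent2' : 'eventIntent1';
}

// Collect one number per turn; after four numbers, answer with the list.
function handleNumber(state, value) {
  state.numbers.push(value);
  if (state.numbers.length < 4) {
    state.lastEvent = nextFollowupEvent(state.lastEvent);
    return { followupEventInput: { name: state.lastEvent } };
  }
  return { fulfillmentText: 'You gave me ' + state.numbers.join(' ') };
}

// Simulate the four-turn conversation from the example:
const state = { numbers: [], lastEvent: 'eventIntent1' };
const replies = ['1', '2', '3', '4'].map(v => handleNumber(state, v));
```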
I'm developing an Action, let's call it "foo". It's a grocery list, so users should be able to explicitly invoke it like so:
"ask foo to add milk" (fails)
"ask foo add milk" (works, but grammatically awful)
"tell foo add milk" (fails, even though it's basically identical to the above?)
"talk to foo" ... "add milk" (works, but awkward)
I've defined "add {item} to my foo list" and "add {item}" (as well as many others) as training phrases in Dialogflow. So it seems like everything should be configured correctly.
The explicit invocations "talk to foo" (wait) "add milk" and "ask foo add milk" work fine, but I cannot get any others to work in the Actions simulator or on an actual device. In all cases it returns "Sorry, this action is not available in simulation". When I test in Dialogflow, it works fine.
It seems like the Assistant is trying to match some other unrelated skill (I'm assuming that's what that debug error means). But why would it fail when I explicitly invoke "ask foo to add milk"?
Additionally, my action name is already pretty unique, but even if I change it to something really unique ("buffalo bananas", "painter oscar", whatever) it still doesn't match my action. Which leads me to think that I'm not understanding something, or Actions is just really broken.
Can anyone help me debug this?
Edit: I spent weeks in conversation with the Actions support team, and they determined it was a "problem with my account", but didn't know how to fix it. Unfortunately, at that point they simply punted me to GSuite support, who of course know nothing about Actions and also couldn't help. I'm all out of luck and ideas at this point.
Implicit invocation is not based directly on what training phrases you have. Google will try to match users to the best action for a given query, but it might not.
To get explicit invocation with an invocation phrase, you may need to go back to the Dialogflow integrations section and configure each intent you want to serve as an implicit intent.
I am building a bot which uses the slot filling approach, and I want to provide a rich message from a webhook once an exit phrase is input to the bot.
I came across "canceling the slot filling dialog" in the documentation: https://dialogflow.com/docs/concepts/slot-filling#canceling_slot_filling_dialog
While trying it out, I found that besides the utterances mentioned in the documentation, there are more exit phrases like this, e.g. "nothing" and "abort".
I couldn't find any intent/settings to configure/change this behaviour.
Is there a way that I could find out all the exit phrases?
Is there a way to change the output message displayed when the user says an exit phrase?
Can we connect with a webhook after user says an exit phrase to provide a custom rich response?
Attached is the response I get when I say an exit phrase to the bot during slot filling:
There's no built-in way to do it as far as I can tell, but there is a hacky way you could use (3) to achieve (2). I'll assume you are familiar with how Dialogflow webhook requests and responses work generally; please see here if not.
It basically boils down to checking if Dialogflow is about to respond with one of its stock cancellation phrases, then replacing it with one of your own.
Make sure "enable webhook call for slot filling" is on. When the user types a slot filling exit phrase, the webhook JSON that Dialogflow sends will still have the same intent.name property as the intent you're working with. So you can catch that intent in a switch statement.
Then inside that, you can simply use an 'if' statement to check the "FulfillmentText" property of the webhook request and see if it's any of the stock phrases Dialogflow uses to respond to cancellations, such as "Sure, cancelling" or "No problem, cancelling". I don't know how many there are, but I assume there aren't too many; you'll have to test to try and find them all.
If it is any of those phrases, you can then change what Dialogflow says to the user by giving back a response to the webhook with your own FulfillmentText set to whatever you want (see the link above with how the JSON response should be structured).
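As an illustrative sketch of that check-and-replace step (the stock phrase list and the custom reply wording here are assumptions; you would need to discover the real set of phrases by testing, as noted):

```javascript
// Stock cancellation phrases to intercept. This list is a guess and
// would need to be filled in by observing real Dialogflow responses.
const STOCK_CANCEL_PHRASES = ['Sure, cancelling.', 'No problem, cancelling.'];

// If Dialogflow proposed a stock cancellation phrase, substitute our own
// fulfillment text; otherwise let the agent's response stand unchanged.
function handleWebhook(request) {
  const proposed = request.queryResult.fulfillmentText || '';
  if (STOCK_CANCEL_PHRASES.includes(proposed)) {
    return { fulfillmentText: 'Okay, stopping here. You can say "start over" at any time.' };
  }
  return { fulfillmentText: proposed };
}

// Example: a request where Dialogflow is about to cancel slot filling.
const reply = handleWebhook({
  queryResult: { fulfillmentText: 'No problem, cancelling.' },
});
```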
This method isn't exactly ideal as the stock exit responses Dialogflow uses could change and it's hard to know if you've found them all, but it should be a workaround until Dialogflow becomes more flexible.
Also copying my earlier comment about question 1, since it seems to work (thanks for the typo correction):
I suspect the list of cancelling phrases is the same as that found in the "cancel" intent of the prebuilt smalltalk agent. To find this, go to Prebuilt Agents -> Small Talk -> Import. Then navigate to that agent and find the intent "smalltalk.confirmation.cancel" to view the list of phrases.
Hope this helps.
I have created a Google Action which takes in three parameters. I have added training phrases for many word combinations, but sometimes it will not pick them up.
I set my input parameters in Dialogflow to number1, number2, and number3.
It seems that by default, if it misses a value it will say: "what is $varName".
However, this could be misleading to users, since a bare prompt like "what is number3" may be unclear.
I'd like to edit this response to be a more descriptive message.
I hope this is clear enough - I can't really post any code since it all concerns the Dialogflow UI...
cheers!
If you want to add prompt variations for capturing parameters in an entity, follow the "adding prompt variations" steps explained here. Just add variations to the prompts as below, or handle it from the webhook by enabling slot filling for the webhook.
If you want to ask questions when the agent did not understand the intent, then you can either use a Default Fallback Intent for a generic reply or create a follow-up fallback intent for the intent you are targeting.
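For the webhook route, here is a hedged sketch of returning a more descriptive prompt for whichever parameter is still missing, assuming "enable webhook call for slot filling" is turned on (the parameter names number1-number3 follow the question; the prompt wording is invented):

```javascript
// Custom prompts per parameter, replacing the default "what is $varName".
// These strings are illustrative only.
const PROMPTS = {
  number1: 'First, what is the starting number?',
  number2: 'Next, what is the second number?',
  number3: 'Finally, what is the third number?',
};

// Return a prompt for the first missing parameter; once all three are
// filled, return a normal completion response instead.
function slotFillingResponse(parameters) {
  for (const name of ['number1', 'number2', 'number3']) {
    if (parameters[name] === undefined || parameters[name] === '') {
      return { fulfillmentText: PROMPTS[name] };
    }
  }
  return { fulfillmentText: 'All three numbers received.' };
}
```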