I have several Actions published on Dialogflow that are getting invoked by random strings of numbers such as the one below:
1,548,760,289,473 (1)
It invokes the assistant and then doesn't move past the welcome message. The training data comes through completely empty.
This sounds like it might be the Action health check that Google's bots perform throughout the day to ensure your Action responds.
If you examine the arguments, there will be one named is_health_check. The numbers don't appear to be random; they look like they might be the number of milliseconds since midnight UTC on Jan 1, 1970 (the start of the UNIX epoch).
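As a quick check, decoding the value above as a millisecond epoch timestamp in Python yields a plausible recent date (assuming that interpretation is right):

```python
from datetime import datetime, timezone

# Interpret the "random" invocation string as milliseconds since the UNIX epoch.
ts_ms = 1_548_760_289_473
print(datetime.fromtimestamp(ts_ms / 1000, tz=timezone.utc))
# -> 2019-01-29 11:11:29.473000+00:00
```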
Is there a way to train an LLM to store a specific context? For example, I have a long story I want to ask questions about, but I don't want to put the whole story in every prompt. How can I make the LLM "remember" the story?
Given that GPT-3 models have no parameter that enables memorization of past conversations, the only way at the moment to "memorize" past conversations seems to be to include them in the prompt.
If we take a look at the following example:
You are a friendly support person. The customer will ask you questions, and you will provide polite responses
Q: My phone won't start. What do I do? <-- This is a past question
A: Try plugging your phone into the charger for an hour and then turn it on. The most common cause for a phone not starting is that the battery is dead.
Q: I've tried that. What else can I try? <-- This is a past question
A: Hold the button in for 15 seconds. It may need a reset.
Q: I did that. It worked, but the screen is blank. <-- This is a current question
A:
Rule to follow:
Include prompt-completion pairs in the prompt with the oldest conversations at the top.
Problem you will face:
You will hit a token limit at some point (if you chat long enough). Each GPT-3 model has a maximum number of tokens you can pass to it; in the case of text-davinci-003, it is 4096 tokens. When you hit this limit, the OpenAI API will start to throw errors. When this happens, you need to reduce the number of past prompt-completion pairs (e.g., include only the most recent 4 past prompt-completion pairs; see the sketch after the pros and cons below).
Pros:
By including past prompt-completion pairs in the prompt, we are able to give GPT-3 models the context of the conversation.
Cons:
What if a user asks a question that relates to a conversation that occurred more than 4 prompt-completion pairs ago?
Including past prompt-completion pairs in the prompt will cost (a lot of) money!
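To make the rule concrete, here is a minimal sketch in Python using the legacy OpenAI Completions API (openai < 1.0); the support-agent preamble, the 4-pair cap, and the Q:/A: format are illustrative assumptions:

```python
# A minimal sketch of carrying conversation context in the prompt, using the
# legacy OpenAI Completions API (openai < 1.0). The preamble, the 4-pair cap,
# and the Q:/A: format are illustrative assumptions.
import openai

PREAMBLE = ("You are a friendly support person. The customer will ask you "
            "questions, and you will provide polite responses.")
MAX_PAIRS = 4       # keep only the most recent 4 prompt-completion pairs
history = []        # (question, answer) tuples, oldest first

def ask(question):
    # Oldest conversations at the top, per the rule above.
    prompt = PREAMBLE + "\n"
    for q, a in history[-MAX_PAIRS:]:
        prompt += f"Q: {q}\nA: {a}\n"
    prompt += f"Q: {question}\nA:"

    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=256,
        stop=["Q:"],  # stop before the model writes the next question itself
    )
    answer = response["choices"][0]["text"].strip()
    history.append((question, answer))
    return answer
```

Dropping all but the last MAX_PAIRS pairs is the simplest way to stay under the token limit, at the cost of the model forgetting anything older, which is exactly the con noted above.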
I'm creating a chatbot in Dialogflow in which the user is expected to enter how often they will make entries, followed by the specific times, e.g.:
Bot: How many entries will you make on that day? (Or, what is the frequency of your entries?)
User: Twice daily or two times a day.
Bot: Please enter those times.
User: 9 am and 7 pm
Now the problem is that even if I enter more than two times, Dialogflow still accepts them all.
I need to implement a check here that accepts only two times if the user enters twice daily, and three times if the frequency is thrice daily.
Is it possible to do this by manipulating entities and intents? I want to avoid doing this in the webhook.
Also, the webhook I'll be implementing is in Python, so I can't use the Node.js inline editor.
No, this can't be done in the Intents alone. Remember that an Intent represents what the user has said, not how you are using it.
As you surmise, the best place to check these values is in your webhook fulfillment. Since you already have a webhook, it isn't clear why you are avoiding this.
In terms of design, you may wish to skip asking for the frequency and just ask the user to tell you when they'll be making the entries. You can then confirm that was all that they wanted, accept more if they needed more, etc.
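That said, since your webhook is in Python anyway, the check itself is small. Here is a minimal sketch for a Dialogflow ES webhook in Flask; the parameter names frequency and times are assumptions and should match whatever your intent actually defines:

```python
# A minimal sketch of the count check in a Python (Flask) webhook for
# Dialogflow ES. The parameter names "frequency" and "times" are assumptions;
# use whatever your intent's parameters are actually called.
from flask import Flask, request, jsonify

app = Flask(__name__)

WORD_TO_COUNT = {"once": 1, "twice": 2, "thrice": 3}

@app.route("/webhook", methods=["POST"])
def webhook():
    params = request.get_json()["queryResult"]["parameters"]
    expected = WORD_TO_COUNT.get(params.get("frequency", ""), 0)
    times = params.get("times", [])  # @sys.time parameter marked "Is list"

    if len(times) != expected:
        return jsonify({"fulfillmentText":
            f"You asked for {expected} entries per day, so please enter "
            f"exactly {expected} times."})
    return jsonify({"fulfillmentText": "Great, your times are recorded."})
```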
I am using Google Dialogflow in my application to identify text responses when parsing resumes, but the response keeps changing every time.
A week ago I trained a string and got the proper response, but today, checking the same string, the response is no longer correct: it is not picking up a few of the fields.
The problem is very similar for date identification: even after training the string properly, Dialogflow keeps varying the response.
If I try the same string 5 times, the results are not the same each time; they keep changing.
This is the string I trained:
"SSCE(CBSE) from L.B.S. Public School, Pratap Nagar,Jaipur(2013-2014) with aggregate 69.20%."
Below are screenshots of the varying responses (the response the first time vs. the second time).
Dialogflow is not a parser. The training phrases you give it aren't strings that will be matched literally; they help set the pattern for a Natural Language Understanding (NLU) system, so they're tuned for how people naturally speak or type.
It is also somewhat unusual to have multiple parameters with the same name. I can easily see how the system would ignore a second occurrence when done this way. (Although you may try setting up those parameters as lists.)
I created an agent about travel.
For example:
1st human question: I want to go to Nepal
The bot answers correctly with: What is your departure date?
Human answers: 9th of April
etc.
I need to ask for the return date as well, and I don't know how to create the answers or intents so that the bot understands the difference between the departure date and the return date.
Remember that Intents represent what the user says and not how you interpret it.
You can keep track of where in the conversation you are, and thus what the date represents, by storing the state in a Context.
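For illustration, here is a minimal sketch of that idea as a Dialogflow ES webhook in Python (Flask); the context names (awaiting-departure-date, awaiting-return-date) and the date parameter are illustrative assumptions, not names Dialogflow provides:

```python
# A minimal sketch (Dialogflow ES webhook in Python/Flask) of using an output
# Context to record which date comes next. The context and parameter names
# are illustrative assumptions.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def webhook():
    body = request.get_json()
    session = body["session"]  # "projects/<project>/agent/sessions/<id>"
    result = body["queryResult"]
    # Names of the contexts currently active in the conversation.
    active = {c["name"].rsplit("/", 1)[-1]
              for c in result.get("outputContexts", [])}
    date = result["parameters"].get("date")

    if "awaiting-departure-date" in active:
        # This date is the departure date; now ask for the return date
        # and move the conversation state forward.
        return jsonify({
            "fulfillmentText": "Got it. And what is your return date?",
            "outputContexts": [{
                "name": f"{session}/contexts/awaiting-return-date",
                "lifespanCount": 2,
            }],
        })
    if "awaiting-return-date" in active:
        return jsonify({"fulfillmentText": f"Great, returning on {date}."})
    return jsonify({"fulfillmentText": "When would you like to travel?"})
```

With this approach, a single date-collecting intent can serve both questions, because the active context, not the user's phrasing, tells you which date was just provided.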
I have an intent with a simple slot-filling question, which gathers a Number-Sequence 4 characters long.
It looks like this: [screenshot of the intent and its slot-filling setup]
The problems are as follows:
There are 3 different phrases defined under the slot-filling prompt list (third screenshot above). However, only the first one is prompted, twice in a row, by the system.
After the phrase is prompted twice, the system exits. I expect it to keep prompting the 3 different phrases round-robin style, until the user gets it right.
Is the maximum number of attempts specified somewhere? Can it be changed?
Can we make it use all of the slot-filling phrases, instead of just the first one?
First, if your verification code is 4 digits long, you should train your agent with 4-digit codes only; I can see in the first screenshot that you've trained it with a 1-digit code.
Now, coming to your first question: the prompts that you have defined here are variations of each other. Api.ai will randomly select any one of them and send it as a response to the user. You cannot tell the system which one to prompt first and which second, nor can you define a round-robin cycle of prompts.
As for your second question: I tried the same setup on my end, and it keeps prompting until it receives the correct code (see the screenshots), so there is no limit on the number of attempts.