Follow up intent for NO INPUT not firing with dialogflow - dialogflow-es

I have a "book reading" action and I tried to add a follow up intent for my read intent to reprompt a user if there was no response. Following the doc https://developers.google.com/actions/assistant/reprompts - my webhook never gets called.
However, if I add the no input handler as a main intent, I do get this event!
Is this a bug, or did I miss something?

The no-input event is a little unusual, since it is handled differently internally compared to many other events. It would not surprise me if this difference requires it to be handled as a top-level Intent. You may also wish to just try setting the context in your book reading portion and having this as an input context for your no-input event.
However... this will also likely not do what you want it to do.
The no-input event will automatically terminate the conversation after three sequential events, even if you don't explicitly close the conversation.
The current way to handle this would be to use a Media Response after each portion you read. This could include a very short audio file. After the audio plays, your Action will be sent the actions_intent_MEDIA_STATUS event, which you can use to trigger the next portion to be read.

No Input will be a main intent since it can be reused by other intents. You may need to save the bot's response in a context parameter so you can check what the bot replied when handling re-prompts from this generic No Input intent.
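For illustration, a webhook could remember the last prompt in an output context parameter and reuse it when the generic No Input intent fires. Below is a minimal sketch of a Dialogflow ES fulfillment in TypeScript; the Express wiring, the intent name, and the context name last-response are assumptions for the example, not from the answers above.

```typescript
import express from "express";

const app = express();
app.use(express.json());

// Hypothetical context name used to remember what the bot last said.
const LAST_RESPONSE_CONTEXT = "last-response";

app.post("/webhook", (req, res) => {
  const intent: string = req.body.queryResult.intent.displayName;
  const session: string = req.body.session;

  if (intent === "No Input") {
    // Look up the previous prompt saved in the context parameters.
    const ctx = (req.body.queryResult.outputContexts ?? []).find(
      (c: any) => c.name.endsWith(`/contexts/${LAST_RESPONSE_CONTEXT}`)
    );
    const lastPrompt = ctx?.parameters?.text ?? "Are you still there?";
    res.json({ fulfillmentText: `Sorry, I didn't catch that. ${lastPrompt}` });
    return;
  }

  // Any other intent: answer, and remember the prompt for a possible re-prompt.
  const prompt = "Here is the next part of the story. Shall I keep reading?";
  res.json({
    fulfillmentText: prompt,
    outputContexts: [
      {
        name: `${session}/contexts/${LAST_RESPONSE_CONTEXT}`,
        lifespanCount: 5,
        parameters: { text: prompt },
      },
    ],
  });
});

app.listen(3000);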

Related

How to manage the conversation flow if we face the timeout limit (5 seconds) in Dialogflow / Api.ai?

I am making a bot on Dialogflow with a Fulfillment. Given the strict 5-second window in DialogFlow, I am getting [empty response] as a response.
I want to overcome this issue, but my web service requires more than 9 seconds to execute.
I am considering redesigning the conversation flow so that we start streaming audio until the response is processed.
Example:
User Question: xx xxx xxx xxxx xxxxx?
Response: (a) play fixed audio to keep the user engaged for a few seconds while the back end finds the response text; (b) receive the answer from the web service and save it in the session to display later.
How can I achieve this and how can I handle the Timeout issue?
You're on the right track, but there are a number of other things to consider.
First, however, keep in mind that anything that is trying to "avoid" the 5 second timeout already indicates some issues with the design. Waiting 10 seconds for a reply is a pretty long time with something as interactive as voice! Even 5 seconds, which is the timeout, is a long time. (And there is no way to change this timeout.)
So the first thing you may want to do is consider if there is a better/faster way to do what you want.
If not, the rough approach would be something like this (a code sketch follows these steps):
Get the request from the user.
Track a unique identifier, either tied to the user or tied to the session. You'll be using this as a key into some kind of database or data store.
Start the API call as part of an asynchronous request or in another thread.
Reply immediately that you're working on it, in a way that prompts the user to send another request. (See below for this issue.) You'll want to make sure that the ID is maintained as part of this session - so you'll need to save it as part of the Session data.
At this point - you're basically doing two things in parallel.
When the API call completes, it needs to save the result in the datastore against the identifier. (It can't save it in the session itself - that response was already sent back to the Assistant.)
You're also waiting for a reply from the user. When it comes in:
Check to see if you have a response saved for this session yet.
If not, then go back to step 4. (You may want to track how many times you get here and give up at some point.)
If you do have the result, reply to the user with the information.
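Put together, those steps might look roughly like the sketch below. The in-memory map and slowApiCall are placeholders standing in for a real datastore and your 9-second web service.

```typescript
// Placeholder in-memory store; in production use Redis, Firestore, etc.
const pendingResults = new Map<string, string>();

// Placeholder for the slow (9+ second) backend call.
async function slowApiCall(question: string): Promise<string> {
  // ... call the real web service here ...
  return `Answer for: ${question}`;
}

// Handle the initial request: kick off the call, reply immediately.
function handleQuestion(sessionId: string, question: string): string {
  slowApiCall(question)
    .then(answer => pendingResults.set(sessionId, answer)) // save against the session key
    .catch(() => pendingResults.set(sessionId, "Sorry, something went wrong."));

  // Reply in a way that provokes another request (Media response, stalling question, ...).
  return "Let me look that up for you...";
}

// Handle the follow-up request for the same session.
function handleFollowUp(sessionId: string): string {
  const answer = pendingResults.get(sessionId);
  if (answer === undefined) {
    // Not ready yet: stall again (and track attempts so you can give up eventually).
    return "Still working on it, one moment...";
  }
  pendingResults.delete(sessionId);
  return answer;
}
```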
There is an issue with how you reply in step 4, since you want to do something that will guarantee you another request from the person expecting an answer. There are a few possible approaches:
The most straightforward way would be to send back a Media response to play a few seconds of "hold music". This has the advantage that, when the music stops, it will send an event to Dialogflow which you can capture as an Intent and then continue with step 5.
But there are some problems:
Not all versions of the Assistant support the Media response. You will need to check to confirm the feature is supported before you use it and, if not, use another approach (see below).
The media player presented on some Assistant surfaces allows the user to stop playback, or in some situations will not correctly send an event when the audio stops. So you may never get another request in this session.
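If the Media response route fits your Action, the capability check and the follow-up event might look roughly like this, assuming the Node.js actions-on-google v2 client; the intent names and the audio URL are placeholders for the example.

```typescript
import { dialogflow, MediaObject, Suggestions } from "actions-on-google";

const app = dialogflow();

app.intent("Ask Question", conv => {
  // Only use a Media response if this surface supports it.
  if (!conv.surface.capabilities.has("actions.capability.MEDIA_RESPONSE_AUDIO")) {
    // Fall back to another approach (e.g. the conversational trick described below).
    conv.ask("I'm looking that up. While I do, would you also like hotel suggestions?");
    return;
  }

  conv.ask("Let me check on that for you.");
  conv.ask(
    new MediaObject({
      name: "Hold music",
      url: "https://example.com/hold-music.mp3", // placeholder audio file
    })
  );
  // Screen surfaces require suggestion chips alongside a Media response.
  conv.ask(new Suggestions("Cancel"));
});

// When the audio finishes, the Assistant sends actions_intent_MEDIA_STATUS,
// which you map to a Dialogflow intent (here called "Media Status") and
// use to check whether the real answer is ready yet.
app.intent("Media Status", conv => {
  conv.ask("Thanks for waiting. Here's what I found...");
});
```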
Another approach involves some more advanced conversation design tricks, so may not always be suitable for your conversation. Your response can say that you're looking up the results but then ask the user a question - possibly one that is related to other information that you will need. With their reply, you can collect this information (if you need it) and then see if you have a result yet.
In some conversations - this works really well. For example, if you're looking up flights to somewhere, while you're looking that up you might ask them if they will need a hotel or rental car, which you might ask about anyway.
Other conversations, however, don't easily have such questions. In these cases, you may need to ask something that isn't relevant while you stall for time.

How to get the output context in dialogflow?

I have created a bot with two intents, and each intent has 20 follow-up intents; after one intent completes, it automatically calls the next follow-up intent. The problem: if the user has answered 10 prompts (i.e. up to the 10th follow-up intent) and, after some time, wants to continue from the 11th follow-up intent, is there any way to do that? Currently I am saving the user's previous conversation and trying to resume from that point, but after resuming it asks the 11th follow-up intent's prompt and then goes back to the default welcome intent instead of continuing with the 12th follow-up intent.
Apart from the lifeSpan that we set on the context, there is also a time limit for contexts. After 10 minutes all contexts expire, so that might be the problem in your case.
The documentation says the timeout is 20 minutes, but after a lot of testing it was observed that the timeout is actually 10 minutes.
What you can do is store the contexts in some cache or DB after each call, and before calling Dialogflow, attach the stored contexts from the cache/DB to your query.
I have done the same thing and it is working flawlessly.
Hope it helps.
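As a rough illustration of that cache-and-reattach idea, assuming you call Dialogflow ES through the @google-cloud/dialogflow Node.js client (the in-memory cache is a stand-in for your DB):

```typescript
import { SessionsClient, protos } from "@google-cloud/dialogflow";

const sessions = new SessionsClient();
// Stand-in cache: session path -> last known output contexts.
const contextCache = new Map<string, protos.google.cloud.dialogflow.v2.IContext[]>();

async function detectIntentWithSavedContexts(projectId: string, sessionId: string, text: string) {
  const session = sessions.projectAgentSessionPath(projectId, sessionId);

  const [response] = await sessions.detectIntent({
    session,
    queryInput: { text: { text, languageCode: "en" } },
    // Re-attach the contexts saved earlier so the conversation can resume
    // even if Dialogflow's own contexts have already expired.
    queryParams: { contexts: contextCache.get(session) ?? [] },
  });

  // Save the fresh output contexts for the next call.
  contextCache.set(session, response.queryResult?.outputContexts ?? []);
  return response.queryResult;
}
```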
I don't know if you did this but I would recommend setting the lifespan of each output context to 1. You might also want to programmatically modify the outpu context if you're using the fulfillment funcitonality. You said you're already keeping track of the previous conversation so I'm assuming that setting it programmatically would be a viable solution for you.

Loading events from inside event handlers?

We have an event sourced system using GetEventStore where the command-side and denormalisers are running in two separate processes.
I have an event handler which sends emails as the result of a user saving an application (an ApplicationSaved event), and I need to change this so that the email is sent only once for a given application.
I can see a few ways of doing this but I'm not really sure which is the right way to proceed.
1) I could look in the read store to see if there's a matching application, however there's no guarantee that the data will be there when my email handler is processing the event.
2) I could attach something to my ApplicationSaved event, maybe Revision which gets incremented on each subsequent save. I then only send the email if Revision is 1.
3) In my event handler I could load the events for the matching customer from my event store using a repository, and build up an aggregate separate from the one in my domain. It could contain a list of applications which I can use to make my decision.
My thoughts:
1) This seems a no-go as the data may or may not be in the read store
2) If the data can be derived from a stream of events then it doesn't need to be on the event itself.
3) I'm leaning towards this, but there's meant to be a clear separation between read and write sides which this feels like it violates. Is this permitted?
I can see a few ways of doing this but I'm not really sure which is the right way to proceed.
There's no perfect answer - in most cases, externally observable side effects are independent of your book of record; you're always likely to have some failure mode where an email is sent but the system doesn't know, or where the system records that an email was sent but there was actually a failure.
For a pretty good answer: you're normally going to start with a facility that sends an email and reports, as an event, whether the email was sent successfully or not. That's fundamentally an event stream - your model doesn't get to veto whether or not the email was sent.
With that piece in place, you effectively have a query to run, which asks "what emails do I need to send now?" You fold the ApplicationSaved events with the EmailSent events, and compute from that what new work needs to be done.
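A sketch of that fold, with made-up event shapes just for illustration:

```typescript
// Illustrative event shapes, not the real ones from the system above.
interface ApplicationSaved { type: "ApplicationSaved"; applicationId: string; }
interface EmailSent { type: "EmailSent"; applicationId: string; }
type DomainEvent = ApplicationSaved | EmailSent;

// Fold both streams into "which applications still need an email?"
function emailsToSend(events: DomainEvent[]): string[] {
  const saved = new Set<string>();
  const emailed = new Set<string>();
  for (const e of events) {
    if (e.type === "ApplicationSaved") saved.add(e.applicationId);
    if (e.type === "EmailSent") emailed.add(e.applicationId);
  }
  return Array.from(saved).filter(id => !emailed.has(id));
}
```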
Rinat Abdullin, writing Evolving Business Processes a la Lokad, suggested using a human operator to drive the process. Imagine building a screen, that shows what emails need to be sent, and then having buttons where the human says to actually do "it", and the work of sending an email happens when the human clicks the button.
What the human is looking at is a view, or projection, which is to say a read model of the state of the system computed from the recorded events. The button click sends a message to the "write model" (the button clicked event tells the system to try to send the email and write down the outcome).
When all of the information you need to act is included in the representation of the event you are reacting to, it is normal to think in terms of "pushing" data to the subscribers. But when the subscriber needs information about prior state, a "pull" based approach is often easier to reason about. The delivered event signals the projector to wake up (reducing latency).
Greg Young covers push vs pull in some detail in his Polyglot Data talk.

How to solve persistence delay or error while handling event in CQRS and ES?

I am trying to implement another DDD bounded context with CQRS and ES.
I wonder: suppose there is a CreateUserCommand that creates a User in my domain model (not a word about saving), and it then fires a UserCreatedEvent.
I have two event handlers for that event:
PersistUserEventHandler (updates state of app) and
SendWelcomeEmailEventHandler (sends welcome email to user)
Now, I know, that:
Order of processing event in Event Handlers should not matter
Saving state should be detail, because source of truth is in my event store.
But what if I do not want to send the welcome email until my read model is fully updated? For example, what if the process is delayed or some error occurs and I am not able to persist that user into the read model right now? Then I do not want to send the welcome email yet, because if the user clicked, say, the link to his profile in the mail, he would see "user does not exist".
I have seen people persist changes through a repository directly in command handlers (which would solve this problem), but that does not make sense with Event Sourcing, because I want to be able to replay all events (with only the persisting event handlers enabled, to prevent all other side effects) and get the actual state of the application in the persistence layer.
Or should I listen to UserCreatedEvent only with the event handler that actually persists it into the read model, and then raise another event, CreatedUserSavedEvent, from that handler, so that all the emails etc. are sent by its handlers?
I suppose not, because that sounds like event hell, and if I inject the EventBus into an event handler I run into a circular reference problem, which is just a symptom of violating the rule that every dependency should point down to lower components of my system and not the other way around.
So, how is this usually solved, or am I missing something?
PersistUserEventHandler (updates state of app)
You might be mistaking Read Models for a homogeneous whole that accurately represents the current state of an application, i.e. a second source of absolute truth besides the event log.
I tend to see them more as a bunch of partial, opinionated parcels of state that may not all be updated at the same time and may reflect different truths.
I don't recommend taking read models as a source of data in another context than the use case they were designed for. In your example, SendWelcomeEmail should probably not rely on the User read model but only on the data contained in the UserCreated event.
Now you can share code between read model projectors and other types of event handlers to avoid duplication, but sharing data seems risky.
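As a concrete illustration of relying only on the event data, a welcome-email handler might look like the sketch below; the event fields and the Mailer interface are made up for the example.

```typescript
// Illustrative event: it carries everything the email needs.
interface UserCreated {
  userId: string;
  email: string;
  displayName: string;
}

// Placeholder mailer abstraction.
interface Mailer {
  send(to: string, subject: string, body: string): Promise<void>;
}

// The handler never touches the User read model - it only uses the event payload.
async function sendWelcomeEmail(event: UserCreated, mailer: Mailer): Promise<void> {
  await mailer.send(event.email, "Welcome!", `Hi ${event.displayName}, thanks for signing up.`);
}
```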
If users have random UUIDs then it should not be a problem. If a user arrives at a URL and the read model is not up to date, you could show a "loading in progress, please wait" message.
If you really want to know whether the user really exists - for example, you want to distinguish "user does not exist" from "read model is not synchronized yet" - then you could send a special command that doesn't generate any events (or just test a command, if your command dispatcher supports dry running of commands) and throw an exception if the user does not exist.
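Such an "existence check" command can be as simple as the sketch below; the event store interface is a placeholder.

```typescript
// Placeholder event store interface.
interface EventStore {
  load(streamId: string): Promise<unknown[]>;
}

// A query-style command: it appends no events, it only distinguishes
// "user does not exist" from "read model not synchronized yet".
async function assertUserExists(userId: string, store: EventStore): Promise<void> {
  const history = await store.load(userId);
  if (history.length === 0) {
    throw new Error(`User ${userId} does not exist`);
  }
}
```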

Event Sourcing with Side-Effects

I'm building a service using the familiar event sourcing pattern:
A request is received.
The aggregate's history is loaded.
The aggregate is rebuilt (from its history).
New events are prepared and the aggregate is updated in response to the incoming request from Step 1.
These events are written to the log, and are made available (published) to any subscribers.
In my case, Step 5 is accomplished in two parts. The events are written to the event log. A background process reads from the event log and publishes all events starting from an offset.
In some cases, I need to publish side effects in addition to events related to the aggregate. As far as the system is concerned, these are events too because they are consumed by and affect the state of other services. However, they don't affect the history of the aggregate in this service and are not needed to rebuild it.
How should I handle these in the code?
Option 1-
Don't write side-effecting events to the event log. Publish these in the main process prior to Step 5.
Option 2-
Write everything to the event log and ignore side-effecting events when the history is loaded. (These aren't part of the history!)
Option 3-
Write side-effecting events to a dummy aggregate so they are published, but never loaded.
Option 4-
?
In the first option, there may be trouble if there is a concurrency violation: if the write fails in Step 5, the side effect cannot be easily rolled back. The second option writes events that are not part of the aggregate's history; when loading in Step 2, these side-effecting events would have to be ignored. The third option feels like a hack.
Which of these seems right to you?
Name events correctly
Events are "things that happened". So if you are able to name the events that only trigger side effects in a "X happened" fashion, they become a natural part of the event history.
In my experience, this is always possible, because side-effects don't happen out of thin air. Sometimes the name becomes a bit artificial, but it is still better to name events that way than to call them e.g. "send email to that client event".
In terms of your list of alternatives, this would be option 2.
Example
Instead of calling an event "send status email to customer event", call it "status email triggered event". Of course, if there is a better name for the actual trigger, use that one :-)
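As a tiny illustration (names invented for the example), the difference is only in how the event is modeled:

```typescript
// Avoid: an event named like an instruction to do something.
interface SendStatusEmailToCustomer {
  customerId: string;
}

// Prefer: an event named as something that happened; a subscriber
// reacts to it by actually sending the email.
interface StatusEmailTriggered {
  customerId: string;
  statusReportId: string;
  triggeredAt: string; // ISO timestamp
}
```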
Option 4 - Have some other service subscribe to the events and produce the side effects, and any additional events related to them.
Events should be fine-grained.
Option 1- Don't write side-effecting events to the event log. Publish these in the main process prior to Step 5.
What if you later need this part of the history by building a new bounded context?
Option 2- Write everything to the event log and ignore side-effecting events when the history is loaded. (These aren't part of the history!)
How to ignore the effect of something which does not have any effect? :D
Option 3- Write side-effecting events to a dummy aggregate so they are published, but never loaded.
Why do you need consistency boundary around something which you will never change?
What you are talking about is the most common form of domain events, which you use to communicate with other BCs. Of course you need to save them.
