I am trying to create the following scenario:
- a task gets assigned to a user to complete
- a task gets created for the manager to reassign the user's task if necessary (don't ask, they wanted it this way)
- an email reminder needs to be sent when the task is nearing its due date
So, I thought of using EventHandlingScope for this:
I am listening for a task change on the main branch of the EventHandlingScope activity,
listening for a change to the reassign task in an event-driven branch - and if the reassign task gets activated, reassigning the first task to the specified user,
and in another event-driven branch using a delay activity to periodically check whether the user's assigned task is nearing its due date, and sending an email reminder.
So, I thought EventHandlingScope would be good for this, and it mostly is, except for a problem with the DelayActivity.
If I put a delay activity in one of the event handler branches, it fires once but never again.
Whereas if I put an OnTaskChanged activity there, it fires every time somebody changes that task.
So, is this the expected behaviour? Why doesn't the DelayActivity loop?
How could I do this differently? My thought is to use a CAG (ConditionedActivityGroup), but that looks a bit more complex...
Update: the problem with the CAG is that the whole thing blocks until the delay activity fires, even if the onChange event has already fired. This makes sense, but makes it a bit trickier to use.
Update2: I've reworded the text to make it hopefully clearer
The Solution
The fundamental activity arrangement that solves this problem is a WhileActivity containing a ListenActivity.
The listen activity is given three EventDrivenActivity branches: the first is your "User Task Completed" branch, the second is the "Manager Changed the Assigned User" branch, and the third contains a DelayActivity followed by your emailing logic.
In a ListenActivity, any of the branches can complete the activity, and when one does the other branches are canceled.
You will need to ensure that the "User Task Completed" sequence sets some value that can be tested by the while loop, so that the loop exits when a user completes the task.
When a branch other than the "User Task Completed" branch is responsible for completing the ListenActivity, the workflow will loop back to the ListenActivity and re-execute all three event-driven branches, including the one containing the DelayActivity.
Note that this is slightly different from the EventHandlingScope approach, because the "listen for user task completed" branch will get canceled and re-executed, whereas with the EventHandlingScope that wouldn't happen. IMO this is a better arrangement, since it means the user selected to do the task at the start of the Listen activity is guaranteed to be unchanged at the end (if the selection changes, the whole activity is discarded and a new one is started).
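To make the structure concrete, here is a minimal sketch of that activity tree in WF 3.x code. You would normally assemble this in the designer rather than in code; the activity names, the taskDone flag, and the reminder interval are all illustrative assumptions, and the SharePoint task activities are elided:

```csharp
using System;
using System.Workflow.Activities;

bool taskDone = false; // in a real workflow, a field set by the completed branch

var listen = new ListenActivity { Name = "listenForOutcome" };

var completed = new EventDrivenActivity { Name = "userTaskCompleted" };
// First child must be an event activity, e.g. your OnTaskChanged,
// followed by the logic that sets taskDone = true.

var reassigned = new EventDrivenActivity { Name = "managerReassigned" };
// First child: the OnTaskChanged for the manager's reassign task,
// followed by the logic that reassigns the user task.

var reminder = new EventDrivenActivity { Name = "reminder" };
reminder.Activities.Add(new DelayActivity
{
    Name = "reminderDelay",
    TimeoutDuration = TimeSpan.FromHours(12) // illustrative check interval
});
// ...followed by the due-date check and emailing logic.

listen.Activities.Add(completed);
listen.Activities.Add(reassigned);
listen.Activities.Add(reminder);

var condition = new CodeCondition();
condition.Condition += (s, e) => e.Result = !taskDone; // loop until the task completes

var loop = new WhileActivity { Name = "untilTaskDone", Condition = condition };
loop.Activities.Add(listen);
```

Because the DelayActivity is re-created on every iteration of the while loop, its timeout is set up afresh each time around, which is exactly what the EventHandlingScope arrangement was missing.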
Why the Delay only fired once in the EventHandlingScope
Effectively what you had set up is a scope that is listening for two events: one was your manager's "change assigned user" event, the other a "timer fired" event.
The way it's described in the documentation makes it sound like a loop is involved, as if once one of these activities completes it gets restarted. It's not quite like that: the scope simply continues to listen for the original event and re-runs the branch contents if another such event is fired.
In the case of the DelayActivity there is some internal "timer fired" event being listened to. When the Delay is first entered, the timeout is set up so that the timer fires at the appropriate time, and the activity then listens for that event. Once it has fired, the scope goes back to listening for a "timer fired" event; however, the initial code that set up the timeout is never re-run, hence no further "timer fired" event is forthcoming.
I know you don't want to hear this, but you would be better off creating a workflow in place of the handler, as workflows are designed to handle the time dimension much better: they are "long running". Event handlers are scoped to a moment in time: an event triggers them, and they complete an action. Not only that, but judging from what you write, if the requirements are that simple you could create a SharePoint Designer workflow, so you wouldn't even have to crack open Visual Studio.
Also, not sure if you know this, but SharePoint tasks do send out emails; these tasks send daily reminders when the task is late, so you might be able to cover your delay activity's job with out-of-the-box functionality.
Finally, if you are running in debug mode and you have hard-coded your task ID, you can only run one task per debug session; otherwise your event handler will stop when another item with the same ID is added to SharePoint. This might explain why your delay activity is blocked.
Related
We have an event sourced system using GetEventStore where the command-side and denormalisers are running in two separate processes.
I have an event handler which sends emails as the result of a user saving an application (an ApplicationSaved event), and I need to change this so that the email is sent only once for a given application.
I can see a few ways of doing this but I'm not really sure which is the right way to proceed.
1) I could look in the read store to see if there's a matching application; however, there's no guarantee that the data will be there when my email handler processes the event.
2) I could attach something to my ApplicationSaved event, maybe Revision which gets incremented on each subsequent save. I then only send the email if Revision is 1.
3) In my event handler I could load in the events from my event store for the matching customer using a repository, and kind of build up an aggregate separate from the one in my domain. It could contain a list of applications which I can use to make my decision.
My thoughts:
1) This seems a no-go as the data may or may not be in the read store
2) If the data can be derived from a stream of events then it doesn't need to be on the event itself.
3) I'm leaning towards this, but there's meant to be a clear separation between read and write sides which this feels like it violates. Is this permitted?
I can see a few ways of doing this but I'm not really sure which is the right way to proceed.
There's no perfect answer: in most cases, externally observable side effects are independent of your book of record; you're always likely to have some failure mode where an email is sent but the system doesn't know it, or where the system records that an email was sent but the send actually failed.
For a pretty good answer: you're normally going to start with a facility that sends an email and reports, as an event, whether the email was sent successfully or not. That's fundamentally an event stream; your model doesn't get to veto whether or not the email was sent.
With that piece in place, you effectively have a query to run, which asks "what emails do I need to send now?" You fold the ApplicationSaved events with the EmailSent events and compute from that what new work needs to be done.
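A sketch of that fold; the event shapes here are assumptions for illustration, not your actual contracts:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

record ApplicationSaved(Guid ApplicationId);
record EmailSent(Guid ApplicationId);

static class PendingEmails
{
    // Fold both streams into "applications saved but not yet emailed".
    public static IEnumerable<Guid> Compute(IEnumerable<object> events)
    {
        var saved = new HashSet<Guid>();
        var sent = new HashSet<Guid>();
        foreach (var e in events)
        {
            if (e is ApplicationSaved a) saved.Add(a.ApplicationId);
            else if (e is EmailSent s) sent.Add(s.ApplicationId);
        }
        return saved.Except(sent);
    }
}
```

Re-running the fold is harmless: an application whose EmailSent event is already recorded simply drops out of the pending set, which is what gives you the "send only once" behaviour without putting a Revision on the event.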
Rinat Abdullin, writing in Evolving Business Processes a la Lokad, suggested using a human operator to drive the process. Imagine building a screen that shows which emails need to be sent, with buttons where the human says to actually send them; the work of sending an email happens when the human clicks the button.
What the human is looking at is a view, or projection, which is to say a read model of the state of the system computed from the recorded events. The button click sends a message to the "write model" (the button-clicked event tells the system to try to send the email and write down the outcome).
When all of the information you need to act is included in the representation of the event you are reacting to, it is normal to think in terms of "pushing" data to the subscribers. But when the subscriber needs information about prior state, a "pull"-based approach is often easier to reason about. The delivered event simply signals the subscriber to wake up (reducing latency).
Greg Young covers push vs pull in some detail in his Polyglot Data talk.
I'm building a service using the familiar event sourcing pattern (a sketch in code follows the steps):
1. A request is received.
2. The aggregate's history is loaded.
3. The aggregate is rebuilt from its history.
4. New events are prepared, and the aggregate is updated in response to the incoming request from Step 1.
5. These events are written to the log and are made available (published) to any subscribers.
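A compact sketch of one pass through those steps; the store interface and aggregate methods are illustrative stubs, not any particular client API:

```csharp
using System.Collections.Generic;

// Illustrative stubs, not the GetEventStore client API.
interface IEventStore
{
    IEnumerable<object> Load(string streamId);                 // the history
    void Append(string streamId, IEnumerable<object> events);  // the log
}

class Aggregate
{
    public static Aggregate Replay(IEnumerable<object> history)
    {
        var a = new Aggregate();
        foreach (var e in history) a.Apply(e);  // Step 3: rebuild from history
        return a;
    }

    void Apply(object e) { /* mutate in-memory state per event */ }

    public IEnumerable<object> Decide(object command)
    {
        /* Step 4: business rules produce new events */
        yield break;
    }
}

static class RequestHandler
{
    public static void Handle(IEventStore store, string streamId, object command)
    {
        var history = store.Load(streamId);         // Step 2
        var aggregate = Aggregate.Replay(history);  // Step 3
        var newEvents = aggregate.Decide(command);  // Step 4
        store.Append(streamId, newEvents);          // Step 5, first part
        // Step 5, second part: a background process tails the log
        // and publishes everything from an offset.
    }
}
```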
In my case, Step 5 is accomplished in two parts. The events are written to the event log. A background process reads from the event log and publishes all events starting from an offset.
In some cases, I need to publish side effects in addition to events related to the aggregate. As far as the system is concerned, these are events too because they are consumed by and affect the state of other services. However, they don't affect the history of the aggregate in this service and are not needed to rebuild it.
How should I handle these in the code?
Option 1-
Don't write side-effecting events to the event log. Publish these in the main process prior to Step 5.
Option 2-
Write everything to the event log and ignore side-effecting events when the history is loaded. (These aren't part of the history!)
Option 3-
Write side-effecting events to a dummy aggregate so they are published, but never loaded.
Option 4-
?
With the first option, there may be trouble if there is a concurrency violation: if the write fails in Step 5, the side effect cannot easily be rolled back. The second option writes events that are not part of the aggregate's history, and those side-effecting events would have to be ignored when loading in Step 2. The third option feels like a hack.
Which of these seems right to you?
Name events correctly
Events are "things that happened". So if you are able to name the events that only trigger side effects in a "X happened" fashion, they become a natural part of the event history.
In my experience, this is always possible, because side effects don't happen out of thin air. Sometimes the name becomes a bit artificial, but it is still better to name events that way than to call them e.g. "send email to that client event".
In terms of your list of alternatives, this would be option 2.
Example
Instead of calling an event "send status email to customer event", call it "status email triggered event". Of course, if there is a better name for the actual trigger, use that one :-)
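In code the difference is just the name, but the name decides what belongs in the stream; the types here are illustrative:

```csharp
using System;

// A fact that happened: safe to keep in the event history (option 2).
record StatusEmailTriggered(Guid ApplicationId, DateTime At);

// By contrast, an instruction-style name describes work still to do.
// That is a command for a handler, not an event for the history:
// record SendStatusEmail(Guid ApplicationId);
```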
Option 4 - Have some other service subscribe to the events and produce the side effects, and any additional events related to them.
Events should be fine-grained.
Option 1- Don't write side-effecting events to the event log. Publish these in the main process prior to Step 5.
What if you later need this part of the history when building a new bounded context?
Option 2- Write everything to the event log and ignore side-effecting events when the history is loaded. (These aren't part of the history!)
How to ignore the effect of something which does not have any effect? :D
Option 3- Write side-effecting events to a dummy aggregate so they are published, but never loaded.
Why do you need consistency boundary around something which you will never change?
What you are talking about is the most common form of domain events, which you use to communicate with other BCs (bounded contexts). Of course you need to save them.
I have a Windows Delphi application that receives events. On each of these events I'd like to run a task in parallel (so I can be ready for the following event). There are many ways to do this through the OmniThreadLibrary's abstractions.
The issue is that part of my code needs to be executed immediately after the reception of the event (basically to "decode" the event's params), and another part needs to be executed a few seconds later, and only on the condition that nothing new has happened for the same context.
This behaviour should amount to "only store this new value if it lasts longer than 3000 ms, otherwise just cancel it".
So what I need is a way to "cancel" a running task (the one waiting 3000 ms) if a new event arrives with the same context.
I cannot use a pipeline abstraction, because when the first stage ends it automatically fills the second stage's queue without asking me whether I want to cancel that work or not.
Is that possible?
Thank you.
Sounds like you need a Dictionary<Context, Event> where the events also carry a "created" timestamp property, plus a background thread which continuously checks whether the dictionary holds event entries with elapsed time > 3000 ms.
Incoming events update the timestamp and event params, until the thread detects an entry which matches the condition and then extracts that entry from the dictionary.
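A minimal sketch of that pattern; it is written in C#, matching the Dictionary<Context, Event> notation above, but it translates directly to Delphi/OmniThreadLibrary. All names are illustrative:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;

class PendingEvent
{
    public DateTime Created;  // refreshed whenever the same context fires again
    public object Params;     // the decoded event parameters
}

class Debouncer
{
    readonly ConcurrentDictionary<string, PendingEvent> pending = new();

    public Debouncer(Action<string, object> onSettled)
    {
        // Background thread: commit entries untouched for more than 3000 ms.
        new Thread(() =>
        {
            while (true)
            {
                foreach (var kv in pending)
                {
                    if ((DateTime.UtcNow - kv.Value.Created).TotalMilliseconds > 3000
                        && pending.TryRemove(kv))  // atomic: no-op if a newer entry replaced it
                        onSettled(kv.Key, kv.Value.Params);
                }
                Thread.Sleep(100);
            }
        })
        { IsBackground = true }.Start();
    }

    // Called on every incoming event; replacing the entry restarts the 3000 ms
    // window, which effectively "cancels" the pending value for that context.
    public void OnEvent(string context, object parameters) =>
        pending[context] = new PendingEvent { Created = DateTime.UtcNow, Params = parameters };
}
```

Note that no task is ever really "cancelled": a new event for the same context just overwrites the dictionary entry, and only values that survive the full window are acted on.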
I have a requirement to schedule recurring tasks. My application is in MFC. For example, I may need to send a file to a particular location according to "From Date", "To Date", "Frequency", "Start Time" and "End Time". I thought of keeping these parameters in a list and creating a timer that elapses every second, in which I check the list for matching conditions and invoke the file transfer. But the problem is that if the list is huge I may not be able to keep up. Is there any other way to achieve this?
Create a priority queue of scheduled events, and for each "schedule" fill the queue with only the NEXT event for that schedule. Wait only for the first EVENT in the priority queue, and when it fires, look up the schedule item for that event and let it push its next event into the queue.
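A sketch of the idea, shown in C# for brevity since the pattern is language-agnostic (in MFC/C++, std::priority_queue plays the same role as PriorityQueue here, which is .NET 6+). The Schedule shape is an assumption drawn from the question:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

class Schedule
{
    public string Name;
    public DateTime From, To;   // "From Date" / "To Date" (start/end times folded in)
    public TimeSpan Frequency;

    // Next occurrence strictly after 'now', or null when the schedule is exhausted.
    public DateTime? NextAfter(DateTime now)
    {
        if (now < From) return From;
        var elapsed = (now - From).Ticks / Frequency.Ticks + 1;
        var next = From + TimeSpan.FromTicks(elapsed * Frequency.Ticks);
        return next <= To ? next : (DateTime?)null;
    }
}

class Scheduler
{
    // Min-heap keyed on due time; holds only ONE entry per schedule.
    readonly PriorityQueue<Schedule, DateTime> queue = new();

    public void Add(Schedule s, DateTime now)
    {
        if (s.NextAfter(now) is DateTime due) queue.Enqueue(s, due);
    }

    // Wait only for the head of the queue, fire it, then re-enqueue
    // that schedule's NEXT occurrence.
    public void RunOnce()
    {
        if (!queue.TryPeek(out var s, out var due)) return;
        var wait = due - DateTime.Now;
        if (wait > TimeSpan.Zero) Thread.Sleep(wait);
        queue.Dequeue();
        Console.WriteLine($"transfer file for {s.Name}");  // the actual work
        Add(s, due);  // push only the next event for this schedule
    }
}
```

Since the queue never holds more than one entry per schedule, its size stays proportional to the number of schedules rather than the number of future occurrences, which removes the "huge list scanned every second" problem.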
Please ask if anything above needs more clarification.
EDIT:
You'll trigger your event at the particular date and time in whatever way you are most comfortable with. Since you'll have only ONE event to wait for (you can copy it from the HEAD of the queue), you have multiple options, for example:
SetTimer() with one-second intervals, then compare the current time with the event time.
SetTimer() for the duration from the current time to the event time.
Start another thread and WaitForSingleObject inside it, with the delay computed as event_time - now; this will be the most difficult, since you'll have to be careful when calling anything on the main thread.
... and so on
This might seem like a silly thing to say, so I'll clarify: the problem is with the final branch in a parallel activity. It's a parallel activity with three branches, each containing a simple CreateTask, OnTaskChanged and CompleteTask. The branch containing the task that is last to complete seems to break. So every task works in its own right, but the last one encounters a problem.
Say the user clicks the final task's link to open the attached InfoPath form and submits it. Execution gets to the event handler for that OnTaskChanged, where a taskCompleted variable gets set to true, which should exit the while loop. I've successfully hit a breakpoint on that line, so I know it happens. However, the final activity in that branch, the CompleteTask, doesn't get hit.
When submit is clicked in the final form, the "operation in progress" screen stays up for quite a while before returning to the workflow status page. The task that was opened and submitted says "Not Started".
I can disable any of the branches to leave only two, but the same problem happens with whichever task is last to be completed. Earlier in the workflow I do essentially the same thing: another three-branch parallel activity with each branch containing a task. That one works correctly, which leads me to believe the problem might be with having two parallel activities in the same sequential workflow.
I've considered the possibility that it might be a correlation token problem. The token that every task branch uses is unique to that branch, and its owner activity name is set to that of the branch. It stands to reason that if the taskCompleted variable is indeed getting set to true but the while loop isn't being exited, then there's a crossed wire with the variable somewhere. However, I'd still have thought that the task status back on the workflow status page would at least say the task is in progress.
This is a frustrating showstopper of a bug for me. Any thoughts or suggestions would be much appreciated so I can investigate them.
My workflow scenario is to reassign a task to its originator after the task's due date expires, by firing a delay activity.
In my workflow I have a parallel ReplicatorActivity which is used to assign (create) different tasks to different users at the same time. Inside the replicator I used a ListenActivity: in the left branch there is an OnTaskChanged activity + ... + CompleteTask1; in the right branch there is a DelayActivity followed by a CompleteTask2 activity and a code activity to reassign the task to its originator. I'm sure about the correlation tokens on both CompleteTask activities. Everything works fine in the left branch, but an error occurs in the right branch, which contains DelayActivity --> CompleteTask.
Consider two tasks assigned to two users who have one hour to complete their tasks, but don't, so the Delay activity fires for both tasks. The first task then gets completed, but the second one raises an error.
I think the problem is with the TaskId property of the CompleteTask: it isn't updated with the second task's ID, so the activity tries to complete a task which has already been completed.