When reviewing cases in a work queue, the message:
Automatically set exception at clean up
appears as the exception reason.
Why has Blue Prism set the case as an exception?
The "Automatically set exception at clean up" happens when you the process finishes or gets terminated without unlocking the item queue that is being processed.
I imagine that you are getting data from the Work Queue using an action like "Get Next Item". Every time you get an item from the queue, BP locks it to prevent other bots from processing it at the same time.
To solve your problem, use "Mark Completed" if you have finished processing the item, or "Unlock Item" if you want to come back to it later.
"Automatically set exception at cleanup" appears when you have picked up a case and not declared it as completed or exception WHILE THE PROCESS FINISHES WITHOUT ANY FURTHER ACTION ON THE QUEUE ITEM. In other words, if you leave the queue item to be in the locked state and your process execution finishes, it will still go in the said reason.
Well, the clean up phase is what happens after the process is done. Two important things are done then: cleaning up objects and cleaning up the queue.
For every object used in the process, the "finalize" action is executed. It's a rarely used option; I've never seen it used.
During the cleaning of the queue, all locked items are marked with the exception you're asking about.
So, my advice is to investigate how it was possible that an item has been left behind.
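To make that lifecycle concrete, here is a toy Python model (mine, not Blue Prism's actual API) of what the answers describe: Get Next Item locks an item, and anything still locked when the session ends is flagged at clean-up:

```python
class WorkQueue:
    """Toy stand-in for a Blue Prism work queue (illustrative only)."""

    def __init__(self, items):
        self.items = {i: "Pending" for i in items}
        self.locked = set()

    def get_next_item(self):
        for i, status in self.items.items():
            if status == "Pending" and i not in self.locked:
                self.locked.add(i)   # locked so no other bot takes it
                return i
        return None

    def mark_completed(self, item):
        self.items[item] = "Completed"
        self.locked.discard(item)

    def mark_exception(self, item, reason):
        self.items[item] = "Exception: " + reason
        self.locked.discard(item)

    def clean_up(self):
        # What happens at the end of a session: every still-locked item
        # gets the exception reason from the question.
        for item in list(self.locked):
            self.mark_exception(item, "Automatically set exception at clean up")

q = WorkQueue(["case-1", "case-2"])
first = q.get_next_item()
q.mark_completed(first)    # resolved correctly
q.get_next_item()          # "case-2" is locked but never resolved...
q.clean_up()               # ...so clean-up flags it
print(q.items)
```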
With access to a trio.Nursery instance nursery, how may I print the state of all nursery.child_tasks, specifically which have not yet exited?
I'm not understanding, reading the docs and the trio NurseryManager code:
how "nested child" tasks might be relevant. I see [direct] children removed when a task completes with _child_finished(), but I don't understand the use of _nested_child_finished().
the window of time between one task failing (raising) and all tasks completing. Being cooperative, I would expect to be able to find "active" tasks in the window shortly after one failure, in both states
"failed, exception captured"
and "running, has not handled Cancelled yet"
"Nested child" is our internal name for "the block of code that's actually part of the parent task, that you typed inside the async with open_nursery():. This code runs a bit differently than a real child task, but it has similar semantics (the nursery won't exit until it exits, if it raises an exception it cancels the real child tasks and vice-versa, etc.), so that's why we call it that.
You're correct that there's a window of time between one task raising and the other tasks completing. When a task raises, the other tasks get cancelled, but this means injecting trio.Cancelled exceptions, waiting for those exceptions to unwind, etc., so it might take some time. (You can check whether the nursery has been cancelled with nursery.cancel_scope.cancel_called.)
During this period, nursery.child_tasks will contain only the tasks that are still running (i.e., still processing their cancellation). Currently Trio doesn't keep track of "failed tasks" – the nursery keeps a list of the exception objects themselves, so it can re-raise them, but it doesn't track which tasks they came from, and there's currently no API to introspect the list of pending exceptions.
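Here is a small runnable sketch of that window (task names, timings, and the shielded "watcher" are my own devices for observing it):

```python
import trio

async def fails_fast():
    await trio.sleep(0.01)
    raise RuntimeError("boom")

async def slow_worker(i):
    try:
        await trio.sleep(30)
    except trio.Cancelled:
        # Simulate cleanup that takes a while: the task stays "running"
        # (and in nursery.child_tasks) while it unwinds.
        with trio.CancelScope(shield=True):
            await trio.sleep(0.5)
        raise

async def watcher(nursery):
    # Shielded so this task can look around during the unwind window
    # instead of being cancelled along with the rest.
    with trio.CancelScope(shield=True):
        await trio.sleep(0.1)
    print("cancel_called:", nursery.cancel_scope.cancel_called)  # True
    for task in nursery.child_tasks:
        print("still running:", task.name)

async def main():
    try:
        async with trio.open_nursery() as nursery:
            for i in range(3):
                nursery.start_soon(slow_worker, i)
            nursery.start_soon(watcher, nursery)
            nursery.start_soon(fails_fast)
    except Exception as exc:
        # Depending on the trio version this is the RuntimeError itself
        # or an ExceptionGroup wrapping it.
        print("nursery re-raised:", type(exc).__name__)

trio.run(main)
```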
Zooming out: Trio's general philosophy is that when thinking about code organization, functions are more useful than tasks. So it really de-emphasizes tasks: outside of debugging/introspection/low-level-plumbing, you never encounter a "task object" or give a task a name. (See also Go's take on this.) Depending on what you're doing, you might find it helpful to step back and think if there's a better way to keep track of what operations you're doing and how they're progressing.
I am launching the page https://www.nasdaq.com/ and then waiting 5 seconds for it to load. After this I want to check whether the page exists and has loaded, and throw an exception if it hasn't. How and when should I use exception handling in this scenario? See the image attached. I tried putting recover, resume, and exception stages on the launch stage as well as on the wait stage, but I don't know where to put the exception.
First of all, don't use arbitrary (fixed) wait stages unless it's absolutely necessary. Use intelligent wait stages instead, which means waiting for something to happen and then proceeding, or throwing an exception if the wait times out. In your case, you can use an intelligent wait stage to check, for example, whether the website has loaded.
When it comes to throwing an exception, in your case I would simply launch, then wait for the document to be loaded, and throw an exception if the wait times out. See the diagram below.
Also, I would leave the retry logic (recover - resume) to the process layer. Objects should ideally contain small reusable actions and no business logic, so decisions about whether and how many times to retry should be taken in the process.
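Blue Prism expresses this with visual stages, but the same launch / wait-for-condition / throw-on-timeout pattern looks roughly like this as a Python/Selenium analogy (the 10-second timeout and the title check are illustrative choices, not from the question):

```python
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException

driver = webdriver.Chrome()
driver.get("https://www.nasdaq.com/")
try:
    # Intelligent wait: block until a concrete condition holds,
    # up to 10 seconds, instead of sleeping a fixed 5 seconds.
    WebDriverWait(driver, 10).until(EC.title_contains("Nasdaq"))
except TimeoutException:
    # The Blue Prism equivalent: the Wait stage's timeout branch
    # leads to an Exception stage.
    raise RuntimeError("Page did not load within 10 seconds")
finally:
    driver.quit()
```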
I have a Windows Delphi application that receives events. On each of these events I'd like to run a task in parallel (so I can be ready for the following event). There are many ways to do this through the OmniThreadLibrary's abstractions.
The issue is that part of my code needs to be executed immediately after the reception of the event (basically to "decode" the event's params), and another part needs to be executed a few seconds later, and only if nothing new has happened for the same context in the meantime.
This behaviour should amount to "only store this new value if it lasts longer than 3000 ms, otherwise just discard it".
So what I need is to "cancel" a running task (the one waiting 3000 ms) if a new event arrives with the same context.
I cannot use the pipeline abstraction, because when the first stage ends it automatically fills the second stage's queue without asking me whether I want to cancel it or not.
Is that possible?
Thank you.
Sounds like you need a Dictionary<Context, Event> where the events also carry a "created" timestamp property, and a background thread which continuously checks whether this dictionary contains event entries with elapsed time > 3000 ms.
Incoming events update the timestamp and event params, until the thread detects an entry which matches the condition and extracts that entry from the dictionary.
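A sketch of that arrangement in Python rather than Delphi (the pattern is language-agnostic; the class name, the 0.1-second polling interval, and store() are my own illustrative choices):

```python
import threading
import time

class Debouncer:
    """Store a value only if no newer event for the same context
    arrives within `delay` seconds (the "lasts longer than 3000 ms" rule)."""

    def __init__(self, delay=3.0):
        self.delay = delay
        self.pending = {}          # context -> (timestamp, params)
        self.lock = threading.Lock()
        threading.Thread(target=self._worker, daemon=True).start()

    def on_event(self, context, params):
        # Runs immediately on each event: update the entry, resetting the clock.
        with self.lock:
            self.pending[context] = (time.monotonic(), params)

    def _worker(self):
        # Background thread: commit entries whose timestamp is old enough.
        while True:
            now = time.monotonic()
            with self.lock:
                ready = [c for c, (t, _) in self.pending.items()
                         if now - t >= self.delay]
                for c in ready:
                    _, params = self.pending.pop(c)
                    self.store(c, params)
            time.sleep(0.1)

    def store(self, context, params):
        print(f"stored {context}: {params}")

d = Debouncer(delay=3.0)
d.on_event("ctx1", "v1")
time.sleep(1)
d.on_event("ctx1", "v2")   # resets the 3 s clock; v1 is discarded
time.sleep(3.5)            # nothing newer arrived, so v2 is stored
```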
I know that if a worker fails to process a message off the queue, the message becomes visible again and you have to code against this (be idempotent). But is it possible for a worker to dequeue a message twice? Based on my logging, I seem to be seeing this behavior and I'm not sure why. I'm even deleting the message before going to get the next one, and it seems like I got it again.
Yes, you can dequeue the same message twice. This can happen for two reasons:
1. Worker A dequeues Message B and the invisibility timeout expires. Message B becomes visible again and Worker C dequeues it, invalidating Worker A's pop receipt. Worker A finishes its work, goes to delete Message B, and an error is thrown. This is the most common case.
2. In certain conditions (very frequent queue polling) you can get the same message twice from a GetMessage call. This is a type of race condition that, while rare, does occur: Workers A and B poll very quickly, hit the queue simultaneously, and both get the same message. This used to be much more common (SDK 1.0 time frame) under high-polling scenarios, but it has become much rarer in later storage updates (I can't recall seeing it recently).
That being said, if you only have one worker popping messages, then you are queueing the message twice yourself; reasons 1 and 2 only happen when you have more than one worker.
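A minimal sketch of reason 1, using the current Python azure-storage-queue SDK (the connection string, queue name, and process() are placeholders; the question predates this SDK version):

```python
from azure.storage.queue import QueueClient
from azure.core.exceptions import HttpResponseError

queue = QueueClient.from_connection_string("<connection-string>", "work-items")

def process(body):
    print("processing:", body)   # stand-in for the real work

for msg in queue.receive_messages(visibility_timeout=30):
    try:
        # If this takes longer than 30 seconds, the message becomes
        # visible again, another worker can receive it, and our
        # pop receipt is invalidated (reason 1 above).
        process(msg.content)
        queue.delete_message(msg)   # uses msg.id + msg.pop_receipt
    except HttpResponseError:
        # Delete failed: someone else received the message after our
        # visibility timeout expired. Processing must be idempotent.
        pass
```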
You shouldn't be able to dequeue it twice. And if I recall things properly, even deleting it twice shouldn't be possible because the pop receipt should change after the second dequeue and lock.
As SilverNinja suggests, I'd look to see if perhaps the message was inadvertently queued twice.
Do you have more than one worker role?
It is possible (especially with processes that take a while) that the queue item's visibility timeout could expire before your role has finished processing it. In this case another identical role could pick up the same message (which is effectively what you need to allow for: you do not want it to be a problem if the same message is processed multiple times).
At this point the first role will finish and delete the message, and then the other role that picked it up after the timeout will finish and attempt to delete it as well. Off the top of my head I don't recall exactly what happens when a role attempts to delete an already deleted message.
I am trying to create the following scenario:
a task gets assigned to the user to complete
a task gets created for the manager to reassign the user's task if necessary (don't ask, they wanted it this way)
an email reminder needs to be sent when the task is nearing its due date
So, I thought of using EventHandlingScope for this:
I am listening for a task change on the main branch of the EventHandlingScope activity,
listening for the reassign-task change in an event-driven branch, and if the reassign task gets activated, reassigning the first task to the user specified,
and in another event-driven branch using a delay activity to check periodically whether the user's task is nearing its due date and send an email reminder.
So, I thought EventHandlingScope would be good for this, and it mostly is, except for the problem with the DelayActivity.
If I put a delay activity in one of the event handler branches, it fires once but not again.
Whereas if I put an OnTaskChanged activity there, it fires every time somebody changes that task.
So, is this the expected behaviour? Why doesn't DelayActivity loop?
How could I do this differently? My thought is a CAG, but this looks a bit more complex...
Update: the problem with the CAG is that the whole thing blocks until the delay activity fires, even if the onChange event has fired. This makes sense, but makes it a bit trickier to use.
Update 2: I've reworded the text to hopefully make it clearer.
The Solution
The fundamental activity arrangement that solves this problem is a WhileActivity containing a ListenActivity.
The listen activity is given three EventDrivenActivity branches. The first is your "user task completed" branch, the second is the "manager changed the assigned user" branch, and the third contains a DelayActivity followed by your emailing logic.
In a ListenActivity, any of the branches can complete the activity, and when one does, the activities in the other branches are cancelled.
You will need to ensure that the "user task completed" sequence sets some value that can be tested by the while loop, so that the loop exits when a user completes the task.
When a branch other than the "user task completed" branch is responsible for completing the ListenActivity, the workflow will loop back to the ListenActivity and re-execute all three event-driven branches, including the one containing the DelayActivity.
Note that this is slightly different from the EventHandlingScope approach, because the "listen for user task completed" branch will get cancelled and re-executed, whereas with the EventHandlingScope that wouldn't happen. IMO this is a better arrangement, since it means the user selected to do the task at the start of the ListenActivity is guaranteed to be unchanged at the end (because if the assignment changes, the whole activity is discarded and a new one is started).
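The same control flow sketched in Python rather than Workflow Foundation (the queue-with-timeout stands in for the ListenActivity, and the timeout branch for the DelayActivity; all names are mine):

```python
import queue
import threading
import time

events = queue.Queue()     # feed ("completed",) or ("reassigned", user)
REMINDER_DELAY = 3.0       # stands in for the DelayActivity timeout

def run_task_workflow():
    done = False
    while not done:                        # the WhileActivity
        try:
            # The ListenActivity: whichever comes first wins, and the
            # timeout (delay branch) is re-armed on every pass.
            kind, *args = events.get(timeout=REMINDER_DELAY)
        except queue.Empty:
            print("delay branch fired: sending reminder email")
            continue
        if kind == "completed":
            done = True                    # value tested by the while loop
        elif kind == "reassigned":
            print("reassigning task to", args[0])

worker = threading.Thread(target=run_task_workflow)
worker.start()
time.sleep(4)                              # one reminder fires at ~3 s
events.put(("reassigned", "alice"))        # re-enters the listen, re-arms delay
events.put(("completed",))                 # while-condition ends the loop
worker.join()
```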
Why the Delay only fired once in the EventHandlingScope
Effectively, what you had set up is a scope that is listening for two events. One was your manager's "change assigned user" event; the other was a "timer fired" event.
Now, the way it's described in the documentation, it sounds like some loop is involved, as if once one of these activities completes it is restarted. However, it's not quite like that: the scope actually just continues to listen for the original event and will re-run the branch contents if another such event is fired.
In the case of the DelayActivity, there is some internal "timer fired" event that is being listened to. When the Delay is first entered, the timeout is set up so that the timer will fire at the appropriate time, and the branch then listens for that event. Once it has fired, the scope returns to listening for a "timer fired" event; however, the initial code that set up the timeout is never re-run, hence no further "timer fired" event is forthcoming.
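The same one-shot behaviour reproduced with the Python standard library (illustrative only: the timer is armed once, and nothing re-arms it, so a second wait never succeeds):

```python
import threading

timer_fired = threading.Event()
threading.Timer(1.0, timer_fired.set).start()   # armed once, on "entry"

timer_fired.wait()                     # first listen: fires after ~1 s
print("delay branch ran")

timer_fired.clear()                    # go back to listening...
print("fired again?", timer_fired.wait(timeout=3))   # False: nothing re-arms
```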
I know you don't want to hear this, but you would be better off creating a workflow in place of the handler, as workflows are designed to handle the time dimension much better: they are "long running". Event handlers are scoped more for situations where a moment-in-time event triggers them and they then complete an action. Not only that, but judging from what you write, if the requirements are that simple you could create a SharePoint Designer workflow, so you wouldn't even have to crack open Visual Studio.
Also, not sure if you know this, but SharePoint tasks do send out emails: these tasks will send daily reminders when a task is late, so you might be able to address your delay activity with out-of-the-box functionality.
Finally, if you are running in debug mode and have hard-coded your task ID, you can only run one task per debug session; otherwise your event handler will stop when another item with the same ID is added to SharePoint. This might explain why your delay activity is blocked.