Can we pick an item from a queue from another process? - Blue Prism

I have two processes and one queue.
The first item has already been picked from the queue by one process, but it is not yet completed or marked as an exception.
Can we directly pick the second item from the queue with another process?
What is going to happen to the first item? Will it be marked as an exception?

A queue item that has a lock on it is considered to be in the process of being worked. If a new process (or the same process) runs 'Get Next Item' again, it will lock the next item. The first item will remain locked, and only when it is marked complete or failed, or when the working process crashes, will it get a new status.
You can test this easily by populating your queue with 5 items, running 'Get Next Item', then running 'Get Next Item' again in the same process. You will now have two locked items. When you reset your run, BP will set both items to failed, as they never got a new status before you crashed/restarted your process.
It's generally not a good idea to use the same queue for different processes. If you use the same process and run it multiple times, that's absolutely acceptable, but then you need to be sure all actions are thread safe. A typical threading error is two processes trying to write to the same file at the same time.

Related

"Automatically set exception at clean up" exception showing in Control Room

When reviewing cases in a work queue, the message:
Automatically set exception at clean up
appears as the exception reason.
Why has Blue Prism set the case as an exception?
The "Automatically set exception at clean up" happens when you the process finishes or gets terminated without unlocking the item queue that is being processed.
I imagine that you are getting data from the Work Queue using and action like "Get next item". Every time that you get an item from the queue BP locks it to prevent other bot from processing it at the same time.
To solve your problem, use the "Mark Completed" if you finished processing that item, or the "Unlock Item" if you want to keep working with it later.
"Automatically set exception at cleanup" appears when you have picked up a case and not declared it as completed or exception WHILE THE PROCESS FINISHES WITHOUT ANY FURTHER ACTION ON THE QUEUE ITEM. In other words, if you leave the queue item to be in the locked state and your process execution finishes, it will still go in the said reason.
Well, the clean-up phase is the phase that happens after the process is done. Two important things happen then: cleaning up the objects and cleaning up the queue.
For every object used in the process, the "finalize" action is executed. It's a rarely used option - I've never seen it used.
During the cleaning of the queue, all locked items are marked with the exception that you're asking about.
So, my advice is to investigate how an item came to be left behind in the locked state.

Cancel a scheduled task

I have a Windows Delphi application that receives events. On each of these events I'd like to run a task in parallel, so I can be ready for the following event. There are many ways to do this through the OmniThreadLibrary's abstractions.
The issue is that part of my code needs to be executed immediately after the reception of the event (basically to "decode" the event's params), and another part needs to be executed a few seconds later, and only on the condition that nothing new has happened for the same context.
This behaviour should amount to "only store this new value if it lasts longer than 3000 ms, otherwise just cancel it".
So what I need is to "cancel" a running task (the one waiting 3000 ms) if a new event arrives with the same context.
I cannot use a pipeline abstraction, because when the first stage ends it automatically fills the second stage's queue without asking me whether I want to cancel it or not.
Is that possible?
Thank you.
Sounds like you need a Dictionary<Context, Event> where the events also carry a "created" timestamp property, and a background thread which continuously checks whether this dictionary contains event entries with elapsed time > 3000 ms.
Incoming events update the timestamp and event params, until the thread detects an entry that matches the condition and extracts it from the dictionary.
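The question is about Delphi and the OmniThreadLibrary, but the scheme itself is small enough to sketch; here is a rough C# version (all class and member names are invented), with a plain lock and a 100 ms polling loop standing in for whatever the real background thread would use:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;

class PendingEvent
{
    public DateTime LastSeenUtc;   // refreshed every time the same context fires again
    public string Params;          // whatever was decoded from the incoming event
}

class EventDebouncer
{
    private static readonly TimeSpan Quiet = TimeSpan.FromMilliseconds(3000);
    private readonly Dictionary<string, PendingEvent> _pending = new Dictionary<string, PendingEvent>();
    private readonly object _gate = new object();
    private readonly Action<string, string> _store;   // called only for values that "survived" 3000 ms

    public EventDebouncer(Action<string, string> store)
    {
        _store = store;
        new Thread(Watch) { IsBackground = true }.Start();
    }

    // Called immediately on every incoming event: decode first, then overwrite the entry
    // for this context so the 3000 ms window restarts.
    public void OnEvent(string context, string decodedParams)
    {
        lock (_gate)
            _pending[context] = new PendingEvent { LastSeenUtc = DateTime.UtcNow, Params = decodedParams };
    }

    private void Watch()
    {
        while (true)
        {
            lock (_gate)
            {
                var due = _pending.Where(kv => DateTime.UtcNow - kv.Value.LastSeenUtc >= Quiet).ToList();
                foreach (var kv in due)
                {
                    _pending.Remove(kv.Key);
                    _store(kv.Key, kv.Value.Params);   // nothing newer arrived for this context
                }
            }
            Thread.Sleep(100);   // polling granularity; plenty for a 3000 ms window
        }
    }
}
```

Cancelling here is just overwriting the dictionary entry, so no task ever has to be aborted mid-wait.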

Worker processes called in order azure

If multiple worker processes have to be called in order, each one after every task of the previous worker is done (there is a queue containing pointers to blobs, and every worker has multiple instances; please see my previous questions), how should this be done?
Will the Azure fabric do this automatically? Or is there a way to set this in the config file?
You just follow the same process that you've already got, but with more layers. If worker 1 reads something from queue 1, and it needs to let worker 2 know that it's time to start processing the same file, worker 1 simply puts a message in queue 2.
Edit: OK, let me see if I fully understand what you're after here. It sounds like what you have is a batch of files that need to go through several processes, but they can't go on to the next step of the process until they've all finished going through the previous step.
If that is the case then, no, there is nothing in Azure that will do that for you automatically.
Because of this, if possible I'd rework my workers so that each file could just be sent on without worrying about what state the other files were in.
If that is not possible, then you need some way of monitoring which files have been completed and which ones are still pending. One way to do this (and hopefully you can expand on this) is to have the code that creates the batch create a progress row in a table somewhere (SQL Azure or Azure Tables, it doesn't really matter) for each file, send a message to worker 1 and start a background task to monitor this table.
When worker 1 finishes processing a file, it updates the relevant row in the monitoring table to say, "Worker 1 finished".
The background task that was created above waits until all of the rows have "Worker 1 finished" set to true, then creates the messages for Worker 2 and starts watching the "Worker 2 finished" flag. Rinse and repeat for as many worker steps as you have.
When all steps are finished, you'll probably want the background task to clean up this table and also have some sort of timeout in case a message gets lost somewhere.
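To make the flow concrete, here is a rough in-memory sketch of that monitoring loop. It is not the poster's code: the ConcurrentDictionary stands in for the progress table (one entry per file), the ConcurrentQueue for the Azure queue feeding worker 2, and every name is invented.

```csharp
using System;
using System.Collections.Concurrent;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

class FileProgress
{
    public bool Worker1Finished;   // set by worker 1 when it has processed the file
    public bool Worker2Finished;   // set by worker 2, and so on for further steps
}

class BatchMonitor
{
    private readonly ConcurrentDictionary<string, FileProgress> _progress;
    private readonly ConcurrentQueue<string> _worker2Queue;

    public BatchMonitor(ConcurrentDictionary<string, FileProgress> progress,
                        ConcurrentQueue<string> worker2Queue)
    {
        _progress = progress;
        _worker2Queue = worker2Queue;
    }

    // The background task started by the code that created the batch.
    public async Task RunAsync(TimeSpan pollInterval, CancellationToken ct)
    {
        // Wait until every file has "Worker 1 finished" set...
        while (!_progress.Values.All(p => p.Worker1Finished))
            await Task.Delay(pollInterval, ct);

        // ...then create the messages for worker 2 and start watching its flag.
        foreach (string file in _progress.Keys)
            _worker2Queue.Enqueue(file);

        while (!_progress.Values.All(p => p.Worker2Finished))
            await Task.Delay(pollInterval, ct);

        // All steps done - this is where the progress rows would be cleaned up,
        // and a real version would also give up after some overall timeout.
    }
}
```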
Although what #knightpfhor is suggesting would do the trick, I would try to go about this in a simpler kind of way, without referencing the names of workers :-)
Specifically, if there is a way you already know how many docs need to be processed, I would first create N rows in a table, each holding some info relevant to the current batch and each having its partition key set to the batch id. I'd then put N messages in my queue and let the worker processes pick them up. When each worker is done, it deletes the corresponding row in the table as well. A monitoring process would simply know that a batch has started and do a count every once in a while (or, if it is not critical, the worker itself would do the count after it finishes removing its row) and spawn a new message in the relevant queue for the next worker role to process.
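A minimal sketch of that count-and-delete idea, again with in-process collections standing in for the Azure table and queue, and with invented names:

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Linq;

class BatchCoordinator
{
    // One entry per document, keyed by (batch id, doc id) - the stand-in for the table rows.
    private readonly ConcurrentDictionary<(string batchId, string docId), byte> _rows
        = new ConcurrentDictionary<(string batchId, string docId), byte>();
    private readonly ConcurrentQueue<string> _nextStageQueue = new ConcurrentQueue<string>();

    // Called by whatever creates the batch: one row per document.
    public void RegisterBatch(string batchId, IEnumerable<string> docIds)
    {
        foreach (string doc in docIds)
            _rows[(batchId, doc)] = 0;
    }

    // Called by a worker after it finishes a document: remove the row, then do the count
    // right away (the "worker does the count after removing its row" option from above).
    public void MarkDone(string batchId, string docId)
    {
        _rows.TryRemove((batchId, docId), out _);

        // Note: a real implementation must guard against two workers finishing at the same
        // moment and both enqueueing the hand-off (the "make them only once" point below).
        if (!_rows.Keys.Any(k => k.batchId == batchId))
            _nextStageQueue.Enqueue(batchId);   // whole batch done: hand it to the next worker role
    }
}
```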
If you want even more control you could go with having a row in your table storing the state of your process (processing files, post-processing, etc.). In this case, I'd store the state transitions in a queue, and make sure you only make them once. But that's a whole new question altogether.
Hope it helps.

Multithreading Task Library, Threading.Timer or threads?

Hi, we are building an application that will have the possibility to register scheduled tasks.
Each task has a time interval at which it should be executed.
Each task should have a timeout.
The number of tasks is unbounded, but around 100 in normal cases.
So we have a list of tasks that need to be executed at intervals; which is the best solution?
I have looked at giving each task its own timer, and when the timer elapses the work is started; another timer keeps track of the timeout, so if the timeout is reached that timer stops the thread.
This feels like we are overusing timers. Or could it work?
Another solution is to use a timer for each task, but when the timer elapses we put the task on a queue that is read by some threads that execute the work.
Any other good solutions I should look for?
There is not too much information, but it looks like you can consider Rx as well - check more at MSDN.
You can think about your tasks as generated events which should be composed (scheduled) in some way. So you can do the following:
Spawn cancellable tasks with Observable.GenerateWithDisposable and your own Scheduler - check more at Rx 101 Sample
Delay tasks with Observable.Delay
Wait for tasks with Observable.Timeout
Compose tasks in any preferable way
Once again, you can check the links above for more.
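Observable.GenerateWithDisposable comes from the early Rx previews; with the current System.Reactive package the same idea could be sketched roughly like this (an illustration, not a drop-in solution: each registered task becomes one subscription, the interval drives execution, and Timeout abandons a run that takes too long):

```csharp
using System;
using System.Reactive;
using System.Reactive.Linq;
using System.Threading.Tasks;

class RxScheduling
{
    // Runs `work` every `interval`; any single run that exceeds `timeout` is abandoned
    // and the schedule keeps going. Dispose the returned subscription to cancel the task.
    public static IDisposable Schedule(Func<Task> work, TimeSpan interval, TimeSpan timeout) =>
        Observable.Interval(interval)
            .SelectMany(_ =>
                Observable.FromAsync(work)
                    .Timeout(timeout)
                    .Catch<Unit, TimeoutException>(_ => Observable.Empty<Unit>()))
            .Subscribe();

    static void Main()
    {
        // ~100 of these is no problem; each is just a cheap subscription, not a dedicated thread.
        using var sub = Schedule(
            async () => { await Task.Delay(200); Console.WriteLine("task ran"); },
            interval: TimeSpan.FromSeconds(5),
            timeout: TimeSpan.FromSeconds(2));

        Console.ReadLine();
    }
}
```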
You should check out Quartz.NET.
Quartz.NET is a full-featured, open source job scheduling system that can be used from the smallest apps to large scale enterprise systems.
I believe you would need to implement your timeout requirement yourself, but all the plumbing needed to schedule tasks could be handled by Quartz.NET.
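As an illustration, the scheduling part might look roughly like this with Quartz.NET 3.x; the job class, its identity, the 5-minute interval and the 30-second token timeout are placeholders for your own task and timeout handling:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Quartz;
using Quartz.Impl;

public class ScheduledJob : IJob
{
    public async Task Execute(IJobExecutionContext context)
    {
        // Quartz only handles the scheduling; the timeout is enforced inside the job,
        // here with a CancellationTokenSource that cancels after 30 seconds.
        using var cts = new CancellationTokenSource(TimeSpan.FromSeconds(30));
        await DoWorkAsync(cts.Token);
    }

    private Task DoWorkAsync(CancellationToken ct) => Task.Delay(100, ct);   // your real work goes here
}

public static class SchedulerSetup
{
    public static async Task Main()
    {
        IScheduler scheduler = await StdSchedulerFactory.GetDefaultScheduler();
        await scheduler.Start();

        IJobDetail job = JobBuilder.Create<ScheduledJob>().WithIdentity("example-task").Build();
        ITrigger trigger = TriggerBuilder.Create()
            .StartNow()
            .WithSimpleSchedule(s => s.WithIntervalInMinutes(5).RepeatForever())
            .Build();

        await scheduler.ScheduleJob(job, trigger);   // one job + trigger pair per registered task
        Console.ReadLine();
    }
}
```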
I have done something like this before, where there were a lot of socket objects that needed periodic starts and timeouts. I used a 'TimedAction' class with 'OnStart' and 'OnTimeout' events (socket classes etc. derived from this), and one thread that handled all the timed actions.
The thread maintained a list of TimedAction instances ordered by the tick time of the next action required (a delta queue). The TimedAction objects were added to the list by queueing them to the thread's input queue. The thread waited on this input queue with a timeout (this was Windows, so 'WaitForSingleObject' on the handle of the semaphore that managed the queue), set to the 'next action required' tick count of the first item in the list.
If the queue wait timed out, the relevant action event of the first item in the list was called and the item removed from the list - the next queue wait would then be set by the new 'first item in the list', which would contain the new 'nearest action time'. If a new TimedAction arrived on the queue, the thread calculated its timeout tick time (GetTickCount + ms interval from the object) and inserted it into the sorted list at the correct place (yes, this sometimes meant moving a lot of objects up the list to make space).
The events called by the timeout-handler thread could not take any lengthy actions, in order to prevent delays to the handling of other timeouts. Typically, the event handlers would set some status enumeration, signal some synchro object, or queue the TimedAction to some other producer-consumer queue or IO completion port.
Does that make sense? It worked OK, processing thousands of timed actions in my server in a reasonably timely and efficient manner.
One enhancement I planned to make was to use multiple lists with a restricted set of timeout intervals. There were only three constant timeout intervals used in my system, so I could get away with using three lists, one for each interval. This would mean that the lists would not need sorting explicitly - new TimedActions would always go to the end of their list. This would eliminate costly insertion of objects in the middle of the lists. I never got around to doing this, as my first design worked well enough and I had plenty of other bugs to fix :(
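A compressed C# sketch of that delta-queue pattern (TimedAction and its members are invented names, and a Monitor takes the place of the Win32 semaphore and WaitForSingleObject); the inner loop that drains every item that is already due also covers the second caveat listed below:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

class TimedAction
{
    public DateTime Due;          // absolute due time instead of a raw tick count
    public Action OnTimeout;      // must be short: set a flag, signal something, re-queue elsewhere
}

class DeltaQueue
{
    private readonly List<TimedAction> _pending = new List<TimedAction>();  // kept sorted by Due
    private readonly object _gate = new object();

    public DeltaQueue() { new Thread(Run) { IsBackground = true }.Start(); }

    public void Add(TimedAction action)
    {
        lock (_gate)
        {
            int i = _pending.FindIndex(a => a.Due > action.Due);
            _pending.Insert(i < 0 ? _pending.Count : i, action);   // insert at the correct place
            Monitor.Pulse(_gate);                                  // wake the thread to re-check the head
        }
    }

    private void Run()
    {
        lock (_gate)
        {
            while (true)
            {
                // Wait only as long as the head item needs (or indefinitely if the list is empty).
                TimeSpan wait = _pending.Count == 0
                    ? Timeout.InfiniteTimeSpan
                    : _pending[0].Due - DateTime.UtcNow;

                if (_pending.Count == 0 || wait > TimeSpan.Zero)
                    Monitor.Wait(_gate, wait);

                // Fire *every* item that is due by now, not just the first one.
                while (_pending.Count > 0 && _pending[0].Due <= DateTime.UtcNow)
                {
                    TimedAction head = _pending[0];
                    _pending.RemoveAt(0);
                    head.OnTimeout();
                }
            }
        }
    }
}
```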
Two things:
Beware 32-bit tickCount rollover.
You need a loop in the queue-timeout block - there may be items on the list with exactly the same, or near-same, timeout tick count. Once the queue timeout happens, you need to keep removing items from the list and firing their events until the newly calculated timeout for the head of the list is > 0. I fell foul of this one: two objects with equal timeout tick counts arrived at the head of the list. One got its events fired, but the system tick count had moved on, and so the calculated timeout tick for the next object was -1: INFINITE! My server stopped working properly and eventually locked up :(
Rgds,
Martin

Creating a task scheduler

I have a requirement to schedule recurrent tasks. My application is in MFC. For example, I may need to send a file to a particular location according to a "From Date", "To Date", "Frequency", "Start Time" and "End Time". I thought of having a list, adding these parameters to it, and creating a timer that elapses every second, in which I check the list for the conditions and invoke the file transfer. But the problem is that if the list is huge I may not be able to do it. Is there any other way to achieve this?
Create a priority queue of scheduled events and, for each schedule, fill the queue with only the NEXT event for that schedule. Wait only for the first event in the priority queue, and when it fires, look up the schedule item for that event and let it fill its next event into the queue.
Please ask if anything above needs more clarification.
EDIT:
You'll trigger your event on the particular date and time in whatever way you are most comfortable with. Since you'll have only ONE event that you'll have to wait for (you can copy it from the HEAD of the queue), you have multiple options, for example:
SetTimer() at one-second intervals, comparing the current time with the event time.
SetTimer() for the duration from the current time to the event time.
Start another thread and WaitForSingleObject inside it, with the delay computed as event_time - now; this will be the most difficult option since you'll have to be careful when calling anything on the main thread.
... and so on
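The question is MFC/C++, so treat the following C# sketch purely as an outline of the scheme: the priority queue holds only the next occurrence of each schedule, and after an occurrence fires, the schedule computes and enqueues the following one. All names are invented and the recurrence rule is deliberately simplified.

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

class Schedule
{
    public DateTime From, To;
    public TimeSpan Frequency;
    public Action Work;                                  // e.g. the file transfer

    // Next occurrence at or after `now`, or null once the schedule has expired.
    // Simplified: it ignores the per-day Start Time / End Time window.
    public DateTime? NextAfter(DateTime now)
    {
        if (now <= From) return From;
        DateTime next = now + Frequency;
        return next <= To ? next : (DateTime?)null;
    }
}

class SchedulerLoop
{
    private readonly PriorityQueue<Schedule, DateTime> _queue = new();

    public void Add(Schedule s)
    {
        DateTime? first = s.NextAfter(DateTime.Now);
        if (first != null) _queue.Enqueue(s, first.Value);
    }

    // Waits only for the head event, fires it, then refills the queue with that schedule's
    // next occurrence. A real version would also wake up when Add() is called while it
    // sleeps (the SetTimer / WaitForSingleObject options above).
    public void Run()
    {
        while (_queue.TryPeek(out _, out DateTime due))
        {
            TimeSpan wait = due - DateTime.Now;
            if (wait > TimeSpan.Zero) Thread.Sleep(wait);

            _queue.TryDequeue(out Schedule s, out _);
            s.Work();

            DateTime? next = s.NextAfter(DateTime.Now);
            if (next != null) _queue.Enqueue(s, next.Value);
        }
    }
}
```

The point is that the work done per tick is bounded by the number of schedules currently due, not by the total size of the list.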
