I want to release this flow:
I tried using only one End node, but the other branch remains active.
How can I finish all active tasks and the whole process when one branch finishes?
By the BPMN specification, flow.End finishes only the task tokens that come into it; parallel tasks stay unfinished.
If you have a situation where one of the parallel flows needs to be canceled, in BPMN such processes are modeled with a complex split gateway that waits until one of its subsequent tasks finishes and cancels the others. Here is a sketch implementation of a viewflow split-first node. You can adapt it for your specific case.
from viewflow import flow
from viewflow.signals import task_finished


class SplitFirst(flow.Split):
    shape = {
        'width': 50,
        'height': 50,
        'svg': """
            <path class="gateway" d="M25,0L50,25L25,50L0,25L25,0"/>
            <text class="gateway-marker" font-size="32px" x="25" y="35">1</text>
        """
    }

    def on_signal(self, sender, **signal_kwargs):
        # Look up the SplitFirst task that spawned the task that just finished.
        task = signal_kwargs['task']
        split_first = task.previous.filter(flow_task=self).first()
        if split_first:
            # Cancel every sibling task spawned by the same split.
            for leading in split_first.leading.all().exclude(pk=task.pk):
                activation = leading.activate()
                if hasattr(activation, 'cancel') and activation.cancel.can_proceed():
                    activation.cancel()

    def ready(self):
        super(SplitFirst, self).ready()
        task_finished.connect(
            self.on_signal,
            sender=self.flow_class,
            dispatch_uid="sample.splitfirst/{}.{}.{}".format(
                self.flow_class.__module__, self.flow_class.__name__, self.name
            )
        )
What you need is called an event-based gateway in BPMN, and it is not supported by Viewflow out of the box; you have to implement it yourself, for example with the code provided by kmmbvnr.
What this gateway does is activate the outgoing paths and wait for any of the tasks to be completed; when the first task is done, the other paths and tasks are cancelled.
When you use flows that loop back along one of the paths, keep in mind that no other paths remain active, only the one that first completed a task.
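To illustrate how such a node could be wired up, here is a minimal, hypothetical flow definition built around the SplitFirst sketch above; the flow name, view classes, and module paths are placeholders, not part of the original answer:

from viewflow import flow
from viewflow.base import this, Flow

from .nodes import SplitFirst  # the sketch class above
from . import views  # hypothetical task views


class RaceFlow(Flow):
    # Whichever of task_a / task_b finishes first cancels its sibling,
    # and the surviving branch carries the process to the end node.
    start = flow.Start(views.StartView).Next(this.race)

    race = SplitFirst().Next(this.task_a).Next(this.task_b)

    task_a = flow.View(views.TaskAView).Next(this.end)
    task_b = flow.View(views.TaskBView).Next(this.end)

    end = flow.End()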
I'm creating a thread manager class that handles executing tasks as threads and passing the results to the next process step. The flow works properly upon the first execution of receiving a task, but the second execution fails with the following error:
...python3.8/concurrent/futures/thread.py", line 179, in submit
raise RuntimeError('cannot schedule new futures after shutdown')
RuntimeError: cannot schedule new futures after shutdown
The tasks come from Cmd.cmdloop user input - so, the script is persistent and meant not to shut down. Instead, run will be called multiple times, as input is received from the user.
I've implemented a ThreadPoolExecutor to handle the workload and am trying to gather the results with concurrent.futures.as_completed so each item is processed to the next step in order of completion.
The run method below works perfectly for the first execution, but raises the error upon the second execution of the same task (which succeeded during the first execution).
def run(self, _executor=None, _futures={}) -> bool:
    task = self.pipeline.get()
    with _executor or self.__default_executor as executor:
        _futures = {executor.submit(task.target.execute)}
        for future in concurrent.futures.as_completed(_futures):
            print(future.result())
    return True
So, the idea is that each call to run will create and tear down the executor with the context. But the error suggests the executor shut down after the first execution and cannot be reopened/recreated when run is called for the second iteration... what is this error pointing to? What am I missing?
Any help would be great - thanks in advance.
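For context, this is the documented lifecycle of concurrent.futures executors: leaving the with block calls shutdown(), and a shut-down executor permanently refuses new work. A minimal sketch, independent of the code above, that reproduces and then avoids the error:

import concurrent.futures

executor = concurrent.futures.ThreadPoolExecutor(max_workers=2)
with executor:
    print(executor.submit(pow, 2, 10).result())  # 1024
# The with block called executor.shutdown(), so the same instance is now unusable:
# executor.submit(pow, 2, 10)  # RuntimeError: cannot schedule new futures after shutdown

# Creating a fresh executor on every call works:
with concurrent.futures.ThreadPoolExecutor(max_workers=2) as executor:
    print(executor.submit(pow, 2, 10).result())  # 1024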
Your easiest solution will be to use the multiprocessing library instead of futures and a ThreadPoolExecutor with a context manager:
from multiprocessing.pool import ThreadPool

pool = ThreadPool(50)
pool.starmap(test_function, zip(array1, array2, ...))
pool.close()
pool.join()
Here (array1[0], array2[0]) will be the values sent to test_function in the first thread, (array1[1], array2[1]) in the second thread, and so on.
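For completeness, here is a self-contained version of that snippet; test_function and the two arrays are hypothetical stand-ins for the real per-item work:

from multiprocessing.pool import ThreadPool

def test_function(a, b):
    # placeholder for the real per-item work
    return a + b

array1 = [1, 2, 3]
array2 = [10, 20, 30]

pool = ThreadPool(50)
# starmap unpacks each zipped pair into test_function's two arguments
results = pool.starmap(test_function, zip(array1, array2))
pool.close()  # no more work will be submitted
pool.join()   # wait for all worker threads to finish
print(results)  # [11, 22, 33]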
I have a series of smoke tests that my company uses to validate its web application. These tests are written in Ruby. We want to split these tests into a series of tasks within locust.io. I am a newbie when it comes to Locust.IO. I have written Python code that can run these tasks one after the other in succession. However, when I make them locust.io tasks, nothing is reported in the stats window. I can see the tests run in the console, but the statistics never get updated. What do I need to do? Here is a snippet of the Locustfile.py I generate.
def RunTask(name, task):
    code, logs = RunSmokeTestTask(name, task)
    info("Smoke Test Task {0}.{1} returned errorcode {2}".format(name, task, code))
    info("Smoke Test Task Log Follows ...")
    info(logs)

class SmokeTasks(TaskSet):
    @task
    def ssoTests_test_access_sso(self):
        RunTask("ssoTests.rb", "test_access_sso")
. . .
RunSmokeTestTask is what actually runs the task. It is the same code that I am using when I invoke the task outside of Locust.IO. I can see the info in the logfile. Some of them fail but the statistics never update. I know I am probably missing something silly.
You need to actually report the events. (edit: I realize now that maybe you were hoping that locust/python would be able to detect the requests made from Ruby, but that is not possible. If you are ok with just reporting the whole test as a single "request", then keep reading)
Add something like this to your taskset:
self.user.environment.events.request_success.fire(request_type="runtask", name=name, response_time=total_time, response_length=0)
You'll also need to measure the time it took. Here is a more complete example (but also a little complex):
https://docs.locust.io/en/stable/testing-other-systems.html#sample-xml-rpc-user-client
Note: TaskSets are an advanced (useless, imho) feature; you probably want to put the @task directly under a User, and the RunTask method as well.
something like:
import time

from locust import User, task

class SmokeUser(User):
    def RunTask(self, name, task):
        start_time = time.time()
        code, logs = RunSmokeTestTask(name, task)
        total_time = (time.time() - start_time) * 1000  # Locust expects response times in milliseconds
        self.environment.events.request_success.fire(
            request_type="runtask", name=name, response_time=total_time, response_length=0)
        info("Smoke Test Task {0}.{1} returned errorcode {2}".format(name, task, code))
        info("Smoke Test Task Log Follows ...")
        info(logs)

    @task
    def ssoTests_test_access_sso(self):
        self.RunTask("ssoTests.rb", "test_access_sso")
I'm using NodeJS to manage a Twilio Taskrouter workflow. My goal is to have a task assigned to an Idle worker in the main queue identified with queueSid, unless one of the following is true:
No workers in the queue are set to Idle
Reservations for the task have already been rejected by every worker in the queue
In these cases, the task should fall through to the next queue identified with automaticQueueSid. Here is how I construct the JSON for the workflow (it includes a filter such that an inbound call from an agent should not generate an outbound call to that same agent):
configurationJSON(){
    var config={
        "task_routing":{
            "filters":[
                {
                    "filter_friendly_name":"don't call self",
                    "expression":"1==1",
                    "targets":[
                        {
                            "queue":queueSid,
                            "expression":"(task.caller!=worker.contact_uri) and (worker.sid NOT IN task.rejectedWorkers)",
                            "skip_if": "workers.available == 0"
                        },
                        {
                            "queue":automaticQueueSid
                        }
                    ]
                }
            ],
            "default_filter":{
                "queue":queueSid
            }
        }
    }
    return config;
}
This results in no reservation being created after the task reaches the queue. My event logger shows that the following events have occurred:
workflow.target-matched
workflow.entered
task.created
That's as far as it gets and just hangs there. When I replace the line
"expression":"(task.caller!=worker.contact_uri) and (worker.sid NOT IN task.rejectedWorkers)"
with
"expression":"(task.caller!=worker.contact_uri)
Then the reservation is correctly created for the next available worker, or sent to automaticQueueSid if no workers are available when the call comes in, so I guess the skip_if is working correctly. So maybe there is something wrong with how I wrote the target expression?
I tried working around this by setting a worker to unavailable once they reject a reservation, as follows:
clientWorkspace
    .workers(parameters.workerSid)
    .reservations(parameters.reservationSid)
    .update({
        reservationStatus:'rejected'
    })
    .then(reservation=>{
        //this function sets the worker's Activity to Offline
        var updateResult=worker.updateWorkerFromSid(parameters.workerSid,process.env.TWILIO_OFFLINE_SID);
    })
    .catch(err=>console.log("/agent_rejects: error rejecting reservation: "+err));
But what seems to be happening is that as soon as the reservation is rejected, before worker.updateWorkerFromSid() is called, Taskrouter has already generated a new reservation and assigned it to that same worker, and my Activity update fails with the following error:
Error: Worker [workerSid] cannot have its activity updated while it has 1 pending reservations.
Eventually, it seems that the worker is naturally set to Offline and the task does time out and get moved into the next queue, as shown by the following events/descriptions:
worker.activity.update
Worker [friendly name] updated to Offline Activity
reservation.timeout
Reservation [sid] timed out
task-queue.moved
Task [sid] moved out of TaskQueue [friendly name]
task-queue.timeout
Task [sid] timed out of TaskQueue [friendly name]
After this point the task is moved into the next queue automaticQueueSid to be handled by available workers registered with that queue. I'm not sure why a timeout is being used, as I haven't included one in my workflow configuration.
I'm stumped--how can I get the task to successfully move to the next queue upon the last worker's reservation rejection?
UPDATE: although @philnash's answer helped me correctly handle the worker.sid NOT IN task.rejectedWorkers issue, I ultimately ended up implementing this feature using the RejectPendingReservations parameter when updating the worker's availability.
Twilio developer evangelist here.
rejectedWorkers is not an attribute that is automatically handled by TaskRouter. You reference this answer by my colleague Megan in which she says:
For example, you could update TaskAttributes to have a rejected worker SID list, and then in the workflow say that worker.sid NOT IN task.rejectedWorkerSids.
So, in order to filter by a rejectedWorkers attribute you need to maintain one yourself, by updating the task before you reject the reservation.
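As a rough sketch of that sequence (shown here with the Python helper library rather than the Node.js one used in the question, and with a made-up function name and placeholder credentials/SIDs): fetch the task, append the rejecting worker to the attribute, save it, and only then reject the reservation.

import json
from twilio.rest import Client

client = Client(ACCOUNT_SID, AUTH_TOKEN)  # placeholder credentials

def reject_and_remember(workspace_sid, task_sid, reservation_sid, worker_sid):
    task_ctx = client.taskrouter.workspaces(workspace_sid).tasks(task_sid)

    # Record the rejecting worker on the task first, so the workflow target
    # expression "worker.sid NOT IN task.rejectedWorkers" can exclude them.
    attributes = json.loads(task_ctx.fetch().attributes)
    attributes.setdefault("rejectedWorkers", []).append(worker_sid)
    task_ctx.update(attributes=json.dumps(attributes))

    # Only now reject the reservation.
    task_ctx.reservations(reservation_sid).update(reservation_status="rejected")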
Let me know if that helps at all.
I am trying to use Titanium execution contexts to produce parallel code execution between the main application context and others. I am using createWindow with a url property that refers to a .js file inside the "lib" folder. But by logging the execution on both iOS and Android devices, it seems that the different contexts are executed on the app's main thread, so there is no parallelism here.
The trigger for my new context inside my Alloy controller:
var win2 = Ti.UI.createWindow({
    title: 'New Window',
    url: 'thread.js',
    backgroundColor: '#fff'
});
win2.open();
Ti.API.log('after open');
My thread.js contents:
Ti.API.log("this is the new context");
Ti.App.fireEvent("go", {});
while (true) {
    Ti.API.log('second context');
}
This while loop apparently blocks the main context (my Alloy controller), which waits for it to exit.
Any suggestions on how I can execute some code (mainly heavy sqlite db access) in the background so that the UI stays responsive? (Web workers are not an option for me.)
You could try to achieve the desired behaviour with the setInterval() or setTimeout() methods.
setInterval():
function myFunc() {
    // your code
}

// set the interval
setInterval(myFunc, 2000); // this will run the function every 2 seconds
Another suggested method is to fire a custom event when you need the background behavior, since it is processed in its own thread. This is also suggested in the official documentation.
AFAIK, titanium is single threaded, because JavaScript is single threaded. You can get parallel execution with native modules, but you'll have to code that yourself for each platform.
Another option is to use web workers, but I consider that to be a hack.
I have an expensive function that is called via a Tkinter callback:
def func(event):  # called whenever there is a mouse press on the screen
    print("Busy? " + str(X.busy))  # X.busy is my own variable and is initialized to False
    X.busy = True
    do_calculations()  # do_calculations contains several tk.Canvas().update() calls
    X.busy = False
When I click too quickly, the calls to func() appear to pile up: the print gives "Busy? True", indicating that the function hasn't finished yet and we are apparently starting it again on another thread.
However, print(threading.current_thread()) always gives <_MainThread(MainThread, started 123...)>, and the 123... is the same in every print for a given program run. How can the same thread be multiple threads?
It looks to me like you're running into recursive message processing. In particular, tk.Canvas().update() will process any pending messages, including extra button clicks. Further, it will do this on the same thread (at least on Windows).
So your thread ID is constant, but your stack trace will have multiple nested calls to func.
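A minimal sketch of one way out, using the busy flag from the question as a re-entry guard (X and do_calculations are the question's placeholders, not a complete program):

def func(event):
    if X.busy:
        # A nested call: update() inside do_calculations() dispatched
        # another click while the first call was still running.
        return
    X.busy = True
    try:
        do_calculations()
    finally:
        X.busy = False

Ignoring clicks that arrive while a calculation is in flight keeps the nested calls from piling up on the stack.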