Is it possible to get the tasks whose status changes in django viewflow? - django-viewflow

Hi, is it possible to get the tasks whose status changed as a result of performing a task in django-viewflow?

You can examine the activation.task.leading chain after calling activation.done().
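A minimal sketch of that idea, assuming a flow handler in which activation is the current task activation (leading is the reverse relation of Task.previous, so it holds the tasks created right after this one):

# Minimal sketch: after completing a task, inspect the tasks it led to.
# Assumes `activation` is the current task activation inside a flow handler.
activation.done()

# `leading` is the reverse of Task.previous, i.e. the tasks that were
# activated as a direct consequence of this one.
for task in activation.task.leading.all():
    print(task.flow_task, task.status)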

Related

Twilio Taskrouter: how to implement a "do not contact" list of WorkerSids in Workflow Configuration?

This question is similar to this one I previously asked, in that I want the task to perform a Target Worker Expression check on a list of WorkerSids that I've added as one of the task's attributes. But I think this problem is different enough to warrant its own question.
My goal is to associate a "do not contact" list of WorkerSids with a Task; these are workers who should not be assigned the task (maybe the customer previously had a bad interaction with them).
I have the following workflow configuration:
{
  "task_routing": {
    "filters": [
      {
        "filter_friendly_name": "don't call self",
        "expression": "1==1",
        "targets": [
          {
            "queue": queueSid,
            "expression": "(task.caller!=worker.contact_uri) and (worker.sid not in task.do_not_contact)",
            "skip_if": "workers.available == 0"
          },
          {
            "queue": automaticQueueSid
          }
        ]
      }
    ],
    "default_filter": {
      "queue": queueSid
    }
  }
}
When I create a task, checking the Twilio Console, I can see that the task has the following attributes:
{"from_country":"US","do_not_contact":["WORKER_SID1_HERE","WORKER_SID_2_HERE"],
... bunch of other attributes...
}
So I know that the task has successfully been assigned the array of WorkerSids as one of its attributes.
There is only one worker who is Idle and whose attributes match the queueSid TaskQueue. That worker's SID is WORKER_SID1_HERE, so the only available worker is ineligible to receive the task reservation. So what should happen is that the first target expression worker.sid not in task.do_not_contact returns false, and the task falls through to the automaticQueueSid TaskQueue.
Instead, the task remains in queueSid unassigned. The following sequence of Taskrouter events are logged:
task-queue.entered
Task TASK_SID entered TaskQueue QUEUESID_QUEUENAME
task.created
Task TASK_SID created
workflow.target-matched
Task TASK_SID matched a workflow target
workflow.entered
Task TASK_SID entered Workflow WORKFLOW_NAME
What do I need to change to get the desired workflow behavior?
Changing the skip_if to
"skip_if": "1==1"
solved the problem.
Per Twilio developer support, worker.sid not in task.do_not_contact returns true for workers who are unavailable but not in do_not_contact, so the target expression still matches a set of workers. The "skip_if": "workers.available==0" then returns false, because technically there is one "available" worker: the one who is ineligible due to the do_not_contact list.
What's needed is for the skip_if to always return true, so that when the first target processes the task without assigning it, the skip_if passes it on to the next target, as discussed in the Taskrouter Workflow documentation:
"TaskRouter will only skip a routing step in a Workflow if:
No Reservations are immediately created when a Task enters the routing step
The Skip Timeout expression evaluates to true"
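For reference, a sketch of the corrected targets block under the same assumptions as above (queueSid and automaticQueueSid are the placeholders from the question):

"targets": [
  {
    "queue": queueSid,
    "expression": "(task.caller!=worker.contact_uri) and (worker.sid not in task.do_not_contact)",
    "skip_if": "1==1"
  },
  {
    "queue": automaticQueueSid
  }
]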

Alfresco Activiti - Create multiple instances of the same subprocess

I have what seemed like a fairly simple requirement for a process, but I'm beginning to question whether it is even possible.
The image below shows my current process. I am trying to achieve two things:
A user creates an initial user task for adding a note; they should be able to add as many notes as they wish, with one user task per note
A new sub-process is spawned for each new note (user task) that the user has created.
The process above presents the following problems:
A sub-process should be spawned for each task; however, they seem to overwrite each other
I'm not sure whether each newly spawned sub-process requires a unique id
So it turns out that the solution to this question requires a bit of scripting using Groovy.
Below is the updated process model diagram. In it, I start a new instance of the Complete Task process using a script task; then, if the user wishes to add more tasks, the exclusive gateway can return the user to the Create Task (user task) or finish the process.
Within the script task I clear down any values held in the user task's fields before passing scope back to the user task.
The image below shows my Complete Task process, which gets called by the main process using a script.
Here I avoid parallel gateways in favour of creating a new instance of the Create Task (user task) and a new instance of the Complete Task process (not a subprocess) by means of the script.
To start a new instance of the Complete Task process, we have to start the process using startProcessInstanceByKeyAndTenantId() on a RuntimeService instance, although I could also have used startProcessInstanceByIdAndTenantId():
//Import required libraries
import org.activiti.engine.RuntimeService;
import org.activiti.engine.runtime.ProcessInstance;
//instantiate RunTimeService instance
RuntimeService runtimeService = execution.getEngineServices().getRuntimeService();
//get tenant id
String tenantId = execution.getTenantId();
//variables Map
Map<String, Object> variables = runtimeService.getVariablesLocal(execution.getProcessInstanceId());
//start process (processId, variables, tenantId)
ProcessInstance completeTask = runtimeService.startProcessInstanceByKeyAndTenantId("CompleteTask", variables, tenantId);
//Clear variables to create a fresh task
execution.setVariable("title", "");
execution.setVariable("details", "");
Using this approach I avoid creating multiple subprocesses from the parent process and instead create multiple processes that run separate from the parent process. This benefits me as if the parent process completes the others continue to run.
It seems like you are updating only one variable (or a single set of variables) as a result of each task. This will override the previous value. Use distinct variables, or prepend something to each variable name to mark it as unique for the completed task/sub-process (see collapsed sub-processes).
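A rough Groovy sketch of that idea, assuming the note fields are called title and details as in the script above (the noteIndex counter is a hypothetical helper variable):

// Rough sketch: copy the current note's fields into per-note variable names
// so each completed task keeps its own values instead of overwriting them.
// `noteIndex` is a hypothetical counter kept on the execution.
def noteIndex = (execution.getVariable("noteIndex") ?: 0) + 1
execution.setVariable("noteIndex", noteIndex)
execution.setVariable("note_${noteIndex}_title", execution.getVariable("title"))
execution.setVariable("note_${noteIndex}_details", execution.getVariable("details"))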
Yes, each sub-process gets its own unique execution id, but the main execution id (the process instance id) remains the same.

Right way to delete and then reindex ES documents

I have a python3 script that attempts to reindex certain documents in an existing ElasticSearch index. I can't update the documents because I'm changing from an autogenerated id to an explicitly assigned id.
I'm currently attempting to do this by deleting existing documents using delete_by_query and then indexing once the delete is complete:
self.elasticsearch.delete_by_query(
    index='%s_*' % base_index_name,
    doc_type='type_a',
    conflicts='proceed',
    wait_for_completion=True,
    refresh=True,
    body={}
)
However, the index is massive, and so the delete can take several hours to finish. I'm currently getting a ReadTimeoutError, which is causing the script to crash:
WARNING:elasticsearch:Connection <Urllib3HttpConnection: X> has failed for 2 times in a row, putting on 120 second timeout.
WARNING:elasticsearch:POST X:9200/base_index_name_*/type_a/_delete_by_query?conflicts=proceed&wait_for_completion=true&refresh=true [status:N/A request:140.117s]
urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='X', port=9200): Read timed out. (read timeout=140)
Is my approach correct? If so, how can I make my script wait long enough for the delete_by_query to complete? There are 2 timeout parameters that can be passed to delete_by_query - search_timeout and timeout, but search_timeout defaults to no timeout (which is I think what I want), and timeout doesn't seem to do what I want. Is there some other parameter I can pass to delete_by_query to make it wait as long as it takes for the delete to finish? Or do I need to make my script wait some other way?
Or is there some better way to do this using the ElasticSearch API?
You should set wait_for_completion to False. In that case you'll get the task details back and will be able to track the task's progress using the corresponding Task API: https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-delete-by-query.html#docs-delete-by-query-task-api
Just to expand on the approach Random described with some code, for ES/Python newbies like me:
from elasticsearch import Elasticsearch

ES = Elasticsearch(['http://localhost:9200'])
query = {'query': {'match_all': {}}}
# returns immediately with a task id because wait_for_completion=False
response = ES.delete_by_query(index='index_name', doc_type='sample_doc',
                              wait_for_completion=False, body=query, ignore=[400, 404])
task_id = response['task']
response_task = ES.tasks.get(task_id=task_id)  # look up the task
is_completed = response_task["completed"]      # True once the delete has finished
One can write a custom function that checks at some interval, in a while loop, whether the task has completed; a minimal polling sketch follows below.
I have used Python 3.x and Elasticsearch 6.x.
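A minimal polling sketch, assuming the ES client and task_id from the snippet above:

import time

# Poll the Tasks API until the delete-by-query task reports completion.
while True:
    response_task = ES.tasks.get(task_id=task_id)
    if response_task["completed"]:
        break
    time.sleep(30)  # wait 30 seconds between checks (adjust as needed)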
You can use the request_timeout global parameter. This overrides the connection's timeout setting, as mentioned here.
For example -
es.delete_by_query(index=<index_name>, body=<query>, request_timeout=300)
Or set it at the connection level, for example:
es = Elasticsearch(**(get_es_connection_parms()), timeout=60)

Determine if a cucumber scenario has pending steps

I would like to retrieve the scenario state in the "After" scenario hook. I noticed that the .failed? method does not consider pending steps as failed steps.
So how can I determine that a scenario did not execute completely, either because it failed or because some steps were not implemented/defined?
You can use the status method. The default value of status is :skipped, a failed step is :failed, and a passed step is :passed. So you can write something like this:
do_something if step.status != :passed
Also, !step.passed? does the same thing, because it only checks for the :passed status.
http://cukes.info/api/cucumber/ruby/yardoc/Cucumber/Ast/Scenario.html#failed%3F-instance_method
On that subject, you can also take a look at this post about demoing your feature specs to your customers: http://multifaceted.io/2013/demo-feature-tests/
LiohAu, you can use the 'status' method on a scenario itself rather than on individual steps. Try this: In hooks, add
After do |scenario|
  p scenario.status
end
This will give the statuses as follows:
If any step is not implemented/defined, it'll give you :undefined
If the scenario fails (when all steps are defined), it'll give you :failed
If the scenario passes, it'll give you :passed
Using the same hook, it'll also give you the status for a scenario outline, but per example row (since each example row is an individual scenario). So if you want the result of an entire outline, you'll need to capture the results of all example rows and compute the final result accordingly, for example as sketched below.
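A rough sketch of that aggregation, assuming scenario.name and scenario.status behave as described above:

# Rough sketch: collect the status of every example row and report an
# overall result per outline name once the run finishes.
outline_statuses = Hash.new { |hash, key| hash[key] = [] }

After do |scenario|
  outline_statuses[scenario.name] << scenario.status
end

at_exit do
  outline_statuses.each do |name, statuses|
    overall = statuses.all? { |s| s == :passed } ? :passed : :not_passed
    puts "#{name}: #{overall}"
  end
end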
Hope this helps.

BPEL process stopped on a second receive

I am new to writing BPEL. I have implemented the simple process below:
receive1
|
|
invoke1
|
|
receive2
|
|
invoke2
The problem is that the process runs correctly up to "receive2", but when I invoke, via soapUI, the operation associated with "receive2", nothing happens. I have read other posts about BPEL but found nothing matching this question. Below are the real activities involved (I omitted the Assign ones).
<bpel:receive name="receiveInput" partnerLink="client"
portType="tns:HealthMobility"
operation="initiate" variable="input"
createInstance="yes"/>
<bpel:invoke name="getTreatmentOptions"
partnerLink="treatmentProviderPL" operation="getTreatmentOptions"
inputVariable="getTreatmentOptionsReq" outputVariable="getTreatmentOptionsResp">
</bpel:invoke>
<bpel:receive name="bookMobility" partnerLink="client" operation="bookMobility"
variable="bookMobilityReq" portType="tns:HealthMobility"/>
<bpel:invoke name="getTripOptions" partnerLink="mobilityMultiProvidersPL"
operation="getTripOptions" inputVariable="getTripOptionsReq"
outputVariable="getTripOptionsResp"></bpel:invoke>
I have tried to debug simply by deleting the receive and statically initializing the input variable required by the getTripOptions invoke. In that case everything works fine, so it necessarily means that the process keeps waiting on the receive even when I invoke bookMobility via soapUI. My question is: why? Am I missing something?
Thanks
You need to define a correlation set for the second receive. Each message sent to the operation connected to the first receive activity will create a new process instance, which means you may have multiple instances running in parallel. When these instances have reached the second receive, they are waiting for the second message, but in your example there is no way to distinguish which message is targeted at which process instance. I assume that your BPEL engine also logged that it could not route the message to a target instance.
To solve this problem, you need to find an identifier in the payload of a message and initialize a correlation set with this value. Then, when using the same correlation set with the second receive, all messages that contain the same identifier will be routed to this particular process instance. For further information about correlation sets I recommend reading the BPEL primer, section 4.2.4.
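A rough sketch of what that could look like in this process, assuming a patientId element exists in both incoming messages and has a matching property and propertyAlias defined in the WSDL (both names are hypothetical):

<!-- Rough sketch: declare a correlation set keyed on the assumed patientId property. -->
<bpel:correlationSets>
  <bpel:correlationSet name="MobilityCorrelation" properties="tns:patientId"/>
</bpel:correlationSets>

<!-- The first receive initiates the correlation set from the incoming message. -->
<bpel:receive name="receiveInput" partnerLink="client" portType="tns:HealthMobility"
              operation="initiate" variable="input" createInstance="yes">
  <bpel:correlations>
    <bpel:correlation set="MobilityCorrelation" initiate="yes"/>
  </bpel:correlations>
</bpel:receive>

<!-- The second receive uses the same set, so the bookMobility message is
     routed to the instance whose patientId matches. -->
<bpel:receive name="bookMobility" partnerLink="client" portType="tns:HealthMobility"
              operation="bookMobility" variable="bookMobilityReq">
  <bpel:correlations>
    <bpel:correlation set="MobilityCorrelation" initiate="no"/>
  </bpel:correlations>
</bpel:receive>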
