I have what seemed like a fairly simple requirement for a process, but I'm beginning to question whether it's even possible.
The image below shows my current process. I am trying to achieve two things:
A user creates an initial user task for adding a note; they should be able to add as many notes as they wish, with one user task per note.
A new sub-process is spawned for each new note (user task) that the user has created.
The process above presents the following problems:
A sub-process should be spawned for each task; however, they seem to overwrite each other.
I'm not sure whether the sub-process requires a unique ID for each new sub-process spawned.
It turns out that the solution requires a bit of scripting in Groovy.
Below is the updated process model diagram. In it, I start a new instance of the Complete Task process using a script task; then, if the user wishes to add more tasks, the exclusive gateway can either return the user to the Create Task (user task) or finish the process.
Within the script task, I clear down any values held in the user task's fields before passing scope back to the user task.
The image below shows my Complete Task process, which gets called by the main process using a script.
Here I avoid parallel gateways in favour of creating, via the script, a new instance of the Create Task (user task) and a new instance of the Complete Task process (not a sub-process).
To start a new instance of the Complete Task process, we call startProcessInstanceByKeyAndTenantId() on a RuntimeService instance (startProcessInstanceByIdAndTenantId() would also work):
// import the required classes
import org.activiti.engine.RuntimeService;
import org.activiti.engine.runtime.ProcessInstance;
// get a RuntimeService instance from the current execution
RuntimeService runtimeService = execution.getEngineServices().getRuntimeService();
// get the tenant id of the current execution
String tenantId = execution.getTenantId();
// copy the local variables of the current process instance
Map<String, Object> variables = runtimeService.getVariablesLocal(execution.getProcessInstanceId());
// start the process (processDefinitionKey, variables, tenantId)
ProcessInstance completeTask = runtimeService.startProcessInstanceByKeyAndTenantId("CompleteTask", variables, tenantId);
// clear the variables so the next task starts fresh
execution.setVariable("title", "");
execution.setVariable("details", "");
Using this approach I avoid creating multiple sub-processes from the parent process and instead create multiple processes that run separately from the parent. This benefits me because, if the parent process completes, the others continue to run.
It seems you are updating only one variable (or a single set of variables) as a result of each task, which will overwrite the previous value. Use distinct variables, or prefix each variable name with something that marks it as unique to the completed task/sub-process (see collapsed sub-processes); a rough sketch of the prefixing idea follows.
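A minimal sketch of that prefixing idea inside the script task, assuming a counter is kept on the execution (noteIndex is a made-up name):
// keep a counter on the execution and prefix each note's variables with it,
// so successive notes do not overwrite each other (noteIndex is hypothetical)
int noteIndex = (execution.getVariable("noteIndex") ?: 0) as int;
execution.setVariable("note_" + noteIndex + "_title", execution.getVariable("title"));
execution.setVariable("note_" + noteIndex + "_details", execution.getVariable("details"));
execution.setVariable("noteIndex", noteIndex + 1);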
Yes, each sub-process gets its own unique execution ID, but the main execution ID, i.e. the process instance ID, remains the same.
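A quick way to observe this from any script task:
// the execution id is unique per spawned execution,
// while the process instance id is shared by the whole instance
println execution.getId();
println execution.getProcessInstanceId();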
I created a one-to-one relationship between two tables in Strapi.
As an example, suppose Bob currently has a job, say messenger. If we assign Bob's job to secretary, Strapi simply reassigns the new job, without warning that Bob was already in a job.
If a person is not currently in a job, their job would be 'none'.
I'd like to forbid the reassignment of the job if Bob is already in one (the user would have to set Bob's job to 'none' before assigning a new job).
In Strapi, what would be the right way to forbid this (checking whether the current job is not 'none' and, if so, stopping the assignment): a service, a controller, or a lifecycle hook?
One way to handle this in Strapi is to use a lifecycle hook. Lifecycle hooks allow you to perform specific actions at certain stages of the CRUD operations (create, update, delete) on a model. In this case, you can use the beforeUpdate hook to check that the current job is 'none' before allowing a new job to be assigned:
// api/person/models/Person.js
module.exports = {
  lifecycles: {
    // This hook is called before a person is updated
    async beforeUpdate(params, data) {
      // Only intervene when the update actually touches the job field
      if (data.job === undefined) return;
      // Fetch the person as it currently exists (params holds the where clause)
      const existing = await strapi.query('person').findOne(params);
      // Block the reassignment if the current job is not 'none'
      // ('none' convention per the question; adapt if job is a relation)
      if (existing && existing.job !== 'none') {
        throw new Error('Cannot reassign a job to a person who already has a job');
      }
    },
  },
};
You could also put this logic in a service or a controller, but a lifecycle hook centralizes the check and keeps it separate from your request-handling code.
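For completeness, a hedged sketch of the controller-based variant (Strapi v3 style; it assumes the generated person service and reuses the question's 'none' convention):
// api/person/controllers/Person.js (hypothetical controller override)
module.exports = {
  async update(ctx) {
    const { id } = ctx.params;
    // fetch the person as it currently exists
    const existing = await strapi.services.person.findOne({ id });
    // refuse the update if they already have a job
    if (existing && existing.job !== 'none') {
      return ctx.badRequest('Cannot reassign a job to a person who already has a job');
    }
    return strapi.services.person.update({ id }, ctx.request.body);
  },
};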
It's my first day learning Viewflow. I managed to get the tutorial working, but I have a use case that I don't know how to implement.
What I want is that when a workflow is started, the task is automatically assigned to the workflow starter (the user). How do I go about referencing the current request object inside the workflow?
E.g.:
start = (flow.Start(CreateProcessView)).Permission(auto_create=True).Next(this.fill_request)
fill_request = (flow.View(UpdateProcessView).Assign(#current user))
.Assign(...) can be given a callable that takes a process activation and returns a user, e.g. .Assign(lambda act: User.objects.get(...)).
Viewflow also provides several callable shortcuts: this.[task_name].owner points to the user who completed that task, and activation.process.created_by points to the user who performed the .Start task:
fill_request = (
flow.View(UpdateProcessView)
.Assign(lambda act: act.process.created_by)
# .Assign(this.start.owner)
)
This question is similar to this one I previously asked, in that I want the task to perform a Target Worker Expression check on a list of WorkerSids that I've added as one of the task's attributes. But I think this problem is different enough to warrant its own question.
My goal is to associate a "do not contact" list of WorkerSids with a Task; these are workers who should not be assigned the task (maybe the customer previously had a bad interaction with them).
I have the following workflow configuration:
{
    "task_routing": {
        "filters": [
            {
                "filter_friendly_name": "don't call self",
                "expression": "1==1",
                "targets": [
                    {
                        "queue": queueSid,
                        "expression": "(task.caller!=worker.contact_uri) and (worker.sid not in task.do_not_contact)",
                        "skip_if": "workers.available == 0"
                    },
                    {
                        "queue": automaticQueueSid
                    }
                ]
            }
        ],
        "default_filter": {
            "queue": queueSid
        }
    }
}
When I create a task, checking the Twilio Console, I can see that the task has the following attributes:
{"from_country":"US","do_not_contact":["WORKER_SID1_HERE","WORKER_SID_2_HERE"],
... bunch of other attributes...
}
So I know that the task has successfully been assigned the array of WorkerSids as one of its attributes.
There is only one worker who is Idle and whose attributes match the queueSid TaskQueue. That worker's SID is WORKER_SID1_HERE, so the only available worker is ineligible to receive the task reservation. So what should happen is that the first target expression worker.sid not in task.do_not_contact returns false, and the task falls through to the automaticQueueSid TaskQueue.
Instead, the task remains in queueSid unassigned. The following sequence of TaskRouter events is logged:
task-queue.entered
Task TASK_SID entered TaskQueue QUEUESID_QUEUENAME
task.created
Task TASK_SID created
workflow.target-matched
Task TASK_SID matched a workflow target
workflow.entered
Task TASK_SID entered Workflow WORKFLOW_NAME
What do I need to change to get the desired workflow behavior?
Changing the skip_if to
"skip_if": "1==1"
solved the problem.
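For reference, the first target block then becomes (same placeholder SIDs as in the question):
{
    "queue": queueSid,
    "expression": "(task.caller!=worker.contact_uri) and (worker.sid not in task.do_not_contact)",
    "skip_if": "1==1"
}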
Per Twilio developer support, worker.sid not in task.do_not_contact returns true for workers who are unavailable but are also not in do_not_contact, so the target expression still matches a set of workers; the "skip_if": "workers.available==0" then evaluates to false because technically there is one "available" worker: the one who is ineligible due to the do_not_contact list.
What's needed is for the skip_if to always return true, so that when the first target processes the task without assigning it, the skip_if passes it on to the next target, as discussed in the TaskRouter Workflow documentation:
"TaskRouter will only skip a routing step in a Workflow if:
No Reservations are immediately created when a Task enters the routing step
The Skip Timeout expression evaluates to true"
I'm trying to implement two identical, independent Processes (flows) under one frontend. After a fresh migration I can start one of them (as many times as I like) and it works fine. But when I try to start the other one, it raises a DoesNotExist exception ("Process(X) matching query does not exist"). After this it's not possible to start either of them. It looks like when the next node is initialized (after start), the process object can't be found.
Update:
I tried adding my app to the Viewflow demo. My process is OK only when started first. Starting it after any demo process (helloworld, etc.) raises the exception. All demo processes start smoothly at any time.
What makes the difference is that my process model has a simple custom primary key:
class Order(Process):
    order_no = models.AutoField(primary_key=True)
    ...
When I tried commenting the custom key out, the problem went away.
Since this is multi-table inheritance, an explicit OneToOneField parent link should be used if a custom field is needed:
class Order(Process):
    process_ptr = models.OneToOneField(
        Process, on_delete=models.CASCADE,
        parent_link=True,
    )
    ...
I am working on Oracle 10gR2.
And here is my problem:
I have a procedure, let's call it *proc_parent* (inside a package), which is supposed to call another procedure, let's call it *user_creation*. I have to call *user_creation* inside a loop, which reads some columns from a table; these column values are passed as parameters to the *user_creation* procedure.
The code is like this:
FOR i IN (SELECT community_id,
password,
username
FROM customer
WHERE community_id IS NOT NULL
AND created_by = 'SRC_GLOB'
)
LOOP
user_creation (i.community_id,i.password,i.username);
END LOOP;
COMMIT;
The user_creation procedure invokes a web service for some business logic and then, based on the response, updates a table.
I need to find a way to use multi-threading here, so that I can run multiple instances of this procedure to speed things up. I know I can use *DBMS_SCHEDULER* and probably *DBMS_ALERT*, but I can't figure out how to use them inside a loop.
Can someone guide me in the right direction?
Thanks,
Ankur
What you can do is submit lots of jobs at the same time; see Example 28-2, Creating a Set of Lightweight Jobs in a Single Transaction.
That example fills a PL/SQL table with all the jobs you want to submit in one transaction, all at the same time. As soon as they are submitted (enabled) they start running, as many as the system can handle, or as many as a resource manager plan allows.
The overhead of Lightweight jobs is, true to the name, minimal.
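As a hedged sketch of how that could look here (program and job names below are made up; note that Lightweight jobs and the job_style parameter arrived in 11g, so on 10gR2 you would omit job_style and get regular jobs):
BEGIN
   -- one-time setup: a shared program wrapping the stored procedure
   DBMS_SCHEDULER.CREATE_PROGRAM(program_name        => 'create_user_prog',
                                 program_type        => 'STORED_PROCEDURE',
                                 program_action      => 'USER_CREATION',
                                 number_of_arguments => 3,
                                 enabled             => FALSE);
   DBMS_SCHEDULER.DEFINE_PROGRAM_ARGUMENT(program_name => 'create_user_prog', argument_position => 1, argument_type => 'VARCHAR2');
   DBMS_SCHEDULER.DEFINE_PROGRAM_ARGUMENT(program_name => 'create_user_prog', argument_position => 2, argument_type => 'VARCHAR2');
   DBMS_SCHEDULER.DEFINE_PROGRAM_ARGUMENT(program_name => 'create_user_prog', argument_position => 3, argument_type => 'VARCHAR2');
   DBMS_SCHEDULER.ENABLE('create_user_prog');
END;
/
DECLARE
   v_job VARCHAR2(30);
BEGIN
   FOR i IN (SELECT community_id, password, username
               FROM customer
              WHERE community_id IS NOT NULL
                AND created_by = 'SRC_GLOB')
   LOOP
      v_job := DBMS_SCHEDULER.GENERATE_JOB_NAME('CRT_USR_');
      -- create the job disabled, bind this row's values, then enable it;
      -- enabling starts it running immediately
      DBMS_SCHEDULER.CREATE_JOB(job_name     => v_job,
                                program_name => 'create_user_prog',
                                job_style    => 'LIGHTWEIGHT',
                                enabled      => FALSE);
      DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE(v_job, 1, TO_CHAR(i.community_id));
      DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE(v_job, 2, i.password);
      DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE(v_job, 3, i.username);
      DBMS_SCHEDULER.ENABLE(v_job);
   END LOOP;
END;
/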
I would like to close this question. Either DBMS_SCHEDULER or DBMS_JOB (though DBMS_SCHEDULER is preferred) can be used inside the loop to submit and execute the jobs.
For instance, here's sample code using DBMS_JOB, which can be invoked inside a loop:
...
FOR i IN (SELECT community_id,
                 password,
                 username
            FROM customer
           WHERE community_id IS NOT NULL
             AND created_by = 'SRC_GLOB')
LOOP
   -- the job runs in its own session, so the loop variable i is not visible
   -- there; the row's values must be concatenated into the WHAT string
   -- (quoting below assumes community_id is numeric and the other two
   -- columns are strings; adjust for your datatypes)
   DBMS_JOB.SUBMIT(JOB  => jobnum,
                   WHAT => 'BEGIN user_creation('
                           || i.community_id || ', '
                           || '''' || i.password || ''', '
                           || '''' || i.username || '''); END;');
   COMMIT;
END LOOP;
Committing after each SUBMIT kicks off the job (and hence the procedure) in parallel.