I want to know when the EcuMDriverInitListBswM lists are executed. Are they executed during BswM initialization, or is there a BswMGenericRequest responsible for a mode that is arbitrated, whose true action list contains the BswMEcuMDriverInitListBswM action?
Hi, is it possible to get the tasks whose status changed on performing a task in django-viewflow?
You can examine the activation.task.leading chain after calling activation.done().
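A minimal sketch of what that can look like, assuming django-viewflow's Task model, where previous is a many-to-many with related_name='leading' (so task.leading is the reverse relation):

def complete_and_list_next(activation):
    # complete the current task; viewflow then activates the following ones
    activation.done()
    # tasks whose `previous` set includes the task we just finished,
    # i.e. the tasks whose status changed as a result
    for next_task in activation.task.leading.all():
        print(next_task.flow_task, next_task.status)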
This question is similar to this one I previously asked, in that I want the task to perform a Target Worker Expression check on a list of WorkerSids that I've added as one of the task's attributes. But I think this problem is different enough to warrant its own question.
My goal is to associate a "do not contact" list of WorkerSids with a Task; these are workers who should not be assigned the task (maybe the customer previously had a bad interaction with them).
I have the following workflow configuration:
{
  "task_routing": {
    "filters": [
      {
        "filter_friendly_name": "don't call self",
        "expression": "1==1",
        "targets": [
          {
            "queue": queueSid,
            "expression": "(task.caller!=worker.contact_uri) and (worker.sid not in task.do_not_contact)",
            "skip_if": "workers.available == 0"
          },
          {
            "queue": automaticQueueSid
          }
        ]
      }
    ],
    "default_filter": {
      "queue": queueSid
    }
  }
}
When I create a task, checking the Twilio Console, I can see that the task has the following attributes:
{"from_country":"US","do_not_contact":["WORKER_SID1_HERE","WORKER_SID_2_HERE"],
... bunch of other attributes...
}
So I know that the task has successfully been assigned the array of WorkerSids as one of its attributes.
There is only one worker who is Idle and whose attributes match the queueSid TaskQueue. That worker's SID is WORKER_SID1_HERE, so the only available worker is ineligible to receive the task reservation. So what should happen is that the first target expression worker.sid not in task.do_not_contact returns false, and the task falls through to the automaticQueueSid TaskQueue.
Instead, the task remains unassigned in queueSid. The following sequence of TaskRouter events is logged:
task-queue.entered
Task TASK_SID entered TaskQueue QUEUESID_QUEUENAME
task.created
Task TASK_SID created
workflow.target-matched
Task TASK_SID matched a workflow target
workflow.entered
Task TASK_SID entered Workflow WORKFLOW_NAME
What do I need to change to get the desired workflow behavior?
Changing the skip_if to
"skip_if": "1==1"
solved the problem.
Per Twilio developer support, worker.sid not in task.do_not_contact returns true for workers who are unavailable but also not in do_not_contact, so the target expression still matches a set of workers. The "skip_if": "workers.available==0" then returns false, because technically there is one "available" worker: the one who is ineligible due to the do_not_contact list.
What's needed is for the skip_if to always return true, so that when the first target processes the task without assigning it, the skip_if passes it along to the next target, as discussed in the TaskRouter Workflow documentation:
"TaskRouter will only skip a routing step in a Workflow if:
No Reservations are immediately created when a Task enters the routing step
The Skip Timeout expression evaluates to true"
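For reference, this is the first target from the configuration above with only the skip_if changed; everything else stays the same:

{
  "queue": queueSid,
  "expression": "(task.caller!=worker.contact_uri) and (worker.sid not in task.do_not_contact)",
  "skip_if": "1==1"
}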
Mono<WriteResult> result = reactiveCassandraTemplate.delete(...)
We are handling onSuccess() and onError(), but does something need to be handled specially when the WriteResult's wasApplied is false but no error is returned? What does it actually mean if the write didn't fail, but was not applied?
Thanks!
wasApplied needs to be checked if your query contains a conditional update (a so-called lightweight transaction, or conditional creation of keyspaces/tables, etc.). If this field is false, your query was executed but wasn't applied because the condition in the query didn't allow it.
By default this method always returns true for non-conditional queries.
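A minimal sketch of how that plays out, assuming Spring Data Cassandra's reactive template and a conditional delete (IF EXISTS), which is what makes wasApplied meaningful; the user entity and the DeleteOptions builder call are illustrative, so check them against your Spring Data version:

// DELETE ... IF EXISTS is a conditional (lightweight-transaction) statement
Mono<WriteResult> result = reactiveCassandraTemplate.delete(
        user, DeleteOptions.builder().withIfExists().build());

result.subscribe(writeResult -> {
    if (!writeResult.wasApplied()) {
        // no error: the statement executed, but the IF EXISTS condition
        // failed (e.g. the row was already gone), so nothing was deleted
    }
});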
I have what seemed like a fairly simple requirement for a process, but I'm beginning to question whether it is even possible.
The image below shows my current process. I am trying to achieve two things:
A user creates an initial user task for adding a note; they should be able to add as many notes as they wish, with one user task per note
A new sub-process is spawned for each new note (user task) that the user has created.
The process above presents the following problems:
A sub-process should be spawned for each task; however, they seem to overwrite each other
I'm not sure if the sub-process requires a unique ID for each new sub-process spawned
So it turns out that the solution to this question requires a bit of scripting using Groovy.
Below is the updated process model diagram. In it, I start a new instance of the Complete Task process using a script task; then, if the user wishes to add more tasks, the exclusive gateway can return the user to the Create Task user task, OR finish the process.
Within the script task, I clear down any values in the fields held by the user task before I pass scope back to the user task.
The image below shows my Complete Task process that gets called by the main process using a script
Here I avoid using parallel gateways in favour of creating a new instance of the Create Task user task and a new instance of the Complete Task process (not a subprocess) by means of the script.
To start a new instance of the Complete Task process, we have to start it with startProcessInstanceByKeyAndTenantId() on a RuntimeService instance, although I could also use startProcessInstanceByIdAndTenantId():
//import required Activiti classes
import org.activiti.engine.RuntimeService;
import org.activiti.engine.runtime.ProcessInstance;

//obtain the RuntimeService from the current execution
RuntimeService runtimeService = execution.getEngineServices().getRuntimeService();

//tenant id of the current process instance
String tenantId = execution.getTenantId();

//copy the local variables of the current process instance
Map<String, Object> variables = runtimeService.getVariablesLocal(execution.getProcessInstanceId());

//start the process (processDefinitionKey, variables, tenantId)
ProcessInstance completeTask = runtimeService.startProcessInstanceByKeyAndTenantId("CompleteTask", variables, tenantId);

//clear the form fields so the next Create Task starts fresh
execution.setVariable("title", "");
execution.setVariable("details", "");
Using this approach I avoid creating multiple subprocesses from the parent process and instead create multiple processes that run separately from the parent. This benefits me because, if the parent process completes, the others continue to run.
It seems like you are updating only one variable (or a single set of variables) as a result of each task, which overwrites the previous value. Use distinct variables, or prepend something to each variable name to mark it as unique for the completed task/sub-process. See collapsed sub-processes.
Yes, each sub-process gets its own unique execution ID, but the main execution ID, i.e. the process instance ID, remains the same.
I am working on Oracle 10gR2.
Here is my problem:
I have a procedure, let's call it *proc_parent* (inside a package), which is supposed to call another procedure, let's call it *user_creation*. I have to call *user_creation* inside a loop, which reads some columns from a table; these column values are passed as parameters to the *user_creation* procedure.
The code is like this:
FOR i IN (SELECT community_id,
password,
username
FROM customer
WHERE community_id IS NOT NULL
AND created_by = 'SRC_GLOB'
)
LOOP
user_creation (i.community_id,i.password,i.username);
END LOOP;
COMMIT;
The *user_creation* procedure invokes a web service for some business logic and then, based on the response, updates a table.
I need to find a way to use multi-threading here, so that I can run multiple instances of this procedure to speed things up. I know I can use *DBMS_SCHEDULER* and probably *DBMS_ALERT*, but I am not able to figure out how to use them inside a loop.
Can someone guide me in the right direction?
Thanks,
Ankur
What you can do is submit lots of jobs at the same time. See Example 28-2, "Creating a Set of Lightweight Jobs in a Single Transaction".
This fills a PL/SQL table with all the jobs you want to submit in one transaction, all at the same time. As soon as they are submitted (enabled) they will start running: as many as the system can handle, or as many as a resource manager plan allows.
The overhead of lightweight jobs is, as the name suggests, minimal.
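A condensed sketch of that documentation example, adapted to this case; the job names and the job-template program PROG_USER_CREATION are assumed to exist, and the wiring of per-row arguments into the program is omitted for brevity:

DECLARE
  newjob    sys.job;
  newjobarr sys.job_array := sys.job_array();
BEGIN
  FOR i IN (SELECT community_id
              FROM customer
             WHERE community_id IS NOT NULL
               AND created_by = 'SRC_GLOB')
  LOOP
    -- one lightweight job per row, based on a pre-created program
    newjob := sys.job(job_name     => 'LW_USER_CREATION_' || i.community_id,
                      job_style    => 'LIGHTWEIGHT',
                      job_template => 'PROG_USER_CREATION',
                      enabled      => TRUE);
    newjobarr.EXTEND;
    newjobarr(newjobarr.LAST) := newjob;
  END LOOP;
  -- submits the whole set in a single transaction
  DBMS_SCHEDULER.CREATE_JOBS(newjobarr, 'TRANSACTIONAL');
END;
/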
I would like to close this question. DBMS_SCHEDULER as well as DBMS_JOB (though DBMS_SCHEDULER is preferred) can be used inside the loop to submit and execute the job.
For instance, here's some sample code using DBMS_JOB that can be invoked inside a loop:
...
FOR i IN (SELECT community_id,
                 password,
                 username
            FROM customer
           WHERE community_id IS NOT NULL
             AND created_by = 'SRC_GLOB'
         )
LOOP
  -- the WHAT string runs later in the job's own session, so the loop
  -- variables must be concatenated in as literals (community_id is
  -- assumed numeric here; quote it too if it is a string)
  DBMS_JOB.SUBMIT(JOB  => jobnum,
                  WHAT => 'BEGIN user_creation(' || i.community_id ||
                          ', ''' || i.password || ''', ''' ||
                          i.username || '''); END;');
  COMMIT;
END LOOP;
Issuing a COMMIT after each SUBMIT kicks off the job (and hence the procedure) in parallel.