How to update multiple Kogito process variables with results from service task - kogito

Please advise how to update multiple Kogito process variables with outputs from a service task. I mapped the service task's outputs to process variables on the "Data Outputs and Assignments" screen of the BPMN editor and implemented the service task handler to return a Java Map<String, Object>. The solution compiles and runs, but the process variables are not updated with the results and the process gets stuck in the Active state ...
import java.util.HashMap;
import java.util.Map;

import javax.enterprise.context.ApplicationScoped;

@ApplicationScoped
public class Handler {

    public Map<String, Object> Execute() {
        Map<String, Object> results = new HashMap<>();
        results.put("processVariableA", true);
        results.put("processVariableB", "message");
        return results;
    }
}
Log
2022-08-23 16:47:49,164 INFO [org.kie.kog.qua.pro.dev.DevModeWorkflowLogger] (main) Starting workflow 'Provisioning' (8ce17dc5-9606-425b-b563-a9deacdcc812)
2022-08-23 16:47:49,168 INFO [org.kie.kog.qua.pro.dev.DevModeWorkflowLogger] (main) Triggered node 'Start' for process 'Provisioning' (8ce17dc5-9606-425b-b563-a9deacdcc812)
2022-08-23 16:47:49,172 INFO [org.kie.kog.qua.pro.dev.DevModeWorkflowLogger] (main) Triggered node 'Set Variables' for process 'Provisioning' (8ce17dc5-9606-425b-b563-a9deacdcc812)
2022-08-23 16:47:49,365 INFO [org.kie.kog.qua.pro.dev.DevModeWorkflowLogger] (main) Workflow 'PortsProvisioning' (8ce17dc5-9606-425b-b563-a9deacdcc812) was started, now 'ACTIVE'

I raised this question on Zulip as well; maybe that will prompt someone: https://kie.zulipchat.com/#narrow/stream/232676-kogito/topic/Question.3A.20update.20multiple.20proces.20variables

Related

ServiceStack with MiniProfiler for .Net 6

I was attempting to add profiling into ServiceStack 6 with .NET 6, using the .NET Framework MiniProfiler plugin code as a starting point.
I noticed that ServiceStack still has Profiler.Current.Step("Step Name") in the Handlers, AutoQueryFeature and others.
What is currently causing me some stress is the following:
In ServiceStackHandlerBase.GetResponseAsync(IRequest httpReq, object request) the async task is not awaited. This causes the step to be disposed when it reaches the first async method it must await, causing all the subsequent nested steps to not be children. Is there something simple I'm missing here, or is this just a bug in a seldom-used feature?
In SqlServerOrmLiteDialectProvider most of the async methods make use of an Unwrap function that drills down to the SqlConnection or SqlCommand. This causes an issue when attempting to wrap a command to enable profiling, as it ignores the override methods in the wrapper in favour of the IHasDbCommand.DbCommand nested within. Not using IHasDbCommand on the wrapping command makes it attempt to use the wrapping command, but it hits a snag because of the forced cast to SqlCommand. Is there an easy way to combat this issue, or do I have to extend each OrmLiteDialectProvider I wish to use that has this issue to take into account the wrapping command if it is present?
Any input would be appreciated.
Thanks.
Extra Information Point 1
Below is the code from ServiceStackHandlerBase that appears (to me) to be a bug:
public virtual Task<object> GetResponseAsync(IRequest httpReq, object request)
{
    using (Profiler.Current.Step("Execute " + GetType().Name + " Service"))
    {
        return appHost.ServiceController.ExecuteAsync(request, httpReq);
    }
}
I made a small example that shows what I am looking at:
using System;
using System.Threading.Tasks;

public class Program
{
    public static async Task<int> Main(string[] args)
    {
        Console.WriteLine("App Start.");
        await GetResponseAsync();
        Console.WriteLine("App End.");
        return 0;
    }

    // Async method with a using and non-awaited task.
    private static Task GetResponseAsync()
    {
        using (new Test())
        {
            return AdditionAsync();
        }
    }

    // Placeholder async method.
    private static async Task AdditionAsync()
    {
        Console.WriteLine("Async Task Started.");
        await Task.Delay(2000);
        Console.WriteLine("Async Task Complete.");
    }
}

public class Test : IDisposable
{
    public Test()
    {
        Console.WriteLine("Disposable instance created.");
    }

    public void Dispose()
    {
        Console.WriteLine("Disposable instance disposed.");
    }
}
My Desired Result:
App Start.
Disposable instance created.
Async Task Started.
Async Task Complete.
Disposable instance disposed.
App End.
My Actual Result:
App Start.
Disposable instance created.
Async Task Started.
Disposable instance disposed.
Async Task Complete.
App End.
This to me shows that even though the task is awaited at a later point in the code, the using has already disposed of the contained object.
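As a minimal sketch, reusing the Test and AdditionAsync types from the example above: if GetResponseAsync is made async and awaits inside the using, Dispose() only runs after the awaited work has completed, producing the desired ordering.
// Sketch: await before leaving the using, so disposal happens after AdditionAsync completes.
private static async Task GetResponseAsync()
{
    using (new Test())
    {
        await AdditionAsync();
    }
}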
MiniProfiler was coupled to System.Web, so it isn't supported in ServiceStack .NET 6.
To view the generated SQL you can use a BeforeExecFilter to inspect the IDbCommand before it's executed.
This is what PrintSql() uses to write all generated SQL to the console:
OrmLiteUtils.PrintSql();
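A minimal sketch of the filter-based approach, assuming the OrmLiteConfig.BeforeExecFilter hook and the GetDebugString() extension method are available:
using System;
using ServiceStack.OrmLite;

public static class SqlLogging
{
    // Hypothetical helper: call once at startup to log every command before it executes.
    public static void Enable()
    {
        // BeforeExecFilter runs for each IDbCommand just before execution;
        // GetDebugString() formats the SQL text together with its parameter values.
        OrmLiteConfig.BeforeExecFilter = dbCmd => Console.WriteLine(dbCmd.GetDebugString());
    }
}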
Note: when you return a non-awaited task it just means it doesn't get awaited at that point; it still gets executed when the returned task is eventually awaited.
To avoid the explicit casting you should be able to override a SQL Server Dialect Provider where you'll be able to replace the existing implementation with your own.

Call each task in a separate thread. ThreadPerTaskScheduler task scheduler

I want to run each TPL task in a separate thread (the idea is to keep the advantages of the TPL while working with separate threads). It looks like this task scheduler is exactly what I'm looking for: ThreadPerTaskScheduler.
I made several local tests and I see that it works as I expected including the ability to call Task.WaitAll.
var task = Task.Factory.StartNew(() =>
{
    Thread.Sleep(10000);
}, CancellationToken.None, TaskCreationOptions.None, new ThreadPerTaskScheduler());

Task.WaitAll(task);
But, I have a question about this line from the task scheduler implementation:
protected override void QueueTask(Task task)
{
    new Thread(() => TryExecuteTask(task)) { IsBackground = true }.Start();
}
As far as I can see, we just create a new thread without saving a reference to it anywhere. If so, how does Task.WaitAll work?

Understanding SqsMessageDrivenChannelAdapter behaviour

I'm trying to understand the behavior of SqsMessageDrivenChannelAdapter in order to address a memory issue.
The upstream system dumps thousands of messages into the aws-sqs-queue; all of the messages are received immediately by SqsMessageDrivenChannelAdapter, and on the AWS console I do not see any messages available on the queue.
The SqsMessageProcesser then processes 1 message every 5 seconds.
Here's the log:
2019-05-21 17:28:18 INFO SQSMessageProcessor:88 - --- inside sqsMessageProcesser---
2019-05-21 17:28:23 INFO SQSMessageProcessor:88 - --- inside sqsMessageProcesser---
2019-05-21 17:28:28 INFO SQSMessageProcessor:88 - --- inside sqsMessageProcesser---
2019-05-21 17:28:33 INFO SQSMessageProcessor:88 - --- inside sqsMessageProcesser---
2019-05-21 17:28:38 INFO SQSMessageProcessor:88 - --- inside sqsMessageProcesser---
.........................
Does this mean that while the SqsMessageProcesser is processing 1 message every 5 seconds, thousands of messages are being held in (server) memory by the in-channel?
Each DB transaction takes around 5 seconds, and currently we are facing 'OutOfMemory' issues on PRD.
Will it help if I set the capacity on the QueueChannel and setMaxNumberOfMessages for the SqsMessageDrivenChannelAdapter?
If yes, is there a standard way to calculate these values?
@Bean(name = "in-channel")
public PollableChannel sqsInputChannel() {
    return new QueueChannel();
}

@Autowired
private AmazonSQSAsync amazonSqs;

@Bean
public MessageProducer sqsMessageDrivenChannelAdapterForItems() {
    SqsMessageDrivenChannelAdapter adapter =
            new SqsMessageDrivenChannelAdapter(amazonSqs, "aws-sqs-queue");
    adapter.setOutputChannelName("in-channel");
    return adapter;
}

@ServiceActivator(inputChannel = "in-channel",
        poller = @Poller(fixedRate = "5000", maxMessagesPerPoll = "1"))
public void sqsMessageProcesser(Message<?> receive) throws ProcesserException {
    logger.info("--- inside sqsMessageProcesser---");
    // db transactions.
}
Actually, it is an anti-pattern to place a QueueChannel after a message-driven channel adapter. The latter is already async and based on task scheduling, so shifting consumed messages from the source into an in-memory queue definitely leads to trouble.
You should consider using a direct channel instead and letting the SQS consuming thread block until your sqsMessageProcesser finishes its job. This way you guarantee no data loss.

singleton in azure queuetrigger not working as expected

My understanding of this is obviously wrong, any clarification would be helpful.
I thought that adding [Singleton] to a web job would force it to run one after another.
This does not seem to be the case.
This is my very basic test code (against a queue with about 149 messages)
[Singleton] // just run one at a time
public static void ProcessQueueMessage([QueueTrigger("datatrac-stops-to-update")] string message, TextWriter log)
{
    monitorEntities mDb = new monitorEntities();

    // go get the record
    int recordToGet = Convert.ToInt32(message);
    var record = (from r in mDb.To_Process where r.Id == recordToGet select r).FirstOrDefault();

    record.status = 5;
    mDb.SaveChanges();

    Console.WriteLine($"Finished record {message}");
}
When it runs I get this on the console:
and as I step through it I am getting conflict errors.
What am I not understanding?
RESOLVED - MORE INFO
Here is what I did to address this: as Haitham said in his answer, [Singleton] refers to how many instances of the web job itself are running, not how many items are processed per instance.
That was addressed by modifying my Main like:
static void Main(string[] args)
{
    var config = new JobHostConfiguration();
    config.Queues.BatchSize = 2;
Which, when set to 1, only ran one at a time.
When set to 2 as above, then modifying the code below:
public static void ProcessQueueMessage([QueueTrigger("datatrac-stops-to-update")] string message, TextWriter log)
{
    var threadID = Thread.CurrentThread.ManagedThreadId;
    Console.WriteLine($"{threadID} : started record {message}");
Produces this behavior (which is what was expected):
Link where I found documentation on above:
https://github.com/Azure/azure-webjobs-sdk/wiki/Queues#config
Singleton does not mean it will run messages one after another; it is mainly about instantiating the instance of the web job class.
If you need to run just one at a time, you can use a lock on a static object to prevent the code from executing more than once at a time.
But I would not recommend that anyway, and you should look into why there are conflict errors.
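A rough sketch of that lock-on-a-static approach (the queue name comes from the question; everything else here is assumed):
using System;
using System.IO;
using System.Threading;
using Microsoft.Azure.WebJobs;

public static class Functions
{
    // Static gate object: only one message on this host instance is processed at a time.
    private static readonly object Gate = new object();

    public static void ProcessQueueMessage(
        [QueueTrigger("datatrac-stops-to-update")] string message, TextWriter log)
    {
        lock (Gate)
        {
            // Any concurrently dispatched messages wait here until the lock is released.
            log.WriteLine($"Started record {message} on thread {Thread.CurrentThread.ManagedThreadId}");
            // ... load and update the record here ...
        }
    }
}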

Durable Task Framework re-queue failed task

How to use "waiting for external" event functionality of durable task framework in the code. Following is a sample code.
context.ScheduleWithRetry<LicenseActivityResponse>(
    typeof(LicensesCreatorActivity),
    _retryOptions,
    input);
I am using the ScheduleWithRetry<> method of the context for scheduling my task on DTF, but when an exception occurs in the code, the above method retries for the _retryOptions number of times.
After completing the retries, the orchestration status will be marked as Failed.
I need a process by which I can resume my orchestration on DTF after correcting the cause of the exception.
I have looked into the GitHub code for the method concerned, but with no success.
I have come up with two possible solutions:
1. Call a framework method (if one exists) and re-queue the orchestration from the state where it failed.
2. Wrap the orchestration code in try/catch; in the catch section implement a method such as CreateOrchestrationInstanceWithRaisedEventAsync, which puts the orchestration on hold until an external event triggers it back, whenever a user (using some front-end application) raises the external event to resume (which means the user has made the corrections that were causing the exception).
These are my understandings. If one of the above is possible, kindly guide me with some technical suggestions; otherwise, point me to the correct path for this task.
For the community's benefit, Salman resolved the issue by doing the following:
"I solved the problem by creating a sub orchestration in case of an exception occurs while performing an activity. The sub orchestration lock the event on azure as pending state and wait for an external event which raise the locked event so that the parent orchestration resumes the process on activity. This process helps if our orchestrations is about to fail on azure durable task framework"
I have figured out the solution to my problem by using "Signal Orchestrations", taken from code in the GitHub repository.
Following is the solution diagram for the problem.
In this diagram, before the solution was implemented, we only had "Process Activity", which actually executes the activity.
The Azure Storage table stores the multiplier values for an InstanceId and ActivityName; why we implemented this will become clear later.
The monitoring website is the platform from which a user can re-queue/retry the orchestration activity.
Now we have a pre-step and a post-step.
1. Get Retry Option (Pre-Step)
This method basically sets the values of the RetryOptions instance.
private RetryOptions ModifyMaxRetires(OrchestrationContext context, string activityName)
{
    var failedInstance =
        _azureStorageFailedOrchestrationTasks.GetSingleEntity(context.OrchestrationInstance.InstanceId,
            activityName);
    var configuration = Container.GetInstance<IConfigurationManager>();

    if (failedInstance.Result == null)
    {
        return new RetryOptions(TimeSpan.FromSeconds(configuration.OrderTaskFailureWaitInSeconds),
            configuration.OrderTaskMaxRetries);
    }

    var multiplier = ((FailedOrchestrationEntity)failedInstance.Result).Multiplier;
    return new RetryOptions(TimeSpan.FromSeconds(configuration.OrderTaskFailureWaitInSeconds),
        configuration.OrderTaskMaxRetries * multiplier);
}
If we have an entry in our Azure storage table for the InstanceId and ActivityName, we take the multiplier value from the table and use it to update the retry count when creating the RetryOptions instance; otherwise we use the default retry count, which comes from our config.
Then we process the activity with the scheduled retry count (in case the activity fails), as sketched below.
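A rough sketch of how the pre-step, the retried activity call and the post-step described next might fit together inside the parent orchestration (ProvisionOrderOrchestration and OrderContext are hypothetical names; the other identifiers come from the snippets in this answer):
using System;
using System.Threading.Tasks;
using DurableTask.Core;

internal class ProvisionOrderOrchestration : TaskOrchestration<string, OrderContext>
{
    public override async Task<string> RunTask(OrchestrationContext context, OrderContext input)
    {
        // Pre-step: pick the retry options (multiplied if this activity has failed before).
        var retryOptions = ModifyMaxRetires(context, nameof(LicensesCreatorActivity));
        try
        {
            // Process the activity with the scheduled retry count.
            await context.ScheduleWithRetry<LicenseActivityResponse>(
                typeof(LicensesCreatorActivity), retryOptions, input);
        }
        catch (Exception exception)
        {
            // Post-step: record the failure and hand off to the signal sub-orchestration.
            await HandleExceptionForSignal(context, exception, nameof(LicensesCreatorActivity));
        }
        return "Completed";
    }

    // ModifyMaxRetires and HandleExceptionForSignal are the methods shown in this answer.
}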
2. Handle Exceptions (Post-Step)
This method basically handles the exception in case the activity fails to complete even after the number of retries set for the activity in the RetryOptions instance.
private async Task HandleExceptionForSignal(OrchestrationContext context, Exception exception, string activityName)
{
    var failedInstance = _azureStorageFailedOrchestrationTasks.GetSingleEntity(context.OrchestrationInstance.InstanceId, activityName);
    if (failedInstance.Result != null)
    {
        _azureStorageFailedOrchestrationTasks.UpdateSingleEntity(context.OrchestrationInstance.InstanceId, activityName, ((FailedOrchestrationEntity)failedInstance.Result).Multiplier + 1);
    }
    else
    {
        //const multiplier when first time exception occurs.
        const int multiplier = 2;
        _azureStorageFailedOrchestrationTasks.InsertActivity(new FailedOrchestrationEntity(context.OrchestrationInstance.InstanceId, activityName)
        {
            Multiplier = multiplier
        });
    }

    var exceptionInput = new OrderExceptionContext
    {
        Exception = exception.ToString(),
        Message = exception.Message
    };

    await context.CreateSubOrchestrationInstance<string>(typeof(ProcessFailedOrderOrchestration), $"{context.OrchestrationInstance.InstanceId}_{Guid.NewGuid()}", exceptionInput);
}
The above code first tries to find the InstanceId and ActivityName in Azure storage. If they are not there, we simply add a new row to the Azure storage table for the InstanceId and ActivityName with the default multiplier value of 2.
Later we create a new exception context instance to send the exception message and details to the sub-orchestration (which will be shown to a user on the monitoring website). The sub-orchestration waits for an external event fired by a user against the InstanceId of the sub-orchestration.
Whenever it is fired from the monitoring website, the sub-orchestration ends and the parent orchestration starts the activity once again. But this time, when the pre-step is called again, it will find the entry in the Azure storage table with a multiplier, which means the retry count will be updated by multiplying the default retry count.
In this way, we can continue our orchestrations and prevent them from failing.
Following is the class of sub-orchestrations.
internal class ProcessFailedOrderOrchestration : TaskOrchestration<string, OrderExceptionContext>
{
    private TaskCompletionSource<string> _resumeHandle;

    public override async Task<string> RunTask(OrchestrationContext context, OrderExceptionContext input)
    {
        await WaitForSignal();
        return "Completed";
    }

    private async Task<string> WaitForSignal()
    {
        _resumeHandle = new TaskCompletionSource<string>();
        var data = await _resumeHandle.Task;
        _resumeHandle = null;
        return data;
    }

    public override void OnEvent(OrchestrationContext context, string name, string input)
    {
        _resumeHandle?.SetResult(input);
    }
}
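For completeness, a minimal sketch of how the monitoring website side might raise that external event through the DurableTask TaskHubClient (the event name "Resume" and the service wrapper are assumptions; the sub-orchestration above accepts any event name):
using System.Threading.Tasks;
using DurableTask.Core;

public class OrchestrationResumeService
{
    private readonly TaskHubClient _taskHubClient;

    public OrchestrationResumeService(TaskHubClient taskHubClient)
    {
        _taskHubClient = taskHubClient;
    }

    // Hypothetical helper invoked by the monitoring website once the user has
    // corrected the underlying problem; it raises the event the sub-orchestration
    // above is waiting on, which lets the parent orchestration resume.
    public Task ResumeAsync(string subOrchestrationInstanceId)
    {
        var instance = new OrchestrationInstance { InstanceId = subOrchestrationInstanceId };
        return _taskHubClient.RaiseEventAsync(instance, "Resume", "resume");
    }
}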
