I am trying to stop an asynchronous flow in Mule by calling muleContext.registry.lookupFlowConstruct('').stop() from a Groovy script, but even after that the flow still keeps triggering. I want to stop it abruptly.
You can try:
eventContext.setStopFurtherProcessing(true)
or:
<expression-component>
    app.registry.yourflowName.stop();
</expression-component>
If you are looking to stop messages from reaching your flow, you can also use filters.
Related
There is a Spring Integration flow that has to be stopped manually by the user while it is running, and a ChannelInterceptor needs to be used to stop the execution. The configuration is XML-based.
To stop the flow it is enough to call stop() on the endpoint that is the beginning of the flow. Autowire an AbstractEndpoint for the specific id and call its stop(). It really doesn't matter whether you do that in the ChannelInterceptor or in some other service.
There is also a common pattern called the Control Bus, where one of the commands to send could be the mentioned stop() operation: https://docs.spring.io/spring-integration/docs/current/reference/html/system-management.html#control-bus
There's an API that my Node.js server accesses quite a bit. It requires me to change my password every 3 months. Fortunately, there's also an API call for changing the password. :) I have a cron job that runs regularly and changes the password when necessary.
If my app is accessing the API at the exact time the password is being changed, there's a potential race condition and the API call could fail. What are some good patterns for dealing with this?
I could put all the API calls into a queue, and use a cron job to pull the most recent one off the queue and run it. If the API call fails, it would stay in the queue and get run next time the cron job runs. But that seems like it might be overkill.
I could use a try/catch handler with the API call, inside a while loop, and just run the while loop until the API call completes successfully. But that's going to block the rest of my app.
I could use a try/catch handler with the API call, inside a setTimeout, and just re-run the setTimeout until the API call completes successfully. This way the API call would only run when the main thread is done with other work and gets around to it. But would this be a mistake if the server is under heavy load?
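For concreteness, a minimal sketch of this setTimeout-based retry; callApi and the delay/attempt numbers here are hypothetical:

```typescript
// Minimal sketch of the setTimeout-based retry; callApi and the numbers are hypothetical.
function callWithRetry(
  callApi: () => Promise<void>, // the API call that may fail during a password change
  delayMs = 1000,
  attemptsLeft = 10,
): void {
  callApi().catch((err) => {
    if (attemptsLeft <= 1) {
      console.error('API call failed after all retries:', err);
      return;
    }
    // Re-schedule instead of looping, so the event loop stays free for other work.
    setTimeout(() => callWithRetry(callApi, delayMs * 2, attemptsLeft - 1), delayMs);
  });
}
```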
Is there a better pattern for dealing with this sort of thing?
The try/catch handlers would lose data in the event of a server crash, so I went with the cron job/queue approach. I'm using a queue maintained as a table in my db, so that if something interrupts the server, nothing will be lost.
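In outline, the cron-driven worker looks something like this; the Queue interface below is a hypothetical stand-in for the db table, not any particular library:

```typescript
// Sketch of the cron-driven worker; the Queue interface stands in for the db table
// (names here are hypothetical, not from any particular library).
interface QueuedCall {
  id: number;
  endpoint: string;
  payload: unknown;
}

interface Queue {
  pending(): Promise<QueuedCall[]>;    // e.g. SELECT ... WHERE done = false ORDER BY id
  markDone(id: number): Promise<void>; // e.g. UPDATE ... SET done = true WHERE id = ?
}

async function drainQueue(
  queue: Queue,
  callApi: (call: QueuedCall) => Promise<void>,
): Promise<void> {
  for (const call of await queue.pending()) {
    try {
      await callApi(call);
      await queue.markDone(call.id); // only mark done after a successful call
    } catch {
      // Leave the row in place; the next cron run retries it,
      // so nothing is lost even if the server crashes in between.
    }
  }
}
```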
Only a few task types can be run asynchronously: HTTP, EVENT, KAFKA. But why is there no way to run SIMPLE tasks asynchronously? It would be an especially useful feature to run sub-workflows asynchronously. The only workaround (and only for sub-workflows) is to send an event that is handled by a registered event handler, which then runs the workflow.
I'm late to the party here, but check out the FORK operator.
https://orkes.io/content/docs/reference-docs/fork-task
In this workflow the fork splits your workflow into 3 paths: sending an email, an SMS, and an HTTP notification. Each path runs asynchronously. You can also set your JOIN (the other half of the fork) to join on all or just some of the fork "tines".
If you need to define the number of asynchronous paths at runtime, the Dynamic Fork is the way to go (but it is a bit more complicated to set up).
https://orkes.io/content/docs/reference-docs/dynamic-fork-task
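For reference, the FORK_JOIN/JOIN pair in a workflow definition looks roughly like the sketch below; the task names and reference names are made up for illustration:

```typescript
// Rough shape of a FORK_JOIN / JOIN pair in a Conductor workflow definition;
// task names and reference names are made up for illustration.
const forkAndJoin = [
  {
    name: 'fork_notifications',
    taskReferenceName: 'fork_notifications_ref',
    type: 'FORK_JOIN',
    forkTasks: [
      // Each inner array is one "tine" of the fork; all tines run in parallel.
      [{ name: 'send_email', taskReferenceName: 'email_ref', type: 'SIMPLE' }],
      [{ name: 'send_sms', taskReferenceName: 'sms_ref', type: 'SIMPLE' }],
      [{ name: 'send_http', taskReferenceName: 'http_ref', type: 'SIMPLE' }],
    ],
  },
  {
    name: 'join_notifications',
    taskReferenceName: 'join_ref',
    type: 'JOIN',
    // joinOn lists which tines to wait for; listing only some of them
    // lets the workflow continue without waiting on the rest.
    joinOn: ['email_ref', 'sms_ref', 'http_ref'],
  },
];
```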
In my application I create an Azure Batch job. It's a Node app and I use the azure-batch Node client, but I could also be using REST; I don't think it matters. I can't switch to a C# client, however.
I expect the job to complete in a few seconds, and I wish to pause the code until the batch job is over, but I am not sure how to detect the end of the job without polling the job status API. Neither the Node client nor the REST API exposes such functionality. I thought I could maybe register for an event of some sort, but was not able to find anything like that. There are job release tasks, but I am not sure whether I can achieve this using them.
Any ideas on how the end of an Azure Batch job can be detected from within my application?
One way to do this: once you have added your tasks to the job, set the job's onAllTasksComplete property to 'terminatejob'.
Then you can poll the Job-Get API and check the job's state property to see when the job is complete (https://learn.microsoft.com/en-us/rest/api/batchservice/job/get#jobstate or https://learn.microsoft.com/en-us/javascript/api/azure-batch/job?view=azure-node-latest#get-string--object-).
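A rough sketch of that patch-then-poll approach; the JobClient interface below models only the two calls this sketch uses, so check the linked docs for the azure-batch client's exact signatures:

```typescript
// Rough sketch of patch-then-poll; JobClient models only the two calls used here.
// Check the linked azure-batch docs for the exact method signatures.
interface JobClient {
  patch(jobId: string, patch: { onAllTasksComplete: string }): Promise<void>;
  get(jobId: string): Promise<{ state?: string }>;
}

async function waitForJob(job: JobClient, jobId: string): Promise<void> {
  // After all tasks are added, tell the service to terminate the job
  // automatically once every task has completed.
  await job.patch(jobId, { onAllTasksComplete: 'terminatejob' });

  // Poll Job-Get until the job reaches the 'completed' state.
  while ((await job.get(jobId)).state !== 'completed') {
    await new Promise((resolve) => setTimeout(resolve, 5000)); // 5s between polls
  }
}
```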
I'm using the Azure Fluent API (Azure Management Libraries for .NET) to scale the DTUs within an Azure Elastic Pool, and would like to know if it's possible to trigger an update without having to wait for the processing to complete.
Currently the following block of code will wait until the Elastic Pool has finished scaling before it continues execution. With a large premium Elastic Pool this could mean that this line will take up to 90 minutes to complete.
ElasticPool
    .Update()
    .WithDtu(1000)
    .Apply();
There's also an ApplyAsync() method which I could deliberately not await to allow the program to continue execution. If I take this approach, the program will end execution shortly after calling this line, and I am unsure whether this library has been designed to work in this fashion.
Does anyone know of a better solution to trigger an update without having to wait on a response? Or whether it is safe to fire the async method without waiting for a response?
There is currently no way to make fire-and-forget calls in the Fluent SDK for update scenarios, but we are looking into ways of enabling manual status polling in the future. One option would be to create a thread that will wait on the completion. The other is to use the Inner getter and make low-level BeginCreateOrUpdateAsync/BeginUpdateAsync method calls and then do manual polls.
On a side note, if you need to make multiple calls and then wait for the completion of all of them, you can use Task.WaitAll(...) and provide the list of the ApplyAsync tasks.
Please log an issue in the repo if you hit any errors; that way you will be able to track the progress of the fix.
Edit: FYI, the call is blocking not because the SDK is waiting for a response from Azure, but because the SDK waits until the call is completed, the update operation is finished, and the resource is ready to be used for further operations. Just firing an update and then trying to use the resource will cause error responses if, in your case, the Elastic Pool is still in the middle of the update.
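To illustrate the begin-update-then-poll idea in a language-agnostic way (the real SDK here is C#; beginUpdatePool and getPoolState below are hypothetical stand-ins, not Fluent SDK calls):

```typescript
// Language-agnostic sketch of "fire the update, then poll before reusing the resource".
// beginUpdatePool and getPoolState are hypothetical stand-ins, not Fluent SDK calls.
async function scaleWithoutBlocking(
  beginUpdatePool: (dtu: number) => Promise<void>, // returns once Azure accepts the request
  getPoolState: () => Promise<string>,             // e.g. 'Updating' or 'Ready'
): Promise<void> {
  await beginUpdatePool(1000); // kick off the scale operation without waiting for it to finish

  // ...the application is free to do other work here...

  // Before using the pool again, poll its state until the update is done.
  while ((await getPoolState()) !== 'Ready') {
    await new Promise((resolve) => setTimeout(resolve, 30000)); // check every 30s
  }
}
```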