I need functionality that does the following:
At a certain point in a flow, execution is paused until a specified time.
(It's like parking / staging: all messages remain in one place until the specified time.)
So if you set 2016-04-20 11:12:00 as that time (ideally it's specified by a cron expression), everything is paused until then (the flow does not continue processing messages). Once the specified time elapses, the workflow continues execution from the point where this 'staging' component resides.
Is it possible to do that with Spring Integration?
How should it be implemented?
Actually, the defaultDelay for the DelayHandler can be calculated from the date value:
@Autowired
@Qualifier("myDelayer.handler")
private DelayHandler myDelayer;
...
Date nextDate = ...
// delay everything by the difference between the target date and "now"
myDelayer.setDefaultDelay(nextDate.getTime() - System.currentTimeMillis());
and invoke this code somewhere after your application has started, e.g. from a ContextRefreshedEvent listener.
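For example, a minimal sketch of such a listener (the bean qualifier "myDelayer.handler" follows the snippet above; resolveTargetDate() is a made-up placeholder for however you derive the release time):

import java.util.Date;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.context.ApplicationListener;
import org.springframework.context.event.ContextRefreshedEvent;
import org.springframework.integration.handler.DelayHandler;
import org.springframework.stereotype.Component;

@Component
public class StagingDelayInitializer implements ApplicationListener<ContextRefreshedEvent> {

    @Autowired
    @Qualifier("myDelayer.handler")
    private DelayHandler myDelayer;

    @Override
    public void onApplicationEvent(ContextRefreshedEvent event) {
        Date nextDate = resolveTargetDate(); // hypothetical helper: however you derive the release time
        myDelayer.setDefaultDelay(nextDate.getTime() - System.currentTimeMillis());
    }

    private Date resolveTargetDate() {
        // e.g. read a configured timestamp or compute it from a cron expression
        return new Date(System.currentTimeMillis() + 60_000);
    }
}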
Or you can just place the desired Date in a message header and use delay-expression.
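For example, a rough Java-configuration sketch (the header name 'releaseDate' and the channel names are made up for illustration); the delayer releases each message once the Date produced by the expression has passed:

@Bean
@ServiceActivator(inputChannel = "stagingChannel")
public DelayHandler delayer() {
    DelayHandler handler = new DelayHandler("staging.delayer");
    // evaluate the 'releaseDate' header (a java.util.Date) for every message
    handler.setDelayExpression(new SpelExpressionParser().parseExpression("headers['releaseDate']"));
    handler.setOutputChannelName("afterStagingChannel");
    return handler;
}

// when sending, stamp the desired release time on the message:
Message<String> message = MessageBuilder.withPayload("some payload")
        .setHeader("releaseDate", nextDate)
        .build();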
On the other hand, you can just place your messages into a QueueChannel and use the desired cron in the <poller> of the endpoint that polls messages from that queue.
If the delay for those messages is that long, you should consider using a persistent MessageStore for that QueueChannel.
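For example, a rough annotation-based sketch (bean and channel names are illustrative; the cron expression corresponds to 2016-04-20 11:12:00 and would fire on that date every year):

@Bean
public PollableChannel stagingChannel() {
    // plain in-memory queue; back it with a MessageGroupQueue over a persistent
    // MessageGroupStore (JDBC, MongoDB, ...) if the messages must survive a restart
    return new QueueChannel();
}

@ServiceActivator(inputChannel = "stagingChannel",
        poller = @Poller(cron = "0 12 11 20 4 *", maxMessagesPerPoll = "-1"))
public void continueFlow(Message<?> message) {
    // the rest of the flow resumes here once the cron fires
}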
Related
Can the execution of an expressJS method be delayed for 30 days or more just by using setTimeout?
Let's say I want to create an endpoint /sendMessage that sends a message to my other app after a timeout of 30 days. Will my expressJS method execution last long enough to fire this message after this delay?
If your server runs continuously for 30 days or more, then setTimeout() will work for that. But it is probably not smart to rely on the fact that your server never, ever has to restart.
There are 3rd-party programs/modules designed explicitly for this. If you don't want to use one of them, then what I have done in the past is write each future firing time into a JSON file and set a timer for it with setTimeout(). If the timer successfully fires, then I remove that time from the JSON file.
So, at any point in time, the JSON file always contains a list of times in the future that I want timers to fire for. Any timer that fires is immediately removed from the JSON file.
Anytime my server starts up, I read the times from the JSON file and reconfigure the setTimeout() for each one.
This way, even if my server restarts, I won't lose any of the timers.
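The answer above is about Node.js; purely to illustrate the same persist-and-rearm idea in the language used elsewhere in this document, here is a compact Java sketch (the file name timers.txt and all class and method names are invented for this example):

import java.io.IOException;
import java.nio.file.*;
import java.util.List;
import java.util.concurrent.*;
import java.util.stream.Collectors;

// Persist future fire times so they survive a process restart, then re-arm
// timers for them on startup.
public class PersistentTimers {

    private static final Path STORE = Paths.get("timers.txt"); // one epoch-millis value per line
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    public synchronized void schedule(long fireAtMillis, Runnable task) throws IOException {
        Files.write(STORE, List.of(Long.toString(fireAtMillis)),
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        arm(fireAtMillis, task);
    }

    // Call once at startup to re-arm everything that was pending before the restart.
    public synchronized void restore(Runnable task) throws IOException {
        if (!Files.exists(STORE)) return;
        for (String line : Files.readAllLines(STORE)) {
            arm(Long.parseLong(line.trim()), task);
        }
    }

    private void arm(long fireAtMillis, Runnable task) {
        long delay = Math.max(0, fireAtMillis - System.currentTimeMillis());
        scheduler.schedule(() -> {
            task.run();
            remove(fireAtMillis); // fired successfully, forget it
        }, delay, TimeUnit.MILLISECONDS);
    }

    private synchronized void remove(long fireAtMillis) {
        try {
            List<String> remaining = Files.readAllLines(STORE).stream()
                    .filter(line -> !line.trim().equals(Long.toString(fireAtMillis)))
                    .collect(Collectors.toList());
            Files.write(STORE, remaining);
        } catch (IOException e) {
            // in a real application, log and retry
        }
    }
}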
In case you were wondering, the way Node.js creates timers, it does not cost you anything to have a bunch of future timers configured. Node.js keeps the timers in a sorted linked list, and the event loop just checks the time for the next timer to fire, the one at the front of the sorted list (the rest of the timers are not looked at until they reach the front of the sorted list). This means the only time it costs anything to have lots of future timers is when a new timer is inserted into the sorted list; there is no regular cost in the event loop to having lots of pending timers present.
I would like to have a function called on a timer (every X minutes) but I want to ensure that only one instance of this function is running at a time. The work that is happening in the function shouldn't take long, but if for some reason it takes longer than the scheduled timer (X minutes) I don't want another instance to start and the processes to step on each other.
The simplest way that I can think of would be to set a maximum execution time on the function to also be X minutes. I would want to know how to accomplish this in both the App Service and Consumption plans, even if they are different approaches. I also want to be able to set this on an individual function level.
This type of feature is normally built into a FaaS environment, but I am having the hardest time google-binging it. Is this possible in the function.json? Or are there also different ways to make sure that this runs only once?
(PS. I know I could do this in my own code by wrapping the work in a thread with a timeout. But I was hoping for something more idiomatic.)
Timer functions already have this behavior - they take out a blob lease from the AzureWebJobsStorage storage account to ensure that only one instance is executing the timer function. Also, the timer will not execute while a previous scheduled execution is in flight.
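For reference, the timer schedule itself is declared per function in its function.json; a minimal binding looks roughly like this (the binding name and the every-five-minutes schedule are just examples), and the single-instance lease behaviour described above is handled by the runtime without any extra configuration:

{
  "bindings": [
    {
      "name": "myTimer",
      "type": "timerTrigger",
      "direction": "in",
      "schedule": "0 */5 * * * *"
    }
  ]
}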
Another roll-your-own possibility is to handle this with storage queues and visibility timeout: when the current queue message has finished processing, push a new queue message with a visibility timeout that matches the desired schedule.
I want to mention that the functionTimeout host.json property will add a timeout to all of your functions, but has the side effect that your function will fail with a timeout error and that function instance will restart, so I wouldn't rely on it in this case.
You can specify 'functionTimeout' property in host.json
https://github.com/Azure/azure-webjobs-sdk-script/wiki/host.json
// Value indicating the timeout duration for all functions.
// In Dynamic SKUs, the valid range is from 1 second to 10 minutes and the default value is 5 minutes.
// In Paid SKUs there is no limit and the default value is null (indicating no timeout).
"functionTimeout": "00:05:00"
There is a new Azure Functions plan called Premium (in public preview as of May 2019) that allows for unlimited execution duration:
https://learn.microsoft.com/en-us/azure/azure-functions/functions-scale
It will probably end up being the go-to plan for most enterprise scenarios.
I have a Windows Delphi application that receives events; on each of these events I'd like to run a task in parallel (so I can be ready for the following event). There are many ways to do this through the OmniThreadLibrary's abstractions.
The issue is that part of my code needs to be executed immediately after the reception of the event (basically to "decode" the event's params), and another part needs to be executed a few seconds later, only on the condition that nothing new has happened for the same context.
This behaviour corresponds to "only store this new value if it lasts longer than 3000 ms, otherwise just cancel it".
So what I need would be to "cancel" a running task (the one waiting 3000ms) if a new event arrives with the same context.
I cannot use a pipeline abstraction because when the first stage ends, it automatically fills the second-stage queue without asking me whether I want to cancel it or not.
Is that possible?
Thank you.
Sounds like you need a Dictionary<Context, Event> where the events also carry a "created" timestamp property, and a background thread which continuously checks whether there are event entries in this dictionary with an elapsed time > 3000 ms.
Incoming events update the timestamp and event params, until the thread detects an entry which matches the condition and then extracts the entry from the dictionary.
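The question is about Delphi/OmniThreadLibrary; purely to illustrate the dictionary-plus-sweeper idea described above in the language used elsewhere in this document, here is a compact Java sketch (all type, field, and method names are invented):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Debounce per context: only "commit" an event if no newer event for the same
// context arrived within the last 3000 ms.
public class EventDebouncer {

    private static final long QUIET_PERIOD_MS = 3000;

    private record PendingEvent(String params, long updatedAtMillis) { }

    private final Map<String, PendingEvent> pending = new ConcurrentHashMap<>();

    // Called on every incoming event: (re)store the params and reset the timestamp.
    public void onEvent(String context, String params) {
        pending.put(context, new PendingEvent(params, System.currentTimeMillis()));
    }

    // Background thread: sweep the map and commit entries that stayed quiet long enough.
    public void startSweeper() {
        Thread sweeper = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                long now = System.currentTimeMillis();
                pending.forEach((context, event) -> {
                    if (now - event.updatedAtMillis() > QUIET_PERIOD_MS
                            && pending.remove(context, event)) {
                        commit(context, event.params());
                    }
                });
                try {
                    Thread.sleep(250); // sweep granularity
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }, "event-sweeper");
        sweeper.setDaemon(true);
        sweeper.start();
    }

    private void commit(String context, String params) {
        // store the value / continue processing here
    }
}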
I have a requirement to schedule recurring tasks. My application is in MFC. For example, I may need to send a file to a particular location based on "From Date", "To Date", "Frequency", "Start Time", and "End Time". I thought of having a list, adding these parameters there, and creating a timer that elapses every second, in which I can check the list for the conditions and invoke the file transfer. But the problem is that if the list is huge, I may not be able to do it. Is there any other way to achieve this?
Create a priority queue of scheduled events, and for each "schedule", fill the queue with only the NEXT event for that "schedule". Wait only for the first EVENT in the priority queue, and once it has fired, look up the schedule item for that event and let it fill its next event into the queue. (A compact sketch of this idea appears after the edit below.)
Please ask if anything above needs more clarification.
EDIT:
You'll trigger your event on the particular date and time depending on what you are most comfortable with. Since you'll have only ONE event to wait for (you can copy it from the HEAD of the queue), you have multiple options, for example:
SetTimer() with one-second intervals, then compare the current time with the event time.
SetTimer() for the duration from the current time to the event time.
Start another thread with WaitForSingleObject() inside of it, with the delay computed as event_time - now. This will be the most difficult, since you'll have to be careful when calling something on the main thread.
... and so on
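The question is about MFC/C++, but purely to illustrate the "queue holds only the next occurrence of each schedule" idea from above, here is a compact sketch in the language used elsewhere in this document, Java (class and method names are invented; an MFC app would wait via SetTimer() or a worker thread instead of the blocking loop shown here):

import java.time.Instant;
import java.util.Comparator;
import java.util.PriorityQueue;

// Each schedule contributes only its NEXT occurrence to the queue. When that
// occurrence fires, the schedule computes its following occurrence and re-inserts it,
// so the queue never holds more than one entry per schedule.
public class Scheduler {

    interface Schedule {
        Instant nextAfter(Instant now);   // next occurrence strictly after 'now', or null when done
        void run();                       // the work to perform (e.g. the file transfer)
    }

    private record Occurrence(Instant fireAt, Schedule schedule) { }

    private final PriorityQueue<Occurrence> queue =
            new PriorityQueue<>(Comparator.comparing(Occurrence::fireAt));

    public void add(Schedule schedule) {
        queue.add(new Occurrence(schedule.nextAfter(Instant.now()), schedule));
    }

    public void runLoop() throws InterruptedException {
        while (!queue.isEmpty()) {
            Occurrence head = queue.peek();                 // only the earliest event matters
            long sleepMs = head.fireAt().toEpochMilli() - System.currentTimeMillis();
            if (sleepMs > 0) {
                Thread.sleep(sleepMs);
                continue;                                   // loop around and re-check the head
            }
            queue.poll();
            head.schedule().run();
            Instant next = head.schedule().nextAfter(Instant.now());
            if (next != null) {
                queue.add(new Occurrence(next, head.schedule())); // re-arm with the next occurrence
            }
        }
    }
}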
I'm currently working on a project which uses some TimerJobs. One of the jobs should check the MySites of some special users about every 2 minutes, so I create an SPMinuteSchedule object and set the BeginSecond property to 0 and the Interval property to 2. I think the use of both properties seems obvious, but I'm not really sure how to interpret the EndSecond property.
If EndSecond is set to 30 and BeginSecond to 0, does it mean that the Timer Service will start the job somewhere within these 30 seconds and the job takes as long as it needs to execute its code? Or does it mean that the job can only run for 30 seconds? What happens if the code executed within the Execute() method needs more time to complete?
Whatever the answer might be, the property's name "EndSecond" was not chosen very well.
Refer to this post for more details; to reiterate, below is the info extracted from the post.
Notice how the schedule is set for the timer job. The SPMinuteSchedule.BeginSecond property and the SPMinuteSchedule.EndSecond property specify a start window of execution. The SharePoint Timer service starts the timer job at a random time between the BeginSecond property and the EndSecond property. This aspect of the timer service is designed for expensive jobs that execute on all servers in the farm. If all the jobs started at the same time, it could place an unwanted heavy load on the farm. The randomization helps spread the load out across the farm.