Elixir: Scheduled jobs not running Mix task after the first call - cron

I'm using Quantum to handle cron jobs. The setup is the following:
application.ex
def start(_type, _args) do
  ...
  children = [
    ...
    worker(MyApp.Scheduler, [])
  ]
  opts = [strategy: :one_for_one, name: MyApp.Supervisor]
  Supervisor.start_link(children, opts)
end
config.exs
config :my_app, MyApp.Scheduler,
  jobs: [
    {"*/5 * * * *", fn -> Mix.Task.run "first_mix_task" end},
    {"*/5 * * * *", fn -> Mix.Task.run "second_mix_task" end},
    {"*/5 * * * *", fn -> Mix.Task.run "third_mix_task" end},
    {"*/5 * * * *", fn -> Mix.Task.run "fourth_mix_task" end}
  ]
The problem is that, for some reason, the Mix tasks run only the first time after the cron jobs are added. After that, although I can see in the logs that the cron jobs are started and ended (according to Quantum), the Mix tasks are never triggered.
I'm not including the Mix tasks here because they work fine on the first run and also when called from the console, so I think the issue has to be in the configuration shown above. But if you have a good reason to look there, just let me know.

Mix.Task.run/1 only executes a task the first time it's called, unless it is re-enabled.
Runs a task with the given args.
If the task was not yet invoked, it runs the task and returns the result.
If there is an alias with the same name, the alias will be invoked instead of the original task.
If the task or alias were already invoked, it does not run them again and simply aborts with :noop.
https://hexdocs.pm/mix/Mix.Task.html#run/2
You can use Mix.Task.rerun/1 instead of Mix.Task.run/1 to re-enable and invoke the task again:
...
{"*/5 * * * *", fn -> Mix.Task.rerun "first_mix_task" end},
...
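Equivalently, here is a minimal sketch of the same fix done by hand, assuming Mix.Task.reenable/1 is available in your Mix version (Mix.Task.rerun/1 is essentially reenable followed by run):
{"*/5 * * * *", fn ->
  Mix.Task.reenable("first_mix_task")  # clear the "already invoked" flag
  Mix.Task.run("first_mix_task")       # runs again instead of returning :noop
end},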

Related

why schedule() does not lead to deadlock while using the default prepare_arch_switch()

In Linux 2.6.11.12, before the schedule() function selects the "next" task to run, it locks the runqueue:
spin_lock_irq(&rq->lock);
and then, before calling context_switch() to perform the context switch, it calls prepare_arch_switch(), which is a no-op by default:
/*
* Default context-switch locking:
*/
#ifndef prepare_arch_switch
# define prepare_arch_switch(rq, next) do { } while (0)
# define finish_arch_switch(rq, next) spin_unlock_irq(&(rq)->lock)
# define task_running(rq, p) ((rq)->curr == (p))
#endif
That is, it holds rq->lock until switch_to() returns, and then the macro finish_arch_switch() actually releases the lock.
Suppose there are tasks A, B, and C, and A calls schedule() and switches to B (at this point rq->lock is held). Sooner or later, B calls schedule(). How can B acquire rq->lock, given that it was locked by A?
There are also arch-dependent implementations, such as this one for IA-64:
/*
* On IA-64, we don't want to hold the runqueue's lock during the low-level context-switch,
* because that could cause a deadlock. Here is an example by Erich Focht:
*
* Example:
* CPU#0:
* schedule()
* -> spin_lock_irq(&rq->lock)
* -> context_switch()
* -> wrap_mmu_context()
* -> read_lock(&tasklist_lock)
*
* CPU#1:
* sys_wait4() or release_task() or forget_original_parent()
* -> write_lock(&tasklist_lock)
* -> do_notify_parent()
* -> wake_up_parent()
* -> try_to_wake_up()
* -> spin_lock_irq(&parent_rq->lock)
*
* If the parent's rq happens to be on CPU#0, we'll wait for the rq->lock
* of that CPU which will not be released, because there we wait for the
* tasklist_lock to become available.
*/
#define prepare_arch_switch(rq, next) \
do { \
spin_lock(&(next)->switch_lock); \
spin_unlock(&(rq)->lock); \
} while (0)
#define finish_arch_switch(rq, prev) spin_unlock_irq(&(prev)->switch_lock)
In this case, I'm quite sure this version does the right thing, since it unlocks rq->lock before calling context_switch().
But what about the default implementation? How can it do the right thing?
I found a comment in context_switch() of Linux 2.6.32.68 that tells the story behind the code:
/*
* Since the runqueue lock will be released by the next
* task (which is an invalid locking op but in the case
* of the scheduler it's an obvious special-case), so we
* do an early lockdep release here:
*/
So we really do switch to another task while the lock is held; the next task will unlock it. And if the next task is newly created, ret_from_fork() will eventually call finish_task_switch(), which unlocks rq->lock.
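For intuition, here is a minimal user-space sketch in C (not kernel code; all names are made up) of the same hand-off pattern: the lock is taken by one thread and released by the thread it "switches" to. This only works because a raw spinlock, unlike a mutex, does not track an owner, which is exactly the property rq->lock relies on here.
/* Build with: cc -std=c11 -pthread handoff.c */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_flag rq_lock = ATOMIC_FLAG_INIT;   /* stand-in for rq->lock    */
static atomic_int  switched = 0;                 /* stand-in for switch_to() */

static void spin_lock(atomic_flag *l)   { while (atomic_flag_test_and_set(l)) ; }
static void spin_unlock(atomic_flag *l) { atomic_flag_clear(l); }

static void *task_a(void *arg) {
    (void)arg;
    spin_lock(&rq_lock);            /* schedule(): spin_lock_irq(&rq->lock)    */
    puts("A: holding the lock, 'switching' to B");
    atomic_store(&switched, 1);     /* switch_to(prev = A, next = B)           */
    return NULL;                    /* A never unlocks: that is now B's job    */
}

static void *task_b(void *arg) {
    (void)arg;
    while (!atomic_load(&switched)) ;   /* wait until we are the "next" task   */
    puts("B: releasing the lock A took (finish_arch_switch)");
    spin_unlock(&rq_lock);
    spin_lock(&rq_lock);            /* B is now free to call schedule() itself */
    puts("B: re-acquired the lock for its own schedule()");
    spin_unlock(&rq_lock);
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, task_a, NULL);
    pthread_create(&b, NULL, task_b, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}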

How to run cron scheduler only once?

My scheduler looks like this:
#Scheduled(cron = "* 30 11 * * *")
That's nice: every day at 11:30 it runs, and it works great. But if the scheduled work finishes at 11:30:10, the scheduler runs it again. How can I add seconds here? Is this correct?
#Scheduled(cron = "0 30 11 * * *")
#Scheduled(cron = "0 30 11 * * *")

ignore incoming logstash entries that are older than a given date

I want Logstash, when it's processing input entries, to simply drop entries that are older than N days.
I assume I'll use the date filter and obviously drop, but I don't know how to connect them.
The only way I know to do date-level comparisons is via Ruby code. You need the date filter to parse the timestamp first (that's its own issue).
Once you have parsed the date into a field (e.g., @timestamp), you can use it to decide whether to drop the event:
5.0:
ruby {
  code => "event.cancel if (Time.now.to_f - event.get('@timestamp').to_f) > (60 * 60 * 24 * 5)"
}
Pre-5.x:
ruby {
  code => "event.cancel if (Time.now.to_f - event['@timestamp'].to_f) > (60 * 60 * 24 * 5)"
}
In this case, 5 is N.
Also, it's worth pointing out that this is relative to the system clock of the machine where Logstash happens to be running. If that clock is inaccurate, it will skew the date math. Similarly, if the source machine's clock is wrong, that can be a problem too.
Drawing on Alain's good point, you could use this to store the lag time, in addition to just dropping based on it.
5.0:
ruby {
  code => "event.set('lag_seconds', Time.now.to_f - event.get('@timestamp').to_f)"
}
# 5 represents the number of days to allow
if [lag_seconds] > (60 * 60 * 24 * 5) {
  drop { }
}
Pre-5.x:
ruby {
  code => "event['lag_seconds'] = Time.now.to_f - event['@timestamp'].to_f"
}
# 5 represents the number of days to allow
if [lag_seconds] > (60 * 60 * 24 * 5) {
  drop { }
}
Using this approach, you would then be indexing lag_seconds, which is a fractional amount, thereby allowing you to analyze lag in your index if this goes into ES or some other data store.
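Since the above assumes @timestamp has already been parsed, here is a minimal filter sketch using the 5.x event API that ties the two steps together (the source field name log_timestamp and the ISO8601 format are assumptions; adjust them to your input):
filter {
  # Parse the source timestamp into @timestamp (field name and format are assumptions)
  date {
    match => ["log_timestamp", "ISO8601"]
  }
  # Drop anything older than 5 days (N = 5)
  ruby {
    code => "event.cancel if (Time.now.to_f - event.get('@timestamp').to_f) > (60 * 60 * 24 * 5)"
  }
}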

Cron expression with initial delay - Quartz

I just can't figure out how to configure a cron job in Quartz with an initial delay.
I need something that runs every hour with an initial delay of 10 minutes.
"* * 0/1 * * ?"
Here's a late answer; hopefully it helps others. I solved the issue by having two scheduled methods in my service class:
@EnableScheduling
public class DeviceService {

    @Scheduled(initialDelayString = "${devices.update.initial}", fixedDelay = 2592000000L)
    public void initialUpdateDevices() {
        updateDevices();
    }

    @Scheduled(cron = "${devices.update.cron}")
    public void cronUpdateDevices() {
        updateDevices();
    }

    private void updateDevices() {
        ...
    }
}
The initial delay and the cron expression are set in application.properties. The fixedDelay is there because Spring doesn't allow initialDelay alone. I set it to 2592000000ms, which is 30 days. In our application, the potential extra update doesn't do any harm.
In application.properties:
devices.update.initial = 600000
devices.update.cron = 0 30 1 * * *
This runs initially after 10 minutes (600000 ms) and then every night at 01:30.
In application-test.properties for unit testing:
devices.update.initial = 86400000
devices.update.cron = 0 30 1 24 12 *
None of our unit tests takes a day to run, so 86400000 milliseconds (24 hours) is a safe bet. The cron expression "0 30 1 24 12 *" is set to the night of Christmas Eve, when people should be dreaming of nice things.

Set up a cron expression from 7:30 to 18:30 every 10 minutes with one bound included

Is it possible to write a single expression with this behavior:
starting at 7:30, fire every 10 minutes until 18:30
Result: 7:40, 7:50, 8:00, 8:10, ..., 18:10, 18:20, 18:30
Three crontab entries:
0,10,20,30,40,50 8-17 * * * command
30,40,50 7 * * * command
0,10,20,30 18 * * * command
