Generic Setup of Quartz.NET tasks - cron

The regular setup of Quartz cron-based tasks looks like this:
IJobDetail firstJob = JobBuilder.Create<FirstJob>()
    .WithIdentity("firstJob")
    .Build();

ITrigger firstTrigger = TriggerBuilder.Create()
    .WithIdentity("firstTrigger")
    .StartNow()
    .WithCronSchedule("0 * 8-22 * * ?")
    .Build();
FirstJob is a specific class that implements the IJob interface from Quartz. In my case I may have multiple job classes implementing that interface, each doing a particular type of work that needs to be scheduled.
Therefore it seems I'm forced to set up as many job detail instances as I have job classes, i.e. repeat the code. Is there any other way to simplify and shorten it, and have a collection of job detail objects without explicitly passing the job class names? Say, all my job classes would implement a CustomInterface : IJob, and I would rather use the CustomInterface name somewhere when setting up the job details.

Resolved. The concrete job type can be resolved at runtime from the job key name instead of being hard-coded as a generic type parameter:
IJobDetail job = JobBuilder.Create(Type.GetType(jobDetail.JobKey.Name))
    .Build();

Related

Can we set task wise parameters using Databricks Jobs API "run-now"

I have a job with multiple tasks like Task1 -> Task2. I am trying to call the job using the "run-now" API. The task details are below:
Task1 - executes a notebook with some input parameters
Task2 - executes a notebook with some input parameters
So, how can I provide parameters to the Jobs API using the "run-now" command for Task1 and Task2?
I have a parameter "lib" which needs to have the values 'pandas' and 'spark' task-wise.
I know that we can give unique parameter names like Task1_lib, Task2_lib and read them that way.
Current way:
json = {"job_id": 3234234, "notebook_params": {"Task1_lib": "a", "Task2_lib": "b"}}
Is there a way to send task-wise parameters?
It's not supported right now - parameters are defined at the job level. You can ask your Databricks representative (if you have one) to communicate this ask to the product team that works on Databricks Workflows.
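The prefixed-name workaround mentioned in the question can be sketched as a small helper that builds the "run-now" payload. The job ID and task names below are just the placeholders from the thread, not real values:

```javascript
// Sketch of the per-task-prefix workaround: since "run-now" only accepts
// job-level notebook_params, the task name is encoded into each parameter
// key (Task1_lib, Task2_lib, ...), and each notebook reads its own key.
function buildRunNowPayload(jobId, libsByTask) {
  const notebookParams = {};
  for (const [task, lib] of Object.entries(libsByTask)) {
    notebookParams[`${task}_lib`] = lib;
  }
  return { job_id: jobId, notebook_params: notebookParams };
}

// Builds the same shape of JSON body shown in the question above.
const payload = buildRunNowPayload(3234234, { Task1: 'pandas', Task2: 'spark' });
```

The resulting object would then be POSTed as the JSON body of a "run-now" request; inside each notebook, Task1 reads only "Task1_lib" and Task2 only "Task2_lib".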

Bull MQ repeatable job not triggering

This question is a continuation of this thread: Repeatable jobs not getting triggered at given cron timing in Bull
I am also facing the same problem. How should I specify the timezone? I tried to specify it as
repeat: { cron: '* 7 14 * * *', tz: 'Europe/Berlin' }
meaning: trigger the job at 14:07 German time. The job is listed in the queue, but it is never triggered.
I also tried:
repeat: {
    cron: '* 50 15 * * *',
    offset: datetime.getTimezoneOffset(),
    tz: 'Europe/Berlin'
}
I finally figured out the solution.
One thing to note is that I had not initialized a QueueScheduler instance. Of course the timezone also plays a crucial role, but without a QueueScheduler instance (which has the same name as the Queue), the jobs don't get triggered from the queue. The QueueScheduler instance acts as a bookkeeper. Also take care of one more important parameter, "limit": if you don't set the limit to 1, then a job scheduled for a particular time will get triggered an unlimited number of times.
For example: To run a job at german time 22:30 every day the configuration would look like:
repeat: {
    cron: '0 30 22 * * *',
    tz: 'Europe/Berlin',
    limit: 1
}
Reference: https://docs.bullmq.io/guide/queuescheduler In the above link, the documentation clearly mentions that the QueueScheduler instance does the bookkeeping of the jobs.
In this link - https://docs.bullmq.io/guide/jobs/repeatable - the documentation specifically warns us to make sure we instantiate a QueueScheduler instance.
You need to manage repeatable jobs with the help of a QueueScheduler. QueueScheduler takes the queue name as its first parameter and the connection as its second. The code will be as follows:
const queueScheduler = new QueueScheduler(yourQueue.name, { connection });
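Putting the pieces above together, a minimal sketch might look like the following. The helper only builds the repeat options; the queue name, Redis connection, and times are illustrative placeholders, and an actual run needs the bullmq package and a running Redis:

```javascript
// Builds the repeat options discussed above for a daily job at a given
// local time in a target timezone. The limit: 1 setting follows the
// answer above; all names here are illustrative, not from a real codebase.
function dailyRepeatOptions(hour, minute, tz) {
  return {
    // 6-field cron: second minute hour day-of-month month day-of-week
    cron: `0 ${minute} ${hour} * * *`,
    tz,
    limit: 1,
  };
}

// Usage sketch (requires bullmq and a running Redis):
// const { Queue, QueueScheduler } = require('bullmq');
// const connection = { host: 'localhost', port: 6379 };
// const queue = new Queue('mail', { connection });
// // Without this QueueScheduler the repeatable job is never promoted:
// const scheduler = new QueueScheduler(queue.name, { connection });
// await queue.add('daily-mail', {}, { repeat: dailyRepeatOptions(22, 30, 'Europe/Berlin') });
```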

Scheduling for Spark jobs on Bluemix

I'm trying to run my Spark application on Bluemix by schedule. For now I'm using scheduling of spark-submit.sh script locally on my machine. But I'd like to use Bluemix for this purpose. Is there any way to set scheduling directly inside Bluemix infrastructure for running Spark notebooks or Spark applications?
The Bluemix OpenWhisk offering provides an easy way to schedule actions to run periodically, similar to cron jobs.
Overview of the OpenWhisk-based solution
OpenWhisk provides a programming model based on actions, triggers, and rules. For this use case, you would:
Create an action that kicks off your spark job.
Use the /whisk.system/alarms package to arrange for triggers to arrive periodically according to your schedule.
Create a rule that declares that your action should fire whenever a trigger event occurs.
Your action can be coded in JavaScript if it's easy to kick off your job from a JavaScript function. If not, and you'd like your action to be implemented as a shell script, you can use whisk Docker actions to manage your shell script as an action.
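As a sketch of the first step, a JavaScript action's main function might look like the following. The submission fields are hypothetical placeholders, since the actual call to kick off the Spark job depends on your Bluemix service:

```javascript
// Hedged sketch of an OpenWhisk action that would kick off a Spark job.
// OpenWhisk invokes main() with the trigger's payload as `params`.
// The fields below (file, className) are illustrative placeholders; a
// real action would POST this submission to the Spark service's API.
function main(params) {
  const submission = {
    file: params.file || 'my-spark-app.jar',            // placeholder artifact
    className: params.className || 'com.example.Main',  // placeholder entry point
  };
  // A real action would perform the HTTP submission call here and return
  // its response; this sketch just echoes the submission it would make.
  return { submitted: submission };
}
```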
Using the whisk.system/alarms package to generate events on a schedule.
This page in the whisk docs includes a detailed description of how to accomplish this. Briefly:
The /whisk.system/alarms/alarm feed configures the Alarm service to fire a trigger event at a specified frequency. The parameters are as follows:
cron: A string, based on the Unix crontab syntax, that indicates when to fire the trigger in Coordinated Universal Time (UTC). The string is a sequence of six fields separated by spaces: X X X X X X. For more details on using cron syntax, see: https://github.com/ncb000gt/node-cron. Here are some examples of the frequency indicated by the string:
* * * * * *: every second.
0 * * * * *: top of every minute.
0 0 * * * *: top of every hour.
0 0 9 8 * *: at 9:00:00AM (UTC) on the eighth day of every month
trigger_payload: The value of this parameter becomes the content of the trigger every time the trigger is fired.
maxTriggers: Stop firing triggers when this limit is reached. Defaults to 1000.
Here is an example of creating a trigger that will be fired once every 20 seconds with name and place values in the trigger event.
$ wsk trigger create periodic --feed /whisk.system/alarms/alarm --param cron '*/20 * * * * *' --param trigger_payload '{"name":"Odin","place":"Asgard"}'
Each generated event will include as parameters the properties specified in the trigger_payload value. In this case, each trigger event will have parameters name=Odin and place=Asgard.

How do I run the camel scheduled jobs with quartz

I'm using the Camel framework to declare some scheduled jobs with Quartz. I want to execute my class every two seconds.
So, I have specified this:
quartz2://quartzScheduler/Processor?cron=0/2+*+*+*+*+?
But it's not executing.
The first six fields in a Quartz cron expression are not optional, and Quartz requires a '?' in either the day-of-month or day-of-week position (it does not accept '*' in both). Note also that spaces in the endpoint URI must be encoded as '+', so the cron part should read 0/2 * * * * ?, i.e.:
quartz2://quartzScheduler/Processor?cron=0/2+*+*+*+*+?

Cron syntax with Java EE 5?

Timer tasks in Java EE are not very comfortable. Is there any utility to configure timers with a cron syntax like "0 20 20 * * *"?
I wonder if it would be a good idea to use Quartz inside a (clustered) Java EE application. According to http://www.prozesse-und-systeme.de/serverClustering.html (German page) there are limits with Quartz and Java EE clustering:
JDBC must be used as job store for Quartz
Only cluster associated Quartz instances are allowed to use this JDBC job store
All cluster nodes must be synchronized to the split second
All cluster nodes must use the same quartz.properties file
I would prefer an easier way to configure the timer service, instead of a scheduler that is not managed by Java EE.
Quartz definitely supports cron-like syntax (with the CronTrigger), but your requirements are not clear. Also maybe have a look at Jcrontab or cron4j.
As a side note, the ability to declaratively create cron-like schedules to trigger EJB methods is one of the most important enhancements of the Timer Service in EJB 3.1 (using the @Schedule annotation). Below is an example taken from New Features in EJB 3.1:
@Stateless
public class NewsLetterGeneratorBean implements NewsLetterGenerator {

    @Schedule(second = "0", minute = "0", hour = "0",
              dayOfMonth = "1", month = "*", year = "*")
    public void generateMonthlyNewsLetter() {
        // Code to generate the monthly newsletter goes here...
    }
}
