After scheduling, what happens internally in an ODI agent so that it recognises a job has to start at runtime?

May I know the answer to this? I am a little curious about what happens internally in an ODI agent after scheduling, such that it recognises that the job has to start at runtime.

Related

Is there any Kotlin coroutine scheduler?

I'm building a system in Kotlin that needs to schedule a lot of jobs at the same time (sometimes even a few thousand jobs). Most of the jobs are not complex; they simply make a few HTTP requests with different libraries.
I'm currently working with the Quartz scheduler. The problem is that it sometimes misfires a job, and that is something that can be critical to me. I know Quartz has different settings you can apply to handle misfires, but I didn't find anything suitable.
I know that Kotlin has coroutines, which can be considered lightweight threads. Is there any scheduler like Quartz for Kotlin that executes the jobs as coroutines instead of threads, with the same (or at least most of the same) features? I'm asking because I believe such a scheduler could increase my system's performance.
Follow-up question: if such a scheduler exists, will it be safe to use it even if I don't know the implementation of the libraries I'm using inside the job (for example, making sure there are no Thread.sleep() calls)?
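
I can't point to a drop-in, Quartz-equivalent coroutine scheduler, but as a minimal sketch of why coroutines suit this workload (using kotlinx.coroutines; doHttpCall is a hypothetical placeholder): each pending job is a suspended coroutine rather than a thread, so thousands are cheap, and libraries whose internals you don't know can be confined to Dispatchers.IO so that a stray Thread.sleep() only ties up an IO thread, not the scheduler itself.

import kotlinx.coroutines.*

// Minimal sketch: schedule many lightweight "jobs" as coroutines.
// delay() suspends without occupying a thread, so thousands of pending
// jobs are cheap compared to thread-per-job scheduling.
fun main() = runBlocking {
    val jobs = (1..1_000).map { id ->
        launch(Dispatchers.Default) {
            delay(2_000L)                 // the "fire time": suspend until due
            withContext(Dispatchers.IO) { // confine possibly-blocking HTTP libraries to IO threads
                doHttpCall(id)
            }
        }
    }
    jobs.joinAll() // wait for every job to finish
}

// Hypothetical stand-in for "a few HTTP requests with different libraries".
fun doHttpCall(id: Int) {
    println("job $id done on ${Thread.currentThread().name}")
}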

Autosys dependencies on mainframe job

We have an Autosys job (job_a) that needs to depend on two mainframe jobs (job_m1, job_m2). There is also the condition that the new job should start only after both mainframe jobs complete. How can we do that?
I believe the vendor (CA) has a job management agent that can run on z/OS and report job status directly to Autosys. When you do this, your mainframe jobs become just like any other job on any other platform.
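Once the mainframe jobs are visible to Autosys that way, the dependency itself should just be an ordinary JIL starting condition on the new job, along the lines of condition: s(job_m1) & s(job_m2), where s() means the named job completed successfully.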
If this doesn't meet your needs, I'd definitely make it the vendor's problem - your company is paying them a considerable sum for ongoing support and maintenance, and answering this type of question should certainly be something they can do for you.

Background job with a thread/process?

Technology used: EJB 3.1, Java EE 6, GlassFish 3.1.
I need to implement a background job that executes every 2 minutes to check the status of a list of servers. I have already implemented a timer, and my function updateStatus gets called every two minutes.
The problem is that I want to use a thread to do the update because, in case the timer is triggered again while my previous call is not done, I would like to kill the old thread and start a new one.
I understand I cannot use threads with EJB 3.1, so how should I do that? I don't really want to introduce JMS either.
You should simply use an EJB Timer for this.
When the job finishes, simply have the job reschedule itself. If you don't want the job to take more than some amount of time, then monitor the system time in the process, and when it goes on too long, stop the job and reschedule it.
The other thing you need to manage is the fact that if the job is running when the server goes down, it will restart automatically when the server comes back up. You would be wise to have a startup process that scans the current jobs that exist in the Timer system, and if yours is not there, submit a new one. After that the job should take care of itself until your next deploy (which erases existing Timer jobs).
The only other issue is that if the job depends on some initialization code that runs on server startup, it is quite possible that the job will start BEFORE that happens while the server is firing up. So you may need to manage that startup race condition (or simply ensure that the job fails fast and resubmits itself).
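
A minimal sketch of that self-rescheduling pattern (in Kotlin against the standard javax.ejb TimerService; the StatusChecker bean and checkServers() are illustrative, not from the question):

import javax.annotation.PostConstruct
import javax.annotation.Resource
import javax.ejb.Singleton
import javax.ejb.Startup
import javax.ejb.Timeout
import javax.ejb.Timer
import javax.ejb.TimerConfig
import javax.ejb.TimerService

// Self-rescheduling single-action timer: the next run is only armed
// once the current run has finished, so runs can never overlap.
@Singleton
@Startup
class StatusChecker {

    @Resource
    private lateinit var timerService: TimerService

    @PostConstruct
    fun init() {
        // Startup scan: if no timer for this bean exists, submit one.
        if (timerService.timers.isEmpty()) schedule()
    }

    @Timeout
    fun updateStatus(timer: Timer) {
        try {
            checkServers() // the actual work; watch the elapsed time here if runs must be bounded
        } finally {
            schedule()     // reschedule only after this run is done
        }
    }

    private fun schedule() {
        // Fire once, two minutes from now; non-persistent, so a redeploy starts clean.
        timerService.createSingleActionTimer(2 * 60 * 1000L, TimerConfig(null, false))
    }

    private fun checkServers() { /* placeholder for the real server-status check */ }
}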

How to check whether a Timer Job has run

Is it possible to check whether a SharePoint (actually WSS 3.0) timer job has run when it was scheduled to?
The reason is that we have a few daily custom jobs and want to make sure they always run, even if the server has been down during the jobs' time slot, so I'd like to check them and then run them.
And is it possible to add a setting when creating them, similar to the one for standard Windows scheduled tasks: "Run task as soon as possible after a scheduled start is missed"?
Check it on the job status page, and then you can look at the logs in the 12 hive folder for further details:
Central Administration / Operations / Monitoring / Timer Jobs / Check Jobs Status
As far as restarting a missed job is concerned, that is not possible with OOTB features. It makes sense as well: there are a lot of jobs which are executed at particular intervals, and if everything started at the same time the load on the server would be very high.
You can look at the LastRunTime property of an SPJobDefinition to see when the job was actually executed. As far as I can see in Reflector, the value of this property is loaded from the database and hence it should reflect the time it was actually executed.

Eclipse RCP: Only one Job runs at a time?

The Jobs API in Eclipse RCP apparently works much differently than I expected. I thought that creating and scheduling multiple Jobs would actually cause multiple worker threads to be created, executing the Jobs in parallel unless there was an ISchedulingRule conflict.
I went back and read the documentation more closely, and also discovered this comment in the JobManager class:
/**
* Returns a running or blocked job whose scheduling rule conflicts with the
* scheduling rule of the given waiting job. Returns null if there are no
* conflicting jobs. A job can only run if there are no running jobs and no blocked
* jobs whose scheduling rule conflicts with its rule.
*/
Now it looks to me like the Job manager will only ever attempt to use one background worker thread. Am I completely wrong about this? If I'm right:
1. What is the point of scheduling rules and locks? If there is only one worker thread, Jobs can never preempt each other. Wouldn't these only ever be used in case a Job's sleep() method is called (e.g. sleeping while holding a Lock)?
2. Does any part of the platform allow two Jobs to actually run concurrently, on multiple worker threads, thus making the above features useful somehow?
What am I missing here?
Take a look at the run method in the documentation, specifically this part:
Jobs can optionally finish their execution asynchronously (in another thread) by returning a result status of ASYNC_FINISH. Jobs that finish asynchronously must specify the execution thread by calling setThread, and must indicate when they are finished by calling the method done.
ASYNC_FINISH there looks interesting.
AFAIK, creating and scheduling multiple Jobs does actually cause multiple worker threads to be created, and the Jobs do execute in parallel.
However, if you specify an optional scheduling rule for your job (using the setRule() method) and that rule conflicts with another job's scheduling rule, then those two jobs can't run simultaneously.
This Eclipse Corner article provides a good description as well as a few code samples for the Eclipse Jobs API.
The IJobManager API is only needed for advanced job manipulation, e.g. when you need to use locks, synchronize between several jobs, terminate jobs, etc.
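To make the rule behaviour concrete, here is a minimal sketch (in Kotlin, assuming the org.eclipse.core.jobs bundle is on the classpath; all names are illustrative): two jobs sharing a conflicting rule run one after the other, while a rule-less job runs in parallel with them.

import org.eclipse.core.runtime.IProgressMonitor
import org.eclipse.core.runtime.IStatus
import org.eclipse.core.runtime.Status
import org.eclipse.core.runtime.jobs.ISchedulingRule
import org.eclipse.core.runtime.jobs.Job

// A rule that conflicts only with itself: any two jobs holding it are serialized.
object SerialRule : ISchedulingRule {
    override fun contains(rule: ISchedulingRule) = rule === this
    override fun isConflicting(rule: ISchedulingRule) = rule === this
}

fun makeJob(name: String, rule: ISchedulingRule?): Job {
    val job = object : Job(name) {
        override fun run(monitor: IProgressMonitor): IStatus {
            Thread.sleep(1000) // simulate work; each running Job has its own worker thread
            println("$name finished on ${Thread.currentThread().name}")
            return Status.OK_STATUS
        }
    }
    job.rule = rule // conflicting rules serialize jobs; null means no constraint
    return job
}

fun main() {
    makeJob("serial-1", SerialRule).schedule()
    makeJob("serial-2", SerialRule).schedule() // waits until serial-1 is done
    makeJob("independent", null).schedule()    // free to run alongside serial-1
    Job.getJobManager().join(null, null)       // null family: wait for all jobs
}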
Note: Eclipse 4.5M4 now (Q4 2014) includes support for Job Groups with throttling.
See bug 432049:
Eclipse provides a simple Jobs API to perform different tasks in parallel and in asynchronous fashion. One limitation of the Eclipse Jobs is that there is no easy way to limit the number of worker threads being used to execute jobs.
This may lead to a thread pool explosion when many jobs are scheduled in quick succession. Due to that it’s easy to use Jobs to perform different unrelated tasks in parallel, but hard to implement thousands of Jobs co-operating to complete a single large task.
Eclipse currently supports the concept of Job Families, which provides one way of grouping with support for join, cancel, sleep, and wakeup operations on the whole family.
To address all these issues, we would like to propose a simple way to group a set of Eclipse Jobs that are responsible for pieces of the same large task.
The API would support throttling, join, cancel, combined progress and error reporting for all of the jobs in the group and the job grouping functionality can be used to rewrite performance critical algorithms to use parallel execution of cooperating jobs.
You can see the implementation in commit 26471fa.
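
As a rough sketch of what the JobGroup API looks like in use (assuming Eclipse 4.5+, i.e. org.eclipse.core.jobs 3.7; names are illustrative):

import org.eclipse.core.runtime.IProgressMonitor
import org.eclipse.core.runtime.IStatus
import org.eclipse.core.runtime.Status
import org.eclipse.core.runtime.jobs.Job
import org.eclipse.core.runtime.jobs.JobGroup

// One large task split into 100 cooperating jobs, throttled to 4 worker threads.
fun main() {
    val group = JobGroup("bulk-task", 4, 100) // name, maxThreads, seedJobsCount
    for (i in 1..100) {
        val job = object : Job("piece-$i") {
            override fun run(monitor: IProgressMonitor): IStatus {
                println("piece $i ran on ${Thread.currentThread().name}")
                return Status.OK_STATUS
            }
        }
        job.jobGroup = group // membership is what enables the throttling
        job.schedule()
    }
    group.join(0, null) // combined join over the whole group; 0 = no timeout
}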
