How to execute a distributed cron task with Hazelcast IScheduledExecutorService?

I have a Hazelcast cluster and I'm currently using Spring's TaskScheduler in my project to execute cron tasks.
I would like to use Hazelcast's IScheduledExecutorService to schedule a cron task on all members, and to schedule multiple cron tasks that are distributed equally across the cluster, but I can't find an appropriate method.
I can't find a method in IScheduledExecutorService that takes a cron trigger as a parameter, only a TimeUnit. Do you know any solutions to this?

IScheduledExecutorService is not a full cron scheduler: it only allows scheduling a single future execution and/or a fixed-rate execution, but not periodic executions at fixed times, dates, or cron expressions. So if you are happy to do something like:
executorService.schedule(new SomeTask(), 10, TimeUnit.SECONDS)
where SomeTask will run after 10 seconds, then you can choose any of the following APIs to submit your task:
scheduleOnMember: On a specific cluster member.
scheduleOnKeyOwner: On the partition owning that key.
scheduleOnAllMembers: On all cluster members.
scheduleOnMembers: On all given members.
Check the reference manual for details: https://docs.hazelcast.org/docs/4.0.1/manual/html-single/index.html#scheduled-executor-service
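For illustration, here is a minimal sketch against the Hazelcast 4.x Java API. The executor name "myScheduler" and the delays are arbitrary; the task must be serializable because it is sent to other members:

import java.io.Serializable;
import java.util.concurrent.TimeUnit;

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.scheduledexecutor.IScheduledExecutorService;

public class SchedulerExample {

    // Tasks are shipped to other members, so they must be serializable.
    static class SomeTask implements Runnable, Serializable {
        @Override
        public void run() {
            System.out.println("running on this member");
        }
    }

    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IScheduledExecutorService executor = hz.getScheduledExecutorService("myScheduler");

        // Single future execution after a 10-second delay, on one member.
        executor.schedule(new SomeTask(), 10, TimeUnit.SECONDS);

        // Fixed-rate execution on every member: first run after 10 seconds,
        // then every 60 seconds. This is the closest built-in substitute
        // for a cron expression.
        executor.scheduleOnAllMembersAtFixedRate(new SomeTask(), 10, 60, TimeUnit.SECONDS);
    }
}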

Related

Which scheduler in Python helps to run cron-type scheduling for one-time execution at a particular date and time?

I want to run a Python cron-type scheduler and schedule a job for one-time execution at a particular date and time. Once the job is done, the job has to disappear, while the scheduler continues to run. I'm also looking for examples of using a scheduler and storing jobs in MongoDB or Redis.
You can use Celery for this. Celery helps you run tasks in the background and schedule cron jobs as well.
The Celery docs state that you can
schedule tasks to execute at a specific time
and they point to other useful resources.

How does spark.dynamicAllocation.enabled influence the order of jobs?

I need to understand when to use spark.dynamicAllocation.enabled. What are the advantages and disadvantages of using it? I have a queue where jobs get submitted:
9:30 AM --> Job A gets submitted with dynamicAllocation enabled.
10:30 AM --> Job B gets submitted with dynamicAllocation enabled.
Note: my data is huge (processing will be done on 10 GB of data, with transformations).
Which job gets preference in the allocation of executors, Job A or Job B, and how does Spark coordinate between the two applications?
Dynamic Allocation of Executors is about resizing your pool of executors.
Quoting Dynamic Allocation:
spark.dynamicAllocation.enabled Whether to use dynamic resource allocation, which scales the number of executors registered with this application up and down based on the workload.
And later on in Dynamic Resource Allocation:
Spark provides a mechanism to dynamically adjust the resources your application occupies based on the workload. This means that your application may give resources back to the cluster if they are no longer used and request them again later when there is demand. This feature is particularly useful if multiple applications share resources in your Spark cluster.
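To make the setting concrete, dynamic allocation is enabled per application. A minimal Java sketch might look like this (the executor bounds are arbitrary, and an external shuffle service is assumed to be running on the cluster):

import org.apache.spark.SparkConf;
import org.apache.spark.sql.SparkSession;

public class DynamicAllocationExample {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setAppName("JobA")
                // Scale the executor pool up and down with the workload.
                .set("spark.dynamicAllocation.enabled", "true")
                // Needed so shuffle data survives executor removal.
                .set("spark.shuffle.service.enabled", "true")
                .set("spark.dynamicAllocation.minExecutors", "1")
                .set("spark.dynamicAllocation.maxExecutors", "10");

        SparkSession spark = SparkSession.builder().config(conf).getOrCreate();
        // ... run the job ...
        spark.stop();
    }
}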
In other words, coming back to Job A and Job B: Job A will usually finish before Job B is executed. Spark jobs are usually executed sequentially, i.e. a job has to finish before another can start.
Usually...
SparkContext is thread-safe and can handle jobs from a Spark application. That means you may submit jobs at the same time or one after another, and in some configurations you can expect these two jobs to run in parallel.
Quoting Scheduling Within an Application:
Inside a given Spark application (SparkContext instance), multiple parallel jobs can run simultaneously if they were submitted from separate threads. By “job”, in this section, we mean a Spark action (e.g. save, collect) and any tasks that need to run to evaluate that action. Spark’s scheduler is fully thread-safe and supports this use case to enable applications that serve multiple requests (e.g. queries for multiple users).
By default, Spark’s scheduler runs jobs in FIFO fashion. Each job is divided into “stages” (e.g. map and reduce phases), and the first job gets priority on all available resources while its stages have tasks to launch, then the second job gets priority, etc.
It is also possible to configure fair sharing between jobs. Under fair sharing, Spark assigns tasks between jobs in a “round robin” fashion, so that all jobs get a roughly equal share of cluster resources. This means that short jobs submitted while a long job is running can start receiving resources right away and still get good response times, without waiting for the long job to finish. This mode is best for multi-user settings.
Wrapping up...
Which job gets preference in the allocation of executors, Job A or Job B, and how does Spark coordinate between the two applications?
Job A.
Unless you have enabled Fair Scheduler Pools:
The fair scheduler also supports grouping jobs into pools, and setting different scheduling options (e.g. weight) for each pool. This can be useful to create a “high-priority” pool for more important jobs, for example, or to group the jobs of each user together and give users equal shares regardless of how many concurrent jobs they have instead of giving jobs equal shares.
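For illustration, such pools are typically declared in an XML allocation file referenced by spark.scheduler.allocation.file; the pool name and values below are arbitrary:

<?xml version="1.0"?>
<allocations>
  <pool name="high-priority">
    <schedulingMode>FAIR</schedulingMode>
    <weight>2</weight>
    <minShare>1</minShare>
  </pool>
</allocations>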

Spark Schedule: FIFO or FAIR?

How to choose Spark Scheduler: FIFO or FAIR?
What is the difference between the Spark scheduler and the YARN scheduler?
When you submit your jobs to the cluster, whether with spark-submit or by any other means, they are handed to the Spark scheduler, which is responsible for materializing the logical plan of your jobs. In Spark, there are two modes.
1. FIFO
By default, Spark’s scheduler runs jobs in FIFO fashion. Each job is divided into stages (e.g. map and reduce phases), and the first job gets priority on all available resources while its stages have tasks to launch, then the second job gets priority, etc. If the jobs at the head of the queue don’t need to use the whole cluster, later jobs can start to run right away, but if the jobs at the head of the queue are large, then later jobs may be delayed significantly.
2. FAIR
The fair scheduler also supports grouping jobs into pools and setting different scheduling options (e.g. weight) for each pool. This can be useful to create a high-priority pool for more important jobs, for example, or to group the jobs of each user together and give users equal shares regardless of how many concurrent jobs they have instead of giving jobs equal shares. This approach is modeled after the Hadoop Fair Scheduler.
Without any intervention, newly submitted jobs go into a default pool, but jobs’ pools can be set by adding the spark.scheduler.pool "local property" to the SparkContext in the thread that’s submitting them.
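A minimal Java sketch of this (the pool name "production" is arbitrary, and the in-application scheduler is switched to FAIR first):

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class FairPoolExample {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setAppName("FairPoolExample")
                // Switch the in-application scheduler from FIFO to FAIR.
                .set("spark.scheduler.mode", "FAIR");
        JavaSparkContext sc = new JavaSparkContext(conf);

        // Jobs submitted from this thread go into the "production" pool.
        sc.setLocalProperty("spark.scheduler.pool", "production");
        // ... run actions ...

        // Reset this thread to the default pool.
        sc.setLocalProperty("spark.scheduler.pool", null);

        sc.stop();
    }
}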
For more info, see the Spark job scheduling documentation.

How do I limit the number of spark applications in state=RUNNING to 1 for a single queue in YARN?

I have multiple Spark jobs. Normally I submit my Spark jobs to YARN, and I have an option --yarn_queue which tells them which YARN queue to enter.
But, the jobs seem to run in parallel in the same queue. Sometimes, the results of one spark job, are the inputs for the next spark job. How do I run my spark jobs sequentially rather than in parallel in the same queue?
I have looked at this page for the Capacity Scheduler. The closest thing I can see is the property yarn.scheduler.capacity.<queue>.maximum-applications, but this only sets the number of applications that can be in both the PENDING and RUNNING states. I'm interested in limiting the number of applications in the RUNNING state; I don't care about the total number in PENDING (or ACCEPTED, which is the same thing).
How do I limit the number of applications in state=RUNNING to 1 for a single queue?
You can configure the appropriate queue to run one task at a time in the Capacity Scheduler configuration. My suggestion is to use Ambari for that purpose. If you don't have that option, follow the instructions from the guide.
From https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/FairScheduler.html:
The Fair Scheduler lets all apps run by default, but it is also possible to limit the number of running apps per user and per queue through the config file. This can be useful when a user must submit hundreds of apps at once, or in general to improve performance if running too many apps at once would cause too much intermediate data to be created or too much context-switching. Limiting the apps does not cause any subsequently submitted apps to fail, only to wait in the scheduler’s queue until some of the user’s earlier apps finish.
Specifically, you need to configure:
maxRunningApps: limit the number of apps from the queue to run at once
E.g.
<?xml version="1.0"?>
<allocations>
  <queue name="sample_queue">
    <maxRunningApps>1</maxRunningApps>
    <!-- other options -->
  </queue>
</allocations>
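As a side note, YARN locates that allocation file through the yarn.scheduler.fair.allocation.file property in yarn-site.xml (this assumes the Fair Scheduler is the active YARN scheduler; the path below is only an example):

<property>
  <name>yarn.scheduler.fair.allocation.file</name>
  <value>/etc/hadoop/conf/fair-scheduler.xml</value>
</property>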

Creating spark tasks from within tasks (map functions) on the same application

Is it possible to do a map from a mapper function (i.e. from tasks) in pyspark?
In other words, is it possible to open "sub-tasks" from a task?
If so, how do I pass the SparkContext to the tasks? Just as a variable?
I would like to have a job that is composed of many tasks, where each of these tasks creates many tasks as well, without going back to the driver.
My use case is like this:
I am porting an application that was written using work queues to pyspark.
In my old application, tasks created other tasks, and we used this functionality. I don't want to redesign the whole codebase because of the move to Spark (especially because I will have to make sure that both platforms work during the transition phase between the systems)...
Is it possible to open "sub-tasks" from a task?
No, at least not in a healthy manner*.
A task is a command sent from the driver, and Spark has one driver (central coordinator) that communicates with many distributed workers (executors).
As a result, what you ask for here implies that every task could play the role of a sub-driver. Not even a worker can do that; it would meet the same fate in my answer as the task.
Relevant resources:
What is a task in Spark? How does the Spark worker execute the jar file?
What are workers, executors, cores in Spark Standalone cluster?
*With that said, I mean that I am not aware of any hack to do this, and if one exists it would be too specific.
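If a small redesign is acceptable, the usual workaround is to express each task's sub-work as data and let flatMap expand it, so that no executor ever needs a SparkContext. A hypothetical sketch (shown in Java here; the same flatMap idea applies in pyspark):

import java.util.Arrays;
import java.util.List;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class NoNestedTasks {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("NoNestedTasks").setMaster("local[*]");
        try (JavaSparkContext sc = new JavaSparkContext(conf)) {
            // Each element stands for a "task"; its sub-tasks are derived
            // as data by flatMap instead of being scheduled from inside a
            // running task.
            List<Integer> result = sc.parallelize(Arrays.asList(1, 2, 3))
                    .flatMap(x -> Arrays.asList(x * 10, x * 10 + 1).iterator())
                    .collect();
            System.out.println(result); // [10, 11, 20, 21, 30, 31]
        }
    }
}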
