How to run multiple jobs from a single job in Talend

I have multiple Talend jobs. I want to create a single job from which I can trigger all the other jobs, and I also need a check so that if one job fails, the remaining jobs do not run. Is this possible in Talend? Please guide me on this.

You can create a wrapper job and drag and drop all the other jobs you want to run into it; each dropped job becomes a tRunJob component. You can then set dependencies between these jobs by chaining the tRunJob components with On Subjob OK triggers, so a failure in one job stops the jobs after it from running.
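Outside the Talend designer, the same fail-fast chaining idea can be sketched in plain Python; the job launcher scripts below are hypothetical placeholders for exported Talend jobs, not anything Talend generates by that name:

```python
import subprocess
import sys

# Hypothetical launchers for exported jobs; substitute your own commands.
JOBS = [
    ["./job1/job1_run.sh"],
    ["./job2/job2_run.sh"],
    ["./job3/job3_run.sh"],
]

def run_chain(jobs):
    """Run jobs in order; stop at the first failure (fail-fast)."""
    for cmd in jobs:
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"Job {cmd} failed with exit code {result.returncode}; "
                  "skipping the remaining jobs.", file=sys.stderr)
            return result.returncode
    return 0

if __name__ == "__main__":
    sys.exit(run_chain(JOBS))
```

This mirrors what the On Subjob OK chain does inside the wrapper job: each step runs only if the previous one exited successfully.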

Related

How to combine activities into one unit for "Rerun from failed activity" in ADF

Our scenario is that ADF doesn't provide a direct activity for managing an external service (a Spark job), so we need to combine several activities into one unit to manage it. The activities contain steps like getting a secret, calling an API to submit the job, monitoring the job, etc.
Current design:
We combine those activities into one pipeline and call this pipeline to manage each Spark job. We have many Spark jobs that depend on each other, so a parent pipeline includes several of these sub-pipelines with dependencies between them. When some sub-pipelines fail, we want to use "Rerun from failed activity" to retrigger them. The issue is that "Rerun from failed activity" skips the activities that succeeded and reruns only from the failed one. For the Spark job pipeline (submit/monitor), if the submit activity succeeds but the monitor activity fails, a rerun starts from monitor again and fails again. What we actually want is for the combined Spark submit pipeline to rerun from the beginning rather than from the failed inner activity.
Question:
Can we treat the Spark submit pipeline as a single combined activity, so that a rerun always starts from the beginning instead of from its inner activities? Or, with several Spark submit pipelines in one parent pipeline, how could we rerun only the failed sub-pipelines?
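The granularity problem here can be illustrated outside ADF: treat submit and monitor as one atomic unit, so any retry restarts the unit from the submit step. A minimal Python sketch, where submit_job and monitor_job are hypothetical callables standing in for the pipeline's activities:

```python
import time

def run_spark_job_atomically(submit_job, monitor_job, max_attempts=3):
    """Treat submit + monitor as one unit: any failure restarts from submit."""
    for attempt in range(1, max_attempts + 1):
        try:
            job_id = submit_job()   # hypothetical: submit and return a job ID
            monitor_job(job_id)     # hypothetical: raise if the job fails
            return job_id
        except Exception as exc:
            print(f"Attempt {attempt} failed ({exc}); restarting from submit.")
            time.sleep(5)
    raise RuntimeError("Spark job failed after all attempts.")
```

The point is that the retry boundary sits around the whole submit/monitor pair, which is exactly what "Rerun from failed activity" does not give you when it restarts mid-unit.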

Is there a way to re-run only the failed jobs added in a Dataproc workflow template?

I am designing a Dataproc workflow template with multiple Spark jobs that run in sequence, one after the other. There could be scenarios where the workflow runs a few jobs successfully and fails for others. Is there a way to rerun just the failed jobs once I have fixed the issues that caused them to fail in the first place? Please note that I am not looking for a job retry mechanism. I want to rerun the workflow while skipping the jobs that already succeeded.
Dataproc Workflows do not support this use case.
Please take a look at Cloud Composer, an Apache Airflow-based orchestration service, which is more flexible and should be able to satisfy your use case.
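For illustration, a minimal Airflow DAG along those lines, assuming the Google provider package is installed; the project, cluster, bucket, and class names are placeholders. In Airflow you can clear just the failed task instance and rerun it without re-executing the upstream tasks that succeeded:

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.dataproc import (
    DataprocSubmitJobOperator,
)

# Placeholder values -- substitute your own project, region, and cluster.
PROJECT_ID = "my-project"
REGION = "us-central1"

def spark_job(jar_uri):
    """Build a Dataproc Spark job spec for the given jar (placeholder class)."""
    return {
        "reference": {"project_id": PROJECT_ID},
        "placement": {"cluster_name": "my-cluster"},
        "spark_job": {"jar_file_uris": [jar_uri],
                      "main_class": "com.example.Main"},
    }

with DAG("sequential_spark_jobs", start_date=datetime(2024, 1, 1),
         schedule=None, catchup=False) as dag:
    job_a = DataprocSubmitJobOperator(
        task_id="job_a", project_id=PROJECT_ID, region=REGION,
        job=spark_job("gs://my-bucket/job_a.jar"))
    job_b = DataprocSubmitJobOperator(
        task_id="job_b", project_id=PROJECT_ID, region=REGION,
        job=spark_job("gs://my-bucket/job_b.jar"))

    # If job_b fails, clear only job_b in the Airflow UI to rerun it;
    # job_a's successful run is not repeated.
    job_a >> job_b
```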

Application job submission without duplication

We are using DataStax Spark 6.0.
We are submitting jobs with crontab to run every 5 minutes. We wrote a script that checks whether the application is already running, to avoid submitting a duplicate of the same application. Is there a way at the Spark level to stop the submission, or keep the job in a queue, so that duplicate jobs with the same application are avoided?
Thanks
Rakesh
I tried using crontab only.
You can use Oozie to schedule your Spark job.
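As a sketch of the kind of check script the question mentions: a lock file with a non-blocking flock makes the cron entry fail-safe against overlapping runs. The lock path and the spark-submit command line are hypothetical placeholders:

```python
import fcntl
import subprocess
import sys

LOCK_PATH = "/tmp/my_spark_app.lock"  # hypothetical lock-file location

def main():
    lock_file = open(LOCK_PATH, "w")
    try:
        # Non-blocking: raises OSError if another run already holds the lock.
        fcntl.flock(lock_file, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except OSError:
        print("Previous submission still running; skipping this cycle.")
        return 0
    # Placeholder spark-submit invocation -- substitute your own.
    return subprocess.call(
        ["spark-submit", "--class", "com.example.App", "/path/to/app.jar"])

if __name__ == "__main__":
    sys.exit(main())
```

The lock is released automatically when the process exits, so a crashed run cannot leave the lock stuck the way a stale PID file can.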

Automatically spawn an Azure Batch AI job periodically

I want to automatically start a job on an Azure Batch AI cluster once a week. The jobs are all identical except for the starting time. I thought of writing a PowerShell Azure Function to do this, but Azure Functions v2 doesn't support PowerShell, and I don't want to use v1 in case it is phased out. I would prefer not to do this in C# or Java. How can I do this?
Currently, there's no built-in option to trigger a job on an Azure Batch AI cluster on a schedule. You could run a shell script that sets up a recurring schedule using the system's task scheduler. Please see if this doc by Said Bleik helps:
https://github.com/saidbleik/batchai_mm_ad#scheduling-jobs
I assume this way you can add multiple schedules for the job!
The Azure Batch portal has a "Job schedules" tab. You can go there, add a job, and set a schedule for it; you specify the recurrence in the schedule. From the documentation on scheduled jobs:
Job schedules enable you to create recurring jobs within the Batch service. A job schedule specifies when to run jobs and includes the specifications for the jobs to be run. You can specify the duration of the schedule (how long and when the schedule is in effect) and how frequently jobs are created during the scheduled period.
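The same thing can be done programmatically. A minimal sketch with the azure-batch Python SDK, assuming weekly recurrence and a pool named "my-pool"; the account name, key, URL, and command line are placeholders:

```python
from datetime import timedelta

from azure.batch import BatchServiceClient, models
from azure.batch.batch_auth import SharedKeyCredentials

# Placeholder credentials -- substitute your Batch account values.
credentials = SharedKeyCredentials("mybatchaccount", "<account-key>")
client = BatchServiceClient(
    credentials, batch_url="https://mybatchaccount.region.batch.azure.com")

schedule = models.JobScheduleAddParameter(
    id="weekly-training-job",
    # Create a new job once a week.
    schedule=models.Schedule(recurrence_interval=timedelta(weeks=1)),
    job_specification=models.JobSpecification(
        pool_info=models.PoolInformation(pool_id="my-pool"),
        job_manager_task=models.JobManagerTask(
            id="manager",
            command_line="/bin/bash -c 'python train.py'")))

client.job_schedule.add(schedule)
```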

PBS: automatically restart failed jobs

I use PBS job arrays to submit a number of jobs. Sometimes a small number of jobs go wrong and do not run successfully. Is there a way to automatically detect the failed jobs and restart them?
pbs_server supports automatic_requeue_exit_code:
an exit code, defined by the admin, that tells pbs_server to requeue the job instead of considering it completed. This allows the user to add additional checks that the job can run meaningfully; if it cannot, the job script exits with the specified code so that it is requeued.
There is also a provision for requeuing jobs in the case where the prologue fails (see the prologue/epilogue script documentation).
There are probably more sophisticated ways of doing this, but they would fall outside the realm of built-in Torque options.
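As a sketch of how a job script could use this, assuming the admin has configured the code (e.g. via qmgr, setting automatic_requeue_exit_code to 99) and that the precondition check is a hypothetical placeholder. Torque reads #PBS directives from comment lines, so a Python job script works too:

```python
#!/usr/bin/env python3
#PBS -N array-job
#PBS -t 1-100

import os
import sys

# Must match the admin-configured automatic_requeue_exit_code (assumed 99).
REQUEUE_EXIT_CODE = 99

def preconditions_ok():
    # Hypothetical sanity check: verify this array member's input exists.
    array_index = os.environ.get("PBS_ARRAYID", "0")
    return os.path.exists(f"/data/input_{array_index}.dat")

def main():
    if not preconditions_ok():
        # Exit with the requeue code so pbs_server requeues this job
        # instead of marking it completed.
        sys.exit(REQUEUE_EXIT_CODE)
    # ... the actual work would go here ...

if __name__ == "__main__":
    main()
```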
