Autosys dependencies on mainframe jobs

We have an Autosys job (job_a) that needs to be dependent on two mainframe jobs (job_m1, job_m2), with the condition that the new job should start only after both mainframe jobs complete. How can we do that?

I believe the vendor (CA) has a job management agent that can run on z/OS and report job status directly to Autosys. When you do this, your mainframe jobs become just like any other job on any other platform.
If this doesn't meet your needs, I'd definitely make it the vendor's problem - your company is paying them a considerable sum for ongoing support and maintenance, and answering this type of question should certainly be something they can do for you.
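For reference, once the mainframe jobs are visible to Autosys (for example through the z/OS agent mentioned above) as job_m1 and job_m2, the dependency is expressed with a JIL condition attribute. A minimal sketch; the machine and command values are placeholders, and exact JIL syntax varies slightly between Autosys versions:

insert_job: job_a
job_type: cmd
machine: unixhost01
command: /opt/app/run_job_a.sh
condition: s(job_m1) & s(job_m2)

Here s() means "completed with SUCCESS", so job_a will not start until both mainframe jobs have completed successfully.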

Related

Monitor cron jobs in real-time

I currently have an issue where a cron job, set to run at midnight each day to reset daily API requests for a service I run, failed recently. That caused me a whole bunch of headaches, and I've been trying to find a way to monitor all of my cron jobs so a situation like this doesn't happen again.
However, I haven't been able to find a sufficient solution, so I am considering creating a platform that lets you monitor cron jobs in real time: see logs (current and past), the last run date, and the failure/success of the last run, with notifications if a job fails or hasn't completed within a specified window of time.
I believe this might be a pain point, and a good solution, for others as well.
What are your thoughts? Do you think this would be useful? Do you have any suggestions, or do you think this would be a waste of time?
Have you heard of Rundeck? (https://www.rundeck.com/open-source)
It looks like exactly what you're looking for.
You install it on a server, and it acts like a Web UI for a crontab.
You define the jobs you want to run in the Web UI and how often they should run, and you can see a history of past executions with their status and output. You can also see when the next execution will happen.
I think there are also some alerting features to notify you when a job fails, though I'm not sure whether it can notify you based on job execution time.
This might be a good fit for what you're looking for.
Two years later, I am asking myself exactly the same questions. Surely you have built such a service by now? Every backend coder needs this from time to time, in theory. I'm surprised this question hasn't received more activity/votes. I did get an answer leading to this, though: https://uptimerobot.com/cron-job-monitoring/, which might be a good solution; I need to test it out. It does not seem to be promoted much, as it's not easy to find. There is also https://cronitor.io/docs/cron-job-monitoring, which can transmit (somewhat limited) telemetry data and has a lot of SDKs for use from within programming languages.
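For what it's worth, services like the two above typically work on a heartbeat pattern: the cron entry pings a per-job URL only when the command succeeds, and the service alerts you if no ping arrives within the expected window. A minimal crontab sketch; the script path and ping URL are placeholders:

# Reset daily API quotas at midnight, then report success to the monitor.
# If the script fails (or hangs), no ping is sent and the monitor alerts.
0 0 * * * /usr/local/bin/reset_api_quotas.sh && curl -fsS --retry 3 https://monitoring.example.com/ping/YOUR-JOB-ID > /dev/null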

Cron Job Microservices

I am using Spring Cloud and have various microservices for an online shopping vendor. Everything is working as expected.
But I got a requirement where I need to run a cron job over the customer records, find the customers whose statement date matches the current date, and calculate the rate of interest to be paid. This needs to run every day.
I am confused about how to accommodate this cron job in a microservices architecture. Do I need another server running just this cron job?
Depending on the platform (e.g., Cloud Foundry, Kubernetes) on which you're orchestrating the batch jobs in SCDF, you could write a simple Quartz-based Boot application that interacts with SCDF's REST endpoints to schedule the Task definitions defined in SCDF.
There is plenty of online literature on the Quartz + Boot combination.
We are also working on a native scheduler integration for Cloud Foundry (via PCF Scheduler). Once it's ready, you'll be able to schedule Tasks (i.e., with cron expressions) from SCDF's Dashboard natively.
As I understand it, you should have one centralized supervisor of jobs, because multiple instances could otherwise run the same job at the same time.
This supervisor can be a microservice that delegates job execution to other services via REST calls or a message queue and waits for the result.
That means the job supervisor becomes part of the infrastructure, like the message queue or the database.
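If the job itself doesn't warrant a full Quartz setup, Spring's own scheduling support is one possible starting point. A minimal sketch, assuming hypothetical CustomerRepository and InterestCalculator beans; note that, per the answer above, @Scheduled fires on every instance, so with multiple replicas you would still need a single supervisor or a distributed lock:

import java.time.LocalDate;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

// Requires @EnableScheduling on a configuration class.
@Component
public class InterestCalculationJob {

    private final CustomerRepository customers;   // hypothetical repository
    private final InterestCalculator calculator;  // hypothetical service

    public InterestCalculationJob(CustomerRepository customers,
                                  InterestCalculator calculator) {
        this.customers = customers;
        this.calculator = calculator;
    }

    // Spring cron format: second minute hour day-of-month month day-of-week.
    // This fires every day at midnight.
    @Scheduled(cron = "0 0 0 * * *")
    public void calculateDailyInterest() {
        customers.findByStatementDate(LocalDate.now())
                 .forEach(calculator::calculateInterest);
    }
}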

Running HDInsight jobs howto

A few questions regarding the HDInsight jobs approach.
1) How do I schedule an HDInsight job? Is there any ready-made solution for this? For example, if my system will constantly receive a large number of new input files that we need to run a map/reduce job on, what is the recommended way to implement ongoing processing?
2) From the price perspective, it is recommended to remove the HDInsight cluster when no job is running. As I understand it, there is no way to automate this if we decide to run the job daily? Any recommendations here?
3) Is there a way to ensure that the same files are not processed more than once? How do you solve this issue?
4) I might be mistaken, but it looks like every HDInsight job requires a new output storage folder to store the reducer results in. What is the best practice for merging those results so that reporting always works on the whole data set?
OK, there are a lot of questions in there! Here are, I hope, a few quick answers.
There isn't really a way of scheduling job submission in HDInsight, though of course you can schedule a program to run the job submissions for you. Depending on your workflow, it may be worth taking a look at Oozie, which can be a little awkward to get going on HDInsight, but should help.
On the price front, I would recommend that if you're not using the cluster, you destroy it and bring it back when you need it (those compute hours can really add up!). Note that this will lose anything you have in HDFS, which should be mainly intermediate results; any output or input data held in asv:// storage will persist in an Azure Storage account. You can certainly automate this using the CLI tools, or the REST interface used by the CLI tools (see my answer on Hadoop on Azure Create New Cluster; the first one is out of date).
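As an illustration of that automation with the classic (pre-ARM) Azure PowerShell module, something along these lines should work; the names and exact parameters are assumptions and varied between module versions:

# Bring the cluster up, pointing it at an existing storage account so that
# input/output data outlives the cluster itself.
New-AzureHDInsightCluster -Name "myjobcluster" -Location "North Europe" `
    -DefaultStorageAccountName "mystorage.blob.core.windows.net" `
    -DefaultStorageAccountKey $storageKey `
    -DefaultStorageContainerName "hdinsight" `
    -Credential (Get-Credential) -ClusterSizeInNodes 4

# ... submit the daily job and wait for it to finish here ...

# Tear the cluster down again; only HDFS contents are lost, the blob data stays.
Remove-AzureHDInsightCluster -Name "myjobcluster"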
I would do this by making sure I only submitted the job once for each file, relying on Hadoop to handle the retry and reliability side, and so removing the need to manage retries in your application.
Once you have the outputs from your initial processes, if you want to reduce them to a single output for reporting, the best bet is probably a secondary MapReduce job with those outputs as its inputs.
If you don't care about the individual intermediate jobs, you can chain these directly into the one MapReduce job (which can contain as many map and reduce steps as you like) through job chaining; see Chaining multiple MapReduce jobs in Hadoop for a Java-based example. Sadly, the .NET API does not currently support this form of job chaining.
However, you may be able to just use the ReducerCombinerBase class if your case allows for a Reducer->Combiner approach.
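As a rough Java illustration of that chaining-by-driver approach (the mapper/reducer classes and paths are placeholders, and the usual output key/value class setup is omitted for brevity):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class ChainedDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path input = new Path(args[0]);
        Path intermediate = new Path(args[1]); // scratch dir, unique per run
        Path report = new Path(args[2]);       // final, merged output

        // First pass: the per-file processing job.
        Job first = Job.getInstance(conf, "per-file pass");
        first.setJarByClass(ChainedDriver.class);
        first.setMapperClass(FirstMapper.class);    // placeholder
        first.setReducerClass(FirstReducer.class);  // placeholder
        FileInputFormat.addInputPath(first, input);
        FileOutputFormat.setOutputPath(first, intermediate);
        if (!first.waitForCompletion(true)) {
            System.exit(1); // don't merge if the first pass failed
        }

        // Second pass: reduce the intermediate outputs to one reporting set.
        Job merge = Job.getInstance(conf, "merge pass");
        merge.setJarByClass(ChainedDriver.class);
        merge.setMapperClass(MergeMapper.class);    // placeholder
        merge.setReducerClass(MergeReducer.class);  // placeholder
        FileInputFormat.addInputPath(merge, intermediate);
        FileOutputFormat.setOutputPath(merge, report);
        System.exit(merge.waitForCompletion(true) ? 0 : 1);
    }
}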

Lotus Notes Agent

Where can I find a good online reference on Lotus Notes agents? I am currently having problems with simultaneous agents and with understanding how agents work, best practices, etc. Thanks in advance!
I am currently having problems with simultaneous agents
Based on this comment I take it you are running a scheduled agent?
The way scheduled agents work is that only one agent from a particular database can run at a time, even if you have multiple Agent Manager (AMGR) threads. Also, agents cannot run more often than every 5 minutes; the UI will let you enter a lower number, but it will change it.
The other factor to take into account is how long your agent runs for. If it runs longer than the interval you set up, you will end up backlogging the schedule. Also, the server can be configured to kill agents that run over a certain time, so you need to make sure the agent completes within that timeframe.
Now, to bypass all this, you can execute an agent from the Domino console as follows:
tell amgr run "database.nsf" 'agentName'
This will run in its own thread, outside the scheduler. Because of this, you can create a program document to execute an agent at intervals of less than 5 minutes, and to run multiple agents within the same database.
Doing this is dangerous, however, as you have to be aware of a number of issues.
As the agent is outside the control of the scheduler, you can't kill it the way you would a scheduled agent.
Running multiple threads can tie up more processes. So while the scheduler will merely backlog runs if an agent runs longer than its schedule, a program document in the same situation can crash the server.
You need to be aware of what the agent is doing in the database, so that it won't interfere with any other agents in the same database and can cope with being run twice in parallel.
For more reading material on this:
Improving Agent Manager Performance.
http://publib.boulder.ibm.com/infocenter/domhelp/v8r0/topic/com.ibm.help.domino.admin.doc/DOC/H_AGENT_MANAGER_NOTES_INI_VARIABLES.html
Agent Manager troubleshooting.
http://publib.boulder.ibm.com/infocenter/domhelp/v8r0/topic/com.ibm.help.domino.admin.doc/DOC/H_ABOUT_TROUBLESHOOTING_AGENTS.html
Troubleshooting Agents (Old material but still relevant)
http://www.ibm.com/developerworks/lotus/library/ls-Troubleshooting_agents/index.html
... and related tech notes:
Title: How to run two agents concurrently in the same database using a wrapper agent
http://www.ibm.com/support/docview.wss?uid=swg21279847
Title: How to run multiple agents in the same database using a Program document
http://www.ibm.com/support/docview.wss?uid=swg21279832

Resource-manage external nodes in Jenkins for tests

My problem is that I have code that needs a freshly rebooted node, and I have many long-running Jenkins test jobs that need to be executed on such rebooted nodes.
My existing solution is to define multiple "proxy" machines in Jenkins with the same label (TestLable) and one executor per machine, and to bind all the test jobs to that label. In the test execution script I detect the Jenkins machine (via the Jenkins NODE_NAME environment variable) and use that to know which physical machine the tests should use.
Does anybody know of a better solution?
The above works, but I need to define a high number of "nodes/machines" that may not be needed. What I would like is a plugin that could grant a token to a Jenkins job, so that a job would not execute until both a Jenkins executor and a token were free. The token should be a string, so that my test jobs could use it to know which external node to use.
We have written our own scheduler that allocates resources before starting Jenkins nodes. There may be a better solution, but this mostly works for us. I've yet to come across an off-the-shelf scheduler that can deal with complicated allocation of different hardware resources. We have n box types, allocated to n build types.
Some build types we have are not compatible together without destroying all persistent data, which may be required because it takes a long time to gather. Some jobs require combinations of these hardware types. We store the details in a database and use business logic to determine how hardware is allocated. We've often found that particular job types need additional business logic or extra data fields to account for their specific requirements.
So the best way may be to write your own scheduler, in your language of choice, which takes your particular needs into account.
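One off-the-shelf option worth checking before writing your own scheduler is the Lockable Resources plugin, which matches the token idea above: you register each physical machine as a lockable resource under a shared label, and a job blocks until it can acquire one, receiving the resource name (the "token") in a variable. A minimal pipeline sketch; the label test-rig and the resource names are assumptions:

// Assumes resources rig-01, rig-02, ... registered under the label
// "test-rig" in Manage Jenkins -> Lockable Resources.
pipeline {
    agent { label 'TestLable' }
    stages {
        stage('Test on a reserved machine') {
            steps {
                // Block until one labelled resource is free; its name is
                // exposed to the block as the environment variable RIG.
                lock(label: 'test-rig', quantity: 1, variable: 'RIG') {
                    sh './run_tests.sh "$RIG"'
                }
            }
        }
    }
}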
