Cron Jobs in Cognos - Changing owner - cron

Right now in Cognos, we are getting scheduled reports from a user, X. X has since left the organization, and I want to replace X's mail-id with mine, so that everybody gets the scheduled reports from my mail-id. I have already done the following, with no results:
1. Changed the email credentials for the cron jobs in Data Manager.
2. Changed the credentials under "modify schedule".
3. Changed the owner of the report.

Step 2 is the only thing you need to do for reports.
Here are the detailed instructions:
http://pic.dhe.ibm.com/infocenter/cbi/v10r1m0/index.jsp?topic=%2Fcom.ibm.swg.im.cognos.ug_cc.10.1.0.doc%2Fug_cc_id9882change_schedule_credentials.html

Related

Azure WebJob fails to run on its schedule; it ran once when I uploaded the bat file and never ran again

I created a WebJob as a Triggered job to run on a Schedule. When I uploaded the file it was accepted by the form and I went ahead and clicked RUN because I figured you have to click RUN right after uploading it so that it knows it can go ahead and start running. (I am not sure if I actually have to click RUN, or if I should have just uploaded it and let it be so it should just run on its own according to the CRON Expression provided.)
Well, the job ran as soon as I clicked start and it succeeded which was good news. The issue is, it was supposed to run on its schedule every 4 hours, but never did. It only ran once, which was the time I clicked start.
The CRON expression I created for it is 0 50 23/4 * * *, which translates to:
At 50 minutes past the hour, every 4 hours, starting at 11:00 PM.
Basically I need the job to run every 4 hours, but most importantly at 11:50pm, which is why I set that as the schedule. So it should run at 11:50pm, 3:50am, 7:50am, 11:50am, 3:50pm, 7:50pm, and 11:50pm every day.
I uploaded the job at about 10pm, and it ran at that time because I clicked RUN, but I was still expecting it to do its real scheduled run at 11:50pm. It never did; the logs show success only for that initial run.
When I looked at the WebJob area in Azure the next day, it showed completed 17 hours ago, having run only once as of the time of writing.
What could be my error here? Is it something wrong with the CRON expression that I have provided for the job? Before this one I made a job that would run every 2 minutes, and that one worked perfectly fine, but this one with a more complex CRON expression seems to be giving me issues.
I was able to fix this issue by scrapping everything and starting over. I was using an incorrect CRON expression for the trigger times I needed. I also found out that I could just upload the file and not have to click the RUN button, since it will run on its own following the given expression.
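For reference, Azure triggered WebJobs use six-field NCRONTAB expressions (second, minute, hour, day, month, day-of-week), and one likely issue with the original expression is that 23/4 in the hour field only ever matches hour 23, since the step starts at 23 and does not wrap around midnight, so it would at best fire once a day rather than every 4 hours. A sketch of a schedule that fires at the times listed above, assuming the schedule lives in the job's settings.job file:

{
  "schedule": "0 50 3/4 * * *"
}

The hour field 3/4 matches hours 3, 7, 11, 15, 19, and 23, so with minute 50 the job fires at 3:50am, 7:50am, 11:50am, 3:50pm, 7:50pm, and 11:50pm.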

Product Catalog sync not working with Hybris CompositeCronjob

With CompositeCronjob, I want the PRODUCT SYNC and FULL SOLR Index jobs to run, in that order.
For this, I ran the PRODUCT SYNC job once and added the resulting cronjob code to the CompositeCronjob as an Entry.
Likewise, I ran the FULL Solr Index job and added the resulting cronjob code to the CompositeCronjob as an Entry.
I ran the CompositeCronjob with no ERROR, but PRODUCT SYNC only worked for 1-2 seconds, returned SUCCESS, and moved on to the next step, the Solr Index step.
But PRODUCT SYNC didn't actually do anything in those 1-2 seconds: it didn't synchronize any products. How can I solve this problem? It does not give an error, but it does not synchronize the products.
I am using Hybris version 19.05
I am very grateful in advance for your help.
Hi, please validate that the CompositeCronjob entries are correct.
An example follows:
INSERT_UPDATE CompositeEntry;code[unique=true] ;executableCronJob(code)
;updateComposite-$storeUidStaticIndexB2C-jobEntry ;staticContentUpdate-$storeUidStaticIndexB2C-cronJob
;updateIndex-$storeUidStaticIndexB2C-jobEntry ;update-$storeUidStaticIndexB2C-cronJob
INSERT_UPDATE CompositeCronJob;code[unique=true] ;job(code) ;sessionLanguage(isocode);compositeEntries(code);nodeId[default=0]
;compositeStaticContentUpdate-$storeUidStaticIndexB2C-cronJob ;compositeJobPerformable ;en ;updateComposite-$storeUidStaticIndexB2C-jobEntry,updateIndex-$storeUidStaticIndexB2C-jobEntry

Airflow Branch Operator and S3KeySensor do not respect trigger_rule='none_failed'

I have 3 S3KeySensors on different files in different folders. Two of them have to be successful, and the third one can be skipped. I am trying to trigger the downstream task with trigger_rule='none_failed', but S3KeySensor does not seem to respect that. This is how my DAG looks.
This is how it behaves.
This is how I want it to behave:
You have to set trigger_rule="none_failed_or_skipped" on the test_step task, as explained in this documentation.
From the documentation:
none_failed_or_skipped: all parents have not failed (failed or upstream_failed) and at least one parent has succeeded.
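For what it's worth, here is a minimal sketch of that layout (the DAG name, bucket keys, and timeouts are made up, and Airflow 1.10-style import paths are assumed). The optional sensor is given soft_fail=True so that a missing file marks it skipped rather than failed, which is what lets none_failed_or_skipped fire the downstream task:

from datetime import datetime
from airflow import DAG
from airflow.operators.dummy_operator import DummyOperator
from airflow.sensors.s3_key_sensor import S3KeySensor

with DAG("s3_sensor_example", start_date=datetime(2021, 1, 1), schedule_interval="@daily") as dag:
    required_a = S3KeySensor(task_id="required_a", bucket_key="s3://my-bucket/folder_a/file.csv", timeout=600)
    required_b = S3KeySensor(task_id="required_b", bucket_key="s3://my-bucket/folder_b/file.csv", timeout=600)
    # soft_fail=True: on timeout this sensor is marked SKIPPED instead of FAILED
    optional_c = S3KeySensor(task_id="optional_c", bucket_key="s3://my-bucket/folder_c/file.csv",
                             timeout=600, soft_fail=True)
    # fires when no parent failed and at least one parent succeeded
    test_step = DummyOperator(task_id="test_step", trigger_rule="none_failed_or_skipped")
    [required_a, required_b, optional_c] >> test_step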

Show progress in an azure-pipeline output

So I have my computer set up as an agent pool in azure-devops. I'm creating a latency test so the developers can use it in their CI. The script runs in Python and tests various points in a system I have set up for the company, which is connected to the cloud; it's mainly for informative purposes. When I run the script I have to wait some time while the connected system goes through its normal network cycle inspecting all the devices in the local network (not very important for the question). While I'm waiting, I show a message in the terminal with "..." going from "." to ".." to "...", just to show the script didn't crash or anything.
The Python code looks like this and works just fine when I run it locally:
sys.stdout.write("\rprocessing queue, timing varies depending on priority" + ("."*( i % 3 + 1))+ "\r")
sys.stdout.flush()
However, the output shown in the Azure pipeline shows all of the lines without replacing them. Is there a way to do what I want?
I am afraid showing progress like this is not supported in an Azure pipeline. The Azure pipeline log console is not interactive; it just captures the agent machine's terminal output.
You might have to use a simpler way to indicate that the script is executing and not finished yet. For a simple example:
sys.stdout.write("Waiting for processing queue ...")
You can report this problem to the Microsoft development team. Hopefully they find a way to fix this in a future sprint.
I have seen it once but never actually used it myself. This can be done in both bash and PowerShell; I'm not sure if it works inside a Python script, so you might have to call bash/PowerShell from within your Python script.
It is possible to set a progress value in percent that is visible outside of the log, but as I understand it this value is step-specific, meaning it only applies to the pipeline step you're currently in. You could carry the numeric value (however many percent) along into the next step, but the progress counter would then again show up in the next step. I believe it is not possible to have a pipeline-global display of progress.
If you export a progress value, it will show up beside the step name in the left-hand step list.
Setting a progress value (like exporting a variable from one step to another, which is typically done the same way) is done by echoing special logging commands. There's a great description to be found here: Logging commands
What you want to do is something just like the example shown on the linked page:
echo "Begin a lengthy process..."
for i in {0..100..10}
do
sleep 1
echo "##vso[task.setprogress value=$i;]Sample Progress Indicator"
done
echo "Lengthy process is complete."
All of these special logging commands start with ##vso[task...]. The VSO is a relic of the time when Azure DevOps was called Visual Studio Online.
There are a whole bunch of them, but most of the time what you really need is exporting variables from one build step context to another, which is done with ##vso[task.setvariable variable=myVar]value.
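Since the asker's script is Python, the same logging command can be emitted from Python as well: the agent parses logging commands from anything written to stdout, so a sketch like this (the loop bounds and label are placeholders) should behave like the bash example above:

import sys
import time

print("Begin a lengthy process...")
for i in range(0, 101, 10):
    time.sleep(1)
    # the ##vso logging command must appear on its own line in stdout
    print("##vso[task.setprogress value={};]Sample Progress Indicator".format(i))
    sys.stdout.flush()
print("Lengthy process is complete.")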

Run section of puppet manifest once a day but with hourly poll

I have nodes checking into a puppet server every hour.
We have some tasks which we want to run on check-in but only once a day.
Would it be possible to make a function inside a puppet manifest that saves last run time and only runs if the last time was over 24 hours?
Update:
I did try one thing which semi-works. That is, move the chunk of puppet code into a separate file and have my main puppet manifest ensure a cron job exists for it.
The complaint I got back from another department about this is that they can no longer see install errors on Puppetboard. This image shows 2 nodes on the old puppet branch and 1 on the new branch:
With cron running puppet apply myFile.pp, we no longer get feedback about failures on Puppetboard, since the main manifest simply ensures that the cron job exists:
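Roughly like this (the file path, user, and time here are placeholders, not what we actually use):

cron { 'daily-puppet-chunk':
  command => '/opt/puppetlabs/bin/puppet apply /etc/puppetlabs/code/myFile.pp',
  user    => 'root',
  hour    => 3,
  minute  => 0,
}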
You have at least two options.
Assuming your unspecified task is handled by an exec resource, you could design this in such a way that Puppet only ever regards the exec as out of sync once per day. That could be achieved by having your exec write the calendar day into a file. Then you could add an unless attribute:
unless => "test $(</var/tmp/last_run) == $(date +%d)"
Obviously your exec would need to also keep track of updating that file.
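Put together, a minimal sketch (the task script path is hypothetical; provider => shell is used because both the command and the unless test rely on shell syntax, and POSIX test compares with = rather than ==):

exec { 'daily_task':
  # run the task, then record today's calendar day
  command  => '/usr/local/bin/daily_task.sh && date +%d > /var/tmp/last_run',
  # skip if the recorded day matches today
  unless   => 'test "$(cat /var/tmp/last_run)" = "$(date +%d)"',
  provider => shell,
}

If /var/tmp/last_run does not exist yet, the unless test fails and the exec runs, so the first run bootstraps the file.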
A second option would be to use the schedule metaparameter:
schedule { 'everyday':
  period => daily,
  range  => '1:00 - 1:59',
}

exec { 'do your thing':
  schedule => 'everyday',
}
That assumes that Puppet really will run only once per hour. The risk of course is that Puppet runs more than once in that hour, e.g. a sysadmin might manually run it.
