Mandatory executable task in cruisecontrol.net

Is there any way to run an exec task in CruiseControl.NET even if errors have occurred in the previous tasks? I'm looking for the same functionality we get from a finally block in .NET.
I want to execute a set of tasks that is independent of the success/failure of the previous tasks.
Thanks.

This is usually achieved using the <publishers/> element, which accepts the same Tasks as the <tasks/> element. Publishers are always executed, even if the <tasks/> fail.
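For example, a minimal sketch of a ccnet project configuration (the project name, build task and cleanup script are placeholders):

<project name="MyProject">
  <tasks>
    <!-- build tasks that may fail -->
    <msbuild>
      <projectFile>MySolution.sln</projectFile>
    </msbuild>
  </tasks>
  <publishers>
    <!-- executed even when the tasks above fail, like a finally block -->
    <exec>
      <executable>cleanup.bat</executable>
    </exec>
    <xmllogger />
  </publishers>
</project>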

Related

GitLab CI: How to keep a job going when it fails

I have this gitlab-ci job and I would like it to just ignore failures and keep going. Is there a way to do that? Note that allow_failure: true does not work, because it only marks the job as allowed to fail; I want the job to keep executing in spite of failing commands in the middle.
palms up, serious look: "We don't do that here"
The pipeline is supposed to work every time, and by design its commands cannot fail. You can however:
change the commands' logic to avoid the failure
split the commands into different jobs, using when: on_failure to manage the workflow
force the commands to have a clean exit code (e.g. appending || true after the fallible command)
While debugging I often use the third option after debug statements, or after commands whose behaviour I'm unsure of, as sketched below. The definitive version, however, is supposed to always work.
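A minimal .gitlab-ci.yml sketch of options two and three (stage names, job names and scripts are placeholders):

stages: [test, recover]

test:
  stage: test
  script:
    - ./flaky-check.sh || true   # forced clean exit code: the job keeps going
    - ./must-succeed.sh

recover:
  stage: recover
  script:
    - ./cleanup.sh
  when: on_failure               # runs only if a job in an earlier stage failed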

Threshold for allowed amount of failed Hyperdrive runs

Because "reasons", we know that when we use azureml-sdk's HyperDriveStep we expect a number of HyperDrive runs to fail -- normally around 20%. How can we handle this without failing the entire HyperDriveStep (and then all downstream steps)? Below is an example of the pipeline.
I thought there would be an HyperDriveRunConfig param to allow for this, but it doesn't seem to exist. Perhaps this is controlled on the Pipeline itself with the continue_on_step_failure param?
The workaround we're considering is to catch the failed run within our train.py script and manually log the primary_metric as zero.
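That workaround could look roughly like this inside train.py (a hedged sketch; the metric name and the training entry point are assumptions):

# log a zero metric instead of letting the child run fail
from azureml.core import Run

run = Run.get_context()
try:
    metric = train_and_evaluate()      # assumed training entry point
    run.log("primary_metric", metric)
except Exception:
    run.log("primary_metric", 0.0)     # a failed run scores zero and HyperDrive keeps going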
Thanks for your question.
I'm assuming that HyperDriveStep is one of the steps in your Pipeline and that you want the remaining Pipeline steps to continue when the HyperDriveStep fails; is that correct?
Enabling continue_on_step_failure should allow the rest of the pipeline steps to continue when any single step fails.
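For instance, a sketch of submitting the pipeline with that flag (the experiment name is a placeholder and the steps are assumed to be built elsewhere):

from azureml.core import Experiment, Workspace
from azureml.pipeline.core import Pipeline

ws = Workspace.from_config()
# hyperdrive_step and downstream_step are assumed to be defined earlier
pipeline = Pipeline(workspace=ws, steps=[hyperdrive_step, downstream_step])

experiment = Experiment(ws, "hyperdrive-pipeline")
run = experiment.submit(pipeline, continue_on_step_failure=True)
run.wait_for_completion()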
Additionally, the HyperDrive run consists of multiple child runs, controlled by the HyperDriveConfig. If the first 3 child runs explored by HyperDrive fail (e.g. with user script errors), the system automatically cancels the entire HyperDrive run to avoid wasting further resources.
Are you looking to continue other Pipeline steps when the HyperDriveStep fails? Or are you looking to continue other child runs within the HyperDrive run when the first 3 child runs fail?
Thanks!

Camunda Engine behaviour with massive multi-instances processes and ready state

I wonder how Camunda manages multiple instances of a sub-process.
For example, take a BPMN model where a multi-instance sub-process iterates over a big collection, say 500 instances.
I have a function in a web app that calls the endpoint to complete the common user task, then performs another call to the Camunda engine to get all tasks (in the first API call's callback). I am supposed to get a list of 500 sub-process user tasks (the ones generated by the multi-instance process).
What if the get-tasks call is performed before the Camunda engine has finished instantiating all the sub-processes?
Do I get a partial list of tasks?
How can I detect that the main and sub-processes are ready?
I don't really know whether Camunda can manage this by itself, so I thought of the following solution, knowing I can only add code in the Modeler environment with Groovy (JavaScript as well, but the code parts already added are Groovy):
use a sub-process throw event caught in the main process, then for each signal emitted count the ready tasks and compare with the expected number of tasks.
Thanks
I would likely spawn the tasks as parallel processes (500 of them) and then go to a next step in which I signal, or otherwise set a state, that indicates the spawning is completed. I would then join the parallel processes all together again and have there a task signaling, or otherwise setting a state, that indicates all the parallel processes are done. See https://docs.camunda.org/manual/7.12/reference/bpmn20/gateways/parallel-gateway/. This way you know exactly at which point (after spawning is done and before the join) you have a chance of getting your 500 spawned sub-processes.
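As a rough illustration of the "know when spawning is done" idea, a Groovy sketch for a script task placed after the spawning step (the expected count of 500 comes from the question; the variable name is an assumption):

// count this instance's user tasks and record whether all 500 exist yet
def taskService = execution.getProcessEngineServices().getTaskService()
long ready = taskService.createTaskQuery()
        .processInstanceId(execution.getProcessInstanceId())
        .count()
execution.setVariable("allTasksReady", ready >= 500L)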

How to specify dependency between multiple given, when or then in cucumber-jvm

I have a feature file which has multiple Given, When and Then steps, for example:
# File: My.feature
Given doUserLogin
And changeUserPreference
When executeWhen1
And executeWhen2
Then executeThen1
And executeThen2
These are mapped to step definitions correctly. The problem I'm facing is that some are getting executed in parallel; for example in the Given part, 'changeUserPreference' is happening before 'doUserLogin'. Similarly, in the Then part, 'executeThen2' is triggered before 'executeThen1' has fully completed.
How do I specify the dependency between these statements? Is there any way I can say: don't start executing the second statement (Given, When or Then) until the first one has executed completely?
If your 'doUserLogin' step exits before the download completes, that would explain why 'changeUserPreference' is starting up. This could happen, say, if you connect to an external system and initiate a download, and the API you are using performs the download in another thread; the main thread would then continue on to the next step while the download continues in the other thread.
My advice would be to execute this scenario in debug mode (assuming you are using an IDE that supports this) and see whether your 'doUserLogin' step finishes before the file download does.
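A hedged sketch of the usual fix: make the step definition itself block until the asynchronous work finishes (the async body is a stand-in for the real login and download):

// Java step definition that waits for the async work before returning
import io.cucumber.java.en.Given;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class LoginSteps {

    @Given("doUserLogin")
    public void doUserLogin() throws Exception {
        // runAsync stands in for whatever API starts the login/download thread
        CompletableFuture<Void> login = CompletableFuture.runAsync(() -> {
            // ... real login and file download would go here ...
        });
        // block so Cucumber only starts 'changeUserPreference' once this completes
        login.get(60, TimeUnit.SECONDS);
    }
}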

Can I call a Gradle task from a Groovy method in a .gradle file?

I have a non-conventional build script in Gradle that does cyclic compiling of projects.
It's going to take weeks to change it to a standard Gradle build, so that's not going to happen now.
The issue is that I want to stop using Ant in my script and move to Groovy + Gradle only.
The question is how to replace tasks like copy, replaceregexp or unzip.
I assume the standard Gradle plugin and the Java plugin have most of what I need, but they are all tasks. Now, if in my main task I have a method that needs a copy operation, how do I go about it?
Is there a way to call the task's code? Is there a way to call the task itself from a Groovy script?
First of all, you should never call a task from another task - bad things will happen if you do. Instead, you should declare a relationship between the two tasks (dependsOn, mustRunAfter, finalizedBy).
In some cases (fewer than people tend to believe), chaining tasks may not be flexible enough; hence, for some tasks (e.g. Copy) an equivalent method (e.g. project.copy) is provided. However, these methods should be used with care. In many cases, tasks are the better choice, as they are the basic building blocks of Gradle and offer many advantages (e.g. automatic up-to-date checks).
Occasionally it also makes sense to use a GradleBuild task, which allows one build to be executed as part of another.
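A minimal build.gradle sketch of both recommendations (task names and paths are placeholders):

// wire tasks through relationships instead of calling one task from another
task prepare {
    doLast { println 'preparing' }
}

task mainBuild {
    dependsOn prepare                  // declared relationship, not a direct call
    doLast {
        // project.copy is the method equivalent of the Copy task
        copy {
            from 'src/resources'       // placeholder paths
            into "$buildDir/staging"
        }
        // an unzip via the same method, using zipTree
        copy {
            from zipTree('archive.zip')
            into "$buildDir/unpacked"
        }
    }
}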
