How to accomplish a scheduled custom CruiseControl.NET trigger with a retry mechanism?

I'd like to create the following build strategy with CruiseControl.NET:
Check for deliveries at a fixed schedule (e.g. 7:00h, 12:00h, 16:00h, 20:00h).
The check for delivery consists of two conditions that must be met before starting an integration build:
Changes are detected in the code archive
Custom condition "A"
I have created a custom trigger plugin which checks for condition A and can be expanded with an inner trigger, in this case a multiTrigger of scheduledTriggers, which seems to work fine.
Now consider the scenario in which there are archive changes detected during the 7:00h check, but custom condition A has not been met (yet). If condition A is met just after the check it would mean that the changes would not be picked up until the 12:00h check, which is obviously not desired.
Is there any way to implement a kind of retry mechanism within the current CCNet config, so that if changes are detected but condition A is not met yet, CCNet will keep trying until the condition is met and then start the integration build after all?

Why don't you run your check for condition A more often than the delivery checks, with a configuration like this:
multiTrigger (operator AND)
  scheduleTrigger on the code archive: 7:00, 12:00, 16:00, 20:00
  your custom trigger (checked, say, every 5 minutes)
If the schedule trigger detects modifications at 7:00 but condition A is not met, the state of the schedule trigger is set to "fired" but the build does not start. If condition A is then met at 7:25, your trigger will detect it and the build will begin.
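Roughly, that setup could look like the sketch below in ccnet.config. The <conditionATrigger> element and its seconds attribute are placeholders for your custom trigger plugin (use whatever name and settings the plugin actually exposes), and the four schedule triggers are grouped in an inner Or multiTrigger so that any one of the check times can satisfy the schedule side of the And:

<triggers>
  <multiTrigger operator="And">
    <triggers>
      <!-- schedule side: fires once one of the check times has passed
           and modifications were found in the code archive -->
      <multiTrigger operator="Or">
        <triggers>
          <scheduleTrigger name="Check07" time="07:00" buildCondition="IfModificationExists" />
          <scheduleTrigger name="Check12" time="12:00" buildCondition="IfModificationExists" />
          <scheduleTrigger name="Check16" time="16:00" buildCondition="IfModificationExists" />
          <scheduleTrigger name="Check20" time="20:00" buildCondition="IfModificationExists" />
        </triggers>
      </multiTrigger>
      <!-- condition A side: placeholder element for the custom trigger plugin,
           polled every 5 minutes -->
      <conditionATrigger name="ConditionA" seconds="300" />
    </triggers>
  </multiTrigger>
</triggers>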

Related

How to exclude a GitLab CI job from a pipeline based on a value in a project file?

I need to exclude a job from the pipeline when my project version is a pre-release.
How do I know it's a pre-release?
I set the following in the version info file, that all project files and tools use:
version = "1.2.3-pre"
From the CI script, I parse the file, extract the version value, know whether it's a pre-release or not, and can set the result in an environment variable.
The only way I know to exclude a job from a pipeline is to use rules, but I also know from the GitLab docs that:
rules are evaluated before any jobs run
before_script is also said to run together with the script, i.e. after the rules have been applied.
I can stop the job from the script itself, based on the version value, but only after it has already started. What I need is to remove the job from the pipeline in the first place, so it's not displayed in the pipeline history. Any ideas?
Thanks
How do you run (start) your pipeline, and is the information whether "it's a pre-release" already known at this point?
If yes, then you could add a flag like IS_PRERELEASE as a variable to the pipeline, and use that in the rules: section of your job. The drawback is that this will not work with automatic pipelines (triggered by a commit or MR); but you can use this approach with manually triggered pipelines (https://docs.gitlab.com/ee/ci/variables/#override-a-variable-when-running-a-pipeline-manually) or via the API (https://docs.gitlab.com/ee/ci/triggers/#pass-cicd-variables-in-the-api-call).
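A minimal sketch of that approach, assuming IS_PRERELEASE is passed as "true" when the pipeline is started manually or via the API (the job name and script are placeholders):

publish:
  script:
    - ./publish.sh   # placeholder for the real release steps
  rules:
    # drop this job from the pipeline entirely when IS_PRERELEASE=true was passed
    - if: '$IS_PRERELEASE == "true"'
      when: never
    # otherwise run as usual
    - when: on_success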

How to read labels in Gitlab CI script

I have a few use cases in my Gitlab setup I would like to be able to support:
If a certain label (let's call it “skip_build”) is set, the deployment steps should not be run when I merge an MR to a main branch. This would be useful when we have multiple MRs being merged one right after another and only need the last one built.
If another label (we'll call it “skip_tests”) is set, I should be able to read it as an env var from within the script and alter the flow within the script accordingly (using normal bash syntax), e.g. to alter the package command parameters used a bit. This is useful for small changes where it might not make sense to run a lengthy test suite.
Is this possible with Gitlab, and if so, how?
I’ve tried experimenting with CI_MERGE_REQUEST_LABELS, but I don't seem to be able to read it as an env var from within the script.
You have to use merge request pipelines for the CI_MERGE_REQUEST_LABELS variable (and other MR-related variables) to be present as documented in predefined variables.
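One common way to enable them, loosely following the pattern from the GitLab docs, is a workflow:rules: block like the sketch below (branches with open MRs may then get duplicate pipelines unless you add an extra exclusion rule):

workflow:
  rules:
    # run a merge request pipeline when an MR is created or updated,
    # which makes CI_MERGE_REQUEST_LABELS and the other MR variables available
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
    # keep running normal branch pipelines for pushes
    - if: $CI_COMMIT_BRANCH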
You could use a rules: clause to skip jobs. Something like:
build:
  rules: # only run this job if the regex pattern does not match
    - if: $CI_MERGE_REQUEST_LABELS !~ /skip_build/
You can also do this on any other kind of predefined (or user-defined) variable, like branch name, commit messages, MR titles, etc. Whatever works for you.
For example, a built in feature of GitLab is that if your commit message contains [ci skip] it will prevent the pipeline from running. You could implement similar functionality for your jobs and/or pipelines through rules: or workflow:rules:.
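For the second use case (skip_tests), the same variable can be read with plain shell inside the job's script once the job runs in a merge request pipeline. A sketch, assuming a bash-based runner and with package.sh and its flag as placeholders:

package:
  script:
    - |
      # CI_MERGE_REQUEST_LABELS is a comma-separated list of the MR's labels
      if [[ "$CI_MERGE_REQUEST_LABELS" == *"skip_tests"* ]]; then
        echo "skip_tests label found - packaging without running the test suite"
        ./package.sh --skip-tests
      else
        ./package.sh
      fi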

Check if directory was loaded from cache in Gitlab CI

I want to set the value of a variable based on whether loading from a cache has succeeded.
I plan to set the variable using an if statement, the same way they do in this example: https://docs.gitlab.com/ee/ci/yaml/#workflowrulesvariables
(The link goes to the wrong part of the page: search for Example of workflow:rules:variables )
If my YAML looks like this:
cache:
  key: $CI_COMMIT_REF_SLUG
  paths:
    - pathtocache
How can I check if pathtocache exists?
This isn't possible due to the lifecycle of when rules are evaluated, which is before the cache/artifacts are restored. Keep in mind the rules can be used to define whether or not the job runs, so they are evaluated at pipeline start before any jobs have been run and thus before you would have generated any cache files.
If you want to test whether the cache has been populated and branch your job's logic based on that, you will have to do so within the script block of your job.
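A sketch of that, with the build command as a placeholder; the check simply tests whether the restored pathtocache directory exists and is non-empty:

build:
  cache:
    key: $CI_COMMIT_REF_SLUG
    paths:
      - pathtocache
  script:
    - |
      if [ -d pathtocache ] && [ -n "$(ls -A pathtocache)" ]; then
        echo "pathtocache was restored from the cache"
        CACHE_RESTORED=true
      else
        echo "pathtocache is missing or empty"
        CACHE_RESTORED=false
      fi
      ./build.sh "$CACHE_RESTORED"   # placeholder build step that can branch on the flag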

Airflow Branch Operator and S3KeySensor do not respect trigger_rule='none_failed'

I have 3 S3KeySensors on different files in different folders. 2 of them have to be successful and the third one can be skipped. I am trying to trigger the downstream task with trigger_rule='none_failed', but the S3KeySensor does not seem to respect that. This is what my DAG looks like.
This is how it behaves.
This is how I want it to behave:
You have to set trigger_rule="none_failed_or_skipped" on the test_step task, as explained in this documentation.
From the documentation:
none_failed_or_skipped: all parents have not failed (failed or upstream_failed) and at least one parent has succeeded.
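A minimal sketch of that setup (Airflow 2.x-style imports; the DAG id, task ids, bucket keys and the soft_fail timeout on the optional sensor are illustrative assumptions, so adjust them to your environment and Airflow version):

from datetime import datetime

from airflow import DAG
from airflow.operators.empty import EmptyOperator
from airflow.providers.amazon.aws.sensors.s3 import S3KeySensor

with DAG("s3_sensor_example", start_date=datetime(2023, 1, 1), schedule=None) as dag:
    sensor_a = S3KeySensor(task_id="wait_for_a", bucket_key="s3://my-bucket/folder_a/file_a")
    sensor_b = S3KeySensor(task_id="wait_for_b", bucket_key="s3://my-bucket/folder_b/file_b")
    # the optional file: soft_fail marks this sensor as skipped instead of failed on timeout
    sensor_c = S3KeySensor(task_id="wait_for_c", bucket_key="s3://my-bucket/folder_c/file_c",
                           soft_fail=True, timeout=600)

    test_step = EmptyOperator(
        task_id="test_step",
        # run when no parent failed and at least one parent succeeded
        # (renamed to none_failed_min_one_success in newer Airflow releases)
        trigger_rule="none_failed_or_skipped",
    )

    [sensor_a, sensor_b, sensor_c] >> test_step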

Filter recent files in Logic Apps' SFTP "when files are added or modified" trigger

I have this Logic App that connects to an SFTP server and it's triggered by the "files are added or modified" trigger. It's set to run every 10 minutes, looking for new/modified files and copying them to an Azure storage account.
The problem is that this SFTP server path is set to overwrite a set of files every X minutes (I have no control over this) and so, pretty often the Logic App overlaps with the update process of these files and downloads files that are still being written. The result is corrupted files.
Is there a way to add a filter to the "When files are added or modified (properties only)" trigger so that it only takes into consideration files whose modified date is at least 1 minute old?
That way, files that are currently being written won't be added to the list of files to download. The next run of the Logic App would then fetch these ignored files, and so on.
UPDATE
I've found a Trigger Conditions box in the trigger's settings, but I can't find any documentation about it.
Based on my tests of the "When files are added or modified" trigger, it seems we cannot add a filter to the trigger itself to only pick up records that were modified at least 1 minute ago. We can only get the LastModified datetime from the List of Files, loop over them, and use an "If" condition to decide whether each file should be downloaded.
Update:
The expression in the screenshot is:
sub(ticks(utcNow()), ticks(triggerBody()?['LastModified']))
Update workaround
Is it possible to add a "Delay" action when the last modified time is less than 1 minute ago? For example, if the last modified time is less than 60 seconds ago, use "Delay" to wait 5 minutes until the overwrite operation completes, then do the download.
I checked the sample @equals(triggers().code, 'InternalServerError'); it actually uses the condition functions from the logical comparison functions, so the key point is to make sure the property you want to filter on is available in the trigger or triggerBody, or you will get the error below.
So I changed the expression to something like @greater(triggerBody().LastModified, '2020-04-20T11:23:00Z'); this keeps files modified before 2020-04-20T11:23:00Z from triggering the flow.
You could also use other functions such as less, greaterOrEquals, etc. from the logical comparison functions.
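To make the cut-off relative instead of a fixed timestamp, a trigger condition along these lines might work (an untested sketch built from the same ticks/utcNow functions used above plus addMinutes); it lets the trigger fire only for files whose LastModified is more than one minute in the past:

@less(ticks(triggerBody()?['LastModified']), ticks(addMinutes(utcNow(), -1)))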
