I have an upstream and a downstream pipeline. Unfortunately, I cannot easily re-trigger a downstream pipeline run via the upstream pipeline:
The upstream pipeline is responsible for creating the build, and then triggers the downstream pipeline for deployment or undeployment activities. Upstream sets a variable called ACTION in order to inform the downstream pipeline which stage is expected to run.
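A minimal sketch of that setup (the project path, job names, and scripts here are illustrative, not taken from the actual configuration):

```yaml
# upstream .gitlab-ci.yml
build:
  stage: build
  script:
    - ./build.sh

trigger-downstream:
  stage: deploy
  variables:
    ACTION: deploy # or "undeploy"
  trigger:
    project: my-group/downstream # hypothetical project path
```

The downstream pipeline then uses a rule like `if: $ACTION == "deploy"` (and the equivalent for undeploy) to decide which jobs run.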
My goal is to be able to re-run deployments and undeployments as often as I want.
How can I re-run the downstream pipeline with either the "deploy" or the "undeploy" parameter set, triggered by the upstream pipeline?
You cannot "retry" bridge jobs from the upstream pipeline, unfortunately. This is a known limitation in GitLab.
The only way to work around this is to manually retry the pipeline/jobs from the downstream pipeline's interface. However, this only covers cases like the failure of a downstream job. Retrying can't change the pipeline structure itself (such as which jobs are included) because rules: are only evaluated at pipeline creation time. Even if you change variable settings and retry a pipeline, you won't see any difference, because the rules: are not re-evaluated.
My advice would be to not use this - if: $ACTION rule. Instead, always include both jobs in the pipeline and set the undeploy job as the on_stop parameter for the deployment job.
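A minimal sketch of that pattern for the downstream pipeline (job names, scripts, and the environment name are illustrative):

```yaml
deploy:
  stage: deploy
  script:
    - ./deploy.sh
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    on_stop: undeploy

undeploy:
  stage: deploy
  script:
    - ./undeploy.sh
  when: manual
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    action: stop
```

Both jobs are always part of the pipeline; the undeploy job only runs when triggered manually or when the environment is stopped, so no re-evaluation of rules: is needed.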
Related
Is there a way of retriggering a successful child pipeline in GitLab? I don't see any retry button on the trigger job, which we otherwise see on a normal pipeline job. Is there a way/workaround to get such an option?
I went through the GitLab docs; they only talk about retrying failed jobs in a child pipeline.
That is currently (Q4 2022) not supported yet.
(And retry: is indeed only for failed jobs.)
It is requested by issue 29456:
Ability to rerun a successful pipeline via "Retry" button
Not only failed pipelines sometimes need a rerun but also successful ones:
If your tests are unreliable and you are sceptical that the test success is repeatable
If your jobs depend on outside factors
If your job depends on some predefined CI variable which can change without a code change
So in general, a pipeline should show the retry button even in case of success. Then, all jobs should be run again.
The currently suggested workaround of CI / CD -> Pipelines -> Run Pipeline does not always work, especially not for merge request pipelines.
In my case, I have all jobs defined with only: merge_requests, and "Run Pipeline" responds with the error "No stages / jobs for this pipeline".
Is there a way to test pipeline changes on a stage that has finished running? It would be super useful when a stage has failed and instead of running the entire pipeline to test your changes, you could trigger a rerun of the stage with the updated yml file.
By definition, re-running a stage means running it the same way as it was run initially. When you think about it, this is exactly the desired behaviour.
For pipeline development purposes, I would recommend cloning the pipeline and removing all unnecessary steps. Then work in this minimal/debugging pipeline for faster feedback.
I have a few independent scheduled CI jobs.
Check that the infrastructure matches Terraform code.
Check for npm vulnerabilities.
Check that external APIs pass tests.
These are not hermetic. They do not test the fitness of the code in isolation. They could succeed at commit 12345, then fail at the same commit 12345, then succeed again.
I run these daily.
GitLab lacks the ability to have multiple pipeline types (unlike, say, Buildkite), so I use a variable to control which steps run.
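For illustration, the variable-based selection might look like this (the job names, scripts, and the CHECK_TYPE variable are hypothetical):

```yaml
terraform-drift-check:
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule" && $CHECK_TYPE == "terraform"'
  script:
    - terraform plan -detailed-exitcode

npm-audit:
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule" && $CHECK_TYPE == "npm"'
  script:
    - npm audit
```

Each pipeline schedule then sets CHECK_TYPE to select which check runs.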
However, I am left with the problem that these checks interfere with the main passed/failed status of commits.
For example, if the Terraform infra check fails, the commit is marked as broken, people are notified, and whoever pushes next is told they fixed it.
These kinds of checks can't be that uncommon, right? How should these be managed?
It sounds like you're looking for the allow_failure keyword.
https://docs.gitlab.com/ce/ci/yaml/#allow_failure
When allow_failure is set to true and the job fails, the job shows an orange warning in the UI. However, the logical flow of the pipeline considers the job a success, and the pipeline is not blocked.
```yaml
job1:
  stage: test
  script:
    - execute_script_that_will_fail
  allow_failure: true
```
I have an Azure Pipeline A that executes a deployment to my Salesforce org in the event of a PR merge.
My problem statement is:
I am not able to restrict the execution of this pipeline such that it executes only after the previous execution of the same pipeline has completed.
In other words, if this pipeline is triggered by multiple PRs, then I would want only one instance of the pipeline to run. The next one should wait until the previous run has completed.
Is there a way to achieve this?
You can enable "Batch changes while a build is in progress" option to execute one pipeline at a time.
If your question was on Release Pipeline, you can achieve this through specifying number of executions in the "Deployment queue settings" under Pre-Deployment conditions for the particular stage.
If you are using YAML you should be able to use the following trigger:
```yaml
trigger:
  batch: boolean # batch changes if true; start a new build for every push if false (default)
```
https://learn.microsoft.com/en-us/azure/devops/pipelines/yaml-schema?view=azure-devops&tabs=schema%2Cparameter-schema#triggers
I have set up a PR Pipeline in Azure. As part of this pipeline I run a number of regression tests. These run against a regression test database - we have to clear out the database at the start of the tests so we are certain what data is in there and what should come out of it.
This is all working fine until the pipeline runs multiple times in parallel - then the regression database is being written to multiple times and the data returned from it is not what is expected.
How can I stop a pipeline running in parallel - I've tried Google but can't find exactly what I'm looking for.
If the pipeline is running, then the next build should wait (not for all pipelines - I want to set it on a single pipeline). Is this possible?
Depending on your exact use case, you may be able to control this with the right trigger configuration.
In my case, I had a pipeline scheduled to kick off every time a Pull Request is merged to the main branch in Azure. The pipeline deployed the code to a server and kicked off a suite of tests. Sometimes, when two merges occurred just minutes apart, the builds would fail because a shared resource that required synchronisation was in use by both.
I fixed it by Batching CI Runs
I changed my basic config
```yaml
trigger:
- main
```
to use the more verbose syntax allowing me to turn batching on
```yaml
trigger:
  batch: true
  branches:
    include:
    - main
```
With this in place, a new build will only be triggered for main once the previous one has finished, no matter how many commits are added to the branch in the meantime.
That way, I avoid having too many builds being kicked off and I can still use multiple agents where needed.
One way to solve this is to model your test regression database as an "environment" in your pipeline, then use the "Exclusive Lock" check to prevent concurrent "deployment" to that "environment".
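A sketch of the YAML side of this approach (the environment name, job name, and script are illustrative; the Exclusive Lock check itself must still be configured in the UI on that environment):

```yaml
jobs:
- deployment: RegressionTests
  environment: regression-test-db # fake environment guarded by an Exclusive Lock check
  strategy:
    runOnce:
      deploy:
        steps:
        - script: ./run-regression-tests.sh # hypothetical test script
```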
Unfortunately this approach comes with several disadvantages inherent to "environments" in YAML pipelines:
you must set up the check manually in the UI, it's not controlled in source code.
it will only prevent that particular deployment job from running concurrently, not an entire pipeline.
the fake "environment" you create will appear alongside all other environments, cluttering the environment view if you happen to use environments for "real" deployments. This is made worse by that view being one big sack of all environments, with no grouping or hierarchy.
Overall the initial YAML reimplementation of Azure Pipelines mostly ignored the concepts of releases, deployments, environments. A few piecemeal and low-effort aspects have subsequently been patched in, but without any real overarching design or apparent plan to get to parity with the old release pipelines.
You can use "Trigger Azure DevOps Pipeline" extension by Maik van der Gaag.
You need to add it to your DevOps organization, configure it at the end of the main pipeline, and point it at your test pipeline.
You can find more details on Maik's blog.
According to your description, you could use your own self-hosted agent.
Simply deploy your own self-hosted agent.
Just make sure the self-hosted agent's environment is the same as your local development environment.
In this situation, since your agent pool has only one available build agent, only one build will run at a time when multiple builds are triggered. The others will stay in the queue in order, and the next build will not start until the prior build has finished.
For your other pipelines, just keep using the hosted agent pool.
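A sketch, assuming you register the single self-hosted agent into a dedicated pool (the pool name here is hypothetical):

```yaml
# Only the pipeline that must be serialized uses this single-agent pool
pool:
  name: SingleAgentPool
```

Other pipelines simply keep a hosted pool, e.g. `pool: { vmImage: ubuntu-latest }`, and remain unaffected.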