I have a quite simple question. We have set up GitLab CI and want to allow automatic merges as soon as the build succeeds for some of our branches.
Thing is, we see that the build passes, but the merge does not actually happen and the status stays at "to be merged automatically once the build succeeds".
Do you have any idea why?
I attached a screenshot of the bogus behaviour.
EDIT: Some additional information that was requested:
It looks like no background job is put in the queue when I activate "automatic merge when build succeeds".
When a build finishes running, no background job is triggered as far as I can see. Nothing new is being scheduled, dead, or in progress.
I also don't see anything crazy or ERROR-like in the logs.
Here are two screenshots of my dashboard while an MR with an automated build runs:
Thanks,
Julien
Maybe your merge job is scheduled or dead.
Use your admin account and click the Admin Area button.
Click Monitoring -> Background Jobs
Then, click the dashboard and check if your merge job is in the Scheduled queue or the Dead queue.
Related
I have a Gitlab CI pipeline schedule and noticed that pipelines are not running (anymore).
When starting the schedule manually via the UI (<repo-root>/-/pipeline_schedules), it shows a success message.
However, no pipeline is started and no error message is provided.
What can I do in this situation?
The success message is misleading in the sense that one might think the pipeline was actually created, although it only means that a pipeline was scheduled.
There are various reasons why a schedule cannot run (anymore).
This can be, for example, because of conflicting rules or outdated fields in your yml caused by breaking changes introduced by GitLab upgrades.
To get to the root of why your pipeline did not run, you can trigger a pipeline manually and set the variable CI_PIPELINE_SOURCE to hold "schedule" as its value.
To do so, go to <repo-root>/-/pipelines/new, set your target branch or tag and the variable as follows
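    Key:   CI_PIPELINE_SOURCE
    Value: schedule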
Additionally, you may want to provide further variables required to properly simulate your problematic schedule via the manual run.
Next, hit Run pipeline and you should see an actual error message explaining why the pipeline could not run.
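For example, a schedule gets silently filtered out when every job's rules exclude the "schedule" source. A minimal .gitlab-ci.yml sketch (job names and rules are made up for illustration):

    # This job only runs for merge requests, so a scheduled pipeline
    # has no jobs to create and the schedule appears to do nothing:
    build:
      script: echo "building"
      rules:
        - if: $CI_PIPELINE_SOURCE == "merge_request_event"

    # A job whose rules match the "schedule" source makes the schedule work again:
    nightly:
      script: echo "nightly task"
      rules:
        - if: $CI_PIPELINE_SOURCE == "schedule"

Running the pipeline manually with CI_PIPELINE_SOURCE set to "schedule" makes this kind of mismatch visible as an actual error instead of the misleading success message.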
I wonder if there is a way to close an issue automatically at a certain time, like every Friday at 18:00, if that issue has a certain label or something like that.
GitLab does not include such a feature.
They use their own bot to triage issues and merge requests.
This isn't a feature of GitLab itself. However, you could run a scheduled pipeline that uses the issues API to do this.
To make sure the scheduled pipeline has the properly scoped API access, you can generate a project access token and place it in the CI/CD variables.
The scheduled pipeline does not even necessarily have to be configured in the same project in which you want issues to be expired. If you're concerned about it triggering existing pipeline jobs, you can, for example, create a new project called "issue cleanup" and set up the pipeline there to clean up issues of one or more other projects on the schedule.
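A minimal sketch of such a cleanup job, assuming a project access token stored in a CI/CD variable named PROJECT_TOKEN and a label named auto-close (both names are made up; $CI_API_V4_URL and $CI_PROJECT_ID are predefined GitLab CI variables):

    close-labeled-issues:
      image: alpine:latest
      rules:
        - if: $CI_PIPELINE_SOURCE == "schedule"
      script:
        - apk add --no-cache curl jq
        - |
          # List open issues carrying the label, then close each one via the issues API.
          # (Only handles the first page of results; good enough for a sketch.)
          for iid in $(curl --silent --header "PRIVATE-TOKEN: $PROJECT_TOKEN" \
                "$CI_API_V4_URL/projects/$CI_PROJECT_ID/issues?labels=auto-close&state=opened" \
              | jq -r '.[].iid'); do
            curl --silent --request PUT --header "PRIVATE-TOKEN: $PROJECT_TOKEN" \
              "$CI_API_V4_URL/projects/$CI_PROJECT_ID/issues/$iid?state_event=close"
          done

The schedule itself (e.g. every Friday at 18:00, cron 0 18 * * 5) is then configured on the project's pipeline schedules page.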
To give you a bit of context: we have a service which has been split into two different services, i.e. one for the read-side and one for the write-side operations. The read side is called ProductStore and the write side is called ProductCatalog. The issue we're facing is down to the write side, as the load tests create 100 products in the write-side resource web app, which are then transferred to the read side for the load test to read x number of times. If a build is launched in ProductCatalog because something new was merged to master, this will cause issues in the ProductStore pipeline if it gets run concurrently.
The question I want to ask: is there a way, in the ProductStore YAML file, to directly query via a specified Azure task or an AzurePowerShell script whether a build is currently running in the ProductCatalog pipeline?
The second part of this would be to loop/wait until that pipeline has successfully finished before resuming the ProductStore pipeline.
Hope this is clear; I'm not sure how best to ask this question as I'm very new to the DevOps pipelines flow, but it would massively help if there was a good way of checking this sort of thing.
As a workaround, you can set up a pipeline completion trigger in the ProductStore pipeline.
To trigger a pipeline upon the completion of another, specify the triggering pipeline as a pipeline resource.
Or configure build completion triggers in the UI: choose Triggers from the settings menu and navigate to the YAML pane.
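In the ProductStore YAML, that could look like the sketch below, assuming the triggering pipeline definition is named ProductCatalog (names and the branch filter are illustrative):

    # azure-pipelines.yml of ProductStore
    resources:
      pipelines:
        - pipeline: productCatalog   # alias for this resource
          source: ProductCatalog     # name of the pipeline definition that triggers this one
          trigger:
            branches:
              include:
                - master

    steps:
      - script: echo "ProductCatalog has completed, safe to run the load tests"

Note that this starts ProductStore after a ProductCatalog run completes rather than making an in-flight ProductStore run wait. If you want the polling behaviour described in the question, a script step can query the Builds REST API with the job access token; a sketch, where the definition id 42 is made up:

    steps:
      - bash: |
          # Poll until no ProductCatalog build (definition id 42) is in progress.
          while true; do
            count=$(curl --silent \
              -H "Authorization: Bearer $(System.AccessToken)" \
              "$(System.CollectionUri)$(System.TeamProject)/_apis/build/builds?definitions=42&statusFilter=inProgress&api-version=6.0" \
              | jq '.count')
            [ "$count" = "0" ] && break
            echo "ProductCatalog build still running, waiting 30s..."
            sleep 30
          done
        displayName: Wait for ProductCatalog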
I've been using Azure Pipelines for a while now and haven't changed my azure-pipelines.yml file here in 2 months. Previously, when there was a new PR, the pipeline would trigger and cause the environment to be built and the tests would be run.
Today, there was a new PR but I noticed that the pipeline was not being triggered. Then, to further test this, I forked, cloned, and branched the repository myself and created another new PR and, again, the pipeline was not triggered.
It's not clear to me where things are getting stuck and it's not clear how one would debug this. I've gone through this Azure DevOps documentation but it wasn't useful. I can manually trigger the pipeline to execute and test the master branch but I don't know how to manually trigger the same thing for a PR. Here's my Azure DevOps page for reference.
Normally, you do not need to configure pr in the YAML script if there's no special demand; we run the pull request trigger for all branches by default. But it started breaking from 03-13 21:02 (UTC), which was caused by us; you did not do anything wrong.
We are preparing the fix as quickly as we can.
As Alex said, this is the implicit trigger, which only YAML pipelines support, and it applies when you do not configure pr in the YAML explicitly.
To avoid getting stuck like this in the future, besides the method Alex mentioned (adding pr into the YAML, as sketched below), you can also make use of the UI configuration, which has performed very stably until now.
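For reference, an explicit pr trigger in azure-pipelines.yml could look like this (the branch filters are just examples):

    # Explicit PR validation trigger instead of relying on the implicit one
    pr:
      branches:
        include:
          - master
          - releases/*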
For the UI configuration, go to the pipeline definition page => click the three dots in the right corner => select Triggers:
Then you will see the Triggers tab, which shows Continuous integration and Pull request validation below. Open Pull request validation and enable Override the YAML pull request trigger from here:
Additionally, our team has noticed this breakage and will post an update here once a fix release is in progress.
Update 3/18/2020:
The fix has been released to all regions. Everyone's GitHub pr trigger now works as the documentation describes.
I have a data factory that I would like to publish, however I want to delay one of the pipelines from running as it uses a shared resource that isn't quite ready.
If possible I would like to allow the previous pipelines to run and then enable the downstream pipeline when the resource is ready for it.
How can I disable a pipeline so that I can re-enable it at a later time?
Edit your trigger and make sure Activated is set to No. And of course, don't forget to publish your changes!
It's not really possible in ADF directly. However, I think you have a couple of options for dealing with this.
Option 1.
Chain the datasets in the activities to enforce a fake dependency, making the second activity wait. This is a bit clunky and requires provisioning fake datasets, but it could work.
Option 2.
Manage it at a higher level with something like PowerShell.
For example:
Use the following cmdlet to check the status of the first activity, waiting in some sort of polling loop (fill in your own resource group, data factory, and pipeline names):

    Get-AzureRmDataFactoryActivityWindow -ResourceGroupName $resourceGroup -DataFactoryName $dataFactory -PipelineName $pipelineName
Next, use the following cmdlet to pause the downstream pipeline as required (and Resume-AzureRmDataFactoryPipeline to unpause it):

    Suspend-AzureRmDataFactoryPipeline -ResourceGroupName $resourceGroup -DataFactoryName $dataFactory -Name $pipelineName
Hope this helps.
You mentioned publishing, so if you are publishing through Visual Studio, it is possible to disable a pipeline by setting its property "isPaused" to true in the .json pipeline configuration file.
(Screenshot: the "isPaused" property in the pipeline's .json configuration.)
You can disable a pipeline by clicking Monitor & Manage in the Data Factory you are using. Then click on the pipeline, and in the upper-left corner you have two options:
Pause: Will not terminate the currently running job, but will not start the next one
Terminate: Terminates all job instances (as well as not starting future ones)
(Screenshot: disabling the pipeline in the GUI.)
(TIP: Paused and terminated pipelines are colored orange; resumed pipelines are green.)
Use the PowerShell cmdlet to check the status of the activity:

    Get-AzureRmDataFactoryActivityWindow

Then use the PowerShell cmdlet to pause/unpause a pipeline as required:

    Suspend-AzureRmDataFactoryPipeline
Right click on the pipeline in the "Monitor and Manage" application and select "Pause Pipeline".
In case you're using ADF V2 and your pipeline is scheduled to run using a trigger, check which trigger your pipeline uses. Then go to the Manage tab and click on Author->Triggers. There you will get an option to stop the trigger. Publish the changes once you've stopped the trigger.