I've tried to set up an Azure DevOps pipeline to trigger from Bitbucket, but it won't auto-trigger. Running it manually works fine. As far as I can tell, everything is set up correctly, and I'm not sure how to debug further.
My pipeline settings are not overriding the YAML trigger.
My pipeline YAML is set to trigger on multiple branches.
I'm an admin for the Bitbucket repo, and I can see a webhook is added when I create the pipeline.
The request history in Bitbucket shows that ADO returns a 200 response after a push.
But nothing is triggered in ADO.
There are a few oddities I've noticed in ADO; I'm not sure if they're related. When I edit the pipeline and look at Triggers -> YAML, it says "Some settings need attention", although none of the settings are actually flagged as needing attention.
Also, if I go to my DevOps Project Settings -> Service Connections, select the Bitbucket connection, click Edit, and try to Authorize, I get an error about a refresh token.
I don't see anywhere to debug further in ADO. Any ideas?
The triggers are incorrectly specified.
trigger:
- dev
test
master
should be
trigger:
- dev
- test
- master
Also, the pr section as written would trigger on a branch literally named none:
pr:
- none
The correct syntax to disable PR validation would be
pr: none
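For reference, the same triggers can be written in the expanded branch-filter syntax (equivalent to the list form above), with PR validation disabled:

trigger:
  branches:
    include:
    - dev
    - test
    - master

pr: none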
Related
Requirement - In my self-hosted GitLab instance, there are multiple projects, maintained by different users, which all use one particular tag of an image from my project's container registry. That tag is now outdated; I have created a new tag for the image, and I would like to notify all the users to switch to the new tag.
Is there any webhook available in GitLab that can be enabled for all pulls of image:tag, to send notifications (email, Slack) to the authors of the CI/CD pipelines?
Or could the pipelines maybe be configured to detect the image and tag being used and, if it is the one in question, send notifications?
P.S. - The GitLab instance is using the Docker container registry.
An approach that involves custom scripting. Less elegant than VonC's suggestion ;-)
… detect the image and tag being used and if it is the one in question then send notifications?
You could try tailing the logs while pulling the old tag manually.
Searching for the image and tag name in your log slice should help you determine how the usernames of the associated events can be parsed out, probably with jq.
A custom script could then be added to repeat that parsing regularly and, for example, send an email to the users who trigger those events.
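As a rough sketch, that script could run as a scheduled CI job on a shell runner on the GitLab host. Everything instance-specific here is an assumption: the log path, the JSON log format, the field names (.action, .user), and the mail setup; derive the real field names from your own log slice first, as suggested above.

notify-old-tag-pulls:
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"
  script:
    # assumed Omnibus registry log location; adjust for your install
    - LOG=/var/log/gitlab/registry/current
    # keep JSON lines mentioning the image, extract users of pull events
    # (field names are guesses; inspect a real log slice to confirm them)
    - grep 'my-group/my-image' "$LOG" | jq -r 'select(.action == "pull") | .user' | sort -u > users.txt
    # notify each distinct user (mail configuration is site-specific)
    - while read user; do echo "Please switch to the new image tag." | mail -s "Outdated image tag in use" "$user@example.com"; done < users.txt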
"Webhook" ("custom HTTP callbacks") means a local listener to GitLab events.
Considering you are managing your own GitLab instance, a better option would be to create a pipeline for external pull requests (available since GitLab 12.3, August 2019):
on-pull-requests:
  script: echo 'this should run on pull requests'
  only:
    - external_pull_requests
This pipeline can check whether a Dockerfile is being merged and, if so, whether that Dockerfile uses the wrong tag. If it does, the job can fail and thereby block the pull request.
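A minimal sketch of such a check (the registry path and tags are placeholders; it assumes the project is configured to require a passing pipeline before merging):

check-image-tag:
  only:
    - external_pull_requests
  script:
    # fail the job if any Dockerfile still references the outdated tag
    - |
      if grep -rn --include=Dockerfile 'registry.example.com/my-group/my-image:old-tag' .; then
        echo "Outdated image tag found; please switch to the new tag."
        exit 1
      fi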
I'm wondering why the job "tf-plan-production" in the to-be-continuous/terraform template is the only one that runs on a merge request pipeline.
Does anybody know the reason behind this?
I ask because I find it disturbing to have two pipelines: one detached pipeline containing only a single job, while the other contains all the remaining jobs (tf-plan-review, tf-tflint, tf-checkov, ...). I hesitate to override this rule, as I may be missing something important.
To be more precise, in this to-be-continuous template, none of the defined jobs ever run on a merge request pipeline, because of the rule:
# exclude merge requests
- if: $CI_MERGE_REQUEST_ID
  when: never
Except the "tf-plan-production" job which have the rule:
# enabled on merge requests
- if: $CI_MERGE_REQUEST_ID
terraform plan is a nondestructive operation: it compares what Terraform would create against what already exists, and outputs a diff between the existing state and the state that has been coded but not yet applied.
Typically it is run when a PR is created, so that a dry run is available and visible to the developers, while terraform apply is run on merge. If there isn't another environment where developers can test their changes, it is a necessary step.
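The underlying pattern is roughly the following (job names and Terraform commands are simplified illustrations, not copied verbatim from the template):

tf-plan-production:
  rules:
    # enabled on merge requests so reviewers can see the dry run
    - if: $CI_MERGE_REQUEST_ID
  script:
    - terraform init
    - terraform plan

tf-apply-production:
  rules:
    # exclude merge requests; apply only after the change is merged
    - if: $CI_MERGE_REQUEST_ID
      when: never
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
  script:
    - terraform init
    - terraform apply -auto-approve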
I am trying to trigger a pipeline using $CI_JOB_TOKEN, but it gives a 404 error every time. Could somebody have blocked CI_JOB_TOKEN from triggering a pipeline, e.g. at the access-level settings?
curl --request POST --form "token=$CI_JOB_TOKEN" --form ref=master https://gitlab.example.com/api/v4/projects/73237/trigger/pipeline
For me, using the CI_JOB_TOKEN also returned a 404 error for a private repository. When I instead executed the same command using a pipeline trigger token (Settings > CI/CD > Pipeline triggers), it worked as expected.
A similar problem is described in this issue: https://gitlab.com/gitlab-org/gitlab/-/issues/17511
Just for clarification: you have to generate the token in the other project and then set it as a custom CI variable, e.g. PIPELINE_TRIGGER_TOKEN, in the project where you want to use it. Then, in the curl request within .gitlab-ci.yml, replace CI_JOB_TOKEN with PIPELINE_TRIGGER_TOKEN.
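Put together, the triggering job in .gitlab-ci.yml might look like this (PIPELINE_TRIGGER_TOKEN is the custom variable described above; the host and project ID are taken from the question):

trigger-downstream:
  script:
    - curl --request POST --form "token=$PIPELINE_TRIGGER_TOKEN" --form ref=master "https://gitlab.example.com/api/v4/projects/73237/trigger/pipeline"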
Could you please make sure that ref=master is correct? master was recently changed to main in many projects; your API call might be hitting a non-existent branch, hence the 404.
Also check your GitLab version:
With GitLab 14.1 (July 2021), you have:
Default branch name redirect
As part of the larger effort to rename Git’s default initial branch in a GitLab project from master to main, we are adding an automatic redirect to streamline the transition. Previously, when projects renamed the default branch, current URLs would result in 404 Not Found. This is a frustrating experience when you are trying to navigate between branches. Now, if you navigate to a file or directory path whose default branch was renamed, you will be redirected to the updated path automatically.
See Documentation and Issue.
So your problem might not exist with 14.1.
I have several teams with several QA environments that need to be deployed to.
I am using the ServiceFabricDeploy@1 task to deploy, but I can't find a way to change the Service Connection during the deployment.
The input has to be valid at compile time, so I can't use a variable macro (i.e. $(connectionName) is blank during compile).
The input has to change based on the value passed in from the UI, so if I use template expressions (i.e. ${{ variables.connectionName }}), they pass compilation but turn blank at runtime.
How can I pass in the Service Connection name to a YAML pipeline?
Turns out I was not using variables correctly: I was trying to pass them in through parameters, when what I needed to do was use the variable macro directly.
This works:
- task: ServiceFabricDeploy@1
  inputs:
    applicationPackagePath: '$(Build.ArtifactStagingDirectory)\drop\pkg'
    serviceConnectionName: $(connectionName)
    # ... etc ...
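For this to work, connectionName must be defined before the run; for example as a pipeline variable (the value below is a placeholder; in practice it is typically defined in the pipeline UI so each team or environment can supply its own):

variables:
  connectionName: 'QA-ServiceFabric'  # placeholder; set per team/environment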
As per Microsoft's documentation as of 29th June 2020:
Service connection cannot be specified by variable
To expand upon Nitin's response: this used to work in our Classic UI-defined release pipelines. We pass $(AzureSubscription) to a number of Azure App Service Deploy tasks, and the service connection is appropriately selected.
A few days back I had to change a single deployment task to modify an artifact path, and only that task started failing until I took the variable out and specified the service connection manually.
I suspect that existing pipelines may continue to work until they're modified. I'll be doing some experimentation to validate this.
To enforce some policies before a merge request (MR) is merged, I added a server-side update hook, which is only executed when merging into develop.
This works as it should, but the problem is that you only get the feedback after you accept the merge request.
What we want is some kind of preflight check: "this MR can be merged". (Like GitLab itself does when it checks that the MR doesn't produce a conflict.)
Is there any way to hook into this system?
EDIT: added image for better understanding.