I'm quite new to Bitbucket Pipelines. I have currently defined three steps: one for testing, one for building, and one with a manual trigger for deploying.
Is there any way to get the user who triggered the manual build (via environment variables, for example)?
In the question title you speak of the author. This would be easy – simply ask Git, for instance using git log -n 1 --format=format:'%an'.
On the other hand, it is not possible to get the user who manually triggered a build, for instance using the “Rerun” button.
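If the commit author is enough, you can capture it inside the manual step itself. A minimal sketch of a bitbucket-pipelines.yml custom (manually triggered) pipeline – the pipeline and step names are illustrative:

pipelines:
  custom:
    deploy:
      - step:
          name: Deploy (manual)
          script:
            # Ask Git for the author of the commit being built
            - AUTHOR=$(git log -n 1 --format=format:'%an')
            - echo "Commit authored by: $AUTHOR"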
I have a use case where I have a template project set up that I then use as a base for new microservices. While this works for building some basic stuff, such as base files, the CI/CD pipeline, etc., it's still the template. I'm going through all the CI/CD variables now to check, but wanted to see if anyone else has had this use case come up. Basically, I want to know if there's something like a "first run on repo creation" trigger that can run as soon as the repo is cloned from the template, but then never run again. This stage would modify some of the files in the repo to change the names of things like the service, etc.
Is there any way to currently do this? The only other way I can think of doing it would be to have another project that uses the API or something to get the new repo name and then check in the modified files.
Thanks!
You could use a rule that checks for a specific commit message crafted to be the message at the HEAD of the template project. Optionally, you can also check if the project ID is not the project ID of the template project (to avoid the job running in the template project itself).
rules:
  - if: '$CI_COMMIT_MESSAGE == "something very specific" && $CI_PROJECT_ID != "1234"'
When a new project is created from the template, the rule will evaluate to true, but future commits that users make won't (or at least shouldn't, under normal circumstances) match the rule.
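Put together, a one-time setup job could look like this sketch (the job name and the rename-service.sh script are hypothetical; $CI_PROJECT_NAME is a predefined GitLab variable):

rename_template_files:
  rules:
    - if: '$CI_COMMIT_MESSAGE == "something very specific" && $CI_PROJECT_ID != "1234"'
  script:
    # Hypothetical script that swaps template names for the new service name
    - ./scripts/rename-service.sh "$CI_PROJECT_NAME"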
Though, to hook into project creation, using a system hook (for self-managed instances) might be a better option.
I am planning to experiment with building a pipeline using Azure DevOps. One thing I noticed early on is that, after azure-pipelines.yml is created, I have to commit it before being able to run it. But I want to experiment with it, which revolves around trial and error. Making multiple commits just to test things out is not feasible.
In Jenkins I can just define my steps and try to run it without committing the file.
Is this also possible to do in Azure DevOps?
But I want to experiment with it, which revolves around trial and error. Making multiple commits just to test things out is not feasible.
Yes it is - you just use a different code branch. That will allow you the freedom to make as many changes as you need, while putting the pipeline together and trying it out, without committing to the master branch.
Then when you're happy with the way the pipeline is running, you can merge your branch into the master branch which the pipeline normally uses.
You cannot run YAML pipelines without committing them, but you can create classic pipelines and run them without committing anything pipeline-related to the repository (except for the source code you want to build). Classic pipelines can later be turned into YAML pipelines (or copy-pasted, to be exact) using the "View YAML" option.
https://learn.microsoft.com/en-us/azure/devops/pipelines/get-started/pipelines-get-started?view=azure-devops#define-pipelines-using-the-classic-interface
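For reference, the YAML produced by "View YAML" is the same kind of azure-pipelines.yml you would otherwise commit by hand. A minimal sketch (the branch, image, and script are illustrative):

trigger:
  - master

pool:
  vmImage: 'ubuntu-latest'

steps:
  # A trivial step, just to show the shape of the file
  - script: echo "Hello from the pipeline"
    displayName: 'Say hello'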
If you're on your own branch, or in a repository without any other developers making changes, then you can:
Make a change
Use git commit --amend to overwrite your previous commit with the new file
Use git push --force-with-lease to push that up to Azure DevOps
That will hide your commit history while experimenting.
I have set up a Git project + CI (using GitLab Runner) on GitLab v12.3.5. I have a question about issues and pipelines. Let's say I create an issue and assign it to myself. This creates a branch/merge request. Then I open up the Web IDE to modify some files in an attempt to fix the issue. Now I want to see if my changes will fix the issue. In order to run the pipeline, is it necessary to commit the changes into the branch, or is there some other way?
The scenario I have is that it may take me 20 times to fix the files to make the pipeline 'clean'. In that case, I would have to keep committing on each change to see the results. What is the preferred way to accomplish this? Is it possible to run the pipeline by just staging the changes to see if they work?
I am setting up the .gitlab-ci.yml file, so it is taking a lot of trial and error to get it working properly.
You should create a branch and push to that. Only pushed changes will trigger pipeline runs. After you're done, you can squash and merge the branch so that the repo's history will be clean.
Usually though, you won't have to do this because you'll have automated tests set up to check whether your code works. You should also try testing the Linux commands (or whichever commands you're running in your GitLab CI scripts) locally first. If you're worried about whether your .gitlab-ci.yml syntax is correct, you can navigate to the file in your repository and check there (there's a button at the top which lints it).
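For example, if a job boils down to ordinary shell commands, the fastest loop is to run those commands locally before pushing. A minimal sketch of such a .gitlab-ci.yml job (the run-tests.sh script is hypothetical):

test:
  stage: test
  script:
    # Plain shell commands – debug these locally first, then push
    - ./run-tests.sh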
Suppose we have 100 static websites of a similar type. They will have similar build pipeline tasks. So instead of creating build and release pipelines one by one in the visual designer, is there a way to automate this so that they get created automatically?
You can do that via the REST API. Also, if the pipelines are in different repos, you can put an azure-pipelines.yml in the root of each repo and it will be picked up automatically.
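Since the sites build the same way, the same azure-pipelines.yml can be dropped into the root of each repo. A minimal sketch for a static site (the build command and output folder are assumptions):

trigger:
  - master

pool:
  vmImage: 'ubuntu-latest'

steps:
  # Placeholder build – adjust to however the sites are actually built
  - script: npm install && npm run build
    displayName: 'Build static site'
  # Publish the built site as a pipeline artifact
  - publish: $(System.DefaultWorkingDirectory)/dist
    artifact: site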
Go to Builds > Edit > the menu at the top right; on the next screen you can rename the pipeline.
We are building a microservices-based architecture and we have 50-odd CI and 50-odd CD pipelines. Is there a way to script the CI/CD build and release definitions? We want this to be a repeatable process and do not want to leave it to our DevOps engineer(s), as it is prone to errors. Please note that I am not talking about ARM (which is already being used by us). Is there a way to do the above?
For builds, you can use YAML builds, which are currently in preview.
For releases, there's nothing equivalent yet.
You could always use the REST APIs to extract the build and release definitions as JSON, source control them, and then create a continuous delivery pipeline to update them when the definitions in source control change.
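As a sketch of the REST approach, a pipeline step could export a build definition as JSON with curl (the organization, project, and definition ID are placeholders; release definitions live under vsrm.dev.azure.com instead):

steps:
  - script: |
      # Export a build definition as JSON via the Azure DevOps REST API
      curl -s -u ":$(System.AccessToken)" \
        "https://dev.azure.com/{organization}/{project}/_apis/build/definitions/{definitionId}?api-version=5.0" \
        -o build-definition.json
    displayName: 'Export build definition'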