Can I run an Azure DevOps pipeline without committing it?

I am planning to experiment with building a pipeline using Azure DevOps. One thing I noticed early on is that after azure-pipelines.yml is created, I have to commit it before I can run it. But I want to experiment, which revolves around trial and error. Making multiple commits just to test things out is not feasible.
In Jenkins I can just define my steps and try to run them without committing the file.
Is this also possible in Azure DevOps?

But I want to experiment, which revolves around trial and error. Making multiple commits just to test things out is not feasible.
Yes it is - you just use a different branch. That gives you the freedom to make as many changes as you need while putting the pipeline together and trying it out, without committing to the master branch.
Then when you're happy with the way the pipeline is running, you can merge your branch into the master branch which the pipeline normally uses.
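For illustration, a minimal azure-pipelines.yml you could iterate on from such a branch might look like this (a sketch; the branch name pipeline-experiments and the step contents are placeholders, not from the question):

trigger:
  branches:
    include:
      - pipeline-experiments   # hypothetical experimental branch; only pushes here trigger runs

pool:
  vmImage: ubuntu-latest

steps:
  - script: echo "Trying out a pipeline change"
    displayName: Experiment step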

You cannot run YAML pipelines without committing them, but you can create classic pipelines and run them without committing anything pipeline-related to the repository (apart from the source code you want to build). A classic pipeline can later be converted (or copy-pasted, to be exact) into a YAML pipeline via its View YAML option.
https://learn.microsoft.com/en-us/azure/devops/pipelines/get-started/pipelines-get-started?view=azure-devops#define-pipelines-using-the-classic-interface
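The export is ordinary pipeline YAML; for a classic pipeline containing a single command-line task, the result looks roughly like this (an illustrative sketch, not an actual View YAML export):

pool:
  vmImage: ubuntu-latest

steps:
  - task: CmdLine@2   # the classic "Command line" task, rendered as YAML
    inputs:
      script: echo Hello from the converted pipeline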

If you're on your own branch, or in a repository where no other developers are making changes, then you can:
make a change,
use git commit --amend to replace your previous commit with the new file, and
use git push --force-with-lease to push it up to Azure DevOps.
That way your trial-and-error iterations don't pile up in the commit history.

Related

git pull in Azure Data Factory

When working with regular source code (Java, C++, etc.), there are commands like
git pull
git fetch
git push
to sync your remote git repo branch with your local branch.
What is the equivalent in the Azure Data Factory world?
So, I am using Azure Data Factory with an Azure git repo.
I am working in a particular feature branch, "feature branch".
My pipeline has a copy activity that hits a dataset in its Sink stage. (A screenshot was included; the setup is pretty simple and looks right.)
I see that the JSON definition of my dataset in the remote git repository differs from what I see in the Azure portal GUI (pointed at that same remote branch). The ADF GUI in the Azure portal is correct; the version in the git repo contains some things I already deleted, but they never get deleted there (why?).
So, when I Debug the pipeline, I get errors that point to this discrepancy. I want to sync the two environments, but since I do not understand how the discrepancies came about, I don't know how to fix the issue. Any help is appreciated.
In the ADF world, we use Publish, and create a new pull request, to merge new changes from a feature branch into the main branch.
It seems like your git repository version is not up to date with the live ADF.
If there are any pending changes in your main branch, you can click the Publish button to merge them.
If you are working on feature branches, you can merge the changes by creating a new pull request.
If you have multiple feature branches, you will need to manually compare the different versions to resolve conflicts.

Why bitbucket pipeline merges pull request before it runs?

In their documentation on pull request pipelines, Bitbucket says:
Pull requests:
a special pipeline that only runs on pull requests initiated from within your repository. It merges the destination branch into your working branch before it runs. If the merge fails, the pipeline stops.
So I'm wondering: why merge before running the pipeline? Why not just run against the incoming branch without merging?
Could the reason be to detect merge conflicts early, in the pipeline, before the real merge?
If you want to run a pipeline against the incoming branch, that is very doable using branch workflows. The PR merge trigger is just a slightly different idea: the result of a PR merge is not necessarily the same as the incoming branch. For example, merge conflicts can be introduced, which will make your pipeline fail.
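For comparison, a bitbucket-pipelines.yml can declare both kinds of trigger side by side; here is a minimal sketch (the branch pattern and test command are placeholders):

pipelines:
  branches:
    feature/*:            # runs on pushes to the branch itself, with no merge performed
      - step:
          script:
            - ./run-tests.sh
  pull-requests:
    '**':                 # runs on the pre-merged result of source and destination
      - step:
          script:
            - ./run-tests.sh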
There's one thing the documentation is not quite clear about, so I'll clarify it: all this pre-pipeline merging occurs only inside your build environment. The git history of your repository is absolutely safe, and Bitbucket Pipelines won't introduce any changes to it on your behalf.
Finally, you can run a PR merge pipeline manually from the Pipelines UI without actually merging the PR (see the same link). This way, you can make sure the merge result builds successfully before actually doing the merge.

GitLab CI - Move pipeline logic from a project repo to centralized "devops-repo"

I have had good experience with automating pipeline creation (for a huge number of repos).
For example, a project has 20 similar repos with Java apps (like microservices), and the pipeline for each of them differs only by repo URL (and a few other minor attributes). The CI/CD process is the same for each of them.
So, we can create a separate devops-repo with a declarative configuration for our services. We can also create a single pipeline that pulls the devops repo and creates all the needed pipelines for each repo in the configuration (this operation is executed only once, at the beginning, and whenever we want to change the devops configuration).
I implemented this using Jenkins. Now I am going to do the same with GitLab CI, but I can't figure out how it is possible.
Is it possible to create a pipeline from another one (dynamically)?
Any suggestions?
You can use include and put the generic pipeline in your devops repo.
In your Java repos you can include the devops pipeline and set the variables that are specific to the respective Java repo.
So the pipeline for your Java repos can be as short as this:
include:
  - project: 'your-group/devops-repository'
    file: '.generic-ci.yml'

variables:
  FOO: bar
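The .generic-ci.yml in the devops repo then defines the shared jobs and picks up the per-repo variables. A minimal sketch of what it might contain (the job name and build command are illustrative, not from the answer):

stages:
  - build

build-job:
  stage: build
  script:
    - echo "Building with FOO=$FOO"   # FOO is supplied by the including repo
    - ./mvnw package                   # hypothetical build command for the Java repos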

ADF source integration issues with multiple developers

We have two developers using the same ADF. Each developer creates a git branch and starts working on it. Each developer can save changes to their own git branch, but there can only be one collaboration branch, and that branch determines the publishing branch. This is causing a blockade for one of the developers. How can we solve this?
The ADF publish branch can be set using a publish_config.json, but there is now also an option to set it in ADF itself. Which one takes precedence? What is the best practice here?
You need to manage the work of each developer with standard git branch/merge processes. When one dev is done with work in their feature branch, they create a pull request to merge the changes into your collaboration branch.
If the second dev has not created a feature branch yet, they can simply do so after the first dev's pull request is complete and continue working from there. If the second dev has already created a feature branch, they will need to merge the new changes from the collaboration branch into their feature branch before continuing, and later commit and create a pull request to merge their feature branch back into the collaboration branch. From there, you can publish as needed.
This git work can be done through the ADF editor as well as through any other git interface you have. It's up to you.
This article discusses the process in specific detail using the ADF editor.
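For reference, the publish_config.json mentioned in the question is a small JSON file placed in the root folder of the collaboration branch; a sketch (the branch name here is a placeholder):

{
    "publishBranch": "factory/adf_publish"
}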
EDIT:
I believe you now have answers for this from 3 of the other 5 questions you posted about this same topic in the past day.
ADF publish confusion in git mode
Azure data factory working-branch confusion
When ADF publish branch is git protected how to publish?
Here is another article that describes the fundamental git process for ADF, to help bring you up to speed with how the different branches work and how you can switch publish branches on the fly if needed.

Gitlab: Issues and Pipelines

I have set up a Git project + CI (using gitlab-runner) on GitLab v12.3.5. I have a question about issues and pipelines. Let's say I create an issue and assign it to myself; this creates a branch/merge request. Then I open the Web IDE to modify some files in an attempt to fix the issue. Now I want to see whether my changes fix the issue. In order to run the pipeline, is it necessary to commit the changes to the branch, or is there some other way?
The scenario I have is that it may take me 20 tries to fix the files and make the pipeline 'clean'. In that case, I would have to commit on every change to see the results. What is the preferred way to accomplish this? Is it possible to run the pipeline by just staging the changes to see if they work?
I am setting up the .gitlab-ci.yml file, so it is taking a lot of trial and error to get it working properly.
You should create a branch and push to it. Only pushed changes trigger pipeline runs. After you're done, you can squash and merge the branch so that the repo's history stays clean.
Usually, though, you won't have to do this, because you'll have automated tests set up to check whether your code works. You should also try the commands you run in your GitLab CI scripts (Linux commands, or whichever you use) locally first. If you're worried about whether your .gitlab-ci.yml syntax is correct, you can navigate to the file in your repository and check there (there's a button at the top that lints it).
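As a concrete starting point while iterating, the .gitlab-ci.yml can be as small as this (the job name and command are placeholders); push it to your work branch, watch the pipeline result, adjust, and push again:

test-job:
  script:
    - echo "Run your checks here"   # replace with the commands you are debugging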
