Migrating existing (entire) Azure DevOps pipelines to YAML-based pipelines (in bulk)

I would like to move our existing Azure DevOps pipelines to YAML-based pipelines for the obvious advantages. The problem is that there are many of them, and each one has many jobs.
When I click around in Azure DevOps, the "View YAML" link only appears for one job at a time. So that's going to be a lot of manual work to view the YAML for each pipeline and each job and move that to code.
But for each pipeline there seems to be a way to "export" the entire pipeline as JSON. I was wondering if there is a similar way to at least dump an entire pipeline as YAML, if not an entire folder.
If there is an API which exports the same, even better.

Until now, what is supported is what you see: use View YAML to copy/paste the definition of each agent job. There is another workaround to get the entire definition of one pipeline: use the API to get the JSON from a build definition, convert it to YAML, tweak the syntax, and then, if needed, update the tasks that are referenced.
First, use the Get Build Definition API to get the entire definition of one pipeline.
Invoke a JSON-to-YAML converter and copy/paste the JSON of the definition into it.
Copy the resulting YAML into the YAML editor in Azure DevOps. Then the most important step is to tweak the syntax.
Replace the refName key values with task names and versions. For this, you can go to our tasks' source code, which is open on GitHub; the built-in tasks can be found there (note: see the task.json file of the corresponding task).
Note: another disadvantage of this method is that you need to be very familiar with YAML syntax so that you can successfully tweak the content converted from JSON.
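For illustration, after the conversion each task reference has to be rewritten into the task: Name@majorVersion form taken from that task's task.json; the task and inputs below are a hypothetical example, not output from the converter:

steps:
# The converted JSON references tasks by id/refName; replace each one with the
# task name and major version found in its task.json in the azure-pipelines-tasks repo.
- task: DotNetCoreCLI@2
  displayName: Build solution
  inputs:
    command: build
    projects: '**/*.csproj'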

This has now been done, and there is a blog post about exporting pipelines as YAML on devblogs.
It's worth mentioning that the new system knows how to handle every feature listed here:
Single and multiple jobs
Checkout options
Execution plan parallelism
Timeout and name metadata
Demands
Schedules and other triggers
Pool selection, including jobs which differ from the default
All tasks and all inputs, including optimizing for default inputs
Job and step conditions
Task group unrolling
In fact, there are only two areas which we know aren’t covered.
Variables
Variables defined in YAML “shadow” (hide) variables set in the UI. Therefore, we didn’t want to export them into the YAML file in case you need an ability to alter them at runtime. If you have UI variables in your Classic pipeline, we mention them by name in the comments to remind you that you need to configure them in your new YAML pipeline definition.
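For illustration, the exported YAML might look like the sketch below (the variable name is hypothetical); defining the variable in YAML would shadow the UI value, which is why the exporter only leaves a comment:

variables:
# 'buildConfiguration' was a UI variable in the Classic pipeline. It is not exported,
# so it can still be set at queue time; re-create it in the new pipeline's UI,
# or uncomment the lines below to pin it in code (which hides any UI value).
# - name: buildConfiguration
#   value: Release
- name: exportedYamlVariable
  value: example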
Timezone translation
cron schedules in YAML are in UTC, while Classic schedules are in the organization’s timezone. Timezone handling has a lot of sharp edges, so we opted not to try to be clever. We export the schedule without doing any translation, so your scheduled builds might be off by a certain number of hours unless you manually modify them. Here again, we make a note in the comments to remind you.
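For illustration, an exported schedule keeps the original cron value, which YAML treats as UTC; if the Classic pipeline ran nightly at 03:00 organization time, the hour may need a manual shift (the times and branch below are hypothetical):

schedules:
- cron: '0 3 * * *'   # exported unchanged; in YAML this fires at 03:00 UTC,
                      # not 03:00 in the organization's timezone
  displayName: Nightly build
  branches:
    include:
    - main
  always: false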
But there won't be support for release pipelines:
No plans to do so. Classic RM pipelines are different enough in their execution that I can’t make the same strong guarantees about correctness as I can with classic Build. Also, a number of concepts were re-thought between RM and unified YAML pipelines. In some cases, there isn’t a direct translation for an RM feature. A human is required to think about what the pipeline is designed to accomplish and re-implement it using new constructs.

I tried yamlizr https://github.com/f2calv/yamlizr
It works pretty well for exporting Release Pipelines, except that it doesn't export Pre/Post deployment conditions. We use these for approval gates, so hopefully a future release will support them.
But per Microsoft, it sounds like they won't support exporting Release Pipelines to YAML:
https://devblogs.microsoft.com/devops/replacing-view-yaml/#comment-2043

Related

How can I conditionally run YAML tasks based on path filters?

I have a number of solutions, each of which have a mixture of applications and libraries. Generally speaking, the applications get built and deployed, and the libraries get published as NuGet packages to our internal packages feed. I'll call these "apps" and "nugets."
In my Classic Pipelines, I would have one build for the apps, and one for the nugets. With path filters, I would indicate folders that contain the nuget material, and only trigger the nuget build if those folders had changes. Likewise, the app build would have path filters to detect if any app code had changed. As a result, depending on what was changed in a branch, the app build might run, the nuget build might run, or both might run.
Now I'm trying to convert these to YAML. It seems we can only have one pipeline set up for CI, so I've combined the stages/jobs/steps for nugets and apps into this single pipeline. However, I can't seem to figure out a good way to only trigger the nuget tasks if the nuget path filters are satisfied and only the app tasks if the app path filters are satisfied.
I am hoping someone knows a way to do something similar to one of the following (or anything else that would solve the issue):
Have two different CI pipelines with their own sets of triggers and branch/path filters such that one or both might run on a given branch change
Set some variables based on which paths have changes so that I could later trigger the appropriate tasks using conditions
Make a pipeline always trigger, but only do tasks if a path filter is satisfied (so that the nuget build could always run, but not necessarily do anything, and then the app build could be triggered by the nuget build completing, and itself only do stuff if path filters are satisfied).
It seems we can only have one pipeline set up for CI
My issue was that this was an erroneous conclusion. It appeared to me that, out of the box, a Pipeline is created for a repo with a YAML file in it, and that you can change which file the Pipeline uses, but you can't add a list of files for it to use. I did not realize I could create an additional Pipeline in the UI, and then associate it to a different YAML file in the repo.
Basically, my inexperience with this topic is showing. To future people who might find this, note that you can create as many Pipelines in the UI as you want, and associate each one to a different YAML file.
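For illustration, two YAML files in the same repo, each registered as its own pipeline in the UI, can carry independent path filters; the folder and file names below are hypothetical:

# apps-ci.yml, registered as the "Apps CI" pipeline
trigger:
  branches:
    include:
    - main
  paths:
    include:
    - apps/*
steps:
- script: echo "building apps"

# A second file, nugets-ci.yml, registered as "NuGets CI", would be identical
# except for paths.include pointing at the library folders (e.g. libs/*).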

Compile time vs Runtime Azure Pipelines

I come across the terms "compile time" and "run time" pretty much everywhere when I learn about Azure Pipelines.
However, I still haven't found a clear explanation of them.
I found this page in Microsoft's documentation, but it doesn't explain these terms very clearly.
I would be happy if someone could explain these terms in the context of the whole run sequence of Azure Pipelines.
Thanks!
When using YAML Azure DevOps pipelines, you have your pipeline as a code definition. Compile time happens before runtime: you can pass parameters to your YAML before it is compiled (parsed, in reality), and expressions are evaluated and replaced in your YAML before any task even starts. At runtime, the "compiled" YAML will, for example, read variables from your Azure DevOps pipeline.
Here is an example from the Microsoft docs:
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/expressions?view=azure-devops
Expressions are probably the thing most affected by the difference between compile time and run time.
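For example, here is a minimal sketch contrasting the three syntaxes, using a hypothetical parameter and variable: template expressions ${{ }} are resolved at compile (parse) time, runtime expressions $[ ] when the run starts, and macro syntax $( ) just before a task executes.

parameters:
- name: environment
  type: string
  default: dev

variables:
  # Template expression: replaced while the YAML is compiled/parsed.
  resourceGroup: ${{ format('rg-{0}', parameters.environment) }}
  # Runtime expression: evaluated when the run starts.
  isMain: $[ eq(variables['Build.SourceBranchName'], 'main') ]

steps:
# Macro syntax is expanded just before the script task runs.
- script: echo "Deploying to $(resourceGroup); isMain=$(isMain)"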
Also a pretty nice article about this:
https://adamtheautomator.com/azure-devops-variables-complete-guide/

Azure functions - deploy by project in single repo?

We are building a set of serverless functions in Azure, but having difficulty deciding how to structure our source (Azure GIT) and DevOps to support them.
I am thinking of a single Git repo, with all function apps housed independently within projects. We may have a lot of these function apps; we see great value in small code segments that do utility-type work, and I don't want dozens and dozens of independent repos just because of DevOps deployments. Is there a way to have a unique build and release process for each project, rather than for the repo as a whole? We aren't clear how this can be done, and searches have come up empty. I thought it was possible to have unique build YAMLs per project across many projects in a single repo, but I'm unclear how to implement the DevOps build and release pipelines to support this approach, i.e. only a single function gets updated and we need to deploy it. Any guidance on whether this is possible and how to approach it would be great.
I haven't done this myself, but I'm in a similar situation where I'd like to have multiple functions (and other stuff) in a single Git repo for simplicity, but only build/deploy them as needed when they change. It looks like you can have multiple pipelines on a single repo, with a different YAML file for each pipeline. The steps are documented in this link and summarized below:
In Azure DevOps, create a new Pipeline.
For the "Where is your code?" page, at the bottom choose the Use the classic editor option.
Select your source repo and branch.
On the "Select a template" screen, choose the YAML option at the top. Hit Apply.
There is a YAML file path field where you can specify the path and name of your YAML file for the pipeline.
You may want to set the pipeline to run manually if you don't want a build each time there's a commit to the repo.
EDIT There may be an easier way to do this now. If you go through the New Pipeline wizard, select your source location, on the Configure tab, at the bottom you can choose the Existing Azure Pipelines YAML file option. This lets you select a custom YAML file directly.
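For illustration, each such pipeline can point at its own YAML file with its own path filter, so only changes under that function's folder queue a build; the file, folder, and project names below are hypothetical:

# pipelines/function-app-one.yml, selected via "Existing Azure Pipelines YAML file"
trigger:
  branches:
    include:
    - main
  paths:
    include:
    - src/FunctionAppOne/*

steps:
- script: dotnet build src/FunctionAppOne --configuration Release
  displayName: Build FunctionAppOne only when its folder changes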

Can we have multiple triggers that execute different jobs in one yaml file?

Is it possible for a pipeline to have multiple triggers in one YAML file that execute different jobs per trigger?
In our pipeline, we pack each project in the solution and push it as a NuGet package to our own Azure DevOps Artifacts feed, and we want to do the packing and pushing depending on the project. I saw that it is possible to specify the branch and path in the trigger, but according to this you can only have one trigger. However, he only indicated that in the question, and the documentation doesn't explicitly state it.
Right now my option is to just configure different pipelines with a YAML file per project, but I want to ask here to confirm whether this is possible or not.
Agreed with Jessehouwing: you can add multiple triggers, and you can use conditions on tasks, jobs, stages, and environments to only run them in specific cases.
https://learn.microsoft.com/en-us/azure/devops/pipelines/yaml-schema?view=azure-devops&tabs=schema#triggers
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/conditions?tabs=yaml&view=azure-devops
Thanks for the input. I studied the docs, but it's not possible to achieve what I wanted with just the built-in tasks for Azure DevOps. I had to make a script that does it and assigns true or false values to the conditionals.
The exact answer I was looking for was in this post
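For reference, here is a minimal sketch of that script-based approach (the project, variable, and job names are hypothetical): one job inspects the changed files and publishes an output variable, and a later job consumes it in its condition.

trigger:
  branches:
    include:
    - main

jobs:
- job: DetectChanges
  steps:
  - bash: |
      # Flag which project folders changed (simplistic diff against the previous
      # commit; assumes the checkout fetch depth is sufficient).
      if git diff --name-only HEAD~1 HEAD | grep -q '^src/ProjectA/'; then
        echo "##vso[task.setvariable variable=packProjectA;isOutput=true]true"
      fi
    name: detect

- job: PackProjectA
  dependsOn: DetectChanges
  condition: eq(dependencies.DetectChanges.outputs['detect.packProjectA'], 'true')
  steps:
  # Packing only; pushing to the Azure Artifacts feed (and its authentication) is omitted here.
  - script: dotnet pack src/ProjectA --configuration Release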

How to update ADF Pipeline level parameters during CICD

Being a novice at ADF CI/CD, I am currently exploring how we can update the pipeline-scoped parameters when we deploy the pipeline from one environment to another.
Here is the detailed scenario -
I have a simple ADF pipeline with a copy activity moving files from one blob container to another
Example: below there is a copy activity, and the pipeline has two parameters named:
1- SourceBlobContainer
2- SinkBlobContainer
with their default values.
Here is how the dataset is configured to consume these Pipeline scoped parameters.
Since this is the development environment, the default values are OK. But the Test environment will have containers with altogether different names (like "TestSourceBlob" & "TestSinkBlob").
That said, when CI/CD happens, the process should handle this by updating the default values of these parameters.
Reading the documents, nowhere did I find how to handle such a use case.
Here are some links which I referred to:
http://datanrg.blogspot.com/2019/02/continuous-integration-and-delivery.html
https://learn.microsoft.com/en-us/azure/data-factory/continuous-integration-deployment
Thoughts on how to handle this will be much appreciated. :-)
There is another approach, as opposed to the ARM templates located in the 'ADF_Publish' branch.
Many companies leverage that workaround and it works great.
I have spent several days and built a brand new PowerShell module to publish the whole Azure Data Factory code from your master branch or directly from your local machine. The module resolves all the pain points that existed so far in any other solution, including:
replacing any property in a JSON file (ADF object),
deploying objects in an appropriate order,
deploying only a subset of objects,
deleting objects that no longer exist in the source,
stopping/starting triggers, etc.
The module is publicly available in PS Gallery: azure.datafactory.tools
Source code and full documentation are in GitHub here.
Let me know if you have any questions or concerns.
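As a rough sketch of wiring the module into a YAML pipeline step (the cmdlet name Publish-AdfV2FromJson and its parameters are recalled from the module's documentation and should be verified there; the service connection and resource names are hypothetical):

steps:
- task: AzurePowerShell@5
  displayName: Publish ADF from JSON source
  inputs:
    azureSubscription: my-arm-service-connection   # hypothetical service connection
    azurePowerShellVersion: LatestVersion
    ScriptType: InlineScript
    Inline: |
      Install-Module -Name azure.datafactory.tools -Scope CurrentUser -Force
      Publish-AdfV2FromJson -RootFolder "$(Build.SourcesDirectory)/adf" `
        -ResourceGroupName 'rg-adf-test' `
        -DataFactoryName 'adf-test' `
        -Location 'westeurope'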
There is a "new" way to do ci/cd for ADF that should handle this exact use case. What I typically do is add global parameters and then reference those everywhere (in your case from the pipeline parameters). Then in your build you can override the global parameters with the values that you want. Here are some links to references that I used to get this working.
The "new" ci/cd method following something like what is outlined here Azure Data Factory CI-CD made simple: Building and deploying ARM templates with Azure DevOps YAML Pipelines. If you have followed this, something like this should work in your yaml:
overrideParameters: '-dataFactory_properties_globalParameters_environment_value "new value here"'
Here is an article that goes into more detail on the overrideParameters: ADF Release - Set global params during deployment
Here is a reference on global parameters and how to get them exposed to your ci/cd pipeline: Global parameters in Azure Data Factory
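As a hedged sketch of where that overrideParameters string fits, assuming the ARM template deployment approach from the linked article (the service connection, subscription variable, file paths, and factory name are hypothetical; the global-parameter name matches the one shown above):

steps:
- task: AzureResourceManagerTemplateDeployment@3
  displayName: Deploy ADF ARM template with overridden global parameters
  inputs:
    deploymentScope: 'Resource Group'
    azureResourceManagerConnection: my-arm-service-connection   # hypothetical
    subscriptionId: $(subscriptionId)                           # hypothetical variable
    action: 'Create Or Update Resource Group'
    resourceGroupName: rg-adf-test
    location: westeurope
    templateLocation: 'Linked artifact'
    csmFile: $(Pipeline.Workspace)/adf/ARMTemplateForFactory.json
    csmParametersFile: $(Pipeline.Workspace)/adf/ARMTemplateParametersForFactory.json
    deploymentMode: Incremental
    overrideParameters: >-
      -factoryName "adf-test"
      -dataFactory_properties_globalParameters_environment_value "new value here"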
