We are trying to set up an infrastructure pipeline in Azure DevOps, and we have a requirement to set several variables, defined by the project, with each infra build. This also involves copying VMs from one location to another, renaming them, and so on. I can do this through PowerShell, but I'm struggling with how to import all the variables for each infra build. Since there are a few, I was thinking I could use an Excel file (CSV) and a foreach loop; that way the code stays consistent and simple.
Is there a way to have a CSV uploaded to Azure DevOps trigger the PowerShell pipeline to run? Or is there a way I can add a CSV to a pipeline? I can always look at another trigger once the CSV has been uploaded.
There are 7 fields (columns) in each infra build, and there could be up to 30-50 builds (rows) running at any one time, hence the desire to automate this with a CSV; adding the variables by hand each time is just not practical.
Thanks in advance, and I hope that makes sense.
From this doc, the * wildcard can only be set as the final character of a path filter.
So you can create a folder named 'csv' to upload the .csv file to, then set your YAML file like this:
```yaml
trigger:
  branches:
    include:
    - main
  paths:
    include:
    - 'csv/*'
```
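Once that trigger fires, a PowerShell task in the pipeline can import the rows and loop over them. A minimal sketch, assuming a file named builds.csv and hypothetical column names:

```powershell
# Minimal sketch: read every row of the uploaded CSV and process one
# infra build per row. The file name and column names are assumptions.
$csvPath = Join-Path $env:BUILD_SOURCESDIRECTORY 'csv/builds.csv'
$builds  = Import-Csv -Path $csvPath

foreach ($build in $builds) {
    # Each of the 7 columns becomes a property on $build.
    Write-Host "Provisioning '$($build.VmName)' in '$($build.TargetLocation)'"
    # ...copy/rename the VM here using the row's values...
}
```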
At present we keep multiple pipelines in separate YAML files, all of which are included in the main CI YAML file using include statements.
Is there any way to make this conditional?
For example, I have pipeline1 and pipeline2 YAML files, and I want to create another pipeline.yml that chooses between the two based on some condition.
GitLab is designed for deterministic pipelines; every solution that goes beyond that is an add-on and feels implicit.
Web service
Include in .gitlab-ci.yml a link to a web service, which could return a dynamically generated YAML file:
```yaml
include:
  - 'https://your-service.com/gitlab-ci-generate'
```
Unfortunately, it's not possible to use CI variables in the link to the server: GitLab issue link
Dynamic child pipelines
Here you have complete control of the pipeline. Write a simple bash script that renames (or generates) your pre-pushed .yml files and trigger the result as a child pipeline, as sketched below.
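A minimal sketch of that approach, assuming a hypothetical generate-ci.sh script that writes the desired YAML:

```yaml
# Job 1 generates the child pipeline definition and saves it as an
# artifact; job 2 triggers it. generate-ci.sh is a hypothetical script.
generate-pipeline:
  stage: build
  script:
    - ./generate-ci.sh > child-pipeline.yml
  artifacts:
    paths:
      - child-pipeline.yml

run-child-pipeline:
  stage: deploy
  trigger:
    include:
      - artifact: child-pipeline.yml
        job: generate-pipeline
    strategy: depend
```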
I have created a Release in the Azure DevOps environment and added the Archive Files task to my stage. I want the name of the zip file to include the date the file was created, but there doesn't seem to be syntax that allows me to do that. Does anyone know of a way to include the date in the file name?
It looks like you cannot use a date directly in the Archive file to create field of the task. The workaround is to configure the Release name format with $(Date:yyyyMMdd) under the release pipeline -> Options, like below.
Then, in the task, reference it with $(Release.ReleaseName), e.g. xxxxxx/test-$(Release.ReleaseName).zip.
Run the task; it works on my side.
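Alternatively, a script step can compute the date itself and expose it as a variable for later tasks. A sketch, with DateStamp as an assumed variable name:

```powershell
# Compute today's date and publish it as pipeline variable 'DateStamp'
# via the setvariable logging command (the variable name is an assumption).
$stamp = Get-Date -Format 'yyyyMMdd'
Write-Host "##vso[task.setvariable variable=DateStamp]$stamp"
# Later tasks can then reference it, e.g. an archive name of test-$(DateStamp).zip
```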
The scenario:
There are two Azure Pipelines, one for the UI code and one for the .NET data service code. There are Dev, UAT, and Production releases for each pipeline.
Currently, each pipeline puts the code into its own drop folder.
What I need to do is this:
When the UAT release has been built, I need to zip the UI and data service outputs together into a single zip file, so I need these steps automated.
I have no prior knowledge of Azure Pipelines, so I'm not sure where to start looking to achieve the above.
If anyone can point me in the right direction, it will be hugely appreciated. Thank you so much.
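As a starting point, one possible approach is sketched below: copy both drop folders into a single staging folder, then archive that folder with the Archive Files task. The artifact names and paths here are assumptions, not the actual layout.

```yaml
steps:
# Copy the UI drop into a shared staging folder (paths are assumptions).
- task: CopyFiles@2
  inputs:
    SourceFolder: '$(System.ArtifactsDirectory)/UI/drop'
    TargetFolder: '$(Build.ArtifactStagingDirectory)/combined'
# Copy the data service drop into the same folder.
- task: CopyFiles@2
  inputs:
    SourceFolder: '$(System.ArtifactsDirectory)/DataService/drop'
    TargetFolder: '$(Build.ArtifactStagingDirectory)/combined'
# Zip everything up as a single archive.
- task: ArchiveFiles@2
  inputs:
    rootFolderOrFile: '$(Build.ArtifactStagingDirectory)/combined'
    includeRootFolder: false
    archiveFile: '$(Build.ArtifactStagingDirectory)/uat-release.zip'
```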
I have a release pipeline which I use to deploy my resources to other environments. All works fine, but the problem is that every time I deploy, all the resources are deployed, even if no modification has been made. Is there a way I can do selective deployment, i.e. deploy only those resources which have been modified? Any help would do. Thanks.
That's a broad question. There is no out-of-the-box feature to select units to deploy, but you can use variables in the release pipeline:
- Define a variable for each resource/unit, set some default value, and mark it "Settable at release time".
- For each resource, define a separate deployment task with a custom condition, like: and(succeeded(), eq(variables['Custom.DeployUnit1'], 'YES'))
- You can then update these variables at release creation time.
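In YAML form, the same idea might look like this sketch (the variable and script names are assumptions):

```yaml
# One deploy task per unit, each gated by a release-time variable.
- task: PowerShell@2
  displayName: 'Deploy Unit1'
  condition: and(succeeded(), eq(variables['Custom.DeployUnit1'], 'YES'))
  inputs:
    targetType: 'filePath'
    filePath: 'deploy/deploy-unit1.ps1'  # hypothetical deployment script
```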
There is no such out-of-the-box way to do selective deployment in Azure DevOps.
That's because an Azure DevOps release does not support releasing only changed files: releasing only changed files is not always meaningful and may not achieve what the project intends to release (for example, when a commit changes only a config file).
But you could create a PowerShell script to compare timestamps for all files:
1. Create an XML file that stores the last upload/publish information for each file (e.g. file name, date/time, changeset/commit version).
2. Create a PowerShell script that compares files (get each file's metadata and compare it against the XML file) and copies updated files to a specific folder, as sketched below.
3. Publish the files in that folder.
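A minimal sketch of steps 1 and 2, under assumed paths and an assumed manifest schema:

```powershell
# Stage only files that are new or newer than the timestamp recorded in
# the manifest. Paths and schema are assumptions, e.g.
# <files><file name="app.dll" modified="2020-01-01T00:00:00Z"/></files>
$manifestPath = 'lastPublish.xml'
$sourceDir    = (Resolve-Path '.\drop').Path
$stagingDir   = '.\changed'

# Load the last-publish manifest into a lookup table (empty if missing).
$last = @{}
if (Test-Path $manifestPath) {
    ([xml](Get-Content $manifestPath -Raw)).files.file | ForEach-Object {
        $last[$_.GetAttribute('name')] = [datetime]$_.GetAttribute('modified')
    }
}

New-Item -ItemType Directory -Force -Path $stagingDir | Out-Null

# Copy only files that are missing from, or newer than, the manifest.
Get-ChildItem $sourceDir -Recurse -File | ForEach-Object {
    $rel = $_.FullName.Substring($sourceDir.Length + 1)
    if (-not $last.ContainsKey($rel) -or $_.LastWriteTimeUtc -gt $last[$rel]) {
        $dest = Join-Path $stagingDir $rel
        New-Item -ItemType Directory -Force -Path (Split-Path $dest) | Out-Null
        Copy-Item $_.FullName -Destination $dest
    }
}
```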
Check the similar thread for some more details.
Besides, if you deploy via deploy.cmd or MSDeploy.exe, you could also use the -useChecksum WebDeploy flag:
WebDeploy/MSDeploy Quick Tip: Only Deploy Changed Files
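For example (the source path and site name below are placeholders):

```powershell
# Sync content based on checksums instead of timestamps, so files with
# unchanged content are skipped even if their timestamps differ.
& 'C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe' `
    -verb:sync `
    -source:contentPath='C:\build\drop\site' `
    -dest:contentPath='Default Web Site' `
    -useChecksum
```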
Hope this helps.
Being new to ADF CI/CD, I am currently exploring how we can update pipeline-scoped parameters when we deploy a pipeline from one environment to another.
Here is the detailed scenario:
I have a simple ADF pipeline with a copy activity moving files from one blob container to another.
Example: the pipeline contains the copy activity and has two parameters, with default values:
1. SourceBlobContainer
2. SinkBlobContainer
Here is how the dataset is configured to consume these pipeline-scoped parameters.
Since this is the development environment, the default values are fine. But the test environment will have containers with altogether different names (like "TestSourceBlob" and "TestSinkBlob"). That said, when CI/CD happens, the process should handle this by updating the default values of these parameters.
Reading the documentation, I found nothing that handles such a use case.
Here are some links I referred to:
http://datanrg.blogspot.com/2019/02/continuous-integration-and-delivery.html
https://learn.microsoft.com/en-us/azure/data-factory/continuous-integration-deployment
Thoughts on how to handle this will be much appreciated. :-)
There is another approach, in contrast to the ARM templates located in the 'adf_publish' branch.
Many companies leverage that workaround, and it works great.
I have spent several days building a brand-new PowerShell module that publishes the whole Azure Data Factory code from your master branch or directly from your local machine. The module resolves all the pains that existed so far in any other solution, including:
- replacing any property in a JSON file (ADF object),
- deploying objects in the appropriate order,
- deploying only a subset of objects,
- deleting objects that no longer exist in the source,
- stopping/starting triggers, etc.
The module is publicly available in the PS Gallery: azure.datafactory.tools
Source code and full documentation are on GitHub here.
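For reference, a short usage sketch (the cmdlet name follows the module's docs; resource names and paths are placeholders):

```powershell
# Install the module and publish all ADF objects from a source folder.
Install-Module -Name azure.datafactory.tools -Scope CurrentUser
Publish-AdfV2FromJson -RootFolder 'C:\adf\src' `
    -ResourceGroupName 'rg-adf-test' `
    -DataFactoryName 'adf-test' `
    -Location 'NorthEurope'
    # optionally add -Stage 'test' to apply per-environment
    # property replacements from a stage config file
```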
Let me know if you have any questions or concerns.
There is a "new" way to do ci/cd for ADF that should handle this exact use case. What I typically do is add global parameters and then reference those everywhere (in your case from the pipeline parameters). Then in your build you can override the global parameters with the values that you want. Here are some links to references that I used to get this working.
The "new" ci/cd method following something like what is outlined here Azure Data Factory CI-CD made simple: Building and deploying ARM templates with Azure DevOps YAML Pipelines. If you have followed this, something like this should work in your yaml:
```yaml
overrideParameters: '-dataFactory_properties_globalParameters_environment_value "new value here"'
```
Here is an article that goes into more detail on overrideParameters: ADF Release - Set global params during deployment
Here is a reference on global parameters and how to get them exposed to your CI/CD pipeline: Global parameters in Azure Data Factory
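Put together, the deployment step might look like this sketch (the service connection, resource names, and artifact paths are assumptions):

```yaml
# Deploy the exported ARM template, overriding the global parameter
# per environment. All names and paths below are placeholders.
- task: AzureResourceManagerTemplateDeployment@3
  inputs:
    deploymentScope: 'Resource Group'
    azureResourceManagerConnection: 'my-service-connection'
    resourceGroupName: 'rg-adf-test'
    location: 'North Europe'
    csmFile: '$(Pipeline.Workspace)/adf/ARMTemplateForFactory.json'
    csmParametersFile: '$(Pipeline.Workspace)/adf/ARMTemplateParametersForFactory.json'
    overrideParameters: '-factoryName "adf-test" -dataFactory_properties_globalParameters_environment_value "test"'
```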