The scenario:
There are two Azure Pipelines: one for the UI code and one for the .NET data service code. Each pipeline has Dev, UAT and Production releases.
Currently, each pipeline puts the code into its own drop folder.
What I need to do is this:
When the UAT release has been built, I need to zip the UI and the data service and put them into a single zip file, and I need these steps to be automated.
I have no prior experience with Azure Pipelines, so I'm not sure where to start looking to achieve the above.
If anyone can point me in the right direction, it will be hugely appreciated. Thank you so much.
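A minimal sketch of what the automated steps could look like in YAML, assuming both builds publish their output as pipeline artifacts; the project, pipeline, and artifact names below are placeholder assumptions, not anything from the question:

```yaml
steps:
# Pull the latest drop from each of the two builds. "UI-Build",
# "DataService-Build" and the artifact name "drop" are assumptions.
- task: DownloadPipelineArtifact@2
  inputs:
    source: specific
    project: MyProject
    pipeline: UI-Build            # UI build definition (assumed name/ID)
    artifact: drop
    path: $(Pipeline.Workspace)/drops/ui
- task: DownloadPipelineArtifact@2
  inputs:
    source: specific
    project: MyProject
    pipeline: DataService-Build   # data service build definition (assumed)
    artifact: drop
    path: $(Pipeline.Workspace)/drops/dataservice

# Zip both folders into a single archive.
- task: ArchiveFiles@2
  inputs:
    rootFolderOrFile: $(Pipeline.Workspace)/drops
    includeRootFolder: false
    archiveType: zip
    archiveFile: $(Build.ArtifactStagingDirectory)/uat-drop.zip

# Publish the combined zip as one artifact.
- task: PublishBuildArtifacts@1
  inputs:
    pathToPublish: $(Build.ArtifactStagingDirectory)/uat-drop.zip
    artifactName: combined
```

In a classic release pipeline the same effect can be achieved by adding the equivalent download and archive tasks to the UAT stage.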
I was hoping to get some feedback on using Azure Pipelines and what the best practices are for my situation.
We have recently migrated from TFS 2017 and are in the process of re-writing all our pipelines. Before the upgrade we were using builds and releases with the legacy build tasks. We would now like to set up more useful YAML pipelines.
Let me set the stage with what we currently have:
10+ microservices
10 individual builds that trigger from a folder in the repo for each one
10 releases that get created on successful build
3 environments per release (Dev, QA, UAT)
So in summary... a build of a single microservice triggers off of a commit to a folder in the branch. The successful build then triggers a release to Dev. Once Dev completes, a user can go start a QA deployment by clicking the release.
In the new Azure Pipelines world, what would be the best approach to this model?
We would like to have all the builds happen in a single pipeline (each stage would be a microservice?)
How do we trigger only on a commit to that folder?
What would the CD look like? Should it be in the same pipeline and be a new stage?
How can we easily add environments without having to keep copy/pasting all the code for each environment? Ideally I would like to be able to just add a variable and have a new environment to deploy to (see the sketch below).
I am open to any suggestions here. I am OK if I am way off; I am looking for the best practices and the best approach to this.
TIA
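For what it's worth, a rough sketch of the pattern usually suggested for this setup: one YAML pipeline per microservice with a path filter, plus a stage template instantiated once per environment, so adding an environment is one more template reference rather than copy/pasted stages. All folder and file names below are assumptions:

```yaml
# services/orders/azure-pipelines.yml  (one pipeline per microservice)
trigger:
  branches:
    include:
    - main
  paths:
    include:
    - services/orders/*    # build only when this folder changes

stages:
- stage: Build
  jobs:
  - job: Build
    pool:
      vmImage: ubuntu-latest
    steps:
    - script: dotnet build services/orders

# CD as extra stages in the same pipeline, one template per environment.
- template: deploy-stage.yml
  parameters:
    environment: Dev
- template: deploy-stage.yml
  parameters:
    environment: QA
- template: deploy-stage.yml
  parameters:
    environment: UAT
```

```yaml
# deploy-stage.yml  (hypothetical shared template)
parameters:
- name: environment
  type: string

stages:
- stage: Deploy_${{ parameters.environment }}
  jobs:
  - deployment: Deploy
    environment: ${{ parameters.environment }}   # Azure DevOps Environment
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo "deploying to ${{ parameters.environment }}"
```

The manual "click to promote to QA" behavior maps to approvals and checks configured on the QA Environment in Azure DevOps, rather than to anything in the YAML itself.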
I have read some tutorials about Azure DevOps. There are three things I do not really understand:
Can we say azure-pipeline.yml on Azure is the equivalent of .gitlab-ci.yml on GitLab?
I have read some tutorials talking about azure-pipeline.yml files and others talking about azure-pipelines.yml. What is the correct name for this file?
I have created a "DevOps Project" from the Azure services page. I chose ASP.NET Core Application and Windows Web App. I can see a pipeline on dev.azure.com, but there is no YAML file in the source code, so I am wondering where this file is...
Thanks
Can we say azure-pipeline.yml on Azure is the equivalent of .gitlab-ci.yml on GitLab?
YAML gives you a way to treat your configuration management as code, by defining the build and release pipelines in code. The file is named azure-pipelines.yml in Azure DevOps and .gitlab-ci.yml on GitLab.
I have read some tutorials talking about azure-pipeline.yml files and others talking about azure-pipelines.yml. What is the correct name for this file?
azure-pipelines.yml is the default name, but if you need to, you can change the name of the YAML file by clicking on "Edit in the visual designer".
I have created a "DevOps Project" from the Azure services page. I chose ASP.NET Core Application and Windows Web App. I can see a pipeline on dev.azure.com, but there is no YAML file in the source code, so I am wondering where this file is...
There are two ways to create a pipeline: using the classic editor or using YAML code. The file should definitely be there if you created the pipeline using YAML.
Answering your questions in the same order they were asked.
Yes, azure-pipelines.yml is the equivalent of .gitlab-ci.yml. In both cases you bundle together a number of commands you want to execute.
It is the same file; it is called azure-pipelines.yml. A nice thing about editing the file in Azure DevOps is the built-in validation and auto-completion tool, which helps a lot, especially with indentation (the spaces before a command), a common issue with YAML files.
If you created the CI/CD pipelines through the development center in the Azure portal, you see a UI rendering of the YAML, but for every step you can still view the YAML code if you wish. If you create a new build pipeline, the default path you are offered is to use a YAML file.
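To make the first point concrete, a minimal azure-pipelines.yml is structurally very close to a minimal .gitlab-ci.yml; this is just a generic sketch, not tied to the DevOps project mentioned above:

```yaml
# azure-pipelines.yml
trigger:
- main              # run on commits to main, the counterpart of a
                    # branch rule in .gitlab-ci.yml

pool:
  vmImage: ubuntu-latest

steps:
- script: echo "build"
  displayName: Build
- script: echo "test"
  displayName: Test
```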
We are building a set of serverless functions in Azure, but we are having difficulty deciding how to structure our source (Azure Git) and DevOps to support them.
I am thinking of a single Git repo, with all function apps housed independently within projects. We may have a lot of these function apps; we see great value in small code segments that do utility-type work, and I don't want dozens and dozens of independent repos just because of DevOps deployments. Is there a way to have a unique build and release process for each project, rather than for the repo as a whole? We aren't clear how this can be done, and searches have come up empty. I thought it was possible to have unique build YAMLs per project across many projects in a single repo, but it is unclear how to implement the DevOps build and release pipelines to support this approach, i.e. when only a single function gets updated and we need to deploy it. Any guidance on whether this is possible and how to approach it would be great.
I haven't done this myself, but I'm in a similar situation where I'd like to have multiple functions (and other stuff) in a single Git repo for simplicity, but only build/deploy them as needed when they change. It looks like you can have multiple pipelines on a single repo with a different YAML file for each pipeline. The steps are documented in this link, and summarized below:
In Azure DevOps, create a new Pipeline.
For the "Where is your code?" page, at the bottom choose the Use the classic editor option.
Select your source repo and branch.
On the "Select a template" screen, choose the YAML option at the top. Hit Apply.
There is a YAML file path field where you can specify the path and name of your YAML file for the pipeline.
You may want to set the pipeline to run manually if you don't want a build each time there's a commit to the repo.
EDIT: There may be an easier way to do this now. If you go through the New Pipeline wizard and select your source location, then on the Configure tab, at the bottom, you can choose the Existing Azure Pipelines YAML file option. This lets you select a custom YAML file directly.
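Putting this together, each function app could get its own YAML file with a path filter, so a commit touching only one function builds and deploys only that function. Everything below (folder layout, service connection, and app names) is an assumption for illustration:

```yaml
# functions/FuncA/azure-pipelines.yml - registered as its own pipeline
trigger:
  branches:
    include:
    - main
  paths:
    include:
    - functions/FuncA/*   # ignore commits to the other function apps
# (use "trigger: none" instead if you only want manual runs)

pool:
  vmImage: ubuntu-latest

steps:
- script: >-
    dotnet publish functions/FuncA -c Release
    -o $(Build.ArtifactStagingDirectory)/FuncA

- task: AzureFunctionApp@1
  inputs:
    azureSubscription: my-service-connection   # assumed service connection
    appType: functionApp
    appName: func-a                            # assumed function app name
    package: $(Build.ArtifactStagingDirectory)/FuncA
```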
I'm coming from a long SSIS background and we're looking to use Azure Data Factory v2, but I'm struggling to find any (clear) way of working with multiple environments. In SSIS we would have project parameters tied to the Visual Studio project configuration (e.g. development/test/production etc...), and say there were two parameters, SourceServerName and DestinationServerName, these would point to different servers depending on whether we were in development or test.
From my initial playing around I can't see any way to do this in Data Factory. I've searched Google, of course, but any information I've found seems to be about CI/CD, then talks about Git 'branches' and is difficult to follow.
I'm basically looking for a very simple explanation and example of how this would be achieved in Azure Data Factory v2 (if it is even possible).
It works differently. You create an instance of data factory per environment and your environments are effectively embedded in each instance.
So here's one simple approach:
Create three data factories: dev, test, prod
Create your linked services in the dev environment pointing at dev sources and targets
Create the same named linked services in test, but of course these point at your test systems
Now when you "migrate" your pipelines from dev to test, they use the same logical name (just like a connection manager)
So you don't designate an environment at execution time or map variables or anything... everything in test just runs against test because that's the way the linked services have been defined.
That's the first step.
The next step is to connect only the dev ADF instance to Git. If you're a newcomer to Git it can be daunting but it's just a version control system. You save your code to it and it remembers every change you made.
Once your pipeline code is in git, the theory is that you migrate code out of git into higher environments in an automated fashion.
If you go through the links provided in the other answer, you'll see how you set it up.
I do have an issue with this approach, though: you have to look up all of your environment values in a key store, which to me is silly, because why do we need to designate the test server's hostname every time we deploy to test?
One last thing: if you have a pipeline that doesn't use a linked service (say, a REST pipeline), I haven't found a way to make it environment-aware. I ended up building logic around the current data factory's name to dynamically change endpoints.
This is a bit of a brain dump, but feel free to ask questions.
Although it's not recommended - yes, you can do it.
Take a look at the Linked Service; in this case, I have a connection to an Azure SQL Database.
You have the option to use dynamic content for both the server name and the database name.
Just add a parameter to your pipeline, pass it to the Linked Service and use in the required field.
Let me know whether I explained it clearly enough.
Yes, it's possible, although not as simple as it was in VS for SSIS.
1) First of all: there is no desktop application for developing ADF, only the browser.
Therefore developers should make their changes in the DEV environment, and for many reasons the best way to do this is with a Git repository connected.
2) Then, you need "only":
a) Publish the changes (this creates/updates the adf_publish branch in Git).
b) With Azure DevOps, deploy the code from adf_publish, replacing the required parameters for the target environment (as sketched below).
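A rough sketch of what step b) can look like in a YAML pipeline, using Microsoft's ARM template deployment task; the factory, repo, resource group, and connection names are placeholders:

```yaml
# Deploy the ARM template that ADF publishes to the adf_publish branch.
steps:
- checkout: git://MyProject/MyRepo@adf_publish   # assumed project/repo

- task: AzureResourceManagerTemplateDeployment@3
  inputs:
    deploymentScope: Resource Group
    azureResourceManagerConnection: my-service-connection
    subscriptionId: $(subscriptionId)
    resourceGroupName: rg-adf-test
    location: West Europe
    csmFile: my-dev-adf/ARMTemplateForFactory.json
    csmParametersFile: my-dev-adf/ARMTemplateParametersForFactory.json
    # This is where the per-environment values are replaced (step b).
    overrideParameters: >-
      -factoryName "my-test-adf"
      -SqlServer_connectionString "$(TestSqlConnectionString)"
```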
I know that at the beginning it sounds horrible, but the sooner you set up an environment like this, the more time you save when developing pipelines.
How to do these things step by step?
I describe all the steps in the following posts:
- Setting up Code Repository for Azure Data Factory v2
- Deployment of Azure Data Factory with Azure DevOps
I hope this helps.
There are tons of resources online on how to replace JSON configuration files in a release pipeline, like this one. I configured this and it works. However, we have multiple integration tests that reach the database too, and these tests run at build time. I haven't seen any option yet to replace config values in the build pipeline. Does it exist? Or do I really have to use this custom task (see screenshot below)?
There is now an out-of-the-box task from Microsoft for this. It's called File Transform. It's currently in preview, but it works really well! I haven't had any issues whatsoever with it, and it works the same as you would configure it in the release pipeline. I would recommend it any day!
Below you can see my configuration.
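In YAML form, such a File Transform configuration looks roughly like this (the folder path and target file pattern are assumptions); File Transform substitutes pipeline variables whose names match keys in the targeted JSON files:

```yaml
steps:
- task: FileTransform@1
  displayName: Replace test config values
  inputs:
    folderPath: $(System.DefaultWorkingDirectory)
    fileType: json
    targetFiles: '**/appsettings.json'
# A pipeline variable named e.g. "ConnectionStrings.Default" would
# overwrite that key in appsettings.json before the tests run.
- script: dotnet test
```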
There is no out-of-the-box task whose sole purpose is to replace tokens/values in files (in the release pipeline, too, the replacement is done as part of the Azure App Service Deploy task, which is not just for replacing JSON configuration).
You need to use an external extension from here or write a PowerShell script for that.