I have a project with two components: one component needs to run on a public network and the other works on a private network.
So if I configure the pool in application.yml as follows, the first component fails:
pool:
  vmImage: windows-latest
  name: private-network
But if I give the pool without the private pool name, as below, then the second component fails:
pool:
  vmImage: windows-latest
  #name: private-network
How can I solve this? How can I switch networks while the pipeline is running?
You can use multiple stages to run your pipeline on the private and public networks in your YAML script, like below:-
trigger:
- main

stages:
- stage: Public
  jobs:
  - job: PublicJob
    pool:
      vmImage: windows-latest
    steps:
    - script: |
        # Run the component that needs public network access
        echo "Running component on public network"

- stage: Private
  dependsOn: Public
  jobs:
  - job: PrivateJob
    pool:
      name: private-network
    steps:
    - script: |
        # Run the component that needs private network access
        echo "Running component on private network"
For the private network stage, add the name of your private pool, and use both pools in a multi-stage pipeline like the one above.
Alternatively, you can also reference both pools through a variable or runtime parameter in Azure DevOps and pick the one you need when the pipeline runs.
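As a minimal sketch of that alternative, a runtime parameter could drive the pool selection; the parameter name and values below are illustrative and not taken from the answer above:
parameters:
- name: poolName
  displayName: Which pool should this run use?
  type: string
  default: private-network
  values:
  - private-network
  - public-hosted

trigger:
- main

jobs:
- job: Build
  pool:
    ${{ if eq(parameters.poolName, 'public-hosted') }}:
      vmImage: windows-latest
    ${{ else }}:
      name: ${{ parameters.poolName }}
  steps:
  - script: echo "Running on ${{ parameters.poolName }}"
Runtime parameters are resolved at queue time, so the template expressions above pick the pool before the run starts.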
Is there a way within a Release pipeline in Azure to pass variables created in one stage to the next stage?
I see lots of documentation about using echo "##vso[task....."; however, this does not seem to work within the release pipeline.
I am mainly using bash scripts, and I can reuse a variable within the same stage in different tasks, but not in subsequent stages.
This seems like an essential feature to me, passing variables through stages...
Is there a way to do this?
If you want to pass variables from one stage to another stage in YAML release pipelines, you are supposed to use the echo "##vso[task.setvariable ...]" logging command; follow the doc.
For a simple example:
stages:
- stage: BuildStage
  jobs:
  - job: BuildJob
    steps:
    - bash: echo "##vso[task.setvariable variable=TestArtifactName;isoutput=true]testValue"
      name: printvar

- stage: DeployWebsiteStage
  lockBehavior: sequential
  dependsOn: BuildStage
  condition: succeeded()
  variables:
    BuildStageArtifactFolderName: $[stageDependencies.BuildStage.BuildJob.outputs['printvar.TestArtifactName']]
  jobs:
  - deployment: DeployWebsite
    environment:
      name: webapplicationdeploy
    strategy:
      runOnce:
        deploy:
          steps:
          - task: PowerShell@2
            inputs:
              targetType: 'inline'
              script: Write-Host "BuildStageArtifactFolderName:" $(BuildStageArtifactFolderName)
You should then get the value that was set in the 'BuildStage' stage.
If you want to pass variables from one stage to another stage in classic release pipelines:
1. Set the release permission 'Manage releases' for the Project Collection Build Service to Allow.
2. Toggle on 'Allow scripts to access the OAuth token' for the first stage.
3. Set a variable like 'StageVar' in release scope.
4. Add a first PowerShell task (inline) in the first stage to create a variable 'myVar' in that stage.
5. Add a second PowerShell task in the first stage to update the release definition and the release variable (StageVar); see the sketch after this list.
6. Add a PowerShell task in the second stage to retrieve the value of myVar via the release variable StageVar.
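A rough PowerShell sketch of steps 4 to 6; the REST call, URL variables, and api-version are my assumptions based on the Releases - Update Release API, not details from the answer above:
# First stage, task 1: create 'myVar' for later tasks in this stage
Write-Host "##vso[task.setvariable variable=myVar]someValue"

# First stage, task 2: copy myVar into the release-scoped variable 'StageVar'
# (needs 'Allow scripts to access the OAuth token' and the Manage releases permission)
$url = "$(System.TeamFoundationServerUri)$(System.TeamProjectId)/_apis/release/releases/$(Release.ReleaseId)?api-version=7.1"
$headers = @{ Authorization = "Bearer $(System.AccessToken)" }
$release = Invoke-RestMethod -Uri $url -Headers $headers
$release.variables.StageVar.value = "$(myVar)"
Invoke-RestMethod -Uri $url -Method Put -Headers $headers -ContentType "application/json" -Body ($release | ConvertTo-Json -Depth 100)

# Second stage: the updated release variable is available again as a macro
Write-Host "myVar from the first stage: $(StageVar)"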
You could refer to the blog for more details.
It works on my side (the result screenshots for the first and second stages are omitted here).
You can make use of an Azure DevOps variable group to store your variables and call them in your release pipelines across multiple stages and multiple pipelines within a project. You can also create and manage the variable group with the Azure CLI.
I have stored 2 variables, the database name and password, in the SharedVariables group. I can call this variable group in 2 ways: 1) via the YAML pipeline and 2) via classic/release pipelines.
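As a hedged example of the CLI route, something along these lines could create that group; the organization, project, group id, and values are placeholders, and it requires the azure-devops CLI extension:
# point the CLI at your org/project (placeholders)
az devops configure --defaults organization=https://dev.azure.com/myorg project=MyProject

# create the group with the database name, then add the password as a secret variable
az pipelines variable-group create --name SharedVariables --variables databaseservername=mydbserver
az pipelines variable-group variable create --group-id <group-id> --name databaseserverpassword --value '<password>' --secret true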
Yaml:-
trigger:
- main

pool:
  vmImage: ubuntu-latest

variables:
- group: SharedVariables

steps:
- script: |
    echo $(databaseserverpassword)
When I ran the pipeline, the database server password was masked in the output like below:-
In Release pipeline:-
You can add this variable within multiple pipelines and multiple stages in your release like below:-
But the above method helps you store static values in the variable group, not the output variables passed from build to release, unless you specifically assign those values manually in the variable group. You can make use of this extension: Variable Tools for Azure DevOps Services - Visual Studio Marketplace.
With this extension, you can store your variables from the build in a JSON file and load that JSON file in the next release stage by calling the task from the extension.
Save the build variable in a file:-
Load the build variable in a release:-
Another method is to store the variables in a file as a build artifact and then call the build artifact in the release pipeline with the below yaml code:-
trigger:
- dev

pool:
  vmImage: windows-latest

parameters:
- name: powerenvironment
  displayName: Where to deploy?
  type: string

steps:
- task: PowerShell@2
  inputs:
    targetType: 'inline'
    script: |
      $variable = '${{parameters.powerenvironment}}'
      $variable | Out-File $(Build.ArtifactStagingDirectory)\filewithvariable.txt
      Get-Content $(Build.ArtifactStagingDirectory)\filewithvariable.txt

- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'
    ArtifactName: 'drop'
    publishLocation: 'Container'
Then download the artifact and run the tasks in your release pipeline; a sketch of reading the value back follows below.
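As a hedged sketch, reading that value back in a classic release stage could look roughly like this inline PowerShell; the artifact alias '_myBuild' and the paths are assumptions:
# Classic releases download linked build artifacts under $(System.ArtifactsDirectory)\<alias>\<artifact name>
$value = Get-Content "$(System.ArtifactsDirectory)\_myBuild\drop\filewithvariable.txt"
# Re-expose the value as a pipeline variable for later tasks in this stage
Write-Host "##vso[task.setvariable variable=powerenvironment]$value"
Write-Host "powerenvironment is: $value"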
Reference:-
Pass parameters from build to release pipelines on Azure DevOps - GeralexGR
Another simple method is to run the PowerShell script to store the build output as JSON in the published artifact and read the content in the release pipeline like below:-
# serialize the values you need (hypothetical example object) into a file published with the build artifact
@{ myVar = "someValue" } | ConvertTo-Json | Out-File "file.json"
# read them back in the release stage
Get-Content "file.json" | ConvertFrom-Json
You can also reference the dependencies from various stages and call them in another stage within a pipeline with the below YAML code:-
stages:
- stage: A
  jobs:
  - job: A1
    steps:
    - bash: echo "##vso[task.setvariable variable=shouldrun;isOutput=true]true"
      # or on Windows:
      # - script: echo ##vso[task.setvariable variable=shouldrun;isOutput=true]true
      name: printvar

- stage: B
  condition: and(succeeded(), eq(dependencies.A.outputs['A1.printvar.shouldrun'], 'true'))
  dependsOn: A
  jobs:
  - job: B1
    steps:
    - script: echo hello from Stage B
Reference:-
bash - How to pass a variable from build to release in azure build to release pipeline - Stack Overflow By PatrickLu-MSFT
azure devops - How to get the variable value in TFS/AzureDevOps from Build to Release Pipeline? - Stack Overflow By jessehouwing
VSTS : Can I access the Build variables from Release definition? By Calidus
Expressions - Azure Pipelines | Microsoft Learn
azure-pipeline.yml
trigger:
- master

parameters:
- name: config
  displayName: Execution Environment
  type: string
  default: QA
  values:
  - QA
  - PreProd
  - Prod

pool:
  vmImage: 'windows-latest'
The above works perfectly, so in Azure the Execution Environment parameter is shown when I run the pipeline.
If however I attempt to put the parameters in a template as follows:
azure-pipeline.yml
trigger:
- master

extends:
  template: parameters.yml

pool:
  vmImage: 'windows-latest'
parameters.yml
parameters:
- name: config
  displayName: Execution Environment
  type: string
  default: QA
  values:
  - QA
  - PreProd
  - Prod
Then when I run the pipeline the parameter is not shown.
In summary I'm trying to re-use a parameters.yml in different pipelines but extends: template: does not seem to work even though per this link it should:
https://learn.microsoft.com/en-us/azure/devops/pipelines/security/templates?view=azure-devops#set-required-templates
Runtime parameters are something different from template parameters, and having the latter in your pipeline will not cause them to show in the UI. There is no way to template runtime parameters; you need to repeat them in each pipeline where you expect to have them.
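As a minimal sketch of repeating the runtime parameter and passing its value on to the template (the wiring is illustrative; parameters.yml would declare a matching 'config' template parameter and the stages that consume it):
# azure-pipeline.yml: runtime parameters live in the pipeline itself, not in the template
parameters:
- name: config
  displayName: Execution Environment
  type: string
  default: QA
  values:
  - QA
  - PreProd
  - Prod

trigger:
- master

extends:
  template: parameters.yml
  parameters:
    config: ${{ parameters.config }}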
I have a YAML file that resembles the following:
stages:
- stage: A
  pool:
    vmImage: 'windows-2019'
  jobs:
  - job: a
    steps:
    - task: PowerShell@2
      inputs:
        targetType: 'inline'
        script: |
          # edits file "$(System.DefaultWorkingDirectory)/myfolder/myfile.json"

- stage: B
  dependsOn: A
  pool:
    vmImage: 'windows-2019'
  jobs:
  - job: b
    steps:
    - task: PowerShell@2
      inputs:
        targetType: 'inline'
        script: |
          # uses file "$(System.DefaultWorkingDirectory)/myfolder/myfile.json"
I have split my pipeline into two stages; A: edits a file in a repository and B: works with the edited file.
My problem is, the files seem to get reset between stages. Is there any way of keeping the changes throughout the stages, rather than resetting them?
I don't want to publish artifacts and so on, because in stage B (although not shown in the YAML above) I am running multiple PowerShell script files that contain hardcoded file paths, and it would just be a mess rewriting the file paths to point at the artifacts directory before running the stage.
Based on my test, the cause of this issue is that the two stages run on different agent machines.
For example: Stage A -> agent machine name 'fv-az146', Stage B -> agent machine name 'fv-az151'.
You could check the agent information in the build log -> Initialize job.
Is there any way of keeping the changes throughout the stages, rather
than resetting them?
Since you don't want to publish artifacts, you could try to use Self-hosted agents to run two stages.
You need to add demands to the agent to ensure that the stages run on the same Self-hosted agent.
According to this doc:
The demands keyword is supported by private pools.
You can't specify specific "Agent Capabilities" on Microsoft-hosted agents, so you can't ensure that two stages run on the same agent.
Update:
Since the two stages are now running on the same agent, the checkout step in Stage B would overwrite the files edited in Stage A.
So you also need to add - checkout: none in Stage B.
Here is the updated YAML template:
stages:
- stage: A
  pool:
    name: Pool name
    demands:
    - Agent.Name -equals agentname1
  jobs:
  - job: a
    steps:
    - task: PowerShell@2
      ...

- stage: B
  dependsOn: A
  pool:
    name: Pool name
    demands:
    - Agent.Name -equals agentname1
  jobs:
  - job: b
    steps:
    - checkout: none
    - task: PowerShell@2
      ...
The overall workflow: Stage A edits the files and saves them to $(System.DefaultWorkingDirectory).
Then Stage B can directly use the files in $(System.DefaultWorkingDirectory).
The files in $(System.DefaultWorkingDirectory) keep the changes made in Stage A and are still there in Stage B.
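A hedged sketch of what the elided PowerShell steps could look like; the property name 'someProperty' and the new value are placeholders for illustration:
# Stage A: edit the JSON file in place in the default working directory
$path = "$(System.DefaultWorkingDirectory)/myfolder/myfile.json"
$json = Get-Content $path -Raw | ConvertFrom-Json
$json.someProperty = "new value"    # placeholder edit
$json | ConvertTo-Json -Depth 10 | Set-Content $path

# Stage B (same agent via demands, with checkout: none): the edited file is still on disk
Get-Content "$(System.DefaultWorkingDirectory)/myfolder/myfile.json"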
In Azure DevOps I am able to create an ACR service connection and use the Docker@2 task to log in, build, and push an image as part of the first job of my pipeline.
In the second job of my pipeline I want to use the image I built in the first job and run some stuff inside it. However, even though I supply the service connection name (the same as in the first job), my pipeline keeps failing at 'docker login' with: denied: requested access to the resource is denied.
How can I make this work with using the Service Connection that works just fine for the 1st job?
- job: BuildDockerImage
  pool:
    vmImage: 'ubuntu-16.04'
  steps:
  - task: Docker@2
    displayName: Build /push image
    inputs:
      command: buildAndPush
      repository: XYZ
      dockerfile: Dockerfile
      containerRegistry: ABC
      tags: |
        $(Build.SourceVersion)

- job: TestCode
  dependsOn: BuildDockerImage
  condition: succeeded()
  timeoutInMinutes: 200
  pool:
    vmImage: 'ubuntu-16.04'
  container:
    image: ABC/XYZ:$(Build.SourceVersion)
For this issue, the cause could be the lack of the endpoint parameter.
Containers can be hosted on registries other than Docker Hub. To host an image on Azure Container Registry or another private container registry, add a service connection to the private registry. Then you can reference it in a container spec:
container:
  image: myprivate/registry:xxx
  endpoint: private_dockerhub_connection
Here is a document you can refer to.
I read the environments documentation here and the issues opened under the environment resource, however I find it impossible to achieve my goal:
I would like to use a parametrized yaml template in order to deploy to multiple environments like below:
parameters:
  pool_name: ''
  aks_namespace: ''
  environment: ''

jobs:
- job: preDeploy
  displayName: preDeploy
  pool:
    name: $(pool_name)
  steps:
  - template: cd_step_prerequisites.yml

- deployment: Deploy
  displayName: Deploy
  dependsOn: preDeploy
  condition: succeeded()
  variables:
    secret_name: acrecret
  pool:
    name: dockerAgents
  environment: '$(environment).$(aks_namespace)'   # <-- the line in question
  strategy:
    runOnce:
      deploy:
        steps:
        - template: cd_step_aks_deploy.yml

- job: postDeploy
  displayName: postDeploy
  dependsOn: Deploy
  condition: succeeded()
  pool:
    name: $(pool_name)
  steps:
  - template: cd_step_postrequisites.yml
I would like to use this approach so that I only host a minimal pipeline.yml next to my code, and then I would have all the templates in a different repo and call them from the main pipeline, as such:
resources:
  repositories:
  - repository: self
  - repository: devops
    type: git
    name: devops

- stage: CD1
  displayName: Deploy to Alpha
  jobs:
  - template: pipeline/cd_job_api.yml@devops   # <-- the templated job from the devops repo
    parameters:
      pool_name: $(pool_name)
      aks_namespace: $(aks_namespace)
      app_name: $(app_name)
      app_image_full_name: $(app_image_full_name)
      environment: alpha
Then I would be able to pass the $environment variable in order to manipulate multiple deployment targets (AKS clusters/ groups of namespaces) from one template.
Currently this seems to be impossible as the default AzureDevOps parser fails when I try to run my pipeline, with the message "$(environment) environment does not contain x namespace" which tells me that the variable doesn't get expanded.
Is this planning to be implemented anytime soon? If not, are there any alternatives to use only one parametrized job template to deploy to multiple environments?
I think you would need to either parse the files and do a token replace with a script, or use one of the existing tasks that does that.
Your main alternative would be Helm. It allows you to create templates and pass in variables to render those templates; a sketch follows below.
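A minimal Helm sketch of that idea; the chart layout, value names, and release name below are assumptions for illustration:
# values.yaml: per-environment defaults, overridden at deploy time
aksNamespace: alpha-namespace
environmentName: alpha

# templates/deployment.yaml (fragment): the values are injected when the chart is rendered
metadata:
  namespace: {{ .Values.aksNamespace }}
  labels:
    environment: {{ .Values.environmentName }}
The pipeline stage then only needs to run something like helm upgrade --install myapp ./chart --namespace alpha-namespace --set environmentName=alpha --set aksNamespace=alpha-namespace, passing different values for each environment.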
Maybe I'm a bit late to the party, but I was also struggling with this problem and found this open thread.
I found this "closed" issue on GitHub. The key points for me in the issue are this comment with a partial solution and this other comment pointing to the explanation of why it is not working. Quoting Microsoft's article:
It also answers another common issue: why can't I use variables to resolve service connection / environment names? Resources are authorized before a stage can start running, so stage- and job-level variables aren't available. Pipeline-level variables can be used, but only those explicitly included in the pipeline. Variable groups are themselves a resource subject to authorization, so their data is likewise not available when checking resource authorization.
Regarding the solution, based on the first comment I referenced, I ended up creating a new variable group with variables following the naming convention product.environment.varname. Then I added this group at the beginning of the pipeline (global scope) and referenced the variables using macro syntax: $(var)
Quick example:
variables:
- group: Product.Pipelines.Environments

jobs:
- job: preDeploy
  displayName: preDeploy
  pool:
    name: $(pool_name)
  steps:
  - template: cd_step_prerequisites.yml

- deployment: Deploy
  displayName: Deploy
  dependsOn: preDeploy
  condition: succeeded()
  variables:
    secret_name: acrecret
  pool:
    name: dockerAgents
  environment: $(product.dev.environmentname)   # this is the variable within the variable group
  strategy:
    runOnce:
      deploy:
        steps:
        - template: cd_step_aks_deploy.yml
The variable group will contain among other variables:
product.dev.environmentname: Development
product.stg.environmentname: Staging
product.prd.environmentname: Production