Multiple deployment pipes on Bitbucket - Azure

I have a solution that contains a web application and an Azure Function.
So two different pipes should be used for deployment:
microsoft/azure-web-apps-deploy:1.0.3 - for the web app
microsoft/azure-functions-deploy:1.0.2 - for the Azure Function
How should they be combined in one yml file? Or do I need separate yml files?
It looks like I can't have multiple deployment steps for a single environment?

Bitbucket Pipelines supports neither multiple files for defining the pipeline nor different deployment steps for the same deployment environment, so: yes, same file, same step.
pipelines:
  default:
    - step:
        deployment: production
        script:
          - pipe: [docker://]microsoft/azure-web-apps-deploy:1.0.3
            variables:
              FOO: bar
          - pipe: [docker://]microsoft/azure-functions-deploy:1.0.2
            variables:
              BAR: foo
So, deploying to "production" will deploy both the webapp and the function.
The docker:// prefix for the pipe is only necessary if the pipes aren't registered as official pipes but are arbitrary Docker images with smart entrypoints.
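For example, a pipe that is just a plain Docker Hub image (the account and image name below are purely illustrative) would be referenced like this:

script:
  - pipe: docker://my-dockerhub-account/my-deploy-image:1.0.0  # hypothetical image, not a registered pipe
    variables:
      FOO: bar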
Alternatively, you could use custom deployment environments like "Production Webapp" and "Production Function" so you could deploy them separately, like
pipelines:
  default:
    - parallel:
        - step:
            deployment: production-webapp
            script:
              - pipe: [docker://]microsoft/azure-web-apps-deploy:1.0.3
                variables:
                  FOO: bar
        - step:
            deployment: production-function
            script:
              - pipe: [docker://]microsoft/azure-functions-deploy:1.0.2
                variables:
                  BAR: foo
and Bitbucket will keep track of the deployments made to each "environment" and also let you redeploy to them individually.

Related

How to declare variable groups once in template and use in multiple repos in Azure yaml pipeline

How do I authorize a variable in a YAML template in another repo to be used in a different repo? In other words, how do I declare variables in a template once and use them in multiple repos in Azure DevOps?
I am trying to migrate from classic pipelines to YAML in Azure DevOps, so I am setting up a repo to host all YAML templates so they can be referenced and reused by multiple repos for builds, etc.
I wrote this YAML pipeline to prototype it:
name: FirstPL

trigger:
- my_test_branch

pool: my-agent

resources:
  repositories:
    - repository: blah
      type: git
      name: foo/bar
      ref: refs/heads/poc

variables:
  - template: pipeline_vars.yml@blah

steps:
  - script: echo $(variable_from_pipeline_vars)
However, when I run this I get the following error:
An error occurred while loading the YAML build pipeline. Variable group was not found or is not authorized for use. For authorization details, refer to https://aka.ms/yamlauthz.
How can I declare my variables and variable groups once, in a template in a repo that is dedicated to hosting those templates, and then use them over and over again in multiple repos using the resources syntax above? Also, I tried to find a way to authorize the variables template but couldn't find anything to enable this.
You can directly add the variable group in your Azure DevOps project in the Library tab and save all the variables from pipeline_vars.yml in that variable group.
Now you can access this variable group in the YAML pipelines of multiple repos like below:
# Starter pipeline
# Start with a minimal pipeline that you can customize to build and deploy your code.
# Add steps that build, run tests, deploy, and more:
# https://aka.ms/yaml

pool:
  vmImage: ubuntu-latest

workspace:
  clean: all

resources:
  repositories:
    - repository: repo_a
      type: git
      name: InternalProjects/repo_a
      trigger:
        - main
        - release
    - repository: repo_b
      type: git
      name: InternalProjects/repo_b
      trigger:
        - main

variables:
  - group: SharedVariables

steps:
  - checkout: repo_a
  - checkout: repo_b

  - script: |
      echo $(databaseServerName)

  - task: AzureCLI@2
    inputs:
      azureSubscription: 'xxx subscription(xxxxxxxxx-f598-44d6-b4fd-xxxxxxxxxxxx)'
      scriptType: 'bash'
      scriptLocation: 'inlineScript'
      inlineScript: 'az resource list --location uksouth'
When the pipeline runs, it asks for permission to approve access to the variable group.
I tried the same with another repo, repo_b, in the project, and it asks to approve access for the repositories and the variable group as well.
If you want this variable group to be accessible from multiple stages/repos/pipelines within the project without an authorization prompt, you can click Security at the top of the variable group and allow it.
I also created a variables template and referenced it in the YAML pipeline, to use it across multiple repos by checking out another repo, like below:
# Starter pipeline
# Start with a minimal pipeline that you can customize to build and deploy your code.
# Add steps that build, run tests, deploy, and more:
# https://aka.ms/yaml

pool:
  vmImage: ubuntu-latest

workspace:
  clean: all

resources:
  repositories:
    - repository: repo_a
      type: git
      name: InternalProjects/repo_a
      trigger:
        - main
        - release
    - repository: repo_b
      type: git
      name: InternalProjects/repo_b
      trigger:
        - main

variables:
  - template: pipeline_vars.yml

steps:
  - checkout: repo_a
  - checkout: repo_b

  - script: |
      echo $(environmentName)

  - task: AzureCLI@2
    inputs:
      azureSubscription: 'xxx subscription(xxxxxxxx-f598-44d6-b4fd-e2b6e97xxxxxx)'
      scriptType: 'bash'
      scriptLocation: 'inlineScript'
      inlineScript: 'az resource list --location uksouth'
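For reference, pipeline_vars.yml here is a plain variables template. Its exact contents are not shown, but a minimal sketch (the values and the second variable name are placeholders) could look like:

# pipeline_vars.yml - illustrative sketch
variables:
  environmentName: 'Development'
  databaseServerName: 'dev-sql-server'  # placeholder, mirroring the variable-group example above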
When I tried to reference the same template in another repo where pipeline_vars.yml does not exist, the pipeline could not read the file, as expected.
In that case you can make use of variable groups, as shown above, to reference the variables in the pipeline.
One of the possible reasons for this is that the project hosting the repository with the variables does not allow access to its repositories from YAML pipelines.
To verify, go to your project's Settings -> Pipelines -> Settings and check "Protect access to repositories in YAML pipelines". This setting is enabled by default. You can either turn it off or add a checkout step to your pipeline YAML, as sketched below. See here for more information.
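A minimal sketch of the checkout-step approach, assuming the templates live in a repository resource aliased templates (the alias and repo name are illustrative):

resources:
  repositories:
    - repository: templates              # hypothetical alias for the repo hosting the templates
      type: git
      name: InternalProjects/templates   # hypothetical project/repo name

steps:
  - checkout: self
  - checkout: templates                  # the explicit checkout grants the job access to the protected repo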

Staging deployment with Bitbucket pipeline and Azure Static Web App

I could successfully deploy my project to the production environment using the provided documentation: https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/static-web-apps/bitbucket.md
pipelines:
  branches:
    main:
      - step:
          name: Deploy to test
          deployment: test
          script:
            - pipe: microsoft/azure-static-web-apps-deploy:main
              variables:
                APP_LOCATION: '$BITBUCKET_CLONE_DIR'
                OUTPUT_LOCATION: '$BITBUCKET_CLONE_DIR/dist'
                API_TOKEN: $deployment_token
But there is no information about how to deploy to environments other than production, e.g. staging, qa, release...
With Azure Pipelines, the value can be set with the deployment_environment parameter.
Does anyone have a solution for it?
If you are using an Azure Static Web App with multiple slots like prod, staging or dev, then you have to specify DEPLOYMENT_ENVIRONMENT in your bitbucket-pipelines.yml file.
Example:
pipelines:
  branches:
    develop:
      - step:
          name: Deploy to staging
          deployment: staging
          script:
            - pipe: microsoft/azure-static-web-apps-deploy:main
              variables:
                APP_LOCATION: '$BITBUCKET_CLONE_DIR/dist'
                API_TOKEN: $deployment_token
                DEPLOYMENT_ENVIRONMENT: Staging
    master:
      - step:
          name: Deploy to production
          deployment: production
          script:
            - pipe: microsoft/azure-static-web-apps-deploy:main
              variables:
                APP_LOCATION: '$BITBUCKET_CLONE_DIR/dist'
                API_TOKEN: $deployment_token
                DEPLOYMENT_ENVIRONMENT: Prod
If you are using multiple Static Web App resources instead, you can follow the answer provided by David Torres: you have to create a token for each web app and set them in the variables.
I just got this working earlier today. I ran into the same issue; it looks like staging environments are only supported when using GitHub or Azure Pipelines. I was able to get this to work by creating multiple Static Web App resources in the Azure Portal. Prod gets the "paid" tier, the develop branch gets the "free" tier.
Here is what the pipelines YAML looks like:
pipelines:
  branches:
    develop:
      - step:
          name: Deploy to staging
          deployment: staging
          script:
            - pipe: microsoft/azure-static-web-apps-deploy:main
              variables:
                APP_LOCATION: '$BITBUCKET_CLONE_DIR/dist'
                API_TOKEN: $deployment_token_beta
    master:
      - step:
          name: Deploy to production
          deployment: production
          script:
            - pipe: microsoft/azure-static-web-apps-deploy:main
              variables:
                APP_LOCATION: '$BITBUCKET_CLONE_DIR/dist'
                API_TOKEN: $deployment_token
Then I added a new deployment_token_beta to the repository's deployment variables, using the key from the second web app.

Do we have a standard pipeline option in Azure DevOps to share a library with other pipelines, like in Jenkins?

I would like to have one standard pipeline defined and use it as a shared library in all my jobs that have the common steps.
Yes, it's called Template:
Use templates to define your logic once and then reuse it several times. Templates combine the content of multiple YAML files into a single pipeline. You can pass parameters into a template from your parent pipeline.
For example, Job reuse:
First yaml:
# File: templates/jobs.yml
jobs:
- job: Build
  steps:
  - script: npm install
- job: Test
  steps:
  - script: npm test
Second yaml:
# File: azure-pipelines.yml
jobs:
- template: templates/jobs.yml  # Template reference
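The quoted docs also mention passing parameters into a template from the parent pipeline; a minimal sketch of that (the file and parameter names are illustrative) looks like:

# File: templates/build-job.yml (illustrative)
parameters:
- name: buildConfiguration
  type: string
  default: 'Release'

jobs:
- job: Build
  steps:
  - script: echo Building in ${{ parameters.buildConfiguration }} mode

# File: azure-pipelines.yml (illustrative)
jobs:
- template: templates/build-job.yml  # Template reference with parameters
  parameters:
    buildConfiguration: 'Debug'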

Can an Azure YAML Pipelines <deployment job> use variable environments?

I read the environments documentation here and the issues opened under the environment resource, but I find it impossible to achieve my goal:
I would like to use a parametrized YAML template in order to deploy to multiple environments, like below:
parameters:
  pool_name: ''
  aks_namespace: ''
  environment: ''

jobs:
  - job: preDeploy
    displayName: preDeploy
    pool:
      name: $(pool_name)
    steps:
      - template: cd_step_prerequisites.yml

  - deployment: Deploy
    displayName: Deploy
    dependsOn: preDeploy
    condition: succeeded()
    variables:
      secret_name: acrecret
    pool:
      name: dockerAgents
    environment: '$(environment).$(aks_namespace)'
    strategy:
      runOnce:
        deploy:
          steps:
            - template: cd_step_aks_deploy.yml

  - job: postDeploy
    displayName: postDeploy
    dependsOn: Deploy
    condition: succeeded()
    pool:
      name: $(pool_name)
    steps:
      - template: cd_step_postrequisites.yml
I would like to use this approach so that I only host a minimal pipeline.yml next to my code, and then I would have all the templates in a different repo and call them from the main pipeline, as such:
resources:
  repositories:
    - repository: self
    - repository: devops
      type: git
      name: devops

- stage: CD1
  displayName: Deploy to Alpha
  jobs:
    - template: pipeline/cd_job_api.yml@devops
      parameters:
        pool_name: $(pool_name)
        aks_namespace: $(aks_namespace)
        app_name: $(app_name)
        app_image_full_name: $(app_image_full_name)
        environment: alpha
Then I would be able to pass the $environment variable in order to manipulate multiple deployment targets (AKS clusters/ groups of namespaces) from one template.
Currently this seems to be impossible, as the default Azure DevOps parser fails when I try to run my pipeline with the message "$(environment) environment does not contain x namespace", which tells me that the variable doesn't get expanded.
Is this planned to be implemented anytime soon? If not, are there any alternatives that allow using only one parametrized job template to deploy to multiple environments?
I think you would need to parse the files and do a token replace, either with a script or with one of the existing token-replacement tasks.
Your main alternative would be Helm. It allows you to create templates and pass in variables to render those templates.
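A minimal sketch of the Helm idea (the chart layout and value names are illustrative, not taken from the question):

# values.yaml (illustrative)
namespace: alpha
image: myregistry/my-app:1.0.0

# templates/deployment.yaml (illustrative)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: {{ .Values.namespace }}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: {{ .Values.image }}

Each environment then gets its own values file (or --set overrides) rendered against the same templates.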
Maybe I'm a bit late to the party, but I was also struggling with this problem and found this open thread.
I found this "closed" issue on GitHub. The key points for me in the issue are this comment with a partial solution and this other comment pointing to the explanation of why it is not working. Quoting Microsoft's article:
It also answers another common issue: why can't I use variables to resolve service connection / environment names? Resources are authorized before a stage can start running, so stage- and job-level variables aren't available. Pipeline-level variables can be used, but only those explicitly included in the pipeline. Variable groups are themselves a resource subject to authorization, so their data is likewise not available when checking resource authorization.
Regarding the solution, based on the first comment I referenced, I ended up creating a new variable group with variables following the naming convention product.environment.varname. Then I added this group at the beginning of the pipeline (global scope) and referenced the variables using macro syntax: $(var)
Quick example:
variables:
  - group: Product.Pipelines.Environments

jobs:
  - job: preDeploy
    displayName: preDeploy
    pool:
      name: $(pool_name)
    steps:
      - template: cd_step_prerequisites.yml

  - deployment: Deploy
    displayName: Deploy
    dependsOn: preDeploy
    condition: succeeded()
    variables:
      secret_name: acrecret
    pool:
      name: dockerAgents
    environment: $(product.dev.environmentname)  # this is the variable within the variable group
    strategy:
      runOnce:
        deploy:
          steps:
            - template: cd_step_aks_deploy.yml
The variable group will contain among other variables:
product.dev.environmentname: Development
product.stg.environmentname: Staging
product.prd.environmentname: Production

GitlabCi deploy on multiple servers

I use a GitLab runner and it works fine for a single server. The .gitlab-ci.yml is simple:
stages:
  - test
  - deploy

test:
  stage: test
  image: php
  tags:
    - docker
  script:
    - echo "Run tests..."

deploy:
  stage: deploy
  tags:
    - shell
  script:
    - sh deploy.sh
As I said, this is fine for a single server, but how can I deploy the same app to another server? I tried with the same gitlab-runner config (same config.toml), but then it was only updating one of them, seemingly at random.
Is there some way for GitLab CI to be picked up by more than one runner and deploy to all of the servers according to .gitlab-ci.yml?
You can register several runners (e.g. tagged serverA and serverB) from different servers and have multiple deployment jobs, each of them performed by a different runner. This is because you can set more than one tag in a job and only a runner having all the tags will be used.
stages:
  - test
  - deploy

test:
  stage: test
  image: php
  tags:
    - docker
  script:
    - echo "Run tests..."

deployA:
  stage: deploy
  tags:
    - shell
    - serverA
  script:
    - sh deploy.sh

deployB:
  stage: deploy
  tags:
    - shell
    - serverB
  script:
    - sh deploy.sh
However, take into account the situation where one of the deployment jobs fails - this would leave you with two different versions of the code on the two servers. Depending on your situation this might or might not be a problem.
Yes there is, just set up two jobs for the same stage:
stages:
  - deploy

deploy:one:
  stage: deploy
  script:
    - echo "Hello CI one"

deploy:two:
  stage: deploy
  script:
    - echo "Hello CI two"
If necessary you can use tags on your runners to choose which one to use.
Since 2016, you now have Environments and deployments
Environments describe where code is deployed.
Each time GitLab CI/CD deploys a version of code to an environment, a deployment is created.
GitLab:
Provides a full history of deployments to each environment.
Tracks your deployments, so you always know what is deployed on your servers.
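A minimal sketch of attaching environments to the two deploy jobs from the earlier answer (the environment names and URLs are illustrative):

deployA:
  stage: deploy
  tags:
    - shell
    - serverA
  script:
    - sh deploy.sh
  environment:
    name: production/server-a
    url: https://server-a.example.com  # illustrative URL

deployB:
  stage: deploy
  tags:
    - shell
    - serverB
  script:
    - sh deploy.sh
  environment:
    name: production/server-b
    url: https://server-b.example.com  # illustrative URL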
It integrates well with Prometheus, and with GitLab 13.11 (April 2021), you even have:
Update a deploy freeze period in the UI
In GitLab 13.2, we added the ability to create a deploy freeze period in the project’s CI/CD settings.
This capability helps teams avoid unintentional deployments, reduce uncertainty, and mitigate deployment risks.
However, it was not possible to update deploy freezes.
In GitLab 13.11, we are adding the ability to edit an existing deploy freeze. This way, you can update the freeze period to match your business needs.
See Documentation and Issue.
As shown in "gitlab-ci.yml deployment on multiple hosts", you can use YAML anchors to trigger parallel deployment on multiple environments, which means "multiple servers".
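A minimal sketch of the YAML-anchor approach (the job names and tags are illustrative):

.deploy_template: &deploy_definition
  stage: deploy
  script:
    - sh deploy.sh

deploy_server_a:
  <<: *deploy_definition
  tags:
    - serverA

deploy_server_b:
  <<: *deploy_definition
  tags:
    - serverB

Both jobs run in the deploy stage and share the same script; only the runner tags differ.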
