New images are not being deployed to AKS

I've split out the initial azure-pipelines.yml to use templates, iteration, etc. For whatever reason, the new images are not being deployed despite using the latest tag and/or imagePullPolicy: Always.
I basically have two pipelines, PR and Release:
PR is triggered when a pull request to merge into production is submitted. It runs unit tests, builds the Docker image, does integration tests, etc., and then pushes the image to ACR if everything passed.
When the PR pipeline is passing and the PR is approved, it is merged into production, which then triggers the Release pipeline.
Here is an example of one of my k8s deployment manifests (the pipeline says unchanged when these are applied):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: admin-v2-deployment-prod
  namespace: prod
spec:
  replicas: 3
  selector:
    matchLabels:
      component: admin-v2
  template:
    metadata:
      labels:
        component: admin-v2
    spec:
      containers:
      - name: admin-v2
        imagePullPolicy: Always
        image: appacr.azurecr.io/app-admin-v2:latest
        ports:
        - containerPort: 4001
---
apiVersion: v1
kind: Service
metadata:
  name: admin-v2-cluster-ip-service-prod
  namespace: prod
spec:
  type: ClusterIP
  selector:
    component: admin-v2
  ports:
  - port: 4001
    targetPort: 4001
And here are the various pipeline-related .yamls I've been splitting out:
Both PR and Release:
# templates/variables.yaml
variables:
  dockerRegistryServiceConnection: '<GUID>'
  imageRepository: 'app'
  containerRegistry: 'appacr.azurecr.io'
  dockerfilePath: '$(Build.SourcesDirectory)'
  tag: '$(Build.BuildId)'
  imagePullSecret: 'appacr1c5a-auth'
  vmImageName: 'ubuntu-latest'
PR:
# pr.yaml
trigger: none

resources:
- repo: self

pool:
  vmImage: $(vmImageName)

variables:
- template: templates/variables.yaml

stages:
- template: templates/changed.yaml
- template: templates/unitTests.yaml
- template: templates/build.yaml
  parameters:
    services:
    - api
    - admin
    - admin-v2
    - client
- template: templates/integrationTests.yaml
# templates/build.yaml
parameters:
- name: services
  type: object
  default: []

stages:
- stage: Build
  displayName: Build stage
  jobs:
  - job: Build
    displayName: Build
    steps:
    - ${{ each service in parameters.services }}:
      - task: Docker@2
        displayName: Build and push an ${{ service }} image to container registry
        inputs:
          command: buildAndPush
          repository: $(imageRepository)-${{ service }}
          dockerfile: $(dockerfilePath)/${{ service }}/Dockerfile
          containerRegistry: $(dockerRegistryServiceConnection)
          tags: |
            $(tag)
Release:
# release.yaml
trigger:
  branches:
    include:
    - production

resources:
- repo: self

variables:
- template: templates/variables.yaml

stages:
- template: templates/publish.yaml
- template: templates/deploy.yaml
  parameters:
    services:
    - api
    - admin
    - admin-v2
    - client
# templates/deploy.yaml
parameters:
- name: services
  type: object
  default: []

stages:
- stage: Deploy
  displayName: Deploy stage
  dependsOn: Publish
  jobs:
  - deployment: Deploy
    displayName: Deploy
    pool:
      vmImage: $(vmImageName)
    environment: 'App Production AKS'
    strategy:
      runOnce:
        deploy:
          steps:
          - task: KubernetesManifest@0
            displayName: Create imagePullSecret
            inputs:
              action: createSecret
              secretName: $(imagePullSecret)
              kubernetesServiceConnection: 'App Production AKS'
              dockerRegistryEndpoint: $(dockerRegistryServiceConnection)
          - ${{ each service in parameters.services }}:
            - task: KubernetesManifest@0
              displayName: Deploy to ${{ service }} Kubernetes cluster
              inputs:
                action: deploy
                kubernetesServiceConnection: 'App Production AKS'
                manifests: |
                  $(Pipeline.Workspace)/k8s/aks/${{ service }}.yaml
                imagePullSecrets: |
                  $(imagePullSecret)
                containers: |
                  $(containerRegistry)/$(imageRepository)-${{ service }}:$(tag)
Both PR and Release pass...
The new images are in ACR...
I've pulled the images to verify they have the latest changes...
They just aren't getting deployed to AKS.
Any suggestions for what I am doing wrong here?

For whatever reason, the new images are not being deployed despite using latest tag
How should Kubernetes know that there is a new image? Kubernetes config is declarative, and Kubernetes is already running what once was the "latest" image.
Here is an example of one of my k8s deployment manifests (the pipeline says unchanged when these are applied)
Yeah, it is unchanged because the declared desired state has not changed. The Deployment manifest states what should be deployed; it is not a command.
Proposed solution
Whenever you build an image, always give it a unique name. And whenever you want to deploy something, always state the unique name of what should be running; Kubernetes will then manage the change in an elegant zero-downtime way using rolling deployments, unless you configure it to behave differently.

In your deployment you pull
image: appacr.azurecr.io/app-admin-v2:latest
Since there is no digest but simply the tag latest referenced, the deployment says:
"You want latest? I have latest running!"
The important part is the "running". The pull policy Always doesn't help if there is no need to pull in the first place.
Potential solutions:
Change something in your deployment that will cause a redeployment of the pods, e.g. kubectl rollout restart deployment admin-v2-deployment-prod -n prod. Then it will actually pull the image again.
Cleaner solution: don't use latest! Use semantic versioning, a date, or whatever strategy matches your approach; see the sketch below. That way the tag changes on every build and the image will always be pulled.
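As a minimal sketch of the cleaner solution, assuming the question's existing tag: '$(Build.BuildId)' variable is what produces the unique tag, the pod spec in the manifest would pin that tag instead of latest (1234 below is just a stand-in for a real build ID):

    spec:
      containers:
      - name: admin-v2
        # 1234 stands in for the unique per-build tag, e.g. $(Build.BuildId)
        image: appacr.azurecr.io/app-admin-v2:1234
        ports:
        - containerPort: 4001

Every build then changes the desired state, so applying the manifest triggers a rolling update. If you instead rely on the KubernetesManifest@0 containers input to rewrite the tag at deploy time, the image name in the manifest still has to match what that input lists; pinning the tag in the manifest avoids depending on that substitution.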

Related

ADO YAML failing: No repository found by name templates

I'm trying to edit an ADO YAML file down to the bare minimum in order to isolate another issue.
When I run Validate, it comes back with the following error:
No repository found by name templates
Here's the general gist of my YAML:
#resources:
#  repositories:
#  - repository: templates
#    type: git
#    name: TemplateProject/TemplateRepo

name: $(VersionName)

trigger:
  branches:
    include:
    - main
  batch: true
  paths:
    exclude: $(ListOfExclusions)

stages:
- template: core/setVersion.yml@templates
- stage: Build
  pool: linux
  jobs:
  - job: BuildDocker
    displayName: Build and Push Docker Image
    pool: linux
    steps:
    - task: Docker@2
      displayName: Build and push an image to container registry
      inputs:
        command: buildAndPush
        repository: $(RepoName)
        dockerfile: $(Build.SourcesDirectory)/Dockerfile
        containerRegistry: $(dockerRegistryServiceConnection)
        tags: |
          $(Tag)
What could be going wrong? The error message makes me think the YAML isn't clean.
It turns out I caused a simple typo when commenting out the resources section of the YAML. There was a template reference in the stages section that also needed to be commented out, and I neglected to do so.
Once I updated the code to read:
stages:
# - template: core/setVersion.yml@templates
- stage: Build
  pool: linux
  jobs:
  - job: BuildDocker
    # etc...
my YAML validates OK.

YAML - Execute template in a deployment job

I'm currently trying to set up my complete build/release pipeline with YAML files.
First I tried with different stages (dev/staging/prod) and it worked.
Now I wanted to add an approval so that the deploy doesn't happen automatically on each system.
Therefore I added an environment in TFS with an approval check.
But when I try to set up the YAML file I always get an error.
I don't know how to set this up properly.
This is my main yaml file called release-pipeline.yaml
trigger:
- master

pool:
  name: POOL

stages:
- stage: BuildSolution
  jobs:
  - job: BuildSolutionJob
    displayName: Build
    workspace:
      clean: all
    steps:
    - template: yaml/BuildSolution.yml
    - template: yaml/CopyFiles.yml
    - template: yaml/PublishArtifact.yml
- stage: DeployOn_STAGING_System
  dependsOn: BuildSolution
  jobs:
  - job: Deploy_STAGING
  - template: yaml/Deploy.yml
    parameters:
      Environment: 'SITE'
      Staging: 1
- stage: Deploy_DEV_System
  dependsOn: BuildSolution
  jobs:
  - deployment: Deploy_DEV
    environment: ENVCHECK_DEV
    strategy:
      runOnce:
        deploy:
          steps:
          - template: yaml/Deploy.yml
            parameters:
              Environment: 'SITE'
              ViewDeploy: 1
This is my Deploy.yml file which I want to execute (only some snippets):
parameters:
- name: Environment
  type: string
- name: ProdSystem
  type: number
  default: 0
- name: Staging
  type: number
  default: 0
- name: ViewDeploy
  type: number
  default: 0

jobs:
- job:
  variables:
    artifactName: $[stageDependencies.BuildSolution.BuildSolutionJob.outputs['SetVariables.artifactName']]
    version: $[stageDependencies.BuildSolution.BuildSolutionJob.outputs['SetVariables.version']]
  steps:
  - task: PowerShell@2
    displayName: Display given parameters
    inputs:
      targetType: inline
      script: >
        Write-Host "ArtifactName: " $(artifactName)
        Write-Host "Environment for Deploy: " ${{ parameters.Environment }}
        Write-Host "##vso[task.setvariable variable=isStaging]${{ parameters.Staging }}"
      failOnStderr: true
When I try to execute it I get the following error:
/release-pipeline.yml: Unexpected value 'parameters'.
How do I need to change this so that it works with the template in both cases, with and without the environment approval check?
I tried https://samlearnsazure.blog/2020/02/05/approvals-in-environments/ and of course different structures for the call, but nothing helped.
In the tutorials they always have the steps directly below "deploy", but because I have different sites and environments I want/need the template file to save work.
I found another post which goes in the same direction. I reworked my complete template so that I could use the approach from this: DevOps template with conditional manual approval job.
It's not the same as I wanted, but it works.
My goal was to avoid creating an environment for a site with no checks, and only create one for sites where I wanted the approval check.
With the above-mentioned solution I need to create an environment for each site, regardless of whether I have checks or not.
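For reference, the usual way to make one template consumable both from a plain job and from a deployment job is to make it a steps template instead of a jobs template; here is a minimal sketch under that assumption, reusing the parameter names from the question:

# yaml/Deploy.yml reduced to a steps template (sketch)
parameters:
- name: Environment
  type: string
- name: Staging
  type: number
  default: 0

steps:
- task: PowerShell@2
  displayName: Display given parameters
  inputs:
    targetType: inline
    script: >
      Write-Host "Environment for Deploy: " ${{ parameters.Environment }}
      Write-Host "##vso[task.setvariable variable=isStaging]${{ parameters.Staging }}"

It can then be referenced under steps: of an ordinary job and under strategy.runOnce.deploy.steps: of a deployment job alike; the stageDependencies variables would move into the consuming job.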

Error: az cli script failed. UnhandledPromiseRejectionWarning: Error: az cli script failed. GitHub CI pipeline that I set up keeps failing

env:
  AZURE_WEBAPP_PACKAGE_PATH: '.'
  DOTNET_VERSION: '6.0.x'

on:
  push:
    branches: master

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
      - name: Set up .NET Core
        uses: actions/setup-dotnet@v1
        with:
          dotnet-version: ${{ env.DOTNET_VERSION }}
      - name: Set up dependency caching for faster builds
        uses: actions/cache@v2
        with:
          path: ~/.nuget/packages
          key: ${{ runner.os }}-nuget-${{ hashFiles('**/packages.lock.json') }}
          restore-keys: |
            ${{ runner.os }}-nuget-
      - name: Build with dotnet
        run: dotnet build --configuration Release
      - name: Test
        run: dotnet test --no-restore --verbosity normal
      - name: dotnet publish
        run: dotnet publish -c Release -o ${{env.DOTNET_ROOT}}/myapp
      - name: Upload artifact for deployment job
        uses: actions/upload-artifact@v3
        with:
          name: .net-app
          path: ${{env.DOTNET_ROOT}}/myapp
      - name: Login to Azure
        uses: azure/login@v1
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}

  deploy:
    runs-on: ubuntu-latest
    needs: build
    environment:
      name: 'development'
      url: ${{ steps.deploy-to-webapp.outputs.webapp-url }}
    steps:
      - name: Download artifact from build job
        uses: actions/download-artifact@v3
        with:
          name: .net-app
      - name: Deploy to Azure
        uses: azure/CLI@v1
        with:
          azcliversion: latest
          inlineScript: |
            az deployment group create \
              -- name \
              -- resource-group \
              -- template-file Template/template.json \
              -- parameters storageAccountType=Standard_LRS
I had this a few weeks ago; something to do with using "latest" was the issue.
Could you try replacing it with the below and see if the issue goes away?
- name: Deploy to Azure
  uses: azure/CLI@v1
  with:
    azcliversion: 2.37.0
I'm not sure what your issue is, but you are not logged in to Azure during the deploy phase of your workflow.
You are logging in to Azure and then starting a new job. That is a fresh container with no previous knowledge of your account.
I would move the Login part to the deploy job and see if that solves your issue.
Apart from that, if the config you posted is the entire workflow, you will run into an issue with this part: url: ${{ steps.deploy-to-webapp.outputs.webapp-url }}, since the step deploy-to-webapp is not in the config and as such there is no output to pull from.
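A minimal sketch of that change, assuming the same secrets and files as in the question; the --name and --resource-group values were missing from the posted workflow and are left as placeholders here (note also that az options take no space between -- and the option name):

deploy:
  runs-on: ubuntu-latest
  needs: build
  steps:
    - name: Download artifact from build job
      uses: actions/download-artifact@v3
      with:
        name: .net-app
    # Log in inside this job: the deploy runner starts fresh and
    # does not inherit the build job's credentials.
    - name: Login to Azure
      uses: azure/login@v1
      with:
        creds: ${{ secrets.AZURE_CREDENTIALS }}
    - name: Deploy to Azure
      uses: azure/CLI@v1
      with:
        azcliversion: 2.37.0
        inlineScript: |
          az deployment group create \
            --name <deployment-name> \
            --resource-group <resource-group> \
            --template-file Template/template.json \
            --parameters storageAccountType=Standard_LRS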

How to view or get the service connection id in Azure DevOps

Is there a way to view or get the service connection ids for the service connections that I create in Azure DevOps?
I need them in the YAML pipelines that I create. For example, dockerRegistryServiceConnection in the following is used in the Docker@2 task for setting containerRegistry, as you can see below.
variables:
- name: vmImageName
  value: ubuntu-latest
# Container registry service connection established during pipeline creation
- name: dockerRegistryServiceConnection
  value: 'd072f8f7-fag1-asdf-467e-7fd5jfr5gjh6' # This is not a true id
- name: imageRepository
  value: 'globoticket.services.discount'
- name: containerRegistry
  value: 'reacrtrialsregistry.azurecr.io'
- name: dockerfileFolderPath
  value: 'src/Services/GloboTicket.Services.Discount'
- name: tag
  value: '$(Build.BuildId)'

name: $(date:yyyyMMdd)$(rev:.r)

stages:
- stage: Build
  jobs:
  - job: buildWebApp
    displayName: Build Release pipeline for Discount Service on Master branch
    pool:
      vmImage: $(vmImageName)
    steps:
    - checkout: self
    - task: Docker@2
      displayName: Build the image
      inputs:
        command: build
        repository: $(imageRepository)
        dockerfile: $(dockerfileFolderPath)/Dockerfile
        buildContext: .
        tags: |
          $(tag)
    - script: |
        sudo docker tag $(imageRepository):$(tag) $(containerRegistry)/$(imageRepository):$(tag)
      displayName: 'Tag container image before push'
    - task: Docker@2
      displayName: Push a tagged image to container registry
      inputs:
        command: push
        repository: $(imageRepository)
        dockerfile: $(dockerfileFolderPath)/Dockerfile
        buildContext: .
        containerRegistry: $(dockerRegistryServiceConnection)
        tags: |
          $(tag)
- stage: DeployToDev
  displayName: Deploy to Dev Env
  jobs:
  - deployment:
    pool:
      vmImage: ubuntu-latest
    environment: Dev
    strategy:
      runOnce:
        deploy:
          steps:
          - script: |
              echo Any deploy stage starts here.
            displayName: 'Command Line Script to write out some messages'
The containerRegistry input for the Docker task takes the name of the service connection, not the ID, according to the docs:
Container registry (Optional): Name of the Docker registry service connection
If you still need the ID, you can click the service connection in the list under Project Settings -> Service Connections and fetch the service connection ID from the resourceId parameter in the URL.
All Azure tasks I know of use the service connection name, not the ID. The same is true for the Docker@2 task:
Source.
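If you need the IDs for several connections at once, the azure-devops CLI extension can also list them; a sketch, assuming the extension is installed and authenticated (e.g. via a PAT in AZURE_DEVOPS_EXT_PAT), with placeholder organization and project values:

steps:
- script: |
    # Lists each service connection's name next to its GUID
    az devops service-endpoint list \
      --organization https://dev.azure.com/<org> \
      --project <project> \
      --query "[].{name:name, id:id}" --output table
  displayName: List service connection names and IDs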

Set dependsOn to previous stage without hardcoding the value in Azure Pipelines

Using Azure Pipelines, how can you set dependsOn to the previous stage without hardcoding the value?
Is there some sort of predefined variable I can use? I only saw ones for the current stage, not previous stages.
My pipeline performs a Terraform deployment. The first (build) stages run a terraform plan in parallel for each environment, which is then used by the (deploy) stages that run terraform apply using the plan file in each environment sequentially.
I am using two different templates for the plan (build) and apply (deploy) stages. I am currently using a parameter to specify the dependsOn list values, where I need to specify all the plan (build) stages explicitly. For the apply (deploy) stages, however, I was hoping to default the parameter to something like $(Build.PreviousStage) to make it a bit more elegant.
# azure-pipelines.yml
# LINES REMOVED #
stages:
# Terraform Plan (DevTest)
- template: templates/plan.yml
  parameters:
    az_service_connection: DevTest-Terraform
    environment: DevTest
# Terraform Plan (Stage)
- template: templates/plan.yml
  parameters:
    az_service_connection: Stage-Terraform
    environment: Stage
# Terraform Plan (Prod)
- template: templates/plan.yml
  parameters:
    az_service_connection: Prod-Terraform
    environment: Prod
# Terraform Deploy (DevTest)
- template: templates/deploy.yml
  parameters:
    az_service_connection: DevTest-Terraform
    depends_on:
    - DevTest_Plan
    - Stage_Plan
    - Prod_Plan
    environment: DevTest
# Terraform Deploy (Stage)
- template: templates/deploy.yml
  parameters:
    az_service_connection: Stage-Terraform
    depends_on:
    - DevTest_Deploy
    environment: Stage
# Terraform Deploy (Prod)
- template: templates/deploy.yml
  parameters:
    az_service_connection: Prod-Terraform
    depends_on:
    - Stage_Deploy
    environment: Prod
# ./templates/deploy.yml
parameters:
- name: az_service_connection
  type: string
- name: depends_on
  type: object
- name: environment
  type: string

stages:
# Terraform Deploy
- stage: ${{ parameters.environment }}_Deploy
  displayName: Terraform Deploy (${{ parameters.environment }})
  condition: and(succeeded(), in(variables['Build.Reason'], 'IndividualCI', 'BatchedCI', 'Manual'))
  dependsOn:
    ${{ parameters.depends_on }}
# LINES REMOVED #
An if condition checking whether the depends_on parameter is empty, together with defaulting the parameter to "", allowed me to work around this issue: I set the depends_on parameter for the first (DevTest) deploy stage while not setting it for the others.
As @Krzysztof Madej stated in his answer/comments, no such variable exists to set the previous stage for dependsOn, but a stage will automatically depend on the previous stage if dependsOn is not set.
# azure-pipelines.yml
# LINES REMOVED #
stages:
# Terraform Plan (DevTest)
- template: templates/plan.yml
  parameters:
    az_service_connection: DevTest-Terraform
    environment: DevTest
# Terraform Plan (Stage)
- template: templates/plan.yml
  parameters:
    az_service_connection: Stage-Terraform
    environment: Stage
# Terraform Plan (Prod)
- template: templates/plan.yml
  parameters:
    az_service_connection: Prod-Terraform
    environment: Prod
# Terraform Deploy (DevTest)
- template: templates/deploy.yml
  parameters:
    az_service_connection: DevTest-Terraform
    depends_on:
    - DevTest_Plan
    - Stage_Plan
    - Prod_Plan
    environment: DevTest
# Terraform Deploy (Stage)
- template: templates/deploy.yml
  parameters:
    az_service_connection: Stage-Terraform
    environment: Stage
# Terraform Deploy (Prod)
- template: templates/deploy.yml
  parameters:
    az_service_connection: Prod-Terraform
    environment: Prod
# templates/deploy.yml
parameters:
- name: az_service_connection
  type: string
- name: depends_on
  type: object
  default: ''
- name: environment
  type: string

stages:
# Terraform Deploy
- stage: ${{ parameters.environment }}_Deploy
  displayName: Terraform Deploy (${{ parameters.environment }})
  condition: and(succeeded(), in(variables['Build.Reason'], 'IndividualCI', 'BatchedCI', 'Manual'))
  ${{ if ne(length(parameters.depends_on), 0) }}:
    dependsOn:
      ${{ parameters.depends_on }}
There is no variable like the one you mentioned, but
When you define multiple stages in a pipeline, by default, they run one after the other in the order in which you define them in the YAML file. Pipelines must contain at least one stage with no dependencies.
and
# if you do not use a dependsOn keyword, stages run in the order they are defined
stages:
- stage: QA
  jobs:
  - job:
    ...
- stage: Prod
  jobs:
  - job:
    ...
As it is written here. So if you do not define dependsOn, a stage implicitly depends on the previous stage.
