I'm in the process of migrating my solution from classic pipelines to YAML pipelines in Azure DevOps.
One of the steps in the pipeline is the creation of an ACI container from the image I build and push in the previous steps.
When I run this step using YAML pipeline it fails with the message -
"The image 'registry.azurecr.io/performancerunner:1.0' in container group 'performance-testing-container-group' is not accessible. Please check the image and registry credential."
When I run the exact same ACI container creation command from the classic pipeline, it works.
I'm using the AzureCLI task, which looks like this:
- task: AzureCLI@1
  displayName: 'Run performance tests'
  inputs:
    azureSubscription: $(AZURE_SUBSCRIPTION)
    scriptType: 'bash'
    scriptLocation: 'scriptPath'
    scriptPath: 'LoadTesting/deployment/scripts/run_tests.sh'
The content of run_tests.sh looks like this:
az container create -g $PERFORMANCE_TESTING_RG_NAME --registry-login-server "$PERFORMANCE_TESTING_REGISTRY_NAME.azurecr.io" --registry-username $PERFORMANCE_TESTING_REGISTRY_NAME \
--registry-password $REGISTRY_PASSWORD --image $IMAGE_NAME \
-n $PERFORMANCE_TESTING_CONTAINER_NAME --cpu 1 --memory 8 --restart-policy Never \
--command-line "dotnet LoadTests.dll -n testApp -c 1000"
When I echo this command, copy it (with the variables substituted) from the logs, and run it locally, it works fine.
The error you got can occur in two situations: either the image with that tag does not exist in the registry you specified, or the credentials for the registry are wrong.
From the message, the image itself looks fine, so you need to focus on the other of the two reasons. You set the credentials through variables, so a good way to debug is to output the variables and check that they are correct.
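For example, a quick way to do that inside run_tests.sh could be something like this (variable names taken from your script; only the length of the password is printed so nothing sensitive ends up in the logs):
# Print the non-secret values, and only the length of the password
echo "Registry: $PERFORMANCE_TESTING_REGISTRY_NAME.azurecr.io"
echo "Image:    $IMAGE_NAME"
echo "Password length: ${#REGISTRY_PASSWORD}"
Also note that if REGISTRY_PASSWORD comes from a secret pipeline variable, secret variables are not mapped into a script's environment automatically; you would need to pass it explicitly on the AzureCLI task, for example:
env:
  REGISTRY_PASSWORD: $(REGISTRY_PASSWORD)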
I am currently struggling with Azure DevOps. I am trying to deploy a VM through the DevOps pipeline and pass it the SSH key that I have saved in the Library as a Secure File.
trigger:
- none

pool:
  vmImage: ubuntu-latest

variables:
- group: 'deployment-test'

steps:
- task: AzureCLI@2
  displayName: "Create VM"
  inputs:
    azureSubscription: sp-test1234
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      az vm create \
        --resource-group testrg23142 \
        --name testvm \
        --image Canonical:UbuntuServer:18.04-LTS:latest \
        --custom-data "${BASH_SOURCE%/*}"/cloud-init.yaml \
        --ssh-key-values ............ \
        --vnet-name vm-test-vnet \
        --subnet vm-test-subnet \
        --assign-identity '[system]'
I can't find an example of how to get my public key "test43.pub" from a Secure File and pass it to --ssh-key-values. Maybe there is something fundamental I am missing. I found the following link https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/utility/download-secure-file?view=azure-devops, which is about making the secure file accessible to the agent machine. The agent machine is the temporary VM defined in pool, which has nothing to do with the VM I am creating. I am stuck; any help is much appreciated.
You could set a secret variable in your pipeline for the contents of your mykey.pub file. Then, call the variable in your pipeline definition as $(myPubKey). For the secret part of your key, use the Secure File library in Azure Pipelines.
Here's a more specific example for using an SSH key:
https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/utility/install-ssh-key?view=azure-devops
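As a rough sketch of the Secure File route (names follow your snippet; DownloadSecureFile@1 exposes the downloaded path as $(sshKey.secureFilePath), and az vm create accepts a public key file path for --ssh-key-values):
steps:
- task: DownloadSecureFile@1
  name: sshKey                       # referenced below as $(sshKey.secureFilePath)
  displayName: 'Download public key'
  inputs:
    secureFile: 'test43.pub'         # name of the secure file in the Library

- task: AzureCLI@2
  displayName: "Create VM"
  inputs:
    azureSubscription: sp-test1234
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      az vm create \
        --resource-group testrg23142 \
        --name testvm \
        --image Canonical:UbuntuServer:18.04-LTS:latest \
        --ssh-key-values "$(sshKey.secureFilePath)" \
        --vnet-name vm-test-vnet \
        --subnet vm-test-subnet \
        --assign-identity '[system]'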
I am using AKS. I am trying to fetch the IP of the service after my deployment through DevOps, so that I can pass the IP on to API Management for further configuration. Right now my task looks like this:
- task: Kubernetes@1
  inputs:
    connectionType: 'Kubernetes Service Connection'
    kubernetesServiceEndpoint: 'string-Conn'
    namespace: '<appservices>'
    command: 'get'
    arguments: 'get services --namespace appservices authsvc --output jsonpath=''{.status.loadBalancer.ingress[0].ip}'''
    secretType: 'dockerRegistry'
    containerRegistryType: 'Azure Container Registry'
  name: 'GetSvc'
When I run the command locally, I get the IP of the load balancer. But how can I pass the output from this task to the next task? Previously, when I used Azure CLI scripts, I could set the variable as part of the script itself, like the one below, but I am not sure how to put the output of this task into a variable.
inlineScript: |
  $something = (az storage container generate-sas --account-name <container> --name armtemplate --permissions r --expiry $(date -u -d "30 minutes" +%Y-%m-%dT%H:%MZ))
  Write-Host($something)
  Write-Output("##vso[task.setvariable variable=SasToken;]$something")
I have followed the approach suggested by Amit Baranes, since I am not clear on how to capture the script output without a variable name. I used the Azure CLI task and ran it, and it was successful:
- task: AzureCLI@2
  inputs:
    azureSubscription: '<Service-Conn>'
    scriptType: 'pscore'
    scriptLocation: 'inlineScript'
    inlineScript: |
      az aks get-credentials -n $(clusterName) -g $(clusterRG)
      $externalIp = (kubectl get -n $(ns) services $(svc) --output jsonpath='{.status.loadBalancer.ingress[0].ip}')
      Write-Host($externalIp)
      Write-Output("##vso[task.setvariable variable=AKSURL;]$externalIp")
We could use the logging command ##vso[task.setvariable variable=SasToken;]$something to set variables in scripts.
But based on your description, we recommend that you use an output variable to pass the IP. For example, assume we have a task called MyTask which sets an output variable called MyVar. We can reference it from a later step in the same job:
steps:
- task: MyTask@1 # this step generates the output variable
  name: ProduceVar # because we're going to depend on it, we need to name the step
- script: echo $(ProduceVar.MyVar) # this step uses the output variable
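For your AKS case specifically, a minimal sketch of that pattern (step and variable names are illustrative) could look like the following; the isOutput=true flag is what makes the variable addressable as $(GetSvc.AKSURL) from later steps:
steps:
- task: AzureCLI@2
  name: GetSvc                        # the step name becomes the output variable prefix
  inputs:
    azureSubscription: '<Service-Conn>'
    scriptType: 'pscore'
    scriptLocation: 'inlineScript'
    inlineScript: |
      az aks get-credentials -n $(clusterName) -g $(clusterRG)
      $externalIp = (kubectl get -n $(ns) services $(svc) --output jsonpath='{.status.loadBalancer.ingress[0].ip}')
      Write-Output("##vso[task.setvariable variable=AKSURL;isOutput=true]$externalIp")

- script: echo $(GetSvc.AKSURL)       # a later step consumes the output variable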
I have a weird issue; possibly I am missing something. The web app is created as a container with default settings (Quickstart), and I am using Azure Container Registry to push my Docker image. I have a pipeline which logs in to the ACR, builds and pushes the image, and then deploys the image to the web app. All these tasks are successful. But when I go to the web app in the Azure portal, the image source is set to Docker Hub and not Azure Container Registry. The full image name and tag are of the ACR. Any suggestion what I am missing?
Update: I have added the pipeline.yml file for reference, in case it helps. I am logging in to the registry while pushing the Docker image, and I can confirm that the image is pushed to the ACR.
- task: Docker@2
  displayName: Login to QA Azure Container Registry
  inputs:
    command: login
    containerRegistry: $(azureSubscriptionContainer) # Docker Registry type service connection

- task: Docker@2
  displayName: Build and Push
  inputs:
    command: buildAndPush
    repository: $(repository) # custom name of repository in ACR
    tags: $(Build.BuildId)

- task: AzureRMWebAppDeployment@4
  displayName: Azure App Service Deploy
  inputs:
    appType: webAppContainer
    ConnectedServiceName: $(azureSubscription) # ARM service connection
    WebAppName: $(webApp)
    DockerNamespace: $(dockerNameSpace) # ACR namespace myacr.azurecr.io
    DockerRepository: $(repository) # custom name of repository in ACR
    DockerImageTag: $(Build.BuildId)
TL;DR: You need to either select the repository in the UI, or add the credentials via ARM or the Azure CLI.
https://learn.microsoft.com/en-us/azure/devops/pipelines/targets/webapp-on-container-linux?view=azure-devops&tabs=dotnet-core%2Cyaml#configure-registry-credentials-in-web-app
You can keep using your deployment task as you are doing right now; not all developers need access to the web app. However, you need to set up the credentials for the registry at least once. The UI will basically show its "best guess" at how to interpret the underlying configuration.
https://learn.microsoft.com/en-us/azure/app-service/containers/tutorial-custom-docker-image#configure-registry-credentials-in-web-app
You could also do that with every pipeline run, though I would not recommend it.
Sidenote: You can also have your web app automatically pull a certain tag whenever it is updated in the container registry if you select "Continuous deployment".
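If you go the continuous-deployment route, that switch can also be flipped from the CLI instead of the portal (a hedged sketch; app and resource group names are placeholders):
az webapp deployment container config --enable-cd true \
  --name <app-name> \
  --resource-group myResourceGroup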
Create App Service with CLI:
az webapp create --resource-group myResourceGroup --plan myAppServicePlan --name <app-name> --deployment-container-image-name <azure-container-registry-name>.azurecr.io/mydockerimage:v1.0.0
This is for public images, but according to the documentation, if you want to use a private image you must also configure registry credentials for your web app, and if it is not Docker Hub you must also provide --docker-registry-server-url:
az webapp config container set --name <app-name> --resource-group myResourceGroup --docker-custom-image-name <azure-container-registry-name>.azurecr.io/mydockerimage:v1.0.0 --docker-registry-server-url https://<azure-container-registry-name>.azurecr.io --docker-registry-server-user <registry-username> --docker-registry-server-password <password>
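If the ACR admin user is enabled, the username and password for that command can be read straight from the registry instead of copying them from the portal (a sketch; the registry name is a placeholder):
ACR_USER=$(az acr credential show --name <azure-container-registry-name> --query username -o tsv)
ACR_PASSWORD=$(az acr credential show --name <azure-container-registry-name> --query "passwords[0].value" -o tsv)
These values can then be passed to --docker-registry-server-user and --docker-registry-server-password.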
I am trying to build a CI/CD pipeline with Azure DevOps.
My goal is to:
Build a Docker image and upload it to a private Docker repository on Docker Hub within the CI pipeline
Deploy this image to an Azure Kubernetes cluster within the CD pipeline
The CI pipeline works well:
The image is pushed successfully to Docker Hub
The pipeline docker push task:
steps:
- task: Docker@1
  displayName: 'Push an image'
  inputs:
    containerregistrytype: 'Container Registry'
    dockerRegistryEndpoint: DockerHubConnection
    command: 'Push an image'
    imageName: 'jastechgmbh/microservice-demo:$(Build.BuildId)'
After that I trigger my release pipeline manually, and it shows success as well.
The apply pipeline task:
steps:
- task: Kubernetes@0
  displayName: 'kubectl apply'
  inputs:
    kubernetesServiceConnection: MicroserviceTestClusterConnection
    command: apply
    useConfigurationFile: true
    configuration: '$(System.DefaultWorkingDirectory)/_MicroservicePlayground-MavenCI/drop/deployment.azure.yaml'
    containerRegistryType: 'Container Registry'
    dockerRegistryConnection: DockerHubConnection
But when I check the deployment on my Kubernetes dashboard, an error message pops up:
Failed to pull image "jastechgmbh/microservice-demo:38": rpc error: code = Unknown desc = Error response from daemon: pull access denied for jastechgmbh/microservice-demo, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
I use the same Docker Hub service connection in the CI and CD pipelines.
I would be very happy about your help.
I believe this error indicates that your Kubernetes cluster doesn't have access to the Docker registry. You'd need to create a Docker secret for that, like so:
kubectl create secret generic regcred \
--from-file=.dockerconfigjson=<path/to/.docker/config.json> \
--type=kubernetes.io/dockerconfigjson
or directly from the command line:
kubectl create secret docker-registry regcred --docker-server=<your-registry-server> --docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email>
https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
Configure ACR integration for existing AKS clusters
az aks update -n myAKSClusterName -g myAKSResourceGroupName --attach-acr acr-name
https://learn.microsoft.com/en-us/azure/aks/cluster-container-registry-integration
This solved the issue for me.
The answer above is correct; I just need to add that you have to put imagePullSecrets on your deployment. Read the link provided in the other answer, it explains it in detail:
https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
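As a minimal sketch (the deployment name is illustrative, the image is the one from the question, and regcred is the secret created in the other answer), the pod spec references the secret like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: microservice-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: microservice-demo
  template:
    metadata:
      labels:
        app: microservice-demo
    spec:
      containers:
      - name: microservice-demo
        image: jastechgmbh/microservice-demo:38
      imagePullSecrets:
      - name: regcred                # the secret created with kubectl create secret docker-registry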
I have a repository hosted on gitlab.com; it has several build jobs associated with it. I would like to be able to deploy the compiled artifacts of any given build (generally compiled HTML/CSS/JavaScript files) to Azure.
All of the guides/docs/tutorials I've seen so far (1, 2, 3, to name a few), focus on deploying files directly from a git repository, which I can see being useful in some cases, but isn't what I need in this case, as I want the compilation targets, and not the source.
Solutions welcome, we've been bashing our heads over this for several days now.
Alternatives to GitLab in which this is made possible (in case it's not in GitLab), will also be welcomed.
Add a deploy stage with a job that declares the build jobs (one or more) as dependencies, so that it downloads their artifacts. See the .gitlab-ci.yml below:
stages:
  - build
  - ...
  - deploy

buildjob:1:
  stage: build
  script:
    - build_to_web_dir.sh
  artifacts:
    paths:
      - web

buildjob:2:
  stage: build
  script:
    - build_to_web_dir.sh
  artifacts:
    paths:
      - web

deploy:
  stage: deploy
  variables:
    GIT_STRATEGY: none
  image: microsoft/azure-cli
  dependencies:
    - buildjob:1
    - buildjob:2
  script:
    - export containerName=mynewcontainername
    - export storageAccount=mystorageaccount
    - az storage blob delete-batch --source ${containerName} --account-name ${storageAccount} --output table
    - az storage blob upload-batch --source ./web --destination ${containerName} --account-name ${storageAccount} --output table --no-progress
In the deploy job, only one directory will be present in CI_PROJECT_DIR: ./web, containing all the files the build jobs produced.
Check out the Azure storage quickstart for creating and setting up the storage container, account details, etc.
For the deploy stage we can use the microsoft/azure-cli Docker image, so we can call the az command from our script; see storage-quickstart-blobs-cli for a more detailed explanation.
az storage blob upload-batch --source ./web --destination ${containerName} --account-name ${storageAccount} --output table --no-progress
will copy ./web to the storage container.
For security reasons we should not export these in the .gitlab-ci.yml:
export AZURE_STORAGE_ACCOUNT="mystorageaccountname"
export AZURE_STORAGE_ACCESS_KEY="myStorageAccountKey"
Instead, they should be defined as CI/CD environment variables in the project or group settings, so they'll be present in the script environment.