The task is running on Node.
Part of the YAML file for the pipeline is:
steps:
- script: |
    #!/bin/bash
    ./plan.sh
  displayName: "example deploy"
  continueOnError: false
Sometimes the ./plan.sh script fails, but the step still shows as a success (green tick) in the pipeline.
How do I make it show a failed (red cross) whenever the script fails?
What you are doing now is invoking the bash script file from inside the step's shell wrapper. Written this way, the step does not capture the script's exit code. The correct way is as follows:
plan2.sh
xxx
pipeline YAML definition:
trigger:
- none
pool:
  vmImage: ubuntu-latest
steps:
- script: |
    bash $(System.DefaultWorkingDirectory)/plan2.sh
  displayName: "example deploy"
  continueOnError: false
Now the failure is reported correctly:
But usually, we use the Bash task directly to run a bash script.
plan.sh
{
    xxx
    # Your code here.
} || {
    # save log for exception
    echo some error output here.
    exit 1
}
pipeline YAML definition part:
steps:
- task: Bash@3
  displayName: 'Bash Script'
  inputs:
    targetType: filePath
    filePath: ./plan.sh
Now the failure is reported correctly:
For your script step to signal failure, you need to make the script as a whole return a non-0 exit code. You may need some conditional logic in your script, checking the exit code from plan.sh after it returns.
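For example, a minimal sketch of that conditional check (assuming plan.sh sits in the default working directory) could look like:

steps:
- script: |
    ./plan.sh
    rc=$?
    if [ $rc -ne 0 ]; then
      # Surface the failure in the pipeline UI and fail the step.
      echo "##vso[task.logissue type=error]plan.sh failed with exit code $rc"
      exit $rc
    fi
  displayName: "example deploy"
  continueOnError: false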
I was able to solve this by adding
set -o pipefail
at the start of the script block in the YAML file.
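pipefail matters when the failing command's output is piped into another command; without it, the step only sees the exit code of the last command in the pipe. A minimal sketch (piping through tee into plan.log is a hypothetical addition for illustration):

steps:
- script: |
    set -o pipefail
    # Without pipefail, tee's exit code 0 would mask a failure in plan.sh.
    ./plan.sh 2>&1 | tee plan.log
  displayName: "example deploy"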
I want to tag my pipeline build during the build process itself. Based on the official documentation, I tried echo ##vso[build.addbuildtag]testing in the pipeline YAML. There was no error, but the build was not tagged either.
I'm able to add tags from the portal successfully.
My pipeline YAML is below.
pool:
  vmImage: ubuntu-latest
jobs:
- job: addingtag
  displayName: addingtag
  steps:
  - script: |
      echo ##vso[build.addbuildtag]testing
    displayName: addingtag
Below are other combinations I tried, which also failed:
echo ##vso[build.addbuildtag] testing
echo ##vso[build.addbuildtag]Tag_testing
You may need to add double quotes: without them, bash treats everything from the # onward as a comment, so the logging command is never actually echoed. I could successfully add the tag by using a YAML script like below.
- script: |
    echo "##vso[build.addbuildtag]testing"
  displayName: addingtag
In my build YAML pipeline, I have a bash step to connect to isql and run a select query. I have a requirement to fail/exit the step in case of any issue in isql and retry a second time.
However, Azure DevOps is marking the bash step as success and skipping the second retry.
Is there any way to fail the bash step forcefully?
I tried RAISERROR and Azure DevOps logging commands inside the isql session, but no luck.
Here's the step:
steps:
- bash: |
    isql -U ${{ User }} -P ${{ Password }} -S ${{ Server }} -D ${{ Database }} <<-EOSQL
    USE master
    GO
    Select * from table (If this query fails with error code 102)
    IF @@error = 102
    RAISERROR 17001 "Unable to login"
    GO
    EOSQL
    echo "##vso[task.setvariable variable=dataset]ok"
  displayName: Test dataset
  enabled: true
  condition: ne(variables['dataset'], 'ok')
  continueOnError: true
exit 1 will force the task to fail.
I wrote a demo:
trigger:
- none
pool:
  vmImage: ubuntu-latest
steps:
- task: Bash@3
  inputs:
    targetType: 'inline'
    script: |
      # Write your commands here
      exit 1 # This will make the task fail.
      # exit 0 # This would make the task succeed.
Result:
So put it at the point where you want the task to fail.
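Applied to the isql step from the question, a sketch could test the command's exit status and fail the step explicitly. This assumes your isql client exits non-zero when the query errors; verify that for your isql version:

- bash: |
    isql -U ${{ User }} -P ${{ Password }} -S ${{ Server }} -D ${{ Database }} <<-EOSQL
    USE master
    GO
    EOSQL
    # If isql exited non-zero, fail the step so the retry can kick in.
    if [ $? -ne 0 ]; then
      echo "##vso[task.logissue type=error]isql query failed"
      exit 1
    fi
    echo "##vso[task.setvariable variable=dataset]ok"
  displayName: Test dataset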
My company recently upgraded from macOS 11 to macOS 12 in the Azure pipeline, but the project build pipeline keeps failing with this error:
##[error]Error: /usr/bin/xcodebuild failed with return code: 65
But it doesn't show me where the error is in the whole build log...
Does anybody have any idea? Or how can I see more build details?
Setting system.debug=true as a pipeline variable will make the DevOps pipeline show the full debug logs.
There are two situations:
1, YAML pipeline.
trigger:
- none
pool:
  vmImage: ubuntu-latest
variables:
- name: system.debug
  value: true # This will help show more log details.
steps:
- task: PowerShell@2
  inputs:
    targetType: 'inline'
    script: |
      # Write your PowerShell commands here.
      Write-Host "Hello World"
2, Classic pipeline: edit the pipeline, and on the Variables tab add a variable named system.debug with the value true.
Currently I am in the process of converting my pipelines from classic over to azure-pipelines.yml, and I'm having an issue trying to find the correct syntax to reference release variables in a bash step.
The existing code in a bash task
namebuilder=$(RELEASE.ENVIRONMENTNAME)-$(RELEASE.RELEASEID)
will output the following
dev-2049
however, when converted over to my new pipeline file, the above code produces the following error:
/home/vsts/work/_temp/ac39e1d7-11bd-4c32-9b1b-1520dae11c5a.sh: line 1: RELEASE.ENVIRONMENTNAME: command not found
/home/vsts/work/_temp/ac39e1d7-11bd-4c32-9b1b-1520dae11c5a.sh: line 1: RELEASE.RELEASEID: command not found
[extracted from pipeline.yml]
- bash: |
    namebuilder=$(RELEASE.ENVIRONMENTNAME)-$(RELEASE.RELEASEID)
I have even created a step trying a few different approaches, without much luck:
steps:
- bash: |
    echo This multiline script always runs in Bash.
    echo Even on Windows machines!
    echo '$(release.environmentname)'
    echo $(release.environmentname)
    echo '$(RELEASE.ENVIRONMENTNAME)'
    echo $(RELEASE.ENVIRONMENTNAME)
produces
This multiline script always runs in Bash.
Even on Windows machines!
$(release.environmentname)
$(RELEASE.ENVIRONMENTNAME)
/home/vsts/work/_temp/260dd504-a42d-45d6-bb1b-bf1f4b015cf8.sh: line 4: release.environmentname: command not found
/home/vsts/work/_temp/260dd504-a42d-45d6-bb1b-bf1f4b015cf8.sh: line 6: RELEASE.ENVIRONMENTNAME: command not found
Is it also possible (as a much cleaner approach) to define this as a pipeline variable and reference it at a global scope, like below?
variables:
  namebuilder: '$(release.environmentname)-$(release.releaseid)'
stages:
- stage: Deploy
  displayName: deploy infra
  jobs:
  - job: deploy_infra
    displayName: deploy infra
    continueOnError: true
    workspace:
      clean: outputs
    steps:
    - bash: |
        echo This multiline script always runs in Bash.
        echo Even on Windows machines!
        echo '$(namebuilder)'
Thanks in advance.
It doesn't look like release.environmentname or any of the release variables are available in multi-stage pipelines. You could use the new environment concept, and at that point environment.name would be available. I think you would likely go with $(environment.name)-$(build.buildid) for what you are after.
I am not sure if the release pipelines you are converting are deploying to, say, an app service, or to a VM, or just using a hosted agent to publish something else. Disclaimer: I have not used the Environment concept extensively yet, just some reading and limited testing. It's all new!
For deploying to VMs, you can configure a Virtual Machine resource in an environment. This concept has a bunch of parallels with classic deployment group agents: you register an agent on a target machine, your pipeline steps can then execute in that machine's context, and you get a further set of environment variables.
The example pipeline below outputs the environment variables from the context the steps are running in, and also outputs $(environment.name)-$(build.buildid), for:
A normal job in a hosted pipeline
A Deployment to an Environment
A Deployment to an Environment with a VM resource
trigger:
- master
pool:
  vmImage: 'ubuntu-latest'
variables:
  namebuilder: '$(environment.name)-$(build.buildid)'
jobs:
- job: NormalJobInHostedPipeline
  steps:
  - task: PowerShell@2
    name: EnvironmentVariables
    inputs:
      targetType: 'inline'
      script: 'gci env:* | sort-object name'
  - bash: |
      echo This multiline script always runs in Bash.
      echo Even on Windows machines!
      echo '$(namebuilder)'
# track deployments on the environment
- deployment: DeploymentHostedContext
  displayName: Runs in Hosted Pool
  pool:
    vmImage: 'Ubuntu-16.04'
  # auto creates an environment if it doesn't exist
  environment: 'Dev'
  strategy:
    runOnce:
      deploy:
        steps:
        - task: PowerShell@2
          name: EnvironmentVariables
          inputs:
            targetType: 'inline'
            script: 'gci env:* | sort-object name'
        - bash: |
            echo This multiline script always runs in Bash.
            echo Even on Windows machines!
            echo '$(namebuilder)'
# Similar to a Deployment Group Agent, you need to register the VM resource first - the stage will fail if the resource does not exist
# https://learn.microsoft.com/en-us/azure/devops/pipelines/process/environments-virtual-machines?view=azure-devops
- deployment: DeploymentVirtualMachineContext
  displayName: Run On Virtual Machine Agent
  environment:
    name: DevVM
    resourceType: VirtualMachine
  strategy:
    runOnce:
      deploy:
        steps:
        - task: PowerShell@2
          name: EnvironmentVariables
          inputs:
            targetType: 'inline'
            script: 'gci env:* | sort-object name'
        - task: PowerShell@2
          name: VariableName
          inputs:
            targetType: 'inline'
            script: 'echo $(namebuilder)'
Use $(System.StageName) in place of $(Release.EnvironmentName); as for the release ID, you'd need to use $(Build.BuildId).
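For example, a minimal sketch of that substitution, reusing the namebuilder variable from the question:

variables:
  namebuilder: '$(System.StageName)-$(Build.BuildId)'
stages:
- stage: Deploy
  jobs:
  - job: deploy_infra
    steps:
    - bash: |
        # Prints something like Deploy-2049.
        echo '$(namebuilder)'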
I found that $(Environment.Name) doesn't work unless you're using environments, which I'm not, since they're still quite limited.
I have the following in my azure-pipelines.yml
jobs:
- job: TestifFolder1Exists
  pool:
    vmImage: 'ubuntu-16.04'
  steps:
  - bash: git log -1 --name-only | grep -c Folder1
    failOnStderr: false
- job: Folder1DoesntExist
  pool:
    vmImage: 'ubuntu-16.04'
  dependsOn: TestifFolder1Exists
  condition: failed()
- job: Folder1DoesExist
  pool:
    vmImage: 'ubuntu-16.04'
  dependsOn: TestifFolder1Exists
  condition: succeeded()
I am trying to test whether a folder has had a change made, so I can publish artifacts from that directory.
The problem I am having is that if nothing in the folder has changed, the script fails with Bash exited with code '1' (this is what I want), which in turn makes the whole build fail.
If I add continueOnError, then the following jobs always run the succeeded job.
How can I let this job fail without it failing the entire build?
There is an option called continueOnError. It's set to false by default. Change it to true and your failing task won't stop the job from building.
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/tasks?view=azure-devops&tabs=yaml#controloptions
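Applied to the step from the question, that looks like the sketch below. Note that the step then completes as "succeeded with issues", so a dependent job with condition: failed() will not run, which matches the behaviour described in the question.

steps:
- bash: git log -1 --name-only | grep -c Folder1
  continueOnError: true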
I didn't figure out how to ignore a failed job, but this is how I solved this particular problem:
jobs:
- job: TestifFolder1Exists
  pool:
    vmImage: 'ubuntu-16.04'
  steps:
  - bash: |
      if [ "$(git log -1 --name-only | grep -c Folder1)" -eq 1 ]; then
        echo "##vso[task.setvariable variable=Folder1Changed]true"
      fi
  - bash: echo succeeded
    displayName: Perform some task
    condition: eq(variables.Folder1Changed, 'true')
(Although it turns out that Azure DevOps already provides what I was trying to build here: path filters on triggers!)
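For reference, a trigger path filter that only runs the pipeline when something under Folder1 changes looks roughly like this (the branch name is an assumption):

trigger:
  branches:
    include:
    - master
  paths:
    include:
    - Folder1/*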
In the PowerShell task there is an ignoreLASTEXITCODE property that turns off propagation of the exit code back to the pipeline, so you can, for example, analyse it in the next task. Not sure why it was not provided for bash. https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/utility/powershell?view=azure-devops
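A sketch of that PowerShell option (the failing command is a hypothetical stand-in):

- task: PowerShell@2
  inputs:
    targetType: 'inline'
    ignoreLASTEXITCODE: true
    script: |
      # This command exits non-zero; without ignoreLASTEXITCODE the task would fail afterwards.
      git nonexistent-subcommand
      Write-Host "exit code was $LASTEXITCODE"
      # Publish it so a later task can analyse it.
      Write-Host "##vso[task.setvariable variable=lastExitCode]$LASTEXITCODE"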