Windows 2016 Brownout on Azure Copy Task

We have been using the Azure File Copy task in Azure DevOps for the last year.
https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/deploy/azure-file-copy?view=azure-devops
Until this week it was fine, but now the task fails on execution with:
This is a scheduled windows-2016 brownout. The windows-2016 environment is deprecated and will be removed on July 31, 2022. For more details, see https://github.com/actions/virtual-environments/issues/5403
The problem I have is that the task does not allow us to set the machine (as far as I can see) to target windows-latest. According to the documentation, it's just a PowerShell script.
Has anyone else fixed this issue, or migrated to another task that resolves it?

The message is referring to the agent that your pipeline is running on. You're correct: You don't choose the agent pool at the task level. You choose it at the job or stage level.
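For a YAML pipeline, a minimal sketch of what this looks like is below; the Azure File Copy inputs are illustrative (the service connection, storage account, and container names are assumptions), the point being that the agent image is chosen by the pool setting at the pipeline, stage, or job level, never on the task itself:
pool:
  vmImage: 'windows-latest'   # or 'windows-2022'; this replaces windows-2016 for the whole job
steps:
- task: AzureFileCopy@4
  displayName: 'Copy files to Azure'
  inputs:
    SourcePath: '$(Build.ArtifactStagingDirectory)'
    azureSubscription: 'my-service-connection'   # assumed service connection name
    Destination: 'AzureBlob'
    storage: 'mystorageaccount'                  # assumed storage account name
    ContainerName: 'drop'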

Related

Azure DevOps artefact retention

I’ve got a monorepo which has 10 separate CI/CD pipelines written in YAML.
I’ve noticed lately that we’ve lost a vast number of runs, and some of them had successful production releases.
Am I right in thinking that the project retention settings apply to all pipelines, rather than to individual ones?
I’ve been reading the Microsoft docs, and I think that in order to retain runs going forward I have to use the API via a PowerShell script.
I assume the said script needs to run after a successful deployment to production.
I’m quite surprised that there isn’t a global option to say ‘keep all production releases’.
The project retention policy settings apply to all pipeline runs, not to individual pipelines, so you cannot use this setting to retain specific successful production releases directly.
To achieve this, you can use a PowerShell script, guarded by a condition, to retain those specific runs. Add the PowerShell script as the last task of your deployment so it only runs when the release needs to be retained. Refer to this official doc: https://learn.microsoft.com/en-us/azure/devops/pipelines/build/run-retention?view=azure-devops
Here is an example that retains a run effectively forever, based on a condition:
- powershell: |
    $contentType = "application/json";
    $headers = @{ Authorization = 'Bearer $(System.AccessToken)' };
    $rawRequest = @{ daysValid = 365000; definitionId = $(System.DefinitionId); ownerId = 'User:$(Build.RequestedForId)'; protectPipeline = $false; runId = $(Build.BuildId) };
    $request = ConvertTo-Json @($rawRequest);
    $uri = "$(System.CollectionUri)$(System.TeamProject)/_apis/build/retention/leases?api-version=6.0-preview.1";
    Invoke-RestMethod -Uri $uri -Method POST -Headers $headers -ContentType $contentType -Body $request;
  displayName: 'PowerShell Script'
  condition: <your custom condition>
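For example, an assumed condition that only retains runs of the main branch after all previous steps have succeeded could look like this (the branch name is an assumption):
  condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))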

What will happen to Azure Pipelines after the Windows 2016 hosted agent is removed?

In Azure DevOps I'm getting a warning about the removal of the Microsoft-hosted agent that uses Windows 2016 (vs2017-win2016):
https://github.com/actions/virtual-environments/issues/4312
What I want to know, in regard to that, is whether pipelines whose agent jobs have the agent specification set to windows-2016 will automatically start using a newer Windows agent or will stop working completely.
The GitHub issue seems to indicate the latter.
For the jobs where the agent specification is inherited from the pipeline, I believe there is no problem, unless for some reason the task(s) are tied to Windows 2016.
And what about the pipelines defined in the Releases section? When I click Create release, will it only fail after I try to deploy the created release?
I think your pipelines will fail. There was a previous situation where MS just gave a "friendly" reminder about the deprecation:
Check this issue: https://github.com/actions/virtual-environments/issues/4312
Releases have the same issue. You have to update their jobs to use the new agent specification.

Azure DevOps Releases - How to carry out a Test Stage at a specific virtual machine?

After Azure => Pipelines, I end up with two published artifacts: one containing a .NET Core console application (myDrop), and another containing the corresponding test library written with xUnit (myTestDrop). Then I move on to Azure => Releases to create a new release pipeline as below:
I have a Windows virtual machine (VM) which already has all the necessary libraries installed, e.g. .NET Core, and I would like to carry out the integration testing (the 2nd stage above) on that machine. Specifically:
Copy both myDrop and myTestDrop to that VM.
Set an environment variable: the path to, let's say, MyConsole.exe in myDrop.
Then run the Integration Test: dotnet vstest "MyConsole.Tests.dll" --logger:trx --ResultsDirectory:"c:/Somewhere" /TestCaseFilter:"Category=IntegrationTest"
If the tests are successful, the return code from dotnet.exe is 0 (otherwise, 1).
The 3rd stage only runs if the 2nd stage is successful.
There should be a way to read the *.trx file generated by the integration test above, especially in case there are test failures.
My experience with Azure DevOps is limited. I have searched around, but most Azure Release examples involve web applications (IIS, SQL, ...), not a plain console application with tests on a specific VM. Feel free to suggest other alternatives or best practices, given the scenario above.
Any advice or suggestions are appreciated.
How to carry out a Test Stage at a specific virtual machine?
You can install and use a self-hosted agent on that machine. Please refer to this document to install a self-hosted Windows agent. You need to add an agent job to the release pipeline first, and then choose the agent pool where the self-hosted agent is registered.
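As a rough sketch, registering the downloaded agent on the VM looks roughly like this (the organization name, PAT, pool name, and agent name below are placeholders):
PS C:\agent> .\config.cmd --url https://dev.azure.com/<your-organization> --auth pat --token <your-PAT> --pool Default --agent MyTestVM --runAsService
PS C:\agent> .\run.cmd   # only needed if the agent was not configured to run as a service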
Copy both myDrop and myTestDrop to that VM.
Since your agent is installed on the VM, it will automatically download artifacts to its local folder.
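Once the artifacts are on the agent, a minimal PowerShell step for the test run itself, based on the command from the question, might look like this (the artifact alias and paths are assumptions and depend on your release's artifact layout):
# Paths below are assumptions; adjust them to your artifact alias and folder names.
$env:MY_CONSOLE_PATH = "$(System.DefaultWorkingDirectory)\_MyBuild\myDrop\MyConsole.exe"
Set-Location "$(System.DefaultWorkingDirectory)\_MyBuild\myTestDrop"
dotnet vstest "MyConsole.Tests.dll" --logger:trx --ResultsDirectory:"c:/Somewhere" /TestCaseFilter:"Category=IntegrationTest"
# dotnet vstest returns a non-zero exit code on test failure, which fails this step and blocks the next stage
if ($LASTEXITCODE -ne 0) { exit 1 }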
The 3rd Stage only runs in case the 2nd Stage is successful.
You can select the "After stage" trigger in the pre-deployment conditions. For example, if the test stage in your screenshot fails, the deployment stage will not run.
There should be a way to read *.trx generated from the Integration
Test above, especially in case there are some test failures.
You can check the test results in the Tests tab of the release result page. You can also download the *.trx file on this page.
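Note that results produced with --logger:trx are not published automatically; in my understanding you still need a Publish Test Results task after the test step so they show up in the Tests tab. A sketch, written in YAML form for brevity (the search folder matches the ResultsDirectory from the question):
- task: PublishTestResults@2
  displayName: 'Publish integration test results'
  inputs:
    testResultsFormat: 'VSTest'
    testResultsFiles: '**/*.trx'
    searchFolder: 'c:/Somewhere'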

Is there a way I can re-initiate failed tasks or agent phase in a TFS release?

When a certain task fails in an environment, I always have to redeploy the whole environment after fixing the issue. Is there a way I could re-initiate only the failed task, or just the phase where the task failed?
For example: in the screenshot below, the last task "Run script *" under "Agent Phase" has failed. I had to re-initiate the whole environment deployment to re-execute that last task, which also re-executes the "Run on Agent" phase. This is painful during a production release.
Recently the retryCountOnTaskFailure argument was introduced:
- task: <name of task>
  retryCountOnTaskFailure: <max number of retries>
  ...
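For example, a minimal sketch with a hypothetical script task that is retried up to two additional times on failure:
- task: PowerShell@2
  displayName: 'Run script (retried on transient failures)'
  retryCountOnTaskFailure: 2          # re-run this task up to 2 more times if it fails
  inputs:
    targetType: 'filePath'
    filePath: 'scripts/deploy.ps1'    # hypothetical script standing in for the failing step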
I understand your concern. However, this is not supported at present with on-premises TFS Server 2018.
When you're doing a PROD drop and a step near the end randomly fails, you can't just rerun from that failed step; you have to re-deploy.
To re-run failed task/step:
Actually, there is a related user voice.
Rerun failed build task/step
https://developercommunity.visualstudio.com/idea/365697/rerun-failed-build-taskstep.html
Multiple people have commented and echoed this. You can monitor the status of the above user voice.
To re-run failed agent phase/ agent job
Also a related user voice:
Retry failed run with multi-stage pipelines
https://developercommunity.visualstudio.com/idea/598906/retry-failed-run-with-multi-stage-pipelines.html
However, this has now been released with Azure DevOps Services:
https://learn.microsoft.com/en-us/azure/devops/release-notes/2019/sprint-158-update#retry-failed-stages It is still not available with Azure DevOps Server/TFS on-premises. Generally, it won't be long until it's released with the latest on-premises Azure DevOps Server version.
With all that said, I think you still have to re-deploy on TFS 2018 at present. Sorry for any inconvenience.
In TFS 2018 you don't have this option.
However, in Azure Pipelines you have the option to re-run failed jobs, so I guess this feature will be included in the next release of Azure DevOps Server (TFS).
You can change the retries from the task's control options.

VSTS - Azure endpoint issue

I am currently getting the following error in VSTS when trying to publish a release to Azure:
The release definition cannot be saved because the environment 'App-Service-Template' references a service endpoint that is in dirty state. Update the endpoint(s) and retry the operation. Details: 'System.Collections.Generic.List`1[System.String]'
I have tried the following troubleshooting steps and still get the same error:
I have recreated the service endpoint in VSTS, and it fails.
I have recreated the resource group in Azure that the service endpoint connects to, and have also tried connecting the endpoint to an empty resource group, and it fails.
I followed the manual steps for creating the endpoint connection; I can then verify the status of the connection, which passes. I then try to publish the release to Azure and get the above error message.
Lastly, I have tried all the Microsoft-recommended VSTS troubleshooting, still with no luck: https://learn.microsoft.com/en-us/vsts/pipelines/release/azure-rm-endpoint?view=vsts
I am all out of ideas. Any help would be appreciated.
Cheers
What ultimately fixed this issue for us was abandoning and deleting the failed releases, and then triggering a new build and a release from that new build.
(One weird quirk we encountered was that the release sat in the last stage for over 8 minutes on 'wait for console output from agent'; when I left the release and came back, it said successful, with the last stage only taking 27 seconds.)
We had the exact same error. I also ran through the points on your list with no luck before arriving at the solution of a new build and release.
I have managed to resolve the issue. It was caused by a corrupt resource group in Azure. I deleted the resource group and recreated it with a different name, and this worked. No idea why this happened, as there were no logs and resources were running in the group; I just couldn't deploy new resources or change existing ones.
