VSTS - Azure endpoint issue

I am currently getting the following error on VSTS when trying to publish a release to Azure:
The release definition cannot be saved because the environment 'App-Service-Template' references a service endpoint that is in dirty state. Update the endpoint(s) and retry the operation. Details: 'System.Collections.Generic.List`1[System.String]'
I have tried the following troubleshooting steps and still get the same error:
I have recreated the service endpoint in VSTS and it still fails.
I have recreated the resource group in Azure that the service endpoint connects to, and I have tried connecting the endpoint to an empty resource group; it still fails.
I followed the manual steps for creating the endpoint connection, and I can then verify the status of the connection, which passes. But when I then try to publish the release to Azure, I get the above error message.
Lastly, I have tried all of the recommended MS VSTS troubleshooting, still with no luck: https://learn.microsoft.com/en-us/vsts/pipelines/release/azure-rm-endpoint?view=vsts
I am all out of ideas. Any help would be appreciated.
Cheers

What ultimately fixed this issue for us was abandoning and deleting the failed releases, then queuing a new build and triggering a release from that new build.
(One weird quirk we encountered was that the release sat in the last stage for over 8 minutes at 'wait for console output from agent'... when I left the release and came back, it said successful, with the last stage only taking 27 seconds.)
We had the exact same error. I also ran through the points on your list with no luck, before arriving at the solution with a new build and release.

I have managed to resolve the issue. It was caused by a corrupt resource group in Azure. I deleted the resource group and recreated it with a different name, and this worked. I have no idea why this happened, as there were no logs and the resources in the group were running; we just couldn't deploy new resources or change existing ones.

Related

Encountered the issue in ADF

We have encountered the issue in our UAT environment; however, it was working fine in the DEV environment. As per the advice, we have executed the pipeline multiple times, but the issue is still there.
Error code - 1000
Failure type - System error
Detail - Fail to complete sub job b. This could be a transient issue and you may re-run the job. If it fails again continuously, contact customer support.

Pipelines randomly failing on Azure Pipelines when using self-hosted agents

I have been using Azure Pipelines for many years. Recently I switched from Microsoft-hosted agents to self-hosted agents running in a VMSS.
Since switching, I have noticed MANY builds fail because of an error like the one below. This happens quite often and very randomly. The build will succeed if I rerun it.
---> Running in 3619316996da
unexpected EOF
##[error]unexpected EOF
##[error]The process '/usr/bin/docker' failed with exit code 1
Finishing: Build an image
Has anyone had such an issue, or can anyone help point me to where to look? It's pretty frustrating, especially since the issue seems to be random and didn't happen with Microsoft-hosted agents.
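For context, the failing step is a plain Docker image build. A minimal sketch of what it looks like is below; the Docker@2 task and the repository/Dockerfile values are assumptions, not the exact pipeline:

steps:
- task: Docker@2
  displayName: 'Build an image'
  inputs:
    command: 'build'
    repository: 'myapp'            # assumed image name
    dockerfile: '**/Dockerfile'    # assumed Dockerfile location
    tags: 'latest'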
The problem was solved after updating Linux/Docker to the latest version.

Configuration Version is missing - Terraform Cloud

I set up a workspace and I am following the Enforce Policy with Sentinel hands-on guide.
I see the following message in the run tab:
As soon as I try to press the queue plan button I receive this error:
My configured variables are:
Is there something else I need to configure to be able to queue a plan?
Executing from the CLI, I was able to trigger a run (in TF Cloud) that only included the plan step. The run execution can be viewed if I access the specific run URL directly.
Any help, suggestions are more than welcome!
I guess you've already resolved this issue, but I'll post my resolution anyway.
When this issue occurred, I had the wrong workspace name settings.
There were two workspaces with similar names, and I don't know why that happened, so I deleted one of them.
The issue then occurred on the remaining one.
In the end, when I deleted that workspace and recreated it, the issue no longer occurred.
In my case I didn't have duplicate or similarly named workspaces (only one workspace). I found that after running terraform apply locally once first, the UI controls started to work as expected.

Is there a way I can re-initiate failed tasks or agent phase in a TFS release?

When a certain task fails in an environment, I always have to redeploy the whole environment after fixing the issue. Is there a way I could re-initiate only the failed task, or just the phase in which the task failed?
For example: in the screenshot below, the last task "Run script *" under "Agent Phase" has failed. I had to re-initiate the whole environment deployment to re-execute the last task, which executes the "Run on Agent" phase as well. This is painful during a production release pipeline.
Recently the retryCountOnTaskFailure argument was introduced:
- task: <name of task>
  retryCountOnTaskFailure: <max number of retries>
  ...
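For example, a minimal sketch of a script step with retries; the Bash@3 task, the retry count, and the inline script are assumptions rather than anything from the original pipeline:

steps:
- task: Bash@3
  displayName: 'Run script'
  retryCountOnTaskFailure: 2     # re-run this step up to 2 more times if it fails
  inputs:
    targetType: 'inline'
    script: |
      ./deploy.sh                # assumed script; each retry starts the task from scratch

Note that this only helps with transient failures, and it is only honoured on server/agent versions that support the setting.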
I understand your concern. However, this is not supported at present with on-premises TFS Server 2018.
When you're doing a PROD drop and a step near the end randomly fails, you can't just rerun from that failed step; you have to re-deploy.
To re-run failed task/step:
Actually, there is a related user voice.
Rerun failed build task/step
https://developercommunity.visualstudio.com/idea/365697/rerun-failed-build-taskstep.html
Multiple people have commented and echoed this. You could monitor the status of the above user voice.
To re-run a failed agent phase / agent job:
Also a related user voice:
Retry failed run with multi-stage pipelines
https://developercommunity.visualstudio.com/idea/598906/retry-failed-run-with-multi-stage-pipelines.html
However, this has been released with Azure DevOps Services now:
https://learn.microsoft.com/en-us/azure/devops/release-notes/2019/sprint-158-update#retry-failed-stages
It is still not available with Azure DevOps Server/TFS on-premises. Generally, it won't be long until it's released with the latest on-premises Azure DevOps version.
With all that said, I think you still have to re-deploy on TFS 2018 at present. Sorry for any inconvenience.
In TFS 2018 you don't have this option.
However, in Azure Pipelines you have the option to re-run failed jobs, so I guess this feature will be included in the next release of Azure DevOps Server (TFS).
You can change the retries from

Azure Build pipeline not able to retrieve latest source version

I'm running into a strange problem whenever I start a particular build, and I can't get my head around it.
I just imported an existing VSTS repository into my new Git repository on Azure DevOps. My next step is to create a build pipeline which should produce an artifact I can deploy. I've done this many times for the company I work for, but I've never seen this error before.
The build pipeline is set up, and as soon as I start a build it immediately fails with the following error:
Hopefully somebody can help out in resolving this.
UPDATE - Added settings for retrieving sources
After posting the second screenshot and going through everything again properly, I saw that I hadn't pointed the build pipeline at the proper Git repository in Azure DevOps. After updating this, the issue was resolved.
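For anyone hitting the same problem in a YAML pipeline rather than the classic editor, the equivalent setting is the repository the pipeline checks out. A minimal sketch, where the repository alias and the project/repository name are assumptions:

resources:
  repositories:
  - repository: importedRepo            # alias used by the checkout step
    type: git                           # Azure Repos Git
    name: MyProject/MyImportedRepo      # assumed project/repository name

steps:
- checkout: importedRepo                # make sure this points at the intended repository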
