Encountering an issue in ADF - Azure

We have encountered this issue in our UAT environment, although it works fine in our DEV environment. As per the advice, we have executed the pipeline multiple times, but the issue is still there.
Error code: 1000
Failure type: System error
Detail: Fail to complete sub job b. This could be a transient issue and you may re-run the job. If it fails again continuously, contact customer support.

How to debug a GitLab CI scheduled pipeline not running?

I have a GitLab CI pipeline schedule and noticed that its pipelines are not running (anymore).
When starting the schedule manually via the UI (<repo-root>/-/pipeline_schedules), it shows a success message.
However, no pipeline is started and no error message is provided.
What can I do in this situation?
The success message is misleading in the sense that one might think a pipeline was actually created, although it only means that a pipeline was scheduled.
There are various reasons why a schedule cannot run (anymore). This can be, for example, because of conflicting rules, or outdated fields in your yml caused by breaking changes introduced by GitLab upgrades.
To get to the root of why your pipeline did not run, you can trigger a pipeline manually and set CI_PIPELINE_SOURCE to hold "schedule" as its value.
To do so, go to <repo-root>/-/pipelines/new, set your target branch or tag, and add a variable with key CI_PIPELINE_SOURCE and value schedule.
Additionally, you may want to provide further variables required to properly simulate your problematic schedule via the manual run.
Next, hit Run pipeline and you should observe the actual error message explaining why the pipeline could not run.
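
For reference, the same simulation can be scripted against GitLab's pipeline-creation API. Below is a minimal sketch in Python, assuming a hypothetical GitLab host, project ID, and token, and assuming the variable override behaves the same through the API as through the Run pipeline form; when no pipeline can be created, the JSON response contains the error detail that the schedule UI swallows.

```python
import requests

# Hypothetical values: substitute your GitLab host, project ID, and a
# personal access token with `api` scope.
GITLAB_API = "https://gitlab.example.com/api/v4"
PROJECT_ID = 1234
TOKEN = "glpat-..."

# Create a pipeline on the target ref with CI_PIPELINE_SOURCE=schedule,
# mirroring the manual UI run described above.
resp = requests.post(
    f"{GITLAB_API}/projects/{PROJECT_ID}/pipeline",
    headers={"PRIVATE-TOKEN": TOKEN},
    json={
        "ref": "main",
        "variables": [{"key": "CI_PIPELINE_SOURCE", "value": "schedule"}],
    },
)

# On failure (e.g. all jobs filtered out by rules) the response body holds
# the actual reason instead of a misleading success message.
print(resp.status_code, resp.json())
```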

Configuration Version is missing in Terraform Cloud

I set up a workspace and I am following the Enforce Policy with Sentinel hands-on guide.
In the Run tab I see a message that the configuration version is missing.
As soon as I try to press the Queue Plan button, I receive the same "configuration version is missing" error.
I have configured my workspace variables. Is there something else I need to configure to be able to queue a plan?
Executing from the CLI, I was able to trigger a run (in TF Cloud) that only included the plan step. The run can be viewed if I access the specific run URL directly.
Any help or suggestions are more than welcome!
I guess you've already resolved this issue, but I'll post my resolution anyway.
When this issue occurred, I had wrong workspace name settings: there were two workspaces with similar names, and I don't know why that happened. I deleted one of them, and this issue then occurred on the other one.
In the end, when I deleted that workspace and recreated it, the issue no longer occurred.
In my case I didn't have duplicate or similarly named workspaces (only one workspace). I found that after running terraform apply locally once first, the UI controls started to work as expected.
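
As a quick way to check for the duplicate-workspace situation described above, the workspaces can be listed through the Terraform Cloud API. A minimal sketch in Python, assuming a hypothetical organization name and API token (note it only fetches the first page of results):

```python
import requests

# Hypothetical values: substitute your organization name and a TFC API token.
ORG = "my-org"
TOKEN = "..."

# List the organization's workspaces so duplicated or similarly named
# workspaces are easy to spot. Only the first page is fetched here.
resp = requests.get(
    f"https://app.terraform.io/api/v2/organizations/{ORG}/workspaces",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()
for ws in resp.json()["data"]:
    print(ws["id"], ws["attributes"]["name"])
```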

Azure Build pipeline not able to retrieve latest source version

I'm running into a strange problem whenever I start a particular build, and I can't get my head around it.
I just imported an existing VSTS repository into my new Git repository on Azure DevOps. My next step is to create a build pipeline which should produce an artifact I can deploy. I've done this many times for the company I work for, but I've never seen this error before.
The build pipeline is set up, and as soon as I start a build it immediately fails with an error saying it is not able to retrieve the latest source version.
Hopefully somebody can help out in resolving this.
UPDATE - Added settings for retrieving sources
After posting the second screenshot and going through everything again properly, I saw that I hadn't pointed the build pipeline to the proper Git repository in Azure DevOps. After updating this, the issue was resolved.
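
For anyone hitting the same thing: which repository a build pipeline checks out can also be verified through the Azure DevOps REST API instead of clicking through the designer. A minimal sketch in Python, assuming a hypothetical organization, project, definition ID, and a PAT with Build (read) scope:

```python
import requests

# Hypothetical values: substitute your organization, project,
# build definition ID, and a personal access token.
ORG, PROJECT, DEFINITION_ID = "my-org", "MyProject", 42
PAT = "..."

# Fetch the build definition and print which repository it points to,
# which is exactly what was misconfigured in this case.
resp = requests.get(
    f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/build/definitions/{DEFINITION_ID}",
    params={"api-version": "6.0"},
    auth=("", PAT),  # basic auth: empty username, PAT as password
)
resp.raise_for_status()
repo = resp.json()["repository"]
print(repo["type"], repo["name"], repo.get("url"))
```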

VSTS - Azure endpoint issue

I am currently getting the following error on VSTS when trying to publish a release to Azure:
The release definition cannot be saved because the environment 'App-Service-Template' references a service endpoint that is in dirty state. Update the endpoint(s) and retry the operation. Details: 'System.Collections.Generic.List`1[System.String]'
I have tried the following troubleshooting steps and still get the same error:
I have recreated the service endpoint in VSTS, and it fails.
I have recreated the resource group in Azure that the service endpoint connects to, and have tried to connect the endpoint to an empty resource group, and it fails.
I followed the manual steps of creating the endpoint connection; I can then verify the status of the connection, which passes. I then try to publish the release to Azure and get the above error message.
Lastly, I have tried all of Microsoft's recommended VSTS troubleshooting, still with no luck: https://learn.microsoft.com/en-us/vsts/pipelines/release/azure-rm-endpoint?view=vsts
I am all out of ideas. Any help would be appreciated.
Cheers
What ultimately fixed this issue for us was abandoning and deleting the failed releases, then creating a new build and a release triggered from that new build.
(One weird quirk we encountered was that the release sat in the last stage for over 8 minutes: 'wait for console output from agent'... when I left the release and came back, it said successful, with the last stage only taking 27 seconds.)
We had the exact same error. I also ran through the points on your list with no luck, before arriving at the solution with a new build and release.
I have managed to resolve the issue. It was caused by a corrupt resource group in Azure. I deleted the resource group and created it with a different name, and this worked. No idea why this happened, as there were no logs and resources were running in the group; I just couldn't deploy new ones or change existing ones.
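
When chasing a "dirty" endpoint like this, the state of the project's service connections can also be inspected through the REST API. A minimal sketch in Python, assuming a hypothetical organization, project, and PAT; the exact fields returned vary by API version, hence the defensive .get():

```python
import requests

# Hypothetical values: substitute your organization, project, and a PAT
# with permission to read service connections.
ORG, PROJECT = "my-org", "MyProject"
PAT = "..."

# List the project's service connections with their readiness flags to
# find the endpoint the release definition considers to be in a dirty state.
resp = requests.get(
    f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/serviceendpoint/endpoints",
    params={"api-version": "6.0-preview.4"},
    auth=("", PAT),
)
resp.raise_for_status()
for ep in resp.json()["value"]:
    print(ep["name"], ep["type"],
          "isReady:", ep.get("isReady"),
          "operationStatus:", ep.get("operationStatus"))
```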

CCTray reports build successful when CruiseControl can't reach the source control repository

When the source control repository is unreachable, CruiseControl keeps going back to check for modifications. While the latest build was successful, the dashboard reports failure but CCTray reports success.
Is there some way I can catch this scenario and have these two agree?
Yes, this scenario can occur when CCTray hangs on the local PC. If the issue is occurring at the dashboard, it means IIS hangs on the server where the CruiseControl server is running.
To resolve this, identify where the issue is: if it is at the CCTray level, restart CCTray; if it is at the dashboard level, restarting IIS should fix it.
This is actually due to an issue in CruiseControl, not CCTray itself.
If source control fails (say, because of a timeout or connection failure), the following will be true:
CruiseControl will set the project state to Exception, as the project is currently in an error state
CruiseControl will NOT modify the last build status, as a build has not occurred
so if the previous build succeeded, the project will report Success for the last build status
Natively, CruiseControl only reports the last build status via the API that CCTray uses. Getting it to inspect the project status is more complicated and ends up being less efficient. As such, CCTray reports the 'status' as the last build status rather than a hybrid of the two.
The WebDashboard shows both the project status and the last build status, hence the true status of the project is better evaluated there.
This issue has several other side effects, such as projectTriggers firing in this circumstance, as these also do not check the project status.
Ideally, CCTray (and projectTriggers, et al.) would check both the project status and the last build status, and report the outcome as a combination of the two.
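
To make the proposed combination concrete, here is a minimal sketch of that logic in Python. The field names are hypothetical (CruiseControl itself does not expose this as a single call); it simply shows how a client could fold the project status and the last build status into one reported outcome:

```python
from dataclasses import dataclass

@dataclass
class ProjectSnapshot:
    name: str
    project_status: str     # e.g. "Running", "Exception" (hypothetical values)
    last_build_status: str  # e.g. "Success", "Failure"

def effective_status(p: ProjectSnapshot) -> str:
    """Report Exception while the project is in an error state (such as an
    unreachable source control repository), even if the last completed build
    succeeded; otherwise fall back to the last build status."""
    if p.project_status == "Exception":
        return "Exception"
    return p.last_build_status

# Example: repository unreachable, but the previous build succeeded.
snap = ProjectSnapshot("my-project", "Exception", "Success")
print(effective_status(snap))  # -> "Exception" instead of a misleading "Success"
```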
