Quick Deploy job failing when there is no item in the Quick Deploy items list - SharePoint

I have a content deployment job from one server to another. The content deployment job works fine, but when I turn on the Quick Deploy job it starts logging an error in the system event log.
I later figured out that Quick Deploy works fine if there is at least one item in the Quick Deploy items list; otherwise it gives me an error.
In the Quick Deploy settings I set it to run every 30 minutes, so I am getting the error in the system event log every 30 minutes:
The Execute method of job definition Microsoft.SharePoint.Publishing.Administration.ContentDeploymentJobDefinition (ID daa20dd3-f6ad-4e27-923a-1ebf26c71723) threw an exception. More information is included below.
ContentDeploymentJobReport with ID '{00000000-0000-0000-0000-000000000000}' was not found.
Parameter name: jobReportId
and
Publishing: Content deployment job failed. Error: 'System.ArgumentOutOfRangeException: ContentDeploymentJobReport with ID '{00000000-0000-0000-0000-000000000000}' was not found.
Parameter name: jobReportId
at Microsoft.SharePoint.Publishing.Administration.ContentDeploymentJobReport.GetInstance(Guid jobReportId)
at Microsoft.SharePoint.Publishing.Administration.ContentDeploymentJob.get_LastReport()
at Microsoft.SharePoint.Publishing.Administration.ContentDeploymentJob.get_SQMDeploymentJobFlags()
at Microsoft.SharePoint.Publishing.Administration.ContentDeploymentJob.

I found out that if there is any item in the Quick Deploy list in our authoring environment that needs to go to production, we do not get this error. So for now my workaround is to deploy any page after setting up my content deployment jobs; that way I am not getting this issue.

Related

How to get a pipeline run error with Azure runs REST API

I'm using Azure's Runs API to get a pipeline run result as described here:
https://learn.microsoft.com/en-us/rest/api/azure/devops/pipelines/runs/get?view=azure-devops-rest-6.0#runresult
I can see in the documentation how to get the state and final result so I can know if the run was a success or a failure. However, in case of a failure, I don't see how I can get the error that occurred in that run as a string.
How can I get the actual error which caused the pipeline run to fail?
You can use the REST API "Timeline - Get" to list the issues (errors and warnings) associated with a run.
Note:
This API only lists the first 10 issues. If the run has more than 10 issues, the rest will not be included in the response. To see all of them, you can use the "Builds - Get Build Log" or "Logs - Get" API to retrieve the complete logs, which contain all of the issues.
[UPDATE]
The buildId is the same as the runId, and you can find it in the URL of the pipeline (build) run.
The timelineId is not required in the API request; you can use a request URI like the one below.
GET https://dev.azure.com/{organization}/{project}/_apis/build/builds/{buildId}/timeline/?api-version=6.0
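If it helps, here is a rough sketch in Python of pulling those issues out of the timeline response. It assumes a personal access token with Build (read) scope; the organization, project, and run ID values are placeholders.

# Minimal sketch: list the error/warning issues from a pipeline run's timeline.
import base64
import requests

organization = "my-org"            # placeholder
project = "my-project"             # placeholder
run_id = 1234                      # same as the buildId in the run's URL
pat = "<personal-access-token>"    # needs Build (read) scope

auth = base64.b64encode(f":{pat}".encode()).decode()
url = (f"https://dev.azure.com/{organization}/{project}"
       f"/_apis/build/builds/{run_id}/timeline/?api-version=6.0")

response = requests.get(url, headers={"Authorization": f"Basic {auth}"})
response.raise_for_status()

# Each timeline record may carry an "issues" list with type/message entries.
for record in response.json().get("records", []):
    for issue in record.get("issues", []):
        print(f"{issue.get('type')}: {issue.get('message')}")

As noted above, if the run has more than 10 issues you would still need to fall back to the full logs.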

Azure Function Error: The operation has timed out

An error has started popping up in my Azure Data Factory pipeline. I have a few Azure Function steps in the pipeline, but for some reason one of the Azure Function steps has started returning an error. In Azure Data Factory, the error is code 3608 after the step runs for 1 minute 40 seconds:
Failure type: User configuration issue
Details: Call to provided Azure function 'CollateSheetsHTTPTrigger' failed with status-'InternalServerError' and message - 'Invoking Azure function failed with HttpStatusCode - InternalServerError.'.
However, in a prior sub-pipeline run, this Azure Function ran successfully on the same data (the parameters and worksheet are the only differences). The subsequent 3 pipeline runs fail immediately (after 2 seconds) at the first Azure Function step in each (a different AZ function now), with the same 3608 error code but different details:
Call to provided Azure function '???????????????' failed with status-'NotFound'
and message - '<html> <head><title>404 Not Found</title></head> <body
bgcolor="white"> <center><h1>404 Not Found</h1></center> <hr><center>nginx</center>
</body> </html> '.
Now it gets even stranger. After these 3 failed pipelines, the next pipeline, which is pretty much the same as the previous 4 except for a few parameters, runs successfully, even though it has the same 2 AZ functions that failed before. And then the next 2 similar pipelines also run successfully.
I then went and looked at the monitoring page for the 2 Azure Functions:
The first AZ function that failed had 2 errors, even though it only failed once in AZ Data Factory. The timing is slightly different for the 2 errors, but they could only come from the first failed pipeline, so why does it say there are 2 errors? Then if you look at the actual error, all it says is "The operation timed out". The function was not running for more than 150 seconds, so this is strange. Additionally, I have a bunch of error-catching code and nothing comes up there.
The other failed AZ function steps from the other function do not show up on the monitoring page; it seems as if the first error crashed the AZ function app and then it eventually restarted?
I'm sorry I can't help, but I did have a similar problem with an Azure Function that executes a SOAP call to a web service every minute. For the past 4 days it has also been failing with a timeout. If I run the function in my debugger it runs without problems, but the Azure Function fails every time after 20 seconds.
I'll follow this question and hope someone else can help...
An Azure Support Engineer identified the issue: it was due to a change to the azure-functions-host library. The relevant issue is here https://protect-eu.mimecast.com/s/-CT5C3QxrTmREBwhgPxLm?domain=github.com and it was fixed last week.

Azure Datafactory Pipeline Failed inside a scheduled trigger

I have created 2 pipelines in Azure Data Factory. We have a custom activity that runs a Python script inside each pipeline. When the pipelines are executed manually they run successfully any number of times. But I have created a scheduled trigger with a 15-minute interval to run the 2 pipelines. The first execution runs successfully, but in the next interval I get the error "Operation on target PyScript failed: Hit unexpected exception and execution failed." We are blocked with this; any input would be really helpful.
From the ADF troubleshooting guide:
Custom Activity :
The following table applies to Azure Batch.
Error code: 2500
Message: Hit unexpected exception and execution failed.
Cause: Can't launch command, or the program returned an error code.
Recommendation: Ensure that the executable file exists. If the program started, make sure stdout.txt and stderr.txt were uploaded to the storage account. It's a good practice to emit copious logs in your code for debugging.
Related helpful doc: Tutorial: Run Python scripts through Azure Data Factory using Azure Batch
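Following the "emit copious logs" recommendation, a minimal sketch of a custom-activity Python script might look like the below (do_work is a hypothetical placeholder for your own processing):

# Sketch of a custom-activity script that logs generously, so stdout.txt /
# stderr.txt uploaded to the Batch storage account show what happened.
import sys
import traceback
from datetime import datetime, timezone

def log(message):
    # Anything printed to stdout ends up in stdout.txt for the activity run.
    print(f"{datetime.now(timezone.utc).isoformat()} {message}", flush=True)

def do_work():
    # Hypothetical placeholder for the real processing.
    log("starting work")
    log("work finished")

if __name__ == "__main__":
    try:
        do_work()
    except Exception:
        # Unhandled errors go to stderr.txt; the resulting non-zero exit code
        # is one of the causes of the 2500 "Hit unexpected exception" error.
        traceback.print_exc(file=sys.stderr)
        sys.exit(1)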
Hope this helps.
If you are still blocked, please share the failed pipeline run ID and the failed activity run ID for further analysis.

Azure: what could be the cause of the error "Unable to edit or replace deployment"?

When I recreated my VM I got the following error:
Problem occurred during request to Azure services. Cloud provider details: Unable to edit or replace deployment 'VM-Name': previous deployment from '8/20/2019 6:20:33 AM' is still active (expiration time is '8/27/2019 5:17:41 AM'). Please see https://aka.ms/arm-deploy for usage details.
Please help me understand: what could be the cause of this error?
UPDATED:
This deployment has not been started previously.
Prior to this, errors were received during creation:
Azure is not available now. Please Try again later
There were several such errors, one at a time, and then I got the error related to:
Unable to edit or replace deployment
My assumptions about this are below; tell me whether I am right or not.
I launched the image, then after some time I recreated it.
Creation began, but at that moment the connection with Azure was lost.
Then, when the connection was restored, we tried to run a deployment that had not been removed in the previous attempt (because there was no connection with Azure).
As a result, we got this error.
Does this theory make sense?
It is exactly what it says: there is another deployment with the same name in progress at this time. Either change the name of the deployment you are trying to queue, or wait for the other deployment to finish/fail.
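As a rough sketch of that "wait or rename" advice, using the azure-mgmt-resource Python SDK (the subscription, resource group, and deployment names below are placeholders), you could check whether the earlier deployment is still in flight and cancel it instead of waiting:

# Rough sketch: check whether a deployment with the same name is still running,
# and optionally cancel it so a new one can be queued under that name.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

subscription_id = "<subscription-id>"      # placeholder
resource_group = "my-resource-group"       # placeholder
deployment_name = "VM-Name"                # the name from the error message

client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)

deployment = client.deployments.get(resource_group, deployment_name)
state = deployment.properties.provisioning_state
print(f"{deployment_name}: {state}")

# If it is still in flight, either wait for it to finish/fail or cancel it
# before submitting a new deployment with the same name.
if state in ("Running", "Accepted"):
    client.deployments.cancel(resource_group, deployment_name)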
This can also occur if you use Bicep templates for your ARM deployment and multiple modules or resources in the template have the same name:
module fooModule '../modules/foo.bicep' = {
  name: 'foo'
}

module barModule '../modules/bar.bicep' = {
  name: 'foo'
}
I got the same error. Initially the pipeline was working, but when retriggered the pipeline took more time, so I canceled the deployment and made a fresh rerun, and it hit this error. I think I need to wait until that canceled deployment has failed.

Why is my PowerShell DSC failing with the message "An item with the same key has already been added"?

We have a PowerShell DSC configuration that is executed to bring a VMSS to a desired state.
It was working until we added some more parameters, and then it broke.
I removed everything except the parameters from the script and it still doesn't work.
The full error is
The DSC Extension received an incorrect input: An error occurred while
executing script or module 'IISInstall.ps1': An item with the same
key has already been added..
Please correct the input and retry executing the extension.
We even added logging in the DSC to try to troubleshoot.
It doesn't even seem to make it into the body of the DSC.
What am I doing wrong?
The parameter was called $instanceName. We wanted to use it to add a custom header to IIS to track which instance a response came from.
Turns out that if you use $instanceName, it steps on the internals of DSC in some way and the configuration will never deploy correctly!
As soon as you remove $instanceName from the parameter list, it will work.
