Azure Data Factory error DFExecutorUserError, error code 1204

I am getting an error in Azure Data Factory that I haven't been able to find any information about. I run a data flow and eventually (after an hour or so) get this error:
{"StatusCode":"DFExecutorUserError","Message":"Job failed due to
reason: The service has encountered an error processing your request.
Please try again. Error code 1204.","Details":"The service has
encountered an error processing your request. Please try again. Error
code 1204."}
Troubleshooting I have already done:
I have successfully run the data flow using the sample option. I did this with 1 million rows.
I am processing 3 years of data, and I have successfully processed all of it by filtering the data by year and running the data flow once for each year.
So I think I have shown the data isn't the problem, because I have processed all of it by breaking it down into 3 runs.
I haven't found a pattern in the time the pipeline runs for before the error occurs that would indicate I am hitting any timeout value.
The source and sink for this data flow are both an Azure SQL Server database.
Does anyone have any thoughts? Any suggestions for getting a more verbose error out of Data Factory (I already have the pipeline set to verbose logging)?

We are glad to hear that you have found the cause:
"I opened a Microsoft support ticket and they are saying it is a
database transient caused failure."
I think the error will be resolved automatically. I am posting this as an answer so it can benefit other community members. Thank you.
Update:
The most important thing is that you resolved it in the end by increasing the vCores:
"The only thing they gave me was their BS article on handling
transient errors. Maybe I’m just old but a database that cannot
maintain connections to it is not very useful. What I’ve done to
workaround this is increase my vCores. This sql database was a
serverless one. While performance didn’t look bad my guess is the
database must be doing some sort of resize in the background to
handle the hour long data builds I need it to do. I had already tried
setting the min/max vCores to be the same. The connection errors
disappeared when I increased the vCores count to 6."
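For transient failures like the one quoted above, the usual client-side workaround is a retry loop with exponential backoff. A minimal sketch of the idea, assuming errors surface with a numeric code; the list of transient codes is an illustrative subset, not an exhaustive catalogue of Azure SQL transient errors:

```python
import random
import time

# Error codes commonly treated as transient on Azure SQL (illustrative
# subset): 1204 = cannot obtain a lock resource, 10928 = resource limit
# reached, 40501 = service busy, 40613 = database unavailable.
TRANSIENT_CODES = {1204, 10928, 40501, 40613}

def run_with_retry(operation, max_attempts=5, base_delay=2.0):
    """Call `operation`, retrying transient failures with exponential backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except RuntimeError as err:
            code = getattr(err, "error_code", None)
            if code not in TRANSIENT_CODES or attempt == max_attempts:
                raise
            # Back off base_delay, 2x, 4x, ... plus a little jitter so
            # parallel retries don't all hit the database at once.
            time.sleep(base_delay * 2 ** (attempt - 1)
                       + random.uniform(0, base_delay))
```

Note that retries only paper over the symptom; as the quote above found, a serverless database that resizes under sustained load may need its vCore floor raised instead.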

Related

Application Insights - Faulted Error code from Dependencies in Az blob storage

We are using Azure blob storage, and it has reached its maximum threshold a few times. Because of this we are getting a DNS error code in dependencies, but the dependency collector reports the result as Faulted.
How can we avoid this Faulted error code?
Please check the marked error code and share your thoughts.
• The ‘Faulted’ result code that you are encountering in the ‘Application Insights’ top response codes comes from the ‘DependencyCollection’ module tracking an Exception event along with a ‘DependencyTelemetry’ in the event of client-side errors such as DNS failures. Since you are getting a DNS error code in dependencies when Azure blob storage reaches its maximum threshold, this is a common result code for that scenario, irrespective of whether the Azure resource is blob storage or an APIM.
• Thus, this error code indicates an exception that is sent to the user ‘ikey’ along with the Dependency Telemetry. If that exception is not tracked, the only information ‘DependencyCollector’ has is that the call failed, and ‘resultCode’ is reported as "Faulted". You should therefore modify the result code to something more useful before removing the actual exception.
For more detailed information regarding this ‘Faulted’ error code, please refer to the SO community thread below (including its comment discussion) as well as the GitHub community discussion linked underneath. They discuss the probable cause of this error being a timeout on a ‘GET’ request resulting in thread starvation, poor application performance, or heavy context switching leading to a high thread-pool count.
Azure AppInsights - Http Result code Faulted
https://github.com/microsoft/ApplicationInsights-dotnet/issues/1362#issuecomment-511488536

Azure Data Factory - Copy Activity - Read data from http response timeout

I use the Copy activity to query an HTTP endpoint, but after 5 minutes I keep getting the following "Read data from response timeout" error:
Error Code: ErrorCode=UserErrorReadHttpDataTimeout,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Read data from http response timeout. If this is not binary copy, you are suggested to enable staged copy to accelerate reading data, otherwise please retry.,Source=Microsoft.DataTransfer.ClientLibrary,''Type=System.Net.WebException,Message=The operation has timed out.,Source=System,'"
The request runs to completion without interruption on the server side (visible in the logs).
I searched online and I only found this:
The error is triggered as soon as reading from the source hits 5 minutes.
PS: The error seems to happen only on certain endpoints (on the same server but a different endpoint, I don't get any timeout error).
Have any of you ever had a problem like this? If so, how did you solve it?
Thank you for your help!
Error Message - Read data from http response timeout. If this is not
binary copy, you are suggested to enable staged copy to accelerate
reading data, otherwise please retry.
As mentioned in the error message above, you need to try a staged copy.
You also need to configure the retry settings.
Refer - https://learn.microsoft.com/en-us/answers/questions/51055/azure-data-factory-copy-activity-retry.html
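For reference, staging and the HTTP read timeout are both configured on the Copy activity itself. A sketch of the relevant JSON fragment; the linked-service name and the timeout/retry values are placeholders, with property names per the ADF Copy activity schema:

```json
{
  "type": "Copy",
  "policy": { "retry": 3, "retryIntervalInSeconds": 60 },
  "typeProperties": {
    "source": { "type": "HttpSource", "httpRequestTimeout": "00:10:00" },
    "sink": { "type": "AzureSqlSink" },
    "enableStaging": true,
    "stagingSettings": {
      "linkedServiceName": {
        "referenceName": "StagingBlobStorage",
        "type": "LinkedServiceReference"
      }
    }
  }
}
```

With `enableStaging` set, data is first written to the staging store and then loaded into the sink, which decouples the slow HTTP read from the sink write.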

Azure Data Factory - CRM (OData) Connector

I have an Azure Data Factory pipeline for data extraction from on-premises CRM. I am running into an issue with one of the data entities where the pipeline runs for close to 8 hours and then throws the exception below. I know it's not an authentication issue, as I am able to get the other entities without any problems. I tried changing parallelCopies to 18 and the DIUs, but when I trigger the pipeline it sticks to a parallel-copy count of 1 and 4 DIUs, and eventually fails. I appreciate any input.
Operation on target XXXX failed: Failure happened on 'Source' side. ErrorCode=UserErrorFailedFileOperation,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Upload file failed at path XXXXXXX,Source=Microsoft.DataTransfer.Common,''Type=System.NotSupportedException,Message=The authentication endpoint Kerberos was not found on the configured Secure Token Service!,Source=Microsoft.Xrm.Sdk,'
I ran into something similar when using CRM as a sink; any upsert activities would fail very near exactly 60 minutes. The error I observed in the Azure Data Factory activity was:
'Type=System.NotSupportedException,Message=The authentication endpoint Kerberos was not found on the configured Secure Token Service!,Source=Microsoft.Xrm.Sdk,'
This post helped me find what to change in ADFS. I ran Get-ADFSRelyingPartyTrust and reviewed the TokenLifetime property, which happened to be 0. Apparently tokens last 60 minutes when the configuration is 0.
The following PowerShell increased the timeout, and I confirmed upsert activities no longer fail when exceeding 60 minutes.
Set-ADFSRelyingPartyTrust -TargetName "<RelyingPartyTrust>" -TokenLifetime <timeout in minutes>
It turned out to be a time out setting on the ADFS, once the time out is increased the job ran successfully.

Azure Data Factory Connection to Google Big Query Timeout Issues

I'm trying to grab Firebase analytics data from Google BigQuery with Azure Data Factory.
The connection to BigQuery works, but I quite often have timeout issues when running a (simple) query: 3 out of 5 times I run into a timeout. If no timeout occurs, I receive the data as expected.
Can any of you confirm this issue? Or do you have an idea what the reason might be?
Thanks & best,
Michael
Timeout issues can happen in Azure Data Factory sometimes. They are affected by the source dataset, the sink dataset, the network, query performance, and other factors; after all, your connectors are not Azure services.
You could try setting the timeout parameter following this JSON chart, or set the retry count to deal with timeout issues.
If your sample data is so simple that it shouldn't time out, you could submit feedback here to ask the ADF team about your concern.
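The timeout and retry settings mentioned above live in the activity's `policy` block. An illustrative fragment; the values are examples, not recommendations:

```json
"policy": {
  "timeout": "01:00:00",
  "retry": 3,
  "retryIntervalInSeconds": 30
}
```

With `retry` set, a failed activity run is re-attempted after `retryIntervalInSeconds`, which is often enough to ride out intermittent connector timeouts like the one described here.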

Application insights: SQL Dependency result code "-2"

This is what I get in ~20 out of ~2M SQL dependency registrations in Application Insights.
Apparently this result code does not appear in sys.messages, since all the message IDs in there are positive.
I also can't find any stack-trace information for this error. It seems to be a timeout or a general transient error (handled by Polly on my side), but either way it's registered under dependencies, not exceptions.
Does anybody know what this error is, and where I can find more information about all the possible SQL dependency errors I might get?
I have submitted this issue to the App Insights team here, and just got this feedback:
-2 means Timeout expired. The timeout period elapsed prior to completion of the
operation or the server is not responding. (Microsoft SQL Server, Error: -2).
The SQL exception number is listed here.
And if you have any concerns, please comment at that thread. Hope it helps.
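Since -2 is a client-side ADO.NET code rather than a server message number, it has to be special-cased when post-processing dependency telemetry. A minimal Python sketch of such a classification, assuming only the feedback above; the mapping is illustrative, not a complete list of client-side codes:

```python
# Negative result codes are client-side ADO.NET error numbers, so they
# never appear in sys.messages (which only holds positive message IDs).
# Per the App Insights team's feedback, -2 is the timeout code.
CLIENT_SIDE = {-2: "Timeout expired before the operation completed"}

def describe_sql_result_code(code):
    """Map a SQL dependency result code to a rough category."""
    if code in CLIENT_SIDE:
        return CLIENT_SIDE[code]
    if code > 0:
        return "Server-side error (look it up in sys.messages)"
    return "Unknown client-side error"
```

A classifier like this makes it possible to route the rare -2 registrations into the same retry/alerting path as other transient errors.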