Azure Data Factory - Copy Activity - Read data from http response timeout - azure

I use the Copy activity to query an HTTP endpoint, but after 5 minutes I keep getting the following error "Read data from response timeout":
Error Code: ErrorCode=UserErrorReadHttpDataTimeout,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Read data from http response timeout. If this is not binary copy, you are suggested to enable staged copy to accelerate reading data, otherwise please retry.,Source=Microsoft.DataTransfer.ClientLibrary,''Type=System.Net.WebException,Message=The operation has timed out.,Source=System,'"
The request runs to completion on the server side without interruption (visible in the logs).
I searched online and I only found this:
The error gets triggered as soon as reading from the source hits 5 minutes.
PS: The error seems to happen only on certain endpoints (on the same server, other endpoints don't produce any timeout error).
Have any of you ever had a problem like this? If so, how did you solve it?
Thank you for your help!

Error Message - Read data from http response timeout. If this is not
binary copy, you are suggested to enable staged copy to accelerate
reading data, otherwise please retry.
As mentioned in the error message above, you should try a staged copy.
You should also configure the retry settings on the Copy activity.
Refer - https://learn.microsoft.com/en-us/answers/questions/51055/azure-data-factory-copy-activity-retry.html
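For illustration, a staged copy plus retry can be sketched roughly as below in the Copy activity JSON (the staging linked service name, staging path, and sink type are assumptions for this example, not values from the question):

    {
      "name": "CopyFromHttp",
      "type": "Copy",
      "policy": {
        "retry": 3,
        "retryIntervalInSeconds": 60,
        "timeout": "0.02:00:00"
      },
      "typeProperties": {
        "source": { "type": "HttpSource" },
        "sink": { "type": "AzureSqlSink" },
        "enableStaging": true,
        "stagingSettings": {
          "linkedServiceName": {
            "referenceName": "StagingBlobStorage",
            "type": "LinkedServiceReference"
          },
          "path": "staging-container/copy-staging"
        }
      }
    }

With enableStaging, the data is first landed in Blob storage and then loaded into the sink, which is what the error message means by a staged copy accelerating the read; the policy block makes the activity retry automatically if it still fails.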

Related

Google Cloud Pub/Sub API: Almost 100% Errors on StreamingPull

I'm trying to use GCP Pub/Sub StreamingPull with the Node.js client, and I understand that StreamingPull is expected to show a near-100% error rate, as mentioned in the docs.
So do I have to restart the listener if I get errors in the errorHandler? Also, which error code should I look for to tell that the streaming connection has been closed? Here is the ref: Error Codes
const errorHandler = (error) => {
  if (errorCodeCheckCondition) {
    // restart the listener: detach the old handler, then attach it again
    subscription.removeListener('message', messageHandler);
    subscription.on('message', messageHandler);
  }
};
subscription.on('error', errorHandler);
I'm using GCP Pub/Sub StreamingPull for the first time, so please guide me.
You do need to re-establish the streaming pull connection after you get any error.
According to the documentation for the StreamingPull RPC:
The server will close the stream and return the status on any error. The server may close the stream with status UNAVAILABLE to reassign server-side resources, in which case, the client should re-establish the stream. Flow control can be achieved by configuring the underlying RPC channel.
Since you know that StreamingPull has a 100% error rate, I believe you must have also gone through Diagnosing StreamingPull errors.
The Pub/Sub client library will re-establish the underlying streaming pull connection when it disconnects for a retriable reason, e.g., an UNAVAILABLE error. You can see in the StreamingPull config in the library the set of errors that are retried internally.
The errors you would typically get back at the application level are ones where some additional intervention is likely necessary, e.g., a PERMISSION_DENIED error (where the subscriber does not have permission to receive messages on the subscription) or a NOT_FOUND error (where the subscription does not exist). Retrying on these types of errors is likely just to result in the error reoccurring until the underlying issue is resolved.
You could decide that retrying is what you want, so that the subscriber starts working again without a manual restart once other steps are taken to fix the problem. But you'll want some way to discover these types of issues, perhaps through Cloud Monitoring alerts on streaming pull errors or on a large backlog of unprocessed messages.
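For reference, a minimal Node.js sketch of that application-level handling (the subscription name, restart delay, and alerting approach are assumptions; the client library itself already retries UNAVAILABLE internally):

    const { PubSub } = require('@google-cloud/pubsub');

    const pubsub = new PubSub();
    const subscription = pubsub.subscription('my-subscription'); // assumed name

    const messageHandler = (message) => {
      // ... process the message ...
      message.ack();
    };

    const errorHandler = (error) => {
      // Errors surfaced here carry gRPC status codes, e.g. 7 = PERMISSION_DENIED, 5 = NOT_FOUND.
      console.error('Streaming pull error:', error.code, error.message);
      subscription.removeListener('message', messageHandler);
      // Re-attach after a delay; pair this with monitoring/alerting so non-transient
      // causes (permissions, missing subscription) actually get fixed.
      setTimeout(() => subscription.on('message', messageHandler), 10000);
    };

    subscription.on('message', messageHandler);
    subscription.on('error', errorHandler);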

Azure Data Factory error DFExecutorUserError error code 1204

So I am getting an error in Azure Data Factory that I haven't been able to find any information about. I am running a data flow and eventually (after an hour or so) I get this error:
{"StatusCode":"DFExecutorUserError","Message":"Job failed due to
reason: The service has encountered an error processing your request.
Please try again. Error code 1204.","Details":"The service has
encountered an error processing your request. Please try again. Error
code 1204."}
Troubleshooting I have already done :
I have successfully run the data flow using the sample option. I did this with 1 million rows.
I am processing 3 years of data, and I have successfully processed all of it by filtering the data by year and running the data flow once for each year.
So I think I have shown the data isn't the problem, because I have processed all of it by breaking it down into 3 runs.
I haven't found a pattern in how long the pipeline runs before the error occurs that would indicate I am hitting any timeout value.
The source and sink for this data flow are both an Azure SQL Server database.
Does anyone have any thoughts? Any suggestions for getting a more verbose error out of Data Factory (I already have the pipeline set to verbose logging)?
We are glad to hear that you have found the cause:
"I opened a Microsoft support ticket and they are saying it is a
database transient caused failure."
I think the error will resolve automatically. I am posting this as an answer so it can benefit other community members. Thank you.
Update:
The most important thing is that you resolved it in the end by increasing the vCores.
"The only thing they gave me was their BS article on handling
transient errors. Maybe I’m just old but a database that cannot
maintain connections to it is not very useful. What I’ve done to
workaround this is increase my vCores. This sql database was a
serverless one. While performance didn’t look bad my guess is the
database must be doing some sort of resize in the background to
handle the hour long data builds I need it to do. I had already tried
setting the min/max vCores to be the same. The connection errors
disappeared when I increased the vCores count to 6."

AJAX request errors out with no response

I have an application with webix on UI and node js on server side.
From the UI, if I trigger a long-running AJAX request, e.g. to process 1000 records, the request errors out after approximately 1.5 minutes (not consistently).
The error object contains no information about the reason for the request failure, but since processing a smaller set of records seems to work fine, I am inclined to blame it on a timeout.
From the developer console I see that the request appears to be stalled and the response is empty.
Currently I can't drop the request and poll every few seconds to see whether the processing has finished. I have to wait for the request to finish, but I am not sure how to do that, as the Webix forum doesn't seem to have any information on this except for setting a timeout.
If setting a timeout is the way to go, what happens tomorrow if the request size grows to 2000 records? I don't want to keep increasing the timeout.
Also, if I am left with no choice, how would I implement the polling? If I drop a request onto the server, there can be other clients triggering similar requests as well. How would I distinguish between requests originating from different clients?
I would really appreciate some help on this.
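One common pattern here (a sketch only, not Webix-specific; the route names and the processRecords helper are hypothetical) is to have the server return a job id immediately and let each client poll that id, which also keeps different clients' requests apart:

    const express = require('express');
    const crypto = require('crypto');

    const app = express();
    app.use(express.json());

    const jobs = new Map(); // jobId -> { status, result?, error? }

    app.post('/process', (req, res) => {
      const jobId = crypto.randomUUID(); // unique per request, so also per client
      jobs.set(jobId, { status: 'running' });
      processRecords(req.body.records) // hypothetical long-running worker
        .then((result) => jobs.set(jobId, { status: 'done', result }))
        .catch((err) => jobs.set(jobId, { status: 'failed', error: err.message }));
      res.status(202).json({ jobId }); // respond immediately, no long-lived request
    });

    app.get('/process/:jobId', (req, res) => {
      res.json(jobs.get(req.params.jobId) || { status: 'unknown' });
    });

    app.listen(3000);

The client then calls GET /process/<jobId> every few seconds until the status is done or failed, so no single HTTP request has to stay open for the whole processing time.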

Does Azure's HTTP request timeout of 4 minutes apply to multipart-formdata file uploads?

Azure apparently has a 4-minute timeout for HTTP requests before it kills the connection. This is non-configurable in App Services:
https://social.msdn.microsoft.com/Forums/en-US/32b76114-67a4-4e6b-ac45-61b0f0a0829f/changing-the-4-minute-request-time-out-for-app-services?forum=AzureAPIApps
I have seen this first-hand in my application: I have a process that allows users to view files that exist on a network drive, select a subset of those files, and upload them to a third-party service. This happens via a POST request that sends the list of file names as JSON. The operation can take a while, and I receive a timeout error at almost exactly 4 minutes.
I also have another process that allows users to drag and drop files directly into the web application; these files are posted to the server as multipart/form-data and forwarded to the third-party service. This request never times out, no matter how long the upload takes.
Is there something about using multipart/form-data that overrides Azure's 4-minute timeout?
It probably does not matter but I am using Node.
The timeout is actually 3m 50s (230 seconds) and not 4 minutes.
But note that it is an idle connection timeout, meaning that it only kicks in if there is no data flowing in the request/response. So it is strange that you would hit this if you are actively uploading files. I would suggest monitoring network traffic to see if anything is being sent. If it really goes 230s with no uploaded data, then there is probably some other issue, and the timeout is just a side effect.
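To illustrate the "idle" part: as long as some bytes are flowing, the 230-second limit doesn't trigger. A rough Node/Express sketch of keeping a long request alive by writing periodic progress bytes (the route and the doLongRunningWork helper are hypothetical, and this turns the response into a stream the client must tolerate):

    const express = require('express');
    const app = express();
    app.use(express.json());

    app.post('/upload-to-third-party', async (req, res) => {
      res.setHeader('Content-Type', 'text/plain');
      // Send a byte every 30s so the connection is never idle for 230s.
      const keepAlive = setInterval(() => res.write(' '), 30000);
      try {
        const result = await doLongRunningWork(req.body); // hypothetical helper
        res.write(JSON.stringify(result));
      } catch (err) {
        res.write(JSON.stringify({ error: err.message }));
      } finally {
        clearInterval(keepAlive);
        res.end();
      }
    });

    app.listen(3000);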

Azure stream analytics - How to redirect or handle error events/rows?

Is there a way to capture and redirect data error events/rows to a separate output?
For example, say I have events coming through and for some reason there are data conversion errors. I would like to handle those errors and do something, probably a separate output for further investigation?
Currently, with the Stream Analytics error policy, if an event fails to be written to the output we have only two options:
Drop - drops the event, or
Retry - retries writing the event until it succeeds.
Collecting all error events is not supported currently. You can enable diagnostic logs and get a sample of every kind of error at frequent intervals.
Here is the documentation link.
If there is a way for you to filter such events in the query itself, then you could redirect such events to a different output and reprocess that later.
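As an illustration of that query-level approach (a sketch only; the column name and the input/output aliases are assumptions), TRY_CAST can be used to split rows that fail conversion into a separate output:

    -- Rows that convert cleanly go to the main output
    SELECT *
    INTO goodOutput
    FROM inputStream
    WHERE TRY_CAST(temperature AS float) IS NOT NULL

    -- Rows that fail the conversion are redirected for later investigation
    SELECT *
    INTO errorOutput
    FROM inputStream
    WHERE TRY_CAST(temperature AS float) IS NULL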
