This is what I get in roughly 20 out of ~2M SQL dependency registrations in Application Insights.
Apparently this result code does not appear in sys.messages, since all message IDs in there are positive.
I also can't find any stack trace information for this error. It seems to be a timeout or a general transient error (handled by Polly on my side), but either way it's registered under dependencies and not exceptions.
Does anybody know what this error is and where I can find more information regarding all possible SQL dependency errors I might get?
I have submitted this issue to the App Insights team here, and got this feedback:
-2 means Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. (Microsoft SQL Server, Error: -2).
The SqlException number is documented here.
If you have any concerns, please comment on that thread. Hope it helps.
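For reference, here is a minimal sketch (assuming Polly and Microsoft.Data.SqlClient; connectionString and the query are placeholders) of retrying only when SqlException.Number is -2, i.e. the timeout described above:

using System;
using Microsoft.Data.SqlClient;
using Polly;

// Retry up to 3 times, with exponential backoff, only for the -2 timeout result code.
var timeoutRetry = Policy
    .Handle<SqlException>(ex => ex.Number == -2)
    .WaitAndRetryAsync(3, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)));

await timeoutRetry.ExecuteAsync(async () =>
{
    using var connection = new SqlConnection(connectionString); // placeholder connection string
    await connection.OpenAsync();
    using var command = new SqlCommand("SELECT 1", connection);  // placeholder query
    await command.ExecuteScalarAsync();
});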
I'm trying to use GCP Pub/Sub StreamingPull with the Node.js client, and I understand that streaming pull is designed to have a 100% error rate, as mentioned in the docs.
So do I have to restart the listener if I get errors in the errorHandler, and what error code should I be looking for to tell whether the streaming connection has been closed? Here is the ref: Error Codes
const errorHandler = (error) => {
  if (errorCodeCheckCondition) { // placeholder: inspect error.code here
    // Detach the old handler before re-attaching it.
    subscription.removeListener('message', messageHandler);
    subscription.on('message', messageHandler);
  }
};

subscription.on('error', errorHandler);
I'm using GCP Pub/Sub StreamingPull for the first time, so please guide me.
You do need to re-establish the streaming pull connection after you get any error.
According to the rpc StreamingPull documentation:
The server will close the stream and return the status on any error. The server may close the stream with status UNAVAILABLE to reassign server-side resources, in which case, the client should re-establish the stream. Flow control can be achieved by configuring the underlying RPC channel.
Since you know that StreamingPull has a 100% error rate, I believe you have also gone through Diagnosing StreamingPull errors.
The Pub/Sub client library will re-establish the underlying streaming pull connection when it disconnects for a retriable reason, e.g., an UNAVAILABLE error. You can see in the StreamingPull config in the library the set of errors that are retried internally.
The errors you would typically get back at the application level are ones where some additional intervention is likely necessary, e.g., a PERMISSION_DENIED error (where the subscriber does not have permission to receive messages on the subscription) or a NOT_FOUND error (where the subscription does not exist). Retrying on these types of errors is likely just to result in the error recurring until the underlying issue is resolved.
You could decide that retrying is what you want to do because you want the subscriber to start working again without having to manually restart it once other steps are taken to fix the problem, but you'll want to make sure you have some way to discover these types of issues, perhaps through some kind of Cloud Monitoring alerting on streaming pull errors or on a large number of unprocessed messages building up.
We are using Azure Blob Storage, and it has reached its maximum threshold a few times. Because of this we get a DNS error code in dependencies, but the dependency collector reports the result code as Faulted.
How can we avoid this Faulted error code?
Please check the marked error code and share your thoughts.
• The ‘Faulted’ result code that you are encountering in the Application Insights top response codes comes from the ‘DependencyCollection’ module, which tracks an exception event along with the ‘DependencyTelemetry’ in the event of client-side errors such as DNS failures. Since you are getting a DNS error code in dependencies because Azure Blob Storage is reaching its maximum threshold, this result code is expected for that scenario, irrespective of whether the Azure resource is Blob Storage or APIM.
• In other words, this error surfaces as an exception that is sent to your instrumentation key (‘ikey’) along with the dependency telemetry. If that exception is not tracked, then the only information the ‘DependencyCollector’ has is that the call failed, and the ‘resultCode’ is reported as "Faulted". The suggestion in the linked discussion is that the result code should be made more useful before the actual exception is removed.
For more detailed information regarding this ‘Faulted’ result code, please refer to the SO community thread below and its comment discussion, as well as the GitHub discussion linked underneath it. They discuss the probable cause of this error being a timeout on the ‘GET’ request, resulting from thread starvation, poor application performance, or a lot of context switching leading to a high thread pool count.
Azure AppInsights - Http Result code Faulted
https://github.com/microsoft/ApplicationInsights-dotnet/issues/1362#issuecomment-511488536
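If it helps while the underlying DNS/throttling issue is investigated, here is a minimal sketch (assuming the Microsoft.ApplicationInsights .NET SDK; the class name FaultedDependencyProcessor is hypothetical) of a telemetry processor that tags dependencies whose ResultCode is "Faulted" so they are easier to filter and alert on:

using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;

// Hypothetical processor: it does not fix the DNS errors, it only marks the
// "Faulted" dependencies so they can be found and alerted on in the portal.
public class FaultedDependencyProcessor : ITelemetryProcessor
{
    private readonly ITelemetryProcessor _next;

    public FaultedDependencyProcessor(ITelemetryProcessor next)
    {
        _next = next;
    }

    public void Process(ITelemetry item)
    {
        if (item is DependencyTelemetry dependency && dependency.ResultCode == "Faulted")
        {
            dependency.Properties["faultedDependency"] = "true";
        }

        _next.Process(item);
    }
}

The processor would be registered the usual way for the .NET SDK, either in ApplicationInsights.config or through the TelemetryProcessorChainBuilder.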
So I am getting an error in Azure Data Factory that I haven't been able to find any information about. I am running a data flow and eventually (after an hour or so) get this error
{"StatusCode":"DFExecutorUserError","Message":"Job failed due to
reason: The service has encountered an error processing your request.
Please try again. Error code 1204.","Details":"The service has
encountered an error processing your request. Please try again. Error
code 1204."}
Troubleshooting I have already done:
I have successfully run the data flow using the sample option, with 1 million rows.
I am processing 3 years of data, and I have successfully processed all of it by filtering the data by year and running the data flow once for each year.
So I think I have shown the data isn't the problem, because I have processed all of it by breaking it down into 3 runs.
I haven't found any pattern in how long the pipeline runs before the error occurs that would indicate I am hitting a timeout value.
The source and sink for this data flow are both an Azure SQL Server database.
Does anyone have any thoughts? Any suggestions for getting a more verbose error out of Data Factory? (I already have the pipeline set to verbose logging.)
We are glad to hear that you have found the cause:
"I opened a Microsoft support ticket and they are saying it is a database transient caused failure."
I think the error will resolve itself automatically. I'm posting this as an answer so it can be beneficial to other community members. Thank you.
Update:
The most important thing is that you resolved it by increasing the vCores in the end.
"The only thing they gave me was their BS article on handling
transient errors. Maybe I’m just old but a database that cannot
maintain connections to it is not very useful. What I’ve done to
workaround this is increase my vCores. This sql database was a
serverless one. While performance didn’t look bad my guess is the
database must be doing some sort of resize in the background to
handle the hour long data builds I need it to do. I had already tried
setting the min/max vCores to be the same. The connection errors
disappeared when I increased the vCores count to 6."
I have a UWP app installed on an UP board that reads IoT Hub messages sent to that device ID.
deviceClient = DeviceClient.CreateFromConnectionString(deviceConnectionString, TransportType.Mqtt);
Message receivedMessage = await deviceClient.ReceiveAsync();
The app works fine and reads the messages correctly, but sometimes I have these exceptions:
IotHubClientTransientException: Transient error occured, please retry.
I read that these errors may be generated by a wrong connection string, but that's not possible in my case.
Can someone help me?
The error is most likely caused by a network connectivity issue. Just add a retry strategy. You could write your own or use a library like Polly.
In a distributed world, connectivity issues should be expected, so I don't think there is any problem with your code other than that it should be more resilient. I think it is really nice that the exception even tells you it should be retried; most of the time you have to figure that out yourself.
Some more guidance from the Azure team can be found here. In your case the Retry pattern is a good fit:
Retry
Enable an application to handle anticipated, temporary failures when it tries to connect to a service or network resource by transparently retrying an operation that's previously failed.
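As a sketch only (assuming the Polly NuGet package; the exception type and namespace are taken from the question and may differ between device SDK versions), retrying the receive call could look roughly like this:

using System;
using System.Threading.Tasks;
using Microsoft.Azure.Devices.Client;
using Microsoft.Azure.Devices.Client.Exceptions;
using Polly;

// deviceConnectionString is the same placeholder as in the question.
var deviceClient = DeviceClient.CreateFromConnectionString(deviceConnectionString, TransportType.Mqtt);

// Retry the receive with exponential backoff whenever the transient exception is thrown.
var retry = Policy
    .Handle<IotHubClientTransientException>()
    .WaitAndRetryAsync(5, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)));

Message receivedMessage = await retry.ExecuteAsync(() => deviceClient.ReceiveAsync());

Newer versions of the device SDK also expose a built-in DeviceClient.SetRetryPolicy(...) mechanism, which may be preferable to rolling your own retry loop.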
We have a worker role running in the cloud which polls an Azure CloudQueue periodically retrieving messages that a web role has put on there for us. Currently the worker role and web role are housed in the same Cloud Service application and currently we are only running one instance.
As we are testing we have our logging switched on and so the contents of the messages and other useful information appear in our cloud storage which we view using Cerebrata Azure Diagnostics Manager. (Great product btw)
DiagnosticMonitorConfiguration diagConfig = DiagnosticMonitor.GetDefaultInitialConfiguration();
diagConfig.Logs.ScheduledTransferLogLevelFilter = LogLevel.Verbose;
It all appears to work remarkably well, actually; however, occasionally we see a Verbose message in the trace log which simply has "Fail" as the message. The code it appears to be generated from is wrapped in a try/catch, so it is odd that we aren't seeing the message through those means.
It would appear that something is happening that is outside our code's control; perhaps the worker role is being restarted, or the cloud OS is detecting a major error that only it can deal with by restarting our worker role. It recovers and carries on, so what might be happening is somewhat of a mystery to us.
What we haven't ascertained yet is whether we are losing a message.
Any help would be gratefully appreciated.
Cheers
Kindo Malay
Without the stack trace it's hard to say too much, but with the logging set to verbose it's quite likely that you're seeing some internal logging from one of the DLLs you're using.
For example, if you run an Azure Table query that causes certain kinds of errors, the error will be logged out 3 times because the storage client library catches the error, traces it out, and then retries.
If the error is not being caught by your try catch block, then it's likely nothing you need to worry about.
If deliverability of queue messages is important to you, you should ensure that you make use of the visibility timeout overload of CloudQueue.GetMessage and only delete the message when you've finished processing it. You may end up processing some messages twice, but at least you will process all of them.
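For illustration, a minimal sketch using the classic Microsoft.WindowsAzure.Storage queue client from that era (the connection string, queue name, and ProcessMessage method are placeholders):

using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

var account = CloudStorageAccount.Parse(storageConnectionString); // placeholder connection string
var queue = account.CreateCloudQueueClient().GetQueueReference("work-items"); // placeholder queue name

// The message stays invisible to other consumers for 5 minutes; if processing
// fails before DeleteMessage is called, the message becomes visible again and is retried.
CloudQueueMessage message = queue.GetMessage(TimeSpan.FromMinutes(5));
if (message != null)
{
    ProcessMessage(message.AsString); // hypothetical processing method
    queue.DeleteMessage(message);
}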
If your role instance is getting restarted after running for a while, it's often because your process exited due to an unhandled exception.