Azure WebJob logging is crashing the host, what can we do?

We are using Azure to run several WebJobs. One of our WebJob functions has the following signature:
public static Task ProcessFileUploadedMessageAsync(
    [QueueTrigger("uploads")] FileUploadedMessage message,
    TextWriter logger,
    CancellationToken cancellationToken)
This function is monitoring a queue for a message that indicates a file has been uploaded, which then triggers an import of the file's data. Note the use of a TextWriter as the second argument: this is supplied by the WebJobs API infrastructure.
Our import process is kind of slow (can be several hours for a single file import in some cases), so we periodically write messages to the log (via the TextWriter) to track our progress. Unfortunately, our larger files are causing the WebJob hosting process to be terminated due to a logging exception. Here is a sample stack trace from the hosting process log:
[04/02/2016 03:44:59 > 660083: ERR ]
[04/02/2016 03:45:00 > 660083: ERR ] Unhandled Exception: System.ArgumentOutOfRangeException: Index was out of range. Must be non-negative and less than the size of the collection.
[04/02/2016 03:45:00 > 660083: ERR ] Parameter name: chunkLength
[04/02/2016 03:45:00 > 660083: ERR ] at System.Text.StringBuilder.ToString()
[04/02/2016 03:45:00 > 660083: ERR ] at System.IO.StringWriter.ToString()
[04/02/2016 03:45:00 > 660083: ERR ] at Microsoft.Azure.WebJobs.Host.Loggers.UpdateOutputLogCommand.<UpdateOutputBlob>d__10.MoveNext()
[04/02/2016 03:45:00 > 660083: ERR ] --- End of stack trace from previous location where exception was thrown ---
[04/02/2016 03:45:00 > 660083: ERR ] at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
[04/02/2016 03:45:00 > 660083: ERR ] at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
[04/02/2016 03:45:00 > 660083: ERR ] at Microsoft.Azure.WebJobs.Host.Loggers.UpdateOutputLogCommand.<TryExecuteAsync>d__3.MoveNext()
[04/02/2016 03:45:00 > 660083: ERR ] --- End of stack trace from previous location where exception was thrown ---
[04/02/2016 03:45:00 > 660083: ERR ] at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
[04/02/2016 03:45:00 > 660083: ERR ] at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
[04/02/2016 03:45:00 > 660083: ERR ] at Microsoft.Azure.WebJobs.Host.Timers.RecurrentTaskSeriesCommand.<ExecuteAsync>d__0.MoveNext()
[04/02/2016 03:45:00 > 660083: ERR ] --- End of stack trace from previous location where exception was thrown ---
[04/02/2016 03:45:00 > 660083: ERR ] at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
[04/02/2016 03:45:00 > 660083: ERR ] at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
[04/02/2016 03:45:00 > 660083: ERR ] at Microsoft.Azure.WebJobs.Host.Timers.TaskSeriesTimer.<RunAsync>d__d.MoveNext()
[04/02/2016 03:45:00 > 660083: ERR ] --- End of stack trace from previous location where exception was thrown ---
[04/02/2016 03:45:00 > 660083: ERR ] at Microsoft.Azure.WebJobs.Host.Timers.BackgroundExceptionDispatcher.<>c__DisplayClass1.<Throw>b__0()
[04/02/2016 03:45:00 > 660083: ERR ] at System.Threading.ThreadHelper.ThreadStart_Context(Object state)
[04/02/2016 03:45:00 > 660083: ERR ] at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
[04/02/2016 03:45:00 > 660083: ERR ] at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
[04/02/2016 03:45:00 > 660083: ERR ] at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
[04/02/2016 03:45:00 > 660083: ERR ] at System.Threading.ThreadHelper.ThreadStart()
[04/02/2016 03:45:00 > 660083: SYS ERR ] Job failed due to exit code -532462766
[04/02/2016 03:45:01 > 660083: SYS INFO] Process went down, waiting for 0 seconds
[04/02/2016 03:45:01 > 660083: SYS INFO] Status changed to PendingRestart
The main problem is that this exception is being thrown not by our code but by something in the WebJobs API:
at Microsoft.Azure.WebJobs.Host.Loggers.UpdateOutputLogCommand.<UpdateOutputBlob>d__10.MoveNext()
We tried putting try...catch blocks around our calls to the TextWriter but these had no effect. It would appear that log messages are buffered somewhere and periodically flushed to Azure blob storage by a separate thread. If this is the case, then it would follow that we have no way of trapping the exception.
Has anyone else come across the same problem, or can anyone think of a possible solution or workaround?
For completeness, here is how we are using the TextWriter for logging:
await logger.WriteLineAsync("some text to log");
Nothing more complicated than that.
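For what it's worth, one mitigation we are considering is rate-limiting our own progress messages so the host's in-memory log buffer grows more slowly. A rough sketch (hypothetical wrapper, not verified against the SDK internals):

using System;
using System.IO;
using System.Threading.Tasks;

// Hypothetical helper: drops progress messages written less than 'interval'
// apart, so a multi-hour import produces far fewer buffered log lines.
public sealed class ThrottledLogger
{
    private readonly TextWriter inner;
    private readonly TimeSpan interval;
    private DateTime lastWrite = DateTime.MinValue;

    public ThrottledLogger(TextWriter inner, TimeSpan interval)
    {
        this.inner = inner;
        this.interval = interval;
    }

    public Task WriteLineAsync(string message)
    {
        if (DateTime.UtcNow - lastWrite < interval)
            return Task.CompletedTask; // too soon since the last message: skip it
        lastWrite = DateTime.UtcNow;
        return inner.WriteLineAsync(message);
    }
}

Usage would be var throttled = new ThrottledLogger(logger, TimeSpan.FromMinutes(1)); and then await throttled.WriteLineAsync("some text to log"); in place of the direct call.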
UPDATE: Seems as though this has been reported as issue #675 on the WebJobs SDK GitHub.

You should add the async modifier to the method, like this:
public async static Task ProcessFileUploadedMessageAsync(...)
For details, you can see this article.
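Applied to the signature from the question, that would look like this (a sketch; whether the async modifier by itself prevents the host crash is not confirmed):

public async static Task ProcessFileUploadedMessageAsync(
    [QueueTrigger("uploads")] FileUploadedMessage message,
    TextWriter logger,
    CancellationToken cancellationToken)
{
    // ... import work ...
    await logger.WriteLineAsync("some text to log"); // awaited rather than fire-and-forget
}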

Related

Why is my webjob failing with exit code -532462766?

This is the error reported by the WebJob console. The same code was running fine earlier; at least it was logging to Application Insights. This is the error:
[06/29/2022 08:14:21 > fa1ef8: SYS INFO] Status changed to Initializing
[06/29/2022 08:14:25 > fa1ef8: SYS INFO] Run script 'ArchiveOldSMBFiles.exe' with script host - 'WindowsScriptHost'
[06/29/2022 08:14:25 > fa1ef8: SYS INFO] Status changed to Running
[06/29/2022 08:14:26 > fa1ef8: ERR ]
[06/29/2022 08:14:27 > fa1ef8: ERR ] Unhandled Exception: System.TypeInitializationException: The type initializer for 'ArchiveOldSMBFiles.Program' threw an exception. ---> System.Configuration.ConfigurationErrorsException: Configuration system failed to initialize ---> System.Configuration.ConfigurationErrorsException: Unrecognized element. (D:\local\Temp\jobs\triggered\webjob7\awufcgqk.dxj\Debug\ArchiveOldSMBFiles.exe.config line 38)
[06/29/2022 08:14:27 > fa1ef8: ERR ] at System.Configuration.ConfigurationSchemaErrors.ThrowIfErrors(Boolean ignoreLocal)
[06/29/2022 08:14:27 > fa1ef8: ERR ] at System.Configuration.BaseConfigurationRecord.ThrowIfParseErrors(ConfigurationSchemaErrors schemaErrors)
[06/29/2022 08:14:27 > fa1ef8: ERR ] at System.Configuration.BaseConfigurationRecord.ThrowIfInitErrors()
[06/29/2022 08:14:27 > fa1ef8: ERR ] at System.Configuration.ClientConfigurationSystem.EnsureInit(String configKey)
[06/29/2022 08:14:27 > fa1ef8: ERR ] --- End of inner exception stack trace ---
[06/29/2022 08:14:27 > fa1ef8: ERR ] at System.Configuration.ClientConfigurationSystem.EnsureInit(String configKey)
[06/29/2022 08:14:27 > fa1ef8: ERR ] at System.Configuration.ClientConfigurationSystem.PrepareClientConfigSystem(String sectionName)
[06/29/2022 08:14:27 > fa1ef8: ERR ] at System.Configuration.ClientConfigurationSystem.System.Configuration.Internal.IInternalConfigSystem.GetSection(String sectionName)
[06/29/2022 08:14:27 > fa1ef8: ERR ] at System.Configuration.ConfigurationManager.GetSection(String sectionName)
[06/29/2022 08:14:27 > fa1ef8: ERR ] at System.Configuration.PrivilegedConfigurationManager.GetSection(String sectionName)
[06/29/2022 08:14:27 > fa1ef8: ERR ] at System.Net.Configuration.SettingsSectionInternal.get_Section()
[06/29/2022 08:14:27 > fa1ef8: ERR ] at System.Net.Sockets.Socket.InitializeSockets()
[06/29/2022 08:14:27 > fa1ef8: ERR ] at System.Net.NetworkInformation.NetworkChange.AddressChangeListener.StartHelper(NetworkAddressChangedEventHandler caller, Boolean captureContext, StartIPOptions startIPOptions)
[06/29/2022 08:14:27 > fa1ef8: ERR ] at Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel.Implementation.Network.AddAddressChangedEventHandler(NetworkAddressChangedEventHandler handler) in /_/BASE/src/ServerTelemetryChannel/Implementation/Network.cs:line 12
[06/29/2022 08:14:27 > fa1ef8: ERR ] at Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel.Implementation.TransmissionPolicy.NetworkAvailabilityTransmissionPolicy.SubscribeToNetworkAddressChangedEvents() in /_/BASE/src/ServerTelemetryChannel/Implementation/TransmissionPolicy/NetworkAvailabilityTransmissionPolicy.cs:line 36
[06/29/2022 08:14:27 > fa1ef8: ERR ] at Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel.Implementation.TransmissionPolicy.NetworkAvailabilityTransmissionPolicy.Initialize(Transmitter transmitter) in /_/BASE/src/ServerTelemetryChannel/Implementation/TransmissionPolicy/NetworkAvailabilityTransmissionPolicy.cs:line 19
[06/29/2022 08:14:27 > fa1ef8: ERR ] at Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel.Implementation.TransmissionPolicy.TransmissionPolicyCollection.Initialize(Transmitter transmitter) in /_/BASE/src/ServerTelemetryChannel/Implementation/TransmissionPolicy/TransmissionPolicyCollection.cs:line 49
[06/29/2022 08:14:27 > fa1ef8: ERR ] at Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel.Implementation.Transmitter..ctor(TransmissionSender sender, TransmissionBuffer transmissionBuffer, TransmissionStorage storage, TransmissionPolicyCollection policies, BackoffLogicManager backoffLogicManager) in /_/BASE/src/ServerTelemetryChannel/Implementation/Transmitter.cs:line 57
[06/29/2022 08:14:27 > fa1ef8: ERR ] at Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel.ServerTelemetryChannel..ctor(INetwork network, IApplicationLifecycle applicationLifecycle) in /_/BASE/src/ServerTelemetryChannel/ServerTelemetryChannel.cs:line 52
[06/29/2022 08:14:27 > fa1ef8: ERR ] at Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel.ServerTelemetryChannel..ctor() in /_/BASE/src/ServerTelemetryChannel/ServerTelemetryChannel.cs:line 41
[06/29/2022 08:14:27 > fa1ef8: ERR ] at ArchiveOldSMBFiles.Program..cctor() in D:\Projects\ArchiveOldSMBFiles\Program.cs:line 30
[06/29/2022 08:14:27 > fa1ef8: ERR ] --- End of inner exception stack trace ---
[06/29/2022 08:14:27 > fa1ef8: ERR ] at ArchiveOldSMBFiles.Program.Main(String[] args)
[06/29/2022 08:14:27 > fa1ef8: SYS INFO] Status changed to Failed
[06/29/2022 08:14:27 > fa1ef8: SYS ERR ] Job failed due to exit code -532462766
My code:
class Program
{
    static IServiceCollection services = new ServiceCollection()
        .AddLogging(loggingBuilder => loggingBuilder.AddFilter<Microsoft.Extensions.Logging.ApplicationInsights.ApplicationInsightsLoggerProvider>("", LogLevel.Trace))
        .AddSingleton(typeof(ITelemetryChannel), new ServerTelemetryChannel())
        .AddApplicationInsightsTelemetryWorkerService(ConfigurationManager.AppSettings["APPINSIGHTS_INSTRUMENTATIONKEY"]);
    static IServiceProvider serviceProvider = services.BuildServiceProvider();
    static ILogger<Program> logger = serviceProvider.GetRequiredService<ILogger<Program>>();
    static TelemetryClient telemetryClient = serviceProvider.GetRequiredService<TelemetryClient>();

    public static void Main(string[] args)
    {
        try
        {
            var _storageConn = ConfigurationManager
                .AppSettings["PaperCaptureStorageConnectionString"];
            ServicePointManager.DefaultConnectionLimit = 50;
            JobHostConfiguration config = new JobHostConfiguration();
            config.StorageConnectionString = _storageConn;
            var host = new JobHost(config);
            Task callTask = host.CallAsync(typeof(Program).GetMethod(nameof(Program.ProcessMethod)));
            callTask.Wait();
            host.RunAndBlock();
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex);
        }
    }

    [NoAutomaticTrigger]
    public static void ProcessMethod()
    {
        using (telemetryClient.StartOperation<RequestTelemetry>("ArchiveOldSMBFiles1233"))
        {
            logger.LogInformation("-ArchiveOldSMBFiles.Main Execution Started");
            var totalDeletedFiles = 0;
            try
            {
                string jsonTemplate = string.Empty;
                string fileShareName = string.Empty;
                Console.WriteLine("Before unity of work");
                using (var unitOfWork = new DataConnector().UnitOfWork)
                {
                    Console.WriteLine("After unity of work");
                    jsonTemplate = ConfigurationService.GetConfigValueByConfigCode(
                        BusinessLogic.Helper.Constants.ConfigType.AzureConstantValues,
                        BusinessLogic.Helper.Constants.ConfigCode.SMBRemoveOldFile, unitOfWork);
                    Console.WriteLine("This is your json: " + jsonTemplate);
                }
            }
            catch (Exception ex)
            {
                Console.WriteLine("After Catch");
                Console.WriteLine("Crash: " + ex.ToString());
                logger.LogError(ex.ToString());
            }
            finally
            {
                Console.WriteLine("After finally");
                telemetryClient.Flush();
            }
        }
    }
}
I have fixed the Azure SQL connection issue, which was caused by a wrong Azure connection string.
I am uploading my webjob by zipping the Debug folder and uploading it.
To resolve the error Job failed due to exit code -532462766,
please check the following:
This error usually occurs when the config file is broken.
After installing Application Insights, try comparing the configuration files.
Check whether you have a web.config file. If not, try creating a web.config in the console app, similar to app.config, and republish, as suggested in this SO thread by Lucas Huet-Hudson.
Try copying the programname.exe.config file along with the exe, as answered by Nasir Razzaq in the above thread.
Try disabling the dashboard logging, as suggested by NicolajHedeager in this GitHub issue (a sketch follows after the links below).
If the issue still persists, raise an Azure support ticket to find the root cause of the issue.
For more detail, please refer to the links below:
Turning a console app into a long-running Azure WebJob – Sander van de Velde (wordpress.com) by Sander van de Velde
Azure WebJob fails with exit code 532462766 (microsoft.com) by Anastasia Black
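For the dashboard-logging suggestion above, a minimal sketch (reusing the JobHostConfiguration already shown in this question):

JobHostConfiguration config = new JobHostConfiguration();
config.StorageConnectionString = _storageConn;
// Setting the dashboard connection string to null disables dashboard logging,
// so the host stops writing function output logs to blob storage.
config.DashboardConnectionString = null;
var host = new JobHost(config);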

Azure WebJobs FunctionIndexingException with Microsoft.WindowsAzure.Storage.StorageException 403 Forbidden

When I try to run my WebJob, I get the following failure:
[07/12/2018 18:09:21 > 7351a7: ERR ] Unhandled Exception: Microsoft.Azure.WebJobs.Host.Indexers.FunctionIndexingException: Error indexing method 'Foo.Bar' ---> System.InvalidOperationException: Invalid storage account 'storage'. Please make sure your credentials are correct. ---> Microsoft.WindowsAzure.Storage.StorageException: The remote server returned an error: (403) Forbidden. ---> System.Net.WebException: The remote server returned an error: (403) Forbidden.
[07/12/2018 18:09:21 > 7351a7: ERR ] at Microsoft.WindowsAzure.Storage.Shared.Protocol.HttpResponseParsers.ProcessExpectedStatusCodeNoException[T](HttpStatusCode expectedStatusCode, HttpStatusCode actualStatusCode, T retVal, StorageCommandBase`1 cmd, Exception ex)
[07/12/2018 18:09:21 > 7351a7: ERR ] at Microsoft.WindowsAzure.Storage.Blob.CloudBlobClient.b__19(RESTCommand`1 cmd, HttpWebResponse resp, Exception ex, OperationContext ctx)
[07/12/2018 18:09:21 > 7351a7: ERR ] at Microsoft.WindowsAzure.Storage.Core.Executor.Executor.EndGetResponse[T](IAsyncResult getResponseResult)
[07/12/2018 18:09:21 > 7351a7: ERR ] --- End of inner exception stack trace ---
[07/12/2018 18:09:21 > 7351a7: ERR ] at Microsoft.WindowsAzure.Storage.Core.Executor.Executor.EndExecuteAsync[T](IAsyncResult result)
[07/12/2018 18:09:21 > 7351a7: ERR ] at Microsoft.WindowsAzure.Storage.Core.Util.AsyncExtensions.c__DisplayClass2`1.b__0(IAsyncResult ar)
[07/12/2018 18:09:21 > 7351a7: ERR ] --- End of stack trace from previous location where exception was thrown ---
[07/12/2018 18:09:21 > 7351a7: ERR ] at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
[07/12/2018 18:09:21 > 7351a7: ERR ] at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
[07/12/2018 18:09:21 > 7351a7: ERR ] at Microsoft.Azure.WebJobs.Host.Executors.DefaultStorageCredentialsValidator.d__1.MoveNext()
[07/12/2018 18:09:21 > 7351a7: ERR ] --- End of inner exception stack trace ---
[07/12/2018 18:09:21 > 7351a7: ERR ] at Microsoft.Azure.WebJobs.Host.Executors.DefaultStorageCredentialsValidator.d__1.MoveNext()
[07/12/2018 18:09:21 > 7351a7: ERR ] --- End of stack trace from previous location where exception was thrown ---
[07/12/2018 18:09:21 > 7351a7: ERR ] at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
[07/12/2018 18:09:21 > 7351a7: ERR ] at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
[07/12/2018 18:09:21 > 7351a7: ERR ] at Microsoft.Azure.WebJobs.Host.Executors.DefaultStorageCredentialsValidator.d__0.MoveNext()
[07/12/2018 18:09:21 > 7351a7: ERR ] --- End of stack trace from previous location where exception was thrown ---
[07/12/2018 18:09:21 > 7351a7: ERR ] at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
[07/12/2018 18:09:21 > 7351a7: ERR ] at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
[07/12/2018 18:09:21 > 7351a7: ERR ] at Microsoft.Azure.WebJobs.Host.Executors.DefaultStorageAccountProvider.d__24.MoveNext()
[07/12/2018 18:09:21 > 7351a7: ERR ] --- End of stack trace from previous location where exception was thrown ---
[07/12/2018 18:09:21 > 7351a7: ERR ] at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
[07/12/2018 18:09:21 > 7351a7: ERR ] at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
[07/12/2018 18:09:21 > 7351a7: WARN] Reached maximum allowed output lines for this run, to see all of the job's logs you can enable website application diagnostics
[07/12/2018 18:09:21 > 7351a7: SYS ERR ] Job failed due to exit code -532462766
[07/12/2018 18:09:21 > 7351a7: SYS INFO] Process went down, waiting for 60 seconds
I have 100% validated that my storage account credentials are correct.
When I attempt to manually connect to the storage account and invoke CreateIfNotExists, I get:
[07/12/2018 18:11:46 > 7351a7: INFO] Microsoft.WindowsAzure.Storage.StorageException: The remote server returned an error: (403) Forbidden. ---> System.Net.WebException: The remote server returned an error: (403) Forbidden.
[07/12/2018 18:11:46 > 7351a7: INFO] at System.Net.HttpWebRequest.GetResponse()
[07/12/2018 18:11:46 > 7351a7: INFO] at Microsoft.WindowsAzure.Storage.Core.Executor.Executor.ExecuteSync[T](RESTCommand`1 cmd, IRetryPolicy policy, OperationContext operationContext)
[07/12/2018 18:11:46 > 7351a7: INFO] --- End of inner exception stack trace ---
[07/12/2018 18:11:46 > 7351a7: INFO] at Microsoft.WindowsAzure.Storage.Core.Executor.Executor.ExecuteSync[T](RESTCommand`1 cmd, IRetryPolicy policy, OperationContext operationContext)
[07/12/2018 18:11:46 > 7351a7: INFO] at Microsoft.WindowsAzure.Storage.Queue.CloudQueue.CreateIfNotExists(QueueRequestOptions options, OperationContext operationContext)
Upon dumping additional information about the exception from Microsoft.WindowsAzure.Storage.StorageExtendedErrorInformation, I see:
AuthenticationErrorDetail:The MAC signature found in the HTTP request 'CUVqVMvvWecPsnTEynjocGpq6TkBmkpJsVL6hr2jkKQ=' is not the same as any computed signature. Server used following string to sign: 'PUT
See the following GitHub issue, where Application Insights may be interfering with your HTTP request to the storage REST APIs:
https://github.com/Azure/azure-storage-net/issues/490
Make sure the following is in your applicationinsights.config file:
<Add Type="Microsoft.ApplicationInsights.DependencyCollector.DependencyTrackingTelemetryModule, Microsoft.AI.DependencyCollector">
<ExcludeComponentCorrelationHttpHeadersOnDomains>
<!--
Requests to the following hostnames will not be modified by adding correlation headers.
Add entries here to exclude additional hostnames.
NOTE: this configuration will be lost upon NuGet upgrade.
-->
<Add>core.windows.net</Add>
<Add>core.chinacloudapi.cn</Add>
<Add>core.cloudapi.de</Add>
<Add>core.usgovcloudapi.net</Add>
<Add>localhost</Add>
<Add>127.0.0.1</Add>
</ExcludeComponentCorrelationHttpHeadersOnDomains>
<IncludeDiagnosticSourceActivities>
<Add>Microsoft.Azure.EventHubs</Add>
<Add>Microsoft.Azure.ServiceBus</Add>
</IncludeDiagnosticSourceActivities>
</Add>

EventProcessorHost using in webjob with multiple instances - giving exception Microsoft.ServiceBus.Messaging.LeaseLostException

I am using EventProcessorHost in a WebJob with multiple instances, and it is throwing Microsoft.ServiceBus.Messaging.LeaseLostException. In particular, only one instance is throwing this exception.
It does not throw any exception when I run it as a single instance.
Microsoft.ServiceBus.Messaging.LeaseLostException: Exception of type 'Microsoft.ServiceBus.Messaging.LeaseLostException' was thrown. ---> Microsoft.WindowsAzure.Storage.StorageException: The remote server returned an error: (409) Conflict. ---> System.Net.WebException: The remote server returned an error: (409) Conflict.
at Microsoft.WindowsAzure.Storage.Shared.Protocol.HttpResponseParsers.ProcessExpectedStatusCodeNoException[T](HttpStatusCode expectedStatusCode, HttpStatusCode actualStatusCode, T retVal, StorageCommandBase1 cmd, Exception ex) in c:\Program Files (x86)\Jenkins\workspace\release_dotnet_master\Lib\Common\Shared\Protocol\HttpResponseParsers.Common.cs:line 50
at Microsoft.WindowsAzure.Storage.Blob.CloudBlob.<>c__DisplayClass33.<RenewLeaseImpl>b__32(RESTCommand1 cmd, HttpWebResponse resp, Exception ex, OperationContext ctx) in c:\Program Files (x86)\Jenkins\workspace\release_dotnet_master\Lib\ClassLibraryCommon\Blob\CloudBlob.cs:line 3186
at Microsoft.WindowsAzure.Storage.Core.Executor.Executor.EndGetResponse[T](IAsyncResult getResponseResult) in c:\Program Files (x86)\Jenkins\workspace\release_dotnet_master\Lib\ClassLibraryCommon\Core\Executor\Executor.cs:line 299
--- End of inner exception stack trace ---
at Microsoft.WindowsAzure.Storage.Core.Executor.Executor.EndExecuteAsync[T](IAsyncResult result) in c:\Program Files (x86)\Jenkins\workspace\release_dotnet_master\Lib\ClassLibraryCommon\Core\Executor\Executor.cs:line 50
at Microsoft.WindowsAzure.Storage.Blob.CloudBlob.EndRenewLease(IAsyncResult asyncResult) in c:\Program Files (x86)\Jenkins\workspace\release_dotnet_master\Lib\ClassLibraryCommon\Blob\CloudBlob.cs:line 1982
at Microsoft.WindowsAzure.Storage.Core.Util.AsyncExtensions.<>c__DisplayClass4.b__3(IAsyncResult ar) in c:\Program Files (x86)\Jenkins\workspace\release_dotnet_master\Lib\ClassLibraryCommon\Core\Util\AsyncExtensions.cs:line 114
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.ServiceBus.Messaging.BlobLeaseManager.d__23.MoveNext()
--- End of inner exception stack trace ---
at Microsoft.ServiceBus.Messaging.BlobLeaseManager.d__23.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.ServiceBus.Messaging.BlobLeaseManager.d__24.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at RoutingServiceWebJob.DataProcessorFactory.EventHubDataProcessor.d__37.MoveNext() in d:\a\1\s\RoutingServiceWebJob\DataProcessorFactory\EventHubDataProcessor.cs:line 163
I am reading messages one at a time. Please suggest.
I was able to avoid this by setting the host name to a unique string. For example:
var eventProcessorHostName = Guid.NewGuid().ToString();
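In context, the registration would look something like this (a sketch; the hub path, connection strings, and processor type are placeholders, and MyEventProcessor is assumed to implement IEventProcessor):

// Each instance registers under a unique host name, so blob leases are
// tracked per instance instead of colliding across instances.
var eventProcessorHostName = Guid.NewGuid().ToString();
var host = new EventProcessorHost(
    eventProcessorHostName,
    "myeventhub",                                // EventHub path (placeholder)
    EventHubConsumerGroup.DefaultGroupName,
    eventHubConnectionString,                    // placeholder
    storageConnectionString);                    // placeholder
await host.RegisterEventProcessorAsync<MyEventProcessor>();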

About Microsoft.ServiceBus.Messaging.LeaseLostException

I got a Microsoft.ServiceBus.Messaging.LeaseLostException in my EventHub processor.
What does this exception mean? What is the possible root cause of this exception?
Here is the stack trace:
at Microsoft.ServiceBus.Messaging.BlobLeaseManager.d__24.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.ServiceBus.Messaging.BlobLeaseManager.d__25.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at MyEventHub.EventProcessor`1.d__6.MoveNext()

Microsoft.WindowsAzure.Storage.StorageException : "The remote server returned an error: (409) Conflict.":
at Microsoft.WindowsAzure.Storage.Core.Executor.Executor.EndExecuteAsync[T](IAsyncResult result) in c:\Program Files (x86)\Jenkins\workspace\release_dotnet_master\Lib\ClassLibraryCommon\Core\Executor\Executor.cs:line 60
at Microsoft.WindowsAzure.Storage.Core.Util.AsyncExtensions.<>c__DisplayClass4.b__3(IAsyncResult ar) in c:\Program Files (x86)\Jenkins\workspace\release_dotnet_master\Lib\ClassLibraryCommon\Core\Util\AsyncExtensions.cs:line 115
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.ServiceBus.Messaging.BlobLeaseManager.d__24.MoveNext()

System.Net.WebException : "The remote server returned an error: (409) Conflict.":
at Microsoft.WindowsAzure.Storage.Shared.Protocol.HttpResponseParsers.ProcessExpectedStatusCodeNoException[T](HttpStatusCode expectedStatusCode, HttpStatusCode actualStatusCode, T retVal, StorageCommandBase`1 cmd, Exception ex) in c:\Program Files (x86)\Jenkins\workspace\release_dotnet_master\Lib\Common\Shared\Protocol\HttpResponseParsers.Common.cs:line 50
at Microsoft.WindowsAzure.Storage.Blob.CloudBlob.<>c__DisplayClass33.<RenewLeaseImpl>b__32(RESTCommand`1 cmd, HttpWebResponse resp, Exception ex, OperationContext ctx) in c:\Program Files (x86)\Jenkins\workspace\release_dotnet_master\Lib\ClassLibraryCommon\Blob\CloudBlob.cs:line 3186
at Microsoft.WindowsAzure.Storage.Core.Executor.Executor.EndGetResponse[T](IAsyncResult getResponseResult) in c:\Program Files (x86)\Jenkins\workspace\release_dotnet_master\Lib\ClassLibraryCommon\Core\Executor\Executor.cs:line 306
And the only internal exception in my own code is:
at MyEventHub.EventProcessor`1.<CloseAsync>d__6.MoveNext()

Microsoft.WindowsAzure.Storage.StorageException : "The remote server returned an error: (409) Conflict.":
...
And here is the code of CloseAsync:
public async Task CloseAsync(PartitionContext context, CloseReason reason)
{
    try
    {
        if (reason == CloseReason.Shutdown)
        {
            await context.CheckpointAsync();
        }
    }
    catch (Exception ex)
    {
        this.HandleException(ex);
    }
    finally
    {
        this.configuration.DecrementOpenedPartitionCount?.Invoke();
    }
}
MyEventHub is hosted in a worker role, which is deployed with 2 instances in Azure.
@Youxu, for your EventHub, how many receivers (EventProcessorHost) are configured? As far as I know, we can have only one active receiver with a given epoch. If we create an EventHub listener (with default options) while one listener is already listening to the EventHub, the newly created listener gets a higher epoch and the first listener gets disconnected due to a LeaseLostException.
Check whether you are (accidentally) running more than one receiver for the same EventHub concurrently.

Running webjob in console works but throws exception on Azure: The remote name could not be resolved: 'xxx.queue.core.windows.net'

My webjob deletes documents from my blob storage. If I copy the code to a console app and run it, it runs to completion without error. If I create a new WebJob in Visual Studio (right-click my website project and select Add -> New Azure WebJob Project), copy the code into Functions.cs, and deploy, Azure throws an exception when running the webjob:
===============================================
[09/09/2016 01:51:44 > fc93a9: ERR ] Unhandled Exception: Microsoft.WindowsAzure.Storage.StorageException: The remote name could not be resolved: 'xxx.queue.core.windows.net' ---> System.Net.WebException: The remote name could not be resolved: 'xxx.queue.core.windows.net'
[09/09/2016 01:51:44 > fc93a9: ERR ] at System.Net.HttpWebRequest.EndGetResponse(IAsyncResult asyncResult)
[09/09/2016 01:51:44 > fc93a9: ERR ] at Microsoft.WindowsAzure.Storage.Core.Executor.Executor.EndGetResponse[T](IAsyncResult getResponseResult)
[09/09/2016 01:51:44 > fc93a9: ERR ] --- End of inner exception stack trace ---
[09/09/2016 01:51:44 > fc93a9: ERR ] at Microsoft.WindowsAzure.Storage.Core.Executor.Executor.EndExecuteAsync[T](IAsyncResult result)
[09/09/2016 01:51:44 > fc93a9: ERR ] at Microsoft.WindowsAzure.Storage.Queue.CloudQueue.EndExists(IAsyncResult asyncResult)
[09/09/2016 01:51:44 > fc93a9: ERR ] at Microsoft.WindowsAzure.Storage.Core.Util.AsyncExtensions.<>c__DisplayClass1`1.<CreateCallback>b__0(IAsyncResult ar)
[09/09/2016 01:51:44 > fc93a9: ERR ] --- End of stack trace from previous location where exception was thrown ---
[09/09/2016 01:51:44 > fc93a9: ERR ] at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
[09/09/2016 01:51:44 > fc93a9: ERR ] at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
[09/09/2016 01:51:44 > fc93a9: ERR ] at Microsoft.Azure.WebJobs.Host.Queues.Listeners.QueueListener.<ExecuteAsync>d__4.MoveNext()
[09/09/2016 01:51:44 > fc93a9: ERR ] --- End of stack trace from previous location where exception was thrown ---
[09/09/2016 01:51:44 > fc93a9: ERR ] at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
[09/09/2016 01:51:44 > fc93a9: ERR ] at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
[09/09/2016 01:51:44 > fc93a9: ERR ] at Microsoft.Azure.WebJobs.Host.Timers.TaskSeriesTimer.<RunAsync>d__d.MoveNext()
[09/09/2016 01:51:44 > fc93a9: ERR ] --- End of stack trace from previous location where exception was thrown ---
[09/09/2016 01:51:44 > fc93a9: ERR ] at Microsoft.Azure.WebJobs.Host.Timers.BackgroundExceptionDispatcher.<>c__DisplayClass1.<Throw>b__0()
[09/09/2016 01:51:44 > fc93a9: ERR ] at System.Threading.ThreadHelper.ThreadStart_Context(Object state)
[09/09/2016 01:51:44 > fc93a9: ERR ] at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
[09/09/2016 01:51:44 > fc93a9: ERR ] at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
[09/09/2016 01:51:44 > fc93a9: ERR ] at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
[09/09/2016 01:51:44 > fc93a9: ERR ] at System.Threading.ThreadHelper.ThreadStart()
[09/09/2016 01:51:44 > fc93a9: SYS ERR ] Job failed due to exit code -532462766
=======================================================
I don't have a queue, so that's probably where the error is coming from, but the weird thing is I do not try to access a queue; I am only accessing my blob storage.
Not sure if this helps but my webjobs Program.cs looks like this:
static void Main()
{
    JobHostConfiguration config = new JobHostConfiguration();
    config.Tracing.ConsoleLevel = TraceLevel.Verbose;
    config.UseTimers();
    var host = new JobHost(config);
    // The following code ensures that the WebJob will be running continuously
    host.RunAndBlock();
}
And this is the only code in the job that uses storage, the rest is database stuff to get lists, etc.
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("StorageConnectionString"));

// Create the blob client.
CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();

// Retrieve reference to a previously created container.
CloudBlobContainer container = blobClient.GetContainerReference("agentdocuments");

foreach (var doc in expiredDocs)
{
    if (doc.ContentLength == 1)
    {
        // Retrieve reference to blob
        CloudBlockBlob blockBlob = container.GetBlockBlobReference($"{doc.SubDomain.ToLower()}/{doc.ClientId}/{doc.FileId}");
        await blockBlob.DeleteIfExistsAsync();
    }
}
EDIT: I should also add that this only started occurring after updating:
Microsoft.Azure.WebJobs from 1.1.1 to 1.1.2
Microsoft.Azure.WebJobs.Core from 1.1.1 to 1.1.2
WindowsAzure.Storage from 5.0.2 to 7.2.0
There were also a few code changes and other updates (like Newtonsoft.Json, Microsoft.WindowsAzure.ConfigurationManager, Microsoft.Web.WebJobs.Publish), but no code changes in how I accessed the blob.
UPDATE - My webjob ran over the weekend to completion. It failed at 22:23 with the same error as above, but then ran successfully on the retry at 22:24. But then it failed again today with the same error. I'm going to try removing the logging as stated by @FleminAdambukulam, but it is strange how it ran without any code changes and then fails again...
The problem is that the storage account is of the special type that only supports Blobs. Even though you are not using queues, the WebJobs SDK relies on them internally. So in order to use the WebJobs SDK, you need to use a regular Storage account.
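A sketch of the resulting setup (setting names here are hypothetical): give the SDK a general-purpose account and keep the blob-only account for your own data.

// General-purpose account: the WebJobs SDK creates queues and tables internally.
JobHostConfiguration config = new JobHostConfiguration();
config.StorageConnectionString = CloudConfigurationManager.GetSetting("WebJobsStorage");
config.DashboardConnectionString = CloudConfigurationManager.GetSetting("WebJobsStorage");

// Blob-only account: still fine for the application's own blob access.
CloudStorageAccount dataAccount = CloudStorageAccount.Parse(
    CloudConfigurationManager.GetSetting("StorageConnectionString"));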
