This is the error reported by the WebJob console. Earlier the same code was running fine; at least it was logging to Application Insights. This is the error:
[06/29/2022 08:14:21 > fa1ef8: SYS INFO] Status changed to Initializing
[06/29/2022 08:14:25 > fa1ef8: SYS INFO] Run script 'ArchiveOldSMBFiles.exe' with script host - 'WindowsScriptHost'
[06/29/2022 08:14:25 > fa1ef8: SYS INFO] Status changed to Running
[06/29/2022 08:14:26 > fa1ef8: ERR ]
[06/29/2022 08:14:27 > fa1ef8: ERR ] Unhandled Exception: System.TypeInitializationException: The type initializer for 'ArchiveOldSMBFiles.Program' threw an exception. ---> System.Configuration.ConfigurationErrorsException: Configuration system failed to initialize ---> System.Configuration.ConfigurationErrorsException: Unrecognized element. (D:\local\Temp\jobs\triggered\webjob7\awufcgqk.dxj\Debug\ArchiveOldSMBFiles.exe.config line 38)
[06/29/2022 08:14:27 > fa1ef8: ERR ] at System.Configuration.ConfigurationSchemaErrors.ThrowIfErrors(Boolean ignoreLocal)
[06/29/2022 08:14:27 > fa1ef8: ERR ] at System.Configuration.BaseConfigurationRecord.ThrowIfParseErrors(ConfigurationSchemaErrors schemaErrors)
[06/29/2022 08:14:27 > fa1ef8: ERR ] at System.Configuration.BaseConfigurationRecord.ThrowIfInitErrors()
[06/29/2022 08:14:27 > fa1ef8: ERR ] at System.Configuration.ClientConfigurationSystem.EnsureInit(String configKey)
[06/29/2022 08:14:27 > fa1ef8: ERR ] --- End of inner exception stack trace ---
[06/29/2022 08:14:27 > fa1ef8: ERR ] at System.Configuration.ClientConfigurationSystem.EnsureInit(String configKey)
[06/29/2022 08:14:27 > fa1ef8: ERR ] at System.Configuration.ClientConfigurationSystem.PrepareClientConfigSystem(String sectionName)
[06/29/2022 08:14:27 > fa1ef8: ERR ] at System.Configuration.ClientConfigurationSystem.System.Configuration.Internal.IInternalConfigSystem.GetSection(String sectionName)
[06/29/2022 08:14:27 > fa1ef8: ERR ] at System.Configuration.ConfigurationManager.GetSection(String sectionName)
[06/29/2022 08:14:27 > fa1ef8: ERR ] at System.Configuration.PrivilegedConfigurationManager.GetSection(String sectionName)
[06/29/2022 08:14:27 > fa1ef8: ERR ] at System.Net.Configuration.SettingsSectionInternal.get_Section()
[06/29/2022 08:14:27 > fa1ef8: ERR ] at System.Net.Sockets.Socket.InitializeSockets()
[06/29/2022 08:14:27 > fa1ef8: ERR ] at System.Net.NetworkInformation.NetworkChange.AddressChangeListener.StartHelper(NetworkAddressChangedEventHandler caller, Boolean captureContext, StartIPOptions startIPOptions)
[06/29/2022 08:14:27 > fa1ef8: ERR ] at Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel.Implementation.Network.AddAddressChangedEventHandler(NetworkAddressChangedEventHandler handler) in /_/BASE/src/ServerTelemetryChannel/Implementation/Network.cs:line 12
[06/29/2022 08:14:27 > fa1ef8: ERR ] at Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel.Implementation.TransmissionPolicy.NetworkAvailabilityTransmissionPolicy.SubscribeToNetworkAddressChangedEvents() in /_/BASE/src/ServerTelemetryChannel/Implementation/TransmissionPolicy/NetworkAvailabilityTransmissionPolicy.cs:line 36
[06/29/2022 08:14:27 > fa1ef8: ERR ] at Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel.Implementation.TransmissionPolicy.NetworkAvailabilityTransmissionPolicy.Initialize(Transmitter transmitter) in /_/BASE/src/ServerTelemetryChannel/Implementation/TransmissionPolicy/NetworkAvailabilityTransmissionPolicy.cs:line 19
[06/29/2022 08:14:27 > fa1ef8: ERR ] at Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel.Implementation.TransmissionPolicy.TransmissionPolicyCollection.Initialize(Transmitter transmitter) in /_/BASE/src/ServerTelemetryChannel/Implementation/TransmissionPolicy/TransmissionPolicyCollection.cs:line 49
[06/29/2022 08:14:27 > fa1ef8: ERR ] at Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel.Implementation.Transmitter..ctor(TransmissionSender sender, TransmissionBuffer transmissionBuffer, TransmissionStorage storage, TransmissionPolicyCollection policies, BackoffLogicManager backoffLogicManager) in /_/BASE/src/ServerTelemetryChannel/Implementation/Transmitter.cs:line 57
[06/29/2022 08:14:27 > fa1ef8: ERR ] at Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel.ServerTelemetryChannel..ctor(INetwork network, IApplicationLifecycle applicationLifecycle) in /_/BASE/src/ServerTelemetryChannel/ServerTelemetryChannel.cs:line 52
[06/29/2022 08:14:27 > fa1ef8: ERR ] at Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel.ServerTelemetryChannel..ctor() in /_/BASE/src/ServerTelemetryChannel/ServerTelemetryChannel.cs:line 41
[06/29/2022 08:14:27 > fa1ef8: ERR ] at ArchiveOldSMBFiles.Program..cctor() in D:\Projects\ArchiveOldSMBFiles\Program.cs:line 30
[06/29/2022 08:14:27 > fa1ef8: ERR ] --- End of inner exception stack trace ---
[06/29/2022 08:14:27 > fa1ef8: ERR ] at ArchiveOldSMBFiles.Program.Main(String[] args)
[06/29/2022 08:14:27 > fa1ef8: SYS INFO] Status changed to Failed
[06/29/2022 08:14:27 > fa1ef8: SYS ERR ] Job failed due to exit code -532462766
My code:
class Program
{
    static IServiceCollection services = new ServiceCollection()
        .AddLogging(loggingBuilder => loggingBuilder.AddFilter<Microsoft.Extensions.Logging.ApplicationInsights.ApplicationInsightsLoggerProvider>("", LogLevel.Trace))
        .AddSingleton(typeof(ITelemetryChannel), new ServerTelemetryChannel())
        .AddApplicationInsightsTelemetryWorkerService(ConfigurationManager.AppSettings["APPINSIGHTS_INSTRUMENTATIONKEY"]);

    static IServiceProvider serviceProvider = services.BuildServiceProvider();
    static ILogger<Program> logger = serviceProvider.GetRequiredService<ILogger<Program>>();
    static TelemetryClient telemetryClient = serviceProvider.GetRequiredService<TelemetryClient>();

    public static void Main(string[] args)
    {
        try
        {
            var _storageConn = ConfigurationManager
                .AppSettings["PaperCaptureStorageConnectionString"];
            ServicePointManager.DefaultConnectionLimit = 50;
            JobHostConfiguration config = new JobHostConfiguration();
            config.StorageConnectionString = _storageConn;
            var host = new JobHost(config);
            Task callTask = host.CallAsync(typeof(Program).GetMethod(nameof(Program.ProcessMethod)));
            callTask.Wait();
            host.RunAndBlock();
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex);
        }
    }

    [NoAutomaticTrigger]
    public static void ProcessMethod()
    {
        using (telemetryClient.StartOperation<RequestTelemetry>("ArchiveOldSMBFiles1233"))
        {
            logger.LogInformation("-ArchiveOldSMBFiles.Main Execution Started");
            var totalDeletedFiles = 0;
            try
            {
                string jsonTemplate = string.Empty;
                string fileShareName = string.Empty;
                Console.WriteLine("Before unity of work");
                using (var unitOfWork = new DataConnector().UnitOfWork)
                {
                    Console.WriteLine("After unity of work");
                    jsonTemplate = ConfigurationService.GetConfigValueByConfigCode(
                        BusinessLogic.Helper.Constants.ConfigType.AzureConstantValues,
                        BusinessLogic.Helper.Constants.ConfigCode.SMBRemoveOldFile, unitOfWork);
                    Console.WriteLine("This is your json: " + jsonTemplate);
                }
            }
            catch (Exception ex)
            {
                Console.WriteLine("After Catch");
                Console.WriteLine("Crash: " + ex.ToString());
                logger.LogError(ex.ToString());
            }
            finally
            {
                Console.WriteLine("After finally");
                telemetryClient.Flush();
            }
        }
    }
}
I have fixed the Azure SQL connection issue, which was caused by a wrong Azure connection string.
I am uploading my WebJob by zipping the Debug folder and uploading it.
To resolve the error "Job failed due to exit code -532462766", please check the following:
This error usually occurs when the config file is broken (see the illustrative config sketch after this list).
After installing Application Insights, try comparing the configuration files before and after the install.
Check whether you have a web.config file. If not, try creating a web.config in the console app, similar to app.config, and republish, as described in the SO thread by Lucas Huet-Hudson.
Try copying the programname.exe.config file along with the exe, as answered by Nasir Razzaq in the above thread.
Try disabling the dashboard logging, as suggested by NicolajHedeager in this GitHub thread (see the sketch after the links below).
If the issue still persists, raise an Azure support ticket to find the root cause of the issue.
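For illustration only (the section and key names below are partly hypothetical, since line 38 of the deployed ArchiveOldSMBFiles.exe.config is not shown), "Unrecognized element" usually means an element sits where the configuration schema does not allow it, for example a section pasted inside <appSettings> instead of directly under <configuration>, or a custom section that was never declared under <configSections>:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <!-- <configSections> must be the first child of <configuration>,
       and every custom section used below must be declared here. -->
  <configSections>
    <section name="myCustomSection" type="MyApp.MyCustomSectionHandler, MyApp" />
  </configSections>
  <appSettings>
    <add key="APPINSIGHTS_INSTRUMENTATIONKEY" value="..." />
    <add key="PaperCaptureStorageConnectionString" value="..." />
    <!-- A stray element that <appSettings> does not expect, nested here
         instead of directly under <configuration>, produces
         "Unrecognized element" at that line. -->
  </appSettings>
  <myCustomSection enabled="true" />
</configuration>

Compare the file that is actually deployed (the one under D:\local\Temp\jobs\triggered\... in the error) with the one produced by a build that still worked, and look specifically at line 38.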
For more details, please refer to the links below:
Turning a console app into a long-running Azure WebJob – Sander van de Velde (wordpress.com) by Sander van de Velde
Azure WebJob fails with exit code 532462766 (microsoft.com) by Anastasia Black
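For the dashboard-logging suggestion above, here is a minimal sketch of what disabling it can look like with the JobHostConfiguration already used in the question (assuming the JobHostConfiguration-based WebJobs SDK shown there):

JobHostConfiguration config = new JobHostConfiguration();
config.StorageConnectionString = _storageConn;

// Setting the dashboard connection string to null turns off dashboard logging,
// so the host no longer writes per-invocation logs to the dashboard storage container.
config.DashboardConnectionString = null;

var host = new JobHost(config);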
Related
[Question posted by a user on YugabyteDB Community Slack]
I am currently using YugabyteDB with the reactive Postgres driver (io.r2dbc:r2dbc-postgresql), and I am facing some intermittent issues like the one in the stack trace below.
I was told that the Postgres driver may not deal correctly with the YugabyteDB load balancing, which may be what is leading to this problem, and that the actual YugabyteDB driver might handle such scenarios properly.
However, I am using reactive code, which means I need an R2DBC driver, and I did not find any official R2DBC YugabyteDB driver.
Do you think a more appropriate driver would really solve such a problem?
If so, is there any other R2DBC driver that would be more suitable for my purpose here?
If not, do you have any suggestions to solve the problem below?
23:37:07.239 [reactor-tcp-epoll-1] WARN i.r.p.client.ReactorNettyClient - Error: SEVERITY_LOCALIZED=ERROR, SEVERITY_NON_LOCALIZED=ERROR, CODE=40001, MESSAGE=Query error: Restart read required at: { read: { physical: 1639179428477618 } local_limit: { physical: 1639179428477618 } global_limit: <min> in_txn_limit: <max> serial_no: 0 }, FILE=pg_yb_utils.c, LINE=333, ROUTINE=HandleYBStatusAtErrorLevel
23:37:07.247 [reactor-kafka-sender-1609501721] ERROR reactor.core.publisher.Operators - Operator called default onErrorDropped
reactor.core.Exceptions$ErrorCallbackNotImplemented: org.jooq.exception.DataAccessException: SQL [update "core"."videos" set "status" = $1 where "core"."videos"."media_key" = $2 returning "core"."videos"."user_id", "core"."videos"."media_key", "core"."videos"."post_id", "core"."videos"."status"]; Query error: Restart read required at: { read: { physical: 1639179428477618 } local_limit: { physical: 1639179428477618 } global_limit: <min> in_txn_limit: <max> serial_no: 0 }
Caused by: org.jooq.exception.DataAccessException: SQL [update "core"."videos" set "status" = $1 where "core"."videos"."media_key" = $2 returning "core"."videos"."user_id", "core"."videos"."media_key", "core"."videos"."post_id", "core"."videos"."status"]; Query error: Restart read required at: { read: { physical: 1639179428477618 } local_limit: { physical: 1639179428477618 } global_limit: <min> in_txn_limit: <max> serial_no: 0 }
at org.jooq.impl.Tools.translate(Tools.java:2978)
at org.jooq.impl.Tools.translate(Tools.java:2962)
at org.jooq.impl.R2DBC$Forwarding.onError(R2DBC.java:236)
at reactor.core.publisher.StrictSubscriber.onError(StrictSubscriber.java:106)
at org.springframework.cloud.sleuth.instrument.reactor.ScopePassingSpanSubscriber.onError(ScopePassingSpanSubscriber.java:96)
at reactor.core.publisher.FluxHandle$HandleSubscriber.onError(FluxHandle.java:203)
at reactor.core.publisher.MonoFlatMapMany$FlatMapManyInner.onError(MonoFlatMapMany.java:255)
at org.springframework.cloud.sleuth.instrument.reactor.ScopePassingSpanSubscriber.onError(ScopePassingSpanSubscriber.java:96)
at reactor.core.publisher.FluxHandleFuseable$HandleFuseableSubscriber.onNext(FluxHandleFuseable.java:191)
at reactor.core.publisher.FluxFilterFuseable$FilterFuseableConditionalSubscriber.onNext(FluxFilterFuseable.java:337)
at reactor.core.publisher.FluxContextWrite$ContextWriteSubscriber.onNext(FluxContextWrite.java:107)
at reactor.core.publisher.FluxPeekFuseable$PeekConditionalSubscriber.onNext(FluxPeekFuseable.java:854)
at org.springframework.cloud.sleuth.instrument.reactor.ScopePassingSpanSubscriber.onNext(ScopePassingSpanSubscriber.java:89)
at reactor.core.publisher.FluxPeek$PeekSubscriber.onNext(FluxPeek.java:200)
at io.r2dbc.postgresql.util.FluxDiscardOnCancel$FluxDiscardOnCancelSubscriber.onNext(FluxDiscardOnCancel.java:86)
at org.springframework.cloud.sleuth.instrument.reactor.ScopePassingSpanSubscriber.onNext(ScopePassingSpanSubscriber.java:89)
at reactor.core.publisher.FluxCreate$BufferAsyncSink.drain(FluxCreate.java:793)
at reactor.core.publisher.FluxCreate$BufferAsyncSink.next(FluxCreate.java:718)
at reactor.core.publisher.FluxCreate$SerializedFluxSink.next(FluxCreate.java:154)
at io.r2dbc.postgresql.client.ReactorNettyClient$Conversation.emit(ReactorNettyClient.java:735)
at io.r2dbc.postgresql.client.ReactorNettyClient$BackendMessageSubscriber.emit(ReactorNettyClient.java:986)
at io.r2dbc.postgresql.client.ReactorNettyClient$BackendMessageSubscriber.onNext(ReactorNettyClient.java:860)
at io.r2dbc.postgresql.client.ReactorNettyClient$BackendMessageSubscriber.onNext(ReactorNettyClient.java:767)
at reactor.core.publisher.FluxHandle$HandleSubscriber.onNext(FluxHandle.java:119)
at reactor.core.publisher.FluxPeekFuseable$PeekConditionalSubscriber.onNext(FluxPeekFuseable.java:854)
at org.springframework.cloud.sleuth.instrument.reactor.ScopePassingSpanSubscriber.onNext(ScopePassingSpanSubscriber.java:89)
at reactor.core.publisher.FluxMap$MapSubscriber.onNext(FluxMap.java:120)
at org.springframework.cloud.sleuth.instrument.reactor.ScopePassingSpanSubscriber.onNext(ScopePassingSpanSubscriber.java:89)
at reactor.core.publisher.FluxMap$MapSubscriber.onNext(FluxMap.java:120)
at reactor.netty.channel.FluxReceive.drainReceiver(FluxReceive.java:279)
at reactor.netty.channel.FluxReceive.onInboundNext(FluxReceive.java:388)
at reactor.netty.channel.ChannelOperations.onInboundNext(ChannelOperations.java:404)
at reactor.netty.channel.ChannelOperationsHandler.channelRead(ChannelOperationsHandler.java:93)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:296)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:795)
at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:480)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:831)
Caused by: io.r2dbc.postgresql.ExceptionFactory$PostgresqlRollbackException: Query error: Restart read required at: { read: { physical: 1639179428477618 } local_limit: { physical: 1639179428477618 } global_limit: <min> in_txn_limit: <max> serial_no: 0 }
at io.r2dbc.postgresql.ExceptionFactory.createException(ExceptionFactory.java:72)
at io.r2dbc.postgresql.ExceptionFactory.handleErrorResponse(ExceptionFactory.java:111)
at reactor.core.publisher.FluxHandleFuseable$HandleFuseableSubscriber.onNext(FluxHandleFuseable.java:169)
... 43 common frames omitted
Thank you very much in advance!
The exception stack trace is related to restart read errors. Currently, YugabyteDB supports only optimistic locking with the SNAPSHOT isolation level, which means that whenever there is a conflict on concurrent access, the driver will throw a restart read error like the one below:
Caused by: io.r2dbc.postgresql.ExceptionFactory$PostgresqlRollbackException: Query error: Restart read required at: { read: { physical: 1639179428477618 } local_limit: { physical: 1639179428477618 } global_limit: <min> in_txn_limit: <max> serial_no: 0 }
You will need to handle the rollback exception and retry the operation. Supporting REPEATABLE READ with pessimistic locking, which would avoid transaction retries, is on the YugabyteDB roadmap; until that feature is available, you'll need to retry the transaction on the client side.
When I try to run my WebJob, I get the following failure:
[07/12/2018 18:09:21 > 7351a7: ERR ] Unhandled Exception: Microsoft.Azure.WebJobs.Host.Indexers.FunctionIndexingException: Error indexing method 'Foo.Bar' ---> System.InvalidOperationException: Invalid storage account 'storage'. Please make sure your credentials are correct. ---> Microsoft.WindowsAzure.Storage.StorageException: The remote server returned an error: (403) Forbidden. ---> System.Net.WebException: The remote server returned an error: (403) Forbidden.
[07/12/2018 18:09:21 > 7351a7: ERR ] at Microsoft.WindowsAzure.Storage.Shared.Protocol.HttpResponseParsers.ProcessExpectedStatusCodeNoException[T](HttpStatusCode expectedStatusCode, HttpStatusCode actualStatusCode, T retVal, StorageCommandBase`1 cmd, Exception ex)
[07/12/2018 18:09:21 > 7351a7: ERR ] at Microsoft.WindowsAzure.Storage.Blob.CloudBlobClient.b__19(RESTCommand`1 cmd, HttpWebResponse resp, Exception ex, OperationContext ctx)
[07/12/2018 18:09:21 > 7351a7: ERR ] at Microsoft.WindowsAzure.Storage.Core.Executor.Executor.EndGetResponse[T](IAsyncResult getResponseResult)
[07/12/2018 18:09:21 > 7351a7: ERR ] --- End of inner exception stack trace ---
[07/12/2018 18:09:21 > 7351a7: ERR ] at Microsoft.WindowsAzure.Storage.Core.Executor.Executor.EndExecuteAsync[T](IAsyncResult result)
[07/12/2018 18:09:21 > 7351a7: ERR ] at Microsoft.WindowsAzure.Storage.Core.Util.AsyncExtensions.c__DisplayClass2`1.b__0(IAsyncResult ar)
[07/12/2018 18:09:21 > 7351a7: ERR ] --- End of stack trace from previous location where exception was thrown ---
[07/12/2018 18:09:21 > 7351a7: ERR ] at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
[07/12/2018 18:09:21 > 7351a7: ERR ] at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
[07/12/2018 18:09:21 > 7351a7: ERR ] at Microsoft.Azure.WebJobs.Host.Executors.DefaultStorageCredentialsValidator.d__1.MoveNext()
[07/12/2018 18:09:21 > 7351a7: ERR ] --- End of inner exception stack trace ---
[07/12/2018 18:09:21 > 7351a7: ERR ] at Microsoft.Azure.WebJobs.Host.Executors.DefaultStorageCredentialsValidator.d__1.MoveNext()
[07/12/2018 18:09:21 > 7351a7: ERR ] --- End of stack trace from previous location where exception was thrown ---
[07/12/2018 18:09:21 > 7351a7: ERR ] at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
[07/12/2018 18:09:21 > 7351a7: ERR ] at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
[07/12/2018 18:09:21 > 7351a7: ERR ] at Microsoft.Azure.WebJobs.Host.Executors.DefaultStorageCredentialsValidator.d__0.MoveNext()
[07/12/2018 18:09:21 > 7351a7: ERR ] --- End of stack trace from previous location where exception was thrown ---
[07/12/2018 18:09:21 > 7351a7: ERR ] at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
[07/12/2018 18:09:21 > 7351a7: ERR ] at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
[07/12/2018 18:09:21 > 7351a7: ERR ] at Microsoft.Azure.WebJobs.Host.Executors.DefaultStorageAccountProvider.d__24.MoveNext()
[07/12/2018 18:09:21 > 7351a7: ERR ] --- End of stack trace from previous location where exception was thrown ---
[07/12/2018 18:09:21 > 7351a7: ERR ] at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
[07/12/2018 18:09:21 > 7351a7: ERR ] at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
[07/12/2018 18:09:21 > 7351a7: WARN] Reached maximum allowed output lines for this run, to see all of the job's logs you can enable website application diagnostics
[07/12/2018 18:09:21 > 7351a7: SYS ERR ] Job failed due to exit code -532462766
[07/12/2018 18:09:21 > 7351a7: SYS INFO] Process went down, waiting for 60 seconds
I have 100% validated that my storage account credentials are correct.
When I attempt to manually connect to the storage account and invoke CreateIfNotExists, I get:
[07/12/2018 18:11:46 > 7351a7: INFO] Microsoft.WindowsAzure.Storage.StorageException: The remote server returned an error: (403) Forbidden. ---> System.Net.WebException: The remote server returned an error: (403) Forbidden.
[07/12/2018 18:11:46 > 7351a7: INFO] at System.Net.HttpWebRequest.GetResponse()
[07/12/2018 18:11:46 > 7351a7: INFO] at Microsoft.WindowsAzure.Storage.Core.Executor.Executor.ExecuteSync[T](RESTCommand`1 cmd, IRetryPolicy policy, OperationContext operationContext)
[07/12/2018 18:11:46 > 7351a7: INFO] --- End of inner exception stack trace ---
[07/12/2018 18:11:46 > 7351a7: INFO] at Microsoft.WindowsAzure.Storage.Core.Executor.Executor.ExecuteSync[T](RESTCommand`1 cmd, IRetryPolicy policy, OperationContext operationContext)
[07/12/2018 18:11:46 > 7351a7: INFO] at Microsoft.WindowsAzure.Storage.Queue.CloudQueue.CreateIfNotExists(QueueRequestOptions options, OperationContext operationContext)
Upon dumping additional information about the exception from Microsoft.WindowsAzure.Storage.StorageExtendedErrorInformation, I see:
AuthenticationErrorDetail:The MAC signature found in the HTTP request 'CUVqVMvvWecPsnTEynjocGpq6TkBmkpJsVL6hr2jkKQ=' is not the same as any computed signature. Server used following string to sign: 'PUT
See the following GitHub issue, where Application Insights may be interfering with your HTTP requests to the storage REST APIs (the dependency-tracking module adds correlation headers to outgoing requests, which can invalidate the computed signature):
https://github.com/Azure/azure-storage-net/issues/490
Make sure the following is in your applicationinsights.config file:
<Add Type="Microsoft.ApplicationInsights.DependencyCollector.DependencyTrackingTelemetryModule, Microsoft.AI.DependencyCollector">
<ExcludeComponentCorrelationHttpHeadersOnDomains>
<!--
Requests to the following hostnames will not be modified by adding correlation headers.
Add entries here to exclude additional hostnames.
NOTE: this configuration will be lost upon NuGet upgrade.
-->
<Add>core.windows.net</Add>
<Add>core.chinacloudapi.cn</Add>
<Add>core.cloudapi.de</Add>
<Add>core.usgovcloudapi.net</Add>
<Add>localhost</Add>
<Add>127.0.0.1</Add>
</ExcludeComponentCorrelationHttpHeadersOnDomains>
<IncludeDiagnosticSourceActivities>
<Add>Microsoft.Azure.EventHubs</Add>
<Add>Microsoft.Azure.ServiceBus</Add>
</IncludeDiagnosticSourceActivities>
</Add>
My WebJob deletes documents from my blob storage. If I copy the code to a console app and run it, it runs to completion without error. If I create a new WebJob in Visual Studio (right-click my website project and select Add -> New Azure WebJob Project), copy the code into Functions.cs, and deploy, Azure throws an exception when running the WebJob:
===============================================
[09/09/2016 01:51:44 > fc93a9: ERR ] Unhandled Exception: Microsoft.WindowsAzure.Storage.StorageException: The remote name could not be resolved: 'xxx.queue.core.windows.net' ---> System.Net.WebException: The remote name could not be resolved: 'xxx.queue.core.windows.net'
[09/09/2016 01:51:44 > fc93a9: ERR ] at System.Net.HttpWebRequest.EndGetResponse(IAsyncResult asyncResult)
[09/09/2016 01:51:44 > fc93a9: ERR ] at Microsoft.WindowsAzure.Storage.Core.Executor.Executor.EndGetResponse[T](IAsyncResult getResponseResult)
[09/09/2016 01:51:44 > fc93a9: ERR ] --- End of inner exception stack trace ---
[09/09/2016 01:51:44 > fc93a9: ERR ] at Microsoft.WindowsAzure.Storage.Core.Executor.Executor.EndExecuteAsync[T](IAsyncResult result)
[09/09/2016 01:51:44 > fc93a9: ERR ] at Microsoft.WindowsAzure.Storage.Queue.CloudQueue.EndExists(IAsyncResult asyncResult)
[09/09/2016 01:51:44 > fc93a9: ERR ] at Microsoft.WindowsAzure.Storage.Core.Util.AsyncExtensions.<>c__DisplayClass1`1.<CreateCallback>b__0(IAsyncResult ar)
[09/09/2016 01:51:44 > fc93a9: ERR ] --- End of stack trace from previous location where exception was thrown ---
[09/09/2016 01:51:44 > fc93a9: ERR ] at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
[09/09/2016 01:51:44 > fc93a9: ERR ] at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
[09/09/2016 01:51:44 > fc93a9: ERR ] at Microsoft.Azure.WebJobs.Host.Queues.Listeners.QueueListener.<ExecuteAsync>d__4.MoveNext()
[09/09/2016 01:51:44 > fc93a9: ERR ] --- End of stack trace from previous location where exception was thrown ---
[09/09/2016 01:51:44 > fc93a9: ERR ] at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
[09/09/2016 01:51:44 > fc93a9: ERR ] at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
[09/09/2016 01:51:44 > fc93a9: ERR ] at Microsoft.Azure.WebJobs.Host.Timers.TaskSeriesTimer.<RunAsync>d__d.MoveNext()
[09/09/2016 01:51:44 > fc93a9: ERR ] --- End of stack trace from previous location where exception was thrown ---
[09/09/2016 01:51:44 > fc93a9: ERR ] at Microsoft.Azure.WebJobs.Host.Timers.BackgroundExceptionDispatcher.<>c__DisplayClass1.<Throw>b__0()
[09/09/2016 01:51:44 > fc93a9: ERR ] at System.Threading.ThreadHelper.ThreadStart_Context(Object state)
[09/09/2016 01:51:44 > fc93a9: ERR ] at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
[09/09/2016 01:51:44 > fc93a9: ERR ] at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
[09/09/2016 01:51:44 > fc93a9: ERR ] at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
[09/09/2016 01:51:44 > fc93a9: ERR ] at System.Threading.ThreadHelper.ThreadStart()
[09/09/2016 01:51:44 > fc93a9: SYS ERR ] Job failed due to exit code -532462766
=======================================================
I don't have a queue, so that's probably where the error is coming from, but the weird thing is that I do not try to access a queue; I am only accessing my blob storage.
Not sure if this helps, but my WebJob's Program.cs looks like this:
static void Main()
{
    JobHostConfiguration config = new JobHostConfiguration();
    config.Tracing.ConsoleLevel = TraceLevel.Verbose;
    config.UseTimers();
    var host = new JobHost(config);

    // The following code ensures that the WebJob will be running continuously
    host.RunAndBlock();
}
And this is the only code in the job that uses storage, the rest is database stuff to get lists, etc.
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("StorageConnectionString"));

// Create the blob client.
CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();

// Retrieve reference to a previously created container.
CloudBlobContainer container = blobClient.GetContainerReference("agentdocuments");

foreach (var doc in expiredDocs)
{
    if (doc.ContentLength == 1)
    {
        // Retrieve reference to blob
        CloudBlockBlob blockBlob = container.GetBlockBlobReference($"{doc.SubDomain.ToLower()}/{doc.ClientId}/{doc.FileId}");
        await blockBlob.DeleteIfExistsAsync();
    }
}
EDIT: I should also add that this only started occurring after updating:
Microsoft.Azure.WebJobs from 1.1.1 to 1.1.2
Microsoft.Azure.WebJobs.Core from 1.1.1 to 1.1.2
WindowsAzure.Storage from 5.0.2 to 7.2.0
There were also a few code changes and other package updates (like Newtonsoft.Json, Microsoft.WindowsAzure.ConfigurationManager, Microsoft.Web.WebJobs.Publish), but no code changes to how I accessed the blob.
UPDATE - My WebJob ran to completion over the weekend. It failed at 22:23 with the same error as above, but then ran successfully on the retry at 22:24. It failed again today with the same error. I'm going to try removing the logging as stated by #FleminAdambukulam, but it's strange how it ran without any code changes and then fails again...
The problem is that the storage account is of the special type that only supports Blobs. Even though you are not using queues, the WebJobs SDK relies on them internally. So in order to use the WebJobs SDK, you need to use a regular Storage account.
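If you are unsure which kind of storage account you have, one way to check (a sketch assuming the Azure CLI is installed; mystorageaccount and my-rg are placeholders for your own names) is:

az storage account show --name mystorageaccount --resource-group my-rg --query kind --output tsv

A result of BlobStorage indicates a blob-only account; the WebJobs SDK needs a general-purpose account (kind Storage, or StorageV2 on newer subscriptions).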
We are using Azure to run several WebJobs. One of our WebJob functions has the following signature:
public static Task ProcessFileUploadedMessageAsync(
[QueueTrigger("uploads")] FileUploadedMessage message,
TextWriter logger,
CancellationToken cancellationToken)
This function is monitoring a queue for a message that indicates a file has been uploaded, which then triggers an import of the file's data. Note the use of a TextWriter as the second argument: this is supplied by the WebJobs API infrastructure.
Our import process is kind of slow (can be several hours for a single file import in some cases), so we periodically write messages to the log (via the TextWriter) to track our progress. Unfortunately, our larger files are causing the WebJob hosting process to be terminated due to a logging exception. Here is a sample stack trace from the hosting process log:
[04/02/2016 03:44:59 > 660083: ERR ]
[04/02/2016 03:45:00 > 660083: ERR ] Unhandled Exception: System.ArgumentOutOfRangeException: Index was out of range. Must be non-negative and less than the size of the collection.
[04/02/2016 03:45:00 > 660083: ERR ] Parameter name: chunkLength
[04/02/2016 03:45:00 > 660083: ERR ] at System.Text.StringBuilder.ToString()
[04/02/2016 03:45:00 > 660083: ERR ] at System.IO.StringWriter.ToString()
[04/02/2016 03:45:00 > 660083: ERR ] at Microsoft.Azure.WebJobs.Host.Loggers.UpdateOutputLogCommand.<UpdateOutputBlob>d__10.MoveNext()
[04/02/2016 03:45:00 > 660083: ERR ] --- End of stack trace from previous location where exception was thrown ---
[04/02/2016 03:45:00 > 660083: ERR ] at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
[04/02/2016 03:45:00 > 660083: ERR ] at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
[04/02/2016 03:45:00 > 660083: ERR ] at Microsoft.Azure.WebJobs.Host.Loggers.UpdateOutputLogCommand.<TryExecuteAsync>d__3.MoveNext()
[04/02/2016 03:45:00 > 660083: ERR ] --- End of stack trace from previous location where exception was thrown ---
[04/02/2016 03:45:00 > 660083: ERR ] at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
[04/02/2016 03:45:00 > 660083: ERR ] at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
[04/02/2016 03:45:00 > 660083: ERR ] at Microsoft.Azure.WebJobs.Host.Timers.RecurrentTaskSeriesCommand.<ExecuteAsync>d__0.MoveNext()
[04/02/2016 03:45:00 > 660083: ERR ] --- End of stack trace from previous location where exception was thrown ---
[04/02/2016 03:45:00 > 660083: ERR ] at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
[04/02/2016 03:45:00 > 660083: ERR ] at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
[04/02/2016 03:45:00 > 660083: ERR ] at Microsoft.Azure.WebJobs.Host.Timers.TaskSeriesTimer.<RunAsync>d__d.MoveNext()
[04/02/2016 03:45:00 > 660083: ERR ] --- End of stack trace from previous location where exception was thrown ---
[04/02/2016 03:45:00 > 660083: ERR ] at Microsoft.Azure.WebJobs.Host.Timers.BackgroundExceptionDispatcher.<>c__DisplayClass1.<Throw>b__0()
[04/02/2016 03:45:00 > 660083: ERR ] at System.Threading.ThreadHelper.ThreadStart_Context(Object state)
[04/02/2016 03:45:00 > 660083: ERR ] at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
[04/02/2016 03:45:00 > 660083: ERR ] at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
[04/02/2016 03:45:00 > 660083: ERR ] at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
[04/02/2016 03:45:00 > 660083: ERR ] at System.Threading.ThreadHelper.ThreadStart()
[04/02/2016 03:45:00 > 660083: SYS ERR ] Job failed due to exit code -532462766
[04/02/2016 03:45:01 > 660083: SYS INFO] Process went down, waiting for 0 seconds
[04/02/2016 03:45:01 > 660083: SYS INFO] Status changed to PendingRestart
The main problem is that this exception is being thrown not by our code but by something in the WebJobs API:
at Microsoft.Azure.WebJobs.Host.Loggers.UpdateOutputLogCommand.<UpdateOutputBlob>d__10.MoveNext()
We tried putting try...catch blocks around our calls to the TextWriter but these had no effect. It would appear that log messages are buffered somewhere and periodically flushed to Azure blob storage by a separate thread. If this is the case, then it would follow that we have no way of trapping the exception.
Has anyone else come across the same problem, or can anyone think of a possible solution or workaround?
For completeness, here is how we are using the TextWriter for logging:
await logger.WriteLineAsync("some text to log");
Nothing more complicated than that.
UPDATE: Seems as though this has been reported as issue #675 on the WebJobs SDK GitHub.
You should add async to the method, just like this:
public async static Task ProcessFileUploadedMessageAsync(...)
For details, you can see this article.
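A minimal sketch of that change, reusing the signature and the logging call from the question (FileUploadedMessage is the question's own type):

public async static Task ProcessFileUploadedMessageAsync(
    [QueueTrigger("uploads")] FileUploadedMessage message,
    TextWriter logger,
    CancellationToken cancellationToken)
{
    // Marking the method async lets the log write be awaited directly,
    // so the returned Task does not complete while the write is still pending.
    await logger.WriteLineAsync("some text to log");

    // ... the long-running import work goes here ...
}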
I'm trying to deploy a custom version of VirtoCommerce to the Azure cloud and I'm having some trouble with it.
When I try to load the website, I run into:
Server Error in '/' Application.
A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond <ip>:9200
These are the last few lines of the SchedulerConsole log:
[03/16/2015 17:45:23 > 62130d: INFO] VirtoCommerce.ScheduleService.Trace Error: 0 : VirtoCommerce.Scheduling.Jobs.GenerateSearchIndexWork#3e32cbe9-b912-4901-bedb-7604a4ab0616 Trace VirtoCommerce.Search.Providers.Elastic.ElasticSearchException: Failed to remove indexes. URL:xpmarket-search.cloudapp.net:9200 ---> PlainElastic.Net.OperationException: Unable to connect to the remote server ---> System.Net.WebException: Unable to connect to the remote server ---> System.Net.Sockets.SocketException: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond 137.117.213.124:9200
[03/16/2015 17:45:23 > 62130d: INFO] at System.Net.Sockets.Socket.DoConnect(EndPoint endPointSnapshot, SocketAddress socketAddress)
[03/16/2015 17:45:23 > 62130d: INFO] at System.Net.ServicePoint.ConnectSocketInternal(Boolean connectFailure, Socket s4, Socket s6, Socket& socket, IPAddress& address, ConnectSocketState state, IAsyncResult asyncResult, Exception& exception)
[03/16/2015 17:45:23 > 62130d: INFO] --- End of inner exception stack trace ---
[03/16/2015 17:45:23 > 62130d: INFO] at System.Net.HttpWebRequest.GetResponse()
[03/16/2015 17:45:23 > 62130d: INFO] at PlainElastic.Net.ElasticConnection.ExecuteRequest(String method, String command, String jsonData)
[03/16/2015 17:45:23 > 62130d: INFO] --- End of inner exception stack trace ---
[03/16/2015 17:45:23 > 62130d: INFO] at PlainElastic.Net.ElasticConnection.ExecuteRequest(String method, String command, String jsonData)
[03/16/2015 17:45:23 > 62130d: INFO] at PlainElastic.Net.ElasticConnection.Delete(String command, String jsonData)
[03/16/2015 17:45:23 > 62130d: INFO] at VirtoCommerce.Search.Providers.Elastic.ElasticClient`1.Delete(DeleteCommand deleteCommand) in c:\Users\Tiago\Documents\xpmarketplace\src\Extensions\Search\ElasticSearchProvider\ElasticClient.cs:line 129
[03/16/2015 17:45:23 > 62130d: INFO] at VirtoCommerce.Search.Providers.Elastic.ElasticSearchProvider.RemoveAll(String scope, String documentType) in c:\Users\Tiago\Documents\xpmarketplace\src\Extensions\Search\ElasticSearchProvider\ElasticSearchProvider.cs:line 477
[03/16/2015 17:45:23 > 62130d: INFO] --- End of inner exception stack trace ---
[03/16/2015 17:45:23 > 62130d: INFO] at VirtoCommerce.Search.Providers.Elastic.ElasticSearchProvider.ThrowException(String message, Exception innerException) in c:\Users\Tiago\Documents\xpmarketplace\src\Extensions\Search\ElasticSearchProvider\ElasticSearchProvider.cs:line 562
[03/16/2015 17:45:23 > 62130d: INFO] at VirtoCommerce.Search.Providers.Elastic.ElasticSearchProvider.RemoveAll(String scope, String documentType) in c:\Users\Tiago\Documents\xpmarketplace\src\Extensions\Search\ElasticSearchProvider\ElasticSearchProvider.cs:line 490
[03/16/2015 17:45:23 > 62130d: INFO] at VirtoCommerce.Foundation.Search.SearchIndexController.Prepare(String scope, String documentType, Boolean rebuild) in c:\Users\Tiago\Documents\xpmarketplace\src\Core\CommerceFoundation\Search\SearchIndexController.cs:line 91
[03/16/2015 17:45:23 > 62130d: INFO] at VirtoCommerce.Scheduling.Jobs.GenerateSearchIndexWork.Execute(IJobContext context) in c:\Users\Tiago\Documents\xpmarketplace\src\Extensions\Jobs\VirtoCommerceJobs\GenerateSearchIndexWork.cs:line 17
[03/16/2015 17:45:23 > 62130d: INFO] at VirtoCommerce.Scheduling.JobActivityTool.ControlledExecution(IJobActivity activity, TraceContext traceContext, Action`1 audit, IDictionary`2 parameters) in c:\Users\Tiago\Documents\xpmarketplace\src\Extensions\Jobs\SchedulingLib\JobActivityTool.cs:line 32
[03/16/2015 17:45:23 > 62130d: INFO] VirtoCommerce.ScheduleService.Trace Stop: 0 : VirtoCommerce.Scheduling.Jobs.GenerateSearchIndexWork#3e32cbe9-b912-4901-bedb-7604a4ab0616 Finished with Error! Duration=0m. 21s. 66
[03/16/2015 17:45:27 > 62130d: INFO] VirtoCommerce.ScheduleService.Trace Verbose: 0 : TRACE|3/16/2015 5:45:27 PM|JobScheduler|SchedulerProcess-ApartmentIteration|||Iterating 5 Jobs
It looks like your Elasticsearch server is not running; it can't open "137.117.213.124:9200". Which version of VC are you trying to deploy?
We had a small issue with not enough space allocated by default when Elasticsearch was created. Here is the fix: https://github.com/VirtoCommerce/vc-community/commit/213de23cc023cc8da8983daba188d08c2de3c2a6
Basically, make sure that the size is set to the new value and redeploy Elasticsearch.