Async/await in azure worker role causing the role to recycle - azure

I am playing around with Tasks, async and await in my WorkerRole (RoleEntryPoint).
I had some unexplained recycles and have now found that if something runs too long inside an await call, the role recycles. To reproduce it, just do an await Task.Delay(60000) in the Run method.
Can anyone explain to me why?

The Run method must block. From the docs:
If you do override the Run method, your code should block indefinitely. If the Run method returns, the role is automatically recycled by raising the Stopping event and calling the OnStop method so that your shutdown sequences may be executed before the role is taken offline.
A simple solution is to just do this:
public override void Run()
{
    RunAsync().Wait();
}

public async Task RunAsync()
{
    while (true)
    {
        await Task.Delay(60000);
    }
}
Alternatively, you can use AsyncContext from my AsyncEx library:
public override void Run()
{
    AsyncContext.Run(async () =>
    {
        while (true)
        {
            await Task.Delay(60000);
        }
    });
}
Whichever option you choose, Run should not be async. It's kind of like Main for a Console app (see my blog for why async Main is not allowed).
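A common variation (a sketch of my own, building on the docs quoted above rather than part of the original answer) is to tie the loop to a CancellationToken that OnStop cancels, so the blocking Run still exits cleanly when the role is stopping:

private readonly CancellationTokenSource cts = new CancellationTokenSource();

public override void Run()
{
    RunAsync(cts.Token).Wait();
}

private async Task RunAsync(CancellationToken token)
{
    while (!token.IsCancellationRequested)
    {
        try
        {
            // Do the periodic work here, then wait. A cancelled delay throws
            // OperationCanceledException, which we treat as a normal shutdown signal.
            await Task.Delay(60000, token);
        }
        catch (OperationCanceledException) { }
    }
}

public override void OnStop()
{
    cts.Cancel();
    base.OnStop();
}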

I would recommend a lower value for Task.Delay like 1000 (ms). I suspect that the worker role cannot respond quickly enough to the health check. The role is then considered unresponsive and restarted.
Make sure the Run method never returns with something like this
while (true)
{
    Thread.Sleep(1000);
}
Or with Task.Delay in your case.

Related

ServiceStack with MiniProfiler for .Net 6

I was attempting to add Profiling into ServiceStack 6 with .Net 6 and using the .Net Framework MiniProfiler Plugin code as a starting point.
I noticed that ServiceStack still has Profiler.Current.Step("Step Name") in the Handlers, AutoQueryFeature and others.
What is currently causing me some stress is the following:
In ServiceStackHandlerBase.GetResponseAsync(IRequest httpReq, object request) the async Task is not awaited. This causes the step to be disposed as soon as execution reaches the first async method it must await, so all the subsequent nested steps are not recorded as children. Is there something simple I'm missing here, or is this just a bug in a seldom-used feature?
In SqlServerOrmLiteDialectProvider, most of the async methods make use of an Unwrap function that drills down to the underlying SqlConnection or SqlCommand. This causes an issue when attempting to wrap a command to enable profiling, as it ignores the override methods in the wrapper in favour of the IHasDbCommand.DbCommand nested within. Not implementing IHasDbCommand on the wrapping command makes it attempt to use the wrapping command instead, but that hits a snag because of the forced cast to SqlCommand. Is there an easy way to combat this issue, or do I have to extend each OrmLiteDialectProvider I wish to use that has this issue so it takes the wrapping command into account when it is present?
Any input would be appreciated.
Thanks.
Extra Information Point 1
Below is the code from ServiceStackHandlerBase that appears (to me) to be a bug?
public virtual Task<object> GetResponseAsync(IRequest httpReq, object request)
{
    using (Profiler.Current.Step("Execute " + GetType().Name + " Service"))
    {
        return appHost.ServiceController.ExecuteAsync(request, httpReq);
    }
}
I made a small example that shows what I am looking at:
using System;
using System.Threading.Tasks;

public class Program
{
    public static async Task<int> Main(string[] args)
    {
        Console.WriteLine("App Start.");
        await GetResponseAsync();
        Console.WriteLine("App End.");
        return 0;
    }

    // Task-returning (non-async) method with a using and a non-awaited task.
    private static Task GetResponseAsync()
    {
        using (new Test())
        {
            return AdditionAsync();
        }
    }

    // Placeholder async method.
    private static async Task AdditionAsync()
    {
        Console.WriteLine("Async Task Started.");
        await Task.Delay(2000);
        Console.WriteLine("Async Task Complete.");
    }
}

public class Test : IDisposable
{
    public Test()
    {
        Console.WriteLine("Disposable instance created.");
    }

    public void Dispose()
    {
        Console.WriteLine("Disposable instance disposed.");
    }
}
My Desired Result:
App Start.
Disposable instance created.
Async Task Started.
Async Task Complete.
Disposable instance disposed.
App End.
My Actual Result:
App Start.
Disposable instance created.
Async Task Started.
Disposable instance disposed.
Async Task Complete.
App End.
This to me shows that even though the task is awaited at a later point in the code, the using has already disposed of the contained object.
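For reference, a minimal sketch of my own (a variation on the example above, not part of the original post) showing how the behaviour changes once the method is made async and the task is awaited inside the using block:

// Awaiting inside the using keeps the disposable alive until the inner task completes.
private static async Task GetResponseAsync()
{
    using (new Test())
    {
        await AdditionAsync();
    }
}

With this change, "Disposable instance disposed." is printed after "Async Task Complete.", matching the desired result.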
MiniProfiler was coupled to System.Web, so it isn't supported in ServiceStack .NET 6.
To view the generated SQL you can use a BeforeExecFilter to inspect the IDbCommand before it's executed.
This is what PrintSql() uses to write all generated SQL to the console:
OrmLiteUtils.PrintSql();
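As a rough illustration (a sketch of mine, assuming the OrmLiteConfig.BeforeExecFilter hook), you could dump each command's SQL to the console like this:

// Inspect (or log) every IDbCommand just before it is executed.
OrmLiteConfig.BeforeExecFilter = dbCmd => Console.WriteLine(dbCmd.CommandText);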
Note: when you return a non-awaited task it just means it doesn't get awaited at that point; it still gets executed when the returned task is eventually awaited.
To avoid the explicit casting you should be able to override a SQL Server Dialect Provider where you'll be able to replace the existing implementation with your own.

How to use await keyword inside a method without changing the method async

I am developing a scheduled job to send messages to a message queue using Quartz.NET. The Execute method of IJob is not async, so I can't make it async Task. But I want to call a method with the await keyword.
Please find my code below. I'm not sure whether I am doing this correctly. Can anyone please help me with this?
private async Task PublishToQueue(ChangeDetected changeDetected)
{
    _logProvider.Info("Publish to Queue started");
    try
    {
        await _busControl.Publish(changeDetected);
        _logProvider.Info($"ChangeDetected message published to RabbitMq. Message");
    }
    catch (Exception ex)
    {
        _logProvider.Error("Error publishing message to queue: ", ex);
        throw;
    }
}

public class ChangedNotificatonJob : IJob
{
    public void Execute(IJobExecutionContext context)
    {
        //Publish message to queue
        Policy
            .Handle<Exception>()
            .RetryAsync(3, (exception, count) =>
            {
                //Do something for each retry
            })
            .ExecuteAsync(async () =>
            {
                await PublishToQueue(message);
            });
    }
}
Is this the correct way? I have used .GetAwaiter():
Policy
    .Handle<Exception>()
    .RetryAsync(_configReader.RetryLimit, (exception, count) =>
    {
        //Do something for each retry
    })
    .ExecuteAsync(async () =>
    {
        await PublishToQueue(message);
    }).GetAwaiter()
Polly's .ExecuteAsync() returns a Task. With any Task, you can just call .Wait() on it (or other blocking methods) to block synchronously until it completes, or throws an exception.
As you have observed, since IJob.Execute(...) isn't async, you can't use await, so you have no choice but to block synchronously on the task, if you want to discover the success-or-otherwise of publishing before IJob.Execute(...) returns.
.Wait() will cause any exception from the task to be rethrown, wrapped in an AggregateException. This will occur if all Polly-orchestrated retries fail.
You'll need to decide what to do with that exception:
If you want the caller to handle it, rethrow it or don't catch it and let it cascade outside the Quartz job.
If you want to handle it before returning from IJob.Execute(...), you'll need a try {} catch {} around the whole .ExecuteAsync(...).Wait(). Or consider Polly's .ExecuteAndCaptureAsync(...) syntax: it avoids you having to provide that outer try-catch, by instead placing the final outcome of the execution into a PolicyResult instance. See the Polly doco.
There is a further alternative if your only intention is to log somewhere that message publishing failed, and you don't care whether that logging happens before IJob.Execute(...) returns or not. In that case, instead of using .Wait(), you could chain a continuation task on to ExecuteAsync() using .ContinueWith(...), and handle any logging in there. We adopt this approach, and capture failed message publishing to a special 'message hospital' - capturing enough information so that we can choose whether to republish that message again later, if appropriate. Whether this approach is valuable depends on how important it is to you never to lose a message.
EDIT: GetAwaiter() is irrelevant. It won't magically let you start using await inside a non-async method.
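Putting that advice together, here is a minimal sketch of blocking synchronously inside Execute (the message variable, retry count and _logProvider are assumptions carried over from the question, not a definitive implementation):

public class ChangedNotificatonJob : IJob
{
    public void Execute(IJobExecutionContext context)
    {
        try
        {
            Policy
                .Handle<Exception>()
                .RetryAsync(3, (exception, count) =>
                {
                    //Do something for each retry
                })
                .ExecuteAsync(async () =>
                {
                    await PublishToQueue(message);
                })
                .Wait(); // Block until the retries complete, or an AggregateException is thrown.
        }
        catch (AggregateException ex)
        {
            // All Polly-orchestrated retries failed; decide here whether to log,
            // rethrow, or swallow the inner exception(s).
            _logProvider.Error("Publishing failed after retries: ", ex.InnerException);
        }
    }
}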

Message being retried when operation takes time

I have a messaging system using Azure ServiceBus but I'm using Nimbus on top of that. I have an endpoint that sends a command to another endpoint and at one point the handler class on the other side picks it up, so it is all working fine.
When the operation takes time, roughly more than 20 seconds or so, the handler gets 'another' call with the same message. It looks like Nimbus is retrying the message that is already being handled by another (even the same) instance of the handler. I don't see any exceptions being thrown, and I could easily repro this with the following handler:
public class Synchronizer : IHandleCommand<RequestSynchronization>
{
    public async Task Handle(RequestSynchronization synchronizeInfo)
    {
        Console.WriteLine("Received Synchronization");
        await Task.Delay(TimeSpan.FromSeconds(30)); //Simulate long running process
        Console.WriteLine("Got through first timeout");
        await Task.Delay(TimeSpan.FromSeconds(30)); //Simulate another long running process
        Console.WriteLine("Got through second timeout");
    }
}
My question is: How do I disable this behavior? I am happy for the transaction to take time, as it is a heavy process that I have off-loaded from my website, which was the whole point of going with this architecture in the first place.
In other words, I was expecting the message not to be picked up by another handler while one has already picked it up and is processing it, unless there's an exception and the message goes back to the queue and eventually gets picked up for a retry.
Any ideas how to do this? Anything I'm missing?
By default, ASB/WSB will give you a message lock of 30 seconds. The idea is that you pop a BrokeredMessage off the head of the queue but have to either .Complete() or .Abandon() that message within the lock timeout.
If you don't do that, the service bus assumes that you've crashed or otherwise failed and it will return that message to the queue to be re-processed.
You have a couple of options:
1) Implement ILongRunningTask on your handler, as in the code below. Nimbus will pay attention to the remaining lock time and automatically renew your message lock. Caution: the maximum message lock time supported by ASB/WSB is five minutes no matter how many times you renew, so if your handler takes longer than that then you might want option #2.
public class Synchronizer : IHandleCommand<RequestSynchronization>, ILongRunningTask
{
    public async Task Handle(RequestSynchronization synchronizeInfo)
    {
        Console.WriteLine("Received Synchronization");
        await Task.Delay(TimeSpan.FromSeconds(30)); //Simulate long running process
        Console.WriteLine("Got through first timeout");
        await Task.Delay(TimeSpan.FromSeconds(30)); //Simulate another long running process
        Console.WriteLine("Got through second timeout");
    }
}
2) In your handler, call Task.Run(() => SomeService(yourMessage)) and return. If you do this, be careful about lifetime scoping of dependencies if your handler takes any. If you need an IFoo, take a dependency on a Func<Owned<IFoo>> (or equivalent, depending on your container) and resolve that within your handling task.
public class Synchronizer : IHandleCommand<RequestSynchronization>
{
    private readonly Func<Owned<IFoo>> _fooFunc;

    public Synchronizer(Func<Owned<IFoo>> fooFunc)
    {
        _fooFunc = fooFunc;
    }

    public async Task Handle(RequestSynchronization synchronizeInfo)
    {
        // Deliberately not awaited: return immediately so the message completes quickly.
        Task.Run(async () =>
        {
            using (var foo = _fooFunc())
            {
                Console.WriteLine("Received Synchronization");
                await Task.Delay(TimeSpan.FromSeconds(30)); //Simulate long running process
                Console.WriteLine("Got through first timeout");
                await Task.Delay(TimeSpan.FromSeconds(30)); //Simulate another long running process
                Console.WriteLine("Got through second timeout");
            }
        });
    }
}
I think you are looking for the code here: http://www.uglybugger.org/software/post/support_for_long_running_handlers_in_nimbus

Async Logger. Can I lose/delay log entries?

I'm implementing my own logging framework. Below is my BaseLogger, which receives the log entries and pushes them to the actual logger that implements the abstract Log method.
I originally used the C# TPL for logging in an async manner, but now use Threads instead of TPL. (A TPL task doesn't hold a dedicated foreground thread, so if all of the application's threads end, pending tasks will stop as well, which would cause any 'waiting' log entries to be lost.)
public abstract class BaseLogger
{
    // ... Omitted properties, constructor, etc. ... //

    public virtual void AddLogEntry(LogEntry entry)
    {
        if (!AsyncSupported)
        {
            // The underlying logger doesn't support async.
            // Simply call the log method and return.
            Log(entry);
            return;
        }

        // Logger supports async.
        LogAsync(entry);
    }

    private void LogAsync(LogEntry entry)
    {
        lock (LogQueueSyncRoot) // Make sure we have the lock before accessing the queue.
        {
            LogQueue.Enqueue(entry);
        }

        if (LogThread == null || LogThread.ThreadState == ThreadState.Stopped)
        {
            // Either the thread has completed, or this is the first time we're logging to this logger.
            LogThread = new Thread(new ThreadStart(() =>
            {
                while (true)
                {
                    LogEntry logEntry;
                    lock (LogQueueSyncRoot)
                    {
                        if (LogQueue.Count > 0)
                        {
                            logEntry = LogQueue.Dequeue();
                        }
                        else
                        {
                            break;
                            // Is it possible for a message to be added
                            // right after the break, once I leave the lock {} but
                            // before I exit the loop and the thread gets 'completed'?
                        }
                    }
                    Log(logEntry);
                }
            }));
            LogThread.Start();
        }
    }

    // Actual logger implementations will implement this method.
    protected abstract void Log(LogEntry entry);
}
Note that AddLogEntry can be called from multiple threads at the same time.
My question is: is it possible for this implementation to lose log entries?
What I'm worried about is: can a log entry be added to the queue right after my thread hits the break statement in the else clause and exits the lock block, while the thread is still in the 'Running' state?
I do realize that, because I'm using a queue, even if I miss an entry, the next request to log will push the missed entry as well. But this is not acceptable, especially if it happens for the last log entry of the application.
Also, please let me know whether and how I can implement the same thing with cleaner code using the C# 5.0 async and await keywords. I don't mind requiring .NET 4.5.
Thanks in advance.
While you could likely get this to work, in my experience I'd recommend, if possible, using an existing logging framework :) For instance, there are various options for async logging/appenders with log4net, such as this async appender wrapper thingy.
Otherwise, IMHO since you're going to be blocking a threadpool thread during your logging operation anyway, I would instead just start a dedicated thread for your logging. You seem to be kind-of going for that approach already, just via Task so that you'd not hold a threadpool thread when nothing is logging. However, the simplification in implementation I think benefits just having the dedicated thread.
Once you have a dedicated logging thread, you then only need have an intermediate ConcurrentQueue. At that point, your log method just adds to the queue and your dedicated logging thread just does that while loop you already have. You can wrap with BlockingCollection if you need blocking/bounded behavior.
By having the dedicated thread as the only thing that writes, it eliminates any possibility of having multiple threads/tasks pulling off queue entries and trying to write log entries at the same time (painful race condition). Since the log method is now just adding to a collection, it doesn't need to be async and you don't need to deal with the TPL at all, making it simpler and easier to reason about (and hopefully in the category of 'obviously correct' or thereabouts :)
This 'dedicated logging thread' approach is what I believe the log4net appender I linked to does as well, FWIW, in case that helps serve as an example.
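As a rough illustration of that dedicated-thread approach, here is a minimal sketch of my own using BlockingCollection (the LogEntry type is taken from the question; the rest is illustrative, not a definitive implementation):

using System.Collections.Concurrent;
using System.Threading;

public abstract class DedicatedThreadLogger
{
    private readonly BlockingCollection<LogEntry> _queue = new BlockingCollection<LogEntry>();
    private readonly Thread _logThread;

    protected DedicatedThreadLogger()
    {
        // Single consumer: only this thread ever calls Log, so there are no write races.
        _logThread = new Thread(() =>
        {
            foreach (var entry in _queue.GetConsumingEnumerable())
            {
                Log(entry);
            }
        }) { IsBackground = false };
        _logThread.Start();
    }

    public void AddLogEntry(LogEntry entry) => _queue.Add(entry);

    // Call on shutdown to flush everything that has been queued.
    public void Shutdown()
    {
        _queue.CompleteAdding();
        _logThread.Join();
    }

    protected abstract void Log(LogEntry entry);
}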
I see two race conditions off the top of my head:
You can spin up more than one Thread if multiple threads call AddLogEntry. This won't cause lost events but is inefficient.
Yes, an event can be queued while the Thread is exiting, and in that case it would be "lost".
Also, there's a serious performance issue here: unless you're logging constantly (thousands of times a second), you're going to be spinning up a new Thread for each log entry. That will get expensive quickly.
Like James, I agree that you should use an established logging library. Logging is not as trivial as it seems, and there are already many solutions.
That said, if you want a nice .NET 4.5-based approach, it's pretty easy:
using System.Threading.Tasks.Dataflow; // requires the TPL Dataflow package

public abstract class BaseLogger
{
    private readonly ActionBlock<LogEntry> block;

    protected BaseLogger(int maxDegreeOfParallelism = 1)
    {
        block = new ActionBlock<LogEntry>(
            entry =>
            {
                Log(entry);
            },
            new ExecutionDataflowBlockOptions
            {
                MaxDegreeOfParallelism = maxDegreeOfParallelism,
            });
    }

    public virtual void AddLogEntry(LogEntry entry)
    {
        block.Post(entry);
    }

    protected abstract void Log(LogEntry entry);
}
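One follow-up note of my own (not in the original answer): ActionBlock also gives you an easy way to flush pending entries on shutdown, by completing the block and waiting for it to drain. A sketch of a method you could add to BaseLogger:

// Call this when the application is shutting down.
public void Shutdown()
{
    block.Complete();          // stop accepting new entries
    block.Completion.Wait();   // wait until all queued entries have been logged
}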
Regarding losing waiting messages when the app crashes because of an unhandled exception: I've bound a handler to the AppDomain.CurrentDomain.DomainUnload event. It goes like this:
protected ManualResetEvent flushing = new ManualResetEvent(true);

protected AsyncLogger() // ctor of logger
{
    AppDomain.CurrentDomain.DomainUnload += CurrentDomain_DomainUnload;
}

protected void CurrentDomain_DomainUnload(object sender, EventArgs e)
{
    if (!IsEmpty)
    {
        flushing.WaitOne();
    }
}
Maybe not too clean, but it works.

JavaFX Multi Threading

I'm writing a small program where JavaFX acts as the viewer and controller and lets Java do the other hard work. I can start multiple threads from JavaFX; however, I'm not able to stop them. If I try to use .stop(), the threads are still running.
Here is one of them:
public var sleepTask_connect;

function LogOutAction(): Void {
    sleepTask_connect.stop();
}

function LogInAction(): Void {
    var listener = FXListener_interface_connection {
        override function callback(errorCode, errorMessage): Void {
            //do something
            if (errorCode != 200) {
                setIcn(errorMessage);
            }
        }
    }
    sleepTask_connect = FXListener_connection {
        listener: listener
    };
    sleepTask_connect.start();
}
Use JavaTaskBase to implement your Java thread. There is a stop method to kill the thread. Here is an example of how to use it.
I've had better luck with the JFXtras XWorker component for threading. See http://jfxtras.googlecode.com/svn/site/javadoc/release-0.6/org.jfxtras.async/org.jfxtras.async.XWorker.html.
However in general in order for your thread to respond to cancel/stop requests, you have to check the canceled or stopped flag in your code during your "do something" section. This works if your thread is in an infinite loop for example, or if you just have a series of long running processes you can check for canceled/stopped in between them. Alternatively, if your code calls some blocking method (like sockets or a blocking queue), then most of these will throw an InterruptedException when the thread is canceled.
