Scenario:
I have hooked up the Web job with a CancellationToken and need to simulate shutdown to see if the cancellation is being processed successfully. I've tried the Ctrl + C combination but the cancellation did not fire. What is the correct way of simulating this shutdown for debugging purposes?
Since this is debug code, I did it with a bit of a hack. The issue in my case was that the CancellationToken was being passed in by a framework call, so I had no access to the CancellationTokenSource.
private async Task InitializeEventProcessing(CancellationToken ctx)
{
#if DEBUG
    // Debug-only hack: replace the framework-supplied token with one that
    // cancels itself after 10 seconds, simulating a shutdown request.
    CancellationTokenSource cts = new CancellationTokenSource(TimeSpan.FromSeconds(10));
    ctx = cts.Token;
#endif
    ...
}
Didn't really like the other answer, so here is what I did...
The built-in WebJobsShutdownWatcher looks for an environment variable, WEBJOBS_SHUTDOWN_FILE, and watches that file for changes.
If you configure your debug settings to provide a file name for that environment variable, then all you have to do is create that file (its contents don't matter) and the job will follow the same shutdown path as when it is deployed.
Of course you have to delete the file afterward, or add a build step to delete it on debug builds, but at least you're executing the same code as a graceful shutdown on the host.
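A minimal sketch of that approach, assuming you would rather set the variable from code than from the project's debug settings (the temp-file path and helper comments are illustrative, not part of the SDK):
// using System; using System.IO;
#if DEBUG
// Point WEBJOBS_SHUTDOWN_FILE at a local path so WebJobsShutdownWatcher has
// something to watch while debugging (do this before the watcher/JobHost is created).
string shutdownFile = Path.Combine(Path.GetTempPath(), "webjob-shutdown.txt");
Environment.SetEnvironmentVariable("WEBJOBS_SHUTDOWN_FILE", shutdownFile);
File.Delete(shutdownFile); // remove any stale file so shutdown isn't signaled prematurely
#endif

// Later, creating the file (from another process, a test, or the Immediate window)
// simulates the same graceful shutdown the host performs on deploy:
// File.WriteAllText(shutdownFile, string.Empty);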
I have a few Azure Functions sharing the same code, so I created a batch file for publishing my libraries. It is a simple .bat file: for each of my Azure Functions, it connects to the host and uses robocopy to synchronize folders.
However, each time I publish, currently running functions are dropped. I want to avoid that. Is there a way to let a running function naturally finish its work?
I think it should be possible, because when I publish I am not overwriting the actually running DLL; I am only copying files into the <azure-function-url>/site/wwwroot folder.
NOTE:
The function calls an async method without awaiting it, so the async method has not completed its work when the source changes. (I wasn't focusing on this problem; thanks Matt for the comment, it opened my eyes.)
The functions runtime is designed to allow functions to gracefully exit in the event of host restarts, see here.
Not awaiting your async calls is an antipattern in functions, as we won't be able to track your function execution. We use the returned Task to determine when your function has finished. If you do not return a Task, we assume your function has completed when it returns.
In your case, that means we will kill the host on restarts while your orphaned asynchronous calls are running. If you fail to await async calls, we also don't guarantee successful:
Logging
Output bindings
Exception handling
Do: static async Task Run(...){ await asyncMethod(); }
Don't: static void Run(...){ asyncMethod(); }
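For instance, a sketch of the "Do" shape (DoWorkAsync is a placeholder for whatever async work the function performs, not something from the original post):
// Awaiting the async call and returning the Task lets the runtime track
// when this invocation has actually finished.
public static async Task Run(string myQueueItem, TraceWriter log)
{
    log.Info($"Processing {myQueueItem}");
    await DoWorkAsync(myQueueItem); // hypothetical async method doing the real work
}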
I have created my first Azure WebJob that runs continuously.
I'm not so sure this is a code issue, but for the sake of completeness here is my code:
static void Main()
{
var host = new JobHost();
host.CallAsync(typeof(Functions).GetMethod("ProcessMethod"));
host.RunAndBlock();
}
And for the function:
[NoAutomaticTrigger]
public static async Task ProcessMethod(TextWriter log)
{
log.WriteLine(DateTime.UtcNow.ToShortTimeString() + ": Started");
while (true)
{
Task.Run(() => RunAllAsync(log));
await Task.Delay(TimeSpan.FromSeconds(60));
}
log.WriteLine(DateTime.UtcNow.ToShortTimeString() + "Shutting down..");
}
Note that the async task fires off a task of its own. This was to ensure they were started quite accurately at the same interval. The job itself downloads a URL, parses it, and inserts some data into a DB, but that shouldn't be relevant to the multiple-instance issue I am experiencing.
My problem is that once this has been running for roughly 5 minutes, a second ProcessMethod is called, which leaves me with two sessions simultaneously doing the same thing. The second method says it was "started from Dashboard" even though I am 100% confident I did not click anything to start it myself.
Anyone experienced anything like it?
Change the instance count to 1 on the Scale tab of the Web App in the Azure portal. By default it is set to 2 instances, which causes the job to run twice.
I can't explain why it's getting called twice, but I think you'd be better served with a triggered job using a CRON schedule (https://azure.microsoft.com/en-us/documentation/articles/web-sites-create-web-jobs/#CreateScheduledCRON), instead of a Continuous WebJob.
Also, it doesn't seem like you are really using the WebJobs SDK, so you can skip it completely. Your WebJob can be as simple as a Main that directly does the work: no JobHost, no async, and generally easier to get right.
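A minimal sketch of that simpler shape, assuming the actual work lives in a hypothetical DoWork method and the CRON schedule is supplied by the scheduler rather than by code:
// Scheduled (triggered) WebJob without the WebJobs SDK: the scheduler starts
// the process, Main does one round of work, and the process exits.
static void Main()
{
    Console.WriteLine(DateTime.UtcNow + ": Started");
    DoWork(); // hypothetical: download the URL, parse it, write the data to the DB
    Console.WriteLine(DateTime.UtcNow + ": Finished");
}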
I am developing a triggered WebJob that uses TimerTrigger.
Before the webjob stops, I need to dispose of some objects, but I don't know how to hook into the "webjob stop".
With a NoAutomaticTrigger function, I know that I can use the WebJobsShutdownWatcher class to handle the webjob stopping, but with a triggered job I need some help...
I had a look at Extensible Triggers and Binders with Azure WebJobs SDK 1.1.0-alpha1.
Is it a good idea to create a custom trigger (StopTrigger) that uses the WebJobsShutdownWatcher class to fire an action before the webjob stops?
OK, the answer was in the question:
Yes, I can use the WebJobsShutdownWatcher class because its Token has a Register method whose callback is invoked when the cancellation token is canceled, in other words when the webjob is stopping.
static void Main()
{
var cancellationToken = new WebJobsShutdownWatcher().Token;
cancellationToken.Register(() =>
{
Console.Out.WriteLine("Do whatever you want before the webjob is stopped...");
});
var host = new JobHost();
// The following code ensures that the WebJob will be running continuously
host.RunAndBlock();
}
EDIT (based on Matthew's comment):
If you use triggered functions, you can add a CancellationToken parameter to your function signatures. The runtime will automatically cancel that token when the host is shutting down, allowing your function to receive the notification.
public static void QueueFunction(
[QueueTrigger("QueueName")] string message,
TextWriter log,
CancellationToken cancellationToken)
{
...
if(cancellationToken.IsCancellationRequested) return;
...
}
I was recently trying to figure out how to do this without the WebJobs SDK, which contains the WebJobsShutdownWatcher; this is what I found out.
What the underlying runtime does (and what the WebJobsShutdownWatcher referenced above checks), is create a local file at the location specified by the environment variable %WEBJOBS_SHUTDOWN_FILE%. If this file exists, it is essentially the runtime's signal to the webjob that it must shutdown within a configurable wait period (default of 5 seconds for continuous jobs, 30 for triggered jobs), otherwise the runtime will kill the job.
The net effect is, if you are not using the Azure WebJobs SDK, which contains the WebJobsShutdownWatcher as described above, you can still achieve graceful shutdown of your Azure Web Job by monitoring for the shutdown file on an interval shorter than the configured wait period.
Additional details, including how to configure the wait period, are described here: https://github.com/projectkudu/kudu/wiki/WebJobs#graceful-shutdown
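A minimal sketch of that manual monitoring, assuming a simple polling loop (DoUnitOfWork and the poll interval are illustrative):
// using System; using System.IO; using System.Threading;
// Graceful shutdown without the WebJobs SDK: poll for the file whose path is
// given by %WEBJOBS_SHUTDOWN_FILE%; its appearance is the shutdown signal.
static void Main()
{
    string shutdownFile = Environment.GetEnvironmentVariable("WEBJOBS_SHUTDOWN_FILE");

    while (true)
    {
        if (!string.IsNullOrEmpty(shutdownFile) && File.Exists(shutdownFile))
        {
            Console.WriteLine("Shutdown file detected, cleaning up...");
            break; // dispose resources, flush logs, then exit promptly
        }

        DoUnitOfWork(); // hypothetical: one small batch of work per iteration
        Thread.Sleep(TimeSpan.FromSeconds(1)); // poll well within the wait period
    }
}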
I am running an azure webjobs SDK console application (continuous) with the recommended setup:
public static void ProcessQueueMessage([QueueTrigger("logqueue")] string logMessage, TextWriter logger)
The Azure queue I am running against has ~6000 messages in it and I am running the WebJob locally, as a console application.
The problem I'm having is that the processing randomly stops after processing between zero and ~30 messages. The console stays open, but no more console messages are displayed.
For example, it might just process 2 messages:
Executing: 'Functions.ProcessQueueMessage' - Reason: 'New queue message detected on 'QueueName'.'
Executed: 'Functions.ProcessQueueMessage' (Succeeded)
Executing: 'Functions.ProcessQueueMessage' - Reason: 'New queue message detected on 'QueueName'.'
Executed: 'Functions.ProcessQueueMessage' (Succeeded)
And then, nothing. There doesn't seem to be anything wrong with my internet connection and I can't trace the issues down to any particular messages.
Has anyone else had issues with this SDK?
Update:
I made sure that I was using the right versions of all of the dependencies by removing the NuGet packages and then re-running Install-Package Microsoft.Azure.WebJobs. I am now using WebJobs version 1.1.0, which has pulled in version 4.3 of Azure Storage.
As recommended by Matthew, I have pulled down the source code for Azure WebJobs to determine where the process is freezing up. Once the freeze occurs, I pause execution and check the running threads for what I believe is the culprit, inside Microsoft.Azure.WebJobs.Host.CompositeTraceWriter:
protected virtual void InvokeTextWriter(TraceEvent traceEvent)
{
if (_innerTextWriter != null)
{
string message = traceEvent.Message;
if (!string.IsNullOrEmpty(message) &&
message.EndsWith("\r\n", StringComparison.OrdinalIgnoreCase))
{
// remove any terminating return+line feed, since we're
// calling WriteLine below
message = message.Substring(0, message.Length - 2);
}
_innerTextWriter.WriteLine(message);
if (traceEvent.Exception != null)
{
_innerTextWriter.WriteLine(traceEvent.Exception.ToDetails());
}
}
}
The line it freezes on is line 66: _innerTextWriter.WriteLine(message);
_innerTextWriter is an instance of System.IO.TextWriter.SyncTextWriter
Is it possible there is some deadlock issue with this class or the way it is being used?
Some notes:
I am running in the debugger, so in this case I believe the TextWriter is forwarding to the console internally.
I have my batch size set to 1 via config.Queues.BatchSize = 1; not sure if that could matter.
I'm currently working on setting up an environment on another computer so that I can see if it is reproducible somewhere other than this machine (surface book).
Update
The issue was me not understanding how the new Windows 10 command prompt works. Any time you click on the command window, it goes into "select" mode, which completely pauses execution of the process.
Basically: https://superuser.com/questions/419717/windows-command-prompt-freezing-randomly?newreg=ece53f5584254346be68f85d1fd2f18d
You can tell it is in this state because the window title is prefixed with the word "Select".
You have to press Enter or click again to get it going once more.
So, two final comments:
1) What an incredibly confusing and un-intuitive behavior for a command window!
2) I hope some admin will come take pity on the shame I have brought upon myself and my family by deleting this question.
To get rid of this strange behavior, you can disable QuickEdit mode in the console window's properties.
Strange. When it is in this stuck state, can you try adding a new queue message to the queue and see if that triggers? Are you sure your function isn't hanging internally? What version of the SDK are you using? You might also try upgrading to v1.1.0, which we just released last week. If there really are a bunch of messages in the queue waiting to be processed, I can't think of anything that would cause this; the queue listener in the SDK should chug along, reading batches of messages in parallel and dispatching them to your function. Have you changed any of the JobHostConfiguration.Queues configuration knobs? You haven't force-updated the Azure SDK to a version higher than the WebJobs SDK supports, have you?
Another option if you can't figure this out might be to clone the SDK, build it and debug it locally. The repo is here. The main queue processing loop is here.
I've configured the Azure Diagnostics so that the logs get uploaded to a storage table. I'm using Trace.TraceXxx from my code and all works well.
Now I'm trying to add tracing to the role's OnStart() and OnStop() methods. I know the tracing works, as I see the lines in the Debug window when running in the emulator, but in the cloud deployment these trace lines never seem to get uploaded to the table. My guess is that it is somehow related to TraceSources, as the only trace lines I have in the table come from the w3wp.exe source... Any hint?
Thanks
Like you said, you can add the trace listener using WaIISHost.exe.config, but besides that you can also add the trace listener in code (you'll need a reference to Microsoft.WindowsAzure.Diagnostics.dll):
public class WebRole : RoleEntryPoint
{
    public override void Run()
    {
        // Requires System.Diagnostics plus a reference to Microsoft.WindowsAzure.Diagnostics.dll.
        var listener = new Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener();
        Trace.Listeners.Add(listener);
        ...
    }
}
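For the OnStart()/OnStop() case from the question, a similar sketch (assuming the listener is added before the first trace call; the log message is illustrative) could be:
public override bool OnStart()
{
    // Add the listener first so the OnStart/OnStop trace lines are also
    // picked up by diagnostics and uploaded to the storage table.
    Trace.Listeners.Add(new Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener());
    Trace.TraceInformation("WebRole OnStart");
    return base.OnStart();
}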
Another way of setting up diagnostics is through the configuration file. If you created a VS solution recently, it will automatically create the diagnostics plug-in and the configuration for the trace listener. With the config file (diagnostics.wadcfg) there is no code to write for the different data sources. Here is a link to get you started, along with a sample file:
http://msdn.microsoft.com/en-us/library/gg604918.aspx
You cannot include custom performance counters right now, and you need to make sure that you don't try to allocate more than 4 GB of buffer to anything (you can leave it at 0), or it tends to fail.
Note the time interval format (e.g. PT1M). That is a serialization format, so PTXM is X minutes, while PTXS is X seconds. You need to mark the file as Content with Copy Always in Visual Studio (and place it at the root of the project) so it gets packaged.
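As a quick illustration of that duration notation (XmlConvert parses the same format):
// using System; using System.Xml;
Console.WriteLine(XmlConvert.ToTimeSpan("PT1M"));  // 00:01:00 (one minute)
Console.WriteLine(XmlConvert.ToTimeSpan("PT30S")); // 00:00:30 (thirty seconds)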
And here is a link to the three ways of setting up diagnostics:
http://msdn.microsoft.com/en-us/library/windowsazure/hh411541.aspx
Ranjith
http://www.opstera.com