What is the sequence of events that occurs when I make changes to the settings file (ServiceConfiguration.Cloud.cscfg) for a cloud-deployed app? Will the worker roles restart so that the new changes are reflected? (Will the OnStop, OnStart, and Run events be triggered when a settings value changes?)
In my cloud service, I read custom values from the configuration file in the Run() method of the WorkerRole, and I am wondering whether a change to the ServiceConfiguration.Cloud.cscfg file of an app deployed in the cloud will re-trigger the OnStart and Run events.
Yes indeed, your instances will go through OnStop / (reboot) / OnStart / Run after each configuration change. If you're storing the settings in a static variable in your application, for example, it can be a good thing to let this happen: after the reboot your application restarts and gets a chance to re-initialize all the settings in those static variables.
If, on the other hand, you want to handle this yourself and make sure the instance recycles (maybe you cached the settings somewhere, or initialized a static object from those settings), you can trigger the reboot by handling the RoleEnvironment.Changing event:
public override bool OnStart()
{
    // Subscribe to configuration-change notifications.
    RoleEnvironment.Changing += RoleEnvironmentChanging;
    return base.OnStart();
}

private void RoleEnvironmentChanging(object sender, RoleEnvironmentChangingEventArgs e)
{
    if (e.Changes.Any(change => change is RoleEnvironmentConfigurationSettingChange))
    {
        // Cancelling the change tells the fabric to recycle (restart) this instance
        // before the new configuration is applied.
        e.Cancel = true;
    }
}
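If you would rather apply a settings change without a restart, you can leave the Changing event alone and react in the RoleEnvironment.Changed event instead, refreshing whatever you cached. A minimal sketch, assuming the usual Microsoft.WindowsAzure.ServiceRuntime API (and System.Linq for OfType); the setting name "MyCustomSetting" is only an illustration:

// Subscribe in OnStart, next to (or instead of) the Changing handler:
// RoleEnvironment.Changed += RoleEnvironmentChanged;

private void RoleEnvironmentChanged(object sender, RoleEnvironmentChangedEventArgs e)
{
    // Fires after the new configuration has been applied, without a restart.
    if (e.Changes.OfType<RoleEnvironmentConfigurationSettingChange>().Any())
    {
        // Re-read the updated value and refresh any cached/static copy of it.
        var value = RoleEnvironment.GetConfigurationSettingValue("MyCustomSetting");
        // ... update your cached settings here ...
    }
}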
I have a WebJob that I want to be time-triggered:
public class ArchiveFunctions
{
private readonly IOrderArchiver _orderArchiver;
public ArchiveFunctions(IOrderArchiver orderArchiver)
{
_orderArchiver = orderArchiver;
}
public async Task Archive([TimerTrigger("0 */5 * * * *")] TimerInfo timer, TextWriter log)
{
log.WriteLine("Hello world");
}
}
My program.cs:
public static void Main()
{
var config = new JobHostConfiguration
{
JobActivator = new AutofacJobActivator(RegisterComponents())
};
config.UseTimers();
var host = new JobHost(config);
// The following code ensures that the WebJob will be running continuously
host.RunAndBlock();
}
My webjob-publish-settings.json:
{
"$schema": "http://schemastore.org/schemas/json/webjob-publish-settings.json",
"webJobName": "OrdersArchiving",
"runMode": "OnDemand"
}
Here is what it looks like in the Azure portal:
My problem is that the job runs (I do get the hello world), but it stays in the Run state and eventually hits a timeout error:
[02/05/2018 15:34:05 > f0ea5f: ERR ] Command 'cmd /c ""Ores.Contr ...' was aborted due to no output nor CPU activity for 121 seconds. You can increase the SCM_COMMAND_IDLE_TIMEOUT app setting (or WEBJOBS_IDLE_TIMEOUT if this is a WebJob) if needed.
What can I do to fix this?
My wild guess is that RunAndBlock could be the problem, but I do not see a solution.
Thanks!
Edit:
I have tested Rob Reagan's answer; it does help with the error, thank you!
On the same service, I have another time-triggered WebJob (it was written in .NET Core, while mine is not).
You can see that Webjob.Missions is 'Triggered', with a status update for the last time it ran, and you can also see its schedule.
I would like to have the same for my 'OrdersArchiving' job.
How can I achieve that?
Thanks!
Change your run mode to Continuous rather than Triggered. The TimerTrigger will handle executing the method you've placed it on.
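For example, the webjob-publish-settings.json from the question would then look something like this (a sketch; only the runMode value changes):

{
  "$schema": "http://schemastore.org/schemas/json/webjob-publish-settings.json",
  "webJobName": "OrdersArchiving",
  "runMode": "Continuous"
}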
Also, make sure that you're not using a Free tier for hosting your WebJob. After twenty minutes of inactivity, the app will be paused and will await a new HTTP request to wake it up.
Also, make sure you've enabled Always On on your Web App settings to prevent the same thing from happening to a higher service tier web app.
Edit
Tom asked how to invoke methods on a schedule for a Triggered WebJob. There are two options to do so:
Set the job up as Triggered and use a settings.json file to set up the schedule (a minimal sketch follows after this list). You can read about it here.
Invoke a method via HTTP using an Azure Scheduler. The Azure Scheduler is a separate Azure service that you can provision. It has a free tier which may be sufficient for your use. Please see David Ebbo's post on this here.
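For option 1, the schedule is a six-field CRON expression in a small settings file deployed with the WebJob. A minimal sketch, assuming the Kudu-style settings.job file and the same five-minute schedule as the TimerTrigger in the question:

{
  "schedule": "0 */5 * * * *"
}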
I'm trying to get Azure WebJobs to react to an incoming Service Bus event. I'm running this by hitting F5, and I'm getting the following error at startup.
No job functions found. Try making your job classes and methods
public. If you're using binding extensions (e.g. ServiceBus, Timers,
etc.) make sure you've called the registration method for the
extension(s) in your startup code (e.g. config.UseServiceBus(),
config.UseTimers(), etc.).
My Functions class looks like this:
public class Functions
{
// This function will get triggered/executed when a new message is written
// on an Azure Queue called queue.
public static void ProcessQueueMessage([ServiceBusTrigger("test-from-dynamics-queue")] BrokeredMessage message, TextWriter log)
{
log.WriteLine(message);
}
}
I have every class and method set to public.
I am calling config.UseServiceBus(); in my program.cs file.
I'm using Microsoft.Azure.WebJobs v1.1.2.
(I'm not entirely sure I have the correct AzureWebJobsDashboard and AzureWebJobsStorage connection strings; I took them from my only Azure Storage account in the Azure portal. If that might be the problem, where should I get them?)
Based on the error you mention, it seems that you are missing the config parameter when initializing the JobHost. If that is the case, please use the following code.
JobHost host = new JobHost(config);
For more detail about how to use Azure Service Bus with the WebJobs SDK, please refer to the documentation. The following is the sample code from the documentation.
public class Program
{
    public static void Main()
    {
        JobHostConfiguration config = new JobHostConfiguration();

        // Register the Service Bus extension so ServiceBusTrigger bindings are discovered.
        config.UseServiceBus();

        JobHost host = new JobHost(config);
        host.RunAndBlock();
    }
}
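On the connection-string question: the SDK reads the AzureWebJobsStorage and AzureWebJobsDashboard connection strings from your App.config (the values come from an Azure Storage account's access keys in the portal), and the Service Bus trigger additionally needs a Service Bus connection string. As a sketch, assuming the ServiceBusConfiguration type from the Microsoft.Azure.WebJobs.ServiceBus package is available in your version (the connection-string value is a placeholder):

public class Program
{
    public static void Main()
    {
        var config = new JobHostConfiguration();

        // Pass the Service Bus connection string explicitly instead of relying
        // on an AzureWebJobsServiceBus setting in App.config.
        config.UseServiceBus(new ServiceBusConfiguration
        {
            ConnectionString = "<your Service Bus connection string>"
        });

        new JobHost(config).RunAndBlock();
    }
}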
Context
I have a RedisMqServer configured to handle a single message type on my ServiceStack web service. The messages on that MQ originate from another application and show up in the .inq with all the correct properties. Everything is on ServiceStack 4.0.38.
My configuration in MyAppHost.cs:
public override void Configure(Container container)
{
var redisFactory = new PooledRedisClientManager(0, "etc:etc");
redisFactory.ConnectTimeout = 5;
redisFactory.IdleTimeOutSecs = 30;
redisFactory.PoolTimeout = 3;
container.Register<IRedisClientsManager>(redisFactory);
//Plugins, Filters, other Registrations omitted
var mqHost = new RedisMqServer(redisFactory, retryCount: 2);
mqHost.DisablePublishingResponses = true;
mqHost.RegisterHandler<CreateVisitor>(ServiceController.ExecuteMessage);
mqHost.Start();
}
And then in Global.asax.cs:
void Application_Start(object sender, EventArgs e)
{
new MyAppHost().Init();
}
Problem
The messages are not consistently handled when I deploy this elsewhere. They sit in the .inq indefinitely. Nothing is lost, just delayed for an indeterminate duration.
As of this moment, the only things that come to mind are:
I'm using IIS Express locally, and the server is using IIS.
Application_Start needs to happen before it can handle messages.
I've tried initializing the service by making other API calls over HTTP, before and after queuing messages, with more failure than success. Sometimes the service starts to handle them, but I am unable to identify and thus influence when this happens.
Note
I do have several other console applications and windows services that listen on other MQs and handle messages placed by other applications, and those have always worked flawlessly. This is the first time I've tried this from within an existing web service, however.
It's hard to know what the issue is from this description (are messages getting lost or just delayed?), but this sounds like it's due to ASP.NET AppDomain recycling, in which case you can disable AppDomain recycling or set up a continuous ping route that hits your ASP.NET Web Application to keep the AppDomain alive.
If the ASP.NET Service is available on the Internet, you can use services like https://uptimerobot.com or https://www.pingdom.com and configure them to ping your Service at regular intervals (e.g. every 5-10 minutes); otherwise, if this is an internal Service, you can use a Scheduled Task.
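If you go the ping-route path, the endpoint can be as small as an empty request DTO and a service that returns a constant. A minimal sketch (the route and type names are illustrative, not from the question):

// Hypothetical keep-alive endpoint; point the uptime monitor or Scheduled Task at /keepalive.
[Route("/keepalive")]
public class KeepAlive : IReturn<string> { }

public class KeepAliveService : Service
{
    public object Any(KeepAlive request)
    {
        return "OK";
    }
}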
I'm trying to gain some understanding and experience in creating background processes on Azure.
I've created a simple console app and converted it to an Azure Worker Role. How do I invoke it? I tried to use Azure Scheduler, but it looks like the scheduler can only invoke a worker role through message queues or HTTP/HTTPS.
I never thought about any type of communication, as my idea was to create a background process that does not really communicate with any other app. Do I need to convert the worker role to a web role and invoke it with the Azure Scheduler over HTTP/HTTPS?
A worker role (RoleEntryPoint) has three methods you override:
OnStart
Run
OnStop
public class WorkerRole : RoleEntryPoint
{
    ManualResetEvent CompletedEvent = new ManualResetEvent(false);

    public override void Run()
    {
        // Your background processing code goes here.
        // Block so the role instance is not recycled when Run returns.
        CompletedEvent.WaitOne();
    }

    public override bool OnStart()
    {
        return base.OnStart();
    }

    public override void OnStop()
    {
        // Signal Run() to exit so the role can shut down cleanly.
        CompletedEvent.Set();
        base.OnStop();
    }
}
The moment you run/debug your converted console worker role, the first two (OnStart and Run) fire in sequence. In Run you have to keep the thread alive, either with a while loop or with a ManualResetEvent; this is where your background processing code lives.
OnStop fires when you either release the thread from Run or something unexpected happens. This is the place to dispose of your objects: close open file handles, database connections, etc.
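As a sketch of the while-loop variant (the sleep interval and the DoBackgroundWork method are illustrative, not part of the original answer):

public class WorkerRole : RoleEntryPoint
{
    private volatile bool _running = true;

    public override void Run()
    {
        while (_running)
        {
            DoBackgroundWork();                      // your periodic background work
            Thread.Sleep(TimeSpan.FromSeconds(30));  // pick an interval that suits the job
        }
    }

    public override void OnStop()
    {
        _running = false;   // lets Run() fall out of its loop so the role can shut down
        base.OnStop();
    }

    private void DoBackgroundWork()
    {
        // ...
    }
}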
I am wondering why the following Azure worker role does not show any diagnostic messages when the role is shut down:
public class WorkerRole : RoleEntryPoint
{
    private bool running = true;

    public override void Run()
    {
        while (running)
        {
            Thread.Sleep(10000);
            Trace.WriteLine("working", "Information");
        }
        Trace.WriteLine("stopped", "Information");
    }

    public override bool OnStart()
    {
        Trace.WriteLine("starting", "Information");
        return base.OnStart();
    }

    public override void OnStop()
    {
        Trace.WriteLine("stopping", "Information");
        running = false;
        base.OnStop();
    }
}
I can see the 'starting' and 'working' events in the diagnostic logs, but the OnStop method does not log anything. I was wondering if it is even called, so I injected some code into the OnStop() method to write out some data. The data was written as expected, which proves the method is called; I just don't get any trace logs. Any ideas how to trace my shutdown code?
My first and best guess is that the Diagnostics Agent does not have time to transfer the trace to storage for you to see it. Traces are first logged locally on the VM, then the agent transfers them off (on demand or on a schedule), depending on how you have configured it. Once the VM shuts down, the agent is gone too and cannot transfer them.
Tracing in OnStop is not supported, and if you manage to get it working via an On-Demand Transfer (http://msdn.microsoft.com/en-us/library/windowsazure/gg433075.aspx) it is likely to stop working in the next release. Note that tracing in a Web Role's OnStart does not work either; see my blog post http://blogs.msdn.com/b/rickandy/archive/2012/12/21/optimal-azure-restarts.aspx to fix that. Also see my blog post for instructions on viewing real-time OnStop trace data with DbgView.
The OnStop method should be used only to delay shutdown until you've cleaned up - so you shouldn't have much code in there to trace. Again, see my blog for details.