Job Stops Running? (IIS)

I have a job that I want to run every ten minutes. To schedule it, I use:
public static IScheduler _scheduler { get; private set; }
...
ISchedulerFactory schedFact = new StdSchedulerFactory();
_scheduler = schedFact.GetScheduler();
_scheduler.Start();

string cron = "0 0/10 * 1/1 * ? *";
JobKey jobkey = new JobKey("Radar", "F");

IJobDetail job = JobBuilder.Create<RadarJob>()
    .WithIdentity(jobkey)
    .Build();

CronScheduleBuilder csb = CronScheduleBuilder.CronSchedule(new CronExpression(cron)).InTimeZone(TimeZoneInfo.Local);

ICronTrigger trigger = (ICronTrigger)TriggerBuilder.Create()
    .WithIdentity("Radar-Trigger", "G")
    .WithSchedule(csb)
    .Build();

try
{
    DateTimeOffset ft = _scheduler.ScheduleJob(job, trigger);
    Response.Write("Job Scheduled");
}
catch (ObjectAlreadyExistsException)
{
    Response.Write("Job Already Exists!");
}
It seems to work at first: the job runs fine every ten minutes. However, after an hour or so, it stops running. I log both successes and errors, and there are no errors. What is causing my job to stop by itself?
I am running IIS 7, .NET Framework 4.0, Using a Shared Hosting Plan from GoDaddy.

Your job is running inside the IIS application pool. The pool is probably recycling, which kills the Quartz task, and IIS will not automatically restart it (whereas it DOES restart web requests that are in process when the pool recycles).
(I'm assuming that you are running the above code in Application_Start() inside of your Global.asax file.)
We had this situation and decided to use quartz as a standalone service which would not be affected by pool recycles, though I'm not sure if this would be a viable option for you under a shared hosting plan.
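For illustration, a minimal sketch of what that standalone host could look like as a console application (RadarJob, the job/trigger identities, and the cron expression come from the question; the hosting code itself is an assumption, not the poster's actual service):

using System;
using Quartz;
using Quartz.Impl;

class Program
{
    static void Main()
    {
        // Build and start a scheduler that lives outside IIS, so pool recycles cannot kill it
        ISchedulerFactory factory = new StdSchedulerFactory();
        IScheduler scheduler = factory.GetScheduler();
        scheduler.Start();

        IJobDetail job = JobBuilder.Create<RadarJob>()
            .WithIdentity("Radar", "F")
            .Build();

        ITrigger trigger = TriggerBuilder.Create()
            .WithIdentity("Radar-Trigger", "G")
            .WithCronSchedule("0 0/10 * 1/1 * ? *")   // every ten minutes, as in the question
            .Build();

        scheduler.ScheduleJob(job, trigger);

        Console.WriteLine("Scheduler running; press Enter to exit.");
        Console.ReadLine();        // a real Windows service would block in its own way instead
        scheduler.Shutdown(true);  // wait for running jobs to complete
    }
}

Installed as a Windows service (or kept running some other way), this process is not affected by IIS application pool recycles.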
You indicate that you are running on IIS 7. If in fact this is IIS 7.5, there may be a better solution outlined in Auto-Start application / global.asax / wcf service when IIS7 starts automatically, which would be to configure the app pool to automatically restart.

This is a late answer, but I had this kind of error in my application. I solved it with an IIS configuration change that I found on Scott Gu's weblog: http://weblogs.asp.net/scottgu/auto-start-asp-net-applications-vs-2010-and-net-4-0-series

Related

How to stop outbound HTTP connections from timing out

Background:
I'm currently hosting an ASP.NET application in Azure with the following specs:
ASP .Net Core 2.2
Using Flurl for HTTP requests
Kestrel Webserver
Docker (Linux - mcr.microsoft.com/dotnet/core/aspnet:2.2 runtime)
Azure App Service on P2V2 tier app service plan
I have a couple of background jobs that run on the service and make a lot of outbound HTTP calls to a 3rd party service.
Issue:
Under a small load (approximately 1 call per 10 seconds), all requests complete in under a second with no issue. The problem I'm having is that under a heavy load, when the service can make up to 3 or 4 calls in a 10-second span, some of the requests will randomly time out and throw an exception. When I was using RestSharp the exception would read "The operation has timed out". Now that I'm using Flurl, the exception reads "The call timed out".
Here's the kicker: if I run the same job from my laptop running Windows 10 / Visual Studio 2017, this problem does NOT occur. This leads me to believe I'm hitting some limit or running out of some resource in my hosted environment; it's unclear whether that is connection/socket or thread related.
Things I've tried:
Ensure all code paths to the request are using async/await to prevent lockouts
Ensure Kestrel defaults allow unlimited connections (they do by default)
Ensure Docker's default connection limits are sufficient (2000 by default, more than enough)
Configuring ServicePointManager settings for connection limits
Here is the code in my startup.cs that I'm currently using to try and prevent this issue:
public class Startup
{
    public Startup(IHostingEnvironment hostingEnvironment)
    {
        ...

        // ServicePointManager setup
        ServicePointManager.UseNagleAlgorithm = false;
        ServicePointManager.Expect100Continue = false;
        ServicePointManager.DefaultConnectionLimit = int.MaxValue;
        ServicePointManager.EnableDnsRoundRobin = true;
        ServicePointManager.ReusePort = true;

        // Set service point timeouts
        var sp = ServicePointManager.FindServicePoint(new Uri("https://placeholder.thirdparty.com"));
        sp.ConnectionLeaseTimeout = 15 * 1000; // 15 seconds

        FlurlHttp.ConfigureClient("https://placeholder.thirdparty.com", cli => cli.Settings.ConnectionLeaseTimeout = new TimeSpan(0, 0, 15));
    }
}
Has anyone else run into a similar issue to this? I'm open to any suggestions on how to best debug this situation, or possible methods to correct the issue. I'm at a complete loss after researching this for several days.
Thank you in advance.
I had similar issues. Take a look at "Asp.net Core HttpClient has many TIME_WAIT or CLOSE_WAIT connections". Debugging via netstat helped identify the problem for me. As one possible solution, I suggest you use IHttpClientFactory. You can get more info from https://learn.microsoft.com/en-us/aspnet/core/fundamentals/http-requests?view=aspnetcore-2.2. It should be fairly easy to use, as described in "Flurl client lifetime in ASP.Net Core 2.1 and IHttpClientFactory".
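As a rough sketch of the IHttpClientFactory suggestion (the "thirdparty" client name and the ThirdPartyCaller class are made up for illustration; only the placeholder URL comes from the question):

using System;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // Named client managed by IHttpClientFactory; the factory pools and recycles
        // the underlying message handlers so sockets are not exhausted under load.
        services.AddHttpClient("thirdparty", client =>
        {
            client.BaseAddress = new Uri("https://placeholder.thirdparty.com"); // placeholder host from the question
            client.Timeout = TimeSpan.FromSeconds(30);
        });

        services.AddMvc(); // existing MVC registration, if any
    }
}

// A consumer then asks the factory for a client instead of creating its own:
public class ThirdPartyCaller
{
    private readonly IHttpClientFactory _clientFactory;

    public ThirdPartyCaller(IHttpClientFactory clientFactory) => _clientFactory = clientFactory;

    public async Task<string> GetDataAsync(string path)
    {
        var client = _clientFactory.CreateClient("thirdparty");
        return await client.GetStringAsync(path);
    }
}

The linked Flurl question covers how to tie a factory-created HttpClient into Flurl's client lifetime if you want to keep using Flurl.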

Why does my Time trigger webjob keep running?

I have a Webjob that I want to be time triggered:
public class ArchiveFunctions
{
    private readonly IOrderArchiver _orderArchiver;

    public ArchiveFunctions(IOrderArchiver orderArchiver)
    {
        _orderArchiver = orderArchiver;
    }

    public async Task Archive([TimerTrigger("0 */5 * * * *")] TimerInfo timer, TextWriter log)
    {
        log.WriteLine("Hello world");
    }
}
My program.cs:
public static void Main()
{
    var config = new JobHostConfiguration
    {
        JobActivator = new AutofacJobActivator(RegisterComponents())
    };
    config.UseTimers();

    var host = new JobHost(config);
    // The following code ensures that the WebJob will be running continuously
    host.RunAndBlock();
}
My publish-settings.json:
{
    "$schema": "http://schemastore.org/schemas/json/webjob-publish-settings.json",
    "webJobName": "OrdersArchiving",
    "runMode": "OnDemand"
}
Here is what it looks like in the Azure portal:
My problem is that the job runs (I see the hello world), but it stays in the Run state until it hits a timeout error message:
[02/05/2018 15:34:05 > f0ea5f: ERR ] Command 'cmd /c ""Ores.Contr ...' was aborted due to no output nor CPU activity for 121 seconds. You can increase the SCM_COMMAND_IDLE_TIMEOUT app setting (or WEBJOBS_IDLE_TIMEOUT if this is a WebJob) if needed.
What can I do to fix this?
I have a wild guess that RunAndBlock could be the problem, but I do not see a solution.
Thanks!
Edit:
I have tested Rob Reagan's answer, and it does help with the error, thank you!
On the same service, I have another time-triggered job (it was done in .NET Core, while mine is not).
You can see that Webjob.Missions is 'Triggered', with a status update for the last time it ran, and you can see its schedule as well.
I would like the same for my 'OrdersArchiving' job.
How can I achieve that?
Thanks!
Change your run mode to continuous and not triggered. The TimerTrigger will handle executing the method you've placed it on.
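For reference, the publish settings file from the question would look something like this with a continuous run mode ("Continuous" is the standard value in the webjob-publish-settings schema, but verify against your tooling):

{
    "$schema": "http://schemastore.org/schemas/json/webjob-publish-settings.json",
    "webJobName": "OrdersArchiving",
    "runMode": "Continuous"
}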
Also, make sure that you're not using a Free tier for hosting your WebJob. After twenty minutes of inactivity, the app will be paused and will await a new HTTP request to wake it up.
Also, make sure you've enabled Always On on your Web App settings to prevent the same thing from happening to a higher service tier web app.
Edit
Tom asked how to invoke methods on a schedule for a Triggered WebJob. There are two options to do so:
Set the job up as triggered and use a settings.job file to set up the schedule. You can read about it here.
Invoke a method via HTTP using an Azure Scheduler. The Azure Scheduler is a separate Azure service that you can provision. It has a free tier which may be sufficient for your use. Please see David Ebbo's post on this here.

Azure subscription and webjob questions

So I'm trying to get a small project of mine going that I want to host on Azure. It's a web app, which works fine, and I've recently found WebJobs, which I now want to use to run a data gathering and updating task that I have a console app for.
My problem is that I can't set a schedule, since the job is published to the web app, which doesn't support scheduling. I tried using the Azure WebJobs SDK with a timer, but it won't run without an AzureWebJobsStorage connection string, which I cannot get because my Azure account is a Dreamspark account and I cannot create an Azure Storage account with it.
So I was wondering if there is some way to get this WebJob to run on a schedule (every hour or so). Otherwise, if I upgraded my account to "Pay-As-You-Go", would I still retain my free features, namely SQL Server?
I'm not sure if this is the right place to ask, but I tried googling for it without success.
Update: I decided to just make the console app run in an infinite loop and monitor it through the portal; the code below is what I am using to make that loop.
using System;
using System.Timers;

class Program
{
    static void Main()
    {
        // Fire every 30 minutes
        var time = 1000 * 60 * 30;
        Timer myTimer = new Timer(time);

        // Attach the handler before starting the timer
        myTimer.Elapsed += new ElapsedEventHandler(myTimer_Elapsed);
        myTimer.Start();

        // Keep the process alive so the timer keeps firing
        Console.ReadLine();
    }

    public static void myTimer_Elapsed(object sender, ElapsedEventArgs e)
    {
        Functions.PullAndUpdateDatabase();
    }
}
The simplest way to get your Web Job on a schedule is detailed in Amit Apple's blog titled "How to add a schedule to a triggered WebJob".
It's as simple as adding a JSON file called settings.job to your console application and in it describing the schedule you want as a cron expression like so:
{"schedule": "the schedule as a cron expression"}
For example, to run your job every 30 minutes you'd have this in your settings.job file:
{"schedule": "0 0,30 * * * *"}
Amit's blog also goes into details on how to write a cron expression.
Caveat: The scheduling mechanism used in this method is hosted on the instance where your web application is running. If your web application is not configured as Always On and is not in constant use it might be unloaded and the scheduler will then stop running.
To prevent this you will either need to set your web application to Always On or choose an alternative scheduling option - based on the Azure Scheduler service, as described in a blog post titled "Hooking up a scheduler job to a WebJob" written by David Ebbo.

ServiceStack RedisMqServer not always handling messages published from separate application

Context
I have a RedisMqServer configured to handle a single message on my ServiceStack web service. The messages on that MQ originate from another application and show up in the .inq with all the correct properties. Everything is on 4.0.38.
My configuration in MyAppHost.cs:
public override void Configure(Container container)
{
    var redisFactory = new PooledRedisClientManager(0, "etc:etc");
    redisFactory.ConnectTimeout = 5;
    redisFactory.IdleTimeOutSecs = 30;
    redisFactory.PoolTimeout = 3;
    container.Register<IRedisClientsManager>(redisFactory);

    //Plugins, Filters, other Registrations omitted

    var mqHost = new RedisMqServer(redisFactory, retryCount: 2);
    mqHost.DisablePublishingResponses = true;
    mqHost.RegisterHandler<CreateVisitor>(ServiceController.ExecuteMessage);
    mqHost.Start();
}
And then in Global.asax.cs:
void Application_Start(object sender, EventArgs e)
{
    new MyAppHost().Init();
}
Problem
The messages are not consistently handled when I deploy this elsewhere. They sit in the .inq indefinitely. Nothing is lost, just delayed for an indeterminate duration.
As of this moment, the only things that come to mind are:
I'm using IIS Express locally, and the server is using IIS.
Application_Start needs to happen before it can handle messages.
I've tried initializing the service by making other API calls over HTTP, before and after queuing messages, with more failure than success. Sometimes the service starts to handle them, but I am unable to identify and thus influence when this happens.
Note
I do have several other console applications and windows services that listen on other MQs and handle messages placed by other applications, and those have always worked flawlessly. This is the first time I've tried this from within an existing web service, however.
It's hard to know what the issue is from this description (are messages getting lost or just delayed?), but this sounds like it's due to ASP.NET AppDomain recycling, in which case you can disable AppDomain recycling or set up a continuous ping route that hits your ASP.NET Web Application to keep the AppDomain alive.
If the ASP.NET Service is available on the Internet, you can use services like https://uptimerobot.com or https://www.pingdom.com and configure them to ping your Service at regular intervals (e.g. every 5-10 minutes); otherwise, if this is an internal Service, you can use a Scheduled Task.
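As a minimal sketch of what such a ping route could look like in ServiceStack (the KeepAlive DTO and service names are made up for illustration, not part of the question's code):

using ServiceStack;

// Trivial endpoint that a monitor or scheduled task can hit to keep the AppDomain alive
[Route("/keepalive")]
public class KeepAlive : IReturn<string> { }

public class KeepAliveService : Service
{
    public object Any(KeepAlive request)
    {
        return "OK";
    }
}

An uptime monitor or Scheduled Task pointed at /keepalive every few minutes then keeps the AppDomain, and the RedisMqServer it started, alive.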

How can I keep my Azure WebJob running without "Always On"

I have a continuous WebJob associated with a website, and I am running that website in Shared mode. I don't want to go to the Always On option as there is no real need for it in my application; I only want to process messages when calls are made to my website.
My issue is that the job keeps stopping after a few minutes, even though I am continuously calling a dummy keep-alive method on my website every 5 minutes that posts a message to the queue monitored by that WebJob.
My WebJob is a simple console application built using the WebJobs SDK, with code like this:
JobHost host = new JobHost(new JobHostConfiguration(storageConnectionString));
host.RunAndBlock();
and the message processing function looks like below:
public static void ProcessKeepAliveMessages([QueueTrigger("keepalive")] KeepAliveTrigger message)
{
    Console.WriteLine("Keep Alive message called on :{0}", message.MessageTime);
}
The message log for the job basically says
[03/05/2015 18:51:02 > 4660f6: SYS INFO] WebJob is stopping due to website shutting down
I don't mind that it happens this way, but when the website starts with the next keep-alive call, the WebJob is not started. All the messages stay queued until I go to the management dashboard or the SCM portal shown below:
https://mysite.scm.azurewebsites.net/api/continuouswebjobs
I can see the status like this:
[{"status":"Starting","detailed_status":"4660f6 - Starting\r\n","log_url":"https://mysite.scm.azurewebsites.net/vfs/data/jobs/continuous/WebJobs/job_log.txt","name":"WebJobs","run_command":"mysite.WebJobs.exe","url":"https://mysite.scm.azurewebsites.net/api/continuouswebjobs/WebJobs","extra_info_url":"https://mysite.scm.azurewebsites.net/azurejobs/#/jobs/continuous/WebJobs","type":"continuous","error":null,"using_sdk":true,"settings":{}}]
I would really appreciate if someone can help me understand what is going wrong here.
I've run into a similar problem. I have a website (shared mode) and an associated webjob (continuous type). Looking at the webjob logs, I found that the job enters the stopped state after about 15 minutes of inactivity and stops reacting to trigger messages. It seems contradictory to the concept of a continuous job, but apparently, to get it running truly continuously you have to subscribe to a paid website. You get what you pay for...
That said, my website needs to be used only about every few days and running in a shared mode makes perfect sense. I don't mind that the site needs a bit extra time to get started - as long as it restarts automatically. The problem with the webjob is that once stopped it won't restart by itself. So, my goal was to restart it with the website.
I have noticed that a mere look at the webjob from Azure Management Portal starts it. Following this line of thinking, I have found that fetching webjob properties is enough to switch it to the running state. The only trick is how to fetch the properties programmatically, so that restarting the website will also restart the webjob.
Because the call to fetch webjob properties must be authenticated, the first step is to go to Azure Management Portal and download the website publishing profile. In the publishing profile you can find the authentication credentials: username (usually $<website_name>) and userPWD (hash of the password). Copy them down.
Here is a function that will get webjob properties and wake it up (if not yet running):
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using Newtonsoft.Json;
using Newtonsoft.Json.Linq;

class Program
{
    static void Main(string[] args)
    {
        string websiteName = "<website_name>";
        string webjobName = "<webjob_name>";
        string userName = "<from_publishing_profile>";
        string userPWD = "<from_publishing_profile>";

        string webjobUrl = string.Format("https://{0}.scm.azurewebsites.net/api/continuouswebjobs/{1}", websiteName, webjobName);
        var result = GetWebjobState(webjobUrl, userName, userPWD);
        Console.WriteLine(result);
        Console.ReadKey(true);
    }

    private static JObject GetWebjobState(string webjobUrl, string userName, string userPWD)
    {
        HttpClient client = new HttpClient();

        // Basic authentication with the deployment credentials from the publishing profile
        string auth = "Basic " + Convert.ToBase64String(Encoding.UTF8.GetBytes(userName + ':' + userPWD));
        client.DefaultRequestHeaders.Add("authorization", auth);
        client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));

        // Fetching the webjob properties is enough to wake the job up
        var data = client.GetStringAsync(webjobUrl).Result;
        var result = JsonConvert.DeserializeObject(data) as JObject;
        return result;
    }
}
You can use a similar function to get all webjobs in your website (use endpoint https://<website_name>.scm.azurewebsites.net/api/webjobs). You may also look at the returned JObject to verify the actual state of the webjob and other properties.
If you don't want the WebJob to stop, you need to make sure your scm site is alive.
So the keep-alive requests should go to https://sitename.scm.azurewebsites.net and these requests need to be authenticated (basic auth using your deployment credentials).
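A minimal sketch of such an authenticated keep-alive call (the site name and credentials are placeholders, as in the answer above):

using System;
using System.Net.Http;
using System.Text;

class ScmKeepAlive
{
    // Calling this periodically keeps the scm site (and therefore the WebJob host) alive
    static void Ping()
    {
        string userName = "<from_publishing_profile>";
        string userPWD = "<from_publishing_profile>";

        using (var client = new HttpClient())
        {
            string auth = "Basic " + Convert.ToBase64String(Encoding.UTF8.GetBytes(userName + ":" + userPWD));
            client.DefaultRequestHeaders.Add("authorization", auth);

            // Any authenticated request to the scm site counts as activity
            var response = client.GetAsync("https://sitename.scm.azurewebsites.net").Result;
            Console.WriteLine("scm ping returned {0}", (int)response.StatusCode);
        }
    }
}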
