I've created a simple Azure WebJob that uses a QueueInput trigger. It deployed without any problems and I've scheduled it via the management portal so that it 'Runs continuously'.
Initial testing seemed fine, with the job triggering shortly after placing anything in the queue.
By chance I then left it about a day before placing anything else in the queue. This time the job hadn't triggered within a few minutes so I logged in to the portal to view the invocation logs - which showed that the job had just that moment been triggered.
That seemed too much of a coincidence so I left it another day before placing something in the queue. Again, the job didn't trigger. I left it overnight and by morning it still hadn't triggered.
When I logged in to the management portal this time I noticed that the job was marked as 'Aborted' on the WebJobs page. It stayed like that for only about 10 seconds before the status changed to 'Running', and then the job immediately triggered on the message placed in the queue the night before, as expected.
As it's an alpha release I'm expecting glitches. Just wondering whether anyone else has had a similar experience.
For the WebJobs SDK, your job must be running in order to listen for triggers (new queue messages, new blobs, etc.). The Azure Websites free tier has quotas and will put your job to sleep, which means it's no longer listening for triggers. Using the site may cause it to come back to life and start listening for triggers again.
The SDK dashboard will show a warning icon next to functions if the hosting job is not running (it detects this via heartbeats).
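For reference, the "listening" piece is just the SDK host staying alive inside your process. Here's a minimal sketch of a continuous, queue-triggered job (using the current QueueTrigger attribute, which replaced the alpha-era QueueInput; the queue name "incoming" is only an example):

    using System;
    using Microsoft.Azure.WebJobs;

    class Program
    {
        // RunAndBlock keeps the host process alive. The SDK only listens for
        // new queue messages while this process is running, which is why an
        // unloaded site stops triggering.
        static void Main()
        {
            var host = new JobHost();
            host.RunAndBlock();
        }
    }

    public class Functions
    {
        // Invoked whenever a message appears on the "incoming" queue.
        public static void ProcessQueueMessage([QueueTrigger("incoming")] string message)
        {
            Console.WriteLine("Processed: " + message);
        }
    }

If that process isn't running, nothing is polling the queue, so messages simply sit there until the job wakes up again.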
Make sure that your website is configured with the "Always On" setting Enabled.
If your site contains continuously running jobs they may not perform reliably if this setting is disabled.
http://azure.microsoft.com/en-us/documentation/articles/web-sites-configure/
By default, web sites are unloaded if they have been idle for some period of time. This lets the system conserve resources. You can enable the Always On setting for a site in Standard mode if the site needs to be loaded all the time. Because continuous web jobs may not run reliably if Always On is disabled, you should enable Always On when you have continuous web jobs running on the site.
Related
I have an Azure WebJob created using the SDK that runs hourly. The job runs and works fine, but when I look at it in the portal it always shows Failed. I can run the job from the Debug Console and everything appears fine. When run from the console the job typically takes seconds, but when run on the schedule it usually shows 12-20 minutes before it fails.
How can I get more details on why this is failing? Do I need to somehow tell WebJobs that the task is finished, and is it waiting on me?
Thanks,
Russ
Webjob Failure
This error happens if the job uses TimerTrigger.
If the job is long-running, set WEBJOBS_IDLE_TIMEOUT and SCM_COMMAND_IDLE_TIMEOUT in the Azure app settings rather than in web.config.
If the job is not long-running, its scheduled timer interval should be less than 2 minutes, which will probably only work well for testing.
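For illustration, such a timer function might look like this (a minimal sketch assuming the Microsoft.Azure.WebJobs.Extensions package; the every-minute CRON expression is just an example):

    using System;
    using Microsoft.Azure.WebJobs;
    using Microsoft.Azure.WebJobs.Extensions.Timers;

    class Program
    {
        static void Main()
        {
            var config = new JobHostConfiguration();
            config.UseTimers();                     // enables TimerTrigger
            new JobHost(config).RunAndBlock();
        }
    }

    public class Functions
    {
        // Six-field CRON: fires at second 0 of every minute. Without Always On
        // this only runs reliably while the site is loaded, hence "testing only".
        public static void EveryMinute([TimerTrigger("0 * * * * *")] TimerInfo timer)
        {
            Console.WriteLine("Timer fired at " + DateTime.UtcNow);
        }
    }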
Finally, the ultimate solution is to use the Basic or Standard tier of the App Service plan.
In that case you can ENABLE Always On to keep the container loaded all the time.
However, WEBJOBS_IDLE_TIMEOUT and SCM_COMMAND_IDLE_TIMEOUT must also be set as described above. Without Always On, continuous WebJobs and WebJobs triggered by a CRON (TimerTrigger) expression will not run reliably.
For more details, you could refer to this article.
I'm looking to use the API to change the number of WebJob instances I have running based on the size of a processing queue. I know I can set up rules in the portal, but the minimum aggregation time is 60 minutes, and I don't want the system waiting 60 minutes before scaling up if we suddenly get a burst of work.
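To give an idea of what I mean, I'm checking the queue depth with something like this (rough sketch; the queue name and threshold are placeholders, and the scale-out call is the part I'm asking about):

    using System;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Queue;

    class QueueMonitor
    {
        static void Main()
        {
            var account = CloudStorageAccount.Parse(
                Environment.GetEnvironmentVariable("AzureWebJobsStorage"));
            var queue = account.CreateCloudQueueClient().GetQueueReference("processing");

            queue.FetchAttributes();                   // refreshes ApproximateMessageCount
            int depth = queue.ApproximateMessageCount ?? 0;

            if (depth > 100)                           // arbitrary threshold
            {
                Console.WriteLine("Queue depth {0} - want to scale out here", depth);
                // This is where I'd call the management API to increase the
                // instance count - the question is how to do that without
                // killing the instance that's already processing.
            }
        }
    }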
The issue I have is that currently, if I scale out in the portal manually from, say, 1 to 5 instances, it kills the single running instance and then starts 5 new ones.
I assume if I did this through the API the same thing would happen, do you know if there is any way to avoid this?
Thanks
Si
UPDATE:
See below: I submitted 4 jobs, and then as the first was processing I scaled out from 1 to 3 instances. This is what happened: the job that never finished then reran after the next 3 had finished, as its message would have popped back onto the queue because its processing failed initially.
if I scale out in the portal manually from, say, 1 to 5 instances, it kills the single running instance and then starts 5 new ones.
In my test, scaling the web app did not kill the single running instance. I created a WebJob from the template and wrote a timer trigger in it.
Here is the time I scaled my web app:
Here is the trigger log in Azure storage ('azure-jobs-host-output'):
If you find your WebJob showing as an 'inactive instance' in the Azure WebJobs dashboard, please don't worry about it. Your WebJob is still running. Please have a look at David's reply in this thread. Here is a snippet:
This is actually a bug in what the Portal displays. The Portal ends up asking an arbitrary instance about the WebJob status, and if it happens to hit any instance other than the one that's actually running it, it will be reported as inactive.
I have used Azure Scheduler for quick jobs before. It targets a URL, which is an ASPX page or a Web API, and it did the job.
Now I have a job that takes up to 15-20 minutes. Of course, I am getting a timeout error after 30 seconds.
I'm trying to avoid creating a Windows Service or some console application that would run on an Azure VM; I'd rather have a non-UI application that runs in the background.
Do you have any suggestions on what I should do?
You should use an Azure WebJob for this. WebJobs support simple scheduling via a cron expression (details here). Basically, you write a simple script file or exe that performs the work you want done, upload it to your Web App along with a cron schedule expression, and Azure WebJobs will make sure it runs on schedule.
For your scenario, you'll want to create a "Continuous" WebJob and ensure you've enabled "Always On", which ensures the background job keeps running (i.e. it isn't request triggered).
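The job itself can be nothing more than a console exe that writes progress as it goes; here's a rough sketch (the 20-step loop is only a stand-in for your real 15-20 minute workload):

    using System;
    using System.Threading;

    class LongRunningJob
    {
        static void Main()
        {
            Console.WriteLine("Job started at " + DateTime.UtcNow);

            // Stand-in for the real 15-20 minute workload.
            for (int step = 1; step <= 20; step++)
            {
                DoOneMinuteOfWork();
                // Writing progress keeps the job from looking idle to the host.
                Console.WriteLine("Completed step {0} of 20", step);
            }

            Console.WriteLine("Job finished at " + DateTime.UtcNow);
        }

        static void DoOneMinuteOfWork()
        {
            Thread.Sleep(TimeSpan.FromMinutes(1));     // placeholder for real work
        }
    }

If you end up with a scheduled (triggered) WebJob rather than a continuous one, the cron expression typically lives in a settings.job file deployed next to the exe.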
WebJobs are certainly a good solution, but they share resources with their attached Web App.
You could consider using an Azure Cloud Service. I do that myself for longer running tasks, that are more CPU intensive.
Read more
For long-running WebJobs, you have to tinker with the timeout value (2 minutes by default) or make sure your WebJob writes some output via Console.Write.
To achieve that, go to the Web App Settings > Application Settings and add the following configurations:
WEBJOBS_IDLE_TIMEOUT - Time in seconds after which a running triggered job's process will be aborted if it is idle, i.e. it has no CPU time or output.
SCM_COMMAND_IDLE_TIMEOUT - Time in seconds. By default, when your build process launches some command, it's allowed to run for up to 60 seconds without producing any output. If that is not long enough, you can make it longer, e.g. set it to 600 to allow 10 minutes.
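If the long-running step itself produces no output (say, a single long database call), one option is a heartbeat that writes a line periodically so the process never looks idle; a minimal sketch (the 30-second interval is arbitrary):

    using System;
    using System.Threading;

    class HeartbeatExample
    {
        static void Main()
        {
            // Emit a line every 30 seconds so the job never appears idle,
            // even while a single long-running call produces no output.
            using (new Timer(
                _ => Console.WriteLine("Still working... " + DateTime.UtcNow),
                null, TimeSpan.Zero, TimeSpan.FromSeconds(30)))
            {
                RunTheLongOperation();                 // placeholder for the real work
            }
        }

        static void RunTheLongOperation()
        {
            Thread.Sleep(TimeSpan.FromMinutes(15));    // stand-in for real work
        }
    }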
I have an Azure Website where I would like to be able to republish the website without stopping any webjobs that might be running in the background.
Ignoring the fact that it's bad practice to publish while the site is being used, this scenario means that a large queue might keep the webjobs firing 24/7 as load increases on the website.
I'm not sure whether publishing the website (and not the webjobs) causes the webjobs (scheduled and on-demand) to be cancelled. Does it?
I think it does, and in that case, is there anything you can do to prevent that? I risk jobs being stopped halfway through because of the need to publish, and I don't want to sit there waiting for the queue to be empty before publishing. A method of allowing currently running jobs to finish without starting new runs would be fine too.
If the webjob files are not updated (under wwwroot/app_data/jobs/...) they will not restart.
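For runs that do get stopped, it's also worth handling the WebJobs graceful-shutdown signal: when the platform is about to stop a job, it creates a file at the path held in the WEBJOBS_SHUTDOWN_FILE environment variable and gives the job a short grace period to wrap up. A rough sketch of checking for it (the work loop is a placeholder):

    using System;
    using System.IO;
    using System.Threading;

    class GracefulJob
    {
        static void Main()
        {
            // Kudu sets this to a path; a file is created there when the job
            // is being stopped (e.g. during a deployment).
            var shutdownFile = Environment.GetEnvironmentVariable("WEBJOBS_SHUTDOWN_FILE");

            while (shutdownFile == null || !File.Exists(shutdownFile))
            {
                ProcessNextQueueMessage();             // placeholder for the real unit of work
            }

            Console.WriteLine("Shutdown requested - exiting after the current message.");
        }

        static void ProcessNextQueueMessage()
        {
            Thread.Sleep(TimeSpan.FromSeconds(5));     // stand-in for real work
        }
    }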
I have read that WebJobs require "Always On" to be enabled.
http://blog.amitapple.com/post/73574681678/git-deploy-console-app/
I have a free web site and my job continues to run even after 20 minutes. As I'm writing, it has been running for more than two hours. Why? Perhaps because my site has traffic, so the app pool continues to be "up and running"?
While your job continued to run for more than 20 minutes, there is no promise from Windows Azure Websites that it will, so if you need to rely on that you'll need to enable "Always On". Also, free sites have a limit on the amount of CPU they can use, so even if your WebJob is always on, that limit will most likely be reached.
One reason your job kept going could be that you kept checking it in the dashboard; looking at the dashboard for continuous jobs will also start them if they went down (due to "Always On" not being set).