I have an Azure web app with a couple of WebJobs running inside it. I noticed that one of these WebJobs was running old code. It is also impossible to remove: it keeps running no matter what I do.
I have tried:
* Redeploying the code.
* Stopping the WebJob.
* Stopping the web app.
* Deleting the WebJob.
Even after deletion, the webjob is still running, and consuming from its queue.
There is no web app anywhere that is running a webjob with that old code anymore.
I have a suspicion that the webjob is running inside a deployment slot, but all deployment slots were deleted, so I do not know how to confirm this.
On the Kudu dashboard, the WebJob is not present in the Process Explorer.
How do I get rid of this rogue webjob?
You could try to get the specific job by name and check its run history via the WebJobs API, which would help you confirm whether the job is indeed deleted/stopped. If the WebJob is deleted from the current website, please check whether a WebJob running on one of your other websites is consuming the queue. In addition, please check whether any Azure Functions are in use. Of course, it is also possible that a task outside of Azure is consuming the queue. If you are worried that someone is accessing your storage maliciously, you could, as you said, regenerate your storage account keys, but you may then need to re-sync the access keys with the applications/services that depend on the storage account.
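If it helps, here is a rough sketch (untested) of querying the Kudu WebJobs API with HttpClient. The site name mysite, the job name myjob, and the deployment credentials are placeholders for your own values:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class ListWebJobs
{
    static async Task Main()
    {
        // Placeholder site name and Kudu deployment credentials; replace with your own.
        var site = "mysite";
        var token = Convert.ToBase64String(Encoding.ASCII.GetBytes("$mysite:<deployment-password>"));

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", token);

        // List every WebJob Kudu knows about for this site.
        Console.WriteLine(await client.GetStringAsync(
            $"https://{site}.scm.azurewebsites.net/api/webjobs"));

        // Run history of a (hypothetical) triggered job named "myjob".
        Console.WriteLine(await client.GetStringAsync(
            $"https://{site}.scm.azurewebsites.net/api/triggeredwebjobs/myjob/history"));
    }
}
```

If the job no longer shows up here but the queue is still being drained, the consumer is almost certainly running somewhere else: another site, a slot, or something outside Azure entirely.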
I have a deployment process that places everything needed within a repository which my Azure App Service is already configured to pull from.
This deployment process is fully automated and works well.
I would like to amend this deployment process to include one or more console applications which would then be configured to be run as WebJobs, either when triggered, or on a continuous basis.
However the configuration for webjobs appears to want me to upload the .exe during configuration, rather than point at a pre-existing .exe.
This seems less than optimal, because it suggests that I'll have to reupload each time said console app changes.
It would be far more convenient to be able to point to a known location within the AppService which contained the full deployment of the WebJob console App.
Is there a way to achieve this?
As far as I know, the deployment process you want can't be done. No matter which way a WebJob is deployed, the job is, in essence, copied to the file system by Kudu. And a WebJob is a feature that depends on the Web App service, so the deployment can't be processed as a whole. You could read the Wiki.
From your description, I suggest using Azure Functions. You could use a TimerTrigger, BlobTrigger, HTTPTrigger, etc. You could write just the code you need for the problem at hand, without worrying about a whole application or the infrastructure to run it.
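For illustration, a minimal timer-triggered function might look like the sketch below (precompiled, v1-era style; CleanupFunction and the five-minute schedule are made-up examples):

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;

public static class CleanupFunction
{
    // The schedule is a six-field CRON expression; this one fires every five minutes.
    [FunctionName("CleanupFunction")]
    public static void Run([TimerTrigger("0 */5 * * * *")] TimerInfo timer, TraceWriter log)
    {
        log.Info("Timer fired; do the work that would otherwise live in a console app.");
    }
}
```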
If you still have questions, please let me know.
Right. So this article: https://azure.microsoft.com/en-us/documentation/articles/web-sites-create-web-jobs/ mentions: "You can run programs or scripts in WebJobs in your App Service web app in three ways: on demand, continuously, or on a schedule. There is no additional cost to use WebJobs."
Which is great; the free alternative is a Scheduler Job Collection with a job, but there you're limited to running it once an hour. So being able to run the WebJob as part of the web app, and on a higher frequency, is what we need.
However, I'm really struggling to find any way of automating this process. Using the Azure portal to add a web job works fine - but the "automation script" generation tool doesn't generate a json file which includes anything about the webjob - so we'd always have to manually create it.
There are examples of custom templates for automating the creation of webjobs - but they all create said webjob as part of a Scheduler Job Collection, where we are limited to the hourly execution.
To summarise: I'm looking for a way of automating the creation of a webjob, linked to a web-app (such that it doesn't incur extra costs).
Any help would be much appreciated.
WebJobs are deployed by folder convention (as described here), so deploying a WebJob is no different from deploying a Web App. It's simply a matter of getting the files in the right place.
Specifically, triggered WebJobs (manual or scheduled) go under site\wwwroot\app_data\jobs\triggered\{job name} and continuous WebJobs go under site\wwwroot\app_data\jobs\continuous\{job name}.
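For example, one way to wire this into an automated deployment is a plain copy step; a rough sketch, where MyJob, MyWebApp, and the paths are hypothetical stand-ins for your own layout:

```csharp
using System.IO;

class StageWebJob
{
    // Copies a console app's build output into the folder Kudu scans for
    // triggered WebJobs, so a normal web app deployment carries the job along.
    static void Main()
    {
        var source = @"MyJob\bin\Release";                      // hypothetical build output
        var target = @"MyWebApp\App_Data\jobs\triggered\MyJob"; // ends up under site\wwwroot after deploy

        Directory.CreateDirectory(target);
        foreach (var file in Directory.GetFiles(source))
        {
            File.Copy(file, Path.Combine(target, Path.GetFileName(file)), overwrite: true);
        }
    }
}
```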
I have a continuous WebJob that needs to run in two web apps in two locations on a TimerTrigger. When I deploy the WebJob from Visual Studio to both locations, everything works well and both WebJobs run at the same time.
Now I'm ready to start deploying this with Octopus-Deploy. I have successfully created a plan with two steps that does that and puts the assemblies in the correct location under the web apps (app_data\jobs\continuous\{jobname}) in Azure. The problem is that only one webjob executes its job at a time even though both webjobs have a status of Running. If I stop and start the one that's executing, the other webjob starts executing its job while the one I turned off/on has a status of running, but doesn't ever execute its job. Also, if I redeploy just one of them from visual studio, they both execute their jobs at both locations again.
I'm not doing anything with Singletons and have actually tried turning it off by using a 'settings.job' file with {is_singleton: false}. Is there something Octopus is doing with the package that makes Azure think the webjob is a singleton?
My guess is that the issue is caused by using the same storage account and host id for both of the apps that you deployed. When you do that, the WebJobs SDK views it as a single Web App that has been scaled out to two instances, and makes sure the timer is only run on one of them.
The simplest solution is to use a different storage account for each app.
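If you do need to share one storage account, a sketch along these lines (WebJobs SDK 2.x; the host id shown is a placeholder) gives each app its own host id so the SDK stops treating the two locations as one scaled-out app:

```csharp
using Microsoft.Azure.WebJobs;

class Program
{
    static void Main()
    {
        var config = new JobHostConfiguration();

        // Placeholder value: give each deployed app a distinct, lowercase host id.
        config.HostId = "myjob-eastus";

        // TimerTrigger lives in the WebJobs Extensions package and must be enabled.
        config.UseTimers();

        new JobHost(config).RunAndBlock();
    }
}
```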
We have a bug in a WebJob running in our live environment. I have identified the bug and fixed it, which I can verify in our dev environment. I published my WebJob as an "Azure WebJob" to our live environment, but the bug is still present. To add to the confusion, the bug now occurs only sometimes. So for some reason the old code is still running somewhere, sometimes.
Can someone please help me understand this?
I had a similar problem. We deploy using a stage environment in Azure, and it turned out that the "old" WebJobs (running code with an old version of the Entity Framework model) were still running on the queue. These jobs were then fetching messages and consuming them. To add to the problem, the exception was swallowed in a try/catch, so the status of the WebJob was success.
Check whether you have a stage environment (add -stage to the web app name) and, if so, go into the Azure management portal and stop the WebJobs there.
Note that it is not enough to stop the web app; you must stop the WebJobs directly. This is done (in the new portal) under Settings -> WebJobs, then right-clicking on the WebJob's name and selecting Stop.
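If the slot's blade is hard to reach, the same stop can be issued against the slot's Kudu endpoint; a rough sketch, assuming a continuous job named myjob on a mysite-stage slot and placeholder deployment credentials:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class StopStageJob
{
    static async Task Main()
    {
        // Placeholder slot, job name, and credentials; replace with your own.
        var creds = Convert.ToBase64String(Encoding.ASCII.GetBytes("$mysite__stage:<password>"));

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", creds);

        // Kudu's WebJobs API: POST /api/continuouswebjobs/{name}/stop halts a continuous job.
        var response = await client.PostAsync(
            "https://mysite-stage.scm.azurewebsites.net/api/continuouswebjobs/myjob/stop", null);
        Console.WriteLine(response.StatusCode);
    }
}
```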
I spent ages looking into this problem. It turns out I had the WebJob project running in a console on my PC at work! No matter what I did on Azure, the presence of this exe, running and using the same storage as the WebJobs, meant that the old code on my work PC picked up the jobs before Azure did. Easy fix: just make sure no exes are running outside of Azure!
In our case the web app was published to the physical path /site/www instead of the default /site/wwwroot. Because of this, the Azure portal interface adds the WebJobs to the folder /site/jobs, but Web Deploy via VS or Azure still tries to publish the WebJobs inside /site/www.
More details at Publishing WebJobs with Azure Pipelines.
We have an app we deploy to Azure. It involves deploying several cloud services with a mix of web roles and worker roles. Some of the worker roles pick items up off a queue and process them. We also have some scheduled jobs that run periodically (backing up Azure table storage, etc).
Every time we deploy, we have to watch the Staging environment in the portal and manually stop the roles from starting. We have to do this because we don't want the Staging and Production slots both processing information at the same time (e.g. pulling from the same queue but processing it differently, or both running the same scheduled job simultaneously, etc).
The only way I've found to have a deployment go into Staging in a stopped state is to leave the last deployment there also stopped. Downside is you're charged for those instances, even when they're not running.
So, how do you deploy to an empty staging slot in Azure without the deployment starting up?
EDIT: We kick off the builds through Visual Studio or Visual Studio Online (i.e. TFS). Don't usually use powershell.
There is no way to create a deployment but not have it start. What you can do instead is have a setting in your csdef/cscfg that your code would read during OnStart.
For example, you would have a setting called "ShouldRun" set to False. In OnStart you would have a loop that checks that setting and exits once ShouldRun == True. After you deploy, you would then go to the portal and change the setting to True whenever you are ready for it to start processing. Once the loop exits, the OnStart method will finish, which will cause Azure to call your Run method and bring your instances to the Ready state.
In addition you could add a Changed event handler to stop processing messages when the setting was changed to False. This would let you first stop your production deployment and then start your staging deployment.
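A minimal sketch of that pattern (the ShouldRun setting name comes from the answer above; everything else is illustrative):

```csharp
using System;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    private volatile bool _shouldRun;

    public override bool OnStart()
    {
        // Hold the instance out of the Ready state until an operator flips
        // ShouldRun to True in the portal.
        while (!bool.Parse(RoleEnvironment.GetConfigurationSettingValue("ShouldRun")))
        {
            Thread.Sleep(TimeSpan.FromSeconds(30));
        }
        _shouldRun = true;

        // Later changes (e.g. flipping ShouldRun back to False before a swap)
        // are picked up without a restart via the Changed event.
        RoleEnvironment.Changed += (s, e) =>
            _shouldRun = bool.Parse(RoleEnvironment.GetConfigurationSettingValue("ShouldRun"));

        return base.OnStart();
    }

    public override void Run()
    {
        while (true)
        {
            if (_shouldRun)
            {
                // Pull from the queue / run the scheduled work here.
            }
            Thread.Sleep(TimeSpan.FromSeconds(10));
        }
    }
}
```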
In my opinion, you need to separate even your queues and configs. As another option, you can create a PowerShell script to stop your cloud service after publishing to it.
http://msdn.microsoft.com/en-us/library/dn495211.aspx