Azure WebJob Publishing: Last in Wins

I recently set up automated deployment for my Web API, which also has FIVE WebJobs.
The problem is that I deploy the WebJobs as FIVE distinct build steps within the VSO build definition, and after the deployment succeeds I don't have FIVE WebJobs deployed. I only have ONE WebJob deployed, the last one.
This makes me think that when the Azure Web App Deployment build step runs, everything is erased before the zip package is deployed.
Question
Does anybody know how I can make it so that deploying one WebJob does NOT erase the previously deployed WebJobs (which are in different folders anyway)?

I figured it out. Apparently there is a "Do not delete" flag on the Azure Deployment step. Checking this and making sure you SAVE (this is still a little awkward in VSO) solves this problem.
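If it helps anyone verify the fix, here is a rough sketch of how to check which WebJobs actually survived a deployment using the Kudu VFS API (the site name and credentials are placeholders, not values from this question):

```powershell
# A verification sketch: list what actually landed under App_Data\jobs after a
# deployment via the Kudu VFS API, to confirm all five WebJobs are still there.
$site  = "yoursite"                      # assumption: your web app's name
$creds = Get-Credential                  # Kudu deployment credentials
$base  = "https://$site.scm.azurewebsites.net/api/vfs"

# Deployed WebJobs live under App_Data\jobs\continuous and App_Data\jobs\triggered.
foreach ($kind in "continuous", "triggered") {
    $uri = "$base/site/wwwroot/App_Data/jobs/$kind/"
    try {
        $items = Invoke-RestMethod -Uri $uri -Credential $creds
        Write-Host "$kind jobs:" ($items.name -join ", ")
    } catch {
        Write-Host "$kind jobs: none found"
    }
}
```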

Related

Error "Web Deploy cannot modify the file on the destination because it is locked by an external process" in Octopus deploy

We are deploying into Azure using Octopus Deploy. We have been using it for more than a year, and about 3 weeks ago we suddenly started getting errors on a few deployments.
Microsoft.Web.Deployment.DeploymentDetailedClientServerException: Web Deploy cannot modify the file 'msvcr120.dll' on the destination because it is locked by an external process. In order to allow the publish operation to succeed, you may need to either restart your application to release the lock, or use the AppOffline rule handler for .Net applications on your next publish attempt. Learn more at: http://go.microsoft.com/fwlink/?LinkId=221672#ERROR_FILE_IN_USE
We have the web app running with Always On enabled, and we have the app setting 'MSDEPLOY_RENAME_LOCKED_FILES' set to 1, which in theory prevents this.
Does anyone know if something was changed in Azure or Octopus?
There are a number of reasons files may be locked during deployment. You should be able to get an idea of what may be locking files by using the Kudu process explorer, which you can access at the URL {yoursite}.scm.azurewebsites.net.
To avoid the locking issue altogether, you could make use of deployment slots to achieve a zero-downtime deployment, if that's an option for you. In this case you could stop the site or enable App Offline, which should unlock any files and allow the deployment to succeed, after which a slot swap makes the deployment live. App Offline is preferred over using MSDEPLOY_RENAME_LOCKED_FILES, but it takes the application offline during the deployment. Octopus also supports this as an option on the Deploy an Azure Web App step itself, so it may be worth a try even without slots.
You can use custom pre/post-deployment scripts as part of your Deploy an Azure Web App step to call the Stop-AzureRmWebAppSlot, Start-AzureRmWebAppSlot and Switch-AzureRmWebAppSlot PowerShell cmdlets and achieve the above.
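A minimal sketch of that pre/post-deployment pattern, using the cmdlets named above (the resource group, app, and slot names are placeholders):

```powershell
$rg   = "my-resource-group"
$app  = "my-web-app"
$slot = "staging"

# Pre-deployment: stop the staging slot so Web Deploy can replace any locked files.
Stop-AzureRmWebAppSlot -ResourceGroupName $rg -Name $app -Slot $slot

# ...Octopus deploys the package to the staging slot here...

# Post-deployment: start the slot again and swap it into production.
Start-AzureRmWebAppSlot -ResourceGroupName $rg -Name $app -Slot $slot
Switch-AzureRmWebAppSlot -ResourceGroupName $rg -Name $app -SourceSlotName $slot -DestinationSlotName "production"
```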
An alternative may be to use zip deployments; however, the Deploy an Azure Web App Octopus step doesn't have first-class support for this quite yet. It can still be achieved using a Run an Azure PowerShell Script step along with a package reference, if this is what you want to do.
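As a sketch of what that could look like, a Run an Azure PowerShell Script step can push a zipped package to Kudu's zipdeploy endpoint (the site name, package path, and credential handling here are illustrative assumptions):

```powershell
$site    = "yoursite"                    # assumption: target web app name
$zipPath = "$PSScriptRoot\package.zip"   # assumption: path to the packaged site
$creds   = Get-Credential                # Kudu deployment credentials

# POST the zip to Kudu's zipdeploy endpoint; Kudu extracts it over wwwroot.
Invoke-RestMethod -Uri "https://$site.scm.azurewebsites.net/api/zipdeploy" `
                  -Method POST `
                  -InFile $zipPath `
                  -ContentType "application/zip" `
                  -Credential $creds
```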

Azure DevOps Build Pipeline - A failed build still gets deployed to Azure

I'm trying to create a CI/CD pipeline for an example prototype, so I've started simple to test my infrastructure: I'm using an almost untouched boilerplate ASP.NET Framework web app (targeting 4.6.1). The steps I've completed are:
App is deployed to an Azure App Service.
Its version control is hosted with Azure DevOps.
A build pipeline with the following tasks has been created, set up and verified to execute (the tasks and their order come from a template):
Azure Deployment Options/Settings are bound to the DevOps repository, so builds are also displayed in Azure and should be deployed there if successful.
The Build Pipeline is bound to the correct repository inside DevOps
Builds get triggered by pushing to the master branch
The next step was to verify that a broken build (because of failed tests or any other reason) is not deployed to production in Azure. I've created a failing test for this purpose.
And this is where I'm left stumped. Builds do fail as expected and the "App Service Deploy" task is skipped, because the build tasks before it have a failure:
And yet those broken builds still get deployed to Azure and to production, without even waiting for the pipeline to finish. I'm verifying that a change has actually been deployed by making small visual updates.
The build starts and finishes in Azure as soon as a push occurs, before the pipeline in DevOps has fully run (or even started, if finding an agent takes longer):
(DevOps still not finished):
What am I doing wrong here? Am I understanding the pipeline wrong? Have I missed a set-up step somewhere? I'm lost.
Edit: As asked by Josh, here's my trigger as well:
Edit 2.2 A bit more clarification of my deployment options in my App Service in Azure, related to Daniel's comments:
This turned out to be the issue.
This is the only option I'm allowed to choose when tying my deployment to DevOps. I'm not allowed to choose a pipeline, just a project and a branch. In the tutorial I compared with, the settings are the same (at least in this menu), yet the build does not get triggered from the repository but waits for the pipeline to reach the appropriate step, which is why I hadn't considered it to be the culprit. Is there some additional setup I've missed that tells it to look for a pipeline rather than fire straight away on branch changes?
The deployment you have set up in the Azure portal is tied to source control only, not your build definition. So every time you commit to source control, two things happen that are totally disconnected from each other and start in parallel since they listen to the same repository for changes:
A build fires off in the pipeline.
The Azure website is updated with the version you just pushed to source control, since its deployment options are bound to it.
Remove #2 and your problem will go away. You already set the App Service you want updated in the pipeline; you don't need an additional deployment hook in the App Service itself.
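If you prefer to script that disconnect instead of doing it in the portal, removing the deployment source amounts to deleting the web app's source control sub-resource. A hedged sketch; the resource names and API version are assumptions:

```powershell
$rg  = "my-resource-group"   # assumption: your resource group
$app = "my-web-app"          # assumption: your App Service name

# Delete the Microsoft.Web/sites/sourcecontrols binding that triggers deployments
# directly from the repository (the pipeline's App Service Deploy task is unaffected).
Remove-AzureRmResource -ResourceGroupName $rg `
    -ResourceType "Microsoft.Web/sites/sourcecontrols" `
    -ResourceName "$app/web" `
    -ApiVersion "2018-02-01" `
    -Force
```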

How can I configure an Azure WebJob to run from a git deployed exe, rather than from an upload?

I have a deployment process that places everything needed into a repository that my Azure App Service is already configured to pull from.
This deployment process is fully automated and works well.
I would like to amend this deployment process to include one or more console applications which would then be configured to be run as WebJobs, either when triggered, or on a continuous basis.
However, the configuration for WebJobs appears to want me to upload the .exe during configuration, rather than point at a pre-existing .exe.
This seems less than optimal, because it suggests that I'll have to re-upload it each time the console app changes.
It would be far more convenient to be able to point to a known location within the AppService which contained the full deployment of the WebJob console App.
Is there a way to achieve this?
As far as I know, the deployment process you want can't be done. No matter which way a WebJob is deployed, the job is essentially copied onto the file system through Kudu. A WebJob is a feature that depends on the Web App service, so the deployment can't be processed as a whole. You could read the wiki for more detail.
From your description, I suggest using Azure Functions instead. You could use a TimerTrigger, BlobTrigger, HttpTrigger, etc. You write just the code you need for the problem at hand, without worrying about a whole application or the infrastructure to run it.
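For illustration only, here is a minimal sketch of what a timer-triggered function could look like if written in PowerShell; the schedule and the function.json binding that supplies the $Timer parameter are assumptions, not part of this answer:

```powershell
# run.ps1 -- a timer-triggered Azure Function (PowerShell).
# Assumes a function.json next to it defining a timerTrigger binding named "Timer"
# with a CRON schedule such as "0 */5 * * * *" (every five minutes).
param($Timer)

if ($Timer.IsPastDue) {
    Write-Host "Timer is running late"
}

# The work the console-app WebJob used to do would go here.
Write-Host "Function executed at $(Get-Date -Format o)"
```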
If you still have questions, please let me know.

Octopus deployed continuous WebJob not running simultaneously in two locations

I have a continuous WebJob that needs to run in two web apps in two locations on a TimerTrigger. When I deploy the WebJob from Visual Studio to both locations, everything works well and both WebJobs run at the same time.
Now I'm ready to start deploying this with Octopus Deploy. I have successfully created a plan with two steps that does this and puts the assemblies in the correct location under the web apps (app_data\jobs\continuous\{jobname}) in Azure. The problem is that only one WebJob executes its job at a time, even though both WebJobs have a status of Running. If I stop and start the one that's executing, the other WebJob starts executing its job, while the one I turned off/on has a status of Running but never executes its job. Also, if I redeploy just one of them from Visual Studio, they both execute their jobs in both locations again.
I'm not doing anything with Singletons and have actually tried turning it off by using a 'settings.job' file with {is_singleton: false}. Is there something Octopus is doing with the package that makes Azure think the webjob is a singleton?
My guess is that the issue is caused by using the same storage account and host id for both of the apps that you deployed. When you do that, the WebJobs SDK views it as a single Web App that has been scaled out to two instances, and makes sure the timer is only run on one of them.
The simplest solution is to use a different storage account for each app.
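A sketch of that fix in PowerShell, pointing each app's WebJobs SDK settings at its own storage account (names and connection strings are placeholders; note that Set-AzureRmWebApp replaces the whole app settings collection, so merge in your existing settings first):

```powershell
$rg = "my-resource-group"    # assumption: both apps live in this resource group

# Give the first app its own storage account for the WebJobs SDK.
Set-AzureRmWebApp -ResourceGroupName $rg -Name "webapp-location1" -AppSettings @{
    "AzureWebJobsStorage"   = "<connection string for storage account 1>"
    "AzureWebJobsDashboard" = "<connection string for storage account 1>"
}

# And give the second app a different one, so the SDK no longer treats the two apps
# as scaled-out instances of the same host.
Set-AzureRmWebApp -ResourceGroupName $rg -Name "webapp-location2" -AppSettings @{
    "AzureWebJobsStorage"   = "<connection string for storage account 2>"
    "AzureWebJobsDashboard" = "<connection string for storage account 2>"
}
```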

Azure WebJob running old code

We have a bug in our WebJob running in our live environment. I have identified the bug and fixed it, which I can verify in our dev environment. I published my WebJob as an "Azure WebJob" to our live environment, but the bug is still present. To add to the confusion, the bug now only occurs sometimes. So for some reason the old code is still running somewhere, sometimes.
Can someone please help me understand this?
I had a similar problem. We deploy using a stage environment in Azure, and it turned out that the "old" WebJobs (running code with an old version of the Entity Framework model) were still running against the queue. These jobs were fetching messages and consuming them. To make matters worse, the exception was swallowed in a try/catch, so the status of the WebJob was success.
Check if you have a stage environment (the web app name with -stage appended) and, if so, go into the Azure management portal and stop its WebJobs.
Note: it is not enough to stop the web app; you must stop the WebJobs directly. This is done (in the new portal) under Settings -> WebJobs, then right-clicking the WebJob's name and selecting Stop.
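If you want to script this rather than click through the portal, Kudu's WebJobs API on the staging site can stop a continuous WebJob directly; a sketch with placeholder names:

```powershell
$stageSite = "yourapp-stage"    # assumption: the staging web app's SCM site name
$jobName   = "yourwebjob"       # assumption: the continuous WebJob's name
$creds     = Get-Credential     # Kudu deployment credentials for the staging app

# Ask Kudu to stop the continuous WebJob on the staging site.
Invoke-RestMethod -Method POST -Credential $creds `
    -Uri "https://$stageSite.scm.azurewebsites.net/api/continuouswebjobs/$jobName/stop"
```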
I spent ages looking into this problem. It turned out I had the WebJob project running in a console on my PC at work! No matter what I did in Azure, the presence of this exe running and using the same storage as the WebJobs meant that the old code on my work PC picked up the jobs before Azure did. Easy fix: just make sure no exes are running outside of Azure!
In our case the web app was published to the physical path /site/www instead of the default /site/wwwroot. Because of this, the Azure portal adds the WebJobs to the folder /site/jobs, but Web Deploy via VS or Azure still tries to publish the WebJobs inside /site/www.
More details at Publishing WebJobs with Azure Pipelines
