I have a web site running in Azure App Service, and it also has WebJobs deployed.
Now I am creating a new slot for my App Service and deploying the web site code to the new slot.
When I swap the slots, will the WebJobs still run in the production slot, or will they be swapped to the new slot?
Thanks!
Under the covers, WebJobs live in your App Service's site\wwwroot\App_Data\jobs\{continuous|triggered} folder. A slot swap is a set of actions that ultimately ends in a routing rule switch within Azure.
Your webjobs will participate in a slot swap for the App Service.
However, there are some interesting corner cases. For example, let's assume that your staging and production application settings are identical and you have triggers defined for something like a ServiceBusTrigger. Unless you stop your staging slot WebJobs, the staging WebJob instance's triggers will continue to poll your production resources, meaning both your production and staging WebJob code can be executing.
The WebJobs will be swapped too, for sure. There's an app setting to stop all WebJobs within a deployment slot (WEBJOBS_STOPPED = 1), but as far as I know there isn't a way to stop only specific ones within a slot.
I have an App Service Plan, and in this plan I have deployed 5 components of my solution as Web Apps. I use 'Release Management' in Azure DevOps to deploy code to these apps.
To minimise downtime during deployment, I deploy to staging slots first, and then swap the staging slots into the production slots to complete the deployment.
I have configured App Service Warmup (as detailed here) to call an endpoint that will 'warm up' the application during the slot swapping process.
This seems to work, but I have two issues:
Even though the warmup has run, the first request made to the app after slot swapping takes a long time. I suspect this is due to the production slot having sticky (slot) settings, which as I understand it necessitates an app restart. To test this, I removed the slot settings, but the delay is still there.
The applications are dependent on each other, and the slot swapping (even though it is kicked off in parallel in Azure DevOps) is not guaranteed to complete at the same time, which means it is possible for newer code to be interacting with old code. While we can engineer around this, it is not optimal.
From my investigations so far, the only way I can think of to work around these issues is to spin up a second App Service Plan and put Traffic Manager in front of the two plans. When deploying, I would prioritise one plan while I deploy to the other, divert traffic to the newly deployed plan while upgrading the first, and then balance traffic between the two again once both are on the same code level.
What is the current 'best practice' for zero downtime deployments when using WebApps in Azure?
Is the duplicated service plan with traffic manager a viable option, and if not, what would you suggest?
Follow these best practice recommendations.
SWAP BASED ON THE STATUS CODE
During the swap operation the site in the staging slot is warmed up by making an HTTP request to its root directory. More detailed explanation of that process is available at How to warm up Azure Web App during deployment slots swap.
By default the swap will proceed as long as the site responds with any status code. However, if you prefer that the swap not proceed when the application fails to warm up, you can configure that by using these app settings:
WEBSITE_SWAP_WARMUP_PING_PATH: The path to make the warm-up request to. Set this to a URL path that begins with a slash, for example "/warmup.php". The default value is /.
WEBSITE_SWAP_WARMUP_PING_STATUSES: Expected HTTP response codes for the warm-up operation. Set this to a comma-separated list of HTTP status codes, for example "200,202". If the returned status code is not in the list, the swap operation will not complete. By default, all response codes are valid.
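For illustration, here's a minimal sketch of setting these values with the same legacy Azure PowerShell cmdlets used later in this thread (the site name and warm-up path are assumptions):

# Configure swap warm-up validation (assumed site name and endpoint)
$site = Get-AzureWebsite -Name 'MySite'
$site.AppSettings['WEBSITE_SWAP_WARMUP_PING_PATH'] = '/warmup'
$site.AppSettings['WEBSITE_SWAP_WARMUP_PING_STATUSES'] = '200,202'
Set-AzureWebsite -Name 'MySite' -AppSettings $site.AppSettings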
MINIMIZE RANDOM COLD STARTS
WEBSITE_ADD_SITENAME_BINDINGS_IN_APPHOST_CONFIG: setting this to "1" will prevent the web app's worker process and app domain from recycling when the App Service's storage infrastructure gets reconfigured.
https://ruslany.net/2019/06/azure-app-service-deployment-slots-tips-and-tricks/#prevent-cold-start
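The same pattern applies here; a sketch with an assumed site name:

# Keep the worker process alive when the storage infrastructure is reconfigured
$site = Get-AzureWebsite -Name 'MySite'
$site.AppSettings['WEBSITE_ADD_SITENAME_BINDINGS_IN_APPHOST_CONFIG'] = '1'
Set-AzureWebsite -Name 'MySite' -AppSettings $site.AppSettings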
CONTROL SLOT-STICKY CONFIGURATION
If, however, for any reason you need to revert to the old behavior of swapping these settings, you can add the app setting WEBSITE_OVERRIDE_PRESERVE_DEFAULT_STICKY_SLOT_SETTINGS to every slot of the app and set its value to "0" or "false".
https://ruslany.net/2019/06/azure-app-service-deployment-slots-tips-and-tricks/#slot-sticky-config
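A sketch of reverting (assumed site name with a single 'staging' slot; repeat for each slot your app has):

# Apply the override to the production slot...
$prod = Get-AzureWebsite -Name 'MySite'
$prod.AppSettings['WEBSITE_OVERRIDE_PRESERVE_DEFAULT_STICKY_SLOT_SETTINGS'] = '0'
Set-AzureWebsite -Name 'MySite' -AppSettings $prod.AppSettings

# ...and to the staging slot
$stage = Get-AzureWebsite -Name 'MySite' -Slot 'staging'
$stage.AppSettings['WEBSITE_OVERRIDE_PRESERVE_DEFAULT_STICKY_SLOT_SETTINGS'] = '0'
Set-AzureWebsite -Name 'MySite' -Slot 'staging' -AppSettings $stage.AppSettings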
I would suggest using Local Cache in conjunction with Deployment Slots to prevent any downtime.
• Add the sticky app setting WEBSITE_LOCAL_CACHE_OPTION with the value Always to your Production slot. If you're using WEBSITE_LOCAL_CACHE_SIZEINMB, also add it as a sticky setting to your Production slot (see the sketch after this list).
• Create a Staging slot and publish to your Staging slot. Consider not setting the staging slot to use Local Cache; this enables a seamless build-deploy-test lifecycle for staging while you still get the benefits of Local Cache for the production slot.
• Test your site against your Staging slot.
• When you are ready, issue a swap operation between your Staging and Production slots.
• Sticky settings stay with the slot rather than moving with the app content. So when the Staging slot gets swapped into Production, it inherits the Production slot's Local Cache app settings. The newly swapped Production slot will run against the local cache after a few minutes and will be warmed up as part of slot warmup after the swap. So when the slot swap is complete, your Production slot is running against the local cache.
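As referenced in the first step, a minimal sketch of adding the Local Cache settings as sticky settings on the Production slot (site name and cache size are assumptions):

$site = Get-AzureWebsite -Name 'MySite'
$site.AppSettings['WEBSITE_LOCAL_CACHE_OPTION'] = 'Always'
$site.AppSettings['WEBSITE_LOCAL_CACHE_SIZEINMB'] = '300'   # optional; only if you use it
Set-AzureWebsite -Name 'MySite' -AppSettings $site.AppSettings

# Mark both settings sticky so they stay with the Production slot during swaps
Set-AzureWebsite -Name 'MySite' -SlotStickyAppSettingNames @('WEBSITE_LOCAL_CACHE_OPTION', 'WEBSITE_LOCAL_CACHE_SIZEINMB')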
Refer to the Azure best practice documents:
https://learn.microsoft.com/en-us/azure/app-service/deploy-best-practices
https://learn.microsoft.com/en-us/azure/app-service/overview-local-cache
I utilise the "Swap with Preview" feature, warming the sites before completing the swap.
The main problem with swapping slots is that the worker processes are restarted when you swap. This means the sites need to be re-warmed.
When you use Swap with Preview the worker processes are restarted but the swap does not complete, meaning you can warm the sites accordingly. Once you are happy with your testing and performance you simply "Complete Swap" and the sites will respond the same.
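For illustration, a sketch of that flow with the newer Az PowerShell module (the resource group and app name are assumptions):

# Phase 1: apply production's slot settings to staging and restart it,
# without completing the swap, so you can warm and test the staging site
Switch-AzWebAppSlot -ResourceGroupName 'my-rg' -Name 'my-webapp' `
    -SourceSlotName 'staging' -DestinationSlotName 'production' `
    -SwapWithPreviewAction ApplySlotConfig

# ...warm up and verify the staging site here...

# Phase 2: complete the swap once you are happy
Switch-AzWebAppSlot -ResourceGroupName 'my-rg' -Name 'my-webapp' `
    -SourceSlotName 'staging' -DestinationSlotName 'production' `
    -SwapWithPreviewAction CompleteSlotSwap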
I heard that no session timeout happens when the deployment switch from staging to production happens in Azure. Is my understanding correct? If so, how does Azure handle this switch internally?
The answer depends on what switch from staging to production you're talking about. You could use Deployment Slots for this, but it is not recommended to have a full-blown Staging environment as a slot on your App Service, since these slots run on the same App Service Plan as Production and heavy load on Staging could hurt Production performance.
I tend to think of it more as a 'Pre-Production' environment using a Deployment Slot where you can do some last checks (smoke test) before you release the new version of your application into the wild.
I would think sessions are managed internally, since the two slots run on the same App Service Plan, making this a relatively simple scenario.
Documentation
Deploying an app to a slot first and swapping it into production ensures that all instances of the slot are warmed up before being swapped into production. This eliminates downtime when you deploy your app. The traffic redirection is seamless, and no requests are dropped as a result of swap operations.
Find more information about using slots here: Set up staging environments in Azure App Service
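For completeness, a plain swap is a single call; a sketch with the Az PowerShell module (names assumed):

# Swap the warmed-up staging slot into production
Switch-AzWebAppSlot -ResourceGroupName 'my-rg' -Name 'my-webapp' `
    -SourceSlotName 'staging' -DestinationSlotName 'production'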
When I swap deployment slots in Azure, do the web jobs restart? If so, do they restart with the destination configuration or source configuration values?
In my case, I don't want the web jobs to run in the staging slot. To deal with this, I have the following code:
using System.Configuration;
using System.Threading;
using Microsoft.Azure.WebJobs;

public class Program
{
    public static void Main()
    {
        // Only run web jobs when the app setting allows it
        bool enableWebJobs = false;
        bool.TryParse(ConfigurationManager.AppSettings["EnableWebJobs"], out enableWebJobs);
        if (enableWebJobs)
        {
            var host = new JobHost();
            host.RunAndBlock();
        }
        else
        {
            // Sleep so the Azure web jobs platform doesn't try to continually restart the process
            while (true)
            {
                Thread.Sleep(60000);
            }
        }
    }
}
I'm just unsure if the web jobs will be restarted after the swap with the correct AppSettings. If not, then this won't work at all as EnableWebJobs will remain false.
The configuration is swapped too, so your code will not work as you hoped.
Using Amit Apple's post
How to prevent Azure webjobs from being swapped in Azure website production <--> staging slots
To prevent WebJobs from running on the staging slot you can add an app setting called WEBJOBS_STOPPED and set it to 1 (in the Azure portal).
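Alternatively, marking EnableWebJobs as a slot-sticky setting would make the code in the question work, since sticky settings stay with the slot instead of swapping. A minimal sketch with the same legacy cmdlets used elsewhere in this thread (site name assumed):

# Production runs WebJobs; staging doesn't
$prod = Get-AzureWebsite -Name 'MySite'
$prod.AppSettings['EnableWebJobs'] = 'true'
Set-AzureWebsite -Name 'MySite' -AppSettings $prod.AppSettings

$stage = Get-AzureWebsite -Name 'MySite' -Slot 'staging'
$stage.AppSettings['EnableWebJobs'] = 'false'
Set-AzureWebsite -Name 'MySite' -Slot 'staging' -AppSettings $stage.AppSettings

# Sticky settings stay attached to the slot during a swap
Set-AzureWebsite -Name 'MySite' -SlotStickyAppSettingNames @('EnableWebJobs')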
I have disabled webjobs in staging with this sticky slot setting.
Looking at the webjobs logs in the Kudu panel, the webjobs fire up when the app is deployed and are not affected by the actual swap (which AFAIK is just the load balancer routing to the new live/stage).
edit: I am wrong. There is a restart as part of the swap. (see Understanding site swaps) namely:
Here is what happens when you swap a source slot (let's call it 'Staging') into a target slot (Production).
First, the Staging site needs to go through some setting changes for App Settings and Connection Strings that are marked as 'slot'. There are also other changes related to source control that may need to be applied. This causes the Staging site to restart, which is fine.
Next, the Staging site gets warmed up, by having a request sent to its root path (i.e. '/'), and waiting for it to complete.
Now that the Staging site is warm, it gets swapped into Production. There is no downtime, since it goes straight from one warm site to another one.
Finally, the site that used to be Production and is now Staging also needs to get some settings applied, causing it to restart. Again, this is fine since it happens in the staging site.
I have an Azure website with a production and staging slot with multiple instances.
I run several Azure webjobs on the site, some of which are triggered and others continuous.
I understand triggered webjobs will only run once on one instance.
My current setup is:
Production site has no deployment other than swapping from staging
Staging website is setup to continuous deployment from bitbucket
Webjobs are deployed from within VS to production slot using "publish to Azure" (since AFAIK there is no support for continuous deployment of webjobs that are schedule based)
The VS solution for the website is different from the VS solution containing webjobs
I noticed when I swap production and staging, webjobs are also swapped!
I realized this when I had deployed a new webjob to production and subsequently did a website deployment followed by a swap, the newly deployed webjob was no longer in the production slot!
My questions:
1. Is it possible to prevent webjobs from swapping between production and staging? (The webjobs are really entirely different pieces of code than the website.)
2. What exactly is swapped for webjobs? Binaries? Schedule? Both?
3. How do I prevent all webjobs from running on the staging slot? My webjobs do not need to be highly available and I can test them offline easily.
4. If I deploy webjobs from VS to the staging slot and do a swap, then the current staging slot will be missing the deployed webjob and I'll lose it when I do my next website update, so where should I be deploying my webjobs?
There is a related thread but it assumes the webjob is deployed simultaneously with a website which is not what I have currently.
I really like websites and webjobs, but it seems the story related to continuous independent deployment of webjobs and websites is broken.
Would really appreciate advice!
Thanks
Azure websites deployment slots are not an easy concept to understand, and together with WebJobs it gets a little bit more difficult.
I suggest reading the following post to get a better understanding on deployment slots - http://blog.amitapple.com/post/2014/11/azure-websites-slots/ (including the comments section for useful information)
To get a better understanding on how WebJobs work and deployed see this post - http://blog.amitapple.com/post/74215124623/deploy-azure-webjobs/
It is important to understand that:
When swapping deployment slots you actually swap the content/files of the website between the 2 slots.
WebJobs are part of the website content.
So when you swap the slot you actually swap the website files including the WebJob.
Note that it is bad practice to deploy a WebJob directly rather than as part of your website files/repository; this is probably the cause of the issues you are having.
To prevent WebJobs from running on the staging slot you can add an app setting called WEBJOBS_STOPPED and set it to 1 (in the azure portal). (source). Make sure this app setting is sticky to the slot otherwise it'll propagate to the production slot.
By far the easiest solution to this problem is to have a separate Web App for your WebJob. Just place the worker Web App on the same App Service Plan and you will have two separate Web Apps running on the same machine(s).
web-app
web-app-worker
Here's Amit's answer to #3 using PowerShell:
Function Disable-AzureSlotJobs($SiteName, $Slot = 'staging')
{
    # Stop all WebJobs on the slot, then make the setting sticky so it never swaps into production
    $site = Get-AzureWebsite -Name $SiteName -Slot $Slot
    $site.AppSettings.WEBJOBS_STOPPED = "1"
    Set-AzureWebsite -Name $SiteName -Slot $Slot -AppSettings $site.AppSettings
    Set-AzureWebsite -Name $SiteName -SlotStickyAppSettingNames @("WEBJOBS_STOPPED")
}

'MySite1', 'MySite2', 'MySite3' | % { Disable-AzureSlotJobs $_ }
I have a website with a bunch of webjobs. The webjobs are continuous, but use Quartz.net to schedule internally. I'm using deployment slots to deploy my site to a staging site, which I then swap into production.
All works well, but I want to stop my webjobs from ever scaling out with my web app (i.e. not participate in auto-scale).
Now, I know I can create a settings.job file and set { "is_singleton": true } ... BUT ... in my testing, that breaks my deployments to my staging site - what happens when I deploy is that on my staging slot they all become stopped (presumably because my settings.job file prevents them from running). If I remove the settings.job file and deploy to my staging site again, this doesn't happen - they remain running.
How do I stop my webjobs from scaling out with auto-scale, without breaking the deployment slot swapping strategy?
Thanks
I re-tested my deployment with a settings.job file with is_singleton set, and it worked correctly - the webjobs deployed successfully and were running after the deployment had finished.
I'm not sure why this wasn't the case before - either something changed with Kudu, or perhaps I was confusing things by deploying my webjobs inconsistently, but in any case, I can report this is no longer an issue :)
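For reference, the settings.job file just sits next to the WebJob's binaries in the App_Data layout mentioned earlier in this thread; a sketch of creating one (the job name and Kudu-relative path are assumptions):

# Mark the continuous WebJob as a singleton so it runs on only one instance
$jobDir = 'site\wwwroot\App_Data\jobs\continuous\MyJob'
Set-Content -Path (Join-Path $jobDir 'settings.job') -Value '{ "is_singleton": true }'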