I'm attempting to use a deployment slot for an Azure Function and avoid any downtime during deployment. I can get this working, but only if I never change any application settings; when an application setting is changed it causes a restart.
One pattern I've seen recommended for this is to not declare application settings in ARM for the production slot. This seems to make sense and, surprisingly, works with Complete mode.
However, when deploying the Function for the first time to a clean environment without production app settings defined, this will always fail on slot swap. Much of the Azure Functions infrastructure is configured through app settings, for example the Functions runtime version, the App Insights key, the storage connection, etc. If you omit these on a clean deploy and attempt to swap, it will never deploy.
So I'm in a position where I need to omit application settings to get the deployment model to work, but the application settings include mandatory settings for the infrastructure. I've tried switching to the az cli for the function app creation. I've had limited success, but I also have ARM resources that depend on the function existing, such as role assignments and alerts. I'd rather not go down a hybrid model if I can help it, and I'd also like to keep the declarative target state that ARM's Complete mode provides.
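For context, the az cli route I experimented with looks roughly like this (all names are placeholders; the settings shown are the kinds of mandatory ones I mean):

```
# Create the function app, a staging slot, and the mandatory settings
# outside of ARM (all names here are placeholders).
az functionapp create \
  --resource-group my-rg \
  --name my-func \
  --plan my-plan \
  --storage-account myfuncstorage \
  --runtime dotnet \
  --functions-version 4

az functionapp deployment slot create \
  --resource-group my-rg \
  --name my-func \
  --slot staging

# The settings the platform itself needs before a swap can ever succeed.
az functionapp config appsettings set \
  --resource-group my-rg \
  --name my-func \
  --settings \
    FUNCTIONS_EXTENSION_VERSION="~4" \
    APPINSIGHTS_INSTRUMENTATIONKEY="$APP_INSIGHTS_KEY" \
    AzureWebJobsStorage="$STORAGE_CONNECTION_STRING"
```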
Am I missing an option here that gives zero downtime, a successful first deploy, and the ability to update Function settings? It looks like it's not possible.
Related
I have an ARM template that performs a resource group deployment. It mainly consists of web apps and a key vault. I want to go with blue/green deployment and sometimes need to add blue-slot-only settings to a web app.
When we deploy the web app site all as one resource (https://learn.microsoft.com/en-us/azure/templates/microsoft.web/sites?tabs=json), it's quite easy, as appSettings are defined as key-value pairs and we can add a third key, "slotSetting": true.
However, in my scenario I want to deploy everything in Complete mode and at the same time dynamically add key vault access policies. So based on this doc - https://learn.microsoft.com/en-us/azure/app-service/app-service-key-vault-references#azure-resource-manager-deployment - I need to configure appSettings as a separate resource, which is an object, so there's no way to add this additional property.
With this, I have 2 questions:
Is there a way to add this slotSetting when deploying appSettings as a separate resource?
These are a little bit off topic but related to the same case:
I'm trying to find a best practice for deploying resources plus a key vault with dynamic access policies. In Complete mode we can't just conditionally skip the key vault (because the deployment would try to remove it), and on the other hand, when we do define it, we need to declare an empty accessPolicies array (empty because we add the access policies as separate resources so we can loop over all web apps and get their identities). By and large, the apps are down for some time, which is not best practice when you want to reach zero-downtime deployment.
In what exact scenarios do we want to use Complete mode deployment? My team leads are pushing for it, but I don't see much added value. If the ARM template is well defined, all manual changes will be reverted or moved back to defaults in Incremental as well as Complete mode; only additional resources get deleted. Do you have any interesting use cases to share?
I'm not really sure I fully understand your question, but when deploying slots you can give each slot its own app settings in the ARM template simply by specifying them on the slot resource itself when you create it.
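If you do end up needing to manage some of them outside the template, the Azure CLI can also set per-slot values and mark keys as sticky (slot settings); a rough sketch with placeholder names:

```
# Give the staging slot its own value for a key (placeholder names).
az webapp config appsettings set \
  --resource-group my-rg \
  --name my-app \
  --slot staging \
  --settings ApiUrl=https://staging-api.example.com

# Mark the key as a slot setting on production so it sticks to the slot
# during a swap.
az webapp config appsettings set \
  --resource-group my-rg \
  --name my-app \
  --slot-settings ApiUrl=https://api.example.com
```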
You can create the RBAC policies for the vault first and then apply them to the vault at creation time, but it gets a little complicated. Why don't you just use Incremental mode? The only difference is that resources not specified won't be deleted.
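As an aside, if strict Complete-mode purity isn't required, the access policies can also be granted once the web apps exist, for example with the Azure CLI (a rough sketch with placeholder names, assuming a system-assigned managed identity):

```
# Look up the web app's managed identity and grant it access to the vault
# (placeholder names).
principal_id=$(az webapp identity show \
  --resource-group my-rg \
  --name my-app \
  --query principalId \
  --output tsv)

az keyvault set-policy \
  --name my-keyvault \
  --object-id "$principal_id" \
  --secret-permissions get list
```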
I have never found a good use case for Complete mode other than dev instances, where I can specify a single resource and have it clean up everything else; in a production environment running Complete mode seems totally weird to me.
We have four Azure environments for the stages of our development process: Dev, QA, UAT and Production. As you would expect, settings and options need to differ between environments, e.g., "apiurl": "http://dev-api.ourdomain.com" in Dev needs to become "apiurl": "https://uat-api.ourdomain.com" for UAT.
At the moment we manually set these in the App Service Configuration page in the Azure Portal. There are problems with this method we are trying to overcome:
It cannot be timed to happen with releases; it has to be done manually
It's prone to human error
Previous values are lost
We cannot easily compare values between environments
We cannot easily see which settings are no longer used
We would like to set up an appsettings.json with environment transforms for the differences. This addresses the last three issues, as it will be stored in our source control (if not secret), but it is useless if we cannot deploy that same file to set the Azure configuration. Pipeline steps might solve issues 1 and 2, but they reintroduce issues 3 and 5.
Surely there is a simple way to do this that I am missing?
I'd recommend creating the appsettings.json file in your source code and using a pipeline to deploy the app service with it included; those settings will take effect. The pipeline can either adjust the file contents immediately before deployment, or upload the file as-is but also deploy app settings that override the file's contents.
This fits your requirements because:
using a pipeline for automation gives you timing control (req 1) and reduces the risk of human error (req 2)
keeping some of the settings in source control gives you a (partial) history of previous values (req 3)
some of the settings may be the same across all environments; those can be left untouched in the appsettings.json, but those which differ can be overridden by the pipeline and adjusted correctly
you could override settings by using a script task such as PowerShell, or a FileTransform task, to actually change the appsettings.json
or you could override settings by automating the Azure App Service configuration itself, using the Azure CLI or an ARM template, for example (see the sketch below).
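For example, a pipeline step could push the per-environment values straight into the App Service configuration with the Azure CLI (a rough sketch; resource names are placeholders and the apiurl value is the one from the question):

```
# Run from a release step; the values come from pipeline variables scoped
# to the environment being deployed (placeholder names).
az webapp config appsettings set \
  --resource-group uat-rg \
  --name uat-webapp \
  --settings apiurl=https://uat-api.ourdomain.com
```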
For comparison of settings across the different environments (req 5), I'd recommend:
organising your pipeline variables into different groups by environment, so the pipeline code makes it clear what the differences will be when deployed
using Azure CLI or PowerShell commands to interrogate the actual deployed app services.
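For instance, a quick way to dump the live configuration of each environment for a side-by-side diff (a rough sketch; resource names are placeholders based on the environment names above):

```
# Export the current app settings of every environment's App Service so
# they can be compared (placeholder names).
for env in dev qa uat prod; do
  az webapp config appsettings list \
    --resource-group "$env-rg" \
    --name "$env-webapp" \
    --output json > "appsettings.$env.json"
done
```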
Our company website will soon be hosted in an App Service in Azure. The website communicates with an API layer that is also hosted in Azure and links to our internal systems and databases. The architecture at this level cannot be changed at this time and has quite a bit of background history, etc.
We are looking at implementing always-on (zero-downtime) deployments using Deployment Slots in the App Service in Azure. The API layer will have non-breaking changes for each deployment, and deploying the APIs will be the first part of any release, with the website following.
We have a clear separation between our environments, and the release will be tested in Dev, Test and Pre-Prod environments before the production deployment begins. Overall the whole process is fairly simple until it comes to post-implementation (PI) testing, which is currently mandatory in our company.
We need to be able to test the production deployment prior to the customers using the site. Currently we feature-toggle the site into maintenance mode unless it's being accessed from a select IP address list. We now need to perform the PI testing on the new version of the site whilst the customer continues to use the older version. I wasn't sure of the best way of achieving this.
One idea I did have is a subdomain that links directly to the website's _staging deployment slot, bypassing the deployment slot settings. In turn, some logic in there could go directly to the API's _staging deployment slot. This would give us the option to run the post-implementation tests just prior to clicking the 'Swap' button to swap over the deployment slots.
I know the overall process isn't ideal, but at the moment this can't be changed. Does anyone have any thoughts or other suggestions on the above please?
Azure makes it easy to create deployment slots for App Services. They are available in the Standard and Premium App Service plan tiers. Deployment slots are actually live apps with their own hostnames. App content and configuration elements can be swapped between two deployment slots, including the production slot.
Azure customers can easily perform the following steps:
- Deploy the web application to an online deployment slot.
- Run the tests on a deployment slot, within the live environment that testers are going to use. The testing environment and production environment exist side by side and provide a very similar environment.
- Swap the two slots internally (Azure re-points traffic between the slots via its front-end load balancing and traffic management)
- Update applications with zero downtime
- Swap back to a previous version of your app instantly, with zero downtime for users.
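A rough sketch of that flow with the Azure CLI (placeholder names; the deploy and test step is whatever your process already uses):

```
# Create a staging slot, deploy and test there, then swap with production
# (placeholder names).
az webapp deployment slot create \
  --resource-group my-rg \
  --name my-app \
  --slot staging

# ... deploy and test the new version on the staging slot ...

az webapp deployment slot swap \
  --resource-group my-rg \
  --name my-app \
  --slot staging \
  --target-slot production

# Running the same swap again rolls back to the previous version.
```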
References
https://learn.microsoft.com/en-us/azure/app-service/deploy-staging-slots
The overall reason to use deployment slots is that they let your team run live testing in the production environment, and if there are problems on the production slot, you can roll back the swap without having to take your application down for maintenance.
We are deploying into Azure using Octopus Deploy. We have been using it for more than a year, and about 3 weeks ago we suddenly started getting errors on a few deployments.
Microsoft.Web.Deployment.DeploymentDetailedClientServerException: Web Deploy cannot modify the file 'msvcr120.dll' on the destination because it is locked by an external process. In order to allow the publish operation to succeed, you may need to either restart your application to release the lock, or use the AppOffline rule handler for .Net applications on your next publish attempt. Learn more at: http://go.microsoft.com/fwlink/?LinkId=221672#ERROR_FILE_IN_USE
We have the web app running with Always On enabled, and we have the app setting 'MSDEPLOY_RENAME_LOCKED_FILES' set to 1, which in theory prevents this.
Does anyone know if something was changed in Azure or Octopus?
There are a number of reasons files may be locked during deployment. You should be able to get an idea of what may be locking files by using the Kudu process explorer, which you can access at {yoursite}.scm.azurewebsites.net.
In order to avoid the locking issue altogether, you could make use of slots to achieve a zero-downtime deployment, if that's an option for you. In this case you could stop the site or enable App Offline, which should unlock any files and allow the deployment to succeed, after which a slot swap will make the deployment live. App Offline is preferred over using MSDEPLOY_RENAME_LOCKED_FILES, but it will take the application offline during the deployment. Octopus also has support for this as an option on the Deploy an Azure Web App step itself, so it may be worth a try even without slots.
You can use custom pre/post deployment scripts as part of your Deploy an Azure Web App step to call the Stop-AzureRmWebAppSlot, Start-AzureRmWebAppSlot and Switch-AzureRmWebAppSlot PowerShell cmdlets to achieve the above.
An alternative may be to use zip deployments; however, the Deploy an Azure Web App Octopus step doesn't have first-class support for this quite yet. It can still be achieved using a Run an Azure PowerShell Script step along with a package reference, if that is what you want to do.
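For what it's worth, outside of Octopus the same zip deployment can be pushed to a slot with the Azure CLI; a minimal sketch with placeholder names:

```
# Push the zip package to the staging slot (placeholder names); a slot
# swap afterwards makes the deployment live.
az webapp deployment source config-zip \
  --resource-group my-rg \
  --name my-app \
  --slot staging \
  --src package.zip
```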
I am updating my Azure App Service from Azure DevOps. Currently, my release is like this:
Stop the App Service,
Update the App Service, and
Start the App Service.
My question is whether it is reasonable to stop the App Service during the update. When I select a release template for Azure App Service in Azure DevOps, there aren't any stop/start steps, only the update step. So I am wondering if the stop/start is even needed?
What we have done mostly is:
Stop staging slot
Deploy to slot
Start slot
Swap staging to production
Stop staging slot
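For reference, roughly what those steps look like with the Azure CLI (placeholder names; the deploy step itself stays whatever your pipeline already uses):

```
# Stop the staging slot, deploy, start it, swap, then stop it again
# (placeholder names).
az webapp stop --resource-group my-rg --name my-app --slot staging

# ... deploy the package to the staging slot here ...

az webapp start --resource-group my-rg --name my-app --slot staging

az webapp deployment slot swap \
  --resource-group my-rg \
  --name my-app \
  --slot staging \
  --target-slot production

az webapp stop --resource-group my-rg --name my-app --slot staging
```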
Martin's suggestion on Take app offline is also a good one!
We prefer to deploy to slots and then swap so we incur minimal impact to production and can also rollback easily.
Stopping/taking app offline can prevent file locking issues.
It probably depends on your app. If you don't have any issues when you just update your app (such as the 'file is in use' issue), you can consider using the Take App Offline flag, which will place an app_offline.htm file in the root directory of the App Service during the update (and remove it afterwards). This way users will recognize that something is happening with the app.
However, I often ended up doing the same like you: Stop, Update, Start 😉
There are five options for safe deployment (atomic updates) to Azure Web Apps. Here is my preferred order, ranked by priority and feature richness:
Run-from-Package + ZipDeploy (makes site read-only)
ZipDeploy (using kudu REST api - automatically takes site offline)
Azure CLI (az webapp)
msdeploy (-enableRule:AppOffline, or stop/start site to enforce atomicity)
FTP (using the publish profile; make sure to upload app_offline.htm)
There are numerous other deployment options like cloud sync, GitHub continuous deployment, local Git, etc., but they are all built upon the Kudu APIs (as is the Azure CLI).
Note: if you're using Azure DevOps, it supports nearly all of these options; leverage the Azure App Service Deploy task.
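As a sketch of option 1 (Run-from-Package) with the Azure CLI, assuming placeholder names and a package.zip build artifact:

```
# With WEBSITE_RUN_FROM_PACKAGE=1 the app runs directly from the uploaded
# zip, so the site content is read-only and the update is atomic
# (placeholder names).
az webapp config appsettings set \
  --resource-group my-rg \
  --name my-app \
  --settings WEBSITE_RUN_FROM_PACKAGE=1

az webapp deployment source config-zip \
  --resource-group my-rg \
  --name my-app \
  --src package.zip
```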
Agree with both Martin and juunas. If you want to deploy without impacting users then you need to use the slot swap approach. juunas brings up the great point of easily rolling back too. Our approach includes another slot we call "hotfix". This adds a few benefits:
Having an environment with production configs so that you can optionally do additional testing before actually doing the swap.
Roll back in prod even when devs have already deployed into a staging environment.
Allows you to test bugs in the current and previous versions of the code. Helpful when someone says "well it worked before this deployment"...
This is what it looks like.