Azure Cloud Services Slots

I've got a few Cloud Services that have both Production and Staging slots. Since I didn't deploy the environment and I'm not aware of what exactly stands behind them: can I delete the Staging slots to lower the cost, because they are billed the same as Production slots? And if I download the config files, would I eventually be able to import them back into Azure?

You can deploy directly to the production slot of an Azure Cloud Service. If you have more than one role instance (you are running multiple role instances to get the stated SLA, right?), Azure will automatically upgrade each role instance independently of the others.
While this saves you a little bit of money by not deploying to the staging slot, we found that deploying to the staging slot worked better with our continuous integration strategy.
Reference: How to Manage Cloud Services
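As an illustration, a minimal sketch of such a direct production upgrade using the classic Azure PowerShell module; the service name and file paths are placeholders:

```powershell
# Minimal sketch (classic Azure PowerShell module, Service Management):
# in-place upgrade of the production slot; Azure walks the role instances
# one upgrade domain at a time. All names/paths below are placeholders.
Set-AzureDeployment -Upgrade `
    -ServiceName "my-cloud-service" `
    -Slot "Production" `
    -Package ".\MyApp.cspkg" `
    -Configuration ".\ServiceConfiguration.Cloud.cscfg" `
    -Mode Auto `
    -Label "Direct production upgrade"
```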

Can I delete the Staging slots to lower the cost because they are billed the same as Production slots?
You should be able to delete the staging slot without impacting the production slot. However, if you have users connecting to the staging slot, they will no longer be able to reach the application once you delete the deployment from that slot.
And if I download the config files, would I eventually be able to import them back into Azure?
Merely downloading the config file is not going to help, as you would also need the package file. What you should do instead is invoke the Get Package Service Management REST API operation, which copies both the config file and the package file into a storage account of your choice.
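For illustration, a minimal sketch of invoking that operation from PowerShell, assuming a management certificate registered with the subscription and an existing blob container; the subscription ID, service name, container URI, and thumbprint are all placeholders:

```powershell
# Minimal sketch: classic Service Management "Get Package" REST call, which
# copies the .cspkg and .cscfg of a slot into a blob container you own.
$subscriptionId = "<subscription-id>"
$serviceName    = "<cloud-service-name>"
$containerUri   = "https://<storage-account>.blob.core.windows.net/<container>"

# Management certificate registered with the subscription
$cert = Get-Item "Cert:\CurrentUser\My\<certificate-thumbprint>"

$uri = "https://management.core.windows.net/$subscriptionId/services/hostedservices/" +
       "$serviceName/deploymentslots/staging/package" +
       "?containerUri=$([uri]::EscapeDataString($containerUri))&overwriteExisting=true"

# POST with the required x-ms-version header; on success the package and
# config files land in the given container.
Invoke-RestMethod -Method Post -Uri $uri -Certificate $cert `
    -Headers @{ "x-ms-version" = "2012-03-01" }
```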

Related

How can we transform the entire web.config from the staging slot to the production slot in Azure?

I have created multiple slots (test, stage, and prod) in my Azure App Service, and similarly I have created a web.config file for each environment. I am deploying my application through the Octopus deployment tool to the test environment slot, so initially it picks up the web.test.config file and works fine.
But I want to swap the complete transformation section of the web.config file when I swap to the stage or prod slot through the Azure portal. Is there any way to do this?
Using the application settings and connection strings in the configuration settings, I am able to segregate the settings for each slot. But I am not sure how I can do it for other sections like system.identityModel, system.web, system.identityModel.services, etc. Therefore I want to replace the complete transformation section according to the environment during the swap.
When I talked with the App Service team, they said slots are not meant for this purpose. The main purpose of slots is to allow deployment of new versions with little or no downtime, or to test new features with a small percentage of the traffic. They are not really for different environments; you should use separate App Services for that, to which you deploy separately.
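For the part that slots do support (app settings and connection strings), here is a minimal sketch of pinning a setting to a slot via the Azure CLI so it stays behind during a swap; resource group, app, slot, and setting names are placeholders:

```powershell
# Minimal sketch: mark a setting as a "slot setting" so it sticks to the
# slot rather than travelling with the app during a swap. Names are
# placeholders.
az webapp config appsettings set `
    --resource-group MyResourceGroup `
    --name my-app `
    --slot stage `
    --slot-settings Environment=Stage
```

Note that this only covers app settings and connection strings; whole web.config sections such as system.identityModel cannot be swapped this way, which is why the App Service team suggests separate App Services.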

Should I stop an Azure App Service during an update?

I am updating my Azure App Service from Azure DevOps. Currently, my release is like this:
Stop the App Service,
Update the App Service, and
Start the App Service.
My question is whether it is reasonable to stop the App Service during the update. When I select a release template from Azure DevOps for an Azure App Service, there aren't any stop/start steps, only the update step. So I am wondering if the stop/start is even needed?
What we have done mostly is (see the CLI sketch after this list):
Stop staging slot
Deploy to slot
Start slot
Swap staging to production
Stop staging slot
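A minimal sketch of that sequence with the Azure CLI; resource group, app name, and package path are placeholders, and `az webapp deploy` is just one of several deployment options:

```powershell
# Minimal sketch of the stop -> deploy -> start -> swap -> stop sequence
# using the Azure CLI. Resource group, app name, and zip path are placeholders.
az webapp stop --resource-group MyResourceGroup --name my-app --slot staging
az webapp deploy --resource-group MyResourceGroup --name my-app --slot staging `
    --src-path .\app.zip --type zip
az webapp start --resource-group MyResourceGroup --name my-app --slot staging

# Swap the warmed-up staging slot into production, then stop staging again
az webapp deployment slot swap --resource-group MyResourceGroup --name my-app `
    --slot staging --target-slot production
az webapp stop --resource-group MyResourceGroup --name my-app --slot staging
```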
Martin's suggestion of using Take App Offline is also a good one!
We prefer to deploy to slots and then swap so we incur minimal impact to production and can also rollback easily.
Stopping/taking app offline can prevent file locking issues.
It probably depends on your app. If you don't have any issues when you just update your app (such as a file-in-use issue), you can consider using the Take App Offline flag, which will place an app_offline.htm file in the root directory of the App Service during the update (and remove it afterwards). This way users will recognize that something is happening with the app.
However, I often ended up doing the same as you: Stop, Update, Start 😉
There are five options for safe deployment (atomic updates) to Azure Web Apps. Here is my preferred order, ranked by priority and feature richness:
Run-from-Package + ZipDeploy (makes site read-only)
ZipDeploy (using the Kudu REST API; automatically takes the site offline)
Azure CLI (az webapp)
msdeploy (-enableRule:AppOffline, or stop/start site to enforce atomicity)
FTP (using the publish profile; make sure to upload app_offline.htm)
There are numerous other deployment options like cloud sync, GitHub continuous deployment, local Git, etc., but they are all built upon the Kudu APIs (as is the Azure CLI).
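As an illustration of option 2, a minimal sketch of ZipDeploy against the Kudu REST API, assuming the deployment credentials from the app's publish profile; the app name, username, and password are placeholders:

```powershell
# Minimal sketch: ZipDeploy through the Kudu REST API, authenticating with
# the deployment credentials from the app's publish profile (placeholders).
$user = '$my-app'                    # deployment user name from the publish profile
$pass = '<publish-profile-password>'
$auth = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes("${user}:${pass}"))

# Kudu extracts the zip over the site content and takes the site offline
# while doing so.
Invoke-RestMethod -Method Post `
    -Uri "https://my-app.scm.azurewebsites.net/api/zipdeploy" `
    -Headers @{ Authorization = "Basic $auth" } `
    -InFile ".\app.zip" `
    -ContentType "application/zip"
```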
Note: If you're using Azure DevOps, it supports nearly all of these options; leverage the Azure App Service Deploy task.
Agree with both Martin and juunas. If you want to deploy without impacting users, then you need to use the slot-swap approach. juunas brings up the great point of easily rolling back too. Our approach includes another slot we call "hotfix". This adds a few benefits:
Having an environment with production configs so that you can optionally do additional testing before actually doing the swap.
Being able to roll back in prod even when devs have already deployed to a staging environment.
Letting you test bugs in the current and previous versions of the code. Helpful when someone says "well, it worked before this deployment"...

How does the Azure deployment switch from staging to production work?

I heard that no session timeouts happen when the deployment switch from staging to production happens in Azure. Is my understanding correct? If so, how does Azure handle this switch internally?
The answer depends on what switch from staging to production you're talking about. You could use Deployment Slots like this, but it is not recommended to have a full-blown Staging environment as a slot on your App Service: since these slots run on the same App Service Plan as Production, heavy load on Staging could hurt Production performance.
I tend to think of it more as a 'Pre-Production' environment using a Deployment Slot where you can do some last checks (smoke test) before you release the new version of your application into the wild.
I would think sessions are managed internally, since the two slots run on the same App Service Plan, making this a relatively simple scenario.
Documentation
Deploying an app to a slot first and swapping it into production ensures that all instances of the slot are warmed up before being swapped into production. This eliminates downtime when you deploy your app. The traffic redirection is seamless, and no requests are dropped as a result of swap operations.
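If you want to verify the warmed-up slot before any traffic moves, a minimal sketch of a two-phase "swap with preview" via the Azure CLI; all names are placeholders:

```powershell
# Minimal sketch: two-phase swap with preview. Phase one applies the
# production settings to the staging slot and warms it up; phase two
# completes the swap (--action reset would cancel it instead).
az webapp deployment slot swap --resource-group MyResourceGroup --name my-app `
    --slot staging --target-slot production --action preview

# ...smoke-test the warmed-up staging slot, then finish the swap
az webapp deployment slot swap --resource-group MyResourceGroup --name my-app `
    --slot staging --target-slot production --action swap
```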
Find more information about using slots here: Set up staging environments in Azure App Service

Azure App Service Application Logs for permanent availability

We have 3 separate deployments:
Azure App Service (Web API application) w/ no web-jobs
Azure Dummy Web Site w/ 1 Web Job
Azure Dummy Web site w/ 1 Web Job
We update them from time to time, publishing to a Staging deployment slot and then swapping the Staging slot to Production once we're comfortable w/ what was deployed. We sometimes do this for all 3 at the same time, and sometimes only 1 at a time.
The applications use NLog to log interesting things (at DEBUG, INFO, etc.). We've configured xsi:type="File" targets that write to the local drive (which I realize is blob-backed and shared across all instances). We currently write to a location under %HOME%\site\wwwroot.
The problem with our approach is that the log files are bound to the particular slot we happen to be running in.
So when we log in Production for a month, deploy to a Staging slot, and then swap it to Production, we wipe out the month's worth of NLog application logs. Actually, they get swapped over to Staging, and we'd have to do some manual copying/moving to merge them with the new Production application log.
I am pretty sure we're missing something simple and we're doing this wrong, or the hard way.
What is a way that we can "log" to one place assigned to Production, and have that log data contain, in only one place, all the data logged by our application over multiple deployments to Production?
As Suwat Ch said, if you swap your web app from production to staging, the log file will move to staging too. If you want to keep the log file with production, I would suggest saving the log info in Azure Storage. Please try the AzureTableStorageNLogTarget to save log info in Azure Table Storage. Here is a GitHub resource which demonstrates how to use it. Hope this gives you some tips.
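As a hedged sketch, an NLog target wired to Azure Table Storage in place of the file targets above; the exact target type and attribute names depend on the package you pick (the AzureTableStorageNLogTarget project mentioned above, or a similar table-storage target package), and the connection string and table name are placeholders:

```xml
<!-- Minimal sketch: route log entries to Azure Table Storage so they
     survive slot swaps. Target type/attributes vary by package version;
     replace the connection string and table name with your own. -->
<nlog xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <extensions>
    <add assembly="NLog.Extensions.AzureStorage" />
  </extensions>
  <targets>
    <target xsi:type="AzureTableStorage" name="tableLog"
            connectionString="UseDevelopmentStorage=true"
            tableName="AppLogs"
            layout="${longdate}|${level}|${logger}|${message}" />
  </targets>
  <rules>
    <logger name="*" minlevel="Debug" writeTo="tableLog" />
  </rules>
</nlog>
```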

Can I somehow do an auto swap between staging and production in azure cloud services?

I basically want to do an auto swap between staging and production on Azure Cloud Services.
Basically, I have a QA environment that needs a fixed IP address so the QA people can test after a developer finishes a task. Because the staging IP sometimes changes due to problems that can occur in the TFS builds, I want the QA team to have a fixed address to access without having to click Swap manually.
When you do a VIP swap between the Production and Staging deployments, the VIPs of both deployments are exchanged: Production becomes Staging and Staging becomes Production by IP swap.
If you want your new Staging deployment (previously Production) to hold all the latest bits of your application, then you have to re-deploy the application to the new staging slot. This process can be automated through PowerShell. The resources below can help you get started with automating the deployment process:
To create a new deployment using PowerShell
Run PowerShell script in TFS build process
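A minimal sketch of what that automation could look like with the classic Azure PowerShell cmdlets; the service name and file paths are placeholders, and this could run as a TFS build step:

```powershell
# Minimal sketch (classic Azure PowerShell module): push the latest build
# to staging, then VIP-swap it into production. Names/paths are placeholders.
New-AzureDeployment -ServiceName "my-cloud-service" -Slot "Staging" `
    -Package ".\MyApp.cspkg" `
    -Configuration ".\ServiceConfiguration.Cloud.cscfg" `
    -Label "QA build"

# Swap the VIPs: staging becomes production (and vice versa)
Move-AzureDeployment -ServiceName "my-cloud-service"
```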
My first question would be: why is the staging deployment/slot having those problems?
If the need is to perform the VIP swap programmatically, I'd think you could add some custom logic to do so via the Windows Azure Service Management API. Perhaps add this into your build definition (e.g. execute it via PowerShell).
