Azure seamless upgrade when database schema changes

Let's say I have a production and staging deployment both using their own (SQL Azure) databases. If the schema in staging has changed and needs to be deployed to production is there a defined way of achieving the database upgrade on the production database (without downtime)?
e.g. if I swap VIP staging <-> production (and at the same time somehow automate changing connection strings), what is the best process for automating the upgrade of the SQL Azure database?
My thought would be to detect the environment change in RoleEnvironmentChanging (though I'm not sure a VIP swap even fires RoleEnvironmentChanging) and run the SQL script against the to-be-production database at that point. However, I need to make sure that the script is only run once, and there will be multiple instances transitioning.

So you have a production deployment with its own SQL Azure database and a staging deployment with its own SQL Azure database. In this situation the two applications have connection strings pointing to two different databases.
Your first requirement is to change the database schema on the fly when you swap the deployments, and I have the following concerns with that design:
If you write any code inside the role to perform a "once and only once" action, there is no guarantee that it will actually run only once. It can run multiple times, depending on the scenario:
1.1 Whenever your VM is reimaged by the system, this code will run again, exactly as it did before the reimage.
1.2 You can try to guard against re-running at role start or VM start with a registry flag or some external marker, but there is no foolproof mechanism to prevent it.
Because of this, I would suggest that when you are ready to swap your deployments you:
2.1 Run the script that updates the production SQL Azure schema (this causes no application downtime by itself, since the application is not touched; you know best how your application behaves while its schema is being changed underneath it).
2.2 Change the configuration of the staging deployment to point to the production SQL Azure database (this causes no production downtime at all).
2.3 Swap the deployments (this also causes no application downtime).
So even when you manually update the DB schema and then swap the deployments, there is no significant downtime beyond the time the database takes to apply the schema change.
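If you do still want an instance to attempt the upgrade itself despite these caveats, one common mitigation (a sketch only, not something the answer above endorses; the container, blob and version names are illustrative) is to let instances race to create a marker blob, so that only the winner runs the script and reimaged instances find the marker and skip:

    using System;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Blob;

    public static class SchemaUpgradeGuard
    {
        // Returns true only for the single caller that manages to create the marker blob.
        public static bool TryClaimSchemaUpgrade(string storageConnectionString, string schemaVersion)
        {
            var client = CloudStorageAccount.Parse(storageConnectionString).CreateCloudBlobClient();
            var container = client.GetContainerReference("schema-upgrades");   // illustrative name
            container.CreateIfNotExists();

            var marker = container.GetBlockBlobReference("upgrade-" + schemaVersion + ".lock");
            try
            {
                // If-None-Match: * means "create only if the blob does not already exist".
                marker.UploadText(
                    DateTime.UtcNow.ToString("o"),
                    accessCondition: AccessCondition.GenerateIfNoneMatchCondition("*"));
                return true;    // we won the race; run the upgrade script now
            }
            catch (StorageException ex)
            {
                if (ex.RequestInformation != null && ex.RequestInformation.HttpStatusCode == 409)
                {
                    return false;   // another instance, or a previous run, already claimed it
                }
                throw;
            }
        }
    }

Because the marker blob lives outside the VM it survives reimaging (concern 1.1), but you still have to decide what to do if the winning instance crashes halfway through the script, which is exactly why the manual pre-swap update above is the simpler option.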

I have been looking for best practices on this all over the place and have found none. So far this is what I do:
Deploy to staging (Production is already running)
Copy the app_offline.htm file to the web root on Production. This blocks users from using the application, and therefore blocks changes to the database. I am using only one instance.
Backup the database.
Run DDL, DML and SP scripts. This updates the production database to the latest schema.
Test application on Staging.
Swap VIP. This brings the application back online since the app_offline.htm file is not present on Staging (new Production).
If something goes wrong, swap VIP again, restore database and delete app_offline.htm.
With this approach I have roughly five minutes of downtime (my database is small), which is better than waiting for the VM to be created while users get errors.
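For step 4 (the DDL, DML and SP scripts), a minimal sketch of a script runner, assuming the scripts are plain .sql files executed in name order against the production connection string; the folder path, the connection string and the regex-based GO splitting are placeholders rather than anything prescribed above:

    using System;
    using System.Data.SqlClient;
    using System.IO;
    using System.Linq;
    using System.Text.RegularExpressions;

    public static class UpgradeScriptRunner
    {
        public static void Run(string scriptFolder, string connectionString)
        {
            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();
                foreach (var file in Directory.GetFiles(scriptFolder, "*.sql").OrderBy(f => f))
                {
                    // Split on GO batch separators (naive; a dedicated migration tool is more robust).
                    var batches = Regex.Split(File.ReadAllText(file), @"^\s*GO\s*$",
                                              RegexOptions.Multiline | RegexOptions.IgnoreCase);
                    foreach (var batch in batches.Where(b => !string.IsNullOrWhiteSpace(b)))
                    {
                        using (var command = new SqlCommand(batch, connection))
                        {
                            command.CommandTimeout = 300;   // schema changes can be slow
                            command.ExecuteNonQuery();
                        }
                    }
                }
            }
        }
    }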

Related

Azure production environment and maintaining an sql database with migrations

I've built a small web app that I'm thinking of deploying as a production site to Azure. I've used a code-first approach. I'm after some advice on maintaining an Azure production DB when it is possible/likely that some of my models will still change in the early phase of deployment, depending on user testing, etc.
My workflow is probably not ideal. I've published to Azure for testing, and my data is stored in a SQL database called project_db on Azure.
But I keep making changes to my models on my local machine. Rather than using migrations (which I kind of get, but also find a bit difficult to handle), my workflow is: I change my model, e.g. by adding a property, then delete my local database and build my solution again. On my local machine that works without having to implement migrations, and since I don't need any locally stored data, it just seeds again.
I was hoping to confirm that if I head down this path I'd have to do the same thing on Azure. That is, if I've changed my models locally, deleted my local DB and rebuilt my solution, I can't just publish to Azure and expect my previously created project_db to work. I'd have to delete the Azure project_db and create a new Azure DB built from my changed models.
And once I have changed my models on my machine (before having enabled migrations), say I've added 10 new properties to IdentityModels.cs, and I want to deploy to my existing project_db that already contains data: if I enable migrations at this point, will it migrate to Azure and preserve my data? Or do migrations have to be enabled from the beginning, before the very first publish of the DB to Azure?
I did try to publish to the Azure project_db after having changed my models (including IdentityModel.cs) on my local machine. I wasn't able to log in, even though the AspNetUser table still contained the email addresses etc. that had previously been entered. I'm assuming that's because my changed models no longer match the AspNetUser table in the Azure project_db.
Thanks for the advice.
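For reference, the migrations route the asker is weighing usually looks roughly like this; a sketch only, assuming EF 6 code first and the ApplicationDbContext from the ASP.NET Identity template, with the migrations themselves still scaffolded via Enable-Migrations / Add-Migration / Update-Database in the Package Manager Console:

    using System.Data.Entity;
    using System.Data.Entity.Migrations;

    // Migrations configuration for the (assumed) Identity context.
    public sealed class Configuration : DbMigrationsConfiguration<ApplicationDbContext>
    {
        public Configuration()
        {
            // One explicit Add-Migration per model change, instead of dropping the database.
            AutomaticMigrationsEnabled = false;
        }
    }

    public static class DatabaseConfig
    {
        // Call from Application_Start: applies any pending migrations to whatever database
        // the active connection string points at (a local DB or the Azure project_db)
        // without recreating it.
        public static void Initialize()
        {
            Database.SetInitializer(
                new MigrateDatabaseToLatestVersion<ApplicationDbContext, Configuration>());
        }
    }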

Which pieces do or do not persist in an Azure Cloud Service Web Role?

My understanding of the VMs involved in Azure Cloud Services is that at least some parts of it are not meant to persist throughout the lifetime of the service (unlike regular VMs that you can create through Azure).
This is why you must use Startup Tasks in your ServiceDefinition.csdef file in order to configure certain things.
However, after playing around with it for a while, I can't figure out what does and does not persist.
For instance, I installed an ISAPI filter into IIS by logging into remote desktop. That seems to have persisted across deployments and even a reimaging.
Is there a list somewhere of what does and does not persist and when that persistence will end (what triggers the clearing of it)?
See http://blogs.msdn.com/b/kwill/archive/2012/10/05/windows-azure-disk-partition-preservation.aspx for information about what is preserved on an Azure PaaS VM in different scenarios.
In short, the only things that will truly persist are things packaged in your cscfg/cspkg (i.e. startup tasks). Anything else done at runtime or via RDP will eventually be removed.
See "How to: Update a cloud service role or deployment": in most cases, an update to an existing deployment will preserve local data while updating the application code for your cloud service.
Be aware that if you change the size of a role (that is, the size of a virtual machine that hosts a role instance) or the number of roles, each role instance (virtual machine) must be re-imaged, and any local data will be lost.
Also if you use the standard deployment practice of creating a new deployment in the staging slot and then swapping the VIP, you will also lose all local data (these are new VMs).

Prevent azure staging environment from accessing queue messages

After swapping the latest Azure deployment from staging to production, I need to prevent the staging worker role from accessing the queue messages. I can do this by detecting whether the environment is staging or production in code, but can anyone tell me if there is any other way to prevent the staging environment from accessing and processing queue messages?
Thanks for the help!
Mahesh
There is nothing in the platform that would do this. This is an app/code thing. If the app has the credentials (for example, account name and key) to access the queue, then it is doing what it was coded to do.
Have your staging environment use the primary storage key and your production environment use the secondary storage key. When you do the VIP swap you can regenerate the storage key that your now-staging environment is using which will result in it no longer having credentials to access the queue.
Notice that this does introduce a timing issue. If you do the swap first and then change the storage keys then you run the risk of the worker roles picking up messages in between the two operations. If you change the keys first and then do the swap then there will be a second or two where your production service is no longer pulling messages from the queue. It will depend on what your service does as to whether or not this timing issue is acceptable to you.
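For this to work, the role has to build its queue client from the account name and key held in ServiceConfiguration rather than from a hard-coded connection string; a minimal sketch (the setting names and the queue name are assumptions):

    using Microsoft.WindowsAzure.ServiceRuntime;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Auth;
    using Microsoft.WindowsAzure.Storage.Queue;

    public static class QueueFactory
    {
        public static CloudQueue GetQueue()
        {
            // Credentials come from ServiceConfiguration, so a key regeneration revokes access.
            var credentials = new StorageCredentials(
                RoleEnvironment.GetConfigurationSettingValue("StorageAccountName"),
                RoleEnvironment.GetConfigurationSettingValue("StorageAccountKey"));

            return new CloudStorageAccount(credentials, useHttps: true)
                .CreateCloudQueueClient()
                .GetQueueReference("orders");   // placeholder queue name
        }
    }

Once the key that the now-staging deployment holds is regenerated, its next queue call should fail with an authentication error and it stops draining messages.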
You can actually detect which deployment slot the current instance is running in. I detailed how to do this here: https://stackoverflow.com/a/18138700/1424115
It's really not as easy as it should be, but it's definitely possible.
If this is a question of protecting your DEV/TEST environment from your PRODUCTION environment, you may want to consider separate Azure subscriptions (one for each environment). This guide from Patterns and Practices talks about the advantages of this approach.
http://msdn.microsoft.com/en-us/library/ff803371.aspx#sec29
kwill's answer of regenerating keys is a good one, but I ended up doing this:
Optional - stop the production worker role from listening to the queue by changing an appropriate configuration setting which tells it to ignore messages (a sketch of such a gate follows this answer), then rebooting the VM (either through the management portal or by killing the WaHostBootstrapper.exe)
Publish to the staged environment (this will start accessing the queue, which is fine in our case)
Swap staged <-> production via Azure
Publish again, this time to the new staged environment (old live)
You now have both production and staging worker roles running the latest version and servicing the queue(s). This is a good thing for us, as it gives us twice the capacity, and since staging is running anyway we may as well use it!
It's important that you only use staging as a method of publishing to live (as it was intended) - create a whole new environment for testing/QA purposes, which has its own storage account and message queues.
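A minimal sketch of that optional gate, assuming a boolean ServiceConfiguration setting named ProcessQueueMessages and reusing the hypothetical QueueFactory helper from the earlier sketch; the names and the polling interval are illustrative:

    using System;
    using System.Threading;
    using Microsoft.WindowsAzure.ServiceRuntime;
    using Microsoft.WindowsAzure.Storage.Queue;

    public class WorkerRole : RoleEntryPoint
    {
        public override void Run()
        {
            CloudQueue queue = QueueFactory.GetQueue();

            while (true)
            {
                // Flip this setting (and reboot or let the config change propagate)
                // to stop a slot from servicing the queue.
                bool enabled = bool.Parse(
                    RoleEnvironment.GetConfigurationSettingValue("ProcessQueueMessages"));

                if (enabled)
                {
                    CloudQueueMessage message = queue.GetMessage();
                    if (message != null)
                    {
                        // ... process the message ...
                        queue.DeleteMessage(message);
                        continue;
                    }
                }

                Thread.Sleep(TimeSpan.FromSeconds(5));
            }
        }
    }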

Azure SQL Database naming ambiguity

Our application uses an Azure SQL Database.
Apart from our local dev setup, we have two environments:
Staging (for quality assurance and client testing), and
Production (live)
The Staging database and Production database are stored on two separate SQL Database servers. In both servers, the databases are named the same.
Problem:
Since the server names are automatically and uniquely generated (a bunch of random letters), it is very difficult to distinguish Staging from Production in the Azure portal.
This also increases the possibility of pointing to the wrong database when running change scripts, queries, etc. If it was possible to alias/rename the servers, then this wouldn't be a problem, but I know that this isn't possible.
Any suggestions? What do you do in your environment?
If you want descriptive database URLs, you could use custom DNS names for your SQL Azure servers.
You could CNAME your custom domains like this:
liveDB.mydomain.com to random2323LIVE32323.database.windows.net
stageDB.mydomain.com to random43435STAGE34.database.windows.net
But there is one caveat:
You still need the real server name, because you have to log in as user@random2323LIVE32323.
Anyway, with this setup the worst case if you mix up the real server names is a rejected login.
For a detailed explanation see here
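To make the caveat concrete, a sketch of what the resulting connection string looks like; the database name, user and password are placeholders, and the host names reuse the example above:

    using System.Data.SqlClient;

    public static class SqlAzureConnectionExample
    {
        public static SqlConnection Open()
        {
            var builder = new SqlConnectionStringBuilder
            {
                DataSource = "tcp:liveDB.mydomain.com,1433",   // the friendly CNAME
                InitialCatalog = "MyAppDb",                    // placeholder database name
                UserID = "appuser@random2323LIVE32323",        // the caveat: real server name still required here
                Password = "<password>",                       // placeholder
                Encrypt = true
            };

            var connection = new SqlConnection(builder.ConnectionString);
            connection.Open();
            return connection;
        }
    }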
Although it's a bit more administrative work, I typically recommend different Live IDs for Stage vs. Prod, because I normally want a different set of administrators for my cloud services. If you name the Live IDs something like PRODAppName and STGAppName, you can make yourself co-admin on both and simply use the portal's filter capability to see only PROD or STG when you need to know which service is which. Hopefully this makes sense.

Is it possible to change the connection string during an Azure VIP Swap

I'm trying to set up Staging and Live environments in Azure (September toolkit) and I want separate Staging and Live databases, with different connection strings. Obviously I can do this with web.config transformations back in Visual Studio, but is there a way I can automate a change of connection string during a VIP swap, so that the staging site points to staging data and the live site to live data? I'd prefer not to have to deploy twice.
With the management APIs and the PowerShell Cmdlets, you can automate a large amount of the Azure platform and this can include coordinating a VIP switch and a connection string change.
This is the approach:
Add your database connection string to your ServiceConfiguration file.
Modify your app logic to read the connection string from the Azure-specific configuration using RoleEnvironment.GetConfigurationSettingValue rather than the more typical .NET ConfigurationManager.ConnectionStrings API
Implement RoleEnvironmentChanging so that your logic is notified if the Azure service configuration ever changes, and add code there to update your app's connection string, again using RoleEnvironment.GetConfigurationSettingValue (a sketch follows below).
Deploy to staging with a ServiceConfiguration setting for your "staging" DB connection string
Write a PowerShell script that will invoke the VIP switch (build around the Move-Deployment cmdlet from the Windows Azure Platform PowerShell Cmdlets 2.0) and invoke a configuration change with a new ServiceConfiguration file that includes your "production" DB connection string (see Set-DeploymentConfiguration)
Taken together, step 5 will perform the VIP switch and perform a connection string update in a single automated operation.
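A minimal sketch of steps 2 and 3; the setting name DbConnectionString is an assumption, so use whatever name you declare in your ServiceConfiguration (note that the new value is visible in the Changed event, after the Changing notification):

    using System.Linq;
    using Microsoft.WindowsAzure.ServiceRuntime;

    public class WebRole : RoleEntryPoint
    {
        // Read by data-access code instead of ConfigurationManager.ConnectionStrings.
        public static string DbConnectionString { get; private set; }

        public override bool OnStart()
        {
            DbConnectionString = RoleEnvironment.GetConfigurationSettingValue("DbConnectionString");

            // Accept configuration changes without recycling the instance.
            RoleEnvironment.Changing += (s, e) => { e.Cancel = false; };

            // Re-read the setting once the new ServiceConfiguration has been applied,
            // e.g. after the PowerShell script pushes the production connection string.
            RoleEnvironment.Changed += (s, e) =>
            {
                if (e.Changes.OfType<RoleEnvironmentConfigurationSettingChange>()
                             .Any(c => c.ConfigurationSettingName == "DbConnectionString"))
                {
                    DbConnectionString =
                        RoleEnvironment.GetConfigurationSettingValue("DbConnectionString");
                }
            };

            return base.OnStart();
        }
    }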
I don't believe anything changes as far as the role is concerned when you do a VIP swap. Rather, it alters the load balancer configuration.
So nothing happens in your app to cause it to change configuration. The only thing I can think of is that the URL changes between the two. You could implement code that chose one of two connection strings, based on the URL with which it was accessed (assuming that we're only talking about a web role), but it seems messy.
Fundamentally, I think the issue is that staging isn't a separate test environment; it's a stepping stone into production. Thus, Microsoft's assumption is that the configuration doesn't change.

Resources