Azure production environment and maintaining an SQL database with migrations

I've built a small web app that I'm thinking of deploying to Azure as a production site. I've used a code-first approach. I'm after some advice on maintaining an Azure production database when it's possible, even likely, that some of my models will still change in the early phase of deployment, depending on user testing, etc.
My workflow is probably not ideal. I've published to Azure for testing, and my data is stored in an SQL database called project_db on Azure.
But I keep making changes to my models on my local machine. Rather than using migrations - which I kind of get, but also find a bit difficult to handle - my workflow is to change my model (e.g. add a property), delete my local database, and then rebuild my solution. At least on my local machine that works without having to implement migrations: I don't need any locally stored data, and the database just seeds again.
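(For reference, the drop-and-reseed behaviour described above is typically driven by an EF initializer along these lines; ApplicationDbContext is the context name from the standard IdentityModels.cs template, and the seed contents are purely illustrative.)

    using System.Data.Entity;

    // Recreates the database whenever the model changes, then reseeds it.
    public class ProjectDbInitializer : DropCreateDatabaseIfModelChanges<ApplicationDbContext>
    {
        protected override void Seed(ApplicationDbContext context)
        {
            // Re-insert whatever test data the app needs here, then save.
            context.SaveChanges();
        }
    }

    // Registered once at startup, e.g. in Application_Start():
    // Database.SetInitializer(new ProjectDbInitializer());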
I was hoping to confirm that if I head down this path I'd have to do the same thing on Azure. That is, if I've changed my models locally, deleted my local database and rebuilt my solution, I can't just publish to Azure and expect my previously created project_db to work. I'd have to delete the Azure project_db and create a new Azure database built from my changed models.
And once I have changed my models on my machine (before having enabled migrations) - say I've added 10 new properties to IdentityModels.cs - and I want to deploy to my existing project_db that already contains data: if I enable migrations at this point, will it migrate to Azure and keep my data? Or do migrations have to be enabled from the beginning, before the very first publish of the database to Azure?
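(For reference, a minimal sketch of the migrations setup the question is about, as it typically looks in EF 6 with the ApplicationDbContext from IdentityModels.cs. The Configuration class is generated by running Enable-Migrations in the Package Manager Console, and each subsequent model change gets its own Add-Migration / Update-Database pair.)

    using System.Data.Entity;
    using System.Data.Entity.Migrations;

    // Migrations/Configuration.cs, generated by Enable-Migrations.
    internal sealed class Configuration : DbMigrationsConfiguration<ApplicationDbContext>
    {
        public Configuration()
        {
            AutomaticMigrationsEnabled = false; // keep each schema change explicit and reviewable
        }
    }

    public static class MigrationsBootstrap
    {
        // Called once at startup (e.g. from Application_Start). Intended to upgrade
        // whichever database the active connection string points at -- local or the
        // Azure project_db -- in place rather than dropping it.
        public static void Run()
        {
            Database.SetInitializer(
                new MigrateDatabaseToLatestVersion<ApplicationDbContext, Configuration>());
        }
    }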
I did try to publish to the Azure project_db after having changed my models (including IdentityModels.cs) on my local machine. I wasn't able to log in, even though the AspNetUsers table still contained the email addresses, etc. that had previously been entered. I'm assuming that's because my changed models no longer match the AspNetUsers table in the Azure project_db.
Thanks for the advice.

Related

Azure Database Migration Service stuck at "log shipping in progress"

I have a super small on-premises SQL Server database (no data, just a few single-column tables - I'm just testing) that I'm trying to migrate online to a SQL Managed Instance.
As far as I know I've configured everything as it should be: backup files are present in the file share, and the DMS is set up and can see both the SQL Server and the Managed Instance. However, it doesn't restore anything. It's stuck saying "log shipping in progress".
If I look at the managed instance itself, I can see a database has been created, and is currently in "Restoring" status.
My question is: how can I resolve this?
Maybe there are other logs I can look at, or there's some other permissions thing I don't know about, or something else?
I've tried creating a new project from scratch, but it had the same issue. And I've tried waiting... but I don't think it's working. As I mentioned, this is a database with only a few tables (maybe 4), a single column in each, and no data at all.
Looking at your image, it looks like there is no issue with DMS connecting to the backup location and uploading the backup and log files.
What is interesting is that the fields for the last backup file applied and the last applied LSN are empty, which makes me think there is some issue on the SQL MI side.
I would recommend opening a case with MS Support on this.
One other thing you can try is a user-initiated manual failover, so that the instance fails over to its secondary node, and then run the DMS job again:
https://techcommunity.microsoft.com/t5/azure-sql/user-initiated-manual-failover-on-sql-managed-instance/ba-p/1538803
Also take a look at whether there is any blocking going on in the SQL MI.
You can use sp_whoisactive for this; the latest version works on Azure SQL MI:
https://github.com/amachanic/sp_whoisactive/releases/tag/v11.35
You need to assign permissions to the service principal running the migration service at the subscription level:
az role assignment create --assignee [YOUR SERVICE PRINCIPAL] --role contributor

Achieving Master Data deduplication on Azure

I am looking at achieving Master Data deduplication based on match percentages in Azure SQL DB - something equivalent to Master Data Services / DQS (Data Quality Services) in SQL Server 2012:
https://channel9.msdn.com/posts/SQL11UPD05-REC-06
Broadly, I'm looking for controls over match rules (exact, close match, etc.), handling of dependencies, and an audit trail (undo capability, etc.).
I reckon this must be available in the Azure cloud if it's available in SQL Server. Could you please point me to how I can get this done on Azure SQL DB?
Please note: I am NOT looking for data sources like MelissaData or D&B that are listed on the Azure Marketplace.
Master Data Services is not just a database process: it also centrally involves a website component, which still (as of 2021) requires some Windows server running IIS.
This can be an Azure Virtual Machine (link to documentation) but there is no serverless offering for this at this time.
The database itself can be hosted on an Azure SQL Managed Instance (link to documentation) but not on a standalone Azure SQL DB, as far as I can tell. This is presumably because some of the essential components of MDS sit outside the database, much like other services like SSIS are more than just a database.
Data Quality Services is a similar story: it uses three databases (link to documentation) and seemingly some components outside the databases, so it wouldn't be possible to deploy in standalone Azure SQL DBs. It may be possible to run it on a Managed Instance; I couldn't find a clear answer to that. And again, there is no fully serverless offering at this time.
Of course, all of this can easily be run via IaaS (Infrastructure as a Service) using an Azure virtual machine running SQL Server.

Azure SQL Database naming ambiguity

Our application uses an Azure SQL Database.
Apart from our local dev setup, we have two environments:
Staging (for quality assurance and client testing), and
Production (live)
The Staging and Production databases are stored on two separate SQL Database servers, and on both servers the databases have the same name.
Problem:
Since the server names are automatically and uniquely generated (a bunch of randomly generated letters), it is very difficult to distinguish between Staging and Production. Screenshot from the Azure portal below:
This also increases the possibility of pointing at the wrong database when running change scripts, queries, etc. If it were possible to alias or rename the servers, this wouldn't be a problem, but I know that isn't possible.
Any suggestions? What do you do in your environment?
If you want human-readable database URLs, you could use custom DNS names for your SQL Azure servers.
So you could CNAME your custom domains like this:
liveDB.mydomain.com to random2323LIVE32323.database.windows.net
stageDB.mydomain.com to random43435STAGE34.database.windows.net
But there is one caveat:
You still need the real server name, because you have to log in as user@random2323LIVE32323.
Anyway, if you use this scenario, the worst case is a rejected login if you mix up the real server names.
For a detailed explanation see here
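A minimal sketch of what that looks like from client code (the database name, user and password are placeholders; the domain and server names mirror the CNAME example above):

    using System.Data.SqlClient;

    class ConnectionDemo
    {
        static void Main()
        {
            var builder = new SqlConnectionStringBuilder
            {
                DataSource = "tcp:liveDB.mydomain.com,1433", // the friendly CNAME
                InitialCatalog = "MyAppDb",                  // placeholder database name
                // The caveat: the login still carries the real server name after '@'.
                UserID = "appUser@random2323LIVE32323",
                Password = "<password>",
                Encrypt = true
            };

            using (var connection = new SqlConnection(builder.ConnectionString))
            {
                connection.Open();
            }
        }
    }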
Although it's a bit more administrative work, I typically recommend using different Live IDs for Stage vs. Prod, because I normally want a different set of administrators for my cloud services. If you name the Live IDs PRODAppName and STGAppName, you can make yourself the co-admin on both and simply use the Filter capability of the portal to see only PROD or STG when you need to know which service is which. Hopefully this makes sense.

Azure seamless upgrade when database schema changes

Let's say I have production and staging deployments, each using its own (SQL Azure) database. If the schema in staging has changed and needs to be deployed to production, is there a defined way of upgrading the production database (without downtime)?
E.g. if I swap the VIPs between staging and production (and at the same time somehow automate changing the connection strings), what is the best way to automate upgrading the SQL Azure database?
My thought was to spot the environment change in RoleEnvironmentChanging (though I'm not sure a VIP swap even fires RoleEnvironmentChanging) and run the SQL script against the to-be-production database at that point; however, I need to make sure that script is only run once, and there will be multiple instances transitioning.
So you have a production deployment with its own SQL Azure database and a staging deployment with its own SQL Azure database, and the two applications have connection strings pointing to two different databases.
Your first requirement is to change the database schema on the fly when you swap the deployments, and I have the following concern with that design:
If you write any code inside the role to perform a "once and only once" action, there is no guarantee that it will happen only once. It can happen multiple times, depending on several scenarios, such as:
1.1 If your VM ever needs to be reimaged by the system, the code will do exactly the same thing it did before the reimage.
1.2 You might try to guard against it running at role start or VM start via the registry or some external key, but there is no foolproof mechanism to guarantee that.
Because of this, I would suggest that when you are ready to swap your deployments you:
2.1 Run the script to update the production SQL Azure schema (this does not touch the application deployment itself, so it causes no downtime on that side, though you will know better than I how a schema update in progress affects your running application).
2.2 Change the configuration of the staging deployment to point to the production SQL Azure database (this causes no downtime for the production application at all).
2.3 Swap the deployments (this also causes no application downtime).
So even when you manually update the DB schema and then swap the deployments, there is no significant downtime beyond the time taken by the database to update the schema.
I have been looking all over for best practices on this and have found none. So far this is what I do:
Deploy to staging (Production is already running)
Copy an app_offline.htm file to the web root on Production. This blocks users from using the application, and thus blocks changes to the database. I am using only one instance.
Back up the database.
Run DDL, DML and SP scripts. This updates the production database to the latest schema.
Test application on Staging.
Swap the VIPs. This brings the application back online, since the app_offline.htm file is not present on Staging (the new Production).
If something goes wrong, swap the VIPs again, restore the database, and delete app_offline.htm.
With this approach I have a downtime of roughly 5 minutes (my database is small), which is better than waiting for a new VM to be created while users get errors.

Is it possible to change the connection string during an Azure VIP Swap

I'm trying to set up Staging and Live environments in Azure (September toolkit) and I want separate Staging and Live databases with different connection strings. Obviously I can do this with web.config transformations back in Visual Studio, but is there a way I can automate a change of connection string during a VIP swap, so that the staging site points to staging data and the live site to live data? I'd prefer not to have to deploy twice.
With the management APIs and the PowerShell Cmdlets, you can automate a large amount of the Azure platform and this can include coordinating a VIP switch and a connection string change.
This is the approach:
1. Add your database connection string to your ServiceConfiguration file.
2. Modify your app logic to read the connection string from the Azure-specific configuration using RoleEnvironment.GetConfigurationSettingValue rather than the more typical .NET ConfigurationManager.ConnectionStrings API.
3. Handle the RoleEnvironment.Changing/Changed events so that your logic is notified if the Azure service configuration ever changes, and update your app's connection string there, again using RoleEnvironment.GetConfigurationSettingValue (see the sketch below).
4. Deploy to staging with a ServiceConfiguration setting for your "staging" DB connection string.
5. Write a PowerShell script that invokes the VIP swap (built around the Move-Deployment cmdlet from the Windows Azure Platform PowerShell Cmdlets 2.0) and then applies a configuration change with a new ServiceConfiguration file that includes your "production" DB connection string (see Set-DeploymentConfiguration).
Taken together, step 5 performs the VIP swap and the connection string update in a single automated operation.
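A minimal sketch of steps 2 and 3, assuming the setting is named "DbConnectionString" in ServiceConfiguration.cscfg (the setting name is illustrative):

    using System;
    using System.Linq;
    using Microsoft.WindowsAzure.ServiceRuntime;

    public class WebRole : RoleEntryPoint
    {
        // Read at startup and re-read whenever the service configuration changes.
        public static string DbConnectionString { get; private set; }

        public override bool OnStart()
        {
            DbConnectionString = RoleEnvironment.GetConfigurationSettingValue("DbConnectionString");

            // Don't recycle the role for a connection string change; just pick up the new value.
            RoleEnvironment.Changing += (s, e) => { e.Cancel = false; };
            RoleEnvironment.Changed += (s, e) =>
            {
                if (e.Changes.OfType<RoleEnvironmentConfigurationSettingChange>()
                             .Any(c => c.ConfigurationSettingName == "DbConnectionString"))
                {
                    DbConnectionString = RoleEnvironment.GetConfigurationSettingValue("DbConnectionString");
                }
            };

            return base.OnStart();
        }
    }

Data-access code then reads WebRole.DbConnectionString instead of ConfigurationManager.ConnectionStrings, so the configuration change pushed by Set-DeploymentConfiguration in step 5 takes effect without a redeploy.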
I don't believe anything changes as far as the role is concerned when you do a VIP swap. Rather, it alters the load balancer configuration.
So nothing happens in your app to cause it to change configuration. The only thing I can think of is that the URL differs between the two. You could implement code that chooses one of two connection strings based on the URL with which it was accessed (assuming we're only talking about a web role), but it seems messy.
Fundamentally, I think the issue is that staging isn't a separate test environment; it's a stepping stone into production. Thus, Microsoft's assumption is that the configuration doesn't change.
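For completeness, a sketch of what that host-based selection could look like in a web role (the host prefix and connection string names are made up for illustration):

    using System;
    using System.Configuration;
    using System.Web;

    public static class ConnectionStringSelector
    {
        // Picks a connection string based on the host the current request came in on.
        public static string Current
        {
            get
            {
                var host = HttpContext.Current.Request.Url.Host;
                var name = host.StartsWith("myapp-staging", StringComparison.OrdinalIgnoreCase)
                    ? "StagingDb"
                    : "ProductionDb";
                return ConfigurationManager.ConnectionStrings[name].ConnectionString;
            }
        }
    }

It only works while a request is in flight (HttpContext.Current), which is part of why the approach feels messy.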
