I have a very small on-premises SQL Server database (no data, just a few single-column tables; I'm only testing) that I'm trying to migrate online to a SQL Managed Instance.
As far as I can tell I've configured everything correctly: backup files are present in the file share, and DMS is set up and can see both the SQL Server and the Managed Instance. However, it doesn't restore anything; it's stuck saying "log shipping in progress":
If I look at the managed instance itself, I can see a database has been created, and is currently in "Restoring" status.
My question is: how can I resolve this?
Maybe there are other logs I can look at, or there's some other permissions thing I don't know about, or something else?
I've tried creating a new project from scratch, but it had the same issue. And I've tried waiting... but I don't think it's working. As I mentioned, this is a DB with only a few tables (maybe 4), a single column in each table, no data at all.
Looking at your image, it looks like there is no issue with DMS connecting to the backup location and uploading the backup and log files.
What is interesting is that the fields for the last backup file applied and the last applied LSN are empty, which makes me think there is some issue on the SQL MI side.
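One quick thing to check first is whether a restore is actually making progress on the managed instance. A minimal sketch, assuming the DMS-driven restore is visible in sys.dm_exec_requests and your login has VIEW SERVER STATE:

-- Run against the managed instance: look for an active RESTORE and how far along it is
SELECT session_id, command, percent_complete, wait_type, start_time
FROM sys.dm_exec_requests
WHERE command LIKE 'RESTORE%';

If nothing shows up, or percent_complete never moves, that points at the restore side rather than the upload side.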
I would recommend opening a case with Microsoft Support for this.
One other thing you can try is a user-initiated manual failover, which moves the instance over to the secondary node; then run the DMS job again.
https://techcommunity.microsoft.com/t5/azure-sql/user-initiated-manual-failover-on-sql-managed-instance/ba-p/1538803
Also take a look at whether anything on the SQL MI is being blocked.
You can use sp_whoisactive for that; the latest version works on Azure SQL MI:
https://github.com/amachanic/sp_whoisactive/releases/tag/v11.35
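Once the procedure is installed on the instance, something like the following (a sketch using parameters from the sp_whoisactive documentation) surfaces blocking chains:

-- List active sessions, with blocking leaders sorted to the top
EXEC sp_WhoIsActive
    @find_block_leaders = 1,
    @sort_order = '[blocked_session_count] DESC';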
You need to assign the Contributor role to the service principal running the migration service, at the subscription level (newer Azure CLI versions also require an explicit --scope):
az role assignment create --assignee [YOUR SERVICE PRINCIPAL] --role Contributor --scope /subscriptions/[YOUR SUBSCRIPTION ID]
I have recently started receiving "Can not connect to the database in its current state." when trying to connect to my Azure SQL database. As far as I am aware nothing has changed, and I get the same error using the query tool in the Azure portal itself, with either SQL authentication or Azure AD authentication.
The server status is "Online" and the troubleshooter says that it cannot find any issues. I have a spending limit on the account but have not hit it.
I can't find any way of "restarting" the SQL instance, or any logs that indicate what might be wrong. I can't raise a support ticket, as the troubleshooter suggests that nothing is wrong.
Does anyone have any insight as to what might cause this or what I might try to get it resolved?
Edit: I now believe this error code indicates a billing issue - despite the billing appearing fine, the subscription is disabled. All of the help articles indicated that code changes were needed to connect reliably, but I don't think that is the case.
Thanks in advance.
Are you using Key Vault with Azure SQL Database? Make sure permissions to the key vault are properly set.
Another possible reason is that the database may be in a recovering state. In that case you may have to create a support ticket here.
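A quick way to check for that, assuming you can still connect to the master database on the logical server, is to look at the reported database state:

-- Run against the master database of the logical server
SELECT name, state_desc
FROM sys.databases;
-- States such as RECOVERING, RECOVERY_PENDING or SUSPECT indicate the database is not usable yet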
I have had an Azure SQL DB point-in-time restore running for two days. I want to cancel it as I think there is an issue. I can see the DB restoring in SSMS but can't find the deployment in the Azure portal. Does anyone know how to cancel it? I have tried using the Azure CLI but I can't see the resource.
It's called Azure Hiccups; it happened to me yesterday in the Switzerland West region between 10:20 and 10:40.
I re-ran it and everything was fixed.
If I check the Activity Log I can see the error:
But if I browse Service Health, it says everything was fine:
What to do in case of Azure Hiccups:
FIX: Re-run the task, hopefully it will fix the issue, like when you hit an old TV with your fist.
PREVENT: You can try to create an Activity Log alert but once again it will be based on Service Health (which says that everything is good) and not on the actual Activity Log. So you will probably miss issues like this and will discover the problem 24h later.
POST-MORTEM: You can take a screenshot of the failed task/service in the Activity Log, show it to Microsoft and ask for a refund if possible. For the future you can check the current status of Azure on the official Status page and subscribe to the RSS feed. You can also browse the Azure Status History. But as I said, neither of the last two reports these Azure hiccups, so the screenshot of the Activity Log is still the only proof that a tree fell in the forest yesterday.
As Microsoft's SLA says that high availability for Azure SQL Database and SQL Managed Instance is 99.99% over the year, you can start collecting those screenshots and opening tickets with their support.
After I attempted to drop the database this morning, the operation status showed as unsuccessful, but the restore was finally cancelled about 8 hours after the drop attempt.
Found a solution: just create a new database with the same name. The restoring one will be replaced by the newly created one, and then you can delete it.
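In T-SQL that would look roughly like this, run against the master database (the database name below is just a placeholder for the stuck restoring database, and this assumes the replace-then-delete behaviour described above):

-- Connect to master on the logical server
CREATE DATABASE [YourStuckDb];  -- same name as the database stuck in the restoring state
-- Once it has replaced the stuck restore, drop it if you no longer need it
DROP DATABASE [YourStuckDb];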
I have started using my free Azure account and found out that I cannot create a SQL Managed Instance. I get a cryptic error message telling me to change the subscription or region, with no clear information. The list of free services does not include SQL MI, but that does not mean much: SQL Dedicated Pools and Synapse are also not listed, yet when I try to create them the portal does not complain, even though I have not clicked the final Create button yet.
SQL Managed Instance is only available on certain subscription types. See:
You probably have an Azure Trial subscription. If not, you might also want to check your region, as there are regional limitations, as mentioned in the article above.
I've built a small web app that I'm thinking of deploying as a production site to Azure. I've used a code-first approach. I'm after some advice on maintaining an Azure production DB when it is possible/likely that some of my models will still change in the early phase of deployment, depending on user testing, etc.
My workflow is probably not ideal. I've published to Azure for testing, and my data is stored in a SQL DB called project_db on Azure.
But I keep making changes to my models on my local machine. Rather than using migrations - which I kind of get, but also find a bit difficult to handle - my workflow is: I change my model, e.g. by adding a property, then delete my local database and build my solution again. At least on my local machine that works without having to implement migrations, and since I don't need any locally stored data it just seeds again.
I was hoping to confirm that if I head down this path I'd have to do the same thing on Azure. That is, if I've changed my models locally, deleted my local DB and then built my solution again locally, I can't just publish to Azure and expect my previously created SQL project_db to work. I'd have to delete the Azure project_db and create a new Azure DB that would be built based on my changed models.
And once I have changed my models on my machine (before having enabled migrations) - say I've added 10 new properties to IdentityModels.cs and I want to deploy to my existing project_db that already contains data - if I enable migrations at this point, will it migrate to Azure and maintain my data? Or do migrations have to be enabled from the beginning, before the very first publishing of the DB to Azure?
I did try to publish to the Azure project_db after having changed my models (including IdentityModel.cs) on my local machine. I wasn't able to log in, even though the AspNetUser table still contained the email addresses, etc. that had previously been entered. I'm assuming that's an issue with my changed models no longer matching the Azure AspNetUser table in project_db.
Thanks for the advice.
Let's say I have a production and a staging deployment, each using its own (SQL Azure) database. If the schema in staging has changed and needs to be deployed to production, is there a defined way of achieving the database upgrade on the production database (without downtime)?
E.g. if I do a VIP swap of staging <-> production (and at the same time somehow automate changing the connection strings), what is the best process for automating the upgrade of the SQL Azure database?
My thought would be to spot the environment change in RoleEnvironmentChanging (though I'm not sure a VIP swap even fires RoleEnvironmentChanging) and run the SQL script against the to-be-production database at that point; however, I need to make sure that script is only run once, and there will be multiple instances transitioning.
So you have a production deployment with its own SQL Azure database and a staging deployment with its own SQL Azure database. In this situation both applications have connection strings pointing to two different databases.
Your first requirement is to change the database schema on the fly when you swap the deployments, and I have the following concern with that design:
If you write any code inside the role to perform a "once and only once" action, there is no guarantee that it will actually happen only once. It can happen multiple times, depending on several scenarios, such as:
1.1 Whenever your VM needs to be reimaged by the system, this code will do exactly the same thing it did during the last reimage.
1.2 You might try to prevent it from running at role start or VM start via some registry entry or external flag, but there is no foolproof mechanism to guarantee that.
Because of this, I would suggest that when you are ready to swap your deployments you:
2.1 Run the script to update the production SQL Azure schema (this has no impact on the application itself because it is not touched, but you know best how updating the database schema will affect your application; a sketch of an idempotent version of such a script follows this answer).
2.2 Change the configuration in the staging deployment to point to the production SQL Azure database (this causes no production application downtime at all).
2.3 Swap the deployments (this also causes no application downtime).
So even when you manually update the DB schema and then swap the deployments, there is no significant downtime besides the time taken by the database to update the schema.
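As referenced in 2.1, here is a rough sketch of what an idempotent schema-update script could look like, so that running it more than once (for example, from multiple transitioning instances) is harmless. The table and column names are made up for illustration:

-- Hypothetical example: add a column only if it does not already exist
IF NOT EXISTS (
    SELECT 1
    FROM sys.columns
    WHERE object_id = OBJECT_ID(N'dbo.Orders')
      AND name = N'TrackingNumber'
)
BEGIN
    ALTER TABLE dbo.Orders ADD TrackingNumber nvarchar(50) NULL;
END;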
I have been looking for best practices on this all over the place and have found none. So far this is what I do:
Deploy to staging (Production is already running)
Copy the app_offline.htm file to the web root on Production. This blocks users from using the application, and therefore blocks changes to the database. I am using only one instance.
Back up the database (a sketch of one way to do this in SQL Azure follows this answer).
Run DDL, DML and SP scripts. This updates the production database to the latest schema.
Test application on Staging.
Swap VIP. This brings the application back online since the app_offline.htm file is not present on Staging (new Production).
If something goes wrong, swap VIP again, restore database and delete app_offline.htm.
With this approach I have a downtime of roughly five minutes (my database is small), which is better than waiting for the VM to be created while users get errors.
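For the backup step, since a native BACKUP statement is not available for SQL Azure databases, one option (a sketch, with a made-up database name) is to create a copy of the database before running the scripts, and fall back to that copy if something goes wrong:

-- Run against the master database on the same logical server
-- Creates a transactionally consistent copy to fall back on if the scripts fail
CREATE DATABASE MyAppDb_Backup AS COPY OF MyAppDb;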