As admin for an Azure subscription I am getting emails reporting:
Automated SQL Export failed for {server:database} at 9/5/2013 12:00:11 AM. The temporary database copy to export from could not be made.
I deleted this server, without explicitly removing the automated export configuration.
First question: how do I tell Azure to stop trying to do the export?
Second question: newbie mistake (if so please let me know what I failed to do) or bug?
Thanks!
I believe that this might be because the schema of the database has changed, which has resulted in an object (stored procedure, view, function) becoming invalid.
This problem / solution is elaborated on in another question:
Automated azure sql export fails
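If you want to check whether an invalid object really is the culprit before the next export runs, one rough way (just a sketch run against the source database) is to look for references inside modules that no longer resolve to an existing object:
-- List references inside stored procedures, views and functions
-- that can no longer be resolved to an existing object.
SELECT OBJECT_SCHEMA_NAME(d.referencing_id) AS referencing_schema,
       OBJECT_NAME(d.referencing_id)        AS referencing_object,
       d.referenced_entity_name             AS unresolved_reference
FROM sys.sql_expression_dependencies AS d
WHERE d.referenced_id IS NULL
  AND d.is_ambiguous = 0;
Anything returned here is a candidate for the object the export is tripping over.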
We have a release pipeline in Azure DevOps that deploys a database project to our Azure SQL Database via the Azure SQL Dacpac Task. Everything has been working fine but suddenly yesterday the pipeline started failing with the following error:
##[error]*** An error occurred during deployment plan generation. Deployment cannot continue.
##[error]Error SQL72018: Permission could not be imported but one or more of these objects exist in your source.
As far as I know, nothing has changed on the database side or in the pipeline. We also ruled out that it could be an issue with the specific dacpac file because previously successful releases now fail with the same error.
I searched extensively for the SQL72018 error but didn't really find any answers as to what would be causing it, so I'm wondering whether there was some Azure DevOps task update or something else we could be missing.
Not sure what would have caused this to break out of nowhere like that.
It does work if we add the /p:IgnorePermissions=true parameter to the task, but we have never needed that before.
UPDATE:
Wanted to update this, as I was able to gather a little more information by adding the /Diagnostics:True parameter to the pipeline task to print diagnostic info from SqlPackage.
When I added that, I also see this error:
Microsoft.Data.Tools.Diagnostics.Tracer Error: 1 : 2022-04-05T08:38:37 : Error detected when reverse engineering the database. Severity:'Warning' Prefix:'' Error Code:'0' Message:The permission 'VDP ' was not recognized and was not imported. If this problem persists, contact customer support.
So it looks like some "VDP" permission is what is causing the issue, but we don't know what that permission is for or where it came from as it's not in the database project.
We finally got to the bottom of this. It turns out a new permission was added to the database the day before the pipeline started to fail. The database permission that caused the issue was VIEW DATABASE PERFORMANCE STATE. That was the VDP permission that SQLPackage.exe was complaining about.
We're not sure why that particular permission caused the error, since we manage all of our database permissions outside of the database project; other permissions hadn't caused issues prior to this one.
Since we are managing permissions outside of our database project, the resolution was to add the /p:IgnorePermissions=true SqlPackage parameter to the pipeline permanently. This was confirmed as the appropriate solution by a Microsoft representative after we put in a ticket.
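For anyone who would rather remove the offending grant than ignore permissions, a rough sketch for locating it (assuming it was granted at the database level; SomeUser below is a placeholder for whatever grantee the query returns) is:
-- Find who has been granted the permission SqlPackage abbreviates as 'VDP'.
SELECT pr.name AS grantee, pe.permission_name, pe.state_desc
FROM sys.database_permissions AS pe
JOIN sys.database_principals AS pr
    ON pe.grantee_principal_id = pr.principal_id
WHERE pe.permission_name = 'VIEW DATABASE PERFORMANCE STATE';

-- Revoke it for the grantee found above (SomeUser is a placeholder).
REVOKE VIEW DATABASE PERFORMANCE STATE FROM SomeUser;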
It seems you have a spurious/orphan permission in your target database - as mentioned in this post How could our database be causing SqlPackage to fail? (SQL72018).
I'm using Entity Framework with a code first model; I've got my InitialCreate migration set up and working locally, I can run code against my database context, and everything works.
But when I deploy my project to Azure, I just get a connection string error ("Format of the initialization string does not conform to specification starting at index 0.").
I can't seem to find the options in the Publish dialog to create the Azure database. Do I have to create the database separately and hook them up manually? If so, what exact process should I follow, and does the database need to have contents?
I thought Microsoft was making a big deal that this could all be done in a single deploy step, but that doesn't seem to be the case from my current experience.
When you publish your project, there is an option for the code first migration in the Settings tab of the publish dialog. It will automatically show your data context and let you set the remote connection string, and it will add a section to web.config specifying the data context and the migration class to run during the migration process.
It will also let you choose whether or not to run the code first migration.
You can also take a backup from the dev database, clear the data, and upload it to Azure SQL DB; this way the code first data context will check at first connection and find that the code and the database match.
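For reference, EF Code First (assuming EF6 with Migrations enabled, which is what the publish dialog wires up) decides whether the code and database match by reading the __MigrationHistory table it maintains in the target database, so you can see what it thinks has been applied with a query like:
-- Migrations EF believes have already been applied to this database.
SELECT MigrationId, ContextKey, ProductVersion
FROM dbo.__MigrationHistory
ORDER BY MigrationId;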
I have an automated export running each night through the Portal which backs up my Azure database to blob storage as a .bacpac file, and up until Friday it had been working successfully.
Each night I get an email error saying:
Automated SQL Export failed for myServer:myDatabase at 5/30/2016 11:35:39 PM. The temporary database copy was made, but this copy could not be exported to the .bacpac file.
Some tutorials suggest logging into the Portal and doing it manually. When I do this it works successfully and I am able to see the file without error, but on the following night the scheduled process fails again (performing a manual backup doesn't make it recover). Is there a way to get more information on why it is failing?
In the new Portal you can find more information via the audit log; database-level operations, including import/export, are logged there.
OK, so after further analysis I was able to pinpoint the root cause of my issue: a stored procedure.
I had a stored procedure that explicitly referenced my database by name. Whenever Azure takes the automated export, it first makes a copy of the database under a temporary name, and at that point the stored procedure "breaks" because it still references the original database name.
Fixing the stored procedure has resumed the automatic backups.
An example of a statement the proc was executing:
SELECT Name FROM MyDatabase.dbo.MyTable
This should be rewritten as follows to make it exportable:
SELECT Name FROM dbo.MyTable
Note that while I was able to obtain a more meaningful error using a local copy of SQL Server Management Studio, no error was shown in the Azure Portal.
Hopefully this will help someone else.
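One way to track down any remaining self-referencing objects (a sketch; MyDatabase is a placeholder for your own database name) is to search the module definitions directly:
-- Stored procedures, views and functions whose definition still contains
-- a three-part reference to the database itself.
SELECT OBJECT_SCHEMA_NAME(m.object_id) AS schema_name,
       OBJECT_NAME(m.object_id)        AS object_name
FROM sys.sql_modules AS m
WHERE m.definition LIKE '%MyDatabase.%';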
I've been using the export database feature on Azure for the past year without problems.
However during the past week the feature seems to error more than it actually works. On some occasions the export takes an hour or more (where it used to be a minute or less) and on other occasions it errors without completing the job. On the Info Screen in Azure it says "Status - Pending" over and over again.
Does anybody know what I should do? Should I create another container? Could there be something else wrong?
Thanks for reading
It seems this happens from time to time due to Azure problems. For anybody who encounters this, a perfectly adequate workaround is to use SQL Server Management Studio: right-click the database and choose Tasks > Export Data-Tier Application.
Here is another post on the same issue:
http://social.msdn.microsoft.com/Forums/windowsazure/en-US/bb66505e-7008-4555-b8cd-eec9ae25066f/unable-to-export-db-via-portal?forum=ssdsgetstarted
After installing a custom CLR object, SQL Server Data Tools (SSDT) in VS2012 will not allow an update. The error is "Source schema drift detected. Press Compare to refresh." After refreshing, the same thing happens.
Things I have tried:
In settings, I limited the compared objects to just Stored Procedures.
Settings -> General -> Block on possible data loss -> tried both on and off.
This sort of loop can also be caused by a referenced SSDT project failing to build. The referenced project may be missing, unloaded, or have an error which prevents the compare from completing.
This is not an answer but a clue to deal with this problem.
I was trying to update a column from varchar(200) to varchar(MAX) and got this problem as well. So I logged in to the server and tried to update the database manually via SQL Server Management Studio, which was installed there, and I got this error:
"Saving changes is not permitted. The changes you have made require the following tables to be dropped and re-created. You have either made changes to a table that can't be re-created or enabled the option Prevent saving changes that require the table to be re-created."
It seems that re-creating a table is considered so dangerous that "Block on possible data loss" cannot handle it. So I think only if we can work around this LOCAL warning could we update the database REMOTELY.
But why does changing (200) to (MAX) lead to re-creating the table? It doesn't make any sense. I tried (200) to (1000), and that didn't work either. This might be the key to the problem.
And if you make the same change in Server Explorer in VS instead of SQL Server Management Studio, it works. Again, why?
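For what it's worth, widening a varchar column doesn't actually require dropping and re-creating the table; the SSMS table designer just scripts it that way. A plain ALTER statement (hypothetical table and column names here) sidesteps the warning entirely:
-- Widen the column in place; keep the column's existing NULL/NOT NULL setting.
ALTER TABLE dbo.MyTable
    ALTER COLUMN MyColumn varchar(MAX) NULL;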
This can happen when a db user "changes".
The following rather scary forum page recounts issues where foreign hackers were trying to brute-force access to the "sa" db user, with each attempt changing the sa login's modify_date timestamp (which is seen as schema drift):
https://social.msdn.microsoft.com/Forums/sqlserver/en-US/5c22a7b4-7a82-4717-a118-2475bc62705b/schema-compareupdate-error-target-schema-drift-detected?forum=ssdt
It is also mentioned there that you can query the sa user a few times to see if this is happening to you:
SELECT * FROM sys.server_principals WHERE principal_id=1
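Assuming it really is the login's timestamp that is drifting, watching modify_date between runs makes the change obvious:
-- If modify_date keeps advancing between runs, something is repeatedly touching the sa login.
SELECT name, modify_date
FROM sys.server_principals
WHERE principal_id = 1;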
I am currently experiencing the same issue (the sa user is being modified; I don't know anything about hackers yet) and have yet to find a solution.
Edit - I turned on logging in Windows Firewall via Properties > Logging, and we set up a blocking rule on port 3071, which had a lot of unexplained traffic. Then the problem went away.
I tried running VS as an administrator, and it worked.