Publish to Azure from VS2013 with 'Execute Code First Migrations' checked overwrote remote database - azure

During a regular publish to Azure with Web Deploy, I had 'Execute Code First Migrations' checked, as I had done before.
But this time 'Use this connection string at runtime' was also checked, and I published without noticing it. As a result, the remote Azure database was wiped and is now seeded with what looks like a default database: ASP.NET membership tables and a migrations history table that only contains the migrations related to the Identity tables.
The production data and database structure are gone, and I had not yet set up backups on Azure (I am doing that now).
Is there a way to restore the database from some sort of automatic backup on Azure? I have the Web edition with the 1 GB size selected, and I do not see any options.
This page suggests that the Web edition does not get any daily backup, and also that the Web edition is discontinued as of April, although I still have it: http://msdn.microsoft.com/en-us/library/jj650016.aspx
And a second question: do I understand correctly what happened? It seems extremely dangerous that it is this easy to wipe out a whole database, and Visual Studio shows no warning, nor does publishing to Azure notify you of anything. Is there anything that can be done to prevent dumb but very costly errors like this?
TIA
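
For context on the mechanism: checking 'Execute Code First Migrations (runs on application start)' in the publish wizard effectively configures a MigrateDatabaseToLatestVersion initializer for the context against the published connection string, so the app migrates whatever database that connection string points at. The sketch below (EF6) is only an illustration of that behaviour plus a defensive migrations configuration that makes destructive changes fail loudly instead of running silently; ApplicationDbContext and Configuration are placeholder names, not taken from the project above.

// A sketch (EF6); all names here are placeholders.
using System.Data.Entity;
using System.Data.Entity.Migrations;

public class ApplicationDbContext : DbContext { }

internal sealed class Configuration : DbMigrationsConfiguration<ApplicationDbContext>
{
    public Configuration()
    {
        // Only run explicit, code-based migrations.
        AutomaticMigrationsEnabled = false;
        // Throw instead of silently applying changes that would drop data.
        AutomaticMigrationDataLossAllowed = false;
    }
}

public static class MigrationsBootstrap
{
    public static void Enable()
    {
        // Roughly what 'Execute Code First Migrations' wires up via web.config:
        // migrate the target database to the latest migration on first use.
        Database.SetInitializer(
            new MigrateDatabaseToLatestVersion<ApplicationDbContext, Configuration>());
    }
}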

Related

Publishing ASP.NET MVC to Azure with SQLite - data fetching fails

I just created a simple ASP.NET MVC project to list blood pressure measurements. I opted to use SQLite as the database because it is (supposedly) embedded into the project, eliminating the need for an external database, which is expensive and is the reason I chose SQLite. That way I only need to host the web app, which is free if I choose the free tier, F1.
Publishing through VS2022 succeeds and the app renders correctly, except that it shows none of the measurements, which makes the app ((no) pun intended) useless, at least as a cloud app. I have done some research and changed the publishing settings a couple of times; this is how they look right now:
Configuration: Release
Target Framework: net6.0
Deployment Mode: Self-contained
Target Runtime: win-x86
File Publish Options: None of the options chosen
Databases: Default Connection - Use this connection string at runtime:
=> Data source=bloodpressuremeasurements.db
Entity Framework Migrations: BloodPressureContext (name of the DbContext)
- Apply this migration on publish: NOT chosen, since it gave me an exception and publish failed
Site Extension Options: Install ASP.NET Core Logging Integration Site Extension
- NOT chosen
I also tried changing the option for the db file to Copy To Output Directory: Copy always.
That didn't change a thing. What am I missing?
The website now works as intended, with all the data shown. It looks like the problem stemmed from scaffolding read and write methods, which made Visual Studio 2022 pull in EntityFrameworkCore.SqlServer, which is not what I wanted, since I'm using SQLite.
That in turn created some service dependencies under Connected Services, one of them being a SQL Server entry. It also appeared under the Publish menu, and seems to have caused the tooling to treat the connection string as a SQL Server database connection.
I created a new app and copied the code over from the first one. I was careful not to scaffold, as I only need a Get method to show all measurements; I need none of the other CRUD methods (Post, Delete, or Update). I will add new measurements by running the app locally again and reading them from a CSV file (as I did in the beginning), then publish the app anew with the updated SQLite database.
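
For reference, a minimal sketch of registering the context against SQLite rather than SQL Server in Program.cs; BloodPressureContext and the connection string come from the question above, everything else is an assumption (requires the Microsoft.EntityFrameworkCore.Sqlite package):

using Microsoft.EntityFrameworkCore;

var builder = WebApplication.CreateBuilder(args);

// Register EF Core against SQLite. Scaffolding can add UseSqlServer here
// instead, which makes the published app look for a SQL Server database.
builder.Services.AddDbContext<BloodPressureContext>(options =>
    options.UseSqlite("Data source=bloodpressuremeasurements.db"));

builder.Services.AddControllersWithViews();

var app = builder.Build();
app.MapDefaultControllerRoute();
app.Run();

// Minimal stand-in for the question's context; the real one has DbSets.
public class BloodPressureContext : DbContext
{
    public BloodPressureContext(DbContextOptions<BloodPressureContext> options)
        : base(options) { }
}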

The best way to publish a new version to Azure apps/services?

Say I have one Azure web app which calls one Azure API service. Now I need to update both applications to a newer version, in the most extreme case: the database is not compatible, and the API has changes to existing method signatures that are not compatible with old-version calls either. I use Visual Studio's publish profile to update directly. The problem I've been facing is that during the publish process, even though it only takes a few seconds, there are still active end users doing things on the web app and making API calls. I've personally seen the results in such situations be unstable and unpredictable, and the saved data can simply be corrupt.
So is there a better way to achieve some sort of 'flash update' that causes absolutely no side effects for end users? Thanks.
You should look at a different deployment strategy. First update the database, perhaps by making new columns nullable so both versions can work with it; deploy the new API next to the current one; validate it; then switch the traffic from the current version to the new one. Do the same for the website. This is the blue-green deployment strategy: it requires some more effort but solves the downtime and error problems. https://www.martinfowler.com/bliki/BlueGreenDeployment.html
For the web app, you should use deployment slots: deploy your new version to a staging slot and, once you are ready, it is just a matter of swapping so the site URL points to the new slot. This takes no time at all.
For the database, I believe you should freeze updates, take a backup, and let users work in read-only mode; once you finish all your DB migration and changes, point the application to the new database and that is it.

Automated SQL Export Failed

I have an automated export running each night through the Portal which should back up my Azure database to blob storage as a .bacpac file, and up until Friday that had been working successfully.
Each night I get an email error saying:
Automated SQL Export failed for myServer:myDatabase at 5/30/2016 11:35:39 PM. The temporary database copy was made, but this copy could not be exported to the .bacpac file.
Some tutorials suggest logging into the Portal and doing it manually. When I do this it works successfully and I am able to see the file without error. But on the following night, the process fails again (it doesn't recover itself from performing a manual backup). Is there a way to get more information on why it is failing?
In the new Portal, you can find more information via the audit log; database-level operations, including import/export, are logged there.
OK, so after further analysis I was able to pinpoint the root cause of my issue: a stored procedure.
I had a stored procedure which explicitly referenced my database by name. Whenever the backup is taken in Azure, the temporary copy is created under a different name, and at that point the self-referencing stored procedure "breaks".
Fixing the stored procedure allowed the automated backups to resume.
An example of a statement the Proc was calling was:
Select Name from MyDatabase.Dbo.MyTable
This should be rewritten as the following to make it exportable:
Select Name from Dbo.MyTable
Note that while I was able to obtain a more meaningful error using a local copy of SQL Server Management Studio, no error was visible in the Azure Portal.
Hopefully this will help someone else.
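Another way to reproduce the export outside the Portal and surface the underlying error directly is the DacFx client library (Microsoft.SqlServer.DacFx package); the sketch below is only an illustration, and the connection string, database name, and output path are placeholders:

using System;
using Microsoft.SqlServer.Dac;

class ExportBacpac
{
    static void Main()
    {
        var services = new DacServices(
            "Server=tcp:myserver.database.windows.net;Database=myDatabase;" +
            "User ID=myUser;Password=myPassword;Encrypt=True;");

        // Print progress and diagnostic messages as the export runs.
        services.Message += (sender, e) => Console.WriteLine(e.Message);

        // A broken object (e.g. a self-referencing stored procedure) makes
        // this call fail with a descriptive exception.
        services.ExportBacpac(@"C:\temp\myDatabase.bacpac", "myDatabase");
    }
}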

Access 2013 web app - restoring previous app snapshot package without reverting data (structured staging environment)

I have a reasonably complex Access 2013 web app which is now in production on hosted O365 SharePoint. I would like to take a backup (package snapshot) into a test environment, and then migrate it to production once development is complete (I certainly don't want to do development on the production system!). The problem is that the snapshot also backs up all data, so uploading the new package over the top of the existing package in the SharePoint app repository reverts the data to the time of the snapshot as well. Alternatively, rolling back to the original snapshot if there are issues would lose all data added after the new package was applied.
I can easily get a second version of the app going by saving it as a new application etc., but this creates a new product ID etc. in the app store. We also use an Access 2013 desktop accdb frontend that hooks directly into the Azure SQL database to do all the things the web app can't provide (formatted reports etc.), so I can't just create a new app every time without dealing with all of the credential and database renaming issues.
So my question is: does anybody know how to safely operate a test environment for Access 2013 web app development? One needs to be able to apply an updated version, or roll back to the old one if there are problems, without rolling back the data. With the desktop client I can obviously just save a new copy of the accdb file every time. I don't mind creating a new instance or link to the app on SharePoint each time, however this obviously generates a totally new database (server name, DB location, login IDs etc.) as well. You would hope there is a way to upload and replace your app without touching the data, otherwise how can you develop without working directly in production?
Any answers would be really appreciated.
Thanks.

SQL Schema Compare will not update after CLR object installed: 'Source schema drift detected'

After installing a custom CLR object, SQL Server Data Tools (SSDT) in VS2012 will not allow an update. The error is "Source schema drift detected. Press Compare to refresh." After a refresh, the same thing happens.
Things I have tried:
- In the settings, limiting the compared objects to just Stored Procedures.
- Settings -> General -> Block on possible data loss -> both on and off.
This sort of loop can also be caused by a referenced SSDT project failing to build. The referenced project may be missing, unloaded, or have an error which prevents the compare from completing.
This is not an answer, but a clue for dealing with this problem.
I was trying to update a column from varchar(200) to varchar(MAX) and got this problem as well. So I logged in to the server and tried to update the database manually via SQL Server Management Studio, which was installed there, and I got this error:
"Saving changes is not permitted. The changes you have made require the following tables to be dropped and re-created. You have either made changes to a table that can't be re-created or enabled the option Prevent saving changes that require the table to be re-created."
It seems that re-creating a table is considered so dangerous that "Block on possible data loss" cannot handle it. So I think only if we can work around this LOCAL warning could we update the database REMOTELY.
But why does changing (200) to (MAX) lead to re-creating the table? It does not make any sense. I tried (200) to (1000), and that did not work either. This might be the key to this problem.
And if you do the same update in Server Explorer in VS, instead of SQL Server Management Studio, it works. Again, why?
This can happen when a db user "changes".
The following rather scary forum page recounts issues where foreign hackers were trying to brute-force access to the "sa" db user, with each attempt changing the sa user's modified-date timestamp (which is seen as schema drift):
https://social.msdn.microsoft.com/Forums/sqlserver/en-US/5c22a7b4-7a82-4717-a118-2475bc62705b/schema-compareupdate-error-target-schema-drift-detected?forum=ssdt
It is also mentioned there that you can query the sa user a few times to see if this is happening to you:
SELECT * FROM sys.server_principals WHERE principal_id=1
I am currently experiencing the same issue (the sa user is being modified; I don't know anything about hackers yet) and have yet to find a solution.
Edit - I turned on logging in Windows Firewall via Properties > Logging, and we set up a blocking rule on port 3071, which had a lot of unexplained traffic. Then the problem went away.
I tried running VS as an administrator, and it worked.
