SQL Azure unexpected database deletion/recreation - azure

I've been scratching my head on this for hours, but can't seem to figure out what's wrong.
Here's our project basic setup:
MVC 3.0 Project with ASP.NET Membership
Entity Framework 4.3, Code First approach
Local environment: local SQL Server with 2 MDF database files attached (aspnet.mdf + entities.mdf)
Server environment: Windows Azure + 2 SQL Azure databases (aspnet and entities)
Here's what we did:
Created local and remote databases, modified web.config to use SQLEXPRESS connection strings in debug mode and SQL Azure connection strings in release mode
Created a SampleData class extending DropCreateDatabaseAlways<Entities> with a Seed method to seed data (a sketch of such an initializer follows the code block below).
Used System.Data.Entity.Database.SetInitializer(new Models.SampleData()); in Application_Start to seed data to our databases.
Ran app locally - tables were created and seeded, all OK.
Deployed, ran remote app - tables were created and seeded, all OK.
Added pre-processor directives to stop destroying the Entity database at each application start on our remote Azure environment:
#if DEBUG
System.Data.Entity.Database.SetInitializer(new Models.SampleData());
#else
System.Data.Entity.Database.SetInitializer<Entities>(null);
#endif
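For reference, the SampleData initializer described above looks roughly like this (a sketch only; the DbSet name and seed values are placeholders, not our actual model):

using System.Data.Entity;

public class SampleData : DropCreateDatabaseAlways<Entities>
{
    protected override void Seed(Entities context)
    {
        // Placeholder seed rows; 'SuperClasses' is a hypothetical DbSet name,
        // the real initializer seeds the actual sample data.
        context.SuperClasses.Add(new SubClass1 { Foo = 42 });
        context.SaveChanges();
    }
}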
Here's where it got ugly
We enabled Migrations using NuGet, with AutomaticMigrationsEnabled = true;
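For reference, enabling automatic migrations this way produces a Migrations/Configuration class roughly like the following sketch (the class Enable-Migrations generates); only the flag matters here:

using System.Data.Entity.Migrations;

internal sealed class Configuration : DbMigrationsConfiguration<Entities>
{
    public Configuration()
    {
        // Apply model changes to the database automatically at runtime,
        // recording what was applied in the migration-history table.
        AutomaticMigrationsEnabled = true;
    }
}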
Everything was running smoothly. We left it cooking for a couple of days.
Today, we noticed an unknown bug on the Azure environment:
we have several classes deriving from a superclass SuperClass
the corresponding Entity table stores all of these objects in the same SuperClass table, using a discriminator to know which column to feed from when loading the various classes
While the loading went just fine before today, it doesn't anymore. We get the following error message:
The 'Foo' property on 'SubClass1' could not be set to a 'null' value. You must set this property to a non-null value of type 'Int32'.
After a quick check, our SuperClass table has columns Foo and Foo1. Logical enough, since SuperClass has 2 subclasses SubClass1 and SubClass2, each with a Foo property. In our case, Foo is NULL but Foo1 has an int32 value. So the problem is not with the database - rather, it would seem that the link between our Model and Database has been lost. The discriminator logic was corrupted.
Trying to find indications on what could've gone wrong, we noticed several things:
Even though we never performed any migration on the SQL Azure Entity database, the database now has a _MigrationHistory table
The _MigrationHistory table has one record:
MigrationID: 201204102350574_InitialCreate
CreatedOn: 4/10/2012 11:50:57 PM
Model: <Binary data>
ProductVersion: 4.3.1
Looking at other tables, most of them were emptied when this migration happened. Only the tables that were initially seeded with SampleData remained untouched.
Checking in with the SQL Azure Management portal, our Entity database shows the following creation date: 4/10/2012 23:50:55.
Here is our understanding
For some reason, SQL Azure deleted and recreated our database
The _MigrationHistory table was created in the process, registering a starting point to test the model against for future migrations
Here are our Questions
Who / What triggered the database deletion / recreation?
How could EF re-seed our sample data since Application_Start has System.Data.Entity.Database.SetInitializer<Entities>(null);?
EDIT: Looking at what could've gone wrong, we noticed one thing we didn't respect in this SQL Azure tutorial: we didn't remove PersistSecurityInfo from our SQL Azure Entity database connection string after the database was created. Can't see why on Earth it could have caused the problem, but still worth mentioning...

Never mind, found the cause of our problem. In case anybody wonders: we hadn't made any Azure deployment since adding the pre-processor directives. MS must have restarted the machine our VM resided on, and the new VM, still running the old deployment with the destructive initializer active, recreated the database using seed data.
Lesson learned: always do frequent Azure deployments.

Related

Connection to Entity Framework works locally, Was working in Azure but now I get "Invalid object name..."

I have looked through various posts related to this problem, but none provide an answer. I created a .NET 5.0 app that accesses an Azure SQL DB using EF 6.4.4, which works with .NET Standard libraries. I modified the EF code by adding a function that creates the connection string from appsettings.json, since .NET 5 apps don't use a web.config file. This also works well in Azure with the configuration settings in an App Service.
The connection string looks like this:
metadata=res://*/EF.myDB.csdl|res://*/EF.myDB.ssdl|res://*/EF.myDB.msl;provider=System.Data.SqlClient;provider connection string='Data Source=tcp:mydb.database.windows.net,1433;Initial Catalog=myDB;Integrated Security=False;Persist Security Info=False;User ID=myuserid#mydb;Password="password";MultipleActiveResultSets=True;Connect Timeout=120;Encrypt=True;TrustServerCertificate=True'
I also have a deployment pipeline that will deploy the code after a check-in instead of using the Visual Studio publish feature, but the pipeline deployed code has the same problem.
When I first created the app and published it to the App Service, it worked. Recently I updated the app with no changes to the EF connection. Now I get the "Invalid object name" error when I reference any table in the model. If I run the same code locally and connect to the Azure SQL DB, the DB is accessed as expected. This problem only occurs when running in the Azure App Service. Note that there are no connection strings configured for the App Service, since the EF string is built from the config settings. I saw this post, but I don't think it applies:
Local works. Azure give error: Invalid object name 'dbo.AspNetUsers'. Why?
even though the problem is the same. I have also read various posts about the format of the EF connection string. Since my model is database-first (and the connection used to work), I'm confident the string has the correct format. I don't think the problem is in the code, since it works when running locally with a connection to the Azure SQL DB. It must have something to do with the Azure App Service configuration, but I'm not sure what to look for at this point. Unfortunately, I don't have a copy of the code and publish files that did work to compare against, but the pipeline build doesn't work either, and that is how the code would normally be deployed. Thanks for any insight you might have!
UPDATE
metadata=res://*/EF.myDB.csdl|res://*/EF.myDB.ssdl|res://*/EF.myDB.msl;provider=System.Data.SqlClient;provider connection string='Data Source=tcp:yourdbsqlserver.database.windows.net,1433;Initial Catalog=yourdb;Persist Security Info=False;User ID=userid;Password=your_password;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30'
When the problem is not in the connection string itself, the easiest way to troubleshoot is to reuse the string that Visual Studio 2019 generates.
Your connection string should look like the one below.
<connectionStrings>
<add name="SchoolDBEntities" connectionString="metadata=res://*/SchoolDB.csdl|res://*/SchoolDB.ssdl|res://*/SchoolDB.msl;provider=System.Data.SqlClient;provider connection string="data source=.\sqlexpress;initial catalog=SchoolDB;integrated security=True;multipleactiveresultsets=True;application name=EntityFramework"" providerName="System.Data.EntityClient"/>
</connectionStrings>
For more details, you can refer to my answer in the post below and the tutorial.
1. Timeout period elapsed prior to obtaining a connection from the pool - Entity Framework
2. Entity Framework Tutorial
The problem was one of my config settings in Azure. The catalog parameter was missing. A simple fix, but the error message was misleading, so I thought I would note that here in case anyone else gets the same "Invalid object name" message when referencing an Azure SQL DB with EF. It would have been more helpful if the message was "catalog name invalid" or "unable to connect to database".
For those who are curious about building an EF connection string, here is example code:
public string BuildEFConnectionString(SqlConnectionStringModel sqlModel, EntityConnectionStringModel entityModel)
{
    SqlConnectionStringBuilder sqlString = new SqlConnectionStringBuilder()
    {
        DataSource = sqlModel.DataSource,
        InitialCatalog = sqlModel.InitialCatalog,
        PersistSecurityInfo = sqlModel.PersistSecurityInfo,
        UserID = sqlModel.UserID,           // Blank if using Windows authentication
        Password = sqlModel.Password,       // Blank if using Windows authentication
        MultipleActiveResultSets = sqlModel.MultipleActiveResultSets,
        Encrypt = sqlModel.Encrypt,
        TrustServerCertificate = sqlModel.TrustServerCertificate,
        IntegratedSecurity = sqlModel.IntegratedSecurity,
        ConnectTimeout = sqlModel.ConnectTimeout
    };

    // Build an Entity Framework connection string
    EntityConnectionStringBuilder entityString = new EntityConnectionStringBuilder()
    {
        Provider = entityModel.Provider,    // "System.Data.SqlClient",
        Metadata = entityModel.Metadata,
        ProviderConnectionString = sqlString.ToString()
    };
    return entityString.ConnectionString;
}
Given what I have learned, the properties should be validated before the string is returned. If the string is created this way, all of the connection string properties can be added to the config settings in the app service. I used the options pattern to get them at runtime. Thanks to everyone for your suggestions.
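For illustration, here is a rough sketch of that options-pattern wiring, assuming the settings live under hypothetical "SqlConnectionString" and "EntityConnectionString" sections in appsettings.json / the App Service configuration (the section and class names are assumptions, not the poster's actual setup):

using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Options;

public class Startup
{
    public Startup(IConfiguration configuration) => Configuration = configuration;
    public IConfiguration Configuration { get; }

    public void ConfigureServices(IServiceCollection services)
    {
        // Bind the config sections (appsettings.json locally, App Service settings in Azure)
        // to the models consumed by BuildEFConnectionString.
        services.Configure<SqlConnectionStringModel>(Configuration.GetSection("SqlConnectionString"));
        services.Configure<EntityConnectionStringModel>(Configuration.GetSection("EntityConnectionString"));
    }
}

// A consumer then receives the bound values at runtime via IOptions<T>
// (MyEfConnectionFactory is a hypothetical class name).
public class MyEfConnectionFactory
{
    private readonly SqlConnectionStringModel _sqlModel;
    private readonly EntityConnectionStringModel _entityModel;

    public MyEfConnectionFactory(IOptions<SqlConnectionStringModel> sqlOptions,
                                 IOptions<EntityConnectionStringModel> entityOptions)
    {
        _sqlModel = sqlOptions.Value;       // pass these to BuildEFConnectionString(...)
        _entityModel = entityOptions.Value;
    }
}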

Azure SQL serverless is not waking up on connection attempt

I'm testing Azure SQL Serverless and from SSMS it seems to work fine, but from my ASP.NET Core application it never wakes up.
Using SSMS I can open a connection to a sleeping Serverless SQL database and after a delay the connection will go through.
Using my ASP.NET Core application I tried the same thing. From the login page I tried to log in, which opens a connection to the database. After 10 or 11 seconds the attempt fails (I looked up the default timeout and it's supposed to be 15 seconds, but in this case it always seems to be about 10.5 seconds +/- 0.5 s). According to the docs, the first connection attempt may fail but subsequent ones should succeed; however, I can send multiple queries to the database and it always fails with the following error:
Microsoft.Data.SqlClient.SqlException (0x80131904): Database 'myDb' on server
'MyDbSvr.database.windows.net' is not currently available. Please retry the connection later. If the
problem persists, contact customer support, and provide them the session tracing ID of
'{XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX}'.
If I wake the database up using SSMS then the login web page can connect to the database and succeeds.
I have added Connect Timeout=120; to the connection string.
The connection does happen during an HTTP request whose controller action is marked async, though I don't know if that makes any difference.
Am I doing something wrong or is there something additional I need to do to get the DB to wake?
[Update]
As an extra test, I wrote the following:
using System;
using System.Data.SqlClient;

class Program
{
    static void Main()
    {
        // Open a connection to the sleeping serverless database and run a trivial query.
        using (var con = new SqlConnection("Server=mydbsvr.database.windows.net;Database=mydb;User Id=abc;Password=xyz;Connect Timeout=120;"))
        {
            Console.WriteLine(con.ConnectionTimeout);
            con.Open();
            var cmd = con.CreateCommand();
            cmd.CommandText = "select getdate();";
            Console.WriteLine(cmd.ExecuteScalar());
        }
    }
}
and got the same error.
I figured it out, and it's the dumbest thing.
This Azure SQL Server instance was migrated from another subscription and the group that migrated it gave it a new name, but they did something that allowed the use of the old name also. I'm researching to figure out how that was done. I will update this answer when I find out what that was.
As it turns out, using the old name with a serverless database won't wake up the DB. I don't know why. But if you change to the new/real server name, it works. You do have to add a retry to the connection, as it may fail the first few times.
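A minimal retry sketch around opening the connection (my own illustration, not from the original answer) might look like this; 40613 is the error number behind the "not currently available" message above, and the attempt count and delay are arbitrary choices:

using System;
using System.Threading;
using Microsoft.Data.SqlClient;

static class SqlRetry
{
    // Retry opening the connection while the serverless database resumes.
    public static SqlConnection OpenWithRetry(string connectionString, int maxAttempts = 5)
    {
        for (int attempt = 1; ; attempt++)
        {
            var con = new SqlConnection(connectionString);
            try
            {
                con.Open();      // succeeds once the database has woken up
                return con;
            }
            catch (SqlException ex) when (ex.Number == 40613 && attempt < maxAttempts)
            {
                con.Dispose();
                Thread.Sleep(TimeSpan.FromSeconds(10));   // give the database time to resume
            }
        }
    }
}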
[Update]
The new server allows logins using the old name by using an Azure SQL Database alias: https://learn.microsoft.com/en-us/azure/sql-database/dns-alias-overview

Core data and cloudkit sync wwdc 2019 not working for beta 3

I am trying to replicate the result of WWDC talk on syncing core data with cloud kit automatically.
I tried three approaches:
Making a new Master-Detail view app and following the steps in the WWDC 2019 talk; in this case no syncing happens
Downloading the WWDC 2019 sample app; also in this case no syncing happens
Making a small app with a small Core Data model and a CloudKit container; in this case syncing happens, but I have to restart the app. I suspected it had to do with history management, so I observed the NSPersistentStoreRemoteChange notification, but nothing is received.
Appreciate any help.
I also played around with Core Data and iCloud and it works perfectly. I would like to list some important points that may help you go further:
You have to run the app on a real device with an iCloud account. We can now test iCloud sync on the Simulator, but it will not get notifications automatically; we have to trigger a sync manually by selecting Debug > Trigger iCloud Sync.
Make sure you added the Push Notifications and iCloud capabilities to your app. Make sure that you don't have issues with the iCloud container (in that case, you will see red text in the iCloud section in Xcode).
In order to refresh the view automatically, you need to add this line into your Core Data Stack: container.viewContext.automaticallyMergesChangesFromParent = true.
Code:
public lazy var persistentContainer: NSPersistentCloudKitContainer = {
    /*
     The persistent container for the application. This implementation
     creates and returns a container, having loaded the store for the
     application to it. This property is optional since there are legitimate
     error conditions that could cause the creation of the store to fail.
     */
    let container = NSPersistentCloudKitContainer(name: self.modelName)
    container.viewContext.automaticallyMergesChangesFromParent = true
    container.loadPersistentStores(completionHandler: { (storeDescription, error) in
        if let error = error as NSError? {
            // Replace this implementation with code to handle the error appropriately.
            // fatalError() causes the application to generate a crash log and terminate. You should not use this function in a shipping application, although it may be useful during development.
            /*
             Typical reasons for an error here include:
             * The parent directory does not exist, cannot be created, or disallows writing.
             * The persistent store is not accessible, due to permissions or data protection when the device is locked.
             * The device is out of space.
             * The store could not be migrated to the current model version.
             Check the error message to determine what the actual problem was.
             */
            fatalError("Unresolved error \(error), \(error.userInfo)")
        }
    })
    return container
}()
When you add some data, you should normally see console logs beginning with CloudKit: CoreData+CloudKit: ..........
Sometimes the data is not synced immediately; in that case, I force-close the app and run a new build, and then the data gets synced.
There was one time the data only got synced after a few hours :(
I found that the NSPersistentStoreRemoteChange notification is posted by the NSPersistentStoreCoordinator and not by the NSPersistentCloudKitContainer, so the following code solves the problem:
// Observe Core Data remote change notifications.
NotificationCenter.default.addObserver(
self, selector: #selector(self.storeRemoteChange(_:)),
name: .NSPersistentStoreRemoteChange, object: container.persistentStoreCoordinator)
Also ran into the issue with .NSPersistentStoreRemoteChange notification not being sent.
Code from Apples example:
// Observe Core Data remote change notifications.
NotificationCenter.default.addObserver(
self, selector: #selector(type(of: self).storeRemoteChange(_:)),
name: .NSPersistentStoreRemoteChange, object: container)
Solution for me was to not set the container as object for the notification, but nil instead. Is it not used anyway and prevents the notification from being received:
// Observe Core Data remote change notifications.
NotificationCenter.default.addObserver(
self, selector: #selector(type(of: self).storeRemoteChange(_:)),
name: .NSPersistentStoreRemoteChange, object: nil)
Update:
As per this answer: https://stackoverflow.com/a/60142380/3187762
The correct way would be to set container.persistentStoreCoordinator as object:
// Observe Core Data remote change notifications.
NotificationCenter.default.addObserver(
self, selector: #selector(type(of: self).storeRemoteChange(_:)),
name: .NSPersistentStoreRemoteChange, object: container.persistentStoreCoordinator)
I had the same problem; the reason was that iCloud Drive must be enabled on your devices. Check it in the Settings of every device.
I understand this answer comes late and is not actually specific to the WWDC 19 SynchronizingALocalStoreToTheCloud Apple sample project the OP refers to. However, I had syncing issues (not upon launch, when it synced fine, but only while the app was active but idle, which seems to be case 3 of the original question) in a project that uses Core Data + CloudKit with NSPersistentCloudKitContainer, and I believe the same problems I had, and now apparently have solved, might affect other users reading this question in the future.
My app was built using Xcode's 11 Master-Detail template with Core Data + CloudKit from the start, so I had to do very little to have syncing work initially:
Enable Remote Notifications Background Mode in Signing & Capabilities for my target;
Add the iCloud capability for CloudKit;
Select the container iCloud.com.domain.AppName
Add viewContext.automaticallyMergesChangesFromParent = true
Basically, I followed Getting Started With NSPersistentCloudKitContainer by Andrew Bancroft, and this was enough to have the MVP sync between devices (Catalina, iOS 13, iPadOS 13) not only upon launch, but also when the app was running and active (thanks to step 4 above) and another device edited/added/deleted an object. Being the Xcode template, it did not have the additional customisations / advanced behaviours of WWDC 2019's sample project, but it actually accomplished the goal pretty well and I was satisfied, so I moved on to other parts of this app's development and stopped thinking about sync.
A few days ago, I noticed that the iOS/iPadOS app was now only syncing upon launch, and not while the app was active and idle on screen; on macOS the behaviour was slightly different, because a simple command-tab triggered sync when reactivating the app, but again, if the Mac app was frontmost, no syncing for changes coming from other devices.
I initially blamed a couple of modifications I did in the meantime:
In order to have the sqlite accessible in a Share Extension, I moved the container in an app group by subclassing NSPersistentCloudKitContainer;
I changed the capitalisation in the name of the app and, since I could not delete the CloudKit database, I created a new container named iCloud.com.domain.AppnameApp (CloudKit is case insensitive, apparently, and yes, I should really start to care less about such things).
While I was pretty sure that I had seen syncing work as well as before after each of these changes, having sync (apparently) suddenly break convinced me, for at least a few hours, that one of those deviations from the default path had caused the notifications to stop being received while the app was active, and that the merge would then only happen upon launch, as I was seeing, because the running app was not made aware of changes.
I should mention, because this could help others in my situation, that I was sure notifications were triggered upon Core Data context saves, because the CloudKit Dashboard was showing the notifications being sent.
So, I tried a few times clearing Derived Data (one never knows), deleting the apps on all devices and resetting the Development Environment in CloudKit's Dashboard (something I already did periodically during development), but I still had the issue of the notifications not being received.
Finally, I realised that resetting the CloudKit environment and deleting the apps was indeed useful (and I actually rebooted everything just to be safe ;) but I also needed to delete the app data from iCloud (from iCloud's Settings screen on the last iOS device where the app was still installed, after deleting from the others) if I really wanted a clean slate; otherwise, my somewhat messed up database would sync back to the newly installed app.
And indeed, a truly clean slate, with a fresh Development Environment, newly installed apps and rebooted devices, brought back notifications being detected on the devices even when the apps are frontmost. So, if you feel your setup is correct and have already read enough times that viewContext.automaticallyMergesChangesFromParent = true is the only thing you need, but still can't see changes coming from other devices, don't rule out that something could have been messed up beyond your control (don't get me wrong: I'm 100% sure it must have been something that I did!) and try a fresh start... it might seem obscure, but what isn't with the syncing method we are choosing for our app?

Cannot update identity column 'scope_local_id'

Update: I revised my approach to use one sync scope per database schema. This eliminated the problem for all schemas except dbo. Then I split dbo into several sync scopes, which seems to have eliminated the problem in dbo as well. It is not clear to me why having a large number of tables in a single sync scope would lead to this particular error message.
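For illustration, provisioning one scope per schema with Sync Framework 2.1 might look roughly like the following sketch (the schema, table, and scope names are placeholders, not the actual database):

// Requires Microsoft.Synchronization.Data and Microsoft.Synchronization.Data.SqlServer.
// Describe a scope containing only the tables of one schema, then provision it.
var salesScope = new DbSyncScopeDescription("Sales_scope");
salesScope.Tables.Add(
    SqlSyncDescriptionBuilder.GetDescriptionForTable("Sales.Orders", DestinationConnection));
salesScope.Tables.Add(
    SqlSyncDescriptionBuilder.GetDescriptionForTable("Sales.OrderLines", DestinationConnection));

var salesProvisioning = new SqlSyncScopeProvisioning(DestinationConnection, salesScope);
if (!salesProvisioning.ScopeExists("Sales_scope"))
{
    salesProvisioning.Apply();
}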
I am using Microsoft Sync Framework 2.1 to set up synchronization between two databases. I am provisioning the Azure database using the following code:
// Set up destination server for sync
var destinationScopeProvisioning = new SqlSyncScopeProvisioning(DestinationConnection, scope);
if (!destinationScopeProvisioning.ScopeExists(ScopeName))
{
destinationScopeProvisioning.Apply();
}
This code throws an exception after running for several minutes:
An unhandled exception of type 'Microsoft.Synchronization.Data.DbPartiallyProvisionedException' occurred in Microsoft.Synchronization.Data.SqlServer.dll
Additional information: Cannot update identity column 'scope_local_id'.
Ordinarily, this kind of error is caused by a scope that already exists, so I've tried the following three methods (several times) to clean up the scopes and start again:
Deprovisioning the scope by invoking deprovisioning.DeprovisionScope(ScopeName)
Deprovisioning the store by invoking deprovisioning.DeprovisionStore()
Dropping and recreating the database.
Unfortunately, none of those worked.

How to manage centralized values in a sharded environment

I have an ASP.NET app being developed for Windows Azure. It's been deemed necessary that we use sharding for the DB to improve write times since the app is very write heavy but the data is easily isolated. However, I need to keep track of a few central variables across all instances, and I'm not sure the best place to store that info. What are my options?
Requirements:
Must be durable, can survive instance reboots
Must be synchronized. It's incredibly important to avoid conflicting updates or at least throw an exception in such cases, rather than overwriting values or failing silently.
Must be reasonably fast (2,000+ reads/writes per second)
I thought about writing a separate component to run on a worker role that simply reads/writes the values in memory and flushes them to disk every so often, but I figure there's got to be something already written for that purpose that I can appropriate in Windows Azure.
I think what I'm looking for is a system like Apache ZooKeeper, but I don't want to have to deal with installing the JRE during the worker role startup and all that jazz.
Edit: Based on the suggestion below, I'm trying to use Azure Table Storage using the following code:
var context = table.ServiceClient.GetTableServiceContext();
var item = context.CreateQuery<OfferDataItemTableEntity>(table.Name)
    .Where(x => x.PartitionKey == Name).FirstOrDefault();
if (item == null)
{
    item = new OfferDataItemTableEntity(Name);
    context.AddObject(table.Name, item);
}
if (item.Allocated < Quantity)
{
    allocated = ++item.Allocated;
    context.UpdateObject(item);
    context.SaveChanges();
    return true;
}
However, the context.UpdateObject(item) call fails with "The context is not currently tracking the entity." Doesn't querying the context for the item initially add it to the context's tracking mechanism?
Have you looked into SQL Azure Federations? It seems like exactly what you're looking for:
sharding for SQL Azure.
Here are a few links to read:
http://msdn.microsoft.com/en-us/library/windowsazure/hh597452.aspx
http://convective.wordpress.com/2012/03/05/introduction-to-sql-azure-federations/
http://searchcloudapplications.techtarget.com/tip/Tips-for-deploying-SQL-Azure-Federations
What you need is Table Storage since it matches all your requirements:
Durable: Yes, Table Storage is part of a Storage Account, which isn't related to a specific Cloud Service or instance.
Synchronized: Yes, Table Storage is part of a Storage Account, which isn't related to a specific Cloud Service or instance.
It's incredibly important to avoid conflicting updates: Yes, this is possible with the use of ETags
Reasonably fast? Very fast, up to 20,000 entities/messages/blobs per second
Update:
Here is some sample code that uses the new storage SDK (2.0):
var storageAccount = CloudStorageAccount.DevelopmentStorageAccount;
var table = storageAccount.CreateCloudTableClient()
.GetTableReference("Records");
table.CreateIfNotExists();
// Add item.
table.Execute(TableOperation.Insert(new MyEntity() { PartitionKey = "", RowKey ="123456", Customer = "Sandrino" }));
var user1record = table.Execute(TableOperation.Retrieve<MyEntity>("", "123456")).Result as MyEntity;
var user2record = table.Execute(TableOperation.Retrieve<MyEntity>("", "123456")).Result as MyEntity;
user1record.Customer = "Steve";
table.Execute(TableOperation.Replace(user1record));
user2record.Customer = "John";
table.Execute(TableOperation.Replace(user2record));
First it adds the item 123456.
Then I'm simulating 2 users getting that same record (imagine they both opened a page displaying the record).
User 1 is fast and updates the item. This works.
User 2 still had the window open. This means he's working on an old version of the item. He updates the old item and tries to save it. This causes the following exception (this is possible because the SDK matches the ETag):
The remote server returned an error: (412) Precondition Failed.
I ended up with a hybrid cache / table storage solution. All instances track the variable via Azure caching, while the first instance spins up a timer that saves the value to table storage once per second. On startup, the cache variable is initialized with the value saved to table storage, if available.
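For what it's worth, a rough sketch of the flush-to-table-storage half of that approach could look like the following, using the same 2.0 storage SDK as above; the CounterEntity type, the table and key names, and the cache-read delegate are all assumptions, not the actual implementation:

using System;
using System.Threading;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

// Hypothetical entity holding the persisted value.
public class CounterEntity : TableEntity
{
    public long Value { get; set; }
}

public class CounterFlusher
{
    private readonly CloudTable _table;
    private readonly Func<long> _readCurrentValue; // e.g. reads the live value from the cache
    private Timer _timer;

    public CounterFlusher(CloudStorageAccount account, Func<long> readCurrentValue)
    {
        _table = account.CreateCloudTableClient().GetTableReference("CentralValues");
        _table.CreateIfNotExists();
        _readCurrentValue = readCurrentValue;
    }

    // On startup: return the last persisted value (if any) so the cache can be primed.
    public long? LoadPersistedValue()
    {
        var result = _table.Execute(TableOperation.Retrieve<CounterEntity>("counters", "allocated"));
        var entity = result.Result as CounterEntity;
        return entity == null ? (long?)null : entity.Value;
    }

    // Flush the current value to table storage once per second.
    public void Start()
    {
        _timer = new Timer(_ =>
        {
            var entity = new CounterEntity { PartitionKey = "counters", RowKey = "allocated", Value = _readCurrentValue() };
            _table.Execute(TableOperation.InsertOrReplace(entity));
        }, null, TimeSpan.Zero, TimeSpan.FromSeconds(1));
    }
}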
