Cannot update identity column 'scope_local_id' - Azure

Update: I revised my approach to use one sync scope per database schema. This eliminated the problem for every schema except dbo. I then split dbo into several sync scopes, which seems to have eliminated the problem there as well. It is not clear to me why having a large number of tables in a single sync scope would lead to this particular error message.
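A minimal sketch of the one-scope-per-schema workaround mentioned in the update, using the Sync Framework 2.1 scope description types (schema and table names here are illustrative):

using System.Data.SqlClient;
using Microsoft.Synchronization.Data;
using Microsoft.Synchronization.Data.SqlServer;

// Build one scope per schema; dbo can be split further into several scopes.
static DbSyncScopeDescription BuildScopeForSchema(string schema, string[] tables, SqlConnection connection)
{
    var scopeDescription = new DbSyncScopeDescription(schema + "_scope");
    foreach (var table in tables)
    {
        // GetDescriptionForTable reads the table definition from the live database.
        var tableDescription = SqlSyncDescriptionBuilder.GetDescriptionForTable(schema + "." + table, connection);
        scopeDescription.Tables.Add(tableDescription);
    }
    return scopeDescription;
}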
I am using Microsoft Sync Framework 2.1 to set up synchronization between two databases. I am provisioning the Azure database using the following code:
// Set up destination server for sync
var destinationScopeProvisioning = new SqlSyncScopeProvisioning(DestinationConnection, scope);
if (!destinationScopeProvisioning.ScopeExists(ScopeName))
{
    destinationScopeProvisioning.Apply();
}
This code throws an exception after running for several minutes:
An unhandled exception of type 'Microsoft.Synchronization.Data.DbPartiallyProvisionedException' occurred in Microsoft.Synchronization.Data.SqlServer.dll
Additional information: Cannot update identity column 'scope_local_id'.
Ordinarily, this kind of error is caused by a scope that already exists, so I've tried the following three methods (several times) to clean up the scopes and start again; the deprovisioning calls are sketched below:
Deprovisioning the scope by invoking deprovisioning.DeprovisionScope(ScopeName)
Deprovisioning the store by invoking deprovisioning.DeprovisionStore()
Dropping and recreating the database.
Unfortunately, none of those worked.
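For completeness, a minimal sketch of the deprovisioning calls referred to above, assuming the same DestinationConnection used for provisioning:

using Microsoft.Synchronization.Data.SqlServer;

var deprovisioning = new SqlSyncScopeDeprovisioning(DestinationConnection);

// Remove a single scope and its change-tracking metadata.
deprovisioning.DeprovisionScope(ScopeName);

// Or remove all Sync Framework metadata from the database.
deprovisioning.DeprovisionStore();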

Related

Multi-threaded PUPL SQL Error code 000099997 occurred in module CIPPUPFN:HA100 - UPDATE TCTL

We are on CC&B 2.6. We run base process PUPL Payment Upload on 8 threads. Occasionally the process fails on a thread. The other threads run successfully. And when we restart the job, that thread recovers and completes successfully.
The error message shown is:
SQL Error code 000099997 occurred in module CIPPUPFN:HA100 - UPDATE TCTL
Has anyone encountered this?
This problem might have been caused by a bug in the product's handling of the interaction between sessions and the ThreadlocalStorage object. Some session-related state, such as the map of DBLocalCobolSQLProcessBackend instances, was stored on ThreadlocalStorage when it should have been on FrameworkSession.
If another session is temporarily "pushed" onto ThreadlocalStorage (e.g. via SessionExecutable>>doInReadOnlySession()), the original session is set aside, but the map of DBLocalCobolSQLProcessBackend instances was improperly destroyed and could not be restored when the original session was restored.
If a subsequent DB-related call from COBOL into Java then required access to an instance of DBLocalCobolSQLProcessBackend, the desired instance was missing and a new, empty one was created, which led to an NPE in downstream code.

Azure PushAsync() error

I encounter a weird error when using the PushAsync() function to sync data from a local database to Azure. If I call InsertAsync(item) only once, the sync works well. But if I do it more than two times, PushAsync() throws an exception that asks me to check the PushAsync result.
I followed this tutorial http://azure.microsoft.com/en-us/documentation/articles/mobile-services-windows-store-dotnet-get-started-offline-data/
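For reference, a hedged sketch of catching that exception and inspecting the push result, following the offline-sync pattern from the linked tutorial (TodoItem and the client/table setup are placeholders, not the asker's actual code):

using System.Threading.Tasks;
using Microsoft.WindowsAzure.MobileServices;
using Microsoft.WindowsAzure.MobileServices.Sync;

static async Task InsertAndSyncAsync(MobileServiceClient client, IMobileServiceSyncTable<TodoItem> table, TodoItem item)
{
    await table.InsertAsync(item);              // queued in the local store

    try
    {
        await client.SyncContext.PushAsync();   // pushes all pending operations to Azure
    }
    catch (MobileServicePushFailedException ex)
    {
        // This is the "check the push result" part of the error message:
        // each failed operation is reported individually.
        foreach (var error in ex.PushResult.Errors)
        {
            System.Diagnostics.Debug.WriteLine("Push failed for table '{0}': {1}", error.TableName, error.Status);
        }
    }
}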

Azure Document Db Worker Role

I am having problems getting the Microsoft.Azure.Documents library to initialize the client in an Azure worker role. I'm using NuGet package 0.9.1-preview.
I have mimicked what was done in the Azure DocumentDB example.
When running locally through the emulator I can connect fine with the documentdb and it runs as expected. When running in the worker role, I am getting a series of NullReferenceException and then ArgumentNullException.
The bottom System.NullReferenceException in the call stack shows that the NullReferenceExceptions start in this call, at the new DocumentClient:
var endpoint = "myendpoint";
var authKey = "myauthkey";
var endpointUri = new Uri(endpoint);
DocumentClient client = new DocumentClient(endpointUri, authKey);
Nothing changes between running it locally vs. on the worker role other than the environment (obviously).
Has anyone gotten DocumentDB to work in a worker role, or does anyone have an idea why it would be throwing null reference exceptions? The parameters passed into DocumentClient() are not null.
UPDATE:
I tried rewriting it to be more generic, which at least let the worker role run and let me attach a debugger. It is throwing the error on the new DocumentClient. It seems like something security-related being passed is null, even though both required initialization parameters are not null. Is there a security setting I need to change for my worker role to be able to connect to my DocumentDB? (It still works fine locally.)
UPDATE 2:
I can get the instance to run in release mode, but not debug mode, so it must be some security or storage setting that is misconfigured, I guess?
It seems I'm getting System.Security.SecurityExceptions, but only when using DocumentDB; queues do not give me that error. All call stacks for that error seem to involve System.Diagnostics.EventLog. The very first exception I see in the IntelliTrace summary is System.Threading.WaitHandleCannotBeOpenedException.
More info
IntelliTrace summary exception data: the System.Security.SecurityException happens first, followed later by the NullReferenceException.
The solution for me to get rid of the security exception and null reference exception was to disable IntelliTrace. Once I did that, I was able to deploy, attach the debugger, and see everything working.
I'm not sure what the connection is between the IntelliTrace null and the DocumentClient, but hopefully it's just related to this NuGet package and will be fixed in the next iteration.
Unable to repro.
I created a new Worker Role, single instance, and added the authkey & endpoint config to the cscfg.
Created private static DocumentClient at WorkerRole class level
Init DocumentClient in OnStart
Dispose DocumentClient in OnStop
In RunAsync, inside the loop, executed a query (this pattern is sketched below). Works as expected.
Test in emulator works.
Deployed as Release to Production slot. works.
Deployed as Debug to Staging with Remote Debug. works.
Attached VS to CloudService, breakpoint hit inside loop.
Working solution: http://ryancrawcour.blob.core.windows.net/samples/AzureCloudService1.zip
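For reference, a rough sketch of that setup (the configuration setting names and the collection link are placeholders, not taken from the linked sample):

using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.Documents.Client;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    // Single client per role instance, shared across iterations of the loop.
    private static DocumentClient client;

    public override bool OnStart()
    {
        // Endpoint and auth key come from the .cscfg settings mentioned above.
        var endpoint = new Uri(RoleEnvironment.GetConfigurationSettingValue("DocDbEndpoint"));
        var authKey = RoleEnvironment.GetConfigurationSettingValue("DocDbAuthKey");
        client = new DocumentClient(endpoint, authKey);
        return base.OnStart();
    }

    public override void OnStop()
    {
        client.Dispose();
        base.OnStop();
    }

    public override void Run()
    {
        RunAsync(CancellationToken.None).Wait();
    }

    private static async Task RunAsync(CancellationToken cancellationToken)
    {
        while (!cancellationToken.IsCancellationRequested)
        {
            // "dbs/MyDb/colls/MyCollection" is a placeholder link for the collection.
            var docs = client.CreateDocumentQuery("dbs/MyDb/colls/MyCollection", "SELECT TOP 1 * FROM c").ToList();
            await Task.Delay(TimeSpan.FromSeconds(10));
        }
    }
}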

How to manage centralized values in a sharded environment

I have an ASP.NET app being developed for Windows Azure. It's been deemed necessary that we use sharding for the DB to improve write times since the app is very write heavy but the data is easily isolated. However, I need to keep track of a few central variables across all instances, and I'm not sure the best place to store that info. What are my options?
Requirements:
Must be durable, can survive instance reboots
Must be synchronized. It's incredibly important to avoid conflicting updates or at least throw an exception in such cases, rather than overwriting values or failing silently.
Must be reasonably fast (2,000+ reads/writes per second)
I thought about writing a separate component to run on a worker role that simply reads/writes the values in memory and flushes them to disk every so often, but I figure there's got to be something already written for that purpose that I can appropriate in Windows Azure.
I think what I'm looking for is a system like Apache ZooKeeper, but I don't want to have to deal with installing the JRE during the worker role startup and all that jazz.
Edit: Based on the suggestion below, I'm trying to use Azure Table Storage using the following code:
var context = table.ServiceClient.GetTableServiceContext();
var item = context.CreateQuery<OfferDataItemTableEntity>(table.Name)
                  .Where(x => x.PartitionKey == Name).FirstOrDefault();
if (item == null)
{
    item = new OfferDataItemTableEntity(Name);
    context.AddObject(table.Name, item);
}
if (item.Allocated < Quantity)
{
    allocated = ++item.Allocated;
    context.UpdateObject(item);
    context.SaveChanges();
    return true;
}
However, the context.UpdateObject(item) call fails with "The context is not currently tracking the entity." Doesn't querying the context for the item add it to the context's tracking mechanism in the first place?
Have you looked into SQL Azure Federations? It seems like exactly what you're looking for: sharding for SQL Azure.
Here are a few links to read:
http://msdn.microsoft.com/en-us/library/windowsazure/hh597452.aspx
http://convective.wordpress.com/2012/03/05/introduction-to-sql-azure-federations/
http://searchcloudapplications.techtarget.com/tip/Tips-for-deploying-SQL-Azure-Federations
What you need is Table Storage, since it matches all your requirements:
Durable: yes. Table Storage is part of a Storage Account, which isn't tied to a specific Cloud Service or instance, so it survives instance reboots.
Synchronized: yes. All instances share the same Storage Account, so they all read and write the same data.
Avoiding conflicting updates: yes, this is possible with the use of ETags (optimistic concurrency).
Reasonably fast? Very fast, up to 20,000 entities/messages/blobs per second.
Update:
Here is some sample code that uses the new storage SDK (2.0):
var storageAccount = CloudStorageAccount.DevelopmentStorageAccount;
var table = storageAccount.CreateCloudTableClient()
.GetTableReference("Records");
table.CreateIfNotExists();
// Add item.
table.Execute(TableOperation.Insert(new MyEntity() { PartitionKey = "", RowKey = "123456", Customer = "Sandrino" }));
var user1record = table.Execute(TableOperation.Retrieve<MyEntity>("", "123456")).Result as MyEntity;
var user2record = table.Execute(TableOperation.Retrieve<MyEntity>("", "123456")).Result as MyEntity;
user1record.Customer = "Steve";
table.Execute(TableOperation.Replace(user1record));
user2record.Customer = "John";
table.Execute(TableOperation.Replace(user2record));
First it adds the item 123456.
Then I'm simulating 2 users getting that same record (imagine they both opened a page displaying the record).
User 1 is fast and updates the item. This works.
User 2 still had the window open, which means he's working on an old version of the item. He updates the old item and tries to save it. This causes the following exception, because the ETag sent by the SDK no longer matches the one on the server:
The remote server returned an error: (412) Precondition Failed.
I ended up with a hybrid cache / Table Storage solution (sketched below). All instances track the variable via Azure Caching, while the first instance spins up a timer that saves the value to Table Storage once per second. On startup, the cached variable is initialized with the value saved in Table Storage, if available.
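A rough sketch of that hybrid arrangement, assuming Azure Caching (DataCache) and the 2.0 storage SDK; the entity, table, and key names are illustrative:

using System;
using System.Threading;
using Microsoft.ApplicationServer.Caching;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

public class CounterEntity : TableEntity
{
    public CounterEntity() { }
    public CounterEntity(string name) { PartitionKey = name; RowKey = ""; }
    public long Value { get; set; }
}

public class HybridCounter
{
    private readonly DataCache cache = new DataCacheFactory().GetDefaultCache();
    private readonly CloudTable table;
    private Timer flushTimer;

    public HybridCounter(CloudStorageAccount account)
    {
        table = account.CreateCloudTableClient().GetTableReference("Counters");
        table.CreateIfNotExists();
    }

    // Called on startup: seed the cache from the last persisted value, if any.
    public void Initialize(string name)
    {
        var result = table.Execute(TableOperation.Retrieve<CounterEntity>(name, ""));
        var entity = result.Result as CounterEntity;
        cache.Put(name, entity != null ? entity.Value : 0L);
    }

    // Only the "first" instance starts the once-per-second flush timer.
    public void StartFlushing(string name)
    {
        flushTimer = new Timer(_ =>
        {
            var value = (long)cache.Get(name);
            table.Execute(TableOperation.InsertOrReplace(new CounterEntity(name) { Value = value }));
        }, null, TimeSpan.FromSeconds(1), TimeSpan.FromSeconds(1));
    }
}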

SQL Azure unexpected database deletion/recreation

I've been scratching my head on this for hours, but can't seem to figure out what's wrong.
Here's our project basic setup:
MVC 3.0 Project with ASP.NET Membership
Entity Framework 4.3, Code First approach
Local environment: local SQL Server with 2 MDF database files attached (aspnet.mdf + entities.mdf)
Server environment: Windows Azure + 2 SQL Azure databases (aspnet and entities)
Here's what we did:
Created local and remote databases, modified web.config to use SQLEXPRESS connection strings in debug mode and SQL Azure connection strings in release mode
Created a SampleData class extending DropCreateDatabaseAlways<Entities> with a Seed method to seed data (sketched below, after these steps).
Used System.Data.Entity.Database.SetInitializer(new Models.SampleData()); in Application_Start to seed data to our databases.
Ran app locally - tables were created and seeded, all OK.
Deployed, ran remote app - tables were created and seeded, all OK.
Added pre-processor directives to stop destroying the Entity database at each application start on our remote Azure environment:
#if DEBUG
    // Local/debug builds: keep dropping, recreating and seeding the database on every start.
    System.Data.Entity.Database.SetInitializer(new Models.SampleData());
#else
    // Release/Azure builds: no initializer, so the existing database is left alone.
    System.Data.Entity.Database.SetInitializer<Entities>(null);
#endif
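For context, the SampleData initializer from step 2 looks roughly like this (the seeded entities are placeholders):

using System.Data.Entity;

public class SampleData : DropCreateDatabaseAlways<Entities>
{
    protected override void Seed(Entities context)
    {
        // Insert whatever sample rows the app needs after the schema has been recreated.
        // context.SomeEntities.Add(new SomeEntity { Name = "Example" });
        context.SaveChanges();
    }
}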
Here's where it got ugly
We enabled Migrations using NuGet, with AutomaticMigrationsEnabled = true;
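For reference, a minimal sketch of the Migrations Configuration class that Enable-Migrations generates, with automatic migrations switched on (namespace omitted):

using System.Data.Entity.Migrations;

internal sealed class Configuration : DbMigrationsConfiguration<Entities>
{
    public Configuration()
    {
        AutomaticMigrationsEnabled = true;
    }
}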
Everything was running smoothly, and we left it cooking for a couple of days.
Today, we noticed an unknown bug on the Azure environment:
we have several classes deriving from a superclass SuperClass
EF stores all of these objects in a single SuperClass table, using a discriminator column to know which columns to read when materializing the various classes
While the loading went just fine before today, it doesn't anymore. We get the following error message:
The 'Foo' property on 'SubClass1' could not be set to a 'null' value. You must set this property to a non-null value of type 'Int32'.
After a quick check, our SuperClass table has columns Foo and Foo1. Logical enough, since SuperClass has two subclasses, SubClass1 and SubClass2, each with a Foo property. In our case, Foo is NULL but Foo1 has an int32 value. So the problem is not with the database; rather, it seems the link between our model and the database has been lost, and the discriminator logic was corrupted.
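To illustrate the mapping described above (a sketch; the property type is inferred from the error message, and the real classes will have more members):

// EF Code First maps both subclasses into the single SuperClass table
// (table-per-hierarchy) with a Discriminator column, so the two Foo
// properties end up as separate columns, Foo and Foo1.
public class SuperClass
{
    public int Id { get; set; }
}

public class SubClass1 : SuperClass
{
    public int Foo { get; set; }
}

public class SubClass2 : SuperClass
{
    public int Foo { get; set; }   // stored in the Foo1 column
}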
Trying to find indications on what could've gone wrong, we noticed several things:
Even though we never performed any migration on the SQL Azure Entity database, the database now has a _MigrationHistory table
The _MigrationHistory table has one record:
MigrationID: 201204102350574_InitialCreate
CreatedOn: 4/10/2012 11:50:57 PM
Model: <Binary data>
ProductVersion: 4.3.1
Looking at other tables, most of them were emptied when this migration happened. Only the tables that were initially seeded with SampleData remained untouched.
Checking in with the SQL Azure Management portal, our Entity database shows the following creation date: 4/10/2012 23:50:55.
Here is our understanding
For some reason, SQL Azure deleted and recreated our database
The _MigrationHistory table was created in the process, registering a starting point to test the model against for future migrations
Here are our Questions
Who / What triggered the database deletion / recreation?
How could EF re-seed our sample data since Application_Start has System.Data.Entity.Database.SetInitializer<Entities>(null);?
EDIT: Looking at what could've gone wrong, we noticed one thing we didn't respect in this SQL Azure tutorial: we didn't remove PersistSecurityInfo from our SQL Azure Entity database connection string after the database was created. Can't see why on Earth it could have caused the problem, but still worth mentioning...
Never mind, found the cause of our problem. In case anybody wonders: we hadn't made any Azure deployment since adding the pre-processor directives. MS must have restarted the machine our VM resided on, and the new VM recreated the database using seed data.
Lesson learned: always do frequent Azure deployments.
