IsConcurrencyToken(true) does not actually check concurrency tokens - ef-core-3.0

I have a property configured like this:
public byte[] Timestamp { get; set; }
And then in my DbContext, I use the Fluent API like so:
modelBuilder.Entity<MyClass>()
.Property(x => x.Timestamp)
.IsRowVersion()
.IsConcurrencyToken(true);
So naturally I went ahead and wrote a unit test ensuring that an entity with a wrong timestamp set would not get saved. I used SQLite and some custom SQL to make RowVersion work in unit tests, but to my surprise I never got an exception. Then I tested it in our application, and I also did not get an exception when a wrong Timestamp was set on the entity.
var myInstance = await myDbContext.Instances
.Include(x => x...)
.Include(x => x...)
.SingleAsync(x => x.Id == id);
// set other values, add new entities to relationships, aso
myInstance.Timestamp = new byte[] { 1, 2, 3, 4 };
await myDbContext.SaveChangesAsync();
I am clearly missing something here. I thought configuring IsRowVersion would be enough to force EF Core to include a WHERE Timestamp = clause in the UPDATE, but it seems that's not the case. As you can see, I also tried calling IsConcurrencyToken (even with its default value of true, just to be sure), but to no avail.
Edit: I have worked "around" it now by including the Timestamp in my SingleAsync call, but this still leaves me unsure whether it's still possible to not get a concurrency exception, as the Timestamp set on my entity is apparently not checked at all when saving.

This is a known behavior of EF core, as documented here:
https://github.com/aspnet/EntityFrameworkCore/issues/18505
Manually changing the value of the token is considered a no-op: the original value that was queried from the database is what gets used for the concurrency check, so a value assigned to the tracked entity afterwards is effectively ignored.

It is possible to manually set the token's value and have EF Core's optimistic concurrency check fail if the database has a newer version of the entity:
db.Entry(entity).Property(nameof(Entity.ConcurrencyToken)).OriginalValue = entity.ConcurrencyToken;
As per this comment in #user604613's answer
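For completeness, here is a rough sketch of that pattern applied to the question's model (MyClass, Timestamp, Instances and myDbContext come from the question; the model parameter and SomeValue property are placeholders for whatever the client sends back):
var myInstance = await myDbContext.Instances.SingleAsync(x => x.Id == id);
// Apply the edited values, then copy the client-supplied token into OriginalValue,
// because the ORIGINAL value is what EF Core compares in the UPDATE's WHERE clause.
myInstance.SomeValue = model.SomeValue;
myDbContext.Entry(myInstance)
    .Property(x => x.Timestamp)
    .OriginalValue = model.Timestamp;
try
{
    await myDbContext.SaveChangesAsync();
}
catch (DbUpdateConcurrencyException)
{
    // The row changed in the database since the client read it.
}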

Related

How to avoid changing property values in an NSBatchInsertRequest?

I have a simple Core Data entity Story that I occasionally update with the latest data from a network call. This network call sometimes updates many, many Story instances, so I run an NSBatchInsertRequest, shown below. (The other reason I'm using a batch insert is that many stories might need to be added to the persistent store.)
The problem is a user can have already marked a Story as a favorite. When they do that, I set story.isFavorite = true on the main thread and save viewContext.
However, when the batch insert occurs it overwrites story.isFavorite, setting it back to false, even though I'm using NSMergeByPropertyObjectTrumpMergePolicy on both the batch insert and view contexts. I am not touching story.isFavorite in the batch insert handler either so I don't expect that property to be overwritten.
I thought the benefit of a batch insert with this merge policy was to avoid first fetching + then manually updating changed properties + finally saving. What is the right way to avoid changing property values in an NSBatchInsertRequest?
Story
@objc(Story)
public class Story: NSManagedObject {
    @NSManaged public var title: String?
    @NSManaged public var storyURL: URL?
    @NSManaged public var updatedTime: Date?
    @NSManaged public var isFavorite: Bool // <- the problem property
}
Batch insert
container.viewContext.mergePolicy = NSMergeByPropertyObjectTrumpMergePolicy
container.viewContext.automaticallyMergesChangesFromParent = false
let context = NSManagedObjectContext(concurrencyType: .privateQueueConcurrencyType)
context.parent = container.viewContext
context.mergePolicy = NSMergeByPropertyObjectTrumpMergePolicy
context.perform {
    var index = 0
    let batchInsert = NSBatchInsertRequest(entity: Story.entity(), managedObjectHandler: { managedObject in
        guard index < downloadedStories.count else { return true } // true = no more objects to insert
        let story = managedObject as! Story
        let storyResponse = downloadedStories[index]
        // Update story with latest response data BUT don't modify story.isFavorite.
        story.title = storyResponse.title
        story.storyURL = storyResponse.storyURL
        story.updatedTime = storyResponse.updatedTime
        // ...
        index += 1
        return false // false = request the next object
    })
    let result = (try? context.execute(batchInsert)) as? NSBatchInsertResult
    if let insertedIDs = result?.result as? [NSManagedObjectID] {
        // Merge changes into parent context. Skip save() because not needed for batch insert.
        NSManagedObjectContext.mergeChanges(fromRemoteContextSave: [NSInsertedObjectsKey: insertedIDs], into: [container.viewContext])
    }
}
Edit
The Story entity does have a unique value constraint using attribute storyURL.
Update after Michael Tsai's answer
By making the Story entity attribute isFavorite a non-Optional Boolean without a default value (it was marked as Optional before, though I'm not sure that makes a difference here) and keeping the Use Scalar Type box checked, I can confirm that existing objects in the store are not modified (at all) with this configuration of the batch insert context:
context.persistentStoreCoordinator = container.persistentStoreCoordinator
// HOWEVER, observe that regardless of the merge policy below,
// setting `context.parent = container.viewContext` will also
// overwrite the store data!
context.mergePolicy = NSMergeByPropertyStoreTrumpMergePolicy
// NSMergeByPropertyObjectTrumpMergePolicy ignores objects in the store
// (which have the same unique constraint value, here equal `storyURL`)
// and overwrites all properties.
// To confirm that the batch insert operation does not modify
// existing Story instances (at all), first delete all instances
// where isFavorite == false. Then load all the story data again and
// execute the NSBatchInsertRequest with this change to managedObjectHandler:
story.title = storyResponse.title + " (modified)"
You will see the missing stories get inserted back, this time with their titles carrying the suffix " (modified)"; but previously favorited stories do not get modified (basically, with this setup, the batch insert won't re-insert existing objects).
So the isFavorite property does not get overwritten, but neither do any properties that should be changed (for example, stories that received a new title).
Therefore, if you don't want your objects to get updated, but you want completely new objects to be inserted, you can use this approach.
However, if you are expecting your objects to require updates here are some alternatives:
you may opt to run a separate update operation, maybe an NSBatchUpdateRequest, after you run your batch insert in this way;
or, after the batch insert, you can update certain properties in a simple loop in a (possibly background/child) context without a batch operation, which could be fine if there isn't a ton of data;
lastly, you might be able to first batch insert new data into a temporary store before somehow manually merging your choice of properties with the new store, then delete the temporary store.
A simpler approach: you could fetch all the properties you want to keep unchanged before you execute the batch insert (storing them in a dictionary keyed by your object's uniqueness constraint value), and then set those properties again during the batch insert.
For this approach, you will want to use a merge policy such as NSMergeByPropertyObjectTrumpMergePolicy so that the updated object gets re-inserted into the store (make sure to fetch all properties that you don't want to lose in advance of the batch insert); a rough sketch follows.
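A sketch of that idea, reusing the question's Story, downloadedStories and storyURL names (the favorites fetch and the index bookkeeping are assumptions, not part of the original code), paired with NSMergeByPropertyObjectTrumpMergePolicy so the re-inserted rows keep the flag:
// 1) Before the batch insert, remember which storyURLs were favorited.
let favoritesRequest = NSFetchRequest<Story>(entityName: "Story")
favoritesRequest.predicate = NSPredicate(format: "isFavorite == YES")
let favoriteURLs = Set((try? context.fetch(favoritesRequest))?.compactMap { $0.storyURL } ?? [])
// 2) During the batch insert, re-apply the flag so the overwrite preserves it.
var index = 0
let batchInsert = NSBatchInsertRequest(entity: Story.entity(), managedObjectHandler: { object in
    guard index < downloadedStories.count else { return true } // true = done
    let story = object as! Story
    let response = downloadedStories[index]
    story.title = response.title
    story.storyURL = response.storyURL
    story.updatedTime = response.updatedTime
    if let url = story.storyURL, favoriteURLs.contains(url) {
        story.isFavorite = true
    }
    index += 1
    return false // keep providing objects
})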
random idea: How to Save Data When Using One ManagedObjectContext and PersistentStoreCoordinator with Two Stores
I don't think it is actually possible to do a partial update with a batch insert request. It's hard to know for sure because I don't think any of this is documented except in WWDC sessions. When I first watched the 2019 session, I was excited because the presenter said:
Attributes that are optional or configured with default values can be omitted from the dictionary as well.
In the case of updating an object with unique constraint, the existing values will not be changed.
I took this to mean that:
You can omit values for new objects, and you'll get the defaults or NULL. That makes sense.
If there's an existing object and you omit a value, that value will not be changed. So you can purposely omit values to do a partial update, i.e. update other values while leaving your isFavorite alone.
But, after writing code to test this and looking at the output from com.apple.CoreData.SQLDebug, what actually seems to happen with NSMergeByPropertyObjectTrumpMergePolicy is:
If you omit a value that's required you get a validation error.
If you omit a value that's optional, it updates the row to NULL. For a Bool property in Swift, this will become false.
If you omit a value with a default value, it updates the row to the default.
This is a shame because it seems like partial updates could be implemented by having the ON CONFLICT clause only specify DO UPDATE SET for the attributes that you actually set. But (as of macOS 11) Core Data seems to always generate SQL to set all of the columns.
In summary, with batch inserts, NSMergeByPropertyObjectTrumpMergePolicy does not actually merge by property based on what's changed (like with a regular Core Data save). Rather, it either inserts a new row (if the object is absent) or overwrites all the columns but preserves the objectID (if the object was present).
NSMergeByPropertyStoreTrumpMergePolicy also doesn't merge by property. It just means to leave the stored object alone if it's already present.
Update (2021-06-24): I heard from DTS that Apple considers the current (iOS 14/macOS 11) behavior described above a bug, and that it should let you batch insert without changing omitted properties. The Radar number is 79747419.

Seeder method for Azure Database in EF Core 2

What is the proper method to seed data into an Azure database? Currently, in development, I have a seeder method that inserts the first couple of users as well as products. The users' (including the admin user's) usernames and passwords are hardcoded into the Seed method; is this an acceptable practice?
As far as the products are concerned, I have a JSON file with the product names and descriptions, which the seeder method iterates through in development to insert the data.
To answer your question, "The Users (including admin user) username and password are hardcoded into the Seed method, is this an acceptable practice?":
No, you should not keep the password in cleartext. Keep it in encrypted or hashed form and seed that value instead; a sketch follows.
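As one possible sketch, using ASP.NET Core Identity's PasswordHasher (the AppUser type and the Users DbSet are placeholders for your own user entity):
var hasher = new PasswordHasher<AppUser>(); // Microsoft.AspNetCore.Identity
var admin = new AppUser { UserName = "admin" };
// Better still, pull the initial password from configuration or an environment
// variable instead of hardcoding it in the seeder.
admin.PasswordHash = hasher.HashPassword(admin, "S0me-initial-p@ssword");
context.Users.Add(admin);
context.SaveChanges();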
In EF Core 2.1, the seeding workflow is quite different. There is now Fluent API logic to define the seed data in OnModelCreating. Then, when you create a migration, the seeding is transformed into migration commands to perform inserts, which are eventually transformed into the SQL that that particular migration executes. Further migrations will know to insert more data, or even perform updates and deletes, depending on what changes you make in the OnModelCreating method.
Suppose the three classes in my model are Magazine, Article and Author. A magazine can have one or more articles and an article can have one author. There's also a PublicationsContext that uses SQLite as its data provider and has some basic SQL logging set up.
Let's take the example of a single entity type.
Let's start by seeing what it looks like to provide seed data for a magazine, at its simplest.
The key to the new seeding feature is the HasData Fluent API method, which you can apply to an Entity in the OnModelCreating method.
Here’s the structure of the Magazine type:
public class Magazine
{
public int MagazineId { get; set; }
public string Name { get; set; }
public string Publisher { get; set; }
public List<Article> Articles { get; set; }
}
It has a key property, MagazineId, two strings and a list of Article types. Now let’s seed it with data for a single magazine:
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<Magazine>().HasData(
        new Magazine { MagazineId = 1, Name = "MSDN Magazine" });
}
A couple of things to pay attention to here: first, I'm explicitly setting the key property, MagazineId; second, I'm not supplying the Publisher string.
Next, I’ll add a migration, my first for this model. I happen to be using Visual Studio Code for this project, which is a .NET Core app, so I’m using the CLI migrations command, “dotnet ef migrations add init.” The resulting migration file contains all of the usual CreateTable and other relevant logic, followed by code to insert the new data, specifying the table name, columns and values:
migrationBuilder.InsertData(
table: "Magazines",
columns: new[] { "MagazineId", "Name", "Publisher" },
values: new object[] { 1, "MSDN Magazine", null });
Inserting the primary key value stands out to me here—especially after I’ve checked how the MagazineId column was defined further up in the migration file. It’s a column that should auto-increment, so you may not expect that value to be explicitly inserted:
MagazineId = table.Column<int>(nullable: false)
.Annotation("Sqlite:Autoincrement", true)
Let’s continue to see how this works out. Using the migrations script command, “dotnet ef migrations script,” to show what will be sent to the database, I can see that the primary key value will still be inserted into the key column:
INSERT INTO "Magazines" ("MagazineId", "Name", "Publisher")
VALUES (1, 'MSDN Magazine', NULL);
That’s because I’m targeting SQLite. SQLite will insert a key value if it’s provided, overriding the auto-increment. But what about with a SQL Server database, which definitely won’t do that on the fly?
I switched the context to use the SQL Server provider to investigate and saw that the SQL generated by the SQL Server provider includes logic to temporarily set IDENTITY_INSERT ON. That way, the supplied value will be inserted into the primary key column. Mystery solved!
You can use HasData to insert multiple rows at a time, though keep in mind that HasData is specific to a single entity. You can’t combine inserts to multiple tables with HasData. Here, I’m inserting two magazines at once:
modelBuilder.Entity<Magazine>()
.HasData(new Magazine{MagazineId=2, Name="New Yorker"},
new Magazine{MagazineId=3, Name="Scientific American"}
);
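The same applies to related data: HasData is applied per entity type, so a related Article row is seeded with its own call and an explicitly set foreign key value (the Article property names below are assumptions based on the model described above):
modelBuilder.Entity<Article>()
    .HasData(new Article { ArticleId = 1, MagazineId = 1, Title = "EF Core In Depth" });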
For a complete example, you can browse through this sample repo.
Hope it helps.

How to load document out of database instead of memory

Using Raven client and server #30155. I'm basically doing the following in a controller:
public ActionResult Update(string id, EditModel model)
{
var store = provider.StartTransaction(false);
var document = store.Load<T>(id);
model.UpdateEntity(document); // overwrite document property values with those of the edit model
document.Update(store); // tell document to update itself if it passes some conflict checking
}
Then in document.Update, I try to do this:
var old = store.Load<T>(this.Id);
if (old.Date != this.Date)
{
// Resolve conflicts that occur by moving document period
}
store.Update(this);
Now, I run into the problem that old gets loaded from memory instead of from the database and already contains the updated values. Thus, it never goes into the conflict check.
I tried working around the problem by changing the Controller.Update method into:
public ActionResult Update(string id, EditModel model)
{
var store = provider.StartTransaction(false);
var document = store.Load<T>(id);
store.Dispose();
model.UpdateEntity(document); // overwrite document property values with those of the edit model
store = provider.StartTransaction(false);
document.Update(store); // tell document to update itself if it passes some conflict checking
}
This results in me getting a Raven.Client.Exceptions.NonUniqueObjectException with the text: Attempted to associate a different object with id
Now, the questions:
Why would Raven care if I try and associate a new object with the id as long as the new object carries the proper e-tag and type?
Is it possible to load a document in its database state (overriding default behavior to fetch document from memory if it exists there)?
What is a good solution to getting the document.Update() to work (preferably without having to pass the old object along)?
Why would Raven care if I try and associate a new object with the id as long as the new object carries the proper e-tag and type?
RavenDB leans on being able to serve documents from memory (which is faster). By checking for existing objects with the same id, hard-to-debug errors are prevented.
EDIT: See the comment from Rayen below. If you enable concurrency checking / provide an etag in the Store call, you can bypass the error.
Is it possible to load a document in its database state (overriding default behavior to fetch document from memory if it exists there)?
Apparently not.
What is a good solution to getting the document.Update() to work (preferably without having to pass the old object along)?
I went with refactoring the document.Update method to also take an optional parameter for the old date period, since #1 and #2 don't seem possible.
RavenDB supports optimistic concurrency out of the box. The only thing you need to do is enable it:
session.Advanced.UseOptimisticConcurrency = true;
See:
http://ravendb.net/docs/article-page/3.5/Csharp/client-api/session/configuration/how-to-enable-optimistic-concurrency
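In practice that looks something like the sketch below (the document type, id and date property are placeholders; the exact concurrency exception type and namespace depend on your client version):
using (var session = documentStore.OpenSession())
{
    session.Advanced.UseOptimisticConcurrency = true;
    var document = session.Load<MyDocument>(id);
    document.Date = newDate;
    // Throws a concurrency exception if the document changed on the server
    // after it was loaded into this session.
    session.SaveChanges();
}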

Optimistic concurrency despite lock

I have multiple threads running a batch job. When each thread finishes it calls this method of mine:
private static readonly object lockVar = new object();

public void UserIsDone(int batchId, int userId)
{
    //Get the batch user
    var batchUser = context.ScheduledUsersBatchUsers.SingleOrDefault(x => x.User.Id == userId && x.Batch.Id == batchId);
    if (batchUser != null)
    {
        lock (lockVar)
        {
            context.ScheduledUsersBatchUsers.Remove(batchUser);
            context.SaveChanges();

            //Try to get the batch with the assumption it has no users left. If we do get the batch back, it means there are no users left.
            var dbBatch = context.ScheduledUsersBatches.SingleOrDefault(x => x.Id == batchId && !x.Users.Any());

            //So this must have been the last user, the batch is empty, so we fetch it and remove it.
            if (dbBatch != null)
            {
                context.ScheduledUsersBatches.Remove(dbBatch);
                context.SaveChanges();
            }
        }
    }
}
What this method does is very simple: it looks up the "BatchUser" in order to remove him from the queue, which it does. That part works swell.
However, after removing the user I want to check whether that was the last user in the whole batch. Since this is multithreaded, a race condition can happen.
So I put the removal of the batch user within a lock; after I remove the user, I check whether the batch has any batch users left.
But here is my problem: even though I have a lock, and the query to get the dbBatch clearly requires it to have no users in order to return the object, I sometimes get it back with users, like so:
When I do get that, I also get the following error on SaveChanges()
However, at other times I get the dbBatch object back correctly with no children, like so:
And when I do, it all works great, no exceptions.
With the debugger I can reproduce the error by setting a breakpoint on the lock statement (see screenshot one). Then all threads get to the lock (while one goes in), and then I always get the error.
If I only have a breakpoint inside the if-statement, it's more random.
With the lock in place, I don't see how this happens.
Update
I inject my context with Ninject, and this is my Ninject configuration:
kernel.Bind<MyContext>()
.To<MyContext>()
.InRequestScope()
.WithConstructorArgument("connectionStringOrName", "MyConnection");
kernel.Bind<DbContext>().ToMethod(context => kernel.Get<MyContext>()).InRequestScope();
Update 2
I also tried this solution: https://msdn.microsoft.com/en-us/data/jj592904.aspx
But strangely, I don't get a DbUpdateConcurrencyException; instead I get a DbUpdateException whose InnerException is an OptimisticConcurrencyException.
But neither DbUpdateException nor OptimisticConcurrencyException contains an Entries property, so I can't do ex.Entries.Single().Reload();
I'm also adding the exceptions in text form here.
The outer exception, of type DbUpdateException: {"An error occurred while saving entities that do not expose foreign key properties for their relationships. The EntityEntries property will return null because a single entity cannot be identified as the source of the exception. Handling of exceptions while saving can be made easier by exposing foreign key properties in your entity types. See the InnerException for details."}
The InnerException of type OptimisticConcurrencyException: {"Store update, insert, or delete statement affected an unexpected number of rows (0). Entities may have been modified or deleted since entities were loaded. See http://go.microsoft.com/fwlink/?LinkId=472540 for information on understanding and handling optimistic concurrency exceptions."}
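For reference, the handling pattern from that MSDN page looks roughly like the sketch below; the second catch block is an assumption added to match the exception shape described above (no usable Entries, so the affected entities have to be re-queried instead of reloaded):
try
{
    context.SaveChanges();
}
catch (DbUpdateConcurrencyException ex)
{
    // The documented approach: refresh the conflicting entity and retry.
    ex.Entries.Single().Reload();
}
catch (DbUpdateException ex) when (ex.InnerException is OptimisticConcurrencyException)
{
    // No single conflicting entry is exposed here, so re-query the batch/user
    // involved and retry the operation instead.
}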

MVC 3 EF Code-first to webhost database trouble

I'm fairly new to ASP.NET MVC 3, and to coding in general, really.
I have a very, very small application I want to upload to my web host.
I am using Entity Framework, and it works fine on my local machine.
I've entered a new connection string to use my remote database instead; however, it doesn't really work. First of all, I have a single MSSQL database which cannot be dropped and recreated, so I cannot use that strategy in my initializer. I tried supplying null as the strategy, but to no avail: my tables simply do not get created in my database, and that's the problem. I don't know how I am supposed to do that with Entity Framework.
When I run the application, it tries to select the data from the database, and that part works fine; I just don't know how to get those tables created in my database through Code First.
I could probably get it to work by manually recreating the tables, but I want to know the solution through Code First.
This is my initializer class
public class EntityInit : DropCreateDatabaseIfModelChanges<NewsContext>
{
    private NewsContext _db = new NewsContext();

    protected override void Seed(NewsContext context)
    {
        new List<News>
        {
            new News { Author = "Michael Brandt", Title = "Test News 1 ", NewsBody = "Bblablabalblaaaaa1" },
            new News { Author = "Michael Brandt", Title = "Test News 2 ", NewsBody = "Bblablabalblaaaaa2" },
            new News { Author = "Michael Brandt", Title = "Test News 3 ", NewsBody = "Bblablabalblaaaaa3" },
            new News { Author = "Michael Brandt", Title = "Test News 4 ", NewsBody = "Bblablabalblaaaaa4" },
        }.ForEach(a => context.News.Add(a));

        base.Seed(context);
    }
}
As I said, I'm really new to all of this, so excuse me if I'm not providing the information you need to answer my question; just let me know and I will add it.
Initialization strategies do not support upgrade strategies at the moment.
Initialization strategies should be used to initialise a new database; all subsequent changes should be done using scripts at the moment.
The best practice, as we speak, is to modify the database with a script and then adjust the code by hand to reflect the change.
In future releases, upgrade/migration strategies will be available.
Try to execute the scripts statement by statement from a custom IDatabaseInitializer.
From there you can read the database version in the db and apply the missing scripts to your database: simply store a db version in a table, then level up with change scripts.
public class Initializer : IDatabaseInitializer<MyContext>
{
    public void InitializeDatabase(MyContext context)
    {
        if (!context.Database.Exists() || !context.Database.CompatibleWithModel(false))
        {
            context.Database.Delete();
            context.Database.Create();

            var jobInstanceStateList = EnumExtensions.ConvertEnumToDictionary<JobInstanceStateEnum>().ToList();
            jobInstanceStateList.ForEach(kvp => context.JobInstanceStateLookup.Add(
                new JobInstanceStateLookup()
                {
                    JobInstanceStateLookupId = kvp.Value,
                    Value = kvp.Key
                }));

            context.SaveChanges();
        }
    }
}
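And here is a rough sketch of the version-table idea described above (the SchemaVersion table, the PublishedOn column and the script list are assumptions, not part of the original code):
public class UpgradeInitializer : IDatabaseInitializer<MyContext>
{
    // Change scripts keyed by the schema version they upgrade the database to.
    private static readonly SortedDictionary<int, string> ChangeScripts =
        new SortedDictionary<int, string>
        {
            { 2, "ALTER TABLE News ADD PublishedOn datetime NULL" },
            // { 3, "..." },
        };

    public void InitializeDatabase(MyContext context)
    {
        var currentVersion = context.Database
            .SqlQuery<int>("SELECT TOP 1 Version FROM SchemaVersion")
            .FirstOrDefault();

        foreach (var script in ChangeScripts.Where(s => s.Key > currentVersion))
        {
            context.Database.ExecuteSqlCommand(script.Value);
            context.Database.ExecuteSqlCommand("UPDATE SchemaVersion SET Version = {0}", script.Key);
        }
    }
}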
Have you tried using the CreateDatabaseIfNotExists initializer?
– Every time the context is initialized, the database will be created if it does not exist.
The database initializer can be set using the SetInitializer method of the Database class. If nothing is specified, EF will use the CreateDatabaseIfNotExists class to initialize the database.
Database.SetInitializer<NewsContext>(null);
or:
Database.SetInitializer<NewsContext>(new CreateDatabaseIfNotExists<NewsContext>());
I'm not sure if this is the exact syntax as I have not written this in a while. But it should be very similar.
If you are building a very small application, you could maybe go with SQL CE 4.0.
Bin deployment should allow you to run SQL CE 4.0 even if your hosting provider doesn't have the binaries installed for it. You can read more here.
That way you can actually use whatever initializer you want, since you no longer have the problem of not being able to drop databases and delete tables.
Could this be of any help?
