How to load document out of database instead of memory - c#-4.0

Using Raven client and server #30155. I'm basically doing the following in a controller:
public ActionResult Update(string id, EditModel model)
{
    var store = provider.StartTransaction(false);
    var document = store.Load<T>(id);
    model.UpdateEntity(document); // overwrite document property values with those of the edit model
    document.Update(store); // tell document to update itself if it passes some conflict checking
}
Then, in document.Update, I try to do this:
var old = store.Load<T>(this.Id);
if (old.Date != this.Date)
{
    // Resolve conflicts that occur by moving document period
}
store.Update(this);
Now I run into the problem that old is loaded from memory instead of the database and already contains the updated values. Thus, it never enters the conflict check.
I tried working around the problem by changing the Controller.Update method into:
public ActionResult Update(string id, EditModel model)
{
    var store = provider.StartTransaction(false);
    var document = store.Load<T>(id);
    store.Dispose();
    model.UpdateEntity(document); // overwrite document property values with those of the edit model
    store = provider.StartTransaction(false);
    document.Update(store); // tell document to update itself if it passes some conflict checking
}
This results in a Raven.Client.Exceptions.NonUniqueObjectException with the message: Attempted to associate a different object with id.
Now, the questions:
1. Why would Raven care if I try and associate a new object with the id as long as the new object carries the proper e-tag and type?
2. Is it possible to load a document in its database state (overriding default behavior to fetch document from memory if it exists there)?
3. What is a good solution to getting the document.Update() to work (preferably without having to pass the old object along)?

1. Why would Raven care if I try and associate a new object with the id as long as the new object carries the proper e-tag and type?
RavenDB leans on being able to serve documents from memory (which is faster). By refusing to associate a second object with an id it is already tracking, it prevents hard-to-debug errors.
EDIT: See Rayen's comment below. If you enable concurrency checking / provide an etag in the Store call, you can bypass the error.
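The session cache is easy to observe; here is a minimal sketch using the plain RavenDB client API (an IDocumentStore and a MyDocument type are assumed, rather than the question's provider wrapper):
using (var session = documentStore.OpenSession())
{
    var first = session.Load<MyDocument>(id);
    var second = session.Load<MyDocument>(id); // no second server call; served from the session cache
    System.Diagnostics.Debug.Assert(ReferenceEquals(first, second)); // same instance both times
}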
2. Is it possible to load a document in its database state (overriding default behavior to fetch document from memory if it exists there)?
Apparently not.
3. What is a good solution to getting the document.Update() to work (preferably without having to pass the old object along)?
Since #1 and #2 don't seem possible, I went with refactoring the document.Update method to take an optional parameter that receives the old date period (sketched below).
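For illustration only, the refactored Update could look roughly like this (a sketch; the store type and the conflict handling are placeholders, not the question's actual code):
public void Update(IStore store, DateTime? oldDate = null)
{
    // The caller passes the date it loaded before applying the edit model,
    // since re-loading this.Id would only return the already-modified instance.
    if (oldDate.HasValue && oldDate.Value != this.Date)
    {
        // Resolve conflicts that occur by moving the document period
    }
    store.Update(this);
}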

RavenDB supports optimistic concurrency out of the box. The only thing you need to do is enable it:
session.Advanced.UseOptimisticConcurrency = true;
See:
http://ravendb.net/docs/article-page/3.5/Csharp/client-api/session/configuration/how-to-enable-optimistic-concurrency
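A minimal sketch of how that plays out, assuming the plain client API (in 3.5 the ConcurrencyException type lives in Raven.Abstractions.Exceptions):
using (var session = documentStore.OpenSession())
{
    session.Advanced.UseOptimisticConcurrency = true;
    var doc = session.Load<MyDocument>(id);
    doc.Date = newDate; // hypothetical edit
    try
    {
        session.SaveChanges(); // throws if the server-side etag changed since the Load
    }
    catch (ConcurrencyException)
    {
        // Someone else updated the document first; reload and retry, or surface the conflict.
    }
}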

Related

How to avoid changing property values in an NSBatchInsertRequest?

I have a simple Core Data entity Story that I occasionally update with the latest data from a network call. This network call sometimes updates many, many story instances, so I run an NSBatchInsertRequest, shown below. (The other reason I'm using a batch insert is that many stories might need to be added to the persistent store.)
The problem is a user can have already marked a Story as a favorite. When they do that, I set story.isFavorite = true on the main thread and save viewContext.
However, when the batch insert occurs it overwrites story.isFavorite, setting it back to false, even though I'm using NSMergeByPropertyObjectTrumpMergePolicy on both the batch insert and view contexts. I am not touching story.isFavorite in the batch insert handler either so I don't expect that property to be overwritten.
I thought the benefit of a batch insert with this merge policy was to avoid first fetching + then manually updating changed properties + finally saving. What is the right way to avoid changing property values in an NSBatchInsertRequest?
Story
@objc(Story)
public class Story: NSManagedObject {
    @NSManaged public var title: String?
    @NSManaged public var storyURL: URL?
    @NSManaged public var updatedTime: Date?
    @NSManaged public var isFavorite: Bool // <- the problem property
}
Batch insert
container.viewContext.mergePolicy = NSMergeByPropertyObjectTrumpMergePolicy
container.viewContext.automaticallyMergesChangesFromParent = false

let context = NSManagedObjectContext(concurrencyType: .privateQueueConcurrencyType)
context.parent = container.viewContext
context.mergePolicy = NSMergeByPropertyObjectTrumpMergePolicy

context.perform {
    var index = 0
    let batchInsert = NSBatchInsertRequest(entity: Story.entity(), managedObjectHandler: { managedObject in
        guard index < downloadedStories.count else { return true } // true = done inserting
        let story = managedObject as! Story
        let storyResponse = downloadedStories[index]
        // Update story with latest response data BUT don't modify story.isFavorite.
        story.title = storyResponse.title
        story.storyURL = storyResponse.storyURL
        story.updatedTime = storyResponse.updatedTime
        // ...
        index += 1
        return false // false = request the next object
    })
    do {
        let result = try context.execute(batchInsert) as? NSBatchInsertResult
        if let insertedIDs = result?.result as? [NSManagedObjectID] {
            // Merge changes into parent context. Skip save() because not needed for batch insert.
            NSManagedObjectContext.mergeChanges(fromRemoteContextSave: [NSInsertedObjectsKey: insertedIDs], into: [container.viewContext])
        }
    } catch {
        // handle the error as appropriate
    }
}
Edit
The Story entity does have a unique value constraint using attribute storyURL.
Update after Michael Tsai's answer
By making the Story entity attribute isFavorite a non-Optional Boolean without a default value (it was marked Optional before, though I'm not sure that makes a difference here) and keeping the Use Scalar Type box checked, I can confirm that existing objects in the store are not modified (at all) with this configuration of the batch insert context:
context.persistentStoreCoordinator = container.persistentStoreCoordinator
// HOWEVER, observe that regardless of the merge policy below,
// setting `context.parent = container.viewContext` will also
// overwrite the store data!
context.mergePolicy = NSMergeByPropertyStoreTrumpMergePolicy
// NSMergeByPropertyObjectTrumpMergePolicy ignores objects in the store
// (which have the same unique constraint value, here equal `storyURL`)
// and overwrites all properties.
// To confirm that the batch insert operation does not modify
// existing Story instances (at all), first delete all instances
// where isFavorite == false. Then load all the story data again and
// execute the NSBatchInsertRequest with this change to managedObjectHandler:
story.title = storyResponse.title + " (modified)"
You will see the missing stories get inserted back, this time with the suffix " (modified)" on their titles; but previously favorited stories do not get modified (basically, with this setup, the batch insert won't re-insert existing objects).
So the isFavorite property does not get overwritten, but neither do any properties that should be changed (the ones that received a new title, for example).
Therefore, if you don't want your objects to get updated, but you want completely new objects to be inserted, you can use this approach.
However, if you are expecting your objects to require updates here are some alternatives:
you may opt to run a separate update operation, maybe an NSBatchUpdateRequest after you run your batch insert in this way,
or after the batch insert, you can update certain properties in a simple loop in a (possibly background/child) context without a batch operation, which could be fine if there isn't tons of data;
lastly, you might be able to first batch insert new data to a temporary store before somehow manually merging your choice of properties with the new store, then delete the temporary store.
A simpler approach: you could fetch all the properties you want to keep unchanged before you execute the batch insert (storing them in a dictionary keyed by your object's uniqueness-constraint value), and then during the batch insert set those properties again (see the sketch after this list).
For this approach, you will want to use a different merge policy such as NSMergeByPropertyObjectTrumpMergePolicy so that the updated object gets re-inserted into the store (make sure to fetch all properties that you don't want to lose in advance of the batch insert).
random idea: How to Save Data When Using One ManagedObjectContext and PersistentStoreCoordinator with Two Stores
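A rough sketch of that fetch-then-reapply idea, reusing the names from the question (the fetch and set handling are illustrative, untested code):
// Before executing the batch insert: remember which stories are favorites,
// keyed by the unique-constraint attribute (storyURL).
let favoritesFetch = NSFetchRequest<Story>(entityName: "Story")
favoritesFetch.predicate = NSPredicate(format: "isFavorite == YES")
let favoriteURLs = Set(((try? context.fetch(favoritesFetch)) ?? []).compactMap { $0.storyURL })

// Then, inside managedObjectHandler, reapply the preserved value so the
// ObjectTrump policy writes it back along with the fresh data:
story.isFavorite = favoriteURLs.contains(storyResponse.storyURL) // assumes storyResponse.storyURL is non-optional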
I don't think it is actually possible to do a partial update with a batch insert request. It's hard to know for sure because I don't think any of this is documented except in WWDC sessions. When I first watched the 2019 session, I was excited because the presenter said:
Attributes that are optional or configured with default values can be omitted from the dictionary as well.
In the case of updating an object with unique constraint, the existing values will not be changed.
I took this to mean that:
You can omit values for new objects, and you'll get the defaults or NULL. That makes sense.
If there's an existing object and you omit a value, that value will not be changed. So you can purposely omit values to do a partial update, i.e. update other values while leaving your isFavorite alone.
But, after writing code to test this and looking at the output from com.apple.CoreData.SQLDebug, what actually seems to happen with NSMergeByPropertyObjectTrumpMergePolicy is:
If you omit a value that's required you get a validation error.
If you omit a value that's optional, it updates the row to NULL. For a Bool property in Swift, this will become false.
If you omit a value with a default value, it updates the row to the default.
This is a shame because it seems like partial updates could be implemented by having the ON CONFLICT clause only specify DO UPDATE SET for the attributes that you actually set. But (as of macOS 11) Core Data seems to always generate SQL to set all of the columns.
In summary, with batch inserts, NSMergeByPropertyObjectTrumpMergePolicy does not actually merge by property based on what's changed (like with a regular Core Data save). Rather, it either inserts a new row (if the object is absent) or overwrites all the columns but preserves the objectID (if the object was present).
NSMergeByPropertyStoreTrumpMergePolicy also doesn't merge by property. It just means to leave the stored object alone if it's already present.
Update (2021-06-24): I heard from DTS that Apple considers the current (iOS 14/macOS 11) behavior described above a bug, and that it should let you batch insert without changing omitted properties. The Radar number is 79747419.

IsConcurrencyToken(true) does not actually check concurrency tokens

I have a property configured like this:
public byte[] Timestamp { get; set; }
And then in my DbContext, I use the Fluent API like so:
modelBuilder.Entity<MyClass>()
    .Property(x => x.Timestamp)
    .IsRowVersion()
    .IsConcurrencyToken(true);
So naturally I went ahead and wrote a unit test ensuring that an entity with a wrong timestamp set would not get saved. I used SQLite and some custom SQL to make RowVersion work in unit tests, but to my surprise I never got an exception. Then I tested it in our application, and I also did not get an exception when a wrong Timestamp was set on the entity.
var myInstance = await myDbContext.Instances
    .Include(x => x...)
    .Include(x => x...)
    .SingleAsync(x => x.Id == id);
// set other values, add new entities to relationships, and so on
myInstance.Timestamp = new byte[] { 1, 2, 3, 4 };
await myDbContext.SaveChangesAsync();
I am clearly missing something here. I thought configuring IsRowVersion would be enough to force EF Core to include a WHERE Timestamp = ... clause in the UPDATE, but it seems that's not the case. As you can see, I also tried calling IsConcurrencyToken (even with its default value of true, just to be sure) but to no avail.
Edit: I have worked "around" it now by including the Timestamp in my SingleAsync call, but this still leaves me unsure whether it's still possible to not get a concurrency exception, as the Timestamp set on my entity is apparently not checked at all when saving?
This is a known behavior of EF Core, as documented here:
https://github.com/aspnet/EntityFrameworkCore/issues/18505
Manually changing the value of the token is considered to be a no-op, as the original value that was queried from the database is being used for a concurrency check. Manually changing the value does nothing, as it is effectively being ignored.
It is possible to manually set the token's value and have EF Core's optimistic concurrency check fail if the database has a newer version of the entity:
db.Entry(entity).Property(nameof(Entity.ConcurrencyToken)).OriginalValue = entity.ConcurrencyToken;
As per this comment in #user604613's answer
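To make that concrete, a minimal sketch using the question's names (dto stands in for whatever object carries the client's token):
var myInstance = await myDbContext.Instances.SingleAsync(x => x.Id == id);
// Overwrite the snapshot EF compares against, instead of the current value:
myDbContext.Entry(myInstance).Property(nameof(MyClass.Timestamp)).OriginalValue = dto.Timestamp;
try
{
    await myDbContext.SaveChangesAsync(); // now generates WHERE Timestamp = <dto.Timestamp>
}
catch (DbUpdateConcurrencyException)
{
    // The row version in the database no longer matches the client's token.
}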

Strapi - retrieve 1-n property from the lifecycle callback model parameter

I am using Strapi for a prototype and I am running into the following issue. I have created a new content type "Checklist" and added to it a 1-to-many relation property with the User model provided by the users-permissions plugin.
Then I wanted to add some custom logic in the lifecycle callbacks, in beforeSave and in beforeUpdate, from which I would like to access the user assigned to the Checklist.
The code looks like this:
var self = module.exports = {
    // Before saving a value.
    // Fired before an `insert` or `update` query.
    generateLabel: (model) => {
        var label = "";
        // moment getters: .year(), .month() (0-11), .date() (day of month)
        var m = _moment(model.date, _moment.ISO_8601);
        var year = m.year();
        var month = m.month();
        var day = m.date();
        console.log(model);
        if (model.user) {
            label = `${model.user}-${year}-${month}-${day}`;
        } else {
            label = `unassigned-${year}-${month}-${day}`;
        }
        return label;
    }
};
I call the method generateLabel from the callback. It works, but model.user always returns undefined. It is a 1-n property. I can access the model.date property (one of the fields I created) without any issue, so I guess the problem is related to something I have to do to populate the user relation, but I am not sure how to proceed.
When I log the model object, the console displays what I guess is a complete Mongoose object, but I am not sure where to go from there: if I try to access any of the properties I see in the console, I always get undefined.
Thanks in advance for your time. I use the following:
strapi: 3.0.0-alpha.13.0.1
nodejs: v9.10.1
mongodb: 3.6.3
macOS High Sierra
Also running into a similar/same issue. I think this has to do with the users-permissions plugin, and having to use that to access the User model. Or I thought about trying to find the User that's associated with the id of the newly created record. I'm trying to use AfterCreate. Anyone who could shed some light on this would be great!
It's because relational attributes are not sent to the create function (see your Checklist service's add function).
Relations are handled in another function, updateRelations.
What you can do is send the values in Checklist.create().
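If it helps, the idea as code (a sketch only; the generated service shape below is an assumption about this alpha release, not verified):
// api/checklist/services/Checklist.js (roughly)
add: async (values) => {
    // Include the relation in `values` (e.g. values.user = someUserId) so the
    // lifecycle callbacks fired by create() can read model.user directly.
    return Checklist.create(values);
},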

Serialize then deserialize mongoose.Model instance

Here's my problem:
I have a system with two hosts, machine M1 and machine M2, each running a node.js process on the same codebase.
On M1 I have a mongoose.Model instance (user) which I need to pass (using a REST API call) to M2. I have to have the complete instance of user on M2, i.e. all data, virtuals, and plugins present, and save() should work as expected.
One solution is to send only the user's ObjectId, then on M2 perform a query to MongoDB to fetch the full object. I don't want to do this!
Another solution would be to serialize it using user.toJSON() or user.toObject() then send it down the wire. On M2, all I do is new User(userObject).
The problem is that when calling .save() on this, Mongoose will interpret it as a new object and attempt an insert() instead of an update().
To fix this I can set .isNew = false on the object; however, when updating, the delta (the difference between the stored model and the updated values) now contains all the data, and Mongo complains that it will not update a document's _id.
Is there an elegant way to solve this using a native method or plugin?! Am I doing it wrong?!
It may be wrong, but you may try doing an upsert with the user data instead of saving it.
If you plan to use it often, you may end up defining a new static method in your schema, like:
Schema.statics.createOrUpdate = function(doc, callback) {
    // One way to implement the upsert: match on _id, update everything else.
    var update = Object.assign({}, doc);
    delete update._id; // _id is immutable; match on it rather than setting it
    return this.findByIdAndUpdate(doc._id, update, { upsert: true, new: true }, callback);
};
This way you can pass your user data to M2, and on M2, you can update it easily.
This is a late call, but I faced the same problem before finding the Model.hydrate method.
http://mongoosejs.com/docs/api.html#model_Model.hydrate
Shortcut for creating a new Document from existing raw data, pre-saved
in the DB. The document returned has no paths marked as modified
initially.
So on M2, User.hydrate(userObject) can be used instead of new User(userObject), and there is no need to set isNew to false.
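A small sketch of the M2 side (the name field and callback style are illustrative):
// userObject arrived over the wire as the output of user.toObject() on M1.
var user = User.hydrate(userObject);
console.log(user.isNew);     // false; no paths are marked as modified
user.name = 'updated on M2'; // illustrative change
user.save(function (err) {   // issues an update for just the modified paths
    if (err) console.error(err);
});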

Incremental loading in Azure Mobile Services

Given the following code:
listView.ItemsSource =
    App.azureClient.GetTable<SomeTable>().ToIncrementalLoadingCollection();
We get incremental loading without further changes.
But what if we modify the read.js server-side script to, e.g., use mssql to query another table instead? What happens to the incremental loading? I'm assuming it breaks; if so, what's needed to support it again?
And what if the query used the untyped version instead, e.g.
App.azureClient.GetTable("SomeTable").ReadAsync(...)
Could incremental loading be somehow supported in this case, or must it be done "by hand" somehow?
Bonus points for insights on how Azure Mobile Services implements incremental loading between the server and the client.
The incremental loading collection works by sending the $top and $skip query parameters (those are also sent when you do a query by using the .Take and .Skip methods in the table). So if you want to modify the read script to do something other than the default behavior, while still maintaining the ability to use that table with an incremental loading collection, you need to take those values into account.
To do that, you can ask for the query components, which will contain the values, as shown below:
function read(query, user, request) {
    var queryComponents = query.getComponents();
    console.log('query components: ', queryComponents); // useful to see all information
    var top = queryComponents.take;
    var skip = queryComponents.skip;
    // do whatever you want with those values, then call request.respond(...)
}
The way it's implemented at the client is by using a class which implements the ISupportIncrementalLoading interface. You can see it (and the full source code for the client SDKs) in the GitHub repository, or more specifically the MobileServiceIncrementalLoadingCollection class (the method is added as an extension in the MobileServiceIncrementalLoadingCollectionExtensions class).
And the untyped table does not have that method - as you can see in the extension class, it's only added to the typed version of the table.
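For completeness, a hedged sketch of the typed client side: Take and Skip translate to $top and $skip on the wire (the URL shape is illustrative):
// Typed table: paging parameters become query-string values.
var table = App.azureClient.GetTable<SomeTable>();
var page = await table.Take(20).Skip(40).ToListAsync();
// Roughly: GET /tables/SomeTable?$top=20&$skip=40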
