In the RowPersisted event, is there a way to know which fields were updated? I have a customization in my RowPersisted event, but I only want to execute it if certain field(s) were actually modified. At the moment the event fires unnecessarily, because it reacts every time the record is saved.
TIA
UPDATE
Just to add: my customization has nothing to do with the field values or with overriding the save itself. I'm just using the RowPersisted event to kick off my customization.
If you want to compare the current row (with any pending changes) against the unchanged values as they were after the last persist, you can use the cache instance and call GetOriginal.
For example, using a graph extension on Sales Orders to check whether the order qty or order total changed:
[PXOverride]
public virtual void Persist(Action del)
{
    // Current object with any changed values
    var salesOrder = Base.Document.Current;

    // Unchanged object as it was set from the last save/persist
    var unchangedSalesOrder = Base.Document.Cache.GetOriginal(salesOrder);

    if (!Base.Document.Cache.ObjectsEqual<SOOrder.orderQty, SOOrder.curyOrderTotal>(salesOrder, unchangedSalesOrder))
    {
        PXTrace.WriteInformation("My values changed");
    }

    del?.Invoke();
}
Edit: I think at some point GetOriginal was not publicly accessible. I'm not sure in which version that changed, but if you cannot find this call, you may be on an older version of Acumatica where it is not available.
Below is the description of the RowPersisted event from https://help-2018r2.acumatica.com
public delegate void PXRowPersisted(PXCache sender, PXRowPersistedEventArgs e)
Parameters
sender (Required). The cache object that raised the event
e (Required). The instance of the PXRowPersistedEventArgs type that holds data for the RowPersisted event
The RowPersisted event is triggered in the process of committing changes to the database for every data record whose status is Inserted, Updated, or Deleted. The RowPersisted event is triggered twice:
When the data record has been committed to the database and the status of the transaction scope (indicated in the e.TranStatus field) is Open.
When the status of the transaction scope has changed to Completed, indicating successful committing, or Aborted, indicating that a database error has occurred and changes to the database have been dropped.
The e parameter has only the Row property, which holds the current modified record.
You can check your condition on e.Row and execute your code.
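For the original question, here is a minimal sketch of such a check on a sales order, combining the TranStatus check with the GetOriginal comparison from the first answer. The two watched fields are illustrative, and the availability of GetOriginal depends on your Acumatica version, as noted above.

protected virtual void SOOrder_RowPersisted(PXCache sender, PXRowPersistedEventArgs e)
{
    var row = (SOOrder)e.Row;

    // React only once, right after the record was committed inside the
    // still-open transaction.
    if (row == null || e.TranStatus != PXTranStatus.Open) return;

    // Compare the saved row against its values from the last persist.
    var original = sender.GetOriginal(row);
    if (!sender.ObjectsEqual<SOOrder.orderQty, SOOrder.curyOrderTotal>(row, original))
    {
        // The watched fields actually changed in this save;
        // kick off the customization here.
        PXTrace.WriteInformation("Watched fields changed");
    }
}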
You should not use the PXRowPersisted event to modify values once the transaction status is Completed. If you need to modify values before or after the save, the best practice is to override Persist, use a PXTransactionScope, and invoke the base method, as in the example below:
[PXOverride]
public void Persist(Action baseMethod)
{
    using (PXTransactionScope sc = new PXTransactionScope())
    {
        // ... do your code here
        baseMethod?.Invoke();
        // ... or here
        sc.Complete();
    }
}
UPDATED
Ideally, you should follow the rules below:
If you want to update the values of other fields of the same record when some field is updated, you should use the corresponding PXFieldUpdated event handler.
If you want to prevent saving of the record depending on some conditions on its field values, you should use the PXRowPersisting event handler (see the sketch after this list).
If you want to update a DAC/table that belongs to another maintenance or entry screen, you should do it in the Persist method.
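For instance, a minimal sketch of the second rule; the zero-quantity condition and the message are hypothetical, only meant to show the shape of the handler:

protected virtual void SOOrder_RowPersisting(PXCache sender, PXRowPersistingEventArgs e)
{
    var row = (SOOrder)e.Row;
    if (row == null) return;

    // Hypothetical condition: block the save when the quantity is not positive.
    if (row.OrderQty == null || row.OrderQty <= 0m)
    {
        // Throwing from RowPersisting aborts the save and surfaces the error.
        throw new PXRowPersistingException(nameof(SOOrder.OrderQty), row.OrderQty,
            "Order quantity must be positive.");
    }
}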
Related
I have a simple Core Data entity Story that I occasionally update with the latest data from a network call. This network call sometimes updates many, many story instances, so I run an NSBatchInsertRequest, shown below. (The other reason I'm using a batch insert is that many stories might need to be added to the persistent store.)
The problem is a user can have already marked a Story as a favorite. When they do that, I set story.isFavorite = true on the main thread and save viewContext.
However, when the batch insert occurs it overwrites story.isFavorite, setting it back to false, even though I'm using NSMergeByPropertyObjectTrumpMergePolicy on both the batch insert and view contexts. I am not touching story.isFavorite in the batch insert handler either so I don't expect that property to be overwritten.
I thought the benefit of a batch insert with this merge policy was to avoid first fetching + then manually updating changed properties + finally saving. What is the right way to avoid changing property values in an NSBatchInsertRequest?
Story
@objc(Story)
public class Story: NSManagedObject {
    @NSManaged public var title: String?
    @NSManaged public var storyURL: URL?
    @NSManaged public var updatedTime: Date?
    @NSManaged public var isFavorite: Bool // <- the problem property
}
Batch insert
container.viewContext.mergePolicy = NSMergeByPropertyObjectTrumpMergePolicy
container.viewContext.automaticallyMergesChangesFromParent = false

let context = NSManagedObjectContext(concurrencyType: .privateQueueConcurrencyType)
context.parent = container.viewContext
context.mergePolicy = NSMergeByPropertyObjectTrumpMergePolicy

context.perform {
    var index = 0
    let batchInsert = NSBatchInsertRequest(entity: Story.entity(), managedObjectHandler: { managedObject in
        guard index < downloadedStories.count else { return true } // true = done providing objects
        let story = managedObject as! Story
        let storyResponse = downloadedStories[index]
        // Update story with latest response data BUT don't modify story.isFavorite.
        story.title = storyResponse.title
        story.storyURL = storyResponse.storyURL
        story.updatedTime = storyResponse.updatedTime
        // ...
        index += 1
        return false // false = request the next object
    })
    let result = try? context.execute(batchInsert) as? NSBatchInsertResult
    if let insertedIDs = result?.result as? [NSManagedObjectID] {
        // Merge changes into parent context. Skip save() because it is not needed for a batch insert.
        NSManagedObjectContext.mergeChanges(fromRemoteContextSave: [NSInsertedObjectsKey: insertedIDs], into: [container.viewContext])
    }
}
Edit
The Story entity does have a unique value constraint using attribute storyURL.
Update after Michael Tsai's answer
By making the Story entity attribute isFavorite a non-Optional Boolean without a default value (it was marked as Optional before, though I'm not sure it makes a difference here) and keeping the Use Scalar Type box checked, I can confirm that existing objects in the store will not be modified (at all) with this configuration of the batch insert context.
context.persistentStoreCoordinator = container.persistentStoreCoordinator
// HOWEVER, observe that regardless of the merge policy below,
// setting `context.parent = container.viewContext` will also
// overwrite the store data!
context.mergePolicy = NSMergeByPropertyStoreTrumpMergePolicy
// NSMergeByPropertyObjectTrumpMergePolicy ignores objects in the store
// (which have the same unique constraint value, here equal `storyURL`)
// and overwrites all properties.

// To confirm that the batch insert operation does not modify existing
// Story instances (at all), first delete all instances where
// isFavorite == false. Then load all the story data again and
// execute the NSBatchInsertRequest with this change to managedObjectHandler:
story.title = storyResponse.title + " (modified)"
You will see the missing stories get inserted back, this time with their titles carrying the suffix " (modified)"; but previously favorited stories do not get modified (basically, with this setup, the batch insert won't re-insert existing objects).
So the isFavorite property does not get overwritten BUT neither do any properties that should be changed (because they received a new title, for example).
Therefore, if you don't want your objects to get updated, but you want completely new objects to be inserted, you can use this approach.
However, if you are expecting your objects to require updates, here are some alternatives:
you may opt to run a separate update operation, maybe an NSBatchUpdateRequest after you run your batch insert in this way,
or after the batch insert, you can update certain properties in a simple loop in a (possibly background/child) context without a batch operation, which could be fine if there isn't tons of data;
lastly, you might be able to first batch insert new data to a temporary store before somehow manually merging your choice of properties with the new store, then delete the temporary store.
A simpler approach: you could fetch all the properties you want to keep unchanged before you execute the batch insert (storing them in a dictionary keyed by your object's uniqueness constraint value), and then during the batch insert set those properties again.
For this approach, you will want to use a different merge policy, such as NSMergeByPropertyObjectTrumpMergePolicy, so that the updated object gets re-inserted into the store (make sure to fetch all properties that you don't want to lose in advance of the batch insert).
random idea: How to Save Data When Using One ManagedObjectContext and PersistentStoreCoordinator with Two Stores
I don't think it is actually possible to do a partial update with a batch insert request. It's hard to know for sure because I don't think any of this is documented except in WWDC sessions. When I first watched the 2019 session, I was excited because the presenter said:
Attributes that are optional or configured with default values can be omitted from the dictionary as well.
In the case of updating an object with unique constraint, the existing values will not be changed.
I took this to mean that:
You can omit values for new objects, and you'll get the defaults or NULL. That makes sense.
If there's an existing object and you omit a value, that value will not be changed. So you can purposely omit values to do a partial update, i.e. update other values while leaving your isFavorite alone.
But, after writing code to test this and looking at the output from com.apple.CoreData.SQLDebug, what actually seems to happen with NSMergeByPropertyObjectTrumpMergePolicy is:
If you omit a value that's required you get a validation error.
If you omit a value that's optional, it updates the row to NULL. For a Bool property in Swift, this will become false.
If you omit a value with a default value, it updates the row to the default.
This is a shame because it seems like partial updates could be implemented by having the ON CONFLICT clause only specify DO UPDATE SET for the attributes that you actually set. But (as of macOS 11) Core Data seems to always generate SQL to set all of the columns.
In summary, with batch inserts, NSMergeByPropertyObjectTrumpMergePolicy does not actually merge by property based on what's changed (like with a regular Core Data save). Rather, it either inserts a new row (if the object is absent) or overwrites all the columns but preserves the objectID (if the object was present).
NSMergeByPropertyStoreTrumpMergePolicy also doesn't merge by property. It just means to leave the stored object alone if it's already present.
Update (2021-06-24): I heard from DTS that Apple considers the current (iOS 14/macOS 11) behavior described above a bug, and that it should let you batch insert without changing omitted properties. The Radar number is 79747419.
In Fixed Assets (FA303000), I have a customization that contains two custom fields and one custom table, a child table referenced by FixedAsset's AssetID column.
Now, for some reason, we have to delete half of our fixed assets and then put them back into Acumatica. We are not creating a whole snapshot to handle the delete-and-restore; we are processing half of the fixed asset records by removing them and putting them back.
My initial thought was that I just needed to copy those records into a temporary table (select * into duplicate_FixedAsset from FixedAsset), then delete those records from FixedAsset, and later insert them back into FixedAsset.
The custom field values on Fixed Assets would come back with the rows, and for my custom child table that is linked via AssetID, I could use AssetCD to look up the AssetID and link it back again, and everything should be fine.
But I was wrong: after inserting, the records didn't appear back on the Fixed Assets page.
Upon inspecting the RowDeleting event, I found the code snippet below.
protected virtual void FixedAsset_RowDeleting(PXCache sender, PXRowDeletingEventArgs e)
{
    FixedAsset asset = (FixedAsset)e.Row;
    if (asset == null) return;

    if (null != (FATran)PXSelect<FATran,
        Where<FATran.assetID, Equal<Current<FixedAsset.assetID>>,
            And<FATran.batchNbr, IsNotNull>>>.SelectSingleBound(this, new object[] { asset }))
    {
        throw new PXSetPropertyException(Messages.BalanceRecordCannotBeDeleted);
    }

    this.EnsureCachePersistence(typeof(FARegister));
    this.EnsureCachePersistence(typeof(FABookHistory));

    foreach (FARegister reg in PXSelectJoinGroupBy<FARegister,
        LeftJoin<FATran, On<FARegister.refNbr, Equal<FATran.refNbr>>>,
        Where<FATran.assetID, Equal<Required<FixedAsset.assetID>>>,
        Aggregate<GroupBy<FARegister.refNbr>>>.Select(this, asset.AssetID))
    {
        this.Caches<FARegister>().Delete(reg);
    }

    foreach (FABookHistory hist in PXSelect<FABookHistory,
        Where<FABookHistory.assetID, Equal<Required<FixedAsset.assetID>>>>.Select(this, asset.AssetID))
    {
        this.Caches<FABookHistory>().Delete(hist);
    }
}
So it isn't the simple process I had assumed at the beginning: FARegister and FABookHistory are linked as well.
I would really appreciate knowing the proper steps to follow so that I not only get the fixed assets back without breaking any relations to other tables, but also properly link them back to my custom fields and custom table.
I want to update two database tables using hibernate entityManager. Currently I am updating 2nd table after verifying that data has been updated in 1st table.
My question is how to roll back the 1st table if data is not updated in the 2nd table.
This is how I am updating individual table.
try {
    wapi = getWapiUserUserAuthFlagValues(subject, UserId);
    wapi.setFlags((int) flags);
    entityManager.getTransaction().begin();
    entityManager.merge(wapi);
    entityManager.flush();
    entityManager.getTransaction().commit();
} catch (NoResultException nre) {
    wapi = new Wapi();
    wapi.setSubject(merchant);
    wapi.setUserId(UserId);
    wapi.setFlags((int) flags);
    entityManager.getTransaction().rollback();
}
Note - I am calling separate methods to update each table data
Thanks
I found the solution. Basically, I was calling two methods to update the two DB tables, and both of those methods are called from one outer method.
Ex: I am calling methods p and q from method r.
Initially I was calling the begin, merge, flush and commit entityManager methods in both p and q.
Now I am calling begin and commit in r, and merge and flush in p and q.
So now my tables are updated together, and rollback is also simple.
Hope it helps someone; I wasted my time on this, so maybe it can save someone else's.
Thanks
I have multiple threads running a batch job. When each thread finishes it calls this method of mine:
private static readonly Object lockVar = new Object();

public void UserIsDone(int batchId, int userId)
{
    // Get the batch user
    var batchUser = context.ScheduledUsersBatchUsers.SingleOrDefault(x => x.User.Id == userId && x.Batch.Id == batchId);
    if (batchUser != null)
    {
        lock (lockVar)
        {
            context.ScheduledUsersBatchUsers.Remove(batchUser);
            context.SaveChanges();

            // Try to get the batch with the assumption it has no users left.
            // If we do get the batch back, it means there are no users left.
            var dbBatch = context.ScheduledUsersBatches.SingleOrDefault(x => x.Id == batchId && !x.Users.Any());

            // So this must have been the last user, the batch is empty, so we fetch it and remove it.
            if (dbBatch != null)
            {
                context.ScheduledUsersBatches.Remove(dbBatch);
                context.SaveChanges();
            }
        }
    }
}
What this method does is very simple: it looks up the BatchUser in order to remove it from the queue, which it does. That part works swell.
However, after removing the user I want to check if that was the last user in the whole batch. But since this is multithreaded a race condition can happen.
So I put the removing of the batch user within a lock, after I remove the user, I check if the batch has no more batch users.
But here is my problem: even though I have a lock, and the query to get the dbBatch clearly requires the batch to have no users in order to return the object, I sometimes still get it back with users, like so:
When I do get that, I also get the following error on SaveChanges()
However, at other times I get the dbBatch object back correctly with no children, like so:
And when I do, it all works great, no exceptions.
With debugger I can catch the error by setting a breakpoint on the lock statement (see screenshot one). Then all threads get to the lock (while one goes in). Then I always get the error.
If I only have a breakpoint inside the if-statement it's more random.
With the lock in place, I don't see how this happens.
Update
I inject my context with Ninject, and this is my Ninject code:
kernel.Bind<MyContext>()
.To<MyContext>()
.InRequestScope()
.WithConstructorArgument("connectionStringOrName", "MyConnection");
kernel.Bind<DbContext>().ToMethod(context => kernel.Get<MyContext>()).InRequestScope();
Update 2
I also tried this solution https://msdn.microsoft.com/en-us/data/jj592904.aspx
But strangely I don't get a DbUpdateConcurrencyException; rather, I get a DbUpdateException whose InnerException is an OptimisticConcurrencyException.
But neither DbUpdateException nor OptimisticConcurrencyException contains an Entries property, so I can't do ex.Entries.Single().Reload().
I'm also adding the exceptions in text form here.
The outer exception of type DbUpdateException: {"An error occurred while saving entities that do not expose foreign key properties for their relationships. The EntityEntries property will return null because a single entity cannot be identified as the source of the exception. Handling of exceptions while saving can be made easier by exposing foreign key properties in your entity types. See the InnerException for details."}
The InnerException of type OptimisticConcurrencyException: {"Store update, insert, or delete statement affected an unexpected number of rows (0). Entities may have been modified or deleted since entities were loaded. See http://go.microsoft.com/fwlink/?LinkId=472540 for information on understanding and handling optimistic concurrency exceptions."}
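For reference, here is a minimal sketch (an assumption, not part of the original post) of how the retry pattern from that article could be adapted when the failure surfaces as a DbUpdateException wrapping an OptimisticConcurrencyException: since there are no Entries to Reload, the sketch detaches the stale tracked state and retries the whole unit of work. The retryableWork delegate is a hypothetical stand-in for the removal logic in UserIsDone.

using System;
using System.Data.Entity;
using System.Data.Entity.Core;
using System.Data.Entity.Infrastructure;
using System.Linq;

public static class RetryHelper
{
    public static void SaveWithRetry(DbContext context, Action retryableWork)
    {
        for (int attempt = 1; attempt <= 3; attempt++)
        {
            try
            {
                retryableWork();
                context.SaveChanges();
                return;
            }
            catch (DbUpdateException ex) when (ex.InnerException is OptimisticConcurrencyException)
            {
                // No Entries to Reload here, so detach everything and let
                // the next attempt re-query the current database state.
                foreach (var entry in context.ChangeTracker.Entries().ToList())
                    entry.State = EntityState.Detached;
            }
        }
        throw new InvalidOperationException("Save failed after 3 attempts.");
    }
}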
I am using cqrs and ddd to build my application.
I have an account entity, a transaction entity and a transactionLine entity. A transaction contains multiple transactionLines. Each transactionLine has an amount and points to an account.
If a user adds a transactionLine to a transaction that already has a transactionLine pointing to the same account as the new one, I want to simply add the new transactionLine's amount to the existing one, preventing a transaction from having two transactionLines that point to the same account.
Ex:
Before command:
transaction
    transactionLine1(amount=100, account=2)
    transactionLine2(amount=50, account=1)
Command:
addNewTransaction(amount=25, account=1)
Desired result:
transaction
    transactionLine1(amount=100, account=2)
    transactionLine2(amount=75, account=1) // Add amount (50+25) instead of two different transactionLines
instead of
transaction
    transactionLine1(amount=100, account=2)
    transactionLine2(amount=50, account=1)
    transactionLine3(amount=25, account=1) // Error, two different transactionLines point to the same account
But I wonder if it is best to handle this in the command or the event handler.
If this case is handled by the command handler
Before command:
transaction
    transactionLine1(amount=100, account=2)
    transactionLine2(amount=50, account=1)
Command:
addNewTransaction(amount=25, account=1) // Detects the case
Dispatches event:
transactionLineAmountChanged(transactionLine=2, amount=75)
AddTransactionLine command is received
Check if a transactionLine exists in the new transactionLine's transaction with the same account
If so, emit a transactionAmountChangedEvt event
Otherwise, emit a transactionAddedEvt event
The corresponding event handler handles the right event
If this case is handled by the event handler
Before command:
transaction
    transactionLine1(amount=100, account=2)
    transactionLine2(amount=50, account=1)
Command:
addNewTransaction(amount=25, account=1)
Dispatches event:
transactionLineAdded(transactionLine=3, amount=25)
Handler // Detects the case
    transactionLine2.amount = 75
AddTransactionLine command is received
TransactionLineAdded event is dispatched
TransactionLineAdded is handled
Check if the added transactionLine points to the same account as an existing transactionLine in this transaction
If so, just add the amount of the new transactionLine to the existing transactionLine
Otherwise, add a new transactionLine
Neither commands nor events should contain domain logic; only the domain should contain the domain logic. In your domain, aggregate roots represent transaction boundaries (not your transaction entities, but transactions in the sense of consistency boundaries for the logic). Handling logic within commands or events would bypass those boundaries and make your system very brittle.
The right place for that logic is the transaction entity.
So the best way would be:
AddTransactionCommand finds the correct transaction entity and calls
Transaction.AddLine(...), which applies the domain logic and publishes an event describing what happened:
TransactionLineAddedEvent or TransactionLineChangedEvent, depending on what happened (see the sketch below).
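A minimal sketch of that shape; all type and member names below are hypothetical, only meant to show where the rule lives:

using System.Collections.Generic;
using System.Linq;

public class TransactionLine
{
    public int Id { get; }
    public int AccountId { get; }
    public decimal Amount { get; set; }

    public TransactionLine(int id, int accountId, decimal amount)
    {
        Id = id;
        AccountId = accountId;
        Amount = amount;
    }
}

public class TransactionLineAddedEvent
{
    public int LineId; public int AccountId; public decimal Amount;
}

public class TransactionLineChangedEvent
{
    public int LineId; public decimal NewAmount;
}

public class Transaction
{
    private readonly List<TransactionLine> lines = new List<TransactionLine>();
    private readonly List<object> pendingEvents = new List<object>();

    // Events recorded here get published by the infrastructure after the save.
    public IReadOnlyList<object> PendingEvents => pendingEvents;

    public void AddLine(int accountId, decimal amount)
    {
        // Domain rule: a transaction never holds two lines for one account.
        var existing = lines.FirstOrDefault(l => l.AccountId == accountId);
        if (existing != null)
        {
            existing.Amount += amount;
            pendingEvents.Add(new TransactionLineChangedEvent { LineId = existing.Id, NewAmount = existing.Amount });
        }
        else
        {
            var line = new TransactionLine(lines.Count + 1, accountId, amount);
            lines.Add(line);
            pendingEvents.Add(new TransactionLineAddedEvent { LineId = line.Id, AccountId = accountId, Amount = amount });
        }
    }
}

This keeps the command handler trivial: it loads the Transaction aggregate, calls AddLine, and persists; the decision between "added" and "changed" never leaks outside the aggregate.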
Think of commands and events as 'containers', DTOs of the data that you need in order to hydrate your AggregateRoots or send out to the world (events) for other Bounded Contexts to consume. That's it. Any other operation that is strictly related to your Domain has no place but your AggregateRoots, Entities and Value Objects.
You can add some 'validation' to your Commands, either by using DataAnnotations or your own implementation of a validate method.
public interface ICommand
{
    void Validate();
}

public class ChangeCustomerName : ICommand
{
    public string Name { get; set; }

    public void Validate()
    {
        if (Name == "No one")
        {
            throw new InvalidOperationException("Sorry Aria Stark... we need a name here!");
        }
    }
}