Delete rules for a Core Data many-to-many relationship - core-data

I know there are a lot of questions about the delete rules of Core Data relationships, but I didn't find an answer to my "problem".
In my Core Data model, I have a many-to-many relationship between the CDMTransaction and CDMTransactionTag entities (CDMTransaction.tags <<->> CDMTransactionTag.transactions). Each transaction can be linked to zero, one or more tags, and each tag can be linked to one or more transactions (or zero, but it doesn't make sense to keep such a tag, and that's what I'm working on).
So when I delete a tag (with a "Nullify" delete rule), it is removed from the transactions that had this tag. This is OK.
But what I would like is this: when I delete a transaction and its linked tag(s) become unused (CDMTransactionTag.transactions.@count == 0), this/these tag(s) should also be deleted.
Can I set a "Cascade" rule for the CDMTransaction entity? It would delete all its linked tags, even if they are still linked to other transactions, no?
Am I forced to do that programmatically?
Thanks!
Edit: in fact, I would just like to delete the CDMTransactionTag instances when their .transactions.@count == 0 (so this shouldn't be checked only when I delete transactions, but also when I change a transaction's tags).
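A sketch of that desired behavior, written as a will-save observer (Swift; the entity and relationship names are the ones above, everything else is illustrative and untested):

```swift
import CoreData

// Sketch: just before each save, delete any tag with no transactions
// left. Covers both deleting a transaction and re-tagging one.
func installOrphanTagCleanup(on context: NSManagedObjectContext) -> NSObjectProtocol {
    return NotificationCenter.default.addObserver(
        forName: .NSManagedObjectContextWillSave,
        object: context,
        queue: nil
    ) { notification in
        guard let ctx = notification.object as? NSManagedObjectContext else { return }
        let request = NSFetchRequest<NSManagedObject>(entityName: "CDMTransactionTag")
        request.predicate = NSPredicate(format: "transactions.@count == 0")
        for orphan in (try? ctx.fetch(request)) ?? [] {
            ctx.delete(orphan) // removed as part of the same save
        }
    }
    // The caller keeps the returned token alive while the context is in use.
}
```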

I don't know if you are still pursuing a solution within your desired scope.
I'm dealing with similar issues and would like to share my findings so far:
Among the delete rules, only Cascade will delete the object(s) at the relationship's destination (in your question, the *Tag entity instance(s)).
It deletes those destination objects even if they are "still linked to other transactions", unless that "linked to other transactions" relationship has a Deny delete rule, because Deny is the only rule that prevents the source object from being deleted while there is at least one object at that relationship's destination.
Also keep in mind that delete rules only specify what should happen if an ATTEMPT is made to delete the source object, i.e. at the moment just before the possible deletion occurs.
I have not yet tested the above design against these issues, but I will try it and post the results later if I see ongoing activity on this post.
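For the programmatic route the question asks about, one common pattern is to clean up orphaned tags in prepareForDeletion() on the transaction. A minimal Swift sketch (class names from the question; Set accessors assumed):

```swift
import CoreData

final class CDMTransactionTag: NSManagedObject {
    @NSManaged var transactions: Set<CDMTransaction>
}

final class CDMTransaction: NSManagedObject {
    @NSManaged var tags: Set<CDMTransactionTag>

    // Called when this transaction is marked for deletion.
    override func prepareForDeletion() {
        super.prepareForDeletion()
        guard let ctx = managedObjectContext else { return }
        for tag in tags {
            // self already has isDeleted == true here, so filtering
            // out deleted transactions leaves the tag's other users.
            if tag.transactions.filter({ !$0.isDeleted }).isEmpty {
                ctx.delete(tag)
            }
        }
    }
}
```

Note that prepareForDeletion() only runs when a transaction is deleted; re-tagging a transaction would still need a check like the will-save sketch in the question.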

Related

How to delete work orders in Maximo

In maximo, can we delete a work order? The select action menu gives me
BMXAA4612E - Cannot delete because it is, or had at one time been approved.
I have created numerous test work orders and it is getting difficult to track new work orders.
Short Answer: You can't.
Standard Maximo will not let you delete it past a certain point. It keeps this data around for a sort of auditing (and because there could be a lot of dependencies to undo).
Bad Answer: With some database queries. You can, of course, delete just about anything if you start modifying Maximo's underlying database directly. The work order object has a number of related tables, though, so make sure you delete all referenced data from those as well. And there might be other places you need to update too, like a PM due date, depending on the situation.
In a normal Maximo environment, it is very common to have a lot of open work orders at any given time. You are going to need to develop ways to handle the fluff. Closing your old test work orders helps, because Maximo filters out closed work orders by default. Standard filters with some specialized data and saved queries are some other options.
Clear the attribute FIRSTAPPRSTATUS:
update woactivity set FIRSTAPPRSTATUS = null
where ;
Then deleting should be possible.
--> However, it is not (in Maximo 7.6).
For Maximo 7.6, change the work order status to WAPPR from the back end or through the MIF, and then use the MIF to delete the record(s).
I created an action that sets FIRSTAPPRSTATUS to null and an escalation whose condition selects the work order. After the escalation has run, deletion is available if the remaining conditions for deleting the work order are met.

Deployment type-code changed from reserved Hybris codes to non-reserved codes. Do I need to Update or Initialize the whole system?

I used some of the Hybris reserved deployment codes and later changed to non-reserved deployment type codes. Do I need to Initialize the system in order to reflect the changes with the new deployment codes, or is an Update enough? The deployment code has been changed for many items. Why doesn't an Update work?
When you use a reserved code in your deployment table, you're likely adding your object's attributes to an existing table. If there are attributes with the same name, it will surely be a mess in that table (I don't know how hybris will choose the table type, for example).
When you run an Update with the correct deployment code, it will create a new table, which is fine. But the other table, the one that was shared by two objects, will remain and is potentially broken, because hybris won't delete any columns.
That's why you should Initialize your system to get a clean DB. The downside is that you'll lose all your data.
If you need to migrate the data, it will probably be quite hard, because you'll have to look at the broken table and distinguish the attributes that shouldn't be there from the others. So I hope for you that it's just a dev issue!
Actually, I would suggest you Initialize rather than Update; it's likely that an Update will not work in this case, and you will probably get error messages like invalid pk xxxxxxxxxxxx because of unknown typecode yyyy.
As you may know, the typecode (deployment code) is an essential input to the PK generation process in Hybris, and thanks to it Hybris can ensure the uniqueness of PKs. So even if you replace the old typecode with a new one, it's very likely that Hybris still keeps the old typecode somewhere, and the PKs that were already generated will never be consistent with the new typecode.
That's why you should never change the typecode of an item once it has been assigned.
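To see why, here is a toy sketch of the general idea (this is NOT Hybris's real PK scheme, just an illustration of a generated key that embeds a typecode):

```swift
// Toy illustration only: a key that embeds the item's typecode, so
// keys created under an old typecode no longer decode to the new one.
func makePK(typecode: Int64, counter: Int64) -> Int64 {
    (typecode << 48) | counter   // hypothetical layout
}

func typecode(of pk: Int64) -> Int64 {
    pk >> 48
}

let oldPK = makePK(typecode: 30, counter: 42) // created under the old code
// After the model switches to a new typecode, stored PKs still decode
// to 30, hence errors like "invalid pk ... because of unknown typecode".
print(typecode(of: oldPK)) // 30
```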
My suggestion is:
Make a backup of your existing data (you can export it from HMC; you may take a look at alain.janinm's answer here).
Then Initialize your system.
Then re-import the data.
Note: typecodes between 0 and 10000 are already reserved for particular hybris items.

Mapped field not updating when parent record does

2013 On-Premise
Hello,
I have a parent record and a subgrid that can create a related record. When I create this related record, several of the parent fields are mapped over to save the user double entry and mistakes. BTW, the related record is being created via a quick create form.
Everything works great... at first.
If the parent record changes and the changes are saved, and then a new related record is created, the mapped fields DO NOT reflect the updated parent.
Further, this behavior exists whether there are NO related records or several.
Is my relationship not properly defined, i.e. does it need cascading? I thought that was just for cascading deletes?
Any input greatly appreciated
#Dave
My apologies... perhaps I have not been clear, or I am not understanding you.
....If you need the previously mapped fields to change when the parent record values change....
This is where I wonder whether I am not being clear or not understanding: this is happening on "create", not on existing records.
So I thought, perhaps incorrectly, that if I changed the parent record and then created a new related record, it would get the new mapping? (Bold just so the text isn't lost between the screenshots.)
The mapping functionality is only applied when the child record is created. Cascading only applies to events like deleting, sharing, unsharing, assigning, and re-parenting the parent record. Mapping is not involved in cascading at all. - http://msdn.microsoft.com/en-us/library/gg309412.aspx
If you need the previously mapped fields to change when the parent record values change, this would best be addressed with a plugin. You may also consider making the child's mapped fields read-only so users don't think they can enter information in the child record's fields that get populated from the parent.

How does EE delete entries?

I'm trying to figure out how ExpressionEngine deletes entries.
I've written a log-like extension that tracks when an entry is created. When I delete an entry through EE's edit section, the entry is also removed from the separate table I created for my extension.
How does EE know to delete the row from my table when the entry is removed? One of the columns in my table is `entry_id`. It would seem that EE automatically checks all tables for an `entry_id` column and, if the value matches the entry being deleted, removes the row. Can anyone confirm this?
It would explain why I didn't have to make a function that hooks into delete_entries_loop to achieve this functionality.
That's very odd. That behavior would be insane if it were indeed the case!
Looking at the delete_entry() method of the Channel Entries API, the deletions are very specifically limited to:
channel_titles
channel_data
category_posts
relationships
comments
comment_subscriptions
channel_entries_autosave
entry_versioning
The Channel Fields API is also called, to let fieldtypes delete what they need to from their own database tables based on the entry being deleted, but only if they contain a delete() method.
I'd suggest turning on the output profiler, then running the deletion routine to see what queries are being run.

How to handle entity creation/editing in a master-detail

I'm wondering what strategies people are using to handle the creation and editing of an entity in a master-detail setup. (Our app is an internet-enabled desktop app.)
Here's how we currently handle this: a form is created in a popup for the entity that needs to be edited, and we give it a copy of the object. When the user clicks the "Cancel" button, we close the window and ignore the object completely. When the user clicks the "OK" button, the master view is notified and receives the edited entity. It then copies the properties of the modified entity into the original entity using originalEntity.copyFrom(modifiedEntity). In case we want to create a new entity, we pass an empty entity to the popup, which the user can then edit as if it were an existing entity. The master view needs to decide whether to "insert" or "update" the entities it receives into the collection it manages.
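A minimal sketch of that workflow (Swift used purely for illustration; copyFrom and the -1 id convention are as described above):

```swift
// Sketch of the copy-edit-commit workflow described above.
final class Entity {
    var id = -1                 // -1 = not yet persisted
    var name = ""

    func copy() -> Entity {
        let c = Entity()
        c.id = id
        c.name = name
        return c
    }

    // Overwrite properties in place so existing references stay valid.
    func copyFrom(_ other: Entity) {
        name = other.name
    }
}

// The detail popup edits a copy; the master commits it back on OK.
func edit(_ original: Entity, userPressedOK: Bool) {
    let draft = original.copy()
    // ... user edits `draft` in the popup form ...
    if userPressedOK {
        original.copyFrom(draft)  // commit without replacing the object
    }                             // on Cancel: simply discard `draft`
}
```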
I have some questions and observations on the above workflow:
who should handle the creation of the copy of the entity? (master or detail)
we use copyFrom() to prevent having to replace entities in a collection which could cause references to break. Is there a better way to do this? (implementing copyFrom() can be tricky)
new entities receive an id of -1 (which the server tier/hibernate uses to differentiate between an insert or an update). This could potentially cause problems when looking up (cached) entities by id before they are saved. Should we use a temporary unique id for each new entity instead?
Can anyone share tips & tricks or experiences? Thanks!
Edit: I know there is no absolute wrong or right answer to this question, so I'm just looking for people to share thoughts and pros/cons on the way they handle master/details situations.
There are a number of ways you could alter this approach. Keep in mind that no solution can really be "wrong" per se. It all depends on the details of your situation. Here's one way to skin the cat.
who should handle the creation of the copy of the entity? (master or detail)
I see the master as an in-memory list representation of a subset of persisted entities. I would allow the master to handle any changes to its list. The list itself could be a custom collection. Use an ItemChanged event to fire a notification to the master that an item has been updated and needs to be persisted. Fire a NewItem event to notify the master of an insert.
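A rough Swift sketch of that shape (the event names are illustrative):

```swift
// A master list that tells its owner when items are added or changed,
// so the master decides whether to INSERT or UPDATE.
final class MasterList<T> {
    private(set) var items: [T] = []

    // Closures standing in for the NewItem / ItemChanged events.
    var onNewItem: ((T) -> Void)?
    var onItemChanged: ((Int, T) -> Void)?

    func add(_ item: T) {
        items.append(item)
        onNewItem?(item)            // master persists with an INSERT
    }

    func update(_ item: T, at index: Int) {
        items[index] = item
        onItemChanged?(index, item) // master persists with an UPDATE
    }
}
```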
we use copyFrom() to prevent having to replace entities in a collection which could cause references to break. Is there a better way to do this? (implementing copyFrom() can be tricky)
Instead of using copyFrom(), I would pass the existing reference to the details popup. If you're using an enumerable collection to store the master list, you can pass the object returned from list[index] to the details window. The reference itself will be altered so there's no need to use any kind of Replace method on the list. When OK is pressed, fire that ItemChanged event. You can even pass the index so it knows which object to update.
new entities receive an id of -1 (which the server tier/hibernate uses to differentiate between an insert or an update). This could potentially cause problems when looking up (cached) entities by id before they are saved. Should we use a temporary unique id for each new entity instead?
Are changes not immediately persisted? Use a Hibernate Session with the Unit of Work pattern to determine what's being inserted and what's being updated. There are more examples of Unit of Work out there. You might have to check out some blog posts by the .NET community if there's not much on the Java end. The concept is the same animal either way.
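Reduced to a sketch, the Unit of Work idea looks like this (illustrative Swift, not Hibernate's actual Session API):

```swift
// Minimal Unit of Work: track what is new vs. changed, then decide
// INSERT vs. UPDATE at commit time instead of relying on id == -1.
final class UnitOfWork<T: AnyObject> {
    private var newObjects: [T] = []
    private var dirtyObjects: [T] = []

    func registerNew(_ obj: T) { newObjects.append(obj) }

    func registerDirty(_ obj: T) {
        if !dirtyObjects.contains(where: { $0 === obj }) {
            dirtyObjects.append(obj)
        }
    }

    func commit(insert: (T) -> Void, update: (T) -> Void) {
        newObjects.forEach(insert)
        dirtyObjects.forEach(update)
        newObjects.removeAll()
        dirtyObjects.removeAll()
    }
}
```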
Hope this helps!
The CSLA library can help with this situation a lot.
However, if you want to implement it yourself:
You have a master object, the master object contains a list of child objects.
The detail form can edit a child object directly. Since everything is a reference type, the master object is automatically updated.
The issue is knowing that the master object is dirty, and therefore should be persisted to your database or whatnot.
CSLA handles this with an IsDirty() property. In the master object you would query each child object to see if it is dirty, and if so persist everything (as well as tracking if the master object itself is dirty)
You can also handle this with the INotifyPropertyChanged interface.
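A bare-bones Swift version of that dirty-tracking idea (property observers standing in for INotifyPropertyChanged):

```swift
// Children flag themselves dirty via property observers; the master
// aggregates, so it knows when anything needs persisting.
final class Child {
    private(set) var isDirty = false
    var name = "" { didSet { isDirty = true } }   // observer marks dirty
    func markClean() { isDirty = false }
}

final class Master {
    var children: [Child] = []
    var isDirty = false
    // Persist only when the master or any child actually changed.
    var needsSave: Bool { isDirty || children.contains { $0.isDirty } }
}
```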
As for some of your other questions:
You want to separate your logic. The entity can handle storage of its own properties and integrity rules for itself, but logic for how different objects interact with each other should be separate. Look into patterns such as MVC or MVP.
In this case, creation of a new child object should either be in the master object, or should be in a separate business logic object that creates the child and then adds it to the parent.
For IDs, using GUIDs can save you quite a few problems, because then you don't have to talk to the database to determine a correct ID. You can keep a flag on the object for whether it is new or not (and therefore should be inserted or updated).
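For instance (Swift's UUID standing in for a GUID; `isNew` is the flag mentioned above):

```swift
import Foundation

// A client-generated GUID is valid before the first save, so caches
// keyed by id work even for unsaved entities; `isNew` decides
// INSERT vs. UPDATE at save time.
final class Entity {
    let id = UUID()     // generated locally, no database round-trip
    var isNew = true
    func markSaved() { isNew = false }
}

let e = Entity()
print(e.id, e.isNew)   // usable id immediately; INSERT on first save
```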
Again, CSLA handles all of this for you, but does have quite a bit of overhead.
Regarding undo on cancel: CSLA has n-level undo implemented, but if you are trying to do it by hand, I would either use your copyFrom function or refresh the object's data from the persistence layer on cancel (re-fetch).
I just implemented such a model, though not using NHibernate; I use my own code to persist objects in an Oracle DB.
I used the master-detail concept in the same web form: I have a master entity grid, and on the detail action command I open a panel just below the clicked master record row.
In Detail Add mode, I populate an empty entity whose id is generated as a negative number by a static field, and on the Save Detail button I save that entity into the details list of the master record in the ASP.NET session.
For Detail Edit/View, I populate the detail panel with the selected detail through AJAX calls using jQuery and append that panel just below the clicked row.
On the Save button, I persist the master from the session (containing the list of details) to the database.
This worked well for me, since a master may need multiple details filled in.
Also, if you like, you can use a jQuery modal to pop up that panel instead of appending it below the row.
Hope it helps :)
Thanks,
