Can I skip operations in nested contexts? - core-data

Before nested contexts, my life was easy: I declared some common methods on my NSManagedObject subclasses, such as:
awakeFromFetch
awakeFromInsert
validateX
didChangeValueForKey
Through trial and error (call them life lessons), I learned to be more careful with those methods when using inheritance with my entities.
Now that I have jumped into the nested-context part of Core Data, I am getting used to seeing the very same NSManagedObject existing in several contexts. I am trying to grasp a key part of this: the methods mentioned above might be called several times from different contexts, but things like validation, value observation, and initialization are only relevant once. Therefore I normally check whether self.managedObjectContext corresponds to the main context I work with. If they are the same, I run the whole method; if not, I just call [super] and return.
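For illustration, the check I describe can be written as a guard at the top of each override. This is only a minimal sketch (in Swift for brevity, even though the methods above are Objective-C), and it assumes the main context is reachable through some shared stack object, which is not part of my actual setup:

import CoreData

// Placeholder stack: `mainContext` stands in for however the app exposes its
// main context; this type is an assumption made for the sketch.
enum CoreDataStack {
    static var mainContext: NSManagedObjectContext!
}

class Note: NSManagedObject {
    override func awakeFromInsert() {
        super.awakeFromInsert()
        // Skip the one-time initialization when this object is materialized
        // in any context other than the one treated as canonical.
        guard managedObjectContext === CoreDataStack.mainContext else { return }
        setPrimitiveValue(Date(), forKey: "createdAt")   // example default value
    }
}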
Is that the way I am supposed to deal with nested contexts?
Doing this has saved me several times. I was surprised to see awakeFromInsert called more than once for the same object, but even if I abort the initialization part of that method in all contexts but the main one, the information is still persisted back to the store. I would like some confirmation that I am dealing with this topic the right way.
UPDATE:
Trying to make my ideas clear, I wrote a post about this topic on my blog. What I wrote there has saved me several times from invalid entity layouts and other errors. However, I am not sure whether there are better approaches than mine.
http://www.digital-lumberjack.com/blog/2012/11/surviving-core-data/

Related

How would I separate duplicated logic in Value Objects

I'm learning DDD (Domain-Driven Design) and reading Clean Architecture. I found out about Value Objects, have been reading more about them, and have implemented some in my application, which can be found at https://gist.github.com/Tyler0722/73ec826be814b8e54b9f6ae6ae836409.
Value Objects are created by calling the factory method, which performs validation and follows the separation-of-concerns principle. The problem is that I have two Value Objects, Username and QuizTitle, where the validation logic is duplicated and the only difference is MAX_LENGTH, which violates the DRY principle. I was wondering if anyone knew any way I could make this DRY.
If you want to remove repetition you must be very careful.
The first question you should ask yourself is
What is the same and what is different?
Maybe you will find the same or a similar structure, like:
check the props
create an error result or a success result
Once you have found what you think is the same, you have to ask yourself more questions:
Do the things that are (look) the same have different responsibilities?
Do they change for the same or for different reasons?
These questions are about the single responsibility principle. Sometimes things look the same by accident: they look the same but are different, because they change for different reasons. If you eliminate the "duplication" in this case, you will create more problems than you solve.
E.g. if you consolidate the validation logic of QuizTitle and Username into one method, and later you get a change request for the QuizTitle validation, you can easily break the validation behavior of Username, which presumably should not change when the QuizTitle validation does.
You might also want to read the good article by Martin Fowler about Avoiding Repetition.
My suggestion is to not do it. In this case it's just not necessary.
But if you want to do it anyway, you can build a Validation class with static functions. At my work we use Webmozart/Assert, and if you take a look at its implementation you will understand what I mean. In my opinion they are very handy. Also, you can pass MAX_LENGTH as an argument, as sketched below.
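As a rough sketch of that idea (written in Swift here rather than the language of the linked gist; the names, limits, and error cases are illustrative, not taken from the question):

// Shared assertion helper; each value object keeps its own factory and limit.
enum ValidationError: Error {
    case empty
    case tooLong(max: Int)
}

enum Assert {
    static func lengthBetween(_ value: String, min: Int, max: Int) throws {
        guard value.count >= min else { throw ValidationError.empty }
        guard value.count <= max else { throw ValidationError.tooLong(max: max) }
    }
}

struct Username {
    static let maxLength = 30                  // illustrative limit
    let value: String

    static func create(_ raw: String) throws -> Username {
        try Assert.lengthBetween(raw, min: 1, max: maxLength)
        return Username(value: raw)
    }
}

struct QuizTitle {
    static let maxLength = 80                  // illustrative limit
    let value: String

    static func create(_ raw: String) throws -> QuizTitle {
        try Assert.lengthBetween(raw, min: 1, max: maxLength)
        return QuizTitle(value: raw)
    }
}

Only the low-level length assertion is shared; each value object still owns its factory, so a change request for one limit cannot silently affect the other.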
Well, I agree with Darius Mann: that these value objects have similar business logic is incidental. If they are different concepts in your app, keep them separate and independent; that is my opinion.

ABAP DDD how to correctly implement add/update/remove child from aggregate via RFC?

I'm trying to follow DDD principles for my current project. Unfortunately I have to use RFCs due to technical constraints, so no OData and no REST. It's quite a long question; I hope it's OK to ask this on Stack Overflow.
In any case, I have an entity class WorkOrder with a list of Operation objects.
I have a WorkOrderRepository class with a SAVE method that only receives a WorkOrder object and is able to save everything (header data, address, etc.) in one go, no matter whether it's a creation, update, or deletion. The repository hides the BAPI calls from the rest of the code.
Now I want to implement the logic to add/update/remove Operations on the work order object, and I'm not even sure the names I give the methods are correct. Maybe they should be insert/edit/delete... I'm quite confused because every place I look uses different names.
But most important are my two specific doubts:
Should I have only one RFC that receives all the updates to a WorkOrder entity, including the header and operations? Or should I create one RFC per operation that handles only one operation at a time? Bear in mind that the UI mockup expects the user to be able to add/delete multiple operations before clicking a SAVE button, that an RFC has an implicit commit, and that, to my knowledge, a DDD entity should always be updated in one call.
Option 1:
FUNCTION ZWORKORDER_HDR_UPD
  IMPORTING
    VALUE(I_WORKORDER_ID) TYPE AUFNR
    VALUE(I_WORKORDER_HDR_CHG) TYPE ZWORKORDER_HDR_CHG
    VALUE(I_WORKORDER_HDR_UPD) TYPE ZWORKORDER_HDR_UPD "X structure for the BAPI
    VALUE(I_OPERATIONS_CHG) TYPE ZOPERATIONS_CHG
    VALUE(I_OPERATIONS_UPD) TYPE ZOPERATIONS_UPD
    VALUE(I_OPERATIONS_DEL) TYPE ZOPERATIONS_DEL
  EXPORTING
    VALUE(E_ERRORS) TYPE BAPIRET2_T.
Option 2:
FUNCTION ZWORKORDER_OPERATION_CRT
  IMPORTING
    VALUE(I_WORKORDER_ID) TYPE AUFNR
    VALUE(I_OPERATION) TYPE ZOPERATION_CHG
  EXPORTING
    VALUE(E_ERRORS) TYPE BAPIRET2_T.

FUNCTION ZWORKORDER_OPERATION_UPD
  IMPORTING
    VALUE(I_WORKORDER_ID) TYPE AUFNR
    VALUE(I_OPERATION_CHG) TYPE ZOPERATION_CHG
    VALUE(I_OPERATION_UPD) TYPE ZOPERATION_UPD
  EXPORTING
    VALUE(E_ERRORS) TYPE BAPIRET2_T.

FUNCTION ZWORKORDER_OPERATION_DEL
  IMPORTING
    VALUE(I_WORKORDER_ID) TYPE AUFNR
    VALUE(I_OPERATION_ID) TYPE ZOPERATION_ID
  EXPORTING
    VALUE(E_ERRORS) TYPE BAPIRET2_T.
How should my Workorder methods look to handle this? I'm especially confused with the update method, since I'm not sure if I should first get the existing operation and then update it, or let the parent class do it. But maybe my approaches are completely wrong from the start.
Option 1:
workorder->add_operation( i_operation ). "Pass flat structure from RFC? Or first create object?
workorder->update_operation( i_operation_chg
                             i_operation_upd ).
workorder->delete_operation( i_operation_id ).
Option 2:
workorder->add_operation( ).
operation = workorder->get_operation( i_operation_chg->get_id( ) ).
operation->update( i_operation_chg
                   i_operation_upd ).
operation->delete_operation( i_operation_id ).
The simplest solution is always the best (KISS and YAGNI principles). It doesn't really matter whether you create 1 or 3 RFC-enabled function modules, so if you can achieve your goal with one function module, then do it with one.
I think you need two RFC-enabled function modules: one to validate the maintained operations (doing as much validation as possible) without saving anything to the database, and another one, called after the user clicks the SAVE button, to save the whole "WorkOrder" including the maintained operations (complete validation happens again at that point).
If you don't need to define an "operation" class for something else right now, then keep it simple; there's no need to instantiate an object. Note that you may create an "operation" class with private static methods that is a friend of the "workorder" class (so only that class can use the operation class), just to organize your code better.
PS: although I don't know what "Domain-Driven Design" is, I don't see how your question is related to it; it just looks like ordinary program design.
We also work with DDD, though luckily not with RFCs but with Gateway/OData. Since REST is by definition stateless, we always commit in our application service layer.
What we do is have three "DDD" entities:
app_service, domain_service, and repository (plus some data containers like aggregates), where the app_service in your case would expose the methods create, update, and delete, and possibly also validate.
Then I would write four thin RFCs (CRUD + validate) that basically pass the data on to the app service.
As for being able to handle multiple updates in each save: we always model our OData (your RFC) after the UI requirements; it is then the app_service's task to make sense of the data from a system point of view.
We make heavy use of Clean ABAP (it's even part of our ATC checks), and it clearly states that you should have separate methods, which is also an OO best practice.
https://github.com/SAP/styleguides/blob/master/clean-abap/CleanABAP.md#split-methods-instead-of-adding-optional-parameters

How to go about creating an Aggregate

Well, this time the question I have in mind is: what is the necessary level of abstraction required to construct an Aggregate?
e.g.
Order is composed of OrderWorkflowHistory and Comments.
Do I go with
Order <>- OrderWorkflowHistory <>- WorkflowActivity
Order <>- CommentHistory <>- Comment
OR
Order <>- WorkflowActivity
Order <>- Comment
Here OrderWorkflowHistory is just an object that encapsulates all the workflow activities that took place. It maintains a list; Order simply delegates the job of maintaining the list of activities to this object.
CommentHistory is similarly a wrapper around the (list of) comments appended by users.
When it comes to the database, ultimately the Order gets written to the ORDER table and the list of workflow activities gets written to the WORKFLOW_ACTIVITY table. OrderWorkflowHistory has no importance when it comes to persistence.
From a DDD perspective, which would be optimal? Please share your experiences!
As you describe it, the containers (OrderWorkflowHistory, CommentHistory) don't seem to encapsulate much behaviour. On that basis I'd vote to omit them and manage the lists directly in Order.
One caveat: you may find increasing amounts of behaviour required of the list (e.g. sophisticated searches). If that occurs, it may make sense to introduce one or both containers to encapsulate that logic and stop Order from becoming bloated.
I'd likely start with the simple solution (no containers) and only introduce them if justified as above. As long as external clients make all calls through Order's interface you can refactor Order internally without impacting the clients.
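As a rough sketch of that starting point (illustrative Swift, not taken from the question), the lists live directly inside Order and clients only see behaviour, so a history wrapper can be slotted in later without touching them:

import Foundation

struct WorkflowActivity { let name: String; let at: Date }
struct Comment { let author: String; let text: String }

final class Order {
    private var workflowActivities: [WorkflowActivity] = []
    private var comments: [Comment] = []

    func recordActivity(_ name: String) {
        workflowActivities.append(WorkflowActivity(name: name, at: Date()))
    }

    func addComment(_ text: String, by author: String) {
        comments.append(Comment(author: author, text: text))
    }

    // Read access stays behind Order's interface; if searching grows
    // sophisticated, this can delegate to an OrderWorkflowHistory container
    // without changing the signature.
    func activities(onOrAfter date: Date) -> [WorkflowActivity] {
        workflowActivities.filter { $0.at >= date }
    }
}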
hth.
This is a good question: how to model and enrich your domain. But it is so hard to answer, since it varies so much between domains.
My experience has been that when I started with DDD I ended up with a lot of repositories and a few Value Objects. I reread some books and looked into several DDD code examples with an open mind (there are so many different ways you can implement DDD, and not all of them suit your current project scenario).
I started trying to keep in mind "more value objects, more value objects, more value objects". Why?
Well, value objects bring looser dependencies and more behaviour.
In your example above, with a one-to-many (1-n) relationship, I have solved the 1-n relationship in different ways depending on how my use cases use the domain.
(1) Sometimes I create a wrapper class (like your OrderWorkflowHistory) that is a value object. The whole list of child objects is set when the object is created. This scenario is good when you have a set of child objects that must be set during one request, for example question weights on a questionnaire form. Then all questions get their weight through a method Questionaire.ApplyTuning(QuestionaireTuning), where QuestionaireTuning is, like your OrderWorkflowHistory, a wrapper around a list. This adds a lot to the domain:
a) The Questionaire will never get into an invalid state: once we apply tuning, we do it against all questions in the questionnaire.
b) The QuestionaireTuning can provide good access/search methods to retrieve the weight for a specific question or to calculate the average weight score, etc. (a sketch follows below).
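Here is a sketch of approach (1), with invented supporting types (Question, QuestionWeight) since only Questionaire and QuestionaireTuning are named above; treat it as an illustration, not a prescription:

struct QuestionWeight { let questionId: Int; let weight: Double }

// Value-object wrapper around the list: built in one go, then applied as a whole.
struct QuestionaireTuning {
    private let weights: [Int: Double]

    init(weights: [QuestionWeight]) {
        self.weights = Dictionary(uniqueKeysWithValues: weights.map { ($0.questionId, $0.weight) })
    }

    func weight(forQuestion id: Int) -> Double? { weights[id] }

    var averageWeight: Double {
        weights.isEmpty ? 0 : weights.values.reduce(0, +) / Double(weights.count)
    }
}

final class Question {
    let id: Int
    private(set) var weight: Double = 1.0
    init(id: Int) { self.id = id }
    func apply(weight: Double) { self.weight = weight }
}

final class Questionaire {
    private let questions: [Question]
    init(questions: [Question]) { self.questions = questions }

    // All questions are tuned in a single operation, so the aggregate never
    // ends up half-tuned (point a above).
    func applyTuning(_ tuning: QuestionaireTuning) {
        for question in questions {
            if let w = tuning.weight(forQuestion: question.id) {
                question.apply(weight: w)
            }
        }
    }
}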
(2) Another approach has been to have the 1-n wrapper class not be a value object. This approach suits better when you want to add a child object now and then, and the parent cannot be put into an invalid state by the number of child objects. This typical wrapper class has an Add(Child...) method and several search/contains/exists/check methods.
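A correspondingly small sketch of approach (2), again with illustrative names: the wrapper is mutable, children arrive one at a time, and it carries the query methods:

struct Comment { let author: String; let text: String }

final class CommentHistory {
    private var comments: [Comment] = []

    func add(_ comment: Comment) { comments.append(comment) }
    func contains(author: String) -> Bool { comments.contains { $0.author == author } }
    var count: Int { comments.count }
}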
(3) The third approach is just exposing the IList as a read-only collection. You can add some search functionality with extension methods (new in C# 3.0), but I think it's a design smell; it's better to encapsulate the list access methods in a list-wrapper class.
Look at http://dddsamplenet.codeplex.com/ for some example of approach one.
I believe the entire discussion about modeling value objects and entities, and who is responsible for what behaviour, is the most central one in DDD. Please share your thoughts around this topic...

Doubts About Core Data NSManagedObject Deep Copy

I have a situation where I must copy one NSManagedObject from the main context into an editing context. It sounds unnecessary to most people, judging from similar situations described on Stack Overflow, but it looks like I need it.
In my app there are many views in a tab bar, and every view handles different information that is related to the other views. I think I need multiple MOCs since the user may jump from tab to tab, leaving unsaved changes in one tab while perhaps saving data in another tab/view; if that happens, the changes in the rest of the views are saved without the user's consent and, in the worst-case scenario, the app crashes.
For adding new information I got away with using a separate adding MOC and then merging the changes into both MOCs, but for editing it is not that easy. I saw a similar situation here on Stack Overflow, but the app crashes since my data model doesn't seem to use NSMutableSet for the relationships (I don't think I have any many-to-many relationships, just one-to-many). I think it can be modified so I can retrieve the relationships as if they were attributes:
// Copy each relationship from the source object onto the clone by key.
for (NSString *attr in relationships) {
    [cloned setValue:[source valueForKey:attr] forKey:attr];
}
but I don't know how to merge the changes between the cloned and original objects. I think I could just delete the object from the main context, then merge both contexts and save the changes in the main context, but I don't know if that is the right way to do it. I'm also concerned about database integrity, since I'm not sure that the inverse relationships will keep the same reference to the cloned object as if it were the original one.
Can someone please enlighten me about this?
Wow, lots of questions and randomness in this one. First, you do not need to add comments to your own question; it is better to edit the question itself.
Second, you don't need multiple NSManagedObjectContext instances if you are running a single-threaded application. Your entire application can easily run off a single instance. Multiple contexts are for rare edge cases and multi-threading situations.
Using a single context will resolve all of the issues with your attempts to clone. However, if you are still wondering how to do a deep copy of an NSManagedObject, you can get some guidance from the example code in my book from The Pragmatic Programmers, the source code of which is free to download.
Update
All NSManagedObjectContext instances are "editing ones". You only need a single context per thread. You can easily ask the context for the entities that have been inserted but not yet saved if you want to delete them before a save, and you can easily save entities as they are changed. Except for some extreme edge cases, you do not need a second context.
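For example, a small sketch of discarding not-yet-saved insertions in a single context (using only the standard NSManagedObjectContext API, in Swift):

import CoreData

// Delete everything inserted since the last save, instead of cloning objects
// into a second "editing" context.
func discardUnsavedInsertions(in context: NSManagedObjectContext) {
    for object in context.insertedObjects {
        context.delete(object)
    }
    // Alternatively, context.rollback() throws away *all* unsaved changes,
    // including edits to existing objects.
}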
Update
You are still creating more work for yourself than you need to. You don't need to save an object for the view to go away. You can leave as many objects as you want in an unsaved state in the context.
You are going to spend more time debugging a deep copy of objects than it is worth with the design you are describing. If you are going to use more than one context, make sure they are attached to the same NSPersistentStoreCoordinator so that you don't need to copy the objects around; you can just save the "secondary" context and then catch the save notification in the main context.
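A hedged sketch of that setup in Swift (the coordinator and main context are assumed to exist already in your stack, and the queue-based concurrency type is just one way to set it up):

import CoreData

// Secondary context on the same coordinator; its saves are merged into the
// main context via the did-save notification rather than by copying objects.
func makeSecondaryContext(coordinator: NSPersistentStoreCoordinator,
                          mergingInto mainContext: NSManagedObjectContext) -> NSManagedObjectContext {
    let secondary = NSManagedObjectContext(concurrencyType: .privateQueueConcurrencyType)
    secondary.persistentStoreCoordinator = coordinator

    // Keep the returned token somewhere if you ever need to unregister.
    _ = NotificationCenter.default.addObserver(forName: .NSManagedObjectContextDidSave,
                                               object: secondary,
                                               queue: nil) { notification in
        mainContext.perform {
            mainContext.mergeChanges(fromContextDidSave: notification)
        }
    }
    return secondary
}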

Using object's setter to trigger data updates, best practices

I have an object that gets instantiated in a LINQ to SQL method. When the object's fields are being assigned, I want to check a date field and, if it is an old date, retrieve data from another table and perform calculations before continuing to assign this object.
Is there anything wrong with triggering such an event through the property setter, or should I independently check the date through some service and make the changes, if necessary, at some point afterwards?
There's nothing wrong with doing some logic from within your setters, but you should be careful about just how much logic you put within your setters. One of the fundamental problems of setters is that since they act like attributes, but have backing code, it's easy to forget that there are potentially some non-trivial actions going on behind the scenes.
This sort of thing can cause problems if you have accessors which use accessors which use accessors; you can rapidly end up causing unexpected performance problems. Generally, it's a good idea to keep the actions of setters (or getters, for that matter) to a relatively small set of actions. For example, validation can work perfectly fine in a setter, but I'd generally advise against doing validation against external resources, because of two things: first, resource delays can cause problems with expected access speed, and secondly, the number of external resource accesses can destroy your performance.
Generally, the rule is this: keep it simple. It's not unreasonable to do complicated things in a setter, but if you do, it's really important to understand the consequences of all of the actions you'll be causing, and it's EXTREMELY important to document what it does extremely well, so the next guy (or girl) to use the code doesn't just try to naively use the accessor and end up causing massive resource contention issues unexpectedly.
Half the point of using setters instead of, say, public fields, is to be able to trigger events associated with setting certain data.
Keyword: associated. If you're just using the setter as a "convenient" time to do some other stuff because it happens to work, you're doing it wrong. If setting this value requires other work to be done, then by all means, use the setter to do it.
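A small sketch of that boundary (in Swift for consistency with the other examples here; the original scenario is LINQ to SQL, and the property, threshold, and hook are invented): the setter does only the cheap, directly associated work and raises an event, while anything that touches another table is left to whoever subscribes.

import Foundation

final class PriceQuote {
    // Associated event hook: a service that owns the "other table" can
    // subscribe here and decide whether a refresh and recalculation is needed.
    var onStaleDate: ((Date) -> Void)?

    var asOfDate: Date = .distantPast {
        didSet {
            // A cheap in-memory check belongs in the setter; querying another
            // table from inside the setter is where the surprises start.
            let threshold = Calendar.current.date(byAdding: .day, value: -30, to: Date())!
            if asOfDate < threshold {
                onStaleDate?(asOfDate)
            }
        }
    }
}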
