Doubly-linked lists in Core Data - core-data

I haven't found an answer on SO for a doubly-linked list implementation in Core Data.
Here's my object model:
It's a simple music playlist, with each song (URL) being a separate element. All relationships are to-one, without inverses. I understand that inverses are often needed to make inserts and deletes work efficiently in the Core Data store, so they may be a required, if not terribly useful, addition.
Inserting and deleting elements would make use of the standard doubly-linked list methods.
Is this all that's required for a doubly-linked implementation in Core Data?
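For reference, the "standard doubly-linked list methods" mentioned above boil down to the usual prev/next bookkeeping. A minimal sketch in plain TypeScript (not Core Data; the names are hypothetical), where each item carries to-one references to its neighbours:

```typescript
// Hypothetical playlist item mirroring the Core Data entity:
// each item holds to-one references to its neighbours.
interface SongItem {
  url: string;
  prev: SongItem | null;
  next: SongItem | null;
}

// Insert `node` immediately after `anchor`, updating both neighbours.
function insertAfter(anchor: SongItem, node: SongItem): void {
  node.prev = anchor;
  node.next = anchor.next;
  if (anchor.next) anchor.next.prev = node;
  anchor.next = node;
}

// Unlink `node`; its neighbours are stitched back together.
function remove(node: SongItem): void {
  if (node.prev) node.prev.next = node.next;
  if (node.next) node.next.prev = node.prev;
  node.prev = null;
  node.next = null;
}
```

The same pointer updates would translate to setting the prev/next relationships on the managed objects within a single save.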

Related

Multiple Data Transfer Objects for same domain model

How do you solve a situation where you have multiple representations of the same object, depending on the view?
For example, let's say you have a book store. Within the book store, you have two main representations of Books:
In Lists (search results, browse by category, author, etc.): this is a compact representation that might have some aggregates, for example NumberOfAuthors and NumberOfReviews. Each Author and Review is an entity in its own right, saved in the DB.
DetailsView: here you wouldn't have aggregates but real values for each Author, as Book has a property AuthorsList.
Case 2 is clear: you get everything from the DB and show it. But how do you solve case 1 if you want to reduce the number of connections and the payload to/from the DB? That is, you don't want to fetch all the actual Authors and Reviews from the DB, just two ints with the counts.
The fully normalized solution works for case 2, but case 1 seems to require either some denormalization or creating two different entities, BookDetails and BookCompact, within the business layer.
Important: I am not talking about View DTOs, but actually getting data from DB which doesn't fit into Business Layer Book class.
To me this sounds like multiple Query Models (QMs).
I use DDD in a CQRS/ES style, so aggregate roots produce events based on the commands passed in. Multiple QMs subscribe to those events, so I create multiple "views" based on requirements.
Event sourcing (ES) is powerful here: I can introduce additional QMs later by replaying the stored events.
It sounds like managing a lot of similar, or even duplicate, data, but it makes sense to me.
QMs can be, and are, optimized to contain just enough data/structure/indexes for a given purpose. This is the way out of the "shared data model". I see great harm in the "one RDBMS model for everything" approach: you will always get lost in the complexity of managing the shared model, as you do now.
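To make the QM idea concrete, here is a hedged sketch (TypeScript, with hypothetical event and model names) of a projection that subscribes to book events and maintains exactly the compact list representation from case 1, holding only the two counts:

```typescript
// Hypothetical events emitted by the Book aggregate.
type BookEvent =
  | { type: "BookAdded"; bookId: string; title: string }
  | { type: "AuthorLinked"; bookId: string; authorId: string }
  | { type: "ReviewPosted"; bookId: string; reviewId: string };

// Compact query model for list views: only counts, no child entities.
interface BookCompact {
  bookId: string;
  title: string;
  numberOfAuthors: number;
  numberOfReviews: number;
}

// A projection: replaying the event stream (re)builds this read model.
const bookList = new Map<string, BookCompact>();

function project(event: BookEvent): void {
  switch (event.type) {
    case "BookAdded":
      bookList.set(event.bookId, {
        bookId: event.bookId,
        title: event.title,
        numberOfAuthors: 0,
        numberOfReviews: 0,
      });
      break;
    case "AuthorLinked": {
      const book = bookList.get(event.bookId);
      if (book) book.numberOfAuthors += 1;
      break;
    }
    case "ReviewPosted": {
      const book = bookList.get(event.bookId);
      if (book) book.numberOfReviews += 1;
      break;
    }
  }
}
```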
I had a very good result with the following design:
the domain package contains the @Entity classes, which hold all the necessary data stored in the database
the dto package contains the view(s) of an entity that will be returned from the service
A DTO should have a constructor that takes the entity as a parameter. To copy the data more easily you can use BeanUtils.copyProperties(domainClass, dtoClass);
By doing this you share only a minimal amount of information, and it is returned in an object that does not have any functionality.
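A minimal sketch of that dto-from-entity pattern, translated into TypeScript with hypothetical names (the original answer assumes Java and BeanUtils):

```typescript
// Hypothetical domain entity as stored in the database.
class Book {
  constructor(
    public id: string,
    public title: string,
    public authors: string[],
    public reviews: string[],
  ) {}
}

// DTO exposing only what the list view needs; built from the entity.
class BookListDto {
  readonly id: string;
  readonly title: string;
  readonly numberOfAuthors: number;
  readonly numberOfReviews: number;

  constructor(entity: Book) {
    this.id = entity.id;
    this.title = entity.title;
    this.numberOfAuthors = entity.authors.length;
    this.numberOfReviews = entity.reviews.length;
  }
}
```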

What class name should I use for a class used to CRUD some data type in Node.js

In many cases I need to write a lot of classes that do CRUD work for some type, for example CRUD for plain objects like User, Book, and Tag.
I usually make a directory named models and put all the CRUD classes into that folder.
But I feel that the word model doesn't capture the essence. Is the word model well defined in computer science? Does it mean the plain User object, or the class that does CRUD for User?
I also use another name, services, for more complex logic; for example UserService may require models other than UserModel. But the word service often collides with other uses, like an online service or a backend service.
Are there any good names for the model and service in my case? BTW, I am mostly using Node.js, so ideally the names should not conflict with general Node.js conventions.
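For illustration only, here is a minimal TypeScript sketch of the split being described, using one common naming convention (repositories for the per-type CRUD classes, services for logic that spans types); all names here are hypothetical:

```typescript
interface User { id: string; name: string }

// userRepository.ts -- pure CRUD against the data store (in-memory here).
class UserRepository {
  private users = new Map<string, User>();
  create(user: User): void { this.users.set(user.id, user); }
  findById(id: string): User | undefined { return this.users.get(id); }
  update(user: User): void { this.users.set(user.id, user); }
  delete(id: string): void { this.users.delete(id); }
}

// userService.ts -- business logic; may coordinate several repositories.
class UserService {
  constructor(private readonly users: UserRepository) {}

  rename(id: string, name: string): User {
    const user = this.users.findById(id);
    if (!user) throw new Error(`unknown user ${id}`);
    const renamed = { ...user, name };
    this.users.update(renamed);
    return renamed;
  }
}
```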
Ultimately, it will come down to what makes the code most understandable, both to you and to someone down the road who may have occasion to work on it. If 'models' and 'services' convey what lies within in an obvious way to anyone coming into the code, then they are probably fine. As far as standards go, I don't know of a 'defined' set of names you have to use. In MVC, for example, 'Models' is one of your folders, used to store all of the actual models you will feed your views, and it is understood in the MVC architecture that those names (Models, Views, Controllers) are the standard.
I agree with you that Model is a little ambiguous. Sometimes it is used to indicate domain objects such as User/Book/Tag, but sometimes it is used to indicate objects that deal with business logic, such as "Buying a book","Authenticating a user".
What's common to both uses is that "Model" is clearly separated from UI, that is handled entirely by the Views and the Controllers.
Another useful name is Entities. In Robert Martin's work on Object Oriented Design, he speaks of use-case-driven design, and distinguishes between three kinds of objects: Entity Objects, Interactor objects and Boundary objects.
Entity objects are useful in multiple use-cases. For example, in a book selling system, entities can be Book/User/Recommendation/Review.
Interactor objects implement use-cases, and they typically use multiple entity objects. For example, Purchase_Book/Login/Search_Books can be such objects.
Boundary objects are used for transferring data across module boundaries, and for building interfaces between parts of the system that should be decoupled from one another. For example, a controller may need to create a Purchase_Book object; to create it, it needs to pass data about which book ID should be purchased, by which user ID, and so on. This data can be packed in a boundary object called Purchase_Request.
While Interactor and Boundary require more explanation, I find that the word Entities is meaningful and can be grasped intuitively without reading any explanation.
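To illustrate, a hedged sketch in TypeScript; the names loosely follow the book-store example above and the details are assumptions:

```typescript
// Entity: useful across many use cases.
interface BookEntity { id: string; title: string; price: number }

// Boundary: plain data crossing a module boundary into the interactor.
interface PurchaseRequest { bookId: string; userId: string }

// Interactor: implements one use case, coordinating entities.
class PurchaseBook {
  constructor(private readonly books: Map<string, BookEntity>) {}

  execute(request: PurchaseRequest): { userId: string; total: number } {
    const book = this.books.get(request.bookId);
    if (!book) throw new Error(`unknown book ${request.bookId}`);
    // ...charge the user, record the order, etc.
    return { userId: request.userId, total: book.price };
  }
}
```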

Zipper data structure for graphical model editor

I'm writing a graphical editor for a "model" (i.e. a collection of boxes and lines with some kind of semantics, such as UML, the details of which don't matter here). So I want to have a data structure representing the model, and a diagram where an edit to the diagram causes a corresponding change in the model. So if, for instance, a model element has some text in an attribute, and I edit the text in the diagram, I want the model element to be updated.
The model will probably be represented as a tree, but I want to have the diagram editor know as little about the model representation as possible. (I'm using the diagrams framework, so associating arbitrary information with a graphical element is easy). There will probably be a "model" class to encode the interface, if I can just figure out what that should be.
If I were doing this in an imperative language it would be straightforward: I'd just have a reference from the graphical element in the diagram back to the model element. In theory I could still do this by building up the model from a massive collection of IORefs, but that would be writing a Java program in Haskell.
Clearly, each graphical element is going to have some kind of cookie associated with it that will enable the model update to happen. One simple answer would be to give each model element a unique identifier and store the model in a Data.Map lookup table. But that requires significant bookkeeping to ensure that no two model elements get the same identifier. It also strikes me as a "stringly typed" solution; you have to handle cases where an object is deleted but a dangling reference to it remains elsewhere, and it's difficult to say anything about the internal structure of the model in your types.
On the other hand Oleg's writings about zippers with multiple holes and cursors with clear transactional sharing sounds like a better option, if only I could understand it. I get the basic idea of list and tree zippers and the differentiation of a data structure. Would it be possible for every element in a diagram to hold a cursor into a zipper of the model? So that if a change is made it can then be committed to all the other cursors? Including tree operations (such as moving a subtree from one place to another)?
It would particularly help me at this point if there was some kind of tutorial on delimited continuations, and an explanation of how they make Oleg's multi-cursor zippers work, that is a bit less steep than Oleg's postings?
I think you're currently working from a design in which each node in the model tree is represented by a separate graphical widget, and each of these widgets may update the model independently. If so, I don't believe that a multi-hole zipper will be very practical. The problem is that the complexity of the zipper grows quickly with the number of holes you wish to support. As you get much beyond two holes, the size of the zipper will get quite large. From a differentiation point of view, a 2-hole zipper is a zipper over 1-hole zippers, so the complexity grows by application of the chain rule.
Instead, you can borrow an idea from MVC. Each node is still associated with a widget, but they don't communicate directly. Rather they go through an intermediary controller, which maintains a single zipper. When widgets are updated, they notify the controller, which serializes all updates and modifies the zipper accordingly.
The widgets will still need some sort of identifier to reference model nodes. I've found it's often easiest to use the node's path, e.g. [0] for the root, [1,0] for the root's second child, etc. This has a few advantages. It's easy to determine the node a path refers to, and it's also easy for a zipper to calculate the shortest path from the current location to a given node. For a tree structure, they're also unique up to deletion and reinsertion. Even that isn't usually a problem because, when the controller is notified that nodes should be deleted, it can delete the corresponding widgets and disregard any associated updates. As long as the widget lifetime is tied to each node's lifetime, the path will be sufficiently unique to identify any modifications.
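A hedged sketch of path-based addressing into a tree model (written in TypeScript for brevity, although the question is about Haskell; paths here are root-first child indices, a slightly different convention from the answer's):

```typescript
interface Node<A> {
  value: A;
  children: Node<A>[];
}

// Follow a path such as [1, 0]: second child of the root, then its first child.
function resolve<A>(root: Node<A>, path: number[]): Node<A> | undefined {
  let current: Node<A> | undefined = root;
  for (const index of path) {
    current = current?.children[index];
  }
  return current;
}

// Replace the value at `path`, rebuilding only the spine of the tree
// (a persistent update, in the spirit of a zipper's local edit).
function updateAt<A>(root: Node<A>, path: number[], value: A): Node<A> {
  if (path.length === 0) return { ...root, value };
  const [index, ...rest] = path;
  const child = root.children[index];
  if (child === undefined) throw new Error(`invalid path at index ${index}`);
  const children = root.children.slice();
  children[index] = updateAt(child, rest, value);
  return { ...root, children };
}
```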
For tree operations, I would probably destroy then recreate graphical widgets.
As an example, I have some code which does this sort of thing. In this model there aren't separate widgets for each node, rather I render everything using diagrams then query the diagram based on the click position to get the path into the data model. It's far from complete, and I haven't looked at it for a while, so it might not build, but the code may give you some ideas.

How to go about creating an Aggregate

Well, this time the question I have in mind is: what level of abstraction is necessary to construct an Aggregate?
e.g.
Order is composed of OrderWorkflowHistory and Comments.
Do I go with
Order <>- OrderWorkflowHistory <>- WorkflowActivity
Order <>- CommentHistory <>- Comment
OR
Order <>- WorkflowActivity
Order <>- Comment
Where OrderWorkflowHistory is just an object that encapsulates all the workflow activities that took place. It maintains a list; Order simply delegates the job of maintaining the list of activities to this object.
CommentHistory is similarly a wrapper around the list of comments appended by users.
When it comes to the database, ultimately the Order gets written to the ORDER table and the list of workflow activities gets written to the WORKFLOW_ACTIVITY table. OrderWorkflowHistory has no importance when it comes to persistence.
From a DDD perspective, which would be most optimal? Please share your experiences!
As you describe it, the containers (OrderWorkflowHistory, CommentHistory) don't seem to encapsulate much behaviour. On that basis I'd vote to omit them and manage the lists directly in Order.
One caveat. You may find increasing amounts of behaviour required of the list (e.g. sophisticated searches). If that occurs, it may make sense to introduce one or both containers to encapsulate that logic and stop Order becoming bloated.
I'd likely start with the simple solution (no containers) and only introduce them if justified as above. As long as external clients make all calls through Order's interface you can refactor Order internally without impacting the clients.
hth.
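To make the "no containers" starting point concrete, a minimal sketch (TypeScript, hypothetical names) of Order owning its lists directly and exposing only behaviour through its own interface:

```typescript
interface WorkflowActivity { name: string; at: Date }
interface Comment { author: string; text: string; at: Date }

class Order {
  private readonly activities: WorkflowActivity[] = [];
  private readonly comments: Comment[] = [];

  recordActivity(name: string): void {
    this.activities.push({ name, at: new Date() });
  }

  addComment(author: string, text: string): void {
    this.comments.push({ author, text, at: new Date() });
  }

  // Read access via copies so clients cannot mutate internal state;
  // this keeps the option open to refactor into container objects later.
  get workflowHistory(): readonly WorkflowActivity[] {
    return [...this.activities];
  }
}
```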
This is a good question: how to model and enrich your domain. But it's so hard to answer, since it varies so much between domains.
My experience has been that when I started with DDD I ended up with lots of repositories and only a few value objects. I reread some books and looked into several DDD code examples with an open mind (there are so many different ways you can implement DDD; not all of them suit your current project scenario).
I started to keep in mind "more value objects, more value objects, more value objects". Why?
Well, value objects bring looser dependencies and more behaviour.
In your example above, with a one-to-many (1-n) relationship, I have solved the 1-n relationship in different ways depending on how my use cases use the domain.
(1) Sometimes I create a wrapper class (like your OrderWorkflowHistory) that is a value object. The whole list of child objects is set when the object is created. This scenario is good when you have a set of child objects that must be set during one request, for example question weights on a Questionaire form. All questions should then get their question weight through a method Questionaire.ApplyTuning(QuestionaireTuning), where QuestionaireTuning is, like your OrderWorkflowHistory, a wrapper around a list (a sketch follows below, after the three approaches). This adds a lot to the domain:
a) The Questionaire can never get into an invalid state: once we apply tuning, we do it against all questions in the Questionaire.
b) The QuestionaireTuning can provide good access/search methods to retrieve the weight for a specific question, to calculate the average weight score, etc.
(2) Another approach is to have the 1-n wrapper class not be a value object. This approach suits better if you want to add a child object now and then. The parent cannot be put in an invalid state by the number of child objects. Such a wrapper class typically has an Add(Child...) method and several search/contains/exists/check methods.
(3) The third approach is just exposing the IList as a read-only collection. You can add some search functionality with extension methods (new in C# 3.0), but I think it's a design smell; better to encapsulate the list access methods in a list-wrapper class.
Look at http://dddsamplenet.codeplex.com/ for some example of approach one.
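Here is the sketch of approach (1) promised above, in TypeScript; the class and method names follow the answer, while the details are assumptions:

```typescript
interface QuestionWeight { questionId: string; weight: number }

// Wrapper value object around the full list of weights for one tuning request.
class QuestionaireTuning {
  constructor(private readonly weights: QuestionWeight[]) {}

  weightFor(questionId: string): number | undefined {
    return this.weights.find(w => w.questionId === questionId)?.weight;
  }

  averageWeight(): number {
    const total = this.weights.reduce((sum, w) => sum + w.weight, 0);
    return this.weights.length ? total / this.weights.length : 0;
  }
}

class Questionaire {
  private readonly weights = new Map<string, number>();

  constructor(private readonly questionIds: string[]) {}

  // All weights are applied in one request, so the questionnaire
  // never ends up partially tuned (i.e. never in an invalid state).
  ApplyTuning(tuning: QuestionaireTuning): void {
    for (const id of this.questionIds) {
      const weight = tuning.weightFor(id);
      if (weight === undefined) {
        throw new Error(`missing weight for question ${id}`);
      }
      this.weights.set(id, weight);
    }
  }
}
```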
I believe the entire discussion around modeling value objects and entities, and who is responsible for what behaviour, is the most central one in DDD. Please share your thoughts on this topic.

Are there any good patterns for handling lists of entities

In DDD, what are the best patterns for handling different versions of your entities, e.g. entities in a list vs. the full object? I would like to avoid the overhead of fetching properties I do not need when displaying the entities in a list.
Would you have a separate entity type used in lists or just fill up your full entity type partially?
Would you use inheritance?
I understand your urge to create "views" of models in the domain, but would recommend against it. Personally, I use the entire entity inside of the domain, regardless of the situation. The entity is the entity, and anything less or more just does not feel clean. That does not mean that I can't use a reference to the entity to help focus my use of the items in the list, though.
The entity does not cross the domain boundary in my implementation. Instead, I return a type of DTO and have application services that can abstract a view from it. This allows, for example, a presenter to generate the correct view model from a DTO and provide it to the view. I don't know if you are talking about operations in the domain services or in the application services, but there are a couple of things you can do that could be applied to either (or both).
You can do certain things to reduce the performance penalty of working with the entire entity in the domain layers, as well. One thing to look at is some sort of cache-aside implementation. When an entity is requested, check whether it is cached; if it is, return the cached version, and if it isn't, pull it and cache it before returning. When the entity is updated, evict it from the cache and then do your update. I have purposely made my concrete repository implementations cache-aware to facilitate this. One other consideration with an approach like this is that it is beneficial to do as many fine-grained operations as possible. While that seems illogical at first, if entities are commonly fetched from your data store, it is easy to set up some logging to measure the ratio of cache hits to cache misses.
Coming full circle, to your question... Most lists I deal with are small, so I incur the penalty of loading up the entity in its entirety. Assuming that most use cases will involve the user drilling into one or more of the items, they are pre-cached because of the cache-aside implementation. The number of items is fluid, but I generally apply this approach to anything less than twenty five entities in a list.
For larger lists, I just use IDs. Most likely, the use case here is some sort of search result. Search results are commonly paged, for example, and this does not fit into the above pattern. Instead, I use the larger list of IDs as a sliding range window of entities I am interested in that I then pass to a GetRangeById() method that all of my repositories have - written to purposely take a list of identifiers and load them one at a time so they are cached. In essence, this will take a larger lightweight list and zero in just on the area I am interested in at a given point in time.
With an approach like this, the important thing to realize is that it is highly scalable. It might not baseline as fast as a non-cached approach with small sets of data, but it will perform better with larger sets of data. There is an implied per-operation performance overhead at play here, but it degrades at a slower rate than a standard "load 'em up" pattern as well.
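A hedged sketch of the cache-aside repository with the GetRangeById() idea described above (TypeScript; names and signatures are hypothetical):

```typescript
interface Entity { id: string }

class CachedRepository<T extends Entity> {
  private readonly cache = new Map<string, T>();

  constructor(private readonly loadFromStore: (id: string) => Promise<T>) {}

  async getById(id: string): Promise<T> {
    const cached = this.cache.get(id);           // cache hit?
    if (cached) return cached;
    const entity = await this.loadFromStore(id); // miss: load, then cache
    this.cache.set(id, entity);
    return entity;
  }

  // Load a sliding window of IDs one at a time so each entity lands in the cache.
  async getRangeById(ids: string[]): Promise<T[]> {
    const result: T[] = [];
    for (const id of ids) {
      result.push(await this.getById(id));
    }
    return result;
  }

  evict(id: string): void {
    this.cache.delete(id);                       // call on update/delete
  }
}
```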
You can use the CQRS pattern to separate query processing from command processing, and you can do it even on a single database. In that case you would map your view models directly to the tables in the database (via NHibernate, for example). Commands (writes) would go through the real domain model and be persisted in the DB. Queries (like "get me a list of entities") would bypass the domain and go straight to the DB. There is no point in querying domain objects, because you don't actually invoke any business logic in them; you are just retrieving data.
You can also extend this to full-featured CQRS by having separate stores for the command side and the query side. The query side would be synchronized by means of replication or pub/sub messaging.
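A short sketch of that single-database query side (TypeScript; `db.query`, the table, and the column names are stand-ins): the read path maps rows straight to a view model and never touches the domain objects.

```typescript
// View model for a paged list; no domain behaviour, just display data.
interface OrderListItem {
  id: string;
  customerName: string;
  total: number;
}

async function listOrders(
  db: { query: (sql: string) => Promise<any[]> },
  page: number,
  pageSize: number,
): Promise<OrderListItem[]> {
  const offset = page * pageSize;
  // Read straight from the table the view model is mapped to.
  const rows = await db.query(
    `SELECT id, customer_name, total FROM orders
     ORDER BY placed_at DESC LIMIT ${pageSize} OFFSET ${offset}`,
  );
  return rows.map(r => ({
    id: r.id,
    customerName: r.customer_name,
    total: r.total,
  }));
}
```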

Resources