Managing complex state in FP - Haskell

I want to write a simulation of a multi-entity system. I believe such systems motivated the creation of Simula and OOP, where each object would maintain its own state and the runtime would manage the entire system (e.g. stop threads, serialize data).
On the other hand, I would like the ability to rewind, change the simulation parameters and compare the results. Thus, immutability sounds great (at least up to the almost certain garbage collection issues caused by keeping track of possibly redundant data).
However, I don't know how to model this. Does it mean that I must put every interacting entity into a single, huge structure where each object update first requires locating it?
I'm worried that such an approach would hurt performance badly because of GC overhead and constant structure traversals, as opposed to keeping each entity at a fixed address in memory.
UPDATE
To clarify, this question asks whether there is any design option other than a single root structure that contains all possibly interacting entities. Intuitively, such a structure would imply a logarithmic penalty on every single update, unless updates are somehow "clustered" to amortize the cost.
Is there a known system where interactions could be modelled differently? For example, something like hot/cold data storage optimization?
After some research, there seems to be a connection with N-body simulation, where systems can be clustered, but I'm not familiar with it yet. Even so, would that also mean I need a single structure of clusters?

While I agree with the people commenting that this is a vague question, I'll still try to address some of the issues put forth.
It's true that there's some performance overhead from immutability, because when you use mutable state, you can update some values in-place, whereas with immutable state, some copying has to take place.
It is, however, a common misconception that this causes problems with big 'object' graphs. It doesn't have to.
Consider a Haskell data structure:
data BigDataStructure = BigDataStructure {
    bigChild1 :: AnotherBigDataStructure
  , bigChild2 :: YetAnotherBigDataStructure
  -- more elements go here...
  , bigChildN :: Whatever }
  deriving (Show, Eq)
Imagine that each of these child elements are big and complex themselves. If you want to change, say, bigChild2, you could write something like:
updatedValue = myValue { bigChild2 = updatedChild }
When you do that, some data copying takes place, but it's often less than most people think. This expression does create a new BigDataStructure record, but it doesn't 'deep copy' any of its values. It simply reuses bigChild1, bigChildN and all the other existing values, and points bigChild2 at updatedChild, because everything involved is immutable and can safely be shared.
In theory (but we'll get back to that in a minute), the flatter your data structures are, the more sharing is enabled. If, on the other hand, you have some deeply nested data structures and you need to update the leaves, you'll need to create a copy of the immediate parents of those leaves, plus the parents of those parents, and their parents as well, all the way to the root. That might be expensive.
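To make the sharing concrete, here's a minimal sketch of the 'one big structure' design the question worries about, using a persistent IntMap keyed by entity id (the Entity type and its fields are placeholders I've made up). A single-entity update copies only the O(log n) spine of the map and shares everything else with the previous version, and keeping old versions around is what gives you rewind:
import qualified Data.IntMap.Strict as IM

-- Hypothetical entity type; the fields are placeholders.
data Entity = Entity
  { position :: (Double, Double)
  , energy   :: Double
  } deriving (Show, Eq)

-- The "one big structure": every entity in the simulation, keyed by an Int id.
type World = IM.IntMap Entity

-- Updating one entity copies only the O(log n) spine of the map;
-- all other entities (and the untouched fields) are shared with the old World.
moveEntity :: Int -> (Double, Double) -> World -> World
moveEntity eid newPos = IM.adjust (\e -> e { position = newPos }) eid

-- Because old versions remain valid, keeping them gives you rewind for free.
snapshot :: World -> [World] -> [World]
snapshot w history = w : history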
That's the theory, though, and we've known for decades that it's impractical to try to predict how software will perform. Instead, try to measure it.
While the OP suggests that a significant amount of data is involved, the question doesn't state how much, nor the hardware specs of the system that's going to run the simulation. So, as Eric Lippert explains so well, the person who can best answer questions about performance is you.
P.S. It's my experience that when I start to encounter performance problems, I need to get creative with how I design my system. Efficient data structures can address many performance issues. This is just as much the case in OOP as it is in FP.

Related

What is the difference between a data transfer object (DTO) and a representation object of the domain-driven design pattern?

I know a DTO is returned by the server side and received by the client side, but I am confused by the representation object in DDD. I think they are almost the same. Can someone tell me their differences?
Can someone tell me their differences?
They solve different problems in different contexts.
Data transfer is a boundary concern - how do we move information from here to there (across a remote interface)? Among the issues you may run into: the transfer of information is slow or expensive. One way of keeping this under control is to move information at a coarser grain.
the main reason for using a Data Transfer Object is to batch up what would be multiple remote calls into a single call -- Martin Fowler, Patterns of Enterprise Application Architecture
In other words, a DTO is your program's representation of a fat message.
In DDD, the value object pattern is a modeling concern; it is used to couple immutable representations of information and related computations.
A DTO tends to look like a data structure, with methods that can be used to transform that data structure into a representation (for example: an array of bytes) that can be sent across a boundary.
A value object tends to look like a data structure, with methods that can be used to compute other information that is likely to be interesting in your domain.
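A minimal sketch of that contrast in Haskell (the types, fields and wire format are invented for illustration):
-- A DTO-ish type: a bag of primitives, plus a way to get it across the wire.
data OrderDto = OrderDto
  { orderId    :: Int
  , totalCents :: Int
  , currency   :: String
  } deriving (Show, Eq)

-- Boundary concern: flatten the data into something transportable.
toWire :: OrderDto -> String
toWire (OrderDto oid total cur) = show oid ++ "," ++ show total ++ "," ++ cur

-- A value-object-ish type: immutable, with domain computations attached.
data Money = Money { cents :: Int, unit :: String } deriving (Show, Eq)

-- Domain concern: a computation that is interesting in the domain,
-- e.g. only amounts in the same currency may be added.
addMoney :: Money -> Money -> Maybe Money
addMoney (Money a u) (Money b v)
  | u == v    = Just (Money (a + b) u)
  | otherwise = Nothing
The DTO knows how to get itself across a boundary; the value object knows how to take part in domain computations.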
DTOs tend to be more stable (or at least backwards compatible) out of necessity -- because producer and consumer are remote from one another, coordinating a change to both requires more effort than a single local change.
Value objects, in contrast, are easier to change because they are a domain model concern. If you want to change the model, that's just one thing, and correspondingly easier to coordinate.
(There's kind of a hedge here - for systems that need persistence, we need some way to get the information out of the object and into a representation that can be stored and retrieved. That's not necessarily a value object concern, especially if you are willing to use general purpose data structures to move information in and out of "the model".)
In the kingdom of nouns, the lines can get blurry - partly because any information that isn't a general purpose data structure/primitive is "an object", and partly because you can often get away with using the same objects for your internal concerns and boundary concerns.

Sequence Diagram: Interactions with resources (DB, Network, Caches, etc)

I am currently making a behavior assessment of different software modules with regard to DB access, network access, the number of memory allocations, etc.
The main goal is to pick a main use case (let's say system initialization) and identify the modules that are:
Unnecessarily accessing the DB.
Creating too many caches for the same data.
Making too many (or too large) allocations at once.
Spawning many threads.
Accessing the network.
By assessing those, I can get an overview of the modules that need to be reworked in order to improve performance, remove redundant DB accesses, avoid CPU usage peaks, etc.
I found the sequence diagram to be a good candidate for representing the use cases' behavior, but I am not sure how to depict their interaction with the above-mentioned activities.
I could do something like what is shown in this picture, but that is an "invention" of tagging functions with colors. I'm not sure if it is too simplistic or childish (too many colors?).
I wonder if there is any specific UML diagram to represent this kind of interaction.
Using SDs is probably the most appropriate approach here. You might consider timing diagrams in certain cases if you need to present timing constraints; however, SDs already have a quite powerful way to show timing constraints.
You should adorn your diagram with a comment explaining that the length of the colored self-calls represents percentage of use, or something like that (or just add a title saying so). Using colors is perfectly fine, by the way.
As a side note: (the colored) self-calls are shown with a self-pointing arrow like this
but I'd guess your picture can be understood by anyone, so you may regard that as nitpicking. Most likely they are not real self-calls but just indicators anyway, so that's fine too.
tl;dr Whatever transports the message is appropriate.

Domain Objects containing lots of Data

Our Domain needs to deal with large numbers of objects (possibly more than 1000 records' worth) as domain concepts. This is largely historical data that the Domain business logic needs to use. Normally this kind of processing depends on a Stored Procedure or some other service to do the work, but since it is all intimately Domain-related, and we want to maintain the validity of the Model, we'd like to find a solution that allows the Aggregate to manage all of the business logic and rules required to work with the data.
Essentially, we're talking about past transaction data. Our idea was to build a lightweight class and create an instance for each transaction we need to work with from the database. We're uncomfortable with this because of the volume of objects we'd be instantiating and the potential performance hit, but we're equally uncomfortable with offloading this Domain logic to a stored procedure since that would break the consistency of our Model.
Any ideas on how we can approach this?
"1000" isn't really that big a number when it comes to simple objects. I know that a given thread in the system I work on may be holding on to tens of thousands of domain objects at a given time, all while other threads are doing the same at the same time. By the time you consider all of the different things going on in a reasonably complicated application, 1000 objects is kind of a drop in the bucket.
YMMV depending on what sort of resources those objects are holding on to, system load, hard performance requirements, or any number of other factors, but if, as you say, they're just "lightweight" objects, I'd make sure you actually have a performance problem on your hands before you try getting too fancy.
Lazy loading is one technique for mitigating this problem, and most of the popular object-relational mapping (ORM) solutions implement it. It has detractors (for example, see this answer to Lazy loading - what's the best approach?), but others consider lazy loading indispensable.
Pros
Can reduce the memory footprint of your aggregates to a manageable level.
Lets your ORM infrastructure manage your units of work for you.
In cases where you don't need a lot of child data, it can be faster than fully materializing ("hydrating") your aggregate root.
Cons
Chattier than materializing your aggregates all at once. You make a lot of small trips to the database.
Usually requires architectural changes to your domain entity classes, which can compromise your own design. (For example, NHibernate just requires you to expose a default constructor and make your entities' members virtual to take advantage of lazy loading - but I've seen other solutions that are much more intrusive.)
By contrast, another approach would be to create multiple classes to represent each entity. These classes would essentially be partial aggregates tailored to specific use cases. The main drawback to this is that you risk inflating the number of classes and the amount of logic that your domain clients need to deal with.
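A rough sketch of such a use-case-specific partial aggregate (all the types and names here are hypothetical):
-- Full aggregate: everything the write side needs to enforce its invariants.
data Transaction = Transaction { amountCents :: Int, postedOn :: String }

data Account = Account
  { accountId    :: Int
  , transactions :: [Transaction]   -- potentially thousands of rows
  }

-- Partial, use-case-specific shape: a balance calculation does not need
-- the whole aggregate, only the handful of fields it actually reads.
data BalanceView = BalanceView
  { viewAccountId :: Int
  , balanceCents  :: Int
  }

-- Built straight from the rows this use case loads,
-- without materializing the full Account.
mkBalanceView :: Int -> [Transaction] -> BalanceView
mkBalanceView aid txs = BalanceView aid (sum (map amountCents txs))
The full aggregate stays authoritative for writes; the slim view only carries what this one calculation needs, which is where the extra classes (and the extra logic for clients) come from.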
When you say 1000 records worth, do you mean 1000 tables or 1000 rows? How much data would be loaded into memory?
It all depends on the memory footprint of your objects. Lazy loading can indeed help, if the objects in question reference other objects which are not of interest in your process.
If you end up with a performance hog, you must ask yourself (or perhaps your client) if the process must run synchronously, or if it can be offloaded to a batch process somewhere else.
Using DDD, How Does One Implement Batch Processing?

Mutable Value Objects / Sharing State (and beer brewing!)

I'm a rusty programmer attempting to become learned in the field again. I've discovered, fitfully, that my self-taught and formal education both induced some bad habits. As such, I'm trying to get my mind around good design patterns, and -- by extension -- when they're wrong. The language is Java, and here's my issue:
I'm attempting to write software to assist in beer brewing. In brewing, sometimes you must substitute a particular variety of hop for what's called for in the recipe. For example, you might have a recipe that calls for 'Amarillo' hops, but all you can get is 'Cascade', which has a similar enough aroma for substitution; hops have an Alpha Acid amount (per a given mass), and the ratio between two hops is part of the substitution formula. I'm attempting to model this (properly) in my program.
My initial go is to have two objects: a HopVariety, which has general descriptive information about a variety of hop, and a HopIngredient, which is a particular instantiation of a HopVariety and also includes the amount used in a given recipe. HopIngredient should have knowledge of its variety, and HopVariety should have knowledge of what can be used as a substitute for it (not all substitutions are symmetric). This seems like good OOP.
The problem is this: I'm trying to follow good practice and make my value objects immutable. (In my head, I'm classifying HopVariety and HopIngredient as value objects, not 'actors'.) However, I need the user to be able to update a given HopVariety with new viable substitutions. If I follow immutability, these changes will not propagate to individual ingredients. If I choose mutability, I'm Behaving Badly by potentially introducing side-effects through a shared mutable value object.
So, option B: introduce a VarietyCollection of sorts, and loosely couple the ingredients and the varieties by way of a name or unique identifier. And then a VarietySubstitutionManager, so that varieties don't hold references to other varieties, only to their ids. This goes against what I want to do, because holding a reference to the variety object makes intuitive sense, and now I'm introducing what feels like excessive levels of abstraction, and also separating functions from the data.
So, how do I properly share state amongst what amounts to specific instances? What's the proper, or at least, sanest way to solve the problem?
Are you sure HopVariety should be a value object? It sounds to me like an entity - "Amarillo" hop variety is one and only one, so it should be a uniquely identifiable object. Just like "Tony Wooster" is only one (well not exactly, but you get the point ;) )
I recommend reading about Domain Driven Design and the differences between entities and value objects.
BTW: the DDD book has a lot of examples of situations like yours and how to handle them.
I think your choices are either to
have the 'update variety' function walk the existing object graph and create a new one with all of the ingredients updated, or
keep using mutability
With the information at hand, it isn't clear to me which is better for your situation.
How badly do you want to avoid mutation of objects with multiple references?
If you want the recipe collection to reflect the user updates to the variety set, and you want to avoid mutable fields in your objects, then you'll be forcing yourself into using functional object update to handle user input.
At the end of that road lies the I/O monad. How far do you want to go in that direction?
With a functional language that allows mutation side-effects (e.g. SML), you might opt to keep your variety and ingredient objects pure, and, instead of holding variety objects directly, store functions that fetch the current variety object from the latest variety collection, which lives in a single mutable reference cell. This might seem like a reasonable way to split the difference.
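A rough Haskell analogue of that last suggestion, using an IORef in place of the ML-style reference cell (HopVariety and HopIngredient are borrowed from the question; everything else is made up):
import Data.IORef
import qualified Data.Map.Strict as M

type VarietyId = String

-- Pure, immutable value objects.
data HopVariety    = HopVariety    { varietyName :: String, substitutes :: [VarietyId] }
data HopIngredient = HopIngredient { variety :: VarietyId, grams :: Double }

-- One mutable cell holds the latest catalogue; ingredients refer to varieties by id.
type Catalogue = M.Map VarietyId HopVariety

addSubstitute :: IORef Catalogue -> VarietyId -> VarietyId -> IO ()
addSubstitute ref vid subId =
  modifyIORef' ref (M.adjust (\v -> v { substitutes = subId : substitutes v }) vid)

-- Looking an ingredient's variety up at use time means every ingredient
-- always sees the user's latest edits, without sharing any mutable objects.
currentVariety :: IORef Catalogue -> HopIngredient -> IO (Maybe HopVariety)
currentVariety ref ing = M.lookup (variety ing) <$> readIORef ref
The side-effects are confined to the one reference cell; the variety and ingredient values themselves stay immutable.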

Am I allowed to have "incomplete" aggregates in DDD?

DDD states that you should only ever access entities through their aggregate root. So say for instance that you have an aggregate root X which potentially has a lot of child Y entities. Now, for some scenario, you only really care about a subset of these Y entities at a time (maybe you're displaying them in a paged list or whatever).
Is it OK to implement a repository, then, so that in such scenarios it returns an incomplete aggregate? I.e. an X object whose Ys collection only contains the Y instances we're interested in and not all of them? This could, for instance, cause methods on X which perform some calculation involving the Ys to not behave as expected.
Is this perhaps an indication that the Y entity in question should be considered promoted to an aggregate root?
My current idea (in C#) is to leverage the delayed execution of LINQ, so that my X object has an IQueryable to represent its relationship with Y. This way, I can have transparent lazy loading with filtering... But getting this to work with an ORM (LINQ to SQL in my case) might be a bit tricky.
Any other clever ideas?
I consider an aggregate root with a lot of child entities to be a code smell, or a DDD smell if you will. :-) Generally I look at two options.
Split the aggregate into many smaller aggregates. This means that the original design was not optimal and I need to identify some new entities.
Split the domain into multiple bounded contexts. This means that there are specific sets of scenarios that use a common subset of the entities in the aggregate, while other sets of scenarios use a different subset.
Jimmy Nilsson hints in his book that instead of reading a complete aggregate you can read a snapshot of parts of it. But you are not supposed to be able to save changes in the snapshot classes to the database.
Jimmy Nilsson's book Chapter 6: Preparing for infrastructure - Querying. Page 226.
Snapshot pattern
You're really asking two overlapping questions.
The title and first half of your question are philosophical/theoretical. I think the reason for accessing entities only through their "aggregate root" is to abstract away the kinds of implementation details you're describing. Access through the aggregate root is a way to reduce complexity by having a trusted point of access. You're eliminating friction/ambiguity/uncertainty by adhering to a convention. It doesn't matter how it's implemented within the root; you just know that when you ask for an entity it will be there. I don't think this perspective rules out a "filtered repository" as you describe. But to provide a pit of success for devs to fall into, it should be impossible to instantiate the repository without being explicit about its "filteredness"; likewise, if shared access to a repository instance is possible, the "filteredness" should be explicit when coding in the caller.
The second half of your question is about implementation on a specific platform. I'm not sure why you mention delayed execution; I think that's really orthogonal to the filtering question. The filtering itself could be a bit tricky to implement with LINQ. Maybe rather than inlining the Where lambdas, you set up a collection of them and select one depending on the filter you need.
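That 'collection of filters' idea is language-neutral; here is a tiny sketch of it (in Haskell, with invented names), where children are loaded through a named predicate instead of callers inlining ad-hoc lambdas:
import qualified Data.Map.Strict as M

data Y = Y { yId :: Int, active :: Bool, score :: Int }

-- A registry of named child filters; the repository selects one by key.
childFilters :: M.Map String (Y -> Bool)
childFilters = M.fromList
  [ ("all",    const True)
  , ("active", active)
  , ("top",    \y -> score y >= 90)
  ]

-- Loading X's children through a named, explicit filter keeps the
-- "filteredness" visible at the call site.
loadChildren :: String -> [Y] -> [Y]
loadChildren name ys =
  case M.lookup name childFilters of
    Just p  -> filter p ys
    Nothing -> ys   -- unknown filter name: fall back to all children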
You are allowed since the code will compile anyway, but if you're going for a pure DDD design you should not have incomplete instances of objects.
You should look into lazy loading if you're afraid of loading a huge object of which you will only use a small portion of the child entities.
Lazy loading delays the loading of whatever you decide to lazy-load until the moment it is accessed. It makes use of callbacks to invoke the loading method once the code asks for the data.
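As a sketch of the idea (not tied to any particular ORM; all names invented), the child collection can be represented by a deferred action that only runs when something forces it:
-- A child collection represented as a deferred load: nothing is fetched
-- until some code actually forces it. (A real implementation would also
-- memoize the result after the first access.)
newtype Lazy a = Lazy { force :: IO a }

data Y = Y { yName :: String }
data X = X { xId :: Int, ys :: Lazy [Y] }

-- Building X wires in the callback but does not touch the database yet.
loadX :: Int -> IO X
loadX i = pure (X i (Lazy (fetchYs i)))

-- Stand-in for the real child query; runs only when 'force' is called.
fetchYs :: Int -> IO [Y]
fetchYs i = do
  putStrLn ("loading children of X " ++ show i)
  pure [Y "child"]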
Is it OK to implement a repository then, so that in such scenarios it returns an incomplete aggregate?
Not at all. An aggregate is a transactional boundary for changing the state of your system. Never use aggregates for querying data. Split the system into write and read sides (read about CQS and CQRS). When we think "CRUD" based, we implement our system based on some resource. Let's say you have an "Appointment" aggregate. Thinking "CRUDish" means we should implement the use cases Create, Update, Delete and GetAll for appointments, which means GetAll should return an Appointment[]. When you think use-case based (hexagonal architecture), your use cases would be ScheduleAppointment, RescheduleAppointment, CancelAppointment. The query side, however, can be something like /myCalendar: we return all appointments for a specific user in a ClientCalendar object. Create separate DTOs for the query side. Never use aggregates for this purpose.
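A compressed sketch of that split (Appointment, ScheduleAppointment and ClientCalendar are taken from the answer above; the fields are invented):
-- Write side: commands go through the Appointment aggregate,
-- which is the transactional boundary.
data Command
  = ScheduleAppointment   { appointmentId :: Int, startsAt :: String }
  | RescheduleAppointment { appointmentId :: Int, startsAt :: String }
  | CancelAppointment     { appointmentId :: Int }

-- Read side: the /myCalendar query returns a purpose-built DTO,
-- never the aggregate itself.
data CalendarEntry = CalendarEntry { entryStartsAt :: String, entryTitle :: String }

data ClientCalendar = ClientCalendar
  { calendarOwner :: String
  , entries       :: [CalendarEntry]
  }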
