I am using Spring and Hibernate for my application. For an update I call repo.findById(id) to get the current instance, then call model.setXXX() to update fields, and finally repo.save(mdl) to save it back. For a delete, I call repo.remove(id). Both my update and delete are state-driven and may bring about a state change on my models; sometimes, based on the state, certain actions like delete are not allowed. How do I go about implementing that behaviour in such a way that the logic lives essentially within the domain models and doesn't leak into the repository implementation?
I would do something like this:
// entity
class User {
    ...
    bool CanBeArchived() {
        // check whether the object can be archived based
        // on the rules and state that you mention
    }
    ...
}

// repository
interface UserArchiver {
    // implementation calls 'CanBeArchived' and throws if it fails
    void Archive(User user);
}
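For illustration, a minimal Java-flavoured sketch of such an implementation (UserRepository and the method names are assumptions introduced here, following Java naming conventions rather than the pseudocode above):

// Hypothetical sketch: the archiver enforces the rule by asking the
// domain model first; the rule itself stays inside User.
class UserArchiverImpl {

    private final UserRepository repository; // assumed persistence abstraction

    UserArchiverImpl(UserRepository repository) {
        this.repository = repository;
    }

    public void archive(User user) {
        if (!user.canBeArchived()) {
            throw new IllegalStateException("User cannot be archived in its current state");
        }
        repository.delete(user); // or flag as archived, depending on your semantics
    }
}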
You don't seem to like the approach outlined above because
... logic is essentially within the domain models and doesn't leak to
repository implementation
I don't think you should be worried about the domain leaking into the repository implementation. That is unavoidable: the data access layer (the repo implementation) has intimate knowledge of domain objects anyway, otherwise it would not be able to hydrate them. What you should be worried about is the reverse, 'data access leaking into the domain', and that is not happening in the code above.
Related
Assume that I have two aggregates, Vehicles and Drivers, and I have a rule that a vehicle cannot be assigned to a driver if the driver is on vacation.
So, my implementation is:
class Vehicle {
    public void assignDriver(Driver driver) {
        if (driver.isInVacation()) {
            throw new IllegalStateException("Driver is on vacation");
        }
        // ....
    }
}
Is it ok to pass an aggregate to another one as a parameter? Am I doing anything wrong here?
I'd say your design is perfectly valid and reflects the Ubiquitous Language very well. There are several examples in the Implementing Domain-Driven Design book where an AR is passed as an argument to another AR.
e.g.
Forum#moderatePost: Post is not only provided to Forum, but modified by it.
Group#addUser: User provided, but translated to GroupMember.
If you really want to decouple, you could also do something like vehicle.assignDriver(driver.id(), driver.isInVacation()), or introduce some kind of intermediary VO that holds only the state from Driver necessary to make an assignment decision.
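For instance, a small Java value object along those lines could look like this (all names here are illustrative, not from the original post):

import java.util.UUID;

// Carries only the Driver state needed for the assignment decision,
// so Vehicle never has to depend on the full Driver aggregate.
final class DriverAvailability {

    private final UUID driverId;
    private final boolean inVacation;

    DriverAvailability(UUID driverId, boolean inVacation) {
        this.driverId = driverId;
        this.inVacation = inVacation;
    }

    UUID driverId() { return driverId; }

    boolean isInVacation() { return inVacation; }
}

Vehicle#assignDriver can then accept a DriverAvailability instead of a Driver.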
However, note that any decision made using external data is potentially stale. For instance, what happens if the driver goes on vacation right after being assigned to a vehicle?
In such cases you may want to use exception reports (e.g. list all vehicles with an unavailable driver), flag vehicles for driver re-assignment, etc. Eventual consistency could be achieved either through batch processing or messaging (event processing).
You could also seek to make the rule strongly consistent by inverting the relationship, where Driver keeps a set of vehicleIds it drives. Then you could use a DB unique constraint to ensure the same vehicle doesn't have more than one driver assigned. Alternatively, you could violate the one-AR-per-transaction rule of thumb and model the two-way relationship to protect both invariants in the model.
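As a sketch of that inverted relationship with a unique constraint (JPA, with hypothetical table and column names):

import java.util.HashSet;
import java.util.Set;
import java.util.UUID;
import javax.persistence.*;

// Driver owns the assignments; the unique constraint on vehicle_id
// guarantees at the DB level that a vehicle has at most one driver.
@Entity
public class Driver {

    @Id
    private UUID id;

    @ElementCollection
    @CollectionTable(
        name = "driver_vehicles",
        joinColumns = @JoinColumn(name = "driver_id"),
        uniqueConstraints = @UniqueConstraint(columnNames = "vehicle_id"))
    @Column(name = "vehicle_id")
    private Set<UUID> vehicleIds = new HashSet<>();

    public void assignVehicle(UUID vehicleId) {
        // A second driver inserting the same vehicle_id fails on commit.
        vehicleIds.add(vehicleId);
    }
}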
However, I'd advise you to think of the real-world scenario here. I doubt you can prevent a driver from going away. The system must reflect the real world, which is probably the book of record for that scenario, meaning the best you can do with strong consistency is probably to unassign a driver from all their vehicles while they are away. In that case, is it really important that vehicles get unassigned immediately in the same TX, or would a delay be acceptable?
In general, an aggregate should keep its own boundaries (to avoid data-loading issues and transaction-scoping issues, check this page for example), and therefore only reference another aggregate by identity, e.g. assignDriver(Guid id).
That means you would have to query the driver prior to invoking assignDriver, in order to perform validation check:
class MyAppService {
    public void execute() {
        // Get driver...
        if (driver.isInVacation()) {
            throw new IllegalStateException("Driver is on vacation");
        }
        // Get vehicle...
        vehicle.assignDriver(driver.id);
    }
}
Suppose you're in a micro-services architecture: you have a 'Driver Management' service and an 'Assignation Service', and you're not sharing code between the two apart from technical libraries.
You'll naturally have two classes for 'Driver':
an aggregate in 'Driver Management', which holds the operations to manage the state of a driver;
and a value object in the 'Assignation Service', which only contains the information relevant for assignment.
This separation is harder to see and achieve when you're in a monolithic codebase.
I also agree with @plalx: there's more to enforcing the rule than a check on creation, for which you could implement one of the solutions he suggested.
I encourage you to think in events. What happens when:
a driver schedules vacation
a driver comes back from vacation
a driver changes his vacation dates
Did you explore creating an Aggregate for Assignation?
I have seen information on rehydrating aggregate roots on SO, but I am posting this question because I did not find any information on SO about doing so within the context of an event-sourced framework.
Has a best practice been discovered or developed for how to rehydrate aggregate roots when operating on the command side of an application using the event sourcing and CQRS patterns, or is this still more of a "preference" among architects?
I have read through a number of blogs and watched a number of conference presentations on YouTube, and I seem to get different guidance depending on the presenter.
On the one hand, I have found information stating fairly clearly that developers should create aggregates to hydrate themselves using "apply" methods on events obtained directly from the event store.
On the other hand, I have also seen in several places where presenters and bloggers have recommended rehydrating aggregate roots by submitting a query to the read side of the application. Some have suggested creating specific validation "buckets"/projections on the read side to facilitate this.
Can anyone help point me in the right direction on discovering whether there is a single best practice, or whether the answer primarily depends upon performance issues or some other issue I am not thinking about?
Hydrating Aggregates in an event-sourced framework is a well-understood problem.
On the one hand, I have found information stating fairly clearly that
developers should create aggregates to hydrate themselves using
"apply" methods on events obtained directly from the event store.
This is the prescribed way of handling it. There are various ways of achieving this, but I would suggest keeping any persistence logic (reading or writing events) outside of your Aggregate. One simple way is to expose a constructor that accepts domain events and then applies those events.
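A minimal Java sketch of that idea (Java 16+; the aggregate, its fields, and the event types are all invented for illustration):

import java.util.List;

// The aggregate rehydrates itself by replaying its history;
// no persistence logic lives inside the class.
public class ShoppingCart {

    private String cartId;
    private boolean checkedOut;

    // Rehydration constructor: apply the stored events in order.
    public ShoppingCart(List<Object> history) {
        history.forEach(this::apply);
    }

    private void apply(Object event) {
        if (event instanceof CartCreated created) {
            cartId = created.cartId();
        } else if (event instanceof CartCheckedOut) {
            checkedOut = true;
        }
    }

    // Hypothetical domain events
    record CartCreated(String cartId) {}
    record CartCheckedOut(String cartId) {}
}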
On the other hand, I have also seen in several places where presenters
and bloggers have recommended rehydrating aggregate roots by
submitting a query to the read side of the application. Some have
suggested creating specific validation "buckets"/projections on the
read side to facilitate this.
You can use the concept of snapshots as a way of optimizing your reads. This will create a memoized version of your hydrated Aggregate. You can load this snapshot and then only apply events that were generated since the snapshot was created. In this case, your Aggregate can define a constructor that takes two parameters: an existing state (snapshot) and any remaining domain events that can then be applied to that snapshot.
Snapshots are just an optimization and should be treated as such. You can build a system that does not use snapshots and introduce them once read performance becomes a bottleneck.
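Continuing the sketch above, a snapshot-aware constructor might look like this (the Snapshot type is again hypothetical):

import java.util.List;

public class SnapshotAwareCart {

    private String cartId;
    private boolean checkedOut;
    private long version;

    // Restore the memoized state first, then apply only the events
    // generated after the snapshot was taken.
    public SnapshotAwareCart(Snapshot snapshot, List<Object> newerEvents) {
        this.cartId = snapshot.cartId();
        this.checkedOut = snapshot.checkedOut();
        this.version = snapshot.version();
        for (Object event : newerEvents) {
            apply(event);
            version++;
        }
    }

    private void apply(Object event) {
        // same event-application logic as in the plain rehydration case
    }

    record Snapshot(String cartId, boolean checkedOut, long version) {}
}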
On the other hand, I have also seen in several places where presenters
and bloggers have recommended rehydrating aggregate roots by
submitting a query to the read side of the application
Snapshots are not really part of the read side of the application. Data on the read side exists to satisfy use cases within the application. Those can change based on requirements even if the underlying domain does not change. As such, you shouldn't use read side data in your domain at all.
Event sourcing has developed different styles over the years. I would divide all of those into two big categories:
an event stream represents one entity (an aggregate in case of DDD)
one (partitioned) event stream for a (sub)system
When you deal with one stream per (sub)system, you aren't able to rehydrate the write side on the fly; it is physically impossible due to the number of events in that stream. Therefore, you have to rely on the projected read side to retrieve the current entity state. As a consequence, this read side must be fully consistent.
When going with the DDD-flavoured event sourcing, there's a strong consensus in the community about how it should be done. The state of the aggregate (not just the root, but the whole aggregate) is restored by the command side before calling the domain model. You always restore using events. When snapshotting is enabled, snapshots are also stored as events in the aggregate snapshot stream, so you read the latest one and then all events recorded after the snapshot version.
Concerning the Apply thing: you need to clearly separate the function that adds new events to the list of changes (what you're going to save) from the functions that mutate the aggregate state when events are applied.
The first function is the one called Apply, and the second one is often called When. So you call the Apply function in your aggregate code to build up the list of changes. The When function is called when restoring the aggregate state from events as you read the stream, and also from the Apply function.
You can find a simplistic example of an event-sourced aggregate in my book repo: https://github.com/alexeyzimarev/ddd-book/blob/master/chapter13/src/Marketplace.Ads.Domain/ClassifiedAds/ClassifiedAd.cs
For example:
public void Publish(UserId userId)
    => Apply(
        new V1.ClassifiedAdPublished
        {
            Id = Id,
            ApprovedBy = userId,
            OwnerId = OwnerId,
            PublishedAt = DateTimeOffset.Now
        }
    );
And for the When:
protected override void When(object @event)
{
    switch (@event)
    {
        // more code here
        case V1.ClassifiedAdPublished e:
            ApprovedBy = UserId.FromGuid(e.ApprovedBy);
            State = ClassifiedAdState.Active;
            break;
        // and more here
    }
}
I have to solve a domain problem and I have some doubts about what the better solution is. Let me present the problem:
I have Applications, and each Application has many Processes. An Application also has some ProcessSettings. I have some business rules to apply when I create a Process; for example, based on the process settings of the application, I have to apply some rules to some process properties.
I have considered Application as an aggregate root and Process as another aggregate root, with ProcessSettings as a value object inside the Application aggregate.
I have a use case to create processes, and the logic is to create a valid instance of a process and persist it with ProcessRepository. I think I have two options for applying the process settings:
In the use case, get the ProcessSettings from the Application aggregate by ApplicationId through a domain service, and pass them to the Process create method.
In the use case, create the process and, through a domain service in the Application aggregate, pass a copy of the process (a value object) to have the process settings applied.
Which approach do you believe is more correct, or would you implement it in another way?
Thanks in advance!
Our product owner told us that if the client paid for some settings at
a given moment and created a process, those settings remain valid for
that process as long as the client does not update it. If the client
stops paying for some settings, then when the client wants to update
that process our system will not allow the update, because the current
settings will no longer fit the process data.
That makes the implementation much easier, given that process-settings-based validation only has to occur in process creation/update scenarios. Furthermore, I would guess that race conditions would also be irrelevant to the business, such as when settings are changed at the same time a process gets created/updated.
In light of this, we can assume that ProcessSettings and Process can be in distinct consistency boundaries. In other words, both can be part of separate aggregate roots.
Furthermore, it's important to recognize that the settings-based validations are not Process invariants, meaning the Process shouldn't be responsible for enforcing these rules itself. Since they aren't invariants, you also shouldn't strive for an always-valid strategy; use a deferred validation strategy instead.
From that point there are many good ways of modeling this use case, which will all boil down to something like:
//Application layer service
void createProcess(processId, applicationId, data) {
    application = applicationRepository.applicationOfId(applicationId);
    process = application.createProcess(processId, data);
    processRepository.add(process);
}

//Application AR
Process createProcess(processId, data) {
    process = new Process(processId, this.id, data);
    this.processSettings.ensureRespectedBy(process);
    return process;
}
If ProcessSettings is part of the Application AR then it could make sense to put a factory method on Application for creating processes, given that it holds the necessary state to perform the validation, as in the above example. That removes the need to introduce a dedicated domain service for the task, such as a stand-alone factory.
If ProcessSettings can be its own aggregate root you could still do the same, but introduce a lookup domain service for the settings:
//Application AR
Process createProcess(processId, data, settingsLookupService) {
    process = new Process(processId, this.id, data);
    processSettings = settingsLookupService.findByApplicationId(this.id);
    processSettings.ensureRespectedBy(process);
    return process;
}
Some might say your aggregate is not pure anymore, however, given that it performs indirect IO through calling the settingsLookupService. If you want to avoid such a dependency then you may introduce a domain service such as ProcessDomainService to encapsulate the creation/update logic, or you may even decide the lookup logic is simple enough to live directly in the application layer:
//Application layer service
void createProcess(processId, applicationId, data) {
    application = applicationRepository.applicationOfId(applicationId);
    processSettings = processSettingsRepository.findByApplicationId(applicationId);
    process = application.createProcess(processId, data, processSettings);
    processRepository.add(process);
}
There's no way for us to tell which approach is better in your specific scenario, and sometimes there isn't even a perfect way; many different ways could be equally good. From experience, it's a good idea to keep aggregates pure though, as it's easier for unit tests (less mocking).
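To illustrate that last point, a pure Application aggregate can be exercised without any mocks. A JUnit-style sketch, where every name is hypothetical and follows the pseudocode above:

import static org.junit.jupiter.api.Assertions.assertThrows;
import org.junit.jupiter.api.Test;

class ApplicationTest {

    @Test
    void rejectsProcessThatViolatesSettings() {
        // Plain objects in, plain objects (or exceptions) out: no IO to mock.
        var settings = ProcessSettings.requiring(/* a rule the data below violates */);
        var application = new Application("app-1", settings);

        assertThrows(ProcessSettingsViolated.class,
            () -> application.createProcess("proc-1", someInvalidData()));
    }
}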
In our wicket application I need to start a long-running operation. It will communicate with an external device and provide a result after some time (up to a few minutes).
Java-wise, the long-running operation is started by a method to which I can provide a callback.
public interface LegacyThingy {
    void startLegacyWork(WorkFinished callback);
}

public interface WorkFinished {
    public void success(Whatever ...);
    // failure never happens
}
On my Wicket Page I plan to add an Ajax Button to invoke startLegacyWork(...) providing an appropriate callback. For the result I'd display a panel that polls for the result using an AbstractTimerBehavior.
What boggles my mind is the following problem:
To keep state, Wicket serializes the component tree along with the data; thus the data needs to be wrapped in serializable (or detachable) models.
So to keep the "connection" between the result panel and the WorkFinished callback I'd need some way to create a link between the "we serialize everything" world of Wicket and the "Hey I'm a Java Object and nobody manages my lifetime" world of the legacy interface.
Of course I could store ongoing operations in a kind of global map and use a Wicket detachable model that looks them up by id ... but that feels dirty and I doubt that's the correct way. (It opens up a whole can of worms regarding the lifetime of such things.)
Or am I looking at this from a completely wrong angle for how to do long-running operations from Wicket?
I think the approach with the global map is good. Wicket also uses something similar internally - org.apache.wicket.protocol.http.StoredResponsesMap. This is a special map that keeps the generated responses for the REDIRECT_TO_BUFFER strategy. It has the logic to keep the entries for at most some pre-configured duration and can also have an upper limit on the number of entries.
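A minimal sketch of that approach (LegacyResult and the registry are invented names; an eviction policy along the lines of StoredResponsesMap is left out for brevity):

import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

import org.apache.wicket.model.LoadableDetachableModel;

// Application-scoped registry for ongoing legacy operations.
public class OngoingOperations {

    private static final ConcurrentMap<UUID, LegacyResult> RESULTS = new ConcurrentHashMap<>();

    public static UUID register() {
        UUID token = UUID.randomUUID();
        RESULTS.put(token, LegacyResult.pending());
        return token;
    }

    public static void complete(UUID token, LegacyResult result) {
        RESULTS.put(token, result);
    }

    public static LegacyResult lookup(UUID token) {
        return RESULTS.get(token);
    }
}

// Only the token gets serialized with the page; the result is
// looked up again on each request and dropped on detach.
class LegacyResultModel extends LoadableDetachableModel<LegacyResult> {

    private final UUID token;

    LegacyResultModel(UUID token) {
        this.token = token;
    }

    @Override
    protected LegacyResult load() {
        return OngoingOperations.lookup(token);
    }
}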
I understand that the app server takes care of the threading so the developer should only concentrate on the business logic... but consider an example: a stateless EJB has a member of type CountManager.
@WebService
@Stateless
public class StatelessEJB {

    private CountManager countManager;
    ...
    public void incrementCount() { countManager.incrementCount(); }

    public int getCount() { return countManager.getCount(); }
}
And the CountManager:

public class CountManager {

    public void incrementCount() {
        // read count from database
        // increase count
        // save the new count in the database table
    }

    public int getCount() {
        // returns the count value from the database
    }
}
The developer should think about multi-threading here. If you make CountManager an EJB as well, I guess the problem won't go away.
What would be the general guideline for a developer to watch out for?
Update:
Changed the code. Assume that the methods of the EJB are exposed as a web service, so we have no control over the order in which clients call them. The transaction attribute is the default. Does this code behave correctly in a multi-threaded scenario?
The fact that EJBs are thread-safe doesn't mean that different method invocations will give you consistent results.
EJB gives you the certainty that every method of your particular EJB instance will be executed by exactly one thread at a time. This doesn't save you from multiple users accessing different instances of your EJB and the danger of inconsistent results.
Your CountManager seems to be a regular Java class, which means that you hold state in a stateless EJB. This is not good, and EJB thread-safety won't protect you from anything in that case. Your object can be accessed through multiple EJB instances at the same time.
Between your client's first method invocation, StatelessEJB.incrementCount() (which starts a transaction - the default TransactionAttribute), and the second invocation, StatelessEJB.getCount() (which starts a new transaction), many things might happen and the count value could change.
If you changed it to be an EJB, I don't think you'd be any safer. If it's an SLSB then it still can't have any state. If the state is not realized as an EJB field but as database-fetched data, then it's definitely better, but still, the transaction is not a real help for you because your web service client still executes these two methods separately, therefore landing in two different transactions.
The simple solution would be to:
use the database (no state in the SLSB), which can be synchronized with your EJB transaction,
execute both of these methods within one transaction (like an incrementAndGet(-) method for the web service client); see the sketch below.
Then you can be fairly sure that the results you get are consistent.
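A sketch of that incrementAndGet(-) idea with container-managed transactions (names are illustrative; the persistence calls are stubbed):

import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.jws.WebService;

@WebService
@Stateless
public class CounterBean {

    // Read and write happen in one transaction, so the web service
    // client gets a consistent result from a single call.
    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    public int incrementAndGet() {
        int count = readCountFromDatabase(); // e.g. SELECT ... FOR UPDATE, depending on isolation
        count++;
        saveCountToDatabase(count);
        return count;
    }

    private int readCountFromDatabase() { /* JDBC/JPA read */ return 0; }

    private void saveCountToDatabase(int count) { /* JDBC/JPA write */ }
}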
Notice that this is not really a problem of synchronization or multi-threading, but of transactional behavior.
The above code, if run inside an EJB, will take care of race conditions by delegating transaction support to the database. Depending on the isolation level and transaction attributes, the database can lock the underlying tables to ensure that the information remains consistent, even in the face of concurrent access and/or modification.