Single Responsibility Principle in Clean Architecture, Aggregating UseCases in one UseCaseManager which can provide UseCase based on In & Out Object - aggregation

I want to implement the Single Responsibility Principle in my project's Domain layer (Clean MVVM).
I have approximately 200 different use-cases, which are becoming very hard to manage. Now I'm thinking of creating one UseCaseManager which can provide me the required UseCase based on its Input & Output objects.
I've tried an approach, but it doesn't look very good. Here is some sample code; please help me understand how I can aggregate all the UseCases into one UseCaseManager.
UseCase1:
public class ActualUseCase1 extends AsyncUseCase<Object3, Object4> {
    public ActualUseCase1(SchedulerProvider schedulerProvider) {
        super(schedulerProvider);
    }

    @Override
    public Flowable<Object4> buildUseCaseFlowable(Object3 input) {
        return Flowable.just(new Object4());
    }
}
UseCase2:
public class ActualUseCase2 extends AsyncUseCase<Object1, Object2> {
    public ActualUseCase2(SchedulerProvider schedulerProvider) {
        super(schedulerProvider);
    }

    @Override
    public Flowable<Object2> buildUseCaseFlowable(Object1 input) {
        return Flowable.just(new Object2());
    }
}
UseCaseManager:
public interface UseCaseManager<In, Out> {
    <T> T getUseCase(In input, Out output);
}
T can be a different UseCase with different In & Out objects.
UseCaseManagerImpl:
public class UseCaseManagerImpl implements UseCaseManager {
    @Override
    public Object getUseCase(Object object1, Object object2) {
        return null;
    }
}
This is the main problem I'm not able to solve: how can I implement the getUseCase method?

I think you're re-inventing the abstract factory pattern. Google will provide you with lots of content on that subject...
The tricky bit is how you decide which subtype to instantiate and return; that can be as simple as a switch statement, or involve lookup tables, etc. The key point is that you isolate that logic into a single place, where you can unit test it.
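To make that concrete, here is a minimal sketch of such a factory, keyed by input type (the UseCaseFactory name, the registry layout and the forInput method are illustrative assumptions, not part of your code):

import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Sketch: one place that knows how to map an input type to its use case (names assumed).
public class UseCaseFactory {

    private final Map<Class<?>, Supplier<AsyncUseCase<?, ?>>> registry = new HashMap<>();

    public UseCaseFactory(SchedulerProvider schedulerProvider) {
        // The lookup logic lives in one place and can be unit tested in isolation.
        registry.put(Object3.class, () -> new ActualUseCase1(schedulerProvider));
        registry.put(Object1.class, () -> new ActualUseCase2(schedulerProvider));
    }

    @SuppressWarnings("unchecked")
    public <In, Out> AsyncUseCase<In, Out> forInput(Class<In> inputType) {
        Supplier<AsyncUseCase<?, ?>> supplier = registry.get(inputType);
        if (supplier == null) {
            throw new IllegalArgumentException("No use case registered for " + inputType);
        }
        return (AsyncUseCase<In, Out>) supplier.get();
    }
}

The unchecked cast is the price of a single generic lookup point; the alternative is a switch statement or separate typed getters, which is exactly the decision you would isolate here.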
A bigger question is - how do you end up with 200 subclasses?

Okay I am getting an idea here that you want sort of a system wherein for a given input you get some output. And you can have 200 such inputs for which 200 corresponding outputs are possible. And you want to make all that manageable.
I will try to explain the solution I have in mind. I am a beginner in Java and hence won’t be able to produce much code.
You can implement this using the Chain of Responsibility pattern. In this design pattern, you have a job runner (UseCaseManager in your case) and several jobs (UseCases) to run, which will be run in sequence until one of them returns an output.
You can create a RequestPipeline class which will be the job runner. In the UseCaseManager, you instantiate the pipeline once and add all the use cases you want using a Builder pattern, like so:
requestPipeline.add(new UseCase1())
               .add(new UseCase2())
               ...
When an input comes in, you trigger the RequestPipeline which will run all the jobs added to it, one after the other in sequence. If a UseCase returns null, the job runner will call the next UseCase in line until it finds a UseCase which can manage the input and produce an answer.
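A rough sketch of what that job runner might look like in Java (the RequestPipeline and Job names here are assumptions for illustration, not an established API):

import java.util.ArrayList;
import java.util.List;

// Sketch of a Chain of Responsibility job runner (names assumed).
public class RequestPipeline {

    public interface Job {
        // Returns a result if this job can handle the input, or null to pass it on.
        Object handle(Object input);
    }

    private final List<Job> jobs = new ArrayList<>();

    public RequestPipeline add(Job job) {
        jobs.add(job);
        return this; // allows builder-style chaining: pipeline.add(a).add(b)
    }

    public Object run(Object input) {
        for (Job job : jobs) {
            Object result = job.handle(input);
            if (result != null) {
                return result; // the first job that produces an answer wins
            }
        }
        return null; // no job could handle this input
    }
}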
The advantages of this design pattern are:
Abstraction: The RequestPipeline is responsible for running the jobs in line but does not know anything about the jobs it is running. A UseCase, on the other hand, only knows about processing its own use-case; it is a unit in itself. Hence the Single Responsibility Principle is satisfied, as both are independent of each other and are reusable whenever we have a similar design requirement later.
Easily extensible: If you have to add 10 other use-cases, you can easily add them to the RequestPipeline and you are done.
No switch cases or if-else chains. This in itself is a big achievement; I love Chain of Responsibility for this very reason.
Declarative Programming: We simply declare what we need to do and leave the details of how to do it to the separate units. The design of the code is easily understandable by a new developer.
More control: RequestPipeline has the ability to dynamically decide the job to run at run-time.
Reference: https://www.google.co.in/amp/s/www.geeksforgeeks.org/chain-responsibility-design-pattern/amp/
Some Java code is provided there for you to check whether it satisfies your use-case.
Hope this helps. Please let me know if you have any doubts in the comment section.

What you are trying to do is NOT single responsibility; it's the opposite.
Single responsibility means
There should be one reason to change
See The Single Responsibility Principle
The UseCaseManager you are trying to implement will handle all of your 200 use cases. Thus it will change whenever any use case changes. That is mixing concerns, not separating them.
Usually use cases are invoked by controllers, and usually a controller also has a single responsibility; thus the controller already knows which use case it must invoke. So I don't see the need for a UseCaseManager.
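To illustrate that point, a controller (or presenter/ViewModel) that owns exactly the use case it needs might look something like this (a sketch; ProductController, onLoad and the view update are assumed names):

// Sketch: the controller depends only on the single use case it invokes (names assumed).
public class ProductController {

    private final ActualUseCase1 useCase;

    public ProductController(ActualUseCase1 useCase) {
        this.useCase = useCase; // injected by the composition root / DI container
    }

    public void onLoad(Object3 input) {
        // No manager needed: this controller already knows which use case to run.
        useCase.buildUseCaseFlowable(input)
               .subscribe(result -> { /* update the view with the Object4 result */ });
    }
}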
I guess there is another problem in your design that leads to the problem you have. But since I don't have your full source code I can't give you any further advice.

Related

Can an aggregate issue its own commands?

Question
Can an AR issue its own commands, or is it better to issue them through a processor that listens to events emitted by the outer command?
BTW: If you consider that this question could lead to "primarily opinion-based" answers, I would still like to know whether or not it is considered good practice, and why.
PHP code sample
class PatchableComponent extends EventSourcedAggregateRoot
    implements Entity, ReconstitutableEventSourcedAggregateRoot
{
    ...

    public static function importFromCore(...): PatchableComponent
    {
        $patchableComponent = new self;
        $patchableComponent->applyPatchableComponentWasImportedFromCore(
            new PatchableComponentWasImportedFromCore(...)
        );
        // Here, the AR issues its own startLookingForPatches() command.
        $patchableComponent->startLookingForPatches();

        return $patchableComponent;
    }

    public function startLookingForPatches(): void
    {
        $this->applyPatchableComponentStartedLookingForPatches(
            new PatchableComponentStartedLookingForPatches(...)
        );
    }

    ...
}
Can an AR issue its own commands, or is it better to issue them through a processor that listens to events emitted by the outer command?
An aggregate can certainly call its own methods; adding extra layers of indirection is not normally necessary or desirable.
I know there is an accepted answer for this question, but wanted to chip in my own 2 cents.
When you state that an Aggregate is issuing a Command, your code sample doesn't actually do that. Your example is that of an Aggregate performing certain behavior. The concept of a "Command" is that of a message that encapsulates a user's intent (Use Case). A Command would be typically (and hopefully) handled by a CommandHandler which would then call methods on the Aggregate to perform the work. The Aggregate doesn't really know about Use Cases.
If you separate the concept of the Command from the concept of the Aggregate then you are free to implement Behavior in a way that makes your domain flexible. You are able to add new Use Cases (Commands) and new Behaviors (in your Aggregate) independently of each other.
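To illustrate that separation (sketched in Java here, since the rest of the thread uses it; the command, handler and repository names are all illustrative assumptions):

// Sketch: the command captures the user's intent; the handler invokes aggregate behaviour.
class StartLookingForPatchesCommand {
    final String componentId;

    StartLookingForPatchesCommand(String componentId) {
        this.componentId = componentId;
    }
}

class StartLookingForPatchesHandler {

    private final PatchableComponentRepository repository; // assumed repository abstraction

    StartLookingForPatchesHandler(PatchableComponentRepository repository) {
        this.repository = repository;
    }

    void handle(StartLookingForPatchesCommand command) {
        PatchableComponent component = repository.get(command.componentId);
        component.startLookingForPatches(); // the aggregate only exposes behaviour
        repository.save(component);
    }
}

New use cases mean new command/handler pairs; new behaviours mean new aggregate methods. The two evolve independently.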

Dealing with a user dependent application

An application that I'm currently writing is heavily dependent on the currently logged-in user. To give a concrete example, let's say we have a list of Products.
Now every user has the 'rights' to see certain Products, to see particular details of a product, and to edit/remove fewer of those.
E.g.:
The user can see 3/5 products
The user can see extra details from 2 out of those 3 products
...
As this is the case with most of the application's domain, I have a tendency to pass the user around in most methods, which becomes cumbersome from time to time, as I have to pass the user into some methods just to pass it down to another one that needs it.
My gut tells me I'm missing something, but I'm not sure how I could tackle this problem.
I gave some thought to using a class that holds this user and injecting that class everywhere I need it, or using a static property.
Now, from time to time it is handy to pass the user into the method directly; I guess I could override it then:
public void doSomething(User user = null)
{
    var u = user ?? this.authService.User;
    ...
}
Are there other ways you could tackle this kind of problem?
This is going to depend on where you are in the project in terms of progress. In some instances you may not have the leeway to change this but if you have more control or are starting out then you may have options.
Typically Identity & Access Control is a bounded context on its own. Authentication and authorization should not be in your core domain. Your core domain (or even sub-domains) is interested in doing what it does if you have access, but it is not the domain's responsibility to determine that access.
Authorization should take place outside the domain. If you find that you are querying your domain for it, then things probably need to change, since you need a dedicated query layer that will probably apply the authorization. Any commands that are limited will have authorization applied at the integration/application layer. Whether we want to restrict a user from registering a new order, or even new orders of a certain type, should not really matter in terms of the domain, since it is only the granularity that changes.
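As a minimal sketch of applying that check at the application layer (Java; the AccessControl facade, OrderRepository and Order.register are assumed names, not from the question):

// Sketch: the access decision is made in the application layer; the domain never sees the user.
class RegisterOrderApplicationService {

    private final AccessControl accessControl;     // assumed Identity & Access facade
    private final OrderRepository orderRepository; // assumed persistence abstraction

    RegisterOrderApplicationService(AccessControl accessControl, OrderRepository orderRepository) {
        this.accessControl = accessControl;
        this.orderRepository = orderRepository;
    }

    void registerOrder(String userId, String orderType) {
        if (!accessControl.canRegisterOrder(userId, orderType)) {
            throw new SecurityException("User may not register this type of order");
        }
        orderRepository.save(Order.register(orderType)); // domain behaviour, no user in sight
    }
}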
You may have a sub-domain that deals with the authorization specific to your domain and an Identity & Access Control generic sub-domain that is more orthogonal.
But you may be in a scenario where there is an uncomfortably high level of coupling between the data element authorization (a level of classification) and the structure. I am of the opinion that fluid classification should be kept away from one's structure, as the repercussions of classification changes are too great.
Just some thoughts :)
Your gut is correct; keep listening to it.
Authorization checks should not be mixed with core domain checks. For example, the if that checks that the user may update the product details and the if that checks that the product details are long enough should not be contained in the same class or even the same bounded context. If you have a monolith then the two checks should be contained in separate namespaces/modules.
Now I will tell you how I do it. In my latest monolithic project I use CQRS a lot, I like the separation between Commands and Queries. I will give an example of command validation but this can be extended to query validation and even to non-CQRS architectures.
For every command I register zero or more command validators that check whether the command may be sent to the aggregate. These validators are eventually consistent. If a command passes all the validators, then the command is sent to the aggregate where it is further checked, but in a strongly consistent manner. So we are talking about two kinds of validation: validation outside the aggregate and validation inside the aggregate. The checks that belong to other bounded contexts can be implemented using command validators outside the aggregate; that's how I do it. And now some example source code, in PHP:
<?php

namespace CoreDomain {
    class ProductAggregate
    {
        public function handle(ChangeProductDetails $command): void //no return value
        {
            //this check is strongly consistent
            //the method yields zero or more events, or throws an exception in case of failure
            if (strlen($command->getProductDetails()) < 10) {
                throw new \Exception("Product details must be at least 10 characters long");
            }
            yield new ProductDetailsWereChanged($command->getProductId(), $command->getProductDetails());
        }
    }
}

namespace Authorization {
    class UserCanChangeProductDetailsValidator
    {
        private $authenticationReaderService;
        private $productsPermissionsService;

        public function validate(ChangeProductDetails $command): void //no return value; if all is good, no exception is thrown
        {
            //this check is eventually consistent
            if (!$this->productsPermissionsService->canUserChangeProductDetails(
                $this->authenticationReaderService->getAuthenticatedUserId(),
                $command->getProductId()
            )) {
                throw new \Exception("User may not change product details");
            }
        }
    }
}
This example uses a style where commands are sent directly to the aggregates, but you can apply this pattern to other styles too. For brevity, the details of registering the command validators are not included.

Reactive programming in Domain Design

I'm used to developing software by creating "domain entities" -- these entities depend only on other entities inside the domain.
Let's say I have this interface
package domain;

import domain.beans.City;

public interface CitiesRepository {
    City get(String cityName);
}
As you can see, the City I am returning is again a domain object. Implementations of this CitiesRepository can be found outside the domain, and can rely upon a database, an http client, a cached decorator etc.
I am now working with a reactive framework -- vert.x -- and I am trying to understand how I can keep working with such a model. I don't want a vert.x-specific answer, only to understand whether there is any pattern/best practice for achieving this.
In reactive programming there is almost never a return value, but always some callback/handler that will consume the event after it has happened. Should I rewrite my interfaces to be "reactive"?
package domain;

import domain.beans.City;

public interface CitiesRepository {
    void get(String cityName, DomainHandler<City> cityHandler);
}
Just writing this example gave me a spaghetti headache when thinking about the implementations, where I have to deal with the Handler of the "reactive framework" to "fill" my domain handler.
Should I stop thinking in terms of this kind of design when working with a reactive model? Should I prefer an Observable/Promise approach?
Any hint would be really appreciated
In the reactive systems I've been involved with there has been an event handler that would then use the repository:
public class SomeEventHandler : IHandle<SomeEvent> {
    private readonly CityRepository _cityRepository;
    public SomeEventHandler(CityRepository repo) { _cityRepository = repo; }
}
You would then use your existing repository inside the handler code:
public void When(SomeEvent someEvent) {
    var city = _cityRepository.Get(someEvent.CityName);
    // do something with city
}
In the CompositionRoot of the application, the handler would be registered to handle the event through whatever messaging bus / reactive stream etc. will be receiving / producing the event.
So I wouldn't look to make the repository reactive, rather add in an event handler to use it.
With reactive design you add a layer of indirection to how the API is invoked, and you specify that in addition to your original vanilla spec. The reason is that in async design it matters a lot how you invoke stuff, and it is not always one-size-fits-all, so it is better not to make big decisions early, or to bind "what it is/does" to "how it does it".
There are three common tools for making things asynchronous:
future/promise
callback
message passing
Future/promise is the most binding of the three in terms of the whole design and is usually the hairiest on the implementation side: you need to do a lot of work to prevent ABA bugs in your design and to track futures which are still running something whose results no one needs anymore. Yes, they abstract away concurrency, are monadic, etc., but they make you their hostage the moment you add the first one, and they are quite hard to get rid of.
Callback is the fastest in a single process, but to make callbacks work with an actor-based system or across the wire you will inevitably end up using messages. Moreover, the moment you need your first state machine you need an event queue and messages right away. So to be most future-proof, the safest path is to just go with messages. Moving between message and callback is very straightforward (when possible at all), given how simple both mechanisms are.
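For reference, a minimal sketch of what the same repository could look like under the callback style and the future/promise style (Java; DomainHandler and City come from the question, the AsyncCitiesRepository name and getAsync method are assumptions):

import java.util.concurrent.CompletableFuture;

// Sketch: the same lookup expressed in callback style and in future/promise style.
public interface AsyncCitiesRepository {

    // Callback style: the handler is invoked once the city is available.
    void get(String cityName, DomainHandler<City> cityHandler);

    // Future/promise style: callers compose on the returned future.
    CompletableFuture<City> getAsync(String cityName);
}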
A protocol to look up a city by key could be something like this:
protocol Requests
    message GetCityRequest(name): Requests

protocol Responses
    message GetCityResponse(cityMaybe): Responses
But knowing this topic really well, I'd say invest in the "state replication pattern" in generic form and use it for both simple static lookups and dynamic subscriptions. It is not hard to get it right, and it will be your main workhorse for most of your system's needs.

Partial-mocking considered bad practice? (Mockito)

I'm unit-testing a business object using Mockito. The business object uses a DAO which normally gets data from a DB. To test the business object, I realized that it was easier to use a separate in-memory DAO (which keeps data in a HashMap) than to write all the
when(...).thenReturn(...)
statements. To create such a DAO, I started by partial-mocking my DAO interface like so:
when(daoMock.getById(anyInt())).then(new Answer() {
    @Override
    public Object answer(InvocationOnMock invocation) throws Throwable {
        int id = (Integer) invocation.getArguments()[0];
        return map.get(id);
    }
});
but it occurred to me that it was easier to just write a whole new DAO implementation myself (using an in-memory HashMap) without using Mockito at all (no need to get arguments out of that InvocationOnMock object) and make the tested business object use this new DAO.
Additionally, I've read that partial-mocking was considered bad practice. My question is: is what I'm doing a bad practice in my case? What are the downsides? To me this seems OK and I'm wondering what the potential problems could be.
I'm wondering why you need your fake DAO to be backed by a HashMap. I'm wondering whether your tests are too complex. I'm a big fan of having very simple test methods that each test one aspect of your SUT's behaviour. In principle, this is "one assertion per test", although sometimes I end up with a small handful of actual assert or verify lines, for example, if I'm asserting the correctness of a complex object. Please read http://blog.astrumfutura.com/2009/02/unit-testing-one-test-one-assertion-why-it-works/ or http://blog.jayfields.com/2007/06/testing-one-assertion-per-test.html to learn more about this principle.
So for each test method, you shouldn't be using your fake DAO over and over. Probably just once, or twice at the very most. Therefore, having a big HashMap full of data would seem to me to be EITHER redundant, OR an indication that your test is doing WAY more than it should. For each test method, you should really only need one or two items of data. If you set these up using a Mockito mock of your DAO interface, and put your when ... thenReturn in the test method itself, each test will be simple and readable, because the data that the particular test uses will be immediately visible.
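For instance, a single focused test in that style might look roughly like this (JUnit 4/Mockito sketch; the ProductDao, Product and ProductService types and methods are assumptions):

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;
import org.junit.Test;

public class ProductServiceTest {

    @Test
    public void returnsProductNameForKnownId() {
        // Arrange: only the data this one test needs, stubbed inline where it is visible.
        ProductDao daoMock = mock(ProductDao.class);
        when(daoMock.getById(42)).thenReturn(new Product(42, "Widget"));
        ProductService service = new ProductService(daoMock);

        // Act + Assert: one behaviour per test.
        assertEquals("Widget", service.getProductName(42));
    }
}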
You may also want to read up on the "arrange, act, assert" pattern, (http://www.arrangeactassert.com/why-and-what-is-arrange-act-assert/ and http://www.telerik.com/help/justmock/basic-usage-arrange-act-assert.html) and be careful about implementing this pattern INSIDE each test method, rather than having different parts of it scattered across your test class.
Without seeing more of your actual test code, it's difficult to know what other advice to give you. Mockito is supposed to make mocking easier, not harder; so if you've got a test where that's not happening for you, then it's certainly worth asking whether you're doing something non-standard. What you're doing is not "partial mocking", but it certainly seems like a bit of a testing smell to me. Not least because it couples lots of your test methods together - ask yourself what would happen if you had to change some of the data in the HashMap.
You may find https://softwareengineering.stackexchange.com/questions/158397/do-large-test-methods-indicate-a-code-smell useful too.
When testing my classes, I often use a combination of Mockito-made mocks and also fakes, which are very much what you are describing. In your situation I agree that a fake implementation sounds better.
There's nothing particularly wrong with partial mocks, but it makes it a little harder to determine when you're calling the real object and when you're calling your mocked method--especially because Mockito silently fails to mock final methods. Innocent-looking changes to the original class may change the implementation of the partial mock, causing your test to stop working.
If you have the flexibility, I recommend extracting an interface that exposes the method you need to call, which will make it easier whether you choose a mock or a fake.
To write a fake, implement that small interface without Mockito using a simple class (nested in your test, if you'd like). This will make it very easy to see what is happening; the downside is that if you write a very complicated Fake you may find you need to test the Fake too. If you have a lot of tests that could make use of a good Fake implementation, this may be worth the extra code.
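A fake along those lines could be as small as this (a sketch; the small Dao interface, the Entity type and the put helper are assumed, only getById comes from the question):

import java.util.HashMap;
import java.util.Map;

// Sketch of an in-memory fake: no Mockito involved, behaviour is obvious at a glance.
class InMemoryDao implements Dao {               // Dao is the small extracted interface (assumed)
    private final Map<Integer, Entity> map = new HashMap<>();

    void put(Entity entity) {
        map.put(entity.getId(), entity);          // test setup helper
    }

    @Override
    public Entity getById(int id) {
        return map.get(id);
    }
}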
I highly recommend "Mocks aren't Stubs", an article by Martin Fowler (famous for his book Refactoring). He goes over the names of different types of test doubles, and the differences between them.

Domain driven design: Avoiding anemic domains and modelling real world roles

I'm looking for some advice on how much I should be concerned around avoiding the anemic domain model. We are just starting on DDD and are struggling with analysis paralysis regarding simple design decisions. The latest point we are sticking on is where certain business logic belongs, for example we have an Order object, which has properties like Status etc. Now say I have to perform a command like UndoLastStatus because someone made a mistake with an order, this is not as simple as just changing the Status as other information has to be logged and properties changed. Now in the real world this is a pure administration task. So the way I see it I have two options I can think of:
Option 1: Add the method to Order, so something like Order.UndoLastStatus(). Whilst this kind of makes sense, it doesn't really reflect the domain. Also, Order is the primary object in the system, and if everything involving the order is placed in the Order class, things could get out of hand.
Option 2: Create a Shop object, and with that have different services which represent different roles. So I might have Shop.AdminService, Shop.DispatchService, and Shop.InventoryService. In this case I would have Shop.AdminService.UndoLastStatus(Order).
Now with the second option we have something which reflects the domain much more closely, and it would allow developers to talk to business experts about roles that actually exist. But it's also heading toward an anemic model. Which would be the better way to go in general?
Option 2 would lead to procedural code for sure.
Might be easier to develop, but much harder to maintain.
Now in the real world this is a pure administration task
"Administration" tasks should be private and invoked through public, fully "domain`ish" actions. Preferably - still written in easy to understand code that is driven from domain.
As I see it - problem is that UndoLastStatus makes little sense to domain expert.
More likely they are talking about making, canceling and filling orders.
Something along these lines might fit better:
class Order{
    void CancelOrder(){
        Status = Status.Canceled;
    }

    void FillOrder(){
        if (Status == Status.Canceled)
            throw new Exception();
        Status = Status.Filled;
    }

    static Order Make(){
        return new Order();
    }

    Order(){
        Status = Status.Pending;
    }
}
I personally dislike usage of "statuses", they are automatically shared to everything that uses them - i see that as unnecessary coupling.
So I would have something like this:
class Order{
    void CancelOrder(){
        IsCanceled = true;
    }

    void FillOrder(){
        if (IsCanceled) throw new Exception();
        IsFilled = true;
    }

    static Order Make(){
        return new Order();
    }

    Order(){
        IsPending = true;
    }
}
For changing related things when order state changes, best bet is to use so called domain events.
My code would look along these lines:
class Order{
    void CancelOrder(){
        IsCanceled = true;
        Raise(new Canceled(this));
    }

    //usage of nested classes for events is my homemade convention
    class Canceled : Event<Order>{
        Canceled(Order order) : base(order){}
    }
}

class Customer{
    private void BeHappy(){
        Console.WriteLine("hooraay!");
    }

    //nb: nested class can see privates of Customer
    class OnOrderCanceled : IEventHandler<Order.Canceled>{
        void Handle(Order.Canceled e){
            //caveat: this approach needs order->customer association
            var order = e.Source;
            order.Customer.BeHappy();
        }
    }
}
If Order grows too huge, You might want to check out what bounded contexts are (as Eric Evans says - if he had a chance to write his book again, he would move bounded contexts to the very beginning).
In short - it's a form of decomposition driven by domain.
Idea is relatively simple - it is OK to have multiple Orders from different viewpoints aka contexts.
E.g. - Order from Shopping context, Order from Accounting context.
namespace Shopping{
    class Order{
        //association with shopping cart
        //might be vital for shopping but completely irrelevant for accounting
        ShoppingCart Cart;
    }
}

namespace Accounting{
    class Order{
        //something specific only to accounting
    }
}
But usually the domain itself avoids complexity and is easily decomposable if You listen to it closely enough. E.g. You might hear from experts terms like OrderLifeCycle, OrderHistory, OrderDescription that You can leverage as anchors for decomposition.
NB: Keep in mind - I have zero understanding of Your domain.
It's quite likely that the verbs I'm using are completely strange to it.
I would be guided by the GRASP principles. Apply the Information Expert design principle; that is, assign the responsibility to the class that naturally has the most information required to fulfil the change.
In this case, since changing the order status involves other entities, I would make each of these low-level domain objects support a method to apply the change with respect to itself. Then also use a domain service layer as you describe in option 2, that abstracts the whole operation, spanning multiple domain objects as needed.
Also see the Facade pattern.
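To make that split concrete, here is a minimal sketch (Java, with all names assumed) where each low-level object applies the change to itself and a thin domain service only coordinates the whole operation:

// Sketch: each object mutates only its own state; the service spans them (names assumed).
class OrderStatusService {

    void undoLastStatus(Order order, AuditLog auditLog) {
        StatusChange last = order.revertLastStatusChange(); // Order is the expert on its own state
        auditLog.recordUndo(order.getId(), last);           // AuditLog owns the logging concern
    }
}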
I think having a method like UndoLastStatus on the Order class feels a bit wrong because the reasons for its existence are in a sense outside of the scope of an order. On the other hand, having a method which is responsible for changing the status of an order, Order.ChangeStatus, fits nicely as a domain model. The status of an order is a proper domain concept and changing that status should be done through the Order class, since it owns the data associated with an order status - it is the responsibility of the Order class to keep itself consistent and in a proper state.
Another way to think of it is that the Order object is what's persisted to the database and it is the 'last stop' for all changes applied to an Order. It is easier to reason about what a valid state for an order might be from the perspective of an Order rather than from the perspective of an external component. This is what DDD and OOP are all about, making it easier for humans to reason about code. Furthermore, access to private or protected members may be required to execute a state change, in which case having the method be on the order class is a better option. This is one of the reasons why anemic domain models are frowned upon - they shift the responsibility of keeping state consistent away from the owning class, thereby breaking encapsulation among other things.
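As a minimal sketch of that idea (names assumed), the consistency rule lives next to the data it protects:

// Sketch: the Order guards its own invariants when its status changes (names assumed).
class Order {
    private Status status = Status.PENDING;

    void changeStatus(Status newStatus) {
        if (status == Status.CANCELED) {
            throw new IllegalStateException("A canceled order cannot change status");
        }
        this.status = newStatus;
    }

    enum Status { PENDING, FILLED, CANCELED }
}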
One way to implement a more specific operation such as UndoLastStatus would be to create an OrderService which exposes the domain and is how external components operate upon the domain. Then you can create a simple command object like this:
class UndoLastStatusCommand {
    public Guid OrderId { get; set; }
}
And the OrderService would have a method to process that command:
public void Process(UndoLastStatusCommand command) {
    using (var unitOfWork = UowManager.Start()) {
        var order = this.orderRepository.Get(command.OrderId);
        if (order == null)
            throw some exception
        // operate on domain to undo last status
        unitOfWork.Commit();
    }
}
So now the domain model for Order exposes all of the data and behavior that correspond to an Order, but the OrderService, and the service layer in general, declare the different kind of operations that are performed on an order and expose the domain for utilization by external components, such as the presentation layer.
Also consider looking into the concept of domain events which considers anemic domain models and ways of improving them.
It sounds like you are not driving this domain from tests. Take a look at the work of Rob Vens, especially his work on exploratory modeling, time inversion and active-passive.
