Benefit of Repository and what is the difference between two statements (C# 4.0)

I'm learning the repository pattern. I've implemented it in a sample project, but I don't know what the main benefit of the repository pattern is.

private IStudentRepository repository = null;

public StudentController()
{
    this.repository = new StudentRepository();
}

public StudentController(IStudentRepository repository)
{
    this.repository = repository;
}

The StudentRepository class's methods can also be reached by creating an object of the class directly:

StudentRepository obj = new StudentRepository();

Does anyone have a solid reason for this pattern? The one benefit I know of is hiding data access.

The main reasons people cite for repositories are testability and modularity. For testability, you can replace the concrete object with a mock one wherever the repository is used. For modularity, you can create a different repository that, for example, uses a different data store.

But I'm highly skeptical of the modularity claim, because repositories are often highly leaky abstractions and changing the backing data store is extremely rare. That means something that should be as simple as creating a different instance turns into a complete rewrite, defeating the purpose of the repository.

There are other ways to achieve testability with your data store without worrying about leaky abstractions.

As for your code examples: in the first case, the first constructor is the "default" one and the second is probably for either IoC or testing with mocks. IMO there should be no "default" constructor, because it defeats the purpose of actually having an IoC container.

The second statement allows dependency injection, meaning you can use an IoC container to inject the correct implementation.

So, for example, in your unit tests you could inject an in-memory implementation (see mocking), while your production code would use an implementation that hits the actual database.
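To make the testing benefit concrete, here is a minimal sketch in Java of the same idea: the controller depends only on the interface, so a test can hand it an in-memory fake instead of the database-backed implementation. All names here (`StudentRepository`, `SqlStudentRepository`, `findAllNames`) are hypothetical stand-ins for the asker's types.

```java
import java.util.List;

// Hypothetical repository interface; stands in for IStudentRepository above.
interface StudentRepository {
    List<String> findAllNames();
}

// The production implementation would hit the real database.
class SqlStudentRepository implements StudentRepository {
    public List<String> findAllNames() {
        return List.of("loaded from database");
    }
}

// The controller depends only on the interface, so any implementation
// (real, mock, in-memory) can be injected through the constructor.
class StudentController {
    private final StudentRepository repository;

    StudentController(StudentRepository repository) {
        this.repository = repository;
    }

    int studentCount() {
        return repository.findAllNames().size();
    }
}
```

In a test, because the interface has a single method, a lambda can serve as the fake: `new StudentController(() -> List.of("Ann", "Bob"))`.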


Nestjs typeorm inject repository best practices

I'm using Typeorm & nestjs.
I have question about injecting repository in service.
Usually, sample code leads me to write the following when injecting a repository into a service:

export class AService {
  constructor(
    @InjectRepository(ARepo) private aRepo: ARepo,
  ) {}
}

This always requires writing TypeOrmModule.forFeature([ARepo]) in AModule:

imports: [TypeOrmModule.forFeature([ARepo])]
But if I write the code below:

export class AService {
  private aRepo: ARepo;

  constructor(
    @InjectConnection() private connection: Connection,
  ) {
    this.aRepo = this.connection.getCustomRepository(ARepo);
  }
}

I don't have to register any repository in the module.
Why do we use the first way?
The short answer is that this comes down to your own preference. From what I can see in the GitHub repo for Nest TypeOrm, forFeature is just delegating to getCustomRepository anyways, so there isn't any functional difference there.
However, there are some important benefits (IMO) to the first way that you don't get with the second way.
First, when it comes time to test your services, you only need to mock your repositories. With the second way, you'd need to mock the repositories in addition to the connection, creating more work for yourself and a larger surface area to test.
IMO, in the absence of any performance or other gains, this is reason enough to stick to the first method.
Second, it's much more expected and much clearer to explicitly define your models in forFeature. Other developers can reference your module to see what it's responsible for without needing to dive into your services.
Lastly, I should point out that while the second method is technically using dependency injection, it's not using dependency injection for the model itself, which you're presumably importing at the top of your file. This pattern is potentially limiting, as it becomes much harder to mock that imported repo if you ever need to in the future.
TLDR: If you don't care about any of the benefits, or are able to work around them, then there's functionally no difference between the two. They both do the same thing, and internally Nest calls the same methods.

Single Responsibility Principle in Clean Architecture: aggregating UseCases in one UseCaseManager that can provide a UseCase based on In & Out objects

I want to implement Single Responsibility principle in my projects Domain layer (Clean MVVM).
I have approximately 200 different use-cases, which are becoming very hectic to manage. Now I'm thinking of creating one UseCaseManager which can provide me the required UseCase based on its Input & Output objects.
I've tried an approach, but it's not looking very good. I'm including some sample code below; please help me aggregate all the UseCases into one UseCaseManager.
UseCase1:
public class ActualUseCase1 extends AsyncUseCase<Object3, Object4> {
    public ActualUseCase1(SchedulerProvider schedulerProvider) {
        super(schedulerProvider);
    }

    @Override
    public Flowable<Object4> buildUseCaseFlowable(Object3 input) {
        return Flowable.just(new Object4());
    }
}
UseCase2:
public class ActualUseCase2 extends AsyncUseCase<Object1, Object2> {
    public ActualUseCase2(SchedulerProvider schedulerProvider) {
        super(schedulerProvider);
    }

    @Override
    public Flowable<Object2> buildUseCaseFlowable(Object1 input) {
        return Flowable.just(new Object2());
    }
}
UseCaseManager:
public interface UseCaseManager<In, Out> {
    <T> T getUseCase(In input, Out output);
}
T can be different UseCase with different In & Out Object.
UseCaseManagerImpl:
public class UseCaseManagerImpl implements UseCaseManager {
    @Override
    public Object getUseCase(Object object1, Object object2) {
        return null;
    }
}
Now this is the main problem I'm not able to solve: how can I implement the getUseCase method?
I think you're re-inventing the abstract factory pattern. Google will provide you with lots of content on that subject...
The tricky bit is how you decide which subtype to instantiate and return; that can be as simple as a switch statement, or involve lookup tables, etc. The key point is that you isolate that logic into a single place, where you can unit test it.
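As a minimal sketch of that idea, the selection logic can live in one small factory that a unit test can cover in isolation. The use cases and names below are hypothetical, simplified to plain input/output methods rather than the asker's AsyncUseCase hierarchy.

```java
// Hypothetical sketch: the factory isolates the "which use case?" decision
// in a single, unit-testable place.
interface UseCase {
    String execute(String input);
}

class EchoUseCase implements UseCase {
    public String execute(String input) { return input; }
}

class UpperCaseUseCase implements UseCase {
    public String execute(String input) { return input.toUpperCase(); }
}

class UseCaseFactory {
    // The selection could also be a lookup table (Map<String, Supplier<UseCase>>)
    // instead of a switch statement.
    static UseCase forName(String name) {
        switch (name) {
            case "echo":  return new EchoUseCase();
            case "upper": return new UpperCaseUseCase();
            default: throw new IllegalArgumentException("unknown use case: " + name);
        }
    }
}
```

With 200 subtypes, the lookup-table variant scales better than a 200-arm switch, but either way the decision stays in one place.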
A bigger question is - how do you end up with 200 subclasses?
Okay, I'm getting the idea that you want a system where for a given input you get some output, you can have 200 such inputs with 200 corresponding outputs, and you want to make all of that manageable.
I'll try to explain the solution I have in mind. I'm a beginner in Java and hence won't be able to produce much code.
You can implement this using the Chain of Responsibility pattern. In this design pattern, you have a job runner (the UseCaseManager in your case) and several jobs (UseCases) to run, which will be run in sequence until one of them returns an output.
You can create a RequestPipeline class which will be the job runner. In the UseCaseManager, you instantiate the pipeline once and add all the use cases you want, builder style, like so:
requestPipeline.add(new UseCase1());
requestPipeline.add(new UseCase2());
...
When an input comes in, you trigger the RequestPipeline which will run all the jobs added to it, one after the other in sequence. If a UseCase returns null, the job runner will call the next UseCase in line until it finds a UseCase which can manage the input and produce an answer.
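The runner described above can be sketched in a few lines; `Job` and `RequestPipeline` here are hypothetical names, and "cannot handle the input" is signalled by returning null, as in the description.

```java
import java.util.ArrayList;
import java.util.List;

// A handler returns null when it cannot process the input.
interface Job {
    Object handle(Object input);
}

class RequestPipeline {
    private final List<Job> jobs = new ArrayList<>();

    // Builder-style: returns this so calls can be chained.
    RequestPipeline add(Job job) {
        jobs.add(job);
        return this;
    }

    // Runs the jobs in order until one produces a non-null output.
    Object run(Object input) {
        for (Job job : jobs) {
            Object out = job.handle(input);
            if (out != null) return out;
        }
        return null; // no job could handle the input
    }
}
```

Because `Job` has a single method, each use case can be registered as a lambda while prototyping, e.g. `pipeline.add(in -> in instanceof Integer ? "int:" + in : null)`.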
The advantages of this design pattern are:
Abstraction: The RequestPipeline is responsible for running the jobs in line but does not know anything about the jobs it is running. A UseCase, on the other hand, only knows about processing its own use-case; it's a unit in itself. Hence the Single Responsibility Principle is satisfied: both are independent of each other and reusable whenever we have a similar design requirement later.
Easily extensible: If you have to add 10 other use-cases, you can easily add them to the RequestPipeline and you are done.
No switch case and if-else. This in itself is a big achievement. I love Chain of Responsibility for this very reason.
Declarative Programming: We simply declare what we need to do and leave the details of how to do it to the separate units. The design of the code is easily understandable by a new developer.
More control: RequestPipeline has the ability to dynamically decide the job to run at run-time.
Reference: https://www.google.co.in/amp/s/www.geeksforgeeks.org/chain-responsibility-design-pattern/amp/
Some Java Code is provided here for you to check, if it satisfies your use-case.
Hope this helps. Please let me know if you have any doubts in the comment section.
What you are trying to do is NOT single responsibility; it's the opposite.
Single responsibility means:
There should be only one reason to change.
See The Single Responsibility Principle.
The UseCaseManager you are trying to implement will handle all of your 200 use cases. Thus it will change whenever a use case changes. That is mixing concerns, not separating them.
Usually use cases are invoked by controllers, and usually a controller also has a single responsibility. Thus the controller knows which use case it must invoke, and I don't see the need for a UseCaseManager.
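To illustrate that last point with a hypothetical sketch: each controller depends on exactly the one use case it needs, injected through its constructor, so no central manager has to pick it. All names below are invented for illustration.

```java
// Hypothetical single-responsibility pairing: one controller, one use case.
interface GetStudentUseCase {
    String execute(int studentId);
}

class GetStudentController {
    private final GetStudentUseCase useCase;

    GetStudentController(GetStudentUseCase useCase) {
        this.useCase = useCase;
    }

    String handle(int studentId) {
        // The controller's only job: translate the request and
        // invoke its one use case.
        return useCase.execute(studentId);
    }
}
```

Each of the 200 use cases gets wired to its own controller by the DI container, so no runtime lookup by input/output type is needed.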
I guess there is another problem in your design that leads to the problem you have. But since I don't have your full source code I can't give you any further advice.

Generic Vs Individual Repository for Aggregate Root

As I understand it, a Bounded Context can have modules, the modules can have many aggregate roots, and an aggregate root can have entities. For persistence, each aggregate root should have a repository.
With the numerous aggregate roots in a large project, is it okay to use a generic repository, one for read-only access and one for updates? Or should each aggregate root have its own repository, which can provide better control?
In a large complex project, I wouldn't recommend using a generic repository since there will most likely be many specific cases beyond your basic GetById(), GetAll()... operations.
Greg Young has a great article on generic repositories : http://codebetter.com/gregyoung/2009/01/16/ddd-the-generic-repository/
is it okay to use a generic repository, one for read-only access and one for updates?
Repositories generally don't handle saving updates to your entities, i.e. they don't have an Update(EntityType entity) method. This is usually taken care of by your ORM's change tracker/Unit of Work implementation. However, if you're looking for an architecture that separates reads from writes, you should definitely have a look at CQRS.
Pure DDD is about making the implicit explicit, i.e. not using List() but rather ListTheCustomersThatHaveNotBeenSeenForALongTime().
What is at stake here is a technical implementation choice, and from what I know, domain-driven design does not dictate technical choices.
A generic repository can fit well. Your use of that generic repository might not fit the spirit of DDD, though; it depends on your domain.
On some of the sample DDD applications that are published on the web, I have seen them have a base repository interface that each aggregate root repository inherits from. I, personally, do things a bit differently. Because repositories are supposed to look like collections to the application code, my base repository interface inherits from IEnumerable so I have:
public interface IRepository<T> : IEnumerable<T> where T : IAggregateRoot
{
}
I do have some base methods I put in there, but only ones that allow reading the collection because some of my aggregate root objects are encapsulated to the point that changes can ONLY be made through method calls.
To answer your question, yes it is fine to have a generic repository, but try not to define any functionality that shouldn't be inherited by ALL repositories. And, if you do accidentally define something that one repository doesn't need, refactor it out into all of the repository interfaces that do need it.
EDIT: Added example of how to make repositories behave just like any other ICollection object.
On the repositories that require CRUD operations, I add this:
void Add(T item); //Add
void Remove(T item); //Remove
T this[int index] { set; } //or T this[object id] { set; } //Update
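The same split can be expressed in Java as a rough analog of the C# interfaces above (names here are hypothetical): the base interface extends Iterable so it only permits reading, and CRUD operations live in a separate interface that only the repositories needing them implement.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Marker interface for aggregate roots, analogous to IAggregateRoot above.
interface AggregateRoot { }

// Read-only base: iteration only, no mutation through the interface.
interface ReadOnlyRepository<T extends AggregateRoot> extends Iterable<T> { }

// Repositories that need CRUD extend the read-only base.
interface CrudRepository<T extends AggregateRoot> extends ReadOnlyRepository<T> {
    void add(T item);
    void remove(T item);
}

// Sample aggregate root and a trivial in-memory implementation for illustration.
class Student implements AggregateRoot { }

class InMemoryRepository<T extends AggregateRoot> implements CrudRepository<T> {
    private final List<T> items = new ArrayList<>();
    public void add(T item) { items.add(item); }
    public void remove(T item) { items.remove(item); }
    public Iterator<T> iterator() { return items.iterator(); }
}
```

Code that receives a `ReadOnlyRepository<Student>` can iterate it like any collection but cannot add or remove, which preserves the encapsulation of aggregates that only change through their own methods.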
Thanks for the comments. The approach I took was to separate the base repository interface into ReadOnly and Updatable. Every aggregate root entity will have its own repository, derived from either the Updatable or the ReadOnly repository, and the repository at the aggregate root level will have its own additional methods. I'm not planning to use a generic repository.
There is a reason I chose to have IReadOnlyRepository: in the future, I will convert the query part of the app to CQRS, so segregating into a ReadOnly interface supertype will help me at that point.

Pros and cons of DDD Repositories

Pros:
Repositories hide complex queries.
Repository methods can be used as transaction boundaries.
ORM can easily be mocked
Cons:
ORM frameworks already offer a collection-like interface to persistent objects, which is exactly what repositories are intended to provide. So repositories add extra complexity to the system.
Combinatorial explosion when using findBy methods. These methods can be avoided with Criteria objects, queries, or example objects, but no repository is needed for that because an ORM already supports these ways of finding objects.
Since repositories are collections of aggregate roots (in the DDD sense), one has to create and pass around aggregate roots even if only a child object is modified.
Questions:
What pros and cons do you know?
Would you recommend to use repositories? (Why or why not?)
The main point of a repository (as in Single Responsibility Principle) is to abstract the concept of getting objects that have identity. As I've become more comfortable with DDD, I haven't found it useful to think about repositories as being mainly focused on data persistence but instead as factories that instantiate objects and persist their identity.
When you're using an ORM you should be using their API in as limited a way as possible, giving yourself a facade perhaps that is domain specific. So regardless your domain would still just see a repository. The fact that it has an ORM on the other side is an "implementation detail".
Repository brings domain model into focus by hiding data access details behind an interface that is based on ubiquitous language. When designing repository you concentrate on domain concepts, not on data access. From the DDD perspective, using ORM API directly is equivalent to using SQL directly.
This is how repository may look like in the order processing application:
List<Order> myOrders = Orders.FindPending()
Note that there are no data access terms like 'Criteria' or 'Query'. Internally 'FindPending' method may be implemented using Hibernate Criteria or HQL but this has nothing to do with DDD.
Method explosion is a valid concern. For example you may end up with multiple methods like:
Orders.FindPending()
Orders.FindPendingByDate(DateTime from, DateTime to)
Orders.FindPendingByAmount(Money amount)
Orders.FindShipped()
Orders.FindShippedOn(DateTime shippedDate)
etc
This can be improved by using the Specification pattern. For example you can have a class:
class PendingOrderSpecification {
    PendingOrderSpecification WithAmount(Money amount);
    PendingOrderSpecification WithDate(DateTime from, DateTime to);
    ...
}
So that repository will look like this:
Orders.FindSatisfying(PendingOrderSpecification pendingSpec)
Orders.FindSatisfying(ShippedOrderSpecification shippedSpec)
Another option is to have separate repository for Pending and Shipped orders.
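A minimal runnable sketch of the specification idea above, in Java: the specification accumulates criteria in domain terms, and the repository evaluates it. The in-memory filtering is purely illustrative; a real repository would translate the specification into Criteria, HQL, or SQL, and the `Order` fields here are hypothetical.

```java
import java.util.List;
import java.util.function.Predicate;

// Simplified order aggregate for illustration.
class Order {
    final String status;
    final int amount;
    Order(String status, int amount) { this.status = status; this.amount = amount; }
}

// The specification captures selection criteria in ubiquitous language.
class PendingOrderSpecification {
    private Predicate<Order> criteria = o -> o.status.equals("PENDING");

    PendingOrderSpecification withMinAmount(int amount) {
        criteria = criteria.and(o -> o.amount >= amount);
        return this; // builder style, so criteria can be chained
    }

    boolean isSatisfiedBy(Order order) {
        return criteria.test(order);
    }
}

class Orders {
    // In-memory stand-in for Orders.FindSatisfying(...) above.
    static List<Order> findSatisfying(List<Order> all, PendingOrderSpecification spec) {
        return all.stream().filter(spec::isSatisfiedBy).toList();
    }
}
```

The caller composes criteria without any data access vocabulary: `Orders.findSatisfying(all, new PendingOrderSpecification().withMinAmount(20))`.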
A repository is really just a layer of abstraction, like an interface. You use it when you want to decouple your data persistence implementation (i.e. your database).
I suppose if you don't want to decouple your DAL, then you don't need a repository. But there are many benefits to doing so, such as testability.
Regarding the combinatorial explosion of "Find" methods: in .NET you can return an IQueryable instead of an IEnumerable, and allow the calling client to run a Linq query on it, instead of using a Find method. This provides flexibility for the client, but sacrifices the ability to provide a well-defined, testable interface. Essentially, you trade off one set of benefits for another.

DDD Repositories and Factories

I've read a blog about DDD by Matt Petters, and according to it we create a repository (interface) for each entity, and after that we create a RepositoryFactory that gives out instances (declared as interfaces) of those repositories.
Is this how projects are done using DDD?
I mean, I saw projects that I thought used DDD, but they were calling each repository directly; there was no factory involved.
And also:
Why do we need to create so many repository classes? Why not use something like:
public interface IRepository<T> : IDisposable
{
    T[] GetAll();
    T[] GetAll(Expression<Func<T, bool>> filter);
    T GetSingle(Expression<Func<T, bool>> filter);
    T GetSingle(Expression<Func<T, bool>> filter, List<Expression<Func<T, object>>> subSelectors);
    void Delete(T entity);
    void Add(T entity);
    int SaveChanges();
}
I guess it could be something to do with violating the SOLID principles, or something else?
There are many different ways of doing it. There is no single 'right' way. Most people prefer a Repository per Entity because it lets them vary Domain Services in a more granular way. This definitely fits the 'S' in SOLID.
When it comes to factories, they should only be used when they add value. If all they do is to wrap a new operation, they don't add value.
Here are some scenarios in which factories add value:
Abstract Factories let you vary Repository implementations independently of client code. This fits well with the 'L' in SOLID, but you could also achieve the same effect by using DI to inject the Repository into the Domain Service that requires it.
When the creation of an object is in itself such a complex operation (i.e. it involves much more than just creating a new instance) that it is best encapsulated behind an API.
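As a hypothetical sketch of that second scenario in Java: the factory method validates input and derives configuration before construction, work that a bare `new` could not carry. All names and the connection-string shape below are invented for illustration.

```java
// Hypothetical configuration object derived during creation.
class DatabaseConfig {
    final String url;
    final int poolSize;
    DatabaseConfig(String url, int poolSize) { this.url = url; this.poolSize = poolSize; }
}

class OrderRepository {
    final DatabaseConfig config;

    // Private constructor: clients must go through the factory method.
    private OrderRepository(DatabaseConfig config) { this.config = config; }

    // The factory validates input and builds the config before constructing
    // the instance -- the "complex creation" worth encapsulating.
    static OrderRepository create(String host) {
        if (host == null || host.isBlank()) {
            throw new IllegalArgumentException("host required");
        }
        DatabaseConfig config = new DatabaseConfig("jdbc:postgresql://" + host + "/orders", 10);
        return new OrderRepository(config);
    }
}
```

If creation ever became this involved for several repositories, grouping such methods into a RepositoryFactory (as the blog suggests) starts to add value; if it stays a one-line `new`, it doesn't.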
