Relying on a number of TestExecutionListeners in order to populate an in-memory DB and run integration tests - spring-test

I have a number of Spring integration tests that all somehow need to use data (from an in-memory database).
The tests all require subtly different data sets, so as of now I use plain Spring @Component helper classes (located in the test package hierarchy) that insert the data right from the test methods, as shown below:
@Autowired
private SomeHelper someHelper;

@Test
public void someIntegrationTest() {
    // Arrange
    someHelper.insertSomeData();
    ...
    // Act
    ...
    // Assert
    ...
}
I find this solution neither very clean nor very elegant, and I am looking to improve it or replace it with an alternative solution.
Would it be a good idea to implement a hierarchy of TestExecutionListeners, where common required data would be inserted by the base class and data specific to the individual tests would be inserted by its subclasses?
If relying on a TestExecutionListener to insert test data is not a good idea, then what could be a reliable and viable alternative?

Take a look at Spring Test DbUnit (and the related blog announcement).
I think it will satisfy your needs.
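For illustration, here is a minimal sketch of what such a test could look like with Spring Test DbUnit; the context configuration and the dataset file name are placeholders, not part of your project:

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.TestExecutionListeners;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.test.context.support.DependencyInjectionTestExecutionListener;
import com.github.springtestdbunit.DbUnitTestExecutionListener;
import com.github.springtestdbunit.annotation.DatabaseSetup;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("/test-context.xml")
@TestExecutionListeners({ DependencyInjectionTestExecutionListener.class,
        DbUnitTestExecutionListener.class })
public class SomeIntegrationTest {

    @Test
    @DatabaseSetup("someIntegrationTest-data.xml") // seeds the in-memory DB before the test runs
    public void someIntegrationTest() {
        // Act and Assert against the data loaded by DbUnit
    }
}

Each test method (or class) declares the dataset it needs, which replaces the helper calls in the Arrange step.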
Regards,
Sam

Related

Why do I need MapStruct when using JHipster?

When I read the MapStruct documentation, it says: MapStruct is a Java annotation processor for the generation of type-safe bean mapping classes.
https://mapstruct.org/documentation/stable/reference/html/#introduction
Which leaves me with the question: why do I need MapStruct? JHipster uses it, and I have no clue what they need it for in the first place. Why do you need a mapper inside JHipster?
They also mention that, compared to writing mapping code by hand, MapStruct saves time by generating code which is tedious and error-prone to write. So it saves time, but it does not explain why you need it, right?
Thanks. I hope they can update the documentation to address the doubts raised here.
JHipster uses MapStruct to generate code for mapping entities to/from DTOs as explained in https://www.jhipster.tech/using-dtos/
You can get rid of it by copying generated Mapper classes into your source tree and then evolving them manually. This could be useful if you don't plan to use JHipster beyond project bootstrapping and/or want to build DTOs that are too complex for MapStruct.
It might sound like more work at first, but it's simple code, and you will need to do the same anyway in the frontend code.
Basically, a Mapper is a simple service that maps an entity to/from a Data Transfer Object. It does not require any library to do so, nor does it have to implement any specific interface; you just call setters with the values you got from getters.
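For example, a hand-written mapper could be as small as the following sketch; the Book and BookDTO classes are assumed to exist with the usual getters and setters:

// A hand-written mapper: plain Java, no MapStruct, no interface to implement.
public class BookMapper {

    public BookDTO toDto(Book book) {
        if (book == null) {
            return null;
        }
        BookDTO dto = new BookDTO();
        dto.setId(book.getId());
        dto.setTitle(book.getTitle());
        return dto;
    }

    public Book toEntity(BookDTO dto) {
        if (dto == null) {
            return null;
        }
        Book book = new Book();
        book.setId(dto.getId());
        book.setTitle(dto.getTitle());
        return book;
    }
}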
If you don't want to start from scratch, let's say you have defined a Book entity: you can find an example by searching for the BookMapperImpl.java class generated by MapStruct in your target directory. Then you can copy it to your src directory, get rid of the MapStruct imports in BookMapperImpl, delete the BookMapper interface, and rename BookMapperImpl to BookMapper.

Single Responsibility Principle in Clean Architecture: aggregating UseCases in one UseCaseManager that can provide a UseCase based on In & Out objects

I want to implement Single Responsibility principle in my projects Domain layer (Clean MVVM).
I have approximately 200 different use cases, which are becoming very hectic to manage. Now I'm thinking of creating one UseCaseManager which can provide me the required UseCase based on input and output objects.
I've tried an approach, but it's not looking very good. I'm including some sample code below; please help me figure out how I can aggregate all the UseCases into one UseCaseManager.
UseCase1:
public class ActualUseCase1 extends AsyncUseCase<Object3, Object4> {

    public ActualUseCase1(SchedulerProvider schedulerProvider) {
        super(schedulerProvider);
    }

    @Override
    public Flowable<Object4> buildUseCaseFlowable(Object3 input) {
        return Flowable.just(new Object4());
    }
}
UseCase2:
public class ActualUseCase2 extends AsyncUseCase<Object1, Object2> {

    public ActualUseCase2(SchedulerProvider schedulerProvider) {
        super(schedulerProvider);
    }

    @Override
    public Flowable<Object2> buildUseCaseFlowable(Object1 input) {
        return Flowable.just(new Object2());
    }
}
UseCaseManager:
public interface UseCaseManager<In, Out> {
    <T> T getUseCase(In input, Out output);
}
T can be a different UseCase with different In & Out objects.
UseCaseManagerImpl:
public class UseCaseManagerImpl implements UseCaseManager {

    @Override
    public Object getUseCase(Object object1, Object object2) {
        return null;
    }
}
Now this is the main problem, which I'm not able to figure out: how can I implement the getUseCase method?
I think you're re-inventing the abstract factory pattern. Google will provide you with lots of content on that subject...
The tricky bit is how you decide which subtype to instantiate and return; that can be as simple as a switch statement, or involve lookup tables, etc. The key point is that you isolate that logic into a single place, where you can unit test it.
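For illustration, a hedged sketch of the lookup-table variant, keyed here by the input type; the registration mechanism and the choice of key are assumptions, not a prescription:

import java.util.HashMap;
import java.util.Map;

// A simple factory that maps an input type to the use case handling it.
public class UseCaseFactory {

    private final Map<Class<?>, AsyncUseCase<?, ?>> registry = new HashMap<>();

    // All registration logic lives in one place, where it can be unit tested.
    public <In> void register(Class<In> inputType, AsyncUseCase<In, ?> useCase) {
        registry.put(inputType, useCase);
    }

    @SuppressWarnings("unchecked") // the cast is checked only by register()'s signature
    public <In, Out> AsyncUseCase<In, Out> forInput(Class<In> inputType) {
        AsyncUseCase<?, ?> useCase = registry.get(inputType);
        if (useCase == null) {
            throw new IllegalArgumentException("No use case registered for " + inputType);
        }
        return (AsyncUseCase<In, Out>) useCase;
    }
}

You would register each use case once at startup, e.g. factory.register(Object3.class, new ActualUseCase1(schedulerProvider)), and then resolve with factory.forInput(Object3.class).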
A bigger question is: how do you end up with 200 subclasses?
Okay, I am getting the idea that you want a sort of system wherein for a given input you get some output, and you can have 200 such inputs for which 200 corresponding outputs are possible. And you want to make all of that manageable.
I will try to explain the solution I have in mind. I am a beginner in Java and hence won’t be able to produce much code.
You can implement this using the Chain of Responsibility pattern. In this design pattern, you have a job runner (UseCaseManagaer in your case) and several jobs (UseCases) to run, which will be run in sequence until one of them returns an output.
You can create a RequestPipeline class which will be the job runner. In the UseCaseManager, you instantiate the pipeline once and add all the use cases you want, builder-style, like so:
requestPipeline.add(new UseCase1())
               .add(new UseCase2())
               ...
When an input comes in, you trigger the RequestPipeline which will run all the jobs added to it, one after the other in sequence. If a UseCase returns null, the job runner will call the next UseCase in line until it finds a UseCase which can manage the input and produce an answer.
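A minimal sketch of that idea, assuming a hypothetical Job interface whose handle method returns null when the job cannot process the input:

import java.util.ArrayList;
import java.util.List;

// Each use case is a job that either produces an output or returns null
// to signal that it cannot handle the given input.
interface Job {
    Object handle(Object input);
}

// The job runner: tries each registered job in sequence until one answers.
public class RequestPipeline {

    private final List<Job> jobs = new ArrayList<>();

    public RequestPipeline add(Job job) {
        jobs.add(job);
        return this; // returning this enables builder-style chaining
    }

    public Object run(Object input) {
        for (Job job : jobs) {
            Object output = job.handle(input);
            if (output != null) {
                return output; // the first job that can handle the input wins
            }
        }
        throw new IllegalArgumentException("No job could handle the input");
    }
}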
The advantages of this design pattern are:
Abstraction: The RequestPipeline is responsible for running the jobs in line, but does not know anything about the jobs it is running. A UseCase, on the other hand, only knows about processing its own use case; it's a unit in itself. Hence the Single Responsibility Principle is satisfied, as both are independent of each other and reusable whenever we have a similar design requirement later.
Easily extensible: If you have to add 10 other use cases, you can easily add them to the RequestPipeline and you are done.
No switch cases and if-elses: This in itself is a big achievement. I love Chain of Responsibility for this very reason.
Declarative programming: We simply declare what we need to do and leave the details of how to do it to the separate units. The design of the code is easily understandable by a new developer.
More control: The RequestPipeline has the ability to dynamically decide which job to run at run-time.
Reference: https://www.google.co.in/amp/s/www.geeksforgeeks.org/chain-responsibility-design-pattern/amp/
Some Java code is provided there for you to check whether it satisfies your use case.
Hope this helps. Please let me know if you have any doubts in the comment section.
What you are trying to do is NOT single responsibility; it's the opposite.
Single responsibility means
There should be one reason to change
See The Single Responsibility Principle
The UseCaseManager you are trying to implement will handle all of your 200 use cases. Thus it will change whenever a use case changes. That is mixing concerns, not separating them.
Usually use cases are invoked by controllers and usually a controller also has a single responsibility. Thus the controller knows which use case it must invoke. Thus I don't see the need for a UseCaseManager.
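A hedged sketch of that idea; the controller is hypothetical and reuses the ActualUseCase1 class from the question:

import io.reactivex.Flowable;

// The controller depends on exactly the use case it needs,
// so no central lookup is required.
public class Object3Controller {

    private final ActualUseCase1 useCase;

    public Object3Controller(ActualUseCase1 useCase) {
        this.useCase = useCase;
    }

    public Flowable<Object4> handle(Object3 input) {
        return useCase.buildUseCaseFlowable(input);
    }
}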
I guess there is another problem in your design that leads to the problem you have. But since I don't have your full source code I can't give you any further advice.

Benefit of Repository and what is the difference between two statements

I'm learning the repository pattern. I've implemented it in a sample project, but I don't know what the main benefit of the repository is.
private IStudentRespostiry repository = null;

public StudentController()
{
    this.repository = new StudentRepository();
}

public StudentController(IStudentRespostiry repository)
{
    this.repository = repository;
}
The StudentRepository class's methods can also be accessed by creating an object of the class directly:
StudentRepository obj = new StudentRepository();
Does anyone have a solid reason for this? One I know of is hiding data.
The main reasons people cite for the repository pattern are testability and modularity. For testability, you can replace the concrete object with a mock one wherever the repository is used. For modularity, you can create a different repository that, for example, uses a different data store.
But I'm highly skeptical of the modularity argument, because repositories are often highly leaky abstractions, and changing the backing data store is extremely rare. Something that should be as simple as creating a different instance turns into a complete rewrite, defeating the purpose of the repository.
There are other ways to achieve testability with your data store without worrying about leaky abstractions.
As for your code example: the first constructor is the "default" one, and the second is probably there for either IoC or testing with mocks. IMO there should be no "default" one, because it defeats the purpose of actually having IoC.
The second statement allows dependency injection. This means you can use an IoC container to inject the correct implementation.
So for example, in your unit tests you could inject an in memory database (see mocking) while your production code would use an implementation which hits the actual database.
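To make that concrete, here is a hedged sketch of the same idea in Java (matching the rest of this page's examples); the class names and the in-memory fake are hypothetical:

import java.util.HashMap;
import java.util.Map;

// Minimal domain class for the sketch.
class Student {
    private final long id;
    Student(long id) { this.id = id; }
    long getId() { return id; }
}

// Production code depends only on this interface.
interface StudentRepository {
    Student findById(long id);
}

class StudentController {
    private final StudentRepository repository;

    // The repository is injected: tests pass a fake,
    // production code passes a real database-backed implementation.
    StudentController(StudentRepository repository) {
        this.repository = repository;
    }

    Student show(long id) {
        return repository.findById(id);
    }
}

// A trivial in-memory fake used in unit tests instead of a real database.
class InMemoryStudentRepository implements StudentRepository {
    private final Map<Long, Student> students = new HashMap<>();

    void save(Student student) {
        students.put(student.getId(), student);
    }

    @Override
    public Student findById(long id) {
        return students.get(id);
    }
}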

generalizing classes or not when using mapper for database

Let's say I have the following classes:
customer, applicant, agency, partner
I need to work with databases and use mappers to map the objects to the database. My question is: can I generalize these classes with a Person class and just implement a mapper for that class, so basically a single Person mapper instead of one mapper per class?
The mapper classes use an ORM to save and edit fields in the database; I use them because the application I am working on has to be developed further in the future.
If each of the classes (Partner, Applicant, etc.) has different attributes, you can't have only one mapper for all of the classes (well, you can, but then you would have to use meta-programming to retrieve information from the classes lower in the hierarchy). The solution also depends on how and by whom your database is managed: do you have control over how it is designed, or is it something that you cannot change? In the second case, I would definitely use one mapper per class to allow full decoupling between the DB and the app. In the first case, you could use some kind of mapping hierarchy. It also depends on what language/frameworks you are using.
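As a hedged illustration of the "mapping hierarchy" idea in Java (the class, column, and Row types here are made up for the sketch):

import java.util.HashMap;
import java.util.Map;

// Minimal stand-ins for the domain classes and a generic DB row.
class Person {
    private final String name;
    Person(String name) { this.name = name; }
    String getName() { return name; }
}

class Customer extends Person {
    private final int loyaltyLevel;
    Customer(String name, int loyaltyLevel) { super(name); this.loyaltyLevel = loyaltyLevel; }
    int getLoyaltyLevel() { return loyaltyLevel; }
}

class Row {
    final String table;
    final Map<String, Object> columns = new HashMap<>();
    Row(String table) { this.table = table; }
    void set(String column, Object value) { columns.put(column, value); }
}

// The base mapper handles the attributes shared by every Person subtype.
abstract class PersonMapper<T extends Person> {
    protected void mapCommonFields(T person, Row row) {
        row.set("name", person.getName());
    }
    public abstract Row toRow(T person);
}

// Each subtype mapper only adds its own attributes.
class CustomerMapper extends PersonMapper<Customer> {
    @Override
    public Row toRow(Customer customer) {
        Row row = new Row("customer");
        mapCommonFields(customer, row);
        row.set("loyalty_level", customer.getLoyaltyLevel());
        return row;
    }
}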
Good Luck!

How do I mock a class without an interface using StructureMap's AutoMocker?

I am a huge proponent of testing, and I think having to create an extra interface to be able to write unit tests is a small price to pay. I have added the StructureMap AutoMocker to my test suite, and it seems to be completely unable to mock a class. Rhino Mocks has the ability to mock public classes as long as their public methods are marked virtual.
I would like to get rid of the interfaces if possible. Any and all help is appreciated.
Before I answer this, I would just like to point out that it largely defeats the purpose of using StructureMap when you don't use interfaces. (Well, not completely, but it defeats enough of the purpose to make me question why you've decided to go with StructureMap in the first place...) You won't get very far in your tests without interfaces, or if you do, you're going to have all of your logic sitting in one class, or in 20-30 classes all tightly coupled, which again misses the point of using StructureMap.
Having said that, I think this should work in situations where you need to mock out a concrete class:
[Test]
public void TestMethod()
{
    // Arrange
    var service = new RhinoAutoMocker<BusinessRuleService>();
    service.PartialMockTheClassUnderTest();
    service.ClassUnderTest.Expect(x => x.VirtualMethodImTesting());

    // Act
    service.ClassUnderTest.CallableMethod();

    // Assert
    service.ClassUnderTest.VerifyAllExpectations();
    // ... or other stuff ...
}
