How do I mock a class without an interface using StructureMap's AutoMocker? - structuremap-automocking

I am a huge proponent of testing, and I think having to create an extra interface to be able to write unit tests is a small price to pay. I have added the StructureMap AutoMocker to my test suite, and it seems to be unable to mock a class. Rhino Mocks has the ability to mock public classes as long as their public methods are marked virtual.
I would like to get rid of the interfaces if possible. Any and all help is appreciated.

Before I answer this, I would just like to point out that it largely defeats the purpose of using StructureMap when you don't use interfaces. (Well, not completely, but enough for me to question why you decided to go with StructureMap in the first place...) You won't get very far in your tests without interfaces; or if you do, you're going to have all of your logic sitting in one class, or in 20-30 classes that are all tightly coupled, which again misses the point of using StructureMap. Having said that, I think this should work in situations where you need to mock out a concrete class:
[Test]
public void TestMethod()
{
    // Arrange
    var service = new RhinoAutoMocker<BusinessRuleService>();
    service.PartialMockTheClassUnderTest();
    service.ClassUnderTest.Expect(x => x.VirtualMethodImTesting());

    // Act
    service.ClassUnderTest.CallableMethod();

    // Assert
    service.ClassUnderTest.VerifyAllExpectations();
    // ... or other stuff ...
}

Related

Single Responsibility Principle in Clean Architecture, Aggregating UseCases in one UseCaseManager which can provide UseCase based on In & Out Object

I want to implement the Single Responsibility Principle in my project's Domain layer (Clean MVVM).
I have approximately 200 different use cases, which are becoming very hectic to manage. Now I'm thinking of creating one UseCaseManager which can provide me the required UseCase based on its Input & Output objects.
I've tried an approach, but it isn't looking very good. I'm including some sample code; please help me figure out how I can aggregate all the UseCases into one UseCaseManager.
UseCase1:
public class ActualUseCase1 extends AsyncUseCase<Object3, Object4> {
    public ActualUseCase1(SchedulerProvider schedulerProvider) {
        super(schedulerProvider);
    }

    @Override
    public Flowable<Object4> buildUseCaseFlowable(Object3 input) {
        return Flowable.just(new Object4());
    }
}
UseCase2:
public class ActualUseCase2 extends AsyncUseCase<Object1, Object2> {
    public ActualUseCase2(SchedulerProvider schedulerProvider) {
        super(schedulerProvider);
    }

    @Override
    public Flowable<Object2> buildUseCaseFlowable(Object1 input) {
        return Flowable.just(new Object2());
    }
}
UseCaseManager:
public interface UseCaseManager<In, Out> {
    <T> T getUseCase(In input, Out output);
}
T can be different UseCase with different In & Out Object.
UseCaseManagerImpl:
public class UseCaseManagerImpl implements UseCaseManager {
    @Override
    public Object getUseCase(Object object1, Object object2) {
        return null;
    }
}
Now this is the main problem that I'm not able to solve: how can I implement the getUseCase method?
I think you're re-inventing the abstract factory pattern. Google will provide you with lots of content on that subject...
The tricky bit is how you decide which subtype to instantiate and return; that can be as simple as a switch statement, or involve lookup tables, etc. The key point is that you isolate that logic into a single place, where you can unit test it.
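As a hedged illustration of that idea, here is a minimal, hypothetical registry-based factory (the `UseCase` interface, keys, and class names are made up, not from the question's code). The "which subtype?" decision lives in a single map, which is easy to unit test in isolation:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Hypothetical use-case abstraction for illustration only.
interface UseCase {
    String name();
}

// The factory isolates the "which subtype?" decision in one lookup table.
class UseCaseFactory {
    private final Map<String, Supplier<UseCase>> registry = new HashMap<>();

    void register(String key, Supplier<UseCase> supplier) {
        registry.put(key, supplier);
    }

    UseCase create(String key) {
        Supplier<UseCase> supplier = registry.get(key);
        if (supplier == null) {
            throw new IllegalArgumentException("No use case registered for: " + key);
        }
        return supplier.get();
    }
}

public class Main {
    public static void main(String[] args) {
        UseCaseFactory factory = new UseCaseFactory();
        // Each registration is one line, so 200 use cases stay manageable.
        factory.register("login", () -> () -> "LoginUseCase");
        factory.register("logout", () -> () -> "LogoutUseCase");
        System.out.println(factory.create("login").name());
    }
}
```

With this shape, adding a use case means adding one `register` call, and the lookup logic itself can be covered by a handful of tests (known key, unknown key).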
A bigger question is - how do you end up with 200 subclasses?
Okay, I am getting the idea that you want a sort of system wherein, for a given input, you get some output; you can have 200 such inputs for which 200 corresponding outputs are possible, and you want to make all of that manageable.
I will try to explain the solution I have in mind. I am a beginner in Java and hence won't be able to produce much code.
You can implement this using the Chain of Responsibility pattern. In this design pattern, you have a job runner (UseCaseManager in your case) and several jobs (UseCases) to run, which will be run in sequence until one of them returns an output.
You can create a RequestPipeline class which will be the job runner. In the UseCaseManager, you instantiate the pipeline once and add all the use cases you want using a builder pattern, like so:
requestPipeline.add(new UseCase1());
requestPipeline.add(new UseCase2());
...
When an input comes in, you trigger the RequestPipeline, which will run all the jobs added to it, one after the other in sequence. If a UseCase returns null, the job runner will call the next UseCase in line until it finds a UseCase which can handle the input and produce an answer.
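A minimal, hypothetical sketch of such a pipeline (class and handler names are illustrative, not from the original code) could look like this, using the convention that returning null means "pass to the next handler":

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

// Hypothetical job runner: runs handlers in order until one returns non-null.
class RequestPipeline {
    private final List<Function<Object, Object>> handlers = new ArrayList<>();

    RequestPipeline add(Function<Object, Object> handler) {
        handlers.add(handler);
        return this; // fluent, builder-style registration
    }

    Object run(Object input) {
        for (Function<Object, Object> handler : handlers) {
            Object result = handler.apply(input);
            if (result != null) {
                return result; // first handler that can answer wins
            }
        }
        throw new IllegalStateException("No handler could process the input");
    }
}

public class Main {
    public static void main(String[] args) {
        RequestPipeline pipeline = new RequestPipeline()
            .add(in -> in instanceof Integer ? "handled int: " + in : null)
            .add(in -> in instanceof String ? "handled string: " + in : null);
        System.out.println(pipeline.run(42));
        System.out.println(pipeline.run("hello"));
    }
}
```

The runner knows nothing about the individual handlers, and each handler knows only its own input type, which is the separation the answer is describing.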
The advantages of this design pattern are:
Abstraction: The RequestPipeline is responsible for running the jobs in line but does not know anything about the jobs it is running. A UseCase, on the other hand, only knows about processing its own use case; it is a unit in itself. Hence the Single Responsibility Principle is satisfied, as both are independent of each other and are reusable whenever we have a similar design requirement later.
Easily extensible: If you have to add 10 other use cases, you can easily add them to the RequestPipeline and you are done.
No switch cases and if-elses: This in itself is a big achievement. I love Chain of Responsibility for this very reason.
Declarative programming: We simply declare what we need to do and leave the details of how to do it to the separate units. The design of the code is easily understandable by a new developer.
More control: The RequestPipeline can dynamically decide which job to run at run-time.
Reference: https://www.google.co.in/amp/s/www.geeksforgeeks.org/chain-responsibility-design-pattern/amp/
Some Java code is provided there for you to check whether it satisfies your use case.
Hope this helps. Please let me know if you have any doubts in the comment section.
What you are trying to do is NOT single responsibility; it's the opposite.
Single responsibility means:
There should be one reason to change.
See The Single Responsibility Principle.
The UseCaseManager you are trying to implement will handle all of your 200 use cases. Thus it will change whenever a use case changes. That is mixing concerns, not separating them.
Usually use cases are invoked by controllers, and usually a controller also has a single responsibility. Thus the controller knows which use case it must invoke, and I don't see the need for a UseCaseManager.
I guess there is another problem in your design that leads to the problem you have, but since I don't have your full source code I can't give you any further advice.

Benefit of repository, and what is the difference between these two statements?

I'm learning the repository pattern. I've implemented it in a sample project, but I don't know what the main benefit of a repository is.
private IStudentRepository repository = null;

public StudentController()
{
    this.repository = new StudentRepository();
}

public StudentController(IStudentRepository repository)
{
    this.repository = repository;
}
The StudentRepository class's methods can also be accessed by creating an object of the class directly:
StudentRepository obj = new StudentRepository();
Does anyone have a solid reason for this? The one benefit I know of is hiding the data access details.
The main reasons people cite for a repository are testability and modularity. For testability, you can replace the concrete object with a mock one wherever the repository is used. For modularity, you can create a different repository that, for example, uses a different data store.
But I'm highly skeptical of the modularity argument, because repositories are often highly leaky abstractions and changing the backing data store is extremely rare. That means something that should be as simple as creating a different instance turns into a complete rewrite, defeating the purpose of the repository.
There are other ways to achieve testability with your data store without worrying about leaky abstractions.
As for your code examples: the first constructor is the "default" one, and the second is probably for either IoC or testing with mocks. IMO there should be no "default" one, because it removes the purpose of actually having an IoC container.
The second statement allows dependency injection. This means you can use an IoC container to inject the correct implementation.
So, for example, in your unit tests you could inject an in-memory database (see mocking), while your production code would use an implementation which hits the actual database.
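To make the injection point concrete, here is a hedged sketch (in Java rather than the question's C#, with hypothetical interface and method names): a test hands the controller an in-memory fake through the injecting constructor, and no real database is involved.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical repository abstraction for illustration only.
interface StudentRepository {
    String findName(int id);
}

// In-memory fake: stands in for the database-backed implementation in tests.
class InMemoryStudentRepository implements StudentRepository {
    private final Map<Integer, String> data = new HashMap<>();

    InMemoryStudentRepository put(int id, String name) {
        data.put(id, name);
        return this;
    }

    public String findName(int id) {
        return data.get(id);
    }
}

class StudentController {
    private final StudentRepository repository;

    // The "second statement": the dependency is injected, not constructed here.
    StudentController(StudentRepository repository) {
        this.repository = repository;
    }

    String greet(int id) {
        return "Hello, " + repository.findName(id);
    }
}

public class Main {
    public static void main(String[] args) {
        StudentRepository fake = new InMemoryStudentRepository().put(1, "Ada");
        System.out.println(new StudentController(fake).greet(1));
    }
}
```

The production wiring would pass a database-backed implementation instead; the controller never knows the difference.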

Partial-mocking considered bad practice? (Mockito)

I'm unit-testing a business object using Mockito. The business object uses a DAO which normally gets data from a DB. To test the business object, I realized that it was easier to use a separate in-memory DAO (which keeps data in a HashMap) than to write all the
when(...).thenReturn(...)
statements. To create such a DAO, I started by partial-mocking my DAO interface like so:
when(daoMock.getById(anyInt())).then(new Answer() {
    @Override
    public Object answer(InvocationOnMock invocation) throws Throwable {
        int id = (Integer) invocation.getArguments()[0];
        return map.get(id);
    }
});
but it occurred to me that it was easier to just implement a whole new DAO implementation myself (using in-memory HashMap) without even using Mockito (no need to get arguments out of that InvocationOnMock object) and make the tested business object use this new DAO.
Additionally, I've read that partial-mocking was considered bad practice. My question is: is what I'm doing a bad practice in my case? What are the downsides? To me this seems OK and I'm wondering what the potential problems could be.
I'm wondering why you need your fake DAO to be backed by a HashMap. I'm wondering whether your tests are too complex. I'm a big fan of having very simple test methods that each test one aspect of your SUT's behaviour. In principle, this is "one assertion per test", although sometimes I end up with a small handful of actual assert or verify lines, for example, if I'm asserting the correctness of a complex object. Please read http://blog.astrumfutura.com/2009/02/unit-testing-one-test-one-assertion-why-it-works/ or http://blog.jayfields.com/2007/06/testing-one-assertion-per-test.html to learn more about this principle.
So for each test method, you shouldn't be using your fake DAO over and over. Probably just once, or twice at the very most. Therefore, having a big HashMap full of data would seem to me to be EITHER redundant, OR an indication that your test is doing WAY more than it should. For each test method, you should really only need one or two items of data. If you set these up using a Mockito mock of your DAO interface, and put your when ... thenReturn in the test method itself, each test will be simple and readable, because the data that the particular test uses will be immediately visible.
You may also want to read up on the "arrange, act, assert" pattern, (http://www.arrangeactassert.com/why-and-what-is-arrange-act-assert/ and http://www.telerik.com/help/justmock/basic-usage-arrange-act-assert.html) and be careful about implementing this pattern INSIDE each test method, rather than having different parts of it scattered across your test class.
Without seeing more of your actual test code, it's difficult to know what other advice to give you. Mockito is supposed to make mocking easier, not harder; so if you've got a test where that's not happening for you, then it's certainly worth asking whether you're doing something non-standard. What you're doing is not "partial mocking", but it certainly seems like a bit of a testing smell to me. Not least because it couples lots of your test methods together - ask yourself what would happen if you had to change some of the data in the HashMap.
You may find https://softwareengineering.stackexchange.com/questions/158397/do-large-test-methods-indicate-a-code-smell useful too.
When testing my classes, I often use a combination of Mockito-made mocks and also fakes, which are very much what you are describing. In your situation I agree that a fake implementation sounds better.
There's nothing particularly wrong with partial mocks, but it makes it a little harder to determine when you're calling the real object and when you're calling your mocked method--especially because Mockito silently fails to mock final methods. Innocent-looking changes to the original class may change the implementation of the partial mock, causing your test to stop working.
If you have the flexibility, I recommend extracting an interface that exposes the method you need to call, which will make it easier whether you choose a mock or a fake.
To write a fake, implement that small interface without Mockito using a simple class (nested in your test, if you'd like). This will make it very easy to see what is happening; the downside is that if you write a very complicated Fake you may find you need to test the Fake too. If you have a lot of tests that could make use of a good Fake implementation, this may be worth the extra code.
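As a hedged sketch of such a fake (the `Dao` interface and its methods are made up for illustration, not taken from the question's code), a plain class backed by a HashMap needs no Mockito at all:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical small DAO interface, extracted just for testing purposes.
interface Dao {
    Object getById(int id);
    void save(int id, Object entity);
}

// The fake: a plain in-memory implementation, simple enough to read at a glance.
class InMemoryDao implements Dao {
    private final Map<Integer, Object> map = new HashMap<>();

    public Object getById(int id) {
        return map.get(id);
    }

    public void save(int id, Object entity) {
        map.put(id, entity);
    }
}

public class Main {
    public static void main(String[] args) {
        Dao dao = new InMemoryDao();
        dao.save(7, "order-7");
        System.out.println(dao.getById(7));
    }
}
```

Such a class can be nested inside the test, and each test method can seed only the one or two entries it actually needs.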
I highly recommend "Mocks aren't Stubs", an article by Martin Fowler (famous for his book Refactoring). He goes over the names of different types of test doubles, and the differences between them.

How to run aspect advice for a method which is called by another method in the same class

I am having trouble with Spring AOP. I am trying to trigger a method using an aspect, but the method that should trigger the aspect is a method of the same class, and the aspect is not working (no errors, by the way). Like this:
class A implements Runnable {
    public void write() {
        System.out.println("Hi");
    }

    public void run() {
        this.write();
    }
}
<aop:after-returning method="anyMethod" pointcut="execution(* A.write(..))"/>
Any ideas will be appreciated
Thanks
The fact that the advised method is called in a different thread doesn't make any difference. Just make sure the instance that you pass to the thread is created by the spring application context and not by your application code.
Also, since you're advising a method declared in a class, not an interface -- write() -- Spring will need to create a CGLIB-based class proxy rather than a JDK dynamic proxy (so make sure cglib is on your classpath).
This is because Spring AOP is proxy based. You use a proxy to delegate calls to the underlying object. However, when an underlying object's method calls another method of the same class (your use case), the proxy does not come into the picture, and hence what you are trying to achieve is not possible. There are some workarounds, but they kill the very purpose of AOP.
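The proxy mechanics can be illustrated without Spring at all. This hypothetical sketch hand-rolls a decorator-style proxy (names are made up) to show why an internal this.write() call never reaches the advice:

```java
// Interface so the "proxy" and the target are interchangeable.
interface Task {
    void write();
    void run();
}

class A implements Task {
    public void write() {
        System.out.println("Hi");
    }

    public void run() {
        this.write(); // direct call on the target itself: no proxy involved
    }
}

// Stand-in for what a Spring AOP proxy does: wrap calls with advice.
class LoggingProxy implements Task {
    private final Task target;

    LoggingProxy(Task target) {
        this.target = target;
    }

    public void write() {
        target.write();
        System.out.println("after-returning advice ran");
    }

    public void run() {
        target.run(); // delegates, but the target then calls its OWN write()
    }
}

public class Main {
    public static void main(String[] args) {
        Task proxied = new LoggingProxy(new A());
        proxied.write(); // call enters through the proxy: advice fires
        proxied.run();   // write() still runs, but the advice does not fire
    }
}
```

Only calls that enter through the wrapper are intercepted, which is exactly the limitation described above; full AspectJ weaving avoids it by modifying the class itself instead of wrapping it.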
You can refer more information here.
http://docs.spring.io/spring/docs/3.1.x/spring-framework-reference/html/aop.html#aop-understanding-aop-proxies
As Abhishek Chauhan said, Spring AOP is proxy-based and thus cannot intercept direct calls to this.someMethod(). But the good news is that you can also use full-blown AspectJ within Spring applications via load-time weaving as described in the Spring manual. This way you can get rid of the limitation and even of the whole proxy overhead because AspectJ does not need any proxies.

About unit testing: what kind of function/method can we use as a unit testing object?

I am a beginner at unit testing, so I want to ask what kinds of functions/methods we can use as unit testing objects.
I want to unit test SharePoint code written in C#.
By the way, I'm not asking about unit testing frameworks; I want to know what kinds of functions I can use as unit test objects.
Ex:
// function that returns a value
string getTitle()
{
    // TODO: code logic here
    return "A Title";
}
Or
// function that does not return a value
void doAction()
{
    // TODO: code logic here
}
=> Which one of them can be used as a unit testing object?
Your question is really vague.
If you're asking about unit testing techniques, get a book. Perhaps this or this.
If you're wanting to test code that calls SharePoint objects, you have to talk about tools. You have to fake these out using either Typemock Isolator or Moles. The SharePoint object model is full of concrete, non-inheritable objects.
To answer your question, the first function is an easy object to test:
assertEquals("A Title", getTitle());
The second is more difficult because it doesn't give you a way to sense what is happening inside it. You need to get access:
class SomethingDoer {
    public boolean sense = false;

    public void doAction()
    {
        // TODO: code logic here
        sense = true;
    }
}
test:
SomethingDoer doer = new SomethingDoer();
doer.doAction();
assertTrue(doer.sense);
I use the public field sense to test.
[NOTE] This is purely a toy example to explain the concept that unit testing requires a way to sense and check the behavior of the system under test.
If you're a Java developer, you use JUnit or TestNG as your testing framework.
You write unit test classes that exercise your Java classes. You instantiate instances and call their methods in the tests.
If you're using .NET then I'd highly recommend Moq to create test only implementations of interfaces.
If you don't have interface declarations, then Moles will allow you to test hard-coded dependencies. Use this with caution, however, as in my opinion if you're using Moles a lot then you have design issues with regard to lack of abstraction.
If you are looking for a unit testing framework then NUnit has a lot going for it in terms of how fluid it is to write. We've had some issues getting moles to play nicely with NUnit so in general we've moved over to MSTest which is built in to Visual Studio; whilst not as nice as NUnit it does testing without any problems.
If you want to do unit testing of your SharePoint web pages, then you're really looking at Selenium to help with that. It's very fragile though, so I'd leave this until you're sure that your UI is complete.
If you are totally new to Unit testing I'd highly recommend reading about it first. If you get tired of looking at web pages then just create a new MVC web application and take a look at the unit tests that are created out of the box to get an idea (just ignore their terrible test method names)
