The io.vertx.reactivex.core.eventbus.EventBus.rxSend() method has the following signature:
public <T> Single<Message<T>> rxSend(String address,
                                     Object message,
                                     DeliveryOptions options)
What is the correct way to mock this so that it returns a Single containing a real object? The issue is that the Message class has no constructor apart from one which takes another Message object.
So the following will compile:
Mockito.when(eventBus.rxSend(Mockito.isA(String.class),
Mockito.isA(JsonObject.class),
Mockito.isA(DeliveryOptions.class))).thenReturn(Single.just(new Message<Object>(null)));
but of course Single.just(new Message<Object>(null)) does not contain a real object which can then be passed on to test the next handler in the verticle.
Thanks
Like I mentioned in my comment, I don't have an answer to your immediate question, but I'd instead like to recommend a different approach to getting the results you're looking for.
Mocking types that you don't own is generally discouraged for a variety of reasons. The two that resonate most with me (as I've fallen victim to both) are:
If the real implementation of the mocked dependency changes, the mock's behavior will not automatically reveal any forward-breaking changes.
The more mocks a test introduces, the more cognitive load the test carries. And some tests require a lot of mocks in order to work.
There are lots of articles on the topic with more detailed viewpoints and opinions. If you're interested, refer to the Mockito wiki, or just Google around.
Given all that, rather than mocking EventBus, why not use an actual instance and receive real reply Messages composed by the framework? Sure, strictly speaking this becomes more of an integration test than a unit test, but it is closer to the type of testing you want.
Here's an example snippet from a test I wrote in an existing project, with some added comments. (The code refers to some non-standard types with an "Ext" suffix, but they aren't salient to the approach.)
private EventBus eventBus;
@Before
public void setUp(@NotNull TestContext context) {
    eventBus = Vertx.vertx().eventBus();
}
@Test
public void ping_pong_reply_test(@NotNull TestContext context) {
final Async async = context.async();
// the following is a MessageConsumer registered
// with the EventBus for this specific test.
// the reference is retained so that it can be
// "unregistered()" upon completion of this test
// so as not to affect other tests.
final MessageConsumer<JsonObject> consumer = eventBus.consumer(Ping.class.getName(), message -> {
// here is where you would otherwise place
// your mock Message generation.
MessageExt.replyAsJsonObject(message, new Pong());
});
final Ping message = new Ping();
final DeliveryOptions options = null;
// the following uses an un-mocked EventBus to
// send an event and receive a real Message reply
// created by the consumer above.
EventBusExt.rxSendJsonObject(eventBus, message, options).subscribe(
    result -> {
        // result.body() is JSON that conforms to
        // the Pong type
        consumer.unregister();
        async.complete();
    },
    error -> {
        context.fail(error);
    }
);
}
I hope this at least inspires some new thinking around your problem.
I am considering using SpecFlow for a new automation project. Since SpecFlow is similar to Cucumber in the Java world, this question applies to Cucumber as well.
In real-world applications there are lists of complex objects, and tests are required to look only for specific objects in those lists, and only at specific fields of those objects.
For example, a chat application displays a list of messages, a message being a complex object comprising a date, user name, user icon image, text, and maybe other complex objects like images, tables, etc.
Now, one test may require just checking that the chat is not empty. Another test may require checking only that a message from a specific user is present. And another one just checks for a message with specific text. The number of verification rules can grow into many tens.
Of course, one way to deal with that is to implement a "step" for each verification rule, hence writing tens of steps just to discover that yet another one is needed... :(
I found that a better way is to use NUnit Constraints (Hamcrest Matchers in Java) to define those rules, for example:
[Test]
public void ShouldNotBeEmpty() {
...
Assert.That(chatMessages, Is.Not.Empty);
}
[Test]
public void ShouldHaveMessageFrom(string user) {
...
Assert.That(chatMessages, Contains.Item(new Message { User = user }));
// sometimes the User field may be a complex object too...
}
[Test]
public void ShouldHaveMessage(string text) {
...
Assert.That(chatMessages, Contains.Item(new Message { Text = text }));
}
This way the mechanism that brings chatMessages can work with any kind of verification rule. Hence in a BDD framework, one could make a single step to work for all:
public void Then_the_chat(IConstraint matcher) {
Assert.That(someHowLoadChatMessagesHere, matcher);
}
Is there any way in SpecFlow/Cucumber to have these rules mapped to Gherkin syntax?
Code reuse is not the biggest concern for a behavior-driven test. Accurately describing the business use case is what a BDD test should do, so repetitive code is more acceptable. The reality is that you do end up with a large number of step definitions. This is normal and expected for BDD testing.
Within the realm of a chat application, I see three options for writing steps that correspond to the unit test assertions in your question:
Unit Test:
[Test]
public void ShouldNotBeEmpty() {
...
Assert.That(chatMessages, Is.Not.Empty);
}
Gherkin:
Then the chat messages should not be empty
Unit Test:
[Test]
public void ShouldHaveMessageFrom(string user) {
...
Assert.That(chatMessages, Contains.Item(new Message { User = user }));
// sometimes the User field may be a complex object too...
}
Gherkin:
Then the user should have a chat message from "Greg"
Unit Test:
[Test]
public void ShouldHaveMessage(string text) {
...
Assert.That(chatMessages, Contains.Item(new Message { Text = text }));
}
Gherkin:
Then the user should have a chat message with the following text:
"""
Hi, everyone!
How is the weather, today?
"""
Unit Test:
public void Then_the_chat(IConstraint matcher) {
Assert.That(someHowLoadChatMessagesHere, matcher);
}
This gets a little more difficult. Consider using a data table to specify a more complex object in your assertion in Gherkin:
Then the user should have the following chat messages:
| Sender | Date Sent | Message |
| Greg | 5/2/2022 9:24:18 AM | ... |
| Sarah | 5/2/2022 9:25:39 AM | ... |
SpecFlow will pass a Table object as the last parameter to this step definition. You can use the SpecFlow.Assist Table helpers to compare the data table to your expected messages.
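The Cucumber-Java analogue of those SpecFlow Table helpers is a DataTable parameter; here is a minimal sketch (the step class and the comparison logic are again placeholders):
import java.util.List;
import java.util.Map;

import io.cucumber.datatable.DataTable;
import io.cucumber.java.en.Then;

public class ChatTableSteps {

    @Then("the user should have the following chat messages:")
    public void shouldHaveTheFollowingChatMessages(DataTable table) {
        // each row becomes a map keyed by the header cells, e.g.
        // {Sender=Greg, Date Sent=5/2/2022 9:24:18 AM, Message=...}
        List<Map<String, String>> expectedRows = table.asMaps();
        for (Map<String, String> row : expectedRows) {
            // compare row.get("Sender"), row.get("Date Sent") and
            // row.get("Message") against the chat your test context loaded
        }
    }
}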
This gives you some options to think about. Which one you choose should be determined by how well the step and scenario read in Gherkin. Without more information, this is all I can provide. Feel free to try these out and post new questions concerning more specific problems.
We are facing an issue when an exception is encountered in a transformer.
Below is the scenario:
We have a router and a transformer with the below configuration
<bean id="commonMapper"
class="com.example.commonMapper"></bean>
<int:router input-channel="channelA" ref="commonMapper"
method="methodA" />
<int:transformer input-channel="channel_2"
ref="commonMapper" method="methodB"
output-channel="channelC"></int:transformer>
CommonMapper.java:
public String methodA(SomeBean someBean) {
if (<some business condition example someBean.getXValue()>) {
return "channel_1";
} else if(<some condition>) {
return "channel_2"; // Assuming it enters this condition, based on this the above transformer with input-channel="channel_2" gets called
} else if (<some condition>) {
return "channel_3";
} else {
return "channel_4";
}
}
public SomeBean methodB(Message<SomeBean> message)
        throws Exception {
    SomeBean someBean = message.getPayload();
    someBean.setY(10 / 0); // Purposely introducing an exception
    return someBean;
}
While debugging the application, we found that whenever an exception is encountered in methodB(), control goes back to the router method, i.e. methodA(), which again satisfies the condition and calls the transformer (with input-channel="channel_2"). This repeats for a certain number of iterations, and then the exception is logged via AnnotationMethodHandlerExceptionResolver -> resolveException.
Below are the queries:
Why does the router get called again when an exception is encountered in the transformer?
Is this a bug or normal behavior?
How to tackle this issue?
Please let me know if you need any more details around it.
The Spring Integration flow is just a plain Java method chain call. So, just look at this as if you were calling something like foo() -> bar() -> baz(). When an exception happens in the last one, without any try...catch in the call stack, control will come back to foo(), and if there is some retry logic there, it is going to call the same flow again.
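To make that concrete, here is a minimal plain-Java sketch of the analogy (the method names are just the placeholders from above, and the ArithmeticException mirrors the setY(10/0) in methodB):
class CallChainDemo {

    void foo() {
        try {
            bar(); // retry logic here would run the whole chain again
        } catch (ArithmeticException e) {
            // the nearest try...catch on the stack is where control lands;
            // nothing in bar() catches, so it skips straight back here
        }
    }

    void bar() {
        baz(); // no try...catch: the exception just passes through
    }

    void baz() {
        throw new ArithmeticException("/ by zero"); // like setY(10 / 0)
    }
}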
I'm not sure what your AnnotationMethodHandlerExceptionResolver is, but it looks like you're talking about this one:
Deprecated.
as of Spring 3.2, in favor of ExceptionHandlerExceptionResolver
@Deprecated
public class AnnotationMethodHandlerExceptionResolver
extends AbstractHandlerExceptionResolver
Implementation of the HandlerExceptionResolver interface that handles exceptions through the ExceptionHandler annotation.
This exception resolver is enabled by default in the DispatcherServlet.
This means that you are using a pretty old version of Spring. I don't think it is related, though, but the top of your call stack is Spring MVC, so you need to take a look there at what's going on with the retry.
And answering all your questions at once: yes, this is normal behavior; see the Java call explanation above. You need to debug the Spring code from the IDE to figure out what is going on at the MVC level.
I have been working on a "paved road" for setting up asynchronous messaging between two micro services using AMQP. We want to promote the use of separate domain objects for each service, which means that each service must define their own copy of any objects passed across the queue.
We are using Jackson2JsonMessageConverter on both the producer and the consumer side and we are using the Java DSL to wire the flows to/from the queues.
I am sure there is a way to do this, but it is escaping me: I want the consumer side to ignore the __TypeId__ header that is passed from the producer, as the consumer may have a different representation of that event (and it will likely be in a different Java package).
It appears there was work done such that, when using the annotation @RabbitListener, an inferredArgumentType argument is derived and will override the header information. This is exactly what I would like to do, but I would like to use the Java DSL to do it. I have not yet found a clean way to do this, and maybe I am just missing something obvious. It seems it would be fairly straightforward to derive the type when using the following DSL:
return IntegrationFlows
.from(
Amqp.inboundAdapter(factory, queueRemoteTaskStatus())
.concurrentConsumers(10)
.errorHandler(errorHandler)
.messageConverter(messageConverter)
)
.channel(channelRemoteTaskStatusIn())
.handle(listener, "handleRemoteTaskStatus")
.get();
However, this results in a ClassNotFoundException. The only way I have found to get around this, so far, is to set a custom message converter, which requires an explicit definition of the type.
public class ForcedTypeJsonMessageConverter extends Jackson2JsonMessageConverter {
ForcedTypeJsonMessageConverter(final Class<?> forcedType) {
setClassMapper(new ClassMapper() {
@Override
public void fromClass(Class<?> clazz, MessageProperties properties) {
//this class is only used for inbound marshalling.
}
@Override
public Class<?> toClass(MessageProperties properties) {
return forcedType;
}
});
}
}
I would really like this to be derived, so the developer does not have to really deal with this.
Is there an easier way to do this?
The simplest way is to configure the Jackson converter's DefaultJackson2JavaTypeMapper with TypeIdMapping (setIdClassMapping()).
On the sending system, map foo:com.one.Foo and on the receiving system map foo:com.two.Foo.
Then, the __TypeId__ header gets foo and the receiving system will map it to its representation of a Foo.
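For illustration, a sketch of the receiving-side configuration under those assumptions (the com.two.Foo class and the "foo" id come from the mapping described above; the sending side is the mirror image with com.one.Foo):
import java.util.HashMap;
import java.util.Map;

import org.springframework.amqp.support.converter.DefaultJackson2JavaTypeMapper;
import org.springframework.amqp.support.converter.Jackson2JsonMessageConverter;
import org.springframework.context.annotation.Bean;

public class ReceiverConverterConfig {

    @Bean
    public Jackson2JsonMessageConverter messageConverter() {
        Jackson2JsonMessageConverter converter = new Jackson2JsonMessageConverter();
        DefaultJackson2JavaTypeMapper typeMapper = new DefaultJackson2JavaTypeMapper();
        Map<String, Class<?>> idClassMapping = new HashMap<>();
        // the producer writes "foo" into __TypeId__; map it to this
        // service's own representation of the event
        idClassMapping.put("foo", com.two.Foo.class);
        typeMapper.setIdClassMapping(idClassMapping);
        converter.setJavaTypeMapper(typeMapper);
        return converter;
    }
}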
EDIT
Another option would be to add an afterReceiveMessagePostProcessor to the inbound channel adapter's listener container - it could change the __TypeId__ header.
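A rough sketch of that second option, assuming you build the listener container yourself and hand it to Amqp.inboundAdapter(...) (connectionFactory, queueRemoteTaskStatus() and com.two.Foo are carried over from the snippets above):
@Bean
public SimpleMessageListenerContainer remoteTaskStatusContainer(ConnectionFactory connectionFactory) {
    SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory);
    container.setQueues(queueRemoteTaskStatus());
    container.setConcurrentConsumers(10);
    container.setAfterReceivePostProcessors(message -> {
        // rewrite the producer's type id to the consumer's own class
        // before the Jackson converter resolves it
        message.getMessageProperties().setHeader("__TypeId__", "com.two.Foo");
        return message;
    });
    return container;
}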
Looking at the message gateway method's return type semantics, the void return type indicates no reply is produced (no reply channel will be created), and the Future return type indicates asynchronous invocation mode (utilizing AsyncTaskExecutor).
Now, if one wishes to combine those two and make the no-reply method asynchronous, one could argue that the mere possibility of declaring a return type of Future<Void> would mean just that: the method is invoked asynchronously (by declaring a Future), and the method doesn't expect any reply (by declaring a type parameter Void).
Looking at the source code of GatewayProxyFactoryBean, it is clear this is not the case:
private Object invokeGatewayMethod(MethodInvocation invocation, boolean runningOnCallerThread) throws Exception {
...
boolean shouldReply = returnType != void.class;
...
Only the simple void return type is checked. So I'm wondering if this is a feature or a bug. If this is a feature, the Future<Void> return type is not behaving as one could be led to expect, and (in my opinion) should be handled differently (causing a validation error or something similar).
It's not clear what the point of returning a Future<Void> would be in this case.
The reason we can't treat Future<Void> as "special" is that the downstream flow might return such an object; the framework can't infer the intent.
If you want to run a flow that doesn't return a reply asynchronously, simply make the request channel an ExecutorChannel; if you are using XML configuration, documentation is here.
If you are using Java configuration, define the channel @Bean with type ExecutorChannel.
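For example, a minimal sketch of that channel bean (the channel name and executor choice are arbitrary here; wire it in as the gateway's request channel):
import java.util.concurrent.Executors;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.channel.ExecutorChannel;
import org.springframework.messaging.MessageChannel;

@Configuration
public class AsyncGatewayConfig {

    @Bean
    public MessageChannel gatewayRequestChannel() {
        // hands the send off to the executor, so a void gateway method
        // returns to the caller immediately instead of blocking
        return new ExecutorChannel(Executors.newCachedThreadPool());
    }
}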
Thanks in advance for the help -
I am new to mockito but have spent the last day looking at examples and the documentation but haven't been able to find a solution to my problem, so hopefully this is not too dumb of a question.
I want to verify that deleteLogs() calls deleteLog(Path) NUM_LOGS_TO_DELETE number of times, per path marked for delete. I don't care what the path is in the mock (since I don't want to go to the file system, cluster, etc. for the test) so I verify that deleteLog was called NUM_LOGS_TO_DELETE times with any non-null Path as a parameter. When I step through the execution however, deleteLog gets passed a null argument - this results in a NullPointerException (based on the behavior of the code I inherited).
Maybe I am doing something wrong, but verify and the use of isNotNull seems pretty straight forward...here is my code:
MonitoringController mockController = mock(MonitoringController.class);
// Call the function whose behavior I want to verify
mockController.deleteLogs();
// Verify that mockController called deleteLog the appropriate number of times
verify(mockController, Mockito.times(NUM_LOGS_TO_DELETE)).deleteLog(isNotNull(Path.class));
Thanks again
I've never used isNotNull for arguments, so I can't really say what's going wrong with your code - I always use an ArgumentCaptor. Basically you tell it what type of arguments to look for, it captures them, and then after the call you can assert the values you were looking for. Give the below code a try:
ArgumentCaptor<Path> pathCaptor = ArgumentCaptor.forClass(Path.class);
verify(mockController, Mockito.times(NUM_LOGS_TO_DELETE)).deleteLog(pathCaptor.capture());
for (Path path : pathCaptor.getAllValues()) {
assertNotNull(path);
}
As it turns out, isNotNull is a method that returns null, and that's deliberate. Mockito matchers work via side effects, so it's more-or-less expected for all matchers to return dummy values like null or 0 and instead record their expectations on a stack within the Mockito framework.
The unexpected part of this is that your MonitoringController.deleteLog is actually calling your code, rather than calling Mockito's verification code. Typically this happens because deleteLog is final: Mockito works through subclasses (actually dynamic proxies), and because final prohibits subclassing, the compiler basically skips the virtual method lookup and inlines a call directly to the implementation instead of Mockito's mock. Double-check that methods you're trying to stub or verify are not final, because you're counting on them not behaving as final in your test.
It's almost never correct to call a method on a mock directly in your test; if this is a MonitoringControllerTest, you should be using a real MonitoringController and mocking its dependencies. I hope your mockController.deleteLogs() is just meant to stand in for your actual test code, where you exercise some other component that depends on and interacts with MonitoringController.
Most tests don't need mocking at all. Let's say you have this class:
class MonitoringController {
private List<Log> logs = new ArrayList<>();
public void deleteLogs() {
logs.clear();
}
public int getLogCount() {
return logs.size();
}
}
Then this would be a valid test that doesn't use Mockito:
@Test public void deleteLogsShouldReturnZeroLogCount() {
MonitoringController controllerUnderTest = new MonitoringController();
controllerUnderTest.logSomeStuff(); // presumably you've tested elsewhere
// that this works
controllerUnderTest.deleteLogs();
assertEquals(0, controllerUnderTest.getLogCount());
}
But your monitoring controller could also look like this:
class MonitoringController {
private final LogRepository logRepository;
public MonitoringController(LogRepository logRepository) {
// By passing in your dependency, you have made the creator of your class
// responsible. This is called "Inversion-of-Control" (IoC), and is a key
// tenet of dependency injection.
this.logRepository = logRepository;
}
public void deleteLogs() {
logRepository.delete(RecordMatcher.ALL);
}
public int getLogCount() {
return logRepository.count(RecordMatcher.ALL);
}
}
Suddenly it may not be so easy to test your code, because it doesn't keep state of its own. To use the same test as the above one, you would need a working LogRepository. You could write a FakeLogRepository that keeps things in memory, which is a great strategy, or you could use Mockito to make a mock for you:
@Test public void deleteLogsShouldCallRepositoryDelete() {
LogRepository mockLogRepository = Mockito.mock(LogRepository.class);
MonitoringController controllerUnderTest =
new MonitoringController(mockLogRepository);
controllerUnderTest.deleteLogs();
// Now you can check that your REAL MonitoringController calls
// the right method on your MOCK dependency.
Mockito.verify(mockLogRepository).delete(Mockito.eq(RecordMatcher.ALL));
}
This shows some of the benefits and limitations of Mockito:
You don't need the implementation to keep state any more. You don't even need getLogCount to exist.
You can also skip creating the logs, because you're testing the interaction, not the state.
You're more tightly-bound to the implementation of MonitoringController: You can't simply test that it's holding to its general contract.
Mockito can stub individual interactions, but getting them consistent is hard. If you want your LogRepository.count to return 2 until you call delete, then return 0, that would be difficult to express in Mockito. This is why it may make sense to write fake implementations to represent stateful objects and leave Mockito mocks for stateless service interfaces.
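To illustrate that last point, a minimal sketch of such a fake (LogRepository, RecordMatcher and Log are assumed from the example above; adjust to whatever the real interface declares):
import java.util.ArrayList;
import java.util.List;

// a stateful in-memory fake: count(...) reflects what was stored until
// delete(...) runs - the exact sequence that is awkward to stub with Mockito
class FakeLogRepository implements LogRepository {

    private final List<Log> logs = new ArrayList<>();

    // hypothetical helper for test setup
    public void save(Log log) {
        logs.add(log);
    }

    @Override
    public void delete(RecordMatcher matcher) {
        logs.clear(); // good enough while tests only pass RecordMatcher.ALL
    }

    @Override
    public int count(RecordMatcher matcher) {
        return logs.size();
    }
}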