How to mock a static function with @WebMvcTest in Spring Boot - Mockito

Is there any functionality within the @WebMvcTest annotation that can be used to test static functions of a class?

Since I believe I figured it out (and it's missing from the linked Baeldung tutorial), here's an example. I don't use @WebMvcTest here; I'm testing a configuration object. See below the code for an explanation; imports are omitted for brevity.
@Slf4j
@ExtendWith(SpringExtension.class)
@ContextConfiguration(classes = ServerObjectConfiguration.class)
class ServerObjectConfigurationTest {

    // 192.168.178.82
    private static final byte[] EXAMPLE_IP_V4 = {-64, -88, -78, 82};
    // 2001:0db8:85a3:0000:0000:8a2e:0370:7334
    private static final byte[] EXAMPLE_IP_V6 = {32, 1, 13, -72, -123, -93, 0, 0, 0, 0, -118, 46, 3, 112, 115, 52};
    private static final byte[] EXAMPLE_MAC_ADDRESS = {-68, -48, 116, 9, -47, 11};

    private static final MockedStatic<NetworkInterface> mockedNetworkInterface = Mockito.mockStatic(NetworkInterface.class);
    private static final MockedStatic<InetAddress> mockedInetAddress = Mockito.mockStatic(InetAddress.class);

    @Autowired
    private ServerObjectConfiguration serverObjectConfiguration;
    @Autowired
    private Server server;

    @SneakyThrows
    @BeforeAll
    static void setUp() {
        // This is SPARTAAA... or rather: crazy, because everything in java.net seems to smell of bad design decisions
        InetAddress ipV4InetAddress = mock(InetAddress.class);
        when(ipV4InetAddress.getAddress())
                .thenReturn(EXAMPLE_IP_V4);
        when(ipV4InetAddress.isSiteLocalAddress())
                .thenReturn(true);

        InetAddress ipV6InetAddress = mock(InetAddress.class);
        when(ipV6InetAddress.getAddress())
                .thenReturn(EXAMPLE_IP_V6);
        when(ipV6InetAddress.isSiteLocalAddress())
                .thenReturn(true);

        InterfaceAddress ipV4InterfaceAddress = mock(InterfaceAddress.class);
        when(ipV4InterfaceAddress.getAddress())
                .thenReturn(ipV4InetAddress);

        InterfaceAddress ipV6InterfaceAddress = mock(InterfaceAddress.class);
        when(ipV6InterfaceAddress.getAddress())
                .thenReturn(ipV6InetAddress);

        NetworkInterface networkInterface = mock(NetworkInterface.class);
        when(networkInterface.getInterfaceAddresses())
                .thenReturn(List.of(ipV4InterfaceAddress, ipV6InterfaceAddress));
        when(networkInterface.getHardwareAddress())
                .thenReturn(EXAMPLE_MAC_ADDRESS);

        mockedInetAddress.when(() -> InetAddress.getByAddress(EXAMPLE_IP_V4))
                .thenReturn(ipV4InetAddress);
        mockedInetAddress.when(() -> InetAddress.getByAddress(EXAMPLE_IP_V6))
                .thenReturn(ipV6InetAddress);
        mockedNetworkInterface.when(() -> NetworkInterface.getByInetAddress(ipV4InetAddress))
                .thenReturn(networkInterface);
        mockedNetworkInterface.when(() -> NetworkInterface.getByInetAddress(ipV6InetAddress))
                .thenReturn(networkInterface);
        mockedNetworkInterface.when(NetworkInterface::networkInterfaces)
                .thenReturn(Stream.of(networkInterface));
    }

    @AfterAll
    static void tearDown() {
        mockedInetAddress.close();
        mockedNetworkInterface.close();
    }

    @SneakyThrows
    @Test
    void test_serverObjectIsConfiguredAsExpected() {
        // the bean uses NetworkInterface to get the IP addresses and MAC address
        assertThat(server.getMacAddresses()).containsOnly(EXAMPLE_MAC_ADDRESS);
        assertThat(server.getIpAddresses()).containsExactlyInAnyOrder(EXAMPLE_IP_V4, EXAMPLE_IP_V6);
    }
}
To mock a static method in a way that Spring Boot uses it during bean initialization / context creation, you have to do it in a static method annotated with @BeforeAll (JUnit 5). The reason is that you want to mock the static method as early in your test as possible.
I tried a thread-local Mockito#mockStatic call inside the test, but since my bean is initialized before the test even starts, my attempt at mocking the relevant static methods came too late.
This approach definitely works, and the Server bean, which contains the IP addresses and the MAC address, receives the values I expect.
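Stripped of the networking details, the pattern boils down to this (a minimal sketch; SomeClass and someStaticMethod are placeholders, not part of the example above):
class SomeBeanTest {

    // created once for the whole test class, before the Spring context is built
    private static final MockedStatic<SomeClass> mockedSomeClass = Mockito.mockStatic(SomeClass.class);

    @BeforeAll
    static void setUp() {
        // stub the static call before any bean that depends on it is created
        mockedSomeClass.when(SomeClass::someStaticMethod).thenReturn("stubbed value");
    }

    @AfterAll
    static void tearDown() {
        // release the static mock so it does not leak into other tests
        mockedSomeClass.close();
    }
}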

Related

Spring Integration file adapter tests failing on macOS

I have a Spring Integration project with multiple tests that run perfectly fine on Windows and Linux machines. Now I've just bought a MacBook and tried running the tests on it. All of my file adapter tests (15) are failing.
The example test below verifies that when a file is placed in the inboundOutDirectory directory, it is processed properly.
Test class example:
@RunWith(SpringRunner.class)
@ContextConfiguration(
        initializers = ConfigFileApplicationContextInitializer.class,
        classes = {StructuralEnricherIntegrationFlow.class, StructuralEnricherIntegrationFlowTests.Config.class})
@SpringIntegrationTest
@DirtiesContext
public class StructuralEnricherIntegrationFlowTests {

    @Autowired
    private MockIntegrationContext mockIntegrationContext;
    @Autowired
    private File inboundOutDirectory;
    @Autowired
    private File inboundProcessedDirectory;
    @Autowired
    private File inboundFailedDirectory;
    @Autowired
    private PollableChannel testChannel;

    @After
    public void tearDown() throws IOException {
        FileUtils.cleanDirectory(inboundOutDirectory);
        FileUtils.cleanDirectory(inboundProcessedDirectory);
        FileUtils.cleanDirectory(inboundFailedDirectory);
    }

    @Test
    public void testStructuralEnricherFlowEmptyResponse() throws Exception {
        // given
        String docId = "xxx";

        // when
        MessageHandler mockMessageHandler = mockMessageHandler().handleNextAndReply(m -> "{}");
        this.mockIntegrationContext.substituteMessageHandlerFor("structuralEnricherEndpoint", mockMessageHandler);
        String fileName = TestingUtils.createFileAggregated(docId, 1, ".json", inboundOutDirectory,
                "{}");

        // then
        Message<String> receive = (Message<String>) testChannel.receive(3000);
        JSONAssert.assertEquals(String.format("{xxx"),
                receive.getPayload(), JSONCompareMode.LENIENT);
        assertTrue(FileUtils.getFile(new File(this.inboundProcessedDirectory, fileName)).exists());
    }

    @Configuration
    @EnableIntegration
    public static class Config {

        @Autowired
        private File inboundOutDirectory;

        @Bean
        public PollableChannel testChannel() {
            return new QueueChannel();
        }

        @Bean
        public IntegrationFlow processedDirectory(@Value("${pattern}") String pattern) {
            return IntegrationFlows
                    .from(Files.inboundAdapter(inboundOutDirectory)
                                    .regexFilter(pattern)
                                    .useWatchService(true)
                                    .watchEvents(FileReadingMessageSource.WatchEventType.CREATE),
                            e -> e.poller(Pollers.fixedDelay(100)))
                    .transform(Files.toStringTransformer())
                    .channel("testChannel")
                    .get();
        }
    }
}
It seems that the testChannel never receives a message as I'm getting a NullPointerException at receive.getPayload(). I've tried increasing the timeout but it didn't help.
System info:
spring-boot-starter-parent 2.3.4.RELEASE
MacBook Pro 13 (2020) with macOS Big Sur 11.1
openjdk version "1.8.0_275"
The same test works, for example, on Ubuntu with Java version 1.8.0_275.
I'm not sure what the problem is, so hopefully you can help me out.
UPDATE
So it seems that the problem lies in the useWatchService(true) in the Files.inboundAdapter. This is how the processing of the document is currently triggered:
return IntegrationFlows
        .from(Files.inboundAdapter(inboundOutDirectory)
                        .regexFilter(intermediatePattern)
                        .useWatchService(true)
                        .watchEvents(FileReadingMessageSource.WatchEventType.CREATE,
                                FileReadingMessageSource.WatchEventType.DELETE),
                e -> e.poller(Pollers.fixedDelay(1000)
                        .taskExecutor(Executors.newFixedThreadPool(2))
                        .maxMessagesPerPoll(5)))
I have debugged this flow a bit more, and it appears that with .useWatchService(true) the adapter picks up the file only after a delay (about 10 seconds). I'm not sure why it behaves differently on macOS. When I change it to .useWatchService(false), it works instantly.
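For reference, a minimal sketch of the same adapter with the WatchService switched off (the watchEvents call is dropped because it only applies when the WatchService is used):
return IntegrationFlows
        .from(Files.inboundAdapter(inboundOutDirectory)
                        .regexFilter(intermediatePattern)
                        // plain directory polling instead of the NIO WatchService,
                        // which is what turned out to be slow on macOS
                        .useWatchService(false),
                e -> e.poller(Pollers.fixedDelay(1000)
                        .taskExecutor(Executors.newFixedThreadPool(2))
                        .maxMessagesPerPoll(5)))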
Here is the util method that creates the files:
public static String createFileAggregated(String id, Integer documentId, String extension, File tmpDir, String content) throws Exception {
    String filename = String.format("%s%04d%s", id, documentId, extension);
    FileUtils.write(new File(tmpDir, filename), (Optional.ofNullable(content).orElse(filename)), "UTF-8", false);
    return filename;
}

Use of setPreferredSize is disregarded by program

How do I successfully call setPreferredSize in a method? I'm calling setPreferredSize twice. If I remove the call inside the constructor, the panel doesn't appear at all, whereas it had earlier appeared with the undesired size (500,300). This demonstrates that setPreferredSize is being executed in the constructor, but not in the method of the same class. Note that this is the only issue (as far as I have tested) with my code; there's no unexpected interference outside the code below.
...
public abstract class XYGrapher extends JPanel {
    ...
    JFrame frame;
    JPanel contentPane;
    ...

    public XYGrapher() {
        frame = new JFrame("Grapher");
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        contentPane = new JPanel();
        contentPane.setPreferredSize(new Dimension(500, 300));
        contentPane.setLayout(new SpringLayout());
        this.setPreferredSize(new Dimension(500, 300));
        frame.setContentPane(contentPane);
        frame.pack();
        frame.setVisible(true);
        contentPane.add(this);
    }

    //
    public void drawGraph(int xPixelStart, int yPixelStart, int pixelsWide, int pixelsHigh) {
        ...
        contentPane.setPreferredSize(new Dimension(pixelsWide, pixelsHigh));
        this.setPreferredSize(new Dimension(pixelsWide, pixelsHigh));
        ...
    }
    //*/
}
For reference, this is how XYGrapher eventually gets used:
public class GrapherTester extends XYGrapher {
    ...
    public static void main(String[] args) {
        GrapherTester g = new GrapherTester();
        g.drawGraph(0, 0, 100, 100);
    }
}
I have managed to fix the issue in the meantime. Simply add
frame.pack();
to the method.
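In other words (a minimal sketch of the fixed method, assuming the rest of drawGraph is unchanged):
public void drawGraph(int xPixelStart, int yPixelStart, int pixelsWide, int pixelsHigh) {
    contentPane.setPreferredSize(new Dimension(pixelsWide, pixelsHigh));
    this.setPreferredSize(new Dimension(pixelsWide, pixelsHigh));
    frame.pack(); // re-runs the layout so the new preferred sizes actually take effect
}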

How to prevent duplicate tasks when running the same IScheduledExecutorService on apps in a cluster?

I want to understand the difference between the Hazelcast IScheduledExecutorService methods for preventing duplicate tasks.
I have two Java apps, each with its own HazelcastInstance, so I have a Hazelcast cluster with two members (servers).
I use an IMap and want to reset the AtomicLong values every midnight.
config.getScheduledExecutorConfig("my scheduler")
      .setPoolSize(16)
      .setCapacity(100)
      .setDurability(1);

class DelayedResetTask implements Runnable, HazelcastInstanceAware, Serializable {

    static final long serialVersionUID = -7588380448693010399L;

    private transient HazelcastInstance client;

    @Override
    public void run() {
        final IMap<Long, AtomicLong> map = client.getMap(HazelcastConfiguration.mapName);
        final ILogger logger = client.getLoggingService().getLogger(HazelcastInstance.class);
        logger.info("Show data in cache before reset: " + map.entrySet());
        map.keySet().forEach(key -> map.put(key, new AtomicLong(0)));
        logger.info("Data was reset: " + map.entrySet());
    }

    @Override
    public void setHazelcastInstance(HazelcastInstance hazelcastInstance) { this.client = hazelcastInstance; }
}

private void resetAtMidnight() {
    final Long midnight = LocalDateTime.now().until(LocalDate.now().plusDays(1).atStartOfDay(), ChronoUnit.MINUTES);
    executor.scheduleAtFixedRate(new DelayedResetTask(), midnight, TimeUnit.DAYS.toMinutes(1), TimeUnit.MINUTES);
}
I don't want to execute this task on each instance in parallel. After reading the documentation I still don't understand how to run the reset so that it affects both servers in one step (without duplicate tasks, i.e. without it executing on both servers at the same time).
Which method should I use for my task: scheduleOnAllMembersAtFixedRate, scheduleAtFixedRate, or scheduleOnMembersAtFixedRate?
You need to run your code only once in the cluster, since the map you are resetting can be accessed from any member. Both members access the same map; only the entries are distributed across the members. You can use scheduleAtFixedRate to run it once.
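A minimal sketch of scheduling the task once for the whole cluster (the executor name matches the config in the question; the delay calculation is the one from resetAtMidnight above):
IScheduledExecutorService executor = hazelcastInstance.getScheduledExecutorService("my scheduler");

long minutesUntilMidnight = LocalDateTime.now()
        .until(LocalDate.now().plusDays(1).atStartOfDay(), ChronoUnit.MINUTES);

// The task is submitted to a single member; with the durability configured above,
// Hazelcast re-schedules it on another member if the owner leaves, so it still
// runs once per period instead of once per member.
executor.scheduleAtFixedRate(new DelayedResetTask(),
        minutesUntilMidnight, TimeUnit.DAYS.toMinutes(1), TimeUnit.MINUTES);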
Additionally, you do not need to call IMap#keySet().forEach() to iterate over all the entries in the map. Instead, you can use an EntryProcessor, as below:
public static class DelayedResetTask implements Runnable, HazelcastInstanceAware, Serializable {

    static final long serialVersionUID = -7588380448693010399L;

    private transient HazelcastInstance client;

    @Override
    public void run() {
        final IMap<Long, AtomicLong> map = client.getMap(HazelcastConfiguration.mapName);
        final ILogger logger = client.getLoggingService().getLogger(HazelcastInstance.class);
        logger.info("Show data in cache before reset: " + map.entrySet());
        map.executeOnEntries(new AbstractEntryProcessor() {
            @Override
            public Object process(Map.Entry entry) {
                entry.setValue(new AtomicLong(0));
                return null;
            }
        });
        logger.info("Data was reset: " + map.entrySet());
    }

    @Override
    public void setHazelcastInstance(HazelcastInstance hazelcastInstance) { this.client = hazelcastInstance; }
}

Spring Integration Cassandra persistence workflow

I'm trying to implement the following workflow with Spring Integration:
1) Poll a REST API
2) Store the resulting POJO in a Cassandra cluster
It's my first try with Spring Integration, so I'm still a bit overwhelmed by the mass of information in the reference documentation. After some research, I got the following to work:
1) Poll the REST API
2) Transform the mapped POJO JSON result into a string
3) Save the string to a file
Here's the code:
@Configuration
public class ConsulIntegrationConfig {

    @InboundChannelAdapter(value = "consulHttp", poller = @Poller(maxMessagesPerPoll = "1", fixedDelay = "1000"))
    public String consulAgentPoller() {
        return "";
    }

    @Bean
    public MessageChannel consulHttp() {
        return MessageChannels.direct("consulHttp").get();
    }

    @Bean
    @ServiceActivator(inputChannel = "consulHttp")
    MessageHandler consulAgentHandler() {
        final HttpRequestExecutingMessageHandler handler =
                new HttpRequestExecutingMessageHandler("http://localhost:8500/v1/agent/self");
        handler.setExpectedResponseType(AgentSelfResult.class);
        handler.setOutputChannelName("consulAgentSelfChannel");
        LOG.info("Created bean 'consulAgentHandler'");
        return handler;
    }

    @Bean
    public MessageChannel consulAgentSelfChannel() {
        return MessageChannels.direct("consulAgentSelfChannel").get();
    }

    @Bean
    public MessageChannel consulAgentSelfFileChannel() {
        return MessageChannels.direct("consulAgentSelfFileChannel").get();
    }

    @Bean
    @ServiceActivator(inputChannel = "consulAgentSelfFileChannel")
    MessageHandler consulAgentFileHandler() {
        final Expression directoryExpression = new SpelExpressionParser().parseExpression("'./'");
        final FileWritingMessageHandler handler = new FileWritingMessageHandler(directoryExpression);
        handler.setFileNameGenerator(message -> "../../agent_self.txt");
        handler.setFileExistsMode(FileExistsMode.APPEND);
        handler.setCharset("UTF-8");
        handler.setExpectReply(false);
        return handler;
    }
}
@Component
public final class ConsulAgentTransformer {

    @Transformer(inputChannel = "consulAgentSelfChannel", outputChannel = "consulAgentSelfFileChannel")
    public String transform(final AgentSelfResult json) throws IOException {
        final String result = new StringBuilder(json.toString()).append("\n").toString();
        return result;
    }
}
This works fine!
But now, instead of writing the object to a file, I want to store it in a Cassandra cluster with spring-data-cassandra. For that, I commented out the file handler in the config, returned the POJO from the transformer, and created the following:
@MessagingGateway(name = "consulCassandraGateway", defaultRequestChannel = "consulAgentSelfFileChannel")
public interface CassandraStorageService {

    @Gateway(requestChannel = "consulAgentSelfFileChannel")
    void store(AgentSelfResult agentSelfResult);
}

@Component
public final class CassandraStorageServiceImpl implements CassandraStorageService {

    @Override
    public void store(AgentSelfResult agentSelfResult) {
        //use spring-data-cassandra repository to store
        LOG.info("Received 'AgentSelfResult': {} in Cassandra cluster...");
        LOG.info("Trying to store 'AgentSelfResult' in Cassandra cluster...");
    }
}
But this seems to be the wrong approach; the service method is never triggered.
So my question is: what would be a correct approach for my use case? Do I have to implement the MessageHandler interface in my service component and use a @ServiceActivator in my config? Or is something missing in my current gateway approach? Or maybe there is another solution that I'm not able to see...
As mentioned before, I'm new to Spring Integration, so this may be a stupid question...
Nevertheless, thanks a lot in advance!
It's not clear how you are wiring in your CassandraStorageService bean.
The Spring Integration Cassandra Extension Project has a message-handler implementation.
The Cassandra Sink in spring-cloud-stream-modules uses it with Java configuration so you can use that as an example.
So I finally made it work. All I needed to do was:
@Component
public final class CassandraStorageServiceImpl implements CassandraStorageService {

    @ServiceActivator(inputChannel = "consulAgentSelfFileChannel")
    @Override
    public void store(AgentSelfResult agentSelfResult) {
        //use spring-data-cassandra repository to store
        LOG.info("Received 'AgentSelfResult': {}...");
        LOG.info("Trying to store 'AgentSelfResult' in Cassandra cluster...");
    }
}
The CassandraMessageHandler and spring-cloud-stream seemed like too big an overhead for my use case, and I didn't really understand them yet... And with this solution, I keep control over what happens in my Spring component.
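For completeness, a sketch of what the body of store() might look like once a spring-data-cassandra repository is wired in (AgentSelfResultRepository is a hypothetical CrudRepository, not part of the original post):
@Component
public final class CassandraStorageServiceImpl implements CassandraStorageService {

    // hypothetical spring-data-cassandra repository for AgentSelfResult
    @Autowired
    private AgentSelfResultRepository repository;

    @ServiceActivator(inputChannel = "consulAgentSelfFileChannel")
    @Override
    public void store(AgentSelfResult agentSelfResult) {
        LOG.info("Trying to store 'AgentSelfResult' in Cassandra cluster...");
        repository.save(agentSelfResult);
    }
}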

JSF - CDI instantiating a session bean again

The source code:
public class ReportGenerator implements Serializable {

    private static final long serialVersionUID = -3995091296520157208L;

    @Inject
    private ReportCacheSession reportCacheSession;
    @Inject
    private UserSessionBean userSessionBean;
    @Inject
    private Instance<ReportBuilder> reportBuilderInstance;

    public static final int BUILD_ERROR = 0;
    public static final int BUILD_OK = 1;
    public static final int BUILD_NOPAGES = 2;

    private ReportBuilder reportBuilder = null;

    private FileData build(String jasperName, Map<String, Object> params, String extension, boolean guardarCache, boolean inline) {
        FileData fd = null;
        reportBuilder = reportBuilderInstance.get();
        if (reportBuilder != null) {
            reportBuilder.jasperName = jasperName;
            reportBuilder.emailName = SevUtils.getEmailName(userSessionBean.getUserInfo().getEmail());
            reportBuilder.sessionId = JSFUtils.getSessionId();
            reportBuilder.params = params;
            reportBuilder.extension = extension;
            //reportBuilder.config(jasperName, SevUtils.getEmailName(userSessionBean.getUserInfo().getEmail()), JSFUtils.getSessionId(), params, extension);
            reportBuilder.start();
            try {
                reportBuilder.join();
            } catch (InterruptedException ex) {
                Logger.getLogger(ReportGenerator.class.getName()).log(Level.SEVERE, null, ex);
            }
            fd = reportBuilder.getFileData();
        }
        if (fd != null && fd.getState() == BUILD_OK) {
            fd.setInline(inline);
            if (guardarCache) {
                reportCacheSession.addReport(fd);
            }
        }
        return fd;
    }
}
reportBuilder.start() starts a new Thread to generate the report(s). The problem is that when the line reportCacheSession.addReport(fd); is called, CDI creates a new instance each time, even though ReportCacheSession is a session bean annotated with javax.inject.Named and javax.enterprise.context.SessionScoped.
I don't know why this happens, but my workaround is to add a new line, like this:
FileData fd = null;
reportCacheSession.toString(); //NEW LINE
reportBuilder = reportBuilderInstance.get();
reportCacheSession.toString(); creates the instance of ReportCacheSession before my thread is started, and then everything works fine...
How does the new thread affect CDI? Why did CDI create a new instance of my session bean when I accessed it from the thread?
UPDATE 08/15/12:
OK, I have changed my code to use the EJB annotation @Asynchronous. Now I have a problem when generating a large PDF report (the XLS report works without problems): the file is incomplete (fewer bytes than expected), and when I try to open it, it appears blank... Maybe a problem/bug with the JRExporter#exportReport method...
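For context, a minimal sketch of what the @Asynchronous variant might look like (AsyncReportBuilder and its method signature are hypothetical, not the actual code from the update):
@Stateless
public class AsyncReportBuilder {

    // Hypothetical replacement for the hand-rolled Thread: the container manages
    // the worker thread, and all session-scoped values are resolved and passed in
    // by the caller before the asynchronous call starts.
    @Asynchronous
    public Future<FileData> build(String jasperName, String emailName, String sessionId,
                                  Map<String, Object> params, String extension) {
        FileData fd = new FileData(); // placeholder: the real code would run the Jasper export here
        return new AsyncResult<>(fd);
    }
}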
LAST UPDATE:
OK, the report generation problem was my mistake... The remaining question is which alternative is better: EJB @Asynchronous or JMS? Thanks to all, each comment has led me toward a good solution...
