Get MockWebServer port into the Spring container with SpringBootTest - spring-test

I have a Spring Boot + WebFlux application with the following structure:
A controller ControllerA that uses a service ServiceA.
ServiceA uses a WebClient to call a remote microservice, microservice A.
The URL of the endpoint in microservice A is configurable in application.yaml.
Now I would like to test it using SpringBootTest and JUnit 5 Jupiter with the PER_CLASS test lifecycle. For this, I want to mock the remote microservice using OkHttp MockWebServer. The server is started before the first test method (@BeforeAll) and stopped after all tests (@AfterAll).
The problem is that the Spring container starts first, and then the test methods (and their before and after callbacks) are invoked. When the container starts it initializes the beans, and also ServiceA with the URL of microservice A from application.yaml. Only afterwards is MockWebServer started and I can query it for its URL, but at that point it's already too late and ServiceA is already instantiated with the 'wrong' (real) URL.
I got it to work in various ways using various hacks (e.g. fixing the port by overriding the URL with @TestPropertySource and then forcing MockWebServer to use this port), but all have disadvantages (what if the port is taken by a parallel CI job?).
As we are only going to have more and more such integration tests in our codebase, we need a good solution for them. I am thinking of writing a custom TestExecutionListener that runs before all of the Spring ones (and starts the mock server), but I'm not sure whether this would work (I still don't know how I would pass the port to the container; I have never implemented such listeners).
Question: before I implement listeners or some other custom solution, I wanted to make sure that I really need to. I searched the web a bit and didn't find anything, but I don't think we are the first team to write such tests; I'm probably just searching badly. Does anybody know of any kind of library that would solve this problem? Or maybe a neat Spring trick that I'm currently missing but that would help?

Related

How to call an API running on same EC2 instance but on different port number

I am running multiple Node.js servers on one EC2 instance, and when I need to call an API that is running on a different port of the same instance, I use Axios to make the call. I am wondering: is this the only way I can call my API, given that it's running on the same EC2 instance, and is it an efficient solution? Please, I need guidance.
Thank you in advance
I would like to answer this question based on two aspects,
1) The way your application communicates with the other application
You can do this in two ways. The first one is synchronous: if you have to wait for the result from the second application before completing the logic in your first application, then go with this approach. And yes, you can use a REST call (using Axios or any REST client); gRPC is another option if you are planning to stream data.
The other one is asynchronous communication, where your application does not wait for the result from the second application (fire and forget). This can be achieved using a message queue.
2) The way your application is deployed and how it is referred to
Assuming that you are calling the other application using IP:PORT or localhost:PORT (since it is in the same VM), I can say that this is not a good approach.
Think about a scenario where you need to move one of the applications to a different box, or you want to scale the applications individually. In these cases it would be difficult for you to manage.
Do you have any API gateway or Reverse Proxy or Load-Balancer in front of your application? If yes, call your application through these services.
If you are not sure about the above, please follow the links below,
https://hub.packtpub.com/6-common-use-cases-of-reverse-proxy-scenarios/
https://www.redhat.com/en/topics/api/what-does-an-api-gateway-do
https://microservices.io/microservices/news/2015/03/15/deployment-patterns.html

Using nock to make a request made from a docker container?

My end-to-end tests are run against a local Docker environment of a microservice I'm testing. I want to mock out a request made via that Docker environment from the end-to-end test, but it just doesn't seem to let me.
const requestNock = nock('https://google.com').post('/foo').reply(200);
makeRequest('127.0.0.1/bar') // This path then calls https://google.com/bar
requestNock.isDone() // Returns false
Is something like this possible? If not is there any solution to mock out an external API's response made via a local docker environment?
This is not something Nock is designed to handle. It works by intercepting outbound HTTP requests inside a single Node process.
Without knowing more about what your Docker setup does, it's hard to say what your options are. If the service inside the Docker container has no logic and its only purpose is to proxy requests during your tests, then I'd argue that it should be removed from the test and you can nock out the requests inside the single process. I would call these integration tests.
Since you specified your tests are end-to-end, I have to assume the Docker service is more than a proxy. However, you usually don't want to mock out any requests in end-to-end tests, as doing so negates their advantages over integration tests.

How to expose SOAP Web Service using MarshallingWebServiceInboundGateway (with Spring Integration DSL)

I’m struggling to find good examples of Spring Integration using the MarshallingWebServiceInboundGateway.
I put together a working sample that uses MarshallingWebServiceInboundGateway to expose an Order service; when called, it consumes an order detail service using the MarshallingWebServiceOutboundGateway:
https://github.com/yortch/spring-integration-demo/blob/outboundgateway/services/order-flow/src/main/java/com/demo/integration/order/OrderEndpoint.java
@ServiceActivator(inputChannel = ChannelNames.ORDER_INVOCATION, outputChannel = ChannelNames.ORDER_DETAIL_REQUEST_BUILDER)
OrderRequest getOrder(OrderRequest orderRequest) {
return orderRequest;
}
This somehow works; however, my expectation for the method above is that it should have the signature of the web service method, i.e. return an OrderResponse type. I initially had it working that way when I was manually building the OrderResponse by calling other POJOs; however, I can’t figure out how to keep the original web service method signature and internally use Spring Integration for the implementation, i.e. by calling a channel to do the transformation and in turn call the order detail service (using MarshallingWebServiceOutboundGateway).
If you know any code examples for doing this, please share. I came across this one, but this is directly building the response (without using Spring Integration channels): https://bitbucket.org/tomask79/spring-boot-webservice-integration/src/master/
Sounds like some misunderstanding of what a Spring Integration flow is and how its endpoints work.
There are three first-class citizens in Spring Integration: Message, Channel and Endpoint.
Endpoints are connected via channels.
Endpoints consume messages from their input channels.
Endpoints may produce messages into their output channels as a result of their work.
So, in your case you want to expose a SOAP service which is going to call internally another SOAP service.
You have started properly with the MarshallingWebServiceInboundGateway. This one, I guess, produces an OrderRequest object into its channel. It expects an OrderResponse in its replyChannel (explicit, or a temporary one in the headers). I'm not sure what your getOrder() does, but if there is also a transformer and a MarshallingWebServiceOutboundGateway, you need to consider connecting them all into the flow. So, I think that the result of your service should go to the channel which is an input for the transformer. The output of this transformer should go to the MarshallingWebServiceOutboundGateway. And the result of this gateway may go to some other transformer to build an OrderResponse, which may just go into the reply channel of the MarshallingWebServiceInboundGateway.
If that's not what you expected as an explanation, then I'll ask you to rephrase your question...

Why don't there seem to be any robust HTTP server mocking packages for node? (or are there?)

I'm very new to Node.js, so I might just not be getting it, but after searching quite a bit, and trying a few different solutions, I am still not able to find a decent way to mock API responses using Node for acceptance testing.
I've got a JavaScript app (written in Elm, actually) that interacts with an API (pretty common, I imagine), and I want to write some acceptance tests... so I set up WebdriverIO with Selenium and Mocha, write some tests, and of course now I need to mock some API responses so that I can set up some theoretical scenarios to test under.
mock-api-server: Looked pretty nice, but there's no way to adjust the headers getting sent back from the server!
mock-http-server: Also looked pretty nice, lets me adjust headers, but there's no way to reset the mock responses without shutting down the whole server...!? And that has issues because the server won't shut down while the browser window is still open, so that means I have to close and relaunch the browser just to clear the mocks!
json-server: Simple and decent way to mock some responses, but it relies entirely on files on disk for the responses. I want something I can configure from within a test run without reading and writing files to disk.
Am I missing something? Is this not how people do acceptance testing in the Node universe? Does everyone just use a fixed set of mock data for their entire test suite? That just sounds insane to me... Particularly since it seems like it wouldn't be that hard to write a good one based on express server that has all the necessary features... does it exist?
Necessary Features:
Server can be configured and launched from javascript
Responses (including headers) can be configured on the fly
Responses can also be reset easily on the fly, without shutting down the server.
I hit this problem too, so I built one: https://github.com/pimterry/mockttp
In terms of the things you're looking for, Mockttp:
Lets you start & reconfigure the server dynamically from JS during the test run, with no static files.
Lets you adjust headers
Lets you reset running servers (though I'd recommend shutting down & restarting anyway - with Mockttp that takes milliseconds, is clear & easily automatable, and gives you some nice guarantees)
On top of that, it:
Is configurable from both Node & browsers with identical code, so you can test universally
Can handle running tests in parallel for quicker testing
Can fake HTTPS servers, self-signing certificates automatically
Can mock as an intercepting proxy
Has a bunch of nice debugging support for when things go wrong (e.g. unmatched requests come back with a readable explanation of the current configuration, and example code that would make the request succeed)
Just to quickly comment on the other posts suggesting testing in-process: I really wouldn't. Partly because of a whole bunch of limitations (you're tied to a specific environment, potentially even a specific Node version; you have to mock for the whole process, so no parallel tests; and you can't mock subprocesses), but mainly because it's not truly representative. For a very, very small speed cost, you can test with real HTTP, and know that your requests & responses will definitely work in reality too.
Is this not how people do acceptance testing in the Node universe? Does everyone just use a fixed set of mock data for their entire test suite?
No. You don't have to make actual HTTP requests to test your apps.
All good test frameworks let you fake HTTP by running the routes and handlers without making network requests. Also, you can mock the functions that make the actual HTTP requests to external APIs, which should be abstracted away in the first place, so no actual HTTP requests need to take place here either.
And if that's not enough, you can always write a trivially simple server using Express, Hapi, Restify, LoopBack or some other framework, or the plain http module, or even the net module (depending on how much control you need; for example, you should always test invalid responses that don't use the HTTP protocol correctly, broken connections, incomplete connections, slow connections etc., and for that you may need to use lower-level APIs in Node) to provide the mock data yourself.
By the way, you also always need to test responses with invalid JSON, because people often wrongly assume that the JSON they get is always valid, which it is not. See this answer to see why it is particularly important:
Calling a JSON API with Node.js
Particularly since it seems like it wouldn't be that hard to write a good one based on express server that has all the necessary features... does it exist?
Not everything that "wouldn't be that hard to write" necessarily has to exist. You may need to write it. Hey, you even have a road map ready:
Necessary Features:
Server can be configured and launched from javascript
Responses (including headers) can be configured on the fly
Responses can also be reset easily on the fly, without shutting down the server.
Now all you need is choose a name, create a repo on GitHub, create a project on npm and start coding.
You know, even "it wouldn't be that hard to write" doesn't mean that it will write itself. Welcome to the open source world, where instead of complaining that something doesn't exist, people just write it.
You could try nock. https://github.com/node-nock
It supports all of your feature requests.

Ektorp querying performance against CouchDB is 4x slower when initiating the request from a remote host

I have a Spring MVC app running under Jetty. It connects to a CouchDB instance on the same host using Ektorp.
In this scenario, once the Web API request comes into Jetty, I don't have any code that connects to anything not on the localhost of where the Jetty instance is running. This is an important point for later.
I have debug statements in Jetty to show me the performance of various components of my app, including the performance of querying my CouchDB database.
Scenario 1: When I initiate the API request from localhost, i.e. I use Chrome to go to http://localhost:8080/, my debug statements indicate a CouchDB performance of X.
Scenario 2: When I initiate the exact same API request from a remote host, i.e. I use Chrome to go to http://<jetty-host>:8080/, my debug statements indicate a CouchDB performance of 4X.
It looks like something is causing the connection to CouchDB to be much slower in scenario 2 than scenario 1. That doesn't seem to make sense, since once the request comes into my app, regardless of where it came from, I don't have any code in the application that establishes the connection to CouchDB differently based on where the initial API request came from. As a matter of fact, I have nothing that establishes the connection to CouchDB differently based on anything.
It's always the same connection (from the application's perspective), and I have been able to reproduce this issue 100% of the time with a Jetty restart in between scenario 1 and 2, so it does not seem to be related to caching either.
I've gone fairly deep into StdCouchDbConnector and StdHttpClient to try to figure out if anything is different in these two scenarios, but cannot see anything different.
I have added timers around the executeRequest(HttpUriRequest request, boolean useBackend) call in StdHttpClient to confirm this is where the delay is happening and it is. The time difference between Scenario 1 and 2 is several fold on client.execute(), which basically uses the Apache HttpClient to connect to CouchDB.
I have also tried always using the "backend" HttpClient in StdHttpClient, just to take Apache HTTP caching out of the equation, and I've gotten the same results as Scenarios 1 and 2.
Has anyone run into this issue before, or does anyone have any idea what may be happening here? I have gone all the way down to org.apache.http.impl.client.DefaultRequestDirector to try to see if anything was different between scenarios 1 and 2, but couldn't find anything...
A couple of additional notes:
a. I'm currently constrained to a Windows environment in EC2, so instances are virtualized.
b. Scenarios 1 and 2 give the same response time when the underlying instance is not virtualized. But see a - I have to be on AWS.
c. I can also reproduce 4X slower performance, similar to scenario 2, with a third scenario: instead of making the localhost:8080/ request using Chrome, I make it using Postman, which is a Chrome application. Using Postman from the Jetty instance itself, I can reproduce the 4X slower times.
The only difference I see in c. above is that the request headers in Chrome's developer tools indicate a Remote Address of [::1]:8080. I don't have any way to set that through Postman, so I don't know if that's the difference maker. And if it were, first I wouldn't understand why. And second, I'm not sure what I could do about it, since I can't control how every single client is going to connect to my API.
All theories, questions, ideas welcome. Thanks in advance!
